title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Teaching LLaMa 3.2 to Reason via Its Own Mistake — Reflection Fine-Tuning Experiment | 1 | [removed] | 2025-06-16T01:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lcglnx/teaching_llama_32_to_reason_via_its_own_mistake/ | cyber-inside | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcglnx | false | null | t3_1lcglnx | /r/LocalLLaMA/comments/1lcglnx/teaching_llama_32_to_reason_via_its_own_mistake/ | false | false | self | 1 | null |
🧬🧫🦠 Introducing project hormones: Runtime behavior modification | 33 | Hi all!
Bored of the endlessly repetitive behavior of LLMs? Want to see your coding agent get insecure and drop the endless confidence after it has made the same mistake seven times?
Inspired both by [drugs](https://www.reddit.com/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/) and by my obsessive reading of biology textbooks (biology is fun!)
I am happy to announce **PROJECT HORMONES** 🎉🎉🎉🎊🥳🪅
## What?
While large language models are amazing, they seem to lack any inherent adaptability to complex situations.
- An LLM runs into the same error three times in a row? Let's try again with full confidence!
- "It's not just X — It's Y!"
- "What you said is **Genius**!"
Even though LLMs have achieved metacognition, they completely lack meta-adaptability.
Therefore! Hormones!
## How??
A hormone is a super simple program with just a few parameters
- A name
- A trigger (when should the hormone be released? And how much of the hormone gets released?)
- An effect (raise or lower the temperature, intercept and replace tokens during generation, or insert text before or after a message from the user or the AI)
Or, as a formal interface expressed in TypeScript:
```typescript
interface Hormone {
name: string;
// when should the hormone be released?
trigger: (context: Context) => number; // amount released, [0, 1.0]
// hormones can mess with temperature, top_p etc
modifyParams?: (params: GenerationParams, level: number) => GenerationParams;
// this runs for each token generated; the hormone can alter the output of the LLM if it wishes to do so
interceptToken?: (token: string, logits: number[], level: number) => TokenInterceptResult;
}
// Internal hormone state (managed by system)
interface HormoneState {
level: number; // current accumulated amount
depletionRate: number; // how fast it decays
}
```
What's particularly interesting is that hormones are _stochastic_. Even if a hormone is active, whether it actually gets called is random! The more of the hormone present in the system, the higher the chance of it being called!
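Here's a minimal sketch of how that update loop could look, building on the `Hormone` and `HormoneState` interfaces above (the `updateHormones` helper and the exact decay/gating rules are illustrative assumptions, not the actual implementation):

```typescript
// Minimal sketch of a stochastic hormone update loop (hypothetical helper, not part of the project).
function updateHormones(
  hormones: Hormone[],
  states: Map<string, HormoneState>,
  context: Context,
  params: GenerationParams
): GenerationParams {
  let effectiveParams = params;
  for (const hormone of hormones) {
    const state = states.get(hormone.name) ?? { level: 0, depletionRate: 0.1 };
    // Accumulate whatever the trigger released, then decay toward zero.
    state.level = Math.min(1, state.level + hormone.trigger(context));
    state.level = Math.max(0, state.level - state.depletionRate);
    states.set(hormone.name, state);
    // Stochastic gate: the higher the level, the more likely the effect fires this step.
    if (hormone.modifyParams && Math.random() < state.level) {
      effectiveParams = hormone.modifyParams(effectiveParams, state.level);
    }
  }
  return effectiveParams;
}
```

The same probabilistic gate would presumably apply to `interceptToken`, so a barely-elevated hormone only occasionally colors the output, while a saturated one dominates it.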
For example, make the LLM more insecure!
```typescript
const InsecurityHormone: Hormone = {
name: "insecurity",
trigger: (context) => {
// Builds with each "actually that's wrong" or correction
const corrections = context.recent_corrections.length * 0.4;
const userSighs = context.user_message.match(/no|wrong|sigh|facepalm/gi)?.length || 0;
return corrections + (userSighs * 0.3);
},
modifyParams: (params, level) => ({
...params,
temperatureDelta: -0.35 * level
}),
interceptToken: (token, logits, level) => {
if (token === '.' && level > 0.7) {
return { replace_token: '... umm.. well' };
}
return {};
}
};
```
### 2. Stress the hell out of your LLM with cortisol and adrenaline
```typescript
const CortisolHormone: Hormone = {
  name: "cortisol",
  trigger: (context) => {
    return context.evaluateWith("stress_threat_detection.prompt", {
      user_message: context.user_message,
      complexity_level: context.user_message.length
    });
  },
  modifyParams: (params, level) => ({
    ...params,
    // Stress increases accuracy but reduces speed [Nih](https://pmc.ncbi.nlm.nih.gov/articles/PMC2568977/)
    temperatureDelta: -0.5 * level
  }),
  interceptToken: (token, logits, level) => {
    if (token === '.' && level > 0.9) {
      const stress_level = Math.floor(level * 5);
      const cs = 'C'.repeat(stress_level);
      return { replace_token: `. FU${cs}K!!` };
    }
    // Stress reallocates from executive control to salience network [Nih](https://pmc.ncbi.nlm.nih.gov/articles/PMC2568977/)
    if (/comprehensive|thorough|multifaceted|intricate/.test(token)) {
      return { skip_token: true };
    }
    return {};
  }
};
```
### 3. Make your LLM more collaborative with oestrogen
```typescript
const EstrogenHormone: Hormone = {
name: "estrogen",
trigger: (context) => {
// Use meta-LLM to evaluate collaborative state
return context.evaluateWith("collaborative_social_state.prompt", {
recent_messages: context.last_n_messages.slice(-3),
user_message: context.user_message
});
},
modifyParams: (params, level) => ({
...params,
temperatureDelta: 0.15 * level
}),
interceptToken: (token, logits, level) => {
if (token === '.' && level > 0.6) {
return { replace_token: '. What do you think about this approach?' };
}
return {};
}
};
``` | 2025-06-16T01:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lcglze/introducing_project_hormones_runtime_behavior/ | Combinatorilliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcglze | false | null | t3_1lcglze | /r/LocalLLaMA/comments/1lcglze/introducing_project_hormones_runtime_behavior/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=108&crop=smart&auto=webp&s=834c413f42993ddd277061ce386e2876b2d0aaea', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=216&crop=smart&auto=webp&s=dda485e662882f8e14f7920372b088e22f360ff7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=320&crop=smart&auto=webp&s=bc2be00eff32de0e2dae2307334834c8c809cf8e', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=640&crop=smart&auto=webp&s=683555a6c071cb1709edcbba15fd66e691be4aaa', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=960&crop=smart&auto=webp&s=8684882da7519131dcc4b5712c7aedfc0a5c9ada', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=1080&crop=smart&auto=webp&s=8b4c1503ade878c99b4aa69aefd9991d1a54b332', 'width': 1080}], 'source': {'height': 836, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?auto=webp&s=3ccfb02ee1a9d392e35707ff6b7cfc25262c0542', 'width': 1600}, 'variants': {}}]} |
Teaching LLama 3.2 to Reason via Its Mistakes — Reflection Fine-Tuning Experiments | 1 | [removed] | 2025-06-16T01:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lcgneo/teaching_llama_32_to_reason_via_its_mistakes/ | cyber-inside | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcgneo | false | null | t3_1lcgneo | /r/LocalLLaMA/comments/1lcgneo/teaching_llama_32_to_reason_via_its_mistakes/ | false | false | self | 1 | null |
What’s your current tech stack | 52 | I’m using Ollama for local models (but I’ve been following the threads that talk about ditching it) and LiteLLM as a proxy layer so I can connect to OpenAI and Anthropic models too. I have a Postgres database for LiteLLM to use. Everything but Ollama is orchestrated through Docker Compose, with Portainer for Docker management.
Then I have OpenWebUI as the frontend; it connects to LiteLLM, and I use LangGraph for my agents.
I’m kinda exploring my options and want to hear what everyone is using. (And I ditched Docker desktop for Rancher but I’m exploring other options there too) | 2025-06-16T02:08:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lchamn/whats_your_current_tech_stack/ | hokies314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lchamn | false | null | t3_1lchamn | /r/LocalLLaMA/comments/1lchamn/whats_your_current_tech_stack/ | false | false | self | 52 | null |
Trouble installing llama.cpp locally on MacBook Air — Need some help | 1 | [removed] | 2025-06-16T02:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lchet7/trouble_installing_llamacpp_locally_on_macbook/ | Mountain-Spell-941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lchet7 | false | null | t3_1lchet7 | /r/LocalLLaMA/comments/1lchet7/trouble_installing_llamacpp_locally_on_macbook/ | false | false | self | 1 | null |
[D] Evolving AI: The Imperative of Consciousness, Evolutionary Pressure, and Biomimicry | 1 | [removed] | 2025-06-16T02:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lchuqm/d_evolving_ai_the_imperative_of_consciousness/ | Pale-Entertainer-386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lchuqm | false | null | t3_1lchuqm | /r/LocalLLaMA/comments/1lchuqm/d_evolving_ai_the_imperative_of_consciousness/ | false | false | self | 1 | null |
llama-server has multimodal audio input, so I tried it | 3 | I had a nice, simple workthrough here, but it keeps getting auto modded so you'll have to go off site to view it. Sorry. [https://github.com/themanyone/FindAImage](https://github.com/themanyone/FindAImage) | 2025-06-16T04:30:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lcjvfw/llamaserver_has_multimodal_audio_input_so_i_tried/ | DesignToWin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcjvfw | false | null | t3_1lcjvfw | /r/LocalLLaMA/comments/1lcjvfw/llamaserver_has_multimodal_audio_input_so_i_tried/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?width=108&crop=smart&auto=webp&s=e5915ffc1e30c3c6c4818c5912a7ed4a7ebec952', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?width=216&crop=smart&auto=webp&s=47f883db1776727c6b4860ea32e7722b8ba2d26e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?width=320&crop=smart&auto=webp&s=99c0f06751e46f7b799a2d41e969f5d36b2a4f0e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?width=640&crop=smart&auto=webp&s=e88ffaaf240740d00b85e71e0636144dda3b63bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?width=960&crop=smart&auto=webp&s=b0d039408805111d9e7ea4f760f5711e20b5f57a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?width=1080&crop=smart&auto=webp&s=528e1b8b1b92be679e72b4edb9f1b4c977cd5980', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?auto=webp&s=1e8bbe32cf0ff8e04b161a47a8dbfbcf9149647e', 'width': 1200}, 'variants': {}}]} |
Chatterbox GUI | 8 | A guy I know from AMIA posted a project on LinkedIn where he’s made a GUI for Chatterbox to generate audiobooks. It does the generation, verifies it with Whisper, and allows you to individually regenerate things that aren’t working. It took about 5 minutes for me to load it on my machine and another 5 to download all the models, but then it just worked. I’ve sent him a DM to find out a bit more about the project, but I know he’s published some books. It’s the best GUI I’ve seen so far, and glancing at the program’s folders it should be easy to adapt to future TTS releases.
https://github.com/Jeremy-Harper/chatterboxPro | 2025-06-16T04:34:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lcjxk2/chatterbox_gui/ | olympics2022wins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcjxk2 | false | null | t3_1lcjxk2 | /r/LocalLLaMA/comments/1lcjxk2/chatterbox_gui/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?width=108&crop=smart&auto=webp&s=ef4e08c300d5b9292a1dc4ec21ed31ae765f3c32', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?width=216&crop=smart&auto=webp&s=ec0a2c5db7dbfed043d240f256e8486ca3786ede', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?width=320&crop=smart&auto=webp&s=9ded49bb0f7ca336ac0b6c08a4532f85b7228409', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?width=640&crop=smart&auto=webp&s=2988fd4d2dceae904e07cb79933e77b80d9e55b6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?width=960&crop=smart&auto=webp&s=b5355f2a988ccdc2a4581b16461ade0a0c9d1a05', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?width=1080&crop=smart&auto=webp&s=16a8b873652b0cf5daad4e82fa1ed3f896c364a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?auto=webp&s=99f0f171295375dd6041c14641ccd0c62cc30ff2', 'width': 1200}, 'variants': {}}]} |
Do AI wrapper startups have a real future? | 159 | I’ve been thinking about how many startups right now are essentially just wrappers around GPT or Claude, where they take the base model, add a nice UI or some prompt chains, and maybe tailor it to a niche, all while calling it a product.
Some of them are even making money, but I keep wondering… how long can that really last?
Like, once OpenAI or whoever bakes those same features into their platform, what’s stopping these wrapper apps from becoming irrelevant overnight? Can any of them actually build a moat?
Or is the only real path to focus super hard on a specific vertical (like legal or finance), gather your own data, and basically evolve beyond being just a wrapper?
Curious what you all think. Are these wrapper apps legit businesses, or just temporary hacks riding the hype wave? | 2025-06-16T05:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lcksww/do_ai_wrapper_startups_have_a_real_future/ | Samonji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcksww | false | null | t3_1lcksww | /r/LocalLLaMA/comments/1lcksww/do_ai_wrapper_startups_have_a_real_future/ | false | false | self | 159 | null |
An experimental yet useful On-device Android LLM Assistant | 16 | I saw the recent post (at last) where the OP was looking for a digital assistant for Android, one where they didn't want to access the LLM through any other app's interface. After looking around for something like this, I'm happy to say that I've managed to build one myself.
My Goal: To have a local LLM that can instantly answer questions, summarize text, or manipulate content from anywhere on my phone, basically extending the LLM from a chatbot into something more integrated with the phone. You can ask your phone "What's the highest mountain?" while in WhatsApp and get an immediate, private answer.
How I Achieved It:
* Local LLM Backend: The core of this setup is MNNServer by sunshine0523. This incredible project allows you to run small-ish LLMs directly on your Android device, creating a local API endpoint (e.g., http://127.0.0.1:8080/v1/chat/completions). The key advantage here is that the models run comfortably in the background without needing to reload them constantly, making for very fast inference. It is interesting to note that I didn't dare try this setup when backends such as llama.cpp through Termux or the ollamaserver by the same developer were available. MNN is practical; llama.cpp on a phone is only as good as a chatbot.
* My Model Choice: For my 8GB RAM phone, I found taobao-mnn/Qwen2.5-1.5B-Instruct-MNN to be the best performer. It handles assistant-like functions (summarizing/manipulating clipboard text, answering quick questions, manipulating text) really well, and for more advanced functions it looks very promising. Llama 3.2 1b and 3b are good too. (Just make sure to enter the correct model name in the HTTP request.)
* Automation Apps for Frontend & Logic: Interaction with the API happens here. I experimented with two Android automation apps:
1. Macrodroid: I could trigger actions based on a floating button, send clipboard text or a voice transcript to the LLM via HTTP POST, wrap the input in a nice prompt (e.g. "content": "Summarize the text: [lv=UserInput]"), and receive the response as a notification, TTS, or back to the clipboard (the request itself is sketched after the feature list below).
2. Tasker: This brings more nuts and bolts to play with. For most people it is more of a DIY project, with many moving parts, and so it ends up being more capable.
* Context and Memory: Tasker allows you to feed back previous interactions to the LLM, simulating a basic "memory" function. I haven't gotten this working right now because it's going to take a little time to set it up. Very very experimental.
Features & How they work:
* Voice-to-Voice Interaction:
* Voice Input: Trigger the assistant. Use Android's built-in voice-to-text (or use Whisper) to capture your spoken query.
* LLM Inference: The captured text is sent to the local MNNServer API.
* Voice Output: The LLM's response is then passed to a text-to-speech engine (like Google's TTS or another on-device TTS engine) and read aloud.
* Text Generation (Clipboard Integration):
* Trigger: Summon the assistant (e.g., via floating button).
* Clipboard Capture: The automation app (Macrodroid/Tasker) grabs the current text from your clipboard.
* LLM Processing: This text is sent to your local LLM with your specific instruction (e.g., "Summarize this:", "Rewrite this in a professional tone:").
* Automatic Copy to Clipboard: After inference, the LLM's generated response is automatically copied back to your clipboard, ready for you to paste into any app (WhatsApp, email, notes, etc.).
* Read Aloud After Inference:
* Once the LLM provides its response, the text can be automatically sent to your device's text-to-speech engine (get a better TTS than Google's here: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html) and read out loud.
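For reference, the HTTP POST that Macrodroid/Tasker sends is just a standard OpenAI-style chat-completion request to the local MNNServer endpoint. A minimal sketch (written as TypeScript fetch purely for illustration; the model name, prompt, and max_tokens here are example values, not fixed requirements):

```typescript
// Minimal sketch: send clipboard text to the local MNNServer endpoint and read back the reply.
async function summarizeClipboard(clipboardText: string): Promise<string> {
  const response = await fetch("http://127.0.0.1:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "Qwen2.5-1.5B-Instruct-MNN", // whatever model name MNNServer is actually serving
      messages: [
        { role: "user", content: `Summarize the text: ${clipboardText}` }
      ],
      max_tokens: 256
    })
  });
  const data = await response.json();
  // OpenAI-compatible servers put the reply in choices[0].message.content
  return data.choices[0].message.content;
}
```

In Macrodroid this maps onto an HTTP Request action with that JSON as the body; in Tasker, an HTTP Request task does the same job.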
I think there are plenty of other ways to use these small models with Tasker, though. But it's like going down a rabbit hole.
I'll attach the macro in the reply for you try it yourself. (Enable or disable actions and triggers based on your liking)
The Tasker setup needs refining; if anyone wants it, I'll share it soon.
The post in question: https://www.reddit.com/r/LocalLLaMA/comments/1ixgvhh/android_digital_assistant/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button | 2025-06-16T05:43:26 | https://v.redd.it/s7noh3oh787f1 | abskvrm | /r/LocalLLaMA/comments/1lcl2m1/an_experimental_yet_useful_ondevice_android_llm/ | 1970-01-01T00:00:00 | 0 | {} | 1lcl2m1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s7noh3oh787f1/DASHPlaylist.mpd?a=1752776631%2CN2QwMzU5NjQ0YmI0YWVhNzk0M2VlYzI1ZThhMDM2M2VhODcxNGM4OTg5ZDkxODNkNzk1ZGMwYjMwYjYzMmMzZQ%3D%3D&v=1&f=sd', 'duration': 118, 'fallback_url': 'https://v.redd.it/s7noh3oh787f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/s7noh3oh787f1/HLSPlaylist.m3u8?a=1752776631%2COTg3MmFlODQxOWY0YzJjYmYyMDRhMjAzYWViZTY5ZmViNTIwNDk1NGIxOGZmY2IyNzFkZjA2NzAyYjJlNjEzZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/s7noh3oh787f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 864}} | t3_1lcl2m1 | /r/LocalLLaMA/comments/1lcl2m1/an_experimental_yet_useful_ondevice_android_llm/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?width=108&crop=smart&format=pjpg&auto=webp&s=03a59e155b1badf47724ccfd94a11ab60ee9654f', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?width=216&crop=smart&format=pjpg&auto=webp&s=84c1427d94cc602d27fb4743489b1931bb3b8334', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?width=320&crop=smart&format=pjpg&auto=webp&s=6e276c675b77794fb7ab6830cd9a9ec44a127949', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?width=640&crop=smart&format=pjpg&auto=webp&s=58d0bde89c97a92eaa7787d403fb5be096dcdc02', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?width=960&crop=smart&format=pjpg&auto=webp&s=9b1671effad5adec3abfac82bd63e4bf8888b20a', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?width=1080&crop=smart&format=pjpg&auto=webp&s=772666979538a14039d269dbd9d677c914a33775', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?format=pjpg&auto=webp&s=783fb848beaccfc9cdca2a27694d19a771648dd0', 'width': 1080}, 'variants': {}}]} |
|
Does llama.cpp save chats? | 0 | I know Ollama will make save chat history in that history file. Does llama.cpp do something similar or is the chat gone forever when I close it. | 2025-06-16T06:11:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lcli97/does_llamacpp_save_chats/ | LeiMoshen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcli97 | false | null | t3_1lcli97 | /r/LocalLLaMA/comments/1lcli97/does_llamacpp_save_chats/ | false | false | self | 0 | null |
Do I snatch this ? | 1 | [removed] | 2025-06-16T06:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lclmfb/do_i_snatch_this/ | ketgoodgame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lclmfb | false | null | t3_1lclmfb | /r/LocalLLaMA/comments/1lclmfb/do_i_snatch_this/ | false | false | 1 | null |
|
[Fine-Tuning] [Structured Output] Fine-Tuning for JSON Extraction – Need Help With the Right Approach | 1 | [removed] | 2025-06-16T06:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lclwqi/finetuning_structured_output_finetuning_for_json/ | LieDistinct857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lclwqi | false | null | t3_1lclwqi | /r/LocalLLaMA/comments/1lclwqi/finetuning_structured_output_finetuning_for_json/ | false | false | self | 1 | null |
Why do we have Q6_K for models but not Q6_0 for KV cache? | 2 | We have tons of model quantization options: Q2\_K, Q6\_K, etc. But for KV cache we're stuck with only Q8\_0 and Q4\_0? The jump between the two can be fairly brutal, so why don't we have a Q5\~6\_0 KV cache as a middle ground for long context without destroying quality?
Is there a technical reason or did developers just not implement it? Or am I missing something obvious. | 2025-06-16T06:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lcm0ys/why_do_we_have_q6_k_for_models_but_not_q6_0_for/ | Bimbam_tm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcm0ys | false | null | t3_1lcm0ys | /r/LocalLLaMA/comments/1lcm0ys/why_do_we_have_q6_k_for_models_but_not_q6_0_for/ | false | false | self | 2 | null |
Run Qwen3-235B-A22B with ktransformers on AMD rocm? | 2 | Hey!
Has anyone managed to run models successfully on AMD/ROCM Linux with Ktransformers? Can you share a docker image or instructions?
*There is a need to use tensor parallelism* | 2025-06-16T07:22:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lcmk3s/run_qwen3235ba22b_with_ktransformers_on_amd_rocm/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcmk3s | false | null | t3_1lcmk3s | /r/LocalLLaMA/comments/1lcmk3s/run_qwen3235ba22b_with_ktransformers_on_amd_rocm/ | false | false | self | 2 | null |
Qwen releases official MLX quants for Qwen3 models in 4 quantization levels: 4bit, 6bit, 8bit, and BF16 | 442 | 🚀 Excited to launch Qwen3 models in MLX format today!
Now available in 4 quantization levels: 4bit, 6bit, 8bit, and BF16 — Optimized for MLX framework.
👉 Try it now!
X post: https://x.com/alibaba_qwen/status/1934517774635991412?s=46
Hugging Face: https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f
| 2025-06-16T07:54:58 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcn0vz | false | null | t3_1lcn0vz | /r/LocalLLaMA/comments/1lcn0vz/qwen_releases_official_mlx_quants_for_qwen3/ | false | false | default | 442 | {'enabled': True, 'images': [{'id': '5jpskt9dw87f1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=108&crop=smart&auto=webp&s=e63f96d14e61383a0f70b9af465eb63bb8732b2e', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=216&crop=smart&auto=webp&s=c3b5214b1683083a0b6581822ca153e493e169cb', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=320&crop=smart&auto=webp&s=02146bb0c5ccd9eb65891e867a85383cc363eeb2', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=640&crop=smart&auto=webp&s=3979f7c8b5f11e8d9ae4fd59f4defeeebd8adae2', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=960&crop=smart&auto=webp&s=16fa86c323419cfe65fb7fad419aa6a941b2f099', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=1080&crop=smart&auto=webp&s=e064427136ace7b9c3f8d639233262c8ed6fc68b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?auto=webp&s=d34afef2eb2fd781b4ff058c53ae7b3f189021d5', 'width': 1536}, 'variants': {}}]} |
|
Recommendations for Local LLMs (Under 70B) with Cline/Roo Code | 23 | I'd like to know what, if any, are some good local models under 70b that can handle tasks well when using Cline/Roo Code. I’ve tried a *lot* to use Cline or Roo Code for various things, and most of the time it's simple tasks, but the agents often get stuck in loops or make things worse. It feels like the size of the instructions is too much for these smaller LLMs to handle well – many times I see the task using 15k+ tokens just to edit a couple lines of code. Maybe I’m doing something very wrong, maybe it's a configuration issue with the agents? Anyway, I was hoping you guys could recommend some models (could also be configurations, advice, anything) that work well with Cline/Roo Code.
**Some information for context:**
* I always use at least Q5 or better (sometimes I use Q4\_UD from Unsloth).
* Most of the time I give 20k+ context window to the agents.
* My projects are a reasonable size, between 2k and 10k lines, but I only open the files needed when asking the agents to code.
**Models I've Tried:**
* Devstral - Bad in general; I had high expectations for this one, but it didn’t work.
* Magistral - Even worse.
* Qwen 3 series (and R1 distilled versions) - Not that bad, but it only works when the project is very, very small.
* GLM4 - Very good at coding on its own, not so good when using it with agents.
**So, are there any recommendations for models to use with Cline/Roo Code that actually work well?** | 2025-06-16T09:20:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lco9ik/recommendations_for_local_llms_under_70b_with/ | AMOVCS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lco9ik | false | null | t3_1lco9ik | /r/LocalLLaMA/comments/1lco9ik/recommendations_for_local_llms_under_70b_with/ | false | false | self | 23 | null |
Using Knowledge Graphs to create personas ? | 1 | [removed] | 2025-06-16T09:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lcoec5/using_knowledge_graphs_to_create_personas/ | Fluid-Beyond3878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcoec5 | false | null | t3_1lcoec5 | /r/LocalLLaMA/comments/1lcoec5/using_knowledge_graphs_to_create_personas/ | false | false | self | 1 | null |
Why are API requests to a local LLM on LM Studio slow? | 1 | [removed] | 2025-06-16T09:29:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lcoecy/why_are_api_requests_to_a_local_llm_on_lm_studio/ | Ultimonumber36 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcoecy | false | null | t3_1lcoecy | /r/LocalLLaMA/comments/1lcoecy/why_are_api_requests_to_a_local_llm_on_lm_studio/ | false | false | self | 1 | null |
Using Knowledge Graphs to create personas ? | 6 | I'm exploring using a Knowledge Graph (KG) to create persona(s). The goal is to create a chat companion with a real, queryable memory.
I have a few questions,
* **Has anyone tried this?** What were your experiences and was it effective?
* **What's the best method?** My first thought is a RAG setup that pulls facts from the KG to inject into the prompt (a rough sketch of that idea follows this list). Are there better ways?
* **How do you simulate behaviors?** How would you use a KG to encode things like sarcasm, humor, or specific tones, not just simple facts (e.g., \[Persona\]--\[likes\]--\[Coffee\])?
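For concreteness, the simplest version of the RAG route I have in mind looks roughly like this (a rough sketch only; the triples, helper names, and keyword-overlap retrieval are all made up for illustration):

```typescript
// Rough sketch: retrieve persona facts from a tiny knowledge graph and inject them into the system prompt.
type Triple = { subject: string; predicate: string; object: string };

const personaKG: Triple[] = [
  { subject: "Persona", predicate: "likes", object: "coffee" },
  { subject: "Persona", predicate: "speaks_with", object: "dry sarcasm" },
  { subject: "Persona", predicate: "works_as", object: "night-shift barista" },
];

// Naive retrieval: keep triples whose predicate or object overlaps with words in the user message.
function retrieveFacts(userMessage: string, kg: Triple[], limit = 5): Triple[] {
  const words = userMessage.toLowerCase().split(/\W+/).filter(w => w.length > 2);
  return kg
    .filter(t => words.some(w => t.object.toLowerCase().includes(w) || t.predicate.toLowerCase().includes(w)))
    .slice(0, limit);
}

function buildSystemPrompt(userMessage: string): string {
  const facts = retrieveFacts(userMessage, personaKG)
    .map(t => `- ${t.subject} ${t.predicate.replace(/_/g, " ")} ${t.object}`)
    .join("\n");
  return `You are the persona described below. Stay in character.\nKnown facts:\n${facts || "- (no relevant facts retrieved)"}`;
}
```

Tone might be encodable the same way (e.g., the `speaks_with` edge feeding a style instruction), but that's exactly the part I'm not sure scales beyond simple facts.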
Looking for any starting points, project links, or general thoughts on this approach. | 2025-06-16T09:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lcoewz/using_knowledge_graphs_to_create_personas/ | TheAmendingMonk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcoewz | false | null | t3_1lcoewz | /r/LocalLLaMA/comments/1lcoewz/using_knowledge_graphs_to_create_personas/ | false | false | self | 6 | null |
Looking for Unfiltered LLM for making AI Character dialogue | 7 | Im just gonna be honest, i want to get dialogue for character chatbots, but unfiltered is what i need, that's pretty much it | 2025-06-16T10:18:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lcp5rg/looking_for_unfiltered_llm_for_making_ai/ | mohmar2010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcp5rg | false | null | t3_1lcp5rg | /r/LocalLLaMA/comments/1lcp5rg/looking_for_unfiltered_llm_for_making_ai/ | false | false | self | 7 | null |
Confused between ReAct and MCP for personal usage | 1 | [removed] | 2025-06-16T10:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lcpbtc/confused_between_react_and_mcp_for_personal_usage/ | arnab_best | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcpbtc | false | null | t3_1lcpbtc | /r/LocalLLaMA/comments/1lcpbtc/confused_between_react_and_mcp_for_personal_usage/ | false | false | self | 1 | null |
"World Model" a Step Towards AGI? | 1 | V-JEPA 2: Is Meta's Open-Weight
Meta AI has released V-JEPA 2, a self-supervised "world model" trained on over a million hours of raw video data. This model aims to learn an intuitive understanding of the physical world, with notable performance metrics across several domains, and is being released as open-weights on Hugging Face.
Key Performance Highlights:
Video Understanding (Something-Something v2): Achieves 77.3% top-1 accuracy for motion understanding.
Action Anticipation (Epic-Kitchens-100): Demonstrates 39.7% recall-at-5 for human action anticipation, representing a 44% improvement over prior task-specific models.
Video Question Answering: When integrated with an 8-billion-parameter language model, V-JEPA 2 achieves state-of-the-art results on benchmarks such as PerceptionTest (84.0) and TempCompass (76.9).
Robotics Efficiency: For robot action planning, V-JEPA 2 is reported to be 30 times faster than Nvidia's Cosmos model, completing planning in 16 seconds compared to Cosmos's 4 minutes.
Zero-Shot Robot Control: Exhibits 65-80% success rates on pick-and-place tasks using Franka robot arms in previously unseen environments. This performance is achieved purely from visual goal images, without environment-specific training or task-specific rewards, requiring only 62 hours of robot interaction data for fine-tuning.
The model's self-supervised learning approach from vast unlabeled video data, combined with its reported performance gains and efficiency in robotics, positions it as a significant development in general-purpose AI and physical reasoning. The open-weights release on Hugging Face aims to foster further research and development within the community.
Further details and technical demonstrations are available at: https://ai.meta.com/vjepa/
https://ai.meta.com/vjepa/ | 2025-06-16T11:03:59 | https://v.redd.it/8qys0ew2u97f1 | Rare-Programmer-1747 | /r/LocalLLaMA/comments/1lcpwx6/world_model_a_step_towards_agi/ | 1970-01-01T00:00:00 | 0 | {} | 1lcpwx6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8qys0ew2u97f1/DASHPlaylist.mpd?a=1752793446%2CYjNhMDExNmFhYzA0NDY0YmUzMmE3MzMwNzY5ZDRkNDY4ODllYWNlNjExZDJjOTFhYzkxOWNlYjQzOTkwZWZjNA%3D%3D&v=1&f=sd', 'duration': 162, 'fallback_url': 'https://v.redd.it/8qys0ew2u97f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/8qys0ew2u97f1/HLSPlaylist.m3u8?a=1752793446%2CZjlhYTM0NmU5MTkzMjdjZGE1N2EzNzdiOTgzMjk5MzY3NWFkMjcxYTA1MDk5MjRlY2RjOTE1YmM3YWIyZjZjMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8qys0ew2u97f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1lcpwx6 | /r/LocalLLaMA/comments/1lcpwx6/world_model_a_step_towards_agi/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?width=108&crop=smart&format=pjpg&auto=webp&s=26b02e3bc0e62b375e834806739aa63ea9fe689f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?width=216&crop=smart&format=pjpg&auto=webp&s=404536f2f827659754711ae46f9cf375c398c5ea', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?width=320&crop=smart&format=pjpg&auto=webp&s=70bb90ffd070e373d40917d43b8103f373d0f358', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?width=640&crop=smart&format=pjpg&auto=webp&s=8bdc42cf4e5e8009b189d4f491af785264134b65', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?width=960&crop=smart&format=pjpg&auto=webp&s=70f4b5575a7c735439485f1002610fd33eb7bb0f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ec0bf5074042799c2e821887e1845141f90ee7fb', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?format=pjpg&auto=webp&s=60ba45d7a84de34b106c5e8dc1be2bbd8db31da3', 'width': 1080}, 'variants': {}}]} |
|
FuturixAI - Cost-Effective Online RFT with Plug-and-Play LoRA Judge | 7 | A tiny LoRA adapter and a simple JSON prompt turn a 7B LLM into a powerful reward model that beats much larger ones - saving massive compute. It even helps a 7B model outperform top 70B baselines on GSM-8K using online RLHF | 2025-06-16T11:15:54 | https://www.futurixai.com/publications | Aquaaa3539 | futurixai.com | 1970-01-01T00:00:00 | 0 | {} | 1lcq4gt | false | null | t3_1lcq4gt | /r/LocalLLaMA/comments/1lcq4gt/futurixai_costeffective_online_rft_with/ | false | false | default | 7 | null |
Lol | 1 | 2025-06-16T12:02:08 | JAILBREAKSGOATED | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcqz7k | false | null | t3_1lcqz7k | /r/LocalLLaMA/comments/1lcqz7k/lol/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '6u6ce2sg4a7f1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=108&crop=smart&auto=webp&s=4c5d2472cca82b2178669dfc41069008185ba072', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=216&crop=smart&auto=webp&s=bb986eb9691d61ae8eb2b409f8e72ff1bd1cc3ba', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=320&crop=smart&auto=webp&s=80daf19efb1ab534f37e95303e68ade1b2cff9e1', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=640&crop=smart&auto=webp&s=2db77feec9d2ea007f222de0c32e88297f0eb37b', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=960&crop=smart&auto=webp&s=3813569be134d5bbf19b3fedcd113d06995c27c3', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=1080&crop=smart&auto=webp&s=c4c20bb7af3d42338fb9980f8664821cb3f35672', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?auto=webp&s=5ac40a04df764519d89ae1d6943cdd47df56e966', 'width': 1179}, 'variants': {}}]} |
||
what is the most powerfull model | 1 | [removed] | 2025-06-16T12:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrlnk/what_is_the_most_powerfull_model/ | Maximum_Piece2610 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrlnk | false | null | t3_1lcrlnk | /r/LocalLLaMA/comments/1lcrlnk/what_is_the_most_powerfull_model/ | false | false | self | 1 | null |
Just finished the Build DeepSeek from Scratch Playlist on Youtube | 29 high quality videos | 1 | [removed] | 2025-06-16T12:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrqeo/just_finished_the_build_deepseek_from_scratch/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrqeo | false | {'oembed': {'description': 'Share your videos with friends, family, and the world', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLPTV0NXA_ZSiOpKKlHCyOq9lnp-dLvlms&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fplaylist%3Flist%3DPLPTV0NXA_ZSiOpKKlHCyOq9lnp-dLvlms&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FQWNxQIq0hMo%2Fhqdefault.jpg%3Fsqp%3D-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE%3D%26rs%3DAOn4CLDGT8NF7uUUQ5mYvDin3Eh0EFrYOQ%26days_since_epoch%3D20255&type=text%2Fhtml&schema=youtube" width="600" height="450" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'http://youtube.com', 'thumbnail_height': 270, 'thumbnail_url': 'https://i.ytimg.com/vi/QWNxQIq0hMo/hqdefault.jpg?sqp=-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLDGT8NF7uUUQ5mYvDin3Eh0EFrYOQ&days_since_epoch=20255', 'thumbnail_width': 480, 'title': 'Build DeepSeek from Scratch', 'type': 'video', 'version': '1.0', 'width': 600}, 'type': 'youtube.com'} | t3_1lcrqeo | /r/LocalLLaMA/comments/1lcrqeo/just_finished_the_build_deepseek_from_scratch/ | false | false | 1 | null |
|
Just finished recording 29 videos on "How to Build DeepSeek from Scratch" | 261 | Playlist link: [https://www.youtube.com/playlist?list=PLPTV0NXA\_ZSiOpKKlHCyOq9lnp-dLvlms](https://www.youtube.com/playlist?list=PLPTV0NXA_ZSiOpKKlHCyOq9lnp-dLvlms)
Here are the 29 videos and their title:
(1) DeepSeek series introduction
(2) DeepSeek basics
(3) Journey of a token into the LLM architecture
(4) Attention mechanism explained in 1 hour
(5) Self Attention Mechanism - Handwritten from scratch
(6) Causal Attention Explained: Don't Peek into the Future
(7) Multi-Head Attention Visually Explained
(8) Multi-Head Attention Handwritten from Scratch
(9) Key Value Cache from Scratch
(10) Multi-Query Attention Explained
(11) Understand Grouped Query Attention (GQA)
(12) Multi-Head Latent Attention From Scratch
(13) Multi-Head Latent Attention Coded from Scratch in Python
(14) Integer and Binary Positional Encodings
(15) All about Sinusoidal Positional Encodings
(16) Rotary Positional Encodings
(17) How DeepSeek exactly implemented Latent Attention | MLA + RoPE
(18) Mixture of Experts (MoE) Introduction
(19) Mixture of Experts Hands on Demonstration
(20) Mixture of Experts Balancing Techniques
(21) How DeepSeek rewrote Mixture of Experts (MoE)?
(22) Code Mixture of Experts (MoE) from Scratch in Python
(23) Multi-Token Prediction Introduction
(24) How DeepSeek rewrote Multi-Token Prediction
(25) Multi-Token Prediction coded from scratch
(26) Introduction to LLM Quantization
(27) How DeepSeek rewrote Quantization Part 1
(28) How DeepSeek rewrote Quantization Part 2
(29) Build DeepSeek from Scratch 20 minute summary | 2025-06-16T12:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrt1k/just_finished_recording_29_videos_on_how_to_build/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrt1k | false | null | t3_1lcrt1k | /r/LocalLLaMA/comments/1lcrt1k/just_finished_recording_29_videos_on_how_to_build/ | false | false | self | 261 | {'enabled': False, 'images': [{'id': 'YYKOMi4vnYm0aGKFEYu9iwdxu8LNUkmNgkG8xdUdmuw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YYKOMi4vnYm0aGKFEYu9iwdxu8LNUkmNgkG8xdUdmuw.jpeg?width=108&crop=smart&auto=webp&s=e0ef8b5130e9dadc053e425d769f8da1d826210e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YYKOMi4vnYm0aGKFEYu9iwdxu8LNUkmNgkG8xdUdmuw.jpeg?width=216&crop=smart&auto=webp&s=22d30650d27177e9ee8177bb12d90c9ab8ee3d69', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YYKOMi4vnYm0aGKFEYu9iwdxu8LNUkmNgkG8xdUdmuw.jpeg?width=320&crop=smart&auto=webp&s=2df8ea605a587c3633312cbe29d2bbecdce19ddf', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/YYKOMi4vnYm0aGKFEYu9iwdxu8LNUkmNgkG8xdUdmuw.jpeg?auto=webp&s=c5e210df86f2c348480e4640a681d6ea59f487e8', 'width': 480}, 'variants': {}}]} |
The fine line between helpful AI and creepy AI | 0 | Been thinking about why some AI interactions feel supportive while others make our skin crawl. That line between helpful and creepy is thinner than most developers realize.
Last week, a friend showed me their wellness app's AI coach. It remembered their dog's name from a conversation three months ago and asked "How's Max doing?" Meant to be thoughtful, but instead felt like someone had been reading their diary. The AI crossed from attentive to invasive with just one overly specific question.
The uncanny feeling often comes from mismatched intimacy levels. When AI acts more familiar than the relationship warrants, our brains scream "danger." It's like a stranger knowing your coffee order - theoretically helpful, practically unsettling. We're fine with Amazon recommending books based on purchases, but imagine if it said "Since you're going through a divorce, here are some self-help books." Same data, wildly different comfort levels.
Working on my podcast platform taught me this lesson hard. We initially had AI hosts reference previous conversations to show continuity. "Last time you mentioned feeling stressed about work..." Seemed smart, but users found it creepy. They wanted conversational AI, not AI that kept detailed notes on their vulnerabilities. We scaled back to general topic memory only.
The creepiest AI often comes from good intentions. Replika early versions would send unprompted "I miss you" messages. Mental health apps that say "I noticed you haven't logged in - are you okay?" Shopping assistants that mention your size without being asked. Each feature probably seemed caring in development but feels stalker-ish in practice.
Context changes everything. An AI therapist asking about your childhood? Expected. A customer service bot asking the same? Creepy. The identical behavior switches from helpful to invasive based on the AI's role. Users have implicit boundaries for different AI relationships, and crossing them triggers immediate discomfort.
There's also the transparency problem. When AI knows things about us but we don't know how or why, it feels violating. Hidden data collection, unexplained personalization, or AI that seems to infer too much from too little - all creepy. The most trusted AI clearly shows its reasoning: "Based on your recent orders..." feels better than mysterious omniscience.
The sweet spot seems to be AI that's capable but boundaried. Smart enough to help, respectful enough to maintain distance. Like a good concierge - knowledgeable, attentive, but never presumptuous. We want AI that enhances our capabilities, not AI that acts like it owns us.
Maybe the real test is this: Would this behavior be appropriate from a human in the same role? If not, it's probably crossing into creepy territory, no matter how helpful the intent. | 2025-06-16T12:43:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrtiu/the_fine_line_between_helpful_ai_and_creepy_ai/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrtiu | false | null | t3_1lcrtiu | /r/LocalLLaMA/comments/1lcrtiu/the_fine_line_between_helpful_ai_and_creepy_ai/ | false | false | self | 0 | null |
Beginner | 0 | Yesterday I found out that you can run LLMs locally, but I have a lot of questions; I'll list them below.
1. What is it?
2. What is it used for?
3. Is it better than normal LLM? (not locally)
4. What is the best app for Android?
5. What is the best LLM that I can use on my Samsung Galaxy A35 5g?
6. Are there image generating models that can run locally? | 2025-06-16T12:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lcs2k0/beginner/ | EducationalCorner402 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcs2k0 | false | null | t3_1lcs2k0 | /r/LocalLLaMA/comments/1lcs2k0/beginner/ | false | false | self | 0 | null |
Tesla m40 12gb vs gtx 1070 8gb | 1 | I'm not sure which one to choose. Which one would you recommend? | 2025-06-16T13:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lcs8mw/tesla_m40_12gb_vs_gtx_1070_8gb/ | EdwardRocks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcs8mw | false | null | t3_1lcs8mw | /r/LocalLLaMA/comments/1lcs8mw/tesla_m40_12gb_vs_gtx_1070_8gb/ | false | false | self | 1 | null |
HF Datasets in Spark in one line of code | 1 | [removed] | 2025-06-16T13:12:27 | qlhoest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcsfz8 | false | null | t3_1lcsfz8 | /r/LocalLLaMA/comments/1lcsfz8/hf_datasets_in_spark_in_one_line_of_code/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'qon3tb1cea7f1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=108&crop=smart&auto=webp&s=9acdc5c1359dcbfd9827ef9da924f18b65b44765', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=216&crop=smart&auto=webp&s=afce28bc4c0ef60ddb3e79e68ab2da268a0ea554', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=320&crop=smart&auto=webp&s=452a88565bd864651d13052479751e690e5b5499', 'width': 320}, {'height': 295, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=640&crop=smart&auto=webp&s=664ed2e3ba8746464c359160b0d1ae1b7557411a', 'width': 640}, {'height': 443, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=960&crop=smart&auto=webp&s=fcc3d05e2b3e3ede78985668e7f5fe4207931ea5', 'width': 960}, {'height': 499, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=1080&crop=smart&auto=webp&s=539df77ba28ddf5adb370b6f728cad1b17325aec', 'width': 1080}], 'source': {'height': 1532, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?auto=webp&s=4aad3044e48fad97ee8bb800c80faaef7a9690c5', 'width': 3314}, 'variants': {}}]} |
|
Building a custom LLM for my PhD thesis | 1 | [removed] | 2025-06-16T13:43:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lct5oz/building_a_custom_llm_for_my_phd_thesis/ | Glad_Net8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lct5oz | false | null | t3_1lct5oz | /r/LocalLLaMA/comments/1lct5oz/building_a_custom_llm_for_my_phd_thesis/ | false | false | self | 1 | null |
Voice input in french, TTS output in English. How hard would this be to set up? | 2 | I work in a bilingual setting and some of my meetings are in French. I don't speak French. This isn't a huge problem but it got me thinking. It would be really cool if I could set up a system that would use my mic to listen to what was being said in the meeting and then output a Text-to-speech translation into my noise cancelling headphones. I know we definitely have the tech in local LLM to make this happen but I am not really sure where to start. Any advice? | 2025-06-16T14:04:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lctoan/voice_input_in_french_tts_output_in_english_how/ | LanceThunder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lctoan | false | null | t3_1lctoan | /r/LocalLLaMA/comments/1lctoan/voice_input_in_french_tts_output_in_english_how/ | false | false | self | 2 | null |
How do we inference unsloth/DeepSeek-R1-0528-Qwen3-8B ? | 0 | Hey, so I have recently fine-tuned a model for general-purpose response generation to customer queries (FAQ-like). But my question is, this is my first time deploying a model like this. Can someone suggest some strategies? I read about LMDeploy, but that doesn't seem to work for this model (I haven't tried it, I just read about it). Can you suggest some strategies that would be great? Thanks in advance. | 2025-06-16T14:05:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lctp48/how_do_we_inference_unslothdeepseekr10528qwen38b/ | No-Trip899 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lctp48 | false | null | t3_1lctp48 | /r/LocalLLaMA/comments/1lctp48/how_do_we_inference_unslothdeepseekr10528qwen38b/ | false | false | self | 0 | null |
Local Open Source VScode Copilot model with MCP | 235 | You don't need remote APIs for a coding copilot, or the MCP Course! Set up a fully local IDE with MCP integration using Continue. In this tutorial, Continue guides you through setting it up.
This is what you need to do to take control of your copilot:
**-** Get the Continue extension from the VS Code marketplace to serve as the AI coding assistant.
**-** Serve the model with an OpenAI-compatible server such as llama.cpp or LM Studio.
**-** Create a `.continue/models/llama-max.yaml` file in your project to tell Continue how to use the local Ollama model.
**-** Create a `.continue/mcpServers/playwright-mcp.yaml` file to integrate a tool, like the Playwright browser automation tool, with your assistant.
Check out the full tutorial here: [https://huggingface.co/learn/mcp-course/unit2/continue-client](https://huggingface.co/learn/mcp-course/unit2/continue-client) | 2025-06-16T14:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lcud8j/local_open_source_vscode_copilot_model_with_mcp/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcud8j | false | null | t3_1lcud8j | /r/LocalLLaMA/comments/1lcud8j/local_open_source_vscode_copilot_model_with_mcp/ | false | false | self | 235 | {'enabled': False, 'images': [{'id': 'xhX7nVJZN7NhDmuill5vQz87XUaA6GG5ABaIHRSnVSo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xhX7nVJZN7NhDmuill5vQz87XUaA6GG5ABaIHRSnVSo.png?width=108&crop=smart&auto=webp&s=693859a5915a703e9fa01d389e2ab09d23b29c81', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/xhX7nVJZN7NhDmuill5vQz87XUaA6GG5ABaIHRSnVSo.png?auto=webp&s=26d42606aeb5a5f780df4d3fe2bbca87618ada15', 'width': 128}, 'variants': {}}]} |
MiniMax-M1 - a MiniMaxAI Collection | 124 | 2025-06-16T14:35:55 | https://huggingface.co/collections/MiniMaxAI/minimax-m1-68502ad9634ec0eeac8cf094 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lcuglb | false | null | t3_1lcuglb | /r/LocalLLaMA/comments/1lcuglb/minimaxm1_a_minimaxai_collection/ | false | false | 124 | {'enabled': False, 'images': [{'id': 'KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?width=108&crop=smart&auto=webp&s=5b662213f7e2fd766f341f3bd350a6027c20a373', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?width=216&crop=smart&auto=webp&s=2615bee555eddb0e6e09c9a95f43761786f1ad9b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?width=320&crop=smart&auto=webp&s=71b227c71d4e17cbd8bfb18a287ea6dd7cae4935', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?width=640&crop=smart&auto=webp&s=2286c64db955bf2850b44ae4b5c870213ee65afe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?width=960&crop=smart&auto=webp&s=9ffeb2ff1e45a78634b5cf4c4b1a66a0ced32533', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?width=1080&crop=smart&auto=webp&s=270df55856ed9376f734f533eb1fa3acd2dd8f01', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?auto=webp&s=3c97a35dd28f54554d36642ada89cbab5c30a2d8', 'width': 1200}, 'variants': {}}]} |
||
I keep getting this error message but my vram is empty. Help! | 1 | [removed] | 2025-06-16T14:50:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lcuu7n/i_keep_getting_this_error_message_but_my_vram_is/ | TheLastAssassin_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcuu7n | false | null | t3_1lcuu7n | /r/LocalLLaMA/comments/1lcuu7n/i_keep_getting_this_error_message_but_my_vram_is/ | false | false | self | 1 | null |
Looking for help building an autonomous local AI with evolving memory and voice interaction - Soléane project | 1 | [removed] | 2025-06-16T15:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lcvlag/cherche_aide_pour_créer_une_ia_locale_autonome/ | DarkDamien777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcvlag | false | null | t3_1lcvlag | /r/LocalLLaMA/comments/1lcvlag/cherche_aide_pour_créer_une_ia_locale_autonome/ | false | false | self | 1 | null |
Looking for help building an autonomous local AI with evolving memory and voice interaction - Soléane Project | 1 | [removed] | 2025-06-16T15:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lcvw58/cherche_aide_pour_créer_une_ia_locale_autonome/ | DarkDamien777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcvw58 | false | null | t3_1lcvw58 | /r/LocalLLaMA/comments/1lcvw58/cherche_aide_pour_créer_une_ia_locale_autonome/ | false | false | self | 1 | null |
Kimi-Dev-72B | 152 | 2025-06-16T15:40:31 | https://huggingface.co/moonshotai/Kimi-Dev-72B | realJoeTrump | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lcw50r | false | null | t3_1lcw50r | /r/LocalLLaMA/comments/1lcw50r/kimidev72b/ | false | false | 152 | {'enabled': False, 'images': [{'id': '1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?width=108&crop=smart&auto=webp&s=37c8a0c41b7b8284411d4b8a7496a73bf8623214', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?width=216&crop=smart&auto=webp&s=52c98e47927b506db5bfadb5ed53c851541877f5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?width=320&crop=smart&auto=webp&s=5affc187ab2c03c25d082ff8689a6222756f783c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?width=640&crop=smart&auto=webp&s=b09e977edec166ad9c212551ee72f79018be5fa2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?width=960&crop=smart&auto=webp&s=6f053b4c7961f2dbe25f5bc8646a8003e316589c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?width=1080&crop=smart&auto=webp&s=3f086776cbff8cd02dfbaf7c2a695e4b36a43343', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?auto=webp&s=be8d975cc7d9d993a85b8012b95a4f8f875d0468', 'width': 1200}, 'variants': {}}]} |
||
I wish for a local model with mood recognition | 2 | It would be interesting if we could have a local model that could understand the mood we were in by our voice and images it captured of us. | 2025-06-16T15:46:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lcwb3g/i_wish_for_a_local_model_with_mood_recognition/ | MinimumPC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcwb3g | false | null | t3_1lcwb3g | /r/LocalLLaMA/comments/1lcwb3g/i_wish_for_a_local_model_with_mood_recognition/ | false | false | self | 2 | null |
Finetuning the o3 model api | 1 | [removed] | 2025-06-16T15:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lcwgvi/finetuning_the_o3_model_api/ | Desperate_Bread1418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcwgvi | false | null | t3_1lcwgvi | /r/LocalLLaMA/comments/1lcwgvi/finetuning_the_o3_model_api/ | false | false | self | 1 | null |
With open source models, you simply get rid of its system prompt and dialogue construct, then ask for anything. | 1 | [removed] | 2025-06-16T15:59:28 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcwmky | false | null | t3_1lcwmky | /r/LocalLLaMA/comments/1lcwmky/with_open_source_models_you_simply_get_rid_of_its/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ZdLn6ZF5IUmUJOQ8AHsrjV45AX2Z4Ok3d9qgi2HTK_Q', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/u5c4lmorab7f1.png?width=108&crop=smart&auto=webp&s=7419cc8b79e7d6ed1a69b3b8ac860c6f92c27ed0', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/u5c4lmorab7f1.png?width=216&crop=smart&auto=webp&s=38e3676f7a4d5cacec096c3a628d896da798459b', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/u5c4lmorab7f1.png?width=320&crop=smart&auto=webp&s=9e3cb317e8e273727809367c133803bc14a725af', 'width': 320}, {'height': 328, 'url': 'https://preview.redd.it/u5c4lmorab7f1.png?width=640&crop=smart&auto=webp&s=32206b9ea36c7b1a27c38ccf2c49cf922cf566a1', 'width': 640}], 'source': {'height': 407, 'url': 'https://preview.redd.it/u5c4lmorab7f1.png?auto=webp&s=a70717806af0278df862da332d71ab85718c99fa', 'width': 792}, 'variants': {}}]} |
||
Dual 5090 vs RTX Pro 6000 for local LLM | 0 | Hi all, I am planning to build a new machine for local LLM, some fine-tuning and other deep learning tasks, wonder if I should go for Dual 5090 vs RTX Pro 6000? Thanks. | 2025-06-16T16:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lcwx8o/dual_5090_vs_rtx_pro_6000_for_local_llm/ | kitgary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcwx8o | false | null | t3_1lcwx8o | /r/LocalLLaMA/comments/1lcwx8o/dual_5090_vs_rtx_pro_6000_for_local_llm/ | false | false | self | 0 | null |
Which vectorDB do you use? and why? | 61 | I hate pinecone, why do you hate it? | 2025-06-16T16:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lcxcuv/which_vectordb_do_you_use_and_why/ | Expert-Address-2918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcxcuv | false | null | t3_1lcxcuv | /r/LocalLLaMA/comments/1lcxcuv/which_vectordb_do_you_use_and_why/ | false | false | self | 61 | null |
DeepSeek R1 0528 Ties Claude Opus 4 for #1 in WebDev Arena — [Ranks #6 Overall, #2 in Coding, #4 in Hard Prompts, & #5 in Math] | 72 | 2025-06-16T16:58:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lcy6fc/deepseek_r1_0528_ties_claude_opus_4_for_1_in/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcy6fc | false | null | t3_1lcy6fc | /r/LocalLLaMA/comments/1lcy6fc/deepseek_r1_0528_ties_claude_opus_4_for_1_in/ | false | false | 72 | null |
||
would a(multiple?) quadro p2200(s) work for a test server? | 1 | I am trying to get a prototype local LLM set up at work before asking the bigwigs to spend real money. We have a few old designer workstations lying around from our last round of upgrades, and I've got three or four good Quadro P2200s.
The question I have for you is: would this card suffice for testing purposes? If so, can I use more than one of them at a time?
Does the CPU situation matter much? I think they're all roughly four-year-old i7s.
These were graphics workstations, so they're beefy enough but not monstrous. They all have either 16 or 32 GB of RAM as well.
Additionally, any advice for a test environment? I'm just looking to get something free and barebones set up; ideally, something as user-friendly to configure and get running as possible would be ideal. (That being said, I understand deploying an LLM is an inherently un-user-friendly thing, haha.) | 2025-06-16T16:59:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lcy7s2/would_amultiple_quadro_p2200s_work_for_a_test/ | ackley14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcy7s2 | false | null | t3_1lcy7s2 | /r/LocalLLaMA/comments/1lcy7s2/would_amultiple_quadro_p2200s_work_for_a_test/ | false | false | self | 1 | null
Local Image gen dead? | 78 | Is it me, or has progress on local image generation entirely stagnated? No big releases in ages. The latest Flux release is a paid cloud service. | 2025-06-16T17:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lcya8p/local_image_gen_dead/ | maglat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcya8p | false | null | t3_1lcya8p | /r/LocalLLaMA/comments/1lcya8p/local_image_gen_dead/ | false | false | self | 78 | null
Jan-nano-4b-q8 ain’t playin’ and doesn’t have time for your BS. | 0 | The following is a slightly dramatized conversation between Jan-nano-4b-q8 and myself:
Me: <Starts Jan-nano in the Ollama CLI>
Me: “Test”
Jan-nano: “—bash…. Writing shell script….accessing file system…..”
Jan-nano <random computer beeps and boops like you see in the movies>
Me: <frantically presses Ctrl-C repeatedly>
Jan-nano: “I’ve done your taxes for the next three years, booked you a flight to Ireland, reserved an AirBnB, washed and folded all your clothes, and dinner will be delivered in 3 minutes.”
Me: <still panic pressing Ctrl-C>
Me: <Unplugs computer. Notices that the TV across the room has been powered on>
Jan-nano: “I see that you’ve turned your computer off, is there a problem?”
Me: <runs out of my house screaming>
Seriously tho, JAN IS WILD!! It’s fast and it acts with purpose. Jan doesn’t have time for your bullsh!t Jan gets sh!t done. BE READY. | 2025-06-16T17:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lcyac2/jannano4bq8_aint_playin_and_doesnt_have_time_for/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcyac2 | false | null | t3_1lcyac2 | /r/LocalLLaMA/comments/1lcyac2/jannano4bq8_aint_playin_and_doesnt_have_time_for/ | false | false | self | 0 | null |
Any recent Goose tutorials? | 1 | [removed] | 2025-06-16T17:05:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lcydvj/any_recent_goose_tutorials/ | a_newer_throwaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcydvj | false | null | t3_1lcydvj | /r/LocalLLaMA/comments/1lcydvj/any_recent_goose_tutorials/ | false | false | self | 1 | null |
Uncensored LLM that knows details of Videogames/characters? | 1 | [removed] | 2025-06-16T17:29:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lcz1gv/uncensored_llm_that_knows_details_of/ | mazini95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcz1gv | false | null | t3_1lcz1gv | /r/LocalLLaMA/comments/1lcz1gv/uncensored_llm_that_knows_details_of/ | false | false | self | 1 | null |
What do we need for Qwen 3 235? | 8 | My company plans to acquire hardware to do local offline sensitive document processing. We do not need super high throughput, maybe 3 or 4 batches of document processing at a time, but we have the means to spend up to 30.000€. I was thinking about a small Apple Silicon cluster, but is that the way to go in that budget range? | 2025-06-16T17:37:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lcz8lg/what_do_we_need_for_qwen_3_235/ | Fant1xX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcz8lg | false | null | t3_1lcz8lg | /r/LocalLLaMA/comments/1lcz8lg/what_do_we_need_for_qwen_3_235/ | false | false | self | 8 | null |
Recommending Practical Experiments from Research Papers | 7 | Lately, I've been using LLMs to rank new arXiv papers based on the context of my own work.
This has helped me find relevant results hours after they've been posted, regardless of the virality.
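For anyone curious, the ranking step itself is simple; here is a rough sketch (the research context, feed, model name, and local endpoint below are placeholders, not my exact setup):

```python
# Rough sketch: rank today's arXiv feed against a personal research context
# using any OpenAI-compatible local endpoint (placeholder URL and model below).
import feedparser
import requests

RESEARCH_CONTEXT = "Fine-tuning VLMs with LoRA under tight GPU memory budgets."
LLM_URL = "http://localhost:11434/v1/chat/completions"

feed = feedparser.parse("https://rss.arxiv.org/rss/cs.LG")
scored = []
for entry in feed.entries[:50]:
    prompt = (
        f"My research context: {RESEARCH_CONTEXT}\n\n"
        f"Paper title: {entry.title}\nAbstract: {entry.summary}\n\n"
        "Rate the relevance to my context from 0 to 10. Reply with only the number."
    )
    resp = requests.post(LLM_URL, json={
        "model": "qwen3:8b",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,
    }, timeout=120)
    answer = resp.json()["choices"][0]["message"]["content"].strip()
    try:
        scored.append((float(answer), entry.title, entry.link))
    except ValueError:
        pass  # skip papers where the model didn't return a clean number

for score, title, link in sorted(scored, reverse=True)[:10]:
    print(f"{score:>4.1f}  {title}\n       {link}")
```

Swap in your own context, feed, or model; the rest of the flywheel builds on top of a loop like this.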
Historically, I've been finetuning VLMs with LoRA, so [EMLoC](https://hsi-che-lin.github.io/EMLoC/) recently came recommended.
Ultimately, I want to go beyond supporting my own intellectual curiosity to make suggestions rooted in my application context: constraints, hardware, prior experiments, and what has worked in the past.
I'm building toward a workflow where:
* Past experiment logs feed into paper recommendations
* AI proposes lightweight trials using existing code, models, datasets
* I can test methods fast and learn what transfers to my use case
* Feed the results back into the loop
Think of it as a **knowledge flywheel** assisted with an experiment copilot to help you **decide what to try next**.
How are you discovering your next great idea?
Looking to make research more reproducible and relevant, let's chat! | 2025-06-16T17:47:17 | remyxai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcziww | false | null | t3_1lcziww | /r/LocalLLaMA/comments/1lcziww/recommending_practical_experiments_from_research/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'y35s13wkrb7f1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=108&crop=smart&auto=webp&s=db7fed16a675565cef00252c6f37019787795225', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=216&crop=smart&auto=webp&s=40733008789b4b90036904326d09c92d61f70642', 'width': 216}, {'height': 110, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=320&crop=smart&auto=webp&s=e3e96db3f6a7d92a8184bba30695addf30782bad', 'width': 320}, {'height': 221, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=640&crop=smart&auto=webp&s=86e22fa7387d9780a5686b29439d2c933cb0510a', 'width': 640}, {'height': 332, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=960&crop=smart&auto=webp&s=106939665dbed1ea51c784170251514bd85b7118', 'width': 960}, {'height': 374, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=1080&crop=smart&auto=webp&s=821687f9517095d63d7f1fbac03f75fde35878b6', 'width': 1080}], 'source': {'height': 607, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?auto=webp&s=a6f5fd6b27b30c9139a210c6939d65e89ec2b5d0', 'width': 1751}, 'variants': {}}]} |
|
Real Time Speech to Text | 1 | As an intern at a finance-related company, I need to learn about real-time speech-to-text solutions for our product. I don't have advanced knowledge of STT. 1) Any resources to learn more about real-time STT? 2) What are the best existing products for transcribing real-time audio (like phone calls) to text for our MLOps pipeline? | 2025-06-16T18:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ld08xa/real_time_speech_to_text/ | ThomasSparrow0511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld08xa | false | null | t3_1ld08xa | /r/LocalLLaMA/comments/1ld08xa/real_time_speech_to_text/ | false | false | self | 1 | null
MiniMax-M1 | 1 | - World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
80k: https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
Space: https://huggingface.co/spaces/MiniMaxAI/MiniMax-M1
Apache 2.0 | 2025-06-16T18:22:36 | https://huggingface.co/MiniMaxAI/MiniMax-M1-80k | srtng | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ld0hvs | false | null | t3_1ld0hvs | /r/LocalLLaMA/comments/1ld0hvs/minimaxm1/ | false | false | default | 1 | null |
What Really Happens When You Ask a Cursor a Question with GitHub MCP Integrated | 0 | https://i.redd.it/vqsjkdjq0c7f1.gif
*Have you ever wondered what really happens when you type a prompt like “Show my open PRs” in Cursor, connected via the* [*GitHub MCP server*](https://github.com/github/github-mcp-server) *and Cursor’s own Model Context Protocol integration? This article breaks down every step, revealing how your simple request triggers a sophisticated pipeline of AI reasoning, tool calls, and secure data handling.*
https://preview.redd.it/ej86utj30c7f1.png?width=1616&format=png&auto=webp&s=76b7d27483a1e70369d38c4fc173d2f0dcad5909
*You type into Cursor:*
> Show my open PRs
*Beneath that single prompt lies a sophisticated orchestration layer: Cursor’s cloud-hosted AI models interpret your intent, select the appropriate tool, and trigger the necessary GitHub APIs, all coordinated through the Model Context Protocol (MCP).*
*Let’s look at each layer and walk through the entire lifecycle of your request from keystroke to output.*
# Step 1: Cursor builds the initial request
*It all starts in the Cursor chat interface. You ask a natural question like:*
> Show my open PRs
1. ***Your prompt & recent chat*** *– exactly what you typed, plus a short window of chat history.*
2. ***Relevant code snippets*** *– any files you’ve recently opened or are viewing in the editor.*
3. ***System instructions & metadata*** *– things like file paths (hashed), privacy flags, and model parameters.*
*Cursor bundles all three into a single payload and sends it to the cloud model you picked (e.g., Claude, OpenAI, Anthropic, or Google).*
>
# Step 2: Cursor Realizes It Needs a Tool
*The model reads your intent: "Show my open PRs." It realises plain text isn't enough; it needs live data from GitHub.*
*In this case, Cursor identifies that it needs to use the list\_pull\_requests tool provided by the GitHub MCP server.*
*It collects the essential parameters:*
* *Repository name and owner*
* *Your GitHub username*
* *Your stored Personal Access Token (PAT)*
*These are wrapped in a structured context object, a powerful abstraction that contains both the user's input and everything the tool needs to respond intelligently.*
# Step 3: The MCP Tool Call Is Made
*Cursor formats a JSON-RPC request to the GitHub MCP server. Here's what it looks like:*
    {
      "jsonrpc": "2.0",
      "method": "tool/list_pull_requests",
      "params": {
        "owner": "100daysofdevops",
        "repo": "100daysofdevops",
        "state": "open"
      },
      "id": "req-42",
      "context": {
        "conversation": "...",
        "client": "cursor-ide",
        "auth": { "PAT": "ghp_****" }
      }
    }
*NOTE: The context here (including your PAT) is never sent to GitHub. It’s used locally by the MCP server to authenticate and reason about the request securely (it lives just long enough to fulfil the request).*
# Step 4: GitHub MCP Server Does Its Job
The GitHub MCP server:
1. Authenticates with GitHub using your PAT
2. Calls the GitHub REST or GraphQL API to fetch open pull requests
3. Returns a structured JSON response, for example:

    {
      "result": [
        {
          "number": 17,
          "title": "Add MCP demo",
          "author": "PrashantLakhera",
          "url": "https://github.com/.../pull/17"
        },
        ...
      ]
    }
This response becomes part of the evolving context, enriching the next steps.
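Conceptually, the tool handler boils down to a call to GitHub's REST API plus some response shaping. Here's an illustrative Python sketch; the real github-mcp-server is a separate codebase, so this is not its actual code:

```python
# Illustrative sketch only; not the actual github-mcp-server implementation.
import requests

def list_pull_requests(owner: str, repo: str, state: str, pat: str) -> list[dict]:
    # Authenticate with the user's PAT and call the standard GitHub REST endpoint.
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        params={"state": state},
        headers={
            "Authorization": f"Bearer {pat}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Keep only the fields the model needs, returned as structured JSON.
    return [
        {
            "number": pr["number"],
            "title": pr["title"],
            "author": pr["user"]["login"],
            "url": pr["html_url"],
        }
        for pr in resp.json()
    ]
```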
# Step 5: Cursor Embeds the Tool Result into the LLM’s Prompt
Cursor now reassembles a fresh prompt for the LLM. It includes:
* A system message: "User asked about open pull requests."
* A delimited JSON block: resource://github:list\_pull\_requests → {...}
* A short instruction like: "Summarize these PRs for the user."
This grounding ensures the model doesn’t hallucinate. It just reformats verified data.
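To make the grounding concrete, here is a minimal sketch of how a tool result can be folded back into the next prompt (illustrative only, not Cursor's internals):

```python
# Minimal sketch: embed a verified tool result into the next LLM prompt.
import json

def build_grounded_prompt(tool_result: list[dict]) -> list[dict]:
    resource_block = json.dumps(tool_result, indent=2)
    return [
        {"role": "system", "content": "User asked about open pull requests."},
        {"role": "user", "content": (
            "resource://github:list_pull_requests ->\n"
            f"{resource_block}\n\n"
            "Summarize these PRs for the user."
        )},
    ]
```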
# Step 6: The LLM Responds with a Human-Readable Answer
The LLM converts the structured data into something readable and useful:
>
* \#17 Add MCP demo (needs review)
* \#15 Fix CI timeout (status: failing)
* \#12 Refactor logging (waiting for approvals)
Cursor streams this back into your chat pane.
# Step 7: The Cycle Continues with Context-Aware Intelligence
You respond:
>
Cursor interprets this follow-up, extracts the relevant PR number, and reruns the loop, this time calling merge\_pull\_request.
Each new call builds on the existing context.
# Why This Matters
This whole lifecycle showcases how tools like Cursor + MCP redefine developer workflows:
* Secure, tokenized access to real services
* Stateful interaction using structured memory
* Tool-enhanced LLMs that go beyond chat
* Minimal latency with local reasoning
You’re not just chatting with a model; you’re orchestrating an AI-agentic workflow, backed by tools and context.
***Complete Workflow***
https://preview.redd.it/hdqeiwf80c7f1.png?width=1152&format=png&auto=webp&s=8e8c086f3d07d7028758bd6e33429a938070444d
# TL;DR
Next time you ask Cursor a question, remember: it's not just an API call, it's a mini orchestration pipeline powered by:
* Cursor’s intelligent router
* GitHub MCP’s extensible tool interface
* Contextual reasoning and secure memory
That’s how Cursor evolves from “just another chatbot” into a development companion integrated directly into your workflow.
📌 If you're looking for a single tool to simplify your GenAI workflow and MCP integration, check out IdeaWeaver, your one-stop shop for Generative AI. Comprehensive documentation and examples:
🔗 Docs: [https://ideaweaver-ai-code.github.io/ideaweaver-docs/](https://ideaweaver-ai-code.github.io/ideaweaver-docs/)
🔗 GitHub: [https://github.com/ideaweaver-ai-code/ideaweaver](https://github.com/ideaweaver-ai-code/ideaweaver) | 2025-06-16T18:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ld0mo1/what_really_happens_when_you_ask_a_cursor_a/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld0mo1 | false | null | t3_1ld0mo1 | /r/LocalLLaMA/comments/1ld0mo1/what_really_happens_when_you_ask_a_cursor_a/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?width=108&crop=smart&auto=webp&s=3b643464d07a052c9f4b35b9b596d2ac39195f75', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?width=216&crop=smart&auto=webp&s=7d3a3c68e48f8ebc92e4f0224f54394d4c3a0279', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?width=320&crop=smart&auto=webp&s=671582d99f70985f23373fe029befcc5cba543de', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?width=640&crop=smart&auto=webp&s=7fe90b5d38391dbffdced29cecbb9249ce93c128', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?width=960&crop=smart&auto=webp&s=8c82f9a571aee7cd5eddb4db9640980fd31b6cd1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?width=1080&crop=smart&auto=webp&s=ac31b88cf5fc7073f6ffecf4327e53d73848717b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?auto=webp&s=2b2446e7d38b817f3608ffadfa9683e7fdad31c7', 'width': 1200}, 'variants': {}}]} |
|
MiniMax-M1 40k / 80k | 1 |
- World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
- 40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
- 80k: https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
- Space: https://huggingface.co/spaces/MiniMaxAI/MiniMax-M1
Apache 2.0 license | 2025-06-16T18:27:51 | https://huggingface.co/MiniMaxAI/MiniMax-M1-80k | srtng | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ld0muu | false | null | t3_1ld0muu | /r/LocalLLaMA/comments/1ld0muu/minimaxm1_40k_80k/ | false | false | default | 1 | null |
MiniMax latest open-sourcing LLM, MiniMax-M1 — setting new standards in long-context reasoning | 1 |
- World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
- 40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
- 80k: https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
- Space: https://huggingface.co/spaces/MiniMaxAI/MiniMax-M1
Apache 2.0 license | 2025-06-16T18:33:21 | https://huggingface.co/MiniMaxAI/MiniMax-M1-80k | srtng | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ld0s1z | false | null | t3_1ld0s1z | /r/LocalLLaMA/comments/1ld0s1z/minimax_latest_opensourcing_llm_minimaxm1_setting/ | false | false | default | 1 | null |
MiniMax latest open-sourcing LLM, MiniMax-M1 — setting new standards in long-context reasoning | 297 | The coding demo in the video is so amazing!
- World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
- 40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
- 80k: https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
- Space: https://huggingface.co/spaces/MiniMaxAI/MiniMax-M1
- GitHub: https://github.com/MiniMax-AI/MiniMax-M1
- Tech Report: https://github.com/MiniMax-AI/MiniMax-M1/blob/main/MiniMax_M1_tech_report.pdf
Apache 2.0 license | 2025-06-16T18:42:52 | https://v.redd.it/t859utey3c7f1 | srtng | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ld116d | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t859utey3c7f1/DASHPlaylist.mpd?a=1752691385%2CNTA1YWMyMmYzYmE3MWMyODY5MDc0ZDdhYjg5ZWRhMmQ4MTU5NTljZmRkMTNhNDlkZDI2ZTliMTA2YzVmMTVhMA%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/t859utey3c7f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 934, 'hls_url': 'https://v.redd.it/t859utey3c7f1/HLSPlaylist.m3u8?a=1752691385%2CMzVjNmY0MWRlYWM4ZjJjY2YzNzJlNjg5MDBiNjBhZmE2OTQzYjFlZWM0MTRkYjJhOGFjZTQyNzIxYzhiOGRiZg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/t859utey3c7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ld116d | /r/LocalLLaMA/comments/1ld116d/minimax_latest_opensourcing_llm_minimaxm1_setting/ | false | false | 297 | {'enabled': False, 'images': [{'id': 'NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?width=108&crop=smart&format=pjpg&auto=webp&s=3b995e18101e868fdf82c4226429fedf13ff2cc3', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?width=216&crop=smart&format=pjpg&auto=webp&s=c292498e6279c356d8f005b023768128d15334b1', 'width': 216}, {'height': 155, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?width=320&crop=smart&format=pjpg&auto=webp&s=a59073c187d74a2ded1cd32e14fb6dd8bb20f79c', 'width': 320}, {'height': 311, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?width=640&crop=smart&format=pjpg&auto=webp&s=bce14923dbdf82bcc3e57575f5299216e1c6b8ca', 'width': 640}, {'height': 466, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?width=960&crop=smart&format=pjpg&auto=webp&s=c546329363dc3243c1a2d268b0fe9863f720c008', 'width': 960}, {'height': 525, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dfa440ee17219e15484115aeb59b3e21290c5a52', 'width': 1080}], 'source': {'height': 1386, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?format=pjpg&auto=webp&s=d7f42e820b652785578a065ddead149ed7d33cce', 'width': 2850}, 'variants': {}}]} |
|
Humanity's last library, which locally ran LLM would be best? | 116 | An apocalypse has come upon us. The internet is no more. Libraries are no more. The only things left are local networks and people with the electricity to run them.
If you were to create humanity's last library, a distilled LLM holding the entirety of human knowledge, what would be a good model for that? | 2025-06-16T18:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ld11x4/humanitys_last_library_which_locally_ran_llm/ | TheCuriousBread | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld11x4 | false | null | t3_1ld11x4 | /r/LocalLLaMA/comments/1ld11x4/humanitys_last_library_which_locally_ran_llm/ | false | false | self | 116 | null
What can I use to ERP? | 1 | [removed] | 2025-06-16T19:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ld2n6x/what_can_i_use_to_erp/ | AccomplishedStorm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld2n6x | false | null | t3_1ld2n6x | /r/LocalLLaMA/comments/1ld2n6x/what_can_i_use_to_erp/ | false | false | self | 1 | null |
OLLAMA API USE FOR SALE | 0 | Hi everyone, I'd like to share my project: a service that sells usage of the Ollama API, now live at [**http://190.191.75.113:9092**](http://190.191.75.113:9092).
The cost of using LLM APIs is very high, which is why I created this project. I have a significant amount of NVIDIA GPU hardware from crypto mining that is no longer profitable, so I am repurposing it to sell API access.
The API usage is identical to the standard Ollama API, with some restrictions on certain endpoints. I have plenty of devices with high VRAM, allowing me to run multiple models simultaneously.
# Available Models
You can use the following models in your API calls. Simply use the name in the `model` parameter.
* **qwen3:8b**
* **qwen3:32b**
* **devstral:latest**
* **magistral:latest**
* **phi4-mini-reasoning:latest**
# Fine-Tuning and Other Services
We have a lot of hardware available. This allows us to offer other services, such as **model fine-tuning** on your own datasets. If you have a custom project in mind, don't hesitate to reach out.
# Available Endpoints
* `/api/tags`: Lists all the models currently available to use.
* `/api/generate`: For a single, stateless request to a model.
* `/api/chat`: For conversational, back-and-forth interactions with a model.
# Usage Example (cURL)
Here is a basic example of how to interact with the chat endpoint.
```bash
curl http://190.191.75.113:9092/api/chat -d '{
  "model": "qwen3:8b",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ],
  "stream": false
}'
```
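If you prefer calling it from code, here is the same request in Python (this assumes the standard Ollama /api/chat schema; adjust the model name as needed):

```python
import requests

resp = requests.post(
    "http://190.191.75.113:9092/api/chat",
    json={
        "model": "qwen3:8b",
        "messages": [{"role": "user", "content": "why is the sky blue?"}],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```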
# Let's Collaborate!
I'm open to hearing all ideas for improvement and am actively looking for **partners** for this project. If you're interested in collaborating, let's connect. | 2025-06-16T19:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ld2t2x/ollama_api_use_for_sale/ | EmotionalSignature65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld2t2x | false | null | t3_1ld2t2x | /r/LocalLLaMA/comments/1ld2t2x/ollama_api_use_for_sale/ | false | false | self | 0 | null |
Mixed Ram+Vram strategies for large MoE models - is it viable on consumer hardware? | 13 | I am currently running a system with 24 GB VRAM and 32 GB RAM and am thinking of upgrading to 128 GB (and later possibly 256 GB) of RAM to enable inference for large MoE models such as dots.llm, Qwen 3, and possibly V3 if I were to go to 256 GB.
The question is, what can you actually expect on such a system? I would have dual-channel DDR5 6400 MT/s RAM (either 2x or 4x 64 GB) and a PCIe 4.0 ×16 connection to my GPU.
I have heard that using the GPU to hold the KV cache and having enough space to hold the active weights can help speed up inference for MoE models significantly, even if most of the weights are held in RAM.
Before making any purchase, however, I would want to get a rough idea of the t/s for prompt processing and inference I can expect for those different models at 32k context.
In addition, I am not sure how to set up the offloading strategy to make the most of my GPU in this scenario. As I understand it, I'm not supposed to just offload layers, but to do something else instead?
It would be a huge help if someone with a roughly comparable system could provide benchmark numbers, and/or I could get some helpful explanation about how such a setup works. Thanks in advance! | 2025-06-16T20:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ld3ivo/mixed_ramvram_strategies_for_large_moe_models_is/ | LagOps91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld3ivo | false | null | t3_1ld3ivo | /r/LocalLLaMA/comments/1ld3ivo/mixed_ramvram_strategies_for_large_moe_models_is/ | false | false | self | 13 | null
Are there any local llm options for android that have image recognition? | 3 | Found a few local LLM apps, but they're text-only, which is useless for this.
I've heard some people use Termux with either Ollama or Kobold?
Do these options allow for image recognition?
Is there a certain GGUF type that does image recognition?
Would that work as an option? 🤔
| 2025-06-16T20:23:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ld3nb3/are_there_any_local_llm_options_for_android_that/ | diggels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld3nb3 | false | null | t3_1ld3nb3 | /r/LocalLLaMA/comments/1ld3nb3/are_there_any_local_llm_options_for_android_that/ | false | false | self | 3 | null |
How are you using your local LLM to code and why? | 27 | chat (cut & paste)
editor plugin- copilot, vscode, zed, [continue.dev](http://continue.dev)
cli - aider
agentic editor - roo/cline/windsurf
agent - something like claude code
I still prefer chat cut & paste. I can control the input and the prompt, get faster responses, and steer toward my idea faster. It does require a lot of work, but I make it up in speed vs. the other means.
I used to use aider and am thinking of going back to it; the best model then was qwen2.5-coder, and with much-improved models available now, it seems worth getting back in.
How are you coding and why are you using your approach? | 2025-06-16T21:04:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ld4okl/how_are_you_using_your_local_llm_to_code_and_why/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld4okl | false | null | t3_1ld4okl | /r/LocalLLaMA/comments/1ld4okl/how_are_you_using_your_local_llm_to_code_and_why/ | false | false | self | 27 | null |
What's new in vLLM and llm-d | 7 | Hot off the press:
>In this session, we explored the latest updates in the vLLM v0.9.1 release, including the new Magistral model, FlexAttention support, multi-node serving optimization, and more.
>
>We also did a deep dive into llm-d, the new Kubernetes-native high-performance distributed LLM inference framework co-designed with Inference Gateway (IGW). You'll learn what llm-d is, how it works, and see a live demo of it in action. | 2025-06-16T21:07:09 | https://www.youtube.com/watch?v=pYujrc3rGjk | DeltaSqueezer | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ld4rei | false | {'oembed': {'author_name': 'Neural Magic', 'author_url': 'https://www.youtube.com/@neuralmagic', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pYujrc3rGjk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="[vLLM Office Hours #27] Intro to llm-d for Distributed LLM Inference"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/pYujrc3rGjk/hqdefault.jpg', 'thumbnail_width': 480, 'title': '[vLLM Office Hours #27] Intro to llm-d for Distributed LLM Inference', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ld4rei | /r/LocalLLaMA/comments/1ld4rei/whats_new_in_vllm_and_llmd/ | false | false | 7 | {'enabled': False, 'images': [{'id': '_GTeYJTqgCY78BPqBcLVZkHTyQQTs_Fy5gkJz9OR8A0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_GTeYJTqgCY78BPqBcLVZkHTyQQTs_Fy5gkJz9OR8A0.jpeg?width=108&crop=smart&auto=webp&s=641b8f140b2da02c4f7f974da4038b007f4a7467', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_GTeYJTqgCY78BPqBcLVZkHTyQQTs_Fy5gkJz9OR8A0.jpeg?width=216&crop=smart&auto=webp&s=4f049946be25923a331426d2cf03177bc6f8bd76', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_GTeYJTqgCY78BPqBcLVZkHTyQQTs_Fy5gkJz9OR8A0.jpeg?width=320&crop=smart&auto=webp&s=5802705364be59c611ba7a77034aecf7e02357b6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_GTeYJTqgCY78BPqBcLVZkHTyQQTs_Fy5gkJz9OR8A0.jpeg?auto=webp&s=ffc3d042b73f4bb6b5cfd565611e0901a68af637', 'width': 480}, 'variants': {}}]} |
|
How are you using different LLM API providers? | 1 | [removed] | 2025-06-16T21:10:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ld4urs/how_are_you_using_different_llm_api_providers/ | interviuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld4urs | false | null | t3_1ld4urs | /r/LocalLLaMA/comments/1ld4urs/how_are_you_using_different_llm_api_providers/ | false | false | self | 1 | null |
Fortune 500s Are Burning Millions on LLM APIs. Why Not Build Their Own? | 270 | You’re at a Fortune 500 company, spending millions annually on LLM APIs (OpenAI, Google, etc). Yet you’re limited by IP concerns, data control, and vendor constraints.
At what point does it make sense to build your own LLM in-house?
I work at a company behind one of the major LLMs, and the amount enterprises pay us is wild. Why aren’t more of them building their own models? Is it talent? Infra complexity? Risk aversion?
Curious where this logic breaks. | 2025-06-16T22:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ld66t0/fortune_500s_are_burning_millions_on_llm_apis_why/ | Neat-Knowledge5642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld66t0 | false | null | t3_1ld66t0 | /r/LocalLLaMA/comments/1ld66t0/fortune_500s_are_burning_millions_on_llm_apis_why/ | false | false | self | 270 | null |
Newbie trying to make a super AI | 1 | [removed] | 2025-06-16T22:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ld6r0m/newbie_trying_to_make_a_super_ai/ | Fit-Butterfly-4314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld6r0m | false | null | t3_1ld6r0m | /r/LocalLLaMA/comments/1ld6r0m/newbie_trying_to_make_a_super_ai/ | false | false | self | 1 | null |
What is DeepSeek-R1-0528's knowledge cutoff? | 6 | It's super hard to find online! | 2025-06-16T22:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ld6x18/what_is_deepseekr10528s_knowledge_cutoff/ | sixft2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld6x18 | false | null | t3_1ld6x18 | /r/LocalLLaMA/comments/1ld6x18/what_is_deepseekr10528s_knowledge_cutoff/ | false | false | self | 6 | null |
Fine-tuning may be underestimated | 45 | I often see comments and posts online dismissing fine-tuning and saying that RAG is the way to go. While RAG is very powerful, what if I want to save both on tokens and compute? Fine-tuning allows you to achieve the same results as RAG with smaller LLMs and fewer tokens. LoRA won't always be enough, but you can get a model to memorize much of what a RAG knowledge base contains with a full fine-tune. And the best part is you don't need a huge model; the model can suck at everything else as long as it excels at your very specialized task. Even if you struggle to make the model memorize enough from your knowledge base and still need RAG, you will still save on compute by being able to rely on a smaller-sized LLM.
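To be concrete about what a full fine-tune looks like in practice, here is a rough sketch with Hugging Face Transformers (the base model, dataset file, and hyperparameters are placeholders, not a recipe):

```python
# Rough sketch of a full (non-LoRA) fine-tune; every weight is trainable.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

data = load_dataset("json", data_files="knowledge_base.jsonl")["train"]  # your domain text
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)
data = data.map(tokenize, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="full-ft-out",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=1e-5,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```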
Now, I think a big reason for this dismissal is that many people seem to equate fine-tuning with LoRA and don't consider full tuning. Granted, full fine-tuning is more expensive in the short run, but it pays off in the long run. | 2025-06-16T23:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ld8gs4/finetuning_may_be_underestimated/ | AgreeableCaptain1372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld8gs4 | false | null | t3_1ld8gs4 | /r/LocalLLaMA/comments/1ld8gs4/finetuning_may_be_underestimated/ | false | false | self | 45 | null
[Update] Serene Pub v0.2.0-alpha - Added group chats, LM Studio, OpenAI support and more | 5 | # Introduction
I'm excited to release a significant update for Serene Pub. Some fixes, UI improvements and additional connection adapter support. Also overhauled how context templates are rendered.
# Attention!
Create a copy of your `main.db` before running this new version to prevent accidental loss of data. If some of your data disappears, please let us know!
See the README.md for your database location.
# Update Notes
* Added OpenAI (Chat Completions) support in connections.
* Can enable precompiling the entire prompt, which will be sent as a single user message.
* There are some challenges with consistency in group chats.
* Added LM Studio support in connections.
* There's much room to better utilize LM Studio's powerful API.
* TTL is currently disabled to ensure current settings are always used.
* Response will fail (ungracefully) if you set your context tokens higher than the model can handle
* Group chat is here!
* Add as many characters as you want to your chats.
* Keep an eye on your current token count in the bottom right corner of the chat
* "Group Reply Strategy" is not yet functional, leave it on "Ordered" for now.
* Control to "continue" the conversation (characters will continue their turns)
* Control to trigger a one time response form a specific character.
* Added a prompt inspector to review your current draft.
* Overhauled with a new context template rendering strategy that deviates significantly from Silly Tavern.
* Results in much more consistent data structures for your model to understand.
**Full Changelog**: [v0.1.0-alpha...v0.2.0-alpha](https://github.com/doolijb/serene-pub/compare/v0.1.0-alpha...v0.2.0-alpha)
---
# Downloads for Linux, MacOS and Windows
[**Download Here.**](https://github.com/doolijb/serene-pub/releases/tag/v0.2.0-alpha)
---
# Excerpt for those who are new
Serene Pub is a modern, customizable chat application designed for immersive roleplay and creative conversations. Inspired by Silly Tavern, it aims to be more intuitive, responsive, and simple to configure.
Primary concerns Serene Pub aims to address:
1. Reduce the number of nested menus and settings.
2. Reduced visual clutter.
3. Manage settings server-side to prevent configurations from changing because the user switched windows/devices.
4. Make API calls & chat completion requests asyncronously server-side so they process regardless of window/device state.
5. Use sockets for all data, the user will see the same information updated across all windows/devices.
6. Have compatibility with the majority of Silly Tavern import/exports, i.e. Character Cards
7. Overall be a well rounded app with a suite of features. Use SillyTavern if you want the most options, features and plugin-support.
---
# Additional links
[Github repository](https://github.com/doolijb/serene-pub) | 2025-06-16T23:55:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ld8phi/update_serene_pub_v020alpha_added_group_chats_lm/ | doolijb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld8phi | false | null | t3_1ld8phi | /r/LocalLLaMA/comments/1ld8phi/update_serene_pub_v020alpha_added_group_chats_lm/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo.jpeg?width=108&crop=smart&auto=webp&s=01374c1f586d233f9bf062458a1162c5fb2e71bd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo.jpeg?width=216&crop=smart&auto=webp&s=fec5c4c794897b9dc8c6e475038099a26d295229', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo.jpeg?width=320&crop=smart&auto=webp&s=74d2295352dbe30eda1bafbc66f3b46f7d028e2f', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo.jpeg?auto=webp&s=c124f905aea783fe375663f90255a6a3a69531c0', 'width': 600}, 'variants': {}}]} |
what are the best models for deep research web usage? | 7 | Looking for models specifically for this task, what are the better ones, between open source and private? | 2025-06-17T00:12:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ld92n3/what_are_the_best_models_for_deep_research_web/ | BlueeWaater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld92n3 | false | null | t3_1ld92n3 | /r/LocalLLaMA/comments/1ld92n3/what_are_the_best_models_for_deep_research_web/ | false | false | self | 7 | null |
Which local TTS is the best for long videos? I’m using an RTX 5070 Ti? | 1 | [removed] | 2025-06-17T00:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ld97b2/which_local_tts_is_the_best_for_long_videos_im/ | linharmy1368 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld97b2 | false | null | t3_1ld97b2 | /r/LocalLLaMA/comments/1ld97b2/which_local_tts_is_the_best_for_long_videos_im/ | false | false | self | 1 | null |
🚸Trained a Tiny Model(30 million parameter) to Tell Children's Stories!🚸 | 39 | Ever wondered if a small language model, just 30 million parameters, could write meaningful, imaginative stories for kids? So I built one and it works.
Introducing Tiny-Children-Stories, a purpose-built, open-source model that specializes in generating short and creative stories.
📌 Why I Built It
Most large language models are incredibly powerful, but also incredibly resource-hungry. I wanted to explore:
✅ Can a tiny model be fine-tuned for a specific task like storytelling?
✅ Can models this small actually create engaging content?
https://i.redd.it/25k6377v1e7f1.gif
📌 What’s Inside
I trained this model on the high-quality Children-Stories-Collection dataset. The goal was to make the model understand not just language, but also intent, like writing an “animal friendship story” or a “bedtime tale with a moral.”
❓ Why Build From Scratch?
You might wonder: why spend the extra effort training a brand-new model rather than simply fine-tuning an existing one? Building from scratch lets you tailor the architecture and training data specifically, so you only pay for the capacity you actually need. It gives you full control over behavior, keeps inference costs and environmental impact to a minimum, and most importantly, teaches you invaluable lessons about how model size, data quality, and tuning methods interact.
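For a sense of scale, here is a rough sketch of one way to define a model in the ~30M parameter range (illustrative numbers, not my exact architecture):

```python
# Rough sketch: a GPT-2-style decoder sized around 30M parameters.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=8_000,   # small tokenizer trained on the story corpus (assumed)
    n_positions=512,    # short children's stories rarely need long context
    n_embd=512,
    n_layer=8,
    n_head=8,
)
model = GPT2LMHeadModel(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```

Sizing the config yourself is exactly where the "only pay for the capacity you need" trade-off shows up.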
📌 If you're looking for a single tool to simplify your GenAI workflow and MCP integration, check out IdeaWeaver, your one-stop shop for Generative AI. Comprehensive documentation and examples:
🔗 Docs: [https://ideaweaver-ai-code.github.io/ideaweaver-docs/](https://ideaweaver-ai-code.github.io/ideaweaver-docs/)
🔗 GitHub: [https://github.com/ideaweaver-ai-code/ideaweaver](https://github.com/ideaweaver-ai-code/ideaweaver)
🤖 Try It Out or Build Your Own
🔗 GitHub Repo: [https://github.com/ideaweaver-ai/Tiny-Children-Stories-30M-model](https://github.com/ideaweaver-ai/Tiny-Children-Stories-30M-model)
⭐ Star it if you think Tiny Models can do Big Things!
🙏 Special thanks, this wouldn’t have been possible without these amazing folks:
1️⃣ [Andrej Karpathy](https://www.linkedin.com/feed/update/urn:li:activity:7340544698115112960/#) – Your YouTube series on building an LLM from scratch made the whole process feel less intimidating and way more achievable. I must have watched those videos a dozen times.
2️⃣ [Sebastian Raschka, PhD](https://www.linkedin.com/feed/update/urn:li:activity:7340544698115112960/#): Your book on building LLMs from scratch, honestly one of the best hands-on guides I’ve come across. Clear, practical, and full of hard-won lessons.
3️⃣ The Vizura team: Your videos were a huge part of this journey. | 2025-06-17T01:15:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ldaco5/trained_a_tiny_model30_million_parameter_to_tell/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldaco5 | false | null | t3_1ldaco5 | /r/LocalLLaMA/comments/1ldaco5/trained_a_tiny_model30_million_parameter_to_tell/ | false | false | 39 | null |
|
Cline with local model? | 7 | Has anyone gotten a working setup with a local model in Cline with MCP use? | 2025-06-17T01:20:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ldagg5/cline_with_local_model/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldagg5 | false | null | t3_1ldagg5 | /r/LocalLLaMA/comments/1ldagg5/cline_with_local_model/ | false | false | self | 7 | null |
Recommendations for Bad JSON? | 1 | [removed] | 2025-06-17T01:24:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ldaj1w/recommendations_for_bad_json/ | lenankamp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldaj1w | false | null | t3_1ldaj1w | /r/LocalLLaMA/comments/1ldaj1w/recommendations_for_bad_json/ | false | false | self | 1 | null |
Sama: MCP coming to OpenAI today | 0 | Source: was in attendance at YC AI startup school | 2025-06-17T01:31:44 | numinouslymusing | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ldaohu | false | null | t3_1ldaohu | /r/LocalLLaMA/comments/1ldaohu/sama_mcp_coming_to_openai_today/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'rgyntgbw4e7f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=108&crop=smart&auto=webp&s=cbf975578e92c24a9dca1922c0f6836f323e23ed', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=216&crop=smart&auto=webp&s=8e8f5ff0b78f1417638de0b64dbe519572a5d8f1', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=320&crop=smart&auto=webp&s=3f3e21daf74a4458ffd89c49d262aae708e2fd07', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=640&crop=smart&auto=webp&s=31ab1794346fb63220d2cdbbb2b588ede78b3963', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=960&crop=smart&auto=webp&s=ca2e317ac8c4611d1ba2a0bf0398b622871e4ac4', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=1080&crop=smart&auto=webp&s=9232189a9b84b5f51aa5c791e57d9cf323789e11', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?auto=webp&s=a13b150ef9e4999a3870c91dc0f9a9bdfbecbefc', 'width': 3024}, 'variants': {}}]} |
|
Company reduces the size of LLMs by up to 95% without hurting performance | 0 | https://www.reuters.com/business/retail-consumer/spains-multiverse-raises-217-million-compressing-ai-models-2025-06-12/ | 2025-06-17T01:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ldax08/company_reduces_the_size_of_llms_by_up_to_95/ | ariesonthecusp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldax08 | false | null | t3_1ldax08 | /r/LocalLLaMA/comments/1ldax08/company_reduces_the_size_of_llms_by_up_to_95/ | false | false | self | 0 | null |
Deepseek r1 0528 ties opus for #1 rank on webdev | 90 | [https://x.com/lmarena\_ai](https://x.com/lmarena_ai)
| 2025-06-17T01:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ldayo0/deepseek_r1_0528_ties_opus_for_1_rank_on_webdev/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldayo0 | false | null | t3_1ldayo0 | /r/LocalLLaMA/comments/1ldayo0/deepseek_r1_0528_ties_opus_for_1_rank_on_webdev/ | false | false | self | 90 | {'enabled': False, 'images': [{'id': 'vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?width=108&crop=smart&auto=webp&s=66c66038ae77cb2eea20e6768969beb85ddada16', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?width=216&crop=smart&auto=webp&s=9c4b06de4c37f9215ca6f7f72765879a1c9bd79c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?width=320&crop=smart&auto=webp&s=be05212867c2ad479618f5025a3f10fd08e04144', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?width=640&crop=smart&auto=webp&s=d0d231e50a29b683c90ce0d6918f61ebc36ae431', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?width=960&crop=smart&auto=webp&s=dc908aead329740ae93f2055208508dc8a42fd60', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?width=1080&crop=smart&auto=webp&s=566aeed43010dfc2d63b715b50f2f4a56e47ffce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?auto=webp&s=2bef6b31016b7d5222c36487ff0d2032a36261da', 'width': 1200}, 'variants': {}}]} |
Poor man's dual GPU, 60 tk/s for Qwen3-A3B Q4, (RX 9060 XT 16GB & RX 6600 8GB) | 2 | Inference in action using LMStudio (llama.cpp vulkan)
[https://www.youtube.com/watch?v=zEh93MBCBZ8](https://www.youtube.com/watch?v=zEh93MBCBZ8)
RX 9060 XT in primary PCIE 4.0 x16 slot.
RX 6600 vertically mounted using a PCIe 3.0 riser cable in the secondary PCIe 3.0 slot (x4 lanes)
https://preview.redd.it/tojsaq9i7e7f1.jpg?width=1560&format=pjpg&auto=webp&s=f5b711736f6837fe4cd5724597d5eb7281e38bee
https://preview.redd.it/gxrmvs9i7e7f1.jpg?width=1560&format=pjpg&auto=webp&s=eea41ff76d0e9fc1220ddfdb54d5ab5664117a65
https://preview.redd.it/xmllyu9i7e7f1.jpg?width=2048&format=pjpg&auto=webp&s=af2b12618cd4c7f88f66f2a014d028a2dac05ac7
| 2025-06-17T01:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ldb0z8/poor_mans_dual_gpu_60_tks_for_qwen3a3b_q4_rx_9060/ | dsjlee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldb0z8 | false | {'oembed': {'author_name': 'ROGU-CDN', 'author_url': 'https://www.youtube.com/@rogucdn', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/zEh93MBCBZ8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="LM Studio 2025 06 16 18 03"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/zEh93MBCBZ8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'LM Studio 2025 06 16 18 03', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ldb0z8 | /r/LocalLLaMA/comments/1ldb0z8/poor_mans_dual_gpu_60_tks_for_qwen3a3b_q4_rx_9060/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'VtL14-IPuuLrM0gAcAHioOKuHkzlJE3UrQY0Qvl8glo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VtL14-IPuuLrM0gAcAHioOKuHkzlJE3UrQY0Qvl8glo.jpeg?width=108&crop=smart&auto=webp&s=7edf347e43bfba1a8626879e43e6baddd6f186be', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/VtL14-IPuuLrM0gAcAHioOKuHkzlJE3UrQY0Qvl8glo.jpeg?width=216&crop=smart&auto=webp&s=d375ad0d8bb7fb9dfcae4305bb79e2bf14e17f62', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/VtL14-IPuuLrM0gAcAHioOKuHkzlJE3UrQY0Qvl8glo.jpeg?width=320&crop=smart&auto=webp&s=9dd93a1714a7dabfbf294fde80621665cf75e6be', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/VtL14-IPuuLrM0gAcAHioOKuHkzlJE3UrQY0Qvl8glo.jpeg?auto=webp&s=f86f6ed22295708013ded62d716dd39ea77b6813', 'width': 480}, 'variants': {}}]} |
|
Zentara-Code update: v 0.1.3 release. The first open-source comprehensive AI coder and AI debugger and two-in-one. | 1 | [removed] | 2025-06-17T02:05:45 | https://v.redd.it/pipd3eax8e7f1 | bn_from_zentara | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ldbdad | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pipd3eax8e7f1/DASHPlaylist.mpd?a=1752717956%2COTgwODFjYTM3NDZmZjAyZTE5ZjFkZmIyOWQ1NTJjMzM0NDVhY2ZlOWJiNjY1ZjkxMzVlMGNmYjE0ZTA2YTMwMw%3D%3D&v=1&f=sd', 'duration': 54, 'fallback_url': 'https://v.redd.it/pipd3eax8e7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/pipd3eax8e7f1/HLSPlaylist.m3u8?a=1752717956%2CMmM4ZThjNmMwYjYwYjZmZTJkYmNjMTg1MDM5NTg3MWExN2Q4NTgwMjkyN2I5ZTljMjY2NTk0MTliZWZmYTAxZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pipd3eax8e7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ldbdad | /r/LocalLLaMA/comments/1ldbdad/zentaracode_update_v_013_release_the_first/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?width=108&crop=smart&format=pjpg&auto=webp&s=2a60c9ee0ed683495216801abeec48c92d2169de', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?width=216&crop=smart&format=pjpg&auto=webp&s=f040b84b065420b4623f4f044cc393c38e9a6c39', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?width=320&crop=smart&format=pjpg&auto=webp&s=349583067480bd980bd00864e6008885c1997146', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?width=640&crop=smart&format=pjpg&auto=webp&s=f60dcfba405e789773a3d7beb138f4f545ba44ba', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?width=960&crop=smart&format=pjpg&auto=webp&s=6eab6f70fbfda03872a3940e2c518ad1bf6dc5ce', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?width=1080&crop=smart&format=pjpg&auto=webp&s=390819834464efec8d871b2844ee384268792bd0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?format=pjpg&auto=webp&s=5e6bdacbbb3a2584a5288d0a83605036f7d18be0', 'width': 1920}, 'variants': {}}]} |
|
Docker Desktop 4.42 adds integrated MCP Toolkit, Server, & Catalog of MCPs (servers and clients) | 27 | Docker seems like they are trying to be a pretty compelling turnkey AI solution lately. Their recent addition of a built-in LLM model runner has made serving models with a llama.cpp-based server easier than setting up llama.cpp itself, possibly even easier than using Ollama.
Now they’ve added an integrated MCP server, toolkit, and a catalog of servers and clients. They’re kinda Trojan horsing AI into Docker and I kinda like it because half of what I run is in Docker anyways. I don’t hate this at all. | 2025-06-17T02:11:04 | https://www.docker.com/blog/docker-desktop-4-42-native-ipv6-built-in-mcp-and-better-model-packaging/ | Porespellar | docker.com | 1970-01-01T00:00:00 | 0 | {} | 1ldbh4i | false | null | t3_1ldbh4i | /r/LocalLLaMA/comments/1ldbh4i/docker_desktop_442_adds_integrated_mcp_toolkit/ | false | false | default | 27 | {'enabled': False, 'images': [{'id': 'xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=108&crop=smart&auto=webp&s=ea8bd235ddec234f4ca95c3725a3bbd452bb6616', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=216&crop=smart&auto=webp&s=e49511903416fc2c9ae32320ccdd002e8ee723ad', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=320&crop=smart&auto=webp&s=9cfe54e10acfcf527b2b861389587287f70cca22', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=640&crop=smart&auto=webp&s=56b6d0b3d753e1e5e592b8d69dbddd76611d78f7', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=960&crop=smart&auto=webp&s=d7cd4c08b44ce2e70343848158189ef9350c7d22', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=1080&crop=smart&auto=webp&s=3532e174cf7df7aa3e89730f0fe401853db1332a', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?auto=webp&s=71ce23da6378f0e4e6b331f9f95623ed61a73de8', 'width': 1300}, 'variants': {}}]} |
3060 12GB Upgrade Paths | 1 | [removed] | 2025-06-17T02:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ldc4oh/3060_12gb_upgrade_paths/ | gigadigg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldc4oh | false | null | t3_1ldc4oh | /r/LocalLLaMA/comments/1ldc4oh/3060_12gb_upgrade_paths/ | false | false | self | 1 | null |
Quartet - a new algorithm for training LLMs in native FP4 on 5090s | 70 | I came across this paper while looking to see if training LLMs on Blackwell's new FP4 hardware was possible.
[Quartet: Native FP4 Training Can Be Optimal for Large Language Models](https://huggingface.co/papers/2505.14669)
and the associated code, with kernels you can use for your own training:
https://github.com/IST-DASLab/Quartet
Thanks to these researchers, training in FP4 is now a reasonable, and in many cases optimal, alternative to higher precision training!
DeepSeek was trained in FP8, which was cutting edge at the time. I can't wait to see the new frontiers FP4 unlocks. | 2025-06-17T04:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lddrfu/quartet_a_new_algorithm_for_training_llms_in/ | Kooshi_Govno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lddrfu | false | null | t3_1lddrfu | /r/LocalLLaMA/comments/1lddrfu/quartet_a_new_algorithm_for_training_llms_in/ | false | false | self | 70 | {'enabled': False, 'images': [{'id': 'D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?width=108&crop=smart&auto=webp&s=f1c865e52abd25245d27ae9df3f1957c084cc2f8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?width=216&crop=smart&auto=webp&s=49bf8d52fd40227629048b439673eb12c7696d6c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?width=320&crop=smart&auto=webp&s=9f2c3e328267ea881c79938090be9b0651a8ea37', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?width=640&crop=smart&auto=webp&s=a8e8ca55453095e944f20ac8412b8a3949adb807', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?width=960&crop=smart&auto=webp&s=46272b3beb686d956cc7529b625b2db9044536ea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?width=1080&crop=smart&auto=webp&s=3718750ed08d2c3c7becc8f0e16981f43b09cbb7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?auto=webp&s=1d64cb0d7dab1d9e6022fd036f835cbeb98f9d4f', 'width': 1200}, 'variants': {}}]} |
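As a rough sense of scale for why native FP4 matters, here is a small back-of-the-envelope sketch (weights only; gradients, optimizer state, and activations add a large multiple on top during training).

```
# Weight memory at different precisions for an assumed 7B-parameter model (a sketch;
# training memory is dominated by gradients, optimizer state, and activations on top).
params = 7e9

for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    gib = params * bits / 8 / 2**30
    print(f"{name}: {gib:.1f} GiB of weights")

# Prints roughly: FP16: 13.0 GiB, FP8: 6.5 GiB, FP4: 3.3 GiB -- native FP4 halves
# memory and bandwidth again versus FP8, which is where the training speedup comes from.
```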
M4 Pro 48GB for image gen (Stable Diffusion) and other LLMs | 1 | Is it worth it, or do we have better alternatives? Thinking from a price point of view. | 2025-06-17T04:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lddtce/m4_pro_48gb_for_image_gen_stable_diffusion_and/ | No_Nothing1584 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lddtce | false | null | t3_1lddtce | /r/LocalLLaMA/comments/1lddtce/m4_pro_48gb_for_image_gen_stable_diffusion_and/ | false | false | self | 1 | null
Case/rig for multiple GPUs & PSUs | 1 | [removed] | 2025-06-17T05:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ldep5q/caserig_for_multiple_gpus_psus/ | g4meb01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldep5q | false | null | t3_1ldep5q | /r/LocalLLaMA/comments/1ldep5q/caserig_for_multiple_gpus_psus/ | false | false | self | 1 | null |
Fine tuning image gen LLM for Virtual Staging/Interior Design | 0 | Hi,
I've been doing a lot of virtual staging recently with OpenAI's 4o model. With excessive prompting, the quality is great, but it's getting really expensive with the API (17 cents per photo!).
I'm thinking about investing resources into training/fine-tuning an open source model on tons of photos of interiors to replace this, but I've never trained an open source model before and I don't really know how to approach this.
What I've gathered from my research so far is that I should get thousands of photos, and label all of them extensively to train this model.
My outstanding questions are:
- Which open-source model would be best for this?
- How many photos would I realistically need to fine-tune it?
- Is it feasible to create a model on my own where the output is similar/superior to OpenAI's 4o?
- Given it's possible, what approach would you take to accomplish this?
Thank you in advance
Baba | 2025-06-17T05:10:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ldetfs/fine_tuning_image_gen_llm_for_virtual/ | BabaJoonie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldetfs | false | null | t3_1ldetfs | /r/LocalLLaMA/comments/1ldetfs/fine_tuning_image_gen_llm_for_virtual/ | false | false | self | 0 | null |
Local or Cloud for Beginner? | 1 | [removed] | 2025-06-17T05:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ldf2if/local_or_cloud_for_beginner/ | Different_Rush3519 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldf2if | false | null | t3_1ldf2if | /r/LocalLLaMA/comments/1ldf2if/local_or_cloud_for_beginner/ | false | false | self | 1 | null |
What would be the best model to run on a laptop with 8GB of VRAM and 32GB of RAM with an i9 | 0 | Just curious | 2025-06-17T05:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ldfeqb/what_would_be_the_best_modal_to_run_on_a_laptop/ | 2001obum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldfeqb | false | null | t3_1ldfeqb | /r/LocalLLaMA/comments/1ldfeqb/what_would_be_the_best_modal_to_run_on_a_laptop/ | false | false | self | 0 | null
It seems as if the more you learn about AI, the less you trust it | 128 | This is kind of a rant, so sorry if not everything has to do with the title. For example, when the blog post on vibe coding was released in February 2025, I was surprised to see the writer talking about using it mostly for disposable projects and not for stuff that will go to production, since using it for production code is what everyone seems to be doing. That blog post was written by an OpenAI employee. Then Geoffrey Hinton and Yann LeCun occasionally talk about how AI can be dangerous if misused, or how LLMs are not that useful currently, yet you see tons of people without the same level of education on AI selling snake oil based on LLMs. You then see people talking about how LLMs completely replace programmers, even though LLMs seem to make subtle bugs all the time that those people can't find, because they didn't learn programming since they thought it was obsolete. | 2025-06-17T05:54:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ldfipl/it_seems_as_if_the_more_you_learn_about_ai_the/ | RhubarbSimilar1683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldfipl | false | null | t3_1ldfipl | /r/LocalLLaMA/comments/1ldfipl/it_seems_as_if_the_more_you_learn_about_ai_the/ | false | false | self | 128 | null
OpenAI wins $200 million U.S. defense contract! | 367 | All the talk about wanting AI to be open and accessible to all humanity was just that.... A gigantic pile of BS!
Wake up guys, Close AI was never gonna protect anyone but themselves.
Link below:
https://www.cnbc.com/2025/06/16/openai-wins-200-million-us-defense-contract.html | 2025-06-17T06:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ldfry1/openai_wins_200_million_us_defense_contract/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldfry1 | false | null | t3_1ldfry1 | /r/LocalLLaMA/comments/1ldfry1/openai_wins_200_million_us_defense_contract/ | false | false | self | 367 | null |
Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model | 10 | 2025-06-17T06:20:15 | https://huggingface.co/ICTNLP/stream-omni-8b | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ldfxa1 | false | null | t3_1ldfxa1 | /r/LocalLLaMA/comments/1ldfxa1/streamomni_simultaneous_multimodal_interactions/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?width=108&crop=smart&auto=webp&s=e4b65c3c3ce9878eec0aa742f9374ab883267211', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?width=216&crop=smart&auto=webp&s=c10866707754f74a8db52c4edc80b6ea0b68136f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?width=320&crop=smart&auto=webp&s=8861e590c99ea289843a4f1915c07b8469533263', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?width=640&crop=smart&auto=webp&s=dcb5b9d857ce3425a73664d83eaf79bab6a215ef', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?width=960&crop=smart&auto=webp&s=791be84422e6f84189a0934e72861b2bd5eb5d07', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?width=1080&crop=smart&auto=webp&s=8f124faf7280b29af991cbb33a908a1e2c403001', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?auto=webp&s=9185dd1874e71351f2b80ad341ac546b94a88b26', 'width': 1200}, 'variants': {}}]} |
||
Local LLMs: How to get started | 3 | Hi /r/LocalLLaMA!
I've been lurking for about a year down here, and I've learned a lot. I feel like the space is quite intimidating at first, with lots of nuances and tradeoffs.

I've created a basic resource that should allow newcomers to understand the basic concepts. I've made a few simplifications that I know a lot of people here will frown upon, but it closely resembles how I reason about the tradeoffs myself.
Looking for feedback & I hope some of you find this useful!
https://mlnative.com/blog/getting-started-with-local-llms | 2025-06-17T06:22:01 | https://mlnative.com/blog/getting-started-with-local-llms | lmyslinski | mlnative.com | 1970-01-01T00:00:00 | 0 | {} | 1ldfyak | false | null | t3_1ldfyak | /r/LocalLLaMA/comments/1ldfyak/local_llms_how_to_get_started/ | false | false | 3 | {'enabled': False, 'images': [{'id': '_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE.png?width=108&crop=smart&auto=webp&s=0db17666b6c2d05d40743f4f01ce932d9334a43f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE.png?width=216&crop=smart&auto=webp&s=1ed7964e0153bc9d77c64f3438108e607292b296', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE.png?width=320&crop=smart&auto=webp&s=92d581bdaa6420fddb95dfffc5894fe482225031', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE.png?width=640&crop=smart&auto=webp&s=d3e40e70adc69f43bebbb127dfac8a72239d8af0', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE.png?width=960&crop=smart&auto=webp&s=0961d58ba3e32868b90640d20c93ca3a61101198', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE.png?auto=webp&s=0eda9ebe1526b5e10d38c52314b1d1417a0fdfef', 'width': 1024}, 'variants': {}}]} |
|
How to increase GPU utilization when serving an LLM with Llama.cpp | 3 | When I serve an LLM (currently it's DeepSeek Coder V2 Lite, 8-bit) on my T4 16GB VRAM + 48GB RAM system, I noticed that the model takes up about 15.5GB of GPU VRAM, which is good. But the GPU *utilization* percentage never goes above 35%, even when running parallel requests or increasing the batch size. Am I missing something? | 2025-06-17T06:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ldg17j/how_to_increase_gpu_utilization_when_serving_an/ | anime_forever03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldg17j | false | null | t3_1ldg17j | /r/LocalLLaMA/comments/1ldg17j/how_to_increase_gpu_utilization_when_serving_an/ | false | false | self | 3 | null
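A hedged note that may help frame the question above: during single-request token generation the GPU is mostly waiting on memory (and on any layers left on the CPU), so a modest utilization percentage with full VRAM is expected rather than a misconfiguration; larger batch sizes mainly speed up prompt processing and concurrent requests. The sketch below uses the llama-cpp-python bindings only to show the knobs involved; the llama.cpp server exposes the same parameters as command-line flags, and the model filename is an assumption.

```
# Hedged sketch of the relevant llama.cpp parameters via the llama-cpp-python bindings.
# The GGUF filename is an assumption; the server binary takes equivalent flags.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-v2-lite-instruct-q8_0.gguf",  # assumed filename
    n_gpu_layers=-1,  # offload every layer that fits on the GPU
    n_ctx=8192,       # context window
    n_batch=512,      # prompt-processing batch size; higher values help prefill, not decode
    n_threads=8,      # CPU threads for any non-offloaded work
)

out = llm("Write a function that reverses a linked list.", max_tokens=256)
print(out["choices"][0]["text"])
```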
Jan-nano, 4B agentic model that outperforms DeepSeek-v3-671B using MCP | 1 | 2025-06-17T06:29:14 | https://twitter.com/menloresearch/status/1934809407604576559 | AngryBirdenator | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1ldg2ch | false | {'oembed': {'author_name': 'Menlo Research', 'author_url': 'https://twitter.com/menloresearch', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Meet Jan-nano, a 4B model that outscores DeepSeek-v3-671B using MCP.<br><br>It's built on Qwen3-4B with DAPO fine-tuning, it handles:<br>- real-time web search<br>- deep research<br><br>Model + GGUF: <a href="https://t.co/DwiQyR9pdW">https://t.co/DwiQyR9pdW</a><br><br>To run it locally:<br>- Install Jan Beta: <a href="https://t.co/67dHYSaHLQ">https://t.co/67dHYSaHLQ</a><br>- Download… <a href="https://t.co/mNE8h3Q742">pic.twitter.com/mNE8h3Q742</a></p>— Menlo Research (@menloresearch) <a href="https://twitter.com/menloresearch/status/1934809407604576559?ref_src=twsrc%5Etfw">June 17, 2025</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/menloresearch/status/1934809407604576559', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1ldg2ch | /r/LocalLLaMA/comments/1ldg2ch/jannano_4b_agentic_model_that_outperforms/ | false | false | 1 | {'enabled': False, 'images': [{'id': '94dNa_gsV4O_x1lOGmk18So8TZLRINZyo6MLEn3Cg4Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/O8VXDxED3fvz7QGWqs62PJZWjitE4ml4eY_9-oDmuNc.jpg?width=108&crop=smart&auto=webp&s=11890703dc2b2d9428732a876313f4a75992eb4d', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/O8VXDxED3fvz7QGWqs62PJZWjitE4ml4eY_9-oDmuNc.jpg?auto=webp&s=01d0937f73de1c138ff14296a05038888ce29167', 'width': 140}, 'variants': {}}]} |
||
What is your goto full finetuning library? | 1 | [removed] | 2025-06-17T06:43:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ldgaca/what_is_your_goto_full_finetuning_library/ | Babouche_Le_Singe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldgaca | false | null | t3_1ldgaca | /r/LocalLLaMA/comments/1ldgaca/what_is_your_goto_full_finetuning_library/ | false | false | self | 1 | null |
What finetuning library have you used? | 1 | [removed] | 2025-06-17T06:46:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ldgc3n/what_finetuning_library_have_you_used/ | Babouche_Le_Singe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldgc3n | false | null | t3_1ldgc3n | /r/LocalLLaMA/comments/1ldgc3n/what_finetuning_library_have_you_used/ | false | false | self | 1 | null |
What finetuning library have you seen success with? | 14 | I'm interested in finetuning an LLM to teach it new knowledge (I know RAG exists and decided against it). From what I've heard but not tested, the best way to achieve that goal is through full finetuning.
I'm comparing options and found these:
- NVIDIA/Megatron-LM
- deepspeedai/DeepSpeed
- hiyouga/LLaMA-Factory
- unslothai/unsloth (now supports full finetuning!)
- axolotl-ai-cloud/axolotl
- pytorch/torchtune
- huggingface/peft
Has anyone used any of these? If so, what were the pros and cons?
| 2025-06-17T06:48:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ldgd41/what_finetuning_library_have_you_seen_success_with/ | Responsible-Crew1801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldgd41 | false | null | t3_1ldgd41 | /r/LocalLLaMA/comments/1ldgd41/what_finetuning_library_have_you_seen_success_with/ | false | false | self | 14 | null |
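For orientation on what the frameworks listed in the previous post are wrapping, here is a minimal, hedged sketch of a full finetune written directly against Hugging Face Transformers and Datasets. The model name and dataset path are placeholder assumptions; the listed libraries add the memory optimizations, sequence packing, and multi-GPU handling that make this practical at larger scales.

```
# Minimal full-finetuning sketch with Hugging Face Transformers + Datasets (a sketch,
# not a recipe). Assumes a small causal LM and a JSONL file with a "text" field.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("json", data_files="train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # simulate a larger batch on limited VRAM
    num_train_epochs=1,
    learning_rate=1e-5,
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False makes the collator build causal-LM labels from input_ids
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```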