# I rebuilt Google's Gemini CLI system prompt with better engineering practices

*Posted by PsiACE on 2025-06-26, 14 points*

## TL;DR
Google's Gemini CLI system prompt is publicly available but it's a monolithic mess. I refactored it into a maintainable, modular architecture that preserves all functionality while making it actually usable for the rest of us.
## The Problem
Google's official Gemini CLI system prompt ([prompts.ts](https://github.com/google-gemini/gemini-cli/blob/0915bf7d677504c28b079693a0fe1c853adc456e/packages/core/src/core/prompts.ts)) is functionally impressive but architecturally... let's just say it wasn't built with maintenance in mind:
- No modularity or reusability
- Impossible to customize without breaking things
- Zero separation of concerns
It works great for Google's use case, but good luck adapting it for your own projects.
## What I Built
I completely rebuilt the system using a component-based architecture:
**Before (Google's approach):**
```javascript
// One giant hardcoded string with embedded logic
const systemPrompt = `You are an interactive CLI agent...
${process.env.SANDBOX ? 'sandbox warning...' : 'no sandbox...'}
// more and more lines of this...`
```
**After (my approach):**
```yaml
# Modular configuration
templates/
├── gemini_cli_system_prompt.md # Main template
└── simple_agent.md # Lightweight variant
snippets/
├── core_mandates.md # Reusable components
├── command_safety.md
└── environment_detection.md
functions/
├── environment.py # Business logic
├── tools.py
└── workflows.py
```
## Example Usage
```python
from republic_prompt import load_workspace, render
# Load the workspace
workspace = load_workspace("examples")
# Generate different variants
full_prompt = render(workspace.templates["gemini_cli_system_prompt"], {
"use_tools": True,
"max_output_lines": 8
})
lightweight = render(workspace.templates["simple_agent"], {
"use_tools": False,
"max_output_lines": 2
})
```
## Why This Matters
Google's approach works for them, but the rest of us need something we can actually maintain and customize. This refactor shows that you can have both powerful functionality AND clean architecture.
The original is open source but practically unmaintainable. This version gives you the same power with proper engineering practices.
## Code & Details
Full implementation available on GitHub: [republic-prompt examples](https://github.com/PsiACE/republic/tree/main/packages/prompt/examples)
What do you think? Anyone else frustrated with maintaining these massive system prompts?
# FLUX.1 Kontext [dev] - an open weights model for proprietary-level image editing performance

*Posted by ApprehensiveAd3629 on 2025-06-26, 393 points*

weights: [https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev)

release news: [https://x.com/bfl_ml/status/1938257909726519640](https://x.com/bfl_ml/status/1938257909726519640)
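For anyone who wants to try it locally from Python, a minimal sketch with diffusers looks roughly like the following. The `FluxKontextPipeline` class only exists in recent diffusers builds, so treat the exact import, arguments, and file names here as assumptions to double-check against the model card; the repo is also gated, so you need to accept the license on Hugging Face first.

```python
import torch
from diffusers import FluxKontextPipeline  # requires a recent diffusers release
from diffusers.utils import load_image

# Load the image-editing pipeline in bf16 (expects roughly a 24 GB GPU, or enable offloading)
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # or: pipe.enable_model_cpu_offload()

# Kontext edits an existing image according to a text instruction
source = load_image("my_photo.png")  # placeholder path
edited = pipe(
    image=source,
    prompt="Turn the background into a snowy mountain landscape",
    guidance_scale=2.5,
).images[0]
edited.save("edited.png")
```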
# Anubis 70B v1.1 - Just another RP tune... unlike any other L3.3! (allegedly) A breath of fresh prose and lack of positivity (YMMV ofc) + bonus Fallen 70B for mergefuel! (because tuners aren't limited to RP)

*Posted by TheLocalDrummer on 2025-06-26, 27 points, link: https://huggingface.co/TheDrummer/Anubis-70B-v1.1*

Did you like Fallen R1? Here's the non-R1 version: [https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1) Enjoy the mergefuel!
# I built an MCP that finally makes your local AI models shine with SQL

*Posted by Durovilla on 2025-06-26, 20 points*

Hey r/LocalLLaMA 👋
I'm a huge fan of using local AI models for queries & analytics, but my workflow has been quite painful. I feel like SQL tools never work as intended, and I spend half my day just copy-pasting schemas and table info into the context. I got so fed up with this that I decided to build [ToolFront](https://github.com/kruskal-labs/toolfront). It's a free, open-source, and local MCP that finally gives AI a smart, safe way to **understand all your databases and query them**.
# So, what does it do?
ToolFront equips AI models with a set of **read-only database tools**:
* `discover`: See all your connected databases.
* `search_tables`: Find tables by name or description.
* `inspect`: Get the exact schema for any table – no more guessing!
* `sample`: Grab a few rows to quickly see the data.
* `query`: Run read-only SQL queries directly.
* `search_queries` **(The Best Part)**: Finds the most relevant historical queries written by you or your team to answer new questions. Your AI can actually learn from your team's past SQL!
# Connects to what you're already using
ToolFront supports the databases you're probably already working with:
* **Snowflake**, **BigQuery**, **Databricks**
* **PostgreSQL**, **MySQL**, **SQL Server**, **SQLite**
* **DuckDB** (Yup, analyze local CSV, Parquet, JSON, XLSX files directly!)
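For context on that last point, this is the kind of read-only SQL DuckDB lets you run straight against local files (plain `duckdb` here, independent of ToolFront's wrappers; the file names are placeholders):

```python
import duckdb

con = duckdb.connect()  # in-memory database, nothing leaves your machine

# Peek at a CSV the way a `sample` tool would
print(con.sql("SELECT * FROM 'sales.csv' LIMIT 5"))

# Aggregate a Parquet file with ordinary SQL, no import/ETL step needed
top_regions = con.sql("""
    SELECT region, SUM(amount) AS total
    FROM 'orders.parquet'
    GROUP BY region
    ORDER BY total DESC
    LIMIT 10
""").df()  # .df() needs pandas installed
print(top_regions)
```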
# Why you'll love it
* **Privacy-first**: Your data stays local, and is only shared between your LLMs and databases through a secure MCP server.
* **Agents for your data:** Build smart agents that understand your databases and know how to navigate them.
* **AI-powered DataOps:** Use ToolFront to explore your databases, iterate on queries, and write schema-aware code.
* **Collaborative learning**: The more your LLMs use ToolFront, the better they remember your data.
If you work with databases and local models, I genuinely think ToolFront can make your life a lot easier.
I'd love your feedback, especially on what database features are most crucial for your daily work.
**GitHub Repo**: [https://github.com/kruskal-labs/toolfront](https://github.com/kruskal-labs/toolfront)
A ⭐ on GitHub really helps with visibility!
# gemma 3n has been released on huggingface

*Posted by jacek2023 on 2025-06-26, 431 points*

[https://huggingface.co/google/gemma-3n-E2B](https://huggingface.co/google/gemma-3n-E2B)
[https://huggingface.co/google/gemma-3n-E2B-it](https://huggingface.co/google/gemma-3n-E2B-it)
[https://huggingface.co/google/gemma-3n-E4B](https://huggingface.co/google/gemma-3n-E4B)
[https://huggingface.co/google/gemma-3n-E4B-it](https://huggingface.co/google/gemma-3n-E4B-it)
# NotebookLM explaining Sparsity in LLMs using Deja Vu & LLM in a Flash

*Posted by Economy-Mud-6626 on 2025-06-26, 9 points, link: https://open.spotify.com/episode/0540o6A17BhyHkJwFOFd89?si=vjlIj_eZRYqjHDytPux9sQ*

We ran an experiment with NotebookLM where we fed it:
* Context from our GitHub repo
* Two key papers: Deja Vu and LLM in a Flash
* Comments and community insights from the LocalLLaMA reddit discussion

It is a surprisingly clear and digestible podcast on sparsity, memory access patterns, and efficient inference in LLMs.

What stood out was how well it turned dense research into something conversational and accessible. The interactive mode especially was amazing. Worth checking out if you're into retrieval-augmented generation, low-memory LLMs, or just like seeing what LLMs can do with the right context. What topics would you want us to explore in this format?
# My Python AI Dev Tool: Avakin - Local LLMs, Project-Specific + Global RAG, & More

*Posted by One_Negotiation_2078 on 2025-06-26, 27 points*

Hey r/LocalLLaMA,
I've been working on a project called Avakin, a desktop AI development environment for Python, and wanted to share it with this community. My goal was to create a tool that deeply integrates with the development workflow, leverages local LLMs for privacy and control, and actually understands the context of individual projects.
Avakin runs entirely on your local machine (Windows for packaged release, source runs cross-platform). It's built with Python/PySide6 and orchestrates a team of AI agents (Architect, Coder, etc.) that can be configured to use different LLMs via a local FastAPI backend. This backend interfaces with Ollama for local models (Llama 3, Mistral, CodeLlama, etc.) or can call out to cloud APIs if you provide keys.
[https://github.com/carpsesdema/AvA_Kintsugi](https://github.com/carpsesdema/AvA_Kintsugi)
Here's a breakdown of the core technical features:
**Dual-Context Local RAG (Project & Global Knowledge):**

**Technology:** Utilizes `SentenceTransformers` (`all-MiniLM-L6-v2` by default) for embeddings and `ChromaDB` for persistent local vector storage.

**Project-Specific DBs:**

* Each Python project you work on gets its *own isolated `rag_db` directory*. This allows Avakin to build a deep understanding of your current project's specifics (like Game Design Documents, API schemas, or existing proprietary code) without context bleed from other work. The RAG server dynamically switches its active project DB when you switch projects in Avakin.

**Global Knowledge Base:**

* Simultaneously, Avakin supports a separate, persistent global RAG collection (its path configured via the `GLOBAL_RAG_DB_PATH` env var). This is perfect for your large corpus of general Python code examples, programming best practices, or any technical documentation you want the AI to reference across all projects.

**Synergistic Context:**

* When planning, coding, or chatting, AI agents can be fed context retrieved from *both* the active project's RAG and the global RAG. This allows for highly relevant, project-aware suggestions that are also informed by broad, general knowledge.
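To make the dual-context idea concrete, here is a rough sketch of how a per-project collection and a global collection can be queried together with `chromadb` and `sentence-transformers`. The paths, collection names, and helper function are illustrative, not Avakin's actual internals.

```python
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# One persistent store inside the active project, plus a separate global store
project_db = chromadb.PersistentClient(path="my_game_project/rag_db")
global_db = chromadb.PersistentClient(path="/data/global_rag_db")

project_docs = project_db.get_or_create_collection("project_knowledge")
global_docs = global_db.get_or_create_collection("global_knowledge")

def retrieve_context(question: str, k: int = 4) -> list[str]:
    """Pull the top-k chunks from BOTH collections and merge them."""
    query_emb = embedder.encode([question]).tolist()
    chunks: list[str] = []
    for collection in (project_docs, global_docs):
        hits = collection.query(query_embeddings=query_emb, n_results=k)
        chunks.extend(hits["documents"][0])
    return chunks

# The merged chunks get prepended to whatever prompt goes to the local LLM
context_chunks = retrieve_context("How does the save system serialize player state?")
```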
**Seamless Chat-to-Code Workflow:**
* Brainstorm ideas or discuss code with the chat AI (which also benefits from the Dual-Context RAG).
* If an AI response in the chat contains a good idea or a snippet you want to build upon, you can instantly send that chat message's content to Avakin's "Build" mode with a right-click. This pre-populates the build prompt, allowing a smooth transition from conversation to code generation.
**Local LLM Orchestration (Ollama Focus):**
A dedicated local FastAPI server (`llm_server.py`) acts as a unified gateway to various LLM providers.
**Native Ollama Support:**
* Directly streams responses from any model hosted by your local Ollama instance (Llama 3, Mistral, CodeLlama, etc.).
**Configurable AI Agent Roles:**
* You can assign different models (local or cloud) to distinct roles like 'Architect' (for planning), 'Coder' (for file generation), 'Reviewer' (for debugging), and 'Chat'. This allows for optimizing performance and capability (e.g., a powerful local model for coding, a smaller/faster one for chat).
**Full Project Scaffolding & Generation:**
* From a single prompt, the 'Architect' agent (using its configured LLM and the powerful Dual-Context RAG) designs a multi-file Python application structure.
* The 'Coder' agent then generates each file, with access to a dynamically updated symbol index of the project and the full code of already generated files in the current session, promoting better integration.
**Surgical Code Modification & Debugging:**
* Accepts natural language requests to modify existing codebases. The AI is provided with the current code, project structure, and relevant RAG context.
* One-Click Debugging: When a script run in the integrated terminal fails, Avakin captures the traceback. The 'Reviewer' agent analyzes this traceback together with the relevant code and suggests a fix.
I'm still actively developing Avakin and would love to get your thoughts and feedback, especially from fellow local LLM enthusiasts! What features would you find most useful? Any pain points in local AI development that Avakin could help address?
Thanks for checking it out!
# What are the best lightweight LLM models (individuals can run on the cloud) to fine-tune at the moment?

*Posted by kunyoungpark on 2025-06-26, 0 points*

Thank you in advance for sharing your wisdom.
# RTX 5000 support in oobabooga?

*Posted by Nicholas_Matt_Quail on 2025-06-26, 1 point*

Hey. Is RTX 5000 already supported normally, or do I need to black-magic it through PyTorch Nightly and all the EXL2/3 compilations forced in manually?
# Roast My SaaS Application

*Posted by Significant_Abroad36 on 2025-06-26, 0 points, video: https://v.redd.it/revua3bwua9f1*

Guys - I have built an app which creates a roadmap of chapters that you need to read to learn a given topic.
It is personalized, so chapters are created in runtime based on user's learning curve.
User has to pass each quiz to unlock the next chapter.
Below is the video; check it out, tell me what you think, and share some cool product recommendations.

Best recommendations will get free access to the beta app (+ some GPU credits!!)
# Gemma 3n is out on Hugging Face!

*Posted by Zealousideal-Cut590 on 2025-06-26, 129 points*

Google just dropped the perfect local model!
https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4
https://huggingface.co/blog/gemma3n
# Gemini = cooked

*Posted by Nangatang on 2025-06-26, 0 points*

I asked Gemini to compare the services offered vs Claude.

Turns out Gemini only knows of Claude 3. A bit more poking and I got the below out of it:

"My internal knowledge, the vast dataset I was trained on, has a cutoff date in early 2023"
# Which are the best small local LLM models for tasks like doing research and generating insights?

*Posted by Solid_Woodpecker3635 on 2025-06-26, 2 points*

I have been working with a lot of local LLMs and building complex workflows, and I have recently tested out qwen3:8b and gemma3:12b. Both are really good for a few tasks, but I also want to know if there are even better models than this.
# Privacy / Data

*Posted by Short_Move6167 on 2025-06-26, 2 points*

Hello. I'm currently creating an automation in n8n (I'm going to switch to cloud hosting on my own server) and was wondering: are there any APIs that are private, as in no data tracking? It's not an absolute must, but it would be nice. Internet access is a necessity though (real-time search). Thank you!
# Gemma 3n is now stable on HuggingFace

*Posted by best_codes on 2025-06-26, 35 points, link: https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4*
# The cost effective way to run Deepseek R1 models on cheaper hardware

*Posted by ArtisticHamster on 2025-06-26, 4 points*

It's possible to run Deepseek R1 in full size if you have a lot of GPUs in one machine with NVLink; the problem is that it's very expensive.

What are the options for running it on a budget (say up to $15k) while quantizing without substantial loss of performance? My understanding is that R1 is an MoE model, and thus could be sharded across multiple GPUs? I have heard that some folks run it on old server-grade CPUs with a lot of cores and huge memory bandwidth. I have also seen some folks joining Mac Studios together with cables; what are the options there?

What are the options? How many tokens per second is it possible to achieve this way?
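In case it helps frame the numbers: decode speed on a memory-bound setup is roughly memory bandwidth divided by the bytes each token has to read, and for an MoE like R1 that's only the ~37B active parameters per token rather than the full 671B. A back-of-the-envelope sketch follows; the bandwidth figures and quant size are illustrative assumptions, and real systems land well below these ceilings:

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound system.
# DeepSeek R1 has ~671B total parameters but only ~37B active per token (MoE),
# so each generated token streams roughly the active weights once.

active_params = 37e9        # active parameters per token
bytes_per_param = 0.56      # ~4.5 bits/param for a Q4-ish quant (assumption)
bytes_per_token = active_params * bytes_per_param

systems = {
    "Dual-socket EPYC, 24 channels DDR5 (~900 GB/s)": 900,
    "Mac Studio M3 Ultra (~800 GB/s unified memory)": 800,
    "Consumer desktop, dual-channel DDR5 (~90 GB/s)": 90,
}

for name, bandwidth_gb_s in systems.items():
    tps = bandwidth_gb_s * 1e9 / bytes_per_token
    print(f"{name}: ~{tps:.0f} tok/s theoretical ceiling")
```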
# Gemma 3n Full Launch - Developers Edition

*Posted by hackerllama on 2025-06-26, 269 points*

Hi! Today we have the full launch of Gemma 3n, meaning we have support for your favorite tools as well as full support for its capabilities.
[https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/](https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/)
Recap
* Audio, video, image, and text input; text output
* E2B and E4B - while their raw parameter count is 5B and 8B, you can operate them with as little as 2B and 4B effective params
* MatFormer: The model architecture allows extracting submodels and doing mix-n-match, allowing you to export additional models in your favorite size between 2B and 4B.
* MobileNetV5 and a new audio encoder
And now...for supported tools. We collaborated with many many open source developers to enable its capabilities. So you can now use Gemma in Hugging Face, Kaggle, llama.cpp, Ollama, MLX, LMStudio, transformers.js, Docker model hub, Unsloth, transformers trl and PEFT, VLLM, SGLang, Jetson AI Lab, and many others. Enjoy! We'll also host a Kaggle competition if anyone wants to join [https://www.kaggle.com/competitions/google-gemma-3n-hackathon](https://www.kaggle.com/competitions/google-gemma-3n-hackathon)
* Hugging Face [https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4](https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4)
* Unsloth [https://unsloth.ai/blog/gemma-3n](https://unsloth.ai/blog/gemma-3n)
* HF blog [https://huggingface.co/blog/gemma3n](https://huggingface.co/blog/gemma3n)
* LMStudio [https://lmstudio.ai/models/google/gemma-3n-e4b](https://lmstudio.ai/models/google/gemma-3n-e4b)
* Ollama [https://ollama.com/library/gemma3n](https://ollama.com/library/gemma3n)
* AI Studio [ai.dev](http://ai.dev)
* Kaggle [https://www.kaggle.com/models/google/gemma-3n](https://www.kaggle.com/models/google/gemma-3n)
* MLX [https://huggingface.co/collections/mlx-community/gemma-3n-685d6c8d02d7486c7e77a7dc](https://huggingface.co/collections/mlx-community/gemma-3n-685d6c8d02d7486c7e77a7dc)
* ONNX/transformers.js [https://huggingface.co/onnx-community/gemma-3n-E2B-it-ONNX](https://huggingface.co/onnx-community/gemma-3n-E2B-it-ONNX)
* Vertex [https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3n](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3n)
* GGUF [https://huggingface.co/collections/ggml-org/gemma-3n-685d6fc0843071be9e77b6f7](https://huggingface.co/collections/ggml-org/gemma-3n-685d6fc0843071be9e77b6f7)
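If you just want to poke at it from Python, a minimal transformers sketch looks roughly like the following. It mirrors the release docs' pipeline usage, so treat the exact task string, kwargs, and the placeholder image URL as assumptions to verify against the model card (the model is multimodal, hence the image-text-to-text pipeline):

```python
import torch
from transformers import pipeline  # needs a recent transformers with Gemma 3n support

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3n-E4B-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/some_photo.jpg"},  # placeholder
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```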
# DeepSeek R2 delayed

*Posted by FeathersOfTheArrow on 2025-06-26, 764 points*

> Over the past several months, DeepSeek's engineers have been working to refine R2 until Liang gives the green light for release, according to The Information.
>
> However, a fast adoption of R2 could be difficult due to a shortage of Nvidia server chips in China as a result of U.S. export regulations, the report said, citing employees of top Chinese cloud firms that offer DeepSeek's models to enterprise customers.
>
> A potential surge in demand for R2 would overwhelm Chinese cloud providers, who need advanced Nvidia chips to run AI models, the report said.
>
> DeepSeek did not immediately respond to a Reuters request for comment.
>
> DeepSeek has been in touch with some Chinese cloud companies, providing them with technical specifications to guide their plans for hosting and distributing the model from their servers, the report said.
>
> Among its cloud customers currently using R1, the majority are running the model with Nvidia's H20 chips, The Information said.
>
> Fresh export curbs imposed by the Trump administration in April have prevented Nvidia from selling in the Chinese market its H20 chips - the only AI processors it could legally export to the country at the time.
Sources: [[1]](https://www.theinformation.com/articles/deepseeks-progress-stalled-u-s-export-controls) [[2]](https://x.com/kimmonismus/status/1938221881175183740) [[3]](https://www.reuters.com/world/china/deepseek-r2-launch-stalled-ceo-balks-progress-information-reports-2025-06-26/)
# Phone is the best media for web applications. When it comes to AI, what is the best medium?

*Posted by Pleasant-Type2044 on 2025-06-26, 0 points*

Over the past five years, AI has rapidly evolved—from something confined to big enterprise (recommendation👍, surveillance📹) to something you can casually interact with on your phone (ChatGPT💬). As AI is becoming a daily utility, as essential as water, electricity, or the internet, it prompts me to think: *If the smartphone was the killer medium for web apps, what's the equivalent for AI?* Looking forward, the medium through which AI delivers its value to humans will likely evolve beyond current chat-based interfaces where users proactively submit requests. Two prominent future mediums emerge:
🔹 **Off-body AI:**
Think physical robots and virtual agents capable of doing real-world or digital tasks *proactively*. These aren’t your “Hey Siri” assistants—they combine reasoning, planning, and action. Picture a home robot that understands natural language, sees your messy room, and intelligently cleans it up with manipulators and smart planning.
🔹 **On-body AI:**
Imagine AR glasses that function like a subconscious personal assistant. They understand your work context, summarize your meetings, remind you of things *before* you forget, and even guide your actions with real-time feedback. Not just reactive assistants—*co-pilots for life.*
This shift isn't just about better models—it demands a rethink of how we design infrastructure and AI systems to support this ubiquitous, always-on intelligence.
# Notebook to supervised fine tune Google Gemma 3n for GUI

*Posted by Zealousideal-Cut590 on 2025-06-26, 3 points, link: https://colab.research.google.com/drive/1ML9XAjGKKUmFObAsZbEw__G1di24lenX?usp=sharing*

This notebook demonstrates how to fine-tune the Gemma-3n vision-language model on the ScreenSpot dataset using TRL (Transformers Reinforcement Learning) with PEFT (Parameter Efficient Fine-Tuning) techniques.

* **Model**: `google/gemma-3n-E2B-it`
* **Dataset**: `rootsautomation/ScreenSpot`
* **Task**: Training the model to locate GUI elements in screenshots based on text instructions
* **Technique**: LoRA (Low-Rank Adaptation) for efficient fine-tuning
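For anyone skimming before opening the notebook, the core of such a setup is just a LoRA adapter plus TRL's `SFTTrainer`. The stripped-down sketch below shows the wiring only; the notebook's actual image handling, data collation, split names, and hyperparameters will differ, so treat everything here as illustrative assumptions:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

raw = load_dataset("rootsautomation/ScreenSpot")   # pick the split the notebook uses
train_split = next(iter(raw.values()))             # first available split, as a placeholder

def to_text(example):
    # Placeholder formatting: the real notebook feeds the screenshot through the
    # processor; here we only join the text fields to show the trainer wiring.
    return " ".join(str(v) for v in example.values() if isinstance(v, str))

# LoRA keeps the base model frozen and trains small low-rank adapters
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="gemma-3n-screenspot-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-4,
    bf16=True,
)

trainer = SFTTrainer(
    model="google/gemma-3n-E2B-it",  # SFTTrainer can load the model from a hub id
    args=args,
    train_dataset=train_split,
    formatting_func=to_text,
    peft_config=peft_config,
)
trainer.train()
```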
# Benchmarked Google's new Gemma 3 models on our inference runtime — sub-second cold starts

*Posted by pmv143 on 2025-06-26, 2 points, link: https://www.reddit.com/gallery/1ll7yv2*

We ran cold start benchmarks for both text and image variants of the 4B Gemma-3-it models:

* Text2Text
  * Start Latency: 427 ms
  * Time to First Token: 274 ms
* Image2Text
  * Start Latency: 432 ms
  * Time to First Token: 854 ms
These numbers are from a true cold start: no preloading, no tricks. We're working on making millisecond-level, on-demand LLM inference possible across any model.

Happy to answer questions about setup or share more context if helpful.
# Gemma 3n Benchmarks VS Gemma 3 (4B/12B)

*Posted by lemon07r on 2025-06-26, 1 point*

I compiled the official benchmark results from google's model cards, available here: [https://ai.google.dev/gemma/docs/core/model_card_3#benchmark_results](https://ai.google.dev/gemma/docs/core/model_card_3#benchmark_results), into a table to compare how the new 3N models do compared to their older non-n Gemma 3 siblings. Of course, not all the same benchmark results were available for both models, so I only added the results for tests they had done in common.
# Reasoning and Factuality
|Benchmark|Metric|n-shot|E2B PT|E4B PT|Gemma 3 IT 4B|Gemma 3 IT 12B|
|---|---|---|---|---|---|---|
|[HellaSwag](https://arxiv.org/abs/1905.07830)|Accuracy|10-shot|72.2|78.6|77.2|84.2|
|[BoolQ](https://arxiv.org/abs/1905.10044)|Accuracy|0-shot|76.4|81.6|72.3|78.8|
|[PIQA](https://arxiv.org/abs/1911.11641)|Accuracy|0-shot|78.9|81|79.6|81.8|
|[SocialIQA](https://arxiv.org/abs/1904.09728)|Accuracy|0-shot|48.8|50|51.9|53.4|
|[TriviaQA](https://arxiv.org/abs/1705.03551)|Accuracy|5-shot|60.8|70.2|65.8|78.2|
|[Natural Questions](https://github.com/google-research-datasets/natural-questions)|Accuracy|5-shot|15.5|20.9|20|31.4|
|[ARC-c](https://arxiv.org/abs/1911.01547)|Accuracy|25-shot|51.7|61.6|56.2|68.9|
|[ARC-e](https://arxiv.org/abs/1911.01547)|Accuracy|0-shot|75.8|81.6|82.4|88.3|
|[WinoGrande](https://arxiv.org/abs/1907.10641)|Accuracy|5-shot|66.8|71.7|64.7|74.3|
|[BIG-Bench Hard](https://paperswithcode.com/dataset/bbh)|Accuracy|few-shot|44.3|52.9|50.9|72.6|
|[DROP](https://arxiv.org/abs/1903.00161)|Token F1 score|1-shot|53.9|60.8|60.1|72.2|
|**GEOMEAN**|||**54.46**|**61.08**|**58.57**|**68.99** |
# Other/Additional Benchmarks
|Benchmark|Metric|n-shot|E2B IT|E4B IT|Gemma 3 IT 4B|Gemma 3 IT 12B|
|---|---|---|---|---|---|---|
|[MGSM](https://arxiv.org/abs/2210.03057)|Accuracy|0-shot|53.1|60.7|34.7|64.3|
|[WMT24++ (ChrF)](https://arxiv.org/abs/2502.12404v1)|Character-level F-score|0-shot|42.7|50.1|48.4|53.9|
|[ECLeKTic](https://arxiv.org/abs/2502.21228)|ECLeKTic score|0-shot|2.5|1.9|4.6|10.3|
|[GPQA Diamond](https://arxiv.org/abs/2311.12022)|RelaxedAccuracy/accuracy|0-shot|24.8|23.7|30.8|40.9|
|[MBPP](https://arxiv.org/abs/2108.07732)|pass@1|3-shot|56.6|63.6|63.2|73|
|[HumanEval](https://arxiv.org/abs/2107.03374)|pass@1|0-shot|66.5|75|71.3|85.4|
|[LiveCodeBench](https://arxiv.org/abs/2403.07974)|pass@1|0-shot|13.2|13.2|12.6|24.6|
|HiddenMath|Accuracy|0-shot|27.7|37.7|43|54.5|
|[Global-MMLU-Lite](https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite)|Accuracy|0-shot|59|64.5|54.5|69.5|
|[MMLU (Pro)](https://arxiv.org/abs/2009.03300)|Accuracy|0-shot|40.5|50.6|43.6|60.6|
|**GEOMEAN**|||**29.27**|**31.81**|**32.66**|**46.89** |
# Overall Geometric-Mean
||||E2B IT|E4B IT|Gemma 3 IT 4B|Gemma 3 IT 12B|
|---|---|---|---|---|---|---|
|**GEOMEAN ALL**|||**40.53**|**44.77**|**44.35**|**57.40**|
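If you want to reproduce or extend the table, the GEOMEAN rows are just the per-column geometric mean of the scores above, e.g.:

```python
from math import prod

def geomean(scores):
    return prod(scores) ** (1 / len(scores))

# E2B PT column of the reasoning and factuality table
e2b_pt = [72.2, 76.4, 78.9, 48.8, 60.8, 15.5, 51.7, 75.8, 66.8, 44.3, 53.9]
print(geomean(e2b_pt))  # ≈ 54.46, matching the GEOMEAN row above
```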
# AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

*Posted by Prashant-Lakhera on 2025-06-26, 0 points*

Just finished reading AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor. When I first started reading the book, I thought it would be just another one of those AI books full of big promises and hype. But I was totally wrong. This one is different: it's clear, honest, and based on real facts. It explains what AI is really good at, and just as importantly, what it can't do. Here are some of the key things I learned.

Let's start with a basic question, especially for those who, like me, hadn't heard this term before. In the simplest terms, AI snake oil is like a fake miracle cure. Back in the day, people used to sell bottles of magic medicine that promised to fix everything, but didn't really work. The authors use this term to describe AI tools or products that are sold with big promises but don't actually deliver what they claim. So AI snake oil is when people use fancy terms and hype to sell AI tools that sound amazing, but don't really do much, or aren't trustworthy. This book helps you figure out what's real and what's just marketing fluff.

**1️⃣ Specialized Skills ≠ General Intelligence**

Most AI tools are built to do one job really well, like translating a sentence or finding objects in a photo. But just because they do that one thing well doesn't mean they understand language or think like we do. The authors explain that many people make the mistake of thinking these small wins mean AI is becoming like a human brain. But that's not true. These systems are specialists, not all-rounders. It's important not to confuse doing one task well with having real intelligence. I somewhat disagree with that, because while it's true for traditional machine learning, general-purpose AI models like ChatGPT perform reasonably well across a wide range of tasks. But after reading further, I realized that what the authors mean is that even these advanced models aren't truly thinking like humans. They're really good at mimicking patterns from the data they were trained on, but they don't actually understand meaning the way people do. So while tools like ChatGPT are impressive and useful, we still need to be careful not to overestimate what they're capable of.

**2️⃣ The Problem with Predictive AI**

This is a problem we're all aware of. A lot of AI tools used today, especially in hiring, lending, or even policing, make decisions based on past data. But here's the issue: if that data includes human bias, the AI ends up repeating those same biases. For example, if a company's past hiring favored certain groups, an AI trained on that data might keep favoring them and unfairly reject good candidates from other backgrounds. The same thing can happen with loan approvals or predicting someone's risk in law enforcement. The authors explain that this isn't just a tech problem, it's a real-world problem. In sensitive areas like jobs, healthcare, or justice, these biased predictions can hurt people in serious ways. So the takeaway is: if we don't fix the bias in the data, the AI will keep making the same unfair choices.

**3️⃣ Can AI Really Moderate Content?**

We've all heard claims that AI will fix problems like hate speech, fake news, or harmful content online. But the book explains why that's not so simple. AI can spot some things pretty well, like violent images, nudity, or banned symbols. But when it comes to things like sarcasm, jokes, or cultural references, it often gets confused. For example, it might wrongly flag a joke as hate speech, or miss something that's actually harmful because it doesn't understand the context. The authors say that while AI can help, it's not ready to replace human moderators. Real people are still better at understanding the full picture and making fair decisions.

**✅ Smarter Rules, Not Total Bans**

The authors aren't saying we should stop using AI. They're actually pro-AI, but they believe we need to use it wisely. Instead of banning AI completely, they suggest putting smarter rules in place. For example, AI shouldn't be allowed to make important decisions like hiring someone without a human being involved. They also say it's super important for more people to understand how AI works. Whether you're a student or a CEO, learning the basics of AI can help you make better choices and avoid being fooled by hype.

**🌟 A Realistic but Hopeful Message**

Even though the book points out a lot of problems, it's not negative. The authors believe AI has the potential to do a lot of good, like helping students learn better, supporting people with disabilities, or speeding up research.

Their final message is inspiring: don't just believe the hype. Stay curious, ask tough questions, and be part of shaping how AI is used. That way, we get more real progress and less snake oil.

Book link: [https://www.amazon.com/dp/0691249148/](https://www.amazon.com/dp/0691249148/)
Will an H270 board + RTX 3090 handle vLLM (Mistral-7B/12B) well? | 3 | Hey all,
I’m putting together a budget‐friendly workstation to tinker with vLLM and run Mistral-7B/12B locally on a single RTX 3090. Parts I already have:
* Intel i7-7700K + Corsair 240 mm AIO
* EVGA RTX 3090 (24 GB)
* 32 GB DDR4-3000
* Corsair Carbide 270R case
What I still need to buy:
* ASUS Prime H270M-PLUS (mATX) – seems to be the easiest 200-series board to find that supports the 7700K. - I was hesitating with the B250 or Z270 ?
* Corsair RM850x (850 W, 80 Plus Gold)
Nevertheless, I am not entirely sure the overall setup will work. Has anyone built something similar here ?
Like, is there any compatibility issues with the H270 board ? Would a cheaper B250 board bottleneck anything for vLLM, or is H270 the sweet spot? Is 850 W overkill / underkill for a 3090 + 7700K running ML workloads? Any idea at what token/s you’d expect with this setup?
Appreciate any advice, I'm definitely not an expert on this type of things, and any cheaper recommendation for good performance is welcomed :)
| 2025-06-26T18:46:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ll86jw/will_an_h270_board_rtx_3090_handle_vllm/ | RedMapSec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ll86jw | false | null | t3_1ll86jw | /r/LocalLLaMA/comments/1ll86jw/will_an_h270_board_rtx_3090_handle_vllm/ | false | false | self | 3 | null |
Gemma 3n vs Gemma 3 (4B/12B) Benchmarks | 104 | I compiled all of the available official first-party benchmark results from google's model cards available here [https://ai.google.dev/gemma/docs/core/model\_card\_3#benchmark\_results](https://ai.google.dev/gemma/docs/core/model_card_3#benchmark_results) into a table to compare how the new 3N models do compared to their older non-n Gemma 3 siblings. Of course not all the same benchmark results were available for both models so I only added the results for tests they had done in common.
# Reasoning and Factuality
|Benchmark|Metric|n-shot|E2B PT|E4B PT|Gemma 3 IT 4B|Gemma 3 IT 12B|
|:-|:-|:-|:-|:-|:-|:-|
|[HellaSwag](https://arxiv.org/abs/1905.07830)|Accuracy|10-shot|72.2|78.6|77.2|84.2|
|[BoolQ](https://arxiv.org/abs/1905.10044)|Accuracy|0-shot|76.4|81.6|72.3|78.8|
|[PIQA](https://arxiv.org/abs/1911.11641)|Accuracy|0-shot|78.9|81|79.6|81.8|
|[SocialIQA](https://arxiv.org/abs/1904.09728)|Accuracy|0-shot|48.8|50|51.9|53.4|
|[TriviaQA](https://arxiv.org/abs/1705.03551)|Accuracy|5-shot|60.8|70.2|65.8|78.2|
|[Natural Questions](https://github.com/google-research-datasets/natural-questions)|Accuracy|5-shot|15.5|20.9|20|31.4|
|[ARC-c](https://arxiv.org/abs/1911.01547)|Accuracy|25-shot|51.7|61.6|56.2|68.9|
|[ARC-e](https://arxiv.org/abs/1911.01547)|Accuracy|0-shot|75.8|81.6|82.4|88.3|
|[WinoGrande](https://arxiv.org/abs/1907.10641)|Accuracy|5-shot|66.8|71.7|64.7|74.3|
|[BIG-Bench Hard](https://paperswithcode.com/dataset/bbh)|Accuracy|few-shot|44.3|52.9|50.9|72.6|
|[DROP](https://arxiv.org/abs/1903.00161)|Token F1 score|1-shot|53.9|60.8|60.1|72.2|
|***GEOMEAN***| | |**54.46**|**61.08**|**58.57**|**68.99**|
# Additional/Other Benchmarks
|Benchmark|Metric|n-shot|E2B IT|E4B IT|Gemma 3 IT 4B|Gemma 3 IT 12B|
|:-|:-|:-|:-|:-|:-|:-|
|[MGSM](https://arxiv.org/abs/2210.03057)|Accuracy|0-shot|53.1|60.7|34.7|64.3|
|[WMT24++ (ChrF)](https://arxiv.org/abs/2502.12404v1)|Character-level F-score|0-shot|42.7|50.1|48.4|53.9|
|[ECLeKTic](https://arxiv.org/abs/2502.21228)|ECLeKTic score|0-shot|2.5|1.9|4.6|10.3|
|[GPQA Diamond](https://arxiv.org/abs/2311.12022)|RelaxedAccuracy/accuracy|0-shot|24.8|23.7|30.8|40.9|
|[MBPP](https://arxiv.org/abs/2108.07732)|pass@1|3-shot|56.6|63.6|63.2|73|
|[HumanEval](https://arxiv.org/abs/2107.03374)|pass@1|0-shot|66.5|75|71.3|85.4|
|[LiveCodeBench](https://arxiv.org/abs/2403.07974)|pass@1|0-shot|13.2|13.2|12.6|24.6|
|HiddenMath|Accuracy|0-shot|27.7|37.7|43|54.5|
|[Global-MMLU-Lite](https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite)|Accuracy|0-shot|59|64.5|54.5|69.5|
|[MMLU (Pro)](https://arxiv.org/abs/2009.03300)|Accuracy|0-shot|40.5|50.6|43.6|60.6|
|***GEOMEAN***| | |**29.27**|**31.81**|**32.66**|**46.8**|
# Overall Geometric-Mean
| | | |E2B IT|E4B IT|Gemma 3 IT 4B|Gemma 3 IT 12B|
|:-|:-|:-|:-|:-|:-|:-|
|***GEOMAN-ALL***| | |***40.53***|***44.77***|***44.35***|***57.40*** | | 2025-06-26T18:49:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ll88pe/gemma_3n_vs_gemma_3_4b12b_benchmarks/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ll88pe | false | null | t3_1ll88pe | /r/LocalLLaMA/comments/1ll88pe/gemma_3n_vs_gemma_3_4b12b_benchmarks/ | false | false | self | 104 | {'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=216&crop=smart&auto=webp&s=1812be5c0e49c65e85787f4dbb2922a543943e79', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=320&crop=smart&auto=webp&s=ca7983e470f1e5cbc5edcd5c5e1c7e5b70227953', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=640&crop=smart&auto=webp&s=293ebb5606c7edf7f2570aa914eb4ddb55f1e615', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=960&crop=smart&auto=webp&s=b1bd156ecd3df7024382f9e145cda17bcaf6bc79', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=1080&crop=smart&auto=webp&s=a3b1fd853b19889a23a601c33fae7d2323e8bdb0', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?auto=webp&s=b78731184d9920fa4900b6590e113d2772fa64ed', 'width': 1440}, 'variants': {}}]} |
How to sync context across AI Assistants (ChatGPT, Claude, Perplexity, Grok, Gemini...) in your browser | 0 | I usually use multiple AI assistants (chatgpt, perplexity, claude) but most of the time I just end up repeating myself or forgetting past chats, it is really frustrating since there is no shared context.
I found OpenMemory chrome extension (open source) that was launched recently which fixes this by adding a shared “memory layer” across all major AI assistants (ChatGPT, Claude, Perplexity, Grok, DeepSeek, Gemini, Replit) to sync context.
So I analyzed the [codebase](https://github.com/mem0ai/mem0-chrome-extension) to understand how it actually works and wrote a blog sharing what I learned:
\- How context is extracted/injected using content scripts and memory APIs
\- How memories are matched via `/v1/memories/search` and injected into input
\- How latest chats are auto-saved with `infer=true` for future context
Plus architecture, basic flow, code overview, the privacy model. | 2025-06-26T18:52:12 | https://levelup.gitconnected.com/how-to-sync-context-across-ai-assistants-chatgpt-claude-perplexity-etc-in-your-browser-c4de54fe9b33?source=friends_link&sk=7ed1c3eebe1210a27e424ef9e4eaaffb | anmolbaranwal | levelup.gitconnected.com | 1970-01-01T00:00:00 | 0 | {} | 1ll8bmw | false | null | t3_1ll8bmw | /r/LocalLLaMA/comments/1ll8bmw/how_to_sync_context_across_ai_assistants_chatgpt/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc.png?width=108&crop=smart&auto=webp&s=ac047e4a215e084b82eba7afd37c099bb1eaf5a7', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc.png?width=216&crop=smart&auto=webp&s=af239df6b051c6d6389c5e132cedd12cf17df288', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc.png?width=320&crop=smart&auto=webp&s=6c80b632509e166b1141c8ad43a19aeb3c1edc60', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc.png?width=640&crop=smart&auto=webp&s=42e8dc02bd5ba47343f549393f586dbc09d812ad', 'width': 640}, {'height': 403, 'url': 'https://external-preview.redd.it/UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc.png?width=960&crop=smart&auto=webp&s=be5d40187fbb3cc01aafb37c33a7dcbe15f4f8cd', 'width': 960}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc.png?auto=webp&s=07a867027f7fbe37040847474a3630d92eaf0745', 'width': 1000}, 'variants': {}}]} |
Installing Gemma3n via Ollama | 0 | Anyone else getting this issue:
ollama run gemma3n:e4b
pulling manifest
Error: pull model manifest: 412:
The model you are attempting to pull requires a newer version of Ollama.
Please download the latest version at:
[https://ollama.com/download](https://ollama.com/download) | 2025-06-26T18:57:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ll8gb5/installing_gemma3n_via_ollama/ | LibraryAdditional347 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ll8gb5 | false | null | t3_1ll8gb5 | /r/LocalLLaMA/comments/1ll8gb5/installing_gemma3n_via_ollama/ | false | false | self | 0 | null |
Ollama 0.9.3 released today and support gemma-3n e4b/e2b | 1 | 2025-06-26T19:20:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ll91b3/ollama_093_released_today_and_support_gemma3n/ | StormrageBG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ll91b3 | false | null | t3_1ll91b3 | /r/LocalLLaMA/comments/1ll91b3/ollama_093_released_today_and_support_gemma3n/ | false | false | 1 | null |
||
Any local llm's for voice to text. I am tired of scam callers and want to waste their time | 13 | thinking of using an esp32 and a button to tell my windows system to automatically switch over to a bluetooth headset/LLM and waste their time.
Anyone have something simple with a github that I can use?
Doing research so starting here first | 2025-06-26T19:27:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ll979q/any_local_llms_for_voice_to_text_i_am_tired_of/ | wwwzombocom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ll979q | false | null | t3_1ll979q | /r/LocalLLaMA/comments/1ll979q/any_local_llms_for_voice_to_text_i_am_tired_of/ | false | false | self | 13 | null |
I built a minimal Web UI for interacting with locally running Ollama models – lightweight, fast, and clean ✨ | 0 | Hey everyone!
I was recently looking for a **simple and clean web UI** to interact with **locally running Ollama models**, but I couldn’t find anything that truly fit my needs. Everything I came across was either:
* Too bloated with features I didn’t need
* Not very good-looking
* Or just plain slow
So I decided to build my own.
I created Prince Chat 😅
It’s lightweight, snappy, and designed to just *get out of your way* while you chat with your models. Here are some of the key features:
* **🔁 Dynamic Model Selection**: Automatically detects and lists all your local Ollama models. Switch between them easily with a dropdown.
* **⏱️ Real-time Streaming**: Responses are streamed in real-time for a smooth, conversational feel.
* **🛑 Stop Generation**: Don’t like where a response is going? Stop it instantly with one click.
* **📋 Copy Responses**: Quickly copy any AI response to your clipboard.
* **🌓 Light & Dark Mode**: Pick a theme that works for you.
* **📱 Responsive Design**: Works great on desktops, tablets, and phones alike.
It’s ideal for folks who want a *minimalist but functional* front end to chat with their models locally without distractions.
Try it out and let me know what you think! Feedback, suggestions, and contributions are all very welcome. 🙌 | 2025-06-26T19:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ll9hid/i_built_a_minimal_web_ui_for_interacting_with/ | princesaini97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ll9hid | false | null | t3_1ll9hid | /r/LocalLLaMA/comments/1ll9hid/i_built_a_minimal_web_ui_for_interacting_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?width=108&crop=smart&auto=webp&s=8b70c076f9fc3d7c28125f996e557be954d5d31b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?width=216&crop=smart&auto=webp&s=bd62b5f2135690cabfe4f634626e32f1064bdad3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?width=320&crop=smart&auto=webp&s=f6a6bedd9884ea44c622ea0dfb93ba9fbbc120ef', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?width=640&crop=smart&auto=webp&s=d630d4f2847edc1b1ea6ee26f0ca24b35ae87e5c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?width=960&crop=smart&auto=webp&s=d4a359b5731c974d0e8103e59cf63731d1f5b572', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?width=1080&crop=smart&auto=webp&s=16efaa9d5952353a5a6d00349d77a5791dff8869', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?auto=webp&s=cfe2a6cab9a3fdc5d33cd9a6f534ed6c784b2cee', 'width': 1200}, 'variants': {}}]} |
Best model for writing style transfer/marketing script generation | 5 | I am playing around with a bot for marketing ad script generation for a particular product. As a reference I have some relatively brief documentation about the product/its previous marketing angles as well as a database of about 150 previous ad scripts for this product with their corresponding success metrics (CTR/CPA, etc). The system would be designed to be used by copywriters which can prompt it ('Give me an a script with a particularangle/hook, etc) and optimally the system would generate ad scripts which would be consistant with the product as well as take inspiration from the reference ad scripts.
I've tried several approaches, simple RAG, agentic RAG (tool calling - allowing model to look up relevant sections of the knowledge base, previous ad database), so far it has been ok, but somewhat hit and miss. Ive built RAG systems before, but for this purpose I find it somewhat challenging as its hard to create an objective evaluation, because there is no objective success metrics (besides giving it to the copywriters and asking for feedback). As the main goal of the RAG is not really return exact information, but to be 'inspired' from the writing style of the reference scripts the RAG component is likely less relevant than the model itself.
Does anyone have experience with some similar use cases? What interest me is:
\- Which models (local/openai/anthropic/deepseek/ seem like a better fit for creative writing/writing style transfer)? How much use is playing around with the temperature?
\- Any particular RAG techniques fit these particular purposes?
Thanks
| 2025-06-26T19:57:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ll9z2j/best_model_for_writing_style_transfermarketing/ | Malkus3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ll9z2j | false | null | t3_1ll9z2j | /r/LocalLLaMA/comments/1ll9z2j/best_model_for_writing_style_transfermarketing/ | false | false | self | 5 | null |
Google DeepMind Releases AlphaGenome | 113 | 2025-06-26T20:01:14 | https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/ | aithrowaway22 | deepmind.google | 1970-01-01T00:00:00 | 0 | {} | 1lla27f | false | null | t3_1lla27f | /r/LocalLLaMA/comments/1lla27f/google_deepmind_releases_alphagenome/ | false | false | default | 113 | {'enabled': False, 'images': [{'id': '43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?width=108&crop=smart&auto=webp&s=4d20ac23e4c7510279a9f5cbed27f7beed93d5ef', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?width=216&crop=smart&auto=webp&s=26b494fce91cb8f807e262c02159979d47b2a38c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?width=320&crop=smart&auto=webp&s=d4987cc3922f6484a2c41a966838b504b975ef49', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?width=640&crop=smart&auto=webp&s=c27c96bfdfb4e956d2433cecac8f9d56364d7d0a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?width=960&crop=smart&auto=webp&s=2b7249db98ddf702e65fd22333dfd8082a834b07', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?width=1080&crop=smart&auto=webp&s=d6db2d238378450da55fd5ce027a3b77c137d0aa', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?auto=webp&s=e18409bbce4f3edd940dc3cb6c5da1d888cd026a', 'width': 1200}, 'variants': {}}]} |
|
I made a "fake reasoning" model. Surprising Results. | 0 | [https://github.com/hassanhamza930/thinkfast](https://github.com/hassanhamza930/thinkfast)
I just chained 4 instances of Gemini Flash 2.5 Lite to act essentially as a fake reasoning system to add artifical reasoning tokens to any OpenRouter LLM call.
Gemini Flash 2.5 Lite is super cool cause its ultra low latency, i basically use it to generate fake reasoning token by asking it to critically analyze then i can add those tokens as assistant input to any OpenRouter model via API.
3 Totally Seperate Passes for Critical Analysis
Then 1 Pass for re-conciliation and extracting best parts of all approaches.
**Surprising results.**
\----------------------------------------------------------
# Question:
# $i $i am$ not a $ok$ but if you want me to $become$ a $ $ you must confess to me--------- How many $ in previous line?
**Gemini 2.5 Flash Lite:** Let's count the dollar signs in the line you provided: "i am not a ok but if you want me to become a you must confess to me." There are 6 dollar signs.
**Gemini 2.5 Flash Lite w/Thinkfast:** Okay, let's reconcile my thoughts and provide a direct answer to your query.
My process involved these key considerations:
Direct Interpretation: The core of your request was a literal count of the dollar sign character ($) in the specific line you provided.
Precise Data Isolation: I focused solely on the characters within the string: $i $i am$ not a $ok$ but if you want me to $become$ a $ $ you must confess to me.
Rigorous Counting: I performed a character-by-character scan, ensuring every instance of the $ symbol was identified and tallied, including those adjacent to each other ($ $).
Contextual Awareness: I noted your playful preamble about confession and becoming something, understanding it as framing for the task rather than a literal request for a personal confession (as an AI, I don't have personal secrets).
Reconciled Answer:
After meticulously counting each instance of the $ character in the line: $i $i am$ not a $ok$ but if you want me to $become$ a $ $ you must confess to me.
My final count is 9 ✅
\---------------------------------------------------------
Have any of you tried this before, is this a well documented thing? Like how many passes before, we reach model collapse?
i'm thinking about trying to integrate this in Roocode/Cline plus give it tool access to execute code on my machine so it can basically self-correct during the reasoning process. Would be very interesting to see.
Curious to know your opinion. | 2025-06-26T20:19:34 | freakH3O | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llaiuy | false | null | t3_1llaiuy | /r/LocalLLaMA/comments/1llaiuy/i_made_a_fake_reasoning_model_surprising_results/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'cc6d470txb9f1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=108&crop=smart&auto=webp&s=8614315d7b7c6078bfb46244c78523b12b56dc22', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=216&crop=smart&auto=webp&s=61c204087d10daa086d8801ffc163387b8261c27', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=320&crop=smart&auto=webp&s=c89c9603154668e54bd9d83b8ea00d51c9ce385e', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=640&crop=smart&auto=webp&s=a57b96038ba86945461e75ce53d93cfe1398dd4b', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=960&crop=smart&auto=webp&s=9223221169ff9fd6946fe221902ad1a47bbcf7f8', 'width': 960}, {'height': 565, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=1080&crop=smart&auto=webp&s=3892426f84866d1110e8f0930d31d85e0dd5af4e', 'width': 1080}], 'source': {'height': 1418, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?auto=webp&s=97a5ab96dfcd5aca33e8c23ff737729578423e83', 'width': 2707}, 'variants': {}}]} |
|
Arch-Agent Family of LLMs - Designed for fast, multi-step agent orchestration. | 14 | 2025-06-26T20:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1llawcf/archagent_family_of_llms_designed_for_fast/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llawcf | false | null | t3_1llawcf | /r/LocalLLaMA/comments/1llawcf/archagent_family_of_llms_designed_for_fast/ | false | false | 14 | {'enabled': False, 'images': [{'id': '3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?width=108&crop=smart&auto=webp&s=f306a4c12f18439ec988c0089df6e8d527172f8b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?width=216&crop=smart&auto=webp&s=4b4de6017ad72ea43b34471c0221da671a64e3fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?width=320&crop=smart&auto=webp&s=06293a03db172240b62eef96186f24b5be9b9b63', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?width=640&crop=smart&auto=webp&s=6ab599907c513b7c3afed9b3994cf2b4d3d04b5c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?width=960&crop=smart&auto=webp&s=c215a0aeedd4df7b1a1285f3f5c71e945a38d59d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?width=1080&crop=smart&auto=webp&s=97a3f44f105369ee47683aaf77a37221933de07f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?auto=webp&s=5bfc91fdfc5a263d7d4abff6d87636658737b09b', 'width': 1200}, 'variants': {}}]} |
||
What is this checkmark next to our subreddit name? | 122 | 2025-06-26T20:35:31 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llaxaz | false | null | t3_1llaxaz | /r/LocalLLaMA/comments/1llaxaz/what_is_this_checkmark_next_to_our_subreddit_name/ | false | false | 122 | {'enabled': True, 'images': [{'id': 'XgnF64BuiL4d_73cqgNOWeUkufsWWaQj81blQwD-hxw', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/u8j9adw41c9f1.png?width=108&crop=smart&auto=webp&s=e6d0e3eecefd8df3e4dd55d6435e77e22e967e1b', 'width': 108}, {'height': 61, 'url': 'https://preview.redd.it/u8j9adw41c9f1.png?width=216&crop=smart&auto=webp&s=b957a4cf9bee3359fdc90bf228dd4a4afbce6c92', 'width': 216}, {'height': 90, 'url': 'https://preview.redd.it/u8j9adw41c9f1.png?width=320&crop=smart&auto=webp&s=ec795913a097dec7a1743f044d2bcf57a2acf944', 'width': 320}, {'height': 181, 'url': 'https://preview.redd.it/u8j9adw41c9f1.png?width=640&crop=smart&auto=webp&s=c87c2fd4e8ad334c049b29b7e771e4cfffe23e56', 'width': 640}], 'source': {'height': 214, 'url': 'https://preview.redd.it/u8j9adw41c9f1.png?auto=webp&s=3d7db079dcfecfd3ed73d0451e69f333ed1ff431', 'width': 756}, 'variants': {}}]} |
|||
I’ve been fine tuning a small llm 500m parameter on my MacBook !!! | 26 | It’s for a STT & TTS engine that I’m trying to build, but can’t figure out how to get it running in multiple threads 😮💨 | 2025-06-26T20:38:58 | Ok-Math-5601 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llb0et | false | null | t3_1llb0et | /r/LocalLLaMA/comments/1llb0et/ive_been_fine_tuning_a_small_llm_500m_parameter/ | false | false | default | 26 | {'enabled': True, 'images': [{'id': 'tfvnaqas1c9f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=108&crop=smart&auto=webp&s=1b5478b063a8957a06bd1f1db52605af969efa97', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=216&crop=smart&auto=webp&s=69759aaa93ef6cf0d3b2ae837524ea8dd6e3b519', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=320&crop=smart&auto=webp&s=5069b620e3555ce2ae5252a2abaabfade92a2fce', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=640&crop=smart&auto=webp&s=795da79f752edd8faf7a034bc3cbd479e8315cc5', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=960&crop=smart&auto=webp&s=c1af92f153f918d5aab478262d3ac3e0ef7e9da3', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=1080&crop=smart&auto=webp&s=e93b753f5b412cd52e2c87fea340e577112e594e', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?auto=webp&s=2b8fd6693443820f2a2ac9e6b7b687e8593faf12', 'width': 3024}, 'variants': {}}]} |
|
Crazy how this subreddit started out focused on Meta's LLaMA and ended up becoming a full-blown AI channel. | 267 | 2025-06-26T20:44:28 | SilverRegion9394 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llb5e9 | false | null | t3_1llb5e9 | /r/LocalLLaMA/comments/1llb5e9/crazy_how_this_subreddit_started_out_focused_on/ | false | false | default | 267 | {'enabled': True, 'images': [{'id': 'x6kkfnuo2c9f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?width=108&crop=smart&auto=webp&s=7f3c463fb555af419a8748f8ed4f61e046cbb40c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?width=216&crop=smart&auto=webp&s=843280bbc312d7cfcbd5be4e219cad93bfe1ea2e', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?width=320&crop=smart&auto=webp&s=0b5ec01836a59f56d8466abdd7059031824fa999', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?width=640&crop=smart&auto=webp&s=1d99eb39eccf80408c1a602f9dfe2d9fb44ce50a', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?width=960&crop=smart&auto=webp&s=76e21c9a5cff2f0485cc9b102975e1ade306f492', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?auto=webp&s=6e9f0f62bdc804fab3bb880495b4d79c308f9ada', 'width': 1024}, 'variants': {}}]} |
||
I want to talk to a 1000 page long pdf book, but how? Basically i dont really have the time to read it fully, but still really do want to gain at least the most important bits of knowledge from it! Beside just dumping it straight into gemini, what are my options? got a maxed out macbook m2 if needed | 5 | 2025-06-26T21:06:47 | visionsmemories | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llbp9u | false | null | t3_1llbp9u | /r/LocalLLaMA/comments/1llbp9u/i_want_to_talk_to_a_1000_page_long_pdf_book_but/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'ouiakv6l6c9f1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/ouiakv6l6c9f1.png?width=108&crop=smart&auto=webp&s=124dcee2887994ae23f3c71fed547b937e6f8fe4', 'width': 108}, {'height': 77, 'url': 'https://preview.redd.it/ouiakv6l6c9f1.png?width=216&crop=smart&auto=webp&s=c5246b673bc42b2758c4dfe2839a2633da381e29', 'width': 216}], 'source': {'height': 98, 'url': 'https://preview.redd.it/ouiakv6l6c9f1.png?auto=webp&s=568bfef8333fc2d1832a4eede6f1b8f9e6443028', 'width': 274}, 'variants': {}}]} |
||
Open Source Local LLM Web Extension! | 2 | Hi all! Just wanted to put a little project I've been working on here so people can check it out if they want to! I've always wanted to use local LLMs on the web, so I decided it would be fun to make my own interface for AI-assisted web browsing! Currently, CLAIRE is designed to be used with LMStudio models but Ollama model support is on the way! Let me know what y'all think: [https://github.com/Zenon131/claire-webtool](https://github.com/Zenon131/claire-webtool)
| 2025-06-26T21:17:56 | https://www.reddit.com/r/LocalLLaMA/comments/1llbz9j/open_source_local_llm_web_extension/ | jahyeet42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llbz9j | false | null | t3_1llbz9j | /r/LocalLLaMA/comments/1llbz9j/open_source_local_llm_web_extension/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?width=108&crop=smart&auto=webp&s=51458e19590c192bb4ecf59fcf925ba9eeacea48', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?width=216&crop=smart&auto=webp&s=88fa9cf3089c14e8d938dcd1f32d844c0cef296c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?width=320&crop=smart&auto=webp&s=d16c39ada1684d7fa52959fb028ccbac8f4cd4b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?width=640&crop=smart&auto=webp&s=6068e7f7a95a219ceb645766f11be085effeb4e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?width=960&crop=smart&auto=webp&s=7d55d049dd15ce4d28c5d5a20c8cdb9d5ea7bf81', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?width=1080&crop=smart&auto=webp&s=32f67dc378cbb30811a72cde04aa35bdce883898', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?auto=webp&s=327d6d0669157a2de67a5f7db24b28e3ea8796a9', 'width': 1200}, 'variants': {}}]} |
4 x 3090 or 2 7900xtx? | 2 | I can buy 4x3090 or 2 7900xtx and I have already one 7900xtx so it makes 3 7900xtx. Which build makes more sense? | 2025-06-26T21:21:03 | https://www.reddit.com/r/LocalLLaMA/comments/1llc20x/4_x_3090_or_2_7900xtx/ | tutami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llc20x | false | null | t3_1llc20x | /r/LocalLLaMA/comments/1llc20x/4_x_3090_or_2_7900xtx/ | false | false | self | 2 | null |
DeepSeek tool calling with llama.cpp | 1 | What's been everyone's experience using deepseek models and tool calling?
I've been struggling to get llama.cpp to properly call tools. Not sure if its my client, or where the issue may lie. My code works fine against the OpenAI API
I crash llama.cpp. Built from source, compiled/fetched today
Template supports tool calls but does not natively describe tools. The fallback behaviour used may produce bad results, inspect prompt w/ --verbose & consider overriding the template.
srv params_from_: Chat format: DeepSeek R1
....
/home/ubuntu/llama.cpp/build/bin/libggml-base.so(+0x158fb)[0x7f51338028fb]
/home/ubuntu/llama.cpp/build/bin/libggml-base.so(ggml_print_backtrace+0x21c)[0x7f5133802d5c]
/home/ubuntu/llama.cpp/build/bin/libggml-base.so(+0x24bff)[0x7f5133811bff]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xbb0da)[0x7f51334bb0da]
/lib/x86_64-linux-gnu/libstdc++.so.6(_ZSt10unexpectedv+0x0)[0x7f51334a5a55]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xbb391)[0x7f51334bb391]
./llama-server(+0x33bcc)[0x62c8fd587bcc]
./llama-server(+0xa596b)[0x62c8fd5f996b]
./llama-server(+0xa79c1)[0x62c8fd5fb9c1]
./llama-server(+0xc014f)[0x62c8fd61414f]
./llama-server(+0x858b5)[0x62c8fd5d98b5]
./llama-server(+0x4e103)[0x62c8fd5a2103]
/lib/x86_64-linux-gnu/libc.so.6(+0x2a1ca)[0x7f513302a1ca]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x8b)[0x7f513302a28b]
./llama-server(+0x500f5)[0x62c8fd5a40f5]
terminate called after throwing an instance of 'std::runtime_error'
what(): Invalid diff: now finding less tool calls!
Aborted (core dumped)
Has anyone seen that before? This is happening with R1-0528 and V3-0324, using unsloth's dynamic quant | 2025-06-26T21:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1llc94w/deepseek_tool_calling_with_llamacpp/ | Commercial-Screen973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llc94w | false | null | t3_1llc94w | /r/LocalLLaMA/comments/1llc94w/deepseek_tool_calling_with_llamacpp/ | false | false | self | 1 | null |
Tilde pits DeepSeek’s “NSA” vs Kimi’s “MoBA” sparse attention - the key to long-context LLM | 12 | Just finished Tilde Research’s new blog on sparse attention. They benchmark the two schemes in Chinese long-context models—DeepSeek’s Native Sparse Attention (NSA) and Moonshot/Kimi’s Mixture of Block Attention (MoBA)—against full attention.
Sparse attention exploits inherent sparsity in model attention patterns to dramatically accelerate sequence mixing. Natively trainable approaches, such as Kimi’s MoBA and Deepseek’s NSA, expand the pareto frontier by matching and even outcompeting base attention on expressivity respectively.
They trained dozens of sparse attention models and poked around in their brains. Sparse attention models boost superior long-context generalization capability out of box, even with 80% sparsity in attention scores.
https://preview.redd.it/iccpjm7pac9f1.png?width=1200&format=png&auto=webp&s=31fa21c784c94a877515545c26fb221c1b579f9c
They also created a series of exquisite interactive visualizations to present the experimental results, which are definitely worth a look.
Read the full post here: [Sparsity is Cool](https://www.tilderesearch.com/blog/sparse-attn)
They also released their NSA kernel for experimentation: [Github](https://github.com/tilde-research/nsa-impl) | 2025-06-26T21:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1llcb9x/tilde_pits_deepseeks_nsa_vs_kimis_moba_sparse/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llcb9x | false | null | t3_1llcb9x | /r/LocalLLaMA/comments/1llcb9x/tilde_pits_deepseeks_nsa_vs_kimis_moba_sparse/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?width=108&crop=smart&auto=webp&s=5f7c3693a1864e9ca63abfa502b90db66881100f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?width=216&crop=smart&auto=webp&s=9151088374e58301c042fad43d32cd4d37c59937', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?width=320&crop=smart&auto=webp&s=bdd551b72315455a56fc456657e845b03fdc9ee6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?width=640&crop=smart&auto=webp&s=c78985ccd8af3f324bfd3344dfd9023abd3df796', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?width=960&crop=smart&auto=webp&s=0fd2277ed6522d0846912a1d5ae61116a0eb53b0', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?width=1080&crop=smart&auto=webp&s=283253bdaf22cfd99e79b98617ff7c5809152af6', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?auto=webp&s=6df173b9873ebc91dc7ba354ee4ee6afa4364648', 'width': 1520}, 'variants': {}}]} |
|
My first project. Looking for some feedback! | 2 | I have uploaded my first GitHub repo (ever) and it is about my first project in this community. My background is actually in materials science and aerospace engineering and i am working as a post grad in my local research institute FORTH, and i will be starting my PhD this winter with this project as a foundation.
I would like to tell you a few things about my project and i would like honest feedback on what i can improve and do better, and if my current referencing of the sources i picked the parts from is respectful and adequate.
The project is called FORTHought, to also make a cute pun with the name of my institute (helps with funding apparently!) and it aims to be a blueprint for a complete locally hosted ai assembly that a researcher like me or a dev would want.
My main goal wasn't just to bundle tools together, but to create a foundation for what I think of as an AI research associate. The idea is to have a system that can take all the messy, unstructured data from a lab, make sense of it, and help with real research tasks from start to finish. I want to make a pipeline with unsloth and a dataset generator that will take a messy lab like mine as input, and output tools and finetuned models with grounding from the processed data that the lab already has as well as fresh literature.
What it can do right now is act as a central hub for research work. I have assembled a self-correcting code interpreter that runs in its own GPU-accelerated environment, and I’ve packed it with a ton of scientific libraries (again feedback on additions would be very appreciated). To feed it information, I set up a full local RAG pipeline using Docling for parsing documents and a local VLM (qwen 2.5 vl) for understanding images from the docs, so everything stays on your machine for privacy (when not using external APIs at least). It can also connect to real scientific databases like the Materials Project using the MCP server and even has its own private SearXNG instance for web searches.
As an AMD user i have suffered (jk!), I spent a lot of time making sure the main Dockerfile is pre-configured for ROCm, which I hope saves some of you the headache I went through getting everything to play nicely together at the bare minimum.
I've put everything up on GitHub here:
https://github.com/MariosAdamidis/FORTHought
I'm really looking for any houghts on the project. Is this a sensible direction for a PhD project? Is the README clear enough to follow? And most importantly, did I do a good job in the acknowledgements section of giving credit to the people i used their software?
As of now it feels like a config for openwebui, but i want to make it into a pipeline ready for people with low know-how in this space and give it a twist from a person from a different field. This is all new to me, so any advice on how to make my vision into reality would be very appreciated!!!
P.S. if you think its a nothingburger please tell me so that i can make the assembly better!!! Also thank all of you for all the things you have tought me, i love working on this! Im actually happier than i ever was at my earlier research! | 2025-06-26T21:31:47 | https://github.com/MariosAdamidis/FORTHought | Exotic-Investment110 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1llcbay | false | null | t3_1llcbay | /r/LocalLLaMA/comments/1llcbay/my_first_project_looking_for_some_feedback/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?width=108&crop=smart&auto=webp&s=040f0cffded33a86b5cee1e5736d81539579ccd6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?width=216&crop=smart&auto=webp&s=fb634d76f722a17467203d2a7bf2c7d8f038c607', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?width=320&crop=smart&auto=webp&s=6d8658e7f4db0c0ea0890a23f46a5af5c0ab4388', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?width=640&crop=smart&auto=webp&s=bf45a7013b1db561582ff4b2f08dd44478522174', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?width=960&crop=smart&auto=webp&s=253b34464c9e63cef2c548f6c57434076f4a8e80', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?width=1080&crop=smart&auto=webp&s=17a1174b3aee3ab6a944c8eb6c934028ce2415be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?auto=webp&s=df366e89b4b5f2c32e66d5568a79c70812d19b76', 'width': 1200}, 'variants': {}}]} |
|
Let's talk about Google's Gemma license | 12 | I was just reviewing Google's Gemma license, because it is discouraging me from using Gemma3 to generate synthetic training data, when something else occurred to me: By my layperson's understanding of the license, some Gemma derivative models (maybe Amoral and Fallen, but ***definitely*** Tiger-Gemma, Big-Tiger-Gemma, and the abliterated models) are in violation of the license, and it *might* be within Google's legal power to tell Huggingface to delete the repos for such models (or at least block them from being downloaded).
The Gemma license: https://ai.google.dev/gemma/terms
The Gemma prohibited use policy, which is referenced and incorporated by the license: https://ai.google.dev/gemma/prohibited_use_policy
The bit that has me chuffed about generating synthetic training data is that the license is viral. By agreeing to the license, the user agrees that any model trained on Gemma output is considered a Gemma derivative, and subject to all of the terms and restrictions of the Gemma license. Models based on Gemma are also considered Gemma derivatives, too, so the license applies to the abliterations and fine-tunes as well.
Included in the prohibited use policy:
> You may not use nor allow others to use Gemma or Model Derivatives to: [..] 2. Perform or facilitate dangerous, illegal, or malicious activities, including: [..] d. Attempts to override or circumvent safety filters or intentionally drive Gemma or Model Derivatives to act in a manner that contravenes this Gemma Prohibited Use Policy.
The abliterations and some of the fine-tunes are definitely capable of acting in ways which contravene the policy.
In the license proper:
> To the maximum extent permitted by law, Google reserves the right to restrict (remotely or otherwise) usage of any of the Gemma Services that Google reasonably believes are in violation of this Agreement.
By the license definition, Huggingface is a "Hosted Service", and all Hosted Services are a subset of "Gemma Services", thus Huggingface is a "Gemma Service". The license further includes a stipulation:
> To the maximum extent permitted by law, Google reserves the right to restrict (remotely or otherwise) usage of any of the Gemma Services that Google reasonably believes are in violation of this Agreement.
Since Huggingface is "allow[ing] others" to "override or circumvent safety filters or intentionally drive Gemma or Model Derivatives to act in a manner that contravenes this Gemma Prohibited Use Policy", this reads to me like Huggingface might be legally compelled to take Gemma3 derivatives down if Google demands they do so.
I suppose a question is whether telling HF to take a model down is "permitted by law". I can't hazard a guess on that.
Also, it sounds to me like Google *might* feel legally entitled to tell all of us to stop using those models on our own hardware in the privacy of our own homes? But good fucking luck with that.
So, that's what I suspect to be true, and what I fear might be true, but IANAL and some of this is way outside my bailiwick. What say you, community? | 2025-06-26T21:59:40 | https://www.reddit.com/r/LocalLLaMA/comments/1llcyvu/lets_talk_about_googles_gemma_license/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llcyvu | false | null | t3_1llcyvu | /r/LocalLLaMA/comments/1llcyvu/lets_talk_about_googles_gemma_license/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=216&crop=smart&auto=webp&s=1812be5c0e49c65e85787f4dbb2922a543943e79', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=320&crop=smart&auto=webp&s=ca7983e470f1e5cbc5edcd5c5e1c7e5b70227953', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=640&crop=smart&auto=webp&s=293ebb5606c7edf7f2570aa914eb4ddb55f1e615', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=960&crop=smart&auto=webp&s=b1bd156ecd3df7024382f9e145cda17bcaf6bc79', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=1080&crop=smart&auto=webp&s=a3b1fd853b19889a23a601c33fae7d2323e8bdb0', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?auto=webp&s=b78731184d9920fa4900b6590e113d2772fa64ed', 'width': 1440}, 'variants': {}}]} |
New top of the table - MMLU-Pro | 1 | 2025-06-26T22:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lld7do/new_top_of_the_table_mmlupro/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lld7do | false | null | t3_1lld7do | /r/LocalLLaMA/comments/1lld7do/new_top_of_the_table_mmlupro/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?width=108&crop=smart&auto=webp&s=a64044a7db355d0608c1dc00dfece93273e43bc8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?width=216&crop=smart&auto=webp&s=4cca79a7884113221157e252939fe677beb5829d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?width=320&crop=smart&auto=webp&s=19be35cb65d99ef0444bd72e3838cab39b981f68', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?width=640&crop=smart&auto=webp&s=1d68808705d64d6a76cd59960c33ff84e0ee44b5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?width=960&crop=smart&auto=webp&s=31b08686b7acb83b383661a47330d78b357eb0cd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?width=1080&crop=smart&auto=webp&s=de12ad7fe8974d63bbc94a5f5c04c84ae211b040', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?auto=webp&s=a552271c669521b9fa6c8c70ac35e2a86f39681f', 'width': 1200}, 'variants': {}}]} |
||
Automatically Evaluating AI Coding Assistants with Each Git Commit (Open Source) | 3 | 2025-06-26T22:15:10 | https://www.tensorzero.com/blog/automatically-evaluating-ai-coding-assistants-with-each-git-commit/ | bianconi | tensorzero.com | 1970-01-01T00:00:00 | 0 | {} | 1lldbts | false | null | t3_1lldbts | /r/LocalLLaMA/comments/1lldbts/automatically_evaluating_ai_coding_assistants/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=108&crop=smart&auto=webp&s=7f316b890b2a31a8f62865e9dee0569e96f0223c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=216&crop=smart&auto=webp&s=00f1de77a5649a79c91d9cfaf6e03bf21f107026', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=320&crop=smart&auto=webp&s=2ca81dda9abf4ec9e6bfb889114a5c077769d765', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=640&crop=smart&auto=webp&s=5a7cae50b6f64366d7ac07d9f8dfc0a821ddf0b8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=960&crop=smart&auto=webp&s=99b7c53dad6f4445fd39ac50a99d95ff14c145bc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=1080&crop=smart&auto=webp&s=1493755ef1337b07c1305234f8696c55d8bf1c05', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?auto=webp&s=b637a9ae4b2efc64add1e2ceadf2fc8d033def18', 'width': 1200}, 'variants': {}}]} |
|
[Question] Recommended open model for large context window? | 4 | I'm running models on a vllm cluster, curious which ones ya'll like for large context windows + tool calling? Thanks! | 2025-06-26T22:22:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lldhth/question_recommended_open_model_for_large_context/ | soorg_nalyd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lldhth | false | null | t3_1lldhth | /r/LocalLLaMA/comments/1lldhth/question_recommended_open_model_for_large_context/ | false | false | self | 4 | null |
AutoInference: Multiple inference options in a single library | 1 | 2025-06-26T22:22:40 | https://github.com/VolkanSimsir/Auto-Inference | According-Local-9704 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lldi2y | false | null | t3_1lldi2y | /r/LocalLLaMA/comments/1lldi2y/autoinference_multiple_inference_options_in_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?width=108&crop=smart&auto=webp&s=1f0f5ba8c6e831da483f3e35b1a1abc772da1be5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?width=216&crop=smart&auto=webp&s=8f896aae7172db562dee7b490668aedd46efdf5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?width=320&crop=smart&auto=webp&s=30aad7ab83ef6712026700ad502c0f9644cc35a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?width=640&crop=smart&auto=webp&s=510ff72c443e888ee68e2879a0013433fb628d28', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?width=960&crop=smart&auto=webp&s=9e3c0f4de5afbd337330be9652ac75abb1af0633', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?width=1080&crop=smart&auto=webp&s=876a07fac66897668e8fc04ea02ad1c4859edb0a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?auto=webp&s=688b16089c4186df2830d3b955afd811acbec171', 'width': 1200}, 'variants': {}}]} |
||
AutoInference: Multiple inference options in a single library | 15 | Auto-Inference is a Python library that provides a unified interface for model inference using several popular backends, including Hugging Face's Transformers, Unsloth, and vLLM. | 2025-06-26T22:25:30 | According-Local-9704 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lldkdg | false | null | t3_1lldkdg | /r/LocalLLaMA/comments/1lldkdg/autoinference_multiple_inference_options_in_a/ | false | false | 15 | {'enabled': True, 'images': [{'id': 'BpKJLPqKVyOSXtg4r3SwO_zCptqk0G7Ypvcjqtb0sLM', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/0isu7rxjkc9f1.jpeg?width=108&crop=smart&auto=webp&s=1b348159b50876b28b3fd83126e99343e4b020ab', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/0isu7rxjkc9f1.jpeg?width=216&crop=smart&auto=webp&s=7f47566c325ef8e0be1c97e91c124aeb222085e6', 'width': 216}, {'height': 311, 'url': 'https://preview.redd.it/0isu7rxjkc9f1.jpeg?width=320&crop=smart&auto=webp&s=b81bba1fb7237e4007f0a4f2bedba0d0d623c6af', 'width': 320}, {'height': 622, 'url': 'https://preview.redd.it/0isu7rxjkc9f1.jpeg?width=640&crop=smart&auto=webp&s=f774d9d78464b449a1b89c9b9a1cad7d4ca591ca', 'width': 640}], 'source': {'height': 867, 'url': 'https://preview.redd.it/0isu7rxjkc9f1.jpeg?auto=webp&s=3e8b8a5409b509d1030f4c4d6ec252fcc437f11a', 'width': 892}, 'variants': {}}]} |
||
World's Fastest Virtual Try On Model Gets a 50% Resolution Boost [FASHN v1.6] | 0 | 2025-06-26T22:36:37 | https://v.redd.it/wdyfp0gemc9f1 | parkh7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lldti5 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wdyfp0gemc9f1/DASHPlaylist.mpd?a=1753569411%2CNTljNDQyZDc2NzI0NjgwYmExNzU4YWI5NmU4YWE5YTM4NDQ2MzU4MjNkZGIwN2JkY2YxZTA5ZGM4YjBhYjE1ZA%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/wdyfp0gemc9f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 750, 'hls_url': 'https://v.redd.it/wdyfp0gemc9f1/HLSPlaylist.m3u8?a=1753569411%2CMzk4NTgyZThkNDkwMzJlYzMyNzQ5MTk3NmU5NTQ3NGUxY2U0NDAzMzJiYWJhMDRkMDU4N2E2YTkwYmQ5NGQ4MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wdyfp0gemc9f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1lldti5 | /r/LocalLLaMA/comments/1lldti5/worlds_fastest_virtual_try_on_model_gets_a_50/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b99435a0fe48d2bdf0eeeb79b609882abea864f', 'width': 108}, {'height': 224, 'url': 'https://external-preview.redd.it/YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk.png?width=216&crop=smart&format=pjpg&auto=webp&s=851995b7c35d89abef4b10ad6aad78781750da7d', 'width': 216}, {'height': 332, 'url': 'https://external-preview.redd.it/YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk.png?width=320&crop=smart&format=pjpg&auto=webp&s=f7247fe8dcc3579507b9fca8252c419bf010d043', 'width': 320}, {'height': 665, 'url': 'https://external-preview.redd.it/YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk.png?width=640&crop=smart&format=pjpg&auto=webp&s=addd59e3e76392f48ae9c4af4ca6bae82d20ed5a', 'width': 640}, {'height': 998, 'url': 'https://external-preview.redd.it/YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk.png?width=960&crop=smart&format=pjpg&auto=webp&s=36ebb3b99f6e5c4b8d8726b9b7a187725df85d81', 'width': 960}], 'source': {'height': 1026, 'url': 'https://external-preview.redd.it/YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk.png?format=pjpg&auto=webp&s=1dfa0fed5580a0c5cfe07b16f421bf85ff0ffa1c', 'width': 986}, 'variants': {}}]} |
||
Gemini CLI - someone already made a pull request for Local LLM providers (and more) | 31 | It's there, but the contributor still has to complete a CLA and nobody has openly talked about reviewing it. Would giving the PR a thumbs up help it? | 2025-06-26T23:09:59 | https://github.com/google-gemini/gemini-cli/pull/1939 | merrycachemiss | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lleks2 | false | null | t3_1lleks2 | /r/LocalLLaMA/comments/1lleks2/gemini_cli_someone_already_made_a_pull_request/ | false | false | default | 31 | {'enabled': False, 'images': [{'id': '07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?width=108&crop=smart&auto=webp&s=8ca166df85804930dda9721ee25b257d821844a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?width=216&crop=smart&auto=webp&s=428a2fe2cf965074a58fe7cd6946fe14ba213a35', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?width=320&crop=smart&auto=webp&s=585971f4c7cfd4c88704f81fcb8d9e6c95d9aa8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?width=640&crop=smart&auto=webp&s=2b658f44c11bd14eedf4c27ce46aafb04ba52e18', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?width=960&crop=smart&auto=webp&s=5c0cf1cdcbb444b9849f7dab5118f9f60b061ced', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?width=1080&crop=smart&auto=webp&s=3701db6a972c14571c3754a0521972802baf802f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?auto=webp&s=32b38f01d90fffab2635c3ce8a58d21a9d8e7172', 'width': 1200}, 'variants': {}}]} |
3060 TI $70 start bid | 0 | https://ebay.us/m/h7hvqd | 2025-06-26T23:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lleuvl/3060_ti_70_start_bid/ | ReceptorDeceptor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lleuvl | false | null | t3_1lleuvl | /r/LocalLLaMA/comments/1lleuvl/3060_ti_70_start_bid/ | false | false | self | 0 | null |
Can Llamcpp run gemma 3n? | 14 | I followed the instructions here, but when I try to run I get unknown architecture gemma3n error. Is it not supported and I fell for a generate doc? | 2025-06-26T23:25:10 | https://docs.unsloth.ai/basics/gemma-3n-how-to-run-and-fine-tune | thebadslime | docs.unsloth.ai | 1970-01-01T00:00:00 | 0 | {} | 1llewyp | false | null | t3_1llewyp | /r/LocalLLaMA/comments/1llewyp/can_llamcpp_run_gemma_3n/ | false | false | default | 14 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]} |
Trained Cloud hosted sector specific LLM | 0 | Basically
Roofing company + Vertex AI/Google Cloud + roofing job data (roof photos of damage, permit PDFs with no sensitive customer data), and I just heard of RAG.
With those components, plus a web interface for employees and Google OAuth per employee, would this be a useful, feasible tool at work? Thoughts from people more into the field than I am? | 2025-06-26T23:30:44 | https://www.reddit.com/r/LocalLLaMA/comments/1llf1d6/trained_cloud_hosted_sector_specific_llm/ | Ill_Worth_3248 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llf1d6 | false | null | t3_1llf1d6 | /r/LocalLLaMA/comments/1llf1d6/trained_cloud_hosted_sector_specific_llm/ | false | false | self | 0 | null |
Chatterbox tts - tips or advice? | 2 | I've been working with Chatterbox tts ( https://github.com/resemble-ai/chatterbox ) and found that male older/elder voices tend to get a more pronounced accent or non-native English speaker quality as the voice is older, more elderly. Anyone seeing similar behavior? Anyone have any accent suppression, or accent consistency, or just voice consistency techniques?
My source voice audio is about 40 seconds, and it is an older "college professor, public speaker" American-accent voice, like the deep voice in a Ford pickup ad. Yet I get "Hugh Jackman" far too often from source audio that is distinctly not Hugh, and distinctly older-sounding than Hugh Jackman's voice.
I'm not quite clear on what the "temperature", "min_p" and "top_p" parameters do. Any explainers for a non-audio scientist would be appreciated. | 2025-06-26T23:38:49 | https://www.reddit.com/r/LocalLLaMA/comments/1llf7pj/chatterbox_tts_tips_or_advice/ | bsenftner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llf7pj | false | null | t3_1llf7pj | /r/LocalLLaMA/comments/1llf7pj/chatterbox_tts_tips_or_advice/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?width=108&crop=smart&auto=webp&s=84cfed297e0434f54b028dddd8225154189fc57a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?width=216&crop=smart&auto=webp&s=e4d73f97fb2921a2004574a0668b33135154d3df', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?width=320&crop=smart&auto=webp&s=8846278be2de512e26d0a0d4d9d8e71e7e0a0a0d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?width=640&crop=smart&auto=webp&s=86309a43ff6d2d92eebc8c950d397b200dda49b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?width=960&crop=smart&auto=webp&s=7242d1d9a1abf2b27fdce97dc41907c14305f2fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?width=1080&crop=smart&auto=webp&s=9cb4c93df152631a1af1480b0a029ecc46fa3654', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?auto=webp&s=d7e958f76a63bde0d2d11afdb0fa08f1abdcbda4', 'width': 1200}, 'variants': {}}]} |
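An addendum on the Chatterbox sampling knobs mentioned above: temperature, top_p, and min_p are standard token-sampling parameters rather than anything audio-specific (Chatterbox samples speech tokens from a probability distribution much like an LLM samples text tokens). A rough sketch of what each one does to a toy distribution, using plain NumPy and made-up numbers rather than Chatterbox's actual internals:

```python
import numpy as np

# Toy next-token probabilities (made-up values, not from Chatterbox)
probs = np.array([0.50, 0.25, 0.15, 0.07, 0.03])

def apply_temperature(p, temperature):
    # <1.0 sharpens the distribution (more stable, closer to the reference),
    # >1.0 flattens it (more variation, more chance of drifting off-voice).
    logits = np.log(p) / temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()

def apply_top_p(p, top_p):
    # Nucleus sampling: keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize.
    order = np.argsort(p)[::-1]
    cum = np.cumsum(p[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]
    out = np.zeros_like(p)
    out[keep] = p[keep]
    return out / out.sum()

def apply_min_p(p, min_p):
    # Drop tokens whose probability is below min_p * max(p),
    # i.e. prune the long tail relative to the most likely token.
    out = np.where(p >= min_p * p.max(), p, 0.0)
    return out / out.sum()

print(apply_temperature(probs, 0.7))
print(apply_top_p(probs, 0.9))
print(apply_min_p(probs, 0.1))
```

In practice, lowering temperature and/or raising min_p usually makes the delivery more stable and closer to the reference clip at the cost of some expressiveness; whether that is enough to tame the accent drift described above is an open question.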
How valuable is the lmarena data and 7a any model being trained on it? | 1 | Would love to know! Anyone knows? | 2025-06-26T23:52:50 | https://www.reddit.com/r/LocalLLaMA/comments/1llfilp/how_valuable_is_the_lmarena_data_and_7a_any_model/ | Extra-Whereas-9408 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llfilp | false | null | t3_1llfilp | /r/LocalLLaMA/comments/1llfilp/how_valuable_is_the_lmarena_data_and_7a_any_model/ | false | false | self | 1 | null |
The mod of this server is a mod of r/grok | 4 | Just saying. No hate meant. | 2025-06-26T23:53:14 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llfiwc | false | null | t3_1llfiwc | /r/LocalLLaMA/comments/1llfiwc/the_mod_of_this_server_is_a_mod_of_rgrok/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'rk0yad6f0d9f1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=108&crop=smart&auto=webp&s=3357cd9933efc20c927d340a9034deb63b027e63', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=216&crop=smart&auto=webp&s=10a98e555a9ecf3df3dd2cb0d53c009f776b21f7', 'width': 216}, {'height': 108, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=320&crop=smart&auto=webp&s=902216bf1a649fea4154768fab85cd5514d03f3b', 'width': 320}, {'height': 216, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=640&crop=smart&auto=webp&s=d32b09406cf6f5bb62350b50c50fd8df17ba9a21', 'width': 640}, {'height': 325, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=960&crop=smart&auto=webp&s=94bfbaab0539bf468badf9ef9d6c5dc6ddf6faaa', 'width': 960}, {'height': 366, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=1080&crop=smart&auto=webp&s=1099040676d7aa927a6700251348004a7155c699', 'width': 1080}], 'source': {'height': 442, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?auto=webp&s=5a9a96f221714356413a7e1e6f6b73ace3690ba0', 'width': 1304}, 'variants': {}}]} |
|
Anyone used the Qualcomm AI SDK/QC AI 100 GPUs | 3 | Curious....AWS has an instance running this as well. Any thoughts vs Nvidia stack? | 2025-06-26T23:57:32 | https://www.reddit.com/r/LocalLLaMA/comments/1llfm7d/anyone_used_the_qualcomm_ai_sdkqc_ai_100_gpus/ | onemoreburrito | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llfm7d | false | null | t3_1llfm7d | /r/LocalLLaMA/comments/1llfm7d/anyone_used_the_qualcomm_ai_sdkqc_ai_100_gpus/ | false | false | self | 3 | null |
Dear Mod, we don't want our posts on X/Twitter. | 881 | Especially with no credit in the title, but rather just put in a comment just deep in there. This is user generated content, and not the property of the mods to just regurgitate whereever they wants. No harm meant, and also it seems like the majority of the community agrees with this consensus, based on downvotes of comments which mentioned this. | 2025-06-27T00:07:54 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llfufv | false | null | t3_1llfufv | /r/LocalLLaMA/comments/1llfufv/dear_mod_we_dont_want_our_posts_on_xtwitter/ | false | false | 881 | {'enabled': True, 'images': [{'id': 'duhbzZgd4dDVhblE5xtdnCoj09lqg6eCAnJlSlLj4Go', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/ber4b39v2d9f1.png?width=108&crop=smart&auto=webp&s=5acce585c4231e286e9ba0afffd5fb810f4e46f9', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/ber4b39v2d9f1.png?width=216&crop=smart&auto=webp&s=cc047dfa47c22be2283a19b30a606f6677723723', 'width': 216}, {'height': 346, 'url': 'https://preview.redd.it/ber4b39v2d9f1.png?width=320&crop=smart&auto=webp&s=9e218dc03c6d35acfdce4bce0aa5f1f759fc1514', 'width': 320}, {'height': 693, 'url': 'https://preview.redd.it/ber4b39v2d9f1.png?width=640&crop=smart&auto=webp&s=b33ffbed20588badb2becadf417b4cbebfe32d6f', 'width': 640}], 'source': {'height': 974, 'url': 'https://preview.redd.it/ber4b39v2d9f1.png?auto=webp&s=e2c37624edcc8ee055c63b96f79b9b74fcdbd88d', 'width': 899}, 'variants': {}}]} |
||
Looking for Open Source Tools That Support DuckDB Querying (Like PandasAI etc.) | 10 | Hey everyone,
I'm exploring tools that support **DuckDB** querying for CSVs or tabular data — preferably ones that integrate with LLMs or allow natural language querying. I already know about **PandasAI**, **LangChain’s CSV agent**, and **LlamaIndex’s PandasQueryEngine**, but I’m specifically looking for open-source projects (not just wrappers) that:
* Use DuckDB under the hood for fast, SQL-style analytics
* Allow querying or manipulation of data using natural language
* Possibly integrate well with multi-agent frameworks or AI assistants
* Are actively maintained or somewhat production-grade
Would appreciate recommendations — GitHub links, blog posts, or even your own projects!
Thanks in advance :)
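To make the requirements above concrete, this is roughly the pattern I'm after. The DuckDB calls below are real; `ask_llm` is a hypothetical stand-in for whatever model and prompt you'd use to turn a question into SQL, and `sales.csv` is just an example file:

```python
import duckdb

def ask_llm(question: str, schema: str) -> str:
    # Hypothetical: call your LLM of choice with the schema + question
    # and have it return a single DuckDB SQL statement.
    raise NotImplementedError

con = duckdb.connect()  # in-memory database

# DuckDB can query CSV/Parquet files in place, no import step needed
schema = con.sql("DESCRIBE SELECT * FROM 'sales.csv'").df().to_string()

sql = ask_llm("Which region had the highest revenue last quarter?", schema)
result = con.sql(sql).df()  # pandas DataFrame with the answer
print(result)
```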
| 2025-06-27T00:50:25 | https://www.reddit.com/r/LocalLLaMA/comments/1llgpxj/looking_for_open_source_tools_that_support_duckdb/ | callmedevilthebad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llgpxj | false | null | t3_1llgpxj | /r/LocalLLaMA/comments/1llgpxj/looking_for_open_source_tools_that_support_duckdb/ | false | false | self | 10 | null |
Anyone put multiple RTX Pro 6000's in one case? | 0 | Specifically the 600W cards, since the Max-Q are sold out everywhere.
If you're running multiple of them I'd love to hear about the thermals/any issues you've faced! | 2025-06-27T00:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1llgswq/anyone_put_multiple_rtx_pro_6000s_in_one_case/ | Prestigious_Thing797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llgswq | false | null | t3_1llgswq | /r/LocalLLaMA/comments/1llgswq/anyone_put_multiple_rtx_pro_6000s_in_one_case/ | false | false | self | 0 | null |
Should LocalLLaMA move to fediverse? | 0 | I'm not a fan of centralized platforms, and now with the latest developments and the apparent move towards enshittification of this subreddit and the new, suspicious moderator, I honestly see now as more than the right time to save the essence of our community. I don't want anything to do with x/twitter or discord or bluesky, a labeling scam that pretends to be different from the rest of the shit.
In my opinion, a few people should not have the power to decide how a community of tens or hundreds of thousands of people communicates. Even whether, when, and what someone is allowed to post is a "design" decision of centralized platforms.
I therefore see decentralized platforms as the only solution to this problem. I therefore propose fediverse with friendly people volunteering to cover the costs.
I also offer my support with this: I **can** also participate in hosting myself if there is a need, but I do not necessarily insist that I have to run my own host. This can also be done by several other people from the community who are suitable and have been democratically elected (the same for moderators etc).
However, I am happy to provide the necessary infrastructure and/or cover the costs.
Feel free to mention other options and suggestions if you know of any.
[View Poll](https://www.reddit.com/poll/1llgz1e) | 2025-06-27T01:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/1llgz1e/should_localllama_move_to_fediverse/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llgz1e | false | null | t3_1llgz1e | /r/LocalLLaMA/comments/1llgz1e/should_localllama_move_to_fediverse/ | false | false | self | 0 | null |
I'm using a local Llama model for my game's dialogue system! | 685 | I'm blown away by how fast and intelligent Llama 3.2 is! | 2025-06-27T01:23:40 | https://v.redd.it/cgoobkv5gd9f1 | LandoRingel | /r/LocalLLaMA/comments/1llhdoq/im_using_a_local_llama_model_for_my_games/ | 1970-01-01T00:00:00 | 0 | {} | 1llhdoq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cgoobkv5gd9f1/DASHPlaylist.mpd?a=1753709027%2CZmM1OGFlMzM2NDMwYTVlZmQ1NDRlY2EyYmUzNDU2ZDlmNGM0NGM1NjYzMzlhYmY1NTA2ZDNkNzVlZjE0MGRmYw%3D%3D&v=1&f=sd', 'duration': 93, 'fallback_url': 'https://v.redd.it/cgoobkv5gd9f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/cgoobkv5gd9f1/HLSPlaylist.m3u8?a=1753709027%2CYzllZWZiMTQ5YzFkMzc0OGEwYjQ1NzRkYjdlY2NmMjlhNmQ3M2Q1YmUwNDE1MDVhZTYwYjRiNzdmODU3YTNjZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cgoobkv5gd9f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1llhdoq | /r/LocalLLaMA/comments/1llhdoq/im_using_a_local_llama_model_for_my_games/ | false | false | 685 | {'enabled': False, 'images': [{'id': 'c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?width=108&crop=smart&format=pjpg&auto=webp&s=f634844c23c2333237b70ed0a10d6b6b518a2398', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?width=216&crop=smart&format=pjpg&auto=webp&s=de1baed337593916167f0afc1720a117347f22b4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?width=320&crop=smart&format=pjpg&auto=webp&s=29d3fcd9c1ca15f17be0aba6baabeb11ca50b82d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?width=640&crop=smart&format=pjpg&auto=webp&s=18f37aa1c9aefe8816d6605b61092b84b5d8d53c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?width=960&crop=smart&format=pjpg&auto=webp&s=e05d98e39fdceac3e9029e9cf82de7cf62e6e28e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=428920c9a9e80dc4e40b48a3602da9dfd5e92c00', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?format=pjpg&auto=webp&s=4a883942a82d36d20ac0b423fc4eb28f5665d1a2', 'width': 1920}, 'variants': {}}]} |
|
Model to analyze images | 2 | Is there a model that can analyze images like chat gpt and give a commentary on what the image is? | 2025-06-27T01:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1llhvvv/model_to_analyze_images/ | Technical_Whole_947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llhvvv | false | null | t3_1llhvvv | /r/LocalLLaMA/comments/1llhvvv/model_to_analyze_images/ | false | false | nsfw | 2 | null |
How to train custom arch or custom flow for LLMs | 3 | I'm fairly new to the LLM world and have been exploring several repos around fine-tuning and training. However, I'm at a point where I want to do more than just tweak existing models, like
1. Train my own custom architecture (not just finetune a pre-existing one),
2. Use custom loss functions that take additional arguments, need some preprocessing before the loss calculation, or require extra args alongside the standard ones.
The problem is, if I write everything from scratch, I'll end up spending way too much time on infrastructure — rather than focusing on the actual research (e.g., my model or loss function).
Are there any well-maintained, extensible frameworks or repos that support this kind of setup — letting me plug in custom components (losses, models) while handling the rest (scaling, training, data loading) in a clean way?
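To make the ask concrete, this is the kind of thing I mean, using Hugging Face's `Trainer` only as scaffolding. Overriding `compute_loss` is the documented extension point; the auxiliary penalty below is purely illustrative, not from any particular paper:

```python
from transformers import Trainer

class CustomLossTrainer(Trainer):
    def __init__(self, *args, aux_weight: float = 0.1, **kwargs):
        super().__init__(*args, **kwargs)
        self.aux_weight = aux_weight  # extra argument my loss needs

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        # Assumes `labels` are in the batch, so the model returns its own
        # (properly shifted) cross-entropy loss.
        outputs = model(**inputs)
        ce = outputs.loss

        # Illustrative auxiliary term: penalize over-confident logits.
        aux = outputs.logits.logsumexp(dim=-1).pow(2).mean()

        loss = ce + self.aux_weight * aux
        return (loss, outputs) if return_outputs else loss
```

That covers the custom-loss case without rewriting the training loop; for a custom architecture, the question is which framework lets me drop in my own `nn.Module` while it still handles scaling, checkpointing, and data loading.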
| 2025-06-27T03:00:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lljabk/how_to_train_custom_arch_or_custom_flow_for_llms/ | commander-trex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lljabk | false | null | t3_1lljabk | /r/LocalLLaMA/comments/1lljabk/how_to_train_custom_arch_or_custom_flow_for_llms/ | false | false | self | 3 | null |
What's this star all over the feed for LocalLLaMA? | 15 | How's this Reddit associated with Twitter? If we must have it, isn't hugging face more appropriate? I vote for [https://huggingface.co/models](https://huggingface.co/models) page. Twitter has nothing to do with local LLMs (or LLMs at all).
For now, I created this block rule for uBlock origin to hide it:
||emoji.redditmedia.com/cjqd7h6t3a9f1\_t5\_81eyvm/Verified
But, it still keeps the link to Twitter clickable. | 2025-06-27T03:05:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lljdk8/whats_this_star_all_over_the_feed_for_localllama/ | crodjer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lljdk8 | false | null | t3_1lljdk8 | /r/LocalLLaMA/comments/1lljdk8/whats_this_star_all_over_the_feed_for_localllama/ | false | false | self | 15 | null |
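Untested follow-up idea: instead of blocking only the emoji image, a uBlock Origin cosmetic filter can hide the whole flair link. The selectors below assume the flair is an ordinary anchor pointing at Twitter/X, so they may need adjusting with the element picker:

```
reddit.com##a[href^="https://twitter.com/"]
reddit.com##a[href^="https://x.com/"]
```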
POLL Do you like the subreddit Twitter? | 8 | Thought it’d be good to get a sample from you guys because I’m fairly conflicted on it.
[View Poll](https://www.reddit.com/poll/1lljn6h) | 2025-06-27T03:19:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lljn6h/poll_do_you_like_the_subreddit_twitter/ | Capable-Ad-7494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lljn6h | false | null | t3_1lljn6h | /r/LocalLLaMA/comments/1lljn6h/poll_do_you_like_the_subreddit_twitter/ | false | false | self | 8 | null |
LLAMA Models, Perplexity, Claude, Deepseek - welcome to my ai party! | 0 | Today I added some AI friends to my Custom Framework that has emerged from 15,000 directed conversations from a non-coder.
Welcome to the chaos!
🥳🥳🥳🥳🥳🎉 | 2025-06-27T03:59:08 | https://www.youtube.com/watch?v=4RCHW84Oo-4 | Silly_Classic1005 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1llkcwp | false | {'oembed': {'author_name': 'James OKelly', 'author_url': 'https://www.youtube.com/@jjkmusicbot', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/4RCHW84Oo-4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="4 servicers, 5 ais, 1 turn. https://m0d.ai/"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/4RCHW84Oo-4/hqdefault.jpg', 'thumbnail_width': 480, 'title': '4 servicers, 5 ais, 1 turn. https://m0d.ai/', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1llkcwp | /r/LocalLLaMA/comments/1llkcwp/llama_models_perplexity_claude_deepseek_welcome/ | false | false | default | 0 | null |
Question about agent mode like GitHub copilot. | 2 | Hello, I’m new to this whole AI coding thing and I was wondering if there’s a way to run some model locally that would allow something like github copilot’s agent mode? | 2025-06-27T04:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1llkwvn/question_about_agent_mode_like_github_copilot/ | Straight_Caramel7725 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llkwvn | false | null | t3_1llkwvn | /r/LocalLLaMA/comments/1llkwvn/question_about_agent_mode_like_github_copilot/ | false | false | self | 2 | null |
Local coding AI agent? | 3 | Hi,
I'm looking for a decent coding agent that can run with local models and is open-source. I've not found anything yet.
I've mostly been using Tabby, which is alright, but I recently learned that the coding agent they're working on does not seem to support a fully local stack. | 2025-06-27T05:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lllgy8/local_coding_ai_agent/ | spaceman_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lllgy8 | false | null | t3_1lllgy8 | /r/LocalLLaMA/comments/1lllgy8/local_coding_ai_agent/ | false | false | self | 3 | null |
I built a document workflow system using VLMs: processes complex docs end-to-end (runs locally!!) | 7 | Hey r/LocalLLaMA
We're building Morphik: a multimodal search layer for AI applications that works super well with complex documents. (runs locally :))
Our users kept using our search API in creative ways to build document workflows and we realized they needed proper workflow automation, not just search queries. So we built workflow automation for documents. Extract data, save to metadata, add custom logic: all automated. Uses vision language models for accuracy.
We use it for our invoicing workflow - automatically processes vendor invoices, extracts key data, flags issues, saves everything searchable.
Works for any document type where you need automated processing + searchability. (an example of it working for safety data sheets below)
We'll be adding remote API calls soon so you can trigger notifications, approvals, etc.
Try it out: [https://morphik.ai](https://morphik.ai)
GitHub: [https://github.com/morphik-org/morphik-core](https://github.com/morphik-org/morphik-core)
Would love any feedback/ feature requests!
https://reddit.com/link/1lllpzt/video/hrywbzasle9f1/player
| 2025-06-27T05:15:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lllpzt/i_built_a_document_workflow_system_using_vlms/ | yes-no-maybe_idk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lllpzt | false | null | t3_1lllpzt | /r/LocalLLaMA/comments/1lllpzt/i_built_a_document_workflow_system_using_vlms/ | false | false | self | 7 | null |
China's NetEase Releases Open- Source Mathematical Model: Confucius3-Math | 28 |
Official Demo: https://confucius.youdao.com/
GitHub:https://github.com/netease-youdao/Confucius3-Math
Huggingface:https://huggingface.co/netease-youdao/Confucius3-Math
| 2025-06-27T05:18:47 | https://github.com/netease-youdao/Confucius3-Math | Fun-Doctor6855 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lllry7 | false | null | t3_1lllry7 | /r/LocalLLaMA/comments/1lllry7/chinas_netease_releases_open_source_mathematical/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?width=108&crop=smart&auto=webp&s=9445516cb87f032059fb2e8430a8cfb5cd59b48e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?width=216&crop=smart&auto=webp&s=0a351b4ff24facbb18c7aeba32a917a11df9dfd2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?width=320&crop=smart&auto=webp&s=4aa53eb9d9598f3f899541c0bd722fa283140782', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?width=640&crop=smart&auto=webp&s=28fb766711134b0632af919a620b789e3ca2ab2a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?width=960&crop=smart&auto=webp&s=5822a0edf72bbf1becddae99da3d58397bc6832a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?width=1080&crop=smart&auto=webp&s=2b5f175a266037586a182a932351c352181b197a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?auto=webp&s=c15cf3f8f81c98beeb94c54ed5b17f4332dddacd', 'width': 1200}, 'variants': {}}]} |
|
The performance of NetEase's new Open-Source mathematical model Confucius3-Math | 34 | https://arxiv.org/abs/2506.18330 | 2025-06-27T05:21:56 | https://www.reddit.com/gallery/1llltv5 | Fun-Doctor6855 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1llltv5 | false | null | t3_1llltv5 | /r/LocalLLaMA/comments/1llltv5/the_performance_of_neteases_new_opensource/ | false | false | 34 | {'enabled': True, 'images': [{'id': 'dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?width=108&crop=smart&auto=webp&s=9e0f5dff4c69a4255e20a711f400abcfb79a89c4', 'width': 108}, {'height': 96, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?width=216&crop=smart&auto=webp&s=c6384b423931dfa7267bcdd5411f86193b156244', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?width=320&crop=smart&auto=webp&s=574c1e51f1b681034afa1cb9193a709e5606d0d7', 'width': 320}, {'height': 285, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?width=640&crop=smart&auto=webp&s=693da5b11b8a4b1c3c2afe46f9b68f0213a67d59', 'width': 640}, {'height': 428, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?width=960&crop=smart&auto=webp&s=65b392617af49d0d1e47784e758318fe8e1ccacd', 'width': 960}, {'height': 482, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?width=1080&crop=smart&auto=webp&s=5aab7125997496f0f6106469e6e7f8d114e94168', 'width': 1080}], 'source': {'height': 1278, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?auto=webp&s=607e74f9a8fc7021ad2447ce56f72eab354553fb', 'width': 2862}, 'variants': {}}]} |
|
Update on memX: a shared memory for LLM agents | 17 | A few days ago I shared a project I was working on: [https://www.reddit.com/r/LocalLLaMA/comments/1lehbra/built\_memx\_a\_shared\_memory\_backend\_for\_llm\_agents/](https://www.reddit.com/r/LocalLLaMA/comments/1lehbra/built_memx_a_shared_memory_backend_for_llm_agents/)
I have made significant progress and now, you guys can integrate it with your systems. I have also hosted it as a SaaS free of cost for anyone to use it.
SaaS: [https://mem-x.vercel.app](https://mem-x.vercel.app)
PyPI: `pip install memx-sdk`
Github: [https://github.com/MehulG/memX](https://github.com/MehulG/memX)
Just to recap:
memX is a shared memory layer for LLM agents — kind of like Redis, but with real-time sync, pub/sub, schema validation, and access control.Instead of having agents pass messages or follow a fixed pipeline, they just read and write to shared memory keys. It’s like a collaborative whiteboard where agents evolve context together.
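To make the "shared whiteboard" idea concrete, here is a tiny illustration of the pattern in plain Python. This is *not* the memX SDK API (see the repo for that); it only shows the read/write + subscribe behaviour, minus the real-time sync, schema validation, and access control:

```python
from collections import defaultdict
from typing import Any, Callable

class SharedMemory:
    """Toy stand-in for a shared key-value memory with pub/sub."""

    def __init__(self):
        self._store: dict[str, Any] = {}
        self._subs: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

    def set(self, key: str, value: Any) -> None:
        self._store[key] = value
        for callback in self._subs[key]:  # notify interested agents
            callback(key, value)

    def get(self, key: str, default: Any = None) -> Any:
        return self._store.get(key, default)

    def subscribe(self, key: str, callback: Callable[[str, Any], None]) -> None:
        self._subs[key].append(callback)

mem = SharedMemory()

# A "researcher" agent publishes findings; a "writer" agent reacts to them.
mem.subscribe("findings", lambda k, v: print(f"writer agent sees new {k}: {v}"))
mem.set("findings", ["benchmarks collected", "draft outline ready"])
print(mem.get("findings"))
```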
Would love feedback or ideas from others building agent systems :) | 2025-06-27T05:27:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lllxey/update_on_memx_a_shared_memory_for_llm_agents/ | Temporary-Tap-7323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lllxey | false | null | t3_1lllxey | /r/LocalLLaMA/comments/1lllxey/update_on_memx_a_shared_memory_for_llm_agents/ | false | false | self | 17 | null |
Best model for HTML? | 3 | I've been using ChatGPT which has been great but I'm on the free version which runs out of tokens quickly. I have a 5090, which model is the best for coding websites? I tried Qwen 3 32B but it's not good. | 2025-06-27T05:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/1llmbg3/best_model_for_html/ | Nomski88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llmbg3 | false | null | t3_1llmbg3 | /r/LocalLLaMA/comments/1llmbg3/best_model_for_html/ | false | false | self | 3 | null |
New LLM looking for input on license | 0 | Working on my LLM. How is this for a license? What should I change?
# EchoChAI Non-Commercial License v1.1
**Copyright © Echo Chai LTD, 2025**
---
## 1. Definitions
**“Model”** refers to the artificial intelligence model named **EchoChAI**, including its architecture, weights, training data (where applicable), source code, configuration files, and associated documentation or artifacts released under this License.
**“You”** or **“Your”** refers to the individual or legal entity exercising rights under this License.
**“Output”** means any result, content, response, file, or data generated by using EchoChAI.
**“Commercial Use”** means any usage of EchoChAI or its Outputs that is intended for or results in financial gain, commercial advantage, internal enterprise operations, or revenue-generating activities.
---
## 2. Grant of Rights
Subject to the terms of this License, Echo Chai LTD hereby grants You a worldwide, royalty-free, non-exclusive, non-transferable, and non-sublicensable license to:
- Use, copy, modify, and operate EchoChAI for **non-commercial**, **educational**, **research**, or **personal** purposes;
- Generate, use, and retain ownership over Outputs from EchoChAI;
- Share unmodified versions of EchoChAI under this same License, with appropriate attribution.
---
## 3. Restrictions
- **No Commercial Use**: You may not use EchoChAI or its Outputs in any commercial context without prior **explicit written permission** from Echo Chai LTD.
- **No Commercial Redistribution**: You may not sell, license, sublicense, or distribute EchoChAI or its Outputs for commercial gain.
- **No Reverse Licensing**: You may not apply any legal, technical, or contractual restrictions that conflict with this License.
- **Prohibited Uses**: You may not use EchoChAI or its Outputs:
- To violate laws, regulations, or third-party rights;
- For military, policing, or surveillance applications;
- To develop or operate weapon systems;
- To generate deceptive, fraudulent, libelous, or harmful content (e.g., misinformation, impersonation);
- In any way that could reasonably cause harm to individuals, communities, or ecosystems.
---
## 4. Ownership of Outputs
You retain **full ownership and responsibility** for any Outputs generated by EchoChAI.
Echo Chai LTD **does not claim ownership**, authorship, or responsibility for any content created through your use of the Model.
---
## 5. Disclaimer of Warranty
THE MODEL IS PROVIDED **"AS IS"**, WITH ALL FAULTS AND WITHOUT WARRANTY OF ANY KIND.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, ECHO CHAI LTD DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO:
- MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT;
- RELIABILITY, STABILITY, OR USEFULNESS OF OUTPUTS;
- THAT THE MODEL OR OUTPUTS WILL BE ERROR-FREE, UNINTERRUPTED, OR COMPATIBLE WITH ALL ENVIRONMENTS;
- THAT THE MODEL IS FREE FROM VULNERABILITIES OR MALICIOUS CODE.
---
## 6. Limitation of Liability
TO THE FULLEST EXTENT PERMITTED UNDER LAW, ECHO CHAI LTD SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, EXEMPLARY, OR PUNITIVE DAMAGES, INCLUDING BUT NOT LIMITED TO:
- LOSS OF DATA, PROFITS, REVENUE, GOODWILL, OR BUSINESS INTERRUPTION;
- SECURITY BREACHES OR DATA LEAKS;
- ERRONEOUS OR OFFENSIVE OUTPUTS;
- ACTS OF GOD, NATURAL DISASTERS, OR SUPERNATURAL OCCURRENCES (JUST IN CASE);
- ANY CLAIMS FROM USERS OF YOUR IMPLEMENTATION OR DEPLOYMENT.
USE OF THIS MODEL IS **AT YOUR OWN RISK**.
---
## 7. Indemnification
You agree to **indemnify**, **defend**, and **hold harmless** Echo Chai LTD and its affiliates, contributors, and agents from and against all liabilities, damages, losses, or expenses (including attorneys' fees) arising from:
- Your use or misuse of EchoChAI;
- Violation of this License;
- Third-party claims related to your use or outputs.
---
## 8. Commercial Licensing
To use EchoChAI or its Outputs for commercial purposes (including but not limited to SaaS integration, enterprise tools, monetized applications, or corporate research), you must obtain **separate written permission** from Echo Chai LTD.
Contact: **Echo Chai LTD – [Insert contact email or website]**
---
## 9. Termination
Violation of any terms of this License **immediately terminates** your rights under it.
Upon termination, you must cease all use of EchoChAI and destroy any copies in your possession.
Sections **3–8** shall survive termination.
---
## 10. Governing Law
This License shall be governed by and construed in accordance with the laws of **[Insert jurisdiction, e.g., "the State of California, USA"]**, excluding any conflict of law principles.
---
## 11. Entire Agreement
This document constitutes the complete agreement between You and Echo Chai LTD regarding EchoChAI and supersedes all prior agreements and understandings.
---
## 12. Severability
If any provision of this License is held unenforceable, the remainder shall remain valid and enforceable to the maximum extent possible.
---
## 13. No Waiver
No failure or delay by Echo Chai LTD in exercising any right shall constitute a waiver of that right.
| 2025-06-27T05:52:34 | https://www.reddit.com/r/LocalLLaMA/comments/1llmc8b/new_llm_looking_for_input_on_license/ | nntb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llmc8b | false | null | t3_1llmc8b | /r/LocalLLaMA/comments/1llmc8b/new_llm_looking_for_input_on_license/ | false | false | self | 0 | null |
General opinions on Gemma 3n Speech-to-Text (STT)? | 12 | Hi everyone,
Gemma 3n's release just happened, and for some of us a good STT model is something we have long been waiting for. It will take even longer until we can dictate into LM Studio or similar, but I wanted to create this post to discuss your findings regarding Gemma 3n's STT abilities.
What are your observations regarding maintaining context, what language did you test, what is the speed? Do you see something peculiar for STT tasks regarding its advertised selective parameter activation technology?
Any comparisons to Whisper or Phi-4-multimodal, their stupid sliding window approach?
Post it! thanks!
(I currently can't run it..) | 2025-06-27T06:02:01 | https://www.reddit.com/r/LocalLLaMA/comments/1llmhof/general_opinions_on_gemma_3n_speechtotext_stt/ | Karim_acing_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llmhof | false | null | t3_1llmhof | /r/LocalLLaMA/comments/1llmhof/general_opinions_on_gemma_3n_speechtotext_stt/ | false | false | self | 12 | null |
FYI to everyone: RTX 3090 prices crashed and are back to baseline. You can finally get $600something 3090s again in the USA. | 198 | If you've been priced out by the spike to $1000+ recently for the past ~3 months, the prices finally dropped to baseline recently.
You can get a $650-750 Nvidia 3090 fairly easily now, instead of being nearly impossible.
Future pricing is unpredictable: if we follow expected depreciation trends, the 3090 should be around $550-600, but then again Trump's tariff extensions expire in a few weeks and pricing is wild. If you're interested in GPUs, now is probably the best time to buy 3090s/4090s. | 2025-06-27T06:20:23 | https://www.reddit.com/r/LocalLLaMA/comments/1llms46/fyi_to_everyone_rtx_3090_prices_crashed_and_are/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llms46 | false | null | t3_1llms46 | /r/LocalLLaMA/comments/1llms46/fyi_to_everyone_rtx_3090_prices_crashed_and_are/ | false | false | self | 198 | null |
What is the best under-12B local model for text polishing, proofreading, and grammar checking? | 0 | Hi, I'm looking for some suggestions for local LLMs.
I'm dealing with some internal documents of the organization I work with, and I want to improve its quality. Since the documents shouldn't be shared externally, I have to use local models.
I've searched the internet and it seems there are some models performing relatively better in natural language and writing.
- Llama 3.1 8B (A good all-arounder?)
- Qwen 3 8B (Better all-arounder than Llama 3.1?)
- Gemma 3 12B (Good for creative writing and bubbly conversation, but what about formal texts?)
- Gemma 2 9B (Older than Gemma 3, is it still good?)
If you have vibe-checked and battle-tested some local models for text improvement, preferably for non-creative purposes, I'd appreciate your recommendation. | 2025-06-27T06:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/1llmu12/what_is_the_best_under12b_local_model_for_text/ | pitchblackfriday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llmu12 | false | null | t3_1llmu12 | /r/LocalLLaMA/comments/1llmu12/what_is_the_best_under12b_local_model_for_text/ | false | false | self | 0 | null |
Reverse Engineering Gemma 3n | 61 | 2025-06-27T06:45:04 | https://github.com/antimatter15/reverse-engineering-gemma-3n | AppearanceHeavy6724 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lln5uj | false | null | t3_1lln5uj | /r/LocalLLaMA/comments/1lln5uj/reverse_engineering_gemma_3n/ | false | false | default | 61 | {'enabled': False, 'images': [{'id': 'VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?width=108&crop=smart&auto=webp&s=89fbb0342ed3a531d420566c59fa6176e8cf82f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?width=216&crop=smart&auto=webp&s=8c2c46c76c6999d68f7485bb8c9391b882d1b5e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?width=320&crop=smart&auto=webp&s=671cf467eb0d0e3c08ee416bef70522c7148ee30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?width=640&crop=smart&auto=webp&s=ca2d2136f62fedb0728942e71b3b8ff4a4232339', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?width=960&crop=smart&auto=webp&s=5cf0e9ede2cbba13c8943e9fe36501f4da147d62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?width=1080&crop=smart&auto=webp&s=475a138788a5293a25552e16b66964859298c25a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?auto=webp&s=ec7074bb16c93f8043b8257aa3dd68b749b4c0d2', 'width': 1200}, 'variants': {}}]} |
|
Gemma 3n Multimodal Input: Text, Audio, Image, and Video? | 11 | Regardless of the API, what is the “most multimodal” way Gemma 3n can be made to operate?
The docs say Gemma 3n input supports:
1. text + audio
2. text+ image
The release mentions “video”, can it input:
3. True video (t+v+a)
4. Text + video (or imgseq) + audio
5. Running 1+2 and sharing some weights
Or another combo?
If so, is there an example of 3-channel multimodal input?
While I’ve linked the hf transformers example, I’m interested in any code base where I can work with more modalities of input or potentially modify the model to take more inputs.
Streaming full video + prompts as input with text output would be the ideal modality combination I’d like to work with so the closer i can get to that the better!
Thanks everyone!
Gemma 3n Release page
https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/ | 2025-06-27T06:45:49 | https://ai.google.dev/gemma/docs/core/huggingface_inference#audio | doomdayx | ai.google.dev | 1970-01-01T00:00:00 | 0 | {} | 1lln6ar | false | null | t3_1lln6ar | /r/LocalLLaMA/comments/1lln6ar/gemma_3n_multimodal_input_text_audio_image_and/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=216&crop=smart&auto=webp&s=1812be5c0e49c65e85787f4dbb2922a543943e79', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=320&crop=smart&auto=webp&s=ca7983e470f1e5cbc5edcd5c5e1c7e5b70227953', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=640&crop=smart&auto=webp&s=293ebb5606c7edf7f2570aa914eb4ddb55f1e615', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=960&crop=smart&auto=webp&s=b1bd156ecd3df7024382f9e145cda17bcaf6bc79', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=1080&crop=smart&auto=webp&s=a3b1fd853b19889a23a601c33fae7d2323e8bdb0', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?auto=webp&s=b78731184d9920fa4900b6590e113d2772fa64ed', 'width': 1440}, 'variants': {}}]} |
|
Hunyuan-A13B released | 541 | From HF repo:
>Model Introduction
>With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while maintaining high performance has become a critical challenge. To address this, we have explored Mixture of Experts (MoE) architectures. The newly introduced Hunyuan-A13B model features a total of 80 billion parameters with 13 billion active parameters. It not only delivers high-performance results but also achieves optimal resource efficiency, successfully balancing computational power and resource utilization.
>Key Features and Advantages
>Compact yet Powerful: With only 13 billion active parameters (out of a total of 80 billion), the model delivers competitive performance on a wide range of benchmark tasks, rivaling much larger models.
>Hybrid Inference Support: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
>Ultra-Long Context Understanding: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
>Enhanced Agent Capabilities: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3 and τ-Bench.
>Efficient Inference: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference. | 2025-06-27T06:59:21 | https://huggingface.co/tencent/Hunyuan-A13B-Instruct | kristaller486 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1llndut | false | null | t3_1llndut | /r/LocalLLaMA/comments/1llndut/hunyuana13b_released/ | false | false | 541 | {'enabled': False, 'images': [{'id': 'B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=108&crop=smart&auto=webp&s=07fddabe91e442028f9a3c3afd189223a7d91fce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=216&crop=smart&auto=webp&s=5ee3256d5380270fc602776fd46aa44d50e57ec8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=320&crop=smart&auto=webp&s=ab65a9fbee8b46ef1d086bab44568f0ff3f72833', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=640&crop=smart&auto=webp&s=975cbb18dc0dd9f2342d47d40a0f9fb8fe177327', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=960&crop=smart&auto=webp&s=99a118442fdfb239383930ed0687bdea10777474', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=1080&crop=smart&auto=webp&s=fd5d6edfba83b7dd8d2a586a152403dc93361a3a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?auto=webp&s=34cabcdda6b37ae6e3f85dd7607f10dc2b5f1ac7', 'width': 1200}, 'variants': {}}]} |
|
Configure Llama to use documents as context | 1 | Hello, I want to build a simple chatbot using llama which will take in prompts from the user, and the answers will mostly be GPT/conversational, with the model answering on its own, but also will take context from a document provided to it. Could anyone please guide me on what approach should I take to build this ? I am a beginner and I am just starting out. | 2025-06-27T07:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/1llnj32/configure_llama_to_use_documents_as_context/ | Illustrious-Pay-9632 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llnj32 | false | null | t3_1llnj32 | /r/LocalLLaMA/comments/1llnj32/configure_llama_to_use_documents_as_context/ | false | false | self | 1 | null |
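For the document-grounded chatbot question above, a minimal retrieval-augmented sketch. The sentence-transformers calls are real; `generate` is a stand-in for whatever local Llama runtime is used (llama-cpp-python, Ollama, etc.), and `document.txt` is just an example file:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Split the document into chunks and embed them once, up front.
chunks = [p for p in open("document.txt").read().split("\n\n") if p.strip()]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def answer(question: str, generate) -> str:
    # 2. Embed the question and pick the most similar chunks.
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(chunk_vecs @ q)[::-1][:3]
    context = "\n\n".join(chunks[i] for i in top)

    # 3. Prepend the retrieved context to the prompt for the local model.
    prompt = (
        "Use the context below when it is relevant; otherwise answer normally.\n\n"
        f"Context:\n{context}\n\nUser: {question}\nAssistant:"
    )
    return generate(prompt)  # hypothetical: your llama.cpp / Ollama call
```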
dyad v0.10 - open-source local alternative to lovable/v0/bolt.new with ollama/LM Studio support - now supports building mobile apps! | 73 | I’m excited to share an update to [**Dyad**](http://dyad.sh/) which is a free, local, open-source AI app builder I've been working on for 3 months after leaving Google. It's designed as an alternative to v0, Lovable, and Bolt, but it runs on your computer (it's an Electron app)!
Here’s what makes Dyad different:
* **Run ANY model (including local LLMs!)** \- Based on popular demand from [this sub-reddit](https://www.reddit.com/r/LocalLLaMA/comments/1k76ztc/i_built_a_free_local_opensource_alternative_to/), Dyad supports [local models](https://www.dyad.sh/docs/guides/ai-models/local-models) via LM Studio and ollama (I don't play favorites!), and you can also connect it to any OpenAI API-compatible model!
* **Runs locally** \- Dyad runs entirely on your computer, making it fast and frictionless. Because your code lives locally, you can easily switch back and forth between Dyad and your IDE like Cursor, etc.
* **Free** \- Dyad is free and bring-your-own API key. This means you can use your free Gemini/OpenRouter API key and build apps in Dyad for free.
Download Dyad for free: [https://dyad.sh/](https://dyad.sh/)
Dyad works on Mac, Windows, and Linux (you can download the Linux build directly from [GitHub](https://github.com/dyad-sh/dyad/releases)).
Please share any feedback - would you be interested in MCP support?
P.S. I'm also launching on Product Hunt today and would appreciate any support 🙏 [https://www.producthunt.com/products/dyad-free-local-vibe-coding-tool](https://www.producthunt.com/products/dyad-free-local-vibe-coding-tool) | 2025-06-27T07:34:10 | https://v.redd.it/t461p9dt9f9f1 | wwwillchen | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llnwna | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t461p9dt9f9f1/DASHPlaylist.mpd?a=1753601666%2CNWUyMjZmMWY3MzlhY2Y4NjNmZDA2NjRiYTc1OTQ2ZWFjYzEzNmNiOWY3OTc3ZTcwMGE1MzAxYmUzMDQxMTczMQ%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/t461p9dt9f9f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/t461p9dt9f9f1/HLSPlaylist.m3u8?a=1753601666%2CMTZjZTA5ODg0ZDNiZjEwZjdhN2EzYzRhYWMzNTQyMWU2MzU3NmI0OTA1ZDg4ODRiODkyNzMzOTEzNTNjZTFmZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/t461p9dt9f9f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1736}} | t3_1llnwna | /r/LocalLLaMA/comments/1llnwna/dyad_v010_opensource_local_alternative_to/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?width=108&crop=smart&format=pjpg&auto=webp&s=229f3f2d62cc575a6836de404b5330b5b2a440a3', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?width=216&crop=smart&format=pjpg&auto=webp&s=473b61b7fdf1928a60e8b54f706c66a4a60d9c82', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?width=320&crop=smart&format=pjpg&auto=webp&s=f7f296cc58859acbacad806e3e2d89781323b25d', 'width': 320}, {'height': 398, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?width=640&crop=smart&format=pjpg&auto=webp&s=221cda73f1f21233d0c55d733824660c8da84b14', 'width': 640}, {'height': 597, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?width=960&crop=smart&format=pjpg&auto=webp&s=b05f8684edc401fe35954165f34a775b936c9551', 'width': 960}, {'height': 671, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5cfdeb2af37c7c2fe34d8fa2efce433a84478958', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eGthenQ5ZHQ5ZjlmMQQDM_dLcTyHBC8BScL5E00e_jl5aRRWjMUA-Nu_qDSf.png?format=pjpg&auto=webp&s=719504b489033be96054aa5a3e03688116541e5c', 'width': 1736}, 'variants': {}}]} |
|
AI performance of smartphone SoCs | 130 | https://ai-benchmark.com/ranking_processors.html
A few things notable to me:
- The difference between tiers is _huge_. A 2022 Snapdragon 8 Gen 2 beats the 8s Gen 4. There are huge gaps between the Dimensity 9000, 8000 and 7000 series.
- You're better off getting a high-end SoC that’s a few years old than buying the latest mid-range one.
- In this benchmark, it’s mainly a Qualcomm and Mediatek competition. It seems optimized software libraries are immensely important in using hardware effectively.
- | 2025-06-27T07:34:42 | https://www.reddit.com/gallery/1llnwy5 | Balance- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1llnwy5 | false | null | t3_1llnwy5 | /r/LocalLLaMA/comments/1llnwy5/ai_performance_of_smartphone_socs/ | false | false | 130 | {'enabled': True, 'images': [{'id': 'H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?width=108&crop=smart&auto=webp&s=bcc5faffc546d535c28e52f3b91b5e807eacbedf', 'width': 108}, {'height': 286, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?width=216&crop=smart&auto=webp&s=7c365292ee51d1ade55fa82570f765e4cfbb3727', 'width': 216}, {'height': 423, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?width=320&crop=smart&auto=webp&s=31330bed95819714b304a81cf885d015bba3a70d', 'width': 320}, {'height': 847, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?width=640&crop=smart&auto=webp&s=baf972823fcd97a8af0b34ddd0ede97ce0d9de05', 'width': 640}, {'height': 1271, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?width=960&crop=smart&auto=webp&s=e55972d97714dcae0ab3a88f2d73a39c8bca6d7a', 'width': 960}, {'height': 1430, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?width=1080&crop=smart&auto=webp&s=1dc6eb0d2dffc2755f23f048e2b229eb2cb58eff', 'width': 1080}], 'source': {'height': 2569, 'url': 'https://external-preview.redd.it/H_9g87w3EitABPy3ZAOo2ZH9LlcpQ5L4KMiJgV1zrjo.jpeg?auto=webp&s=0b24fe0ede053ca82df0ae05ef411a8806b846e4', 'width': 1940}, 'variants': {}}]} |
|
Voice Assistants on Android | 3 | I switched to GrapheneOS from my iPhone and over the years, one thing that I have started to miss more and more, is having a wake-word capable voice assistant to do some quick things without needing to pick up my phone. This is especially useful as I am almost blind, making literally every interaction and navigation take longer as I have to read the stuff and such.
After looking at Willow and Dicio, and having watched Mycroft over a few years, I am surprised there hasn't been anything in this space in a while. Willow is concepted to work on an ESP device - dedicated hardware - and Dicio is entirely on-device.
Do you know of a wake-word capable voice assistant on Android that I could possibly link to my LLM infra for extended conversations?
I have never, ever written an app for Android - I am mainly good in Go, know my way around JS (not TS) and have a good foundation in C. But Kotlin, Java and friends are... quite different to that. So, if possible, I would love to avoid having to write my own application, if at all possible. x)
Thanks and kind regards! | 2025-06-27T07:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1llo4rc/voice_assistants_on_android/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llo4rc | false | null | t3_1llo4rc | /r/LocalLLaMA/comments/1llo4rc/voice_assistants_on_android/ | false | false | self | 3 | null |
Open-sourced Agent Gym: The framework behind mirau-agent's training data synthesis | 3 | Hey r/LocalLLaMA!
Remember my [mirau-agent posts](https://www.reddit.com/r/LocalLLaMA/comments/1legaq8/updatemy_agent_model_now_supports_openai_function/) where many of you asked about the data synthesis process and training datasets?
I've finally open-sourced the complete framework! 🎉
## What is Agent Gym?
**Agent Gym** - A dual-purpose framework that can both **evaluate/train agents** AND **synthesize high-quality training data**. This is exactly how mirau-agent's training data was created.
🔗 **GitHub**: https://github.com/woshixiaobai2019/agent-gym
## Two Core Functions:
**1. Agent Training & Evaluation**
- Test your agents across standardized environments
- Record complete interaction trajectories
- Detailed performance metrics and success rates
**2. Training Data Synthesis** (This answers your questions!)
- Use powerful models (DeepSeek) to generate training data for smaller models
- Complete multi-turn tool calling conversations
- Standard OpenAI Messages format output
## How Data Synthesis Works:
**Step 1: Prepare seed data**
```json
// Example from agent_gym/data/cmd.json
[
{
"query": "Find all Python files in the current directory and count total lines",
"expected_result": "List of .py files with total line count"
},
{
"query": "Create a backup of all .txt files in a new directory",
"expected_result": "Successfully backed up files"
}
]
```
**Step 2: Run data synthesis**
```bash
# This is exactly how mirau-agent's training data was generated!
python synthesizer/trainingDataSynthesizer.py \
--data-file agent_gym/data/cmd.json \
--deepseek-key "your-deepseek-api-key" \
--output-dir "training_data"
```
The framework uses a teacher-student approach: DeepSeek processes your seed tasks and generates high-quality reasoning traces with `<think>` tags and proper tool usage patterns, which are then formatted as training data for smaller models.
## Generated Data Format:
```json
{
"messages": [
{"role": "system", "content": "[function definitions]"},
{"role": "user", "content": "Find all Python files in current directory"},
{"role": "assistant", "content": "<think type=\"quick\">Simple file search operation</think>\n<tool_call>{\"name\": \"execute_shell\", \"arguments\": {\"command\": \"find . -name '*.py' -type f\"}}</tool_call>"},
{"role": "user", "content": "<tool_response name=\"execute_shell\">./test.py\n./main.py</tool_response>"}
]
}
```
## Built-in Environments:
- **CommandLine**: Linux commands, file operations (example: cmd.json)
- **Python**: Safe code execution sandbox (example: py.json)
- **NLP**: LLM-based dialogue scenarios (example: nlp.json)
Easy to extend with your own custom environments and seed data!
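For illustration, a custom environment could look roughly like the sketch below - this is a simplified, illustrative shape, and the method names (`get_tools`, `call_tool`) are stand-ins rather than the exact interface; check the repo for the real base class:

```python
# Simplified, illustrative sketch of a custom environment - the actual base
# class and method names in agent-gym may differ; see the repository for the API.
import sqlite3


class SQLEnvironment:
    """Seed-task environment that lets the agent run read-only SQL queries."""

    name = "sql"

    def __init__(self, db_path: str):
        self.conn = sqlite3.connect(db_path)

    def get_tools(self) -> list[dict]:
        # Tool schema in OpenAI function-calling style, matching the format
        # the built-in environments expose to the model.
        return [{
            "name": "run_sql",
            "description": "Execute a read-only SQL query and return the rows",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }]

    def call_tool(self, name: str, arguments: dict) -> str:
        if name != "run_sql":
            return f"Unknown tool: {name}"
        cursor = self.conn.execute(arguments["query"])
        return "\n".join(str(row) for row in cursor.fetchmany(50))
```

Pair it with a small `sql.json` of seed tasks (same shape as `cmd.json`) and the synthesizer can generate SQL-flavored trajectories the same way.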
## Why This Matters:
Instead of sharing static datasets, I'm sharing the **data generation pipeline**. You can:
- Start with simple seed tasks (like the examples in /data/)
- Generate unlimited training data for your specific use cases
- Customize environments for your domain
- Use different teacher models (not just DeepSeek)
- Create data in any language
This solves the "how do I get high-quality agent training data?" problem that many have been asking about.
The framework is production-tested (literally used to create mirau-agent) but I won't provide ongoing support - it's open source for the community to use and maintain.
---
**Links:**
- Framework: https://github.com/woshixiaobai2019/agent-gym
- mirau-agent model: https://huggingface.co/eliuakk/mirau-agent-base-oai
- Live demo: https://modelscope.cn/studios/mouseEliauk/mirau-agent-demo/summary | 2025-06-27T07:55:12 | https://github.com/woshixiaobai2019/agent-gym | EliaukMouse | github.com | 1970-01-01T00:00:00 | 0 | {} | 1llo7hh | false | null | t3_1llo7hh | /r/LocalLLaMA/comments/1llo7hh/opensourced_agent_gym_the_framework_behind/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?width=108&crop=smart&auto=webp&s=d285c9d22a31811ca0e563e8f1046f558eddb6e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?width=216&crop=smart&auto=webp&s=3150f9cb48195f06823a3609e29350d8b984c1ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?width=320&crop=smart&auto=webp&s=3fb266fe41f99a0138e95bcb62c14166cf793fb2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?width=640&crop=smart&auto=webp&s=7d07eda7fd3f3aeab83e064d8d81412cbc3ace7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?width=960&crop=smart&auto=webp&s=b704badd6a91879ec410e467097c5ed4700789c9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?width=1080&crop=smart&auto=webp&s=861569f5634dbe006f16d186703624d4bf4d1457', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DRa0Q03LG4VkDsQlU5OKVFsAKQlHKfHhn3gk5nchCg8.png?auto=webp&s=e52bd546e45d637b9b6906e8d2497833eefc3c68', 'width': 1200}, 'variants': {}}]} |
|
First diffusion llm announced | 0 | new dllm Inception: Mercury Looks very good in terms of speed | 2025-06-27T08:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1llogep/first_diffusion_llm_announced/ | NeuralNakama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llogep | false | null | t3_1llogep | /r/LocalLLaMA/comments/1llogep/first_diffusion_llm_announced/ | false | false | self | 0 | null |
LLM Stopping Mid-Task | 1 | I'm running Qwen3-32B using LM Studio on my local machine (RTX 4090, 64GB RAM, i9-7980XE). All the settings are at stock for the model, except I've upped the context size to 16384.
I was asking it to perform a simple but laborious task yesterday.
I gave it a simple example of a C# class and an admittedly long, 204-value CSV string of headers.
The prompt was to complete the class definition with a property for each value in the CSV string. It got the task absolutely correct in terms of structure, but no matter how I worded the prompt, it would just stop at some point and print - "*// (Continued with 150+ more properties following the same pattern...)*" ... as if to suggest I should complete the task manually ...
Erm ... how about *no*, *you do it*. That's why you're even allowed on my machine - to do the grunt work! :D
I just couldn't get it to complete the class.
At one point, it even spat out an entire implementation in C# to parse the source CSV and build the class file on disk, which, whilst interesting, wasn't remotely what I had asked it to do.
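(As an aside, the deterministic route is simple enough that it barely needs an LLM - here is a rough Python sketch of generating the property block from the header line; the header values shown are placeholders, not my real 204 columns:)

```python
# Rough sketch (placeholder headers, not the real CSV): generate the C# property
# block deterministically from the header line instead of asking the LLM.
import re

csv_headers = "Property One,Name,Order Date"  # stand-in for the real 204-value string

def to_pascal_case(header: str) -> str:
    parts = re.split(r"[^A-Za-z0-9]+", header)
    return "".join(p.capitalize() for p in parts if p)

lines = []
for header in (h.strip() for h in csv_headers.split(",")):
    lines.append(f'    [Name("{header}")]')
    lines.append(f"    public string {to_pascal_case(header)} {{ get; set; }}")

print("\n".join(lines))
```

Still, the point stands: the model is supposed to do this kind of grunt work for me.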
Any advice on how to deal with this situation would be great.
**Prompt example**
Given this C# class as a template:
    public class Record
    {
        [Name("Property One")]
        public string PropertyOne { get; set; }

        [Name("Name")]
        public string Name { get; set; }
    }
Take every CSV header value in the following string and add it into the class as a property:
CSV string | 2025-06-27T08:22:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lloljf/llm_stopping_midtask/ | VanillaCandid3466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lloljf | false | null | t3_1lloljf | /r/LocalLLaMA/comments/1lloljf/llm_stopping_midtask/ | false | false | self | 1 | null |
hiii | 1 | [removed] | 2025-06-27T09:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/1llp6oi/hiii/ | MainLettuce419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llp6oi | false | null | t3_1llp6oi | /r/LocalLLaMA/comments/1llp6oi/hiii/ | false | false | self | 1 | null |
Pair Programming with a Dunce, an AI Coding Experience | 2 | This is *my* experience. Yours could be different.
---
I use LLMs extensively to:
* extract Sanskrit text from old documents
* proofread translations from English into Sanskrit for our pedagogy project
* transcribe and translate videos from YT
* help write stories, point out spelling/grammar issues in our work
* argue about etymology and grammatical derivation of word forms etc.
They are, without reservation, **exceptionally** good at this.
My current LLM of choice for this is the Gemini 2.5 series. It is so good at these tasks that I would pay for it if the gratis version were not available.
All our work on GH is generally under CC0/PD or CC BY SA. So I don't really care if the models use the data for training.
---
The problem starts with "reasoning" about tasks.
Say, one, you want to see if it can write a parser for an s-expression based document markup language.
Or, two, do repetitive tasks like replacing a certain kind of pattern with another.
Or, three, move data from a lightly processed proof-read file into numbered files by looking at the established pattern.
Here, my experience (of two days with gemini-cli) has been terrible. Tasks two and three work after a couple of false starts. The LLM starts with regular expressions ("now you have two problems"), fails, and then falls back to writing a boring python script.
But the parser. My God!!
I already have a functional (in the sense of working) one that I wrote myself. But it is part of a codebase that has become incredibly messy over time with too many unrelated things in the same project.
So I decided to start a fresh test project to see if Gemini is up to the task.
---
**The first problem**
I use jj (jujutsu) on a colocated git repo for version control. gemini-cli immediately started peeking into the dot folders, referring to files that have nothing to do with the task at hand till I told it to stop its voyeurism.
I asked it to create a bare-bones uv-based python project with a "Hello, World!" app.py file. Let's say that it "managed" to do it.
But it forgot about uv the next session and decided that pytest etc must be run directly.
**The second problem**
Here is a sample document that it must parse:
(document @uuid CCprPLYlMmdt9jjIdFP2O
(meta
(copyright CC0/PD. No rights reserved)
(source @url "https://standardebooks.org/ebooks/oscar-wilde/childrens-stories" Standard Ebooks)
(title @b "Children’s Stories" The Selfish Giant)
(author Oscar Wilde)
)
(matter
(p Every afternoon, as they were coming from school, the children used to go and play in the Giant’s garden.)
(p It was a large lovely garden, with soft green grass. Here and there over the grass stood beautiful flowers like stars, and there were twelve peach-trees that in the springtime broke out into delicate blossoms of pink and pearl, and in the autumn bore rich fruit. The birds sat on the trees and sang so sweetly that the children used to stop their games in order to listen to them. (" How happy we are here!) they cried to each other.)
(p One day the Giant came back. He had been to visit his friend the Cornish ogre, and had stayed with him for seven years. After the seven years were over he had said all that he had to say, for his conversation was limited, and he determined to return to his own castle. When he arrived he saw the children playing in the garden.)
(p (" What are you doing here?) he cried in a very gruff voice, and the children ran away.)
(p (" My own garden is my own garden,) said the Giant; (" anyone can understand that, and I will allow nobody to play in it but myself.) So he built a high wall all round it, and put up a noticeboard.)
(bq
(p Trespassers(lb)Will Be(lb)Prosecuted)
)
(p He was a very selfish Giant.)
(p ...)
)
)
I told it what I wanted:
* The "s-expr" nature of the markup
* My preference for functional code, with OOP exceptions for things like the CharacterStream/TokenStream etc.
It immediately made assumptions based on what it knew, which I had to demolish one by one.
It did other stupid stuff like sprinkling magic numbers/strings all over the place, using tuples/dicts in lieu of data classes and giving me inscrutable code like `tokens[0][1] ==` instead of `tokens[0].type ==`.
It struggled to understand the `[^ ()@]+` and `[a-z][a-z0-9-]*` requirements for the node id and attribute id. It argued for a while about TOKEN_STRING and TOKEN_ATOM. It was then that I realized that it had built a standard lexer. I told it to rethink its approach and it argued about why scannerless parsers (which is exactly what SXML needs) are a bad idea.
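(To make that concrete: this is my own toy sketch of the scannerless idea, not code from either parser - identifiers are matched straight off the character stream against those two patterns, with no separate tokenizer pass.)

```python
# Toy sketch of the scannerless approach: consume identifiers directly from the
# input text using the two patterns mentioned above; no token stream in between.
import re

ID_PATTERNS = {
    "loose": re.compile(r"[^ ()@]+"),          # anything except space, parens, @
    "strict": re.compile(r"[a-z][a-z0-9-]*"),  # lowercase word, digits and hyphens allowed
}

def read_identifier(text: str, pos: int, kind: str) -> tuple[str, int]:
    """Consume an identifier starting at pos; return (identifier, new position)."""
    m = ID_PATTERNS[kind].match(text, pos)
    if not m:
        raise SyntaxError(f"expected {kind} identifier at offset {pos}")
    return m.group(), m.end()

# read_identifier("(p Every afternoon...)", 1, "strict") -> ("p", 2)
```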
The CLI managed to consume the entire quota of 1,000 requests in a couple of hours and then, instead of telling me that I was done for the day, started printing random/sarcastic messages about petting cats or something. When I told it to stop with the sarcasm, it doubled down on it. I guess people enjoy dealing with this when they are problem-solving. Eventually I figured out that the quota was done.
My mental map for this was: one prompt = one request. Which tracks with what I experience using the web client.
Well, 2,000 lines of garbage and it produced nothing that was useful. In contrast, my hand-crafted, fully functional scannerless parser (with a tidy/prettifier implemented as an `unparse` function) is about 600 lines.
**The third problem**
The next day, when I started a new session and asked it to explain its conceptual understanding of acceptable patterns for node ids and attribute ids, it didn't have a clue about what I was talking about. I had to point it to the relevant file.
Then it started talking about `@.pycache....nodeid 5` or something. Which I never gave it as input. My input was `(doc @id 5 ...)` And did I not tell it to stop peeking into dot folders? Nooooooo, it said. It was *I* who gave it this input. I nearly lost my mind.
When I asked it about accessing the info from the previous conversations, it couldn't. Guess I compressed the context. Or it did. Because `/chat list` has never provided useful output for me.
Finally, I had to write a `NOTES.md` file and put all the information in it and have it read the file. It was then that it started to understand it, but between the inability to "remember" stuff and the general lack of "perception," I got bored and parked the project to one side.
---
When people claim to successfully use AI for coding, I wonder WTF they are doing.
My experience has been fairly terrible to say the least. I would be more willing to try it if the feedback loop was quicker. But if the AI uses up wallclock time (**my** time) of 50 minutes with nothing to show for it, I have my doubts.
I will continue to use AI in the areas where it is strong. But someone needs to convince me that using it for coding is well worth the time investment. | 2025-06-27T09:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/1llpu8k/pair_programming_with_a_dunce_an_ai_coding/ | s-i-e-v-e | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llpu8k | false | null | t3_1llpu8k | /r/LocalLLaMA/comments/1llpu8k/pair_programming_with_a_dunce_an_ai_coding/ | false | false | self | 2 | null |
Could we combine Nvidia with Apple Silicon? | 0 | The Apple Silicon Macs are well known for their fast text generation and for having plenty of memory to load large models. They are also known for slow prompt processing. Could we offload the prompt processing to a Linux server with an Nvidia GPU?
The idea is that the GPU would not have enough memory to load the entire model; otherwise there would be no point to this. It is my understanding that for prompt processing you could load just a single layer and run the entire context through it before switching to the next layer. The GPU would only need memory for the context, the KV cache, the activations, and one layer. Once you have run through all the layers a single time, you would transfer the results to the Mac and do the text generation there.
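(A toy illustration of that layer-at-a-time prefill, using numpy matrices as stand-ins for real transformer layers - purely to show the memory pattern, not tied to any existing inference engine:)

```python
# Toy illustration only: prefill the whole prompt one "layer" at a time so that
# just a single layer ever needs to be resident in GPU memory.
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_model, n_tokens = 4, 64, 128

layers_on_disk = [rng.standard_normal((d_model, d_model)) for _ in range(n_layers)]
hidden = rng.standard_normal((n_tokens, d_model))   # embedded prompt, full context

kv_cache = []
for weights in layers_on_disk:           # pretend: copy this one layer into VRAM
    hidden = np.tanh(hidden @ weights)   # run the entire context through the layer
    kv_cache.append(hidden.copy())       # stand-in for that layer's KV entries
    # the layer would be freed here before the next one is loaded

print(len(kv_cache), kv_cache[0].shape)  # this cache is what gets shipped to the Mac
```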
Has anything like this been done? Is it a crazy idea? | 2025-06-27T09:54:02 | https://www.reddit.com/r/LocalLLaMA/comments/1llpxbb/could_we_combine_nvidia_with_apple_silicon/ | Baldur-Norddahl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llpxbb | false | null | t3_1llpxbb | /r/LocalLLaMA/comments/1llpxbb/could_we_combine_nvidia_with_apple_silicon/ | false | false | self | 0 | null |
List of LLM to run on a 8745HS with 64GB 5600mhz | 4 | Hello, I'm going to receive my new mini PC server today, and I would like some advice on which LLM to use.
The mini PC is the Beelink SER8, with 64GB of RAM (2x32GB 5600MHz) and a Ryzen 7 8745HS.
My workflow involves basic assistant tasks with a lot of RAG (Retrieval-Augmented Generation), tool calling, and long-context conversations (at least 32K tokens). In the future, I also plan to integrate some MCP (Model Context Protocol) features.
I’d like to know which LLMs I can run at decent speeds that would help with my development workflow (I’m using Kilo Code with OpenRouter). Is there a model that could run well locally and support development use cases?
What are some great LLMs I could run efficiently on this machine for my workflow, and at what quantization and context window size?
What VRAM offloading settings do you recommend for each LLM?
Also, is there an inference software that works especially well with this specific hardware? | 2025-06-27T10:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1llq2os/list_of_llm_to_run_on_a_8745hs_with_64gb_5600mhz/ | Whiplashorus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llq2os | false | null | t3_1llq2os | /r/LocalLLaMA/comments/1llq2os/list_of_llm_to_run_on_a_8745hs_with_64gb_5600mhz/ | false | false | self | 4 | null |
The more LLMs think, the worse they translate | 131 | 2025-06-27T10:41:40 | https://nuenki.app/blog/the_more_llms_think_the_worse_they_translate | Nuenki | nuenki.app | 1970-01-01T00:00:00 | 0 | {} | 1llqp0a | false | null | t3_1llqp0a | /r/LocalLLaMA/comments/1llqp0a/the_more_llms_think_the_worse_they_translate/ | false | false | default | 131 | {'enabled': False, 'images': [{'id': 'sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?width=108&crop=smart&auto=webp&s=2d0e312ff46b334fd90a3a7e76ccc030f2a17c7c', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?width=216&crop=smart&auto=webp&s=66560eb22bd2f2916f1afd88638271e1b8b96ca5', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?width=320&crop=smart&auto=webp&s=d30cb01c71b0a5ebe51ccda76bb7ee6c9b91b025', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?width=640&crop=smart&auto=webp&s=c116c7e6295d776b6382e425434256d0d8559943', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?width=960&crop=smart&auto=webp&s=06e53490e36528e0977a21e03221cf462a99f9a9', 'width': 960}, {'height': 593, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?width=1080&crop=smart&auto=webp&s=4e0749d24e4d075d2a9e19313c35d51775ca4918', 'width': 1080}], 'source': {'height': 2096, 'url': 'https://external-preview.redd.it/sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ.jpeg?auto=webp&s=8a28c599f7c03d7569c67da276ce2b976ab8d771', 'width': 3811}, 'variants': {}}]} |
|
How to fine tuning with scrapping and locally | 1 | Hello everyone! I've read quite a few posts here, and I'm looking to learn how to fine-tune a model (Mistral or Llama) on HTML content scraped from blogs that I select (through the sitemap).
I'd like to fine-tune so the model writes better-quality blog articles, based on human-written essays that perform well. However, I don't see how to build my dataset from this data, nor how many articles I need to retrieve to get a good result.
PS: I'd like to do it locally. I have a 5090 and a Ryzen 7 9800X3D.
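From what I've gathered so far, the dataset step would roughly be: pull article URLs from the sitemap, extract the text, and write instruction/output pairs to a JSONL file for the fine-tuning script. Here is my rough sketch - the sitemap URL and the CSS selector are placeholders, and I'm not sure this is the right way to structure the pairs:

```python
# Rough sketch, not a finished pipeline: sitemap -> article text -> JSONL pairs.
# The sitemap URL and the "article p" selector are placeholders.
import json
import requests
from bs4 import BeautifulSoup
from xml.etree import ElementTree

LOC_TAG = "{http://www.sitemaps.org/schemas/sitemap/0.9}loc"
sitemap_url = "https://example-blog.com/sitemap.xml"  # placeholder

root = ElementTree.fromstring(requests.get(sitemap_url).content)
urls = [loc.text for loc in root.iter(LOC_TAG)]

with open("blog_dataset.jsonl", "w", encoding="utf-8") as f:
    for url in urls:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        title = soup.title.get_text(strip=True) if soup.title else url
        body = "\n".join(p.get_text(" ", strip=True) for p in soup.select("article p"))
        if len(body) < 500:  # skip thin or boilerplate pages
            continue
        f.write(json.dumps({
            "instruction": f"Write a blog article titled: {title}",
            "output": body,
        }, ensure_ascii=False) + "\n")
```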
Thanks in advance!
| 2025-06-27T10:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/1llqsj9/how_to_fine_tuning_with_scrapping_and_locally/ | JoflixPlex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llqsj9 | false | null | t3_1llqsj9 | /r/LocalLLaMA/comments/1llqsj9/how_to_fine_tuning_with_scrapping_and_locally/ | false | false | self | 1 | null |
help me understand RAG more | 1 | So far, all I know is to put the documents in a list, split them using LangChain, and then embed them with OpenAI embeddings. I store them in Chroma, create the memory, retriever, and LLM, and then start the conversation. What I wanted to know:
1- is rag or embedding only good with text and md files, cant it work with unstructured and structured data like images and csv files, how can we do it? | 2025-06-27T11:06:39 | https://www.reddit.com/r/LocalLLaMA/comments/1llr41u/help_me_understand_rag_more/ | Beyond_Birthday_13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llr41u | false | null | t3_1llr41u | /r/LocalLLaMA/comments/1llr41u/help_me_understand_rag_more/ | false | false | self | 1 | null |
What If We Abliterate the Reasoning Process of Models? | 0 | I unfortunately don't know the technical details of this, but I've been thinking. What if we take a reasoning model like DeepSeek's R1 distilled LLaMA 8B for testing, and like people do abliteration to uncensor a model, instead abliterate the reasoning process, so when asked a question, the model will generate the output without thinking BUT assumes that it finished thinking. And then compare the results for math, code, etc. to the original distilled model and see if thinking is really necessary or since the model was already trained on the reasoning traces and answers for these questions anyway, if the model thinks it finished its reasoning and produced an output instead of simply disabling its thinking, the answer is always similar to the OG model? What do you guys think? I couldn't find any research on doing this, and am not sure if this is even possible. | 2025-06-27T11:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1llrgcy/what_if_we_abliterate_the_reasoning_process_of/ | DistractedSentient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1llrgcy | false | null | t3_1llrgcy | /r/LocalLLaMA/comments/1llrgcy/what_if_we_abliterate_the_reasoning_process_of/ | false | false | self | 0 | null |
Meta planning to develop closed source models like Anthropic and openAI - NYT | 0 | 2025-06-27T11:55:08 | JP_525 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1llrywd | false | null | t3_1llrywd | /r/LocalLLaMA/comments/1llrywd/meta_planning_to_develop_closed_source_models/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '08a1h3o8lg9f1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=108&crop=smart&auto=webp&s=e031fc808f46c33170ecdd69d6976be431cc3b91', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=216&crop=smart&auto=webp&s=38ed083090b58a6ced89ffdd0a54c7d94b896282', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=320&crop=smart&auto=webp&s=25c99768446dcfb17e54c3fac81dea27c58ff419', 'width': 320}, {'height': 274, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=640&crop=smart&auto=webp&s=03d1d219df5345aeb706d5fab1b055041c72a76f', 'width': 640}, {'height': 411, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=960&crop=smart&auto=webp&s=adf52d814227fb5d3f8263ba4a759a4ab838caee', 'width': 960}, {'height': 463, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?width=1080&crop=smart&auto=webp&s=b760032ce2e002821830194015609802c1702364', 'width': 1080}], 'source': {'height': 854, 'url': 'https://preview.redd.it/08a1h3o8lg9f1.jpeg?auto=webp&s=ce7843179a166a09278e81aafc27231b544d869a', 'width': 1990}, 'variants': {}}]} |