title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Seeking advice on face verification for Indian dating app - fine-tuning vs. alternative approaches | 1 | [removed] | 2025-02-05T20:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/1iijgf8/seeking_advice_on_face_verification_for_indian/ | Ok-Succotash-7945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iijgf8 | false | null | t3_1iijgf8 | /r/LocalLLaMA/comments/1iijgf8/seeking_advice_on_face_verification_for_indian/ | false | false | self | 1 | null |
Downloading DeepSeek Models | 2 | I've never dealt with local LLMs before and will probably end up buying another computer/server just to manage DeepSeek down the line, but for now I just want to download the models since I'm unsure whether I will be able to in the future.
**Questions**
1. How much space would I need to download all the models?
2. Is there any reason to download all of them, or should I just download the largest model?
3. Anything else I should consider since I've never dealt with local LLMs before? (Anything at all) | 2025-02-05T20:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/1iijsoj/downloading_deepseek_models/ | focusedgrowth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iijsoj | false | null | t3_1iijsoj | /r/LocalLLaMA/comments/1iijsoj/downloading_deepseek_models/ | false | false | self | 2 | null |
Train your own reasoning model in 30 minutes with Deepseek R1 and Kiln AI | 122 |
I've just released an update of Kiln on GitHub that allows you to distill a custom fine-tuned model from Deepseek R1 (or any reasoning model/chain-of-thought). The whole process only takes about 30 minutes, including generating a synthetic training dataset. It doesn't require any coding or command-line work.
* The attached video shows the process
* Our docs have [a guide for distilling R1](https://docs.getkiln.ai/docs/guide-train-a-reasoning-model) if you want to try it out yourself
* Here's the [Github repo](https://github.com/Kiln-AI/Kiln) with all of the source code
I also wanted to add a huge thanks to r/localllama for the awesome reception to my [last post](https://www.reddit.com/r/LocalLLaMA/comments/1i1ffid/i_accidentally_built_an_open_alternative_to/). It really inspires me to keep building. I've already made about 30 improvements and built feature requests that came from people who found it via r/localllama.
Kiln runs locally and we never have access to your dataset. Unsloth is fully supported if you have the GPUs to train locally. You can also use a training service like Fireworks & OpenAI if you prefer (data is sent to them with your keys, we still never have access to it).
If anyone wants to try Kiln, here's the [GitHub repository](https://github.com/Kiln-AI/Kiln) and here are the [docs](https://github.com/Kiln-AI/Kiln). Getting started is super easy - it's a one-click install to get set up and running.
I'm curious to get any feedback/ideas. It really helps me improve Kiln. Thanks!
[Kiln AI demo - distilling Deepseek R1](https://reddit.com/link/1iik4y9/video/1vnufrecrdhe1/player)
| 2025-02-05T20:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1iik4y9/train_your_own_reasoning_model_in_30_minutes_with/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iik4y9 | false | null | t3_1iik4y9 | /r/LocalLLaMA/comments/1iik4y9/train_your_own_reasoning_model_in_30_minutes_with/ | false | false | 122 | {'enabled': False, 'images': [{'id': 'oZhQvB0lZ0Mldj2yTMzbyX2kEWAs7qihuKUzv00kef4', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/fkk_hfuiSuMOZjLy_dEtjSiqJMOwZz9w_oAKY_5Q2Nk.jpg?width=108&crop=smart&auto=webp&s=51b97834d5407887f712dd6d69328108293cc254', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/fkk_hfuiSuMOZjLy_dEtjSiqJMOwZz9w_oAKY_5Q2Nk.jpg?width=216&crop=smart&auto=webp&s=3930ea13bf610c1c1922158364231cb232ec90a1', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/fkk_hfuiSuMOZjLy_dEtjSiqJMOwZz9w_oAKY_5Q2Nk.jpg?width=320&crop=smart&auto=webp&s=ba55a1cb5eb7e4cdd83ce801cb511341be788b92', 'width': 320}, {'height': 315, 'url': 'https://external-preview.redd.it/fkk_hfuiSuMOZjLy_dEtjSiqJMOwZz9w_oAKY_5Q2Nk.jpg?width=640&crop=smart&auto=webp&s=a3dadc03291c7ac04f201561f33b9b740f85a835', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/fkk_hfuiSuMOZjLy_dEtjSiqJMOwZz9w_oAKY_5Q2Nk.jpg?width=960&crop=smart&auto=webp&s=62ec58eab697f4b5bf61c0bd2d46268e1d89c3a2', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/fkk_hfuiSuMOZjLy_dEtjSiqJMOwZz9w_oAKY_5Q2Nk.jpg?width=1080&crop=smart&auto=webp&s=a0fdd79620739c557785bb26dc48a72ddb40f0a5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/fkk_hfuiSuMOZjLy_dEtjSiqJMOwZz9w_oAKY_5Q2Nk.jpg?auto=webp&s=096bfaa4796b3ffe2859bb3509f3db997df5e6b5', 'width': 1280}, 'variants': {}}]} |
I found a way to speed up CPU based LLM inference using a HNSW index on the output embeddings | 145 | To get the next token from an LLM, we compute the probabilities for each individual token in the LLM's vocabulary by multiplying the last hidden state with the output embedding matrix. This matrix is massive, accounting for up to 20% of the total parameters in small multilingual LLMs.
When sampling the next token with top-k sampling, we're only sampling from the 40 most probable tokens out of 128,256 (for Llama 3.2 models). By using an HNSW vector index, we can retrieve these 40 most probable tokens directly through an approximate nearest neighbor search over the output embeddings, avoiding the full matrix multiplication with the output embeddings.
This reduces memory accesses and computation, resulting in up to 28% faster CPU-based inference for Llama 3.2 1B on mid-range laptops.
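To make the idea concrete, here is a minimal sketch of the retrieval step using the `hnswlib` Python package. This is not the author's implementation (that lives in the llama.cpp fork linked at the end of the post); the random weights, the index parameters, and k=40 are placeholders for illustration:

```python
import numpy as np
import hnswlib

# Placeholders: in practice W is the model's output embedding matrix and
# h is the last hidden state for the current position (Llama 3.2 1B sizes shown).
vocab_size, hidden_dim = 128256, 2048
W = np.random.randn(vocab_size, hidden_dim).astype(np.float32)

# Build an HNSW index over the output embeddings with inner-product "distance",
# since the logit for token i is just dot(W[i], h). Build is slow at this size.
index = hnswlib.Index(space="ip", dim=hidden_dim)
index.init_index(max_elements=vocab_size, ef_construction=200, M=16)
index.add_items(W, np.arange(vocab_size))
index.set_ef(64)  # search-time quality/speed trade-off

def sample_top_k(h: np.ndarray, k: int = 40) -> int:
    """Top-k sampling without multiplying h against all 128,256 embedding rows."""
    ids, _ = index.knn_query(h, k=k)   # approximate nearest-neighbor search
    ids = ids[0]
    logits = W[ids] @ h                # exact logits for the k candidates only
    probs = np.exp(logits - logits.max())
    return int(np.random.choice(ids, p=probs / probs.sum()))

next_token = sample_top_k(np.random.randn(hidden_dim).astype(np.float32))
```

Note that the search is approximate, so the retrieved set can occasionally miss one of the true top-40 tokens; that is the accuracy/speed trade-off the benchmarks below measure.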
### For more details, read the full blog post on [martinloretz.com/blog/vector-index-cpu/](https://martinloretz.com/blog/vector-index-cpu/)
## Benchmarks
`llama-bench` for Llama 3.2 1B F16 (Ubuntu = Intel® Core™ i7-10750H x 12, 2 x 16GiB DDR4 2933 MHz, MacBook = MacBook Pro 16" M4 Pro, vec = vector index, MM = matrix multiplication (reference)):
| machine | threads | test | Vec t/s | MM t/s | Speedup |
| :------ | ------: | ----: | -----------: | -----------: | -------: |
| Ubuntu | 1 | tg256 | 5.99 ± 0.05 | 4.73 ± 0.04 | **1.27** |
| Ubuntu | 6 | tg256 | 12.51 ± 0.30 | 9.72 ± 0.13 | **1.29** |
| MacBook | 1 | tg256 | 23.56 ± 0.24 | 20.11 ± 0.44 | **1.17** |
| MacBook | 10 | tg256 | 12.52 ± 0.31 | 11.80 ± 0.18 | **1.06** |
Llama 3.2 1B was selected for these benchmarks because of its relatively large embedding matrix (21% of all parameters). Full-model speedups for larger models are lower because less time is spent computing the output embeddings.
**To replicate these benchmarks, check out the code in the [fork of llama.cpp](https://github.com/martinloretzzz/llama.cpp).** Installation instructions are in the README. | 2025-02-05T20:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1iikhg3/i_found_a_way_to_speed_up_cpu_based_llm_inference/ | martinloretz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iikhg3 | false | null | t3_1iikhg3 | /r/LocalLLaMA/comments/1iikhg3/i_found_a_way_to_speed_up_cpu_based_llm_inference/ | false | false | self | 145 | null |
I have a question about deepseek getting ban. | 1 | [removed] | 2025-02-05T21:06:43 | https://www.reddit.com/r/LocalLLaMA/comments/1iikvnb/i_have_a_question_about_deepseek_getting_ban/ | Key_Ambassador3922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iikvnb | false | null | t3_1iikvnb | /r/LocalLLaMA/comments/1iikvnb/i_have_a_question_about_deepseek_getting_ban/ | false | false | self | 1 | null |
Which small local models are worth using even if you typically rely on closed platforms? Examples: Kokoro, maybe Phi-14 or Gemma-9b | 3 | If you don't care about censorship, privacy, or cost, you probably use SOTA closed models. Even so, some small models have interesting properties or just work so well that you might still want to run them locally.
What small models are still useful regardless of expensive APIs and such?
A small local spell and grammar checker, running off maybe Gemma-9b. It's just fast and convenient. Maybe a translator.
I haven't used it, but it seems like Phi-14 is extremely well-behaved from benchmarks. I could imagine using it for local housekeeping tasks. Sorting and organizing text files or data sets for archival, generating indices. General utility.
TTS and other speech tools can still be a bit cumbersome online. Something fast like Kokoro to run locally might make sense. Also, replicating something like Aqua Voice at home, since it's not that polished or customizable.
Online calls sometimes have server delays, weird timeouts, connection issues. For some purposes I could see it being more reliable and snappy locally assuming you have a good GPU.
What other use cases and models are good to run locally regardless of SOTA models? | 2025-02-05T21:12:01 | https://www.reddit.com/r/LocalLLaMA/comments/1iil0fi/which_small_local_models_are_worth_using_even_if/ | redditisunproductive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iil0fi | false | null | t3_1iil0fi | /r/LocalLLaMA/comments/1iil0fi/which_small_local_models_are_worth_using_even_if/ | false | false | self | 3 | null |
Any way to run DeepSeek R1 (not a distill) smoothly on an M2 Ultra with 128GB RAM? | 1 | [removed] | 2025-02-05T21:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/1iil1wb/any_way_to_run_deepseek_r1_not_a_distill_smoothly/ | Spirited-Lunch1027 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iil1wb | false | null | t3_1iil1wb | /r/LocalLLaMA/comments/1iil1wb/any_way_to_run_deepseek_r1_not_a_distill_smoothly/ | false | false | self | 1 | null |
SandboxAI (OSS): Run AI generated code in containers | 8 | SandboxAI is an open source runtime for executing AI-generated Python code and shell commands in isolated containers.
GitHub repo: https://github.com/substratusai/sandboxai
We created SandboxAI because we wanted to run AI-generated code on our laptops without relying on a third-party service. We also wanted something that would scale when we were ready to push to production. That's why we support Docker for local execution and will soon be adding support for Kubernetes.
Quickstart (local using Docker):
1. Install the Python SDK `pip install sandboxai-client`
2. Launch a sandbox and run code
```python
from sandboxai import Sandbox

# Each sandbox is an isolated container; embedded=True starts it locally (the Docker quickstart path).
with Sandbox(embedded=True) as box:
    print(box.run_ipython_cell("print('hi')").output)  # run Python inside the sandbox
    print(box.run_shell_command("ls /").output)        # run a shell command inside the sandbox
```
It works with existing AI agent frameworks such as CrewAI (see [example](https://github.com/substratusai/sandboxai/blob/main/python/sandboxai/experimental/crewai.py)).
The project is brand new and we are looking for feedback on what else you would like to see added or changed.
If you like what you're seeing, show your interest by [adding a star](https://github.com/substratusai/sandboxai).
| 2025-02-05T21:30:09 | https://www.reddit.com/r/LocalLLaMA/comments/1iilgf2/sandboxai_oss_run_ai_generated_code_in_containers/ | nstogner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iilgf2 | false | null | t3_1iilgf2 | /r/LocalLLaMA/comments/1iilgf2/sandboxai_oss_run_ai_generated_code_in_containers/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'Fn1-HNAYSn0E4aBjp4dBrgKWNWkNzDbXUvE6BpyEHis', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E60ahIMvdwto0OduAvkeplKRrAuIAZROXqiG57odsyA.jpg?width=108&crop=smart&auto=webp&s=8bc3ba1281889f8f5b390644990b7dfa91e22f2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E60ahIMvdwto0OduAvkeplKRrAuIAZROXqiG57odsyA.jpg?width=216&crop=smart&auto=webp&s=0e4d3144529d650877015d484f731c28e288c765', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E60ahIMvdwto0OduAvkeplKRrAuIAZROXqiG57odsyA.jpg?width=320&crop=smart&auto=webp&s=1e41b20d10afbbfbeb0cefb0c654ab4f224c7de9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E60ahIMvdwto0OduAvkeplKRrAuIAZROXqiG57odsyA.jpg?width=640&crop=smart&auto=webp&s=010f524207ffa35462e4737bcdca8c5f477c44be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E60ahIMvdwto0OduAvkeplKRrAuIAZROXqiG57odsyA.jpg?width=960&crop=smart&auto=webp&s=50a8ec4a11fe39c534c608bfb29c1dede049ee19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E60ahIMvdwto0OduAvkeplKRrAuIAZROXqiG57odsyA.jpg?width=1080&crop=smart&auto=webp&s=cec2ed4cdc4b91b35530bf57e8d8490bbf651b10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E60ahIMvdwto0OduAvkeplKRrAuIAZROXqiG57odsyA.jpg?auto=webp&s=fc6bd19a5192139219d55cc1cf135fbb038c4ea4', 'width': 1200}, 'variants': {}}]} |
What STT server can be run locally, fully offline, and adheres to the OpenAI Whisper API? | 9 | I'm looking for an STT server to hook into a local Open WebUI instance. Is there a canonical one that most people use?
* [whisper.cpp](https://github.com/ggerganov/whisper.cpp) is not compatible with the OpenAI API
* [faster-whisper](https://github.com/SYSTRAN/faster-whisper) is not compatible with the OpenAI API
* [faster-whisper-server](https://github.com/fedirz/faster-whisper-server) (now [speaches](https://github.com/speaches-ai/speaches?tab=readme-ov-file)) does [not work](https://github.com/speaches-ai/speaches/issues/306) offline
* [openedai-speech](https://github.com/matatonic/openedai-speech) is deprecated
The Open WebUI "local" option uses `faster-whisper` under the hood, so an Open AI compatible API is needed, and none of the Open AI compatible servers work offline. | 2025-02-05T21:31:47 | https://www.reddit.com/r/LocalLLaMA/comments/1iilhul/what_stt_server_can_be_run_locally_fully_offline/ | nonredditaccount | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iilhul | false | null | t3_1iilhul | /r/LocalLLaMA/comments/1iilhul/what_stt_server_can_be_run_locally_fully_offline/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=108&crop=smart&auto=webp&s=9f1a3c72bb85d28ca748578929e813c616ca047f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=216&crop=smart&auto=webp&s=d210c9e07ab2c76fd5db5866582e8d00dc69c210', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=320&crop=smart&auto=webp&s=5975f428f5ed1a6878c876d7a851448ccc82dec1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=640&crop=smart&auto=webp&s=ae5685e95d73e7f40e3ed12ad1d509c1c9bf2ff1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=960&crop=smart&auto=webp&s=30d3a941411a1d510ae4b967b3a13bf5bac8d020', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=1080&crop=smart&auto=webp&s=bb5888f4152853cf96cf29bc16492fa2f95a660b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?auto=webp&s=35f02b760b3d2d35fd8ab6c0ac7ca9e7239c34f1', 'width': 1280}, 'variants': {}}]} |
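For anyone evaluating candidates, here is a minimal sketch of the request shape an OpenAI-compatible server has to accept; the base URL, port, and model name are assumptions, and the API key is a dummy since the server is local:

```python
from openai import OpenAI

# Point the official OpenAI client at the local server's compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

with open("sample.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # the accepted model name depends on the server
        file=audio,
    )
print(transcript.text)
```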
Omar Sanseviero - Machine Learning Engineer at Google - Hey r/LocalLLaMA 👋 We're cooking 🫡 Gemma going brrr | 1 | 2025-02-05T21:33:49 | https://x.com/osanseviero/status/1887247587776069957 | gothic3020 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1iiljkd | false | null | t3_1iiljkd | /r/LocalLLaMA/comments/1iiljkd/omar_sanseviero_machine_learning_engineer_at/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'y7SAaFjYB3fp9FWNy9BHih_4Pj1R8s18_drpFJpal84', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=108&crop=smart&auto=webp&s=14d33bd661162155d0c92800c4180c3da1a0ffc4', 'width': 108}, {'height': 186, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=216&crop=smart&auto=webp&s=f11cbd85986168c8d533b252c33e9d7f6284d25f', 'width': 216}, {'height': 276, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=320&crop=smart&auto=webp&s=a5cff1d444722e414c968e154d13476f808e9617', 'width': 320}, {'height': 553, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=640&crop=smart&auto=webp&s=c41e6d0d64c5e52d6406d9875886d6158d162266', 'width': 640}, {'height': 830, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=960&crop=smart&auto=webp&s=37b6ab78d2c70460cad9d2f92aed102e197b9aa1', 'width': 960}, {'height': 934, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=1080&crop=smart&auto=webp&s=29469bb4a4c4b2b7a0a618e5514da852659f203c', 'width': 1080}], 'source': {'height': 1142, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?auto=webp&s=e56d939b33f52e244264188492437e52b0d26438', 'width': 1320}, 'variants': {}}]} |
New Gemma model incoming | 1 | 2025-02-05T21:35:44 | https://x.com/osanseviero/status/1887247587776069957 | gothic3020 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1iill6w | false | null | t3_1iill6w | /r/LocalLLaMA/comments/1iill6w/new_gemma_model_incoming/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'y7SAaFjYB3fp9FWNy9BHih_4Pj1R8s18_drpFJpal84', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=108&crop=smart&auto=webp&s=14d33bd661162155d0c92800c4180c3da1a0ffc4', 'width': 108}, {'height': 186, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=216&crop=smart&auto=webp&s=f11cbd85986168c8d533b252c33e9d7f6284d25f', 'width': 216}, {'height': 276, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=320&crop=smart&auto=webp&s=a5cff1d444722e414c968e154d13476f808e9617', 'width': 320}, {'height': 553, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=640&crop=smart&auto=webp&s=c41e6d0d64c5e52d6406d9875886d6158d162266', 'width': 640}, {'height': 830, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=960&crop=smart&auto=webp&s=37b6ab78d2c70460cad9d2f92aed102e197b9aa1', 'width': 960}, {'height': 934, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?width=1080&crop=smart&auto=webp&s=29469bb4a4c4b2b7a0a618e5514da852659f203c', 'width': 1080}], 'source': {'height': 1142, 'url': 'https://external-preview.redd.it/ZVtTtgD7wKASmdRTEXnZgCJC48wXob7R_sK0_GfJH8Y.jpg?auto=webp&s=e56d939b33f52e244264188492437e52b0d26438', 'width': 1320}, 'variants': {}}]} |
What's the current best in the ~100B range? | 4 | I have three 3090s, and my go-to has been Mistral Large, which I can just manage to fit in VRAM. I tend to use it for story writing, and it's pretty good, but a little dry. Is Mistral Large still the top dog, or has it been surpassed? All the news about Deepseek is exciting, but I'd like something I can actually run at a reasonable speed. I tried the distilled Deepseek Llama 70b but was sorely disappointed. | 2025-02-05T21:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1iillro/whats_the_current_best_in_the_100b_range/ | Motrevock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iillro | false | null | t3_1iillro | /r/LocalLLaMA/comments/1iillro/whats_the_current_best_in_the_100b_range/ | false | false | self | 4 | null |
Gemma 3 on the way! | 937 | https://x.com/osanseviero/status/1887247587776069957?t=xQ9khq5p-lBM-D2ntK7ZJw&s=19 | 2025-02-05T21:43:33 | ApprehensiveAd3629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iilrym | false | null | t3_1iilrym | /r/LocalLLaMA/comments/1iilrym/gemma_3_on_the_way/ | false | false | 937 | {'enabled': True, 'images': [{'id': 'Ejr7GxJd4uo8cehM3UUWaZJ2qACRQgWQ1Ny7uWJpk0g', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/q2q4555s4ehe1.jpeg?width=108&crop=smart&auto=webp&s=f31e69d5abf7022327330ab0035d9164e900e538', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/q2q4555s4ehe1.jpeg?width=216&crop=smart&auto=webp&s=5c54b8de224907db4225cb2e194b3561a067332e', 'width': 216}, {'height': 427, 'url': 'https://preview.redd.it/q2q4555s4ehe1.jpeg?width=320&crop=smart&auto=webp&s=f2802c2eaad2c1d3ea8fce16a1ac66b8d52b385a', 'width': 320}, {'height': 854, 'url': 'https://preview.redd.it/q2q4555s4ehe1.jpeg?width=640&crop=smart&auto=webp&s=0be6c986a18108bcff251eb781a9cd1a0f4bcbd3', 'width': 640}, {'height': 1281, 'url': 'https://preview.redd.it/q2q4555s4ehe1.jpeg?width=960&crop=smart&auto=webp&s=348e0ec10e075a6afd69901a026d9a1212a70eed', 'width': 960}, {'height': 1442, 'url': 'https://preview.redd.it/q2q4555s4ehe1.jpeg?width=1080&crop=smart&auto=webp&s=10d9182ae83e1f24c470bf69150eab170500c2b2', 'width': 1080}], 'source': {'height': 1442, 'url': 'https://preview.redd.it/q2q4555s4ehe1.jpeg?auto=webp&s=aba033c4b5a1325371879bbeff457562eddc1018', 'width': 1080}, 'variants': {}}]} |
What are truly the best front-ends for local LLMs? | 11 | I struggle to find the right one. (For general use cases, not creative or RP stuff like SillyTavern.)
I really like OpenWebUI. However, web search with SearXNG or DuckDuckGo (free) is not working at all. It really sucks. It barely answers any questions. It gives me “error search” or whatever.
I’d like to be able to search the web (like with chatgpt) without the UI being surrounded around web search.
What other great general-use-case UIs have web search that actually works (and is free)?
Thanks. But also, just in general, any good UIs would be great. | 2025-02-05T21:56:13 | https://www.reddit.com/r/LocalLLaMA/comments/1iim2x8/what_are_truly_the_best_frontends_for_local_llms/ | No_Expert1801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iim2x8 | false | null | t3_1iim2x8 | /r/LocalLLaMA/comments/1iim2x8/what_are_truly_the_best_frontends_for_local_llms/ | false | false | self | 11 | null |
Pls help, too confused to get qwen to work | 1 | [removed] | 2025-02-05T22:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/1iimkj8/pls_help_too_confused_to_get_qwen_to_work/ | KaleidoscopeReady161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iimkj8 | false | null | t3_1iimkj8 | /r/LocalLLaMA/comments/1iimkj8/pls_help_too_confused_to_get_qwen_to_work/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2xHaffkcgbgO_Mk65ssdmg4t2BIeq_H4s9Lym4lHlEg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CWijkWZKC5leex7kPvTFdnkNXJMzJN5AcGkHKKPdhz0.jpg?width=108&crop=smart&auto=webp&s=16eafe684cc4fe3985bbb351523eb7349d3e1656', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CWijkWZKC5leex7kPvTFdnkNXJMzJN5AcGkHKKPdhz0.jpg?width=216&crop=smart&auto=webp&s=bf3a7f7843e7f3b9fb9603e6a1446f9198fedd63', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CWijkWZKC5leex7kPvTFdnkNXJMzJN5AcGkHKKPdhz0.jpg?width=320&crop=smart&auto=webp&s=33db2cd9cc029f6b8902edf193d8432452dc977e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CWijkWZKC5leex7kPvTFdnkNXJMzJN5AcGkHKKPdhz0.jpg?width=640&crop=smart&auto=webp&s=5f67dcc797cb761698c633c1925b8c95f42961fd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CWijkWZKC5leex7kPvTFdnkNXJMzJN5AcGkHKKPdhz0.jpg?width=960&crop=smart&auto=webp&s=cef24f6b77ae564fa5f43e645680854f892b57c9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CWijkWZKC5leex7kPvTFdnkNXJMzJN5AcGkHKKPdhz0.jpg?width=1080&crop=smart&auto=webp&s=ad33150fa6c1bd838894f39ebe42099bc26e4d9a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CWijkWZKC5leex7kPvTFdnkNXJMzJN5AcGkHKKPdhz0.jpg?auto=webp&s=68ade4b70bc70cbf169aa04c26f7dd22251bfebe', 'width': 1200}, 'variants': {}}]} |
Which models are you most excited about for 2025? | 28 | Which do you think will be most shockingly amazing for math/coding/vision/general intelligence or something else entirely? | 2025-02-05T22:17:25 | https://www.reddit.com/r/LocalLLaMA/comments/1iiml6u/which_models_are_you_most_excited_about_for_2025/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiml6u | false | null | t3_1iiml6u | /r/LocalLLaMA/comments/1iiml6u/which_models_are_you_most_excited_about_for_2025/ | false | false | self | 28 | null |
Does an eGPU make sense? | 0 | When I first built my PC, I had just a 4090. After getting more into working with LLMs, I found a good deal on a 3090 and added it to my setup, so I’ve been running a 4090 + 3090 combo without any issues.
Recently, I found another 3090 at a good price and picked it up without any real plans, but now I'm not sure what to do next. I have three options:
1. Take out the first 3090 and build a separate PC using both 3090s, leaving my main setup with just the 4090.
2. Use an eGPU enclosure for the second 3090.
3. Get a new motherboard and rebuild my setup to fit all three GPUs in one case—this would be the most complicated option.
I’m not really looking to sell any parts, so I want to make use of what I have. What do you guys think is the best move?
[View Poll](https://www.reddit.com/poll/1iimlf9) | 2025-02-05T22:17:42 | https://www.reddit.com/r/LocalLLaMA/comments/1iimlf9/does_an_egpu_make_sense/ | zgge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iimlf9 | false | null | t3_1iimlf9 | /r/LocalLLaMA/comments/1iimlf9/does_an_egpu_make_sense/ | false | false | self | 0 | null |
Fine-tuning combination of geforce rtx 1000, 2000 and 3000 | 1 | [removed] | 2025-02-05T22:22:43 | https://www.reddit.com/r/LocalLLaMA/comments/1iimprj/finetuning_combination_of_geforce_rtx_1000_2000/ | misterVector | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iimprj | false | null | t3_1iimprj | /r/LocalLLaMA/comments/1iimprj/finetuning_combination_of_geforce_rtx_1000_2000/ | false | false | self | 1 | null |
Mistral Small 3 24B just a little too big to be usable... | 1 | [removed] | 2025-02-05T22:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/1iimwul/mistral_small_3_24b_just_a_little_too_big_to_be/ | random-tomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iimwul | false | null | t3_1iimwul | /r/LocalLLaMA/comments/1iimwul/mistral_small_3_24b_just_a_little_too_big_to_be/ | false | false | self | 1 | null |
Locally host rave.dj mashup maker? | 5 | https://rave.dj/ makes mashups from songs.
Is it even possible to do this locally? I don't think text-to-music LLMs can do this sort of thing, but I don't know how you would even begin to build/train something like this, let alone host it, so any help is appreciated. | 2025-02-05T22:33:24 | https://www.reddit.com/r/LocalLLaMA/comments/1iimyw5/locally_host_ravedj_mashup_maker/ | ishtarcrab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iimyw5 | false | null | t3_1iimyw5 | /r/LocalLLaMA/comments/1iimyw5/locally_host_ravedj_mashup_maker/ | false | false | self | 5 | null |
New AI Tool that Roasts your PowerPoints | 21 | Lol this new tool roasts your slides before your teacher/boss does [https://roastmypowerpoint.com](https://roastmypowerpoint.com) | 2025-02-05T22:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1iimzg3/new_ai_tool_that_roasts_your_powerpoints/ | Embarrassed_Author68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iimzg3 | false | null | t3_1iimzg3 | /r/LocalLLaMA/comments/1iimzg3/new_ai_tool_that_roasts_your_powerpoints/ | false | false | self | 21 | null |
CPU inference on Lenovo P1 Gen7 w/ LPCAMM2 | 1 | [removed] | 2025-02-05T22:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1iinjah/cpu_inference_on_lenovo_p1_gen7_w_lpcamm2/ | beauddl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iinjah | false | null | t3_1iinjah | /r/LocalLLaMA/comments/1iinjah/cpu_inference_on_lenovo_p1_gen7_w_lpcamm2/ | false | false | self | 1 | null |
whisper.cpp vs sherpa-onnx vs something else for speech to text | 9 | I'm looking to run my own Whisper endpoint on my server for my apps, which one should I use, any thoughts and recommendations? What about for on-device speech to text as well? | 2025-02-05T23:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iinm4r/whispercpp_vs_sherpaonnx_vs_something_else_for/ | zxyzyxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iinm4r | false | null | t3_1iinm4r | /r/LocalLLaMA/comments/1iinm4r/whispercpp_vs_sherpaonnx_vs_something_else_for/ | false | false | self | 9 | null |
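For comparison, the `faster-whisper` route is only a few lines if you serve it yourself; a minimal sketch (model size, device, and file name are placeholders):

```python
from faster_whisper import WhisperModel

# Runs fully offline once the model weights are downloaded and cached.
model = WhisperModel("large-v3", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.wav")
print(f"Detected language: {info.language}")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```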
We made hard puzzles easy. DeepSeek r1, o1, o3-mini didn't solve them | 1 | [removed] | 2025-02-05T23:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1iinnwz/we_made_hard_puzzles_easy_deepseek_r1_o1_o3mini/ | anitakirkovska | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iinnwz | false | null | t3_1iinnwz | /r/LocalLLaMA/comments/1iinnwz/we_made_hard_puzzles_easy_deepseek_r1_o1_o3mini/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QlB_C02EYGPzH4JrUtuU_rMGlSaSXvq6h7SBIOdvIEg', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/-jibAMVXCMgr_FnVy7nR9t3n-78I5RaLH5o2H0-UkAA.jpg?width=108&crop=smart&auto=webp&s=4456ad4cae55bee987aacba94f0f5702d670f798', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/-jibAMVXCMgr_FnVy7nR9t3n-78I5RaLH5o2H0-UkAA.jpg?width=216&crop=smart&auto=webp&s=56fa38ba130da8983afcfdf5f9fac58b847ff517', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/-jibAMVXCMgr_FnVy7nR9t3n-78I5RaLH5o2H0-UkAA.jpg?width=320&crop=smart&auto=webp&s=acbf8a01d2f3f6a06bf41a445c89c62e0e9e8ec0', 'width': 320}, {'height': 397, 'url': 'https://external-preview.redd.it/-jibAMVXCMgr_FnVy7nR9t3n-78I5RaLH5o2H0-UkAA.jpg?width=640&crop=smart&auto=webp&s=c29b2e603493f4d5da2e45316be7ae4fec3f665c', 'width': 640}, {'height': 596, 'url': 'https://external-preview.redd.it/-jibAMVXCMgr_FnVy7nR9t3n-78I5RaLH5o2H0-UkAA.jpg?width=960&crop=smart&auto=webp&s=fc7bf5e7029d01d0756e3fbc990e27e141329596', 'width': 960}], 'source': {'height': 601, 'url': 'https://external-preview.redd.it/-jibAMVXCMgr_FnVy7nR9t3n-78I5RaLH5o2H0-UkAA.jpg?auto=webp&s=bb7283de2ddc473733a8933d88c2a1a0ba8b5781', 'width': 968}, 'variants': {}}]} |
Mergekit changed licences | 0 | Just a heads up to all model makers and code contributors in the community.
Mergekit has changed its license from the LGPL to a BSL (Business Source License).
https://github.com/arcee-ai/mergekit/commit/ad360408d682caaeea17a80f4ec40d2c27847ed2
Several forks have been made with the old license so the contributions made are under the old license.
Honestly there are several worries about this. Will models made with arcee-ai mergekit (at some point) be forced to use their license? What about all the people who contributed to the mergekit code?
Please be aware of the licenses when using software and keep an eye out so you or your models don't get affected. | 2025-02-05T23:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1iinx88/mergekit_changed_licences/ | mentallyburnt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iinx88 | false | null | t3_1iinx88 | /r/LocalLLaMA/comments/1iinx88/mergekit_changed_licences/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'RllzQSvgrzOMvjpKK_L5DOLqOOWKctnf2abkZceeyL8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YWtbWDURHxmnTPoBlPlIdQ2obbbon5cMCwydFMYVd28.jpg?width=108&crop=smart&auto=webp&s=fb7354fffd15f4b9fc7d2cc9838209dd88400bb8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YWtbWDURHxmnTPoBlPlIdQ2obbbon5cMCwydFMYVd28.jpg?width=216&crop=smart&auto=webp&s=d8e6e255b67278072b0e20d251e63c6ac3dd3084', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YWtbWDURHxmnTPoBlPlIdQ2obbbon5cMCwydFMYVd28.jpg?width=320&crop=smart&auto=webp&s=6946da1ea209ccd365dc8ea1aee87c429ee754b7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YWtbWDURHxmnTPoBlPlIdQ2obbbon5cMCwydFMYVd28.jpg?width=640&crop=smart&auto=webp&s=96870e0fbc7be96df84380ca99f008f460ca3790', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YWtbWDURHxmnTPoBlPlIdQ2obbbon5cMCwydFMYVd28.jpg?width=960&crop=smart&auto=webp&s=34f5d2868515b99bede412a0de3bb439b31ed175', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YWtbWDURHxmnTPoBlPlIdQ2obbbon5cMCwydFMYVd28.jpg?width=1080&crop=smart&auto=webp&s=76d40726a4176494abaeedf13f093626fc820d6c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YWtbWDURHxmnTPoBlPlIdQ2obbbon5cMCwydFMYVd28.jpg?auto=webp&s=930b4967c6fdcce035702746bee4a2ce699bbc7b', 'width': 1200}, 'variants': {}}]} |
Where to try different models? | 2 | I saw a post a while back about a website that lets you use any model you want for a subscription, but I can't find it. They said you can use o1 as well as open source models. Does anyone know where I can find it? | 2025-02-05T23:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1iio4vp/where_to_try_different_models/ | whatswimsbeneath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iio4vp | false | null | t3_1iio4vp | /r/LocalLLaMA/comments/1iio4vp/where_to_try_different_models/ | false | false | self | 2 | null |
Just got 64GB RAM, what do i do? | 1 | [removed] | 2025-02-05T23:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/1iioj0z/just_got_64gb_ram_what_do_i_do/ | MigorRortis96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iioj0z | false | null | t3_1iioj0z | /r/LocalLLaMA/comments/1iioj0z/just_got_64gb_ram_what_do_i_do/ | false | false | self | 1 | null |
Sentient's new Dobby model is crazy 😂 | 1 | [removed] | 2025-02-05T23:45:50 | https://www.reddit.com/r/LocalLLaMA/comments/1iiomgr/sentients_new_dobby_model_is_crazy/ | FlimsyProperty8544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiomgr | false | null | t3_1iiomgr | /r/LocalLLaMA/comments/1iiomgr/sentients_new_dobby_model_is_crazy/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fURvXWZzv6wlGksuw-B0Sc28jjlMi1LHGl_97LFGnzo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&auto=webp&s=586423125f4b054f3a89511a8e71a674332b4866', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&auto=webp&s=2f9eabd7473b3e0f85aca67e9e01eb06cc9ac820', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&auto=webp&s=2c97e120eafc17970dd2957386c90e3bb63e8e8c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&auto=webp&s=ca8c4531cc8d39da75712ae247aaa9909bd31a2b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&auto=webp&s=b1658f8ec776bb05fb1ae236da75fbd3d91ab520', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&auto=webp&s=8a46eefea12cbd63d7028959125d8546fd0ad0b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?auto=webp&s=1fa661ae40c5c7109444f19f7b7d4711b526c4a3', 'width': 1200}, 'variants': {}}]} |
Sentient's new Dobby model is crazy 😂 | 1 | [removed] | 2025-02-05T23:46:37 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1iion25 | false | null | t3_1iion25 | /r/LocalLLaMA/comments/1iion25/sentients_new_dobby_model_is_crazy/ | false | false | default | 1 | null |
Sentient's new Dobby model is insane | 1 | [removed] | 2025-02-05T23:47:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1iionql | false | null | t3_1iionql | /r/LocalLLaMA/comments/1iionql/sentients_new_dobby_model_is_insane/ | false | false | default | 1 | null |
Sentient Foundation's new Dobby model... | 1 | [removed] | 2025-02-05T23:48:41 | https://www.reddit.com/r/LocalLLaMA/comments/1iioond/sentient_foundations_new_dobby_model/ | FlimsyProperty8544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iioond | false | null | t3_1iioond | /r/LocalLLaMA/comments/1iioond/sentient_foundations_new_dobby_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fURvXWZzv6wlGksuw-B0Sc28jjlMi1LHGl_97LFGnzo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&auto=webp&s=586423125f4b054f3a89511a8e71a674332b4866', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&auto=webp&s=2f9eabd7473b3e0f85aca67e9e01eb06cc9ac820', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&auto=webp&s=2c97e120eafc17970dd2957386c90e3bb63e8e8c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&auto=webp&s=ca8c4531cc8d39da75712ae247aaa9909bd31a2b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&auto=webp&s=b1658f8ec776bb05fb1ae236da75fbd3d91ab520', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&auto=webp&s=8a46eefea12cbd63d7028959125d8546fd0ad0b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?auto=webp&s=1fa661ae40c5c7109444f19f7b7d4711b526c4a3', 'width': 1200}, 'variants': {}}]} |
Rant about Language Mixing in <think> ... </think> | 23 | Just finished reading the Deepseek R1 paper - in section 2.3.2, they talked about their mitigation strategies against language mixing, which included CoT cold start and a new reward for language consistency.
I totally understand why they might want to encourage language consistency - better alignment with human preference and such, but I speak 4-ish languages (English (native), Chinese (native), a Chinese regional dialect (native), Japanese (conversational)) and my thought processes are usually in mixed languages.
As I learned math at a young age in Chinese before moving to the States, and studied STEM in English, my "internal reasoning" on lots of STEM-related questions is in a mixture of English and Chinese. I usually find myself reasoning better this way.
Considering pretraining usually happens in mixed languages anyway, it feels a bit odd that the research community decided against language mixing altogether, especially since the paper also called out that this comes with a slight performance degradation.
Thanks for reading my rant. | 2025-02-05T23:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1iioooo/rant_about_language_mixing_in_think_think/ | hi_im_ryanli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iioooo | false | null | t3_1iioooo | /r/LocalLLaMA/comments/1iioooo/rant_about_language_mixing_in_think_think/ | false | false | self | 23 | null |
Just got 64GB RAM, what do i do? | 1 | [removed] | 2025-02-05T23:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/1iiosdr/just_got_64gb_ram_what_do_i_do/ | MigorRortis96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiosdr | false | null | t3_1iiosdr | /r/LocalLLaMA/comments/1iiosdr/just_got_64gb_ram_what_do_i_do/ | false | false | self | 1 | null |
Claude "Thinking deeply", how long has this been a thing? | 1 | [removed] | 2025-02-05T23:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/1iiox28/claude_thinking_deeply_how_long_has_this_been_a/ | Right_Conference_859 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiox28 | false | null | t3_1iiox28 | /r/LocalLLaMA/comments/1iiox28/claude_thinking_deeply_how_long_has_this_been_a/ | false | false | 1 | null |
Qwen2.5-VL won't load | 1 | Been wanting to try Qwen2.5-VL on my system. I am using LMStudio and I can download it and try to load it (the MLX version, since I'm on a MacBook Pro M3 Pro), but I get an error and it fails to load. Curious if anyone has a clue why? Anyone get it to run? The error is:
```
🥲 Failed to load the model
Failed to load model
Error when loading model: ValueError: Model type qwen2_5_vl not supported.
```
Which model(s) should I experiment with for writing/reviewing written text? | 3 | I use ChatGPT and Claude to help sanity-check and improve documents and reviews I write (nothing major or paid). I'd like to take this offline so I can expand my usage to confidential documents, etc.
Are most models equally good at this, or are there specific models out there which are extremely good at the English language? Essentially what I'm looking for is something which is particularly good at writing and scrutinising the stuff I'm writing. | 2025-02-06T00:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/1iipky7/which_models_should_i_experiment_with_for/ | luhkomo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iipky7 | false | null | t3_1iipky7 | /r/LocalLLaMA/comments/1iipky7/which_models_should_i_experiment_with_for/ | false | false | self | 3 | null |
Gemini 2.0 Pro Experimental, 2.0 Flash, 2.0 Flash-Lite are on Google AiStudio | 30 | 2025-02-06T00:38:15 | robberviet | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iipqvr | false | null | t3_1iipqvr | /r/LocalLLaMA/comments/1iipqvr/gemini_20_pro_experimental_20_flash_20_flashlite/ | false | false | 30 | {'enabled': True, 'images': [{'id': 'dYx9Jm98WNvGD-2X6hBWv5oMo5ET4j6pODYocAmJ2n4', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/r35v5vlpzehe1.png?width=108&crop=smart&auto=webp&s=ad2c7936b190f522d4d506dab697f21162a7230f', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/r35v5vlpzehe1.png?width=216&crop=smart&auto=webp&s=f43c7e71e026e8c392e613004b19a9abf424e605', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/r35v5vlpzehe1.png?width=320&crop=smart&auto=webp&s=28f7127e52a4c559cb0044954bfe5dae3c6a9c6d', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/r35v5vlpzehe1.png?width=640&crop=smart&auto=webp&s=deb32b1deb2e3e8e937da5b249d2280937e502db', 'width': 640}, {'height': 548, 'url': 'https://preview.redd.it/r35v5vlpzehe1.png?width=960&crop=smart&auto=webp&s=5c561414a882e05ff51f5fd78fd0a581327b7fdb', 'width': 960}, {'height': 616, 'url': 'https://preview.redd.it/r35v5vlpzehe1.png?width=1080&crop=smart&auto=webp&s=880fb39c7aed5079c51a33372d2d35d0af39d08c', 'width': 1080}], 'source': {'height': 862, 'url': 'https://preview.redd.it/r35v5vlpzehe1.png?auto=webp&s=ae56052c7e22df5408e66ac42735cfbbb78ffc0f', 'width': 1510}, 'variants': {}}]} |
s1: Simple test-time scaling | 22 | [https://arxiv.org/abs/2501.19393](https://arxiv.org/abs/2501.19393) | 2025-02-06T00:48:36 | https://www.reddit.com/r/LocalLLaMA/comments/1iipyo2/s1_simple_testtime_scaling/ | dil8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iipyo2 | false | null | t3_1iipyo2 | /r/LocalLLaMA/comments/1iipyo2/s1_simple_testtime_scaling/ | false | false | self | 22 | null |
Re: Local LLMs | 1 | Hi, so I am new to LLMs and have no previous experience with AI/ML coding, just scientific-computing kind of coding. I want to use an LLM locally on my laptop with 64GB of RAM and 12GB of VRAM; however, I am unsure what kinds of models I can and cannot run. At the moment I run a DeepSeek Qwen 2.5 14B distill on my machine to help me with coding, but realistically, could I run a better model?
Thank you in advance | 2025-02-06T00:49:47 | https://www.reddit.com/r/LocalLLaMA/comments/1iipzkg/re_local_llms/ | Tucking_Fypo911 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iipzkg | false | null | t3_1iipzkg | /r/LocalLLaMA/comments/1iipzkg/re_local_llms/ | false | false | self | 1 | null |
There's a guy running the DeepSeek R1 1.5B distill on the Snapdragon PC NPU | 0 | I don't recognize the app he's doing it in though - anyone know?
[Max Weinbach on X: "DeepSeek R1 Distill on the @Snapdragon X Elite NPU! Not sure if you heard @cristianoamon mention during the earnings call today, but it is now NPU optimized and running on device. ](https://x.com/MaxWinebach/status/1887287451003064399) | 2025-02-06T00:57:30 | https://www.reddit.com/r/LocalLLaMA/comments/1iiq5fu/theres_a_guy_running_the_deepseek_r1_15b_distill/ | Intelligent-Gift4519 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiq5fu | false | null | t3_1iiq5fu | /r/LocalLLaMA/comments/1iiq5fu/theres_a_guy_running_the_deepseek_r1_15b_distill/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '1paNwpSviKNVTjqXuI4wHySVSYRNJ9KxQDjj2IKeLtM', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/hEUXsWTko9BlUZgy8tUTEKbWlIbOFngTGY2SuQipkt8.jpg?width=108&crop=smart&auto=webp&s=9357a64dfa8b60e5faa2bcb09d1f8a5ab868b644', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/hEUXsWTko9BlUZgy8tUTEKbWlIbOFngTGY2SuQipkt8.jpg?width=216&crop=smart&auto=webp&s=990490374b7ffee7e9a499f3dc575b99a3fd2ef6', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/hEUXsWTko9BlUZgy8tUTEKbWlIbOFngTGY2SuQipkt8.jpg?width=320&crop=smart&auto=webp&s=1d5bdcea8a0a71762947c6a22e5e06ed88d0b637', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/hEUXsWTko9BlUZgy8tUTEKbWlIbOFngTGY2SuQipkt8.jpg?width=640&crop=smart&auto=webp&s=aa08c596c4c61efc80695799f4b020b526ff4bbd', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/hEUXsWTko9BlUZgy8tUTEKbWlIbOFngTGY2SuQipkt8.jpg?width=960&crop=smart&auto=webp&s=684650d08d0878a31a93e2a443be412450b754bc', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/hEUXsWTko9BlUZgy8tUTEKbWlIbOFngTGY2SuQipkt8.jpg?width=1080&crop=smart&auto=webp&s=bdeed075718259e487d56fbd04efa32fa2a30aa4', 'width': 1080}], 'source': {'height': 1365, 'url': 'https://external-preview.redd.it/hEUXsWTko9BlUZgy8tUTEKbWlIbOFngTGY2SuQipkt8.jpg?auto=webp&s=5ac64016ed22799d70b6d874ff18effb566d522e', 'width': 2048}, 'variants': {}}]} |
ASICs for LLM inference | 54 | I had a meeting today with a really interesting hardware company as part of my work activities. They make ASICs specifically for LLM inference. They are solely focused on the datacenter server market, but I brought up making a consumer PCIe card and/or a dev board like a Raspberry Pi (or even something as small as a Google Coral TPU). They seemed very interested in this market but were not sure that it would catch on. What would you guys think about this? An inference ASIC card that eats up a lot less power (100-200W), can host local models, and gives near-Groq levels of performance. Any thoughts? | 2025-02-06T01:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iiq831/asics_for_llm_infrence/ | jklre | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiq831 | false | null | t3_1iiq831 | /r/LocalLLaMA/comments/1iiq831/asics_for_llm_infrence/ | false | false | self | 54 | null |
Training/fine-tuning VLM for VQA and OCR Tasks | 1 | Hello guys, I am looking for a VLM to fine-tune on my custom dataset for OCR and VQA tasks.
Is there any guide I could use? Are tutorials and documentation available? | 2025-02-06T01:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/1iiq9aa/trainfinetuning_vlm_for_vqa_and_ocr_tasks/ | LahmeriMohamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiq9aa | false | null | t3_1iiq9aa | /r/LocalLLaMA/comments/1iiq9aa/trainfinetuning_vlm_for_vqa_and_ocr_tasks/ | false | false | self | 1 | null |
ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning | 10 | 2025-02-06T01:05:27 | https://arxiv.org/abs/2502.01100 | Formal_Drop526 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1iiqbjj | false | null | t3_1iiqbjj | /r/LocalLLaMA/comments/1iiqbjj/zebralogic_on_the_scaling_limits_of_llms_for/ | false | false | default | 10 | null |
Sonora: A Lightweight, No-Installation AI Assistant with Voice Control | 3 | [Introducing](https://reddit.com/link/1iir49z/video/pdp5cpd7bfhe1/player)
Hey everyone! I want to share a project I’ve been working on: **Sonora**.
Sonora is an extremely lightweight AI assistant, just **38KB**, that runs directly in your browser with no installation required. If you already have Ollama and a TTS API (such as Kokoro API or AllTalk), you can start using it right away to interact via **text or voice**.
# Key Features:
* **No installation needed** – runs via a web interface
* **Native browser STT** – no need for Whisper
* **Supports any model compatible with Ollama** (tested with Gemma 2 2B)
* **Works on Windows and Android (coming soon!)**
# Upcoming Features:
* More informative display with graphs and images (In progress)
* Memory for smarter interactions (In progress)
* **100% hands-free mode** with interruption support
* More improvements and new features
I haven’t released it on GitHub yet, but I plan to share an initial version soon. I was excited to show the progress and get some feedback.
What do you think? Any suggestions? 😊 | 2025-02-06T01:44:28 | https://www.reddit.com/r/LocalLLaMA/comments/1iir49z/sonora_a_lightweight_noinstallation_ai_assistant/ | thecalmgreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iir49z | false | null | t3_1iir49z | /r/LocalLLaMA/comments/1iir49z/sonora_a_lightweight_noinstallation_ai_assistant/ | false | false | self | 3 | null |
RunPod HD space insufficient to store Hugging Face R1 quants | 2 | I have never used Hugging Face. I spun up an MI300X machine to get llama.cpp built, which I did, and then went to download the GGUF files for the 2-bit Unsloth DeepSeek R1 quant, which I wanted to test with aider. Then I realized that the HD space on the pod is only 40GB. I tried to attach network storage on RunPod, but no network storage is available in a DC that also has MI300X pods. Catch-22.
Anyone have any direct experience using RunPod? It's the only provider I have found that actually rents these AMD GPUs, which look quite good for this application.
| 2025-02-06T01:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/1iir8om/runpod_hd_space_insufficient_to_store_huggingface/ | bitmoji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iir8om | false | null | t3_1iir8om | /r/LocalLLaMA/comments/1iir8om/runpod_hd_space_insufficient_to_store_huggingface/ | false | false | self | 2 | null |
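One workaround for the small pod disk, sketched below: download only the shards you need straight to whatever volume is mounted. The repo id and filename pattern are assumptions; check the Unsloth model page for the exact names:

```python
from huggingface_hub import snapshot_download

# Pull only the 2-bit dynamic-quant GGUF shards into a large-enough volume,
# skipping the rest of the (much larger) repository.
snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",            # assumed repo id
    allow_patterns=["*UD-Q2_K_XL*"],               # assumed quant filename pattern
    local_dir="/workspace/models/DeepSeek-R1-GGUF",
)
```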
The New Gemini Pro 2.0 Experimental sucks Donkey Balls. | 206 | Wow. Last night, after a long coding bender, I heard the great news that Google was releasing some new Gemini models. I woke up this morning super excited to try them.
My first attempt was a quick OCR run with Flash-Lite 2.0, and I was super impressed with the speed. This thing is going to make complex OCR an absolute breeze. I cannot wait to incorporate this into my apps. I reckon it's going to cut the processing times in half. (Christmas came early)
Then I moved onto testing the Gemini 2.0 Pro Experimental.
How disappointing... This is such a regression from 1206. I could immediately see the drop in the quality of the tasks I've been working on daily like coding.
It makes shit tons of mistakes. The code that comes out doesn't have valid HTML (a super basic task), and it seems to want to interject and refactor code all the time without permission.
I don't know what the fuck these people are doing. Every single release it's like this. They just can't seem to get it right. 1206 has been a great model, and I've been using it as my daily driver for quite some time. I was actually very impressed with it and had they just released 1206 as Gemini 2.0 pro EXP I would have been stoked. This is an absolute regression.
I have seen this multiple times now with Google products. The previous time the same thing happened with 0827 and then Gemini 002.
For some reason at that time, they chose to force concise answers into everything, basically making it impossible to get full, lengthy responses. Even with a system prompt, it would just keep shortening code, adding comments to everything, and basically forcing this dogshit concise-mode behavior into everything.
Now they've managed to do it again. This model is NOT better than 1206. The benchmarks or whatever these people are aiming to beat are just an illusion. If your model cannot do simple tasks like outputting valid code without trying to force refactoring it is just a hot mess.
Why can't they get this right? They seem to regress a lot on updates. I've had discussions with people in the know, and apparently it's difficult to juggle the various needs of all the different types of people. Where some might like lengthy thorough answers for example, others might find that annoying and "too verbose". So basically we get stuck with these half arsed models that don't seem to excel in anything in particular.
I use these models for coding and for writing, which has always been the case. I might be in the minority of users and just be too entitled about this. But jesus, what a disappointment.
I am not shitting you when I say I would rather use Deepseek than whatever this is. Its ability to give long, thorough answers without changing parts of code unintentionally is extremely valuable to my use cases.
Google is the biggest and most reliable when it comes to serving their models though, and I absolutely love the flash models for building apps. So you could say I am a major lover and hater of them. It's always felt this way. A genuine love-hate relationship. I am secretly rooting for their success but I absolutely loathe some of the things they do and am really surprised they haven't surpassed chatgpt/claude yet.. Like how the fuck?
Maybe it's time to outsource their LLM production to CHHHIIIIINNAAAA. Just like everything else. Hahahaa
| 2025-02-06T01:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1iirej3/the_new_gemini_pro_20_experimental_sucks_donkey/ | Odd-Environment-7193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iirej3 | false | null | t3_1iirej3 | /r/LocalLLaMA/comments/1iirej3/the_new_gemini_pro_20_experimental_sucks_donkey/ | false | false | self | 206 | null |
How are people using models smaller than 5b parameters? | 46 | I straight up don't understand the real-world problems these models are solving. I get them in theory: function calling, guard models, and agents once they've been fine-tuned. But I'm yet to see people come out and say, "hey, we solved this problem with a 1.5b llama model and it works really well."
Maybe I'm blind or not good enough to use them well, so hopefully y'all can enlighten me | 2025-02-06T02:05:49 | https://www.reddit.com/r/LocalLLaMA/comments/1iirjvz/how_are_people_using_models_smaller_than_5b/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iirjvz | false | null | t3_1iirjvz | /r/LocalLLaMA/comments/1iirjvz/how_are_people_using_models_smaller_than_5b/ | false | false | self | 46 | null |
Doing online searches using LM Studio | 1 | [removed] | 2025-02-06T02:39:01 | https://www.reddit.com/r/LocalLLaMA/comments/1iis7ir/fazer_pesquisas_online_utilizando_o_lm_studio/ | ProfessionAwkward870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iis7ir | false | null | t3_1iis7ir | /r/LocalLLaMA/comments/1iis7ir/fazer_pesquisas_online_utilizando_o_lm_studio/ | false | false | self | 1 | null |
Bitsandbytes or AWQ quantization for offline inference? | 0 | I am working on a project where I need to preprocess a large text corpus offline with an LLM and structured output. I am trying to run the model locally on 16GB of VRAM (though the desktop apps take a few GB of that).
My current developing solution uses Llama 3.1 8B with 8bit Bitsandbytes quantization and the outlines package for structured output. I am thinking about using AWQ or GPTQ 4 bit quants for speed, but am not sure if this will sacrifice too much quality. I tried loading larger models like Mistral Nemo and Qwen 2.5 Coder 14B with AWQ but they are either too big to fit or the versions I found perform poorly.
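For reference, the loading side currently looks roughly like this (a minimal sketch; the model ID is a placeholder and the outlines wiring is omitted):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder

# 8-bit bitsandbytes quantization roughly halves VRAM versus fp16 weights
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place layers, spilling to CPU if needed
)
```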
Should I stick with BnB or keep trying stuff around AWQ and/or GPTQ? | 2025-02-06T02:39:29 | https://www.reddit.com/r/LocalLLaMA/comments/1iis7v3/bitsandbytes_or_awq_quantization_for_offline/ | hksquinson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iis7v3 | false | null | t3_1iis7v3 | /r/LocalLLaMA/comments/1iis7v3/bitsandbytes_or_awq_quantization_for_offline/ | false | false | self | 0 | null |
AI Recommended CPU Only Build for Local LLM | 1 | [removed] | 2025-02-06T02:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/1iis8lz/ai_recommended_cpu_only_build_for_local_llm/ | kdanielive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iis8lz | false | null | t3_1iis8lz | /r/LocalLLaMA/comments/1iis8lz/ai_recommended_cpu_only_build_for_local_llm/ | false | false | self | 1 | null |
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | 19 | Paper: [https://arxiv.org/abs/2501.17703](https://arxiv.org/abs/2501.17703)
Qwen2.5-Math-7B-CFT
[https://huggingface.co/TIGER-Lab/Qwen2.5-Math-7B-CFT](https://huggingface.co/TIGER-Lab/Qwen2.5-Math-7B-CFT)
Qwen2.5-32B-Instruct-CFT
[https://huggingface.co/TIGER-Lab/Qwen2.5-32B-Instruct-CFT](https://huggingface.co/TIGER-Lab/Qwen2.5-32B-Instruct-CFT) | 2025-02-06T02:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1iisavu/critique_finetuning_learning_to_critique_is_more/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iisavu | false | null | t3_1iisavu | /r/LocalLLaMA/comments/1iisavu/critique_finetuning_learning_to_critique_is_more/ | false | false | self | 19 | null |
Doing online searches using LM Studio | 1 | [removed] | 2025-02-06T02:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/1iiscy6/fazer_pesquisas_online_utilizando_o_lm_studio/ | ProfessionAwkward870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiscy6 | false | null | t3_1iiscy6 | /r/LocalLLaMA/comments/1iiscy6/fazer_pesquisas_online_utilizando_o_lm_studio/ | false | false | self | 1 | null |
Open WebUI drops 3 new releases today. Code Interpreter, Native Tool Calling, Exa Search added | 219 | 0.5.8 had a slew of new adds. 0.5.9 and 0.5.10 seemed to be minor bug fixes for the most part.
From their release page:
🖥️ Code Interpreter: Models can now execute code in real time to refine their answers dynamically, running securely within a sandboxed browser environment using Pyodide. Perfect for calculations, data analysis, and AI-assisted coding tasks! (A toy example of the kind of snippet involved is sketched after this list.)
💬 Redesigned Chat Input UI: Enjoy a sleeker and more intuitive message input with improved feature selection, making it easier than ever to toggle tools, enable search, and interact with AI seamlessly.
🛠️ Native Tool Calling Support (Experimental): Supported models can now call tools natively, reducing query latency and improving contextual responses. More enhancements coming soon!
🔗 Exa Search Engine Integration: A new search provider has been added, allowing users to retrieve up-to-date and relevant information without leaving the chat interface.
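For a sense of what the code interpreter enables, here's the kind of snippet a model might write and run in the Pyodide sandbox to back up a numeric answer (illustrative only, not taken from the release notes):

```python
# illustrative: a model double-checking its own arithmetic before answering
import statistics

samples = [12.5, 14.1, 13.8, 15.2, 14.9]
print(f"mean={statistics.mean(samples):.2f}, stdev={statistics.stdev(samples):.2f}")
```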
https://github.com/open-webui/open-webui/releases | 2025-02-06T02:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/1iisj7j/open_webui_drops_3_new_releases_today_code/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iisj7j | false | null | t3_1iisj7j | /r/LocalLLaMA/comments/1iisj7j/open_webui_drops_3_new_releases_today_code/ | false | false | self | 219 | {'enabled': False, 'images': [{'id': 'JXIco5-abI_d1kcgcGtmHvVqXB-7HOOnViuYo9Z9dtU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UZB4BhZb4aOMzld8vspVzVG5Iz-laZFq7Ryxyz3hBOg.jpg?width=108&crop=smart&auto=webp&s=e7a4842dfbffefc53b67305d434658fe173d0dba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UZB4BhZb4aOMzld8vspVzVG5Iz-laZFq7Ryxyz3hBOg.jpg?width=216&crop=smart&auto=webp&s=30e655dd0ba321224258dec660f7e709aa8bf534', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UZB4BhZb4aOMzld8vspVzVG5Iz-laZFq7Ryxyz3hBOg.jpg?width=320&crop=smart&auto=webp&s=0b046ede9a22b917c261a29c81c6324b458f8081', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UZB4BhZb4aOMzld8vspVzVG5Iz-laZFq7Ryxyz3hBOg.jpg?width=640&crop=smart&auto=webp&s=1ba1150050ef80127c25473f433e1e31fd45307f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UZB4BhZb4aOMzld8vspVzVG5Iz-laZFq7Ryxyz3hBOg.jpg?width=960&crop=smart&auto=webp&s=3bef3b29dda2c0f6394a2c7559749a8cd641e8aa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UZB4BhZb4aOMzld8vspVzVG5Iz-laZFq7Ryxyz3hBOg.jpg?width=1080&crop=smart&auto=webp&s=f2b80cb35316b5c5c6638c48f3b81fd1692964e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UZB4BhZb4aOMzld8vspVzVG5Iz-laZFq7Ryxyz3hBOg.jpg?auto=webp&s=53bd2c2d0c1dbe326eac31e4fa33d00c5135706a', 'width': 1200}, 'variants': {}}]} |
Best graphical frontend for local llama. How about SillyTavern | 5 | I was considering using SillyTavern, but it looks like all their docs primarily point you towards connecting to cloud LLMs. I also don't really need any of its RP-related tooling. It's fine if it's there; I just don't want it intervening with my AI unless I explicitly tell it to.
Does anyone have a guide or doc on how to connect the two? Should I be using a different graphical frontend? Ideally I want one that I can use on my local computer or on my phone.
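For context, I can already hit a local llama.cpp server's OpenAI-compatible endpoint directly (a rough sketch below; the port is just whatever `llama-server` was started with), so I assume any frontend only needs to be pointed at that same base URL:

```python
# sanity check against a local llama.cpp server started with something like:
#   llama-server -m model.gguf --port 8080
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Say hi in one sentence."}],
        "max_tokens": 32,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```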
Thanks! | 2025-02-06T03:08:57 | https://www.reddit.com/r/LocalLLaMA/comments/1iissmk/best_graphical_frontend_for_local_llama_how_about/ | FactoryReboot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iissmk | false | null | t3_1iissmk | /r/LocalLLaMA/comments/1iissmk/best_graphical_frontend_for_local_llama_how_about/ | false | false | self | 5 | null |
Best lightweight LLM? | 14 | I want an LLM locally to assist me with Python and R. As I don't have any fancy graphics card, which would be best for me? | 2025-02-06T03:20:36 | https://www.reddit.com/r/LocalLLaMA/comments/1iit0gq/best_light_weight_llm/ | Remarkable_Wrap_5484 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iit0gq | false | null | t3_1iit0gq | /r/LocalLLaMA/comments/1iit0gq/best_light_weight_llm/ | false | false | self | 14 | null |
Can someone explain, how Distill Models work and if it’s at all related/connected with Quantizing? | 1 | Like I’ve read about Quant versions of Llama models, and distill version of Deepseek. What’s actually the difference and which is better? Is there a combination of Distill+Quant?
*Sorry if it’s a dumb question* | 2025-02-06T03:26:31 | https://www.reddit.com/r/LocalLLaMA/comments/1iit4jh/can_someone_explain_how_distill_models_work_and/ | tiwanaldo5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iit4jh | false | null | t3_1iit4jh | /r/LocalLLaMA/comments/1iit4jh/can_someone_explain_how_distill_models_work_and/ | false | false | self | 1 | null |
API Gemini for WebUI | 1 | [removed] | 2025-02-06T03:27:17 | https://www.reddit.com/r/LocalLLaMA/comments/1iit51b/api_gemini_for_webui/ | Low_Information_4256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iit51b | false | null | t3_1iit51b | /r/LocalLLaMA/comments/1iit51b/api_gemini_for_webui/ | false | false | 1 | null |
API Gemini for WebUI | 1 | Hello everyone! I'm adding APIs to the WebUI and I want to add the Gemini API, but I haven't found the correct URL for this inclusion. Has anyone succeeded and can answer this poor soul thirsty for knowledge? | 2025-02-06T03:29:13 | https://www.reddit.com/r/LocalLLaMA/comments/1iit6ap/api_gemini_for_webui/ | Low_Information_4256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iit6ap | false | null | t3_1iit6ap | /r/LocalLLaMA/comments/1iit6ap/api_gemini_for_webui/ | false | false | self | 1 | null |
Splitting the data stored in relational database | 1 | [removed] | 2025-02-06T03:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/1iit8rh/splitting_the_data_stored_in_relational_database/ | MoveGlass1109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iit8rh | false | null | t3_1iit8rh | /r/LocalLLaMA/comments/1iit8rh/splitting_the_data_stored_in_relational_database/ | false | false | self | 1 | null |
Deep Seek !Let the world witness the power of China | 1 | 2025-02-06T03:46:44 | https://app.meme-gen.ai/meme/20250206034141_504289 | Reverie-AI | app.meme-gen.ai | 1970-01-01T00:00:00 | 0 | {} | 1iithrz | false | null | t3_1iithrz | /r/LocalLLaMA/comments/1iithrz/deep_seek_let_the_world_witness_the_power_of_china/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hQLG9tyn52HusPssTgypzJX49fxu2L13N4_QfqNvnec', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=108&crop=smart&auto=webp&s=0bcad3dba7935a2d69042dacbdd697c0454c9331', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=216&crop=smart&auto=webp&s=43228e1c5f29c077b93802e1e7620e19924f9f85', 'width': 216}, {'height': 230, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=320&crop=smart&auto=webp&s=0770ffa22c61fe1bccc67e6a2a17cdec3f5f63be', 'width': 320}], 'source': {'height': 368, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?auto=webp&s=efc3128227d1baa1f361cd73152f24aedd64623b', 'width': 512}, 'variants': {}}]} |
L4 or L40S for multi-gpu inferencing? | 1 | I'm planning on building a multi-GPU inferencing server running vLLM and serving multiple concurrent users in the department. The server that I'm looking into has 12 slots for single-wide GPUs. Should I go for 12 L4 GPUs, or 4 L40S GPUs? Is having a few 48GB GPUs that's more powerful and with more VRAM per GPU better than having more weaker 24GB GPUs? Also, the L40S is about twice as expensive as the L4 for the equivalent amount of VRAM.
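For what it's worth, the serving side would be plain vLLM with tensor parallelism, roughly like this (a sketch; the model is a placeholder). One wrinkle I'm aware of: `tensor_parallel_size` generally has to divide the model's attention head count, which makes 4 GPUs easier to configure than 12:

```python
from vllm import LLM, SamplingParams

# shard one model across all GPUs in the box
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder model
    tensor_parallel_size=4,                     # e.g. one shard per L40S
)

outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```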
| 2025-02-06T04:07:13 | https://www.reddit.com/r/LocalLLaMA/comments/1iitvr6/l4_or_l40s_for_multigpu_inferencing/ | CaptainLockes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iitvr6 | false | null | t3_1iitvr6 | /r/LocalLLaMA/comments/1iitvr6/l4_or_l40s_for_multigpu_inferencing/ | false | false | self | 1 | null |
List of all Gemini Models Available through API. | 2 | Some of these may be deprecated, but many are still available.
[https://github.com/2-fly-4-ai/The-AI-Model-List/blob/main/models-available-gemini-api](https://github.com/2-fly-4-ai/The-AI-Model-List/blob/main/models-available-gemini-api)
| 2025-02-06T04:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1iiu1g5/list_of_all_gemini_models_available_through_api/ | Odd-Environment-7193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiu1g5 | false | null | t3_1iiu1g5 | /r/LocalLLaMA/comments/1iiu1g5/list_of_all_gemini_models_available_through_api/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'IM7ZZQhyQhZp_TFVItBai_pv7bad5HlHNTSZELxXBf0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7xtSoZ3Kww21lT_74_V0pFemtNxYq9clOY7dgRhyAtM.jpg?width=108&crop=smart&auto=webp&s=a6295e1bafbcf4c567d2813b5d44990438ade27a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7xtSoZ3Kww21lT_74_V0pFemtNxYq9clOY7dgRhyAtM.jpg?width=216&crop=smart&auto=webp&s=84198f8e2dc190153569691ef898b527860e4cf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7xtSoZ3Kww21lT_74_V0pFemtNxYq9clOY7dgRhyAtM.jpg?width=320&crop=smart&auto=webp&s=673a00d7b0f8bc861440516317e8fafe020c3705', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7xtSoZ3Kww21lT_74_V0pFemtNxYq9clOY7dgRhyAtM.jpg?width=640&crop=smart&auto=webp&s=f72d6be389ef14d90e0a0b527d63632cc3608e52', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7xtSoZ3Kww21lT_74_V0pFemtNxYq9clOY7dgRhyAtM.jpg?width=960&crop=smart&auto=webp&s=dd66af85fa1d8ef9f4557185346c638f842c9eea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7xtSoZ3Kww21lT_74_V0pFemtNxYq9clOY7dgRhyAtM.jpg?width=1080&crop=smart&auto=webp&s=6420301fc04957128f5cf8b26db5f1e822523976', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7xtSoZ3Kww21lT_74_V0pFemtNxYq9clOY7dgRhyAtM.jpg?auto=webp&s=c860eb9de1d35bcae90efc09b60aaf8ce5a01d8e', 'width': 1200}, 'variants': {}}]} |
L4 or L40S for multi-gpu inferencing? | 1 | [removed] | 2025-02-06T04:17:42 | https://www.reddit.com/r/LocalLLaMA/comments/1iiu2s4/l4_or_l40s_for_multigpu_inferencing/ | FearlessCap3199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiu2s4 | false | null | t3_1iiu2s4 | /r/LocalLLaMA/comments/1iiu2s4/l4_or_l40s_for_multigpu_inferencing/ | false | false | self | 1 | null |
L4 or L40S for multi-gpu inferencing? | 1 | [removed] | 2025-02-06T04:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1iiub10/l4_or_l40s_for_multigpu_inferencing/ | redcape0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiub10 | false | null | t3_1iiub10 | /r/LocalLLaMA/comments/1iiub10/l4_or_l40s_for_multigpu_inferencing/ | false | false | self | 1 | null |
Groq u good bro? | 0 | 2025-02-06T04:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1iiucvf/groq_u_good_bro/ | lightdreamscape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiucvf | false | null | t3_1iiucvf | /r/LocalLLaMA/comments/1iiucvf/groq_u_good_bro/ | false | false | 0 | null |
What are some smaller Multimodal LLMs that are capable of in-context vision learning? | 1 | I want to use an MLLM for few-shot classification of some niche objects (uncommon classes). What are some under-70B models that are generally good at this? | 2025-02-06T04:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/1iiuqtb/what_are_some_smaller_multimodal_llms_that_are/ | SussyAmogusChungus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiuqtb | false | null | t3_1iiuqtb | /r/LocalLLaMA/comments/1iiuqtb/what_are_some_smaller_multimodal_llms_that_are/ | false | false | self | 1 | null |
Local LLM on iPhone, Looking for a Good Tutorial | 6 | Hey everyone,
I've been trying to find a solid tutorial on how to run a local LLM's on an iPhone, but I’m struggling to find clear, up-to-date guides. I know some people have had success with things like MLC-LLM or llama.cpp, but I haven't found a step-by-step breakdown that specifically covers setting it up on iOS.
Has anyone here managed to get a local LLM running on their iPhone? If so, what tools or methods did you use? Any recommended tutorials or resources would be greatly appreciated!
Thanks in advance! | 2025-02-06T05:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1iiuuvi/local_llm_on_iphone_looking_for_a_good_tutorial/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiuuvi | false | null | t3_1iiuuvi | /r/LocalLLaMA/comments/1iiuuvi/local_llm_on_iphone_looking_for_a_good_tutorial/ | false | false | self | 6 | null |
[LMStudio] [Help] Weren't there different presets for tons of models? They're all gone for me | 1 | 2025-02-06T05:15:52 | Donovanth1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iiv3wv | false | null | t3_1iiv3wv | /r/LocalLLaMA/comments/1iiv3wv/lmstudio_help_werent_there_different_presets_for/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'QUMcB1LuQ8lG5sdqcJiObcuDzSDcKxcMybF_SJcOCxg', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/5kat69ugdghe1.png?width=108&crop=smart&auto=webp&s=6fb85b72c66a7902f49bbfaa4825a4e336381adc', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/5kat69ugdghe1.png?width=216&crop=smart&auto=webp&s=500640c463566905e8eb8fa3e65916437f7d871e', 'width': 216}, {'height': 109, 'url': 'https://preview.redd.it/5kat69ugdghe1.png?width=320&crop=smart&auto=webp&s=3ed770c53443c57ba6e47b26ddf63892cb46696d', 'width': 320}], 'source': {'height': 178, 'url': 'https://preview.redd.it/5kat69ugdghe1.png?auto=webp&s=0f3415b74fef405eea59b3f0dce9ed6343ddc1f8', 'width': 520}, 'variants': {}}]} |
So, Google has no state-of-the-art frontier model now? | 204 | 2025-02-06T05:34:09 | Comfortable-Rock-498 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iiveqd | false | null | t3_1iiveqd | /r/LocalLLaMA/comments/1iiveqd/so_google_has_no_stateoftheart_frontier_model_now/ | false | false | 204 | {'enabled': True, 'images': [{'id': 'Cjvn6wU1_mLMmFEfFNzL1_zoPxLerMlgTdDP072k0GI', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/64r0glzkgghe1.png?width=108&crop=smart&auto=webp&s=aab945a6d514919b8c29938a5c3f48f0de11fcab', 'width': 108}, {'height': 43, 'url': 'https://preview.redd.it/64r0glzkgghe1.png?width=216&crop=smart&auto=webp&s=abadb9933b39484802015349d836d4eb46b3fbe5', 'width': 216}, {'height': 64, 'url': 'https://preview.redd.it/64r0glzkgghe1.png?width=320&crop=smart&auto=webp&s=5eeacec3f46d459e5191296bbee220b8de30bb64', 'width': 320}, {'height': 129, 'url': 'https://preview.redd.it/64r0glzkgghe1.png?width=640&crop=smart&auto=webp&s=b4b6ad82ec54e92060d2226f5e8ec28c2f2eaf9b', 'width': 640}, {'height': 194, 'url': 'https://preview.redd.it/64r0glzkgghe1.png?width=960&crop=smart&auto=webp&s=be5f633c5fd80d28128ec52e9f780e02ad7b288a', 'width': 960}, {'height': 219, 'url': 'https://preview.redd.it/64r0glzkgghe1.png?width=1080&crop=smart&auto=webp&s=aa18ab473a1c7984643b745e3112bb0a8064db9b', 'width': 1080}], 'source': {'height': 690, 'url': 'https://preview.redd.it/64r0glzkgghe1.png?auto=webp&s=1c0745457ada0151d4d0049eb167fac7fb8b1635', 'width': 3402}, 'variants': {}}]} |
I know one of you will get this.... | 9 | https://www.reddit.com/r/datacenter/s/7Z9ZGKzN1G | 2025-02-06T05:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/1iiviql/i_know_one_of_you_will_get_this/ | _RouteThe_Switch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiviql | false | null | t3_1iiviql | /r/LocalLLaMA/comments/1iiviql/i_know_one_of_you_will_get_this/ | false | false | self | 9 | null |
Rin AI Agent Stack – Adaptable to Local LLaMA Models | 1 | [removed] | 2025-02-06T05:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1iivllq/rin_ai_agent_stack_adaptable_to_local_llama_models/ | PNWtreeguy69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iivllq | false | null | t3_1iivllq | /r/LocalLLaMA/comments/1iivllq/rin_ai_agent_stack_adaptable_to_local_llama_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jdP8cx6WZxFgsJMQRj5J_XhLyJIlD6zs6AlD7AYGAT0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y0-kNiEm4ty3nnUhVcos6PX6DXoWMkwzNG83PuEZ5eE.jpg?width=108&crop=smart&auto=webp&s=1b08b024f7708ab3a3a6aa4e1732bf42de282b53', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Y0-kNiEm4ty3nnUhVcos6PX6DXoWMkwzNG83PuEZ5eE.jpg?width=216&crop=smart&auto=webp&s=87a408b690a3955a2b30b7644ad6280e4c25a0a0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Y0-kNiEm4ty3nnUhVcos6PX6DXoWMkwzNG83PuEZ5eE.jpg?width=320&crop=smart&auto=webp&s=82d362afbf00380d60bc14f461bfddc7b4002229', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Y0-kNiEm4ty3nnUhVcos6PX6DXoWMkwzNG83PuEZ5eE.jpg?width=640&crop=smart&auto=webp&s=a2858c19acffbed916cfefdb4703aa453d6128a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Y0-kNiEm4ty3nnUhVcos6PX6DXoWMkwzNG83PuEZ5eE.jpg?width=960&crop=smart&auto=webp&s=ffa1345f1c6227d6d1205edc12e970039e3ff6be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Y0-kNiEm4ty3nnUhVcos6PX6DXoWMkwzNG83PuEZ5eE.jpg?width=1080&crop=smart&auto=webp&s=3173b3ef9351f779480eed836b39b6b5b0f663e5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Y0-kNiEm4ty3nnUhVcos6PX6DXoWMkwzNG83PuEZ5eE.jpg?auto=webp&s=ac4dd61d719ae35b4aff7e53027544bee1deca30', 'width': 1200}, 'variants': {}}]} |
trynna look for a model that can be great and light for ocr text extraction for math | 1 | [removed] | 2025-02-06T05:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1iivn0p/trynna_look_for_a_model_that_can_be_great_and/ | raul_myron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iivn0p | false | null | t3_1iivn0p | /r/LocalLLaMA/comments/1iivn0p/trynna_look_for_a_model_that_can_be_great_and/ | false | false | self | 1 | null |
Is deepseek always free | 1 | 2025-02-06T05:55:47 | https://app.meme-gen.ai/meme/20250206034141_504289 | Reverie-AI | app.meme-gen.ai | 1970-01-01T00:00:00 | 0 | {} | 1iivqw1 | false | null | t3_1iivqw1 | /r/LocalLLaMA/comments/1iivqw1/is_deepseek_always_free/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hQLG9tyn52HusPssTgypzJX49fxu2L13N4_QfqNvnec', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=108&crop=smart&auto=webp&s=0bcad3dba7935a2d69042dacbdd697c0454c9331', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=216&crop=smart&auto=webp&s=43228e1c5f29c077b93802e1e7620e19924f9f85', 'width': 216}, {'height': 230, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=320&crop=smart&auto=webp&s=0770ffa22c61fe1bccc67e6a2a17cdec3f5f63be', 'width': 320}], 'source': {'height': 368, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?auto=webp&s=efc3128227d1baa1f361cd73152f24aedd64623b', 'width': 512}, 'variants': {}}]} |
Is the guidance from deepseek provided really useful | 0 | 2025-02-06T06:09:08 | https://app.meme-gen.ai/meme/20250206034141_504289 | Reverie-AI | app.meme-gen.ai | 1970-01-01T00:00:00 | 0 | {} | 1iivyjz | false | null | t3_1iivyjz | /r/LocalLLaMA/comments/1iivyjz/is_the_guidance_from_deepseek_provided_really/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'hQLG9tyn52HusPssTgypzJX49fxu2L13N4_QfqNvnec', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=108&crop=smart&auto=webp&s=0bcad3dba7935a2d69042dacbdd697c0454c9331', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=216&crop=smart&auto=webp&s=43228e1c5f29c077b93802e1e7620e19924f9f85', 'width': 216}, {'height': 230, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?width=320&crop=smart&auto=webp&s=0770ffa22c61fe1bccc67e6a2a17cdec3f5f63be', 'width': 320}], 'source': {'height': 368, 'url': 'https://external-preview.redd.it/rc8uN0zQQz06h1IZrbUzu7IrE7KFz-J6RcRkOJZud7U.jpg?auto=webp&s=efc3128227d1baa1f361cd73152f24aedd64623b', 'width': 512}, 'variants': {}}]} |
are consumer-grade gpu/cpu clusters being overlooked for ai? | 1 | [removed] | 2025-02-06T06:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/1iiwg8b/are_consumergrade_gpucpu_clusters_being/ | Status-Hearing-4084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiwg8b | false | null | t3_1iiwg8b | /r/LocalLLaMA/comments/1iiwg8b/are_consumergrade_gpucpu_clusters_being/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PFsnl8C1Hpin770f4PKI9zgARMzHaLYwQg0cAVN7t6E', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=108&crop=smart&auto=webp&s=5f75b032c1663578d7b17b1651bb1a5281d1ee3c', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=216&crop=smart&auto=webp&s=f2e505c8cfb27d33c69087ae34eaf7f88675fe23', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=320&crop=smart&auto=webp&s=8de89650bd12ecf200478fea6c973919c5b2a5cf', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=640&crop=smart&auto=webp&s=eb00fda063a7636510c6beb8e955ab40beced7ac', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=960&crop=smart&auto=webp&s=1d10b60389fb93b5217b782e46f6f4fdd7080d2c', 'width': 960}, {'height': 615, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=1080&crop=smart&auto=webp&s=1eed1f33f4af29d8ca52ef58e056ee998bad5523', 'width': 1080}], 'source': {'height': 1166, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?auto=webp&s=633dfba3cbab792ec91e073aa72c92c4c67218be', 'width': 2047}, 'variants': {}}]} |
For coders! free&open DeepSeek R1 > $20 o3-mini with rate-limit! | 213 | 2025-02-06T06:43:10 | BidHot8598 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iiwgou | false | null | t3_1iiwgou | /r/LocalLLaMA/comments/1iiwgou/for_coders_freeopen_deepseek_r1_20_o3mini_with/ | false | false | 213 | {'enabled': True, 'images': [{'id': '8zxdc3q-9NMxies4H1zyRoUu_ot3NWR0xruBWoEkNiQ', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/n9sntvkvsghe1.jpeg?width=108&crop=smart&auto=webp&s=3ab931ccea89519395763c605c8ca3e9e8233ed7', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/n9sntvkvsghe1.jpeg?width=216&crop=smart&auto=webp&s=ccbde0c34aab34ccc4b9228de631c717282c3850', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/n9sntvkvsghe1.jpeg?width=320&crop=smart&auto=webp&s=1736e1f9c21ab238ca0484e206ad1057055ec2ba', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/n9sntvkvsghe1.jpeg?width=640&crop=smart&auto=webp&s=a41575abce1f1f8a8cb02b9965f65a613e1a0174', 'width': 640}, {'height': 1025, 'url': 'https://preview.redd.it/n9sntvkvsghe1.jpeg?width=960&crop=smart&auto=webp&s=f7774e390a18c75718a0c55bd9d59a7dc1bee305', 'width': 960}, {'height': 1154, 'url': 'https://preview.redd.it/n9sntvkvsghe1.jpeg?width=1080&crop=smart&auto=webp&s=e3992f38a31fa8a8f32c8e182ac8d747e0036d9d', 'width': 1080}], 'source': {'height': 1866, 'url': 'https://preview.redd.it/n9sntvkvsghe1.jpeg?auto=webp&s=27a296c0ece7309f97fa7834209bc09485ffdf5d', 'width': 1746}, 'variants': {}}]} |
are consumer-grade gpu/cpu clusters being overlooked for ai? | 1 | [removed] | 2025-02-06T06:47:12 | https://www.reddit.com/r/LocalLLaMA/comments/1iiwit3/are_consumergrade_gpucpu_clusters_being/ | Status-Hearing-4084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiwit3 | false | null | t3_1iiwit3 | /r/LocalLLaMA/comments/1iiwit3/are_consumergrade_gpucpu_clusters_being/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PFsnl8C1Hpin770f4PKI9zgARMzHaLYwQg0cAVN7t6E', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=108&crop=smart&auto=webp&s=5f75b032c1663578d7b17b1651bb1a5281d1ee3c', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=216&crop=smart&auto=webp&s=f2e505c8cfb27d33c69087ae34eaf7f88675fe23', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=320&crop=smart&auto=webp&s=8de89650bd12ecf200478fea6c973919c5b2a5cf', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=640&crop=smart&auto=webp&s=eb00fda063a7636510c6beb8e955ab40beced7ac', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=960&crop=smart&auto=webp&s=1d10b60389fb93b5217b782e46f6f4fdd7080d2c', 'width': 960}, {'height': 615, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?width=1080&crop=smart&auto=webp&s=1eed1f33f4af29d8ca52ef58e056ee998bad5523', 'width': 1080}], 'source': {'height': 1166, 'url': 'https://external-preview.redd.it/fJ991AT_LJ3bt4oRc__r3RGf_qoagq3f1Oju3l6F9Ys.jpg?auto=webp&s=633dfba3cbab792ec91e073aa72c92c4c67218be', 'width': 2047}, 'variants': {}}]} |
Over-Tokenized Transformer - New paper shows massively increasing the input vocabulary (100x larger or more) of a dense LLM significantly enhances model performance for the same training cost | 378 | 2025-02-06T06:55:03 | https://www.reddit.com/gallery/1iiwmsq | jd_3d | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1iiwmsq | false | null | t3_1iiwmsq | /r/LocalLLaMA/comments/1iiwmsq/overtokenized_transformer_new_paper_shows/ | false | false | 378 | null |
Hosting ollama on a Proxmox LXC Container with GPU Passthrough. | 1 | [removed] | 2025-02-06T07:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/1iiws0m/hosting_ollama_on_a_proxmox_lxc_container_with/ | ninja-con-gafas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiws0m | false | null | t3_1iiws0m | /r/LocalLLaMA/comments/1iiws0m/hosting_ollama_on_a_proxmox_lxc_container_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CyrwyW3svv8VNE_jd1zkznXRW2Cvu66okDbnCcPaaTI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/zoR5YaCa-HzqWsuXYMaWeo4HCte4_yxd0hM64SoHtec.jpg?width=108&crop=smart&auto=webp&s=16204dfd75cef969fd79cc104d2cb5a24d4d6ad9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/zoR5YaCa-HzqWsuXYMaWeo4HCte4_yxd0hM64SoHtec.jpg?width=216&crop=smart&auto=webp&s=68ff3d231cd037f97c2266ab22b76cc579981704', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/zoR5YaCa-HzqWsuXYMaWeo4HCte4_yxd0hM64SoHtec.jpg?width=320&crop=smart&auto=webp&s=d20dd11bc397dfb574429ac4555486c191c185cf', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/zoR5YaCa-HzqWsuXYMaWeo4HCte4_yxd0hM64SoHtec.jpg?width=640&crop=smart&auto=webp&s=ad2ed778e79a397b3ddad0f5004f83360918842e', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/zoR5YaCa-HzqWsuXYMaWeo4HCte4_yxd0hM64SoHtec.jpg?width=960&crop=smart&auto=webp&s=0908b7bc9a6853587060a1f6673e69dfd7de11d9', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/zoR5YaCa-HzqWsuXYMaWeo4HCte4_yxd0hM64SoHtec.jpg?width=1080&crop=smart&auto=webp&s=e99c491801b92a7735ab76362fd852ff13ec2689', 'width': 1080}], 'source': {'height': 719, 'url': 'https://external-preview.redd.it/zoR5YaCa-HzqWsuXYMaWeo4HCte4_yxd0hM64SoHtec.jpg?auto=webp&s=69b3b2feaca6e6c620446f11606e4e4a7b612e35', 'width': 1280}, 'variants': {}}]} |
GRPO VRAM Requirements For the GPU Poor | 1 | [removed] | 2025-02-06T07:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/1iiwvfi/grpo_vram_requirements_for_the_gpu_poor/ | FallMindless3563 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiwvfi | false | null | t3_1iiwvfi | /r/LocalLLaMA/comments/1iiwvfi/grpo_vram_requirements_for_the_gpu_poor/ | false | false | 1 | {'enabled': False, 'images': [{'id': '_qWGecEfGMm1eeGD4X8CnRybCdJ4l14ppmOOHmDnuUk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W0Ehcpo0-8qYZpl8LHiCE99fMRsjDpr0W4x-ddzCfnA.jpg?width=108&crop=smart&auto=webp&s=7f8cc3b83028f3bdbd6ae71669c62a547399c9b5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/W0Ehcpo0-8qYZpl8LHiCE99fMRsjDpr0W4x-ddzCfnA.jpg?width=216&crop=smart&auto=webp&s=0df275bc57e1e6cac90ccdc4123840209111c6ac', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/W0Ehcpo0-8qYZpl8LHiCE99fMRsjDpr0W4x-ddzCfnA.jpg?width=320&crop=smart&auto=webp&s=94a766f4b2f4d9633dbc583efaa19f8b34260f78', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/W0Ehcpo0-8qYZpl8LHiCE99fMRsjDpr0W4x-ddzCfnA.jpg?width=640&crop=smart&auto=webp&s=96a7e5885c02356de5bc3f2db54dd7793996a01b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/W0Ehcpo0-8qYZpl8LHiCE99fMRsjDpr0W4x-ddzCfnA.jpg?width=960&crop=smart&auto=webp&s=51653bce560250eef4497f04c2ac6bb1a2d4bd68', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/W0Ehcpo0-8qYZpl8LHiCE99fMRsjDpr0W4x-ddzCfnA.jpg?width=1080&crop=smart&auto=webp&s=c88ed869cb8996697bee7a8bd1d508490d12f34b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/W0Ehcpo0-8qYZpl8LHiCE99fMRsjDpr0W4x-ddzCfnA.jpg?auto=webp&s=507479445b5bffce59e97ea07bd487f012b7ae2a', 'width': 1200}, 'variants': {}}]} |
Deep Agent Released R1-V: Reinforcing Super Generalization in Vision-Language Models with Less than $3 | 1 | 2025-02-06T07:20:14 | https://github.com/Deep-Agent/R1-V | Lazy_Badger_9941 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1iiwzug | false | null | t3_1iiwzug | /r/LocalLLaMA/comments/1iiwzug/deep_agent_released_r1v_reinforcing_super/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aHAdMMvbMUE734a8_uC0efOaNhgg1lFNvUxokIN0dVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G-Hvbotzay5oZa7-9Xsb8yftqm_CCRCGAtXI3VpLzK0.jpg?width=108&crop=smart&auto=webp&s=eaf19f8a7cd4af4b1f25d65802e8d4f862330d67', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/G-Hvbotzay5oZa7-9Xsb8yftqm_CCRCGAtXI3VpLzK0.jpg?width=216&crop=smart&auto=webp&s=88c6588907680fb5dc5950e65e62c634bde525f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/G-Hvbotzay5oZa7-9Xsb8yftqm_CCRCGAtXI3VpLzK0.jpg?width=320&crop=smart&auto=webp&s=003cf980ea9483b0fffafae29a3f0bb73460ae8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/G-Hvbotzay5oZa7-9Xsb8yftqm_CCRCGAtXI3VpLzK0.jpg?width=640&crop=smart&auto=webp&s=b6b9a548abd49ad81d0507b601b3f29dac355e74', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/G-Hvbotzay5oZa7-9Xsb8yftqm_CCRCGAtXI3VpLzK0.jpg?width=960&crop=smart&auto=webp&s=7314703dd7ea026385ed05610a94eb1b3ad57ee5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/G-Hvbotzay5oZa7-9Xsb8yftqm_CCRCGAtXI3VpLzK0.jpg?width=1080&crop=smart&auto=webp&s=e2fa9f2cace1a29754b2e662745e6da7aee7675a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/G-Hvbotzay5oZa7-9Xsb8yftqm_CCRCGAtXI3VpLzK0.jpg?auto=webp&s=4225141cae713b73aac23e212b54b1d9ceec5678', 'width': 1200}, 'variants': {}}]} |
Performance benchmarks for various GPUs? | 2 | I'm considering updating my GPU, but I've been having difficulty finding good benchmarks for the performance (tokens/s) of various models on various GPUs. Can anyone point me in the right direction? I'm mainly interested in comparing how the 20b-90b models run on multi-GPU clusters with the RTX 4090, RTX 5090, or 7900 XTX. The idea would be to optimize the price per token/s and quality for the current models. Seems like I could get much more VRAM with AMD, but is the quality/speed much better if I spend $10k on AMD vs $10k on a (smaller) NVDA cluster? | 2025-02-06T07:39:35 | https://www.reddit.com/r/LocalLLaMA/comments/1iix9bh/performance_benchmarks_for_various_gpus/ | Mysterious_Value_219 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iix9bh | false | null | t3_1iix9bh | /r/LocalLLaMA/comments/1iix9bh/performance_benchmarks_for_various_gpus/ | false | false | self | 2 | null |
No more gemini experimental 1206? | 0 | I've been using gemini experimental 1206 for two months and can't live without it now! But now it is gone. There are new models in Google AI Studio. Does anybody know which one I should use instead? | 2025-02-06T07:54:21 | https://www.reddit.com/r/LocalLLaMA/comments/1iixggg/no_more_gemma_experimental_1206/ | Hazardhazard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iixggg | false | null | t3_1iixggg | /r/LocalLLaMA/comments/1iixggg/no_more_gemma_experimental_1206/ | false | false | self | 0 | null |
Russian ban for AI is the same as in China | 1 | [removed] | 2025-02-06T07:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1iixhdm/russian_ban_for_ai_is_the_same_as_in_china/ | IntroductionFull4871 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iixhdm | false | null | t3_1iixhdm | /r/LocalLLaMA/comments/1iixhdm/russian_ban_for_ai_is_the_same_as_in_china/ | false | false | self | 1 | null |
Model like Character AI? (Human-like responses) | 4 | Gonna be completely honest here: I just need an AI that I can chat with as a friend.
It doesn't have to be good in benchmarks, or for following instructions. I just want one that you can have a coherent conversation with that actually feels like a human.
Every LLM that I've ever tried behaves like ChatGPT to some extent. Even models fine-tuned to remove GPTisms still don't behave like real people.
I've tried system prompts, and they still didn't give the desired results.
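For reference, this is the kind of thing I've been trying (a sketch with llama-cpp-python; the system prompt and sampler values are just my latest attempt, not a known-good recipe):

```python
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=4096)  # any GGUF under ~28B

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a casual friend texting back. "
         "Short, imperfect, human replies. No lists, no 'As an AI'."},
        {"role": "user", "content": "ugh today was exhausting"},
    ],
    temperature=1.0,
    repeat_penalty=1.1,
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```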
If anyone has any suggestions or ideas, preferably GGUF models under 28B parameters, I'd be grateful for them. | 2025-02-06T08:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/1iixkwg/model_like_character_ai_humanlike_responses/ | RandumbRedditor1000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iixkwg | false | null | t3_1iixkwg | /r/LocalLLaMA/comments/1iixkwg/model_like_character_ai_humanlike_responses/ | false | false | self | 4 | null |
Google confirms Gemma 3 is cooking 🔥 | 1 | https://x.com/osanseviero/status/1887247587776069957 | 2025-02-06T08:08:04 | MMAgeezer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iixn25 | false | null | t3_1iixn25 | /r/LocalLLaMA/comments/1iixn25/google_confirms_gemma_3_is_cooking/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CtO58mB99vUrw-CXZs7sg9zYwr692chgNjYOjBunJLE', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/gp9g04u68hhe1.png?width=108&crop=smart&auto=webp&s=e842bfd4fe984a11a4bc5f224cb0b6b76c08e489', 'width': 108}, {'height': 289, 'url': 'https://preview.redd.it/gp9g04u68hhe1.png?width=216&crop=smart&auto=webp&s=ca3df77f52d3a330cdb9576ed75728fb251c4b49', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/gp9g04u68hhe1.png?width=320&crop=smart&auto=webp&s=3e25f9e520e1f39d449793d1b075dd914e062b26', 'width': 320}, {'height': 857, 'url': 'https://preview.redd.it/gp9g04u68hhe1.png?width=640&crop=smart&auto=webp&s=1151717f1b0ddd357d69517c5853065b2c70f49b', 'width': 640}, {'height': 1286, 'url': 'https://preview.redd.it/gp9g04u68hhe1.png?width=960&crop=smart&auto=webp&s=b50a5e82085b71acdb513912d5f836522376265f', 'width': 960}, {'height': 1447, 'url': 'https://preview.redd.it/gp9g04u68hhe1.png?width=1080&crop=smart&auto=webp&s=08dc6b0d2680d96c72a97517b7d190a8c894a5be', 'width': 1080}], 'source': {'height': 1801, 'url': 'https://preview.redd.it/gp9g04u68hhe1.png?auto=webp&s=75fc7d5d099bfd9328deee99ead98a52db98b6fe', 'width': 1344}, 'variants': {}}]} |
Smart cross-Lingual Re-Ranking Model | 1 | I've been using reranker models for months, but fucking hell, none of them can do cross-language matching correctly.
They have very basic matching capabilities: a sentence translated 1:1 will be matched with no issue, but as soon as it's more subtle they fail.
I built two datasets that require cross-language capabilities.
One, called "mixed", requires only a basic understanding of the sentence, which is pretty much the question translated into another language:
{
"question": "When was Peter Donkey Born ?",
"needles": [
"Peter Donkey est n\u00e9 en novembre 1996",
"Peter Donkey ese nacio en 1996",
"Peter Donkey wurde im November 1996 geboren"
]
},
Another dataset requires much more grey matter:
{
"question": "Что используется, чтобы утолить жажду?",
"needles": [
"Nature's most essential liquid for survival.",
"La source de vie par excellence.",
"El elemento más puro y necesario.",
"Die Grundlage allen Lebens."
]
}
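Scoring is just cosine similarity between the question embedding and each needle embedding, roughly like this (a sketch using the embedding model I mention at the end; whether that checkpoint wants an instruction prefix is an open question on my side):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1.5",
    trust_remote_code=True,  # assumption: the repo may ship custom code
)

question = "Что используется, чтобы утолить жажду?"
needles = [
    "Nature's most essential liquid for survival.",
    "La source de vie par excellence.",
    "El elemento más puro y necesario.",
    "Die Grundlage allen Lebens.",
]

q = model.encode([question], normalize_embeddings=True)
n = model.encode(needles, normalize_embeddings=True)
for needle, score in sorted(zip(needles, (q @ n.T)[0]), key=lambda x: -x[1]):
    print(f"{score:.3f}  {needle}")
```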
When no cross-language 'thinking' is required (question in language A, needles also in language A), the reranker models I used always worked: bge, nomic, etc.
But as soon as it requires some thinking and it's cross-language (A->B) all languages fails, [the only place I manage to get some good results](https://i.imgur.com/o4Bovh4.png) are with the following **embeddings** model (not even rerankers) : `HIT-TMG/KaLM-embedding-multilingual-mini-instruct-v1.5` | 2025-02-06T08:16:53 | https://www.reddit.com/r/LocalLLaMA/comments/1iixr9z/smart_crosslingual_reranking_model/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iixr9z | false | null | t3_1iixr9z | /r/LocalLLaMA/comments/1iixr9z/smart_crosslingual_reranking_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qByXxmT8TedFAjmhYfKbPvdUnVurdOWDtIrtxpJFKXk', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/oAL7irX_FRfm32vPJ0OIP_EAqfnh3jJKXbDy3Q6aXK0.png?width=108&crop=smart&auto=webp&s=80c2968aca7fa0bbd304196a10c4589dd86821c9', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/oAL7irX_FRfm32vPJ0OIP_EAqfnh3jJKXbDy3Q6aXK0.png?width=216&crop=smart&auto=webp&s=81e8f7b6c71332a503fb03dca5960835cc94c8eb', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/oAL7irX_FRfm32vPJ0OIP_EAqfnh3jJKXbDy3Q6aXK0.png?width=320&crop=smart&auto=webp&s=a9cf6fa7d45c08eac62b1ddc0f1b4abaf6c3a55c', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/oAL7irX_FRfm32vPJ0OIP_EAqfnh3jJKXbDy3Q6aXK0.png?width=640&crop=smart&auto=webp&s=c4420425b9459d82a44501761dbe383370e221c8', 'width': 640}, {'height': 578, 'url': 'https://external-preview.redd.it/oAL7irX_FRfm32vPJ0OIP_EAqfnh3jJKXbDy3Q6aXK0.png?width=960&crop=smart&auto=webp&s=ef463bfc85da7c388d1641172bb35293831cfbc6', 'width': 960}, {'height': 650, 'url': 'https://external-preview.redd.it/oAL7irX_FRfm32vPJ0OIP_EAqfnh3jJKXbDy3Q6aXK0.png?width=1080&crop=smart&auto=webp&s=28f61ab10e70340d1c26edacf07a4a55761fec48', 'width': 1080}], 'source': {'height': 776, 'url': 'https://external-preview.redd.it/oAL7irX_FRfm32vPJ0OIP_EAqfnh3jJKXbDy3Q6aXK0.png?auto=webp&s=01f26c01175057209d694e013c3d7e730efa1657', 'width': 1288}, 'variants': {}}]} |
Running DeepSeek Privately & Locally on My Website | 0 | 2025-02-06T08:26:57 | https://youtube.com/watch?v=tR4cXjJxnHM&si=7IxsR80qPLPx1XSp | DustinBrett | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1iixvyq | false | {'oembed': {'author_name': 'Dustin Brett', 'author_url': 'https://www.youtube.com/@DustinBrett', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/tR4cXjJxnHM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Running DeepSeek Privately & Locally on My Website"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/tR4cXjJxnHM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Running DeepSeek Privately & Locally on My Website', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1iixvyq | /r/LocalLLaMA/comments/1iixvyq/running_deepseek_privately_locally_on_my_website/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'UasGXXSGBajLlEuJbLzxV6RWAW-o6lZ2I4zB3kFxqBk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XQLqgLA5IdVn-LA_C-VGNQOrKNDwjQiHGzhRrpPx_d8.jpg?width=108&crop=smart&auto=webp&s=70994a959cebd5fcfddca4d8ecceecea9325ad4e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XQLqgLA5IdVn-LA_C-VGNQOrKNDwjQiHGzhRrpPx_d8.jpg?width=216&crop=smart&auto=webp&s=d46ad5eef958b2003f53f80dbb45ce7ec1e789c9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XQLqgLA5IdVn-LA_C-VGNQOrKNDwjQiHGzhRrpPx_d8.jpg?width=320&crop=smart&auto=webp&s=7d9ba04517d785346a847544ec8fc423bd0b17f1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XQLqgLA5IdVn-LA_C-VGNQOrKNDwjQiHGzhRrpPx_d8.jpg?auto=webp&s=401a46b475fac7d5758b4d2b9d5af7aed03c32d1', 'width': 480}, 'variants': {}}]} |
Open source chat UI similar to LM Studio that supports the big API providers? | 3 | I'm looking for a desktop app like LM Studio, but one that also allows you to use the DeepSeek/Claude/OpenAI APIs in addition to local models. As far as I know, most of the UIs that are being discussed here are only for local models. | 2025-02-06T08:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/1iixxc2/open_source_chat_ui_similar_to_lm_studio_that/ | generalamitt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iixxc2 | false | null | t3_1iixxc2 | /r/LocalLLaMA/comments/1iixxc2/open_source_chat_ui_similar_to_lm_studio_that/ | false | false | self | 3 | null |
MNN android support for deepseek r1 1.5b | 24 |
mainpage: [MnnLlmApp](https://github.com/alibaba/MNN/blob/master/project/android/apps/MnnLlmApp/README.md)
apk download: [version0.2](https://github.com/alibaba/MNN/blob/master/project/android/apps/MnnLlmApp/README.md#version-02)
**FAQ:**
* Why is the app not available on the App Store?
* The application is currently in its early development stages. It will be published on the App Store once it reaches a stable and more mature state.
* Will there be support for iOS?
* Yes, an iOS version is under development. We anticipate releasing it within the next few weeks.
https://i.redd.it/vuupe0hoahhe1.gif
| 2025-02-06T08:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1iixy36/mnn_android_support_for_deepseek_r1_15b/ | Juude89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iixy36 | false | null | t3_1iixy36 | /r/LocalLLaMA/comments/1iixy36/mnn_android_support_for_deepseek_r1_15b/ | false | false | 24 | {'enabled': False, 'images': [{'id': '6UX05TdF-gBFUuKhwBk4CVg5XNc3YFQ40J_1tx61cDM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gk2Ulk7ekzGlEE11Q-IaL-ilF-aztcncit7oLtiKI9Q.jpg?width=108&crop=smart&auto=webp&s=a8ef014c81ec73cd92d007088981d4838e379218', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gk2Ulk7ekzGlEE11Q-IaL-ilF-aztcncit7oLtiKI9Q.jpg?width=216&crop=smart&auto=webp&s=21892c1524dd94553bc5a3acfefbeaa5fe0896ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gk2Ulk7ekzGlEE11Q-IaL-ilF-aztcncit7oLtiKI9Q.jpg?width=320&crop=smart&auto=webp&s=48fbcf90f9dfb0d0d0224a7dd9d40c0301ed4ce9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gk2Ulk7ekzGlEE11Q-IaL-ilF-aztcncit7oLtiKI9Q.jpg?width=640&crop=smart&auto=webp&s=0994162db48ae28be3cfe5c5566969dfdf23240d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gk2Ulk7ekzGlEE11Q-IaL-ilF-aztcncit7oLtiKI9Q.jpg?width=960&crop=smart&auto=webp&s=45e6526aec38bf61bed7477ca3d27690a4122bb7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gk2Ulk7ekzGlEE11Q-IaL-ilF-aztcncit7oLtiKI9Q.jpg?width=1080&crop=smart&auto=webp&s=d2963e06302b2e1abd4c28a31230897e2ff581e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gk2Ulk7ekzGlEE11Q-IaL-ilF-aztcncit7oLtiKI9Q.jpg?auto=webp&s=a69061ecb034ad12364feecd08e6c6406b256816', 'width': 1200}, 'variants': {}}]} |
Which model do you think is better at solving math problems? | 1 | [removed] | 2025-02-06T09:10:05 | https://www.reddit.com/r/LocalLLaMA/comments/1iiygcw/which_model_do_you_think_is_better_at_solving/ | General-Finger1159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiygcw | false | null | t3_1iiygcw | /r/LocalLLaMA/comments/1iiygcw/which_model_do_you_think_is_better_at_solving/ | false | false | 1 | null |
[2502.03387] LIMO: Less is More for Reasoning | 15 | 2025-02-06T09:15:58 | https://arxiv.org/abs/2502.03387 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1iiyj4q | false | null | t3_1iiyj4q | /r/LocalLLaMA/comments/1iiyj4q/250203387_limo_less_is_more_for_reasoning/ | false | false | default | 15 | null |
What is the current best local AI model for development? | 1 | [removed] | 2025-02-06T09:38:16 | https://www.reddit.com/r/LocalLLaMA/comments/1iiytrf/what_is_the_current_best_local_ai_model_for/ | CountChick321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiytrf | false | null | t3_1iiytrf | /r/LocalLLaMA/comments/1iiytrf/what_is_the_current_best_local_ai_model_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401}, 'variants': {}}]} |
Which is currently the best Local LLM for development? | 1 | [removed] | 2025-02-06T09:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/1iiyvam/which_is_currently_the_best_local_llm_for/ | CountChick321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiyvam | false | null | t3_1iiyvam | /r/LocalLLaMA/comments/1iiyvam/which_is_currently_the_best_local_llm_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401}, 'variants': {}}]} |
Hugging Face have released a new Spaces search. Over 400k AI Apps accessible in intuitive way | 1 | 2025-02-06T10:11:36 | https://v.redd.it/5s9cnp9mthhe1 | Nunki08 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iizahf | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5s9cnp9mthhe1/DASHPlaylist.mpd?a=1741428710%2CMDFmZGFhZDAyZDcyMDFhYTA2N2M5M2UyOTg5YzU5OTZmY2VmOGVmMGRjYTFjYzMyZDQyNmIzMDZkY2YyODdjNA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/5s9cnp9mthhe1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/5s9cnp9mthhe1/HLSPlaylist.m3u8?a=1741428710%2CZWE5M2UyZTY3MzVkMTNlOWI3NzE2MGU0NWRmMmE1MTUwZTkwOWE2Zjg0NWI4ODdlYTgwMzhjZmEzMWUxM2FmNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5s9cnp9mthhe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1iizahf | /r/LocalLLaMA/comments/1iizahf/hugging_face_have_released_a_new_spaces_search/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OXdqbWF1OW10aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/OXdqbWF1OW10aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=108&crop=smart&format=pjpg&auto=webp&s=f0e536f54dd395f2eb322bf4c167a4fb58e90042', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/OXdqbWF1OW10aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=216&crop=smart&format=pjpg&auto=webp&s=ef3070f2c3c73e1d6e002bbc48341982ef13ac74', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/OXdqbWF1OW10aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=320&crop=smart&format=pjpg&auto=webp&s=4248929682c9c35cdc22f8ffed3bc902b8919072', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/OXdqbWF1OW10aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=640&crop=smart&format=pjpg&auto=webp&s=7be4ae983b3f0887e24363021cfb80ccaf388c1d', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OXdqbWF1OW10aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?format=pjpg&auto=webp&s=2a8f5cfb3a36b11b1b662ae67c1836d58e304f89', 'width': 720}, 'variants': {}}]} |
gemini 2.0 flash api just came out | 1 | [removed] | 2025-02-06T10:12:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iizb4g/gemini_20_flash_api_just_came_out/ | Glum_Ad7895 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iizb4g | false | null | t3_1iizb4g | /r/LocalLLaMA/comments/1iizb4g/gemini_20_flash_api_just_came_out/ | false | false | self | 1 | null |
Hugging Face has released a new Spaces search. Over 400k AI Apps accessible in intuitive way. | 676 | 2025-02-06T10:14:23 | https://v.redd.it/50vlqmrkuhhe1 | Nunki08 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iizbxs | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/50vlqmrkuhhe1/DASHPlaylist.mpd?a=1741428878%2CNWEyMzM2NDRkZGI5ZGUxZWU1Njk2MTk1YjNiOTJhOTY1MDJkYzc4ZjFlM2M1MTM5YTI5MWY1OTc4ODllNWE1NA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/50vlqmrkuhhe1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/50vlqmrkuhhe1/HLSPlaylist.m3u8?a=1741428878%2CNWVjNzljNDBmOWM5MjZmNzRhODdlMjczNmI0NmQyYmNlZGZjOWIzODNkNDc1Mjc3NTQ3NDY5ZGZmZTdhMjY4NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/50vlqmrkuhhe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1iizbxs | /r/LocalLLaMA/comments/1iizbxs/hugging_face_has_released_a_new_spaces_search/ | false | false | 676 | {'enabled': False, 'images': [{'id': 'bDJtMXNycmt1aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/bDJtMXNycmt1aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=108&crop=smart&format=pjpg&auto=webp&s=a31f0d18dfdd03577ae4916668f9b0c0308b9397', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/bDJtMXNycmt1aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=216&crop=smart&format=pjpg&auto=webp&s=6be68021f6a48611b1befaa2e1ba3f275b53acb1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/bDJtMXNycmt1aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=320&crop=smart&format=pjpg&auto=webp&s=1c84872b42b4046407a97a1075b0ce25622bb782', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/bDJtMXNycmt1aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?width=640&crop=smart&format=pjpg&auto=webp&s=9c4a248fb73b4e326fc0078214ee192fde467f88', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bDJtMXNycmt1aGhlMQxr13kQ4l494R_6FN5L7tr44dIiu9kzOIdUQI5GS5Z5.png?format=pjpg&auto=webp&s=aae105977411faf7b478a8d407657524adba2e04', 'width': 720}, 'variants': {}}]} |
Which models are good at reasoning? | 0 | Which models are good at reasoning? (And, of course, have good world knowledge to base their reasoning on.)
(I am a beginner with local LLMs, so bear with me. I tried both Mistral and deepseek locally and they were terrible at reasoning. But maybe I am expecting too much for a local LLM? Are there any better LLMs for this purpose?) | 2025-02-06T10:15:28 | https://www.reddit.com/r/LocalLLaMA/comments/1iizcin/which_models_are_good_at_reasoning/ | ExtremePresence3030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iizcin | false | null | t3_1iizcin | /r/LocalLLaMA/comments/1iizcin/which_models_are_good_at_reasoning/ | false | false | self | 0 | null |
Which models are good at reasoning? | 4 | Which models are good at reasoning? (And, of course, have good world knowledge to base their reasoning on.)
(I am a beginner with local LLMs, so bear with me. I tried both Mistral and deepseek locally and they were terrible at reasoning. But maybe I am expecting too much for a local LLM? Are there any better LLMs for this purpose?) | 2025-02-06T10:15:29 | https://www.reddit.com/r/LocalLLaMA/comments/1iizciw/which_models_are_good_at_reasoning/ | ExtremePresence3030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iizciw | false | null | t3_1iizciw | /r/LocalLLaMA/comments/1iizciw/which_models_are_good_at_reasoning/ | false | false | self | 4 | null |
Should I be worried about global warming by generating too many tokens for simple questions? | 0 | A simple prompt shouldn't need overthinking, but...
Gemini:
https://preview.redd.it/yf86mjb0whhe1.png?width=1032&format=png&auto=webp&s=511d9ee40d80a8deee2c09752c0ef779cefeca2f
R1:
https://preview.redd.it/b77zo8x6whhe1.png?width=1013&format=png&auto=webp&s=a5b4246fb5fd37179089d87430ccc905597ca0a4
| 2025-02-06T10:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/1iizgm9/should_i_be_worried_about_global_warming_by/ | Reasonable-Climate66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iizgm9 | false | null | t3_1iizgm9 | /r/LocalLLaMA/comments/1iizgm9/should_i_be_worried_about_global_warming_by/ | false | false | 0 | null |