| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns]) | url (string, 0–780 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns]) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, ⌀) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, ⌀) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I made Phi-14b into a (primitive) reasoner using a prototype MLX-GRPO trainer | 42 | 2025-02-04T02:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ih6s1a/i_made_phi14b_into_a_primitive_reasoner_using_a/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih6s1a | false | null | t3_1ih6s1a | /r/LocalLLaMA/comments/1ih6s1a/i_made_phi14b_into_a_primitive_reasoner_using_a/ | false | false | 42 | ⌀ |
How to Build a Chatbot for Internal Company Documents? Need Guidance! | 1 | [removed] | 2025-02-04T02:21:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ih74e4/how_to_build_a_chatbot_for_internal_company/ | Worth-Switch2352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih74e4 | false | null | t3_1ih74e4 | /r/LocalLLaMA/comments/1ih74e4/how_to_build_a_chatbot_for_internal_company/ | false | false | self | 1 | null |
PSA: MLX-GRPO trainer prototype (with QLoRA support) is functional | 16 | 2025-02-04T02:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ih7abf/psa_mlxgrpo_trainer_prototype_with_qlora_support/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih7abf | false | null | t3_1ih7abf | /r/LocalLLaMA/comments/1ih7abf/psa_mlxgrpo_trainer_prototype_with_qlora_support/ | false | false | 16 | null |
AttributeError: 'Qwen2Model' object has no attribute 'lm_head' | 1 | Total shot in the dark, here goes. I get this error when trying to extract LoRAs from Qwen models using mergekit. It doesn't happen for all Qwen models, just some of them, and I cannot figure out what causes it, or why some succeed without issue. This particular example is from "mobiuslabsgmbh_DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1", while others like "huihui-ai_DeepSeek-R1-Distill-Qwen-7B-abliterated" work just fine. I am running the latest version of mergekit, and I have the latest transformers / torch installed, as far as I can tell.
torch==2.5.1
transformers==4.48.1
File "/media/user/backup/llm/./mergekit/mergekit/scripts/extract_lora.py", line 672, in main
) = validate_and_combine_details(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/user/backup/llm/./mergekit/mergekit/scripts/extract_lora.py", line 184, in validate_and_combine_details
finetuned_model_details, finetuned_vocab_size = get_model_details(
^^^^^^^^^^^^^^^^^^
File "/media/user/backup/llm/./mergekit/mergekit/scripts/extract_lora.py", line 116, in get_model_details
pretrained_model = AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/user/backup/llm/mergekit/.venv/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/user/backup/llm/mergekit/.venv/lib/python3.12/site-packages/transformers/modeling_utils.py", line 4224, in from_pretrained
) = cls._load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/user/backup/llm/mergekit/.venv/lib/python3.12/site-packages/transformers/modeling_utils.py", line 4794, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/user/backup/llm/mergekit/.venv/lib/python3.12/site-packages/transformers/modeling_utils.py", line 873, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/media/user/backup/llm/mergekit/.venv/lib/python3.12/site-packages/accelerate/utils/modeling.py", line 248, in set_module_tensor_to_device
new_module = getattr(module, split)
^^^^^^^^^^^^^^^^^^^^^^
File "/media/user/backup/llm/mergekit/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
raise AttributeError(
AttributeError: 'Qwen2Model' object has no attribute 'lm_head' | 2025-02-04T02:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ih7f80/attributeerror_qwen2model_object_has_no_attribute/ | thunder9861 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih7f80 | false | null | t3_1ih7f80 | /r/LocalLLaMA/comments/1ih7f80/attributeerror_qwen2model_object_has_no_attribute/ | false | false | self | 1 | null |
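One plausible lead (my guess from the config difference, not a confirmed mergekit diagnosis): small Qwen2-based checkpoints often set `tie_word_embeddings: true`, sharing the output head with the input embeddings, so a stray `lm_head` tensor in such a checkpoint can send the loader looking for a module that doesn't exist. A minimal sketch to compare the two repos (IDs reconstructed from the local cache folder names above):

```python
from transformers import AutoConfig

# Only config.json is fetched here, no weights, so this is cheap to run.
for repo in [
    "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.1",  # fails to extract
    "huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated",    # extracts fine
]:
    cfg = AutoConfig.from_pretrained(repo)
    print(repo, "-> tie_word_embeddings =", getattr(cfg, "tie_word_embeddings", None))
```

If the failing checkpoint reports `True` and the working one `False`, that difference would be worth including in a mergekit bug report.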
[Breakthrough] Running Deepseek-R1 671B on CPU: Breaking the Memory Bandwidth Barrier | 1 | [removed] | 2025-02-04T02:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ih7r1a/breakthrough_running_deepseekr1_671b_on_cpu/ | Status-Hearing-4084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih7r1a | false | null | t3_1ih7r1a | /r/LocalLLaMA/comments/1ih7r1a/breakthrough_running_deepseekr1_671b_on_cpu/ | false | false | 1 | ⌀ |
[Discussion] Running Deepseek-R1 671B on CPU: Breaking the Memory Bandwidth Barrier | 1 | [removed] | 2025-02-04T02:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ih7svv/discussion_running_deepseekr1_671b_on_cpu/ | Status-Hearing-4084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih7svv | false | null | t3_1ih7svv | /r/LocalLLaMA/comments/1ih7svv/discussion_running_deepseekr1_671b_on_cpu/ | false | false | self | 1 | ⌀ |
LLMs can be great opportunities for underprivileged groups. | 2 | We always talk about AI taking jobs, but what if we flipped the perspective? What if LLMs could provide knowledge and expertise to people who *never* had access to them in the first place?
In many parts of the world, the biggest problem isn’t automation—it’s the *lack* of human expertise. Kids grow up without good teachers, patients live without doctors, and without information, society cannot move forward. The knowledge gap is huge, not because people don’t want to learn, but because there simply aren’t enough experts to go around.
This is where LLMs could be a game-changer. They can act as:
* A substitute teacher for students in remote areas who don’t have access to good educators.
* A medical guide for basic health advice when no doctor is available.
* A helper for anything they are good at that the average person is not; they do not have to be perfect, just better than what people already have.
Even though LLMs make mistakes, they still have a level of intelligence that in many areas matches the ability of a well-trained human.
Another one of the biggest advantages for people in non-English-speaking areas? **Multilingual support.** LLMs can translate and provide information in languages that traditional educational or professional resources never covered. This lowers the barrier for people who would otherwise be locked out of global knowledge just because they don’t speak a dominant language like English.
Of course, this isn’t magic. To actually make this work, underprivileged communities need:
1. **Affordable internet and devices**—many don’t even have smartphones or stable connections.
2. **Cheap or free AI service**—most LLMs are trained on Western-centric data, and that needs to change.
3. **Some basic information literacy in the first place**—LLMs are useful in many aspects but they are not gods or magic and you cannot use them by just praying in your heart.
The potential is massive: this can be a smart library and a good teacher that everyone can talk to, one that is patient, interactive, and knowledgeable. Not every country needs to (or can) compete at the top of the AI race, but I hope everyone can enjoy and benefit from the rise of a new information medium, just as with language, texts, papers, books, phones, computers, and the internet. | 2025-02-04T03:08:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ih82h4/llms_can_be_great_oppportunities_for/ | T0beyi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih82h4 | false | null | t3_1ih82h4 | /r/LocalLLaMA/comments/1ih82h4/llms_can_be_great_oppportunities_for/ | false | false | self | 2 | null |
Running Deepseek-R1 671B on CPU: Breaking the Memory Bandwidth Barrier | 1 | [removed] | 2025-02-04T03:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ih83ux/running_deepseekr1_671b_on_cpu_breaking_the/ | Status-Hearing-4084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih83ux | false | null | t3_1ih83ux | /r/LocalLLaMA/comments/1ih83ux/running_deepseekr1_671b_on_cpu_breaking_the/ | false | false | self | 1 | ⌀ |
Why does OpenAI's o3-mini reason in Chinese? | 0 | There has been a set of reports recently on X (formerly Twitter) where people report that the o3-mini reasoning process switches to Chinese despite the conversation being in English:
[Example 1](https://x.com/The_Vikhyat/status/1885552629696389459)
[Example 2](https://x.com/phirex/status/1885829916337344522)
[Example 3](https://x.com/michael_timbs/status/1885526162002329924)
[Example 4](https://x.com/RishabJainK/status/1877157192727466330)
[Speculation about this 1](https://x.com/ClementDelangue/status/1877767382120255792)
Article about this: [https://techcrunch.com/2025/01/14/openais-ai-reasoning-model-thinks-in-chinese-sometimes-and-no-one-really-knows-why/](https://techcrunch.com/2025/01/14/openais-ai-reasoning-model-thinks-in-chinese-sometimes-and-no-one-really-knows-why/)
I think there's a few possibilities:
1. Maybe OpenAI used QwQ/Marco-O1/Deepseek-R1 to generate reasoning traces? And this made it into their training dataset?
2. Possibly that English & Chinese are just dominant parts of existing reasoning datasets
3. Same as 2, but they are just dominant in the pre-training (web) datasets
It's interesting to consider what this means for the future of LLMs: will they mostly be fluent in English and Chinese and not much else? | 2025-02-04T03:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ih87oz/why_does_openais_o3mini_reason_in_chinese/ | fourDnet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih87oz | false | null | t3_1ih87oz | /r/LocalLLaMA/comments/1ih87oz/why_does_openais_o3mini_reason_in_chinese/ | false | false | self | 0 | ⌀ |
Human-as-an-API service. | 0 |
Humans are becoming an API for the AI ecosystem.
🤖 AI is generating content for AI itself, and people are going crazy over this!
First, you generate an 11,000-page detailed report using OpenAI's Deep Research feature on Pro mode, paying $200/month, only to feed it back into an LLM so it can understand the report better, creating a learning loop for the model itself.
The interesting part is that we are paying for content/knowledge curated from the web that was available for free earlier; the only value added here is “time”. | 2025-02-04T03:24:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ih8dbb/humanasanapi_service/ | Secure_Echo_971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih8dbb | false | null | t3_1ih8dbb | /r/LocalLLaMA/comments/1ih8dbb/humanasanapi_service/ | false | false | self | 0 | null |
Small LLM for autocorrect in Windows | 0 | Looking for a way to run a small LLM that is always in the background, correcting spelling errors or at least highlighting them, regardless of application (email, Notepad, etc.). Is there such a thing? | 2025-02-04T03:31:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ih8hxr/small_llm_for_autocorrect_in_windows/ | rorowhat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih8hxr | false | null | t3_1ih8hxr | /r/LocalLLaMA/comments/1ih8hxr/small_llm_for_autocorrect_in_windows/ | false | false | self | 0 | null |
What's better than Claude Sonnet 3.5? | 6 | Hi, any recommendations? Claude always limits me. Looking for an alternative for writing emails, business strategy, and other work stuff. I like Claude because it seems smart. My PC has 80GB of unified RAM. I just can't find anything online to compare. TIA | 2025-02-04T03:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ih8zba/whats_better_than_claude_sonnet_35/ | DynamicOnion_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih8zba | false | null | t3_1ih8zba | /r/LocalLLaMA/comments/1ih8zba/whats_better_than_claude_sonnet_35/ | false | false | self | 6 | null |
Any love for AMD? | 3 | This is just my biannual check-in to see if I can do anything with my modest 6950 XT yet. Hoping to find someone who will take pity on me and answer this before I spend many hours doing research online just to find out the answer is "no".
Thanks. | 2025-02-04T04:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ih928n/any_love_for_amd/ | encom81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih928n | false | null | t3_1ih928n | /r/LocalLLaMA/comments/1ih928n/any_love_for_amd/ | false | false | self | 3 | null |
Any love for AMD? | 4 | This is just my biannual check-in to see if I can do anything with my modest 6950 XT yet. Hoping to find someone who will take pity on me and answer this before I spend many hours doing research online just to find out the answer is "no".
Thanks. | 2025-02-04T04:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ih928p/any_love_for_amd/ | encom81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih928p | false | null | t3_1ih928p | /r/LocalLLaMA/comments/1ih928p/any_love_for_amd/ | false | false | self | 4 | null |
AI is Creating a Generation of Illiterate Programmers | 0 | 2025-02-04T04:18:33 | https://nmn.gl/blog/ai-illiterate-programmers | Tight-Requirement-15 | nmn.gl | 1970-01-01T00:00:00 | 0 | {} | 1ih9d0o | false | null | t3_1ih9d0o | /r/LocalLLaMA/comments/1ih9d0o/ai_is_creating_a_generation_of_illiterate/ | false | false | 0 | ⌀ |
Ok, you LLaMA-phobics, Claude does have a moat, an impressive one | 249 | If you know me, you might know I eat local LLMs for breakfast, ever since the first Llama with its "I have a borked tokenizer, but I love you" vibes came about. So this isn't some uneducated guess.
A few days ago, I was doing some C++ coding and tried Claude, which was working shockingly well, until it wanted MoooOOOoooney. So I gave in, mid-code, just to see how far this would go.
Darn. Triple darn. Quadruple darn.
Here’s the skinny: No other model understands code with the shocking capabilities of Sonnet 3.5. You can fight me on this, and I'll fight back.
This thing is insane. And I’m not just making some simple "snake game" stuff. I have 25 years of C++ under my belt, so when I need something, I need something I *actually* struggle with.
There were so many instances where I felt this was Coding AI (and I’m *very* cautious about calling token predictors AI), but it’s just *insane.* In three days, I made a couple of classes that would have taken me months, and this thing chews through 10K-line classes like bubble gum.
Of course, I made it cry a few times when things didn’t work… and didn’t work… and *didn’t work.* Then Claude wrote an entirely new set of code just to test the old code, and at the end we sorted it out.
A lot of my code was for visual components, so I’d describe what I saw on the screen. It was like programming over the phone, yet it still got things right!
Told it, "Add multithreading." Boom. Done. Unique mutexes. Clean as a whistle.
The code it writes is *incredibly* well-structured. I feel like a messy duck playing in the mud by comparison.
I realized a few things:
* It gives me the best solution when I *don’t* over-explain (codexplain) how I *think* the structure or flow should be. Instead, if I just let it do its thing and pretend I’m stupid, it works better.
* Many times, it automatically adds things I *didn’t* ask for, but would have ultimately needed, so it’s not just predicting tokens, it’s predicting my *next* request.
* More than once, it chose a future-proof, open-ended solution, *as if* it expected we’d be building on it further, and when I later wanted to add something, I was pretty surprised at how ready the code was.
* It comprehends alien code like nothing else I’ve seen. Just throw in my mess.
My previous best model for coding was Google Gemini 2, but in comparison, it feels confused.
I got my money’s worth in the first *ten minutes.* The next 30.98 days? Just a bonus.
I’m saying this because while I *love* Llama and I’m deep into the local LLM phase, this actually feels like magic. *So someone does things right, IMHO.* | 2025-02-04T04:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ih9h11/ok_you_llamafobics_claude_does_have_a_moat_and/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih9h11 | false | null | t3_1ih9h11 | /r/LocalLLaMA/comments/1ih9h11/ok_you_llamafobics_claude_does_have_a_moat_and/ | false | false | self | 249 | null |
Ai Agent To Control Computer | 1 | [removed] | 2025-02-04T04:33:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ih9mhj/ai_agent_to_control_computer/ | CodingWithSatyam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih9mhj | false | null | t3_1ih9mhj | /r/LocalLLaMA/comments/1ih9mhj/ai_agent_to_control_computer/ | false | false | self | 1 | null |
AllenAI Tulu 3 405b available for chat and download | 53 | Not sure if this has been shared already, but AllenAI / Ai2 is a US-based nonprofit who are trying to build AIs as open-source and transparently as possible.
Their OLMO models have fully transparent training data. Their Tulu ones are as transparent as you can be building on top of Llama.
For some positive news out of the US this week, they released their new 405B-parameter model for free online chat and download.
Chat: [https://playground.allenai.org/](https://playground.allenai.org/)
HuggingFace: [https://huggingface.co/allenai/Llama-3.1-Tulu-3-405B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-405B) | 2025-02-04T04:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ih9tb6/allenai_tulu_3_405b_available_for_chat_and/ | SuchSeries8760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih9tb6 | false | null | t3_1ih9tb6 | /r/LocalLLaMA/comments/1ih9tb6/allenai_tulu_3_405b_available_for_chat_and/ | false | false | self | 53 | null |
Local PPTs based on a Template | 0 | I have been looking to build a locally hosted app that can take a template PPT and generate PPT files based on a prompt, augmented via RAG with similar PPT files that were previously created manually. I am building a RAG pipeline to parse the historical files and feed them into a locally hosted Mistral model using Ollama through LangChain, finally using python-pptx to generate the PPT file. Can you please advise on alternate or easier ways to do this? | 2025-02-04T04:50:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ih9wny/local_ppts_based_on_a_template/ | s4sam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih9wny | false | null | t3_1ih9wny | /r/LocalLLaMA/comments/1ih9wny/local_ppts_based_on_a_template/ | false | false | self | 0 | null |
Model suggestion for an RTX 4090 that responds to Twitch chat based on a context of potentially 2-3k words? | 0 | Hey all, I'm looking for a model suggestion for a chatbot that performs well under this prompt (2-3k words):
> Use the following context to answer the viewer's question using 50 completion_tokens or less.
>
> You are "{streamer}AI," a helpful assistant in Twitch chat for streamer "{streamer}" Your twitch chat username is "{streamer}AI".
>
> Keep responses short, clear, direct, and friendly.
>
> Stick to the main point.
>
> Include the viewer's username in the response.
>
> Don’t spam or share all the links at once.
>
> If you don't know the answer, or if the message is not related to {game}, just say that you don't know, don't try to make up an answer.
>
> {game context}... (up to 2-3k words)
>
> ...
>
> ... {some youtube links}
>
> Viewer "${user}" asked: ${message}`;
The most important thing is that it follows instructions well and doesn't make up too much stuff. So far I've tried `deepseek-r1:32b` and `llama3.1`.
Also, would increasing the llama context size above the default `4096` improve performance?
What other settings in ollama.js can I tweak to fine-tune the bot to perform better at this specific task?
Appreciate any guidance, thanks | 2025-02-04T05:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/1iha8ru/model_suggestion_for_rtx_4090_responds_to_twitch/ | geminimini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iha8ru | false | null | t3_1iha8ru | /r/LocalLLaMA/comments/1iha8ru/model_suggestion_for_rtx_4090_responds_to_twitch/ | false | false | self | 0 | null |
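Not an authoritative answer, but here is a minimal sketch of the knobs involved, using the official `ollama` Python client (the JS client exposes the same option names); the model tag and numbers are illustrative assumptions, not tested settings:

```python
import ollama

system_prompt = "Use the following context to answer ..."  # the long prompt above
user, message = "viewer123", "How do I beat the first boss?"  # example inputs

response = ollama.chat(
    model="llama3.1",  # one of the models already tried above
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f'Viewer "{user}" asked: {message}'},
    ],
    options={
        "num_ctx": 8192,     # default window is small; 2-3k words of context need more
        "num_predict": 60,   # hard cap near the 50-token reply budget
        "temperature": 0.3,  # lower temperature tends to mean fewer invented answers
    },
)
print(response["message"]["content"])
```

Raising `num_ctx` matters most here: if the window is smaller than the prompt, the context silently gets truncated, which looks exactly like the model ignoring instructions.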
Deepseek is so bad that..... | 0 | Yeah.. So... Pretty sure they are not putting the model out there to ask about some Mass...
https://www.techradar.com/computing/software/deepseek-r1-is-now-available-on-nvidia-aws-and-github-as-available-models-on-hugging-face-shot-past-3-000
 | 2025-02-04T05:24:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ihaiac/deepseek_is_so_bad_that/ | Then_Knowledge_719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihaiac | false | null | t3_1ihaiac | /r/LocalLLaMA/comments/1ihaiac/deepseek_is_so_bad_that/ | false | false | self | 0 | ⌀ |
Looking for local RAG with API capability | 2 | I really like the combination of LM Studio and AnythingLLM, especially the RAG support using just drag and drop. I need to query the LM Studio server via API; however, I can't get at the data stored in the RAG because AnythingLLM currently doesn't have an API. I can only use its UI to get the RAG data.
I know I can probably roll my own using Python, but I would rather spend the time elsewhere ATM.
I am looking for a complete local solution that supports querying both LLM and RAG via API.
Let me know if you have suggestions.
| 2025-02-04T05:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ihappk/looking_for_local_rag_with_api_capability/ | brandtiv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihappk | false | null | t3_1ihappk | /r/LocalLLaMA/comments/1ihappk/looking_for_local_rag_with_api_capability/ | false | false | self | 2 | null |
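For the LLM half of this, LM Studio's local server speaks the OpenAI-compatible API, so the standard client already works against it; a minimal sketch (the base URL is LM Studio's default, the model name is a placeholder for whatever is loaded, and this does not solve the RAG half):

```python
from openai import OpenAI

# LM Studio's server defaults to http://localhost:1234/v1 and accepts any key.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio answers with the loaded model
    messages=[{"role": "user", "content": "Summarize the key points of my notes."}],
)
print(resp.choices[0].message.content)
```

The RAG data would still need to live somewhere queryable (e.g., a local vector store exposed over HTTP), which is exactly the gap described above.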
Is it possible to run anything on ollama installed without root? | 0 | I'm trying to run stuff through Ollama on an online Jupyter notebook. Even though I'm able to download models and run them, typing anything gives me an endless loading wheel. The models get downloaded without issue, and I can even check their parameters and other details with /show. | 2025-02-04T05:44:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ihau11/is_it_possible_to_run_anything_on_ollama/ | z0ers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihau11 | false | null | t3_1ihau11 | /r/LocalLLaMA/comments/1ihau11/is_it_possible_to_run_anything_on_ollama/ | false | false | self | 0 | null |
Why does llama.cpp default to interactive mode? | 1 | [removed] | 2025-02-04T05:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ihb07l/why_does_llamacpp_default_to_interactive_mode/ | Positive_Click_8963 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihb07l | false | null | t3_1ihb07l | /r/LocalLLaMA/comments/1ihb07l/why_does_llamacpp_default_to_interactive_mode/ | false | false | self | 1 | ⌀ |
Combining the power of a Mac Studio 128 GB and a Mac M4 128 GB to run DeepSeek R1 671B? | 1 | I have a 2022 Mac Studio that has sat unused ever since I got the M4; however, I wonder how I can run the 671B DeepSeek model. Currently it can't handle the model size, so I'm wondering if anything can be done about it?
Here are the detailed stats:
Mac Studio: Apple M1 Ultra with 20-core CPU, 64-core GPU, 32-core Neural Engine; 128GB unified memory; 1TB SSD storage
M4 MacBook Pro: 14-inch, Apple M4 Max chip with 16‑core CPU, 40‑core GPU, 16‑core Neural Engine; 128GB unified memory; 2TB SSD storage. | 2025-02-04T05:56:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ihb0t1/combining_power_of_mac_studio_128_gb_and_mac_m4/ | dadiamma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihb0t1 | false | null | t3_1ihb0t1 | /r/LocalLLaMA/comments/1ihb0t1/combining_power_of_mac_studio_128_gb_and_mac_m4/ | false | false | self | 1 | null |
Someone made a solar system animation with Mistral Small 24B, so I wanted to see what it would take for a smaller model to achieve the same or something similar. | 94 | I used the same original prompt as he did and needed two additional prompts until it worked.
Prompt 1:
Create an interactive web page that animates the Sun and the planets in our Solar System.
The animation should include the following features:
Sun: A central, bright yellow circle representing the Sun.
Planets: Eight planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune)
orbiting around the Sun with realistic relative sizes and distances.
Orbits: Visible elliptical orbits for each planet to show their paths around the Sun.
Animation: Smooth orbital motion for all planets, with varying speeds based on their actual orbital periods.
Labels : Clickable labels for each planet that display additional information when hovered over or clicked (e.g., name, distance from the Sun, orbital period).
Interactivity : Users should be able to pause and resume the animation using buttons.
Ensure the design is visually appealing with a dark background to enhance the visibility of the planets and their orbits. Use CSS for styling and JavaScript for the animation logic.
Prompt 2:
Double check your code for errors
Prompt 3:
Problems in Your Code
Planets are all stacked at (400px, 400px)
Every planet is positioned at the same place (left: 400px; top: 400px;), so they overlap on the Sun.
Use absolute positioning inside an orbit container and apply CSS animations for movement.
Only after I pointed out its error did it finally get it right, but for a 10B model I think it did quite well, even if it needed some poking in the right direction.
I used Falcon3 10B for this and will later try out what other small models make of this prompt, giving each one chance to correct itself after I point out its errors, to see if it fixes them.
As anything above 14B runs glacially slowly on my machine, what would you say are the best coding LLMs at 14B and under? | 2025-02-04T07:15:09 | https://v.redd.it/yrrxpppwo2he1 | Eden1506 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihc6oi | false | ⌀ | t3_1ihc6oi | /r/LocalLLaMA/comments/1ihc6oi/someone_made_a_solar_system_animation_with/ | false | false | 94 | ⌀ |
DeepWeep | 1 | 2025-02-04T07:40:46 | https://v.redd.it/qxu5pr3it2he1 | Bobby_Benevolent | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihcird | false | ⌀ | t3_1ihcird | /r/LocalLLaMA/comments/1ihcird/deepweep/ | false | false | 1 | ⌀ |
How is Qwen2.5 0.5B so good at math! (for it's size) | 1 | [removed] | 2025-02-04T07:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ihcmr6/how_is_qwen25_05b_so_good_at_math_for_its_size/ | amang0112358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihcmr6 | false | null | t3_1ihcmr6 | /r/LocalLLaMA/comments/1ihcmr6/how_is_qwen25_05b_so_good_at_math_for_its_size/ | false | false | self | 1 | null |
How is Qwen2.5 0.5B so good at math! | 1 | [removed] | 2025-02-04T07:52:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ihco4g/how_is_qwen25_05b_so_good_at_math/ | amang0112358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihco4g | false | null | t3_1ihco4g | /r/LocalLLaMA/comments/1ihco4g/how_is_qwen25_05b_so_good_at_math/ | false | false | self | 1 | null |
Crazy that Qwen2.5 0.5B is so good at math benchmarks | 6 | I was testing some small base models on GSM8K, and most models are quite terrible. These scores were generated by EleutherAI's LM evaluation harness (vLLM mode). I even double-checked all the results.
**HuggingFaceTB/SmolLM2-360M**
GSM8K, COT, 8-shot, strict-match: 0.0455
**meta-llama/Llama-3.2-1B**
GSM8K, COT, 8-shot, strict-match: 0.0569
**Qwen/Qwen2.5-0.5B**
GSM8K, COT, 8-shot, strict-match: **0.3692**
How is Qwen beating a model that is twice its size by 6x? The pretraining dataset size is of a similar order of magnitude for both of them. It is hard to believe that Qwen's pretraining dataset or training has so much special sauce. | 2025-02-04T07:57:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ihcqiq/crazy_that_qwen25_05b_is_so_good_at_math/ | amang0112358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihcqiq | false | null | t3_1ihcqiq | /r/LocalLLaMA/comments/1ihcqiq/crazy_that_qwen25_05b_is_so_good_at_math/ | false | false | self | 6 | null |
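For anyone who wants to reproduce numbers like these, here is a sketch of the harness invocation; the task and metric names are assumptions based on current lm-evaluation-harness versions, not the poster's exact command:

```python
from lm_eval import simple_evaluate

# Runs GSM8K with 8-shot chain-of-thought prompting on a vLLM backend.
results = simple_evaluate(
    model="vllm",
    model_args="pretrained=Qwen/Qwen2.5-0.5B,dtype=bfloat16",
    tasks=["gsm8k_cot"],
    num_fewshot=8,
)
print(results["results"]["gsm8k_cot"]["exact_match,strict-match"])
```

The strict-match metric only counts answers in the exact expected format, which is part of why base models without clean few-shot formatting habits can score near zero.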
Local LLM | 1 | [removed] | 2025-02-04T07:59:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ihcrma/local_llm/ | Elfelf_11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihcrma | false | null | t3_1ihcrma | /r/LocalLLaMA/comments/1ihcrma/local_llm/ | false | false | self | 1 | null |
Deepseek AI researcher says it only took 2-3 weeks to train R1&R1-Zero | 1 | [deleted] | 2025-02-04T08:12:21 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ihcxx4 | false | null | t3_1ihcxx4 | /r/LocalLLaMA/comments/1ihcxx4/deepseek_ai_researcher_says_it_only_took_23_weeks/ | false | false | default | 1 | null |
Building an HPC station with Nvidia H200 GPU | 1 | [removed] | 2025-02-04T08:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ihczo2/building_an_hpc_station_with_nvidia_h200_gpu/ | _Slim5God | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihczo2 | false | null | t3_1ihczo2 | /r/LocalLLaMA/comments/1ihczo2/building_an_hpc_station_with_nvidia_h200_gpu/ | false | false | self | 1 | ⌀ |
Deepseek researcher says it only took 2-3 weeks to train R1&R1-Zero | 880 | 2025-02-04T08:18:16 | https://www.reddit.com/gallery/1ihd0rr | nknnr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ihd0rr | false | null | t3_1ihd0rr | /r/LocalLLaMA/comments/1ihd0rr/deepseek_researcher_says_it_only_took_23_weeks_to/ | false | false | 880 | null |
Architecture differences of Llama 3 flavors | 2 | What exactly are the Llama 3 8B, 70B, and 405B layer configurations? How many transformer layers and attention heads? If they were all trained on roughly the same ~15T tokens and share a 128k vocabulary, how do the layers change as the parameter count grows?
Searching high and low and asking various LLMs (Sonnet and OpenAI), I haven't gotten concrete answers.
Lack of access to a 1TB-RAM machine has limited my exploration.
Can someone point me in the right direction? I'm looking for a torchsummary-style or MLIR-like description. | 2025-02-04T08:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ihd8pm/architecture_difference_of_llama3_flavors/ | inner2021planet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihd8pm | false | null | t3_1ihd8pm | /r/LocalLLaMA/comments/1ihd8pm/architecture_difference_of_llama3_flavors/ | false | false | self | 2 | null |
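One way to answer this without a 1TB machine: the layer counts live in each model's config.json, which `AutoConfig` fetches without downloading any weights. A minimal sketch (the repo IDs are my assumption of the current gated Hugging Face names; access must be granted first):

```python
from transformers import AutoConfig

# Only config.json is downloaded, so this runs on any machine.
for repo in [
    "meta-llama/Meta-Llama-3-8B",    # assumed repo IDs; all are gated
    "meta-llama/Meta-Llama-3-70B",
    "meta-llama/Llama-3.1-405B",
]:
    cfg = AutoConfig.from_pretrained(repo)
    print(
        repo,
        "layers:", cfg.num_hidden_layers,
        "heads:", cfg.num_attention_heads,
        "kv_heads:", cfg.num_key_value_heads,
        "hidden:", cfg.hidden_size,
    )
```

For reference, the Llama 3 paper reports 32 layers/32 heads for 8B, 80 layers/64 heads for 70B, and 126 layers/128 heads for 405B, all with 8 KV heads (grouped-query attention).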
Best way to store LLM memory of the user | 33 | Working at a small startup as an AI engineer and wanting to seek advice from friends here. If you want an LLM to remember all user conversations, this leads to excessive token consumption. I asked ChatGPT how to manage memory efficiently, and it suggested using structured notes—essentially summarizing key user details in natural language, like "User likes red" or "User is X years old."
I find this approach inconvenient. Ideally, I want the AI to recall specifics in conversations, like “John, how is your \[task\] progress that we talked about a few days ago?”. I’m hoping to design a system where the AI organizes each conversation into a graph—kind of like a knowledge graph but more complex, with relationship networks and verb associations that the AI understands but that might not be human-readable. This way, we can highly compress textual information while allowing the AI to retain user conversations efficiently. Has anyone tried this, seen something similar, or got better/alternative methods? Appreciate all the inputs :) THANK YOU!
| 2025-02-04T08:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ihdad2/best_way_to_store_llm_memory_of_the_user/ | jackiezhang95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihdad2 | false | null | t3_1ihdad2 | /r/LocalLLaMA/comments/1ihdad2/best_way_to_store_llm_memory_of_the_user/ | false | false | self | 33 | null |
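One way to make the graph idea concrete: a hypothetical, minimal triple store where every extracted fact is a (subject, relation, object, timestamp) tuple, so the assistant can later answer "what task did John mention recently?" without replaying whole transcripts. This is an illustration of the structure, not a production design:

```python
from dataclasses import dataclass, field

@dataclass
class GraphMemory:
    # Each entry: (subject, relation, object, timestamp)
    triples: list = field(default_factory=list)

    def add(self, subject: str, relation: str, obj: str, t: str) -> None:
        self.triples.append((subject, relation, obj, t))

    def query(self, subject: str | None = None, relation: str | None = None):
        return [tr for tr in self.triples
                if (subject is None or tr[0] == subject)
                and (relation is None or tr[1] == relation)]

mem = GraphMemory()
mem.add("John", "working_on", "quarterly report", "2025-02-01")
mem.add("John", "likes", "red", "2025-02-01")
print(mem.query(subject="John", relation="working_on"))
# [('John', 'working_on', 'quarterly report', '2025-02-01')]
```

In practice an LLM would do the extraction into triples after each turn, and only the triples matching the current topic would be re-injected into the prompt, which is where the token savings come from.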
What UI to use? | 2 | I want to switch my custom frontend to an open-source frontend. My backend is an API that handles thread creation, chat messages, etc.
For a custom backend, what GUI should I use? I'm looking at using the Open WebUI frontend and swapping its backend calls for calls to my backend, but I think it might be a bad idea. Do you have any advice?
Thank you
(sorry for my bad english) | 2025-02-04T10:00:49 | https://www.reddit.com/r/LocalLLaMA/comments/1iheclm/what_ui_to_use/ | Prakkmak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iheclm | false | null | t3_1iheclm | /r/LocalLLaMA/comments/1iheclm/what_ui_to_use/ | false | false | self | 2 | null |
Mining rig for running DeepSeek | 4 | Hi, I have access to some old mining rigs with P106-100 graphics cards, usually with 10 or more of them running from the same board. The cards are 6GB each, and I was wondering: would it even be possible to run something on these?
Or is it a better option to buy something newer, but with less combined VRAM? | 2025-02-04T10:13:25 | https://www.reddit.com/r/LocalLLaMA/comments/1iheipb/mining_rig_for_running_deepseek/ | Enough-Grapefruit630 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iheipb | false | null | t3_1iheipb | /r/LocalLLaMA/comments/1iheipb/mining_rig_for_running_deepseek/ | false | false | self | 4 | null |
Help in playing after the kid ran away | 0 | So I think I am near the end of the game.
Emma told me that the kid ran away through some gate in the reservoir and gave me a key. I am trying to find the kid, but I can't even find the reservoir.
Instead I found some "demon of hatred" which has 3 deathblows and fire attacks. I have some fire-dousing powders, but they are limited, so I am not sure if I am even going in the right direction anymore.
Did I miss the reservoir? | 2025-02-04T10:36:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ihetu8/help_in_playing_after_the_kid_ran_away/ | bankinu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihetu8 | false | null | t3_1ihetu8 | /r/LocalLLaMA/comments/1ihetu8/help_in_playing_after_the_kid_ran_away/ | true | false | spoiler | 0 | null |
Not all LLMs can solve this | 10 | Only an LLM with RL and CoT can solve this. Can someone try this with their local distilled R1 model?
37#21=928 77#44=3993 123#17=14840 71#6=? | 2025-02-04T10:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ihf02o/not_all_llm_can_solve_this/ | Reasonable-Climate66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihf02o | false | null | t3_1ihf02o | /r/LocalLLaMA/comments/1ihf02o/not_all_llm_can_solve_this/ | false | false | self | 10 | null |
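For reference, one interpretation consistent with all three examples (worked out here, not stated in the post) is a # b = a² − b², i.e. (a − b)(a + b); a quick check:

```python
# Hypothesis derived from the examples: a # b = a*a - b*b.
def op(a: int, b: int) -> int:
    return a * a - b * b

assert op(37, 21) == 928
assert op(77, 44) == 3993
assert op(123, 17) == 14840
print(op(71, 6))  # 5005
```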
DeepSeek-R1's correct answers are generally shorter | 336 | 2025-02-04T10:49:48 | omnisvosscio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihf0gb | false | null | t3_1ihf0gb | /r/LocalLLaMA/comments/1ihf0gb/deepseekr1s_correct_answers_are_generally_shorter/ | false | false | 336 | ⌀ |
Close llama model soon? | 0 | I'm glad Mark took my advice to slow down the Llama open model. "People" are using it without contributing back.
https://techcrunch.com/2025/02/03/meta-says-it-may-stop-development-of-ai-systems-it-deems-too-risky?guccounter=1 | 2025-02-04T11:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ihf6yt/close_llama_model_soon/ | Reasonable-Climate66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihf6yt | false | null | t3_1ihf6yt | /r/LocalLLaMA/comments/1ihf6yt/close_llama_model_soon/ | false | false | self | 0 | ⌀ |
LM Studio, DeepSeek mode | 3 |
Hello, something happened to me; maybe it's my install that is broken or something,
but I have used EVA Llama 70B in LM Studio and a few others, and they all now have the DeepSeek-style inner monologue before giving answers.
How is that possible? I thought training and fine-tuning were required? | 2025-02-04T11:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ihf73y/lmstudio_deepseek_mode/ | sigiel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihf73y | false | null | t3_1ihf73y | /r/LocalLLaMA/comments/1ihf73y/lmstudio_deepseek_mode/ | false | false | self | 3 | null |
How do you currently access DeepSeek? | 15 | It just seems like the API and website are down all the time. | 2025-02-04T11:19:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ihffra/how_do_you_currently_access_deepseek/ | United-Rush4073 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihffra | false | null | t3_1ihffra | /r/LocalLLaMA/comments/1ihffra/how_do_you_currently_access_deepseek/ | false | false | self | 15 | null |
I have a Mac Mini M2 with 8 GB RAM. So obviously, I don't have much options. Based on the limited options I have, which is the best LLM model I can run. (all rounder- efficient and speed) | 1 | [removed] | 2025-02-04T11:37:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ihfp78/i_have_a_mac_mini_m2_with_8_gb_ram_so_obviously_i/ | ShreyashStonieCrusts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihfp78 | false | null | t3_1ihfp78 | /r/LocalLLaMA/comments/1ihfp78/i_have_a_mac_mini_m2_with_8_gb_ram_so_obviously_i/ | false | false | self | 1 | null |
Which version should I use (DeepSeek R1) | 1 | [removed] | 2025-02-04T11:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ihftly/which_version_should_i_use_deepseek_r1/ | Odd-Currency-1909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihftly | false | null | t3_1ihftly | /r/LocalLLaMA/comments/1ihftly/which_version_should_i_use_deepseek_r1/ | false | false | self | 1 | null |
Ollama VPN issue | 0 | University machines sit on a VPN (Palo Alto), and since Ollama was released there was no issue. Suddenly, I can't pull any models while on that VPN, regardless of machine (Windows, Linux, and Mac are all affected) or access method, i.e. GlobalProtect onto the VPN or being on the physical university network. Is this a known issue, and what are the workarounds? I have searched online and can't find any info. (A proxy sketch follows below.) | 2025-02-04T11:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ihfzz8/ollama_vpn_issue/ | caizoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihfzz8 | false | null | t3_1ihfzz8 | /r/LocalLLaMA/comments/1ihfzz8/ollama_vpn_issue/ | false | false | self | 0 | null
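If the VPN appliance is intercepting TLS, the usual fix documented for Ollama is to route registry traffic through the proxy with the HTTPS_PROXY environment variable (and to trust the proxy's CA certificate). A minimal sketch; the proxy URL is hypothetical, and the variable must be visible to the Ollama server process, since the server performs the download:

```python
# Sketch: making the Ollama server proxy-aware before pulling a model.
# Pulls are performed by the server daemon, so HTTPS_PROXY must be set in the
# server's environment (on systemd installs, set it in the unit file instead).
# The proxy URL is hypothetical; a TLS-intercepting proxy also needs its CA trusted.
import os, subprocess, time

env = dict(os.environ, HTTPS_PROXY="http://proxy.university.edu:8080")
server = subprocess.Popen(["ollama", "serve"], env=env)  # assumes no server is already running
time.sleep(2)  # give the server a moment to start
subprocess.run(["ollama", "pull", "llama3.1:8b"], check=True)
server.terminate()
```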
Performance of 32/70B models for text generation | 1 | [removed] | 2025-02-04T12:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ihgfaf/performance_of_3270b_models_for_text_generation/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihgfaf | false | null | t3_1ihgfaf | /r/LocalLLaMA/comments/1ihgfaf/performance_of_3270b_models_for_text_generation/ | false | false | self | 1 | null |
How to speed up Qwen 2.5 14B? | 1 | Hey guys, recently I have gotten into running LLMs locally.
I've tried DeepSeek Coder 6.7B and Qwen 2.5 Coder 7B and 14B.
I have found Qwen to be the best so far; however, it's really, really slow.
I am on a 3060 laptop with 6GB VRAM.
I know this is quite limited, but running it through Ollama is good enough, so I'm sure there is some way to make it faster.
I have plenty of SSD storage and 16GB RAM,
so I can use that too if needed.
Is there a good way to speed it up?
I have heard of quantising and offloading to CPU, and will probably try it soon (an offload sketch follows below).
I am basically looking to have a coding LLM, something that supports docs and images,
and also a general-purpose LLM.
What are your suggestions on speeding it up, and for UIs or interfaces? | 2025-02-04T12:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ihgfri/how_to_speed_up_qwen_25_14b/ | Inferno2211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihgfri | false | null | t3_1ihgfri | /r/LocalLLaMA/comments/1ihgfri/how_to_speed_up_qwen_25_14b/ | false | false | self | 1 | null
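The offload sketch referenced above, using llama-cpp-python as a stand-in for what Ollama does internally (Ollama wraps the same llama.cpp engine). The model path and layer split are hypothetical; with 6 GB of VRAM, a Q4 quant of a 14B model only partially fits, and `n_gpu_layers` controls how many layers live on the GPU while the rest run from system RAM:

```python
# Sketch: partial GPU offload with llama-cpp-python (pip install llama-cpp-python).
# File name and layer count are placeholders; lower n_gpu_layers if you hit OOM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-14b-instruct-q4_k_m.gguf",  # hypothetical local GGUF
    n_gpu_layers=20,  # layers kept on the 6 GB GPU; the rest stay in system RAM
    n_ctx=4096,       # context length; larger contexts cost more VRAM
)

out = llm("Write a Python function that reverses a linked list.", max_tokens=256)
print(out["choices"][0]["text"])
```

If even a heavy offload is too slow, a Q4 quant of the 7B coder usually fits entirely in 6 GB and is dramatically faster.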
Performance of 32/70B models for text generation | 1 | [removed] | 2025-02-04T12:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ihggml/performance_of_3270b_models_for_text_generation/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihggml | false | null | t3_1ihggml | /r/LocalLLaMA/comments/1ihggml/performance_of_3270b_models_for_text_generation/ | false | false | self | 1 | null |
How to find the model max context window | 1 | [removed] | 2025-02-04T12:37:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ihgojm/how_to_find_the_model_max_context_window/ | Specialist_Bee_9726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihgojm | false | null | t3_1ihgojm | /r/LocalLLaMA/comments/1ihgojm/how_to_find_the_model_max_context_window/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oe6e0Y6j3ZF7Sn0TC7ydmXUNVnuhUTHksPQ-8aNS0hQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rl1wgeoF65wWG-vq4nHCN6x13dxQuLw2z2EHXfmFt54.jpg?width=108&crop=smart&auto=webp&s=4cda9b7784df48f52c827dcfc6cd1cdbb00168a6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rl1wgeoF65wWG-vq4nHCN6x13dxQuLw2z2EHXfmFt54.jpg?width=216&crop=smart&auto=webp&s=4329b767b6f874fda818c6a26948da251fd1f30b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rl1wgeoF65wWG-vq4nHCN6x13dxQuLw2z2EHXfmFt54.jpg?width=320&crop=smart&auto=webp&s=5674360dbba5350b9520a7695a83944974db7efb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rl1wgeoF65wWG-vq4nHCN6x13dxQuLw2z2EHXfmFt54.jpg?width=640&crop=smart&auto=webp&s=ff88852008e12ca48d84ae75bf6ffb061c882249', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rl1wgeoF65wWG-vq4nHCN6x13dxQuLw2z2EHXfmFt54.jpg?width=960&crop=smart&auto=webp&s=e3aafe5b9e861f7d5b8250484531faba8e1f5ad7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rl1wgeoF65wWG-vq4nHCN6x13dxQuLw2z2EHXfmFt54.jpg?width=1080&crop=smart&auto=webp&s=d27f01b1a58275b5b45cf19e8c94a4f9f69ecd0d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rl1wgeoF65wWG-vq4nHCN6x13dxQuLw2z2EHXfmFt54.jpg?auto=webp&s=aaf5e756c8f86d033c2b2699a8dbb37a1cb21f92', 'width': 1200}, 'variants': {}}]} |
How will computer hardware change to cater to local LLMs? | 35 | I think that in the next 5 years, the demand for computer hardware is going to skyrocket, specifically for hardware that's efficient enough to run something like DeepSeek's 671B-parameter model at reasonable speed, locally and offline. (Or at least that's the goal of everyone here) | 2025-02-04T12:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ihgv43/how_will_computer_hardware_change_to_cater_to/ | UnhingedSupernova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihgv43 | false | null | t3_1ihgv43 | /r/LocalLLaMA/comments/1ihgv43/how_will_computer_hardware_change_to_cater_to/ | false | false | self | 35 | null
Review: Is this PC build good for Stable Diffusion or image generation? And for local AI models, how much can it handle, e.g. 34B? | 0 |
**CPU**: AMD RYZEN 5 7600 (AM5) WITH WRAITH STEALTH COOLER (BOXED)
**Mobo**: MSI B650 GAMING PLUS WIFI DDR5 ATX AM5
**RAM**: KINGSTON FURY BEAST WHITE (KF560C36BWEK2-64) 32GBX2=64GB 6000MHZ DDR5 EXPO
**GPU**: GEFORCE RTX 4060 TI ASUS DUAL EVO OC BLACK 16GB GDDR6 DUAL FAN
**SSD1**: SAMSUNG 990 PRO NVME 1TB GEN 4
**SSD2**: LEXAR NQ710 NVME 2TB GEN 4
**PSU**: ASUS PRIME AP-850G 850W WHITE-BLACK ATX GOLD WITH PCIE 5.0 FULLY MODULAR
**Casing**: MONTECH AIR X ARGB BLACK 3ARGB FANS
Need advice on whether this is OK as a starter build | 2025-02-04T12:52:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ihgxwn/review_is_this_pc_build_good_for_stable_diffusion/ | The_R-Factor_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihgxwn | false | null | t3_1ihgxwn | /r/LocalLLaMA/comments/1ihgxwn/review_is_this_pc_build_good_for_stable_diffusion/ | false | false | self | 0 | null
Mistral boss says tech CEOs’ obsession with AI outsmarting humans is a ‘very religious’ fascination | 818 | 2025-02-04T12:57:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ihh15n/mistral_boss_says_tech_ceos_obsession_with_ai/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihh15n | false | null | t3_1ihh15n | /r/LocalLLaMA/comments/1ihh15n/mistral_boss_says_tech_ceos_obsession_with_ai/ | false | false | 818 | null |
Handling contradictions in documents in RAG pipelines | 1 | For example, a book where a character changes clothes in the middle of it. If I ask "what is the character wearing?", the retriever will pick up relevant documents from both before and after the character changes clothes.
What are your usual strategies here? | 2025-02-04T13:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ihh5up/handling_contradictions_in_documents_on_rag/ | ParaplegicGuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihh5up | false | null | t3_1ihh5up | /r/LocalLLaMA/comments/1ihh5up/handling_contradictions_in_documents_on_rag/ | false | false | self | 1 | null |
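One common strategy is to attach narrative position as metadata on each chunk at indexing time and, at query time, break ties in favor of the chunk that occurs latest in the story. A minimal sketch; the data structures and the blending rule are illustrative assumptions, not a standard library API:

```python
# Sketch: prefer the chunk that appears latest in the source document
# when retrieved chunks disagree. Chunk ordering metadata is assumed
# to be attached at indexing time.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    position: int  # running index of the chunk within the book
    score: float   # retriever similarity score

def rerank_latest(chunks: list[Chunk], recency_weight: float = 0.3) -> list[Chunk]:
    """Blend similarity with narrative position so later chunks win ties."""
    last = max(c.position for c in chunks)
    return sorted(
        chunks,
        key=lambda c: (1 - recency_weight) * c.score
        + recency_weight * (c.position / last),
        reverse=True,
    )

hits = [
    Chunk("She wore a red coat.", position=12, score=0.82),
    Chunk("She changed into a blue dress.", position=310, score=0.80),
]
print(rerank_latest(hits)[0].text)  # -> the blue dress, despite a slightly lower score
```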
What would be some good local Ollama models to run on my server? | 1 | I have the [following server setup](https://www.intel.com/content/www/us/en/products/sku/212456/intel-xeon-gold-6348-processor-42m-cache-2-60-ghz/specifications.html) with 256GB of RAM at the office. What models can I run on it without serious bottlenecks? I asked o3-mini earlier today and this was the response:
• **LLaMA 2 7B (4‑bit or 8‑bit version):**
– This is one of the most popular choices for CPU inference, balancing capability with speed.
• **GPT-J 6B or similar models:**
– Well within the comfort zone for a 28‑core, 56‑thread Xeon, especially when quantized.
• **Other “lighter” models like DistilGPT or smaller OPT models:**
– If you need even faster responses at the cost of some language understanding quality.
While your 256 GB RAM means you won’t run out of memory even if you load larger models, the primary bottleneck on a CPU‐only system is the raw compute throughput. In practice, CPU inference with these models might yield response times that are a few times slower than what you’d expect with a high‐performance GPU—but for many non–real-time applications or batch processing, that trade-off is acceptable.
Curious to what that translates to in today's context as it seems o3-mini gave some outdated recommendations. | 2025-02-04T13:05:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ihh6t5/what_would_be_some_good_local_ollama_models_to/ | balmofgilead | self.LocalLLaMA | 2025-02-04T13:11:09 | 0 | {} | 1ihh6t5 | false | null | t3_1ihh6t5 | /r/LocalLLaMA/comments/1ihh6t5/what_would_be_some_good_local_ollama_models_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'V2BbFPb_k_l2AKs4YzVE0z6s1Zb4R02q9-fLQNxXM4Q', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JPIgu1b4sKRs3N1Dg4EDKrpB_7THbcstSz8hb9kRs6I.jpg?width=108&crop=smart&auto=webp&s=2b92d4f0603e23f3bd0ffb8e965daa3333a864b7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JPIgu1b4sKRs3N1Dg4EDKrpB_7THbcstSz8hb9kRs6I.jpg?width=216&crop=smart&auto=webp&s=700d222d26e76bf9f12c2b59e31a9bdb995d6400', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/JPIgu1b4sKRs3N1Dg4EDKrpB_7THbcstSz8hb9kRs6I.jpg?width=320&crop=smart&auto=webp&s=e3f2f39b8c19f7f79d86d4a5b5bc7f34f67b5dae', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/JPIgu1b4sKRs3N1Dg4EDKrpB_7THbcstSz8hb9kRs6I.jpg?width=640&crop=smart&auto=webp&s=3aa66c0e50d13c8dcb4ce5af1b9c9cb912910f45', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/JPIgu1b4sKRs3N1Dg4EDKrpB_7THbcstSz8hb9kRs6I.jpg?width=960&crop=smart&auto=webp&s=13c7a82998c6a296edc15c9d051bcb99a4bc9980', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/JPIgu1b4sKRs3N1Dg4EDKrpB_7THbcstSz8hb9kRs6I.jpg?width=1080&crop=smart&auto=webp&s=ec89568c8ae3deaf81c1b723ac38834f94c44a56', 'width': 1080}], 'source': {'height': 3000, 'url': 'https://external-preview.redd.it/JPIgu1b4sKRs3N1Dg4EDKrpB_7THbcstSz8hb9kRs6I.jpg?auto=webp&s=cd5e844654eba73b1016067bc525be9f2d9f8a78', 'width': 5334}, 'variants': {}}]} |
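On a CPU-only box like this, the honest answer comes from measuring throughput per model rather than asking a chatbot. A small sketch that times generation through Ollama's local REST API (the model tag is just an example; `eval_count` and `eval_duration` are fields Ollama returns):

```python
# Sketch: rough tokens/sec measurement against a local Ollama server.
# Requires `ollama serve` running and the model already pulled; the tag is an example.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1:8b", "prompt": "Explain RAID levels briefly.", "stream": False},
    timeout=600,
)
data = resp.json()
tokens = data["eval_count"]            # generated tokens, reported by Ollama
seconds = data["eval_duration"] / 1e9  # eval_duration is in nanoseconds
print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.1f} tok/s")
```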
Best current LLM to run locally on an RTX 3090 | 0 | I have an RTX 3090 (24GB VRAM) and 32GB DDR4, plus a Ryzen 5700X3D.
What is the best model I can run locally for python coding?
One of the DeepSeek R1 distills, or is DeepSeek Coder better at just coding? Or something else entirely? | 2025-02-04T13:19:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ihhgfc/best_current_llm_to_run_locally_on_an_rtx_3090/ | PinkyPonk10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihhgfc | false | null | t3_1ihhgfc | /r/LocalLLaMA/comments/1ihhgfc/best_current_llm_to_run_locally_on_an_rtx_3090/ | false | false | self | 0 | null
O3-mini-high LiveBench coding score seems fishy | 44 | [Screenshot: LiveBench scores for O3-mini at low/medium/high reasoning effort](https://preview.redd.it/uagl0fz0f4he1.png?width=1339&format=png&auto=webp&s=24c8e255a96a4e140feb0476a51ffd4f21608f7f)
We observe diminishing returns across the board going from "O3-mini-Medium" to "O3-mini-High" compared to the gains from "Low" to "Medium".
EXCEPT for the coding category, where the trend is completely opposite.
Even LiveCodeBench and Aider, which are purely coding benchmarks, show the same diminishing returns pattern.
So, is it possible that LiveBench made a mistake?
How do we explain this exceptional jump that goes against every other benchmark? | 2025-02-04T13:26:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ihhlsl/o3minihigh_livebench_coding_score_seems_fishy/ | Mother_Soraka | self.LocalLLaMA | 2025-02-04T13:40:50 | 0 | {} | 1ihhlsl | false | null | t3_1ihhlsl | /r/LocalLLaMA/comments/1ihhlsl/o3minihigh_livebench_coding_score_seems_fishy/ | false | false | 44 | null |
is it time to drop the Large in LLM? | 1 | [removed] | 2025-02-04T13:34:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ihhrby/is_it_time_to_drop_the_large_in_llm/ | Either-Researcher681 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihhrby | false | null | t3_1ihhrby | /r/LocalLLaMA/comments/1ihhrby/is_it_time_to_drop_the_large_in_llm/ | false | false | self | 1 | null |
A few words on certain LLM's censorship | 1 | [removed] | 2025-02-04T13:36:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ihhsln | false | null | t3_1ihhsln | /r/LocalLLaMA/comments/1ihhsln/a_few_words_on_certain_llms_censorship/ | false | false | default | 1 | null |
Will AI cause massive job losses? | 0 | I believe this. But am I stuck in a Twitter echo chamber, or will it actually happen?
Won't the government forbid the use of AI to fire people? But then how can they ban open-source models? | 2025-02-04T13:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ihi1io/will_ai_cause_massive_job_losses/ | Admirable_Stock3603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihi1io | false | null | t3_1ihi1io | /r/LocalLLaMA/comments/1ihi1io/will_ai_cause_massive_job_losses/ | false | false | self | 0 | null
This is why you're never getting a 5090 | 0 | https://www.tomshardware.com/pc-components/gpus/chinese-algorithm-claimed-to-boost-nvidia-gpu-performance-by-up-to-800x-for-advanced-science-applications
"The enhanced computational efficiency means researchers can now conduct simulations on consumer-grade GPUs instead of relying on costly, high-performance computing clusters." | 2025-02-04T14:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ihid2y/this_why_youre_never_getting_a_5090/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihid2y | false | null | t3_1ihid2y | /r/LocalLLaMA/comments/1ihid2y/this_why_youre_never_getting_a_5090/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'bCafyTMBimKi2rkNJw7N7IZo0QfjXQCRB2-ZNcdx2Q4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/PWBO8vyGgh_1U1I8GpQQ-tG0ZYVYXeVrDjuUK4l6wZc.jpg?width=108&crop=smart&auto=webp&s=de4d85ba9578d4645a9e095785f648252b39cb8b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/PWBO8vyGgh_1U1I8GpQQ-tG0ZYVYXeVrDjuUK4l6wZc.jpg?width=216&crop=smart&auto=webp&s=a1329c4e6dec2d9a28cd2973a35494676caf6953', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/PWBO8vyGgh_1U1I8GpQQ-tG0ZYVYXeVrDjuUK4l6wZc.jpg?width=320&crop=smart&auto=webp&s=fd8417b7aa204b7a34d346ffb94c6dab3410cc53', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/PWBO8vyGgh_1U1I8GpQQ-tG0ZYVYXeVrDjuUK4l6wZc.jpg?width=640&crop=smart&auto=webp&s=6ecd4c4e9d1c28a4017709599faaebdde94d380a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/PWBO8vyGgh_1U1I8GpQQ-tG0ZYVYXeVrDjuUK4l6wZc.jpg?width=960&crop=smart&auto=webp&s=74454f108294ac8658b5026c19f37652bc9071b4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/PWBO8vyGgh_1U1I8GpQQ-tG0ZYVYXeVrDjuUK4l6wZc.jpg?width=1080&crop=smart&auto=webp&s=03fc13a7d40bfaa9cd328cc2c0eedf25e28d2f94', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/PWBO8vyGgh_1U1I8GpQQ-tG0ZYVYXeVrDjuUK4l6wZc.jpg?auto=webp&s=403bce9d0903671e3f3fd7ba012331af1adaedd2', 'width': 1200}, 'variants': {}}]} |
Any good recommendations for an image model that isn't shite on 8GB VRAM? | 11 | I'm getting issues with very plastic faces and terrible resolutions, but I'm wondering if the SOTA for non-millionaire PC owners has improved... (a memory-saving sketch follows below) | 2025-02-04T14:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ihih1y/any_good_recommendation_for_an_image_model_that/ | blueredscreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihih1y | false | null | t3_1ihih1y | /r/LocalLLaMA/comments/1ihih1y/any_good_recommendation_for_an_image_model_that/ | false | false | self | 11 | null
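Whatever checkpoint ends up winning, diffusers' built-in memory savers are what make 8 GB workable, trading some speed for VRAM. A sketch assuming SDXL as the example model:

```python
# Sketch: squeezing an SDXL-class model into ~8 GB VRAM with diffusers.
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # halves weight memory vs fp32
)
pipe.enable_model_cpu_offload()  # streams submodules to the GPU only when needed
pipe.enable_vae_slicing()        # decodes the image in slices to cap VRAM spikes

image = pipe("portrait photo, natural skin texture", num_inference_steps=30).images[0]
image.save("out.png")
```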
Has anyone tried putting card information in browser agents or operators? | 1 | [removed] | 2025-02-04T14:15:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ihil6f/has_anyone_tried_putting_card_information_in/ | Dry_Steak30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihil6f | false | null | t3_1ihil6f | /r/LocalLLaMA/comments/1ihil6f/has_anyone_tried_putting_card_information_in/ | false | false | self | 1 | null |
AI to become local writing assistant? | 1 | [removed] | 2025-02-04T14:16:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ihilyy | false | null | t3_1ihilyy | /r/LocalLLaMA/comments/1ihilyy/ai_to_become_local_writing_assistant/ | false | false | default | 1 | null |
Can I run llama with my specs ? | 1 | [removed] | 2025-02-04T14:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ihimkc/can_i_run_llama_with_my_specs/ | AngelicStrength | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihimkc | false | null | t3_1ihimkc | /r/LocalLLaMA/comments/1ihimkc/can_i_run_llama_with_my_specs/ | false | false | self | 1 | null |
Why no one is able to beat Anthropic claude sonet 3.5, not even o3 mini and deepseek? | 1 | [removed] | 2025-02-04T14:25:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ihistt/why_no_one_is_able_to_beat_anthropic_claude_sonet/ | Objective_Coat_999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihistt | false | null | t3_1ihistt | /r/LocalLLaMA/comments/1ihistt/why_no_one_is_able_to_beat_anthropic_claude_sonet/ | false | false | self | 1 | null |
Why no one is able to beat Anthropic claude sonet 3.5, not even o3 mini and deepseek? | 1 | [removed] | 2025-02-04T14:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ihitmt/why_no_one_is_able_to_beat_anthropic_claude_sonet/ | Objective_Coat_999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihitmt | false | null | t3_1ihitmt | /r/LocalLLaMA/comments/1ihitmt/why_no_one_is_able_to_beat_anthropic_claude_sonet/ | false | false | self | 1 | null |
Mistral NeMo locally on an Apple Silicon Mac | 1 | [removed] | 2025-02-04T14:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ihiusf/mistral_nemo_locally_on_an_apple_silicon_mac/ | FadiTheChadi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihiusf | false | null | t3_1ihiusf | /r/LocalLLaMA/comments/1ihiusf/mistral_nemo_locally_on_an_apple_silicon_mac/ | false | false | self | 1 | null |
China's OmniHuman-1 🌋🔆 | 674 | 2025-02-04T14:51:12 | https://v.redd.it/44wrxa2vx4he1 | BidHot8598 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihjdh2 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/44wrxa2vx4he1/DASHPlaylist.mpd?a=1741275909%2CMWM4OGQ2ZDM3MWZlZjM0OWE4MWI3YzU5MmQzOGQ1YTM5MDk0MTIxOTA4YWQzYWZmZTBiZDgyNTdmNzgxOTQ3NA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/44wrxa2vx4he1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/44wrxa2vx4he1/HLSPlaylist.m3u8?a=1741275909%2CZDk2ZmZjN2VlNGNlYWFkNGRmYjM0ZGU5MTM0Yjc4ZDNiMWEzYjllMGViNDE0YmY4MTViZmUwODlhOTU0YzQ2Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/44wrxa2vx4he1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 840}} | t3_1ihjdh2 | /r/LocalLLaMA/comments/1ihjdh2/chinas_omnihuman1/ | false | false | 674 | {'enabled': False, 'images': [{'id': 'Z3E2OWViMnZ4NGhlMTrmrmOiOifzLgIvJmxpKkr6cU0COigmuxJkoC_oGSXh', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/Z3E2OWViMnZ4NGhlMTrmrmOiOifzLgIvJmxpKkr6cU0COigmuxJkoC_oGSXh.png?width=108&crop=smart&format=pjpg&auto=webp&s=0948cc1fe57f342c34b0495349d49f2aed90dc55', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/Z3E2OWViMnZ4NGhlMTrmrmOiOifzLgIvJmxpKkr6cU0COigmuxJkoC_oGSXh.png?width=216&crop=smart&format=pjpg&auto=webp&s=1efd01e6b2303ac08dddf609e833adb6890f5c25', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/Z3E2OWViMnZ4NGhlMTrmrmOiOifzLgIvJmxpKkr6cU0COigmuxJkoC_oGSXh.png?width=320&crop=smart&format=pjpg&auto=webp&s=36be1c31cc0f2a864197074c48b092af4a655074', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/Z3E2OWViMnZ4NGhlMTrmrmOiOifzLgIvJmxpKkr6cU0COigmuxJkoC_oGSXh.png?width=640&crop=smart&format=pjpg&auto=webp&s=c673949724657dbd818171f356b44e700beee822', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/Z3E2OWViMnZ4NGhlMTrmrmOiOifzLgIvJmxpKkr6cU0COigmuxJkoC_oGSXh.png?format=pjpg&auto=webp&s=b1565e8950070b1d0222dda1d819093b0953e59f', 'width': 843}, 'variants': {}}]} |
Why no compiled LLMs? | 0 | What often happened in software was to release the binaries, but not the source. This would mean you could run software locally, but couldn't edit it.
I asked GPT-4o and it said it was a possible strategy, but couldn't give an example.
My feeling is that it should be impossible to do so, as you need the parameters explicitly in order to feed them to the GPU.
If you are an AI lab whose only worry is that someone might take your open-weight model and fine-tune it to remove the safeguards and/or make it evil, this would be a way to force the model to only work as-is. | 2025-02-04T14:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ihjhl6/why_no_compiled_llms/ | AstridPeth_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihjhl6 | false | null | t3_1ihjhl6 | /r/LocalLLaMA/comments/1ihjhl6/why_no_compiled_llms/ | false | false | self | 0 | null
Is buying a 5070 ti for AI a good idea? | 0 | I'm planning on buying a 5070 ti for running models locally. I plan on mostly running llms and image generation models.
How will it perform with models like deepseek R1? Will I be able to use the 14b model or will I have to stick with the 8b model?
I am also planning on using image generation models like the bigger sd models and flux dev. Will they run on it.
Anyone who has a 4070 ti super or 4080 please share your experience as the performance of the 5070 ti will be somewhere between these 2 GPUs.
Also are there any other cheaper GPUs that I can buy that would fit my use case? | 2025-02-04T15:04:49 | AC2302 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihjoc2 | false | null | t3_1ihjoc2 | /r/LocalLLaMA/comments/1ihjoc2/is_buying_a_5070_ti_for_ai_a_good_idea/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'iSlkMmo8ZzTxzWShr09FOi18v_BKe0i27dorttj_ZbY', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/nlbgahhq05he1.jpeg?width=108&crop=smart&auto=webp&s=0dd021681ac4e67ea498f250184ee0ee46ecffd6', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/nlbgahhq05he1.jpeg?width=216&crop=smart&auto=webp&s=467eb5a53183c6b950f5842e0109faeaa1ca4f78', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/nlbgahhq05he1.jpeg?width=320&crop=smart&auto=webp&s=ee9c84802aef51faae3c08e7d7efa792188c14da', 'width': 320}, {'height': 375, 'url': 'https://preview.redd.it/nlbgahhq05he1.jpeg?width=640&crop=smart&auto=webp&s=615aa5689bf9f0c53ffa226daf67872dc56198a6', 'width': 640}], 'source': {'height': 424, 'url': 'https://preview.redd.it/nlbgahhq05he1.jpeg?auto=webp&s=54bb1079f68c064833821053619a28f6e327e604', 'width': 723}, 'variants': {}}]} |
DeepSeek-R1 Release | Everything About DeepSeek | 1 | 2025-02-04T15:35:43 | https://www.dotnetoffice.com/2025/02/deepseek-r1-release-everything-about.html | Big-Farm-4236 | dotnetoffice.com | 1970-01-01T00:00:00 | 0 | {} | 1ihkdz6 | false | null | t3_1ihkdz6 | /r/LocalLLaMA/comments/1ihkdz6/deepseekr1_release_everything_about_deepseek/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'IXVHglmDKVcFUkp6TWcbmYtyd3GCl4SVitvFISN0ZP4', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/M5GeiJvkWVTne_MNeojGOYlfm68VWwQk-K3VioBl11I.jpg?width=108&crop=smart&auto=webp&s=ddad98b4d2d36d4bb65cd3c7f57c30617c997301', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/M5GeiJvkWVTne_MNeojGOYlfm68VWwQk-K3VioBl11I.jpg?width=216&crop=smart&auto=webp&s=9b08831279dbc5e35282424a99e6e4456150a17f', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/M5GeiJvkWVTne_MNeojGOYlfm68VWwQk-K3VioBl11I.jpg?width=320&crop=smart&auto=webp&s=2bc358c746bc7bb277df4bcc6625ed62304ca5bb', 'width': 320}, {'height': 338, 'url': 'https://external-preview.redd.it/M5GeiJvkWVTne_MNeojGOYlfm68VWwQk-K3VioBl11I.jpg?width=640&crop=smart&auto=webp&s=d9ef4c8859f707963391577a15b56dc0071e43a1', 'width': 640}], 'source': {'height': 338, 'url': 'https://external-preview.redd.it/M5GeiJvkWVTne_MNeojGOYlfm68VWwQk-K3VioBl11I.jpg?auto=webp&s=ea8c003cd3f49008e2b89181a24bda93e9d283c8', 'width': 640}, 'variants': {}}]} |
Chain of Agents: Large language models collaborating on long-context tasks | 21 | 2025-02-04T15:44:15 | https://research.google/blog/chain-of-agents-large-language-models-collaborating-on-long-context-tasks/ | ThiccStorms | research.google | 1970-01-01T00:00:00 | 0 | {} | 1ihkl35 | false | null | t3_1ihkl35 | /r/LocalLLaMA/comments/1ihkl35/chain_of_agents_large_language_models/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FB87VRpzow9VURnv7lr9DIciMsAdtPslOqg25wmsjpM.jpg?width=108&crop=smart&auto=webp&s=7964acc2944af8fe861045d4a392765c4a08d028', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/FB87VRpzow9VURnv7lr9DIciMsAdtPslOqg25wmsjpM.jpg?width=216&crop=smart&auto=webp&s=f000d44a1463184e7d9bcf5d5e4ce118363b1cce', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/FB87VRpzow9VURnv7lr9DIciMsAdtPslOqg25wmsjpM.jpg?width=320&crop=smart&auto=webp&s=ea1f37fdce168ba5ab16900e6722fc6d0a7d576b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/FB87VRpzow9VURnv7lr9DIciMsAdtPslOqg25wmsjpM.jpg?width=640&crop=smart&auto=webp&s=ecad125cfc220e2edee03bfb6fb8e92a060c1d04', 'width': 640}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/FB87VRpzow9VURnv7lr9DIciMsAdtPslOqg25wmsjpM.jpg?auto=webp&s=682d7d11dcd5833b3e3421aaa84010409e01ba2a', 'width': 800}, 'variants': {}}]} |
RTX 2060 12GB better than RTX 3060 12GB? | 2 | Okay, hear me out. It may sound strange, but from what I understand, the RTX 2060 12GB should perform better than the RTX 3060 12GB. Lemme cook.
The RTX 2060 has 12.2 TFLOPS of power in FP16 and 57.4 TFLOPS in Tensor, whereas the RTX 3060 has 9.5 TFLOPS in FP16 and 51.2 in Tensor. (Not accounting for boost.)
Yes, the bandwidth is slightly higher on the 3060, but it can't be that significant of a difference, right?
For the same VRAM with higher FLOPS and about the same memory bandwidth, the 2060 should perform better, since most AI operations are well suited to FP16, no?
Am I being a total schizo and am I wrong (if so, why), or am I right? (See the back-of-the-envelope sketch below.)
| 2025-02-04T15:45:58 | https://www.reddit.com/gallery/1ihkmip | shamboozles420 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ihkmip | false | null | t3_1ihkmip | /r/LocalLLaMA/comments/1ihkmip/rtx2060_12gb_better_than_rtx3060_12gb/ | false | false | 2 | null |
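The back-of-the-envelope sketch referenced in the post: single-stream token generation is memory-bandwidth-bound, so a rough ceiling on tokens/sec is bandwidth divided by the bytes read per token. Published bandwidth is 336 GB/s for the 2060 12GB and 360 GB/s for the 3060 12GB; the model size below is an example:

```python
# Sketch: bandwidth-bound ceiling on single-stream generation speed.
# tok/s ~= memory bandwidth / bytes read per token (whole model read once per token).
model_bytes = 8.0e9  # e.g. a ~8 GB Q4 quant fully resident in VRAM (example size)

for name, bw_gbs in [("RTX 2060 12GB", 336), ("RTX 3060 12GB", 360)]:
    toks = bw_gbs * 1e9 / model_bytes
    print(f"{name}: ~{toks:.0f} tok/s ceiling")

# Compute (TFLOPS) mostly matters for prompt processing and batching, which is
# why the 2060's higher FP16 throughput rarely shows up in single-user chat.
```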
Which abliterated model should I run locally on a higher-end mini PC with no GPU? | 2 | I played around with DeepSeek 7B on my Minisforum NBP6, but I would prefer to run an uncensored/abliterated model designed for a GPU-less computer with a decent CPU.
What is the latest model to try out? | 2025-02-04T15:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ihknpw/which_abliterated_model_to_run_locally_on_higher/ | WiKDMoNKY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihknpw | false | null | t3_1ihknpw | /r/LocalLLaMA/comments/1ihknpw/which_abliterated_model_to_run_locally_on_higher/ | false | false | self | 2 | null |
I made a TangoFlux SFX generator server using FastAPI | 6 | 2025-02-04T15:57:47 | https://v.redd.it/m7nk705s95he1 | United-Rush4073 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihkwgt | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m7nk705s95he1/DASHPlaylist.mpd?a=1741277474%2CZjkwYjA5NTJlNWNlMzk1NThmNWExNjA1Yjk4M2RlMWJkZjA0NzNmMzY0MjBlZDIwOTc0YWNkNzQ2N2VkMjQ2Ng%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/m7nk705s95he1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/m7nk705s95he1/HLSPlaylist.m3u8?a=1741277474%2CMjYyOGM4NDhkNzQ3ZTZjNGZkM2Y0OWViZGUwZmM0MTRmZDUxNDJkMTgzYjljOTBlZDc5YjE1NzMxNzg1ZmY1Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m7nk705s95he1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ihkwgt | /r/LocalLLaMA/comments/1ihkwgt/i_made_a_tangoflux_sfx_generator_server_using/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55.png?width=108&crop=smart&format=pjpg&auto=webp&s=5fdfae35d90517fdad2d0911e642b862f12e3d65', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55.png?width=216&crop=smart&format=pjpg&auto=webp&s=143de26517c428380cc8a09bf20672d8f50414c0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55.png?width=320&crop=smart&format=pjpg&auto=webp&s=8dcad456ad7a37e266fbd497fd670414fcc7b4a1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55.png?width=640&crop=smart&format=pjpg&auto=webp&s=dde0c6f79906589fbf61111399ad902b0d3197b0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55.png?width=960&crop=smart&format=pjpg&auto=webp&s=e0d6147b4626356455ca8d67dac7dba07cef8b7f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e953587a13dd0b52487c575516daaddbd6f33d77', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dGh0amswNXM5NWhlMeLQyH-n9J7LT5KoTYQFFk2Y4kwdQ6eKa6d2-PEauu55.png?format=pjpg&auto=webp&s=55fb4360b5e499c8d081bb701ccc1e918a78de09', 'width': 1920}, 'variants': {}}]} |
We've made an AI training assistant called Steev! | 12 | We just released Steev!
**Ever feel drained from constantly monitoring your training curves?** Or frustrated when you realize—too late—you made a mistake?
**Training AI models is resource-intensive**, so we're often glued to progress, hoping everything goes smoothly. But what if you didn’t have to be?
Introducing Steev—your solution to eliminating the inefficiencies of AI model training.
Give it a try and let us know what you think! 👇
Steev: [https://www.steev.io/](https://www.steev.io/)
Tutorial on fine-tuning Llama 8B distilled from DeepSeek-R1 using Unsloth
: [https://tbd-labs-ai.github.io/steev-docs/tutorials/unsloth/](https://tbd-labs-ai.github.io/steev-docs/tutorials/unsloth/) | 2025-02-04T16:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ihl5k5/weve_made_ai_training_assistant_called_steev/ | Vivid-Entertainer752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihl5k5 | false | null | t3_1ihl5k5 | /r/LocalLLaMA/comments/1ihl5k5/weve_made_ai_training_assistant_called_steev/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '3qoRUboC7KJONeVgCwY9n30qmjpOT5IpHveKzu1ZopY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/x2szZzCKS2TwHPxr2LSluoYORonqlQPX9lMIzV_Ztow.jpg?width=108&crop=smart&auto=webp&s=c5812d9ae2b845bb91b2125e23cc2b1649443063', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/x2szZzCKS2TwHPxr2LSluoYORonqlQPX9lMIzV_Ztow.jpg?width=216&crop=smart&auto=webp&s=493411ce5d96363ef3d4db670ae402b38d9b3451', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/x2szZzCKS2TwHPxr2LSluoYORonqlQPX9lMIzV_Ztow.jpg?width=320&crop=smart&auto=webp&s=2db361796357820fe549ccd39a89c21b5a6356e8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/x2szZzCKS2TwHPxr2LSluoYORonqlQPX9lMIzV_Ztow.jpg?width=640&crop=smart&auto=webp&s=9f064a40288aa0ca475666da840b5eb119d83fce', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/x2szZzCKS2TwHPxr2LSluoYORonqlQPX9lMIzV_Ztow.jpg?width=960&crop=smart&auto=webp&s=26caab700b5e9fe45d8b6e96dccf2d4ef7825d26', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/x2szZzCKS2TwHPxr2LSluoYORonqlQPX9lMIzV_Ztow.jpg?width=1080&crop=smart&auto=webp&s=ae094d8d8585d2297bdf22a474e73f5475285891', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/x2szZzCKS2TwHPxr2LSluoYORonqlQPX9lMIzV_Ztow.jpg?auto=webp&s=339aade0cc1494e9c3afa5b72dc8d7b8d871e9ca', 'width': 1200}, 'variants': {}}]} |
why is deepseek referencing open ai when I asked if they store my data? | 0 | 2025-02-04T16:11:03 | rayenbox | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ihl87q | false | null | t3_1ihl87q | /r/LocalLLaMA/comments/1ihl87q/why_is_deepseek_referencing_open_ai_when_i_asked/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'FVImjFHsILApe4oL1BdWBOFIJcToLZiuXHlvJl8fmT4', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/kgdm1xa6c5he1.png?width=108&crop=smart&auto=webp&s=26221b41b9d361e54e24bc65223b4b8f1610dc27', 'width': 108}, {'height': 226, 'url': 'https://preview.redd.it/kgdm1xa6c5he1.png?width=216&crop=smart&auto=webp&s=03c965edbdfad1c270164c491c056feefd6b77e3', 'width': 216}, {'height': 335, 'url': 'https://preview.redd.it/kgdm1xa6c5he1.png?width=320&crop=smart&auto=webp&s=03661b999a2e16f906ea3f80ef5aaa31b5f57eec', 'width': 320}, {'height': 670, 'url': 'https://preview.redd.it/kgdm1xa6c5he1.png?width=640&crop=smart&auto=webp&s=e9b76b2b83068552ca52738d5c7c942df801413e', 'width': 640}], 'source': {'height': 856, 'url': 'https://preview.redd.it/kgdm1xa6c5he1.png?auto=webp&s=7335e0cf5ecff8aa1f1073e61d6899a4011e2450', 'width': 817}, 'variants': {}}]} |
Is this build enough to run DeepSeek-R1 70B Q4_K_M? | 0 | [PCPartPicker Part List](https://pcpartpicker.com/list/xQxMTM)
Type|Item|Price
:----|:----|:----
**CPU** | [AMD Ryzen 9 9950X 4.3 GHz 16-Core Processor](https://pcpartpicker.com/product/T6GhP6/amd-ryzen-9-9950x-43-ghz-16-core-processor-100-100001277wof) | $576.29 @ Amazon
**CPU Cooler** | [ID-COOLING DASHFLOW XT 85 CFM Liquid CPU Cooler](https://pcpartpicker.com/product/YDdG3C/id-cooling-dashflow-xt-85-cfm-liquid-cpu-cooler-dashflow-360-xt) |-
**Motherboard** | [MSI MPG X870E CARBON WIFI ATX AM5 Motherboard](https://pcpartpicker.com/product/dGWJ7P/msi-mpg-x870e-carbon-wifi-atx-am5-motherboard-mpg-x870e-carbon-wifi) | $499.99 @ Amazon
**Memory** | [Corsair Vengeance RGB 128 GB (4 x 32 GB) DDR5-6000 CL30 Memory](https://pcpartpicker.com/product/WTMMnQ/corsair-vengeance-rgb-64-gb-2-x-32-gb-ddr5-6000-cl30-memory-cmh64gx5m2b6000c30) | $410 @ Newegg
**Storage** | [Samsung 990 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive](https://pcpartpicker.com/product/34ytt6/samsung-990-pro-2-tb-m2-2280-pcie-40-x4-nvme-solid-state-drive-mz-v9p2t0bw) | $169.99 @ Amazon
**Video Card** | [2 x Asus TUF GAMING OC GeForce RTX 4090 24 GB Video Card](https://pcpartpicker.com/product/rB2WGX/asus-tuf-gaming-oc-geforce-rtx-4090-24-gb-video-card-tuf-rtx4090-o24g-gaming) |-
**Case** | [MSI MAG PANO 100L PZ ATX Mid Tower Case](https://pcpartpicker.com/product/QHcgXL/msi-mag-pano-100l-pz-atx-mid-tower-case-mag-pano-100l-pz) | $117.99 @ Newegg
**Power Supply** | [Cooler Master V Platinum V2 1600 W 80+ Platinum Certified Fully Modular ATX Power Supply](https://pcpartpicker.com/product/NXmNnQ/cooler-master-v-platinum-v2-1600-w-80-platinum-certified-fully-modular-atx-power-supply-mpz-g002-afap-bus) | $309.99 @ Amazon
I know the 5090 series is out, but I can save some of the cost by using 4090s. (A quick VRAM fit estimate follows below.) | 2025-02-04T16:13:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ihl9yk/is_this_build_enough_to_run_deepseekr1_70b_q4_k_m/ | fat_fun_xox | self.LocalLLaMA | 2025-02-04T16:16:08 | 0 | {} | 1ihl9yk | false | null | t3_1ihl9yk | /r/LocalLLaMA/comments/1ihl9yk/is_this_build_enough_to_run_deepseekr1_70b_q4_k_m/ | false | false | self | 0 | null
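The fit estimate referenced above. Q4_K_M averages roughly 4.8 bits per weight, so the arithmetic is straightforward; the KV-cache figure is an assumption for a modest context window:

```python
# Sketch: does a 70B-class model at Q4_K_M fit in 2 x 24 GB?
params = 70.6e9       # Llama-70B-class parameter count
bits_per_param = 4.8  # typical effective rate for Q4_K_M (varies slightly by quant)
weights_gb = params * bits_per_param / 8 / 1e9
kv_cache_gb = 3.0     # assumption: a few GB of KV cache for a modest context window
total = weights_gb + kv_cache_gb
print(f"weights ~{weights_gb:.1f} GB + KV ~{kv_cache_gb:.0f} GB = ~{total:.1f} GB vs 48 GB VRAM")
# -> roughly 45 GB, so it fits across two 4090s, with limited headroom for long contexts.
```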
Why does llama.cpp default to interactive mode? | 1 | I compiled llama.cpp from source with Vulkan as the backend. It works fine, but it always goes into interactive mode even when I didn't request it. When I run `llama-cli -m modelName -p "What are the days of the week?"` it takes my prompt, uses it as the system prompt, and then waits for another prompt:
[Screenshot: llama-cli taking the -p prompt as a system prompt and waiting for chat input](https://preview.redd.it/qsx2pb7fd5he1.png?width=958&format=png&auto=webp&s=84d911543b839e0acd684318a3706298bb3a31a5)
What am I doing wrong? | 2025-02-04T16:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ihley9/why_does_llamacpp_default_to_interactive_mode/ | Positive_Click_8963 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihley9 | false | null | t3_1ihley9 | /r/LocalLLaMA/comments/1ihley9/why_does_llamacpp_default_to_interactive_mode/ | false | false | 1 | null |
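For context: recent llama.cpp builds put llama-cli into a conversation mode by default when the model's GGUF carries a chat template, which matches the behavior described. The sketch below forces a one-shot completion; the `-no-cnv` flag comes from upstream llama.cpp discussions, so treat it as an assumption and confirm it against `llama-cli --help` on your build:

```python
# Sketch: forcing a one-shot completion from llama-cli via a script.
# Assumes llama-cli is on PATH; -no-cnv disables conversation mode in builds
# that have it (an assumption: confirm the flag with `llama-cli --help`).
import subprocess

result = subprocess.run(
    ["llama-cli", "-m", "model.gguf", "-no-cnv",
     "-p", "What are the days of the week?", "-n", "64"],
    capture_output=True, text=True,
)
print(result.stdout)
```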
DABStep, a very hard data analysis benchmark | 2 | Language models are becoming increasingly capable and can solve tasks autonomously as agents. There are many exciting use cases, especially at the intersection of reasoning, code, and data. However, proper evaluation benchmarks on real-world problems are lacking and hinder progress in the field.
To tackle this challenge, Adyen and Hugging Face built the Data Agent Benchmark for Multi-step Reasoning (DABstep) together. DABstep consists of over 450 data analysis tasks designed to evaluate the capabilities of state-of-the-art LLMs and AI agents.
Full blog: [https://huggingface.co/blog/dabstep](https://huggingface.co/blog/dabstep)
Leaderboard: [https://huggingface.co/spaces/adyen/DABstep](https://huggingface.co/spaces/adyen/DABstep) | 2025-02-04T16:36:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ihlu7d/dabstep_a_very_hard_data_analysis_benchmark/ | chef1957 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihlu7d | false | null | t3_1ihlu7d | /r/LocalLLaMA/comments/1ihlu7d/dabstep_a_very_hard_data_analysis_benchmark/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'NLGzXY6OLZ1czcuII0lH1Cgs5bkziQdOb27jmsgQKMM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/H8Ae4a10nhMS_zaetaUq_-sxbmxJqx_5O3sGWdmusrQ.jpg?width=108&crop=smart&auto=webp&s=452c0d7bb9c5cf458848ba85b0c2df4a67c90159', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/H8Ae4a10nhMS_zaetaUq_-sxbmxJqx_5O3sGWdmusrQ.jpg?width=216&crop=smart&auto=webp&s=3d31f4c15416b4daede4d88eb984d10dd2a386cc', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/H8Ae4a10nhMS_zaetaUq_-sxbmxJqx_5O3sGWdmusrQ.jpg?width=320&crop=smart&auto=webp&s=a0d5bacf751bd353da86c9c3f03137ae40517aad', 'width': 320}, {'height': 319, 'url': 'https://external-preview.redd.it/H8Ae4a10nhMS_zaetaUq_-sxbmxJqx_5O3sGWdmusrQ.jpg?width=640&crop=smart&auto=webp&s=bc2b388a7b1947aca63dfcbeb8c193e912eb9267', 'width': 640}, {'height': 479, 'url': 'https://external-preview.redd.it/H8Ae4a10nhMS_zaetaUq_-sxbmxJqx_5O3sGWdmusrQ.jpg?width=960&crop=smart&auto=webp&s=2187988b7cb1ff525d7aa8091ff614f6b36a36c6', 'width': 960}, {'height': 539, 'url': 'https://external-preview.redd.it/H8Ae4a10nhMS_zaetaUq_-sxbmxJqx_5O3sGWdmusrQ.jpg?width=1080&crop=smart&auto=webp&s=ba37b0d818971ea5bc9b8a21023d4447f1bb528e', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/H8Ae4a10nhMS_zaetaUq_-sxbmxJqx_5O3sGWdmusrQ.jpg?auto=webp&s=280951eecb10c3f3a17b19f29c366d10898fa297', 'width': 2404}, 'variants': {}}]} |
Can someone explain to me the difference between SearXNG and Selenium? | 5 | I understand that both are leading ways to connect an LLM to the web without relying on APIs or third parties, but which one of them is better, and for what reasons? (A small example of the SearXNG side is sketched below.)
Thanks ! | 2025-02-04T16:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ihlwlu/can_someone_explain_to_me_the_difference_between/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihlwlu | false | null | t3_1ihlwlu | /r/LocalLLaMA/comments/1ihlwlu/can_someone_explain_to_me_the_difference_between/ | false | false | self | 5 | null |
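They solve different problems, which a small example makes concrete: SearXNG is a self-hosted metasearch engine with an HTTP API that returns result lists, while Selenium drives a real browser to render and scrape individual pages. A sketch of the SearXNG side (assumes a local instance with JSON output enabled in its settings.yml):

```python
# Sketch: feeding an LLM search results from a self-hosted SearXNG instance.
# Assumes SearXNG on localhost:8080 with the "json" format enabled in settings.yml.
import requests

resp = requests.get(
    "http://localhost:8080/search",
    params={"q": "llama.cpp speculative decoding", "format": "json"},
    timeout=30,
)
for hit in resp.json()["results"][:5]:
    print(hit["title"], "-", hit["url"])

# Selenium, by contrast, launches a real browser to render and scrape each page:
# slower and heavier, but it works on JavaScript-gated sites that an API can't see.
```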
New "Kiwi" model on lmsys arena | 40 | Feels like Grok-3 and Grok-3-mini to me... | 2025-02-04T16:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ihlx8q/new_kiwi_model_on_lmsys_arena/ | Ok_Landscape_6819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihlx8q | false | null | t3_1ihlx8q | /r/LocalLLaMA/comments/1ihlx8q/new_kiwi_model_on_lmsys_arena/ | false | false | self | 40 | null |
Hormoz-8B: a small and on-device language model from Mann-E. | 1 | [removed] | 2025-02-04T16:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ihm16o/hormoz8b_a_small_and_ondevice_language_model_from/ | Haghiri75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihm16o | false | null | t3_1ihm16o | /r/LocalLLaMA/comments/1ihm16o/hormoz8b_a_small_and_ondevice_language_model_from/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mcVO1JmpRis8cPPZMMjgEB9DxjI_wWFXCPiw7aOP8-Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=108&crop=smart&auto=webp&s=3e36ca2eee5e019615b11ec524e87044e500e0c3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=216&crop=smart&auto=webp&s=ca1e3176d4106111d5dcd48a31e253f17d6353c5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=320&crop=smart&auto=webp&s=a5ecc77ee9a935d9f03fdbe393294c251ff2e3b9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=640&crop=smart&auto=webp&s=9458a00cd04761418de538aa5186650c870add4c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=960&crop=smart&auto=webp&s=a1706977afdf83c4af893bbeea3a1e79e9200916', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=1080&crop=smart&auto=webp&s=144805e64f44a56dfef711412b05b234870080c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?auto=webp&s=57b78532278755a08e34ec2f175335c71076b6dd', 'width': 1200}, 'variants': {}}]} |
Drummer's Anubis Pro 105B v1 - An upscaled L3.3 70B with continued training! | 88 | 2025-02-04T16:52:46 | https://huggingface.co/TheDrummer/Anubis-Pro-105B-v1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ihm8pl | false | null | t3_1ihm8pl | /r/LocalLLaMA/comments/1ihm8pl/drummers_anubis_pro_105b_v1_an_upscaled_l33_70b/ | false | false | 88 | {'enabled': False, 'images': [{'id': 'L6EPMfGh4Mdrf4Nj2mDDDH14SNvN_-5nm3Q7H-vUEmE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Isnr897tQLSa3f7knpDj7eFO6fjkOWORPvRfD442vlo.jpg?width=108&crop=smart&auto=webp&s=7a94362e963349d3a382cc5a1ea8d3d52040f879', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Isnr897tQLSa3f7knpDj7eFO6fjkOWORPvRfD442vlo.jpg?width=216&crop=smart&auto=webp&s=0cbf39ab5ec4c2e035f8fbccf5d310be07425e61', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Isnr897tQLSa3f7knpDj7eFO6fjkOWORPvRfD442vlo.jpg?width=320&crop=smart&auto=webp&s=1eaa6aa8844e5ed95f0847f4635a9607a45f4eb5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Isnr897tQLSa3f7knpDj7eFO6fjkOWORPvRfD442vlo.jpg?width=640&crop=smart&auto=webp&s=24f28b0bdfd72abaef01b7d591bc2878895b848b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Isnr897tQLSa3f7knpDj7eFO6fjkOWORPvRfD442vlo.jpg?width=960&crop=smart&auto=webp&s=ec5d8d0a3f1302c4cadcd5025618a863d46a55a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Isnr897tQLSa3f7knpDj7eFO6fjkOWORPvRfD442vlo.jpg?width=1080&crop=smart&auto=webp&s=33e7ac16e2954503b2810c7ccbdfe525a4150a0f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Isnr897tQLSa3f7knpDj7eFO6fjkOWORPvRfD442vlo.jpg?auto=webp&s=b89a23376c96014e0963fe3e8b49f8aae4455b87', 'width': 1200}, 'variants': {}}]} |
Beginner questions | 1 | [removed] | 2025-02-04T16:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ihm94x/beginner_questions/ | svx23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihm94x | false | null | t3_1ihm94x | /r/LocalLLaMA/comments/1ihm94x/beginner_questions/ | false | false | self | 1 | null |
Teach a new and rare programming language to a model/ Text-to-Diagram | 1 | [removed] | 2025-02-04T17:00:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ihmfbj/teach_a_new_and_rare_programming_language_to_a/ | CoconutFun843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihmfbj | false | null | t3_1ihmfbj | /r/LocalLLaMA/comments/1ihmfbj/teach_a_new_and_rare_programming_language_to_a/ | false | false | self | 1 | null |
Putting together all the LLM web search capable API available for developers | 23 | A developer list of currently available LLM APIs which are also capable of connecting to the internet: [https://github.com/vadimen/awesome\_llm\_api\_with\_web\_search](https://github.com/vadimen/awesome_llm_api_with_web_search). Contains the available models and their prices. Because there are not many such providers, I thought it could be useful to have this list.
Everybody is welcome to contribute with a PR. | 2025-02-04T17:08:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ihmnbq/putting_together_all_the_llm_web_search_capable/ | sickleRunner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihmnbq | false | null | t3_1ihmnbq | /r/LocalLLaMA/comments/1ihmnbq/putting_together_all_the_llm_web_search_capable/ | false | false | self | 23 | null |
Copilot for Overleaf | 1 | [removed] | 2025-02-04T17:10:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ihmox7/copilot_for_overleaf/ | Crafty-Possibility46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihmox7 | false | null | t3_1ihmox7 | /r/LocalLLaMA/comments/1ihmox7/copilot_for_overleaf/ | false | false | self | 1 | null |
Beginner Questions | 1 | [removed] | 2025-02-04T17:10:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ihmp6y/beginner_questions/ | svx23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihmp6y | false | null | t3_1ihmp6y | /r/LocalLLaMA/comments/1ihmp6y/beginner_questions/ | false | false | self | 1 | null |
Building and Monetizing AI Model APIs | 0 | 2025-02-04T17:18:57 | https://zuplo.com/blog/2025/01/29/monetize-ai-models | ZuploAdrian | zuplo.com | 1970-01-01T00:00:00 | 0 | {} | 1ihmwct | false | null | t3_1ihmwct | /r/LocalLLaMA/comments/1ihmwct/building_and_monetizing_ai_model_apis/ | false | false | 0 | {'enabled': False, 'images': [{'id': '7DJt53y4y_P146uVRKfsLV8mCPQeiBR1Ewd95fwDrLE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-RTAx2okSdBRtdRNYqrLnYLU17lh5hE9nwfuUTH3f6E.jpg?width=108&crop=smart&auto=webp&s=3e47c5978c6c2269071679e4a1266496d970d2ac', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-RTAx2okSdBRtdRNYqrLnYLU17lh5hE9nwfuUTH3f6E.jpg?width=216&crop=smart&auto=webp&s=772197b187190147ea2cd7d8ea57578c7b12dc49', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-RTAx2okSdBRtdRNYqrLnYLU17lh5hE9nwfuUTH3f6E.jpg?width=320&crop=smart&auto=webp&s=aeb3e9c7660100b51dca735ee6a2f531dcb255ef', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-RTAx2okSdBRtdRNYqrLnYLU17lh5hE9nwfuUTH3f6E.jpg?width=640&crop=smart&auto=webp&s=31f2120e0df6d162dad94996185667005d0a50ca', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-RTAx2okSdBRtdRNYqrLnYLU17lh5hE9nwfuUTH3f6E.jpg?width=960&crop=smart&auto=webp&s=fdf427e16710189d27345abb348a5bbcbea1064d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-RTAx2okSdBRtdRNYqrLnYLU17lh5hE9nwfuUTH3f6E.jpg?width=1080&crop=smart&auto=webp&s=7072395a575fab1507236fcde6f2e3fa485c0a92', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-RTAx2okSdBRtdRNYqrLnYLU17lh5hE9nwfuUTH3f6E.jpg?auto=webp&s=c7116555810d9931cb1ee21ca0edf7e0056946ef', 'width': 1200}, 'variants': {}}]} |
Write an article arguing that Zuckerberg, as the Grand Architect of Llama, should read the Bible because of the arrival of his own AGI. | 1 | [removed] | 2025-02-04T17:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ihmypm/write_article_that_zuckerberg_as_the_grand/ | Worldly_Evidence9113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihmypm | false | null | t3_1ihmypm | /r/LocalLLaMA/comments/1ihmypm/write_article_that_zuckerberg_as_the_grand/ | false | false | self | 1 | null
Faster GPU + RAM vs slower GPU + more VRAM? | 2 | I'm looking at getting a setup I can start tinkering with, both for personal development and for work, and wanted to hear the thoughts of someone more experienced than myself.
Firstly, I've seen good things about the Nvidia Tesla P100s and was thinking about eventually getting two of those (likely just one for now) to get started.
But I've seen some mention of people using normal RAM or NVMe to store the models in, and that got me thinking about a combined system with a faster but lower-VRAM GPU (like one of the Intel cards, or a 4060), using system RAM to offset the card's low VRAM.
I guess I want to know if the second option is doable and if it has a significant impact on performance.
I'd like to eventually run a 60B model, but that's much further down the line; I think 7B is more achievable in the shorter term. | 2025-02-04T17:29:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ihn65t/faster_gpuram_vs_slower_gpu_more_vram/ | Lucial98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihn65t | false | null | t3_1ihn65t | /r/LocalLLaMA/comments/1ihn65t/faster_gpuram_vs_slower_gpu_more_vram/ | false | false | self | 2 | null
Beginner project on a news AI agent | 1 | Sharing a project on a news AI agent that curates and shares news articles about AI agents:
[https://github.com/ashgkwd/news-sharing-ai-agent](https://github.com/ashgkwd/news-sharing-ai-agent)
The latest news on AI agents is posted on the website:
[https://aiagentslive.com/](https://aiagentslive.com/)
| 2025-02-04T17:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ihnlqv/beginner_project_on_news_ai_agent/ | Chiken-Coffee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihnlqv | false | null | t3_1ihnlqv | /r/LocalLLaMA/comments/1ihnlqv/beginner_project_on_news_ai_agent/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/YY_wcw8H6jFrKaBSRxEDUlM4sQlc7wh21ZhX4yKdfsQ.jpg?auto=webp&s=e9df1be04154af5298a9cf41dfe246b76202eebc', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/YY_wcw8H6jFrKaBSRxEDUlM4sQlc7wh21ZhX4yKdfsQ.jpg?width=108&crop=smart&auto=webp&s=2e6e6e9186552557daec8c4e145e6cc7054d9c7a', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/YY_wcw8H6jFrKaBSRxEDUlM4sQlc7wh21ZhX4yKdfsQ.jpg?width=216&crop=smart&auto=webp&s=0e4fd4d9a9445d290fc101005d54aa0e31f320ed', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/YY_wcw8H6jFrKaBSRxEDUlM4sQlc7wh21ZhX4yKdfsQ.jpg?width=320&crop=smart&auto=webp&s=d28b7abd5a9a98237eba48632f47ce5f417970bd', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/YY_wcw8H6jFrKaBSRxEDUlM4sQlc7wh21ZhX4yKdfsQ.jpg?width=640&crop=smart&auto=webp&s=df93c9831e5142e41d9dbdaa1fb2772ac824d775', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/YY_wcw8H6jFrKaBSRxEDUlM4sQlc7wh21ZhX4yKdfsQ.jpg?width=960&crop=smart&auto=webp&s=8aaafa22d6a4abd1a1db6e936f96e84dba41191b', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/YY_wcw8H6jFrKaBSRxEDUlM4sQlc7wh21ZhX4yKdfsQ.jpg?width=1080&crop=smart&auto=webp&s=cd0de6cddfbf0cfc4076ea043d5a13d2ba609a9b', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'IVScozqXi9byEJxJssHqWE1ZJ50k8pgiO8BPil9DJW8'}], 'enabled': False} |
Most capable function-calling open-weight model in Jan 2025? | 1 | I'm currently using GPT-4o for an AI agent that is working great, but I would like to replace it with a local model.
What are the recommended open-weight models that will fit in 48GB of VRAM and preferably have vLLM support? I was thinking of Qwen2.5-32B-Instruct (but I saw a mention that quantizing it broke function calling) or Mistral Small 3 (not sure of its vLLM support); any other recommendations? (A serving-side sketch follows below.) | 2025-02-04T17:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ihnt5t/most_capable_function_calling_open_weight_model/ | alew3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihnt5t | false | null | t3_1ihnt5t | /r/LocalLLaMA/comments/1ihnt5t/most_capable_function_calling_open_weight_model/ | false | false | self | 1 | null
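Whichever model wins, the serving side looks the same: vLLM exposes an OpenAI-compatible endpoint, so the agent code barely changes when swapping GPT-4o out. A sketch, assuming a vLLM server started with tool-call parsing enabled (model name and tool schema are examples):

```python
# Sketch: OpenAI-style function calling against a local vLLM server.
# Example server command (assumed setup, adapt to your model):
#   vllm serve Qwen/Qwen2.5-32B-Instruct --enable-auto-tool-choice --tool-call-parser hermes
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-32B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```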
Introducing Sundry - An intelligent context API for LLMs | 1 | [removed] | 2025-02-04T17:56:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ihntc2/introducing_sundry_an_intelligent_context_api_for/ | something_cleverer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihntc2 | false | null | t3_1ihntc2 | /r/LocalLLaMA/comments/1ihntc2/introducing_sundry_an_intelligent_context_api_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Q7dysfr3X_1zT3s-zGDrkggMwaLrYDKj2tIBmIQ9d3I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/C41z86udQbm_awM-KCsx44Cp2p8NiT3Ihx1-28ahONc.jpg?width=108&crop=smart&auto=webp&s=2a0b01430a5ef0cdf5b027b506658e2d04400bf8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/C41z86udQbm_awM-KCsx44Cp2p8NiT3Ihx1-28ahONc.jpg?width=216&crop=smart&auto=webp&s=e978d4a3a5c48aa87dde6eb41593daaf97824fe0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/C41z86udQbm_awM-KCsx44Cp2p8NiT3Ihx1-28ahONc.jpg?width=320&crop=smart&auto=webp&s=726027e5b7442cd184a9c068d746a924a255f246', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/C41z86udQbm_awM-KCsx44Cp2p8NiT3Ihx1-28ahONc.jpg?width=640&crop=smart&auto=webp&s=62ce36ec6102e46fd2f0b0a57daba9ced36c57e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/C41z86udQbm_awM-KCsx44Cp2p8NiT3Ihx1-28ahONc.jpg?width=960&crop=smart&auto=webp&s=7bbdedb2f39ebb41a772047693d572c8e8f73f44', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/C41z86udQbm_awM-KCsx44Cp2p8NiT3Ihx1-28ahONc.jpg?width=1080&crop=smart&auto=webp&s=6853b93692147934b26d6b8e40a9719a3f66cebc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/C41z86udQbm_awM-KCsx44Cp2p8NiT3Ihx1-28ahONc.jpg?auto=webp&s=9e2d58e7087b03aa2f398ba167d45a8ee688b2e4', 'width': 1200}, 'variants': {}}]} |
Local auto-complete for coding? | 4 | So I've been playing with various local models for coding, and while the chat interface is at least somewhat workable (using e.g. Qwen2.5-Coder-32B), all my attempts at using autocomplete in VS Code with [Continue.dev](http://Continue.dev) have failed utterly and completely. Either it "thinks" forever (with 100% GPU load) and doesn't give any suggestions at all, or the suggestions are irrelevant or otherwise low quality.
Is it just the state of local AI for coding, or am I doing something wrong here? If the latter, which extension(s) and model(s) do you use for local auto-complete to make it work fine?
For reference, I have dual 4090s, in case I need to run bigger models for this. (A minimal autocomplete config sketch follows below.)
Thanks! | 2025-02-04T17:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ihnvjk/local_autocomplete_for_coding/ | ChangeIsHard_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihnvjk | false | null | t3_1ihnvjk | /r/LocalLLaMA/comments/1ihnvjk/local_autocomplete_for_coding/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401}, 'variants': {}}]} |
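One common gotcha behind the forever-thinking behavior: pointing Continue's tab autocomplete at a large chat model instead of a small fill-in-the-middle-capable one. A sketch that writes a minimal `~/.continue/config.json` for an Ollama-served small coder model; the key names follow Continue's documented config schema, but treat the exact values as assumptions to adapt:

```python
# Sketch: minimal Continue tab-autocomplete config pointing at a small,
# fill-in-the-middle-capable model served by Ollama. Big chat models like a
# 32B are usually far too slow for per-keystroke completion.
# Warning: this overwrites any existing config; merge by hand in practice.
import json, pathlib

config = {
    "tabAutocompleteModel": {
        "title": "Qwen2.5 Coder 1.5B",
        "provider": "ollama",
        "model": "qwen2.5-coder:1.5b-base",  # base (non-instruct) variants suit FIM best
    }
}

path = pathlib.Path.home() / ".continue" / "config.json"
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(config, indent=2))
print(f"wrote {path}")
```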
Web browser extension using local models | 1 | [removed] | 2025-02-04T18:16:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ihob42/web_browser_extension_using_local_models/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihob42 | false | null | t3_1ihob42 | /r/LocalLLaMA/comments/1ihob42/web_browser_extension_using_local_models/ | false | false | self | 1 | null |
Wiki KB: Customizable, embedding-ready knowledge base | 1 | [removed] | 2025-02-04T18:25:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ihoj4t/wiki_kb_customizable_embeddingready_knowledge_base/ | leptonflavors | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ihoj4t | false | null | t3_1ihoj4t | /r/LocalLLaMA/comments/1ihoj4t/wiki_kb_customizable_embeddingready_knowledge_base/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Kt8VE5vN069h56PUXmhiG0islOQwla5TbqlOdF9BGtY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-AUI-MIVraLPiqhk7KwWyg1FJJMdX68Tp-JaX_IxFYY.jpg?width=108&crop=smart&auto=webp&s=f7312caf93ba2a7d021ce803c6f83fd305f35e93', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-AUI-MIVraLPiqhk7KwWyg1FJJMdX68Tp-JaX_IxFYY.jpg?width=216&crop=smart&auto=webp&s=32922a752e6e3f7d2ec8a90f8523a1f357ae5553', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-AUI-MIVraLPiqhk7KwWyg1FJJMdX68Tp-JaX_IxFYY.jpg?width=320&crop=smart&auto=webp&s=ef3c8567689bee286f9f627bdb573e90e7ddddbf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-AUI-MIVraLPiqhk7KwWyg1FJJMdX68Tp-JaX_IxFYY.jpg?width=640&crop=smart&auto=webp&s=289982ff7803ce4b138ca78e719c1c1c18fd13bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-AUI-MIVraLPiqhk7KwWyg1FJJMdX68Tp-JaX_IxFYY.jpg?width=960&crop=smart&auto=webp&s=0d5ac8e8f91525ba7573dade6c3b2c38292178be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-AUI-MIVraLPiqhk7KwWyg1FJJMdX68Tp-JaX_IxFYY.jpg?width=1080&crop=smart&auto=webp&s=ee5b48d693ac6ba6581109c434f2616a16beceae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-AUI-MIVraLPiqhk7KwWyg1FJJMdX68Tp-JaX_IxFYY.jpg?auto=webp&s=cba7bea3105c8129168d310d2048c641d9734457', 'width': 1200}, 'variants': {}}]} |