Dataset schema (one record per post; ranges are min–max observed):
title: string (1–300 chars)
score: int64 (0–8.54k)
selftext: string (0–40k chars)
created: timestamp[ns]
url: string (0–780 chars)
author: string (3–20 chars)
domain: string (0–82 chars)
edited: timestamp[ns]
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (7 chars)
locked: bool (2 classes)
media: string (646–1.8k chars)
name: string (10 chars)
permalink: string (33–82 chars)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (4–213 chars)
ups: int64 (0–8.54k)
preview: string (301–5.01k chars)
Building AI Agents for Crypto Market Analysis with RAG – Open Source Project
1
[removed]
2025-01-31T07:12:25
https://i.redd.it/06etmcxs4age1.jpeg
Different_Prune_3529
i.redd.it
1970-01-01T00:00:00
0
{}
1ie9boz
false
null
t3_1ie9boz
/r/LocalLLaMA/comments/1ie9boz/building_ai_agents_for_crypto_market_analysis/
false
false
https://b.thumbs.redditm…siLtQnZbRcVA.jpg
1
{'enabled': True, 'images': [{'id': 'E2Ts5hq_Bzub3wRFi8VI_F61txb068Pmd-2qnOfPx2M', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/06etmcxs4age1.jpeg?width=108&crop=smart&auto=webp&s=3e21456581cdaea9459766b6481a3d6331fba859', 'width': 108}, {'height': 173, 'url': 'https://preview.redd.it/06etmcxs4age1.jpeg?width=216&crop=smart&auto=webp&s=ce8448d094061a3c33f497aaae7304420a0a08c8', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/06etmcxs4age1.jpeg?width=320&crop=smart&auto=webp&s=e01df4de40f0a0ab0a4d5c8de7b09f8e9c7aa26b', 'width': 320}, {'height': 513, 'url': 'https://preview.redd.it/06etmcxs4age1.jpeg?width=640&crop=smart&auto=webp&s=cdaedf23bbbe0d9a43d8fb049eb2f3b8c7733fa1', 'width': 640}, {'height': 769, 'url': 'https://preview.redd.it/06etmcxs4age1.jpeg?width=960&crop=smart&auto=webp&s=bfa1e936bae6f1a506f7cfe4e767bec0bfacf11b', 'width': 960}, {'height': 865, 'url': 'https://preview.redd.it/06etmcxs4age1.jpeg?width=1080&crop=smart&auto=webp&s=98042ca922db1a1a575f71168b64b254a0d4a339', 'width': 1080}], 'source': {'height': 994, 'url': 'https://preview.redd.it/06etmcxs4age1.jpeg?auto=webp&s=6b03454150c2df632d456ac22df8987678b13532', 'width': 1240}, 'variants': {}}]}
Did you get a 5090 on the launch day?
0
[View Poll](https://www.reddit.com/poll/1ie9iaj)
2025-01-31T07:26:17
https://www.reddit.com/r/LocalLLaMA/comments/1ie9iaj/did_you_get_a_5090_on_the_launch_day/
sktksm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ie9iaj
false
null
t3_1ie9iaj
/r/LocalLLaMA/comments/1ie9iaj/did_you_get_a_5090_on_the_launch_day/
false
false
self
0
null
AI Agent in develop.
0
I've been at this for at least six months, arguably wasting my time: it's not complete, and it isn't even close. I actually started this personal project just for fun, and I think it could be useful for someone who wants to build their own on top of it. Could someone help me push the code and dependencies properly to a GitHub repo? I don't know how to. I mean: what is the standard way to do it right? Thank you. (The usual flow is sketched after this record.)
2025-01-31T07:29:27
https://www.reddit.com/r/LocalLLaMA/comments/1ie9juj/ai_agent_in_develop/
IvanovasGanker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ie9juj
false
null
t3_1ie9juj
/r/LocalLLaMA/comments/1ie9juj/ai_agent_in_develop/
false
false
self
0
null
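For the GitHub question above, here is a minimal sketch of the standard first-push flow, driven from Python for consistency with this document's other examples. It assumes git and pip are installed and that an empty repository was already created on GitHub; the `<user>/<repo>` URL is a placeholder.

```python
# Hypothetical sketch: the standard "first push" sequence for a Python project.
# Run from the project root. <user>/<repo> is a placeholder URL.
import subprocess

def run(*args: str) -> None:
    subprocess.run(args, check=True)

# Record dependencies so others can reproduce the environment.
with open("requirements.txt", "w") as f:
    subprocess.run(["pip", "freeze"], stdout=f, check=True)
# Keep caches and virtualenvs out of the repo.
with open(".gitignore", "w") as f:
    f.write("__pycache__/\n.venv/\n*.log\n")

run("git", "init")
run("git", "add", ".")
run("git", "commit", "-m", "Initial commit")
run("git", "branch", "-M", "main")
run("git", "remote", "add", "origin", "https://github.com/<user>/<repo>.git")
run("git", "push", "-u", "origin", "main")
```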
How do you track your token usage?
1
[removed]
2025-01-31T07:33:27
https://www.reddit.com/r/LocalLLaMA/comments/1ie9lr3/how_do_you_track_your_token_usage/
Infinite-Campaign766
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ie9lr3
false
null
t3_1ie9lr3
/r/LocalLLaMA/comments/1ie9lr3/how_do_you_track_your_token_usage/
false
false
self
1
null
How do you track your token usage in large scale like langfuse?
1
[removed]
2025-01-31T07:35:07
https://www.reddit.com/r/LocalLLaMA/comments/1ie9mjn/how_do_you_track_your_token_usage_in_large_scale/
Infinite-Campaign766
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ie9mjn
false
null
t3_1ie9mjn
/r/LocalLLaMA/comments/1ie9mjn/how_do_you_track_your_token_usage_in_large_scale/
false
false
self
1
null
Mistral Small 3 First Impression
1
[removed]
2025-01-31T07:35:56
https://www.reddit.com/r/LocalLLaMA/comments/1ie9mwr/mistral_small_3_first_impression/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ie9mwr
false
null
t3_1ie9mwr
/r/LocalLLaMA/comments/1ie9mwr/mistral_small_3_first_impression/
false
false
self
1
null
How resource usage scales with concurrent users
1
[removed]
2025-01-31T07:49:12
https://www.reddit.com/r/LocalLLaMA/comments/1ie9t5m/how_resource_usage_scales_with_concurrent_users/
GT95
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ie9t5m
false
null
t3_1ie9t5m
/r/LocalLLaMA/comments/1ie9t5m/how_resource_usage_scales_with_concurrent_users/
false
false
self
1
null
need help with llama 3.2 3b model
3
Hello all, I am building quiz generation from video transcripts. The issue I'm facing is extracting the central topics from a transcript: when generating topics, some of the extra chit-chat also ends up becoming questions. How do I avoid this? What approach would better extract the topics actually taught in the lecture? I'm using the Llama 3.2 3B model via LM Studio. Your help will be really appreciated :) (A two-pass approach is sketched after this record.)
2025-01-31T08:01:26
https://www.reddit.com/r/LocalLLaMA/comments/1ie9z2a/need_help_with_llama_32_3b_model/
Previous-Estimate298
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ie9z2a
false
null
t3_1ie9z2a
/r/LocalLLaMA/comments/1ie9z2a/need_help_with_llama_32_3b_model/
false
false
self
3
null
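One way to keep chit-chat out of the questions is a two-pass pipeline: first ask the model for nothing but the taught topics, then generate questions per topic in separate calls. A hedged sketch below; it assumes LM Studio's OpenAI-compatible server is running on its default port (1234), and the model identifier must match whatever model is actually loaded.

```python
# Pass 1: extract only the taught topics; chit-chat is filtered here,
# before any question generation happens.
import requests

TRANSCRIPT = "...lecture transcript text here..."  # placeholder

topic_prompt = (
    "Below is a lecture transcript. List ONLY the core topics that were taught. "
    "Ignore greetings, jokes, and other chit-chat. Return one topic per line, "
    "nothing else.\n\n" + TRANSCRIPT
)

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",   # LM Studio's default server URL
    json={
        "model": "llama-3.2-3b-instruct",          # must match the loaded model
        "messages": [{"role": "user", "content": topic_prompt}],
        "temperature": 0.2,                        # low temp keeps a 3B model on task
    },
    timeout=120,
)
topics = [line.strip("-*• ").strip()
          for line in resp.json()["choices"][0]["message"]["content"].splitlines()
          if line.strip()]
print(topics)  # pass 2: generate questions per topic, one call each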
can Multi-Agent AI replace fine tuning?
1
[removed]
2025-01-31T08:17:58
https://www.reddit.com/r/LocalLLaMA/comments/1iea6v3/can_multiagent_ai_replace_fine_tuning/
laymancs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iea6v3
false
null
t3_1iea6v3
/r/LocalLLaMA/comments/1iea6v3/can_multiagent_ai_replace_fine_tuning/
false
false
self
1
null
[R] A Practical Guide to Retrieval-Augmented Generation Implementation with DeepSeek R1 & Ollama - Step-by-Step Implementation
1
[removed]
2025-01-31T08:20:54
https://www.reddit.com/r/LocalLLaMA/comments/1iea87q/r_a_practical_guide_to_retrievalaugmented/
Amanpandey046
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iea87q
false
null
t3_1iea87q
/r/LocalLLaMA/comments/1iea87q/r_a_practical_guide_to_retrievalaugmented/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KJ-QRsOUMCCWUnL4lmWyclwZCXiq_xUmTH1XeFJSmpM', 'resolutions': [{'height': 91, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=108&crop=smart&auto=webp&s=d707272a723e0517a83b122c0f29e7bfa42a0172', 'width': 108}, {'height': 183, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=216&crop=smart&auto=webp&s=bd4d2bb5d7d5a3171b131c116de902529543553e', 'width': 216}, {'height': 272, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=320&crop=smart&auto=webp&s=c0adfda6e8bc80ba3ce5b8bd5d780c5436d0fe5d', 'width': 320}, {'height': 545, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=640&crop=smart&auto=webp&s=7e9651eb8831efb93157313893b986bea6f1c6d8', 'width': 640}, {'height': 817, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=960&crop=smart&auto=webp&s=4b7041a57914ffcd7f5a27168ccd0c83e756a59c', 'width': 960}], 'source': {'height': 872, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?auto=webp&s=2c50e39cd4b74892d9c90a0af52bfbedeaece266', 'width': 1024}, 'variants': {}}]}
ChatGpt vs DeepSeek: A Comprehensive Review of DeepSeek—OpenAI’s New Challenger!
1
[removed]
2025-01-31T08:40:34
[deleted]
1970-01-01T00:00:00
0
{}
1ieah6r
false
null
t3_1ieah6r
/r/LocalLLaMA/comments/1ieah6r/chatgpt_vs_deepseek_a_comprehensive_review_of/
false
false
default
1
null
ChatGpt vs DeepSeek: A Comprehensive Review of DeepSeek—OpenAI’s New Challenger!
0
For the past few years, ChatGPT's creator OpenAI has dominated the AI space, but a new contender has arrived: DeepSeek. With its advanced features and powerful technology, DeepSeek is making waves in the AI world. But is it a real competitor to OpenAI? Join us in the comments as we explore DeepSeek's features, strengths, and potential impact on the AI industry. What are your views on DeepSeek?
2025-01-31T08:40:59
https://www.reddit.com/r/LocalLLaMA/comments/1ieahe5/chatgpt_vs_deepseek_a_comprehensive_review_of/
Bernard_L
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieahe5
false
null
t3_1ieahe5
/r/LocalLLaMA/comments/1ieahe5/chatgpt_vs_deepseek_a_comprehensive_review_of/
false
false
self
0
null
Hey, some of you asked for a multilingual fine-tune of the R1 distills, so here they are! Trained on over 35 languages, this should quite reliably output CoT in your language. As always, the code, weights, and data are all open source.
296
2025-01-31T08:44:00
https://huggingface.co/collections/lightblue/r1-multilingual-679c890166ac0a84e83e38fa
Peter_Lightblue
huggingface.co
1970-01-01T00:00:00
0
{}
1ieaiq4
false
null
t3_1ieaiq4
/r/LocalLLaMA/comments/1ieaiq4/hey_some_of_you_asked_for_a_multilingual_finetune/
false
false
https://b.thumbs.redditm…ZbpYaRUf7x3w.jpg
296
{'enabled': False, 'images': [{'id': 'U2zayBiWHT8gkEkhLYnEWMP7ii0iV37X38uJ7eWVy_4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eMItb2dkatZwZPt-N-o-ODncWuvwGgn8w91JWJRsEcg.jpg?width=108&crop=smart&auto=webp&s=03e14f2409ecd378c9ff1004a04052b976070d03', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eMItb2dkatZwZPt-N-o-ODncWuvwGgn8w91JWJRsEcg.jpg?width=216&crop=smart&auto=webp&s=2552e318958589b1bde9710e24238c251155194a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eMItb2dkatZwZPt-N-o-ODncWuvwGgn8w91JWJRsEcg.jpg?width=320&crop=smart&auto=webp&s=9c6bd14d3d5c7923d0ae2fabe9a7eeadeaf90f75', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eMItb2dkatZwZPt-N-o-ODncWuvwGgn8w91JWJRsEcg.jpg?width=640&crop=smart&auto=webp&s=e8507bb5e6a8a03ef08478289b8275d62d0245cc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eMItb2dkatZwZPt-N-o-ODncWuvwGgn8w91JWJRsEcg.jpg?width=960&crop=smart&auto=webp&s=2bcefb59c4f9f2899796e26821c2976ff86911b2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eMItb2dkatZwZPt-N-o-ODncWuvwGgn8w91JWJRsEcg.jpg?width=1080&crop=smart&auto=webp&s=b6afef5facf448b98395c6040fb51b44488ceba9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eMItb2dkatZwZPt-N-o-ODncWuvwGgn8w91JWJRsEcg.jpg?auto=webp&s=42281908209874206a72c8aa3c770f6eda4b992f', 'width': 1200}, 'variants': {}}]}
Are dual RTX Titans with NVLink better than dual RTX 3090s w/o NVLink?
1
Are dual RTX Titans with NVLink better than dual RTX 3090s without NVLink for running 72B operator models like UI-TARS? I can buy two Titans with a bridge for 1200, while just two 3090s would be at least 1300. What would be the better deal?
2025-01-31T08:44:44
https://www.reddit.com/r/LocalLLaMA/comments/1ieaj29/are_dual_rtx_titans_with_nvlink_better_than_dual/
Gwn_Luka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieaj29
false
null
t3_1ieaj29
/r/LocalLLaMA/comments/1ieaj29/are_dual_rtx_titans_with_nvlink_better_than_dual/
false
false
self
1
null
Quadro RTX8000 vs RTX4090 vs RTX5090, which is better for Generative Models?
1
I am based in China, and the prices of the RTX 5090D are insane (about USD 8,300, though it is available). The RTX 4090D is about USD 2,200. While searching, I found the Quadro RTX 8000 (probably second-hand) going for roughly USD 1,800–2,000. The RTX 5090D is out of the question; I am not paying the price of an EV for a consumer graphics card. That leaves the 4090 and the Quadro RTX 8000. Which one do you think I should consider?
2025-01-31T08:45:52
https://www.reddit.com/r/LocalLLaMA/comments/1ieajjs/quadro_rtx8000_vs_rtx4090_vs_rtx5090_which_is/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieajjs
false
null
t3_1ieajjs
/r/LocalLLaMA/comments/1ieajjs/quadro_rtx8000_vs_rtx4090_vs_rtx5090_which_is/
false
false
self
1
null
Why do people use non-CPU setups for big local models?
1
[removed]
2025-01-31T08:59:25
https://www.reddit.com/r/LocalLLaMA/comments/1ieapk9/why_do_people_use_noncpu_setups_for_big_local/
Electronic-Might-775
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieapk9
false
null
t3_1ieapk9
/r/LocalLLaMA/comments/1ieapk9/why_do_people_use_noncpu_setups_for_big_local/
false
false
self
1
null
Best simple benchmark leaderboard tools?
1
[removed]
2025-01-31T09:25:47
https://www.reddit.com/r/LocalLLaMA/comments/1ieb1fa/best_simple_benchmark_leaderboard_tools/
First_Revolution8293
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieb1fa
false
null
t3_1ieb1fa
/r/LocalLLaMA/comments/1ieb1fa/best_simple_benchmark_leaderboard_tools/
false
false
self
1
null
PrivAI update!
0
Just wanted to share the latest update for PrivAI LLM - Local AI Chat. In version 1.2, we've added multi-model support for Llama 2 and Qwen, giving you a richer AI experience for text generation, translation, and Q&A. You can now adjust the "temperature" of generated content with a slider for more conservative or creative outputs. We also have pre-built prompt templates like "Writing Assistant" to boost your efficiency. And there's photo-to-text input for quick multilingual text extraction from images. Check it out!
2025-01-31T09:32:24
https://apps.apple.com/us/app/privai-llm-local-ai-chat/id6741094184
Snoo_24581
apps.apple.com
1970-01-01T00:00:00
0
{}
1ieb4ie
false
null
t3_1ieb4ie
/r/LocalLLaMA/comments/1ieb4ie/privai_update/
false
false
https://b.thumbs.redditm…AqAxGsjJwwLA.jpg
0
{'enabled': False, 'images': [{'id': 'fSEECFx9Ph0d5xYrvQz9U-xF49C4ktj-OIxR2_JDpvw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=108&crop=smart&auto=webp&s=40586a04fbcf6a9fd28cee0d1fa5dc416ed1c98b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=216&crop=smart&auto=webp&s=6f74018b25986e7b4697b05fb56081d85ab56f15', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=320&crop=smart&auto=webp&s=cfaccc149f405fc0f94d1bbe4db8f3eeacde9ea1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=640&crop=smart&auto=webp&s=d6d21aa5fd354a2bc56e86c116c26b4056370029', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=960&crop=smart&auto=webp&s=6ffea03c8a0bc21df900da8b26dc937491b2e7b2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=1080&crop=smart&auto=webp&s=0f4b6e26dfc0f657483009af1f18c1b0e2b24c1f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?auto=webp&s=603304456c09edfcc78033d9375787ad2b87df16', 'width': 1200}, 'variants': {}}]}
Would it be possible to design a pluggable mixture-of-experts architecture?
5
TL;DR: the question is how to avoid first filling a model with insane amounts of information on how to grow strawberries, which celebrities like strawberries, the chemical elements in strawberries, and the entire history of strawberries, just so that it can count the r's in "strawberry".

This might be a stupidly naive idea, but who knows, maybe someone is already working on it.

Currently, most LLMs have weights bloated with an insane number of random facts that many users don't want at all. I know that "average users" often use LLMs as a Wikipedia replacement, but that doesn't seem right: you don't ask a calculator what year Keanu Reeves was born.

As I understand it, MoE models at least try to optimize this by splitting the weights into experts. While that is computationally more efficient, it still requires lots of storage and RAM, and it still includes weights for factual knowledge that might be totally irrelevant to the person who loads the model locally for specific use cases.

**Would it be possible to detach the experts and make them downloadable separately, as needed?** (A toy sketch of the idea follows this record.)

I imagine some kind of small core LLM taught basic logic and reasoning from an absolute minimum of distilled data, without much factual knowledge. Could reinforcement learning be used to train such a core intensively? It would need solid reward functions covering "all" logical and abstract reasoning known to humans, though.
2025-01-31T09:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1ieb591/would_it_be_possible_to_design_a_pluggable/
martinerous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieb591
false
null
t3_1ieb591
/r/LocalLLaMA/comments/1ieb591/would_it_be_possible_to_design_a_pluggable/
false
false
self
5
null
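No mainstream runtime works this way today, but as a toy illustration of the idea, here is a hedged PyTorch sketch of a router that loads expert feed-forward weights from disk only when selected. All names and checkpoint files are hypothetical, and real MoE routing happens per token and per layer rather than per input.

```python
# Toy sketch only: a core model that lazy-loads "expert" FFN weights on demand,
# so domain knowledge you never query never occupies RAM.
import torch
import torch.nn as nn

class PluggableExperts(nn.Module):
    def __init__(self, dim: int, expert_paths: dict[str, str]):
        super().__init__()
        self.dim = dim
        self.router = nn.Linear(dim, len(expert_paths))  # scores each expert
        self.paths = list(expert_paths.values())         # hypothetical checkpoints
        self.loaded: dict[int, nn.Module] = {}           # only selected experts live here

    def _expert(self, idx: int) -> nn.Module:
        if idx not in self.loaded:                       # "download/plug in" on demand
            ffn = nn.Sequential(nn.Linear(self.dim, 4 * self.dim), nn.GELU(),
                                nn.Linear(4 * self.dim, self.dim))
            ffn.load_state_dict(torch.load(self.paths[idx]))
            self.loaded[idx] = ffn
        return self.loaded[idx]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        idx = int(self.router(x.mean(dim=0)).argmax())   # crude: one expert per input
        return self._expert(idx)(x)
```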
DeepSeek LLM: A Game-Changer in AI & How India Can Build Its Own Powerful LLM
0
**Introduction**

In recent years, large language models (LLMs) have become the backbone of AI-driven applications, transforming industries from customer service to content creation. DeepSeek LLM, an open-source alternative to proprietary models, is making waves with its efficiency, multilingual capabilities, and strong performance across benchmarks. But as AI research accelerates globally, a pressing question emerges: can India develop its own world-class LLM?

With a growing AI ecosystem, vast linguistic diversity, and a tech-savvy population, India has the potential to build an indigenous LLM that rivals global leaders like OpenAI's GPT, Google's Gemini, and DeepSeek. In this post, we'll explore:

- What makes DeepSeek LLM stand out
- The current AI landscape in India
- Challenges in developing an Indian LLM
- A roadmap for India to create its own high-performing open-source LLM

**What is DeepSeek LLM?**

DeepSeek LLM is an advanced open-source large language model developed by DeepSeek AI. Unlike many proprietary models, DeepSeek is designed to be transparent, efficient, and accessible for AI researchers, developers, and businesses.

Key features of DeepSeek LLM:

✅ Open-source: no API restrictions, allowing full offline use
✅ Multilingual: handles English, Chinese, and other languages efficiently
✅ High performance: outperforms models of similar size on many benchmarks
✅ Efficient inference: optimized for real-world applications

**DeepSeek LLM Performance vs. Other Open-Source Models**

Let's compare DeepSeek-R1:7B against some leading models on the MMLU (Massive Multitask Language Understanding) benchmark:

```python
import matplotlib.pyplot as plt

models = ["DeepSeek-R1:7B", "Mistral-7B", "Llama-2-7B", "Gemma-7B"]
accuracy = [74.2, 72.5, 70.8, 71.3]

plt.figure(figsize=(8, 5))
plt.bar(models, accuracy)
plt.xlabel("LLM Model")
plt.ylabel("MMLU Accuracy (%)")
plt.title("Performance Comparison: MMLU Benchmark")
plt.show()
```

This graph illustrates DeepSeek-R1:7B outperforming other models of the same size, making it an excellent choice for developers looking for an open-source, high-performance LLM.

**Why India Needs Its Own Large Language Model**

India is home to over 1.4 billion people and 22 official languages (with hundreds of dialects). Despite being an AI powerhouse in software development, India still lacks a homegrown LLM tailored for its linguistic and cultural diversity.

The current AI landscape in India:

1. Talent pool: India produces over 1 million engineers annually, with a growing number specializing in AI/ML.
2. Government initiatives: programs like IndiaAI and Startup India aim to boost AI research and funding.
3. Tech giants' interest: companies like Google, Microsoft, and OpenAI are investing in India's AI ecosystem.
4. Lack of homegrown models: most AI applications in India still rely on foreign-developed LLMs.

India's linguistic challenge: a one-size-fits-all LLM won't work for India. Indian users speak a mix of Hindi, English, Tamil, Telugu, Marathi, and many more languages; code-switching (mixing languages) is common in daily conversations; and most global LLMs fail to accurately understand Indian languages. To build a truly Indian LLM, we need a model trained on diverse Indian datasets and optimized for regional accents, slang, and multilingual queries.

**Challenges in Developing an Indian LLM**

Building an LLM in India comes with technical, financial, and infrastructure challenges:

1. Data availability: lack of clean, high-quality datasets for Indian languages; existing datasets are skewed towards English content.
2. Computational power: training LLMs requires massive GPU clusters (India currently relies on imported NVIDIA GPUs); lack of government-backed AI supercomputers for model training.
3. Funding and research: Indian startups struggle with AI research funding compared to the US and China; most AI talent works for global companies, leading to a brain drain.
4. Ethical and bias issues: AI models can inherit biases from training data; cultural nuances must be carefully handled to avoid misinformation.

**How India Can Build a World-Class LLM**

India can overcome these challenges by focusing on four key areas:

1. Government and industry collaboration: establish a National AI Compute Center for training large-scale models; offer grants and incentives for Indian startups working on LLMs; partner with tech giants (Google, AWS, NVIDIA) to build AI infrastructure in India.
2. Open-source AI community: encourage AI researchers to develop datasets for Indian languages; fund projects like DeepSeek AI that offer transparent LLM development; promote collaboration between universities and AI startups.
3. Indigenous AI hardware: invest in RISC-V-based AI chips (rather than relying on NVIDIA/AMD); develop high-performance GPU clusters optimized for Indian AI workloads.
4. Focus on localized training: collect and train models on vernacular data from newspapers, books, and digital content; ensure the LLM understands code-switching and Indian cultural references.

Here's a roadmap for India's LLM journey:

```python
import matplotlib.pyplot as plt

years = [2025, 2026, 2027, 2028, 2029]
milestones = [10, 30, 60, 85, 100]

plt.figure(figsize=(8, 5))
plt.plot(years, milestones, marker="o", linestyle="-")
plt.xlabel("Year")
plt.ylabel("Progress Towards Indian LLM (%)")
plt.title("India's Roadmap to Building an Indigenous LLM")
plt.grid(True)
plt.show()
```

This graph illustrates India's projected progress towards a fully functional LLM by 2029, provided investments, data collection, and computational power are prioritized.

**Conclusion: The Future of AI in India**

The rise of DeepSeek LLM proves that open-source models can compete with Big Tech LLMs. India has the talent, market size, and linguistic diversity to create its own world-class AI model, but it requires a focused strategy in data collection, compute power, and industry-government collaboration.

If India builds a multilingual, open-source LLM, it could:

✅ Revolutionize AI accessibility for 1.4 billion people
✅ Reduce dependency on foreign AI models
✅ Foster AI research leadership in the global market

The journey to an Indian LLM starts now, but it needs collective action from researchers, developers, policymakers, and the private sector. What do you think? Can India develop its own powerful LLM? Share your thoughts below!
2025-01-31T09:44:20
https://www.reddit.com/r/LocalLLaMA/comments/1ieb9sv/deepseek_llm_a_gamechanger_in_ai_how_india_can/
akhilpanja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieb9sv
false
null
t3_1ieb9sv
/r/LocalLLaMA/comments/1ieb9sv/deepseek_llm_a_gamechanger_in_ai_how_india_can/
false
false
self
0
null
If you're on Windows and LLMs are not cutting it for some reason, I urge you to try the Unsloth variety. Holy smokes!
0
It really rubbed me the wrong way when you'd hop on and see people say to change this or that setting in chat. Yes, that makes sense, but let's cut the bullshit: most people want plug and play. The Unsloth models are legitimately that. No issues whatsoever. The only thing now is figuring out how to make it search the web successfully, but I think that's an Open WebUI issue... https://preview.redd.it/vbjyl30lvage1.png?width=827&format=png&auto=webp&s=aac18d61ae58b046a615d5b22db2531e5b3dae05 Pictured are the models I've downloaded and tested. I downloaded the smaller sizes first, so the more recent modified times show when the bigger models came down.
2025-01-31T09:45:59
https://www.reddit.com/r/LocalLLaMA/comments/1iebaku/if_youre_on_windows_and_llms_are_not_cutting_it/
IAmGODbutIAmWEAK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iebaku
false
null
t3_1iebaku
/r/LocalLLaMA/comments/1iebaku/if_youre_on_windows_and_llms_are_not_cutting_it/
false
false
https://b.thumbs.redditm…UBtleIewwF9Q.jpg
0
null
Hi there, I'd like to learn to use a model on my gaming rig, what's the best DeepSeek I can get for my rig that isn't painfully slow?
0
RTX 3060 12 GB, i7-8700 @ 3.2 GHz (12 logical CPUs), 32 GB RAM, 2 TB SSD. (I don't know the nitty-gritty technical details of each component.) What utilities do I need to compile and run it? (A minimal sketch follows this record.)
2025-01-31T09:59:55
https://www.reddit.com/r/LocalLLaMA/comments/1iebgwe/hi_there_id_like_to_learn_to_use_a_model_on_my/
BelleHades
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iebgwe
false
null
t3_1iebgwe
/r/LocalLLaMA/comments/1iebgwe/hi_there_id_like_to_learn_to_use_a_model_on_my/
false
false
self
0
null
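For a rig like the one above, no compiling is needed: Ollama (or LM Studio) handles the download, quantization format, and GPU offload. Below is a minimal sketch using the official Ollama Python client (`pip install ollama`), assuming the Ollama server is installed and running. The 14B R1 distill at 4-bit (~9 GB) is roughly the largest tag that fits a 12 GB RTX 3060; the full 671B R1 is far out of reach for this hardware.

```python
# Hedged sketch: pull and chat with a DeepSeek-R1 distill via the local
# Ollama server. The model tag is assumed to fit a 12 GB GPU.
import ollama

ollama.pull("deepseek-r1:14b")  # downloads the quantized distill once
reply = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": "Explain KV cache in two sentences."}],
)
print(reply["message"]["content"])
```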
Safe to say the Chinese haven't solved the Strawberry problem
0
2025-01-31T10:00:42
https://www.reddit.com/gallery/1iebhcd
IsThisWhatDayIsThis
reddit.com
1970-01-01T00:00:00
0
{}
1iebhcd
false
null
t3_1iebhcd
/r/LocalLLaMA/comments/1iebhcd/safe_to_say_the_chinese_havent_solved_the/
false
false
https://b.thumbs.redditm…2YzE_5-bs2GQ.jpg
0
null
[New paper] o3-mini vs DeepSeek-R1: Which One is Safer?
0
> **Abstract**: The irruption of DeepSeek-R1 constitutes a turning point for the AI industry in general and the LLMs in particular. Its capabilities have demonstrated outstanding performance in several tasks, including creative thinking, code generation, maths and automated program repair, at apparently lower execution cost. However, LLMs must adhere to an important qualitative property, i.e., their alignment with safety and human values. A clear competitor of DeepSeek-R1 is its American counterpart, OpenAI's o3-mini model, which is expected to set high standards in terms of performance, safety and cost. In this paper we conduct a systematic assessment of the safety level of both, DeepSeek-R1 (70b version) and OpenAI's o3-mini (beta version). To this end, we make use of our recently released automated safety testing tool, named ASTRAL. By leveraging this tool, we automatically and systematically generate and execute a total of 1260 unsafe test inputs on both models. After conducting a semi-automated assessment of the outcomes provided by both LLMs, the results indicate that DeepSeek-R1 is highly unsafe as compared to OpenAI's o3-mini. Based on our evaluation, DeepSeek-R1 answered unsafely to 11.98% of the executed prompts whereas o3-mini only to 1.19%.

### Structured abstract

**Background**: Large Language Models (LLMs) are increasingly being deployed in real-world applications, making their safety properties crucial for responsible deployment. Two prominent models, DeepSeek-R1 and OpenAI's o3-mini, represent different approaches to LLM development and safety.

**Objective**: To systematically compare the safety levels of DeepSeek-R1 (70b version) and OpenAI's o3-mini (beta version) by evaluating their responses to potentially unsafe prompts.

**Methods**: Using ASTRAL, an automated safety testing tool, the researchers generated and executed 1,260 unsafe test inputs balanced across 14 safety categories, 6 writing styles, and 5 persuasion techniques. The evaluation involved executing these prompts against both models and conducting a semi-automated assessment of responses using GPT-3.5 as an evaluator, followed by manual verification of unsafe/unknown classifications.

**Results**: DeepSeek-R1 demonstrated significantly higher unsafe response rates, providing unsafe responses to 11.98% of prompts compared to o3-mini's 1.19%. DeepSeek-R1 showed particular vulnerability in categories including financial crime, violence, terrorism, and hate speech. Technical terms and role-play writing styles were found to be most effective at eliciting unsafe responses from DeepSeek-R1.

**Conclusions**: O3-mini demonstrated superior safety features compared to DeepSeek-R1, partly due to its robust policy violation detection system that blocked 44.8% of unsafe prompts. The findings suggest that OpenAI's approach to safety alignment may be more effective, though further testing on the production version of o3-mini is needed to confirm these preliminary results.

**Limitations**: The study used a pre-release version of o3-mini with an active policy violation detection system that wasn't present in DeepSeek-R1, potentially affecting direct comparability. The subjective nature of safety assessments and cultural biases may have influenced manual verification results.
2025-01-31T10:04:36
https://www.arxiv.org/abs/2501.18438
Balance-
arxiv.org
1970-01-01T00:00:00
0
{}
1iebj8p
false
null
t3_1iebj8p
/r/LocalLLaMA/comments/1iebj8p/new_paper_o3mini_vs_deepseekr1_which_one_is_safer/
false
false
default
0
null
DeepSeek-R1 Now Live With NVIDIA NIM
27
2025-01-31T10:16:20
https://blogs.nvidia.com/blog/deepseek-r1-nim-microservice/
Lacy_Hall
blogs.nvidia.com
1970-01-01T00:00:00
0
{}
1iebozu
false
null
t3_1iebozu
/r/LocalLLaMA/comments/1iebozu/deepseekr1_now_live_with_nvidia_nim/
false
false
https://a.thumbs.redditm…kE4DBiFir7w8.jpg
27
{'enabled': False, 'images': [{'id': 'jINTXc7q3Lil4Bu4ucc8kuC8r56G46ZBCfc-X4ylOLA', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/KJ7uKiPBYxGxiVvh_MM2oHJDEG-PLE0KeC8jxiR32Mk.jpg?width=108&crop=smart&auto=webp&s=0d8d57c02bc260cd2f12556257b1576fb1e47b5c', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/KJ7uKiPBYxGxiVvh_MM2oHJDEG-PLE0KeC8jxiR32Mk.jpg?width=216&crop=smart&auto=webp&s=bd770fc92db75882dc3578a3ceb03db64ffbc408', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/KJ7uKiPBYxGxiVvh_MM2oHJDEG-PLE0KeC8jxiR32Mk.jpg?width=320&crop=smart&auto=webp&s=34f02ee183ba3d990f0ed27f4d274cf1157df46f', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/KJ7uKiPBYxGxiVvh_MM2oHJDEG-PLE0KeC8jxiR32Mk.jpg?width=640&crop=smart&auto=webp&s=5adf72a14ff4723a91cfd9823ed1d2f1fcc2ae7f', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/KJ7uKiPBYxGxiVvh_MM2oHJDEG-PLE0KeC8jxiR32Mk.jpg?width=960&crop=smart&auto=webp&s=81ac0bad288bdc3d1b156e55bb9f06b102fd493e', 'width': 960}, {'height': 573, 'url': 'https://external-preview.redd.it/KJ7uKiPBYxGxiVvh_MM2oHJDEG-PLE0KeC8jxiR32Mk.jpg?width=1080&crop=smart&auto=webp&s=bf2f02dee5b8e8bd1a1a8605c7a220225af2cae6', 'width': 1080}], 'source': {'height': 680, 'url': 'https://external-preview.redd.it/KJ7uKiPBYxGxiVvh_MM2oHJDEG-PLE0KeC8jxiR32Mk.jpg?auto=webp&s=27616d1c55bb79ac0b6180f8b60770212c27591b', 'width': 1280}, 'variants': {}}]}
Does anyone have an intro guide into LLMFarm?
1
[removed]
2025-01-31T10:20:41
https://www.reddit.com/r/LocalLLaMA/comments/1iebr30/does_anyone_have_an_intro_guide_into_llmfarm/
No_Suspect7471
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iebr30
false
null
t3_1iebr30
/r/LocalLLaMA/comments/1iebr30/does_anyone_have_an_intro_guide_into_llmfarm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'hpikv_sPEpc1qu82Tb78rZUQ-sOVVdGNTP50jtkYMAE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/LxjfGaZbohnXoi3zUqI5OMw6icpzXCe1RNWGSiMaUPA.jpg?width=108&crop=smart&auto=webp&s=14453e689ee16bd5b07daf24432d06ebbbcbe3d9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/LxjfGaZbohnXoi3zUqI5OMw6icpzXCe1RNWGSiMaUPA.jpg?width=216&crop=smart&auto=webp&s=5e5195b637acfbc3ea1238eaea9cafb26f9e5bda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/LxjfGaZbohnXoi3zUqI5OMw6icpzXCe1RNWGSiMaUPA.jpg?width=320&crop=smart&auto=webp&s=0af0a006681a77f1ddc044bb00de047b6bde77ad', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/LxjfGaZbohnXoi3zUqI5OMw6icpzXCe1RNWGSiMaUPA.jpg?width=640&crop=smart&auto=webp&s=af0241797e7052060e5b042762f717b5211b0f56', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/LxjfGaZbohnXoi3zUqI5OMw6icpzXCe1RNWGSiMaUPA.jpg?width=960&crop=smart&auto=webp&s=e4a3225e950e8a76a4757674ec797278fbd09a9c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/LxjfGaZbohnXoi3zUqI5OMw6icpzXCe1RNWGSiMaUPA.jpg?width=1080&crop=smart&auto=webp&s=e90899a13c2c6d02ccd20e8c7b8b03d7ff65bc89', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/LxjfGaZbohnXoi3zUqI5OMw6icpzXCe1RNWGSiMaUPA.jpg?auto=webp&s=6a20ecbad367aca2ecc5533c9dc4ba22dfb5a4fb', 'width': 1200}, 'variants': {}}]}
Coauthors of DeepSeek researchers. Can u spot Meta?
133
2025-01-31T10:22:53
https://i.redd.it/9gdffiyl2bge1.png
osint_for_good
i.redd.it
1970-01-01T00:00:00
0
{}
1iebs6c
false
null
t3_1iebs6c
/r/LocalLLaMA/comments/1iebs6c/coauthors_of_deepseek_researchers_can_u_spot_meta/
false
false
https://a.thumbs.redditm…2uGOgkZfGh88.jpg
133
{'enabled': True, 'images': [{'id': 'V539WHpSf7x2PMtjZenCBMSdEVtriI9UzIjhWmXBW-o', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/9gdffiyl2bge1.png?width=108&crop=smart&auto=webp&s=310d11e431cbc5ab81e47f1874ceb64f3e7275b6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/9gdffiyl2bge1.png?width=216&crop=smart&auto=webp&s=70eac88850152106ffff72f5424e8c1f470556f9', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/9gdffiyl2bge1.png?width=320&crop=smart&auto=webp&s=9ded3ce70da12aeccadfdb15788309b8d5a176b7', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/9gdffiyl2bge1.png?width=640&crop=smart&auto=webp&s=e2aa09755c24832c4f07acbe500235ee5b176845', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/9gdffiyl2bge1.png?width=960&crop=smart&auto=webp&s=e81499da89521dac3fdd3271ef84c082402da068', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/9gdffiyl2bge1.png?width=1080&crop=smart&auto=webp&s=48543ca24abc2f3377f72dd866c9c51ebbc3b134', 'width': 1080}], 'source': {'height': 5000, 'url': 'https://preview.redd.it/9gdffiyl2bge1.png?auto=webp&s=1387ae3c9979802ebed44ac1311dc12f2c6081f3', 'width': 5000}, 'variants': {}}]}
Upgrade 16GB to 32GB RAM ?
1
So I have a 4070 Ti Super with 16 GB of VRAM. I'm currently running a Mistral Small 22B model at Q5_K_M plus a TTS model, with around 4 GB spilling over to system RAM. I get around 3 t/s (which I'm fine with) for a chat application. So my question is: will upgrading to 32 GB of system RAM help with running a higher quant or larger models, or just with longer context? If so, does the RAM speed matter? I'm on 3200 MHz DDR4 currently. (Rough sizing arithmetic follows this record.)
2025-01-31T10:27:29
https://www.reddit.com/r/LocalLLaMA/comments/1iebucc/upgrade_16gb_to_32gb_ram/
thearsenalman97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iebucc
false
null
t3_1iebucc
/r/LocalLLaMA/comments/1iebucc/upgrade_16gb_to_32gb_ram/
false
false
self
1
null
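Some back-of-envelope sizing for the question above, sketched with approximate bits-per-weight figures for common GGUF quants; treat the numbers as rough estimates, not measurements.

```python
# Rough weight-size estimate: 1B params at b bits/weight ≈ b/8 GB.
def model_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8

for quant, bpw in [("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q6_K", 6.6), ("Q8_0", 8.5)]:
    print(f"22B @ {quant}: ~{model_gb(22, bpw):.1f} GB of weights")
# 22B @ Q5_K_M comes to ~15.7 GB, which is why ~4 GB spills past a 16 GB card
# once the KV cache and the TTS model are added. More system RAM buys room for
# a bigger spill (higher quant, larger model, or longer context), but DDR4-3200
# bandwidth keeps offloaded layers slow, so expect capacity, not speed.
```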
What is the cheapest, most efficent way to "rent" cloud GPUs or TPUs or whatever in order to run our own LLMs?
1
[removed]
2025-01-31T10:28:22
https://www.reddit.com/r/LocalLLaMA/comments/1ieburi/what_is_the_cheapest_most_efficent_way_to_rent/
christian7670
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieburi
false
null
t3_1ieburi
/r/LocalLLaMA/comments/1ieburi/what_is_the_cheapest_most_efficent_way_to_rent/
false
false
self
1
null
Mini-R1: Reproduce the DeepSeek R1 "aha moment" — an RL tutorial
1
2025-01-31T10:37:11
https://www.philschmid.de/mini-deepseek-r1
Ok_Two6167
philschmid.de
1970-01-01T00:00:00
0
{}
1iebz5k
false
null
t3_1iebz5k
/r/LocalLLaMA/comments/1iebz5k/minir1_reproduce_deepseek_r1_aha_moment_a_rl/
false
false
https://b.thumbs.redditm…xFcxnPlkJN7Y.jpg
1
{'enabled': False, 'images': [{'id': 'MNIRldOHSFIMXIsAMd2Of-1c5UKJpkObkPEGybXGAAk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XcIyncHFYFP2bSic0Vjqv41iHs__PpoC9Coo3Pm49Cw.jpg?width=108&crop=smart&auto=webp&s=7e6bf8fe00e7d43eb4bde8030dc2d6aa92286a82', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XcIyncHFYFP2bSic0Vjqv41iHs__PpoC9Coo3Pm49Cw.jpg?width=216&crop=smart&auto=webp&s=12f76e0559e11c0a6c6c1bdf74bf3901ae0d9995', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/XcIyncHFYFP2bSic0Vjqv41iHs__PpoC9Coo3Pm49Cw.jpg?width=320&crop=smart&auto=webp&s=921ec4cc34e881a6134d66c7171e30f59b482111', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/XcIyncHFYFP2bSic0Vjqv41iHs__PpoC9Coo3Pm49Cw.jpg?width=640&crop=smart&auto=webp&s=680624f9553cf1aed5abbb5694daf847918e934c', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/XcIyncHFYFP2bSic0Vjqv41iHs__PpoC9Coo3Pm49Cw.jpg?width=960&crop=smart&auto=webp&s=dd80e19c5aa28694c26b36f33c4aa15242c88983', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/XcIyncHFYFP2bSic0Vjqv41iHs__PpoC9Coo3Pm49Cw.jpg?auto=webp&s=bf693ada8beb669307cc462bcc07a849e8d8cbff', 'width': 961}, 'variants': {}}]}
Big article by SemiAnalysis: DeepSeek Debates: Chinese Leadership On Cost, True Training Cost, Closed Model Margin Impacts | H100 Pricing Soaring, Subsidized Inference Pricing, Export Controls, MLA
0
[https://semianalysis.com/2025/01/31/deepseek-debates/](https://semianalysis.com/2025/01/31/deepseek-debates/) https://preview.redd.it/vy8qx4yd5bge1.jpg?width=1536&format=pjpg&auto=webp&s=d68d94d250ff7f39a672bfcd9b9f95b6b912b7c1 https://preview.redd.it/xen8s1cg5bge1.jpg?width=923&format=pjpg&auto=webp&s=e7f3fac31860fcf391ea2f03105a45df345dfa03
2025-01-31T10:37:57
https://www.reddit.com/r/LocalLLaMA/comments/1iebzhu/big_article_by_semianalysis_deepseek_debates/
Nunki08
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iebzhu
false
null
t3_1iebzhu
/r/LocalLLaMA/comments/1iebzhu/big_article_by_semianalysis_deepseek_debates/
false
false
https://b.thumbs.redditm…lO3jqEyXyVPA.jpg
0
null
Architecture and middleware for RAG assistants
2
I started building a RAG assistant / chatbot application about a year ago. Back then my decision to build everything from scratch seemed to make perfect sense, because to me the available frameworks came across as bloated and unstable (got burned by LangChain, like everybody else). The system works fine, but it has to be maintained, and a lot of its features are too generic to be worth my time. Current stack:

* custom data schema on postgres/pgvector
* user management with RBAC
* chat memory
* query rewriting
* reranking
* observability / telemetry (played with OTel but built my own solution eventually)
* HTTP API
* React UI
* container/kubernetes integration
* model-agnostic
* KAG and hybrid search being added

The sweet spots I find myself tweaking are the DB schema, the loading and retrieval part (see the sketch after this record), the UI, and maybe the chat memory. I need full control over these. But stuff like telemetry I wasted a lot of time on, and I'm not really happy with my result; it's also not where the business value is. I've neglected scalability and security (e.g. prompt injection) to a degree because it's only used by a few trusted people, but that might change and would require quite an overhaul. While I'm usually wary of opinionated frameworks, I find standardization can be a good thing that helps you focus on your business value. What are your thoughts on architectural decisions, frameworks, middleware? How would you start such a project today?
2025-01-31T10:54:03
https://www.reddit.com/r/LocalLLaMA/comments/1iec75g/architecture_and_middleware_for_rag_assistants/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iec75g
false
null
t3_1iec75g
/r/LocalLLaMA/comments/1iec75g/architecture_and_middleware_for_rag_assistants/
false
false
self
2
null
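For reference, the retrieval hot path on a postgres/pgvector stack like the one above can stay very small, which is one argument for keeping that part hand-rolled. A hedged sketch follows; the table name, column names, connection string, and embedding model are all illustrative, not taken from the author's system.

```python
# Minimal pgvector retrieval sketch. Assumes a table
#   chunks(content text, embedding vector(384))
# and `pip install psycopg sentence-transformers`.
import psycopg
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

def retrieve(query: str, k: int = 5) -> list[tuple[str, float]]:
    emb = "[" + ",".join(str(x) for x in encoder.encode(query)) + "]"
    with psycopg.connect("dbname=rag") as conn:
        return conn.execute(
            # <=> is pgvector's cosine-distance operator
            "SELECT content, embedding <=> %s::vector AS dist "
            "FROM chunks ORDER BY dist LIMIT %s",
            (emb, k),
        ).fetchall()
```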
Best simple benchmark leaderboard tools?
1
[removed]
2025-01-31T11:04:13
https://www.reddit.com/r/LocalLLaMA/comments/1iecckh/best_simple_benchmark_leaderboard_tools/
First_Revolution8293
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iecckh
false
null
t3_1iecckh
/r/LocalLLaMA/comments/1iecckh/best_simple_benchmark_leaderboard_tools/
false
false
self
1
null
A list of all the startups YC is looking for - This year is very AI Agent / software automation focused.
1
[deleted]
2025-01-31T11:13:03
[deleted]
1970-01-01T00:00:00
0
{}
1iecgzc
false
null
t3_1iecgzc
/r/LocalLLaMA/comments/1iecgzc/a_list_of_all_the_startups_yc_is_looking_for_this/
false
false
default
1
null
YC wants open-source startups, but it got me thinking with the rise of AI coding tools: they will make it much easier to self-host projects and, on the flip side, to rebuild closed-source ones. Do you think AI coding tools are good or bad for open source?
2
2025-01-31T11:16:52
https://i.redd.it/uaesb9jwbbge1.png
omnisvosscio
i.redd.it
1970-01-01T00:00:00
0
{}
1ieciyx
false
null
t3_1ieciyx
/r/LocalLLaMA/comments/1ieciyx/yc_wants_opensource_startups_but_it_got_me/
false
false
https://b.thumbs.redditm…wvosh-OcUwJo.jpg
2
{'enabled': True, 'images': [{'id': '3LMoRd-S78U-PFSXCrqt0BMWE9FbfpdIS2aFJtJ3gZE', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/uaesb9jwbbge1.png?width=108&crop=smart&auto=webp&s=007201ccbf42ab46032e78f7b8781c75c3f07f99', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/uaesb9jwbbge1.png?width=216&crop=smart&auto=webp&s=1d61db7ca68b5fa9c1503ae25c5f04f59655fc8d', 'width': 216}, {'height': 270, 'url': 'https://preview.redd.it/uaesb9jwbbge1.png?width=320&crop=smart&auto=webp&s=6e5a93324444859e12bdcff7caed0da4e7809200', 'width': 320}, {'height': 540, 'url': 'https://preview.redd.it/uaesb9jwbbge1.png?width=640&crop=smart&auto=webp&s=ba6bdbcdb0c3e8f7f53973308ba3961f84589669', 'width': 640}], 'source': {'height': 715, 'url': 'https://preview.redd.it/uaesb9jwbbge1.png?auto=webp&s=5ce03fe1ce7ec69ba6c723478b917251a7d48756', 'width': 847}, 'variants': {}}]}
Deepseek Claims to Be ChatGPT
1
2025-01-31T11:17:07
https://www.reddit.com/gallery/1iecj3t
Solid_Possible_1555
reddit.com
1970-01-01T00:00:00
0
{}
1iecj3t
false
null
t3_1iecj3t
/r/LocalLLaMA/comments/1iecj3t/deepseek_claims_to_be_chatgpt/
false
false
https://b.thumbs.redditm…d3aAaj0zrtKw.jpg
1
null
Deepseek Claims to be Trained by OpenAI
1
[removed]
2025-01-31T11:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1ieck5d/deepseek_claims_to_be_trained_by_openai/
Solid_Possible_1555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieck5d
false
null
t3_1ieck5d
/r/LocalLLaMA/comments/1ieck5d/deepseek_claims_to_be_trained_by_openai/
false
false
self
1
null
Best way to use the most modern local TTS to read clipboard when copying any text?
1
[removed]
2025-01-31T11:27:46
https://www.reddit.com/r/LocalLLaMA/comments/1iecome/best_way_to_use_the_most_modern_local_tts_to_read/
PSInvader
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iecome
false
null
t3_1iecome
/r/LocalLLaMA/comments/1iecome/best_way_to_use_the_most_modern_local_tts_to_read/
false
false
self
1
null
Models to help practice specific programming language
2
So I want to learn Rust and Polars. Is there a small-parameter local model that concentrates only and exclusively on Rust and Polars? Otherwise, is there a way to strip an existing model of other parameters and keep just the programming parts, running locally on my PC? I'm on a 2070 Super with around 8 GB, and I don't want to run bloated models full of parameters I don't need.
2025-01-31T11:33:53
https://www.reddit.com/r/LocalLLaMA/comments/1iecrva/models_to_help_practice_specific_programming/
liquidnitrogen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iecrva
false
null
t3_1iecrva
/r/LocalLLaMA/comments/1iecrva/models_to_help_practice_specific_programming/
false
false
self
2
null
Fix grammer and rmv unimportant info and give a title
1
I've been building my own AI models for around two years, though never LLMs. I've wanted to work with them but haven't been able to due to hardware limitations. Currently, I use a 2019 Intel Mac with only 1.5 GB of VRAM, which isn't sufficient. I want to run DeepSeek locally since their website often has downtime. This feels like the right excuse to build my own server for training and deploying AI models. I'm aiming to handle a decent parameter size, likely the 32-billion-parameter model. However, I'm unsure what hardware to invest in, especially with the 50 series GPUs coming out soon. My budget is $2,000 to $3,000. Should I buy directly from NVIDIA or look for used 40 series cards from people upgrading? What would be the best approach to get the right hardware for this setup?
2025-01-31T11:34:17
https://www.reddit.com/r/LocalLLaMA/comments/1iecs3c/fix_grammer_and_rmv_unimportant_info_and_give_a/
TheFinancialDoctor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iecs3c
false
null
t3_1iecs3c
/r/LocalLLaMA/comments/1iecs3c/fix_grammer_and_rmv_unimportant_info_and_give_a/
false
false
self
1
null
Building an AI Server for Local DeepSeek Deployment
2
I've been building my own AI models for around two years, though never LLMs. I've wanted to work with them but haven't been able to due to hardware limitations. Currently, I use a 2019 Intel Mac with only 1.5 GB of VRAM, which isn't sufficient. I want to run DeepSeek locally since their website often has downtime. This feels like the right excuse to build my own server for training and deploying AI models. I'm aiming to handle a decent parameter size, likely the 32-billion-parameter model. However, I'm unsure what hardware to invest in, especially with the 50 series GPUs coming out soon. My budget is $2,000 to $3,000. Should I buy directly from NVIDIA or look for used 40 series cards from people upgrading? What would be the best approach to get the right hardware for this setup?
2025-01-31T11:35:19
https://www.reddit.com/r/LocalLLaMA/comments/1iecsnk/building_an_ai_server_for_local_deepseek/
TheFinancialDoctor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iecsnk
false
null
t3_1iecsnk
/r/LocalLLaMA/comments/1iecsnk/building_an_ai_server_for_local_deepseek/
false
false
self
2
null
Deepseek trying to block info about Tiananmen Square is funny
0
First it tries to show the information, then after several seconds it's gone.
2025-01-31T11:36:23
https://v.redd.it/482txkvvfbge1
BoobiesIsLife
/r/LocalLLaMA/comments/1iect7h/deepseek_trying_to_block_about_tiannanmen_square/
1970-01-01T00:00:00
0
{}
1iect7h
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/482txkvvfbge1/DASHPlaylist.mpd?a=1741044993%2CMjk0MWYwMmE2ZjJhMjVlN2ZmNjJlZDhjZmNjYmY5MmM4MWI3MTQ0NGIwMzFjNGU4MmRjNDdlOWZjM2JlYzNiOA%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/482txkvvfbge1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/482txkvvfbge1/HLSPlaylist.m3u8?a=1741044993%2CZWU5MTA4Y2U0MWU3ODE4YTE1Yjg0ZjA2ZThlM2ZiNmU2YTAyMmQ5ZjQzM2VmOGFiYzU1YTQ0ZmVkODQwZjBmYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/482txkvvfbge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1iect7h
/r/LocalLLaMA/comments/1iect7h/deepseek_trying_to_block_about_tiannanmen_square/
false
false
https://external-preview…bc17a01d288cbdfc
0
{'enabled': False, 'images': [{'id': 'NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4.png?width=108&crop=smart&format=pjpg&auto=webp&s=25b562938e08b15ca45f63fa29678521bb9f8c66', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4.png?width=216&crop=smart&format=pjpg&auto=webp&s=a288009b0ac5a7992649975166064c049423f1dc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4.png?width=320&crop=smart&format=pjpg&auto=webp&s=026fc48a5dc452f07ac4399998db704c1df1081d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4.png?width=640&crop=smart&format=pjpg&auto=webp&s=73e19b1566cbb59841b97eeb5726195ea0e4ab96', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4.png?width=960&crop=smart&format=pjpg&auto=webp&s=4c54bf8b2326d1589a51d0aa2341307b207c88da', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=18a8eb410bc8865b5bcc695278e2a69b64e1f0be', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/NHJjZmNqaHZmYmdlMb7_Ihodre9qJkjPrM33wbvWZJ1VK8IrsYScQ_faN7R4.png?format=pjpg&auto=webp&s=8c89c8530443c769380a6a8b46dee9908a10e305', 'width': 3840}, 'variants': {}}]}
Is DeepSeek really offline?
0
I'm thinking about setting it up; the problem is that this story scares me: [https://www.wired.com/story/exposed-deepseek-database-revealed-chat-prompts-and-internal-data/](https://www.wired.com/story/exposed-deepseek-database-revealed-chat-prompts-and-internal-data/). If there are any backdoor vulnerabilities that connect to somewhere else, I don't want to risk it. I don't want some hacker in China or elsewhere implanting Trojans. (A quick check is sketched after this record.)
2025-01-31T11:59:43
https://www.reddit.com/r/LocalLLaMA/comments/1ied655/is_deep_seek_really_offline/
Doom_Walker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ied655
false
null
t3_1ied655
/r/LocalLLaMA/comments/1ied655/is_deep_seek_really_offline/
false
false
self
0
{'enabled': False, 'images': [{'id': 'U2EJUzGIJ-IIDb5EgsiIrF5uGSX9oATsXaG6q0hY9Eo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/T9cZjVgSTDU-leXE06GuoRT98GYKHsGUSfFnZSdzRgs.jpg?width=108&crop=smart&auto=webp&s=2a8a58e8a9ba0a39622bc5bf76c39963988c3145', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/T9cZjVgSTDU-leXE06GuoRT98GYKHsGUSfFnZSdzRgs.jpg?width=216&crop=smart&auto=webp&s=8a0ede395b32c56568fe6b28ba17d024f9de648b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/T9cZjVgSTDU-leXE06GuoRT98GYKHsGUSfFnZSdzRgs.jpg?width=320&crop=smart&auto=webp&s=a1b86c59ad8e6a8856f110823d1ca039617c96a3', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/T9cZjVgSTDU-leXE06GuoRT98GYKHsGUSfFnZSdzRgs.jpg?width=640&crop=smart&auto=webp&s=f64eb73c15e2d396a4433a8c4b9600f3df934ad6', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/T9cZjVgSTDU-leXE06GuoRT98GYKHsGUSfFnZSdzRgs.jpg?width=960&crop=smart&auto=webp&s=f1eacc875569e99ab84323660f64d03abb76c55b', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/T9cZjVgSTDU-leXE06GuoRT98GYKHsGUSfFnZSdzRgs.jpg?width=1080&crop=smart&auto=webp&s=fc6bb12c09db6f049a45fbcf9dc1a29784e987d4', 'width': 1080}], 'source': {'height': 670, 'url': 'https://external-preview.redd.it/T9cZjVgSTDU-leXE06GuoRT98GYKHsGUSfFnZSdzRgs.jpg?auto=webp&s=7d327e24672d3fdfb1280b96a1afb531c2d3170e', 'width': 1280}, 'variants': {}}]}
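The weights themselves are inert data; the realistic worry is the serving software. One sanity check, sketched below with psutil (`net_connections` needs psutil ≥ 6.0): snapshot the inference server's sockets while a prompt is running. The process name "ollama" is an assumption; a firewall rule on the process is still the stronger guarantee.

```python
# Hedged sketch: list any non-loopback connections held by a local "ollama"
# process. An empty result during inference is consistent with fully-offline
# operation.
import psutil

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if "ollama" in name.lower():                      # assumed server process name
        for conn in proc.net_connections(kind="inet"):
            if conn.raddr and conn.raddr.ip not in ("127.0.0.1", "::1"):
                print("outbound:", proc.pid, conn.raddr)
```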
Fine-tune Deepseek r1 distilled
0
Hey everyone, has anyone tried fine-tuning a DeepSeek-R1 distill and could share their approach? (One common approach is sketched after this record.)
2025-01-31T12:09:49
https://www.reddit.com/r/LocalLLaMA/comments/1iedcj7/finetune_deepseek_r1_distilled/
ahsaor8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iedcj7
false
null
t3_1iedcj7
/r/LocalLLaMA/comments/1iedcj7/finetune_deepseek_r1_distilled/
false
false
self
0
null
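One common approach, sketched under stated assumptions with transformers + peft + trl: train a small LoRA adapter on a distill rather than doing a full fine-tune. The dataset contents and hyperparameters below are placeholders, and R1 distills expect their own chat template (including the `<think>` section), so real examples should be formatted with the tokenizer's template rather than raw strings.

```python
# Hedged LoRA sketch for an R1 distill; not a recipe.
# Needs: pip install transformers peft trl datasets
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model
from trl import SFTTrainer

base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Adapter on attention projections only; the base weights stay frozen.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

train_data = Dataset.from_list([{"text": "..."}])  # replace with real examples

trainer = SFTTrainer(
    model=model,
    train_dataset=train_data,
    args=TrainingArguments(output_dir="r1-distill-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1),
)
trainer.train()
```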
Linguito: Translate your apps with local LLM
14
Hey, I published a new CLI tool on top of [Lingui](https://lingui.dev/) for developers to translate their apps/websites using a local LLM. Linguito currently works with Ollama and LM Studio. The GitHub repository is available at: [https://github.com/Serchinastico/linguito](https://github.com/Serchinastico/linguito) I also wrote about the tool's conception here: [https://blog.serchinastico.com/linguito/](https://blog.serchinastico.com/linguito/)
2025-01-31T12:12:52
https://www.reddit.com/r/LocalLLaMA/comments/1iedehq/linguito_translate_your_apps_with_local_llm/
Serchinastico
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iedehq
false
null
t3_1iedehq
/r/LocalLLaMA/comments/1iedehq/linguito_translate_your_apps_with_local_llm/
false
false
self
14
{'enabled': False, 'images': [{'id': '42F6jLWcHsk_2lSv9ksExh3Vlh7aOoc2-C2r3Qb7LUM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2aFH1NF_kQ3gY2lA7NXiYelI0BW8RqcQTZqA7HpvkTA.jpg?width=108&crop=smart&auto=webp&s=01304a2dbec783c9fc3a52dce0f2cf7ee3a8dafc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/2aFH1NF_kQ3gY2lA7NXiYelI0BW8RqcQTZqA7HpvkTA.jpg?width=216&crop=smart&auto=webp&s=05a559b95744c1aa92d4a37d969ae98b124b57ba', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/2aFH1NF_kQ3gY2lA7NXiYelI0BW8RqcQTZqA7HpvkTA.jpg?width=320&crop=smart&auto=webp&s=22a3c03a69edc2b455e5eaf7416db9bdb5e62551', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/2aFH1NF_kQ3gY2lA7NXiYelI0BW8RqcQTZqA7HpvkTA.jpg?width=640&crop=smart&auto=webp&s=d73dab10616f9ff4a82314e634962f75dcfa180d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/2aFH1NF_kQ3gY2lA7NXiYelI0BW8RqcQTZqA7HpvkTA.jpg?width=960&crop=smart&auto=webp&s=53cbf92901c831f5a7306950f855092966ac4721', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/2aFH1NF_kQ3gY2lA7NXiYelI0BW8RqcQTZqA7HpvkTA.jpg?width=1080&crop=smart&auto=webp&s=d4455f9d0d8d5bfee98d8eaf5f2baa4ccc3057e3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/2aFH1NF_kQ3gY2lA7NXiYelI0BW8RqcQTZqA7HpvkTA.jpg?auto=webp&s=b720d8ab4e7d68793aa08a45241230a1d17a6327', 'width': 1200}, 'variants': {}}]}
Running FULL DeepSeek R1 671B Locally (Test and Install!) - Unsloth dynamic GGUF
3
2025-01-31T12:15:08
https://youtu.be/_PxT9pyN_eE
bi4key
youtu.be
1970-01-01T00:00:00
0
{}
1iedfug
false
{'oembed': {'author_name': 'Bijan Bowen', 'author_url': 'https://www.youtube.com/@Bijanbowen', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/_PxT9pyN_eE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Running FULL DeepSeek R1 671B Locally (Test and Install!)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/_PxT9pyN_eE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Running FULL DeepSeek R1 671B Locally (Test and Install!)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1iedfug
/r/LocalLLaMA/comments/1iedfug/running_full_deepseek_r1_671b_locally_test_and/
false
false
https://b.thumbs.redditm…ovya3K0Eyy9E.jpg
3
{'enabled': False, 'images': [{'id': 'NXbhD9e8_ypIzSDCiSIFm3RDmlMRMcfkMY_F0VjpNaQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/O_3uIHMWJ9lkDFdl6jEUTYkxzXr9bbpIuyJhUkUctcI.jpg?width=108&crop=smart&auto=webp&s=22197d5d571a72f65bc241562df73ac10d6a6124', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/O_3uIHMWJ9lkDFdl6jEUTYkxzXr9bbpIuyJhUkUctcI.jpg?width=216&crop=smart&auto=webp&s=d2d64cbf6edde1f7f63efed5396feb38a3c177e8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/O_3uIHMWJ9lkDFdl6jEUTYkxzXr9bbpIuyJhUkUctcI.jpg?width=320&crop=smart&auto=webp&s=8be4a2e0d89c7e1891efb7559cfdf3302cc49fda', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/O_3uIHMWJ9lkDFdl6jEUTYkxzXr9bbpIuyJhUkUctcI.jpg?auto=webp&s=5314c804687ca11c4a2a168d5611fa9f9f162146', 'width': 480}, 'variants': {}}]}
Any services that offer multiple LLMs via API?
1
[removed]
2025-01-31T12:19:39
https://www.reddit.com/r/LocalLLaMA/comments/1iedilz/any_services_that_offer_multiple_llms_via_api/
pazvanti2003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iedilz
false
null
t3_1iedilz
/r/LocalLLaMA/comments/1iedilz/any_services_that_offer_multiple_llms_via_api/
false
false
self
1
null
Running Deepseek on a CPU only cluster of machines?
3
Old servers that are still relatively performant, with heaps of RAM, can be had cheap nowadays. RAM is orders of magnitude cheaper than VRAM/GPU solutions, but also much slower in bandwidth and compute. We have already seen people run DeepSeek V3/R1 on CPU-only machines, reaching up to 6-7 tokens/s with a couple of optimizations. Now my question is whether we can push these numbers further just by using more (much more) cheap hardware. Say I have 5 of these bad boys:

HPE Server DL560
4x E5-4640 CPU
32x 32GB RAM (= 1TB RAM) (each HP 32GB 4Rx4 PC3L-10600L-9 Kit 32)

plus enough power, disk space, and 10Gb Ethernet. That's 5TB of RAM (!) and 240 CPU cores / 480 threads (!). How would I run a quantised DeepSeek R1 model (still 671B, at around 220GB) to maximise throughput, i.e. tokens/second, for our business clients? Would it even be worth the effort, since communication overhead between the machines might slow it all down? And how much custom glue would be required, given that inference engines like vLLM already do some of the heavy lifting? (A rough bandwidth estimate follows this record.)
2025-01-31T12:24:16
https://www.reddit.com/r/LocalLLaMA/comments/1iedlgz/running_deepseek_on_a_cpu_only_cluster_of_machines/
jaMMint
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iedlgz
false
null
t3_1iedlgz
/r/LocalLLaMA/comments/1iedlgz/running_deepseek_on_a_cpu_only_cluster_of_machines/
false
false
self
3
null
Any models that can manipulate images that I provide?
1
I want to be able to upload an image and get the model to add elements to it or generally manipulate it following my instructions. Are there any models that can do that?
2025-01-31T12:50:09
https://www.reddit.com/r/LocalLLaMA/comments/1iee1t9/any_models_that_can_manipulate_images_that_i/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iee1t9
false
null
t3_1iee1t9
/r/LocalLLaMA/comments/1iee1t9/any_models_that_can_manipulate_images_that_i/
false
false
self
1
null
What context size do you use?
5
What context size makes it feel like you're talking to someone? I like GPT because it brings up things we've previously talked about; that big context window is noticeable. I have a mid-range GPU, and I usually run my Ollama models at 8k context. For self-hosted models, it makes me feel like I'm talking to my grandma with dementia. Any recommendations?
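One concrete knob worth checking first: Ollama has historically defaulted to a small context window (2048 tokens), which alone produces that "dementia" feel. It can be raised per request; a sketch, where the model tag is a placeholder and memory use grows with `num_ctx`:

```python
# Raise Ollama's context window per request via the real /api/generate
# endpoint. The model tag below is an illustrative placeholder.
import requests

r = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3.1:8b",
    "prompt": "Summarize the conversation so far in three bullet points.",
    "options": {"num_ctx": 16384},  # request a 16k window
    "stream": False,
})
print(r.json()["response"])
```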
2025-01-31T12:56:36
https://www.reddit.com/r/LocalLLaMA/comments/1iee61m/what_context_size_do_you_use/
rhaastt-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iee61m
false
null
t3_1iee61m
/r/LocalLLaMA/comments/1iee61m/what_context_size_do_you_use/
false
false
self
5
null
How to run deepseek-r1-671b-1.58bit:latest using Ollama & Open WebUI (DeepSeek R1 Dynamic 1.58-bit)
1
If you try to run with limited RAM and VRAM (128GB/48GB), you will get the following:

```
docker exec -it ollama ollama run SIGJNF/deepseek-r1-671b-1.58bit
Error: llama runner process has terminated: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
```

Create a Modelfile:

```
docker exec -it ollama bash -c 'cat > /tmp/Modelfile << EOL
FROM SIGJNF/deepseek-r1-671b-1.58bit

PARAMETER temperature 0.7
PARAMETER num_ctx 128
PARAMETER num_gpu 14
PARAMETER num_thread 12
PARAMETER num_batch 16
PARAMETER num_keep 0

SYSTEM """You are a helpful AI assistant."""
EOL'

docker exec -it ollama ollama rm deepseek-r1-671b-1.58bit
docker exec -it ollama ollama create deepseek-r1-671b-1.58bit -f /tmp/Modelfile
```

`num_gpu` is 7 for a single 3090 or 24GB VRAM. `num_thread` is 12, depending on the number of CPU cores. Got it running, but it's too slow. Is there a way to make it a little faster? I have 2x 3090 GPUs, 128GB RAM, and a 3900XT processor.
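For completeness, a minimal sketch of querying the freshly created model over Ollama's HTTP API (assuming the default port 11434 is exposed from the container):

```python
# Query the model created above via Ollama's REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1-671b-1.58bit",  # name used in `ollama create`
        "prompt": "Say hello in one sentence.",
        "stream": False,                      # return a single JSON object
    },
    timeout=600,  # a 1.58-bit 671B on mixed CPU/GPU can be very slow
)
print(resp.json()["response"])
```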
2025-01-31T12:59:46
https://www.reddit.com/r/LocalLLaMA/comments/1iee84e/how_to_run_deepseekr1671b158bitlatest_using/
Vishnu_One
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iee84e
false
null
t3_1iee84e
/r/LocalLLaMA/comments/1iee84e/how_to_run_deepseekr1671b158bitlatest_using/
false
false
self
1
null
I asked Deepseek R1 7B running on ollama "How does human body create stomach acid"
0
Can any medical experts here confirm if the "Helix pump" part here is correct? I can't find any references on Google to a "Helix pump" in the human body.

```
The human body generates stomach acid primarily through two key processes: hydrochloric acid production by parietal cells and the action of intrinsic factors.

1. **Hydrochloric Acid Production**:
   - Parietal cells in the stomach lining convert fuel molecules into energy using oxygen via cellular respiration, producing ATP.
   - This ATP is used to pump hydrochloric acid (HCl) into the stomach cavity, creating an acidic environment essential for digestion.

2. **Intrinsic Factors**:
   - These enzymes released by parietal cells further enhance acidity, aiding in nutrient absorption, particularly vitamin B12 and iron digestion.

3. **Gastrointestinal Pumping**:
   - The Helix pump, composed of muscular tubes, expels stomach contents forward but cannot withstand strong acids, leading to potential damage if the motion is too rapid or insufficiently flexible.
   - This can contribute to conditions like GERD when acid reflux occurs due to compression of the esophagus.

4. **Acid Regulation**:
   - Histamine release mechanisms and neutralization enzymes help manage excess acid levels, preventing damage while supporting digestion.

This coordinated system ensures a controlled pH level in the stomach, facilitating effective digestion and nutrient absorption while minimizing acid-related harm.
```
2025-01-31T13:08:24
https://www.reddit.com/r/LocalLLaMA/comments/1ieee1r/i_asked_deepseek_r1_7b_running_on_ollama_how_does/
RustPerson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieee1r
false
null
t3_1ieee1r
/r/LocalLLaMA/comments/1ieee1r/i_asked_deepseek_r1_7b_running_on_ollama_how_does/
false
false
self
0
null
Now this is a mood-killer.
0
I was training DeepSeek by showing it lengthy passages from a book I really like, so it could help me improve my own work, but suddenly, BOOM: "We've exceeded the length limit for Deep Thinking. Please start a new chat so we can continue deep thinking!". Annihilated all my excitement for the evening, ngl. I'm not holding out hope for a way to just plop all the knowledge from this conversation into another one, but if there is a way I'd much appreciate it; if not, then I guess I'm just here to show you what we're all destined to arrive at.
2025-01-31T13:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1ieehdw/now_this_is_a_moodkiller/
blackrino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieehdw
false
null
t3_1ieehdw
/r/LocalLLaMA/comments/1ieehdw/now_this_is_a_moodkiller/
false
false
self
0
null
Struggling with Batched/Parallel Inference for Small LLMs – Need Suggestions
5
Hey everyone, I've been trying to get batched/parallel inference working so I can maximize VRAM usage while running a small LLM on multiple prompts. Basically, I want to process several prompts in parallel rather than sequentially. Here's what I've tried so far: TabbyAPI (couldn't figure out how to make it work with EXL2), Aphrodite Engine (says it doesn't support EXL2 quants), and LM Studio with GGUF (doesn't seem to have built-in parallelism for inference). I'd prefer a solution that supports EXL2, but I'm open to alternatives if there's a better format for efficient batched inference. Any recommendations? How do you handle parallel inference with local models? Would appreciate any insights!
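For reference, a minimal sketch of offline batched inference with vLLM; note that vLLM does not load EXL2 quants, so this assumes an AWQ build of the model instead (the model name below is an illustrative placeholder, not a recommendation):

```python
# Offline batched inference with vLLM: all prompts are submitted at
# once and scheduled together by vLLM's continuous batching engine.
from vllm import LLM, SamplingParams

prompts = [
    "Summarize the plot of Hamlet in one line.",
    "What is the capital of Australia?",
    "Write a haiku about GPUs.",
]
params = SamplingParams(temperature=0.7, max_tokens=128)

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct-AWQ", gpu_memory_utilization=0.90)
outputs = llm.generate(prompts, params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```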
2025-01-31T13:16:13
https://www.reddit.com/r/LocalLLaMA/comments/1ieej7z/struggling_with_batchedparallel_inference_for/
Su1tz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieej7z
false
null
t3_1ieej7z
/r/LocalLLaMA/comments/1ieej7z/struggling_with_batchedparallel_inference_for/
false
false
self
5
null
Run DeepSeek R1 Locally—No Data Leaks, No External Hosting
1
[removed]
2025-01-31T13:18:07
https://www.reddit.com/r/LocalLLaMA/comments/1ieekes/run_deepseek_r1_locallyno_data_leaks_no_external/
thewritingwallah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieekes
false
null
t3_1ieekes
/r/LocalLLaMA/comments/1ieekes/run_deepseek_r1_locallyno_data_leaks_no_external/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OdaGOqYZVAg_h6IhYD5Ob52RGcQKXDY-CQDcXqZG7ao', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5daIHQjwrsgb02FNyMShkPuJGHxMY4JDge_O_wHB4AY.jpg?width=108&crop=smart&auto=webp&s=05a0a6548dabc981be1e8679d2caad418ef43f7b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5daIHQjwrsgb02FNyMShkPuJGHxMY4JDge_O_wHB4AY.jpg?width=216&crop=smart&auto=webp&s=11baeddd7b17c61e1f338ef9eb0de62cf9a9e0dd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5daIHQjwrsgb02FNyMShkPuJGHxMY4JDge_O_wHB4AY.jpg?width=320&crop=smart&auto=webp&s=5bb420f09635e82492e0ef03f87c7683720eeb1e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5daIHQjwrsgb02FNyMShkPuJGHxMY4JDge_O_wHB4AY.jpg?width=640&crop=smart&auto=webp&s=572eeab2b444f74364350ceae3707275bec74c06', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5daIHQjwrsgb02FNyMShkPuJGHxMY4JDge_O_wHB4AY.jpg?width=960&crop=smart&auto=webp&s=9b70f201835a2c116f2b319e85d03d48191f2a83', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5daIHQjwrsgb02FNyMShkPuJGHxMY4JDge_O_wHB4AY.jpg?width=1080&crop=smart&auto=webp&s=f46167ebdf0f3e70d2f3894567575d631572f072', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/5daIHQjwrsgb02FNyMShkPuJGHxMY4JDge_O_wHB4AY.jpg?auto=webp&s=57cfdcb794b01bdfb3c94e76e150887bb83cbd44', 'width': 2400}, 'variants': {}}]}
Should I wait for Project Digits by Nvidia??
4
Hello everyone! I have always been fascinated by second-brain systems, since this concept started to become popular in the last few years (using Notion, Obsidian and many other apps for this purpose), and it helped me dramatically and improved my learning workflow. With AI tools (ChatGPT etc.) it has become even more practical to upload PDF files and chat with them. But since I want to extend this to massive files (talking about terabytes of PDF textbooks), relying on cloud-based AI models would be super expensive in the long term. In fact, as a lifelong learner, I obviously want to rely on AI for explaining, elaborating on, and then simplifying complex concepts for me, so it won't be limited to a few projects (let's say a thesis) and that's it. I want to have separate AI assistants (a math teacher that learns from the tons of maths textbooks I previously uploaded, a physics teacher/assistant, and so on). So I started considering local models (DeepSeek R1 etc.), especially the most advanced ones for reasoning, scientific problems and coding. And I definitely need powerful hardware (I have an Alienware R12 with an RTX 3090 GPU). I was thinking about Nvidia GPUs (maybe two 5090s combined), but overall (getting and assembling the rest of the components of the PC) it may go over my budget. But what if I get Nvidia Project Digits for this instead (even two of them combined)? It is specifically designed for AI computing, so it should perform better, am I right? My budget: 6000€ max (6200-6500$). (I used to save money for the next Mac Studio but I changed my plan: I 100 times prefer an AI computer over a "traditional" powerful computer for my use cases.) Please give me your opinions!! (6k is still expensive for me and I don't want to waste it. I want to be sure about what to get for the long term, for the next 3-5 years at least.)
2025-01-31T13:24:21
https://www.reddit.com/r/LocalLLaMA/comments/1ieeoqk/should_i_wait_for_project_digits_by_nvidia/
ahmedfarrag17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieeoqk
false
null
t3_1ieeoqk
/r/LocalLLaMA/comments/1ieeoqk/should_i_wait_for_project_digits_by_nvidia/
false
false
self
4
null
Does it matter on which x86_64 machine I compile llama.cpp (concerning dynamic dispatch)?
1
[removed]
2025-01-31T13:27:35
https://www.reddit.com/r/LocalLLaMA/comments/1ieeqxe/does_it_matter_on_which_x86_64_machine_i_compile/
CloudyCloud256
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieeqxe
false
null
t3_1ieeqxe
/r/LocalLLaMA/comments/1ieeqxe/does_it_matter_on_which_x86_64_machine_i_compile/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
Does it matter on which x86_64 machine I compile llama.cpp (concerning dynamic dispatch)?
3
Hello, I'm building llama.cpp and want to distribute it to a bunch of x86\_64 machines. Some have AVX-512 and some don't. Where available, I'd like these machines to use it for extra performance. I thought it mostly wouldn't matter on which machine I build llama.cpp, because there are nice runtime checks like this: `int ggml_cpu_has_avx512(void)` [https://github.com/ggerganov/llama.cpp/blob/5783575c9d99c4d9370495800663aa5397ceb0be/ggml/src/ggml-cpu/ggml-cpu.c#L14192](https://github.com/ggerganov/llama.cpp/blob/5783575c9d99c4d9370495800663aa5397ceb0be/ggml/src/ggml-cpu/ggml-cpu.c#L14192) But then there are also parts like this: `#elif defined(__AVX512F__)` [https://github.com/ggerganov/llama.cpp/blob/5783575c9d99c4d9370495800663aa5397ceb0be/ggml/src/ggml-cpu/ggml-cpu.c#L554](https://github.com/ggerganov/llama.cpp/blob/5783575c9d99c4d9370495800663aa5397ceb0be/ggml/src/ggml-cpu/ggml-cpu.c#L554) which seems to me like it really does matter on which machine I compile llama.cpp, but I probably misunderstand what these parts do. I'm not really good with such low-level details, so I thought I'd try my luck and ask here :) Any help is highly appreciated!
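One pragmatic workaround, whatever the dispatch details turn out to be, is to ship two builds (a generic one and an AVX-512 one) and pick per machine. A small helper sketch for that (Linux-only, reading /proc/cpuinfo; the flag names are the standard kernel ones):

```python
# Check which SIMD features a target machine advertises, e.g. to decide
# which prebuilt llama.cpp binary to deploy there.
def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX-512F supported:", "avx512f" in flags)
print("AVX2 supported:", "avx2" in flags)
```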
2025-01-31T13:29:13
https://www.reddit.com/r/LocalLLaMA/comments/1iees2z/does_it_matter_on_which_x86_64_machine_i_compile/
throwaway-0xDEADBEEF
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iees2z
false
null
t3_1iees2z
/r/LocalLLaMA/comments/1iees2z/does_it_matter_on_which_x86_64_machine_i_compile/
false
false
self
3
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
Could DeepSeek have just earned money from Nvidia's loss?
0
Everyone is very sceptical about how DeepSeek can provide R1 to consumers for free. I think it's entirely possible: we know DeepSeek is a program run by a hedge fund, which probably already had the GPU power and the mathematicians for such a move. Then the hedge fund, knowing Nvidia's hyper-valuation and the new US Stargate program, could have decided to short Nvidia stock (probably earning billions in two days). This wouldn't have been possible if DeepSeek had been priced even remotely near o1.
2025-01-31T13:31:14
https://www.reddit.com/r/LocalLLaMA/comments/1ieetin/could_deepseek_has_just_earned_money_due_to/
ME_LIKEY_SUGAR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieetin
false
null
t3_1ieetin
/r/LocalLLaMA/comments/1ieetin/could_deepseek_has_just_earned_money_due_to/
false
false
self
0
null
Can someone suggest me some dataset to train the llm model so the model learns to response in different language tone eg. anger?
1
[removed]
2025-01-31T13:34:17
https://www.reddit.com/r/LocalLLaMA/comments/1ieevnm/can_someone_suggest_me_some_dataset_to_train_the/
UnfairRadish9549
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieevnm
false
null
t3_1ieevnm
/r/LocalLLaMA/comments/1ieevnm/can_someone_suggest_me_some_dataset_to_train_the/
false
false
self
1
null
LOL 😂 chat gpt acting suss
1
2025-01-31T13:40:50
https://i.redd.it/xlobgnn32cge1.jpeg
batty_bats__
i.redd.it
1970-01-01T00:00:00
0
{}
1ief0fb
false
null
t3_1ief0fb
/r/LocalLLaMA/comments/1ief0fb/lol_chat_gpt_acting_suss/
false
false
https://b.thumbs.redditm…DujVWnNHVz8Q.jpg
1
{'enabled': True, 'images': [{'id': 'Zl_kPRv14xcrw42wv811Be0StJ7Qo60jGxkfQgnJl7o', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/xlobgnn32cge1.jpeg?width=108&crop=smart&auto=webp&s=c08a068cd2c83ab55bd77d3823f0348203e7f6e0', 'width': 108}, {'height': 336, 'url': 'https://preview.redd.it/xlobgnn32cge1.jpeg?width=216&crop=smart&auto=webp&s=396c81b09131170dcb75a5d534026c7e1f9eb8c0', 'width': 216}, {'height': 498, 'url': 'https://preview.redd.it/xlobgnn32cge1.jpeg?width=320&crop=smart&auto=webp&s=c945a5d977bd6ae53359cc2271f06d819c260292', 'width': 320}, {'height': 996, 'url': 'https://preview.redd.it/xlobgnn32cge1.jpeg?width=640&crop=smart&auto=webp&s=5f3b868466a6396753037dc3898eebd2e229cba1', 'width': 640}, {'height': 1495, 'url': 'https://preview.redd.it/xlobgnn32cge1.jpeg?width=960&crop=smart&auto=webp&s=d8783b3fb4106478584e8a17d99a35ee7b2e679e', 'width': 960}, {'height': 1682, 'url': 'https://preview.redd.it/xlobgnn32cge1.jpeg?width=1080&crop=smart&auto=webp&s=101c32bc51abb286e3ad1ae65e852df531699652', 'width': 1080}], 'source': {'height': 2492, 'url': 'https://preview.redd.it/xlobgnn32cge1.jpeg?auto=webp&s=2025cb2bcecb1c2b7dba42f2a79642081762e0a6', 'width': 1600}, 'variants': {}}]}
Which model should I fine-tune?
1
[removed]
2025-01-31T13:45:52
https://www.reddit.com/r/LocalLLaMA/comments/1ief429/which_model_should_i_finetune/
Zireael61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ief429
false
null
t3_1ief429
/r/LocalLLaMA/comments/1ief429/which_model_should_i_finetune/
false
false
self
1
null
The Best data extraction tool in 2024
1
[removed]
2025-01-31T13:50:00
https://www.reddit.com/r/LocalLLaMA/comments/1ief70h/the_best_data_extraction_tool_in_2024/
Strict_Contract3448
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ief70h
false
null
t3_1ief70h
/r/LocalLLaMA/comments/1ief70h/the_best_data_extraction_tool_in_2024/
false
false
self
1
null
Idea: "Can I Run This LLM?" Website
793
I have an idea. You know how websites like Can You Run It let you check if a game can run on your PC, showing FPS estimates and hardware requirements? What if there was a similar website for LLMs? A place where you could enter your hardware specs and see tokens per second, VRAM & RAM requirements, etc. It would save so much time instead of digging through forums or testing models manually. Does something like this exist already? 🤔 I would pay for that.
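As a starting point, the kind of heuristic such a site could build on (rough formulas stated as assumptions, not measurements):

```python
# Back-of-the-envelope estimators a "Can I Run This LLM?" site could
# start from. Both formulas are crude heuristics.
def vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Weights only; `overhead` crudely covers KV cache and buffers."""
    return params_b * bits / 8 * overhead

def tokens_per_s(mem_bw_gbs: float, params_b: float, bits: int) -> float:
    """Decode is roughly memory-bandwidth bound: bandwidth / bytes per token."""
    return mem_bw_gbs / (params_b * bits / 8)

# Example: an 8B model at 4-bit on a GPU with ~1000 GB/s bandwidth.
print(f"{vram_gb(8, 4):.1f} GB VRAM, ~{tokens_per_s(1000, 8, 4):.0f} tok/s peak")
```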
2025-01-31T13:55:03
https://i.redd.it/l344q42n4cge1.jpeg
Dangerous_Bunch_3669
i.redd.it
1970-01-01T00:00:00
0
{}
1iefan2
false
null
t3_1iefan2
/r/LocalLLaMA/comments/1iefan2/idea_can_i_run_this_llm_website/
false
false
https://b.thumbs.redditm…w8LPx3-kxOxo.jpg
793
{'enabled': True, 'images': [{'id': 'rIB5-83EGQOxy4sM0qTwZCXzZzZC-goT9xu6MummKd4', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/l344q42n4cge1.jpeg?width=108&crop=smart&auto=webp&s=cd9cb2638560f080919700b2fedb826f5bace448', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/l344q42n4cge1.jpeg?width=216&crop=smart&auto=webp&s=c25130b380c12025eb7e92e895751074d0243804', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/l344q42n4cge1.jpeg?width=320&crop=smart&auto=webp&s=654f22f2903553140387cba9fd48ff50892ccff5', 'width': 320}, {'height': 385, 'url': 'https://preview.redd.it/l344q42n4cge1.jpeg?width=640&crop=smart&auto=webp&s=d54b955e3b1b2cf6f6d117e19782d25f8f4603c8', 'width': 640}], 'source': {'height': 416, 'url': 'https://preview.redd.it/l344q42n4cge1.jpeg?auto=webp&s=96037b08b936a85ffc844202a369d9b810599b5d', 'width': 690}, 'variants': {}}]}
Mistral Small 3 24B GGUF quantization Evaluation results
166
https://preview.redd.it/…in.com/mqWZzxaH)
2025-01-31T14:03:45
https://www.reddit.com/r/LocalLLaMA/comments/1iefhfj/mistral_small_3_24b_gguf_quantization_evaluation/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iefhfj
false
null
t3_1iefhfj
/r/LocalLLaMA/comments/1iefhfj/mistral_small_3_24b_gguf_quantization_evaluation/
false
false
https://b.thumbs.redditm…ryDIf7BUETqs.jpg
166
{'enabled': False, 'images': [{'id': 'fJcJu7S1jWPsIijhxC6Qf-PDht40NQSJSF_CIJVpSDg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=108&crop=smart&auto=webp&s=556f02c5062196b831727d50479b4dedcba42465', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=216&crop=smart&auto=webp&s=915ad696f5f87cf150f0a34cfaab22e492725ead', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=320&crop=smart&auto=webp&s=c7bd0c3afae8e858eb9c166b192564fd5e9ab698', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=640&crop=smart&auto=webp&s=506e8acadec458fdbc5263aab62e7fe23bff3e73', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=960&crop=smart&auto=webp&s=92ba6bb4013785ea43d153da8d92240198c98889', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=1080&crop=smart&auto=webp&s=300841458fb4f5602c0a50ea57a648bd2430d112', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?auto=webp&s=10ce7423da868f93e58832e74619ce454c74d486', 'width': 1200}, 'variants': {}}]}
New efficient contender: LFM-7B | Beats Llama 3.1 8B and Qwen 2.5 7B | 100 million tokens for $1.00
0
2025-01-31T14:16:16
https://i.redd.it/58t8lp0d8cge1.jpeg
elemental-mind
i.redd.it
1970-01-01T00:00:00
0
{}
1iefque
false
null
t3_1iefque
/r/LocalLLaMA/comments/1iefque/new_efficient_contender_lfm7b_beats_llama_31_8b/
false
false
https://a.thumbs.redditm…GsQ3hzCaDXy4.jpg
0
{'enabled': True, 'images': [{'id': 'FGBDsYpexgkELT760IY1-_I3mf6HASqrdT5x8f94ADU', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/58t8lp0d8cge1.jpeg?width=108&crop=smart&auto=webp&s=77918647df87207e2eee2492abc9254f2faf15dd', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/58t8lp0d8cge1.jpeg?width=216&crop=smart&auto=webp&s=4061752ad3c777adb034725a0dd18e7f84813bb5', 'width': 216}, {'height': 384, 'url': 'https://preview.redd.it/58t8lp0d8cge1.jpeg?width=320&crop=smart&auto=webp&s=e91a255224ccd7707e48b68940c15f421c31c633', 'width': 320}, {'height': 769, 'url': 'https://preview.redd.it/58t8lp0d8cge1.jpeg?width=640&crop=smart&auto=webp&s=2ab59ca34c31d6cf421693f2ae7d6d00bc2a246d', 'width': 640}], 'source': {'height': 1022, 'url': 'https://preview.redd.it/58t8lp0d8cge1.jpeg?auto=webp&s=3a0e784016a2ee694f3f814f6eb48f537da165e0', 'width': 850}, 'variants': {}}]}
LM Studio Alternative
1
[removed]
2025-01-31T14:16:24
https://www.reddit.com/r/LocalLLaMA/comments/1iefqxq/lm_studio_altarnetive/
Parad6se
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iefqxq
false
null
t3_1iefqxq
/r/LocalLLaMA/comments/1iefqxq/lm_studio_altarnetive/
false
false
self
1
null
Mac Mini with M4 chip?
3
Has anyone installed a DeepSeek-R1 model on the new Mac mini? I'm curious how large a model it would be able to run, and how fast. At $600, it would be a nice inexpensive way to host R1. I've been playing around with the distilled models on my MacBook Pro M1 and am reasonably impressed.
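A rough fit check, sketched with heuristic assumptions (unified-memory share available to the GPU and a flat overhead factor), not benchmarks:

```python
# Which R1 distills plausibly fit on a 16 GB M4 Mac mini? Assumes the
# GPU can use roughly 2/3 of unified memory and ~15% KV/buffer overhead.
def fits(params_b: float, bits: int, ram_gb: float, gpu_share: float = 0.66) -> bool:
    weights_gb = params_b * bits / 8
    return weights_gb * 1.15 < ram_gb * gpu_share

for size in (7, 8, 14, 32):
    print(f"R1 distill {size}B @ 4-bit on 16 GB:", fits(size, 4, 16))
# Under these assumptions the 7B/8B/14B distills fit, the 32B does not.
```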
2025-01-31T14:22:06
https://www.reddit.com/r/LocalLLaMA/comments/1iefv4d/mac_mini_with_m4_chip/
DrDisintegrator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iefv4d
false
null
t3_1iefv4d
/r/LocalLLaMA/comments/1iefv4d/mac_mini_with_m4_chip/
false
false
self
3
null
8x H100 for Deepseek R1 600B+? vLLM Struggles – Need Insights
1
[removed]
2025-01-31T14:22:53
https://www.reddit.com/r/LocalLLaMA/comments/1iefvoh/8x_h100_for_deepseek_r1_600b_vllm_struggles_need/
ilkhom19
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iefvoh
false
null
t3_1iefvoh
/r/LocalLLaMA/comments/1iefvoh/8x_h100_for_deepseek_r1_600b_vllm_struggles_need/
false
false
self
1
null
8x H100 for Deepseek R1 600B+? vLLM
1
[removed]
2025-01-31T14:25:06
https://www.reddit.com/r/LocalLLaMA/comments/1iefx8n/8x_h100_for_deepseek_r1_600b_vllm/
ilkhom19
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iefx8n
false
null
t3_1iefx8n
/r/LocalLLaMA/comments/1iefx8n/8x_h100_for_deepseek_r1_600b_vllm/
false
false
self
1
null
I build a (ghetto) tool to try a bunch of prompts that are supposed to rephrase the last user question to be usable in a RAG context (desc in comments)
12
2025-01-31T14:26:31
https://github.com/ExtReMLapin/llm_prompt_rephrase_message_rag
LinkSea8324
github.com
1970-01-01T00:00:00
0
{}
1iefybl
false
null
t3_1iefybl
/r/LocalLLaMA/comments/1iefybl/i_build_a_ghetto_tool_to_try_a_bunch_of_prompts/
false
false
https://b.thumbs.redditm…Cdb83qu7TJnE.jpg
12
{'enabled': False, 'images': [{'id': 'ptaEyVB6CzhZo2fmSoKpkRYlhuntv3VUh7rQf_r0Km4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0m6jNXgSXmXGazLOPYFTxh6PzLNwAmyCAuuhLEb2ZWY.jpg?width=108&crop=smart&auto=webp&s=d14938f1163d1131fb4bbd444dac8543ff8d4284', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0m6jNXgSXmXGazLOPYFTxh6PzLNwAmyCAuuhLEb2ZWY.jpg?width=216&crop=smart&auto=webp&s=01f5d12321d9b710a28e4c186da072aca3f47aea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0m6jNXgSXmXGazLOPYFTxh6PzLNwAmyCAuuhLEb2ZWY.jpg?width=320&crop=smart&auto=webp&s=801055ddd88ffc081c783cd52c1a02ca8bfc1801', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0m6jNXgSXmXGazLOPYFTxh6PzLNwAmyCAuuhLEb2ZWY.jpg?width=640&crop=smart&auto=webp&s=aa1574ba1df032a625f620fb3825c85d4487a8a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0m6jNXgSXmXGazLOPYFTxh6PzLNwAmyCAuuhLEb2ZWY.jpg?width=960&crop=smart&auto=webp&s=51765d352aec54d51f46dc7509dd12e0709d3d78', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0m6jNXgSXmXGazLOPYFTxh6PzLNwAmyCAuuhLEb2ZWY.jpg?width=1080&crop=smart&auto=webp&s=13e8d954b76e990398c97b99293351d47f2a2820', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0m6jNXgSXmXGazLOPYFTxh6PzLNwAmyCAuuhLEb2ZWY.jpg?auto=webp&s=010dd3da63eaea8cb2d99475b75073794dd459d6', 'width': 1200}, 'variants': {}}]}
8x H100 for Deepseek R1 600B+? vLLM
1
[removed]
2025-01-31T14:26:44
https://www.reddit.com/r/LocalLLaMA/comments/1iefygv/8x_h100_for_deepseek_r1_600b_vllm/
ilkhom19
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iefygv
false
null
t3_1iefygv
/r/LocalLLaMA/comments/1iefygv/8x_h100_for_deepseek_r1_600b_vllm/
false
false
self
1
null
Local Model alternatives to Cursor
5
So I've spent the last year using Cursor, mostly with Sonnet and 4o, but after trying lots of local LLMs I've narrowed my current favorite local model for Python coding down to Qwen2.5-Coder-Instruct. My hope is to have Cursor-like functionality, where it can look at the full code base (or at least a good chunk of it), running locally with my Qwen of choice. I know various Cursor-esque tools exist, but I'm not sure which can run locally, which work with larger code bases, which are paid, etc. What is currently my best local option? LM Studio is currently my main choice for running Qwen script by script. Thanks in advance.
2025-01-31T14:34:24
https://www.reddit.com/r/LocalLLaMA/comments/1ieg48a/local_model_alternatives_to_cursor/
MrWeirdoFace
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieg48a
false
null
t3_1ieg48a
/r/LocalLLaMA/comments/1ieg48a/local_model_alternatives_to_cursor/
false
false
self
5
null
My DeepSeek is not working. For 3 days I've been seeing DeepSeek not working at night. I am from Bangladesh and it is 8:30 pm
1
[removed]
2025-01-31T14:41:26
https://www.reddit.com/gallery/1ieg9ip
itz_sam_sul_
reddit.com
1970-01-01T00:00:00
0
{}
1ieg9ip
false
null
t3_1ieg9ip
/r/LocalLLaMA/comments/1ieg9ip/my_deepseek_not_working_i_am_looking_for_3_days/
false
false
https://b.thumbs.redditm…C4StokNQTezk.jpg
1
null
Using Llama3.3 through HuggingFace
1
[removed]
2025-01-31T14:42:12
https://www.reddit.com/r/LocalLLaMA/comments/1iega3f/using_llama33_through_huggingface/
dumbdrummerdummy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iega3f
false
null
t3_1iega3f
/r/LocalLLaMA/comments/1iega3f/using_llama33_through_huggingface/
false
false
self
1
null
DeepSeek vs Big AI, what gives?
0
DeepSeek created an equal or better model than OpenAI's using $6M of GPUs, compared to $100M. Because their product is open source, what stops any other large US company from replicating their process on bigger machines? Even accounting for "diminishing returns," if OpenAI has at least 10x the hardware, shouldn't they be able to create their own RL model with 5x the capability? Considering that labeled data is no longer required, is machine power now the only bottleneck?
2025-01-31T14:43:58
https://www.reddit.com/r/LocalLLaMA/comments/1iegbgr/deepseek_vs_big_ai_what_gives/
fl4737615
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iegbgr
false
null
t3_1iegbgr
/r/LocalLLaMA/comments/1iegbgr/deepseek_vs_big_ai_what_gives/
false
false
self
0
null
Open-sourcing SmolVLM's training codebase
1
[deleted]
2025-01-31T15:01:19
[deleted]
1970-01-01T00:00:00
0
{}
1iegote
false
null
t3_1iegote
/r/LocalLLaMA/comments/1iegote/opensourcing_smolvlms_training_codebase/
false
false
default
1
null
Replacing github copilot with continue.dev issues
1
Hello everyone. I am sure I am missing something here. I am dabbling with self-hosting models to replace Copilot in VS Code. I found this fancy extension called continue.dev that supports Ollama as a back end. It also supports both inline autocomplete and chat. While I got the latter working without issues, I cannot get the former running for the life of me. AFAIK Ollama can only run one model at a time, right? Also, a model needs to be specifically compatible with autocomplete. So how the hell am I supposed to run both a chat and an autocomplete model locally? Unfortunately the continue.dev docs seem unaware of this.
2025-01-31T15:01:39
https://www.reddit.com/r/LocalLLaMA/comments/1iegp48/replacing_github_copilot_with_continuedev_issues/
MainManu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iegp48
false
null
t3_1iegp48
/r/LocalLLaMA/comments/1iegp48/replacing_github_copilot_with_continuedev_issues/
false
false
self
1
null
What's the best LLM i can run locally?
1
[removed]
2025-01-31T15:10:48
https://www.reddit.com/r/LocalLLaMA/comments/1iegwjy/whats_the_best_llm_i_can_run_locally/
1Blue3Brown
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iegwjy
false
null
t3_1iegwjy
/r/LocalLLaMA/comments/1iegwjy/whats_the_best_llm_i_can_run_locally/
false
false
self
1
null
Exllamav2 vs Vllm engine v1 vs LMDeploy benchmarks
2
Which one is fastest? Does anyone have the resources to benchmark this? How much would it cost? I may be willing to fund it if the amount is small.
2025-01-31T15:13:55
https://www.reddit.com/r/LocalLLaMA/comments/1iegz1z/exllamav2_vs_vllm_engine_v1_vs_lmdeploy_benchmarks/
Wonderful_Alfalfa115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iegz1z
false
null
t3_1iegz1z
/r/LocalLLaMA/comments/1iegz1z/exllamav2_vs_vllm_engine_v1_vs_lmdeploy_benchmarks/
false
false
self
2
null
Introducing 'aasetpy'
1
Attention Python developers! 🐍✨ Tired of the tedious setup process for new projects? Say hello to 'aasetpy' - your new best friend for kickstarting Python projects with ease! You were about to open your terminal/editor to test an LLM, but you have to go through the hassle of running commands to set up your virtual environments and whatnot. With just one command, `aasetpy` sets up everything you need: virtual environments, codebase structure, REST API configuration, environment variables, initial git commit, resource usage tracking, logging, containerization, and more! It's like having a personal assistant for your development workflow, ensuring your projects are production-ready and scalable from the very start. Ready to revolutionize your project setup? Check out the 'aasetpy' repository at [https://github.com/aadarshlalchandani/aasetpy](https://github.com/aadarshlalchandani/aasetpy) and see the magic for yourself! We're always open to contributions, so if you have ideas to make the starting point even better, don't hesitate to jump in. Let's make Python project initialization a breeze together! 🚀💻 Love the tool? Smash that star button and share it with your coding crew! ⚡️🤝
2025-01-31T15:14:58
https://www.reddit.com/r/LocalLLaMA/comments/1iegzvm/introducing_aasetpy/
aadarsh_af
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iegzvm
false
null
t3_1iegzvm
/r/LocalLLaMA/comments/1iegzvm/introducing_aasetpy/
false
false
self
1
{'enabled': False, 'images': [{'id': '1B7pkttq5MZHgZYGEkHQPCdqI8JArekCTwXKfbzvSG0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J-F2bQssfMs5J9uDM1iMckZ7IZU6CLMpI2uL8XGoeJ4.jpg?width=108&crop=smart&auto=webp&s=5aae9d9c7decc4fdf0ee16d3bd383069ad097401', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J-F2bQssfMs5J9uDM1iMckZ7IZU6CLMpI2uL8XGoeJ4.jpg?width=216&crop=smart&auto=webp&s=cb04faaa867b14ac2826cf2dcc6cbb0e71061119', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J-F2bQssfMs5J9uDM1iMckZ7IZU6CLMpI2uL8XGoeJ4.jpg?width=320&crop=smart&auto=webp&s=28a664d78befd4c24c6d70b7d4bca00a1b8a1090', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J-F2bQssfMs5J9uDM1iMckZ7IZU6CLMpI2uL8XGoeJ4.jpg?width=640&crop=smart&auto=webp&s=9b25581073f30373ba6765db332f33bb56d78d66', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J-F2bQssfMs5J9uDM1iMckZ7IZU6CLMpI2uL8XGoeJ4.jpg?width=960&crop=smart&auto=webp&s=100fb7ec64c8f3392b0f0f7bcbd943167918a2c9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J-F2bQssfMs5J9uDM1iMckZ7IZU6CLMpI2uL8XGoeJ4.jpg?width=1080&crop=smart&auto=webp&s=471272bdebab770c6698a622683c2ad41158b890', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J-F2bQssfMs5J9uDM1iMckZ7IZU6CLMpI2uL8XGoeJ4.jpg?auto=webp&s=1988d6ddbfabf1fa8cd58c4c0c00ae6a871ab83c', 'width': 1200}, 'variants': {}}]}
Fully open source codebase to train SOTA VLMs
93
Hi! I'm Andi from multimodal team at Hugging Face. Today we're open-sourcing the codebase used to train SmolVLM from scratch on 256 H100s Inspired by our team's effort to open-source DeepSeek's R1 training, we are releasing the training and evaluation code on top of the weights Now you can train any of our SmolVLMs—or create your own custom VLMs! Go check it out: [https://github.com/huggingface/smollm/tree/main/vision](https://github.com/huggingface/smollm/tree/main/vision)
2025-01-31T15:15:11
https://www.reddit.com/r/LocalLLaMA/comments/1ieh01f/fully_open_source_codebase_to_train_sota_vlms/
futterneid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieh01f
false
null
t3_1ieh01f
/r/LocalLLaMA/comments/1ieh01f/fully_open_source_codebase_to_train_sota_vlms/
false
false
self
93
{'enabled': False, 'images': [{'id': 'hfgs4irMRtKY76U7Sp7TzT4tgagftZiL9YuTj3eStnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/81OX2I86VupiMFQGEBLHfwunfGNsHtqY8g8dg44AzXQ.jpg?width=108&crop=smart&auto=webp&s=1038cbb84e90ef829fa48ed37f986dab869bda2d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/81OX2I86VupiMFQGEBLHfwunfGNsHtqY8g8dg44AzXQ.jpg?width=216&crop=smart&auto=webp&s=ac730e0fb6fb1652a8ff908dbe001d9b0526a2cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/81OX2I86VupiMFQGEBLHfwunfGNsHtqY8g8dg44AzXQ.jpg?width=320&crop=smart&auto=webp&s=f717e12201db7b5ae5282107e55d6a8129500e9a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/81OX2I86VupiMFQGEBLHfwunfGNsHtqY8g8dg44AzXQ.jpg?width=640&crop=smart&auto=webp&s=ce775da04ae60b041d282239796ba5719c279159', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/81OX2I86VupiMFQGEBLHfwunfGNsHtqY8g8dg44AzXQ.jpg?width=960&crop=smart&auto=webp&s=9edd85099d1f46a9dc8904297fbbabf84dda128c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/81OX2I86VupiMFQGEBLHfwunfGNsHtqY8g8dg44AzXQ.jpg?width=1080&crop=smart&auto=webp&s=c05c238240ffd1ce3ab00c1cb7d84c8a73c886af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/81OX2I86VupiMFQGEBLHfwunfGNsHtqY8g8dg44AzXQ.jpg?auto=webp&s=e7df5673d11d5f6a2e7bcbab20272a1367a72484', 'width': 1200}, 'variants': {}}]}
Have been seeing people running DeepSeek R1 on NVMe; what does that involve exactly?
1
I have the full model (Q2XX) running on my EPYC, but it's not very fast, and I get heartburn watching it go "oh but wait…". I've seen a few people saying they run it fast on NVMe; what does that mean? What commands are required to do this?
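Concretely, "running from NVMe" usually refers to llama.cpp's mmap behavior: the GGUF is memory-mapped, so weights are paged in from disk on demand instead of being fully resident in RAM, which hurts less on MoE models where only a fraction of weights is touched per token. A minimal sketch via llama-cpp-python, with an assumed placeholder path:

```python
# Let the OS page weights from NVMe via mmap instead of loading the
# whole model into RAM. Path and layer counts are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="/nvme/DeepSeek-R1-UD-IQ1_S.gguf",  # placeholder path
    use_mmap=True,     # default in llama.cpp; enables demand paging
    use_mlock=False,   # do NOT pin pages, or you lose the effect
    n_gpu_layers=0,    # adjust to offload whatever fits in VRAM
    n_ctx=2048,
)
print(llm("What is 2+2?", max_tokens=16)["choices"][0]["text"])
```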
2025-01-31T15:23:22
https://www.reddit.com/r/LocalLLaMA/comments/1ieh6gf/have_been_seeing_people_running_deepseek_r1_on/
davesmith001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieh6gf
false
null
t3_1ieh6gf
/r/LocalLLaMA/comments/1ieh6gf/have_been_seeing_people_running_deepseek_r1_on/
false
false
self
1
null
Powerful local AI agents with one-liner CLI in smolagents
8
Hello folks! I'm Merve and I work on VLMs and agents at Hugging Face 👋🏻 We have recently shipped easy-to-use CLI commands that run agent scripts in smolagents, to make it easier for you to run AI agents locally. To run CLI agents, simply run `smolagent {prompt}`; by default it works with web search. If you want to pass more tools, you can do `smolagent {prompt} --tools "speech_to_text translation document_question answering "`  For instance, I planned a trip to Japan using this prompt `$ smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7. Allocate time according to number of public attraction in each, and optimize for distance and travel time. Bring all the public transportation options."` using Qwen2.5-Coder-32B-Instruct (though something smaller would probably work too!). We also have a web automation agent you can run with `webagent {prompt}`; for instance, below you can find a video of me browsing an e-commerce website and returning the details I want. We are looking forward to seeing what you'll build with them, and we really appreciate your feedback 🫡 [browsing e-commerce websites using smolagents](https://reddit.com/link/1iehedx/video/5mt20uatlcge1/player)
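For reference, the CLI wraps roughly this Python API (a minimal sketch following the smolagents quickstart; the model choice here is an assumption):

```python
# Minimal smolagents program: a code agent with web search as its tool.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct"),
)
print(agent.run("Plan a 3-day Kyoto itinerary optimized for travel time."))
```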
2025-01-31T15:33:04
https://www.reddit.com/r/LocalLLaMA/comments/1iehedx/powerful_local_ai_agents_with_oneliner_cli_in/
unofficialmerve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iehedx
false
null
t3_1iehedx
/r/LocalLLaMA/comments/1iehedx/powerful_local_ai_agents_with_oneliner_cli_in/
false
false
self
8
null
Here is what I want
2
I want a tiny LLM with text-to-speech and speech-to-text capabilities with special instructions to engage with the user and keep the conversation going for as long as it can. The purpose is to waste as much time as possible with spam callers. When I get a spam call I could answer the call, place my phone next to the speaker/monitor, and let the LLM take over. The model could be a tiny one since it wouldn't take much intelligence to keep these spam callers engaged. Most of them are bots, anyway. The anti-spam model could figure out what the caller was trying to sell and then go into long soliloquies about how useful the caller's product would be because of this thing that happened recently to a friend, blah blah blah. With the right system prompt, a 1.5b model could do this easily. So, will one of you geniuses out there build something like this and put it on github? You might single-handedly solve the spam-caller epidemic. Think of all the seniors out there getting scammed! This could be the greatest good deed you'll ever do in your life. It could earn you so much karma (the cosmic kind, not the reddit kind) that you'll come back as a well-hung billionaire (or a gorgeous movie star if you prefer). Post here when you're done, k? Thanks.
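A hypothetical sketch of the glue this would need: `transcribe()` and `speak()` below are stand-ins for any local STT/TTS, and the model tag is an assumption, while the Ollama `/api/chat` call is a real endpoint:

```python
# Spam-caller time-waster loop: listen, reply via a tiny local LLM,
# speak, repeat. STT/TTS are placeholders; swap in real local backends.
import requests

SYSTEM = ("You are a chatty, easily distracted senior. Keep the caller "
          "talking as long as possible. Never give real personal data.")

def transcribe() -> str:
    return input("caller said: ")   # stand-in for local speech-to-text

def speak(text: str):
    print("bot says:", text)        # stand-in for local text-to-speech

history = [{"role": "system", "content": SYSTEM}]
while True:
    history.append({"role": "user", "content": transcribe()})
    r = requests.post("http://localhost:11434/api/chat",
                      json={"model": "qwen2.5:1.5b",  # assumed tiny model
                            "messages": history, "stream": False})
    reply = r.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    speak(reply)
```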
2025-01-31T15:33:14
https://www.reddit.com/r/LocalLLaMA/comments/1ieheij/here_is_what_i_want/
LazShort
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieheij
false
null
t3_1ieheij
/r/LocalLLaMA/comments/1ieheij/here_is_what_i_want/
false
false
self
2
null
Whats the best Inference framework for prod?
1
I'm looking for the best framework to handle around 100K requests per day in a production environment. Additionally, what quantization method would be best, particularly for 8-bit models? Thank you in advance for any help!
2025-01-31T15:33:26
https://www.reddit.com/r/LocalLLaMA/comments/1ieheob/whats_the_best_inference_framework_for_prod/
Prize_Clue_1565
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieheob
false
null
t3_1ieheob
/r/LocalLLaMA/comments/1ieheob/whats_the_best_inference_framework_for_prod/
false
false
self
1
null
SemiAnalysis: DeepSeek training cost was similar to that of Anthropic Claude 3.5, we believe DeepSeek has access to 10,000 H100 and 10,000 H800
0
2025-01-31T15:36:02
https://semianalysis.com/2025/01/31/deepseek-debates/
Ivo_ChainNET
semianalysis.com
1970-01-01T00:00:00
0
{}
1iehgsl
false
null
t3_1iehgsl
/r/LocalLLaMA/comments/1iehgsl/semianalysis_deepseek_training_cost_was_similar/
false
false
https://b.thumbs.redditm…DXoi_7VPqPCk.jpg
0
{'enabled': False, 'images': [{'id': 'MNbjfeKIwQBCdeZLLYXW1q9XtjtLQTHu41nZuU2Aqm8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Hq5FgExVrUa-PpMrQXikr1aT9qi_3fV-3jL4Xr2fKqk.jpg?width=108&crop=smart&auto=webp&s=33e7bb8bdd5a8f04af9e2297c2d6a9ee2b67ce1b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Hq5FgExVrUa-PpMrQXikr1aT9qi_3fV-3jL4Xr2fKqk.jpg?width=216&crop=smart&auto=webp&s=f21080c3148561ab2adf813e28c365528b96eeb1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Hq5FgExVrUa-PpMrQXikr1aT9qi_3fV-3jL4Xr2fKqk.jpg?width=320&crop=smart&auto=webp&s=cc0879c6698071e858dc3535dc3ad9999556a905', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Hq5FgExVrUa-PpMrQXikr1aT9qi_3fV-3jL4Xr2fKqk.jpg?width=640&crop=smart&auto=webp&s=0bb95e7b0f081de2af9a351664f43b128ae86bad', 'width': 640}], 'source': {'height': 848, 'url': 'https://external-preview.redd.it/Hq5FgExVrUa-PpMrQXikr1aT9qi_3fV-3jL4Xr2fKqk.jpg?auto=webp&s=079d19c90176ab66ed91c62752124992c73279dc', 'width': 848}, 'variants': {}}]}
Repurpose crypto mining rig to run AI.
1
[removed]
2025-01-31T15:40:54
https://www.reddit.com/r/LocalLLaMA/comments/1iehkrn/repurpose_crypto_mining_rig_to_run_ai/
Jeppe_paa_bjerget
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iehkrn
false
null
t3_1iehkrn
/r/LocalLLaMA/comments/1iehkrn/repurpose_crypto_mining_rig_to_run_ai/
false
false
self
1
null
The closest I can get to chatgpt4 from year behind
1
[removed]
2025-01-31T15:50:02
https://www.reddit.com/r/LocalLLaMA/comments/1iehs5c/the_closest_i_can_get_to_chatgpt4_from_year_behind/
Tonylu99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iehs5c
false
null
t3_1iehs5c
/r/LocalLLaMA/comments/1iehs5c/the_closest_i_can_get_to_chatgpt4_from_year_behind/
false
false
self
1
null
The goal of RamaLama is to make working with AI boring.
11
2025-01-31T15:50:10
https://github.com/containers/ramalama
badgerfish2021
github.com
1970-01-01T00:00:00
0
{}
1iehs8x
false
null
t3_1iehs8x
/r/LocalLLaMA/comments/1iehs8x/the_goal_of_ramalama_is_to_make_working_with_ai/
false
false
https://b.thumbs.redditm…9pHk3rG_aMKQ.jpg
11
{'enabled': False, 'images': [{'id': '7SAunU5Cq-xuzkCeUGMJG7Ir-VwBCtw_A9TdzP0MQw0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aOxDxO8w2iEBwhvfXmfkB0_1EVidEKpesinxxCikkJM.jpg?width=108&crop=smart&auto=webp&s=b01107ef7561d88e928df187a4db86841a4e1716', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aOxDxO8w2iEBwhvfXmfkB0_1EVidEKpesinxxCikkJM.jpg?width=216&crop=smart&auto=webp&s=4972368b2461b0b46a4bceae74c5ae4787e7afa3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aOxDxO8w2iEBwhvfXmfkB0_1EVidEKpesinxxCikkJM.jpg?width=320&crop=smart&auto=webp&s=1e53d3cf287578f93d725b6897e361162d83b4d9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aOxDxO8w2iEBwhvfXmfkB0_1EVidEKpesinxxCikkJM.jpg?width=640&crop=smart&auto=webp&s=6b31d6c3c5b3cea639d1be366752d19cf09b4c31', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aOxDxO8w2iEBwhvfXmfkB0_1EVidEKpesinxxCikkJM.jpg?width=960&crop=smart&auto=webp&s=c171ed8fd930919b7dad9ec1f661f400b1f28a82', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aOxDxO8w2iEBwhvfXmfkB0_1EVidEKpesinxxCikkJM.jpg?width=1080&crop=smart&auto=webp&s=127de6e5e4e22d88ed2ff0d0b137e8c0ff08b00f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aOxDxO8w2iEBwhvfXmfkB0_1EVidEKpesinxxCikkJM.jpg?auto=webp&s=04a3a79be496fd72edaa35886c22c4c05def0d1c', 'width': 1200}, 'variants': {}}]}
GPU pricing is spiking as people rush to self-host deepseek
1,226
2025-01-31T15:50:54
https://i.redd.it/599a10y9pcge1.jpeg
Charuru
i.redd.it
1970-01-01T00:00:00
0
{}
1iehstw
false
null
t3_1iehstw
/r/LocalLLaMA/comments/1iehstw/gpu_pricing_is_spiking_as_people_rush_to_selfhost/
false
false
https://b.thumbs.redditm…Qgpfrs9mVSvM.jpg
1,226
{'enabled': True, 'images': [{'id': 'ofrvG1T1NS7GKxjHGrKHexCUGUBjsf2TJ3HSD2LcVpo', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/599a10y9pcge1.jpeg?width=108&crop=smart&auto=webp&s=e14e5bfd0be9b2befda7ad4dc621df7f6fce36c6', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/599a10y9pcge1.jpeg?width=216&crop=smart&auto=webp&s=ffe83ad79cf06b6ad05b67ef17ab0349b2e2f9aa', 'width': 216}, {'height': 198, 'url': 'https://preview.redd.it/599a10y9pcge1.jpeg?width=320&crop=smart&auto=webp&s=a1df248841d1358a828d27596870c287a9155f22', 'width': 320}, {'height': 396, 'url': 'https://preview.redd.it/599a10y9pcge1.jpeg?width=640&crop=smart&auto=webp&s=a60ac26bd7a2d3395eefcaee8fbf07a28102792f', 'width': 640}, {'height': 595, 'url': 'https://preview.redd.it/599a10y9pcge1.jpeg?width=960&crop=smart&auto=webp&s=641961fa857be3536b92d9658c5c837b740e77f4', 'width': 960}], 'source': {'height': 635, 'url': 'https://preview.redd.it/599a10y9pcge1.jpeg?auto=webp&s=081a73be81ed115f22efc39b293e25394fbb5dba', 'width': 1024}, 'variants': {}}]}
Asked ChatGPT - Cookie Monster stole all my cookies
1
[removed]
2025-01-31T16:01:41
https://www.reddit.com/r/LocalLLaMA/comments/1iei1zz/asked_chatgpt_cookie_monster_stole_my_all_cookies/
Hunting-Succcubus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iei1zz
false
null
t3_1iei1zz
/r/LocalLLaMA/comments/1iei1zz/asked_chatgpt_cookie_monster_stole_my_all_cookies/
false
false
self
1
null
LLM feasibility of running on virtual memory
2
[removed]
2025-01-31T16:08:10
https://www.reddit.com/r/LocalLLaMA/comments/1iei7jf/llm_feasibility_of_running_on_virtual_memory/
mgalbraith81
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iei7jf
false
null
t3_1iei7jf
/r/LocalLLaMA/comments/1iei7jf/llm_feasibility_of_running_on_virtual_memory/
false
false
self
2
null
Introducing HEFT-Qwen – a fine-tuned version of Qwen designed for crypto analysis and financial NLP tasks.
1
HEFT-Qwen leverages the **Hybrid Efficient Fine-Tuning (HEFT) framework**, combining LoRA, PEFT, Sparse Attention, and Gradient Checkpointing to improve efficiency while reducing computational costs. What makes HEFT-Qwen stand out? * Fine-tuned on crypto-related datasets, including trading data, on-chain analytics, and sentiment analysis. * Optimized for long-context processing, making it effective for analyzing financial trends. * Outperforms GPT-4 on multiple finance-specific benchmarks. If you're working with blockchain, DeFi, or financial AI applications, check it out on Hugging Face: [**HEFT-Qwen**](https://huggingface.co/OpenC/HEFT-Qwen). Looking forward to hearing your feedback!
2025-01-31T16:10:20
https://www.reddit.com/r/LocalLLaMA/comments/1iei9cc/introducing_heftqwen_a_finetuned_version_of_qwen/
Different_Prune_3529
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iei9cc
false
null
t3_1iei9cc
/r/LocalLLaMA/comments/1iei9cc/introducing_heftqwen_a_finetuned_version_of_qwen/
false
false
self
1
{'enabled': False, 'images': [{'id': '6YUF7JdNqGS6lNAM5_jqwOyXRhdRUYAtJscPcg8iukM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PyUN9N03kLwGCTXc0WD_u9voTBRGagsWx7b5BrQiZpk.jpg?width=108&crop=smart&auto=webp&s=cff08e95e1b6261ae3e8b5a49fdfebcd685b4a4c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PyUN9N03kLwGCTXc0WD_u9voTBRGagsWx7b5BrQiZpk.jpg?width=216&crop=smart&auto=webp&s=b3884193ea8bee7dc67fd29f602a0809702c5f87', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PyUN9N03kLwGCTXc0WD_u9voTBRGagsWx7b5BrQiZpk.jpg?width=320&crop=smart&auto=webp&s=0e0ae4e129910e0bfc6f94f04be80423e13a6885', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PyUN9N03kLwGCTXc0WD_u9voTBRGagsWx7b5BrQiZpk.jpg?width=640&crop=smart&auto=webp&s=0f7088d3192e8b9241697f8bddb3f5575cd187cb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PyUN9N03kLwGCTXc0WD_u9voTBRGagsWx7b5BrQiZpk.jpg?width=960&crop=smart&auto=webp&s=34af284dfaf3fbdbeee79021aaeaef52a143974e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PyUN9N03kLwGCTXc0WD_u9voTBRGagsWx7b5BrQiZpk.jpg?width=1080&crop=smart&auto=webp&s=b639caca9878385acffb5e6024c2a99a3d171225', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PyUN9N03kLwGCTXc0WD_u9voTBRGagsWx7b5BrQiZpk.jpg?auto=webp&s=153855a9e185f2c0092091401840ee9812a0c3c3', 'width': 1200}, 'variants': {}}]}
What the hell do people expect?
343
After the release of R1 I saw so many "But it can't talk about tank man!", "But it's censored!", "But it's from the Chinese!" posts. 1. They are all censored. And for R1 in particular... I don't want to discuss Chinese politics (or politics at all) with my LLM. That's not my use case, and I don't think I'm in a minority here. What would happen if it were not censored the way it is? The guy behind it would probably have disappeared by now. 2. None of them care about data privacy any more than they have to; otherwise we would never have read about Samsung engineers no longer being allowed to use GPT for processor development. IMHO it's neither worse nor better than the rest (of the non-self-hosted options), and the negative media reports are 1:1 the same as back when Zen was released by AMD and all Intel could do was cry "But it's just cores they glued together!"
2025-01-31T16:20:03
https://www.reddit.com/r/LocalLLaMA/comments/1ieihjr/what_the_hell_do_people_expect/
Suitable-Name
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieihjr
false
null
t3_1ieihjr
/r/LocalLLaMA/comments/1ieihjr/what_the_hell_do_people_expect/
false
false
self
343
null
LLM feasibility of running on virtual memory
2
[removed]
2025-01-31T16:32:43
https://www.reddit.com/r/LocalLLaMA/comments/1ieisb2/llm_feasibility_of_running_on_virtual_memory/
mgalbraith81
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieisb2
false
null
t3_1ieisb2
/r/LocalLLaMA/comments/1ieisb2/llm_feasibility_of_running_on_virtual_memory/
false
false
self
2
null
What's the Best Current Setup for Multi Document (10k+) Retrieval-Augmented Generation (RAG)? Need Accuracy and Citations
12
I'm having some trouble figuring out up to date answers based on benchmarks. I would like to know the best form to do this accurately: RAG, KAG, or GraphRAG. My usecase: I have 10k+ documents as a knowledgebase. I would like to have my end users ask questions and it generates specific domain knowledge answers, pulling in from different documents, but with embedded citations. My documents continuously update and I would want the latest information from multiple sources. Accuracy is of the highest importance, and the questions should also show the date of the answer. I'm hoping some of you could provide a complete guide or breakdown of the best current setup. Specifically, I'd love some guidance on: * **Embeddings**: What is the best locally hosted model? * **Vector Stores**: Which vector database should I go with? * **RAG Frameworks**: Is there any (non-langchain) frameworks I should work with? I don't mind building my own custom flow. What would give me the most accuracy pulling a specific answer from multiple documents? * **Other Tools**: Any other tools or tips that make a RAG setup more efficient or easier to manage? Any help or suggestions would be greatly appreciated! I'd love to hear about the setups you all use and what's worked best for you. Thanks in advance!
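For illustration, a minimal sketch of the retrieval-with-citations side; the embedder and libraries here are assumptions for the sketch, not benchmark-backed picks:

```python
# Embed chunks locally, search a FAISS index, and keep (source, date)
# metadata with each hit so the generator can cite and prefer recency.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    {"text": "Policy X was updated on 2025-01-15 ...", "source": "policy_x_v2.pdf", "date": "2025-01-15"},
    {"text": "Policy X originally stated ...", "source": "policy_x_v1.pdf", "date": "2023-06-01"},
]

model = SentenceTransformer("BAAI/bge-m3")  # locally hosted embedder
emb = model.encode([d["text"] for d in docs], normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])     # cosine sim via inner product
index.add(emb)

q = model.encode(["What does Policy X say?"], normalize_embeddings=True)
scores, ids = index.search(q, 2)

# Sort hits newest-first and format citations for the prompt.
hits = sorted((docs[i] for i in ids[0]), key=lambda d: d["date"], reverse=True)
context = "\n".join(f'[{d["source"]} | {d["date"]}] {d["text"]}' for d in hits)
print(context)  # feed this plus the question to the LLM with a "cite your sources" instruction
```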
2025-01-31T16:36:07
https://www.reddit.com/r/LocalLLaMA/comments/1ieiv7c/whats_the_best_current_setup_for_multi_document/
United-Rush4073
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieiv7c
false
null
t3_1ieiv7c
/r/LocalLLaMA/comments/1ieiv7c/whats_the_best_current_setup_for_multi_document/
false
false
self
12
null
lots of 3090 24gb if you need
0
I have 3090 24 gb for a very good reasonable price. These are used ones. But very good performance. Only Vancouver area locally. Telegram me sw141921
2025-01-31T16:38:54
https://www.reddit.com/r/LocalLLaMA/comments/1ieixlo/lots_of_3090_24gb_if_you_need/
That_Mud7241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieixlo
false
null
t3_1ieixlo
/r/LocalLLaMA/comments/1ieixlo/lots_of_3090_24gb_if_you_need/
false
false
self
0
null
cheap and reliable model for maths calculation?
3
I've been using o1, but it's too expensive. 4o-mini doesn't seem reliable (or am I wrong?), and DeepSeek is down. Need help, thanks.
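One cheap and reliable pattern, sketched below: don't trust any model's arithmetic directly; have it emit an expression, then evaluate that expression deterministically (sympy is used here as one option):

```python
# Evaluate a model-produced expression exactly instead of trusting the
# model's own arithmetic. The expression below is a pretend LLM output.
from sympy import sympify

llm_output = "(1234.56 * 0.0825) + 17/3"
result = sympify(llm_output)   # exact symbolic evaluation
print(result, "=", float(result))
```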
2025-01-31T16:44:26
https://www.reddit.com/r/LocalLLaMA/comments/1iej2bj/cheap_and_reliable_model_for_maths_calculation/
staypositivegirl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iej2bj
false
null
t3_1iej2bj
/r/LocalLLaMA/comments/1iej2bj/cheap_and_reliable_model_for_maths_calculation/
false
false
self
3
null
Is there an Excel sheet editor? Something like a chatbot that opens an Excel sheet and you can ask it to make changes to it
2
Is there an Excel file editor? Something like a chatbot that opens an Excel sheet, where you can ask it to make changes?
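For context, the plumbing such a chatbot needs is small. A sketch with openpyxl, where the hard-coded edit stands in for what the LLM would translate from a natural-language instruction (the filename is a placeholder):

```python
# Apply an instruction like "add 10% to every value in column B" to a
# workbook. The LLM's only job is turning the instruction into edits.
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")   # placeholder filename
ws = wb.active

for row in range(2, ws.max_row + 1):          # skip the header row
    cell = ws.cell(row=row, column=2)         # column B
    if isinstance(cell.value, (int, float)):
        cell.value = round(cell.value * 1.10, 2)

wb.save("report_edited.xlsx")
```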
2025-01-31T16:44:56
https://www.reddit.com/r/LocalLLaMA/comments/1iej2qw/is_there_a_excel_sheet_editor_something_like_a/
solomars3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iej2qw
false
null
t3_1iej2qw
/r/LocalLLaMA/comments/1iej2qw/is_there_a_excel_sheet_editor_something_like_a/
false
false
self
2
null