title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Currently the best model for coding | 1 | [removed] | 2025-02-05T09:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ii67jl/currently_the_best_model_for_coding/ | Psychological_Sea_99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii67jl | false | null | t3_1ii67jl | /r/LocalLLaMA/comments/1ii67jl/currently_the_best_model_for_coding/ | false | false | self | 1 | null |
Any guide on how to actually train a model using LM Studio? | 1 | [removed] | 2025-02-05T09:27:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ii6ay2/any_guide_on_how_to_actually_train_a_model_using/ | Interesting_Music464 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii6ay2 | false | null | t3_1ii6ay2 | /r/LocalLLaMA/comments/1ii6ay2/any_guide_on_how_to_actually_train_a_model_using/ | false | false | self | 1 | null |
Sam says, "people take his words without context" ; | 2 years ago CEO said 'it's totally hopless that startup with $10 million can compete with OpenAI' | 0 | 2025-02-05T09:42:49 | BidHot8598 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii6hxk | false | null | t3_1ii6hxk | /r/LocalLLaMA/comments/1ii6hxk/sam_says_people_take_his_words_without_context_2/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'efDiDuZcF-wQI7BudmLlznUenHUNXhT_33JwU302cdE', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/dhyqslh4kahe1.jpeg?width=108&crop=smart&auto=webp&s=fdeb609d71223419983e7561b346945064e9c8ad', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/dhyqslh4kahe1.jpeg?width=216&crop=smart&auto=webp&s=a8eda59f83b5725ad444cc33d7e572e03a289adc', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/dhyqslh4kahe1.jpeg?width=320&crop=smart&auto=webp&s=ff37ded54b5899850fa5611a0d43eeeef53d39f6', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/dhyqslh4kahe1.jpeg?width=640&crop=smart&auto=webp&s=0e8ff4d58fa465ad53e5289a83f71365490e1e5a', 'width': 640}, {'height': 1279, 'url': 'https://preview.redd.it/dhyqslh4kahe1.jpeg?width=960&crop=smart&auto=webp&s=498406eea6f823d6143aaa0336909b6017323e69', 'width': 960}, {'height': 1439, 'url': 'https://preview.redd.it/dhyqslh4kahe1.jpeg?width=1080&crop=smart&auto=webp&s=3a8765158c15d9ff3a1cff29e2dd80fd1b85f9d4', 'width': 1080}], 'source': {'height': 5461, 'url': 'https://preview.redd.it/dhyqslh4kahe1.jpeg?auto=webp&s=e72f33e6440b439d16a14e25610caf36c142aa19', 'width': 4096}, 'variants': {}}]} |
Vibe comparison of frontier models | 0 | 2025-02-05T09:45:02 | https://x.com/jcarlosroldan/status/1887069730315403640 | JCx64 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1ii6izo | false | null | t3_1ii6izo | /r/LocalLLaMA/comments/1ii6izo/vibe_comparison_of_frontier_models/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'l66c2GJ1wk8YgwEkzMPUDkVO6CtZ_sb3kNkdQkpz65I', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/lIH5b-KWYVLUjziJMSEiySBNT5N2HylGFA2vH8bSM7w.jpg?width=108&crop=smart&auto=webp&s=d8636f16c4da840966d26918fe75687b8522ca0a', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/lIH5b-KWYVLUjziJMSEiySBNT5N2HylGFA2vH8bSM7w.jpg?auto=webp&s=0de1dcf6eef6b66aaf3a23be5ded54852f5cf168', 'width': 200}, 'variants': {}}]} |
Qihoo 360 and DeepSeek Deeply Collaborate to Launch QAX Security Large Model | 0 | Recently, Qihoo 360 Technology Co., Ltd. announced that it has completed a comprehensive deep integration with DeepSeek, marking further application and development of its QAX security large model. Qihoo 360 stated that this security large model has been deeply integrated across multiple key areas, covering important scenarios such as threat assessment, security operations, penetration testing and vulnerability management, as well as identity and access management.
In the current context of increasingly severe information security challenges and the growing complexity of cyberattack methods, this initiative by Qihoo 360 is undoubtedly a proactive response to combat cyber threats. Through collaboration with DeepSeek, Qihoo 360 aims to leverage the advantages of the QAX security large model to provide more comprehensive and efficient solutions across various security domains. Specifically, the QAX security large model will play a crucial role in the following areas:
Firstly, in threat assessment and security operations, QAX can analyze and identify potential security threats in real-time, providing enterprises with rapid response capabilities to minimize the impact of security incidents. Secondly, in penetration testing and vulnerability management, QAX will assist enterprises in timely detection of system vulnerabilities, enhancing protective capabilities and ensuring the security of enterprise information. Additionally, in areas such as identity and access management, phishing protection, and malware defense, QAX will also provide strong support to help enterprises fend off increasingly rampant cyberattacks.
In terms of security training and supply chain security, the introduction of QAX can enhance employee security awareness, strengthen the overall security culture of the enterprise, and provide effective safeguards for supply chain security. These measures will significantly improve the security defense capabilities of enterprises, helping them remain competitive.
The collaboration between Qihoo 360 and DeepSeek is not only a combination of technology but also a forward-looking layout for future security situations. The deep integration of the QAX security large model will provide more enterprises with smarter and more efficient security services, further promoting the development of the cybersecurity industry.
Original Text: [https://www.aibase.com/news/15086](https://www.aibase.com/news/15086) | 2025-02-05T10:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ii6rpi/qihoo_360_and_deepseek_deeply_collaborate_to/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii6rpi | false | null | t3_1ii6rpi | /r/LocalLLaMA/comments/1ii6rpi/qihoo_360_and_deepseek_deeply_collaborate_to/ | false | false | self | 0 | null |
Does inference itself require any vram beyond the model weights? | 0 | Are the model weights the only thing that needs to be in VRAM for GPU inference, or is there temporary data that isn't just kept in registers but also needs to be stored in VRAM during inference? | 2025-02-05T10:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ii750y/does_inference_itself_require_any_vram_beyond_the/ | MarinatedPickachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii750y | false | null | t3_1ii750y | /r/LocalLLaMA/comments/1ii750y/does_inference_itself_require_any_vram_beyond_the/ | false | false | self | 0 | null |
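Short answer to the question above: yes. Besides the weights, the KV cache (which grows linearly with context length) plus transient activations and framework overhead all live in VRAM. A back-of-the-envelope sketch; the Llama-3-8B-like dimensions are illustrative, not tied to any particular checkpoint:

```python
# Rough VRAM accounting for transformer decoding: weights + KV cache.
# Activations and allocator/framework overhead add a bit more on top.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    # Keys and values (the leading 2x), one cache entry per layer per token.
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# Illustrative Llama-3-8B-like config: 32 layers, 8 KV heads (GQA), head_dim 128.
weights_gb = 8e9 * 2 / 1e9                      # ~16 GB of weights at fp16
kv_gb = kv_cache_bytes(32, 8, 128, 8192) / 1e9  # ~1.1 GB at 8k context, fp16

print(f"weights ~{weights_gb:.0f} GB, KV cache ~{kv_gb:.1f} GB at 8k tokens")
```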
Anybody using Turing Pi Cluster for local LLM inference? | 1 | [removed] | 2025-02-05T10:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ii77yi/anybody_using_turing_pi_cluster_for_local_llm/ | aram444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii77yi | false | null | t3_1ii77yi | /r/LocalLLaMA/comments/1ii77yi/anybody_using_turing_pi_cluster_for_local_llm/ | false | false | self | 1 | null |
Where can I find comparisons of quantized/distilled models. The leader board always show only the "max" version. | 1 | [removed] | 2025-02-05T10:45:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7blj/where_can_i_find_comparisons_of/ | Pristine-Yak-4242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7blj | false | null | t3_1ii7blj | /r/LocalLLaMA/comments/1ii7blj/where_can_i_find_comparisons_of/ | false | false | self | 1 | null |
Does Ollama or something similar has an alternative to vector store? | 0 | Basically what title says. I want to try a 8B Model but I need smth similar to Vector Store or something that it needs to have a "memory" to "remember" similar to OpenAI's ChatGPT API Models. | 2025-02-05T10:47:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7cfm/does_ollama_or_something_similar_has_an/ | thecowmilk_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7cfm | false | null | t3_1ii7cfm | /r/LocalLLaMA/comments/1ii7cfm/does_ollama_or_something_similar_has_an/ | false | false | self | 0 | null |
Introducing Hormoz 8B | 1 | [removed] | 2025-02-05T10:48:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7crz/introducing_hormoz_8b/ | Haghiri75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7crz | false | null | t3_1ii7crz | /r/LocalLLaMA/comments/1ii7crz/introducing_hormoz_8b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mcVO1JmpRis8cPPZMMjgEB9DxjI_wWFXCPiw7aOP8-Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=108&crop=smart&auto=webp&s=3e36ca2eee5e019615b11ec524e87044e500e0c3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=216&crop=smart&auto=webp&s=ca1e3176d4106111d5dcd48a31e253f17d6353c5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=320&crop=smart&auto=webp&s=a5ecc77ee9a935d9f03fdbe393294c251ff2e3b9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=640&crop=smart&auto=webp&s=9458a00cd04761418de538aa5186650c870add4c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=960&crop=smart&auto=webp&s=a1706977afdf83c4af893bbeea3a1e79e9200916', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?width=1080&crop=smart&auto=webp&s=144805e64f44a56dfef711412b05b234870080c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VMUzLIAD3VCEOTHPdFVyCcb94Sb_niW0sCYqXQWEi4g.jpg?auto=webp&s=57b78532278755a08e34ec2f175335c71076b6dd', 'width': 1200}, 'variants': {}}]} |
Upgrading 3090's from 24GB to 48GB | 137 | I'm able to do this conversion, but the raw parts cost is $550 for all the new VRAM.
Plus labor, is that even worth it anymore for others to pay for the upgrade if I offer it as a service? | 2025-02-05T10:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7dfq/upgrading_3090s_from_24gb_to_48gb/ | CertainlyBright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7dfq | false | null | t3_1ii7dfq | /r/LocalLLaMA/comments/1ii7dfq/upgrading_3090s_from_24gb_to_48gb/ | false | false | self | 137 | null |
When a model puts a smiley at the end of their response | 0 | 2025-02-05T11:05:48 | OvdjeZaBolesti | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii7lgz | false | null | t3_1ii7lgz | /r/LocalLLaMA/comments/1ii7lgz/when_a_model_puts_a_smiley_at_the_end_of_their/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'krIrG-mu7Kg2CPrAGBxH39cH3j-YdzoLqXrh0O3p6Eo', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ofb8br20zahe1.jpeg?width=108&crop=smart&auto=webp&s=14bc109fdf4bfc508117b2a0bf0e0f569e5800fc', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ofb8br20zahe1.jpeg?width=216&crop=smart&auto=webp&s=4e115caccdd0d5de70432557b1dd1bbe0ae688e4', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ofb8br20zahe1.jpeg?width=320&crop=smart&auto=webp&s=4e5218d5542d5186e3d7bb5e1d200ee52e6b2423', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ofb8br20zahe1.jpeg?width=640&crop=smart&auto=webp&s=885b4d51595f2f03691c7ffa4e21d0c649d514ff', 'width': 640}], 'source': {'height': 900, 'url': 'https://preview.redd.it/ofb8br20zahe1.jpeg?auto=webp&s=f91ec9f249ca47b5b0cdec972adfeaab04c348f0', 'width': 900}, 'variants': {}}]} |
[HotTake] QwQ is also in preview. | 19 | I haven't really checked the info, but hear me out, as these shouldn't be too far off from the actual figures. I'll correct them after work.
DeepSeek R1
Preview release: late Nov '24
Size: 671B
Preview score: ~101% of o1 (?)
Release: late Jan '25

QwQ
Preview release: early Dec '24
Size: 32B
Preview score: ~85% of o1 (?)
Release: ?????
Not only that, look at the size difference.
Imo,
if there are 1 in 1,000,000 people who can run 671B on their PC,
There are 1 in 1,000 people who can run 32B on their PC.
I'm amazed by the idea of everyone being able to run an OpenAI-flagship-worthy AI in their house. | 2025-02-05T11:12:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7oue/hottake_qwq_is_also_in_preview/ | MlNSOO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7oue | false | null | t3_1ii7oue | /r/LocalLLaMA/comments/1ii7oue/hottake_qwq_is_also_in_preview/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=108&crop=smart&auto=webp&s=4f39a07c027d6036b98ac9f4ba405a8d11549aa3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=216&crop=smart&auto=webp&s=77d81d7dfb3f0dc0281915e155e87541e4069970', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=320&crop=smart&auto=webp&s=e7e73cd0eb037665260b5368de787bf4d34a0086', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=640&crop=smart&auto=webp&s=aa0a8cd368da789c05b75a810cf0a1e21413b8f2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=960&crop=smart&auto=webp&s=fb05999616d9a4f01271acab1427db387e6f4095', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=1080&crop=smart&auto=webp&s=6aea590aabdd6f82e13381ed9c97788ecddef016', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?auto=webp&s=bb5327c204c8ce6c5773c7700d887e31427085b4', 'width': 1200}, 'variants': {}}]}
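To put numbers on the size gap: the usual rule of thumb is weight bytes ≈ params × bits ÷ 8, ignoring KV cache and runtime overhead, so treat these as lower bounds:

```python
# Rough weight-memory math behind "who can run what": params * bits / 8.
# Ignores KV cache and runtime overhead, so these are lower bounds.

def weight_gb(params_b: float, bits: int) -> float:
    return params_b * 1e9 * bits / 8 / 1e9

for name, params in [("DeepSeek R1 (671B)", 671), ("QwQ (32B)", 32)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weight_gb(params, bits):.0f} GB")

# QwQ at 4-bit (~16 GB) fits a single 24 GB consumer GPU;
# R1 at 4-bit (~336 GB) needs a multi-GPU server or enormous RAM.
```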
I created a website that tracks AI regulations around the world | 180 | To help you stay on top of what governments are doing on AI, I created an interactive world map that tracks AI regulatory and policy developments around the world. Click on a region (or use the search bar) to view its profile. The site is updated regularly, and new regions will be added.
Free to access. No login required. This is for the community :)
[https://www.techieray.com/GlobalAIRegulationTracker](https://www.techieray.com/GlobalAIRegulationTracker) | 2025-02-05T11:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7qfy/i_created_a_website_that_tracks_ai_regulations/ | techie_ray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7qfy | false | null | t3_1ii7qfy | /r/LocalLLaMA/comments/1ii7qfy/i_created_a_website_that_tracks_ai_regulations/ | false | false | self | 180 | {'enabled': False, 'images': [{'id': 'E5rqJDLvULid0wdw9xvT0sQB9-9Xgj3N9V1UBQdTs34', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R-wPebHWyXpnarzjQuXk_KEx9T7Uawehp0dfTgV2b3Q.jpg?width=108&crop=smart&auto=webp&s=cf764abadad8569b47da2e6caa2b0def483052fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/R-wPebHWyXpnarzjQuXk_KEx9T7Uawehp0dfTgV2b3Q.jpg?width=216&crop=smart&auto=webp&s=bd8b62f13c6d479a92e44f47b84e63a3fcbe2541', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/R-wPebHWyXpnarzjQuXk_KEx9T7Uawehp0dfTgV2b3Q.jpg?width=320&crop=smart&auto=webp&s=fd58ebb50ce1af5233a69b5c32bb319031a66740', 'width': 320}, {'height': 321, 'url': 'https://external-preview.redd.it/R-wPebHWyXpnarzjQuXk_KEx9T7Uawehp0dfTgV2b3Q.jpg?width=640&crop=smart&auto=webp&s=b0b36657d7d3cae930c21c05fadc80a1e4912ed9', 'width': 640}, {'height': 481, 'url': 'https://external-preview.redd.it/R-wPebHWyXpnarzjQuXk_KEx9T7Uawehp0dfTgV2b3Q.jpg?width=960&crop=smart&auto=webp&s=ce44edcef819dad93fb996a813743a015a077d66', 'width': 960}, {'height': 541, 'url': 'https://external-preview.redd.it/R-wPebHWyXpnarzjQuXk_KEx9T7Uawehp0dfTgV2b3Q.jpg?width=1080&crop=smart&auto=webp&s=8f4c08aee7cf76f0ca7baafde338c9337f1f4c92', 'width': 1080}], 'source': {'height': 852, 'url': 'https://external-preview.redd.it/R-wPebHWyXpnarzjQuXk_KEx9T7Uawehp0dfTgV2b3Q.jpg?auto=webp&s=0d4c2b4f4a8e120384e957de474dcbc709eba2c8', 'width': 1698}, 'variants': {}}]} |
How to host / get the API of Dolphin 3 | 1 | Might be an amateurish question, but I want to use Dolphin 3 in one of my webapps (uncensored chat). Can you guide me on the best approach to use the model: either via an API (if it's available on Replicate, DeepInfra, or a similar hosted platform) or by hosting it myself.
Just a background: I've been using APIs until now and haven't hosted a model yet. But I don't mind going that route if it's feasible. | 2025-02-05T11:21:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7t7m/how_to_host_get_the_api_of_dolphin_3/ | aashishpahwa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7t7m | false | null | t3_1ii7t7m | /r/LocalLLaMA/comments/1ii7t7m/how_to_host_get_the_api_of_dolphin_3/ | false | false | self | 1 | null |
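One practical note on the hosting question: most self-host servers (vLLM, llama.cpp's server, LM Studio, Ollama) and most hosted providers expose an OpenAI-compatible endpoint, so the webapp code can stay identical either way. A minimal sketch; the base URL and model name are placeholders for whatever your deployment registers:

```python
from openai import OpenAI

# Point this at any OpenAI-compatible server; URL and model are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="dolphin-3",  # whatever name your server registered the model under
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```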
Speculation on hardware and model requirements for a local PA Agent | 4 | Hi All,
I've been pondering on the convergence of smarter, smaller, local LLM's and the coming low cost, low power consumer hardware that can run them. I was really looking to find out if there were any details about the expected memory bandwidth of Nvidia DIGITS, and it seems we only have guesses at the moment that it will be between 275-500GB/s.
At the same time, I've been experimenting with Mistrals new Small V3 model which is a good instruct and function calling model that comes in at 24B parameters.
This got me thinking about what we would really need to have a reasonably capable personal assistant agent running locally, and the value it has against the hardware and running costs. If DIGITS does come in around the 500GB/s range, then a 24B model @ 8-bit might be hitting around 20 tps. While it's not particularly speedy, I think that gets to a decent level where, as an autonomous agent managing various tasks for a person/household, it's approaching the level it would need to be at.
In the past I've hired Virtual Assistants to do various things, and even at the lower cost (with people that weren't particularly great) it still cost $200+/month. My guess is something like DIGITS would be \~$20/month to power.
With 128GB memory on the DIGITS, It seems that you could fit on a strong small model, TTS and STT, hotswap LoRa's, have decent context length and few streams being processed in parallel.
While each bit doesn't quite feel like it's quite there yet, it does feel like it is all converging, and it feels pretty close.
So, I guess the discussion I wanted to open up is: how close do you think we are to useful, cost-effective local personal assistant agents?
Do you think that small models like Mistral Small V3 are too small and we need at least a 70B or 123B model to get the smarts?
Does 500GB/s memory bandwidth get us close to something usable, or do we need to be much higher?
Is a pair of 5090s the way to go? Much faster inference speed, but half the memory, more expensive to buy, and much more power hungry?
So, are we there yet, do we need faster hardware, stronger models, or all of the above?
It would be great to hear your thoughts on where you feel the biggest limitations are at the moment. | 2025-02-05T11:28:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ii7wsj/speculation_on_hardware_and_model_requirements/ | StevenSamAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii7wsj | false | null | t3_1ii7wsj | /r/LocalLLaMA/comments/1ii7wsj/speculation_on_hardware_and_model_requirements/ | false | false | self | 4 | null
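For reference, the ~20 tps guess above falls out of the standard bandwidth-bound rule of thumb: decode speed ≈ memory bandwidth ÷ bytes read per token, which for a dense model is roughly its weight footprint. A quick sketch of that arithmetic (it ignores compute, caches, and batching, so read it as an upper bound):

```python
# Bandwidth-bound decode estimate: tokens/s ~= bandwidth / bytes-per-token.
# For a dense model, bytes-per-token ~= total weight bytes (read once per token).

def decode_tps(bandwidth_gbps: float, params_b: float, bits: int) -> float:
    model_gb = params_b * bits / 8  # weight footprint in GB
    return bandwidth_gbps / model_gb

# Mistral-Small-like 24B at 8-bit on the rumored 275-500 GB/s machine:
for bw in (275, 500):
    print(f"{bw} GB/s -> ~{decode_tps(bw, 24, 8):.0f} tok/s")
# Prints roughly 11 and 21 tok/s, matching the ballpark in the post.
```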
DeepSeek just released an official demo for DeepSeek VL2 Small - It's really powerful at OCR, text extraction and chat use-cases (Hugging Face Space) | 775 | Space: [https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small](https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small)
From Vaibhav (VB) Srivastav on X: [https://x.com/reach\_vb/status/1887094223469515121](https://x.com/reach_vb/status/1887094223469515121) | 2025-02-05T11:40:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ii82yg/deepseek_just_released_an_official_demo_for/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii82yg | false | null | t3_1ii82yg | /r/LocalLLaMA/comments/1ii82yg/deepseek_just_released_an_official_demo_for/ | false | false | self | 775 | {'enabled': False, 'images': [{'id': 'pSIfwe8Dzu519XwGpY85zGpIFZtaBtH3AFmwhVfBBfc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/92rXTj5ehJPT3hL7b_fsqKGg1acGApZ2ZUszfosq03U.jpg?width=108&crop=smart&auto=webp&s=ec188bf06266fae159983dcc7e3f89c79ea4e728', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/92rXTj5ehJPT3hL7b_fsqKGg1acGApZ2ZUszfosq03U.jpg?width=216&crop=smart&auto=webp&s=61ebfc1f5736f4f8e316ed91e419ef4883b7a00c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/92rXTj5ehJPT3hL7b_fsqKGg1acGApZ2ZUszfosq03U.jpg?width=320&crop=smart&auto=webp&s=0449bcaaf77e6345b1e96f4e674fd71b129f11ab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/92rXTj5ehJPT3hL7b_fsqKGg1acGApZ2ZUszfosq03U.jpg?width=640&crop=smart&auto=webp&s=a2535161571239c4d187b345af559d974b2e2c12', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/92rXTj5ehJPT3hL7b_fsqKGg1acGApZ2ZUszfosq03U.jpg?width=960&crop=smart&auto=webp&s=168c50790573786665d9c2bf6135ce33f28e99dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/92rXTj5ehJPT3hL7b_fsqKGg1acGApZ2ZUszfosq03U.jpg?width=1080&crop=smart&auto=webp&s=b3272f02028de3386530c412157e5051d2c3a3c1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/92rXTj5ehJPT3hL7b_fsqKGg1acGApZ2ZUszfosq03U.jpg?auto=webp&s=21c99790015e1777024ea847af84e89e786c22e0', 'width': 1200}, 'variants': {}}]} |
ROPE Scaling with Ollama for "Needle in haystack" problems | 1 | Hi all, I'm a complete beginner to local LLMs.
My use case requires a large context to be prepended to the prompt; however, I have found that most models, including large-parameter ones, aren't very accurate in the output.
After doing some research, I came across models that are specifically trained to handle very large contexts and discussions of the "needle in a haystack" problem. Two models stood out for this task based on some googling: `Nous-Capybara` and `Command-R`.
I also came across some posts suggesting increasing the ROPE scaling, and I want to increase this ROPE value in ollama-js. But it seems very limited.
My questions are, will using Capy/Command-R with Ollama be enough for my use case? Or will I need to install `llama.cpp` for more advanced tuning? Also when I increase the `ctx_size` in ollama, will it automatically adjust the ROPE scaling to cater for this larger context?
Any guidance is appreciated. | 2025-02-05T11:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ii832t/rope_scaling_with_ollama_for_needle_in_haystack/ | geminimini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii832t | false | null | t3_1ii832t | /r/LocalLLaMA/comments/1ii832t/rope_scaling_with_ollama_for_needle_in_haystack/ | false | false | self | 1 | null |
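On the ollama-js question: context and RoPE settings are passed as model `options`, and the same keys work in ollama-js and over the plain REST API. `num_ctx` is a standard option; `rope_frequency_base`/`rope_frequency_scale` appear in Ollama's Modelfile parameter docs, but whether a given model/backend honors them varies, so treat this sketch as something to verify against your Ollama version. Note also that raising `num_ctx` does not by itself retune RoPE; models with native long context (e.g. Command-R) are usually the safer bet.

```python
import requests

long_context = "<your large prepended document here>"

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "command-r",  # or a Nous-Capybara GGUF you've pulled
        "prompt": long_context + "\n\nQuestion: where is the needle?",
        "stream": False,
        "options": {
            "num_ctx": 32768,  # context window for this request
            # Hedged: documented as Modelfile parameters, support may vary:
            # "rope_frequency_base": 1000000,
            # "rope_frequency_scale": 0.5,
        },
    },
)
print(resp.json()["response"])
```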
Training an LLM on custom data set | 1 | [removed] | 2025-02-05T12:11:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ii8kdf/training_an_llm_on_custom_data_set/ | Sazid-Kabir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii8kdf | false | null | t3_1ii8kdf | /r/LocalLLaMA/comments/1ii8kdf/training_an_llm_on_custom_data_set/ | false | false | self | 1 | null |
LM Studio can't detect model at all! | 1 | I have downloaded [deepseek-r1-qwen-2.5-32B-ablated-Q6\_K.gguf](https://huggingface.co/bartowski/deepseek-r1-qwen-2.5-32B-ablated-GGUF/blob/main/deepseek-r1-qwen-2.5-32B-ablated-Q6_K.gguf) manually because of my bad internet, but LM Studio can't seem to detect it, so I can't load it at all. Any help? | 2025-02-05T12:14:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ii8lpz/lm_studio_cant_detect_model_at_all/ | ussdsse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii8lpz | false | null | t3_1ii8lpz | /r/LocalLLaMA/comments/1ii8lpz/lm_studio_cant_detect_model_at_all/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RqyI2uazuQ_VYENEBgi_htllfQ70_SSA6O3fwBtEj-E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DEZVSYQs86ejlj7cEmp9vxI-sEKAbrOKPhhR2pbq_Cs.jpg?width=108&crop=smart&auto=webp&s=eaa95e998e42c53416c0c6a3e2c7d55362f450b1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DEZVSYQs86ejlj7cEmp9vxI-sEKAbrOKPhhR2pbq_Cs.jpg?width=216&crop=smart&auto=webp&s=870e6da899f6f4b94ab604782d407184d500c28e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DEZVSYQs86ejlj7cEmp9vxI-sEKAbrOKPhhR2pbq_Cs.jpg?width=320&crop=smart&auto=webp&s=0cc38e5c433387cd6f197a1d8258ea02b05ffbce', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DEZVSYQs86ejlj7cEmp9vxI-sEKAbrOKPhhR2pbq_Cs.jpg?width=640&crop=smart&auto=webp&s=3362cbbb9f35e52e80558005be6a94d0c9b31b74', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DEZVSYQs86ejlj7cEmp9vxI-sEKAbrOKPhhR2pbq_Cs.jpg?width=960&crop=smart&auto=webp&s=cb04f488abe6ade4dc35f6be6867a299f06f968e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DEZVSYQs86ejlj7cEmp9vxI-sEKAbrOKPhhR2pbq_Cs.jpg?width=1080&crop=smart&auto=webp&s=bddd3206aeaa97edaf5e3b48f4f415067bb8e96f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DEZVSYQs86ejlj7cEmp9vxI-sEKAbrOKPhhR2pbq_Cs.jpg?auto=webp&s=2071d34c77881c7d16242e5224ac00f1a9d2b271', 'width': 1200}, 'variants': {}}]}
Seeking Advice on Building a Personalized Educational AI Assistant for Students in Education Domain | 0 | Hey r/LocalLLaMA community, we are developing a personalized AI assistant tailored for education (open-source LLMs like DeepSeek/LLaMA), aimed at students aged 10 to 18 in schools. **The goal is to create an assistant that adapts to each student's learning needs and preferences, enhancing their educational experience.** We are focusing on implementing personalization while ensuring safety and privacy for young users. Could you share advice on **best practices, tools, or frameworks for building such an assistant that can scale to millions of students?** **Insights on handling personalization in an educational context would be greatly appreciated.** Thank you! | 2025-02-05T12:46:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ii950j/seeking_advice_on_building_a_personalized/ | shivarajramgiri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii950j | false | null | t3_1ii950j | /r/LocalLLaMA/comments/1ii950j/seeking_advice_on_building_a_personalized/ | false | false | self | 0 | null
Any small LLM (<7B) surpass GPT-3.5 turbo in 2025? | 1 | [removed] | 2025-02-05T12:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ii9b55/any_small_llm_7b_surpass_gpt35_turbo_in_2025/ | TeacherKitchen960 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii9b55 | false | null | t3_1ii9b55 | /r/LocalLLaMA/comments/1ii9b55/any_small_llm_7b_surpass_gpt35_turbo_in_2025/ | false | false | self | 1 | null |
Does anyone know where "Dr. Elara Voss" and her "Quantum Lab" come from? | 1 | For the last two days I've been playing around with DeepSeek on my PC. A lot of my testing was with both interactive and non-interactive stories.
For some reason, a character called Dr. Elara Voss, working in a quantum lab on advanced AI, keeps popping up. I have no idea why. It happens across different DeepSeek models, from 7b to 32b, in different chats and with wildly different prompts and system prompts. Even in the middle of already-running stories. Does anyone know what's behind this? Or am I the only one seeing it?
And even weirder, the only references of her name that I was able to find online is from stories that others seem to also have generated using DeepSeek. | 2025-02-05T13:01:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ii9er9/does_anyone_know_where_dr_elara_voss_and_her/ | Da_Flix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii9er9 | false | null | t3_1ii9er9 | /r/LocalLLaMA/comments/1ii9er9/does_anyone_know_where_dr_elara_voss_and_her/ | false | false | self | 1 | null |
2B model beats 72B model | 223 | https://github.com/Deep-Agent/R1-V
The 2B model outperforms the 72B model in both effectiveness and out-of-distribution (OOD) robustness for vision language models, surpassing its 72B-parameter counterpart in generalization tests.
Training took only 100 steps (vs. thousands in conventional methods): 30 minutes on 8 A100 GPUs, $2.62 total cost.
R1-V is released and fully open-sourced.
| 2025-02-05T13:10:56 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ii9lab | false | null | t3_1ii9lab | /r/LocalLLaMA/comments/1ii9lab/2b_model_beats_72b_model/ | false | false | 223 | {'enabled': True, 'images': [{'id': '8yfIYx3G3fNklMPFij1s3vFTaYXjNY7gckU4J7TXEXI', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/nxx7b0kblbhe1.jpeg?width=108&crop=smart&auto=webp&s=b88bc8e34ae71045f9ee4c9df83aea936dad528c', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/nxx7b0kblbhe1.jpeg?width=216&crop=smart&auto=webp&s=cf5baa23f0302cc354e96535223563d8215f8970', 'width': 216}, {'height': 383, 'url': 'https://preview.redd.it/nxx7b0kblbhe1.jpeg?width=320&crop=smart&auto=webp&s=2370d90c4eb0aac4e6a601bdd00be93b03f43415', 'width': 320}, {'height': 767, 'url': 'https://preview.redd.it/nxx7b0kblbhe1.jpeg?width=640&crop=smart&auto=webp&s=b7a412b056534d115469db02236cac1fd22d5d1a', 'width': 640}], 'source': {'height': 806, 'url': 'https://preview.redd.it/nxx7b0kblbhe1.jpeg?auto=webp&s=bd37317e06417dd4f237389538d72596fcade949', 'width': 672}, 'variants': {}}]} |
Is Twilio still the best way to do AI Voice call apps? | 0 | I'd like to build an app that calls a phone number and uses an AI voice model to have a chat with the recipient of the call.
1) Is Twilio still the best option to make phone calls? | 2025-02-05T13:12:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ii9mbo/is_twilio_still_the_best_way_to_do_ai_voice_call/ | dirtyring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii9mbo | false | null | t3_1ii9mbo | /r/LocalLLaMA/comments/1ii9mbo/is_twilio_still_the_best_way_to_do_ai_voice_call/ | false | false | self | 0 | null |
Why can we run R1 (671B) locally, but can't Mistral Large (123B) on the same hardware? | 1 | [removed] | 2025-02-05T13:21:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ii9s90/why_can_we_run_r1_671b_locally_but_cant_mistral/ | Komd23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii9s90 | false | null | t3_1ii9s90 | /r/LocalLLaMA/comments/1ii9s90/why_can_we_run_r1_671b_locally_but_cant_mistral/ | false | false | self | 1 | null |
What do people use their AI for? | 33 | I've just gotten started with LLMs and have Kobold CPP and Silly Tavern running on my PC to use for RP. I'm quite enjoying it, but I know the AI can be used for all sorts of things. Due to a disability, reading through pages and pages of stuff about it has left me more confused than ever. What sorts of things can it actually be used for nowadays? I've heard of everything from coding to it making money for people to videos and a whole lot more. What is the truth of the matter and how do you use it in your day to day lives. I am always looking to make things easier for myself due to my disabilities. Do you think there's specific things I could use it for that would help me? Please keep things simple in the explanation, not quite ELI5 level but probably not far off. I'm very out of the loop.
Thank you 😊 | 2025-02-05T13:24:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ii9tzt/what_do_people_use_their_ai_for/ | salixfire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ii9tzt | false | null | t3_1ii9tzt | /r/LocalLLaMA/comments/1ii9tzt/what_do_people_use_their_ai_for/ | false | false | self | 33 | null |
Which AI model should I use for storytelling & TTRPGs? Is upgrading to 64GB RAM worth it? | 1 | [removed] | 2025-02-05T13:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iia03b/which_ai_model_should_i_use_for_storytelling/ | ataquedenervios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iia03b | false | null | t3_1iia03b | /r/LocalLLaMA/comments/1iia03b/which_ai_model_should_i_use_for_storytelling/ | false | false | self | 1 | null |
GRPO (the method used by DeepSeek) will make the model worse than the original if you make a mistake in the reward function. | 90 | 2025-02-05T13:47:13 | dahara111 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iiaa1r | false | null | t3_1iiaa1r | /r/LocalLLaMA/comments/1iiaa1r/grpo_the_method_used_by_deepseek_will_be_worse/ | false | false | 90 | {'enabled': True, 'images': [{'id': 'M13COnBPvVvLJ75eYa2v0MkXd4hKpSAnXr0cWoot_jY', 'resolutions': [{'height': 150, 'url': 'https://preview.redd.it/vt6h2sj2rbhe1.jpeg?width=108&crop=smart&auto=webp&s=2e187cb04c1320b85b23cedd5800eb8e788472f2', 'width': 108}, {'height': 301, 'url': 'https://preview.redd.it/vt6h2sj2rbhe1.jpeg?width=216&crop=smart&auto=webp&s=ddc6a65725ee7eec76943393f6036dc3b84a2409', 'width': 216}, {'height': 446, 'url': 'https://preview.redd.it/vt6h2sj2rbhe1.jpeg?width=320&crop=smart&auto=webp&s=10a4c4476b0b3303072dc9e8ed1aa00784dc242e', 'width': 320}], 'source': {'height': 697, 'url': 'https://preview.redd.it/vt6h2sj2rbhe1.jpeg?auto=webp&s=e77d7ac668b1277dd595972b6734d80bd149864e', 'width': 500}, 'variants': {}}]}
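The failure mode in the title is easy to reproduce: GRPO normalizes rewards within each group of sampled completions, so only relative differences matter, and a reward function that scores the wrong thing still produces a strong gradient, just toward the wrong behavior. A minimal sketch of a reward function with exactly that bug; the signature loosely follows TRL's GRPOTrainer convention (completions plus dataset columns as kwargs), so check your version's docs:

```python
import re

# A buggy GRPO reward: the format bonus outweighs correctness, so the policy
# learns to emit <think> tags rather than right answers.

def buggy_reward(completions, answer, **kwargs):
    scores = []
    for completion, gold in zip(completions, answer):
        text = completion if isinstance(completion, str) else completion[0]["content"]
        score = 0.0
        if re.search(r"<think>.*?</think>", text, re.DOTALL):
            score += 1.0   # BUG: format-only reward dominates...
        if str(gold) in text:
            score += 0.5   # ...while correctness is worth less.
        scores.append(score)
    return scores
```

Weighting correctness well above the format bonus, or gating the bonus on a correct answer, avoids the degradation the post describes.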
We have to fight back now. | 389 | Open-source innovation is the lifeblood of American progress, and any attempt to lock it down is a threat to our future. Banning open-source AI under harsh penalties will only stifle the creativity, transparency, and collaboration that have fueled our tech breakthroughs for decades. When anyone can build on and improve each other’s work, we all win—especially in the race for a safer, smarter tomorrow.
We need to stand together for a future where ideas flow freely and innovation isn’t held hostage. Embracing open-source means a stronger, more competitive American tech ecosystem that benefits everyone, from citizens to startups to established giants. The open road is the best road—let’s keep it that way.
The only thing that these people understand is money. So, follow the money. Here are some of Hawley’s contributors to get you started. You have a right to have your voice be heard. Let them hear.
# Smead Capital Management
* **Mailing Addresses & Phone Numbers:**
* *Phoenix Office:* 2502 E. Camelback Rd, Suite 210, Phoenix, AZ 85016 Phone: 602.889.3660
* *Jersey City Office:* 30 Montgomery St, Suite 920, Jersey City, NJ 07302 Phone: 484.535.5121
* *London Office (UK):* 18th Floor, 100 Bishopsgate, London EC2N 4AG Phone: +44 (0)20.8819.6490
* **Sales Desk (US):** 877.701.2883
* **Verified Email:** [[email protected]](mailto:[email protected]) *(Additional verified contact: Cole Smead can be reached at [email protected].)*
# Indeck Energy Services
* **Mailing Address & Phone Number:**
* 600 N. Buffalo Grove Road, Suite 300, Buffalo Grove, IL 60089
* Phone: 847-520-3212
* **Verified Email:** [[email protected]](mailto:[email protected])
# Peck Enterprises, LLC
* **Mailing Address & Phone Number (as listed via Swagelok Alabama):**
* 7290 Cahaba Valley Rd, Birmingham, AL 35242
* Phone: 205.988.4812
* **Verified Email:** [[email protected]](mailto:[email protected]) *(Note: an alternate email, Roderick Douglass at [[email protected]](mailto:[email protected]), was found on a third-party directory, but the verified contact on the official Swagelok page is used here.)*
# Northwestern Mutual
* **Mailing Address & Phone Number:**
* 3601 North Point Parkway, Glendale, WI 53217
* Phone: 800-225-5945
* **Verified Email:** *(None published – inquiries are typically directed through the website’s contact form.)*
# Prime Inc
* **Mailing Address & Phone Number:**
* 4201 E. Kentucky Ave, Lincoln, NE 68504
* Phone: 800-866-2747
* **Verified Email:** *(No verified email found on the official website; please use the website contact form.)*
# Veterans United Home Loans
* **Mailing Address & Phone Number:**
* 1701 Wynnton Road, Suite 500, Columbia, MD 21046
* Phone: 855-852-4189
* **Verified Email:** [[email protected]](mailto:[email protected])
# Diamond Pet Foods
* **Mailing Address & Phone Number:**
* 1200 West Kemper Road, Eagan, MN 55122
* Phone: 952-787-3400
* **Verified Email:** [[email protected]](mailto:[email protected])
# Leggett & Platt
* **Mailing Address & Phone Numbers:**
* One Leggett Parkway, Carson, CA 90746
* Customer Care: 800-232-8534; Corporate: (562) 467-2000
* **Verified Email:** *(No verified email address was confirmed on their official site.)*
# Opko Health
* **Mailing Address & Phone Numbers:**
* One Opko Way, Miami, FL 33131
* Phone: 800-543-4741 or (305) 300-1234
* **Verified Email:** [[email protected]](mailto:[email protected])
# Edward Jones
* **Phone Numbers**:
* Client Relations: (800) 441-2357 (7 a.m. – 5:30 p.m. CT, Monday–Friday)
* Headquarters: (314) 515-2000 (7 a.m. – 6 p.m. CT, Monday–Friday)
* Toll-Free: (800) 803-3333
* **Address**: 12555 Manchester Road, St. Louis County, Missouri 63131, USA
* **Email**: Edward Jones does not list a public email for customer service; inquiries are handled via phone or their online access portal.
# Diamond Pet Foods
* **Phone Number**: (800) 442-0402
* **Address**: PO Box 156, Meta, Missouri 65058, USA
* **Email**: Diamond Pet Foods does not publicly provide a direct email but offers a contact form on their website for inquiries.
# Hunter Engineering Company
* **Corporate Headquarters Address**: 11250 Hunter Drive, Bridgeton, Missouri 63044, USA
* **Phone Numbers**:
* Corporate Office: (314) 731-3020 or (800) 448-6848
* **Email**: [email protected] (Canada-specific inquiries); [email protected] (Germany-specific inquiries)
# Hallmark Cards
* **Phone Numbers**:
* Toll-Free in the U.S.: (800) 425-5627
* Customer Service: (816) 274-3613
* **Email**: Hallmark does not list a direct customer service email but allows inquiries through a contact form on their website.
For further assistance with these companies, it is recommended to use the provided phone numbers or visit their official websites for additional contact options.
# Fisher Realty (North Carolina)
* **Information:** *(No verified contact details or email address were found in public sources.)*
# Belle Hart Schmidt LLC
* **Information:** *(No verified contact details or email address were found in public sources.)*
# GJ Grewe Inc
* **Information:** *(No verified contact details or email address were found in public sources.)*
# Holland Law Firm (Missouri)
* **Information:** *(No verified contact details or email address were found; please refer to a state bar directory for direct contact.)*
# Wilson Logistics
* **Information:** *(No verified email address was found; contact information is available via the company’s “Contact Us” page.)*
# AGC Partners
* **Information:** *(No verified contact details or email address were found.)*
# Warren David Properties LLC
* **Information:** *(No verified contact details were found in public sources.)*
# Durham Co
* **Information:** *(No verified contact email was found. Public details are not available for inclusion.)*
# Ozarks Coca‑Cola Bottling
**Information:** *(No verified contact details or email address were found in the public sources.)* | 2025-02-05T13:55:35 | https://www.reddit.com/r/LocalLLaMA/comments/1iiafzm/we_have_to_fight_back_now/ | mr_happy_nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiafzm | false | null | t3_1iiafzm | /r/LocalLLaMA/comments/1iiafzm/we_have_to_fight_back_now/ | false | false | self | 389 | {'enabled': False, 'images': [{'id': 'tgFydHvkxSNcz9vrz3AVQtBxDnuVFy_XiM5OHMPZTJE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/E_dTDTAnifVGSBjbiGaVb2-FkZDo447vF9_hU1-oNtQ.jpg?width=108&crop=smart&auto=webp&s=54acfada54cdcef86fce9bdfef410aedb09b6fbd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/E_dTDTAnifVGSBjbiGaVb2-FkZDo447vF9_hU1-oNtQ.jpg?width=216&crop=smart&auto=webp&s=3896b1e97b834f99c89905a5c7b7d4b33e9e4c29', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/E_dTDTAnifVGSBjbiGaVb2-FkZDo447vF9_hU1-oNtQ.jpg?width=320&crop=smart&auto=webp&s=53ef942f0827ba9a08572d71001587dbf877b6d6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/E_dTDTAnifVGSBjbiGaVb2-FkZDo447vF9_hU1-oNtQ.jpg?width=640&crop=smart&auto=webp&s=97e0143b3f10ebcd912598fe9782e308f90c76dd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/E_dTDTAnifVGSBjbiGaVb2-FkZDo447vF9_hU1-oNtQ.jpg?width=960&crop=smart&auto=webp&s=f9c16bb4c47a149f992536986672d5965192bb29', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/E_dTDTAnifVGSBjbiGaVb2-FkZDo447vF9_hU1-oNtQ.jpg?width=1080&crop=smart&auto=webp&s=6b8c0446f061b0784bce7133162d65be728377db', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/E_dTDTAnifVGSBjbiGaVb2-FkZDo447vF9_hU1-oNtQ.jpg?auto=webp&s=202abce261aba0ab012e5898097321b4c5a11ae5', 'width': 1200}, 'variants': {}}]} |
Deep seek r1 14B | 1 | [removed] | 2025-02-05T14:03:07 | https://www.reddit.com/r/LocalLLaMA/comments/1iialx2/deep_seek_r1_14b/ | Sad-Space-1858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iialx2 | false | null | t3_1iialx2 | /r/LocalLLaMA/comments/1iialx2/deep_seek_r1_14b/ | false | false | self | 1 | null |
Help me make a plan to learn to develop with LLMs | 1 | [removed] | 2025-02-05T14:15:47 | https://www.reddit.com/r/LocalLLaMA/comments/1iiaw48/help_me_make_a_plan_to_learn_to_develop_with_llms/ | ggeezz12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiaw48 | false | null | t3_1iiaw48 | /r/LocalLLaMA/comments/1iiaw48/help_me_make_a_plan_to_learn_to_develop_with_llms/ | false | false | self | 1 | null |
Does anyone have any information on the inference speed for the instinct mi210? | 1 | [removed] | 2025-02-05T14:17:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iiaxs2/does_anyone_have_any_information_on_the_inference/ | Bitter-Breadfruit6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiaxs2 | false | null | t3_1iiaxs2 | /r/LocalLLaMA/comments/1iiaxs2/does_anyone_have_any_information_on_the_inference/ | false | false | self | 1 | null |
Does DeepSeek change their local models? | 1 | [removed] | 2025-02-05T14:37:42 | https://www.reddit.com/r/LocalLLaMA/comments/1iibdm3/does_deepseek_change_their_local_models/ | Odd-Currency-1909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iibdm3 | false | null | t3_1iibdm3 | /r/LocalLLaMA/comments/1iibdm3/does_deepseek_change_their_local_models/ | false | false | self | 1 | null |
Does DeepSeek update their local models? | 1 | [removed] | 2025-02-05T14:38:25 | https://www.reddit.com/r/LocalLLaMA/comments/1iibe7n/does_deepseek_update_their_local_models/ | Odd-Currency-1909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iibe7n | false | null | t3_1iibe7n | /r/LocalLLaMA/comments/1iibe7n/does_deepseek_update_their_local_models/ | false | false | self | 1 | null |
Anyone try running more than 1 ollama runner on a single 80gb h100 GPU with MIG? | 1 | Is it even possible? Theoretically, could you split an H100 into four small model runners (e.g. llama3.2:8b-instruct, gemma2, phi4, deepseek-r1) and coordinate a kind of consensus group, picking the best of all four answers to each question with some evaluation framework? Would that even be sane? | 2025-02-05T14:42:03 | https://www.reddit.com/r/LocalLLaMA/comments/1iibh75/anyone_try_running_more_than_1_ollama_runner_on_a/ | databasehead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iibh75 | false | null | t3_1iibh75 | /r/LocalLLaMA/comments/1iibh75/anyone_try_running_more_than_1_ollama_runner_on_a/ | false | false | self | 1 | null |
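On the MIG half of the question: an 80 GB H100 supports up to seven isolated MIG instances, each visible as its own CUDA device, so running one Ollama server per slice (pinned via CUDA_VISIBLE_DEVICES and bound to its own port with OLLAMA_HOST) is at least plausible; verify the slicing profiles against NVIDIA's MIG docs. The consensus half is plain application code. A sketch, where the ports, model tags, and the length-based "judge" are placeholders:

```python
import requests

# One Ollama server per MIG slice, each on its own port (placeholder ports).
RUNNERS = {
    "llama3.2": 11434,
    "gemma2": 11435,
    "phi4": 11436,
    "deepseek-r1:14b": 11437,
}

def ask(model: str, port: int, prompt: str) -> str:
    r = requests.post(
        f"http://localhost:{port}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
    )
    return r.json()["response"]

def consensus(prompt: str) -> str:
    answers = {m: ask(m, p, prompt) for m, p in RUNNERS.items()}
    # Placeholder judge: a real setup might have one model rank the others,
    # majority-vote on extracted answers, or score with an eval framework.
    return max(answers.values(), key=len)

print(consensus("What is MIG?"))
```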
Good MoE Models smaller than R1? | 1 | [removed] | 2025-02-05T14:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1iiblb4/good_moe_models_smaller_than_r1/ | And1mon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiblb4 | false | null | t3_1iiblb4 | /r/LocalLLaMA/comments/1iiblb4/good_moe_models_smaller_than_r1/ | false | false | self | 1 | null |
Which model is best in acting like a senior developer mentor, rather than just coding the answers for me that will work best with a 4090? | 1 | [removed] | 2025-02-05T15:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/1iibzm2/which_model_is_best_in_acting_like_a_senior/ | ElkNorth5936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iibzm2 | false | null | t3_1iibzm2 | /r/LocalLLaMA/comments/1iibzm2/which_model_is_best_in_acting_like_a_senior/ | false | false | self | 1 | null |
Kokoro voice model extrapolation, blending, and experimenting python application | 36 | Hey all, I have been playing around with blending the Kokoro voice models (an excellent [text-to-speech library](https://github.com/thewh1teagle/kokoro-onnx) and [model](https://huggingface.co/hexgrad/Kokoro-82M)) and decided I wanted more capability to create voices. I made an application that uses SQLite queries to select groups of voices based on the query. It then fits a linear model between the two voice groups, which allows for easy blending of the voices but also for **extrapolation** beyond them.
For instance, if I make a group of British and a group of American voices I can model between and beyond them. This effectively allows you to make "extreme" versions of the difference in vocal traits between groups. You can make **very** British and **very** American accents. The code also allows for exporting voice models into other formats for use in other applications. Examples, codes, and instructions in the github.
[https://github.com/RobViren/kokovoicelab](https://github.com/RobViren/kokovoicelab) | 2025-02-05T15:06:21 | https://www.reddit.com/r/LocalLLaMA/comments/1iic1ks/kokoro_voice_model_extrapolation_blending_and/ | rodbiren | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iic1ks | false | null | t3_1iic1ks | /r/LocalLLaMA/comments/1iic1ks/kokoro_voice_model_extrapolation_blending_and/ | false | false | self | 36 | null |
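The linear-model idea is easy to reproduce directly on Kokoro voice tensors with numpy: average each group, then slide along (or past) the line between the group means. A sketch under the assumption that each .npy file holds one voice/style embedding; the file names are illustrative, and the repo's real pipeline differs:

```python
import numpy as np

# Hypothetical: each .npy holds one Kokoro voice/style embedding.
british = np.stack([np.load(f) for f in ["bf_emma.npy", "bf_isabella.npy"]])
american = np.stack([np.load(f) for f in ["af_bella.npy", "af_sarah.npy"]])

a, b = american.mean(axis=0), british.mean(axis=0)

def voice_at(t: float) -> np.ndarray:
    # t=0 -> mean American, t=1 -> mean British;
    # t>1 extrapolates "very British", t<0 "very American".
    return a + t * (b - a)

np.save("very_british.npy", voice_at(1.5))
```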
Anyone see very low tps with 80gb h100 running llama3.3:70-q4_K_M? | 1 | I haven't collected my stats yet because my setup is quite new, but my qualitative assessment was that I was getting slow responses running llama3.3:70b-q4_K_M with the most recent ollama release binaries on an 80GB H100.
I have to check, but iirc I installed NVIDIA driver 565.xx.x, CUDA 12.6 Update 2, cuda-toolkit 12.6, Ubuntu 22.04 LTS, Linux kernel 6.5.0-27, default gcc 12.3.0, glibc 2.35.
Does anyone have a similar setup and recall their stats?
Also another question I have is whether it matters what kernel, gcc, glibc is installed if I’m using ollama packaged release binaries? Also, same for cudart, cuda-toolkit?
I’m thinking of building ollama from source since that’s what I’ve done in the past with a40 running smaller models and always saw way faster inference…
| 2025-02-05T15:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/1iic74g/anyone_see_very_low_tps_with_80gb_h100_running/ | databasehead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iic74g | false | null | t3_1iic74g | /r/LocalLLaMA/comments/1iic74g/anyone_see_very_low_tps_with_80gb_h100_running/ | false | false | self | 1 | null |
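For hard numbers rather than a qualitative feel: Ollama's non-streaming /api/generate response reports token counts and durations (in nanoseconds), so tok/s falls out directly. A minimal sketch using the model tag from the post; note some fields can be omitted when the prompt is fully cached:

```python
import requests

r = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.3:70b-q4_K_M",
        "prompt": "Explain MIG in one paragraph.",
        "stream": False,
    },
).json()

# eval_count = generated tokens; eval_duration = decode time in nanoseconds.
decode_tps = r["eval_count"] / (r["eval_duration"] / 1e9)
prefill_tps = r["prompt_eval_count"] / (r["prompt_eval_duration"] / 1e9)
print(f"decode: {decode_tps:.1f} tok/s, prefill: {prefill_tps:.1f} tok/s")
```

As a sanity check, a bandwidth-bound ceiling for a ~40 GB q4 70B on an H100 (roughly 2 to 3.35 TB/s depending on the variant) is on the order of 50 to 80 tok/s, so numbers far below that point at a config issue (CPU offload, wrong driver, etc.).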
I Made a Completely Free AI Text To Speech Tool Using ChatGPT With No Word Limit | 0 | 2025-02-05T15:14:24 | https://v.redd.it/nvkv3uxb7che1 | Cool-Hornet-8191 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iic89j | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nvkv3uxb7che1/DASHPlaylist.mpd?a=1741360475%2CZjIwOTMyMmQ4ZjQ4NTEyZTQzODZkZjAzNzRkODAwMTU3Y2IwZTMxMmEwNjA4MjMxZWU5ZDljMmE1MTZmNWJhMg%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/nvkv3uxb7che1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/nvkv3uxb7che1/HLSPlaylist.m3u8?a=1741360475%2CNTE3YTkwZjZmM2FlNjg2ZmIxODFkNmU0NDgzOGM5OGQ3OTE4YzI0M2EzNzQ4MWZmNjY0OGRlYTVmNDgyMmQ1OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nvkv3uxb7che1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1iic89j | /r/LocalLLaMA/comments/1iic89j/i_made_a_completely_free_ai_text_to_speech_tool/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh.png?width=108&crop=smart&format=pjpg&auto=webp&s=d73f6607e6efe96817edf19456f1b21ea731337f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh.png?width=216&crop=smart&format=pjpg&auto=webp&s=5b4cac2aa65beaecaf35e6e05a2ad8190f8c7af3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh.png?width=320&crop=smart&format=pjpg&auto=webp&s=8e5f0a540efed0cb0b526cea536f329f49c7b552', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh.png?width=640&crop=smart&format=pjpg&auto=webp&s=e793b1a5ebedf79ee9fbf162a00b6aa9d74c5e73', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh.png?width=960&crop=smart&format=pjpg&auto=webp&s=51905daf7bdfd02b4958c099a1cdfa2ad0ea2772', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f3c57f11ebea1a4be84f916fc4251f9ebe9c4b8b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OGowNmh5eGI3Y2hlMcCmxbf1GTieZVfP0794PSAFvB8sDZC9m94jYBZ7fuCh.png?format=pjpg&auto=webp&s=2c6f1153b03ad983c4d4961265f593cf324f01d3', 'width': 1920}, 'variants': {}}]} |
Interest in a Visual workflow editor | 17 | I'm a developer, but when I'm trying to brainstorm workflows (with or without an LLM) it's a heavy investment to dive into coding something. I want to POC my ideas fast, so I started working on this visual editor.
It has various node types: input, output, read file, processing (out-of-the-box math operations like double and square, plus a custom mode that executes formulas or JavaScript code), transform, which uses Hugging Face's transformers.js library for operations like summarization, sentiment analysis, or translation, and finally an AI node, which currently works by interacting with Ollama.
The screenshots above are from a demo flow I put together. It reads a CSV file and sends the data to Ollama with a prompt to convert the CSV to JSON; the output then branches into two more nodes, one that finds the oldest and one that finds the youngest. Some processing nodes then format the data the way I want it displayed.
The toolbar is fairly self explanatory. The data here is stored in json so it can be saved and loaded. A debug mode that includes adds all the inputs/outputs to the output panel.
These are screenshots, so I couldn't capture it here, but when the graph is running you'll see a visual indicator (red border) around the currently executing node.
Right now I’ve been doing things fast and I haven’t focused on the UI appearance either. I wanted to see if a tool like this would be useful for people and if there’s interest in it. This will help me figure out which features to prioritize.
Some additional features I would like to add:
1. Way more node types such as iterators and decision nodes
2. I want to pair the editor with a server component. The server would expose a rest API so people can call their workflows.
If anyone has suggestions on additional features please let me know. | 2025-02-05T15:15:13 | https://www.reddit.com/gallery/1iic8yg | throwawayacc201711 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1iic8yg | false | null | t3_1iic8yg | /r/LocalLLaMA/comments/1iic8yg/interest_in_a_visual_workflow_editor/ | false | false | 17 | null |
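For readers wondering what a saved graph might look like: below is a hypothetical minimal schema plus a toy executor illustrating the execute-and-pass-outputs pattern the node types imply. The real tool's JSON format will differ (and would store processing code as strings rather than lambdas so it stays serializable):

```python
# Hypothetical save format and a toy executor; the real tool's JSON differs.
graph = {
    "nodes": {
        "read":   {"type": "input", "value": "Ada,36\nBob,29"},
        "double": {"type": "processing", "fn": lambda x: x * 2},
        "out":    {"type": "output"},
    },
    "edges": [("read", "double"), ("double", "out")],  # topological order
}

def run(graph):
    results = {}
    for node_id, node in graph["nodes"].items():
        if node["type"] == "input":
            results[node_id] = node["value"]
    for src, dst in graph["edges"]:
        node = graph["nodes"][dst]
        if node["type"] == "processing":
            results[dst] = node["fn"](results[src])
        else:  # output and other passthrough nodes
            results[dst] = results[src]
    return results

print(run(graph)["out"])
```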
GPRO training reward questions | 1 | [removed] | 2025-02-05T15:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/1iicmc3/gpro_training_reward_questions/ | Swimming_Option_4884 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iicmc3 | false | null | t3_1iicmc3 | /r/LocalLLaMA/comments/1iicmc3/gpro_training_reward_questions/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VmBBSZtf_ctbrrIVGo3FewK8t9FRWVGcWlu6blTCM9A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8tLbJWr2gL67rDs7DxezAYD0ijfCR44VSSvN3LDB0Fk.jpg?width=108&crop=smart&auto=webp&s=b532801bc50a66af6d00bfbe85b02f61c51a9b62', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8tLbJWr2gL67rDs7DxezAYD0ijfCR44VSSvN3LDB0Fk.jpg?width=216&crop=smart&auto=webp&s=babfec65f11e2844a2e1b7e36516b31d2f19f921', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8tLbJWr2gL67rDs7DxezAYD0ijfCR44VSSvN3LDB0Fk.jpg?width=320&crop=smart&auto=webp&s=2b9d9aad60476787a5fb06680c0c1427be8ddc6b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8tLbJWr2gL67rDs7DxezAYD0ijfCR44VSSvN3LDB0Fk.jpg?width=640&crop=smart&auto=webp&s=5964648f3cce86ba606bd91c964ef1f30577d7cd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8tLbJWr2gL67rDs7DxezAYD0ijfCR44VSSvN3LDB0Fk.jpg?width=960&crop=smart&auto=webp&s=9a9770672c947fb2b1ec072f7a200f7eddcfee15', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8tLbJWr2gL67rDs7DxezAYD0ijfCR44VSSvN3LDB0Fk.jpg?width=1080&crop=smart&auto=webp&s=d792ed065c0de9b380db515b5d92103cc40e5201', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/8tLbJWr2gL67rDs7DxezAYD0ijfCR44VSSvN3LDB0Fk.jpg?auto=webp&s=1fb28742bded61c43fc7c4beaf36f0db2fe98fc0', 'width': 1280}, 'variants': {}}]} |
Is fine-tuning a waste of time if ya ain't got big hardware? | 2 | Ya know, when ya watch plenty of YouTube videos about how ML training takes time and how failed runs are sometimes *part of the process*, ya really feel discouraged to let your budget GPU train for a few days in a row and possibly not have the model learn enough.
No, I haven't fine-tuned, but at this point I'm getting a hint that RAG would be more cost-effective. "Leave fine-tuning for when you've got $50 to let it run in the cloud" kind of thing | 2025-02-05T15:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/1iicnov/is_finetuning_a_waste_of_time_if_ya_aint_got_big/ | Blender-Fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iicnov | false | null | t3_1iicnov | /r/LocalLLaMA/comments/1iicnov/is_finetuning_a_waste_of_time_if_ya_aint_got_big/ | false | false | self | 2 | null |
OpenAI's First Fear - its daniel johns | 0 | 2025-02-05T15:33:33 | https://itsdanieljohns.com/blog/openai-first-fear | iamdanieljohns | itsdanieljohns.com | 1970-01-01T00:00:00 | 0 | {} | 1iicodl | false | null | t3_1iicodl | /r/LocalLLaMA/comments/1iicodl/openais_first_fear_its_daniel_johns/ | false | false | default | 0 | null |
What is the best local AI I can setup on my laptop and how to do that? | 0 | I'd like to run the most powerful AI possible (text-based only) locally on my ordinary laptop. My purpose is brainstorming, researching, and generating long texts.
Can you point me in the right direction on which AI to use and how to set it up?
Here is my system:
CPU: AMD Ryzen 5 8645HS (up to 5.0 GHz boost clock)
Memory: 40 GB
GPU: Nvidia RTX 4050
Storage: 500GB SSD
| 2025-02-05T15:42:33 | https://www.reddit.com/r/LocalLLaMA/comments/1iicvwv/what_is_the_best_local_ai_i_can_setup_on_my/ | ExtremePresence3030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iicvwv | false | null | t3_1iicvwv | /r/LocalLLaMA/comments/1iicvwv/what_is_the_best_local_ai_i_can_setup_on_my/ | false | false | self | 0 | null |
Image Dataset Benchmarking Advice | 1 | [removed] | 2025-02-05T15:49:41 | https://www.reddit.com/r/LocalLLaMA/comments/1iid1wn/image_dataset_benchmarking_advice/ | EmetResearch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iid1wn | false | null | t3_1iid1wn | /r/LocalLLaMA/comments/1iid1wn/image_dataset_benchmarking_advice/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OhJLG1ZxWFfE4d8IIopVwVQK78fuNXNq22pNlxW_76Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Ibm_qG-Kw9MkO7NwSYumC8AcK4zPemqsRQggHsV3qz0.jpg?width=108&crop=smart&auto=webp&s=5e1ec6dd71c4d8b8b38c2c32d088b17a886f576c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Ibm_qG-Kw9MkO7NwSYumC8AcK4zPemqsRQggHsV3qz0.jpg?width=216&crop=smart&auto=webp&s=ed2cd5876ded647a0c309afdf4d7ec9896666906', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Ibm_qG-Kw9MkO7NwSYumC8AcK4zPemqsRQggHsV3qz0.jpg?width=320&crop=smart&auto=webp&s=bf6b193772665428c45ee07732c4486e7d4e8d4d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Ibm_qG-Kw9MkO7NwSYumC8AcK4zPemqsRQggHsV3qz0.jpg?width=640&crop=smart&auto=webp&s=3362cbbb9f35e52e80558005be6a94d0c9b31b74', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Ibm_qG-Kw9MkO7NwSYumC8AcK4zPemqsRQggHsV3qz0.jpg?width=960&crop=smart&auto=webp&s=8d57d6820beb4abd22fc639f9743922efcc88083', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Ibm_qG-Kw9MkO7NwSYumC8AcK4zPemqsRQggHsV3qz0.jpg?width=1080&crop=smart&auto=webp&s=073bfb0627bdee184c8ce86059cabdca03f7388d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Ibm_qG-Kw9MkO7NwSYumC8AcK4zPemqsRQggHsV3qz0.jpg?auto=webp&s=68ec9a3edd48a1b28d9c7974d0725de785d2c93e', 'width': 1200}, 'variants': {}}]} |
Best local LLM for converting notes to full text? | 2 | As part of my job I have to take brief notes as I go and later write them up into full documents, so naturally I want to streamline this process with LLMs.
I need to do this locally though, rather than online. I've used Llama 3.2-3B Instruct with OK but inconsistent results. I just got the DeepSeek R1 distill Llama 8B (GGUF) running locally; it's a bit slow but serviceable, and I haven't had it long enough to fully evaluate it for my purposes.
Hoping to have better results with this model, but just wondering: does anyone know of any models that are optimised for this specific use case, given my limited local resources? Or how to search for a model that would be optimised? I've looked for text expansion models but I'm not certain that's the right thing to be looking for. Thanks | 2025-02-05T16:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/1iidjtk/best_local_llm_for_converting_notes_to_full_text/ | Psykromopht | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iidjtk | false | null | t3_1iidjtk | /r/LocalLLaMA/comments/1iidjtk/best_local_llm_for_converting_notes_to_full_text/ | false | false | self | 2 | null
I Built a Tool to Prevent Data Leaks in ChatGPT, Deepseek etc. – Open to Feedback & Contributions | 1 | [removed] | 2025-02-05T16:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/1iidndk/i_built_a_tool_to_prevent_data_leaks_in_chatgpt/ | Early_Court892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iidndk | false | null | t3_1iidndk | /r/LocalLLaMA/comments/1iidndk/i_built_a_tool_to_prevent_data_leaks_in_chatgpt/ | false | false | 1 | null |
|
I Built a Tool to Prevent Data Leaks in ChatGPT, Deepseek etc. – Open to Feedback & Contributions | 1 | [removed] | 2025-02-05T16:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1iidpd9/i_built_a_tool_to_prevent_data_leaks_in_chatgpt/ | Early_Court892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iidpd9 | false | null | t3_1iidpd9 | /r/LocalLLaMA/comments/1iidpd9/i_built_a_tool_to_prevent_data_leaks_in_chatgpt/ | false | false | 1 | null |
|
A look at DeepSeek's Qwen2.5-7B distill of R1, using Autopen | 4 | 2025-02-05T16:22:25 | https://www.youtube.com/watch?v=GXWZPpVI0zU | disposableoranges | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1iidv2a | false | {'oembed': {'author_name': 'blackhole89', 'author_url': 'https://www.youtube.com/@blackhole89', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/GXWZPpVI0zU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="A look at DeepSeek's Qwen2.5-7B distill of R1, using Autopen"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/GXWZPpVI0zU/hqdefault.jpg', 'thumbnail_width': 480, 'title': "A look at DeepSeek's Qwen2.5-7B distill of R1, using Autopen", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1iidv2a | /r/LocalLLaMA/comments/1iidv2a/a_look_at_deepseeks_qwen257b_distill_of_r1_using/ | false | false | 4 | {'enabled': False, 'images': [{'id': '2btOoittypiQQmoKhP60-_8D7z52EqOU5SduNA_CV-0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BUXrOXYVgf4oNubv9uWRWQ6PjoEO-RE_DfzNx339xo8.jpg?width=108&crop=smart&auto=webp&s=6141a10fffa3fa544b8fe830edcf07124d73bd2f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/BUXrOXYVgf4oNubv9uWRWQ6PjoEO-RE_DfzNx339xo8.jpg?width=216&crop=smart&auto=webp&s=0b579b66e1e86bac0936930529f029bc038b3783', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/BUXrOXYVgf4oNubv9uWRWQ6PjoEO-RE_DfzNx339xo8.jpg?width=320&crop=smart&auto=webp&s=d8f1136ef242841d786ada3269c59be60b57ee62', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/BUXrOXYVgf4oNubv9uWRWQ6PjoEO-RE_DfzNx339xo8.jpg?auto=webp&s=b5426d135968c691befd78eb4f832a0275396d16', 'width': 480}, 'variants': {}}]} |
||
Gemini 2.0 is now available to everyone | 195 | 2025-02-05T16:22:33 | https://blog.google/technology/google-deepmind/gemini-model-updates-february-2025/ | badgerfish2021 | blog.google | 1970-01-01T00:00:00 | 0 | {} | 1iidv6u | false | null | t3_1iidv6u | /r/LocalLLaMA/comments/1iidv6u/gemini_20_is_now_available_to_everyone/ | false | false | 195 | {'enabled': False, 'images': [{'id': 'hDcXg6dejugF3mYB6GacUCvCEBNGbaqAqiZ4aJzVSNQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/o93Gv9_DhQvlI44kGBVGvb3sGB7HfG5Hch2mizSqwbM.jpg?width=108&crop=smart&auto=webp&s=edd5d6970c6f924c849d582b8bda180d52879339', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/o93Gv9_DhQvlI44kGBVGvb3sGB7HfG5Hch2mizSqwbM.jpg?width=216&crop=smart&auto=webp&s=f32c9b9250fb71946725e13d2a483e3904a990e8', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/o93Gv9_DhQvlI44kGBVGvb3sGB7HfG5Hch2mizSqwbM.jpg?width=320&crop=smart&auto=webp&s=7b616c00352e05f1d637420ce7c38c6722a954df', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/o93Gv9_DhQvlI44kGBVGvb3sGB7HfG5Hch2mizSqwbM.jpg?width=640&crop=smart&auto=webp&s=8912d36a97a1fdaed885b0445f70238c8155ea29', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/o93Gv9_DhQvlI44kGBVGvb3sGB7HfG5Hch2mizSqwbM.jpg?width=960&crop=smart&auto=webp&s=b08e2faa91ef1e9b094a2cf63a680373d4b72a51', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/o93Gv9_DhQvlI44kGBVGvb3sGB7HfG5Hch2mizSqwbM.jpg?width=1080&crop=smart&auto=webp&s=96161d7a1564b520c4f967e51cdad49c27881526', 'width': 1080}], 'source': {'height': 731, 'url': 'https://external-preview.redd.it/o93Gv9_DhQvlI44kGBVGvb3sGB7HfG5Hch2mizSqwbM.jpg?auto=webp&s=d81d04edb620ee41f9a401be866e1b3f5e5b9691', 'width': 1300}, 'variants': {}}]} |
||
How to load LoRAs with tabbyAPI server | 1 | I have trained some loras using unsloth. I want to use them with the base model (exl2) with tabbyAPI inference server. Any pointers? Thanks! | 2025-02-05T16:24:51 | https://www.reddit.com/r/LocalLLaMA/comments/1iidx72/how_to_load_loras_with_tabbyapi_server/ | aadoop6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iidx72 | false | null | t3_1iidx72 | /r/LocalLLaMA/comments/1iidx72/how_to_load_loras_with_tabbyapi_server/ | false | false | self | 1 | null |
I Built a Tool to Prevent Data Leaks in ChatGPT, Deepseek etc. – Open to Feedback & Contributions | 1 | [removed] | 2025-02-05T16:32:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1iie3x8 | false | null | t3_1iie3x8 | /r/LocalLLaMA/comments/1iie3x8/i_built_a_tool_to_prevent_data_leaks_in_chatgpt/ | false | false | default | 1 | null |
||
I built a prompt filter to avoid leaking secrets—wondering if Local LLaMA users find it useful? | 1 | [removed] | 2025-02-05T16:38:49 | https://www.reddit.com/r/LocalLLaMA/comments/1iie9jn/i_built_a_prompt_filter_to_avoid_leaking/ | Early_Court892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iie9jn | false | null | t3_1iie9jn | /r/LocalLLaMA/comments/1iie9jn/i_built_a_prompt_filter_to_avoid_leaking/ | false | false | self | 1 | null |
[ Removed by Reddit ] | 1 | [removed] | 2025-02-05T16:39:59 | https://www.reddit.com/r/LocalLLaMA/comments/1iieaju/removed_by_reddit/ | de4dee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iieaju | false | null | t3_1iieaju | /r/LocalLLaMA/comments/1iieaju/removed_by_reddit/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yNHMaJr3rhKvbOWmjMCXcb6LbZcskEx-r1GaygbEtsI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=108&crop=smart&auto=webp&s=39a9b64173e8ea8cd72e9b0eb7dd76fca0e6c523', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=216&crop=smart&auto=webp&s=138e9f343b73dff2829e5e722c616a2992d3169a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=320&crop=smart&auto=webp&s=9ba654f555fcfd705d1d86e61b68ffcf6808bff3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=640&crop=smart&auto=webp&s=df2ef470291456e0a67e50fbeed17a4a70e358ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=960&crop=smart&auto=webp&s=ddf7e2a41c0a4953af7ec0d57b0624c1574efb31', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=1080&crop=smart&auto=webp&s=d394134b0b98d9e20b081821452994f1614247c2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?auto=webp&s=bd6352f63b893f3ccc3718c85aa1c912c95d0823', 'width': 1200}, 'variants': {}}]} |
How do you prevent accidentally sharing secrets in local prompts? | 1 | I’ve been tinkering with large language models for a while (including local setups), and one recurring headache was accidentally including sensitive data—API keys, internal code, or private info—in my prompts. Obviously, if you’re running everything purely locally, that risk is smaller because you’re not sending data to an external API. But many of us still compare local models with remote ones (OpenAI, etc.) or occasionally share local prompts with teammates—and that’s where mistakes can happen.
So I built a **proxy tool** (called Trylon) that scans prompts in real time and flags or removes anything that looks like credentials or PII before it goes to an external LLM. I’ve been using it at work when switching between local LLaMA models and cloud-based services (like ChatGPT or Deepseek) for quick comparisons.
**How it works (briefly)**:
* You route your prompt through a local or hosted proxy.
* The proxy checks for patterns (API keys, private tokens, PII).
* If something is flagged, it gets masked or blocked (a minimal sketch of this kind of check is shown below).
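For anyone curious what the detection step looks like, here is a minimal sketch of the kind of regex scan the proxy runs. The patterns and masking policy here are simplified assumptions for illustration, not Trylon's actual rules:

    import re

    # Simplified detectors; a real scanner uses many more patterns plus entropy checks.
    PATTERNS = {
        "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
        "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    }

    def scan_and_mask(prompt):
        """Return (masked_prompt, list_of_finding_types)."""
        findings = []
        for name, pattern in PATTERNS.items():
            if pattern.search(prompt):
                findings.append(name)
                prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
        return prompt, findings

    masked, findings = scan_and_mask("key sk-abc123abc123abc123abc1, mail me at a@b.co")
    print(findings)  # ['openai_key', 'email']

The real decision (mask vs. block vs. warn) would then be a policy layer on top of the findings list.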
**Why I’m posting here**:
* I’m curious if this is even **useful** for people who predominantly run LLaMA locally.
* Do you ever worry about logs or inadvertently sharing sensitive data with others when collaborating?
* Are there known solutions you already use (like local privacy policies, offline logging, etc.)?
* I’d love suggestions on detection rules or ways to handle false positives.
The tool is free to try, but I’m not sure if the local LLaMA crowd sees a benefit unless you also ping external APIs. Let me know what you think—maybe it’s overkill for pure local usage, or maybe it’s handy when you occasionally “go hybrid.”
**Thanks in advance for any feedback!**
I’m considering open sourcing part of the detection logic, so if that piques your interest or you have ideas, I’m all ears.
You can try at [chat.trylon.ai](http://chat.trylon.ai) | 2025-02-05T16:41:51 | https://www.reddit.com/r/LocalLLaMA/comments/1iiec7j/how_do_you_prevent_accidentally_sharing_secrets/ | Early_Court892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiec7j | false | null | t3_1iiec7j | /r/LocalLLaMA/comments/1iiec7j/how_do_you_prevent_accidentally_sharing_secrets/ | false | false | self | 1 | null |
How do you prevent accidentally sharing secrets in prompts? | 1 | [removed] | 2025-02-05T16:43:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1iiedg1 | false | null | t3_1iiedg1 | /r/LocalLLaMA/comments/1iiedg1/how_do_you_prevent_accidentally_sharing_secrets/ | false | false | default | 1 | null |
||
How do you prevent accidentally sharing secrets in prompts? | 1 | [removed] | 2025-02-05T16:45:09 | https://www.reddit.com/r/LocalLLaMA/comments/1iief29/how_do_you_prevent_accidentally_sharing_secrets/ | Early_Court892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iief29 | false | null | t3_1iief29 | /r/LocalLLaMA/comments/1iief29/how_do_you_prevent_accidentally_sharing_secrets/ | false | false | self | 1 | null |
How do you prevent accidentally sharing secrets in prompts? | 0 | I’ve been tinkering with large language models for a while (including local setups), and one recurring headache was accidentally including sensitive data—API keys, internal code, or private info—in my prompts. Obviously, if you’re running everything purely locally, that risk is smaller because you’re not sending data to an external API. But many of us still compare local models with remote ones (OpenAI, etc.) or occasionally share local prompts with teammates—and that’s where mistakes can happen.
So I built a **proxy tool** (called Trylon) that scans prompts in real time and flags or removes anything that looks like credentials or PII before it goes to an external LLM. I’ve been using it at work when switching between local LLaMA models and cloud-based services (like ChatGPT or Deepseek) for quick comparisons.
**How it works (briefly)**:
* You route your prompt through a local or hosted proxy.
* The proxy checks for patterns (API keys, private tokens, PII).
* If something is flagged, it gets masked or blocked.
**Why I’m posting here**:
* I’m curious if this is even **useful** for people who predominantly run LLaMA locally.
* Do you ever worry about logs or inadvertently sharing sensitive data with others when collaborating?
* Are there known solutions you already use (like local privacy policies, offline logging, etc.)?
* I’d love suggestions on adding new policies.
The tool is free to try, but I’m not sure if the local LLaMA crowd sees a benefit unless you also ping external APIs. Let me know what you think—maybe it’s overkill for pure local usage, or maybe it’s handy when you occasionally “go hybrid.”
**Thanks in advance for any feedback!**
I’m considering open sourcing part of the detection logic, so if that piques your interest or you have ideas, I’m all ears.
It's at [chat.trylon.ai](http://chat.trylon.ai)
https://preview.redd.it/bpcw6xiboche1.png?width=707&format=png&auto=webp&s=f08c87b4e8c12c76b31086ed1d7e1869425b75a6
| 2025-02-05T16:49:32 | https://www.reddit.com/r/LocalLLaMA/comments/1iieisx/how_do_you_prevent_accidentally_sharing_secrets/ | Consistent_Equal5327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iieisx | false | null | t3_1iieisx | /r/LocalLLaMA/comments/1iieisx/how_do_you_prevent_accidentally_sharing_secrets/ | false | false | 0 | null |
|
Having trouble understanding deepseek-r1 resource usage. | 1 | I've got a host running an RTX 3090 (24 GB) with 32 GB of RAM. It's running an LXC with GPU passthrough and 28 GB of RAM allocated to it. This setup generally works, and it works great on smaller models.
From my understanding, with the Q4\_K\_M quantization of this model, the model itself should fit into roughly 18 GB of VRAM, plus some space for context. It is also my understanding that Ollama can partially use system RAM.
Instead, what I am observing is massive CPU and disk usage, terrible performance, and low GPU usage.
Here's my log from ollama, which kind of confirms, to the best of my understanding, that I should have enough resources.
Can someone please explain the gap in my understanding?
>time=2025-02-05T16:13:17.414Z level=INFO source=server.go:104 msg="system memory" total="31.2 GiB" free="28.1 GiB" free\_swap="7.3 GiB"
>time=2025-02-05T16:13:17.475Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=16 layers.model=65 layers.offload=16 layers.split="" memory.available="\[23.4 GiB\]" memory.gpu\_overhead="0 B" memory.required.full="21.0 GiB" memory.required.partial="6.3 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="\[6.3 GiB\]" memory.weights.total="18.5 GiB" memory.weights.repeating="17.9 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
>time=2025-02-05T16:13:17.511Z level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE\_GRAPHS = 1 | PEER\_MAX\_BATCH\_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64\_REPACK = 1 | cgo(gcc)" threads=8
>time=2025-02-05T16:13:17.511Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:45421"
>llama\_load\_model\_from\_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23992 MiB free
>llama\_model\_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
>llama\_model\_loader: - kv 0: general.architecture str = qwen2
>llama\_model\_loader: - kv 1: general.type str = model
>llama\_model\_loader: - kv 2: [general.name](http://general.name) str = DeepSeek R1 Distill Qwen 32B
>llama\_model\_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
>llama\_model\_loader: - kv 4: general.size\_label str = 32B
>llama\_model\_loader: - kv 5: qwen2.block\_count u32 = 64
>llama\_model\_loader: - kv 6: qwen2.context\_length u32 = 131072
>llama\_model\_loader: - kv 7: qwen2.embedding\_length u32 = 5120
>llama\_model\_loader: - kv 8: qwen2.feed\_forward\_length u32 = 27648
>llama\_model\_loader: - kv 9: qwen2.attention.head\_count u32 = 40
>llama\_model\_loader: - kv 10: qwen2.attention.head\_count\_kv u32 = 8
>llama\_model\_loader: - kv 11: qwen2.rope.freq\_base f32 = 1000000.000000
>llama\_model\_loader: - kv 12: qwen2.attention.layer\_norm\_rms\_epsilon f32 = 0.000010
>llama\_model\_loader: - kv 13: general.file\_type u32 = 15
>llama\_model\_loader: - kv 14: tokenizer.ggml.model str = gpt2
>llama\_model\_loader: - kv 15: tokenizer.ggml.pre str = deepseek-r1-qwen
>llama\_model\_loader: - kv 16: tokenizer.ggml.tokens arr\[str,152064\] = \["!", "\\"", "#", "$", "%", "&", "'", ...
>llama\_model\_loader: - kv 17: tokenizer.ggml.token\_type arr\[i32,152064\] = \[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
>llama\_model\_loader: - kv 18: tokenizer.ggml.merges arr\[str,151387\] = \["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
>llama\_model\_loader: - kv 19: tokenizer.ggml.bos\_token\_id u32 = 151646
>llama\_model\_loader: - kv 20: tokenizer.ggml.eos\_token\_id u32 = 151643
>llama\_model\_loader: - kv 21: tokenizer.ggml.padding\_token\_id u32 = 151643
>llama\_model\_loader: - kv 22: tokenizer.ggml.add\_bos\_token bool = true
>llama\_model\_loader: - kv 23: tokenizer.ggml.add\_eos\_token bool = false
>llama\_model\_loader: - kv 24: tokenizer.chat\_template str = {% if not add\_generation\_prompt is de...
>llama\_model\_loader: - kv 25: general.quantization\_version u32 = 2
>llama\_model\_loader: - type f32: 321 tensors
>llama\_model\_loader: - type q4\_K: 385 tensors
>llama\_model\_loader: - type q6\_K: 65 tensors
>time=2025-02-05T16:13:17.728Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
>llm\_load\_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
>llm\_load\_vocab: special\_eos\_id is not in special\_eog\_ids - the tokenizer config may be incorrect
>llm\_load\_vocab: special tokens cache size = 22
>llm\_load\_vocab: token to piece cache size = 0.9310 MB
>llm\_load\_print\_meta: format = GGUF V3 (latest)
>llm\_load\_print\_meta: arch = qwen2
>llm\_load\_print\_meta: vocab type = BPE
>llm\_load\_print\_meta: n\_vocab = 152064
>llm\_load\_print\_meta: n\_merges = 151387
>llm\_load\_print\_meta: vocab\_only = 0
>llm\_load\_print\_meta: n\_ctx\_train = 131072
>llm\_load\_print\_meta: n\_embd = 5120
>llm\_load\_print\_meta: n\_layer = 64
>llm\_load\_print\_meta: n\_head = 40
>llm\_load\_print\_meta: n\_head\_kv = 8
>llm\_load\_print\_meta: n\_rot = 128
>llm\_load\_print\_meta: n\_swa = 0
>llm\_load\_print\_meta: n\_embd\_head\_k = 128
>llm\_load\_print\_meta: n\_embd\_head\_v = 128
>llm\_load\_print\_meta: n\_gqa = 5
>llm\_load\_print\_meta: n\_embd\_k\_gqa = 1024
>llm\_load\_print\_meta: n\_embd\_v\_gqa = 1024
>llm\_load\_print\_meta: f\_norm\_eps = 0.0e+00
>llm\_load\_print\_meta: f\_norm\_rms\_eps = 1.0e-05
>llm\_load\_print\_meta: f\_clamp\_kqv = 0.0e+00
>llm\_load\_print\_meta: f\_max\_alibi\_bias = 0.0e+00
>llm\_load\_print\_meta: f\_logit\_scale = 0.0e+00
>llm\_load\_print\_meta: n\_ff = 27648
>llm\_load\_print\_meta: n\_expert = 0
>llm\_load\_print\_meta: n\_expert\_used = 0
>llm\_load\_print\_meta: causal attn = 1
>llm\_load\_print\_meta: pooling type = 0
>llm\_load\_print\_meta: rope type = 2
>llm\_load\_print\_meta: rope scaling = linear
>llm\_load\_print\_meta: freq\_base\_train = 1000000.0
>llm\_load\_print\_meta: freq\_scale\_train = 1
>llm\_load\_print\_meta: n\_ctx\_orig\_yarn = 131072
>llm\_load\_print\_meta: rope\_finetuned = unknown
>llm\_load\_print\_meta: ssm\_d\_conv = 0
>llm\_load\_print\_meta: ssm\_d\_inner = 0
>llm\_load\_print\_meta: ssm\_d\_state = 0
>llm\_load\_print\_meta: ssm\_dt\_rank = 0
>llm\_load\_print\_meta: ssm\_dt\_b\_c\_rms = 0
>llm\_load\_print\_meta: model type = 32B
>llm\_load\_print\_meta: model ftype = Q4\_K - Medium
>llm\_load\_print\_meta: model params = 32.76 B
>llm\_load\_print\_meta: model size = 18.48 GiB (4.85 BPW)
>llm\_load\_print\_meta: [general.name](http://general.name)= DeepSeek R1 Distill Qwen 32B
>llm\_load\_print\_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
>llm\_load\_print\_meta: EOS token = 151643 '<|end▁of▁sentence|>'
>llm\_load\_print\_meta: EOT token = 151643 '<|end▁of▁sentence|>'
>llm\_load\_print\_meta: PAD token = 151643 '<|end▁of▁sentence|>'
>llm\_load\_print\_meta: LF token = 148848 'ÄĬ'
>llm\_load\_print\_meta: FIM PRE token = 151659 '<|fim\_prefix|>'
>llm\_load\_print\_meta: FIM SUF token = 151661 '<|fim\_suffix|>'
>llm\_load\_print\_meta: FIM MID token = 151660 '<|fim\_middle|>'
>llm\_load\_print\_meta: FIM PAD token = 151662 '<|fim\_pad|>'
>llm\_load\_print\_meta: FIM REP token = 151663 '<|repo\_name|>'
>llm\_load\_print\_meta: FIM SEP token = 151664 '<|file\_sep|>'
>llm\_load\_print\_meta: EOG token = 151643 '<|end▁of▁sentence|>'
>llm\_load\_print\_meta: EOG token = 151662 '<|fim\_pad|>'
>llm\_load\_print\_meta: EOG token = 151663 '<|repo\_name|>'
>llm\_load\_print\_meta: EOG token = 151664 '<|file\_sep|>'
>llm\_load\_print\_meta: max token length = 256
>llm\_load\_tensors: offloading 16 repeating layers to GPU
>llm\_load\_tensors: offloaded 16/65 layers to GPU
>llm\_load\_tensors: CPU\_Mapped model buffer size = 14342.91 MiB
>llm\_load\_tensors: CUDA0 model buffer size = 4583.09 MiB
>llama\_new\_context\_with\_model: n\_seq\_max = 4
>llama\_new\_context\_with\_model: n\_ctx = 8192
>llama\_new\_context\_with\_model: n\_ctx\_per\_seq = 2048
>llama\_new\_context\_with\_model: n\_batch = 2048
>llama\_new\_context\_with\_model: n\_ubatch = 512
>llama\_new\_context\_with\_model: flash\_attn = 1
>llama\_new\_context\_with\_model: freq\_base = 1000000.0
>llama\_new\_context\_with\_model: freq\_scale = 1
>llama\_new\_context\_with\_model: n\_ctx\_per\_seq (2048) < n\_ctx\_train (131072) -- the full capacity of the model will not be utilized
>llama\_kv\_cache\_init: kv\_size = 8192, offload = 1, type\_k = 'q8\_0', type\_v = 'q8\_0', n\_layer = 64, can\_shift = 1
>llama\_kv\_cache\_init: CPU KV buffer size = 816.00 MiB
>llama\_kv\_cache\_init: CUDA0 KV buffer size = 272.00 MiB
>llama\_new\_context\_with\_model: KV self size = 1088.00 MiB, K (q8\_0): 544.00 MiB, V (q8\_0): 544.00 MiB
>llama\_new\_context\_with\_model: CPU output buffer size = 2.40 MiB
>llama\_new\_context\_with\_model: CUDA0 compute buffer size = 916.08 MiB
>llama\_new\_context\_with\_model: CUDA\_Host compute buffer size = 26.01 MiB
>llama\_new\_context\_with\_model: graph nodes = 1991
>llama\_new\_context\_with\_model: graph splits = 676 (with bs=512), 3 (with bs=1)
>time=2025-02-05T16:13:34.283Z level=INFO source=server.go:594 msg="llama runner s | 2025-02-05T16:51:55 | https://www.reddit.com/r/LocalLLaMA/comments/1iiekwo/having_trouble_understanding_deepseekr1_resource/ | armedmonkey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiekwo | false | null | t3_1iiekwo | /r/LocalLLaMA/comments/1iiekwo/having_trouble_understanding_deepseekr1_resource/ | false | false | self | 1 | null |
The AHA Indicator | 1 | I have been thinking for a while about how to do human alignment properly. The way I see it, LLMs are going in the wrong direction in terms of beneficial wisdom. My latest article talks about this:
https://huggingface.co/blog/etemiz/aha-indicator
How do we reverse this trend? In my opinion, a curator council that curates the datasets is the way to go. Anyone interested in talking more?
I am continuing to fine tune the Ostrich model:
https://huggingface.co/some1nostr/Ostrich-70B
If folks are interested we can find/build human aligned datasets to further fine tune Ostrich or other models.
| 2025-02-05T16:52:19 | https://www.reddit.com/r/LocalLLaMA/comments/1iiel8w/the_aha_indicator/ | de4dee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiel8w | false | null | t3_1iiel8w | /r/LocalLLaMA/comments/1iiel8w/the_aha_indicator/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yNHMaJr3rhKvbOWmjMCXcb6LbZcskEx-r1GaygbEtsI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=108&crop=smart&auto=webp&s=39a9b64173e8ea8cd72e9b0eb7dd76fca0e6c523', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=216&crop=smart&auto=webp&s=138e9f343b73dff2829e5e722c616a2992d3169a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=320&crop=smart&auto=webp&s=9ba654f555fcfd705d1d86e61b68ffcf6808bff3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=640&crop=smart&auto=webp&s=df2ef470291456e0a67e50fbeed17a4a70e358ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=960&crop=smart&auto=webp&s=ddf7e2a41c0a4953af7ec0d57b0624c1574efb31', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?width=1080&crop=smart&auto=webp&s=d394134b0b98d9e20b081821452994f1614247c2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PERGyeynTkzpG9IaMoMa5duLENpibVjIah-gQg_-7fw.jpg?auto=webp&s=bd6352f63b893f3ccc3718c85aa1c912c95d0823', 'width': 1200}, 'variants': {}}]} |
AI agent libary you will actually understand | 1 | [removed] | 2025-02-05T16:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/1iien8r/ai_agent_libary_you_will_actually_understand/ | No_Information6299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iien8r | false | null | t3_1iien8r | /r/LocalLLaMA/comments/1iien8r/ai_agent_libary_you_will_actually_understand/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ZznOEjlQPqQ4paXb5HVXbxvDt9gwS2lK3ZaKrBasWSA', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/Vyvlt8s3yT45-X2_as121h95jHf8LqDJc6WA4diT3rc.jpg?width=108&crop=smart&auto=webp&s=51564adb113d3d514930ed91de743aece119f540', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/Vyvlt8s3yT45-X2_as121h95jHf8LqDJc6WA4diT3rc.jpg?width=216&crop=smart&auto=webp&s=335f06b3b30d4684da6a57f13a8e383d98744de6', 'width': 216}, {'height': 232, 'url': 'https://external-preview.redd.it/Vyvlt8s3yT45-X2_as121h95jHf8LqDJc6WA4diT3rc.jpg?width=320&crop=smart&auto=webp&s=7a77d782ff073888c4bdda2e48a82920de8e3675', 'width': 320}, {'height': 465, 'url': 'https://external-preview.redd.it/Vyvlt8s3yT45-X2_as121h95jHf8LqDJc6WA4diT3rc.jpg?width=640&crop=smart&auto=webp&s=2a3b2abd567e3f58f26b81aaffcd525ba90f2e56', 'width': 640}, {'height': 698, 'url': 'https://external-preview.redd.it/Vyvlt8s3yT45-X2_as121h95jHf8LqDJc6WA4diT3rc.jpg?width=960&crop=smart&auto=webp&s=b304ad3bd2cb6f338828a3205b53149b8bfb191d', 'width': 960}], 'source': {'height': 703, 'url': 'https://external-preview.redd.it/Vyvlt8s3yT45-X2_as121h95jHf8LqDJc6WA4diT3rc.jpg?auto=webp&s=ffabf5fd68f4c6cad8d3b046292e231951af5bea', 'width': 966}, 'variants': {}}]} |
How does benchmark evaluation work | 3 | I would like to create a new benchmark for my specific domain. I've been trying to find information, but it's hard to come by. How does scoring work, how does feeding questions work, etc.? One concern I have: if the model produces some rambling like "Here is the answer you requested" but then also provides the right answer, how does the evaluator catch that?
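In case it helps frame answers: one common trick for the rambling problem is to force a fixed answer format in the prompt and extract it with a regex before scoring (harnesses like lm-evaluation-harness also score multiple-choice questions by comparing log-likelihoods, which sidesteps parsing entirely). A minimal sketch; the "Final answer:" convention is my own, not from any particular harness:

    import re

    def extract_answer(completion):
        # Prompt the model to end with "Final answer: <X>" and parse only that part,
        # ignoring any preamble like "Here is the answer you requested".
        match = re.search(r"Final answer:\s*(.+)", completion, re.IGNORECASE)
        return match.group(1).strip() if match else None

    def score(completion, gold):
        predicted = extract_answer(completion)
        return predicted is not None and predicted.lower() == gold.lower()

    print(score("Here is the answer you requested.\nFinal answer: 42", "42"))  # True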
Hoping to find some great articles, maybe some software people are using. | 2025-02-05T16:55:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iieodj/how_does_benchmark_evaluation_work/ | CSharpSauce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iieodj | false | null | t3_1iieodj | /r/LocalLLaMA/comments/1iieodj/how_does_benchmark_evaluation_work/ | false | false | self | 3 | null |
JSON based AI agent libary | 1 | [removed] | 2025-02-05T16:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1iieofg/json_based_ai_agent_libary/ | No_Information6299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iieofg | false | null | t3_1iieofg | /r/LocalLLaMA/comments/1iieofg/json_based_ai_agent_libary/ | false | false | self | 1 | null |
Deepseek coder performance on Xeon server | 3 | I have been testing DeepSeek Coder V2 on my local server recently, with some good results. Overall, my system can run the lite model lightning fast without a GPU.
Here is my system configuration:
**System:** 2 x Xeon 6140, Supermicro X11DPH, 16 x 32G RDIMM 2933 (2666 actual speed), 10 x 8TB SAS HDD. **Software:** llama.cpp built with BLIS support, run with NUMA.
**File system:** RAM disk. The full model GGUF is loaded into a 480G preallocated RAM disk while the test is running.
Following is a list of gguf files I used for testing:
30G ds_coder_lite.gguf: deep seek coder lite, full weight
8.9G ds_coder_lite_q4_k_s.gguf: deep seek coder lite 4bit
440G ds_coder_V2.gguf: deep seek coder full size and full weight
125G ds_coder_V2_q4_k_s.gguf: deep seek coder full size 4bit
# Results:
**DeepSeek Coder full size, full weight:**
**command line:**
llama.cpp/build/bin/llama-bench -m ds_coder_V2.gguf -t 64 --numa distribute
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 236B F16|439.19 GiB|235.74 B|BLAS|64|pp512|14.91 ± 0.19|
|deepseek2 236B F16|439.19 GiB|235.74 B|BLAS|64|tg128|1.46 ± 0.01|
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 236B F16|439.19 GiB|235.74 B|BLAS|64|pp512|12.67 ± 0.36|
|deepseek2 236B F16|439.19 GiB|235.74 B|BLAS|64|tg128|1.34 ± 0.03|
**DeepSeek Coder full size, 4-bit:**
**command line:**
llama.cpp/build/bin/llama-bench -m ds_coder_V2_q4_k_s.gguf -t 64 --numa distribute
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 236B Q4\_K - Small|124.68 GiB|235.74 B|BLAS|64|pp512|11.62 ± 0.05|
|deepseek2 236B Q4\_K - Small|124.68 GiB|235.74 B|BLAS|64|tg128|3.45 ± 0.02|
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 236B Q4\_K - Small|124.68 GiB|235.74 B|BLAS|64|pp512|11.56 ± 0.06|
|deepseek2 236B Q4\_K - Small|124.68 GiB|235.74 B|BLAS|64|tg128|3.48 ± 0.05|
**DeepSeek Coder lite, full weight:**
**command line:**
llama.cpp/build/bin/llama-bench -m ds_coder_lite.gguf -t 64 --numa distribute
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 16B F16|29.26 GiB|15.71 B|BLAS|64|pp512|126.10 ± 1.69|
|deepseek2 16B F16|29.26 GiB|15.71 B|BLAS|64|tg128|10.32 ± 0.03|
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 16B F16|29.26 GiB|15.71 B|BLAS|64|pp512|126.66 ± 1.97|
|deepseek2 16B F16|29.26 GiB|15.71 B|BLAS|64|tg128|10.34 ± 0.03|
**DeepSeek Coder lite, 4-bit:**
**command line:**
llama.cpp/build/bin/llama-bench -m ds_coder_lite_q4_k_s.gguf -t 64 --numa distribute
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 16B Q4\_K - Small|8.88 GiB|15.71 B|BLAS|64|pp512|120.88 ± 0.96|
|deepseek2 16B Q4\_K - Small|8.88 GiB|15.71 B|BLAS|64|tg128|18.43 ± 0.04|
|model|size|params|backend|threads|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|deepseek2 16B Q4\_K - Small|8.88 GiB|15.71 B|BLAS|64|pp512|124.27 ± 1.88|
|deepseek2 16B Q4\_K - Small|8.88 GiB|15.71 B|BLAS|64|tg128|18.36 ± 0.05|
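For context on these numbers, a back-of-envelope memory-bandwidth sanity check (my assumptions: token generation is bandwidth-bound, roughly all active weights are read once per token, DeepSeek Coder V2 activates about 21B of its 236B parameters, and peak bandwidth is 12 channels of DDR4-2666 across both sockets):

    active_params = 21e9                 # DeepSeek V2 MoE: ~21B active per token
    peak_bandwidth = 12 * 21.3e9         # 12 channels x ~21.3 GB/s (DDR4-2666) ~ 256 GB/s

    for name, bytes_per_param in [("F16", 2.0), ("Q4_K_S", 0.57)]:
        # 0.57 bytes/param comes from the table above: 124.68 GiB / 235.74 B params
        bytes_per_token = active_params * bytes_per_param
        print(name, round(peak_bandwidth / bytes_per_token, 1), "t/s upper bound")
    # F16:    ~6.1 t/s upper bound (measured 1.46)
    # Q4_K_S: ~21.4 t/s upper bound (measured 3.45)

The measured numbers fall well short of the theoretical ceiling (NUMA and MoE routing overheads), but the trend is the expected one: fewer bytes per token helps tg, while pp512 stays compute-bound.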
I can run coder lite at full weight smoothly on my server. However, what's weird to me is that 4-bit quantization seems to have only a minor impact on performance. Can anyone explain why? | 2025-02-05T16:58:11 | https://www.reddit.com/r/LocalLLaMA/comments/1iieqc3/deepseek_coder_performance_on_xeon_server/ | _xulion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iieqc3 | false | null | t3_1iieqc3 | /r/LocalLLaMA/comments/1iieqc3/deepseek_coder_performance_on_xeon_server/ | false | false | self | 3 | null
Looking for Local Open-Source AI Tools to Dub Videos in Different Languages (3080 10GB + 64GB RAM) | 8 |
Hey everyone! I’m trying to find a local, open-source AI solution that can dub videos from one language to another (or vice versa). Specifically, I want to:
1. Dub non-English videos into English (e.g., Japanese → English).
2. Dub English videos into other languages (e.g., Spanish, Mandarin, etc.).
I have an RTX 3080 (10GB VRAM) and 64GB RAM, so I'm hoping to run this locally for budget reasons.
- Are there any open-source projects (e.g., Whisper, Coqui, etc.) or workflows that handle speech-to-text → translation → text-to-speech + lip-sync? (A rough sketch of such a pipeline is below the list.)
- Any recommendations for tools that work well with NVIDIA GPUs (like my 3080)?
- Do I need to pre-process videos (e.g., separate audio/video streams) for best results?
- Tips for minimizing latency or optimizing for my hardware setup?
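For the STT → translation → TTS leg (no lip-sync), here is a minimal sketch of one possible pipeline using faster-whisper, a Helsinki-NLP translation model, and Coqui XTTS. The model choices are my assumptions, long texts would need chunking, and on 10GB VRAM you would likely load these three models one at a time rather than together:

    import subprocess
    from faster_whisper import WhisperModel
    from transformers import pipeline
    from TTS.api import TTS

    # 1. Extract the audio track with ffmpeg.
    subprocess.run(["ffmpeg", "-y", "-i", "input.mp4", "-vn", "-ar", "16000", "audio.wav"], check=True)

    # 2. Speech-to-text (Japanese in this example).
    stt = WhisperModel("large-v3", device="cuda", compute_type="float16")
    segments, _ = stt.transcribe("audio.wav", language="ja")
    source_text = " ".join(seg.text for seg in segments)

    # 3. Translate to English (chunk long inputs in practice).
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")
    english_text = translator(source_text)[0]["translation_text"]

    # 4. Text-to-speech, cloning the original speaker's voice from the source audio.
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cuda")
    tts.tts_to_file(text=english_text, speaker_wav="audio.wav", language="en", file_path="dub.wav")

    # 5. Mux the new audio back over the original video stream.
    subprocess.run(["ffmpeg", "-y", "-i", "input.mp4", "-i", "dub.wav",
                    "-map", "0:v", "-map", "1:a", "-c:v", "copy", "output.mp4"], check=True)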
Thanks in advance! 🙏 | 2025-02-05T17:03:03 | https://www.reddit.com/r/LocalLLaMA/comments/1iiev1g/looking_for_local_opensource_ai_tools_to_dub/ | nikprod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiev1g | false | null | t3_1iiev1g | /r/LocalLLaMA/comments/1iiev1g/looking_for_local_opensource_ai_tools_to_dub/ | false | false | self | 8 | null |
Manifold is a platform for enabling workflow automation using AI assistants. | 2 | 2025-02-05T17:03:31 | https://www.reddit.com/r/LocalLLaMA/comments/1iievh5/manifold_is_a_platform_for_enabling_workflow/ | LocoMod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iievh5 | false | null | t3_1iievh5 | /r/LocalLLaMA/comments/1iievh5/manifold_is_a_platform_for_enabling_workflow/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'igo60SRK1kqiQVDfFInOrjRv5Zr12YP86ZMy36Jlw-g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QrdrbAQ8EYBvGT1rfczdKxzTDz2PS7H0lPg6vY34bl8.jpg?width=108&crop=smart&auto=webp&s=b62ad09e57291d5e3c7955b55e536272eb7739ef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QrdrbAQ8EYBvGT1rfczdKxzTDz2PS7H0lPg6vY34bl8.jpg?width=216&crop=smart&auto=webp&s=aaac5031a987767c6495e1cde464498711b116d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QrdrbAQ8EYBvGT1rfczdKxzTDz2PS7H0lPg6vY34bl8.jpg?width=320&crop=smart&auto=webp&s=ec0c8029c8d9c93de3c9a5cd46e4a3f15f34464f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QrdrbAQ8EYBvGT1rfczdKxzTDz2PS7H0lPg6vY34bl8.jpg?width=640&crop=smart&auto=webp&s=af37b643489195e8c676fc64ddba9686c8f5c18d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QrdrbAQ8EYBvGT1rfczdKxzTDz2PS7H0lPg6vY34bl8.jpg?width=960&crop=smart&auto=webp&s=d3fcd6d36e1c03b03c42dfeb44ff9d05c653764a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QrdrbAQ8EYBvGT1rfczdKxzTDz2PS7H0lPg6vY34bl8.jpg?width=1080&crop=smart&auto=webp&s=b83ee6f906acbec55c4a8b1520e44852b7b7f672', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QrdrbAQ8EYBvGT1rfczdKxzTDz2PS7H0lPg6vY34bl8.jpg?auto=webp&s=1ec02ff8ad52e5d05fcb622bc40a5e1df283c1b9', 'width': 1200}, 'variants': {}}]} |
||
Hey so I am interested in creating a custom lightweight model for latin | 6 | I want to take a model with around 8 billion parameters and train it with latin translations, grammer, endings, etc to translate latin accurately. I don't mind manually training it to achieve the results I want. If you can help do that or advise if it is too ambitious for a rookie like myself. I'd like it to run on phones IF possible. Not necessary for it though | 2025-02-05T17:06:30 | https://www.reddit.com/r/LocalLLaMA/comments/1iiey6e/hey_so_i_am_interested_in_creating_a_custom/ | Fine_Salamander_8691 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiey6e | false | null | t3_1iiey6e | /r/LocalLLaMA/comments/1iiey6e/hey_so_i_am_interested_in_creating_a_custom/ | false | false | self | 6 | null |
Do you know any fun game made with local llama? | 1 | [removed] | 2025-02-05T17:12:59 | https://www.reddit.com/r/LocalLLaMA/comments/1iif3x8/do_you_know_any_fun_game_made_with_local_llama/ | whyNamesTurkiye | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iif3x8 | false | null | t3_1iif3x8 | /r/LocalLLaMA/comments/1iif3x8/do_you_know_any_fun_game_made_with_local_llama/ | false | false | self | 1 | null |
Finetune TTS model with specific entonation? | 1 | [removed] | 2025-02-05T17:16:10 | https://www.reddit.com/r/LocalLLaMA/comments/1iif6rx/finetune_tts_model_with_specific_entonation/ | RodrigoDNGT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iif6rx | false | null | t3_1iif6rx | /r/LocalLLaMA/comments/1iif6rx/finetune_tts_model_with_specific_entonation/ | false | false | self | 1 | null |
Running deepseek on amd epyc cpu only | 1 | [removed] | 2025-02-05T17:17:20 | https://www.reddit.com/r/LocalLLaMA/comments/1iif7t6/running_deepseek_on_amd_epyc_cpu_only/ | Resident-Service9229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iif7t6 | false | null | t3_1iif7t6 | /r/LocalLLaMA/comments/1iif7t6/running_deepseek_on_amd_epyc_cpu_only/ | false | false | self | 1 | null |
Finetune TTS model with specific entonation dataset | 1 | [removed] | 2025-02-05T17:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/1iifcqc/finetune_tts_model_with_specific_entonation/ | RodrigoDNGT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iifcqc | false | null | t3_1iifcqc | /r/LocalLLaMA/comments/1iifcqc/finetune_tts_model_with_specific_entonation/ | false | false | self | 1 | null |
Are there companies interested in LLM unlearning | 0 | I've been exploring this area of research independently and was able to make a breakthrough. I looked for roles specifically related to post-training unlearning in LLMs but couldn't find anything. If anyone wants to discuss this, my DMs are open.
Suggestions or referrals would help. | 2025-02-05T17:34:36 | https://www.reddit.com/r/LocalLLaMA/comments/1iifmyx/are_there_companies_interested_in_llm_unlearning/ | East_Turnover_1652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iifmyx | false | null | t3_1iifmyx | /r/LocalLLaMA/comments/1iifmyx/are_there_companies_interested_in_llm_unlearning/ | false | false | self | 0 | null |
Running large scale LLM Judge | 1 | [removed] | 2025-02-05T18:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1iigcp3/running_large_scale_llm_judge/ | floppy_llama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iigcp3 | false | null | t3_1iigcp3 | /r/LocalLLaMA/comments/1iigcp3/running_large_scale_llm_judge/ | false | false | self | 1 | null |
Running large scale LLM Judge | 1 | [removed] | 2025-02-05T18:05:40 | https://www.reddit.com/r/LocalLLaMA/comments/1iigens/running_large_scale_llm_judge/ | floppy_llama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iigens | false | null | t3_1iigens | /r/LocalLLaMA/comments/1iigens/running_large_scale_llm_judge/ | false | false | self | 1 | null |
Upgrading my ThinkCentre to run a local LLM server: advice needed | 1 | [removed] | 2025-02-05T18:15:54 | https://www.reddit.com/r/LocalLLaMA/comments/1iignrc/upgrading_my_thinkcentre_to_run_a_local_llm/ | GZRattin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iignrc | false | null | t3_1iignrc | /r/LocalLLaMA/comments/1iignrc/upgrading_my_thinkcentre_to_run_a_local_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'AuZTzc-vOwVI8j6t14Nx3w7VQ9I54Tt-GajoEVfLyAc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?width=108&crop=smart&auto=webp&s=5407de6a2a14fba51027bae124b9dde4a5d5cbfc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?width=216&crop=smart&auto=webp&s=5a86fc57eb3b855d9c9681c52bcfede4b4a13c6c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?width=320&crop=smart&auto=webp&s=9b63699a8c3e965ea79f445b4294b780a345ab8c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?auto=webp&s=fee792b5e0e6b95a580efee8137ba75f7b330bc2', 'width': 480}, 'variants': {}}]} |
Upgrading my ThinkCentre to run a local LLM server: advice needed | 1 | [removed] | 2025-02-05T18:16:13 | https://www.reddit.com/r/LocalLLaMA/comments/1iigo1h/upgrading_my_thinkcentre_to_run_a_local_llm/ | GZRattin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iigo1h | false | null | t3_1iigo1h | /r/LocalLLaMA/comments/1iigo1h/upgrading_my_thinkcentre_to_run_a_local_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'AuZTzc-vOwVI8j6t14Nx3w7VQ9I54Tt-GajoEVfLyAc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?width=108&crop=smart&auto=webp&s=5407de6a2a14fba51027bae124b9dde4a5d5cbfc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?width=216&crop=smart&auto=webp&s=5a86fc57eb3b855d9c9681c52bcfede4b4a13c6c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?width=320&crop=smart&auto=webp&s=9b63699a8c3e965ea79f445b4294b780a345ab8c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/pxbq3W-zJBpP_8WvaLh3ccqHr2vs7oAtbJUYIPt3CpE.jpg?auto=webp&s=fee792b5e0e6b95a580efee8137ba75f7b330bc2', 'width': 480}, 'variants': {}}]} |
How to download the full version of DeepSeek R1? | 7 | I want to download the full version of DeepSeek R1 just in case it gets banned down the line. I've never downloaded a model from Huggingface before and when I go to DeepSeek's page, I don't see the model. I see a lot of safetensors files and some other files, but not the actual model. Where is it? | 2025-02-05T18:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/1iigodb/how_to_download_the_full_version_of_deepseek_r1/ | whatswimsbeneath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iigodb | false | null | t3_1iigodb | /r/LocalLLaMA/comments/1iigodb/how_to_download_the_full_version_of_deepseek_r1/ | false | false | self | 7 | null |
Meta knowledge is the missing piece for integrating reasoning models into enterprise | 3 | Hello everyone,
I have finally started working on an agent system in my company (graduated from a RAG production system) and on how to use reasoning models (like DeepSeek R1).
I found out that reasoning models are not that useful by themselves without what I call meta knowledge, which is just the knowledge of how to use the business knowledge, data sources, internal APIs, and all the other stuff inside each company.
I really think this is the beginning of enterprise-level agentic systems, where you have an agent in front of each product / data source and then high-level agents that can just call these to get the job done (a tiny sketch of what I mean is below).
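To make that concrete, a minimal sketch of one source agent per system plus a high-level router. Everything here is illustrative: llm(), query_crm(), search_wiki(), and run_sql() are hypothetical wrappers, not from any framework:

    # Each source agent wraps one internal system and knows how to query it.
    source_agents = {
        "crm": lambda q: query_crm(q),
        "wiki": lambda q: search_wiki(q),
        "warehouse": lambda q: run_sql(q),
    }

    ROUTER_PROMPT = ("You are a router. Given the user question, reply with exactly "
                     "one of: crm, wiki, warehouse.\nQuestion: {question}")

    def high_level_agent(question):
        # The reasoning model uses the meta knowledge (which source does what) to route.
        source = llm(ROUTER_PROMPT.format(question=question)).strip()
        evidence = source_agents[source](question)
        return llm(f"Answer the question using this evidence:\n{evidence}\n\nQuestion: {question}")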
I've done some tests and honestly it is pretty powerful (though not that reliable yet).
I've just written a post about this --> [here](https://www.metadocs.co/2025/02/05/meta-knowledge-the-missing-link-on-how-to-integrate-reasoning-models-into-enterprise/).
What do you think about this? Am I missing something, or is this a dead end? I'd love to discuss this :D.
NB: This is not a LLM generated post so don't worry (if you read it, you will see :D). | 2025-02-05T18:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1iigqdr/meta_knowledge_is_the_missing_piece_for/ | ravediamond000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iigqdr | false | null | t3_1iigqdr | /r/LocalLLaMA/comments/1iigqdr/meta_knowledge_is_the_missing_piece_for/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ahH4S_uUef9UbHX0UGnZpC3aGoxTZ0reKGKVPidEH7I', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/Ev0Efx5xh8cq7sQL7gAqYYGgJlV27jNmOVkU8_fpyBY.jpg?width=108&crop=smart&auto=webp&s=b6cfbde33918ae03d74a4e44a9882c7c65f8135b', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/Ev0Efx5xh8cq7sQL7gAqYYGgJlV27jNmOVkU8_fpyBY.jpg?width=216&crop=smart&auto=webp&s=4fc1dfb61a4fc75cb32188440b11e51839bc7d60', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/Ev0Efx5xh8cq7sQL7gAqYYGgJlV27jNmOVkU8_fpyBY.jpg?width=320&crop=smart&auto=webp&s=b4f1732d7d881f9d092a530c004fe1034f299c9b', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/Ev0Efx5xh8cq7sQL7gAqYYGgJlV27jNmOVkU8_fpyBY.jpg?width=640&crop=smart&auto=webp&s=37a79b9de9b38bcba4acd1ee22a6ff9d8cb27050', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/Ev0Efx5xh8cq7sQL7gAqYYGgJlV27jNmOVkU8_fpyBY.jpg?width=960&crop=smart&auto=webp&s=e7dcf791092c40d4796b132c5fdef7bd9f45fc48', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/Ev0Efx5xh8cq7sQL7gAqYYGgJlV27jNmOVkU8_fpyBY.jpg?width=1080&crop=smart&auto=webp&s=494af50d51852ed3d7a0df482e04efcb0aa41fff', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Ev0Efx5xh8cq7sQL7gAqYYGgJlV27jNmOVkU8_fpyBY.jpg?auto=webp&s=9e5eb4e08357dde1c2c0503571355847745922d2', 'width': 1792}, 'variants': {}}]} |
Is there a fine-tuned version of Deepseek-R1 Distilled that can code? | 1 | I've been trying to get DeepSeek-R1-Distill-Qwen-7B to write code examples for me, but it simply can't do it. I was using Qwen2.5-Coder-7B previously, so I don't suppose there's a distilled version of that? | 2025-02-05T18:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1iigr34/is_there_a_finetined_version_of_deepseekr1/ | countjj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iigr34 | false | null | t3_1iigr34 | /r/LocalLLaMA/comments/1iigr34/is_there_a_finetined_version_of_deepseekr1/ | false | false | self | 1 | null
Deepseek R1 doesn't understand how tubes work | 0 | I just had an interesting conversation with Deepseek R1 (the real R1 in the web chat from DeepSeek) and I noticed it doesn't seem to understand how tubes work. Take a look at this:
[The response is fine, but the second part seems to imply there isn't a path in the tube for the signal](https://preview.redd.it/kjr87sgj5dhe1.png?width=928&format=png&auto=webp&s=8e7df4fad011c186702ad5ceeb2f98888b8ad152)
I then kept insisting, trying to make it understand how tubes work, but it really doesn't seem to get it:
https://preview.redd.it/txz2np486dhe1.png?width=832&format=png&auto=webp&s=96e3f8755db7dbfceec442286fdf59b98cebf177
https://preview.redd.it/a7ac5ev06dhe1.png?width=830&format=png&auto=webp&s=f72c1ddd98c38150eb4d8f3a9f4aaae83a16187a
https://preview.redd.it/9ueldbob6dhe1.png?width=701&format=png&auto=webp&s=67bb6157f33912e496ffbc635bab90fccd008326
https://preview.redd.it/t0xp09gf6dhe1.png?width=516&format=png&auto=webp&s=c65fefb30a33624f4730830914485f9841e26e4e
It really doesn't seem to get it. Did anyone else have this problem? | 2025-02-05T18:31:40 | https://www.reddit.com/r/LocalLLaMA/comments/1iih1l3/deepseek_r1_dont_understand_how_tube_works/ | Quantum1248 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iih1l3 | false | null | t3_1iih1l3 | /r/LocalLLaMA/comments/1iih1l3/deepseek_r1_dont_understand_how_tube_works/ | false | false | 0 | null
|
Google's been at work, not Gemma 3 sadly | 180 | 2025-02-05T18:32:11 | MixtureOfAmateurs | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iih21v | false | null | t3_1iih21v | /r/LocalLLaMA/comments/1iih21v/googles_been_at_work_not_gemma_3_sadly/ | false | false | 180 | {'enabled': True, 'images': [{'id': 'CvXcAbV8uEMKhi5W7krQ-C89aWQu_8_NqtJOlqLHiBE', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/x5uaqeak6dhe1.png?width=108&crop=smart&auto=webp&s=b2c658ddda3ef502fd241c176c37eb65d6a688f8', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/x5uaqeak6dhe1.png?width=216&crop=smart&auto=webp&s=fbd4a1e2f029f01484e9b755d64e432a73f9f25b', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/x5uaqeak6dhe1.png?width=320&crop=smart&auto=webp&s=87ac5db8ef998927b98f0559468dd0f99a87fa19', 'width': 320}], 'source': {'height': 376, 'url': 'https://preview.redd.it/x5uaqeak6dhe1.png?auto=webp&s=885ef90e2694200a561a42edd6883a3a278a3dae', 'width': 448}, 'variants': {}}]} |
|||
Mixture of Experts | 1 | [removed] | 2025-02-05T18:32:54 | https://www.reddit.com/r/LocalLLaMA/comments/1iih2or/mixture_of_experts/ | roupellstreet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iih2or | false | null | t3_1iih2or | /r/LocalLLaMA/comments/1iih2or/mixture_of_experts/ | false | false | self | 1 | null |
I built my own AI girlfriend | 1 | [removed] | 2025-02-05T18:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/1iihb10/i_built_my_own_ai_girlfriend/ | prabhus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iihb10 | false | null | t3_1iihb10 | /r/LocalLLaMA/comments/1iihb10/i_built_my_own_ai_girlfriend/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'v2VAjztk6em45nh-8UwDteHLOmc-ctV612m-YYv1Yiw', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=108&crop=smart&auto=webp&s=e88a8e92d13c120c3a30f05351b01fccf9186694', 'width': 108}, {'height': 80, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=216&crop=smart&auto=webp&s=b924b179e70ae106a2a9e8d29ad8ca3b4a336e81', 'width': 216}, {'height': 119, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=320&crop=smart&auto=webp&s=ee5993ffc9268f31cd958262ca2ee17b01bda157', 'width': 320}, {'height': 238, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=640&crop=smart&auto=webp&s=1f26797fd83d0ed50f43e511d9666bb730a5f503', 'width': 640}, {'height': 358, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=960&crop=smart&auto=webp&s=926fb6f9f3b25c3a30cf3f405b757926f1921663', 'width': 960}, {'height': 403, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=1080&crop=smart&auto=webp&s=a83ab0c9b242f6fbe0b0e636b0a72e91544a7c9f', 'width': 1080}], 'source': {'height': 411, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?auto=webp&s=6c28369879ef119b7e1040a6709f20b9b8f80135', 'width': 1101}, 'variants': {'nsfw': {'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=6e32b4f179122bf49b3efb53e05f1544b714a73a', 'width': 108}, {'height': 80, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=61e17a95e8e2fca8683dcbeaa0d938c8c8a06671', 'width': 216}, {'height': 119, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=d42166add29bb08350aacb5f4ef9f3c34e1476f9', 'width': 320}, {'height': 238, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6f209bc4381c5a44d12aa732991ce53c828b4c34', 'width': 640}, {'height': 358, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=652b727dc83de65653dba341c5e9576016641229', 'width': 960}, {'height': 403, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=f6c1b2b25003ce36f8fd3301ba069273ba4d64bc', 'width': 1080}], 'source': {'height': 411, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?blur=40&format=pjpg&auto=webp&s=9c27ad44e1e78e0c067c10d40a690c8e6210d902', 'width': 1101}}, 'obfuscated': {'resolutions': [{'height': 40, 'url': 
'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=6e32b4f179122bf49b3efb53e05f1544b714a73a', 'width': 108}, {'height': 80, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=61e17a95e8e2fca8683dcbeaa0d938c8c8a06671', 'width': 216}, {'height': 119, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=d42166add29bb08350aacb5f4ef9f3c34e1476f9', 'width': 320}, {'height': 238, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6f209bc4381c5a44d12aa732991ce53c828b4c34', 'width': 640}, {'height': 358, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=652b727dc83de65653dba341c5e9576016641229', 'width': 960}, {'height': 403, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=f6c1b2b25003ce36f8fd3301ba069273ba4d64bc', 'width': 1080}], 'source': {'height': 411, 'url': 'https://external-preview.redd.it/Xm2gtDhtVLVcQAukYlq65zoUnkiwN2M8-JbXA-wbUPE.jpg?blur=40&format=pjpg&auto=webp&s=9c27ad44e1e78e0c067c10d40a690c8e6210d902', 'width': 1101}}}}]} |
Those moments in time you wish would last forever | 81 | 2025-02-05T18:43:04 | beezbos_trip | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iihbxk | false | null | t3_1iihbxk | /r/LocalLLaMA/comments/1iihbxk/those_moments_in_time_you_wish_would_last_forever/ | false | false | 81 | {'enabled': True, 'images': [{'id': '2Heo0kfn6VeYAOKeDeaoNtKylYG0_OMJUCO9X7ewVp8', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/awwwugte8dhe1.jpeg?width=108&crop=smart&auto=webp&s=9a96d3ece58c4fada182d0d74bbb414fec02589d', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/awwwugte8dhe1.jpeg?width=216&crop=smart&auto=webp&s=6591b94149981a2d4590c4307fa0a5c2b01500ed', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/awwwugte8dhe1.jpeg?width=320&crop=smart&auto=webp&s=183af850bc952ec38239e2cc2f9e8413b41daeec', 'width': 320}, {'height': 423, 'url': 'https://preview.redd.it/awwwugte8dhe1.jpeg?width=640&crop=smart&auto=webp&s=b62ddb1e6dac5bc36f41ce25fb146a4bdd9fae8d', 'width': 640}], 'source': {'height': 529, 'url': 'https://preview.redd.it/awwwugte8dhe1.jpeg?auto=webp&s=d323b72542569dd17be954547d2742a6d64bfdef', 'width': 800}, 'variants': {}}]} |
|||
OpenSource Glama Alternative | 1 | Hello, I like the AI tools that are currently emerging, especially for development, but I don't like spending money. Does anyone know an open-source alternative to Glama? It advertises "*Glama* is a ChatGPT alternative for power users, with features like API gateway, agents, MCP, prompt templates, and more," which sounds pretty cool; however, the free version only lets you attach one MCP server. | 2025-02-05T18:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1iihhlv/opensource_glama_alternative/ | GoHome_Gi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iihhlv | false | null | t3_1iihhlv | /r/LocalLLaMA/comments/1iihhlv/opensource_glama_alternative/ | false | false | self | 1 | null
Announcing Sage: Open-source voice chat with LLMs | 76 | 2025-02-05T18:52:04 | https://github.com/farshed/sage | felixatwood | github.com | 1970-01-01T00:00:00 | 0 | {} | 1iihjq1 | false | null | t3_1iihjq1 | /r/LocalLLaMA/comments/1iihjq1/announcing_sage_opensource_voice_chat_with_llms/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'zKacJnRxETzeIyCqNswkqshx8eqSFys89ditTIKxSiE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Do-ORxNEK_7XDMp8cozVLzedSBauVAy968xPLSTBvJg.jpg?width=108&crop=smart&auto=webp&s=3ea1aed4bb910ed34be91d83b78b1c87b7c6e2fa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Do-ORxNEK_7XDMp8cozVLzedSBauVAy968xPLSTBvJg.jpg?width=216&crop=smart&auto=webp&s=e06d1df9b6b4e7b01e1a1ffa1ece47ef9850fe56', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Do-ORxNEK_7XDMp8cozVLzedSBauVAy968xPLSTBvJg.jpg?width=320&crop=smart&auto=webp&s=a4d79aa8ba1a9c91b651586e67f202ec70373421', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Do-ORxNEK_7XDMp8cozVLzedSBauVAy968xPLSTBvJg.jpg?width=640&crop=smart&auto=webp&s=321ce755c82cbf201d2d52979b4c2663df16e1df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Do-ORxNEK_7XDMp8cozVLzedSBauVAy968xPLSTBvJg.jpg?width=960&crop=smart&auto=webp&s=f69bd46c35c190bbc46b17faaa47d7d923c5fcee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Do-ORxNEK_7XDMp8cozVLzedSBauVAy968xPLSTBvJg.jpg?width=1080&crop=smart&auto=webp&s=0b1d58addf6e97025079699e94f418ddc8e37a1a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Do-ORxNEK_7XDMp8cozVLzedSBauVAy968xPLSTBvJg.jpg?auto=webp&s=05513fc280ae5fa0a9863e9fa60979faa6b1121f', 'width': 1200}, 'variants': {}}]} |
||
jiktyuktryktr | 1 | wrthygewrtyhewrhbrewthbrwh | 2025-02-05T18:53:06 | https://www.reddit.com/r/LocalLLaMA/comments/1iihkn0/jiktyuktryktr/ | Ok-Succotash-7945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iihkn0 | false | null | t3_1iihkn0 | /r/LocalLLaMA/comments/1iihkn0/jiktyuktryktr/ | false | false | self | 1 | null |
This workflow with DeepSeek R1 is next level! | 14 | Just found this notebook on setting up a data distillation pipeline by CAMEL-AI with **DeepSeek R1**, and it's wild. You can crank out super clean **math reasoning datasets** with detailed step-by-step thought processes (long chain-of-thought type stuff).
I mean, DeepSeek R1 is everywhere right now, and seeing it in action here is just lit.
Worth checking out if you’re into dataset generation or reasoning tasks. Would love to hear your thoughts!
[https://colab.research.google.com/drive/1BnV4iyWlXdizzpRQPYjmwIt70oVKziBw#scrollTo=RiZXE5RDB8tu](https://colab.research.google.com/drive/1BnV4iyWlXdizzpRQPYjmwIt70oVKziBw#scrollTo=RiZXE5RDB8tu)
https://preview.redd.it/9rilgyug9dhe1.png?width=3840&format=png&auto=webp&s=a2ccd47f2c449a229d5e4a68265da5c756d42773
| 2025-02-05T18:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/1iihmoj/this_workflow_with_deepseek_r1_is_next_level/ | iamnotdeadnuts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iihmoj | false | null | t3_1iihmoj | /r/LocalLLaMA/comments/1iihmoj/this_workflow_with_deepseek_r1_is_next_level/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
|
Deepseek-R1 IQ1_S fullsize for desperate poor man | 1 | [removed] | 2025-02-05T19:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1iihrya/deepseekr1_iq1_s_fullsize_for_desperate_poor_man/ | Reasonable_Flower_72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iihrya | false | null | t3_1iihrya | /r/LocalLLaMA/comments/1iihrya/deepseekr1_iq1_s_fullsize_for_desperate_poor_man/ | false | false | 1 | null |
|
Google claims to achieve World's Best AI ; & giving to users for FREE ! | 0 | 2025-02-05T19:14:59 | https://www.reddit.com/gallery/1iii4st | BidHot8598 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1iii4st | false | null | t3_1iii4st | /r/LocalLLaMA/comments/1iii4st/google_claims_to_achieve_worlds_best_ai_giving_to/ | false | false | 0 | null |
||
The Engineering Unlocks Behind DeepSeek | YC Decoded | 0 | 2025-02-05T19:18:00 | https://www.youtube.com/watch?v=4Tmn-XP93m4 | hedgehog0 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1iii7ic | false | {'oembed': {'author_name': 'Y Combinator', 'author_url': 'https://www.youtube.com/@ycombinator', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/4Tmn-XP93m4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="The Engineering Unlocks Behind DeepSeek | YC Decoded"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/4Tmn-XP93m4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'The Engineering Unlocks Behind DeepSeek | YC Decoded', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1iii7ic | /r/LocalLLaMA/comments/1iii7ic/the_engineering_unlocks_behind_deepseek_yc_decoded/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'H-zD_0qJ29qz68hn-zLt00Y-MF0dszm7syJCwldol9w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/n68KsLdQIUoDzQYPmU_TyGtW06OJmMdDMfToMUWsupg.jpg?width=108&crop=smart&auto=webp&s=e0f94a90034b8ae718ad6fe056a1918a8d7bcc98', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/n68KsLdQIUoDzQYPmU_TyGtW06OJmMdDMfToMUWsupg.jpg?width=216&crop=smart&auto=webp&s=d738a49dc0c0c053da52f884630230fe7007452a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/n68KsLdQIUoDzQYPmU_TyGtW06OJmMdDMfToMUWsupg.jpg?width=320&crop=smart&auto=webp&s=75764c51f0ae605c52a88537223021324d3e816b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/n68KsLdQIUoDzQYPmU_TyGtW06OJmMdDMfToMUWsupg.jpg?auto=webp&s=f37298d03bc93b3ce28591a257c0189f0dd4fffd', 'width': 480}, 'variants': {}}]} |
||
DeepSeek R1 ties o1 for first place on the Generalization Benchmark. | 281 | 2025-02-05T19:30:56 | zero0_one1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iiij1d | false | null | t3_1iiij1d | /r/LocalLLaMA/comments/1iiij1d/deepseek_r1_ties_o1_for_first_place_on_the/ | false | false | 281 | {'enabled': True, 'images': [{'id': 'MliMLpuLiYfK6XpQPXpQ9f96YWFGVp9wlO1dummc_-k', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/7na44xs3gdhe1.png?width=108&crop=smart&auto=webp&s=b218f017aa919d5a32e8cad350e36ea65ac610a9', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/7na44xs3gdhe1.png?width=216&crop=smart&auto=webp&s=cea5aa741310d651cca39524af699e8c53f4bd5a', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/7na44xs3gdhe1.png?width=320&crop=smart&auto=webp&s=b840e5b0ef8a571e0aa4b8a61550ce6f7740f07f', 'width': 320}, {'height': 523, 'url': 'https://preview.redd.it/7na44xs3gdhe1.png?width=640&crop=smart&auto=webp&s=77dd2e43eb2352bf9c4ab11068ca9221f8b83934', 'width': 640}, {'height': 785, 'url': 'https://preview.redd.it/7na44xs3gdhe1.png?width=960&crop=smart&auto=webp&s=006b3dd4089d042f268a563b18987ddf342763b5', 'width': 960}, {'height': 883, 'url': 'https://preview.redd.it/7na44xs3gdhe1.png?width=1080&crop=smart&auto=webp&s=3aca2c605d0278c4e264859dcbf8f01638e1198c', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/7na44xs3gdhe1.png?auto=webp&s=f9445595ec0516582cedad0ef6351de8d4d11dac', 'width': 1100}, 'variants': {}}]} |
|||
Anthropic: ‘Please don’t use AI’ | 1215 |
"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate ‘Yes’ if you have read and agree."
There's a certain irony in one of the biggest AI labs coming out against AI-assisted applications while acknowledging the enshittification of the whole job application process.
| 2025-02-05T19:36:56 | https://www.ft.com/content/9b1e6af4-94f2-41c6-bb91-96a74b9b2da1 | FullstackSensei | ft.com | 1970-01-01T00:00:00 | 0 | {} | 1iiio9u | false | null | t3_1iiio9u | /r/LocalLLaMA/comments/1iiio9u/anthropic_please_dont_use_ai/ | false | false | 1,215 | {'enabled': False, 'images': [{'id': 'MLmPwPx-pmB0m-M4FKseleuM-WJ4yxRERRHeeb-WSAM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XdLbwNiaDfP6hGsSmn44MWaR_4YQK7L36Ar5RuZkt4s.jpg?width=108&crop=smart&auto=webp&s=9d28310e5f041febbd99f5530ad2d90718c33522', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XdLbwNiaDfP6hGsSmn44MWaR_4YQK7L36Ar5RuZkt4s.jpg?width=216&crop=smart&auto=webp&s=599ad4615d8352ed2e4d3ae7ed81c6b46f718862', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/XdLbwNiaDfP6hGsSmn44MWaR_4YQK7L36Ar5RuZkt4s.jpg?width=320&crop=smart&auto=webp&s=5ad78a1f514117d2f562c4bbceee0aa62bd02032', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/XdLbwNiaDfP6hGsSmn44MWaR_4YQK7L36Ar5RuZkt4s.jpg?width=640&crop=smart&auto=webp&s=9f5b6dec6d423a65124ea27edb0de0e52f12e6ef', 'width': 640}], 'source': {'height': 394, 'url': 'https://external-preview.redd.it/XdLbwNiaDfP6hGsSmn44MWaR_4YQK7L36Ar5RuZkt4s.jpg?auto=webp&s=77551fbc8f8c2129e0669196f65474f2d77423ee', 'width': 700}, 'variants': {}}]} |
|
Andrej Karpathy: Deep Dive into LLMs Like ChatGPT | 95 | 2025-02-05T19:43:08 | https://www.youtube.com/watch?v=7xTGNNLPyMI | hedgehog0 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1iiitl5 | false | {'oembed': {'author_name': 'Andrej Karpathy', 'author_url': 'https://www.youtube.com/@AndrejKarpathy', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7xTGNNLPyMI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Deep Dive into LLMs like ChatGPT"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/7xTGNNLPyMI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Deep Dive into LLMs like ChatGPT', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1iiitl5 | /r/LocalLLaMA/comments/1iiitl5/andrej_karpathy_deep_dive_into_llms_like_chatgpt/ | false | false | 95 | {'enabled': False, 'images': [{'id': 'Ypm2SeDGOlqgI8SNrgBNAsFgI2Jh39ID1L1i9iRt_L4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/DmIKQR1Qh6xD2A-jd67MgvOzKDXIcFDp0jJD5ODYpIY.jpg?width=108&crop=smart&auto=webp&s=4991390722249eebb076d921f587590d49b2ce81', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/DmIKQR1Qh6xD2A-jd67MgvOzKDXIcFDp0jJD5ODYpIY.jpg?width=216&crop=smart&auto=webp&s=43614507699d540fb854fe2d58574a985c88e8d2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/DmIKQR1Qh6xD2A-jd67MgvOzKDXIcFDp0jJD5ODYpIY.jpg?width=320&crop=smart&auto=webp&s=4b94b4626807e1636998d3593911753a052fdde2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/DmIKQR1Qh6xD2A-jd67MgvOzKDXIcFDp0jJD5ODYpIY.jpg?auto=webp&s=1d54f193e9442b2b4b2503b31ea208b5ccf25b52', 'width': 480}, 'variants': {}}]} |
||
Good MoE Models smaller than R1? | 11 | I am looking for MoE models of a similar size to the original Mixtral 8x7B. Is there anything competitive available at the moment?
Background:
I have a PC with 12GB VRAM and 64GB RAM. Large models like Llama 3.3 theoretically fit in my RAM but are of course slow (slightly over 1 t/s). However, similarly sized MoEs like Mixtral 8x7B are a lot faster, around 4 to 5 t/s, which is at least usable for some things. Of course, fitting into VRAM is fastest, but I like experimenting with larger models and don't want to buy new GPUs yet. | 2025-02-05T19:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1iiixsv/good_moe_models_smaller_than_r1/ | And1mon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiixsv | false | null | t3_1iiixsv | /r/LocalLLaMA/comments/1iiixsv/good_moe_models_smaller_than_r1/ | false | false | self | 11 | null
Alternative to deepsearch | 1 | HuggingFace published an alternative to deepsearch that seems quite interesting
https://preview.redd.it/5chbfyr4kdhe1.png?width=765&format=png&auto=webp&s=0a0b8050f22166a3287156fd00da335bd5cd28f9
| 2025-02-05T19:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/1iiixx9/alternative_to_deepsearch/ | konilse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiixx9 | false | null | t3_1iiixx9 | /r/LocalLLaMA/comments/1iiixx9/alternative_to_deepsearch/ | false | false | 1 | null |
|
Alternative to DeepResearch | 19 | HuggingFace published an alternative to DeepResearch that seems quite interesting
https://preview.redd.it/3duhicf9kdhe1.png?width=765&format=png&auto=webp&s=7e158c9a5283d76f9cd578e6047a90ba2a5f7abd
| 2025-02-05T19:49:27 | https://www.reddit.com/r/LocalLLaMA/comments/1iiizaa/alternative_to_deepresearch/ | konilse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iiizaa | false | null | t3_1iiizaa | /r/LocalLLaMA/comments/1iiizaa/alternative_to_deepresearch/ | false | false | 19 | null |
|
S1-32B: The $6 R1 Competitor? | 74 | 2025-02-05T19:56:02 | https://timkellogg.me/blog/2025/02/03/s1 | paf1138 | timkellogg.me | 1970-01-01T00:00:00 | 0 | {} | 1iij58e | false | null | t3_1iij58e | /r/LocalLLaMA/comments/1iij58e/s132b_the_6_r1_competitor/ | false | false | default | 74 | null |