title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Who owns RAG/Agent applications: ML Engineers or AI Engineers? | 0 | I've been wrestling with an interesting challenge at work that I think highlights a growing tension in our field. We're seeing this emergence of "AI Engineers" as distinct from ML Engineers, particularly in companies building GenAI products. But I'm starting to question if this separation actually makes sense at scale.
Here's what I'm seeing in practice:
- AI Engineers (often from full-stack backgrounds) handle the "application layer" - building frontends, handling API integration with models like GPT-4/Claude
- ML Engineers own the "ML layer" - managing data pipelines, handling embeddings, evaluation frameworks, etc.
The problem? What starts as "just API calls to OpenAI" quickly evolves into complex ML problems. Take RAG applications for example. You suddenly need:
- Sophisticated document chunking strategies
- Complex data ingestion pipelines
- Real evaluation frameworks
- LLM observability solutions
These aren't typical software engineering problems. Yet in many orgs, they're landing on AI Engineers' plates because they "own the product."
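To make the first bullet concrete: even a "naive" chunker (a minimal, illustrative sketch, not from the linked post) immediately raises ML-flavored questions about token budgets, overlap, and retrieval quality:

```python
# Illustrative only: a naive fixed-budget chunker with overlap. Real pipelines
# need structure-aware splits, metadata, and retrieval evals on top of this.
def chunk(text: str, max_tokens: int = 512, overlap: int = 64) -> list[str]:
    assert overlap < max_tokens
    words = text.split()  # crude stand-in for a real tokenizer
    step = max_tokens - overlap
    return [" ".join(words[i:i + max_tokens]) for i in range(0, len(words), step)]
```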
I wrote up some detailed thoughts in a blog post that proposes a separation of these layers (linked below), but I'm curious about real-world experiences: [https://www.zenml.io/blog/ai-engineering-vs-ml-engineering-evolving-roles-genai](https://www.zenml.io/blog/ai-engineering-vs-ml-engineering-evolving-roles-genai)
1. How are your teams handling this divide?
2. Have you found success with separation between application and ML layers?
3. For those scaling GenAI products - are you seeing the need for this specialization, or is it creating unnecessary organizational complexity?
Curious to hear perspectives from both AI Engineers and ML Engineers on this. Are we over-engineering our team structures, or is this specialization inevitable as we scale? | 2025-01-21T16:54:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i6n4md/who_owns_ragagent_applications_ml_engineers_or_ai/ | htahir1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6n4md | false | null | t3_1i6n4md | /r/LocalLLaMA/comments/1i6n4md/who_owns_ragagent_applications_ml_engineers_or_ai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'cKiZjsPlcCf0R0KFK8mAJIJ1S647KvE6xGjm6PuKYAc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7KVyGhqoe6aSCQpiqH_hlHu_DLH8HlLFWOOFvwqHOEo.jpg?width=108&crop=smart&auto=webp&s=6b729f6fd7ef8b5590c8dde2bb87dfef26f4f451', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/7KVyGhqoe6aSCQpiqH_hlHu_DLH8HlLFWOOFvwqHOEo.jpg?width=216&crop=smart&auto=webp&s=c2c645eeecab46c12f3b41a3265d6c43b783b5a5', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/7KVyGhqoe6aSCQpiqH_hlHu_DLH8HlLFWOOFvwqHOEo.jpg?width=320&crop=smart&auto=webp&s=f89bd4c6ed10b83ab36948910d86ec119df7f897', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/7KVyGhqoe6aSCQpiqH_hlHu_DLH8HlLFWOOFvwqHOEo.jpg?width=640&crop=smart&auto=webp&s=db927b01fcec5457f6cc93fb7e2e42f8224020a4', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/7KVyGhqoe6aSCQpiqH_hlHu_DLH8HlLFWOOFvwqHOEo.jpg?width=960&crop=smart&auto=webp&s=aea6dde2b4ac2574ac4fdb8a60010b46bfc7684a', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/7KVyGhqoe6aSCQpiqH_hlHu_DLH8HlLFWOOFvwqHOEo.jpg?auto=webp&s=6b24a5075b622353f8c629e9803fc20651155d32', 'width': 1000}, 'variants': {}}]} |
Anyone with the list of all Nvidia ngc containers | 1 | Hello,
And their opensource upstream as a list or table?
Thanks | 2025-01-21T16:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i6n572/anyone_with_the_list_of_all_nvidia_ngc_containers/ | strus_fr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6n572 | false | null | t3_1i6n572 | /r/LocalLLaMA/comments/1i6n572/anyone_with_the_list_of_all_nvidia_ngc_containers/ | false | false | self | 1 | null |
Use SLM as evaluator | 1 | [removed] | 2025-01-21T16:56:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i6n6kj/use_slm_as_evaluator/ | Remarkable_Story_310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6n6kj | false | null | t3_1i6n6kj | /r/LocalLLaMA/comments/1i6n6kj/use_slm_as_evaluator/ | false | false | self | 1 | null |
Pretty sure OpenAI has their devs working 24/7 to not lose their throne to DeepSeek 😭 | 282 | And DeepSeek is making the same progress at a much faster pace than OpenAI is. They are definitely between a rock and a hard place | 2025-01-21T16:57:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i6n7jf/pretty_sure_openai_has_their_devs_working_247_to/ | Condomphobic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6n7jf | false | null | t3_1i6n7jf | /r/LocalLLaMA/comments/1i6n7jf/pretty_sure_openai_has_their_devs_working_247_to/ | false | false | self | 282 | null |
DeepSeek-R1 PlanBench benchmark results | 73 | 2025-01-21T16:58:38 | Wiskkey | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6n87h | false | null | t3_1i6n87h | /r/LocalLLaMA/comments/1i6n87h/deepseekr1_planbench_benchmark_results/ | false | false | 73 | {'enabled': True, 'images': [{'id': 'xlvxnw-DlnWgZ4mXptWzkgFWe3kGiC-15M2fMMhVxLE', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/qa5yh1w3odee1.jpeg?width=108&crop=smart&auto=webp&s=f19c8bf660dea3e6585de26164b626158af61520', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/qa5yh1w3odee1.jpeg?width=216&crop=smart&auto=webp&s=f090e53b92625a59fd947e7225c6598633380e3d', 'width': 216}, {'height': 138, 'url': 'https://preview.redd.it/qa5yh1w3odee1.jpeg?width=320&crop=smart&auto=webp&s=6a7335b9b542883de30cb49033f561f33d1f591e', 'width': 320}, {'height': 277, 'url': 'https://preview.redd.it/qa5yh1w3odee1.jpeg?width=640&crop=smart&auto=webp&s=d4bfe107587dff4e6f5da9478af9b4fd49c54d16', 'width': 640}], 'source': {'height': 295, 'url': 'https://preview.redd.it/qa5yh1w3odee1.jpeg?auto=webp&s=e11bf6a340566de7661e74c6f0e146df01c7674c', 'width': 680}, 'variants': {}}]} |
|||
DeepSeek R1 - Vibe Check Thread | 1 | [removed] | 2025-01-21T17:12:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i6nkaz/deepseek_r1_vibe_check_thread/ | anarchymed3s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6nkaz | false | null | t3_1i6nkaz | /r/LocalLLaMA/comments/1i6nkaz/deepseek_r1_vibe_check_thread/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ffNXCUPQerMMTV5UAIgJRS5QMtKWEhNQFfpmL7I4Bcc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=108&crop=smart&auto=webp&s=fa74f814d5c43d0d9d47c3591a9d667818ebe0c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=216&crop=smart&auto=webp&s=e3494c6906d2c95f78811be98ecf631cdeb08c13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=320&crop=smart&auto=webp&s=08f0479f19185f357e3bccc42a42f10f6fac664c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=640&crop=smart&auto=webp&s=2fdeeb9ada89c2bf4e5dc697043da66bd62cf959', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=960&crop=smart&auto=webp&s=e7b3230584c769f71759db14271d12a5f8cf831a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=1080&crop=smart&auto=webp&s=a8b11dd06cf9be6635cb9fcb2dedf71ecdd9c491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?auto=webp&s=b8bf601deac4d62d484c6fb69764f7d09d0fd168', 'width': 1200}, 'variants': {}}]} |
Testing the new Deepseek models on my Cybersecurity test | 59 | Ran the new DeepSeek models and distills through my multiple-choice cybersecurity test:
A good score requires heavy world knowledge and some reasoning.
1st - 01-preview - 95.72%
**2nd - Deepseek-R1-API - 94.06%**
\*\*\* - Meta-Llama3.1-405b-FP8 - 94.06% (Modified dual prompt to allow CoT)
3rd - Claude-3.5-October - 92.92%
4th - O1-mini - 92.87%
5th - Meta-Llama3.1-405b-FP8 - 92.64%
**\*\*\* - Deepseek-v3-api - 92.64% (Modified dual prompt to allow CoT)**
6th - GPT-4o - 92.45%
7th - Mistral-Large-123b-2411-FP16 92.40%
**8th - Deepseek-v3-api - 91.92%**
9th - GPT-4o-mini - 91.75%
\*\*\* - Sky-T1-32B-BF16 - 91.45%
\*\*\* - Qwen-QwQ-32b-AWQ - 90.74% (Modified dual prompt to allow CoT)
**10th - DeepSeek-v2.5-1210-BF16 - 90.50%**
11th - Meta-LLama3.3-70b-FP8 - 90.26%
11th - Qwen-2.5-72b-FP8 - 90.09%
13th - Meta-Llama3.1-70b-FP8 - 89.15%
**14th - DeepSeek-R1-Distill-Qwen-32B-FP16 - 89.31%**
**15th - DeepSeek-R1-Distill-Llama-70B-GGUF-Q5 - 89.07%**
16th - Phi-4-GGUF-Fixed-Q4 - 88.6%
16th - Hunyuan-Large-389b-FP8 - 88.60%
**18th - DeepSeek-R1-Distill-Qwen-32B-GGUF - 87.65%**
Fun fact not seen in the scores above: the cost to run my ~420-question test:
DeepSeek V3 without COT: 3 Cents
DeepSeek V3 with my COT: 9 Cents
DeepSeek R1: 71 Cents
O1 Mini: 196 Cents
O1 Preview: 1600 Cents
Typically a score on my test drops by 0.5% or less going from full precision to Q4,
so Qwen going down 3% seems suspicious; I wonder if there are GGUF issues.
The Llama distill scoring below Llama 3.1 and 3.3 is also a little odd.
I was running Unsloth GGUFs in vLLM
| 2025-01-21T17:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i6note/testing_the_new_deepseek_models_on_my/ | Conscious_Cut_6144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6note | false | null | t3_1i6note | /r/LocalLLaMA/comments/1i6note/testing_the_new_deepseek_models_on_my/ | false | false | self | 59 | null |
This xkcd comic is from 2017: Very much relevant today lol | 1 | 2025-01-21T17:18:45 | allisknowing | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6npse | false | null | t3_1i6npse | /r/LocalLLaMA/comments/1i6npse/this_xkcd_comic_is_from_2017_very_much_relevant/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'brF_SOWKw14SDojwc13Hiz96Lr2qbWUWrjX4tDpNEiM', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/vu7n5wvqrdee1.png?width=108&crop=smart&auto=webp&s=041d7213f5436cf2d2ee4000c2577164a1e4312a', 'width': 108}, {'height': 255, 'url': 'https://preview.redd.it/vu7n5wvqrdee1.png?width=216&crop=smart&auto=webp&s=8217b88b7f42c6b3c31c73adbde5c2821973c1d7', 'width': 216}, {'height': 378, 'url': 'https://preview.redd.it/vu7n5wvqrdee1.png?width=320&crop=smart&auto=webp&s=0009607ce1d443626ee95a71823db878bfa93e3c', 'width': 320}], 'source': {'height': 439, 'url': 'https://preview.redd.it/vu7n5wvqrdee1.png?auto=webp&s=af4a84940c18258a35e0f6ca846585b207121e86', 'width': 371}, 'variants': {}}]} |
|||
Vector Storage Optimization in RAG: What Problems Need Solving? | 1 | As part of a team researching vector storage optimization for RAG systems, we've been seeing some pretty mind-blowing results in our early experiments - the kind that initially made us double and triple-check our benchmarks because they seemed too good to be true (especially when we saw search quality improvements alongside massive storage and latency reductions).
But before we go further down this path, I'd love to hear about real-world challenges others are facing with vector databases and RAG implementations:
- At what scale do storage costs become problematic?
- What query latency would you consider a deal-breaker?
- Have you noticed search quality issues as your vector count grows?
- What would meaningful improvements look like for your use case?
We're particularly interested in understanding:
- Would dramatic reductions (90%+) in vector storage requirements be impactful for your use case?
- How much would significant query latency improvements change your application?
- How do you currently balance the tradeoff between storage efficiency, speed, and search accuracy?
Just looking to learn from others' experiences and understand what matters most in real-world applications. Your insights would be incredibly valuable for guiding research in this space.
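For a concrete sense of scale: plain binary quantization already gives a 32x (~97%) storage cut, and rescoring a small candidate set with the original vectors recovers much of the accuracy. A toy NumPy sketch (not our method, just a familiar baseline for comparison):

```python
import numpy as np

def binarize(vecs):
    """1 bit per dimension: 32x smaller than float32."""
    return np.packbits(vecs > 0, axis=-1)

def coarse_search(query_bits, db_bits, k=100):
    """Rank by Hamming distance; rescore the top-k with the original vectors."""
    dists = np.unpackbits(query_bits ^ db_bits, axis=-1).sum(axis=-1)
    return np.argsort(dists)[:k]

db = np.random.randn(10_000, 768).astype(np.float32)  # 768 floats = 3072 bytes
db_bits = binarize(db)                                # 768 bits  = 96 bytes
candidates = coarse_search(binarize(db[:1]), db_bits)
```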
Thank you! | 2025-01-21T17:22:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i6ntet/vector_storage_optimization_in_rag_what_problems/ | ItsFuckingRawwwwwww | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6ntet | false | null | t3_1i6ntet | /r/LocalLLaMA/comments/1i6ntet/vector_storage_optimization_in_rag_what_problems/ | false | false | self | 1 | null |
Suggestions needed for structuring data for my chatbot project | 1 | [removed] | 2025-01-21T17:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i6o4ub/suggestions_needed_for_structuring_data_for_my/ | Adorable_Database460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6o4ub | false | null | t3_1i6o4ub | /r/LocalLLaMA/comments/1i6o4ub/suggestions_needed_for_structuring_data_for_my/ | false | false | self | 1 | null |
How to Evaluate LLM Performance on Math QA Tasks | 2 | This was asked previously, but I didn't find any answers.
I have a model pushed to Hugging Face, and I want to check its performance (particularly on AIME and Math500 datasets). I already have the questions and their correct answers, but I'm struggling with the comparison part.
For example, one might write "5" and another "5.0" or "five". Or maybe the answer is an equation, and the LLM writes it in a different form but still correct. So, how do I handle these variations?
Are there any tools or libraries (online or offline, like something from GitHub) that can handle these variations in answer format and automate the evaluation process? I'd really appreciate your advice!
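One direction I'm considering is SymPy-based equivalence plus a normalization pass; the word map below is just a stub I wrote for illustration:

```python
import sympy

WORDS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
         "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}

def normalize(ans: str) -> str:
    ans = ans.strip().lower().rstrip(".")
    return WORDS.get(ans, ans)

def equivalent(pred: str, gold: str) -> bool:
    try:
        diff = sympy.simplify(sympy.sympify(normalize(pred)) -
                              sympy.sympify(normalize(gold)))
        return diff == 0
    except Exception:  # unparseable answers fall back to string comparison
        return normalize(pred) == normalize(gold)

assert equivalent("5.0", "five") and equivalent("x*(x+1)", "x**2 + x")
```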
Thanks in advance! | 2025-01-21T17:38:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i6o6uo/how_to_evaluate_llm_performance_on_math_qa_tasks/ | DataScientia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6o6uo | false | null | t3_1i6o6uo | /r/LocalLLaMA/comments/1i6o6uo/how_to_evaluate_llm_performance_on_math_qa_tasks/ | false | false | self | 2 | null |
Building a €5,000 Rack-Mount Server for running LLMs | 1 | [removed] | 2025-01-21T17:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i6o795/building_a_5000_rackmount_server_for_running_llms/ | Educational-Shoe8806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6o795 | false | null | t3_1i6o795 | /r/LocalLLaMA/comments/1i6o795/building_a_5000_rackmount_server_for_running_llms/ | false | false | self | 1 | null |
PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models | 2 | 2025-01-21T17:43:24 | https://arxiv.org/abs/2501.03124 | Wiskkey | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1i6obig | false | null | t3_1i6obig | /r/LocalLLaMA/comments/1i6obig/prmbench_a_finegrained_and_challenging_benchmark/ | false | false | default | 2 | null |
|
You can use R1 distill Qwen 32B in Cline using HuggingChat | 9 | 2025-01-21T17:45:34 | BestSentence4868 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6odfr | false | null | t3_1i6odfr | /r/LocalLLaMA/comments/1i6odfr/you_can_use_r1_distill_qwen_32b_in_cline_using/ | false | false | 9 | {'enabled': True, 'images': [{'id': 'S63WIgnRYdcH7KaU5eF896BpHilzMtzy1oJ5xFVa9XA', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/mx7hpubfwdee1.png?width=108&crop=smart&auto=webp&s=9d55b6eb0ac1f6e155ae61e7f0fa93e322b68184', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/mx7hpubfwdee1.png?width=216&crop=smart&auto=webp&s=f3df8fb3158db0bf79211ec01830fdc09b9f4669', 'width': 216}, {'height': 294, 'url': 'https://preview.redd.it/mx7hpubfwdee1.png?width=320&crop=smart&auto=webp&s=7a289f0ca9963efcb6d24e885500c179d1f4298b', 'width': 320}, {'height': 588, 'url': 'https://preview.redd.it/mx7hpubfwdee1.png?width=640&crop=smart&auto=webp&s=ee67e40eb7938df1ba7cd4935e4e1e7f44635e84', 'width': 640}, {'height': 882, 'url': 'https://preview.redd.it/mx7hpubfwdee1.png?width=960&crop=smart&auto=webp&s=9974e40146bb76e0880fdf42b4e5b8a4da1f5943', 'width': 960}, {'height': 992, 'url': 'https://preview.redd.it/mx7hpubfwdee1.png?width=1080&crop=smart&auto=webp&s=494d20aeba6d3599433fcb7e1827e24a96c52758', 'width': 1080}], 'source': {'height': 1312, 'url': 'https://preview.redd.it/mx7hpubfwdee1.png?auto=webp&s=63c2adeef29f68646ab37e4b16977dd567cda8fd', 'width': 1427}, 'variants': {}}]} |
|||
Anthropics co-worker is being build 2025, but updates and earlier versions already rolled out through the year. Co-worker can use different apps and collaborate, next year however probably more than just PC usage | 1 | 2025-01-21T17:48:32 | https://v.redd.it/vgyx9vk3xdee1 | Pleasant_Cap_7040 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6og2z | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vgyx9vk3xdee1/DASHPlaylist.mpd?a=1740073727%2CZWRlYzUxMDFkYTcyZDc2MWExZWVmMTdlNzM3NWVlNTViOThlN2E5NjA3NmIwMzFlYzQ2NjZmNDg0MzJkMTcwMA%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/vgyx9vk3xdee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/vgyx9vk3xdee1/HLSPlaylist.m3u8?a=1740073727%2CNDYyNGE2YWZiMTMyM2RjYzAyMjdmNGJmYzYyYTUwNWVmZWI1YzliZmE3OTg3NGQ4NGZmYTBmMzAzNWFmZTBjMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vgyx9vk3xdee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1i6og2z | /r/LocalLLaMA/comments/1i6og2z/anthropics_coworker_is_being_build_2025_but/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZTBrOTl3azN4ZGVlMTS_-KB6N8tDoUSMJAGyI9Facb5vdLD4-HIqbMwL90kk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZTBrOTl3azN4ZGVlMTS_-KB6N8tDoUSMJAGyI9Facb5vdLD4-HIqbMwL90kk.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a3dd1d2c0737e636f7bfd36626cf506f3ff8ffc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ZTBrOTl3azN4ZGVlMTS_-KB6N8tDoUSMJAGyI9Facb5vdLD4-HIqbMwL90kk.png?width=216&crop=smart&format=pjpg&auto=webp&s=5b5f1454636aff061bcba523ae3a9c44d3294b89', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ZTBrOTl3azN4ZGVlMTS_-KB6N8tDoUSMJAGyI9Facb5vdLD4-HIqbMwL90kk.png?width=320&crop=smart&format=pjpg&auto=webp&s=bd1f2f511ec46802dad2aacd4e45c28a5ed6ab06', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ZTBrOTl3azN4ZGVlMTS_-KB6N8tDoUSMJAGyI9Facb5vdLD4-HIqbMwL90kk.png?width=640&crop=smart&format=pjpg&auto=webp&s=d20792af5a67cfdfc9f0c47e885cee182c58a9a5', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZTBrOTl3azN4ZGVlMTS_-KB6N8tDoUSMJAGyI9Facb5vdLD4-HIqbMwL90kk.png?format=pjpg&auto=webp&s=961d47c793b81f050d33f88c89faa7c981470875', 'width': 720}, 'variants': {}}]} |
||
Local Llasa TTS (followup) | 27 | Hey everyone, lots of people asked about using the llasa TTS model locally. So I made a quick repo with some examples on how to run it in colab and locally with native hf transformers. It takes about 8.5gb of vram with whisper large turbo. And 6.5gb without. Runs fine on colab though
I'm not too sure how to run it with llama cpp/ollama since it requires the xcodec2 model and also very specific prompt templating. If someone knows feel free to pr.
See my first post for context
https://www.reddit.com/r/LocalLLaMA/comments/1i65c2g/a_new_tts_model_but_its_llama_in_disguise/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button | 2025-01-21T17:56:33 | https://github.com/nivibilla/local-llasa-tts | Eastwindy123 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i6on4k | false | null | t3_1i6on4k | /r/LocalLLaMA/comments/1i6on4k/local_llasa_tts_followup/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'DGUiFddrF7fPK6p_Irv5_boDCRLIxCD-XM36cdsrbz0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1TMC5wCp9v4FuG7z72DixwLUgFNuIDXDGk55piOPdlc.jpg?width=108&crop=smart&auto=webp&s=ba945ad3f13083172c2f3959394a562125d56ed8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1TMC5wCp9v4FuG7z72DixwLUgFNuIDXDGk55piOPdlc.jpg?width=216&crop=smart&auto=webp&s=16faa50da3cd67bb3f89718d67a731b3f69f18ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1TMC5wCp9v4FuG7z72DixwLUgFNuIDXDGk55piOPdlc.jpg?width=320&crop=smart&auto=webp&s=ebb0c3758f87c8c224d6c0b792fdf879e918cd3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1TMC5wCp9v4FuG7z72DixwLUgFNuIDXDGk55piOPdlc.jpg?width=640&crop=smart&auto=webp&s=2941de9d4a500f20469258b9e1920c8999cbd5b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1TMC5wCp9v4FuG7z72DixwLUgFNuIDXDGk55piOPdlc.jpg?width=960&crop=smart&auto=webp&s=30e645579e2e1df5794a3644543213ca82b30018', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1TMC5wCp9v4FuG7z72DixwLUgFNuIDXDGk55piOPdlc.jpg?width=1080&crop=smart&auto=webp&s=359a2af4a2a7abe7b3856144878bd43decaf085f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1TMC5wCp9v4FuG7z72DixwLUgFNuIDXDGk55piOPdlc.jpg?auto=webp&s=6c5d2a1603ffcbc95416ca8b0decbb731df5d6aa', 'width': 1200}, 'variants': {}}]} |
|
Deepseek-R1: 14B-Q8 vs 32B-Q4 Questions accuracy | 3 | I'm curious what to expect, as far as accuracy, between these two variants, and about this sort of comparison in general (Qwen distills in this case). Is it more reliable to go with the model with more parameters, or the one with the higher-precision quant? And in general, what can be expected? In my case my main use is coding in Python (Qwen 2.5 Coder Instruct had better results on my test than these, but there's a lot I don't understand).
Just for reference, I'm running a system with an RTX 3090 (24GB VRAM) and 64GB RAM. | 2025-01-21T17:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/1i6opeu/deepseekr1_14bq8_vs_32bq4_questions_accuracy/ | MrWeirdoFace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6opeu | false | null | t3_1i6opeu | /r/LocalLLaMA/comments/1i6opeu/deepseekr1_14bq8_vs_32bq4_questions_accuracy/ | false | false | self | 3 | null |
DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs | 19 | 2025-01-21T18:00:55 | https://www.interconnects.ai/p/deepseek-r1-recipe-for-o1 | Wiskkey | interconnects.ai | 1970-01-01T00:00:00 | 0 | {} | 1i6or6l | false | null | t3_1i6or6l | /r/LocalLLaMA/comments/1i6or6l/deepseek_r1s_recipe_to_replicate_o1_and_the/ | false | false | 19 | {'enabled': False, 'images': [{'id': '2V-py_wcBInU0D7F0tjzeWsU48ZGOyaDHjNBGlpnyus', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LHg1r-gP8U-XMLL8TTrj0r5oYUpizITo645zYPrFtJ0.jpg?width=108&crop=smart&auto=webp&s=c8c6a6569e111b3913a32169d98ddc9cbb329a1b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LHg1r-gP8U-XMLL8TTrj0r5oYUpizITo645zYPrFtJ0.jpg?width=216&crop=smart&auto=webp&s=7042d5a2ca04beaa72fd23e9dc2076530b7ceaec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LHg1r-gP8U-XMLL8TTrj0r5oYUpizITo645zYPrFtJ0.jpg?width=320&crop=smart&auto=webp&s=d2e3fb6350cbba5b0be0ab1f00cf8adbcd3f5bee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LHg1r-gP8U-XMLL8TTrj0r5oYUpizITo645zYPrFtJ0.jpg?width=640&crop=smart&auto=webp&s=3a6e3a49bd652f1c92c00a786d312100debe32b9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LHg1r-gP8U-XMLL8TTrj0r5oYUpizITo645zYPrFtJ0.jpg?width=960&crop=smart&auto=webp&s=bacee36f753ffd03558427df44d921e6e185160f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LHg1r-gP8U-XMLL8TTrj0r5oYUpizITo645zYPrFtJ0.jpg?width=1080&crop=smart&auto=webp&s=524aa0f75e2758957cac0c8e1a0a18ec3ff19d87', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LHg1r-gP8U-XMLL8TTrj0r5oYUpizITo645zYPrFtJ0.jpg?auto=webp&s=ec5274f41640eddda8022695bb81acf59d3f7822', 'width': 1200}, 'variants': {}}]} |
||
Building a €5,000 Rack-Mount Server for running LLMs | 1 | [removed] | 2025-01-21T18:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i6p048/building_a_5000_rackmount_server_for_running_llms/ | Educational-Shoe8806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6p048 | false | null | t3_1i6p048 | /r/LocalLLaMA/comments/1i6p048/building_a_5000_rackmount_server_for_running_llms/ | false | false | self | 1 | null |
Deepseek local | 0 | Due to privacy concerns, I want to use DeepSeek R1 locally instead of through their API. Or is local the only method to solve this problem? Has anyone with a cheaper macOS setup tried running it on their device? How does the full model run? | 2025-01-21T18:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i6p1he/deepseek_local/ | No_Indication4035 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6p1he | false | null | t3_1i6p1he | /r/LocalLLaMA/comments/1i6p1he/deepseek_local/ | false | false | self | 0 | null |
This xkcd comic is from 2017: Very much relevant today lol | 1 | 2025-01-21T18:16:38 | allisknowing | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6p5fw | false | null | t3_1i6p5fw | /r/LocalLLaMA/comments/1i6p5fw/this_xkcd_comic_is_from_2017_very_much_relevant/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'XkofHh5k79kiFyMHBHPKDrvmrx_cAdpBA5rBgYciuyo', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/i0luisk62eee1.png?width=108&crop=smart&auto=webp&s=75ade9bb92dcbe972e8190d54fee120b122b5fba', 'width': 108}, {'height': 255, 'url': 'https://preview.redd.it/i0luisk62eee1.png?width=216&crop=smart&auto=webp&s=727b8a3bae6c6cb98351d952b953e1b3409eaecf', 'width': 216}, {'height': 378, 'url': 'https://preview.redd.it/i0luisk62eee1.png?width=320&crop=smart&auto=webp&s=8991a849118f95bb89de6975f2ceb63bcdda0db4', 'width': 320}], 'source': {'height': 439, 'url': 'https://preview.redd.it/i0luisk62eee1.png?auto=webp&s=31e98db9b8d4295ff423c02db476d260ce8589f3', 'width': 371}, 'variants': {}}]} |
|||
I think <thinking> is overrated | 1 | [removed] | 2025-01-21T18:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i6p6nl/i_think_thinking_is_overrated/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6p6nl | false | null | t3_1i6p6nl | /r/LocalLLaMA/comments/1i6p6nl/i_think_thinking_is_overrated/ | false | false | self | 1 | null |
I think that <thinking> might be overrated | 1 | [removed] | 2025-01-21T18:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pa4l/i_think_that_thinking_might_be_overrated/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pa4l | false | null | t3_1i6pa4l | /r/LocalLLaMA/comments/1i6pa4l/i_think_that_thinking_might_be_overrated/ | false | false | self | 1 | null |
I think I broke Llama 3.2 | 1 | [removed] | 2025-01-21T18:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pb9f/i_think_i_broke_llama_32/ | Dismal_Raspberry1587 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pb9f | false | null | t3_1i6pb9f | /r/LocalLLaMA/comments/1i6pb9f/i_think_i_broke_llama_32/ | false | false | self | 1 | null |
Here are some prompts that Deepseek models refuse to answer. | 1 | [removed] | 2025-01-21T18:23:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pbbv/here_are_some_prompts_that_deepseek_models_refuse/ | UncannyRobotPodcast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pbbv | false | null | t3_1i6pbbv | /r/LocalLLaMA/comments/1i6pbbv/here_are_some_prompts_that_deepseek_models_refuse/ | false | false | self | 1 | null |
Blueprint to building your own document to podcast | 2 | 2025-01-21T18:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pdnp/blueprint_to_building_your_own_document_to_podcast/ | 110_percent_wrong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pdnp | false | null | t3_1i6pdnp | /r/LocalLLaMA/comments/1i6pdnp/blueprint_to_building_your_own_document_to_podcast/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'cGCK0hL_aR9TUMLAfTo1JtLD9uU6l0y14MF2F3WGyX8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GF8-9tjJ8A15wfxHkb8DmqZ6B_g1GWBAWJipIna7tB4.jpg?width=108&crop=smart&auto=webp&s=d63e37a4d9fb681a5ad17d2d92d2c1e126ffb51f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GF8-9tjJ8A15wfxHkb8DmqZ6B_g1GWBAWJipIna7tB4.jpg?width=216&crop=smart&auto=webp&s=5d375a23743eeceafb7b2262d10008fe2f6a6b6e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GF8-9tjJ8A15wfxHkb8DmqZ6B_g1GWBAWJipIna7tB4.jpg?width=320&crop=smart&auto=webp&s=25aa616a53b350945c12449ba46097a20748da9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GF8-9tjJ8A15wfxHkb8DmqZ6B_g1GWBAWJipIna7tB4.jpg?width=640&crop=smart&auto=webp&s=a7bfd687f5506a07cc71ef7caa839374bb8ea426', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GF8-9tjJ8A15wfxHkb8DmqZ6B_g1GWBAWJipIna7tB4.jpg?width=960&crop=smart&auto=webp&s=7112212ec798f07f8c4c3dca09fd0cb0b4e49e2c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GF8-9tjJ8A15wfxHkb8DmqZ6B_g1GWBAWJipIna7tB4.jpg?width=1080&crop=smart&auto=webp&s=4899b7b10056ceb87f4697c5b2df117ff6deae85', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GF8-9tjJ8A15wfxHkb8DmqZ6B_g1GWBAWJipIna7tB4.jpg?auto=webp&s=ca6d8653163f0b3f96c7167aa8ed5156cb942796', 'width': 1200}, 'variants': {}}]} |
||
Here are some prompts that Deepseek models refuse to answer. | 1 | [removed] | 2025-01-21T18:26:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pdsp/here_are_some_prompts_that_deepseek_models_refuse/ | UncannyRobotPodcast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pdsp | false | null | t3_1i6pdsp | /r/LocalLLaMA/comments/1i6pdsp/here_are_some_prompts_that_deepseek_models_refuse/ | false | false | self | 1 | null |
<thinking> of R1-distill and other reasoning models might be overrated | 1 | [removed] | 2025-01-21T18:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pejo/thinking_of_r1distill_and_other_reasoning_models/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pejo | false | null | t3_1i6pejo | /r/LocalLLaMA/comments/1i6pejo/thinking_of_r1distill_and_other_reasoning_models/ | false | false | self | 1 | null |
Here are some prompts that Deepseek models refuse to answer. | 1 | [removed] | 2025-01-21T18:28:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pfrb/here_are_some_prompts_that_deepseek_models_refuse/ | UncannyRobotPodcast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pfrb | false | null | t3_1i6pfrb | /r/LocalLLaMA/comments/1i6pfrb/here_are_some_prompts_that_deepseek_models_refuse/ | false | false | self | 1 | null |
I think thinking is overrated | 1 | [removed] | 2025-01-21T18:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pfy0/i_think_thinking_is_overrated/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pfy0 | false | null | t3_1i6pfy0 | /r/LocalLLaMA/comments/1i6pfy0/i_think_thinking_is_overrated/ | false | false | self | 1 | null |
RTX 5090 or a Mac Mini? | 2 | This is going to sound ridiculous and silly, but I’m a final-year comp sci student currently taking a course in NLP. I’ve been wanting to work with LLMs, maybe train my own, and generally explore AI-related work (especially NLP). Honestly, I don’t know a ton about the field yet, but I want to treat myself to something nice, so I apologize in advance for being clueless.
After converting to CAD, the RTX 5090 and the M4 Pro Mac Mini (64GB RAM) both come out to about $3000. I understand the Mac Mini would be better in terms of value for money (64GB VRAM) and power efficiency, but I’m wondering: what kind of work could I do with the 5090 that I couldn’t do with the Mac Mini?
Here’s what I have at home already:
* Ryzen 5800X3D
* 32GB RAM
* RTX 3080
What can I realistically run on this rig? Would I be able to fine-tune or experiment with smaller LLMs on it?
Here are the pros and cons I’ve been thinking about:
# RTX 5090
Pros:
* Can run any AI workload that relies on CUDA.
* Gaming on my fat 49" OLED ultrawide.
Cons:
* $3000… for just a GPU.
* Gaming might distract me from my work.
# M4 Pro Mac Mini (64GB RAM)
Pros:
* $3000 gets me a 12-core CPU, 16-core GPU, and 64GB of unified memory.
* Great power efficiency.
* Incredible value for money.
* Can separate work and gaming.
Cons:
* No CUDA support, so anything requiring CUDA is off the table.
* No gaming :(
# The Big Question:
What kind of AI/NLP work could I do with the Mac Mini that makes it worth it? Would I miss CUDA too much for experimenting with LLMs, fine-tuning, or running AI models? Also, what models could I reasonably run on my current rig (RTX 3080, 32GB RAM)?
I’m stuck and could use some advice from people who’ve worked with either of these setups. Thanks in advance for putting up with this post. | 2025-01-21T18:29:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pggw/rtx_5090_or_a_mac_mini/ | kaoriyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pggw | false | null | t3_1i6pggw | /r/LocalLLaMA/comments/1i6pggw/rtx_5090_or_a_mac_mini/ | false | false | self | 2 | null |
Spanish government releases some official models | 128 | The Spanish government has funded the training of official and public LLMs, mainly trained on Spanish and Spain's co-official languages.
**Main page**: [https://alia.gob.es/](https://alia.gob.es/)
**Huggingface models**: [https://huggingface.co/BSC-LT](https://huggingface.co/BSC-LT)
The main released models are:
* **Alia 40b** (base model still in training; an intermediate checkpoint has been published, and an instruct version will be released in the future)
* **Salamandra 2b/7b** (base and instruct available)
The main model has been trained on the Spanish MareNostrum 5 supercomputer with a total of 2,048 GPUs (H100, 64 GB). The models are all Apache 2.0 licensed, and most datasets have been published as well. They are mainly trained on European languages.
Also some translation models have been published:
* **Salamandra TA 2b:** translation between 30 main European languages directly
* **Plume 256k, 128k, 32k**: fine-tunes of Gemma 2 models for translation between Spain's languages.
* **Aina models**: a set of one-to-one models for translation between Spain's languages.
The Alia 40b is the latest release and the most important one, although for the moment the results we are seeing during testing are pretty bad. I will write a post about it soon. | 2025-01-21T18:40:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i6pra7/spanish_government_releases_some_official_models/ | xdoso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6pra7 | false | null | t3_1i6pra7 | /r/LocalLLaMA/comments/1i6pra7/spanish_government_releases_some_official_models/ | false | false | self | 128 | null |
Is there any LLM that supports both image **and** tool use? | 1 | I've been searching online for the past few days for an LLM that supports both image processing and tool use for a work project. However, I haven't been able to find one that offers both capabilities; it's always either one or the other.
Do any of you know of a model that supports both functionalities? I find it odd that none seem to, and I suspect there's a technical reason behind it that I'm not aware of.
At this point, I'm tempted to just use a vision model and handle the tool use manually, like in the good old days.
| 2025-01-21T18:58:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i6q6xq/is_there_any_llm_that_supports_both_image_and/ | Realistic_Recover_40 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6q6xq | false | null | t3_1i6q6xq | /r/LocalLLaMA/comments/1i6q6xq/is_there_any_llm_that_supports_both_image_and/ | false | false | self | 1 | null |
kluster.ai now hosts deepseek R1 | 16 | Saw on their Twitter account and tried it out on their platform. | 2025-01-21T19:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i6qar0/klusterai_now_hosts_deepseek_r1/ | swarmster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6qar0 | false | null | t3_1i6qar0 | /r/LocalLLaMA/comments/1i6qar0/klusterai_now_hosts_deepseek_r1/ | false | false | self | 16 | null |
Spanish Alia model has been trained with porn and ads content | 2 | 2025-01-21T19:06:50 | https://www.reddit.com/gallery/1i6qecq | xdoso | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i6qecq | false | null | t3_1i6qecq | /r/LocalLLaMA/comments/1i6qecq/spanish_alia_model_has_been_trained_with_porn_and/ | false | false | nsfw | 2 | null |
|
Which LLM to choose? | 3 | Is there any type of website that tells you what llm can run locally based on your hardware?
| 2025-01-21T19:15:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i6qm74/which_llm_to_choose/ | op4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6qm74 | false | null | t3_1i6qm74 | /r/LocalLLaMA/comments/1i6qm74/which_llm_to_choose/ | false | false | self | 3 | null |
Google Titans Models | 18 | When can we reasonably expect to see some pretrained Google Titans models released as open source? Everyone just keeps mentioning the paper without more details, from what I could find, and even days later o3 seemingly makes it old news. It would be incredible to have as a personalized 8B daily driver.
Might be an obvious question, sorry I'm a newbie. | 2025-01-21T19:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i6qq54/google_titans_models/ | Xotsu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6qq54 | false | null | t3_1i6qq54 | /r/LocalLLaMA/comments/1i6qq54/google_titans_models/ | false | false | self | 18 | null |
XTTS V2 is Dead right? | 1 | [removed] | 2025-01-21T19:25:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i6qukz/xtts_v2_is_dead_right/ | whoshriyansh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6qukz | false | null | t3_1i6qukz | /r/LocalLLaMA/comments/1i6qukz/xtts_v2_is_dead_right/ | false | false | self | 1 | null |
Help with "Could not parse response" and TypeError in browser_use + langchain_ollama | 1 | [removed] | 2025-01-21T19:28:52 | https://www.reddit.com/r/LocalLLaMA/comments/1i6qxeu/help_with_could_not_parse_response_and_typeerror/ | bozkurt81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6qxeu | false | null | t3_1i6qxeu | /r/LocalLLaMA/comments/1i6qxeu/help_with_could_not_parse_response_and_typeerror/ | false | false | self | 1 | null |
DeepSeek in k8s | 1 | [removed] | 2025-01-21T19:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i6rl8z/deepseek_in_k8s/ | No-Emphasis6569 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6rl8z | false | null | t3_1i6rl8z | /r/LocalLLaMA/comments/1i6rl8z/deepseek_in_k8s/ | false | false | self | 1 | null |
Where tf did you find data for Mistral's dataset😭 | 0 | 2025-01-21T20:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i6rqdn/where_tf_did_you_find_data_for_mistrals_dataset/ | Standard_Yellow_171 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6rqdn | false | null | t3_1i6rqdn | /r/LocalLLaMA/comments/1i6rqdn/where_tf_did_you_find_data_for_mistrals_dataset/ | false | false | 0 | null |
||
Anybody here uses llama simply to learn more about LLMs in general? | 7 | The company I work at sells AI agents that do cold calls. I found that having my own LLM locally and running all sorts of tests on it taught me much more than reading articles or watching YouTube (not that those two aren't good as well). Even if I'm not using my llama in a serious project yet, it did make me better at using LLMs
The areas where I felt I got better were prompt engineering, which I think is a bit of an overrated term, and RAG, which I found to be a **huge** field of study. I also got a better understanding of the context window. A few weeks ago I had the classic ChatGPT-user mentality of "my prompts rarely have 1,000 tokens, what do I need 128K tokens for?", which now sounds laughable
What about you guys? | 2025-01-21T20:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i6rrbl/anybody_here_uses_llama_simply_to_learn_more/ | Blender-Fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6rrbl | false | null | t3_1i6rrbl | /r/LocalLLaMA/comments/1i6rrbl/anybody_here_uses_llama_simply_to_learn_more/ | false | false | self | 7 | null |
Propagandist model | 0 | Made me chuckle. Don't tell me they didn't tune it, just what the Party would say ...
https://preview.redd.it/87k7864gleee1.png?width=922&format=png&auto=webp&s=70326d0f6da89b42927cba186079ea2f4a6fe0c4
| 2025-01-21T20:06:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i6ruv3/propagandist_model/ | Otherwise-Tiger3359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6ruv3 | false | null | t3_1i6ruv3 | /r/LocalLLaMA/comments/1i6ruv3/propagandist_model/ | false | false | 0 | null |
|
Can anyone recommend a good Bot to get a 5090 on launch day? | 0 | With supply shortages being confirmed by multiple news sites, and the cost of a GPU going up soon due to tariffs, can anyone recommend a good bot to rent/buy to get a 5090 at MSRP?
[https://wccftech.com/nvidia-geforce-rtx-50-series-gpus-facing-shortages-ahead-of-launch-rtx-5090-5080-price-surge/](https://wccftech.com/nvidia-geforce-rtx-50-series-gpus-facing-shortages-ahead-of-launch-rtx-5090-5080-price-surge/) | 2025-01-21T20:12:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i6s0be/can_anyone_recommend_a_good_bot_to_get_a_5090_on/ | part46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6s0be | false | null | t3_1i6s0be | /r/LocalLLaMA/comments/1i6s0be/can_anyone_recommend_a_good_bot_to_get_a_5090_on/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'pMFwnc-e7YIETsAs3AYsjcywklqE-JPEX6ywMDeRAmI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/1fimwLgR8HiF7ZE-l1LgRJBv5KgBDRXuMVKddTKEi8M.jpg?width=108&crop=smart&auto=webp&s=65213fbec7227a286e24119bffa595f8b4dc5486', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/1fimwLgR8HiF7ZE-l1LgRJBv5KgBDRXuMVKddTKEi8M.jpg?width=216&crop=smart&auto=webp&s=11d717f24a5678105157ef93ab7d7594eed3f0a4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/1fimwLgR8HiF7ZE-l1LgRJBv5KgBDRXuMVKddTKEi8M.jpg?width=320&crop=smart&auto=webp&s=bc3b77f81afea9cdf5ff7b9f44c992c1dc68a82a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/1fimwLgR8HiF7ZE-l1LgRJBv5KgBDRXuMVKddTKEi8M.jpg?width=640&crop=smart&auto=webp&s=42971d64ec5886158fd2ed47e40c232f336a7560', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/1fimwLgR8HiF7ZE-l1LgRJBv5KgBDRXuMVKddTKEi8M.jpg?width=960&crop=smart&auto=webp&s=66ce88122d5bce36e7964d725f0844e07fd53265', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/1fimwLgR8HiF7ZE-l1LgRJBv5KgBDRXuMVKddTKEi8M.jpg?width=1080&crop=smart&auto=webp&s=279802f23287df159368fd8a111e772d203cb796', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/1fimwLgR8HiF7ZE-l1LgRJBv5KgBDRXuMVKddTKEi8M.jpg?auto=webp&s=7eacd3e7ee35b31dfe619c11e65877aa1d7103ae', 'width': 3840}, 'variants': {}}]} |
Longer Context RP alternatives? | 3 | Hey folks,
I've used Midnight Miqu 70B and 103B a lot and loved them, but I often hit the 32K context limit sooner than I'd like. I tried using Monstral and Luminum up at 123B via RunPod with 3 A40s to get 80K+ context out of them (they go to 128K, but that config fits comfortably with about 80-90K context at 8bpw).
Basically I'd like something along the lines of Midnight Miqu, preferably something I could still fit on either 2 or 3 A40s, but with more context length. Is there anything comparable or better that might get the context to 64K without having to jump all the way to much slower, huge 120B+ models, and that still feels like Midnight Miqu in quality?
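For reference, the rough KV-cache math I use when sizing context, assuming an FP16 cache and Llama-2-70B-like dimensions (80 layers, 8 KV heads, head dim 128; treat all of these as assumptions):

```python
def kv_cache_gb(tokens, layers=80, kv_heads=8, head_dim=128, bytes_per=2):
    # Factor of 2 is for keys and values
    return 2 * layers * kv_heads * head_dim * bytes_per * tokens / 1e9

print(kv_cache_gb(32_768))  # ~10.7 GB
print(kv_cache_gb(65_536))  # ~21.5 GB, on top of the weights
```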
Thanks in advance! | 2025-01-21T20:17:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i6s4tm/longer_context_rp_alternatives/ | Traslogan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6s4tm | false | null | t3_1i6s4tm | /r/LocalLLaMA/comments/1i6s4tm/longer_context_rp_alternatives/ | false | false | self | 3 | null |
Help in deciding mac mini for running models | 1 | [removed] | 2025-01-21T20:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i6s5kx/help_in_deciding_mac_mini_for_running_models/ | saebaryo007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6s5kx | false | null | t3_1i6s5kx | /r/LocalLLaMA/comments/1i6s5kx/help_in_deciding_mac_mini_for_running_models/ | false | false | self | 1 | null |
Best setup for running a local LLM for secure business use? | 7 | Hi everyone!
I'm looking to integrate a local LLM into my business so that my team can use it without the risk of accidentally sharing sensitive data with external systems. The goal is to provide a "good" LLM that we can use safely while also having the ability to train it or finetune it with our own sensitive data.
I was considering models like **Qwen-2.5** or something similar, but I'm open to other suggestions if anyone has experience with better options. What would be the best hardware and software setup for this? I'm trying to find something manageable for a small-to-medium business, so balancing performance and cost is key.
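For context, the pattern I'm picturing is an OpenAI-compatible server running inside our network (vLLM, llama.cpp's server, and Ollama all expose one), so standard clients keep working; the hostname and model name below are hypothetical:

```python
from openai import OpenAI

# All traffic stays inside the company network; no external API key involved
client = OpenAI(base_url="http://llm.internal.example:8000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="qwen2.5-72b-instruct",  # whatever is deployed on the server
    messages=[{"role": "user", "content": "Summarize this contract clause: ..."}],
)
print(resp.choices[0].message.content)
```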
Would love to hear your recommendations or advice!
Thanks in advance! | 2025-01-21T20:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i6s9ts/best_setup_for_running_a_local_llm_for_secure/ | cocodirasta3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6s9ts | false | null | t3_1i6s9ts | /r/LocalLLaMA/comments/1i6s9ts/best_setup_for_running_a_local_llm_for_secure/ | false | false | self | 7 | null |
I find Chain of Thought of LLMs really fascinating | 0 | 2025-01-21T20:41:11 | https://www.reddit.com/gallery/1i6spae | Curious_Cantaloupe65 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i6spae | false | null | t3_1i6spae | /r/LocalLLaMA/comments/1i6spae/i_find_chain_of_thought_of_llms_really_fascinating/ | false | false | 0 | null |
||
What is a distilled model? | 7 | I’m seeing a lot of posts about people using deepseek R1 but distilled with other models. What does that mean? And is there a “better” distilled deepseek r1 model than others? Is a pure deepseek r1 model the best? | 2025-01-21T20:41:55 | https://www.reddit.com/r/LocalLLaMA/comments/1i6spxn/what_is_a_distilled_model/ | stereotypical_CS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6spxn | false | null | t3_1i6spxn | /r/LocalLLaMA/comments/1i6spxn/what_is_a_distilled_model/ | false | false | self | 7 | null |
"No-brainer technique" gets the strawberry problem done with r1 distill | 0 | I was experimenting with with the new deepseek distilled model and had the idea to stop them somehow from overthinking. I tried multiple things like prompting not to overthink or even edit inside of the <think> tags.
Long story short, a technique I call the "no-brainer" technique solves the strawberry test in like 90% of the cases:
https://preview.redd.it/36992td1teee1.png?width=1430&format=png&auto=webp&s=d902d71c3566029338b6bafa921a3a32af7a8ecb
The trick is to let the model first write the answer and then remove everything between the thinking tags (it must be the real, generated tags, not you just typing <thinking>).
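Here's a minimal sketch of the first pass plus the emptying step, assuming an OpenAI-compatible local server (the URL and model name are placeholders):

```python
import re
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # assumed local server
MODEL = "deepseek-r1-distill-qwen-32b"  # placeholder name

msgs = [{"role": "user", "content": 'How many times does "r" appear in "strawberry"?'}]
first = client.chat.completions.create(model=MODEL, messages=msgs)
text = first.choices[0].message.content

# Empty the *generated* <think> block and drop everything after it, so the
# assistant turn ends right after the closing tag
prefix = re.sub(r"<think>.*", "<think>\n\n</think>\n", text, flags=re.S)
msgs.append({"role": "assistant", "content": prefix})
# Second pass: continue the prefilled assistant message from here (requires a
# backend that supports continuing a prefilled assistant turn; not all do)
```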
Then you let the model continue after the emptied thinking block. | 2025-01-21T20:50:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i6sxlr/nobrainer_technique_gets_the_strawberry_problem/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6sxlr | false | null | t3_1i6sxlr | /r/LocalLLaMA/comments/1i6sxlr/nobrainer_technique_gets_the_strawberry_problem/ | false | false | 0 | null |
|
DeepSeek-R1-Distill-Qwen-1.5B running 100% locally in-browser on WebGPU. Reportedly outperforms GPT-4o and Claude-3.5-Sonnet on math benchmarks (28.9% on AIME and 83.9% on MATH). | 181 | 2025-01-21T20:53:47 | https://v.redd.it/5ei4j3c9teee1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6t08q | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5ei4j3c9teee1/DASHPlaylist.mpd?a=1740084847%2COWE2Yzc4Y2VkODRkNmEzODU1MTVkNjZhNDViMmMzZDJjNDc3MDgxNmZjNTgyOTdmMzNmNzZiOWZiMmFkMTliOQ%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/5ei4j3c9teee1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/5ei4j3c9teee1/HLSPlaylist.m3u8?a=1740084847%2CODJmZWQxZjRmZjU2NmY4MDIwYjE1Y2YxNjM2ZjY3YWFmNzM5NzZmNGM2OGRhZjllZTg3NzZiMmRhYjE0NmRiNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5ei4j3c9teee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1i6t08q | /r/LocalLLaMA/comments/1i6t08q/deepseekr1distillqwen15b_running_100_locally/ | false | false | 181 | {'enabled': False, 'images': [{'id': 'bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE.png?width=108&crop=smart&format=pjpg&auto=webp&s=54478ffc108a6f5d3c8065a90c5653ede4d1ab79', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE.png?width=216&crop=smart&format=pjpg&auto=webp&s=9e4a0d7d0d2613d72cfe29d737702f2fd5f0a11e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE.png?width=320&crop=smart&format=pjpg&auto=webp&s=24f1e0a4fe7144b4e825cac1cb5871a6ac20023a', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE.png?width=640&crop=smart&format=pjpg&auto=webp&s=99a8a896e73c6ca0c04140ceb9ca3938ec6405d0', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE.png?width=960&crop=smart&format=pjpg&auto=webp&s=90c42d405fb5d8b99ade6444ea91a72be741c15a', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=62598a953fc4f65f671c01488751243d5434127b', 'width': 1080}], 'source': {'height': 1398, 'url': 'https://external-preview.redd.it/bHl5MDU0Yzl0ZWVlMQAQ0j2wFUvXTQrT52Nv81bl04kSZ1X_57NkDQOUMylE.png?format=pjpg&auto=webp&s=06f8e16f6c368f2fa9b4b399634543b2112a3d74', 'width': 1398}, 'variants': {}}]} |
||
every time | 1 | 2025-01-21T20:55:06 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6t1bd | false | null | t3_1i6t1bd | /r/LocalLLaMA/comments/1i6t1bd/every_time/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'KqaHhzRIMNRAoaUhldV8dErei8ZKfJtwP4q8QSkTEfM', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/sk2e4apgueee1.jpeg?width=108&crop=smart&auto=webp&s=a8621b1ad85f278e5c9fc6a85e684b45e7d156cc', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/sk2e4apgueee1.jpeg?width=216&crop=smart&auto=webp&s=f1d8fe608900fec43157d2b080676dcae65e8867', 'width': 216}, {'height': 377, 'url': 'https://preview.redd.it/sk2e4apgueee1.jpeg?width=320&crop=smart&auto=webp&s=63eb89d61d433855234d02475980b1385e98b5b3', 'width': 320}, {'height': 755, 'url': 'https://preview.redd.it/sk2e4apgueee1.jpeg?width=640&crop=smart&auto=webp&s=111295f7201ee0dac814404caabf73280b015077', 'width': 640}, {'height': 1133, 'url': 'https://preview.redd.it/sk2e4apgueee1.jpeg?width=960&crop=smart&auto=webp&s=c217c5c2706b4cf4c015f0d8276648bdbf9b7d80', 'width': 960}, {'height': 1274, 'url': 'https://preview.redd.it/sk2e4apgueee1.jpeg?width=1080&crop=smart&auto=webp&s=c58df4c33133519b5efd5d637f3e31befcf0a5df', 'width': 1080}], 'source': {'height': 1302, 'url': 'https://preview.redd.it/sk2e4apgueee1.jpeg?auto=webp&s=a00f00ba3e629fe3e0b9dadd178ac4367d2f2e38', 'width': 1103}, 'variants': {}}]} |
|||
every time | 1 | 2025-01-21T21:00:02 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6t5o0 | false | null | t3_1i6t5o0 | /r/LocalLLaMA/comments/1i6t5o0/every_time/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'iuZg00Fx5uFr2TchKCF8_k0BNQZLeiunwBa9pJtu3pc', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/i0p51hdcveee1.jpeg?width=108&crop=smart&auto=webp&s=1b5c8f7606ca50ef5b6d58c369902223656c094e', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/i0p51hdcveee1.jpeg?width=216&crop=smart&auto=webp&s=d942c5905509868d7c7af21fb92c64438d0aa17a', 'width': 216}, {'height': 377, 'url': 'https://preview.redd.it/i0p51hdcveee1.jpeg?width=320&crop=smart&auto=webp&s=ac2b9636340692b825965254036f8dfbd6e86b29', 'width': 320}, {'height': 755, 'url': 'https://preview.redd.it/i0p51hdcveee1.jpeg?width=640&crop=smart&auto=webp&s=22048bce05c8c02638cc827266716ea1e7315d12', 'width': 640}, {'height': 1133, 'url': 'https://preview.redd.it/i0p51hdcveee1.jpeg?width=960&crop=smart&auto=webp&s=5014c28b6b95322e811a1db0b16ea656bfda2703', 'width': 960}, {'height': 1274, 'url': 'https://preview.redd.it/i0p51hdcveee1.jpeg?width=1080&crop=smart&auto=webp&s=fbb6d67ca5bd9a56bcf36acd77255392a7d9fc89', 'width': 1080}], 'source': {'height': 1302, 'url': 'https://preview.redd.it/i0p51hdcveee1.jpeg?auto=webp&s=5aca9b3decec3905e1b3c17262fd8e69d1e7fd0c', 'width': 1103}, 'variants': {}}]} |
|||
Every time China releases an open source model ! | 1 | 2025-01-21T21:03:26 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6t8uh | false | null | t3_1i6t8uh | /r/LocalLLaMA/comments/1i6t8uh/every_time_china_releases_an_open_source_model/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CV8wpTzH6dYKCgCeKkpdSZdV1lfW0E8gf--q8nyrjGk', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/yldhod7yveee1.jpeg?width=108&crop=smart&auto=webp&s=384ac9edb9629f41dc9bad1c9e7f63f1103d759f', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/yldhod7yveee1.jpeg?width=216&crop=smart&auto=webp&s=8158a9a7c5c82bb374488ee1e33495f9300ebf98', 'width': 216}, {'height': 377, 'url': 'https://preview.redd.it/yldhod7yveee1.jpeg?width=320&crop=smart&auto=webp&s=fb65506789fe9d14974cea739649f7803c73c7fa', 'width': 320}, {'height': 755, 'url': 'https://preview.redd.it/yldhod7yveee1.jpeg?width=640&crop=smart&auto=webp&s=7947954e0bb5b59e2b5f80aba96fa49f31931d16', 'width': 640}, {'height': 1133, 'url': 'https://preview.redd.it/yldhod7yveee1.jpeg?width=960&crop=smart&auto=webp&s=cb21fa40e01fb2eef5934d94fc0e63d17f30d608', 'width': 960}, {'height': 1274, 'url': 'https://preview.redd.it/yldhod7yveee1.jpeg?width=1080&crop=smart&auto=webp&s=b9d0d246b3ca48cf161ed56f87df36dc2cb883f4', 'width': 1080}], 'source': {'height': 1302, 'url': 'https://preview.redd.it/yldhod7yveee1.jpeg?auto=webp&s=f46f40e2d8f84f326ad4c79d7f97566c0123f3d8', 'width': 1103}, 'variants': {}}]} |
|||
Cursed data: the reason models are adamant there are 2 R's in "strawberry" | 0 | https://i.imgur.com/JXu6Ufz.png
6 days ago /u/Mr_Jericho posted ["DeepSeek is overthinking"](https://www.reddit.com/r/LocalLLaMA/comments/1i27l37/deepseek_is_overthinking/). A [comment chain](https://www.reddit.com/r/LocalLLaMA/comments/1i27l37/comment/m7dbpwf/?context=10000) discusses ideas why.
10 hours ago /u/WarlaxZ posted a [log](https://www.reddit.com/r/LocalLLaMA/comments/1i6fxxy/literally_unusable/m8bv0ue/) from DeepSeek-R1-Distill-Qwen-32B. **The phonetic/IPA spellings contain 2 R's** though I don't know where it got "strawb ery" since under American phonetic it says "[ straw-ber-ee, -buh-ree ]". | 2025-01-21T21:31:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i6txak/cursed_data_the_reason_models_are_adamant_there/ | nananashi3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6txak | false | null | t3_1i6txak | /r/LocalLLaMA/comments/1i6txak/cursed_data_the_reason_models_are_adamant_there/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Q6l6O-y3FevVAfj8H80qhKEC3XMIXMNpnaCNbVKwkZg', 'resolutions': [{'height': 133, 'url': 'https://external-preview.redd.it/8CBxU4i5106cv1c7sWYihmF6mawG9mdIwdLT7djZvtY.png?width=108&crop=smart&auto=webp&s=6f068dd724b64390a796c80c131ba76682168a42', 'width': 108}, {'height': 267, 'url': 'https://external-preview.redd.it/8CBxU4i5106cv1c7sWYihmF6mawG9mdIwdLT7djZvtY.png?width=216&crop=smart&auto=webp&s=e93dd3c2993a78beffe70b1ccf87b2f7e2291d77', 'width': 216}, {'height': 396, 'url': 'https://external-preview.redd.it/8CBxU4i5106cv1c7sWYihmF6mawG9mdIwdLT7djZvtY.png?width=320&crop=smart&auto=webp&s=0e2413e9dfc59a2c88e6c19fa53b0dda18706f63', 'width': 320}, {'height': 793, 'url': 'https://external-preview.redd.it/8CBxU4i5106cv1c7sWYihmF6mawG9mdIwdLT7djZvtY.png?width=640&crop=smart&auto=webp&s=b24612d06b2ac03b44ec221226bd88032d51c9bf', 'width': 640}], 'source': {'height': 1118, 'url': 'https://external-preview.redd.it/8CBxU4i5106cv1c7sWYihmF6mawG9mdIwdLT7djZvtY.png?auto=webp&s=9dfabac9085f901c85d1553344ff36dbbd2c8f62', 'width': 902}, 'variants': {}}]} |
Are local LLMs the future of AI? | 3 | It seems like running LLMs locally is more of a hobby right now, and the big cloud LLMs get all the attention. But with the cloud LLMs, I've noticed there's a direct conflict between business interests and how clouds operate that's preventing the true potential of LLMs:
* Businesses do not want to risk any leak of their internal IP (such as source code) to the cloud.
* But for an LLM to be truly useful, it needs highly specific local knowledge. The cloud LLMs have been trying to allow this by increasing context window length, but it will never be enough.

This has led to disappointing results when I try to use LLMs at work. For example, I don't care that Microsoft has trained Copilot on the entire public Github. I care about an LLM helping me with my specific project at work, which would require the LLM to have full knowledge of the project. And it's too big to just add the entire project source code into the context window.
To reach the true potential of LLMs, businesses will have to have their own LLM. Something like this:
* They would start with a base model like Llama and have a nightly process to fine-tune it on all the company's internal IP (such as source code and documents). They could either buy GPU hardware to do this or rent GPU time from the cloud (a rough sketch of this step follows after this list).
* Employees could use this LLM without any fear of leaking data since the LLM is fully internal to the company. The LLM would also be able to provide highly specific and helpful results because it would know everything about the company's internal IP.
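A rough sketch of what that nightly fine-tuning job could look like, assuming a LoRA-style run with Hugging Face transformers + peft; the base model name and data path are placeholders, not recommendations:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-3.1-8B"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(base),
    LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
               target_modules=["q_proj", "v_proj"]),
)

# internal_corpus.jsonl: the nightly export of source code/docs as {"text": ...}
data = load_dataset("json", data_files="internal_corpus.jsonl")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=2048),
                batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments("nightly-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```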
Anyone else here think this might be the future? GPU costs are too high right now, but that will decrease over time. Once that happens, it seems the only remaining step is for someone to package this up into a product that IT departments can handle deploying and maintaining. | 2025-01-21T21:40:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i6u5cy/are_local_llms_the_future_of_ai/ | AreaOfEffect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6u5cy | false | null | t3_1i6u5cy | /r/LocalLLaMA/comments/1i6u5cy/are_local_llms_the_future_of_ai/ | false | false | self | 3 | null
GPU pools | 5 | Will there ever be (or do there already exist?) pools where people can share their computing power?

Could they be used for both inference and fine-tuning, or even pretraining?
What do you think?
There could be some kind of reward (crypto?) for whoever shares, based on the compute they contribute.

It could also be fun to "invest" computational power with no economic return, in exchange for another kind of payoff: cool, new, totally community-driven AI models (not just LLMs).

Don't know, just wondering and wanting to chat about it :) | 2025-01-21T21:55:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i6uhy5/gpus_pools/ | cri10095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6uhy5 | false | null | t3_1i6uhy5 | /r/LocalLLaMA/comments/1i6uhy5/gpus_pools/ | false | false | self | 5 | null
Billions in proprietary AI? No more. | 346 | OpenAI raised billions on the promise of having, and securing behind thick doors, something no one else is even remotely close to. The following tweet from just a few days before the R1 release made me think they really had an atomic bomb the world would kneel before:
https://preview.redd.it/25dv42dl1fee1.png?width=1209&format=png&auto=webp&s=bcfafe3260c7a5257502850b940aa24a2f5f7ecd
The truth is, they have nothing. o1-level, some say human-level reasoning is reproducible, can be privately hosted by anyone, anywhere. Can't be greedily priced.
MIT-licensed open models are the future of AI. Zero dollars is the only right price for something built on all human knowledge. It is the summed effort of a whole civilisation, spanning many generations. Just imagine: any book that landed in the pretraining dataset influences the whole model. There is no better way to honor an author who contributed to the model's performance, knowingly or not, than to make a tool that helps create new knowledge available to anyone, at no cost.
|
R1 is mind blowing | 655 | Gave it a problem from my graph theory course that’s reasonably nuanced. 4o gave me the wrong answer twice, but did manage to produce the correct answer once. R1 managed to get this problem right in one shot, and also held up under pressure when I asked it to justify its answer. It also gave a great explanation that showed it really understood the nuance of the problem. I feel pretty confident in saying that AI is smarter than me. Not just closed, flagship models, but smaller models that I could run on my MacBook are probably smarter than me at this point. | 2025-01-21T22:10:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i6uviy/r1_is_mind_blowing/ | Not-The-Dark-Lord-7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6uviy | false | null | t3_1i6uviy | /r/LocalLLaMA/comments/1i6uviy/r1_is_mind_blowing/ | false | false | self | 655 | null |
Logical Regression | 1 | [removed] | 2025-01-21T22:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i6v0n1/logical_regression/ | Diader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6v0n1 | false | null | t3_1i6v0n1 | /r/LocalLLaMA/comments/1i6v0n1/logical_regression/ | false | false | 1 | null |
|
He cant keep getting away with this! | 16 | 2025-01-21T22:29:31 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6vbfg | false | null | t3_1i6vbfg | /r/LocalLLaMA/comments/1i6vbfg/he_cant_keep_getting_away_with_this/ | false | false | 16 | {'enabled': True, 'images': [{'id': 'akEnwrYBSH0kKwP32-n7jcVZa1VSvVw00LRUln4nwCA', 'resolutions': [{'height': 185, 'url': 'https://preview.redd.it/dw296ikabfee1.png?width=108&crop=smart&auto=webp&s=d4051ebb3cf964fd05dabba194bca0efd410dcdf', 'width': 108}, {'height': 371, 'url': 'https://preview.redd.it/dw296ikabfee1.png?width=216&crop=smart&auto=webp&s=b2323f0860797a1531632375fa3bef5c8ed6eb85', 'width': 216}, {'height': 550, 'url': 'https://preview.redd.it/dw296ikabfee1.png?width=320&crop=smart&auto=webp&s=693051833d561fdc9ca781f6e11325654dbbc2ad', 'width': 320}], 'source': {'height': 1008, 'url': 'https://preview.redd.it/dw296ikabfee1.png?auto=webp&s=87cef2d57cdf5dda8e030fc26d90d71eb7e161ec', 'width': 586}, 'variants': {}}]} |
|||
Gemini Thinking experimental 01-21 is out! | 74 | 2025-01-21T22:37:02 | Salty-Garage7777 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6vhzy | false | null | t3_1i6vhzy | /r/LocalLLaMA/comments/1i6vhzy/gemini_thinking_experimental_0121_is_out/ | false | false | 74 | {'enabled': True, 'images': [{'id': '6fqEUZepV0bPFZhSi0kmwPbw5Nn6D_E47ooOtkXzPVk', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/lizc4v8ncfee1.jpeg?width=108&crop=smart&auto=webp&s=2d8acfc6fa375b5e67a0edad421f2d582f56c27d', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/lizc4v8ncfee1.jpeg?width=216&crop=smart&auto=webp&s=be59b73ba55603f2145cc06dc03533b3c1f1cc19', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/lizc4v8ncfee1.jpeg?width=320&crop=smart&auto=webp&s=bdecf76a8a1f7a78d6701764bba7f7b9fe0222aa', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/lizc4v8ncfee1.jpeg?width=640&crop=smart&auto=webp&s=7d15911a5fea2f0b9809f3de42a8a8e34ce0a319', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/lizc4v8ncfee1.jpeg?width=960&crop=smart&auto=webp&s=9feabdfde8da591f441f9fb7aa7f2ed88cfc218c', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/lizc4v8ncfee1.jpeg?width=1080&crop=smart&auto=webp&s=d6bed1d0aed02d8239f75ef51d0b0120d117b86d', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/lizc4v8ncfee1.jpeg?auto=webp&s=771505bd29fcefa04ceec4f5312123424c52a1a8', 'width': 1080}, 'variants': {}}]} |
|||
Announcing The Stargate Project | 0 | 2025-01-21T22:41:40 | https://openai.com/index/announcing-the-stargate-project/ | badgerfish2021 | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1i6vly7 | false | null | t3_1i6vly7 | /r/LocalLLaMA/comments/1i6vly7/announcing_the_stargate_project/ | false | false | default | 0 | null |
|
Trump announces a $500 billion AI infrastructure investment in the US | 588 | 2025-01-21T22:43:49 | https://www.cnn.com/2025/01/21/tech/openai-oracle-softbank-trump-ai-investment/index.html | fallingdowndizzyvr | cnn.com | 1970-01-01T00:00:00 | 0 | {} | 1i6vnqc | false | null | t3_1i6vnqc | /r/LocalLLaMA/comments/1i6vnqc/trump_announces_a_500_billion_ai_infrastructure/ | false | false | 588 | {'enabled': False, 'images': [{'id': 'g5oyaMZCe-t9ATMJLt1p81sv49mpSCbF4ykemH6v9TE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/qFlenD3wOMEKpyf-2qth3Zo8oJQzBNIpBFiyCeVPdPY.jpg?width=108&crop=smart&auto=webp&s=c6f55ade3f06879899c9a20029e2919bb53f639c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/qFlenD3wOMEKpyf-2qth3Zo8oJQzBNIpBFiyCeVPdPY.jpg?width=216&crop=smart&auto=webp&s=92f7126bf4e0dc9ba837fe8b46fedbe59a953114', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/qFlenD3wOMEKpyf-2qth3Zo8oJQzBNIpBFiyCeVPdPY.jpg?width=320&crop=smart&auto=webp&s=7bfebcfc71a579bd4dbfe3772e0dda5fa1042cd1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/qFlenD3wOMEKpyf-2qth3Zo8oJQzBNIpBFiyCeVPdPY.jpg?width=640&crop=smart&auto=webp&s=1661f472f059c05f183a4286b726667eb4724fc9', 'width': 640}], 'source': {'height': 450, 'url': 'https://external-preview.redd.it/qFlenD3wOMEKpyf-2qth3Zo8oJQzBNIpBFiyCeVPdPY.jpg?auto=webp&s=f43d9c994518049143508b58acd38e2a80089471', 'width': 800}, 'variants': {}}]} |
||
Do You Have Managers That Worry About Model Origin? | 1 | [removed] | 2025-01-21T22:45:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i6vovy/do_you_have_managers_that_worry_about_model_origin/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6vovy | false | null | t3_1i6vovy | /r/LocalLLaMA/comments/1i6vovy/do_you_have_managers_that_worry_about_model_origin/ | false | false | self | 1 | null |
Ollama Environment Issues (Linux/RedHat) | 0 | Am I the only one who finds Ollama ridiculous in terms of setup? I touch the ollama.service file once, and a debug file activates for no reason.
I have now done:
full re-installation; ollama serve -> works fine in a new terminal on the default port (verified with ollama -v)

changed ollama.service per the docs to bind [0.0.0.0:3001](http://0.0.0.0:3001), reloaded the daemon, restarted Ollama -> ollama serve randomly chooses either port 3001 or the default. It tells me a bunch of stuff in debug. It also seems to occupy port 3001 without actually functioning, because the port is randomly in use and not.

I can't delete directives via ollama.service edits because the edit buffer opens empty and is designed to take new contents and layer them onto the file. Which is odd, because now I can't stop it from binding the same address. I don't know if deleting the override resets it.
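For reference, a minimal override sketch, assuming the stock systemd unit from the official install script (the port just mirrors the post, not a recommendation):

```ini
# Created via `sudo systemctl edit ollama.service`, which writes
# /etc/systemd/system/ollama.service.d/override.conf on top of the stock unit.
[Service]
Environment="OLLAMA_HOST=0.0.0.0:3001"
```

Then `sudo systemctl daemon-reload && sudo systemctl restart ollama`. Note that a variable set in the unit only affects the service; an `ollama serve` launched by hand in a terminal still binds the default address unless `OLLAMA_HOST` is also exported in that shell, which may explain the port seeming randomly in use and not.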
What is the deal? Is Linux's selection bias in user base really used as an excuse to just throw your hands up and say, open source LOL! Because I know development is tedious, but this seems excessive. | 2025-01-21T23:04:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i6w5ig/ollama_environment_issues_linuxredhat/ | Oceanboi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6w5ig | false | null | t3_1i6w5ig | /r/LocalLLaMA/comments/1i6w5ig/ollama_environment_issues_linuxredhat/ | false | false | self | 0 | null |
6x AMD Instinct Mi60 AI Server + Qwen2.5-Coder-32B-Instruct-GPTQ-Int4 - 35 t/s | 17 | 2025-01-21T23:20:00 | https://v.redd.it/wly9co18kfee1 | Any_Praline_8178 | /r/LocalLLaMA/comments/1i6whth/6x_amd_instinct_mi60_ai_server/ | 1970-01-01T00:00:00 | 0 | {} | 1i6whth | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wly9co18kfee1/DASHPlaylist.mpd?a=1740223216%2CNDgwYmYxNDAwNzkzN2ZlODFlOGM1ZWJhY2ZkYzEzMWQ1ZTA3NjlkMzlkMGFlYWFlODQyMDJjNjgxZWQzZjdlYg%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/wly9co18kfee1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1904, 'hls_url': 'https://v.redd.it/wly9co18kfee1/HLSPlaylist.m3u8?a=1740223216%2COTc2NjJhOTU4ZDQ1OTMyNThjZWI3OWY4ZDNjMTU1MWU3MTU2NmFlYzliN2VmYzU1MmQxODExM2Q4NjhlOTkwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wly9co18kfee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1i6whth | /r/LocalLLaMA/comments/1i6whth/6x_amd_instinct_mi60_ai_server/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X', 'resolutions': [{'height': 190, 'url': 'https://external-preview.redd.it/dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a2f4567f7cf39b7b49f006b00d42757548b7711', 'width': 108}, {'height': 380, 'url': 'https://external-preview.redd.it/dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X.png?width=216&crop=smart&format=pjpg&auto=webp&s=1f8bb2e93f5453376453102c4ab2fb0d85c6b77a', 'width': 216}, {'height': 563, 'url': 'https://external-preview.redd.it/dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X.png?width=320&crop=smart&format=pjpg&auto=webp&s=9df80db8ab20d98ec4d625182bcd9efe812efcf9', 'width': 320}, {'height': 1127, 'url': 'https://external-preview.redd.it/dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X.png?width=640&crop=smart&format=pjpg&auto=webp&s=f70df620a86e6da28f02b73ddc8235388b24cad4', 'width': 640}, {'height': 1691, 'url': 'https://external-preview.redd.it/dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X.png?width=960&crop=smart&format=pjpg&auto=webp&s=347b7804753e79c9377b6766920b391ed0e69212', 'width': 960}, {'height': 1903, 'url': 'https://external-preview.redd.it/dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5cdbf5898bbcfb3c9c89155cffe10b87df9f48dc', 'width': 1080}], 'source': {'height': 3796, 'url': 'https://external-preview.redd.it/dml6MnhsMThrZmVlMa3PllzzEoBewqaI6d-ede8FIintawqw5i_5yPr44i4X.png?format=pjpg&auto=webp&s=653039eed11661c65bf19aeb04046431c2d8dc73', 'width': 2154}, 'variants': {}}]} |
||
0x Mini: An 8B Open-Source Model Challenging GPT-4o-mini | 1 | [removed] | 2025-01-21T23:31:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i6wr3i/0x_mini_an_8b_opensource_model_challenging/ | Admirable_Answer8045 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6wr3i | false | null | t3_1i6wr3i | /r/LocalLLaMA/comments/1i6wr3i/0x_mini_an_8b_opensource_model_challenging/ | false | false | self | 1 | null |
0x Mini: An Open Source LLM That's Surprisingly Powerful, and is comparable to GPT-4o-Mini | 1 | [removed] | 2025-01-21T23:34:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i6wtk3/0x_mini_an_open_source_llm_thats_surprisingly/ | Perfect-Bowl-1601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6wtk3 | false | null | t3_1i6wtk3 | /r/LocalLLaMA/comments/1i6wtk3/0x_mini_an_open_source_llm_thats_surprisingly/ | false | false | self | 1 | null |
0x Mini: An Open Source LLM That's Surprisingly Powerful, and is comparable to GPT-4o-Mini | 1 | [removed] | 2025-01-21T23:37:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i6wvlm/0x_mini_an_open_source_llm_thats_surprisingly/ | Perfect-Bowl-1601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6wvlm | false | null | t3_1i6wvlm | /r/LocalLLaMA/comments/1i6wvlm/0x_mini_an_open_source_llm_thats_surprisingly/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'I29N5TjkCIa41OMFT6ukKxZfD9w5wxFQYkO5IRNAj6s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=108&crop=smart&auto=webp&s=c7f9004111cbd1fcfba915b66e1ce3ab7cd4b0e1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=216&crop=smart&auto=webp&s=87629e1e17db952ce18b3e06b8c07c8b5ccd3684', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=320&crop=smart&auto=webp&s=d7c1475c49ed3a621192cb961da69f329b5829e7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=640&crop=smart&auto=webp&s=f0f209be540c59e518231b96a7e871c1c19bf762', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=960&crop=smart&auto=webp&s=55e4a5d325b25be5e7ea425dd74aae076c3a3329', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=1080&crop=smart&auto=webp&s=ef0ff450ec400216cbdde39830226aec464e4ea3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?auto=webp&s=e1d103cd23b2a001035b9596c3859a4914a5d64e', 'width': 1200}, 'variants': {}}]} |
0x Mini: An Open Source LLM. | 1 | [removed] | 2025-01-21T23:38:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i6ww71/0x_mini_an_open_source_llm/ | Perfect-Bowl-1601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6ww71 | false | null | t3_1i6ww71 | /r/LocalLLaMA/comments/1i6ww71/0x_mini_an_open_source_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'I29N5TjkCIa41OMFT6ukKxZfD9w5wxFQYkO5IRNAj6s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=108&crop=smart&auto=webp&s=c7f9004111cbd1fcfba915b66e1ce3ab7cd4b0e1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=216&crop=smart&auto=webp&s=87629e1e17db952ce18b3e06b8c07c8b5ccd3684', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=320&crop=smart&auto=webp&s=d7c1475c49ed3a621192cb961da69f329b5829e7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=640&crop=smart&auto=webp&s=f0f209be540c59e518231b96a7e871c1c19bf762', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=960&crop=smart&auto=webp&s=55e4a5d325b25be5e7ea425dd74aae076c3a3329', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?width=1080&crop=smart&auto=webp&s=ef0ff450ec400216cbdde39830226aec464e4ea3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Nrp1QeCOjYNWpDzvGg-o4o47XnmkdFBJ1ItGeos-Xrw.jpg?auto=webp&s=e1d103cd23b2a001035b9596c3859a4914a5d64e', 'width': 1200}, 'variants': {}}]} |
Local hero ports new minimax 456B model to llama.cpp | 56 | 2025-01-21T23:44:01 | https://imgflip.com/i/9hi08y | Aaaaaaaaaeeeee | imgflip.com | 1970-01-01T00:00:00 | 0 | {} | 1i6x0od | false | null | t3_1i6x0od | /r/LocalLLaMA/comments/1i6x0od/local_hero_ports_new_minimax_456b_model_to/ | false | false | 56 | {'enabled': False, 'images': [{'id': '0UGvGM_ucDnJEb4_rj2GCEcyFKKfbPZATXn6QHs240U', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/XboNX-2aBFaEZb_pw6HGHDpi2wkmn6wYOlAdwzo9X-M.jpg?width=108&crop=smart&auto=webp&s=3db79bc18a10e48e60b6588f07f6527b766a7367', 'width': 108}, {'height': 209, 'url': 'https://external-preview.redd.it/XboNX-2aBFaEZb_pw6HGHDpi2wkmn6wYOlAdwzo9X-M.jpg?width=216&crop=smart&auto=webp&s=dad9fda17452c0ff688ed76b89ab46418156ebcd', 'width': 216}, {'height': 310, 'url': 'https://external-preview.redd.it/XboNX-2aBFaEZb_pw6HGHDpi2wkmn6wYOlAdwzo9X-M.jpg?width=320&crop=smart&auto=webp&s=32c1d2fa99a9246c99a12c019dd3fbea99b33755', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/XboNX-2aBFaEZb_pw6HGHDpi2wkmn6wYOlAdwzo9X-M.jpg?auto=webp&s=3896036177719fd607a76b9e23134bcedc195960', 'width': 515}, 'variants': {}}]} |
||
0x Mini | 1 | [removed] | 2025-01-21T23:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i6x1ea/0x_mini/ | Perfect-Bowl-1601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6x1ea | false | null | t3_1i6x1ea | /r/LocalLLaMA/comments/1i6x1ea/0x_mini/ | false | false | self | 1 | null |
Deepseek-R1 is brittle | 58 | 2025-01-21T23:45:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i6x1rz/deepseekr1_is_brittle/ | girishsk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6x1rz | false | null | t3_1i6x1rz | /r/LocalLLaMA/comments/1i6x1rz/deepseekr1_is_brittle/ | false | false | 58 | null |
||
0x Mini | 1 | [removed] | 2025-01-21T23:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i6x2gf/0x_mini/ | Perfect-Bowl-1601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6x2gf | false | null | t3_1i6x2gf | /r/LocalLLaMA/comments/1i6x2gf/0x_mini/ | false | false | self | 1 | null |
Just found out about Deepseek and I'm baffled how China managed to match O1 for so cheap. Is it ok to use the model over at NanoGPT since I heard it stores your data but NanoGPT doesn't so you should be anonymous? | 0 | Deepseek is impressive. Discovered it thanks to [this video](https://www.youtube.com/watch?v=-2k1rcRzsLA) and it rekindled my hype for AI.
Is NanoGPT a good place to try it out? I have a dollar's worth of credits there, so I thought I might, especially since NanoGPT does not store your chat history.
Or is there a better platform like openrouter or something.
If you wanna get 5% off NanoGPT and give me 10% off, [follow this link to create an account](https://nano-gpt.com/invite/cynxGmiK). | 2025-01-21T23:51:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i6x6py/just_found_out_about_deepseek_and_im_baffled_how/ | Butefluko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6x6py | false | null | t3_1i6x6py | /r/LocalLLaMA/comments/1i6x6py/just_found_out_about_deepseek_and_im_baffled_how/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ibEhMr9jpEe_YNRXqfcd1Wi-6Vi7Q2nPT-it6FodmyQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w2tflvF_H2FmLSYWHzMSVMiYXm2v50smrN-HfCuWdb8.jpg?width=108&crop=smart&auto=webp&s=8641241409b428a9ca9055095b4d408ca51a3f67', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/w2tflvF_H2FmLSYWHzMSVMiYXm2v50smrN-HfCuWdb8.jpg?width=216&crop=smart&auto=webp&s=80c6c5f8684c46c1dffe4d21b8d274191fbcdc39', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/w2tflvF_H2FmLSYWHzMSVMiYXm2v50smrN-HfCuWdb8.jpg?width=320&crop=smart&auto=webp&s=a113e92aac347025cf5703f20671da6fa258421a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/w2tflvF_H2FmLSYWHzMSVMiYXm2v50smrN-HfCuWdb8.jpg?auto=webp&s=edc61a1dafec12a878b2ed7d0f859d5046549517', 'width': 480}, 'variants': {}}]} |
I ran the "apple test" benchmark on all DeepSeek-R1 distilled versions so you don't have to. Here are the results: | 1 | [removed] | 2025-01-22T00:06:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i6xhxu/i_ran_the_apple_test_benchmark_on_all_deepseekr1/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6xhxu | false | null | t3_1i6xhxu | /r/LocalLLaMA/comments/1i6xhxu/i_ran_the_apple_test_benchmark_on_all_deepseekr1/ | false | false | self | 1 | null |
Newbie question about DeepSeek R1 | 1 | Could it be fine-tuned to be a role playing model? Thanks in advance! | 2025-01-22T00:17:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i6xr60/newbie_question_about_deepseek_r1/ | RuneVikingx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6xr60 | false | null | t3_1i6xr60 | /r/LocalLLaMA/comments/1i6xr60/newbie_question_about_deepseek_r1/ | false | false | self | 1 | null |
How to load up large file documents on Deepseek R1? | 1 | Hey everyone,
I’m currently using ChatGPT and Claude for uploading documents, but I’m limited by the number of documents I can upload. I’m looking for other solutions to efficiently upload and process large PDF files with an open-source LLM.

It’d be cool if I could upload these documents and just keep them there for future use.

Is this something that can be done with the DeepSeek R1 model? I’m not the most tech-savvy, so I figured I’d ask for help from the community here.
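One way to "keep them there for future use" with a local model is a small retrieval layer: embed the document chunks once, save them to disk, and prepend the top matches to each prompt. A minimal sketch, assuming sentence-transformers is installed and the PDFs are already split into text chunks (model name and paths are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def build_index(chunks, path="index.npz"):
    # Embed every chunk once and persist both vectors and text.
    vecs = embedder.encode(chunks, normalize_embeddings=True)
    np.savez(path, vecs=vecs, chunks=np.array(chunks, dtype=object))

def retrieve(question, path="index.npz", k=5):
    data = np.load(path, allow_pickle=True)
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(data["vecs"] @ q)[::-1][:k]  # cosine similarity via dot product
    return [data["chunks"][i] for i in top]

# Prepend "\n".join(retrieve(question)) to the prompt sent to the local R1 model.
```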
Thanks in advance! | 2025-01-22T00:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i6xyvn/how_to_load_up_large_file_documents_on_deepseek_r1/ | Glad_Travel_1663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6xyvn | false | null | t3_1i6xyvn | /r/LocalLLaMA/comments/1i6xyvn/how_to_load_up_large_file_documents_on_deepseek_r1/ | false | false | self | 1 | null |
Implementing Character.AI’s Memory Optimizations in nanoGPT | 1 | 2025-01-22T00:35:06 | https://www.njkumar.com/implementing-characterais-memory-optimizations-in-nanogpt/?token=82a517f933 | fendiwap1234 | njkumar.com | 1970-01-01T00:00:00 | 0 | {} | 1i6y534 | false | null | t3_1i6y534 | /r/LocalLLaMA/comments/1i6y534/implementing_characterais_memory_optimizations_in/ | false | false | default | 1 | null |
|
Deepseek R1 completely ignores anything I say about '<think>' '</think>' tags in its output | 0 | I'm trying to write a function that hides the <think> </think> section of DeepSeek R1 output, using DeepSeek R1 itself. It completely ignores any references to <think> </think> anywhere in its output other than around its actual thoughts...
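A deterministic post-processing pass tends to work better than asking the model to police its own tags; a minimal sketch (the tag names are R1's standard ones, everything else is illustrative):

```python
import re

def strip_think(text: str) -> str:
    """Remove any <think>...</think> blocks from R1 output."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```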
is this an inherent safety mechanism implemented to avoid having multiple thinking sections in the JSON output? | 2025-01-22T00:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i6y6hc/deepseek_r1_completely_ignores_anything_i_say/ | Economy-Fact-8362 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6y6hc | false | null | t3_1i6y6hc | /r/LocalLLaMA/comments/1i6y6hc/deepseek_r1_completely_ignores_anything_i_say/ | false | false | self | 0 | null
Implementing Character.AI’s Memory Optimizations in nanoGPT | 14 | 2025-01-22T00:37:15 | https://www.njkumar.com/implementing-characterais-memory-optimizations-in-nanogpt/ | fendiwap1234 | njkumar.com | 1970-01-01T00:00:00 | 0 | {} | 1i6y6t2 | false | null | t3_1i6y6t2 | /r/LocalLLaMA/comments/1i6y6t2/implementing_characterais_memory_optimizations_in/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?width=108&crop=smart&auto=webp&s=2cd1045517eda93c2aaafc19130bea85c7466318', 'width': 108}], 'source': {'height': 120, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?auto=webp&s=6d730f0aadb2da7eefca105ee16d8e99ecfca4a6', 'width': 124}, 'variants': {}}]} |
||
ooh... Akward. | 231 | 2025-01-22T00:58:10 | https://v.redd.it/2aciqndr1gee1 | enspiralart | /r/LocalLLaMA/comments/1i6ymm4/ooh_akward/ | 1970-01-01T00:00:00 | 0 | {} | 1i6ymm4 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/2aciqndr1gee1/DASHPlaylist.mpd?a=1740229094%2CMDhlZjFlYTg5NzM4ZTE5NWI5NzYxZDFiYThjNjBhMGE2YTM1MzY0NjNhNmM5MGUzODk3NTE5N2FhNDRlZDU3OQ%3D%3D&v=1&f=sd', 'duration': 89, 'fallback_url': 'https://v.redd.it/2aciqndr1gee1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 442, 'hls_url': 'https://v.redd.it/2aciqndr1gee1/HLSPlaylist.m3u8?a=1740229094%2COWVjNzY3ZDA3NjhhNDM2OTM2YjgwYWUwNTZiYzBmYzk5NmY3MjViM2FmOTE5YjhkMDBhYzdlYzdjMGUwY2VkMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2aciqndr1gee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1i6ymm4 | /r/LocalLLaMA/comments/1i6ymm4/ooh_akward/ | false | false | 231 | {'enabled': False, 'images': [{'id': 'N2IxNjNvZHIxZ2VlMa6_sTZ0EXgszxzlzPuLjFIKBTxbZZ9G0cluTufyUgNX', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/N2IxNjNvZHIxZ2VlMa6_sTZ0EXgszxzlzPuLjFIKBTxbZZ9G0cluTufyUgNX.png?width=108&crop=smart&format=pjpg&auto=webp&s=3f50ec9f3eedeb5f2b88e27196270c9866a2f809', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/N2IxNjNvZHIxZ2VlMa6_sTZ0EXgszxzlzPuLjFIKBTxbZZ9G0cluTufyUgNX.png?width=216&crop=smart&format=pjpg&auto=webp&s=c932a41d505b3cbbcf1f0a100d284066faa7ec7b', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/N2IxNjNvZHIxZ2VlMa6_sTZ0EXgszxzlzPuLjFIKBTxbZZ9G0cluTufyUgNX.png?width=320&crop=smart&format=pjpg&auto=webp&s=1c19f3c14df183dde6f16003051efe0a7da7da83', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/N2IxNjNvZHIxZ2VlMa6_sTZ0EXgszxzlzPuLjFIKBTxbZZ9G0cluTufyUgNX.png?width=640&crop=smart&format=pjpg&auto=webp&s=c3fe9ca35172e8d65fe6ef02b208db843a7e7976', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/N2IxNjNvZHIxZ2VlMa6_sTZ0EXgszxzlzPuLjFIKBTxbZZ9G0cluTufyUgNX.png?width=960&crop=smart&format=pjpg&auto=webp&s=4e0c0c0fb2ad829ed506811bb1c08b50833be21e', 'width': 960}], 'source': {'height': 538, 'url': 'https://external-preview.redd.it/N2IxNjNvZHIxZ2VlMa6_sTZ0EXgszxzlzPuLjFIKBTxbZZ9G0cluTufyUgNX.png?format=pjpg&auto=webp&s=5c8dc5cb4c48b4cabfc041cdd3f32212d7c14811', 'width': 1040}, 'variants': {}}]} |
||
Reasoning models as agents | 1 | Anyone use reasoning models in their agent workflow? I'm having a hard time telling them to reply in an exact format. I can parse out the thinking parts.
Noticed this mostly with the smaller reasoning models, the top dogs usually listen.
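One pattern that may help: treat format compliance as a parsing problem rather than a prompting problem. A rough sketch, assuming the standard <think> tags and a JSON-object reply:

```python
import json
import re

def parse_reply(raw: str):
    # Drop the reasoning block, then pull the first JSON object out of
    # whatever prose the model wrapped around it.
    body = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)
    match = re.search(r"\{.*\}", body, flags=re.DOTALL)
    return json.loads(match.group(0)) if match else None
```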
| 2025-01-22T00:58:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i6yn4u/reasoning_model_as_agents/ | sugarfreecaffeine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6yn4u | false | null | t3_1i6yn4u | /r/LocalLLaMA/comments/1i6yn4u/reasoning_model_as_agents/ | false | false | self | 1 | null |
Need this feature locally soon. Anyone know how to do this locally? Both DeepThink and search together; it works better, at least I liked it. A new way to browse | 0 | 2025-01-22T01:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i6ywii/need_this_feature_locally_soon_anyone_know_how_to/ | TheLogiqueViper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6ywii | false | null | t3_1i6ywii | /r/LocalLLaMA/comments/1i6ywii/need_this_feature_locally_soon_anyone_know_how_to/ | false | false | 0 | null
||
Unsloth accuracy | 1 | [removed] | 2025-01-22T01:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i6z2i7/unsloth_accuracy/ | BitAcademic9597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6z2i7 | false | null | t3_1i6z2i7 | /r/LocalLLaMA/comments/1i6z2i7/unsloth_accuracy/ | false | false | self | 1 | null |
How many GPUs? | 1 | [removed]
[View Poll](https://www.reddit.com/poll/1i6z4lj) | 2025-01-22T01:21:41 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i6z4lj | false | null | t3_1i6z4lj | /r/LocalLLaMA/comments/1i6z4lj/how_many_gpus/ | false | false | default | 1 | null |
||
Is it the R1 model on the DeepSeek website? And is it FREE? | 1 | [removed] | 2025-01-22T01:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i6z56f/is_it_the_r1_model_on_the_deepseek_website_and_is/ | brestho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6z56f | false | null | t3_1i6z56f | /r/LocalLLaMA/comments/1i6z56f/is_it_the_r1_model_on_the_deepseek_website_and_is/ | false | false | 1 | null |
|
Self-improvement to AGI? | 1 | I've been using R1 a lot today and I was thinking about how far AI has come. It can solve complex problems in impressive ways while being locally hosted (I'm using the distilled 32B version).

So I was wondering: what if we gave these models their own code and a way to run inference on some prompts? We could then just ask the model to evaluate its own answer, propose a way to improve answers (to perform better on benchmarks, for example by fine-tuning on newly generated data or tweaking the architecture a bit), and test the change on its own. Just like reinforcement learning, but with the model controlling its own iterations.

Using this, we transform test-time scaling into test-time self-improvement and get to AGI with a lot of compute?
Maybe this is what OAI is cooking behind closed doors? | 2025-01-22T01:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i6zafl/selfimprovement_to_agi/ | valcore93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6zafl | false | null | t3_1i6zafl | /r/LocalLLaMA/comments/1i6zafl/selfimprovement_to_agi/ | false | false | self | 1 | null
The distilled R1 models likely work best in workflows, so now's a great time to learn those if you haven't already! | 54 | Another member of our board recently pointed out that Deepseek's paper ["DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning"](https://kingy.ai/wp-content/uploads/2025/01/DeepSeek_R1.pdf) said the following:
>*When evaluating DeepSeek-R1, we observe that it is sensitive to prompts. Few-shot prompting consistently degrades its performance. Therefore, we recommend users directly describe the problem and specify the output format using a zero-shot setting for optimal results.*
R1, and likely all reasoning models, are best suited for zero-shot "*please think through this specific problem*" sorts of prompts, and you'll likely get far better results doing that than having a multi-turn conversation jammed in there while it thinks.
So once again, I take the opportunity to say: workflows are your friend. I know I'm always harping about workflows, but this case is a slam dunk use-case for learning to use workflows, getting comfortable with them, etc.
You will likely get far better results out of R1, QwQ, the R1 Distilled models, etc., if you have a workflow that does something similar to the following:

1. Summarize what the most recent message is saying and/or asking
2. Summarize any supporting context to assist in thinking about this
3. Pass 1 and 2 into the reasoning model, and let it think through the problem
4. Respond to the user using the output of 3.
There are 2 really valuable benefits of doing this: first, you only pass in a single scoped problem every time, and second, you are hiding the full thinking logic of step 3, so it isn't kept within the conversation or agentic history.
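A minimal sketch of that 4-step flow, assuming a local OpenAI-compatible server; the endpoint and model names are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def ask(model: str, prompt: str) -> str:
    r = client.chat.completions.create(model=model, messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def respond(history: str, latest: str) -> str:
    problem = ask("small-model", f"Summarize what this message is saying/asking:\n{latest}")
    context = ask("small-model", f"Summarize any context relevant to that:\n{history}")
    # Step 3: a single scoped, zero-shot problem for the reasoning model.
    thought = ask("r1-distill", f"{context}\n\nProblem: {problem}\n\nThink it through and solve it.")
    answer = thought.split("</think>")[-1].strip()  # hide the thinking logic from the history
    # Step 4: respond to the user using only the final answer.
    return ask("small-model", f"Turn this into a reply to the user:\n{answer}")
```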
It doesn't matter which workflow program you go with: n8n, langflow, wilmerai, omnichain, whatever. This is a great time to just try them out if you haven't already, and get used to working with them. I've been using workflows exclusively when using AI since at least May or June of last year, and I can't imagine going back. Many of you may end up not liking them, but many of you might. Either way, you'll get the experience AND can use these distilled R1 models to their maximum benefit. | 2025-01-22T01:31:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i6zbsf/the_distilled_r1_models_likely_work_best_in/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6zbsf | false | null | t3_1i6zbsf | /r/LocalLLaMA/comments/1i6zbsf/the_distilled_r1_models_likely_work_best_in/ | false | false | self | 54 | null
*who's* content policies do you need to check with Deep Seek? | 2 | 2025-01-22T01:37:52 | Chickenwomp | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i6zgl5 | false | null | t3_1i6zgl5 | /r/LocalLLaMA/comments/1i6zgl5/whos_content_policies_do_you_need_to_check_with/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'TEArX_xx9EMHI_um1sA8NDcWvUioJ_WAanDFoi3RXBY', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/h3n623st8gee1.png?width=108&crop=smart&auto=webp&s=86e3930f9c69949df0dd307b7618d7f591c3bced', 'width': 108}, {'height': 54, 'url': 'https://preview.redd.it/h3n623st8gee1.png?width=216&crop=smart&auto=webp&s=442641e1b0438057e97c7cd0f5345795f66b1b78', 'width': 216}, {'height': 80, 'url': 'https://preview.redd.it/h3n623st8gee1.png?width=320&crop=smart&auto=webp&s=ad0c0fe82096cbf30363d7320b72e09ad1ec8711', 'width': 320}, {'height': 161, 'url': 'https://preview.redd.it/h3n623st8gee1.png?width=640&crop=smart&auto=webp&s=94eb2c30ac3576f4fc7d17c7f9dd41d73f9a94a5', 'width': 640}, {'height': 242, 'url': 'https://preview.redd.it/h3n623st8gee1.png?width=960&crop=smart&auto=webp&s=c160ae1482dbc064e20007b601e522e5b55a90b2', 'width': 960}, {'height': 272, 'url': 'https://preview.redd.it/h3n623st8gee1.png?width=1080&crop=smart&auto=webp&s=79dabbb19c03a9b55734bc0b41b5f7a351449f5f', 'width': 1080}], 'source': {'height': 405, 'url': 'https://preview.redd.it/h3n623st8gee1.png?auto=webp&s=a37a03814970df4538fae6d2b2e509dd20bf2f30', 'width': 1605}, 'variants': {}}]} |
|||
Just a comparison of US $500B Stargate AI project to other tech projects | 118 | **Manhattan Project** \~$30 billion in today’s dollars
**Apollo Program** \~$170–$180 billion in today’s dollars
**Space Shuttle Program** \~$275–$300 billion in today’s dollars
**Interstate Highway System** (entire decades-long buildout) \~$500–$550 billion in today’s dollars
Stargate is a huge AI project. | 2025-01-22T01:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i6zid8/just_a_comparison_of_us_500b_stargate_ai_project/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6zid8 | false | null | t3_1i6zid8 | /r/LocalLLaMA/comments/1i6zid8/just_a_comparison_of_us_500b_stargate_ai_project/ | false | false | self | 118 | null
How to stop Deepseek R1 from outputting its stream of consciousness? | 0 | `do not output your reasoning/thinking sequence monologue and just continue the story`

is not doing the trick. I'm using it as a creative writing helper and the <think> output is kind of disruptive. | 2025-01-22T01:42:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i6zk16/how_to_prompt_deepseek_r1_from_outputting_its/ | stvneads | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6zk16 | false | null | t3_1i6zk16 | /r/LocalLLaMA/comments/1i6zk16/how_to_prompt_deepseek_r1_from_outputting_its/ | false | false | self | 0 | null
Ryzen AI Max+ 395 Eval Speed Estimate | 1 | We can make a pretty decent estimate of the speed of the Ryzen AI Max+ 395 based on what members of the community have tested with the earlier AMD 8700g APUs.
These ran llama 3 70b approximately 30% faster than CPU alone with 5600mt/s ddr5 memory.
Since the Ryzen AI Max+ 395 runs 8000mt/s something in the range 30-50% faster than CPU alone is likely (I.e 5 tokens/s).
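A back-of-envelope cross-check against memory bandwidth, assuming the reported 256-bit LPDDR5X-8000 interface and a ~40 GB Q4 70B weight read per token, lands in the same neighborhood:

```python
# Theoretical ceiling from memory bandwidth alone (all figures are assumptions).
bus_width_bits = 256
transfer_rate_mt_s = 8000
bandwidth_gb_s = bus_width_bits / 8 * transfer_rate_mt_s / 1000  # ~256 GB/s
weights_gb = 40  # Llama 3 70B at ~Q4
print(f"~{bandwidth_gb_s:.0f} GB/s -> ~{bandwidth_gb_s / weights_gb:.1f} tokens/s ceiling")
```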
See below:
https://www.reddit.com/r/LocalLLaMA/s/CLiJmu8LTs | 2025-01-22T01:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i6zpk5/ryzen_ai_max_395_eval_speed_estimate/ | NewBronzeAge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6zpk5 | false | null | t3_1i6zpk5 | /r/LocalLLaMA/comments/1i6zpk5/ryzen_ai_max_395_eval_speed_estimate/ | false | false | self | 1 | null |
Unsloth accuracy | 1 | [removed] | 2025-01-22T02:04:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i6zzt5/unsloth_accuracy/ | BitAcademic9597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i6zzt5 | false | null | t3_1i6zzt5 | /r/LocalLLaMA/comments/1i6zzt5/unsloth_accuracy/ | false | false | self | 1 | null |
Unsloth accuracy vs Transformers | 1 | [removed] | 2025-01-22T02:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i700uw/unsloth_accuracy_vs_transformers/ | BitAcademic9597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i700uw | false | null | t3_1i700uw | /r/LocalLLaMA/comments/1i700uw/unsloth_accuracy_vs_transformers/ | false | false | self | 1 | null |
o3 has the same base model as o1 according to Dylan Patel of SemiAnalysis | 2 | From https://x.com/dylan522p/status/1881818550400336025 :
>They did new post training, but same base model.
Alternative link: https://xcancel.com/dylan522p/status/1881818550400336025 .
Background info: Dylan Patel is one of the authors of what I believe is the definitive article on how o1 and o1 pro work: (hard paywall) https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-training-infrastructure-orion-and-claude-3-5-opus-failures/ . Dylan Patel has source(s) at OpenAI per https://x.com/dylan522p/status/1869084570618060916 ; alternative link: https://xcancel.com/dylan522p/status/1869084570618060916 . | 2025-01-22T02:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i7023z/o3_has_the_same_base_model_as_o1_according_to/ | Wiskkey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i7023z | false | null | t3_1i7023z | /r/LocalLLaMA/comments/1i7023z/o3_has_the_same_base_model_as_o1_according_to/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'pFHGCL4FB46fn2O7jmnK5l2eQ84BQKuRAAwq1Ph9L-4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/quqejI2sR5xyuZmXOtqMsuiLgcr_tr6cHQTqfFFYW80.jpg?width=108&crop=smart&auto=webp&s=4241f5ca0733ce7d87e082cc059302971e44632b', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/quqejI2sR5xyuZmXOtqMsuiLgcr_tr6cHQTqfFFYW80.jpg?auto=webp&s=991efae1881e482c441f64d6146a6a4c4c10fa84', 'width': 200}, 'variants': {}}]} |
Pairing a W7900 with a 7900 XTX - yay or nay? | 2 | I have an opportunity to expand my workstation. Current built is 5950x, 64GB RAM, Sapphire Pulse 7900 XTX.
I love my XTX, and I'm debating between adding a 2nd 7900 XTX, or going for a W7900 for that delicious 48GB VRAM. Maybe even a W7800 48GB version (although they seem hard to find via consumer channels).
ROCm added multi-GPU support some time ago, and it's been incorporated into llama.cpp, vLLM, etc.

I'm looking for some insights from people with experience running different cards on ROCm. I'm using Windows, but I'm relatively competent in Linux and will switch if the performance uplift is there.
I'm also considering adding an A6000 instead, but I'd rather stick with AMD if I can.
I'm not interested in comments to just buy a 3090.
Will llama.cpp and/or vLLM utilize both a 7900 XTX and a W7900/W7800 appropriately? What kinds of configs can I leverage? I'm also really curious about speculative decoding: running a 1.5B/3B draft model on the 7900 XTX and a 70B Q4 model on the W7900 for speed and world domination. Will speculative decoding work across separate GPUs? Is the PCIe 4 connection too slow for it?
Thanks in advance. I need to pull the trigger in the next week and want to make the most of my hardware. | 2025-01-22T02:07:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i70257/pairing_a_w7900_with_a_7900_xtx_yay_or_nay/ | Thrumpwart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i70257 | false | null | t3_1i70257 | /r/LocalLLaMA/comments/1i70257/pairing_a_w7900_with_a_7900_xtx_yay_or_nay/ | false | false | self | 2 | null |
Unsloth accuracy vs Transformers | 1 | [removed] | 2025-01-22T02:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i702sj/unsloth_accuracy_vs_transformers/ | BitAcademic9597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i702sj | false | null | t3_1i702sj | /r/LocalLLaMA/comments/1i702sj/unsloth_accuracy_vs_transformers/ | false | false | self | 1 | null |
Now deploy via Transformers, Llama cpp, Ollama or integrate with XAI, OpenAI, Anthropic, Openrouter or custom endpoints! Local or OpenAI Embeddings CPU/MPS/CUDA Support Linux, Windows & Mac. Fully open source. | 13 | 2025-01-22T02:09:23 | http://github.com/cntrlai/notate | Hairetsu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i703j8 | false | null | t3_1i703j8 | /r/LocalLLaMA/comments/1i703j8/now_deploy_via_transformers_llama_cpp_ollama_or/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'VIZX3kXm7QbezauxtVStGDaXX6iiEZQ8_CrL-v_k-Sw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F8-deTA8UPvQ15Oy12MGtJO13bYFuroRHIhDV-DD65s.jpg?width=108&crop=smart&auto=webp&s=169b1ed2d7fa6b6539308dc6c53be6ddbc40ae73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F8-deTA8UPvQ15Oy12MGtJO13bYFuroRHIhDV-DD65s.jpg?width=216&crop=smart&auto=webp&s=30e751377e72a1afafd5da17006dce49c757d9b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F8-deTA8UPvQ15Oy12MGtJO13bYFuroRHIhDV-DD65s.jpg?width=320&crop=smart&auto=webp&s=5b8a65a0c5bd66a67b258d07586e6ee2b85d0c04', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F8-deTA8UPvQ15Oy12MGtJO13bYFuroRHIhDV-DD65s.jpg?width=640&crop=smart&auto=webp&s=3db535ac8a44bd516e80cc28284fb4eb89067335', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F8-deTA8UPvQ15Oy12MGtJO13bYFuroRHIhDV-DD65s.jpg?width=960&crop=smart&auto=webp&s=8f09818b01850e49f10c09f664bd16341675837b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F8-deTA8UPvQ15Oy12MGtJO13bYFuroRHIhDV-DD65s.jpg?width=1080&crop=smart&auto=webp&s=c27ac6401fef771bd55f1146dc3db3c4f71503b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F8-deTA8UPvQ15Oy12MGtJO13bYFuroRHIhDV-DD65s.jpg?auto=webp&s=3cdfd29b5967bb8a12c7014a52c1df91be83618b', 'width': 1200}, 'variants': {}}]} |