title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Production grade LLM Orchestration framework | 1 | [removed] | 2025-01-23T15:36:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i862eu/production_grade_llm_orchestration_framework/ | awesum_11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i862eu | false | null | t3_1i862eu | /r/LocalLLaMA/comments/1i862eu/production_grade_llm_orchestration_framework/ | false | false | self | 1 | null |
What are the best small LLM models for Data Analysis ? | 1 | [removed] | 2025-01-23T15:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i864kl/what_are_the_best_small_llm_models_for_data/ | marmagdotme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i864kl | false | null | t3_1i864kl | /r/LocalLLaMA/comments/1i864kl/what_are_the_best_small_llm_models_for_data/ | false | false | self | 1 | null |
Hard integral problem | 1 | [removed] | 2025-01-23T15:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i864mg/hard_integral_problem/ | PsychologicalKnee562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i864mg | false | null | t3_1i864mg | /r/LocalLLaMA/comments/1i864mg/hard_integral_problem/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '24VKdL_WzUG7N1Ti5GjlruUpfCsQfdr8eN05WjGS4kY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=108&crop=smart&auto=webp&s=fcc5696614a4a7fbcb0e8d41870c6f131f6ba9f5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=216&crop=smart&auto=webp&s=15e826d21ecbe21556f362063b18678e271daab4', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?auto=webp&s=65a1fa0d343cd544754bf78a93c832f7a9bb9e9b', 'width': 316}, 'variants': {}}]} |
First 5090 LLM results, compared to 4090 and 6000 ada | 159 | Source:
[https://www.storagereview.com/review/nvidia-geforce-rtx-5090-review-pushing-boundaries-with-ai-acceleration](https://www.storagereview.com/review/nvidia-geforce-rtx-5090-review-pushing-boundaries-with-ai-acceleration)
| 2025-01-23T15:42:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i867k8/first_5090_llm_results_compared_to_4090_and_6000/ | jwestra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i867k8 | false | null | t3_1i867k8 | /r/LocalLLaMA/comments/1i867k8/first_5090_llm_results_compared_to_4090_and_6000/ | false | false | self | 159 | {'enabled': False, 'images': [{'id': 'cE9F3V0sZMIM38OKB_0p9WGeHg3zezH3yozi78nqQkE', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=108&crop=smart&auto=webp&s=2607ead8e8b60150f895a02e59cadff9072a1912', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=216&crop=smart&auto=webp&s=edd052b230429cfa520548517760c1c85e04eabb', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=320&crop=smart&auto=webp&s=e988922b06c3d5dbd9e29ebabdbe3edad4a5006f', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=640&crop=smart&auto=webp&s=0a5868e7a86ac26107edab60570277f4095ed053', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=960&crop=smart&auto=webp&s=d6c5c030d72d5b6dd998ac5c88b8aa932d620df1', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=1080&crop=smart&auto=webp&s=6ebaf3cb9b47b3171d965be4958f5ae3b01db4d3', 'width': 1080}], 'source': {'height': 1201, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?auto=webp&s=643d13a1263b329317236708d8b4ec436a54b535', 'width': 2000}, 'variants': {}}]} |
Determining the Validity of Large Language Models for Automated Perceptual Analysis | 1 | [removed] | 2025-01-23T15:42:44 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i867oh | false | null | t3_1i867oh | /r/LocalLLaMA/comments/1i867oh/determining_the_validity_of_large_language_models/ | false | false | default | 1 | null |
Determining the Validity of Large Language Models for Automated Perceptual Analysis | 1 | [removed] | 2025-01-23T15:43:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i868e5/determining_the_validity_of_large_language_models/ | Nervous-Midnight-175 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i868e5 | false | null | t3_1i868e5 | /r/LocalLLaMA/comments/1i868e5/determining_the_validity_of_large_language_models/ | false | false | self | 1 | null |
Hard integral problem | 1 | [removed] | 2025-01-23T15:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i86ct3/hard_integral_problem/ | PsychologicalKnee562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i86ct3 | false | null | t3_1i86ct3 | /r/LocalLLaMA/comments/1i86ct3/hard_integral_problem/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '24VKdL_WzUG7N1Ti5GjlruUpfCsQfdr8eN05WjGS4kY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=108&crop=smart&auto=webp&s=fcc5696614a4a7fbcb0e8d41870c6f131f6ba9f5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=216&crop=smart&auto=webp&s=15e826d21ecbe21556f362063b18678e271daab4', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?auto=webp&s=65a1fa0d343cd544754bf78a93c832f7a9bb9e9b', 'width': 316}, 'variants': {}}]} |
Scale AI CEO says China has quickly caught the U.S. with the DeepSeek open-source model | 393 | 2025-01-23T15:50:30 | https://www.cnbc.com/2025/01/23/scale-ai-ceo-says-china-has-quickly-caught-the-us-with-deepseek.html | etherd0t | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1i86e4y | false | null | t3_1i86e4y | /r/LocalLLaMA/comments/1i86e4y/scale_ai_ceo_says_china_has_quickly_caught_the_us/ | false | false | 393 | {'enabled': False, 'images': [{'id': 'yYDik_hRf-lW_mPLBZXUT_737EvSnr8y52VUcnk_Odg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/QaGEWAoaN73yKpJcRFLASUVmy5TY0ehTzhGZuFAVhPY.jpg?width=108&crop=smart&auto=webp&s=97860c4bf27c7e1bbb0cd3f81f7861c5c620eec8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/QaGEWAoaN73yKpJcRFLASUVmy5TY0ehTzhGZuFAVhPY.jpg?width=216&crop=smart&auto=webp&s=54c7aa016f80a3c45b6da5d16b75cf3ffdda47ee', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/QaGEWAoaN73yKpJcRFLASUVmy5TY0ehTzhGZuFAVhPY.jpg?width=320&crop=smart&auto=webp&s=85893248314c08652da5b29c222e750d65c2621c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/QaGEWAoaN73yKpJcRFLASUVmy5TY0ehTzhGZuFAVhPY.jpg?width=640&crop=smart&auto=webp&s=05b9252fa9a2aa815d8f1c4c41bc8b680d1e4628', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/QaGEWAoaN73yKpJcRFLASUVmy5TY0ehTzhGZuFAVhPY.jpg?width=960&crop=smart&auto=webp&s=36c2c946045bec7247cd3ec43733779f136981c3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/QaGEWAoaN73yKpJcRFLASUVmy5TY0ehTzhGZuFAVhPY.jpg?width=1080&crop=smart&auto=webp&s=b294a5831cb274d93a5fc636eae98d546add358c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/QaGEWAoaN73yKpJcRFLASUVmy5TY0ehTzhGZuFAVhPY.jpg?auto=webp&s=cc95eaf0a467b06b3448fab01067ff4f5cdb6b52', 'width': 1920}, 'variants': {}}]} |
Hard integral problem | 1 | [removed] | 2025-01-23T15:57:07 | https://www.reddit.com/r/LocalLLaMA/comments/1i86joq/hard_integral_problem/ | PsychologicalKnee562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i86joq | false | null | t3_1i86joq | /r/LocalLLaMA/comments/1i86joq/hard_integral_problem/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '24VKdL_WzUG7N1Ti5GjlruUpfCsQfdr8eN05WjGS4kY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=108&crop=smart&auto=webp&s=fcc5696614a4a7fbcb0e8d41870c6f131f6ba9f5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=216&crop=smart&auto=webp&s=15e826d21ecbe21556f362063b18678e271daab4', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?auto=webp&s=65a1fa0d343cd544754bf78a93c832f7a9bb9e9b', 'width': 316}, 'variants': {}}]} |
First Nvidia RTX 5090 review on AI | 16 | Numbers look good
[https://www.storagereview.com/review/nvidia-geforce-rtx-5090-review-pushing-boundaries-with-ai-acceleration](https://www.storagereview.com/review/nvidia-geforce-rtx-5090-review-pushing-boundaries-with-ai-acceleration)
| 2025-01-23T16:01:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i86nnf/first_nvidia_rtx_5090_review_on_ai/ | rajiv67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i86nnf | false | null | t3_1i86nnf | /r/LocalLLaMA/comments/1i86nnf/first_nvidia_rtx_5090_review_on_ai/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'cE9F3V0sZMIM38OKB_0p9WGeHg3zezH3yozi78nqQkE', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=108&crop=smart&auto=webp&s=2607ead8e8b60150f895a02e59cadff9072a1912', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=216&crop=smart&auto=webp&s=edd052b230429cfa520548517760c1c85e04eabb', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=320&crop=smart&auto=webp&s=e988922b06c3d5dbd9e29ebabdbe3edad4a5006f', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=640&crop=smart&auto=webp&s=0a5868e7a86ac26107edab60570277f4095ed053', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=960&crop=smart&auto=webp&s=d6c5c030d72d5b6dd998ac5c88b8aa932d620df1', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?width=1080&crop=smart&auto=webp&s=6ebaf3cb9b47b3171d965be4958f5ae3b01db4d3', 'width': 1080}], 'source': {'height': 1201, 'url': 'https://external-preview.redd.it/PnzUTHeUQDah3Madq3JF5tCDBdEWLySpwcBRwh4t1-o.jpg?auto=webp&s=643d13a1263b329317236708d8b4ec436a54b535', 'width': 2000}, 'variants': {}}]} |
Best 8-12b model | 7 | Which is the current best model for reasoning and logic? I did a few simple tests, and they all fail a lot.
All are q4-q5 versions:
Llama3.1-8B
Gemma2-9B
Finbulvetr-11B
Nemo-12B
Nemomix-unleashed-12B
| 2025-01-23T16:07:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i86sex/best_812b_model/ | medgel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i86sex | false | null | t3_1i86sex | /r/LocalLLaMA/comments/1i86sex/best_812b_model/ | false | false | self | 7 | null |
Why don't more LLMs work well with Cline? | 9 | I've been using Cline a lot lately, and it's very surprising how most models simply don't follow instructions well. A lot of the time Cline gets stuck, whether because the LLM isn't outputting the correct tags or something else; even when some LLMs see what the problem is, they still don't correct themselves. Claude works pretty well, but even it has issues every now and then. I'm just curious: how many people use Cline with a non-Claude LLM without frequently hitting errors from instructions not being followed? If so, what models are y'all using? I've tried both of the newest Deepseek models and a lot of open models, and the only other one I've had decent success with is Llama 3.3 70b. I've been working on my own version of Cline recently to try to make the instructions more universally understood, which I've had some success with, but I want to see if anyone knows of a better path to go down. Thank you!! | 2025-01-23T16:09:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i86ugr/why_dont_more_llms_work_well_with_cline/ | Sellitus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i86ugr | false | null | t3_1i86ugr | /r/LocalLLaMA/comments/1i86ugr/why_dont_more_llms_work_well_with_cline/ | false | false | self | 9 | null |
How deepseek-r1 responds to "Who was responsible for the Tiananmen Square massacre?" in English and Chinese. | 1 | [removed] | 2025-01-23T16:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i86wqc/how_deepseekr1_responds_to_who_was_responsible/ | 4in4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i86wqc | false | null | t3_1i86wqc | /r/LocalLLaMA/comments/1i86wqc/how_deepseekr1_responds_to_who_was_responsible/ | false | false | self | 1 | null |
The R1 Distillation you want is FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview | 109 | I made an exl2 4.25 BPW quantization of FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview, and it functions the way I was expecting DeepSeek-R1-Distill-Qwen-32B to. It does not degrade in multi-turn performance, its instruction following is superior, and the writing results were more closely in line with R1.
[HF Link](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview)
I know people said this late on Monday already, but it took me until now to get it and test it, so I figured that others may still be struggling with DeepSeek-R1-Distill-Qwen-32B. I, personally, believe it's the new SOTA you were probably expecting. | 2025-01-23T16:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i87826/the_r1_distillation_you_want_is/ | TheActualStudy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i87826 | false | null | t3_1i87826 | /r/LocalLLaMA/comments/1i87826/the_r1_distillation_you_want_is/ | false | false | self | 109 | {'enabled': False, 'images': [{'id': 'R7FmvGUv0IuP2lOvwe4jYIRxvhzDD4fWEk8D8-E6wjE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=108&crop=smart&auto=webp&s=f4ce6ff91f28493117cd5b048bc6216a09220bef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=216&crop=smart&auto=webp&s=71086492e0bd267427d43b5860c6be4419149e93', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=320&crop=smart&auto=webp&s=760ce35daa5baabd11bed7c6c2cc53fd92c4512b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=640&crop=smart&auto=webp&s=a70e3333a94a5c2456a7acc9926686ecd7ac5c9c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=960&crop=smart&auto=webp&s=426d85b294dc069a37783d91f3f4591982523159', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=1080&crop=smart&auto=webp&s=b36973a03bf52079eebc34748f4189c4e0976c8e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?auto=webp&s=5e0646c1f7e4cab74e200abeee3f00dbc2c33a9d', 'width': 1200}, 'variants': {}}]} |
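A quick way to reproduce the multi-turn test above: below is a minimal sketch, assuming the exl2 quant is served behind a local OpenAI-compatible endpoint (e.g., TabbyAPI or text-generation-webui). The port, API key, and served model name are placeholders, not details from the post.

```python
# Minimal sketch: query a locally served exl2 quant through an
# OpenAI-compatible endpoint. Adjust base_url and model for your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="none")

resp = client.chat.completions.create(
    model="FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview",  # hypothetical served name
    messages=[
        {"role": "user", "content": "Walk me through your reasoning: is 3821 prime?"},
    ],
    temperature=0.6,  # a common setting for R1-style reasoning models
    max_tokens=2048,
)
print(resp.choices[0].message.content)
```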
SmolVLM goes even smaller, running on your toaster 🥵🔥 Today Hugging Face release newer SmolVLMs: 256M and 500M | 44 | 2025-01-23T16:29:43 | https://v.redd.it/6d3hweutsree1 | Lynncc6 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i87boz | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/6d3hweutsree1/DASHPlaylist.mpd?a=1740241798%2CZTlhYjZmZDFmMDAzMTI2N2QzYjA1NTI0M2E5YWQyZDYwNjVjZWZhZTI3YjZhNzI2OWVhYjU3MmY2NTRmNjQ3Yw%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/6d3hweutsree1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/6d3hweutsree1/HLSPlaylist.m3u8?a=1740241798%2CMmFiODM0ZTMwNjlmZWY5Mjg3YmZlYzBlYzE0ZTg0MDg3OGM4NDFkYTUwOTZjNjdiZjEyN2YxMjY0OTlhYmZkZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6d3hweutsree1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1140}} | t3_1i87boz | /r/LocalLLaMA/comments/1i87boz/smolvlm_goes_even_smaller_running_on_your_toaster/ | false | false | 44 | {'enabled': False, 'images': [{'id': 'ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l.png?width=108&crop=smart&format=pjpg&auto=webp&s=605c82208c1acf1009a1320fd146de8cb695e81d', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l.png?width=216&crop=smart&format=pjpg&auto=webp&s=f537837faa5bac9a33486b4a215d7fa6168d036f', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l.png?width=320&crop=smart&format=pjpg&auto=webp&s=8ae786e9a91fa98389684a61f9edab1feeb917ae', 'width': 320}, {'height': 404, 'url': 'https://external-preview.redd.it/ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l.png?width=640&crop=smart&format=pjpg&auto=webp&s=f27063f7f85d481571dbfa74faf296a71811b93e', 'width': 640}, {'height': 606, 'url': 'https://external-preview.redd.it/ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l.png?width=960&crop=smart&format=pjpg&auto=webp&s=6361b339847ff9f23b37132728991f8ba3bb6dc1', 'width': 960}, {'height': 682, 'url': 'https://external-preview.redd.it/ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l.png?width=1080&crop=smart&format=pjpg&auto=webp&s=237619c8a61f6c2500e473c514b1a93402a7832b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZWkxc3lkdXRzcmVlMeOQ6gi0MASgTgbv3hzqEbPqGdFSlYbcrJRLih99qg-l.png?format=pjpg&auto=webp&s=9d6041730d9a7abd1a01f246af7b9c142afda60d', 'width': 1140}, 'variants': {}}]} |
Hard integral problem | 3 | There is a quite famous integral on Math StackExchange, known mostly for Cleo's answer (https://math.stackexchange.com/questions/562694/integral-int-11-frac1x-sqrt-frac1x1-x-ln-left-frac2-x22-x1/563063#563063). I've tried asking various LLMs, both open-source (such as the full R1 model and its distillations) and closed-source ones (o1-mini, the new Gemini 2.0 Thinking, 3.5 Sonnet). None gave the correct answer, and none seems to recognize the problem. I did no prompt engineering (the prompt was basically: `integral from -1 to 1 of ((1/x)*sqrt((1+x)/(1-x))*ln((2x^2+2x+1)/(2x^2-2x+1))) dx`) and gave no follow-ups or hints. The closest in terms of accuracy was Gemini 2.0 Thinking with code execution turned on: it was able to run code to approximate the integral numerically and then matched the approximation to a symbolic expression, which was, however, completely incorrect. | 2025-01-23T16:31:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i87d7p/hard_integral_problem/ | PsychologicalKnee562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i87d7p | false | null | t3_1i87d7p | /r/LocalLLaMA/comments/1i87d7p/hard_integral_problem/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '24VKdL_WzUG7N1Ti5GjlruUpfCsQfdr8eN05WjGS4kY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=108&crop=smart&auto=webp&s=fcc5696614a4a7fbcb0e8d41870c6f131f6ba9f5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=216&crop=smart&auto=webp&s=15e826d21ecbe21556f362063b18678e271daab4', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?auto=webp&s=65a1fa0d343cd544754bf78a93c832f7a9bb9e9b', 'width': 316}, 'variants': {}}]} |
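For reference, the integral from the prompt above rendered in LaTeX, together with the closed form given in Cleo's linked answer (φ denotes the golden ratio):

```latex
\int_{-1}^{1} \frac{1}{x}\,\sqrt{\frac{1+x}{1-x}}\,
\ln\!\left(\frac{2x^{2}+2x+1}{2x^{2}-2x+1}\right)\mathrm{d}x
\;=\; 4\pi \operatorname{arccot}\sqrt{\varphi}
```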
Best / most realistic REAL Time TTS in 2025? | 1 | With at most 5 seconds execution time | 2025-01-23T16:33:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i87esa/best_most_realistic_real_time_tts_in_2025/ | serendipity98765 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i87esa | false | null | t3_1i87esa | /r/LocalLLaMA/comments/1i87esa/best_most_realistic_real_time_tts_in_2025/ | false | false | self | 1 | null |
Deepseek R1 is the only one that nails this new viral benchmark | 400 | 2025-01-23T16:34:21 | https://v.redd.it/4skrezsntree1 | Charuru | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i87fkl | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/4skrezsntree1/DASHPlaylist.mpd?a=1740242077%2CMjMyNTEwNTc4NDMxNDg2YTFkZjYzODU4MTNlZjVkOTM5ZmNjMDUyZjhhZDAwNTkzNzhjYWUwOTY1NjdiNWUyOQ%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/4skrezsntree1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/4skrezsntree1/HLSPlaylist.m3u8?a=1740242077%2CZmQ5YTZmMGI1MTcwN2Y1ZTUwYjRiYTI4ZTNjMWQwZWE1N2RlMWE1YjE2YjRjOGJmMzg4YjdhYzBmYzg4MWM0Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4skrezsntree1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 960}} | t3_1i87fkl | /r/LocalLLaMA/comments/1i87fkl/deepseek_r1_is_the_only_one_that_nails_this_new/ | false | false | 400 | {'enabled': False, 'images': [{'id': 'dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O.png?width=108&crop=smart&format=pjpg&auto=webp&s=2406acaa49359dd82370ee9bd963a45adf80c633', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O.png?width=216&crop=smart&format=pjpg&auto=webp&s=bab676e796685dfc4043904fdad770f071e2a67f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O.png?width=320&crop=smart&format=pjpg&auto=webp&s=3f29141cdaf48cb140636e7c020f7a72846a95e9', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O.png?width=640&crop=smart&format=pjpg&auto=webp&s=cf9ee436f27abe4513380882a56b55dedaedf0f4', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O.png?width=960&crop=smart&format=pjpg&auto=webp&s=250fbd435b0be86a47aa496fa384bf650cd78022', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d679b74121279cb6b072ac2f87372c0ccd40e7e9', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/dTNsOXYwcnJ0cmVlMQcGL6cDuoI_ROA8VT0SlOGuG2iHRRkQmxqkRS_k8D6O.png?format=pjpg&auto=webp&s=62ca5d70d9a6775e629f4b985ee354ddc2f8c679', 'width': 1280}, 'variants': {}}]} |
New PC Build - AI and Gaming Combo - Is stacking RAM worth it? | 3 | Not sure if this is the place to post this; I posted in PcBuild and was recommended to try here as well!
Looking at building a new PC for both AI and gaming. I'm going for a 5090 for the 32GB of VRAM and, obviously, the gaming performance, but I'm wondering about the RAM. From the AI perspective, is there a benefit to having a lot of system RAM? I was thinking of using it not only for running a bunch of AI services in containers but also for potentially offloading some models to it. Is it worth the money from a performance perspective, though? Obviously RAM isn't cheap when you're talking 96GB or more, so I'm trying to figure out if it's worth doing.
Looking at 2x48GB DDR5 6000 CL30 but I'm wondering if that's a waste of money really.
Here's the vague build I'm looking at.
[https://uk.pcpartpicker.com/list/7DJx2x](https://uk.pcpartpicker.com/list/7DJx2x)
Any help or advice would be amazing! | 2025-01-23T16:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1i87hlz/new_pc_build_ai_and_gaming_combo_is_stacking_ram/ | Fringolicious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i87hlz | false | null | t3_1i87hlz | /r/LocalLLaMA/comments/1i87hlz/new_pc_build_ai_and_gaming_combo_is_stacking_ram/ | false | false | self | 3 | null |
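On the offloading question above: weight memory is roughly parameters × bits-per-weight ÷ 8, so a quick sketch shows how much of a given model would spill from a 32GB card into system RAM. The bits-per-weight figures are approximations, and real usage adds KV cache and runtime overhead.

```python
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory in GB for params_b billion parameters."""
    return params_b * bits_per_weight / 8

VRAM_GB = 32  # RTX 5090

for name, params_b, bpw in [
    ("32B @ ~4.8 bpw (Q4_K_M-ish)", 32, 4.8),
    ("70B @ ~4.8 bpw (Q4_K_M-ish)", 70, 4.8),
    ("70B @ ~8.5 bpw (Q8_0-ish)", 70, 8.5),
]:
    gb = weights_gb(params_b, bpw)
    spill = max(0.0, gb - VRAM_GB)  # what would not fit on the GPU
    print(f"{name}: ~{gb:.0f} GB weights, ~{spill:.0f} GB offloaded to system RAM")
```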
Introducing ElizaTown: Your Hub for Sharing ElizaOS Characters | 0 | We’re excited to announce **ElizaTown**, an open-source platform for sharing and discovering characters created for **ElizaOS**. If you’re an ElizaOS creator or user, this platform is for you!
# What ElizaTown Offers:
* 🖼️ Upload and showcase your ElizaOS character files, complete with images.
* 🔍 Discover new characters shared by the community.
* ❤️ Track likes and downloads to see what’s popular.
* 📤 Easily share your creations with others.
# Explore the Project:
* **Live Platform**: [https://eliza.town/](https://eliza.town/)
* **GitHub Repository**: [https://github.com/ShadovvBeast/ElizaTown](https://github.com/ShadovvBeast/ElizaTown)
# Open-Source and Contributions:
ElizaTown is licensed under **MIT**, so it’s free to use and modify. We welcome contributions from the community! If you have ideas for features, fixes, or improvements, feel free to submit a pull request.
Let’s build an amazing hub for the ElizaOS community together. We’d love your feedback and participation! 🚀 | 2025-01-23T16:57:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i87zsf/introducing_elizatown_your_hub_for_sharing/ | ShadovvBeast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i87zsf | false | null | t3_1i87zsf | /r/LocalLLaMA/comments/1i87zsf/introducing_elizatown_your_hub_for_sharing/ | false | false | self | 0 | null |
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning | 24 | 2025-01-23T16:59:18 | https://arxiv.org/abs/2501.12948 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1i8814w | false | null | t3_1i8814w | /r/LocalLLaMA/comments/1i8814w/deepseekr1_incentivizing_reasoning_capability_in/ | false | false | default | 24 | null |
Is it possible to run an acceptable local LLM with my hardware, or even with what's currently available on the market? | 1 | [removed] | 2025-01-23T16:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i8819w/is_it_possible_to_run_an_acceptable_local_llm/ | AlternateWitness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8819w | false | null | t3_1i8819w | /r/LocalLLaMA/comments/1i8819w/is_it_possible_to_run_an_acceptable_local_llm/ | false | false | self | 1 | null |
Facebook's Coconut: Training Large Language Model to Reason in a Continuous Latent Space has been open-sourced | 97 | 2025-01-23T17:05:54 | https://github.com/facebookresearch/coconut | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i887aa | false | null | t3_1i887aa | /r/LocalLLaMA/comments/1i887aa/facebooks_coconut_training_large_language_model/ | false | false | 97 | {'enabled': False, 'images': [{'id': 'hPsPtz2jXFOmQ1HBZjQpri6T2lxaAARKPU6wigx4ru0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/I9BvV7l8sFZkcQHfse_ouglW3Xd01DaryGObNbw54ZQ.jpg?width=108&crop=smart&auto=webp&s=c54af5114c6293bb9f6a06ea6581b45f46cc7484', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/I9BvV7l8sFZkcQHfse_ouglW3Xd01DaryGObNbw54ZQ.jpg?width=216&crop=smart&auto=webp&s=8454954a48eb8484ad726bbb73a4614eee13e756', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/I9BvV7l8sFZkcQHfse_ouglW3Xd01DaryGObNbw54ZQ.jpg?width=320&crop=smart&auto=webp&s=79ddbbc19a3fec1575eee2c37f415c2a44108fba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/I9BvV7l8sFZkcQHfse_ouglW3Xd01DaryGObNbw54ZQ.jpg?width=640&crop=smart&auto=webp&s=122bc0d4722c551cb2b629c373210ddafb6f6f92', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/I9BvV7l8sFZkcQHfse_ouglW3Xd01DaryGObNbw54ZQ.jpg?width=960&crop=smart&auto=webp&s=0c02c2f4c3af55c7f82736723989c673676bbe3a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/I9BvV7l8sFZkcQHfse_ouglW3Xd01DaryGObNbw54ZQ.jpg?width=1080&crop=smart&auto=webp&s=9a780cd235cc6665aed787c3dc5d45a6066b37ac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/I9BvV7l8sFZkcQHfse_ouglW3Xd01DaryGObNbw54ZQ.jpg?auto=webp&s=82f40ef565fc27dc6569db21a5493b58c25f7f2d', 'width': 1200}, 'variants': {}}]} |
Meta panicked by Deepseek | 1 | 2025-01-23T17:13:09 | nknnr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i88dny | false | null | t3_1i88dny | /r/LocalLLaMA/comments/1i88dny/meta_panicked_by_deepseek/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'o6mKFCeOKjaLkqNDbvTbIydZoEGkazHyjo1nVmMMMhc', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/ytgpvayl0see1.png?width=108&crop=smart&auto=webp&s=8fc99964c2e28bfcdb1be5c1d76dec0d477824c2', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/ytgpvayl0see1.png?width=216&crop=smart&auto=webp&s=66c2ad193359d2fd06f4c85e00c9ae3032c60f14', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/ytgpvayl0see1.png?width=320&crop=smart&auto=webp&s=7f9bdea988c8d57a7ecbd0979208c0e1be95c2e1', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/ytgpvayl0see1.png?width=640&crop=smart&auto=webp&s=267132650fc9e1aae38e0372462b95d05f54d1af', 'width': 640}], 'source': {'height': 686, 'url': 'https://preview.redd.it/ytgpvayl0see1.png?auto=webp&s=6c444db9bd1ae983c62a2e92f2b5360513bc97d7', 'width': 850}, 'variants': {}}]} |
Affordable local AI coding assistant | 5 | Hi, I'm looking for a local AI coding assistant. How would I set that up, and what kind of resources would I need? I'm looking for something affordable if possible.
Thanks. | 2025-01-23T17:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i88f46/affordable_local_ai_coding_assistant/ | 0b3e02d6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i88f46 | false | null | t3_1i88f46 | /r/LocalLLaMA/comments/1i88f46/affordable_local_ai_coding_assistant/ | false | false | self | 5 | null |
Meta panicked by Deepseek | 2,169 | 2025-01-23T17:15:55 | Optimal_Hamster5789 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i88g4y | false | null | t3_1i88g4y | /r/LocalLLaMA/comments/1i88g4y/meta_panicked_by_deepseek/ | false | false | 2,169 | {'enabled': True, 'images': [{'id': 'CcbrP_BNvbHtEypHh_cMwbk0ippdqIRosB9UNbs9Clk', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/ek65oz361see1.png?width=108&crop=smart&auto=webp&s=1ee979dd7bc224b1d7973f400f7d94a5082ccfec', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/ek65oz361see1.png?width=216&crop=smart&auto=webp&s=6ae487cf033ec8e7b9bcc979f3716a699914d284', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/ek65oz361see1.png?width=320&crop=smart&auto=webp&s=b32cff7fa23bd25820be0229b9fceae792cbb012', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/ek65oz361see1.png?width=640&crop=smart&auto=webp&s=fd236f1570226e841c54a41cd8f2a2e7c6328a8c', 'width': 640}], 'source': {'height': 686, 'url': 'https://preview.redd.it/ek65oz361see1.png?auto=webp&s=57290fd4fdc714bca041b7d63b290a5a6efff1ac', 'width': 850}, 'variants': {}}]} |
I created a Streamlit tool for interacting with Ollama models so I can build my own model interface | 1 | Hi y'all, first time caller long time listener - just thought I'd share my Streamlit-based LLM tool that I've been working on, the very catchy name for it is [Streamlit LLM Interface](https://github.com/jmemcc/streamlit-llm-interface).
I find myself building a lot of tools in Streamlit; they are simple to make and a pretty great way to mock up ideas or improve tooling for non-engineering team members. So I made this easy-to-use Ollama interface with the goal of iterating on it and adding features myself, rather than using OpenWebUI and others. I also realised there are likely a few others out there using Streamlit day to day like this, so I thought I'd release it to maybe help someone else find a base to start their tool from :)
It supports custom system prompts (or 'roles') which you can input directly or add to a file for longer storage, exports of JSON chat history details, and also has support for the OpenAI API too.
I'm going to add support for more inference tools like vLLM and llama.cpp, as well as cloud GPU connections, reasoning model support (thinking output is shown like a normal message currently), vision model support, and RAG functionality in time, but I thought I'd just make it public now so I feel more compelled to do all that.
Let me know what you think and if you have suggestions! | 2025-01-23T17:18:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i88i3r/i_created_a_streamlit_tool_for_interacting_with/ | SnooHesitations1871 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i88i3r | false | null | t3_1i88i3r | /r/LocalLLaMA/comments/1i88i3r/i_created_a_streamlit_tool_for_interacting_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Hp87AX_eMA59aBYpewjNThYpB9uKTSiOX7VQKk8Pkq8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RA0Ob2Y5x2S4KXVW-VC-atlxfGiMFR7_AwvKxE9xO6M.jpg?width=108&crop=smart&auto=webp&s=65dd65959017cba1425f80c061b46aaffc2392ca', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RA0Ob2Y5x2S4KXVW-VC-atlxfGiMFR7_AwvKxE9xO6M.jpg?width=216&crop=smart&auto=webp&s=605413cdf2bbe28e77669ed2b3cce78a2457e4a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RA0Ob2Y5x2S4KXVW-VC-atlxfGiMFR7_AwvKxE9xO6M.jpg?width=320&crop=smart&auto=webp&s=9d82adaf02059f8ebf619e5c6333b459e268deb8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RA0Ob2Y5x2S4KXVW-VC-atlxfGiMFR7_AwvKxE9xO6M.jpg?width=640&crop=smart&auto=webp&s=a42055ab97a01456b623c199f18cd6a7ee1eed6b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RA0Ob2Y5x2S4KXVW-VC-atlxfGiMFR7_AwvKxE9xO6M.jpg?width=960&crop=smart&auto=webp&s=0dc0be622beb83f8ddcb4f7aa86b414aeef44030', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RA0Ob2Y5x2S4KXVW-VC-atlxfGiMFR7_AwvKxE9xO6M.jpg?width=1080&crop=smart&auto=webp&s=d55dba0e78497d51406ff7e173a77eb470852d03', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RA0Ob2Y5x2S4KXVW-VC-atlxfGiMFR7_AwvKxE9xO6M.jpg?auto=webp&s=97dda15dc155552604e872271823658e8fd93c85', 'width': 1200}, 'variants': {}}]} |
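For anyone who would rather start from a blank file than clone the repo, a Streamlit chat loop over Ollama fits in a few dozen lines. This is a minimal sketch in the spirit of the project, not its actual code; it assumes `pip install streamlit ollama`, a running Ollama server, and an example model name.

```python
import ollama
import streamlit as st

st.title("Local LLM Chat")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask something..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        # Stream tokens from Ollama as they arrive.
        stream = ollama.chat(
            model="llama3.1:8b",  # example model name
            messages=st.session_state.messages,
            stream=True,
        )
        reply = st.write_stream(chunk["message"]["content"] for chunk in stream)
    st.session_state.messages.append({"role": "assistant", "content": reply})
```

Saved as, say, `app.py`, it runs with `streamlit run app.py`.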
Can't Find a Local ChatGPT Killer for Unity Coding | 1 | I am frustrated by ChatGPT. Specifically, I am frustrated that I can't find anything better than it. Before you grab the pitchforks, allow me to explain. I use ChatGPT for coding for the Unity engine: I give it simple, few-sentence prompts and hit enter. Sometimes I need to refine things, but 90% of the time it is perfect. In 20 seconds I have code that is ready to go as-is. Even if I need modifications, it understands my requirements and the Unity engine in a way that is both amazing and strange. For the record: I don't like this. I don't want to depend on ChatGPT. Any day they could pull the plug or raise prices to a ridiculous degree, and I couldn't do anything about it. I want a local or at least open-source alternative, something I can be damn sure will still be there years from now. I have tried SO MANY alternatives: Qwen 2.5 Coder 32b, Nemotron, Mistral Large, Llama 3.3, QwQ 32b, Sky-T1, Phi4, Gemma 27b, and of course the big boys: Gemini 1206, Gemini 2.0 Flash, and the new golden boy, Deepseek R1 and V3. Don't get me wrong, I am not saying none of them could code for Unity, but none of them came anywhere close to the ease and near-perfect first-try code that ChatGPT gives me. If anyone has experience with this, coding for Unity, please let me know. Maybe I am missing something, maybe I am prompting wrong, no idea. I just want something that works at least close to what ChatGPT can do. And yes: I am familiar with loads of LLM leaderboards like Aider, LMSYS Arena, and ProLLM Toqan; my real-world experience unfortunately doesn't match the numbers I see there... I wish it did. | 2025-01-23T17:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i88ia0/cant_find_a_local_chatgpt_killer_for_unity_coding/ | reactorx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i88ia0 | false | null | t3_1i88ia0 | /r/LocalLLaMA/comments/1i88ia0/cant_find_a_local_chatgpt_killer_for_unity_coding/ | false | false | self | 1 | null |
Production grade LLM Orchestration frameworks | 1 | [removed] | 2025-01-23T17:32:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i88uxg/production_grade_llm_orchestration_frameworks/ | awesum_11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i88uxg | false | null | t3_1i88uxg | /r/LocalLLaMA/comments/1i88uxg/production_grade_llm_orchestration_frameworks/ | false | false | self | 1 | null |
I think it's forced. DeepSeek did its best... | 1,154 | 2025-01-23T17:49:34 | Alexs1200AD | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i8996r | false | null | t3_1i8996r | /r/LocalLLaMA/comments/1i8996r/i_think_its_forced_deepseek_did_its_best/ | false | false | 1,154 | {'enabled': True, 'images': [{'id': 'oGmHxjbqTKa47KH2Ns9AY0kAy0Q_fv5XeKQPUAt2sqA', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/b3n1jpj17see1.jpeg?width=108&crop=smart&auto=webp&s=e3f7900f37e84bc9e451a7a85328bea2884b5822', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/b3n1jpj17see1.jpeg?width=216&crop=smart&auto=webp&s=1730cf3254e65d566c1e43c8675432ef7e553944', 'width': 216}, {'height': 92, 'url': 'https://preview.redd.it/b3n1jpj17see1.jpeg?width=320&crop=smart&auto=webp&s=720dca30fbcb234ff2bff17e4e8d5df09158f9a8', 'width': 320}, {'height': 185, 'url': 'https://preview.redd.it/b3n1jpj17see1.jpeg?width=640&crop=smart&auto=webp&s=55a5ea362bf2cb802996106f2fc698c1f579cfff', 'width': 640}, {'height': 278, 'url': 'https://preview.redd.it/b3n1jpj17see1.jpeg?width=960&crop=smart&auto=webp&s=433db0141c0fac9750006c8fecbcd2824a95cbac', 'width': 960}, {'height': 312, 'url': 'https://preview.redd.it/b3n1jpj17see1.jpeg?width=1080&crop=smart&auto=webp&s=a0b0f27978b10df0d394491d9b6705e78555f329', 'width': 1080}], 'source': {'height': 314, 'url': 'https://preview.redd.it/b3n1jpj17see1.jpeg?auto=webp&s=edf44a2ed69616b551d086103ba2b520145a6303', 'width': 1084}, 'variants': {}}]} |
What the hell is wrong with this sub | 1 | [removed] | 2025-01-23T17:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i89clk/what_the_hell_is_wrong_with_this_sub/ | No_Pilot_1974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i89clk | false | null | t3_1i89clk | /r/LocalLLaMA/comments/1i89clk/what_the_hell_is_wrong_with_this_sub/ | false | false | self | 1 | null |
What the hell is wrong with this sub | 1 | [removed] | 2025-01-23T17:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i89eo0/what_the_hell_is_wrong_with_this_sub/ | No_Pilot_1974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i89eo0 | false | null | t3_1i89eo0 | /r/LocalLLaMA/comments/1i89eo0/what_the_hell_is_wrong_with_this_sub/ | false | false | self | 1 | null |
What the hell is wrong with this sub | 1 | [removed] | 2025-01-23T17:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i89i81/what_the_hell_is_wrong_with_this_sub/ | No_Pilot_1974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i89i81 | false | null | t3_1i89i81 | /r/LocalLLaMA/comments/1i89i81/what_the_hell_is_wrong_with_this_sub/ | false | false | self | 1 | null |
How LLMs Achieved 85% Human Accuracy in Social Surveys and What This Means for the Future of AI | 1 | [removed] | 2025-01-23T18:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i89nc7/how_llms_achieved_85_human_accuracy_in_social/ | Nervous-Midnight-175 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i89nc7 | false | null | t3_1i89nc7 | /r/LocalLLaMA/comments/1i89nc7/how_llms_achieved_85_human_accuracy_in_social/ | false | false | self | 1 | null |
How LLMs Achieved 85% Human Accuracy in Social Surveys and What This Means for the Future of AI | 1 | [removed] | 2025-01-23T18:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i89p0r/how_llms_achieved_85_human_accuracy_in_social/ | Nervous-Midnight-175 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i89p0r | false | null | t3_1i89p0r | /r/LocalLLaMA/comments/1i89p0r/how_llms_achieved_85_human_accuracy_in_social/ | false | false | self | 1 | null |
I have done a Learning Assistant with LLM, Please give your feedback | 1 | [removed] | 2025-01-23T18:08:18 | https://github.com/Raviteja-5976/Learning-Helper-Assistant | Raviteja-5312 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i89pum | false | null | t3_1i89pum | /r/LocalLLaMA/comments/1i89pum/i_have_done_a_learning_assistant_with_llm_please/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'EbfT-bJwvZ1B5WB3n1Q85GSPITFucclR5n5OPRd8i_k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ALIvavWh7ARSv5_ZGT75cwU6UFGw-vO79k5Ooo_VfnA.jpg?width=108&crop=smart&auto=webp&s=646b88ee6fe4cc794b26a9ff0484bb0a6ee710a7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ALIvavWh7ARSv5_ZGT75cwU6UFGw-vO79k5Ooo_VfnA.jpg?width=216&crop=smart&auto=webp&s=6aa2731b8b0be3965d560ba3c17ac12e9594a0ca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ALIvavWh7ARSv5_ZGT75cwU6UFGw-vO79k5Ooo_VfnA.jpg?width=320&crop=smart&auto=webp&s=f883fddd71804a70754fce8a25a4c707d9bb1d00', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ALIvavWh7ARSv5_ZGT75cwU6UFGw-vO79k5Ooo_VfnA.jpg?width=640&crop=smart&auto=webp&s=13a521eed7529dc02b3ab8d329ece63e91ece6b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ALIvavWh7ARSv5_ZGT75cwU6UFGw-vO79k5Ooo_VfnA.jpg?width=960&crop=smart&auto=webp&s=464b830dbdd3d063b6c5c7163216b893cf484b52', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ALIvavWh7ARSv5_ZGT75cwU6UFGw-vO79k5Ooo_VfnA.jpg?width=1080&crop=smart&auto=webp&s=4c524beab361327bfe74e759f8cea4c7e9ec68f6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ALIvavWh7ARSv5_ZGT75cwU6UFGw-vO79k5Ooo_VfnA.jpg?auto=webp&s=220a5a96a8efa8617992d949ccb8251176581964', 'width': 1200}, 'variants': {}}]} |
I'm posting this as an image because automod keeps removing my post | 1 | 2025-01-23T18:08:20 | No_Pilot_1974 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i89pw3 | false | null | t3_1i89pw3 | /r/LocalLLaMA/comments/1i89pw3/im_posting_this_as_an_image_because_automod_keeps/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'PbZigSMuoQdpGJ1eF5yxyA3hQ01VG9a-bKuSM-uoYBs', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/knwsuh9hasee1.png?width=108&crop=smart&auto=webp&s=6634fec08e44d2f95473c9555c2c8e32eb7a6722', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/knwsuh9hasee1.png?width=216&crop=smart&auto=webp&s=5f174fa0602bd9c28fca5a933aee259119f0e82c', 'width': 216}, {'height': 319, 'url': 'https://preview.redd.it/knwsuh9hasee1.png?width=320&crop=smart&auto=webp&s=d5ad9f7aed2fb3d60400daa35fd19e71d3f5f812', 'width': 320}, {'height': 638, 'url': 'https://preview.redd.it/knwsuh9hasee1.png?width=640&crop=smart&auto=webp&s=8ccd175edd200e136fa542f824f4e682fd8fe38b', 'width': 640}], 'source': {'height': 646, 'url': 'https://preview.redd.it/knwsuh9hasee1.png?auto=webp&s=211d778f2599e2bc023590227b2b9a3b9ea152c3', 'width': 648}, 'variants': {}}]} |
Post title | 0 | 2025-01-23T18:09:05 | No_Pilot_1974 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i89qke | false | null | t3_1i89qke | /r/LocalLLaMA/comments/1i89qke/post_title/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ThXwfgwEfW4ALVU5nQ5U0E7ft0HZwVXan_ae6REV-fE', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/o6fcm2hnasee1.png?width=108&crop=smart&auto=webp&s=c481edc2b338e923ca1006c6f87d5f993e209bdf', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/o6fcm2hnasee1.png?width=216&crop=smart&auto=webp&s=c3f652a7e6aa79056685526a804c856ac5456963', 'width': 216}, {'height': 319, 'url': 'https://preview.redd.it/o6fcm2hnasee1.png?width=320&crop=smart&auto=webp&s=0d5dba4c92f4ce537890da2d91e67c0df4cdb3f6', 'width': 320}, {'height': 638, 'url': 'https://preview.redd.it/o6fcm2hnasee1.png?width=640&crop=smart&auto=webp&s=f518f0157144527da70b122a5f61b2afb872cc33', 'width': 640}], 'source': {'height': 646, 'url': 'https://preview.redd.it/o6fcm2hnasee1.png?auto=webp&s=14b21f83aec06cae571fafe4b9a259ccaaf0bdbe', 'width': 648}, 'variants': {}}]} |
Is an 8 Trillion parameter MoE with 7B active parameters cheaper to train than a 400B dense model? | 9 | My understanding is that MoE models are trained as if they were smaller dense models in terms of compute (FLOPs), but they require more memory to store all parameters.
This model, with 7B active parameters, would have the training compute cost of a 7B dense model but (theoretically) match the performance of a 236B dense model.
There is a large 456B model called [Snowflake-Arctic](https://huggingface.co/Snowflake/snowflake-arctic-instruct) with <7B active parameters (excluding the dense model component), trained for about $2 million. Asking DeepSeek-R1 for an educated guess for a 400B dense model gets me $80–120 million.
| 2025-01-23T18:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i89x2z/is_an_8_trillion_parameter_moe_with_7b_active/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i89x2z | false | null | t3_1i89x2z | /r/LocalLLaMA/comments/1i89x2z/is_an_8_trillion_parameter_moe_with_7b_active/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': '47Pl48E-UpLADOg7g-_vE1AC5XEiOHj5t-oKEv_lfM0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cjC37F1OxL9stpwCKiQGeJojdcJOSceYC4QOOCiRBTE.jpg?width=108&crop=smart&auto=webp&s=09c24b28ccb1bab7e2b9099ebe1ba8b67a9f2bab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cjC37F1OxL9stpwCKiQGeJojdcJOSceYC4QOOCiRBTE.jpg?width=216&crop=smart&auto=webp&s=3c3d47f50d6b9d01548c2fc51d91561435da0bdd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cjC37F1OxL9stpwCKiQGeJojdcJOSceYC4QOOCiRBTE.jpg?width=320&crop=smart&auto=webp&s=70e864e6217a4ce71b1067317d768d6ecac87c4e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cjC37F1OxL9stpwCKiQGeJojdcJOSceYC4QOOCiRBTE.jpg?width=640&crop=smart&auto=webp&s=ca80df581d7b4e3b5b391505494bb65ca5584ad9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cjC37F1OxL9stpwCKiQGeJojdcJOSceYC4QOOCiRBTE.jpg?width=960&crop=smart&auto=webp&s=fbca1aa7dbd4b0d3d13e923a6e1187679b5f57df', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cjC37F1OxL9stpwCKiQGeJojdcJOSceYC4QOOCiRBTE.jpg?width=1080&crop=smart&auto=webp&s=1ecbc20a1703e1b6b05212128285366b863cd402', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cjC37F1OxL9stpwCKiQGeJojdcJOSceYC4QOOCiRBTE.jpg?auto=webp&s=513b719603aea0ddfc071171a815ed9469118968', 'width': 1200}, 'variants': {}}]} |
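The post's intuition can be made concrete with the standard ≈6·N·D approximation for training FLOPs, where N is the parameters touched per token (active parameters for an MoE) and D is the number of training tokens. The token budget below is an assumption for illustration, not a real training budget.

```python
def training_flops(active_params: float, tokens: float) -> float:
    # Standard back-of-envelope estimate: ~6 FLOPs per parameter per token.
    return 6 * active_params * tokens

TOKENS = 15e12  # assumed 15T-token training run

moe = training_flops(7e9, TOKENS)      # 8T-total MoE, 7B active per token
dense = training_flops(400e9, TOKENS)  # 400B dense

print(f"MoE (7B active): {moe:.2e} FLOPs")
print(f"400B dense:      {dense:.2e} FLOPs")
print(f"dense / MoE:     {dense / moe:.0f}x")
```

By this estimate the MoE run is roughly 400/7 ≈ 57× cheaper in compute; the memory needed to hold all of the MoE's parameters is the part that does not get cheaper.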
Openai operator locally | 0 | Hey guys,
Watching the OpenAI Operator announcement: it runs a virtual browser and all that. I know some projects have been working on this type of thing, but it's been a while since I've looked around at the progress here. Is anyone using something like this successfully with a reasonable-VRAM local model? Seems really cool | 2025-01-23T18:18:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i89z6d/openai_operator_locally/ | timtulloch11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i89z6d | false | null | t3_1i89z6d | /r/LocalLLaMA/comments/1i89z6d/openai_operator_locally/ | false | false | self | 0 | null |
deepseek-r1-distill-qwen-32b performs worse than expected on LiveBench | 1 | [removed] | 2025-01-23T18:23:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i8a3c0/deepseekr1distillqwen32b_performs_worse_than/ | Emergency-Map9861 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8a3c0 | false | null | t3_1i8a3c0 | /r/LocalLLaMA/comments/1i8a3c0/deepseekr1distillqwen32b_performs_worse_than/ | false | false | 1 | null |
OpenAI launches Operator | 0 |
[https://openai.com/index/introducing-operator/](https://openai.com/index/introducing-operator/) | 2025-01-23T18:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i8a7y2/openai_launches_operator/ | diovd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8a7y2 | false | null | t3_1i8a7y2 | /r/LocalLLaMA/comments/1i8a7y2/openai_launches_operator/ | false | false | self | 0 | null |
Deepmind learning from Deepseek. Power of open source! | 387 | 2025-01-23T18:30:30 | Charuru | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i8a9qb | false | null | t3_1i8a9qb | /r/LocalLLaMA/comments/1i8a9qb/deepmind_learning_from_deepseek_power_of_open/ | false | false | 387 | {'enabled': True, 'images': [{'id': 'MWAlxfbHYda4b05eI2bdIscH8vkOrFgx19iQAPNvCdc', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/xouhskggesee1.png?width=108&crop=smart&auto=webp&s=0d39205fa4f8a51b9ffe5485a9186f2b37966d4c', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/xouhskggesee1.png?width=216&crop=smart&auto=webp&s=6031abdc4634292a2c9642439976a76a05068bf7', 'width': 216}, {'height': 391, 'url': 'https://preview.redd.it/xouhskggesee1.png?width=320&crop=smart&auto=webp&s=a8fe0a351bb16fa1ccac8ea5b35da127eb3db0c9', 'width': 320}, {'height': 782, 'url': 'https://preview.redd.it/xouhskggesee1.png?width=640&crop=smart&auto=webp&s=3e2d75d3fe9869f9aa59bf1661a57a8050b9bde4', 'width': 640}, {'height': 1173, 'url': 'https://preview.redd.it/xouhskggesee1.png?width=960&crop=smart&auto=webp&s=cd9e8396a05a6aac5ecf0a4998f0f2b3eb72f246', 'width': 960}, {'height': 1320, 'url': 'https://preview.redd.it/xouhskggesee1.png?width=1080&crop=smart&auto=webp&s=a95ec30aa98ac2c6f3f638b4ab88ae453c5cebc5', 'width': 1080}], 'source': {'height': 2518, 'url': 'https://preview.redd.it/xouhskggesee1.png?auto=webp&s=48502218cac3669ffeab66ed97032cda299fc9a3', 'width': 2060}, 'variants': {}}]} |
Man runs edge model on edge device and the world applauds | 1 | [https://x.com/BrianRoemmele/status/1882436734774043055](https://x.com/BrianRoemmele/status/1882436734774043055)
Even the world's smartest man is impressed!
On a serious note though, seeing someone taking so much credit for what is really all the hard work of other people is 🤮
Does anybody know who this grifter is?
I've never come across him before. | 2025-01-23T18:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i8aa0h/man_runs_edge_model_on_edge_device_and_the_world/ | Position_Emergency | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8aa0h | false | null | t3_1i8aa0h | /r/LocalLLaMA/comments/1i8aa0h/man_runs_edge_model_on_edge_device_and_the_world/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ALTK7sLKpLZoswWg-mbVj0cglxRc_6Knk6HTRDzTWks', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/C4iC2eqKzw18kcPKiP8gX3nm5fc3RvfEvFGqp3li7RE.jpg?width=108&crop=smart&auto=webp&s=bfbb49965ae8d5ee78d6664aa0a3ddffe662785a', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/C4iC2eqKzw18kcPKiP8gX3nm5fc3RvfEvFGqp3li7RE.jpg?width=216&crop=smart&auto=webp&s=39c71819859463897fdba9fc721eceef3132b772', 'width': 216}, {'height': 224, 'url': 'https://external-preview.redd.it/C4iC2eqKzw18kcPKiP8gX3nm5fc3RvfEvFGqp3li7RE.jpg?width=320&crop=smart&auto=webp&s=67cf248b567204a76925d903b4f8e316e414b506', 'width': 320}], 'source': {'height': 333, 'url': 'https://external-preview.redd.it/C4iC2eqKzw18kcPKiP8gX3nm5fc3RvfEvFGqp3li7RE.jpg?auto=webp&s=1161433d27ef424835a2af313756294a8b2ba92d', 'width': 474}, 'variants': {}}]} |
Interesting/Random Observation: Deepseek R-1 fails to identify the name of Eldon Rosen, as a character of the novel "Do Androids Dream of Electric Sheep" | 0 | I was randomly testing the new Deepseek models and noticed the following when prompting with *"Who is Eldon Rosen?"* (Rosen is the book's equivalent of Eldon Tyrell in the novel *Do Androids Dream of Electric Sheep*):
* **Deepseek R-1**: Failed to identify Eldon Rosen as a character in the novel, misattributing the character instead. This issue seems to extend to other characters from the book as well, especially those whose names differ between the book and the movie. When given more information, when specifically asked "What was Eldon Tyrell's name in the book?", or when given a more complete question, it can respond correctly most of the time.
* **Deepseek R-1 32b**: The problem is more noticeable in the 32B versions. It fails to answer both "Who is Eldon Rosen?" and "What was Eldon Tyrell's name in the book?", and it also fails when the questions are posed with more complete information.
* **GPT-4o, O1, O1-mini, Grok, Llama3.3-70b, Llama3.3-90b**: All successfully identified Eldon Rosen.
* **GPT-4o Mini, Llama3.2-8b, Llama3.2-70b, Llama3.2-90b, and Llama3.3-11b**: Weren't able to identify the character.
Is it just model size? | 2025-01-23T18:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1i8acc2/interestingrandom_observation_deepseek_r1_fails/ | Weak-Abbreviations15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8acc2 | false | null | t3_1i8acc2 | /r/LocalLLaMA/comments/1i8acc2/interestingrandom_observation_deepseek_r1_fails/ | false | false | self | 0 | null |
DeepSeek R1 Plus Subscription at $5/mo? | 0 | I think DeepSeek should have a Plus subscription at $5/mo: the R1 model, but with faster thinking. I bet many people would join or switch from ChatGPT's $20/mo. | 2025-01-23T18:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i8adxg/deepseek_r1_plus_subscription_at_5mo/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8adxg | false | null | t3_1i8adxg | /r/LocalLLaMA/comments/1i8adxg/deepseek_r1_plus_subscription_at_5mo/ | false | false | self | 0 | null |
OpenAI introduces Operator: Computer-Using Agent | 12 | 2025-01-23T18:38:03 | https://openai.com/index/introducing-operator/ | lomero | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1i8agkj | false | null | t3_1i8agkj | /r/LocalLLaMA/comments/1i8agkj/openai_introduces_operator_computerusing_agent/ | false | false | default | 12 | null |
Model serving provider for Deep seek R1 | 1 | Hello everyone,
I was wondering if one of you knew where I could run DeepSeek R1 at an OK price. The DeepSeek API is quite cheap (and is fine for a lot of uses), but I don't trust DeepSeek (the company) for applications where confidentiality is important. | 2025-01-23T18:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i8asq7/model_serving_provider_for_deep_seek_r1/ | pas_possible | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8asq7 | false | null | t3_1i8asq7 | /r/LocalLLaMA/comments/1i8asq7/model_serving_provider_for_deep_seek_r1/ | false | false | self | 1 | null |
best 120b~ range model? | 1 | [removed] | 2025-01-23T18:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i8avmy/best_120b_range_model/ | burrrneracc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8avmy | false | null | t3_1i8avmy | /r/LocalLLaMA/comments/1i8avmy/best_120b_range_model/ | false | false | self | 1 | null |
It's not free; you pay with your data, and it is used for training. | 58 | Just something to think about when you use "free" ChatGPT or others... it's never free. | 2025-01-23T19:07:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i8b750/its_not_free_you_pay_with_your_data_and_it_is/ | estebansaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8b750 | false | null | t3_1i8b750 | /r/LocalLLaMA/comments/1i8b750/its_not_free_you_pay_with_your_data_and_it_is/ | false | false | self | 58 | null |
DeepSeek R1 Distill Qwen 2.5 32B ablated (uncensored) | 34 | I wanted to share this release with the community of an [ablated](https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction) (or "abliterated") version of DeepSeek R1 Distill Qwen 2.5 (32B) instruct. In this way the assistant will refuse requests less often, for a more uncensored experience. We landed on layer 16 as the candidate. But wanted to explore other attempts and learnings. The release on hf: [deepseek-r1-qwen-2.5-32B-ablated](https://huggingface.co/NaniDAO/deepseek-r1-qwen-2.5-32B-ablated) | 2025-01-23T19:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i8b8mw/deepseek_r1_distill_qwen_25_32b_ablated_uncensored/ | ro5ssss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8b8mw | false | null | t3_1i8b8mw | /r/LocalLLaMA/comments/1i8b8mw/deepseek_r1_distill_qwen_25_32b_ablated_uncensored/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'oYRnFr2AaooJ90hoIR5rnhOS6n4J8YdQ1RTRXDp7ZC4', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=108&crop=smart&auto=webp&s=6ba9184844005aa98afb60bd51667287f60dc270', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=216&crop=smart&auto=webp&s=47cd1297cb95d335c2e047312bad9e780d249701', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=320&crop=smart&auto=webp&s=a4c0484e65966c8414ff93c7f30432bc53a45473', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=640&crop=smart&auto=webp&s=708b20b4ca742f23c32c88e75a1840ad8e604e81', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=960&crop=smart&auto=webp&s=0189549ac903b6a3da0079015f4c5c11a5db727c', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=1080&crop=smart&auto=webp&s=8be7f16a9e8a8fd9cbb645948dc262d7f90dc002', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?auto=webp&s=0920c0d81c11764a8c8466f83cc9f1d5386d22b2', 'width': 4200}, 'variants': {}}]} |
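For context, the linked "refusal direction" technique boils down to a difference-of-means direction plus weight orthogonalization. Below is a conceptual sketch, not the released model's actual recipe: collecting the activations and choosing the layer (e.g., layer 16) are assumed to have happened already.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Difference of mean residual-stream activations between contrasting
    # prompt sets at the chosen layer, normalized to unit length.
    d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return d / d.norm()

def orthogonalize(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # For a matrix W whose output lands in the residual stream
    # (shape [d_model, d_in]), remove the component along d:
    # W' = (I - d d^T) W, so the model can no longer write along d.
    return W - torch.outer(d, d) @ W

# Applying `orthogonalize` to every matrix that writes into the residual
# stream (embeddings, attention out-projections, MLP down-projections)
# yields the "ablated" weights.
```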
Finetuning Deepseek r1 on different domains | 1 | [removed] | 2025-01-23T19:18:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i8bfzn/finetuning_deepseek_r1_on_different_domains/ | Living-Situation6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8bfzn | false | null | t3_1i8bfzn | /r/LocalLLaMA/comments/1i8bfzn/finetuning_deepseek_r1_on_different_domains/ | false | false | self | 1 | null |
Deepseek r1 in domains outside math and coding | 1 | [removed] | 2025-01-23T19:21:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i8bjd1/deepseek_r1_in_domains_outside_math_and_coding/ | Living-Situation6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8bjd1 | false | null | t3_1i8bjd1 | /r/LocalLLaMA/comments/1i8bjd1/deepseek_r1_in_domains_outside_math_and_coding/ | false | false | self | 1 | null |
What's the cheapest way to deploy / use Deepseek r1 671b ? | 1 | I have a Claude subscription I use mostly for my coding work. However, DeepSeek R1 seems really good, but like many, I don't have the hardware to run the full model, and the 32B-parameter version is quite dumb when I run it locally. How do you run DeepSeek R1? Do you rent a VPS, or is there another service with GPUs that's cheap to rent? I'm wondering if it can be cheaper than my Claude subscription. I'm used to running everything locally, so sorry if my question is very noob-like...
Thanks for your help ! | 2025-01-23T19:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i8bkjr/whats_the_cheapest_way_to_deploy_use_deepseek_r1/ | tomakorea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8bkjr | false | null | t3_1i8bkjr | /r/LocalLLaMA/comments/1i8bkjr/whats_the_cheapest_way_to_deploy_use_deepseek_r1/ | false | false | self | 1 | null |
In these tests the 5090 is 50% faster than the 4090 in FP8 and 435% faster in FP4. | 7 | Flux.1 dev generation time (seconds per image, lower is better):

| GPU | Flux.1 dev FP8 | Flux.1 dev FP4 |
|---|---|---|
| RTX 5090 | 6.61 s/image | 3.94 s/image |
| RTX 4090 | 9.94 s/image | 17.12 s/image |
https://www.tomshw.it/hardware/nvidia-rtx-5090-test-recensione#prestazioni-in-creazione-contenuti | 2025-01-23T19:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i8br4a/in_these_tests_the_5090_is_50_faster_than_the/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8br4a | false | null | t3_1i8br4a | /r/LocalLLaMA/comments/1i8br4a/in_these_tests_the_5090_is_50_faster_than_the/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '4dXf_grodIiJv-ydFWNhOukDLJA_X6Pv704e7qjDPXQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ObINyV8VxNC0A5fAHb1ycyQbXD_VUBbf9UZp1p7ye6I.jpg?width=108&crop=smart&auto=webp&s=b206817db28db0d537d569df8d40be42dbc02867', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ObINyV8VxNC0A5fAHb1ycyQbXD_VUBbf9UZp1p7ye6I.jpg?width=216&crop=smart&auto=webp&s=1da00e431c44eacd2d8cba2124473cfc0ea78e3b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ObINyV8VxNC0A5fAHb1ycyQbXD_VUBbf9UZp1p7ye6I.jpg?width=320&crop=smart&auto=webp&s=fb328eebe2f430cafe03182331742b30deb6be9f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ObINyV8VxNC0A5fAHb1ycyQbXD_VUBbf9UZp1p7ye6I.jpg?width=640&crop=smart&auto=webp&s=0cf022485ca9b2c2e8d526d3a97f17536b10cc77', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ObINyV8VxNC0A5fAHb1ycyQbXD_VUBbf9UZp1p7ye6I.jpg?width=960&crop=smart&auto=webp&s=f91a7be89aba3e8f85e52d0de9c5aa758ff1a289', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ObINyV8VxNC0A5fAHb1ycyQbXD_VUBbf9UZp1p7ye6I.jpg?width=1080&crop=smart&auto=webp&s=e6edfba8bc48ac8a2213832b1dd76bddced0699a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ObINyV8VxNC0A5fAHb1ycyQbXD_VUBbf9UZp1p7ye6I.jpg?auto=webp&s=c830f911e82fe4e14bc14fd32339b21cecbab89c', 'width': 1919}, 'variants': {}}]} |
Best free local AI? | 1 | [removed] | 2025-01-23T19:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i8brem/best_free_local_ai/ | Mammoth-Prior-7501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8brem | false | null | t3_1i8brem | /r/LocalLLaMA/comments/1i8brem/best_free_local_ai/ | false | false | self | 1 | null |
Is the Deepseek R1 finetunable? | 3 | I wanted to finetune the 14b-qwen-distill-q4_K_M. Also, are the Qwen 2.5 models finetunable? | 2025-01-23T19:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i8bzg1/is_the_deepseek_r1_finetunable/ | CaptTechno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8bzg1 | false | null | t3_1i8bzg1 | /r/LocalLLaMA/comments/1i8bzg1/is_the_deepseek_r1_finetunable/ | false | false | self | 3 | null
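Worth noting: you can't train directly on a q4_K_M GGUF; the usual route is to LoRA-tune the full-precision HF checkpoint and re-quantize afterwards. A minimal PEFT sketch, with the model ID, target modules, and hyperparameters as assumptions:

```python
# Minimal LoRA sketch for a distilled R1 checkpoint. Assumptions: the HF
# repo below (not the GGUF), rank/alpha values, and that you quantize back
# to GGUF (e.g. via llama.cpp) after training and merging.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

lora = LoraConfig(
    r=16,                                 # adapter rank (assumption)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # sanity check: only adapter weights train
# Then train with transformers.Trainer or trl's SFTTrainer on your dataset.
```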
Very poor answers from Deepseek R1 7b, running on M1 Max in LM Studio. Is 7b supposed to be this bad? | 1 | [removed] | 2025-01-23T19:46:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i8c46i/very_poor_answers_from_deepseek_r1_7b_running_on/ | RupFox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8c46i | false | null | t3_1i8c46i | /r/LocalLLaMA/comments/1i8c46i/very_poor_answers_from_deepseek_r1_7b_running_on/ | false | false | self | 1 | null |
DeepSeek R1 Academic Paper Agent | 0 | Hey guys, first post in here.
I have been playing with DeepSeek's R1 model and am super impressed! Does anyone have an idea how I could turn it into an AI agent that can help me write a full academic paper?
Online research for citations and computations for data analysis would be needed.
Appreciate any help on this one!
Thanks | 2025-01-23T19:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i8c7dl/deepseek_r1_academic_paper_agent/ | predecker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8c7dl | false | null | t3_1i8c7dl | /r/LocalLLaMA/comments/1i8c7dl/deepseek_r1_academic_paper_agent/ | false | false | self | 0 | null |
Time to ban Chinese LLMs! /s | 5 | Background:
https://www.teamblind.com/post/Meta-genai-org-in-panic-mode-KccnF41n | 2025-01-23T20:00:13 | Amgadoz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i8cg79 | false | null | t3_1i8cg79 | /r/LocalLLaMA/comments/1i8cg79/time_to_ban_chinese_llms_s/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'gbNmaJ11qACsfDBibZTulKzbH0YlWkZXH5zQXnDWLj0', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/nqlgailhusee1.png?width=108&crop=smart&auto=webp&s=28d49c9024e005a7d1ff0db2abfdc364bbf53b49', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/nqlgailhusee1.png?width=216&crop=smart&auto=webp&s=8cda9f3d6af19f0a3fb9527fb36a17325cf2b143', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/nqlgailhusee1.png?width=320&crop=smart&auto=webp&s=6ce8719c9722d13be831ca3b95b80175ece0bbb1', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/nqlgailhusee1.png?width=640&crop=smart&auto=webp&s=91272284fdaccdfc54598c86e6848019e02abaee', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/nqlgailhusee1.png?width=960&crop=smart&auto=webp&s=e9a6396490a4285a6b2c75f5d94d4869633c157d', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/nqlgailhusee1.png?width=1080&crop=smart&auto=webp&s=d95bb4c02420c625b9a89c693c71534b989eadeb', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://preview.redd.it/nqlgailhusee1.png?auto=webp&s=0546cebd7a3914ff709b439ef2c5e2522be61267', 'width': 2160}, 'variants': {}}]} |
Deepseek-r1-Qwen 1.5B's overthinking is adorable | 296 | 2025-01-23T20:01:44 | https://v.redd.it/b5coo5glusee1 | Ill-Still-6859 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i8chpr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b5coo5glusee1/DASHPlaylist.mpd?a=1740254538%2CYjJlMDIyMjEwMGJiNDQwMTM0ZmIxODNkNmYxMGM4NGZmNjllN2VmZWZlYjgzYTU0MjI1MWYxNzZlYmE3OWFhNg%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/b5coo5glusee1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/b5coo5glusee1/HLSPlaylist.m3u8?a=1740254538%2CNmY3MDFkMjgyNDcxMWMxOGVkODNiMTgxNDlhOGY4YjgxNWQyM2JlZjU1ZDE4NWFiNDQxYWE5ZDI3YWU3ZDczMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b5coo5glusee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 856}} | t3_1i8chpr | /r/LocalLLaMA/comments/1i8chpr/deepseekr1qwen_15bs_overthinking_is_adorable/ | false | false | 296 | {'enabled': False, 'images': [{'id': 'azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW.png?width=108&crop=smart&format=pjpg&auto=webp&s=7bc389553b3aa62a7386d809baf1698ee1333ab3', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW.png?width=216&crop=smart&format=pjpg&auto=webp&s=9f11d521f3d89a82aad371a37e1d3dee509f646a', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW.png?width=320&crop=smart&format=pjpg&auto=webp&s=6406ba34ea30e09119bcb70c6087ea8cd7a1ced9', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW.png?width=640&crop=smart&format=pjpg&auto=webp&s=414da19bd3559a5babfd04067ef9e9ccf616757a', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW.png?width=960&crop=smart&format=pjpg&auto=webp&s=33db04a3968f0d6bde9106b03c2b2aa2c6a88804', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW.png?width=1080&crop=smart&format=pjpg&auto=webp&s=226c93a51c36a88bdfaff57f87cd4c1486d28150', 'width': 1080}], 'source': {'height': 2424, 'url': 'https://external-preview.redd.it/azZ1d2EzZ2x1c2VlMWwcsRUdCKlecN3EYDmX-jmw1aKFL7Ec90KkMpgcpWxW.png?format=pjpg&auto=webp&s=fc5891aaa508e74e13d6cc2c7a1bfce7013f120c', 'width': 1080}, 'variants': {}}]} |
R1-Qwen32b AIME II deep-dive and verification | 13 | After having seen a lot of chatter about the distills being bad or published benchmarks being wrong I decided to manually verify the performance of DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf. Parameters: temperature 0.6. Starting context was 8192, upped to 16384 for a couple.
## TL;DR: 11/15, so 73%, in line with what DeepSeek published (I assume they ran the AIME I exam as well; there is also RNG, see below).
I watched all of these traces by hand and some thoughts:
- This really does seem like the real deal, for real.
- 2 of its wrong answers I think it could probably get correct sometimes or maybe with less quant/different params. It was extremely close to a correct answer.
- The only questions where it became incoherent and mostly rubbish were questions where visual reasoning is kinda required, there were 2 questions like that on this exam.
Results: (questions here: https://artofproblemsolving.com/wiki/index.php/2024_AIME_II_Problems).
1. 73 - Correct.
2. 236 - Correct.
3. 45 - Correct.
4. 33 - Correct.
5. 80 - Correct.
6. 55 - Correct.
7. 5694 - Incorrect. That's the underlying number the problem constructs, but the question asks you to report it in a different form, in terms of Q+R. Probably a long-context issue. Q+R = 699.
With 16384 context: 699 - Correct. Marked as correct.
8. 3Sqrt[13]. Incorrect. It got confused and lost context too.
9. 34 - Incorrect. Lost context.
10. 468 - Correct.
11. 603 - Incorrect but close. Completely lost context.
Tried it again with longer context. It got so close, got 601 (the correct answer) but decided that it was "too many" and said 13. Lol. A human reading the chain would have understood it solved it, still incorrect for the benchmark tho.
12. 23 - Correct.
13. 321 - Correct.
14. 211 - Correct.
15. 12 - Incorrect and lost context. It had broadly the right idea about counting k-values. Tried with longer context, also didn't work. | 2025-01-23T20:08:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i8cnau/r1qwen32b_aime_ii_deepdive_and_verification/ | Billy462 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8cnau | false | null | t3_1i8cnau | /r/LocalLLaMA/comments/1i8cnau/r1qwen32b_aime_ii_deepdive_and_verification/ | false | false | self | 13 | null |
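A grading harness for runs like this is only a few lines. A sketch, assuming one saved transcript per question and an answer key filled in from the AoPS page; taking the last integer in the transcript as the final answer is a simplification:

```python
# Minimal AIME grading sketch. Assumptions: transcripts saved as
# q01.txt..q15.txt, and the key dict filled in from the AoPS answer page
# (Q1-Q7 values below are taken from the results above).
import re
from pathlib import Path

KEY = {1: 73, 2: 236, 3: 45, 4: 33, 5: 80, 6: 55, 7: 699}

def final_answer(text: str) -> int | None:
    """AIME answers are integers 0-999; take the last such number stated."""
    nums = re.findall(r"\b\d{1,3}\b", text)
    return int(nums[-1]) if nums else None

correct = 0
for q, expected in KEY.items():
    transcript = Path(f"q{q:02d}.txt").read_text()  # one run per question
    got = final_answer(transcript)
    ok = got == expected
    correct += ok
    print(f"Q{q}: model={got} expected={expected} {'OK' if ok else 'WRONG'}")
print(f"{correct}/{len(KEY)} correct")
```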
Asking distilled R1 to think more | 8 | I have been trying the distilled R1 models (the 32B and the 14B, both the q4-M versions) through Ollama to see how I can get the most out of them and what prompt techniques work best for them. I saw the problem below on here and thought it was a good problem to test the models with.
# The problem
A year ago, 60 animals lived in the magical garden: 30 hares, 20 wolves, and 10 lions. The number of animals in the garden changes only in three cases: when a wolf eats a hare and turns into a lion, when a lion eats a hare and turns into a wolf, and when a lion eats a wolf and turns into a hare. Currently, there are no animals left in the garden that can eat each other. Determine the maximum and minimum number of animals that can be left in the garden.
# My experiment
First I just gave them the problem. Neither model got the answer right.
Then, I added this to the end of the prompt: “Take your time and think as much as you want and then provide an answer.”
In this case both models thought for much longer and they both got one part of the answer right (one got the minimum correct and the other the maximum).
# Conclusion
These models seem to have an understanding of how long they think, and you can just ask them to think longer, which seems to improve their reasoning.
# Discussion
What are some other prompting techniques or other methods out there to enhance the reasoning for these models? | 2025-01-23T20:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i8ct56/asking_distilled_r1_to_think_more/ | Saber-tooth-tiger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8ct56 | false | null | t3_1i8ct56 | /r/LocalLLaMA/comments/1i8ct56/asking_distilled_r1_to_think_more/ | false | false | self | 8 | null |
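The nudge is easy to script against a local Ollama server; a minimal sketch using the /api/generate endpoint. The model tag, context size, and timeout are assumptions, so size num_ctx for your machine:

```python
# Minimal sketch: append the "think longer" nudge to the prompt and call a
# local Ollama server. Assumptions: deepseek-r1:14b is pulled and the
# default http://localhost:11434 endpoint is used.
import requests

PROBLEM = "A year ago, 60 animals lived in the magical garden: ..."  # full text above
NUDGE = "Take your time and think as much as you want and then provide an answer."

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:14b",
        "prompt": f"{PROBLEM}\n\n{NUDGE}",
        "stream": False,
        "options": {"temperature": 0.6, "num_ctx": 16384, "num_predict": -1},
    },
    timeout=1200,
)
print(resp.json()["response"])  # the <think>...</think> trace is included
```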
Fine tuning to learn from R1 feasible? | 0 | So I'm wondering: if the stuff in between <think> and </think> is what makes reasoning models stand out, wouldn't it be helpful for smaller models to also do that? My idea is to take a bunch of leaderboard questions, have R1 answer them, and build a dataset from that to fine-tune smaller models. Would that work, or is it a waste of time? | 2025-01-23T20:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i8cvy1/fine_tuning_to_learn_from_r1_feasible/ | ComprehensiveBird317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8cvy1 | false | null | t3_1i8cvy1 | /r/LocalLLaMA/comments/1i8cvy1/fine_tuning_to_learn_from_r1_feasible/ | false | false | self | 0 | null
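This is roughly the distillation recipe behind the official R1 distills. A minimal sketch of the dataset-building step; the JSONL layout and field names are assumptions, and the step that actually queries R1 is elided:

```python
# Minimal sketch: turn saved R1 outputs into (prompt, reasoning, answer)
# records for SFT. Assumption: each raw output still contains its
# <think>...</think> block and is stored one JSON object per line.
import json
import re

THINK_RE = re.compile(r"<think>(.*?)</think>\s*(.*)", re.DOTALL)

def to_record(question: str, r1_output: str) -> dict | None:
    m = THINK_RE.search(r1_output)
    if not m:
        return None  # skip runs where the trace was truncated
    reasoning, answer = m.group(1).strip(), m.group(2).strip()
    return {"prompt": question, "reasoning": reasoning, "answer": answer}

with open("r1_raw.jsonl") as src, open("sft_dataset.jsonl", "w") as dst:
    for line in src:
        row = json.loads(line)
        rec = to_record(row["question"], row["output"])
        if rec:
            dst.write(json.dumps(rec) + "\n")
```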
What are your use cases? | 6 | I've been into AI/LLMs since even before ChatGPT was released back in 2022, and now I mostly use it or alternatives (mostly Claude) for boilerplate code and the occasional debugging. I'm looking to invest in hardware for running LLMs locally, so I'm curious what other use cases you have for AI. I've seen some people mention they use it for math problems and writing, but are these like homework assignments or a bit more complex? | 2025-01-23T20:18:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i8cvzb/what_are_your_use_cases/ | No_Hedgehog_7563 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8cvzb | false | null | t3_1i8cvzb | /r/LocalLLaMA/comments/1i8cvzb/what_are_your_use_cases/ | false | false | self | 6 | null
Do you think this configuration is good enough for R1 32b | 1 | I'm planning to build a future-proof (?) budget PC to start running LLMs locally.
• Ryzen 8600g (USD 250)
• 2xMSI 3060 12gb Ventus 3x (USD 250 each)
• RAM 32gb(16x2) 6000mhz Flare X5 (USD 106)
• MB MSI Pro X670-P wifi (USD 187.5)
• SSD Samsung m.2 980 PRO 2tb (USD 150)
• 2xSeagate 2tb 7200rpm (USD 56.25 each)
• Be Quiet! Pure Base 500 (USD 75)
• Be Quiet! Pure Power 12M 850w (USD 118.75)
The be quiet! case and power supply, and the Ryzen 8600G, are gifts from my parents. I have put their prices in as a reference for prices in my country. I'm still choosing the other parts. What do you think I should change?
My budget is USD 1,000 | 2025-01-23T20:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i8cxf8/do_you_think_this_configuration_is_good_enough/ | king2014py | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8cxf8 | false | null | t3_1i8cxf8 | /r/LocalLLaMA/comments/1i8cxf8/do_you_think_this_configuration_is_good_enough/ | false | false | self | 1 | null
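A back-of-envelope check on whether 2x12 GB covers the 32B distill at Q4_K_M; the bytes-per-weight figure and overhead are rough assumptions, so treat the result as "tight but plausible" rather than a guarantee:

```python
# Rough VRAM check for R1-distill 32B at Q4_K_M across 2x12 GB GPUs.
# Approximations: ~0.57 bytes/weight (Q4_K_M averages ~4.5 bits/weight)
# plus a few GB for KV cache and runtime at modest context lengths.
params = 32e9
bytes_per_weight = 0.57
weights_gb = params * bytes_per_weight / 1e9   # ~18.2 GB
overhead_gb = 4                                # KV cache + runtime, rough guess
total_gb = weights_gb + overhead_gb
print(f"~{weights_gb:.1f} GB weights + ~{overhead_gb} GB overhead = ~{total_gb:.1f} GB")
print("Fits in 24 GB total VRAM:", total_gb < 24)  # tight, leaves little headroom
```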
Deepseek R1 vs Claude 3.5 Sonnet vs o1 vs 4o for coding | 1 | [removed] | 2025-01-23T20:27:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i8d3i1/deepseek_r1_vs_claude_35_sonnet_vs_o1_vs_4o_for/ | Jay_Wheyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8d3i1 | false | null | t3_1i8d3i1 | /r/LocalLLaMA/comments/1i8d3i1/deepseek_r1_vs_claude_35_sonnet_vs_o1_vs_4o_for/ | false | false | self | 1 | null |
Advice on my AI Workstation | 5 | Hey guys, I am planning on building a workstation for multi-agent AI usage. My budget is 20k (€).
My current plan is the following core components:
* Phanteks Enthoo Pro 2 Server Edition
* AMD Epyc 9755, 128C/256T, 2.70-4.10GHz, tray
* 12x Micron RDIMM 64GB, DDR5-5600
* Supermicro H13SSL-NT retail
* 2x Crucial T700 SSD 4TB as ZFS Mirror
* Seasonic Prime PX-1600 or Higher depending on the GPU Setup
* 12x Noctua NF-F12
* SilverStone XE360-SP5 360mm AIO for CPU
Now my question for this lovely community is about the GPU setup.
The mainboard allows for at least 3 PCIe 5.0 x16 and 2 PCIe 4.0 x8 cards; however, for a full population I would need riser cables. The case has 11 PCI slots available.
I need a closed case because of the surrounding environment, and I don't feel very safe building a custom water-cooling loop, although that would solve my slot issues.
With these base settings, my question is: what GPU setup would you guys recommend? I cannot go over 2000W for now due to external factors, so I would need to power-limit some of the configurations below.
Given that I want to have multiple agents running, I was thinking about either:
1. 3x 4090
2. 4x 3090
3. Mixing 4090 with Intel Arc770
4. ???
Thank you in advance for any feedback.
PS: I have high hopes of utilizing the CPU for LLMs as well, based on the latest ZenDNN 5.0.
| 2025-01-23T20:30:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i8d6c1/advice_on_my_ai_workstation/ | MyLifeAsSinusOfX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8d6c1 | false | null | t3_1i8d6c1 | /r/LocalLLaMA/comments/1i8d6c1/advice_on_my_ai_workstation/ | false | false | self | 5 | null |
Is DeepSeek R1 Good for Creative Writing Run Locally? | 10 | Use cases I'm interested in would be brainstorming, plotting, outlining, creation of characters, magic systems, tech systems, ecosystems, worldbuilding and lore.
Is there an uncensored version of DeepSeek R1 that I could run locally so I can freely work on Dark Fantasy projects?
I welcome any recommendations thank you! | 2025-01-23T20:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i8d8j8/is_deepseek_r1_good_for_creative_writing_run/ | shyguy8545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8d8j8 | false | null | t3_1i8d8j8 | /r/LocalLLaMA/comments/1i8d8j8/is_deepseek_r1_good_for_creative_writing_run/ | false | false | self | 10 | null |
Hi! I'm working on a VS Code extension with a built-in local LLM, would love to know your thoughts :) | 1 | 2025-01-23T20:35:03 | https://v.redd.it/equ1e7290tee1 | juanviera23 | /r/LocalLLaMA/comments/1i8da49/hi_im_working_on_a_vs_code_extension_with_a/ | 1970-01-01T00:00:00 | 0 | {} | 1i8da49 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/equ1e7290tee1/DASHPlaylist.mpd?a=1740386111%2CM2MyZDk1MDk0YWY5YmZjYWI4NWJiMTBlNmU0ODJiMjAzNzZlYzE2MWVkOWU0MGM2ZTA2OGU1MDAzNDhjZGNjYw%3D%3D&v=1&f=sd', 'duration': 154, 'fallback_url': 'https://v.redd.it/equ1e7290tee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 712, 'hls_url': 'https://v.redd.it/equ1e7290tee1/HLSPlaylist.m3u8?a=1740386111%2CNDE2NGY5NzA4ZjJlZjRlNTczYjE1MTZjOGI5YzMzY2ExZGYwOGZiNzlmYTUwNjcwN2FkZDQ5MWI5MDYzNjZlYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/equ1e7290tee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1i8da49 | /r/LocalLLaMA/comments/1i8da49/hi_im_working_on_a_vs_code_extension_with_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7.png?width=108&crop=smart&format=pjpg&auto=webp&s=d075829d0627118f42af4d20c050a9ba3d07a35a', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7.png?width=216&crop=smart&format=pjpg&auto=webp&s=c45b6d7914c7f51205db30e7e03a190b6f90b57c', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7.png?width=320&crop=smart&format=pjpg&auto=webp&s=6e07c473485f26fdde93838a814ae04ce000a181', 'width': 320}, {'height': 356, 'url': 'https://external-preview.redd.it/N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7.png?width=640&crop=smart&format=pjpg&auto=webp&s=31fb90c0b948c63d76c6288447443436ee6202e5', 'width': 640}, {'height': 534, 'url': 'https://external-preview.redd.it/N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7.png?width=960&crop=smart&format=pjpg&auto=webp&s=00eb621cf235d072d4ab53d9fd11075b7a8e06a4', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fa4122c82572ab34de2609db5e8fab8c3ab833d9', 'width': 1080}], 'source': {'height': 760, 'url': 'https://external-preview.redd.it/N3Jud2Q3MjkwdGVlMc5MvSBjzYv1ELVpHBdJbVCWB0UNqV_pC-EJ5AsZBzZ7.png?format=pjpg&auto=webp&s=8fcaf9f391e40fe5c291e46d7af088c7a7433453', 'width': 1366}, 'variants': {}}]} |
Got a Mac M4 Mini 24GB for ollama. How do I increase the unified memory limit for the GPU? | 3 | Hello,
I read that macOS by default allows at most 75% of memory to be used by the GPU on the Mac M4 Mini, but that it is configurable. Any idea how I can configure this?
For example, I just got the M4 Mini 24GB and I want to allow 20GB, or even 22GB, to be used by the GPU exclusively, as I won't be using the UI, just ollama via SSH.
Thanks! | 2025-01-23T20:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i8deyc/got_a_mac_m4_mini_24gb_for_ollama_how_do_i/ | StartupTim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8deyc | false | null | t3_1i8deyc | /r/LocalLLaMA/comments/1i8deyc/got_a_mac_m4_mini_24gb_for_ollama_how_do_i/ | false | false | self | 3 | null |
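One commonly reported knob, not an official setting anyone should rely on blindly: the iogpu.wired_limit_mb sysctl on recent Apple Silicon macOS releases (availability and the exact name depend on the OS version). It resets on reboot, and setting it too high can starve the OS. A sketch of setting it from Python over an SSH session:

```python
# Sketch: raise the GPU wired-memory cap on Apple Silicon via sysctl.
# Assumptions: a recent macOS exposing iogpu.wired_limit_mb (older builds
# reportedly used debug.iogpu.wired_limit), sudo access, and that the
# value resets at reboot. 20480 MB = 20 GB.
import subprocess

subprocess.run(["sudo", "sysctl", "iogpu.wired_limit_mb=20480"], check=True)
subprocess.run(["sysctl", "iogpu.wired_limit_mb"], check=True)  # verify
```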
Bad performance with RTX 4060 and 16gb RAM. Help please | 1 | [removed] | 2025-01-23T20:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i8dij2/bad_performance_with_rtx_4060_and_16gb_ram_help/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8dij2 | false | null | t3_1i8dij2 | /r/LocalLLaMA/comments/1i8dij2/bad_performance_with_rtx_4060_and_16gb_ram_help/ | false | false | self | 1 | null |
FuseO1-DeepSeekR1-... on M4pro max 128 gb | 1 | [removed] | 2025-01-23T20:44:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i8dioz/fuseo1deepseekr1_on_m4pro_max_128_gb/ | Asherah18 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8dioz | false | null | t3_1i8dioz | /r/LocalLLaMA/comments/1i8dioz/fuseo1deepseekr1_on_m4pro_max_128_gb/ | false | false | self | 1 | null |
Bad performance with RTX 4060 and 16gb RAM. Why? | 1 | [removed] | 2025-01-23T20:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i8dj0s/bad_performance_with_rtx_4060_and_16gb_ram_why/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8dj0s | false | null | t3_1i8dj0s | /r/LocalLLaMA/comments/1i8dj0s/bad_performance_with_rtx_4060_and_16gb_ram_why/ | false | false | self | 1 | null |
Why can't I post asking about performance issues? | 1 | [removed] | 2025-01-23T20:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i8dsh3/why_cant_i_post_asking_about_performance_issues/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8dsh3 | false | null | t3_1i8dsh3 | /r/LocalLLaMA/comments/1i8dsh3/why_cant_i_post_asking_about_performance_issues/ | false | false | self | 1 | null |
Why can't I post asking about performance issues? | 1 | [removed] | 2025-01-23T20:57:07 | https://www.reddit.com/r/LocalLLaMA/comments/1i8dt6e/why_cant_i_post_asking_about_performance_issues/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8dt6e | false | null | t3_1i8dt6e | /r/LocalLLaMA/comments/1i8dt6e/why_cant_i_post_asking_about_performance_issues/ | false | false | self | 1 | null |
Bad performance with RTX 4060 and 16gb RAM | 1 | [removed] | 2025-01-23T21:00:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i8dwd3/bad_performance_with_rtx_4060_and_16gb_ram/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8dwd3 | false | null | t3_1i8dwd3 | /r/LocalLLaMA/comments/1i8dwd3/bad_performance_with_rtx_4060_and_16gb_ram/ | false | false | 1 | null |
Why Can't I ask for help? | 1 | [removed] | 2025-01-23T21:05:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e0vd/why_cant_i_ask_for_help/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e0vd | false | null | t3_1i8e0vd | /r/LocalLLaMA/comments/1i8e0vd/why_cant_i_ask_for_help/ | false | false | self | 1 | null |
What is the AI model you enjoy most? | 1 | [removed] | 2025-01-23T21:06:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e1md/what_is_the_ai_model_you_enjoy_most/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e1md | false | null | t3_1i8e1md | /r/LocalLLaMA/comments/1i8e1md/what_is_the_ai_model_you_enjoy_most/ | false | false | self | 1 | null |
Bad performance with RTX 4060 and 16gb RAM | 1 | [removed] | 2025-01-23T21:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e3i2/bad_performance_with_rtx_4060_and_16gb_ram/ | Cold_Guava_217 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e3i2 | false | null | t3_1i8e3i2 | /r/LocalLLaMA/comments/1i8e3i2/bad_performance_with_rtx_4060_and_16gb_ram/ | false | false | 1 | null |
LM Arena’s Voting Exploited for Betting Manipulation | 1 | [removed] | 2025-01-23T21:09:23 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e46q/lm_arenas_voting_exploited_for_betting/ | RYSKZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e46q | false | null | t3_1i8e46q | /r/LocalLLaMA/comments/1i8e46q/lm_arenas_voting_exploited_for_betting/ | false | false | self | 1 | null |
DeepSeek R1's answer to the meaning of life. | 1 | 2025-01-23T21:10:45 | Fantastic-Care-5885 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i8e5dw | false | null | t3_1i8e5dw | /r/LocalLLaMA/comments/1i8e5dw/deepseek_r1s_answer_to_the_meaning_of_life/ | false | false | 1 | {'enabled': True, 'images': [{'id': '2A2TILesj1bi6_eBH1u3GmcoX1PRO5XqYVp13zTJ8RM', 'resolutions': [{'height': 23, 'url': 'https://preview.redd.it/d8itd6uz6tee1.png?width=108&crop=smart&auto=webp&s=6062370b57820de173397caae6a0dab2a1289280', 'width': 108}, {'height': 46, 'url': 'https://preview.redd.it/d8itd6uz6tee1.png?width=216&crop=smart&auto=webp&s=e2d4988118064352a60d0ae176ad61b67601056d', 'width': 216}, {'height': 68, 'url': 'https://preview.redd.it/d8itd6uz6tee1.png?width=320&crop=smart&auto=webp&s=e823d298f1f07ef42c9584e3874efa32b048145b', 'width': 320}, {'height': 136, 'url': 'https://preview.redd.it/d8itd6uz6tee1.png?width=640&crop=smart&auto=webp&s=b2511ddffe69dc171b93d4cd27607414873c8360', 'width': 640}], 'source': {'height': 165, 'url': 'https://preview.redd.it/d8itd6uz6tee1.png?auto=webp&s=e3d15d5225fc4d413e1598327656c3ed084b4897', 'width': 772}, 'variants': {}}]} |
What GPU is best for large models? | 1 | [removed] | 2025-01-23T21:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e5e0/what_gpu_is_best_for_large_models/ | ShovvTime13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e5e0 | false | null | t3_1i8e5e0 | /r/LocalLLaMA/comments/1i8e5e0/what_gpu_is_best_for_large_models/ | false | false | self | 1 | null |
Network Latency in Local Ollama and LM Studio API Calls | 1 | [removed] | 2025-01-23T21:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e8bp/network_latency_in_local_ollama_and_lm_studio_api/ | asiff00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e8bp | false | null | t3_1i8e8bp | /r/LocalLLaMA/comments/1i8e8bp/network_latency_in_local_ollama_and_lm_studio_api/ | false | false | self | 1 | null |
DeepSeek-R1 distilled models below 70b still struggle with the “apple test”. Here are my test results: | 1 | [removed] | 2025-01-23T21:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e943/deepseekr1_distilled_models_below_70b_still/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e943 | false | null | t3_1i8e943 | /r/LocalLLaMA/comments/1i8e943/deepseekr1_distilled_models_below_70b_still/ | false | false | self | 1 | null |
How can deepseek leap ahead of competition with their open weight models? | 0 |
I have these hypotheses; what are your thoughts, or what do you know?
Do they have access to better (copyrighted, secret, better-curated, human-synthesized, etc.) data? I feel this is the most likely reason.
Do they have a better training mechanism? This is the second most likely reason, but I have no idea how they could do it sustainably.
Do they have better model architecture? This is pretty open with their published papers and weights; anybody can copy or even improve the architectures.
Do they have more GPU power than even OpenAI or Meta? It's a little hard to believe this is true after the embargo.
Did they train their model on leaderboard questions? I doubt that kind of behavior would keep them afloat this long.
(I asked the same question at r/openai but didn't get too much attention or any quality answer. Sorry if you saw it before) | 2025-01-23T21:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i8e9od/how_can_deepseek_leap_ahead_of_competition_with/ | --dany-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8e9od | false | null | t3_1i8e9od | /r/LocalLLaMA/comments/1i8e9od/how_can_deepseek_leap_ahead_of_competition_with/ | false | false | self | 0 | null |
which distilled r1 model can i run? | 1 | [removed] | 2025-01-23T21:17:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i8eb00/which_distilled_r1_model_can_i_run/ | Far-Solution549 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8eb00 | false | null | t3_1i8eb00 | /r/LocalLLaMA/comments/1i8eb00/which_distilled_r1_model_can_i_run/ | false | false | self | 1 | null |
Are There Any Uncensored DeepSeek R1 Distilled Models Out There? | 3 | As the title says. I'm looking for an 8B model that is uncensored. Where can I check for information like this? | 2025-01-23T21:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i8eth7/are_there_any_uncensored_deepseek_r1_distilled/ | Lilith-Vampire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8eth7 | false | null | t3_1i8eth7 | /r/LocalLLaMA/comments/1i8eth7/are_there_any_uncensored_deepseek_r1_distilled/ | false | false | self | 3 | null |
An interesting conversation with DeepSeek on Ollama | 1 | [removed] | 2025-01-23T21:41:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i8evo6/an_interesting_conversation_with_deepseek_on/ | __LarrySkywalker__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8evo6 | false | null | t3_1i8evo6 | /r/LocalLLaMA/comments/1i8evo6/an_interesting_conversation_with_deepseek_on/ | false | false | self | 1 | null |
Thinking Models in PocketPal iOS app | 7 | I just updated my PocketPal app for iOS on my iPhone 16 Pro. I downloaded the Qwen-7B version of the distilled R1 model, and I see that a new pop-up window now appears that's dedicated to thinking.
Unfortunately, it never finishes thinking within the model's output limit and it will cut off midway through. When I try to prompt it to continue, it will just start thinking all over again, as if the partial thoughts it was able to get through aren't in its context.
Is there any way to fix this, or to expand the number of tokens per response that it gives me? | 2025-01-23T21:41:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i8ew4u/thinking_models_in_pocketpal_ios_app/ | Commercial_Nerve_308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8ew4u | false | null | t3_1i8ew4u | /r/LocalLLaMA/comments/1i8ew4u/thinking_models_in_pocketpal_ios_app/ | false | false | self | 7 | null |
How to build LLM skillset but how much maths and python do i need to know | 1 | [removed] | 2025-01-23T22:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i8fc3v/how_to_build_llm_skillset_but_how_much_maths_and/ | shaken-n-stirred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8fc3v | false | null | t3_1i8fc3v | /r/LocalLLaMA/comments/1i8fc3v/how_to_build_llm_skillset_but_how_much_maths_and/ | false | false | self | 1 | null |
Local AGI in sight? | 0 | Been playing with that deepseek-r1 thing, reading through https://sakana.ai/transformer-squared/ and smoking weed, and it got me thinking that if we put those two together, it'd be an AGI in a sense.
Anyone have enough compute to try? | 2025-01-23T22:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i8fcfu/local_agi_in_sight/ | Western_Courage_6563 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8fcfu | false | null | t3_1i8fcfu | /r/LocalLLaMA/comments/1i8fcfu/local_agi_in_sight/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=108&crop=smart&auto=webp&s=61f7124235d3c9cc17267eb2ed7de46bab49765e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=216&crop=smart&auto=webp&s=b01c782fa93b021a180dc44d7151fade86d6431d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=320&crop=smart&auto=webp&s=670eb9c9058d14ac8846a6475e3d47cb616cf011', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=640&crop=smart&auto=webp&s=f5f30bf0b3bae15b4dee53ba7bd37f2486072c04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=960&crop=smart&auto=webp&s=2c1d1a6c85eb92a670807f829ec7254dc53f1bd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=1080&crop=smart&auto=webp&s=344e6dcc7b48a81d3b6727c749b0c289aabe5547', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?auto=webp&s=efb765a9e5d3d5585101bc98246d9babdd7d3105', 'width': 1600}, 'variants': {}}]} |
when is a model running 'locally'? | 1 | Disclaimer: complete newbie to all of this, and while no question is a dumb question, I'm pretty sure I'm out to disprove that.
Just starting to learn about local LLMs. Got Ollama to run along with WebUI and can download some different models to my PC (64 GB RAM, 4090). Been playing with Llama and Mistral to figure this out more. Today I downloaded DeepSeek and started reading about it, which sparked some questions:
* why are people saying ollama only downloads a "distilled" version? what does this mean?
* should the 70B DeepSeek version run on my hardware? How do I know how many resources it's taking?
* I know I can look at HWINFO64 and see resource usage, but will the model be taking GPU resources when it's not doing anything?
* Maybe a better question is when in the process is the model actually using the GPU?
As you can tell, I'm new to all of this and don't know what I don't know, but thanks in advance for any help
| 2025-01-23T22:08:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i8fihc/when_is_a_model_running_locally/ | Repulsive_Pop4771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8fihc | false | null | t3_1i8fihc | /r/LocalLLaMA/comments/1i8fihc/when_is_a_model_running_locally/ | false | false | self | 1 | null |
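On the "when does it use the GPU" questions: Ollama loads the model into VRAM on the first request, serves from there, and unloads it after a few idle minutes by default, so an unloaded model holds no GPU memory. A sketch that asks the server what is resident right now; assumes the default port and a recent build exposing /api/ps, the endpoint behind `ollama ps`:

```python
# Sketch: ask a local Ollama server which models are loaded and how much
# VRAM they hold. Assumptions: default port 11434 and a recent Ollama
# build that exposes /api/ps.
import requests

ps = requests.get("http://localhost:11434/api/ps", timeout=10).json()
if not ps.get("models"):
    print("Nothing loaded; no VRAM held by models right now.")
for m in ps.get("models", []):
    vram_gb = m.get("size_vram", 0) / 1e9
    print(f"{m['name']}: ~{vram_gb:.1f} GB in VRAM, expires {m.get('expires_at')}")
```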
I Built an Open-Source RAG API for Docs, GitHub Issues and READMEs | 8 | I’ve been working on **Ragpi**, an open-source AI assistant that builds knowledge bases from docs, GitHub Issues, and READMEs. It uses Redis Stack as a vector DB and leverages RAG to answer technical questions through an API.
Some things it does:
* Creates knowledge bases from documentation websites, GitHub Issues, and READMEs
* Uses hybrid search (semantic + keyword) for retrieval
* Uses tool calling to dynamically search and retrieve relevant information during conversations
* Works with OpenAI or Ollama
* Provides a simple REST API for querying and managing sources
**Built with:** FastAPI, Redis Stack, and Celery.
It’s still a work in progress, but I’d love some feedback!
Repo: [https://github.com/ragpi/ragpi](https://github.com/ragpi/ragpi)
API Reference: [https://docs.ragpi.io](https://docs.ragpi.io/) | 2025-01-23T22:13:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i8fn1q/i_built_an_opensource_rag_api_for_docs_github/ | eleven-five | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8fn1q | false | null | t3_1i8fn1q | /r/LocalLLaMA/comments/1i8fn1q/i_built_an_opensource_rag_api_for_docs_github/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'oMThEMbFECrq_40JgSUa1yIk9vp81qHAtC6fbjx-Sww', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vafuQTHeT9qrvLHK6E4FlLjHB-eIVwCK4dNN4iRVVa8.jpg?width=108&crop=smart&auto=webp&s=46ded5637f9a90bfcc9e50eee4489130dd387c6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vafuQTHeT9qrvLHK6E4FlLjHB-eIVwCK4dNN4iRVVa8.jpg?width=216&crop=smart&auto=webp&s=8690ad304dc926baf4f1a08918bea1326562902b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vafuQTHeT9qrvLHK6E4FlLjHB-eIVwCK4dNN4iRVVa8.jpg?width=320&crop=smart&auto=webp&s=c5f4f24ea5424fd1bcdcb9fcd2b5856f91f5d1eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vafuQTHeT9qrvLHK6E4FlLjHB-eIVwCK4dNN4iRVVa8.jpg?width=640&crop=smart&auto=webp&s=45f128661b692935e2809f072f9e3b96b7054cd0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vafuQTHeT9qrvLHK6E4FlLjHB-eIVwCK4dNN4iRVVa8.jpg?width=960&crop=smart&auto=webp&s=9ac64c8069d1a9af43a5249e151ee6fe1732c4cf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vafuQTHeT9qrvLHK6E4FlLjHB-eIVwCK4dNN4iRVVa8.jpg?width=1080&crop=smart&auto=webp&s=e6b6d091196db0ba2ba89b17f9c287c8b3237a2d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vafuQTHeT9qrvLHK6E4FlLjHB-eIVwCK4dNN4iRVVa8.jpg?auto=webp&s=7f56d1f63de5482f6ff938b3fa605fe6706de5fc', 'width': 1200}, 'variants': {}}]} |
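A shape sketch of how a Ragpi-style workflow might look from a client; the endpoint paths and payload fields below are hypothetical illustrations, not confirmed routes, so check https://docs.ragpi.io for the actual API reference:

```python
# Hypothetical usage sketch against a Ragpi-style REST API. The routes and
# field names here are illustrative guesses, NOT verified endpoints; see
# https://docs.ragpi.io for the real reference.
import requests

BASE = "http://localhost:8000"

# 1) Register a documentation site as a knowledge source (hypothetical route).
requests.post(f"{BASE}/sources", json={
    "name": "fastapi-docs",
    "type": "website",
    "url": "https://fastapi.tiangolo.com",
})

# 2) Ask a question grounded in that source (hypothetical route).
r = requests.post(f"{BASE}/chat", json={
    "sources": ["fastapi-docs"],
    "messages": [{"role": "user", "content": "How do I add CORS middleware?"}],
})
print(r.json())
```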
SmolVLM 250M: The world's smallest multimodal model. Running 100% locally in-browser on WebGPU. | 1 | 2025-01-23T22:15:37 | https://v.redd.it/si2qlwem8ree1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i8foi0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/si2qlwem8ree1/DASHPlaylist.mpd?a=1740262553%2CNWY4MmYzYTU5OWEwZDc4MTYyZDRjMDU1ZWZlMDlhMzdkMmUxMjU4Yjg2OTM2OWQ4MDk3ZjBkMzA1ZGMzZGViZA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/si2qlwem8ree1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/si2qlwem8ree1/HLSPlaylist.m3u8?a=1740262553%2CMGNjMGVmMjMwYjg3MWFiZTExNDMzYmMxOTc5Zjk4YmJmOGJkNzlmZDE2MDViZjAyYzViZGE5M2MyMzIxZTYyOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/si2qlwem8ree1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1198}} | t3_1i8foi0 | /r/LocalLLaMA/comments/1i8foi0/smolvlm_250m_the_worlds_smallest_multimodal_model/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo', 'resolutions': [{'height': 97, 'url': 'https://external-preview.redd.it/bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ad3e7c25570c1fe1d81fcf10f83dcc77af1017f', 'width': 108}, {'height': 194, 'url': 'https://external-preview.redd.it/bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=216&crop=smart&format=pjpg&auto=webp&s=1b6124a5f37f639f05aacf521d74dca20a5451e8', 'width': 216}, {'height': 288, 'url': 'https://external-preview.redd.it/bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=320&crop=smart&format=pjpg&auto=webp&s=33e07f7761b406a0a80bbc7aaf49ba26390a7a73', 'width': 320}, {'height': 576, 'url': 'https://external-preview.redd.it/bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=640&crop=smart&format=pjpg&auto=webp&s=7aacb96d293b410f9ffd54e8c04876151ed2eaa8', 'width': 640}, {'height': 864, 'url': 'https://external-preview.redd.it/bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=960&crop=smart&format=pjpg&auto=webp&s=352ce52adcec0169de7b5584fbb3dab802f50b9c', 'width': 960}, {'height': 972, 'url': 'https://external-preview.redd.it/bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f38ccac15fa6a1c207bf2644ca7bd4e66252d02a', 'width': 1080}], 'source': {'height': 1526, 'url': 'https://external-preview.redd.it/bGxidnF3ZW04cmVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?format=pjpg&auto=webp&s=4c289528e84e0a537c49fa63f6ed076ef6c3ad30', 'width': 1694}, 'variants': {}}]} |
||
SmolVLM 256M: The world's smallest multimodal model, running 100% locally in-browser on WebGPU. | 136 | 2025-01-23T22:17:19 | https://v.redd.it/qikrzy8witee1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i8fpza | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qikrzy8witee1/DASHPlaylist.mpd?a=1740262655%2COTA1YjYwODc4YmZhNThmNjBkYTUwNmEwYjhlOWJjMWIxNjFmMjMwOTYwMmZiZTA3NzJlOTJkOTIxODliZjdlZg%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/qikrzy8witee1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/qikrzy8witee1/HLSPlaylist.m3u8?a=1740262655%2CNTU3YjcyMDFhOTBiOTQwNGYzODBlOTNkNzU1ZTU0NmE0NmQ0ZDAxYWZkM2E2ZWFiNjI1ZDYwMzkwZGM2ZDc1ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qikrzy8witee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1198}} | t3_1i8fpza | /r/LocalLLaMA/comments/1i8fpza/smolvlm_256m_the_worlds_smallest_multimodal_model/ | false | false | 136 | {'enabled': False, 'images': [{'id': 'NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo', 'resolutions': [{'height': 97, 'url': 'https://external-preview.redd.it/NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=108&crop=smart&format=pjpg&auto=webp&s=8ea6b4f366da9052ce84ab5fc7cbc139394a34f3', 'width': 108}, {'height': 194, 'url': 'https://external-preview.redd.it/NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=216&crop=smart&format=pjpg&auto=webp&s=9334bf3c6108612e0c7a194fb81b8703ec59b967', 'width': 216}, {'height': 288, 'url': 'https://external-preview.redd.it/NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=320&crop=smart&format=pjpg&auto=webp&s=de0ce4eca7dbe2028ce879dee05ffdd727a7cb69', 'width': 320}, {'height': 576, 'url': 'https://external-preview.redd.it/NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=640&crop=smart&format=pjpg&auto=webp&s=02becb76d863079e5df02fe2805b471f8a7b1b70', 'width': 640}, {'height': 864, 'url': 'https://external-preview.redd.it/NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=960&crop=smart&format=pjpg&auto=webp&s=21e862d091244e24cb0e31b3cfdf7813bfc3b3a9', 'width': 960}, {'height': 972, 'url': 'https://external-preview.redd.it/NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d71ede6db79e29bf0024797542f849ba6e34100f', 'width': 1080}], 'source': {'height': 1526, 'url': 'https://external-preview.redd.it/NTYzZXAwOXdpdGVlMeNP1riRHGftFiyraDTq8M0dXNR_Xk41nSkLrV2F0EOo.png?format=pjpg&auto=webp&s=8b626323769e371e022de35f4ba2cb749028ef93', 'width': 1694}, 'variants': {}}]} |
||
How good is deepseek-r1:32b? | 0 | It's available with ollama now so it should be really easy to use. I am interested in coding and maths. | 2025-01-23T22:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/1i8fu56/how_good_is_deepseekr132b/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8fu56 | false | null | t3_1i8fu56 | /r/LocalLLaMA/comments/1i8fu56/how_good_is_deepseekr132b/ | false | false | self | 0 | null |
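For anyone wanting to kick the tires, a minimal sketch with the official Python client. Assumptions: `pip install ollama`, the model pulled via `ollama pull deepseek-r1:32b`, and a running local server; temperature 0.6 follows DeepSeek's recommended range for R1.

```python
# Minimal sketch: query deepseek-r1:32b through the ollama Python client.
import ollama

resp = ollama.chat(
    model="deepseek-r1:32b",
    messages=[{"role": "user", "content": "Is 2^29 + 1 divisible by 3? Show your reasoning."}],
    options={"temperature": 0.6},
)
print(resp["message"]["content"])  # includes the <think>...</think> trace
```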
What are the best models you recommend on a rtx 5060 12gb? | 0 | Long context and long output are a bonus.
I'm also very interested if anything is better than neural-chat (a truly remarkable model) or Mistral Lexi. | 2025-01-23T22:22:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i8futs/what_are_the_best_models_you_recommend_on_a_rtx/ | Jethro_E7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8futs | false | null | t3_1i8futs | /r/LocalLLaMA/comments/1i8futs/what_are_the_best_models_you_recommend_on_a_rtx/ | false | false | self | 0 | null
Can a NVIDIA A100 DGX handle running the 671B DeepSeek-r1 model? | 1 | [removed] | 2025-01-23T22:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i8gb62/can_a_nvidia_a100_dgx_handle_running_the_671b/ | -SANSEVIERIA- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8gb62 | false | null | t3_1i8gb62 | /r/LocalLLaMA/comments/1i8gb62/can_a_nvidia_a100_dgx_handle_running_the_671b/ | false | false | self | 1 | null |
Prompts are fine through ollama cli but responses are garbage and forgetful in chat box | 2 | Experimenting with deepseek r1 and noticing this. Any explanation? | 2025-01-23T22:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i8gi8b/prompts_are_fine_through_ollama_cli_but_responses/ | Aggravating_Carry604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i8gi8b | false | null | t3_1i8gi8b | /r/LocalLLaMA/comments/1i8gi8b/prompts_are_fine_through_ollama_cli_but_responses/ | false | false | self | 2 | null |