Dataset schema (one Reddit post per row):
title: string (1-300 chars)
score: int64 (0-8.54k)
selftext: string (0-40k chars)
created: timestamp[ns]
url: string (0-780 chars)
author: string (3-20 chars)
domain: string (0-82 chars)
edited: timestamp[ns]
gilded: int64 (0-2)
gildings: string (7 distinct values)
id: string (7 chars)
locked: bool (2 classes)
media: string (646-1.8k chars)
name: string (10 chars)
permalink: string (33-82 chars)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (4-213 chars)
ups: int64 (0-8.54k)
preview: string (301-5.01k chars)
Kokoro TTS 1.0
223
2025-02-03T00:52:05
https://huggingface.co/hexgrad/Kokoro-82M
zxyzyxz
huggingface.co
1970-01-01T00:00:00
0
{}
1igcpwz
false
null
t3_1igcpwz
/r/LocalLLaMA/comments/1igcpwz/kokoro_tts_10/
false
false
https://b.thumbs.redditm…7CD4d0VuU90Q.jpg
223
{'enabled': False, 'images': [{'id': 'TL8xIUiXgJg5YjryMYhj7JiBtqOghnN47_mvdxSWYzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=108&crop=smart&auto=webp&s=c44a83d5fab77c813216e5454c6fba07bfb55e15', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=216&crop=smart&auto=webp&s=bb8032866f6a8609550af1ac69ccea6df3761f92', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=320&crop=smart&auto=webp&s=7f990de0136d4482b7b3bcd05bda7d1723859680', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=640&crop=smart&auto=webp&s=b75663383244e2aa5f5fcf0207756c5dc28fb51b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=960&crop=smart&auto=webp&s=7f200c8a1257ecccf20195dc5abffaaeeb16f10a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=1080&crop=smart&auto=webp&s=9a5faaa15c9e5fde7b616979aadc6a151dfa87b0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?auto=webp&s=c3c1958b6cc380e316d46b3fe9508529724694d5', 'width': 1200}, 'variants': {}}]}
Can I Run This LLM? - A VRAM Estimator for Local AI Models
2
Hi, I’m currently in my third semester of electrical engineering, and I built this project over the weekend. [http://www.canirunthisllm.net/](http://www.canirunthisllm.net/) helps users estimate the VRAM requirements for running local AI models. It can automatically detect PC specifications or let users enter them manually for analysis. This is my first web app project, and I would really appreciate any feedback! Next, I’m working on a feature that lets users enter a Hugging Face model tag into a form, and the backend will automatically fetch the model parameters. After that, I plan to add support for multiple GPUs.
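For context, the core estimate a tool like this computes can be sketched in a few lines. The following is a rough back-of-envelope version (all constants and the KV-cache formula are simplifying assumptions, not the site's actual code; it assumes full multi-head attention with an fp16 cache):

```python
# Hypothetical VRAM estimator: weights + KV cache + runtime overhead.
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def estimate_vram_gb(params_billion: float, quant: str = "q4",
                     context_len: int = 8192, n_layers: int = 32,
                     hidden_size: int = 4096, overhead_gb: float = 1.5) -> float:
    # Weights: parameter count (in billions) times bytes per weight = GB
    weights_gb = params_billion * BYTES_PER_PARAM[quant]
    # KV cache: K and V tensors per layer, fp16 (2 bytes), no GQA assumed
    kv_cache_gb = 2 * n_layers * context_len * hidden_size * 2 / 1e9
    return weights_gb + kv_cache_gb + overhead_gb

print(f"~{estimate_vram_gb(8, 'q4'):.1f} GB for an 8B model at Q4, 8k context")
```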
2025-02-03T00:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1igcr2r/can_i_run_this_llm_a_vram_estimator_for_local_ai/
negative_entropie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igcr2r
false
null
t3_1igcr2r
/r/LocalLLaMA/comments/1igcr2r/can_i_run_this_llm_a_vram_estimator_for_local_ai/
false
false
self
2
null
Export Controls for China Obtaining GPUs
1
So I'm browsing eBay and dreaming about the days when I can stack A100's instead of 3090's, and I noticed that many of the A100's for sale are shipping from... Greater China. Kind of implies that the export controls haven't done much if they're willing to sell them back to the world. Probably stating that grass is green with this, but I just found it mildly amusing.
2025-02-03T00:56:10
https://www.reddit.com/r/LocalLLaMA/comments/1igcsud/export_controls_for_china_obtaining_gpus/
Mass2018
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igcsud
false
null
t3_1igcsud
/r/LocalLLaMA/comments/1igcsud/export_controls_for_china_obtaining_gpus/
false
false
self
1
null
... All I wrote is test!
75
2025-02-03T00:58:58
https://i.redd.it/g90sz09motge1.jpeg
internetpillows
i.redd.it
1970-01-01T00:00:00
0
{}
1igcuub
false
null
t3_1igcuub
/r/LocalLLaMA/comments/1igcuub/all_i_wrote_is_test/
false
false
https://b.thumbs.redditm…FNL8nbZf3TtM.jpg
75
{'enabled': True, 'images': [{'id': 'XdSA3hm52mkafDtnezWJkb7fVp31IY_WIR1aeL3I2D4', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/g90sz09motge1.jpeg?width=108&crop=smart&auto=webp&s=0f51798a6d5d2127d1268c640188ba5bd0a3e8fd', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/g90sz09motge1.jpeg?width=216&crop=smart&auto=webp&s=e19d7e36a3e72151ff2bfd66664d2b9adbc8221b', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/g90sz09motge1.jpeg?width=320&crop=smart&auto=webp&s=e4304806b7650f3289852df706a898b52daf2d15', 'width': 320}, {'height': 537, 'url': 'https://preview.redd.it/g90sz09motge1.jpeg?width=640&crop=smart&auto=webp&s=2c34ddea7f426493d998245465a45f6cce9c8c3a', 'width': 640}], 'source': {'height': 780, 'url': 'https://preview.redd.it/g90sz09motge1.jpeg?auto=webp&s=e2d00fd302b9179fefebbd54e8ba5d2834e29b8d', 'width': 928}, 'variants': {}}]}
I built a silent speech recognition tool that reads your lips in real-time and types whatever you mouth - runs 100% locally!
1,126
2025-02-03T01:00:09
https://v.redd.it/dh90m1iyntge1
tycho_brahes_nose_
v.redd.it
1970-01-01T00:00:00
0
{}
1igcvol
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dh90m1iyntge1/DASHPlaylist.mpd?a=1741136425%2CNjc3YmYyMWRlZjcwMmY1MzRjYzAyZjMxZWFlNTA3MmJhODAxODcwNmEyY2Y2Nzg5YTRkMjhlNWJkMTNkMjI2ZQ%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/dh90m1iyntge1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/dh90m1iyntge1/HLSPlaylist.m3u8?a=1741136425%2CYTA3NjQ3NTY1YTQ4MmQ3N2MwMDExYTg4ZmNiMjExY2RkOTk2MzYxMDY5MjIyNzZkMDBlNGVkYjMzYWE4ZGEyNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dh90m1iyntge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1igcvol
/r/LocalLLaMA/comments/1igcvol/i_built_a_silent_speech_recognition_tool_that/
false
false
https://external-preview…087911e7fd1a890c
1,126
{'enabled': False, 'images': [{'id': 'bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn.png?width=108&crop=smart&format=pjpg&auto=webp&s=3ef381d3c6696b75f5d1a0e6695e9433fd2bf782', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn.png?width=216&crop=smart&format=pjpg&auto=webp&s=01b06fa91affb9b652a58af7fccf09673a138e52', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn.png?width=320&crop=smart&format=pjpg&auto=webp&s=542c87011425ab71bd49affb167ef8a061c58b00', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn.png?width=640&crop=smart&format=pjpg&auto=webp&s=a3b87d68428d80eba8aef60d928a4608ed422eca', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn.png?width=960&crop=smart&format=pjpg&auto=webp&s=a50787ddddee726a3cead1e9a9f099ae7e3bafa2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn.png?width=1080&crop=smart&format=pjpg&auto=webp&s=286c67455c92ed3373866bf3bc7fc43cc2ecccc0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bnIwMGoyaXludGdlMVL1KlPwXSM4mwFtLRlx6KM67CArRsK705RfUy_x1msn.png?format=pjpg&auto=webp&s=65bf8b95a58996d36f4378db7ffc1a51789456a4', 'width': 1920}, 'variants': {}}]}
Deepseek r1
1
[removed]
2025-02-03T01:03:55
https://www.reddit.com/r/LocalLLaMA/comments/1igcyq0/deepseek_r1/
AlgorithmicMuse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igcyq0
false
null
t3_1igcyq0
/r/LocalLLaMA/comments/1igcyq0/deepseek_r1/
false
false
self
1
null
Seeking participants for a paid remote interview on GenAI usage
1
[removed]
2025-02-03T01:07:22
https://www.reddit.com/r/LocalLLaMA/comments/1igd170/seeking_participants_for_a_paid_remote_interview/
EmptyEvidence1576
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igd170
false
null
t3_1igd170
/r/LocalLLaMA/comments/1igd170/seeking_participants_for_a_paid_remote_interview/
false
false
self
1
null
Seeking participants for paid interviews on genAI!
1
[removed]
2025-02-03T01:17:12
https://www.reddit.com/r/LocalLLaMA/comments/1igd8fa/seeking_participants_for_paid_interviews_on_genai/
EmptyEvidence1576
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igd8fa
false
null
t3_1igd8fa
/r/LocalLLaMA/comments/1igd8fa/seeking_participants_for_paid_interviews_on_genai/
false
false
self
1
null
Is there a package repository for AI workflows?
1
Is there a pip- or npm-like repository for pre-built RAG workflows? I'd love to discover and download reusable workflows for tasks like weather reports or news updates...
2025-02-03T01:26:58
https://www.reddit.com/r/LocalLLaMA/comments/1igdfcm/is_there_a_package_repository_for_ai_workflows/
kadketon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igdfcm
false
null
t3_1igdfcm
/r/LocalLLaMA/comments/1igdfcm/is_there_a_package_repository_for_ai_workflows/
false
false
self
1
null
New Alienware PC with a 4090 – Which local LLM should I install?
1
[removed]
2025-02-03T01:27:20
https://www.reddit.com/r/LocalLLaMA/comments/1igdfm3/new_alienware_pc_with_a_4090_which_local_llm/
NotSoSubtleSaver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igdfm3
false
null
t3_1igdfm3
/r/LocalLLaMA/comments/1igdfm3/new_alienware_pc_with_a_4090_which_local_llm/
false
false
self
1
null
I mean, why?
0
I recorded the response and saw that R1 breaks down the question nicely until it reaches a certain point. Why?
2025-02-03T01:30:49
https://v.redd.it/gf6w52rhutge1
theaegontrgyn
v.redd.it
1970-01-01T00:00:00
0
{}
1igdi4c
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gf6w52rhutge1/DASHPlaylist.mpd?a=1741138297%2CMDVhNmU0NDkwYzRjMGRjYWQ2MDhlOWY0OTM3MjAxYjU2MTczODQxNjY5YWYxNjQ5MjM0Y2I1NTEzN2FmZDIyNA%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/gf6w52rhutge1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 1280, 'hls_url': 'https://v.redd.it/gf6w52rhutge1/HLSPlaylist.m3u8?a=1741138297%2CYmFjZmMwMzdhNTFkODRiMjlhNDc1NmUzZDNlYjNiM2NkZTRhZGY3ZWEyNDk1MTc5NzcxZGRkNTk3N2RmOTY1Nw%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/gf6w52rhutge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 592}}
t3_1igdi4c
/r/LocalLLaMA/comments/1igdi4c/i_mean_why/
false
false
https://external-preview…4f49ea6477283249
0
{'enabled': False, 'images': [{'id': 'MGowcXY2aGh1dGdlMcEU0Reisoug7cDDf-if0-PJwXrlVdVCqXdnyH_R71E9', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MGowcXY2aGh1dGdlMcEU0Reisoug7cDDf-if0-PJwXrlVdVCqXdnyH_R71E9.png?width=108&crop=smart&format=pjpg&auto=webp&s=b14313622bdd49527d15270802526d15842b396a', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MGowcXY2aGh1dGdlMcEU0Reisoug7cDDf-if0-PJwXrlVdVCqXdnyH_R71E9.png?width=216&crop=smart&format=pjpg&auto=webp&s=6864cc943db914034645cd2ec63f0994c40616c0', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MGowcXY2aGh1dGdlMcEU0Reisoug7cDDf-if0-PJwXrlVdVCqXdnyH_R71E9.png?width=320&crop=smart&format=pjpg&auto=webp&s=a5acb19987090319503d1fcf322a114d262c4bb9', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MGowcXY2aGh1dGdlMcEU0Reisoug7cDDf-if0-PJwXrlVdVCqXdnyH_R71E9.png?width=640&crop=smart&format=pjpg&auto=webp&s=8d434f9b7f60c59dc95f88666724068f0850f650', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/MGowcXY2aGh1dGdlMcEU0Reisoug7cDDf-if0-PJwXrlVdVCqXdnyH_R71E9.png?format=pjpg&auto=webp&s=bce2f319f9009b85c3678729a6678a5c9dd2abc0', 'width': 888}, 'variants': {}}]}
Ok I admit it, Browser Use is insane (using gemini 2.0 flash-exp default) [https://github.com/browser-use/browser-use]
163
2025-02-03T01:38:56
https://i.redd.it/evlscivtvtge1.gif
teddybear082
i.redd.it
1970-01-01T00:00:00
0
{}
1igdnx2
false
null
t3_1igdnx2
/r/LocalLLaMA/comments/1igdnx2/ok_i_admit_it_browser_use_is_insane_using_gemini/
false
false
https://b.thumbs.redditm…RvuRoH8ZNCNQ.jpg
163
{'enabled': True, 'images': [{'id': 'A6UuvzFJHyNokiLaz-j97NvlvpuY8bOl4unFHIzznxs', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=108&crop=smart&format=png8&s=5e5cd96289f7e402e456291a3c0db2c9f525e214', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=216&crop=smart&format=png8&s=add8d8f006d1771036b20bbc5f914812c5cb8444', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=320&crop=smart&format=png8&s=c51857c95bfe39de6b95f0abe9e77791f20c8a7a', 'width': 320}, {'height': 508, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=640&crop=smart&format=png8&s=48e4e4f77e13880fe141b475488b34c67f918a24', 'width': 640}, {'height': 763, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=960&crop=smart&format=png8&s=614cd51e1f9ef3185bc69f041096bfa42ad9479c', 'width': 960}, {'height': 858, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=1080&crop=smart&format=png8&s=2fbd7e97cce0724d1ad8645c9ad3d99b78d7a4b9', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?format=png8&s=36afe7099db70e7cf457667113ffaf90b3170899', 'width': 1264}, 'variants': {'gif': {'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=108&crop=smart&s=619d216c9ed28c28c171901fdfa62400b93a7fcc', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=216&crop=smart&s=46887297a0b1c4cdb8740830a0d5db57734cdf46', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=320&crop=smart&s=66a7493fe5357bd5277249687fd052df34e9c8ba', 'width': 320}, {'height': 508, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=640&crop=smart&s=8e17dd7562de9086f58eb97b4363b79f94ad14a3', 'width': 640}, {'height': 763, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=960&crop=smart&s=7b57a65bf440309f034d5c5ceb6715b505875e56', 'width': 960}, {'height': 858, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=1080&crop=smart&s=fe92575048e0be660240d270673686de749d73ed', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?s=f771bd219f39c0621ad8aaf6629923dd88beb8e5', 'width': 1264}}, 'mp4': {'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=108&format=mp4&s=a994a1ed7ff222fe3319740b40f5b30e30b922f3', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=216&format=mp4&s=54e42beb1b5999eb88909da59375bcdcf64e4a0c', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=320&format=mp4&s=d1c9e6c88f713f0e321d6b7eafb361d8b374376a', 'width': 320}, {'height': 508, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=640&format=mp4&s=00a41d1a889933b5b9434257adcbc6defedd9936', 'width': 640}, {'height': 763, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=960&format=mp4&s=e8d3cf2af66a7c9fd7810968adf3c8131bdcb2ac', 'width': 960}, {'height': 858, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?width=1080&format=mp4&s=81fe00cafb5f414825fbbec8fe2d705c74d11efc', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://preview.redd.it/evlscivtvtge1.gif?format=mp4&s=4a478c98c73b14fa6609cc7f395e31f4ea684b3c', 'width': 1264}}}}]}
How to train my own model?
3
How do I train my own model for specific tasks in my codebase? Is there a way to do this? I don't need all the translations, all the nonsense. I don't need an LLM with millions of pages of history, or Spanish, or Chinese. I just want to train one on my code. Are there any resources I can read to get started?
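For what it's worth, the usual answer here is not pre-training from scratch but parameter-efficient fine-tuning (LoRA) of an existing small model on your own files. A minimal sketch with `transformers` + `peft`, where the base model, glob pattern, and hyperparameters are all illustrative guesses:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "Qwen/Qwen2.5-Coder-1.5B"  # hypothetical choice of small code model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train small adapter matrices instead of all the weights
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Load your own source files as plain text and tokenize into blocks
ds = load_dataset("text", data_files={"train": "my_codebase/**/*.py"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

ds = ds.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("code-lora", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```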
2025-02-03T01:43:28
https://www.reddit.com/r/LocalLLaMA/comments/1igdr6v/how_to_train_my_own_model/
Responsible-Rate7466
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igdr6v
false
null
t3_1igdr6v
/r/LocalLLaMA/comments/1igdr6v/how_to_train_my_own_model/
false
false
self
3
null
A silly idea, but wanted to share: AI-assisted coding competition
11
I've been working with Cline a lot recently (still can't find a good local replacement for sonnet :/) and find myself more and more directing the agent and then sitting back and watching. I gotta say, it's a lot more exciting than the competitive e-sports "Excel Games". Anyway, got me to thinking how interesting and *educational* an AI-assisted live coding competition would be. Optimizing model choice, tool choice, context, agent selection, would be fascinating... And tailing all those logs would make for some awesome Matrix-style battlestations! That'll bring in the views! Does such a thing exist in any form?
2025-02-03T01:56:50
https://www.reddit.com/r/LocalLLaMA/comments/1ige0my/a_silly_idea_but_wanted_to_share_aiassisted/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ige0my
false
null
t3_1ige0my
/r/LocalLLaMA/comments/1ige0my/a_silly_idea_but_wanted_to_share_aiassisted/
false
false
self
11
null
Can I run it?
1
Hi, I currently have llama3:latest 4.7GB (modified 9 months ago), llama-pro:latest 4.7GB (same time), wizard-math:13b 7.4GB and wizard-math:latest 4.1GB (2 months ago). I was wondering if using DeepSeek R1 (7B-8B or 14B) would perform better than these smaller models. I have a 4070 FE (12 GB). I mainly use it for coding assistance (very small projects); I currently use ChatGPT for help, but I'm tired of paying $20 a month to use it just a handful of times a day.
2025-02-03T02:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1igekbw/can_i_run_it/
Gualuigi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igekbw
false
null
t3_1igekbw
/r/LocalLLaMA/comments/1igekbw/can_i_run_it/
false
false
self
1
null
Anyone know of any trackers or torrent sites that have llm models and training data sets?
5
Looking for torrent or magnet links for models. Having trouble with huggingface getting blocked and looking for alternatives. Also, for the huge models, torrent would be nice.
2025-02-03T02:34:55
https://www.reddit.com/r/LocalLLaMA/comments/1igerad/anyone_know_of_any_trackers_or_torrent_sites_that/
bidet_enthusiast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igerad
false
null
t3_1igerad
/r/LocalLLaMA/comments/1igerad/anyone_know_of_any_trackers_or_torrent_sites_that/
false
false
self
5
null
Language Models and World Models, A Philosophy
1
2025-02-03T02:41:59
https://hylaeansea.org/blog/2025/02/01/World-Models-and-Language-Models.html
kyjohnso
hylaeansea.org
1970-01-01T00:00:00
0
{}
1igew7a
false
null
t3_1igew7a
/r/LocalLLaMA/comments/1igew7a/language_models_and_world_models_a_philosophy/
false
false
default
1
null
World Models and Language Models, a Philosophy
1
2025-02-03T02:45:24
https://hylaeansea.org/blog/2025/02/01/World-Models-and-Language-Models.html
kyjohnso
hylaeansea.org
1970-01-01T00:00:00
0
{}
1igeyi3
false
null
t3_1igeyi3
/r/LocalLLaMA/comments/1igeyi3/world_models_and_language_models_a_philosophy/
false
false
default
1
null
Phi 4 is so underrated
236
As a GPU poor pleb with but a humble M4 Mac mini (24 GB RAM), my local LLM options are limited. As such, I've found Phi 4 ([Q8, Unsloth variant](https://huggingface.co/unsloth/phi-4-GGUF)) to be an extremely capable model for my hardware. My use cases are general knowledge questions and coding prompts. It's at least as good as GPT 3.5 in my experience and sets me in the right direction more often than not. I can't speak to benchmarks because I don't really understand (or frankly care about) any of them. It's just a good model for the things I need a model for. And no, Microsoft isn't paying me. I'm just a fan. 🙂
2025-02-03T02:49:57
https://www.reddit.com/r/LocalLLaMA/comments/1igf1vi/phi_4_is_so_underrated/
jeremyckahn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igf1vi
false
null
t3_1igf1vi
/r/LocalLLaMA/comments/1igf1vi/phi_4_is_so_underrated/
false
false
self
236
{'enabled': False, 'images': [{'id': 'NRd1Hpmz6jceghnlu_DCdUk-FN3Qjzxr8Hlf6wxxuP4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8ujSOgIl6Eex7OK8VE6FBHHfCBhXB85R954QPnkELNw.jpg?width=108&crop=smart&auto=webp&s=df01119d3eeb7422d7ab698c88ad2fe9a60f153a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8ujSOgIl6Eex7OK8VE6FBHHfCBhXB85R954QPnkELNw.jpg?width=216&crop=smart&auto=webp&s=c762e3c693157fa18da2639ce3bd3ac75758ad72', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8ujSOgIl6Eex7OK8VE6FBHHfCBhXB85R954QPnkELNw.jpg?width=320&crop=smart&auto=webp&s=f45b1287c191a46fd5e36e444c54bbeba06dac81', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8ujSOgIl6Eex7OK8VE6FBHHfCBhXB85R954QPnkELNw.jpg?width=640&crop=smart&auto=webp&s=30c61d6a6f767ab3879952aa278c4c5c8d3bec24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8ujSOgIl6Eex7OK8VE6FBHHfCBhXB85R954QPnkELNw.jpg?width=960&crop=smart&auto=webp&s=6722533cca88d99bfe299e86fa28c8d5d78843fc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8ujSOgIl6Eex7OK8VE6FBHHfCBhXB85R954QPnkELNw.jpg?width=1080&crop=smart&auto=webp&s=59558d2bdadd1fa414d2b2ecddccaf2dc0c2f33a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8ujSOgIl6Eex7OK8VE6FBHHfCBhXB85R954QPnkELNw.jpg?auto=webp&s=cd00491fdd6619bc11e4f7466b6089f3d869c545', 'width': 1200}, 'variants': {}}]}
How is your experience with Intel ipex-llm ?
1
[removed]
2025-02-03T02:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1igf2o1/how_is_your_experience_with_intel_ipexllm/
noredditr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igf2o1
false
null
t3_1igf2o1
/r/LocalLLaMA/comments/1igf2o1/how_is_your_experience_with_intel_ipexllm/
false
false
self
1
null
How to prevent DeepSeek Distill's from thinking infinitely?
3
Mainly using quants (Q5 lately) of *"DeepSeek-R1-Distill-Qwen-32B"*. The model is incredible, but very often it gets stuck in an infinite loop of thinking and second-guessing itself. This repeats until the context inevitably catches up to the amount of memory you have available. Have any of you managed to tame this behavior yet?
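One community-style workaround (hedged, not an official fix) is two-phase generation: give the thinking phase a hard token budget, and if the model never emits `</think>`, close the tag yourself and let it write the final answer. A simplified sketch with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

prompt = "Why is the sky blue?"
inputs = tok(prompt, return_tensors="pt").to(model.device)

# Phase 1: hard budget for the reasoning phase.
out = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
text = tok.decode(out[0], skip_special_tokens=True)

# Phase 2: if it's still "thinking", force the closing tag and continue.
if "</think>" not in text:
    text += "\n</think>\n"
    inputs = tok(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
    text = tok.decode(out[0], skip_special_tokens=True)

print(text)
```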
2025-02-03T02:55:06
https://www.reddit.com/r/LocalLLaMA/comments/1igf5gi/how_to_prevent_deepseek_distills_from_thinking/
ForsookComparison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igf5gi
false
null
t3_1igf5gi
/r/LocalLLaMA/comments/1igf5gi/how_to_prevent_deepseek_distills_from_thinking/
false
false
self
3
null
Running on a mining rig or multi-gpu mobo possible?
2
I've got a few mining rigs collecting dust and am wondering if they can run local LLMs. Most of these are 6-7x GTX 1060 3GB or RX 480 4GB cards. All of these run on x1 PCIe risers or mining mobos. I also have a parallel-GPU mobo that I think lets me run 4x PCIe x4; it daisy-chains to a PC mobo using some fast cards. Looking at playing with DeepSeek before they ban it. Also, how much hard drive space and RAM is needed if I have GPU RAM?
2025-02-03T02:56:20
https://www.reddit.com/r/LocalLLaMA/comments/1igf6ah/running_on_a_mining_rig_or_multigpu_mobo_possible/
TendieRetard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igf6ah
false
null
t3_1igf6ah
/r/LocalLLaMA/comments/1igf6ah/running_on_a_mining_rig_or_multigpu_mobo_possible/
false
false
self
2
null
Hi local ai
1
[removed]
2025-02-03T02:57:02
https://www.reddit.com/r/LocalLLaMA/comments/1igf6se/hi_local_ai/
noredditr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igf6se
false
null
t3_1igf6se
/r/LocalLLaMA/comments/1igf6se/hi_local_ai/
false
false
self
1
null
Looking for Multi-GPU Server Advice
1
[removed]
2025-02-03T03:07:39
https://www.reddit.com/r/LocalLLaMA/comments/1igfebb/looking_for_multigpu_server_advice/
BeerAndRaptors
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igfebb
false
null
t3_1igfebb
/r/LocalLLaMA/comments/1igfebb/looking_for_multigpu_server_advice/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Pv93V7EQDl6NxSq7fw8VCwNSfdiu_uJs0nb1ECZW7-U', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/JQwUggtlO-TVy05tTkGDkWYyndLi4r40mHN8tHB9mEc.jpg?width=108&crop=smart&auto=webp&s=12145357ccc2d81aead6600db621b88ec83422d1', 'width': 108}, {'height': 156, 'url': 'https://external-preview.redd.it/JQwUggtlO-TVy05tTkGDkWYyndLi4r40mHN8tHB9mEc.jpg?width=216&crop=smart&auto=webp&s=6d2de4ba7293260878a39ec54f82c324970cb4c9', 'width': 216}, {'height': 232, 'url': 'https://external-preview.redd.it/JQwUggtlO-TVy05tTkGDkWYyndLi4r40mHN8tHB9mEc.jpg?width=320&crop=smart&auto=webp&s=101ed58e2e6c510c44e97961f6e5de557f92247a', 'width': 320}], 'source': {'height': 290, 'url': 'https://external-preview.redd.it/JQwUggtlO-TVy05tTkGDkWYyndLi4r40mHN8tHB9mEc.jpg?auto=webp&s=bc021af5c998212499f1746431529afcd668434c', 'width': 400}, 'variants': {}}]}
Tool use in thinking tags
1
Has anyone experimented with tool calling inside thinking tags? It seems like the next step is to integrate tool call-and-response into the reasoning process.
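A rough sketch of the loop this implies: generate reasoning until a tool-call marker appears, run the tool, splice the result back into the trace, and resume. The markers, tool registry, and `generate` callable below are all hypothetical:

```python
import re

TOOL_RE = re.compile(r"<tool>(\w+)\((.*?)\)</tool>")

def run_tool(name: str, arg: str) -> str:
    tools = {"search": lambda q: f"[results for {q}]"}  # stub registry
    return tools[name](arg)

def reason_with_tools(generate, prompt: str, max_rounds: int = 5) -> str:
    """`generate` is any callable that continues text until </think> or a tool call."""
    text = prompt + "<think>"
    for _ in range(max_rounds):
        chunk = generate(text)   # continue reasoning
        text += chunk
        m = TOOL_RE.search(chunk)
        if not m:
            break  # model finished thinking without asking for a tool
        # Splice the tool result into the reasoning trace and resume
        text += f"\n<tool_result>{run_tool(m.group(1), m.group(2))}</tool_result>\n"
    return text
```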
2025-02-03T03:20:20
https://www.reddit.com/r/LocalLLaMA/comments/1igfmu7/tool_use_in_thinking_tags/
Disastrous_Ad8959
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igfmu7
false
null
t3_1igfmu7
/r/LocalLLaMA/comments/1igfmu7/tool_use_in_thinking_tags/
false
false
self
1
null
As a noob, I'm seeing a lot of what seems to me like FUD being spread about DeepSeek; am I wrong, or is it warranted?
1
[removed]
2025-02-03T03:58:27
https://www.reddit.com/r/LocalLLaMA/comments/1iggcik/as_a_noob_im_seeing_a_lot_of_what_seems_to_me_fud/
TendieRetard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iggcik
false
null
t3_1iggcik
/r/LocalLLaMA/comments/1iggcik/as_a_noob_im_seeing_a_lot_of_what_seems_to_me_fud/
false
false
self
1
{'enabled': False, 'images': [{'id': '5UNlK7tWTIdSAg5IekKePPNWX_tuI6f507qkpm3vy1w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NTpoywrYlCJa3jKjDg9D7lrDy_7FNtMQlLyX5AIQfRo.jpg?width=108&crop=smart&auto=webp&s=749a4f3bf6dbefbd8f9b80fb34d2fefcff368ccd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NTpoywrYlCJa3jKjDg9D7lrDy_7FNtMQlLyX5AIQfRo.jpg?width=216&crop=smart&auto=webp&s=dee6960479df11c1246632e8ec4253ff81959100', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NTpoywrYlCJa3jKjDg9D7lrDy_7FNtMQlLyX5AIQfRo.jpg?width=320&crop=smart&auto=webp&s=2067ea6cb181cf75016df1cbdeedd34a9e7c9bf2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NTpoywrYlCJa3jKjDg9D7lrDy_7FNtMQlLyX5AIQfRo.jpg?width=640&crop=smart&auto=webp&s=4f74336645b6894077cf46026f0e048d4b4f7c1c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NTpoywrYlCJa3jKjDg9D7lrDy_7FNtMQlLyX5AIQfRo.jpg?width=960&crop=smart&auto=webp&s=285e644698daed06e76ea64c154bc33a587c91d4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NTpoywrYlCJa3jKjDg9D7lrDy_7FNtMQlLyX5AIQfRo.jpg?width=1080&crop=smart&auto=webp&s=6a749aff9cce0b5152a7019bd3f0dc0c1bf7dfd5', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/NTpoywrYlCJa3jKjDg9D7lrDy_7FNtMQlLyX5AIQfRo.jpg?auto=webp&s=1fa4e0b3ff6dc66f23c0789dc2d886f3142c62bc', 'width': 1200}, 'variants': {}}]}
Make your Mistral Small 3 24B Think like R1-distilled models
227
I've been seeing a lot of posts about the Mistral Small 3 24B model, and I remember having this CoT system prompt in my collection. I might as well try it out on this new model. I haven't used it for a long time since I switched to R1-distilled-32b. I'm not the original writer of this prompt; I've rewritten some parts of it, and I can't remember where I got it from. System prompt: [https://pastebin.com/sVMrgZBp](https://pastebin.com/sVMrgZBp) https://i.redd.it/d1geatbckuge1.gif https://i.redd.it/hyrryecnkuge1.gif
2025-02-03T04:01:42
https://www.reddit.com/r/LocalLLaMA/comments/1iggetv/make_your_mistral_small_3_24b_think_like/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iggetv
false
null
t3_1iggetv
/r/LocalLLaMA/comments/1iggetv/make_your_mistral_small_3_24b_think_like/
false
false
https://b.thumbs.redditm…FKtvzC60mOdY.jpg
227
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
Chain of Thought work better
1
[removed]
2025-02-03T04:22:15
https://www.reddit.com/r/LocalLLaMA/comments/1iggsaa/chain_of_thought_work_better/
sutlac
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iggsaa
false
null
t3_1iggsaa
/r/LocalLLaMA/comments/1iggsaa/chain_of_thought_work_better/
false
false
self
1
null
Local DeepSeek R1 for a noob
0
I'm in my first few weeks of Calculus 2, and having DeepSeek specifically has been a godsend. It gets answers correct more often than ChatGPT, and even when it gets answers wrong, there's a higher chance it will fix them and eventually (fairly quickly) get them right (much more often than ChatGPT). As an example, I'm literally in the process of working on a center-of-mass problem that ChatGPT hasn't gotten right in multiple attempts and is now just ending the response before it finishes. Plugged it into R1 and it got parts 1 and 2 right on the first try.

The worst part about R1 is its stability. It seems like it's so popular the service goes down fairly often. As I spend a lot (most?) of my day working on school, having it readily available is important.

I have a small homelab running a few services, but nothing like this. I also don't have a computer that can run it *right now*, but I'm interested in upgrading and converting my gaming PC (for the time being lol). Current specs are an AMD 5600X CPU, 16GB RAM, 2TB NVMe, and a 3060 Ti.

If money was somewhat limited, what would you do? I'm willing to invest, but as a student a 5090 is kind of out of the question. Or, if there's another option, I would be open to it. ChatGPT is honestly frustrating to work with. I've seen quantized versions of R1 run on more reasonable rigs, but I also barely understand the technical side.
2025-02-03T04:27:56
https://www.reddit.com/r/LocalLLaMA/comments/1iggvvh/local_deepsink_r1_for_a_noob/
Fluffy_Extension_420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iggvvh
false
null
t3_1iggvvh
/r/LocalLLaMA/comments/1iggvvh/local_deepsink_r1_for_a_noob/
false
false
self
0
null
Mistral, Qwen, Deepseek
372
Aren't you noticing a pattern? Companies outside the USA - Mistral AI, Qwen, DeepSeek - are releasing reliable models that are accessible, smaller, and open-source, unlike most US-based companies.
2025-02-03T04:28:53
https://www.reddit.com/r/LocalLLaMA/comments/1iggwff/mistral_qwen_deepseek/
Stargazer-8989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iggwff
false
null
t3_1iggwff
/r/LocalLLaMA/comments/1iggwff/mistral_qwen_deepseek/
false
false
self
372
null
I made R1-distilled-llama-8B significantly smarter by accident.
1
[removed]
2025-02-03T04:46:31
https://www.reddit.com/r/LocalLLaMA/comments/1igh7kf/i_made_r1distilledllama8b_significantly_smarter/
Valuable-Run2129
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igh7kf
false
null
t3_1igh7kf
/r/LocalLLaMA/comments/1igh7kf/i_made_r1distilledllama8b_significantly_smarter/
false
false
self
1
null
Nvidia 50 series GPU performance for LLM inference and finetuning
1
[removed]
2025-02-03T04:51:15
https://www.reddit.com/r/LocalLLaMA/comments/1ighaje/nvidia_50_series_gpu_performance_for_llm/
Fantastic_Quiet1838
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ighaje
false
null
t3_1ighaje
/r/LocalLLaMA/comments/1ighaje/nvidia_50_series_gpu_performance_for_llm/
false
false
self
1
null
What hardware is needed to run DeepSeek R1 671B?
1
What kind of hardware is needed? I'm trying to get an estimate on the cost of hosting this on a cloud server to see if it's viable.
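Back-of-envelope for the weights alone (R1 was trained natively in FP8; KV cache and activations come on top):

```python
# Rough weight-memory math for DeepSeek-R1 671B.
params = 671e9
for label, bytes_per_param in [("fp8 (native)", 1.0), ("q4 quant", 0.5)]:
    print(f"{label}: ~{params * bytes_per_param / 1e9:.0f} GB of weights")
# fp8 (native): ~671 GB -> roughly 8x 80GB-class datacenter GPUs
# q4 quant:     ~336 GB -> a large multi-GPU box, or CPU RAM at low speed
```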
2025-02-03T05:04:17
https://www.reddit.com/r/LocalLLaMA/comments/1ighj1n/what_hardware_needed_to_run_deepseek_r1_671b/
TheHolyToxicToast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ighj1n
false
null
t3_1ighj1n
/r/LocalLLaMA/comments/1ighj1n/what_hardware_needed_to_run_deepseek_r1_671b/
false
false
self
1
null
OpenAI released Deep Research
1
2025-02-03T05:21:01
https://openai.com/index/introducing-deep-research/
Muted_Broccoli5627
openai.com
1970-01-01T00:00:00
0
{}
1ightev
false
null
t3_1ightev
/r/LocalLLaMA/comments/1ightev/openai_released_deep_research/
false
false
default
1
null
Deep seek v3
1
[removed]
2025-02-03T05:29:59
https://www.reddit.com/r/LocalLLaMA/comments/1ighys7/deep_seek_v3/
Internal-Sir2294
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ighys7
false
null
t3_1ighys7
/r/LocalLLaMA/comments/1ighys7/deep_seek_v3/
false
false
self
1
null
Best simple GenAI benchmarking tools?
1
[removed]
2025-02-03T05:59:13
https://www.reddit.com/r/LocalLLaMA/comments/1igifmr/best_simple_genai_benchmarking_tools/
Revolaition
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igifmr
false
null
t3_1igifmr
/r/LocalLLaMA/comments/1igifmr/best_simple_genai_benchmarking_tools/
false
false
self
1
null
DeepSeek-R1 HR Bot Making Policy Responses Longer and Adding Context Itself – How to Fix It?
1
First time trying to build a chatbot with RAG. I tried fixing it with ChatGPT, but the problem is that DeepSeek adds more context and doesn't limit itself to the documents. For example, if I ask "What is the notice period in the company?", it answers: "Sure! The notice period in our company is 30 days. It must be served before an employee leaves the company. This notice period ensures everyone is accounted for and prevents unnecessary termination. It's an important part of maintaining a positive work environment." It's adding extra content.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sentence_transformers import SentenceTransformer
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.schema import Document
import chromadb

# ✅ Load Local DeepSeek Model
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
MODEL_NAME = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).to(DEVICE)

# ✅ Load Embedding Model for Retrieval
embedder = SentenceTransformer("BAAI/bge-small-en-v1.5")  # Best for retrieval

# ✅ Set Up ChromaDB for Local Policy Storage
chroma_client = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = chroma_client.get_or_create_collection(name="hr_policies")


class HRAssistant:
    def __init__(self):
        self.db = Chroma(
            persist_directory="./chroma_db",
            embedding_function=HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2"),
        )

    def add_policies(self, policies):
        """Adds HR policies to the database."""
        documents = [Document(page_content=p) for p in policies]
        self.db.add_documents(documents)

    def retrieve_best_policy(self, query):
        """Finds the best policy match using embeddings (RAG Step 1)."""
        docs = self.db.similarity_search(query, k=1)
        if not docs:
            return None
        return docs[0].page_content

    def generate_response(self, query):
        """Retrieves a policy and generates a conversational response using DeepSeek-R1."""
        best_policy = self.retrieve_best_policy(query)
        if not best_policy:
            return ("I'm sorry, I couldn't find a specific policy for your request. "
                    "Let me know if you need help with something else!")

        # ✅ STRICT INSTRUCTIONS: NO EXTRA INTERPRETATIONS
        input_text = f"""
You are a friendly HR assistant. Your job is to answer employee questions in a **conversational, warm, and engaging** way, based only on the given HR policy.

### REFERENCE INFORMATION ###
{best_policy}

### USER QUESTION ###
{query}

### INSTRUCTIONS ###
- **Only rephrase the policy, that's it, nothing else.**
- **Don't add any other words or sentences.**
- **Rephrase the policy naturally in a conversational tone, without altering its meaning.**
- **Ensure it is clear that employees must serve the notice period before leaving—do NOT add extra context.**
- **Do NOT explain, summarize, or add reasoning.**
- **Do NOT justify or explain why the policy exists.**
- **Do NOT change key words that define employee obligations.**
- **Do NOT add phrases like 'ensuring everyone is accounted for' or 'prevents unnecessary termination.'**
- **Do NOT assume why employees are leaving or their personal situation.**
- **Strictly return the policy in a friendly, conversational way.**
- **Do NOT add extra explanations, advice, or commentary.**
- **Do NOT generate policies that do not exist in the system.**
- **Do NOT infer missing details or speculate on policy intent.**
- **If a request involves sensitive information, return: "I'm sorry, but I can’t provide that information."**
- **If the policy request is unclear, ask for clarification instead of assuming.**
- **If multiple policies apply, return the most relevant one.**
- **If a policy is outdated or under revision, return: "This policy is currently under review. Please check back later."**
- **Maintain required action words such as 'must serve' or 'before leaving'.**
- **If the policy does not match, return: "I'm sorry, I couldn't find a specific policy for your request."**

### ANSWER:
"""

        # ✅ Generate Response Using DeepSeek-R1
        inputs = tokenizer(input_text, return_tensors="pt", padding=True).to(DEVICE)
        output = model.generate(
            **inputs,
            max_new_tokens=20,   # 🔹 Keeps responses concise
            temperature=0.1,     # 🔹 Balances variation without adding assumptions
            top_p=0.8,           # 🔹 Narrows word selection
            top_k=30,            # 🔹 Limits randomness
            do_sample=True,      # 🔹 Ensures conversational tone
            pad_token_id=tokenizer.eos_token_id,
        )
        response_text = tokenizer.decode(output[0], skip_special_tokens=True).strip()
        return response_text


if __name__ == "__main__":
    assistant = HRAssistant()

    # ✅ Add Policies to Vector DB (Run Once)
    policies = [
        "Leave Policy: Employees are entitled to 12 paid leaves per year. Any additional leave will be considered unpaid leave.",
        "Resignation Notice Period: Employees must serve a notice period of 30 days before leaving the company.",
        "Paternity Leave: Male employees are entitled to 10 days of paid paternity leave.",
    ]
    assistant.add_policies(policies)

    # ✅ Query the RAG System
    user_query = input("Ask about HR policies: ")
    print(assistant.generate_response(user_query))
```

Is this the best approach, or do I need to tweak this code? Please help me out with this.
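One likely culprit worth checking in the snippet above, independent of the prompt wording: R1-distill models emit a `<think>...</think>` reasoning block before the answer, so `max_new_tokens=20` is usually spent entirely inside that block, and the decoded output also contains the whole prompt. A small hedged post-processing sketch:

```python
import re

def strip_reasoning(text: str) -> str:
    """Drop the <think>...</think> block that R1-distill models prepend."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Inside generate_response(), a larger budget lets the answer survive the
# reasoning phase; decode only the newly generated tokens, then strip:
# output = model.generate(**inputs, max_new_tokens=512, ...)
# new_tokens = output[0][inputs["input_ids"].shape[1]:]
# return strip_reasoning(tokenizer.decode(new_tokens, skip_special_tokens=True))
```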
2025-02-03T06:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1igihdm/deepseekr1_hr_bot_turning_policy_responses/
samy_here
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igihdm
false
null
t3_1igihdm
/r/LocalLLaMA/comments/1igihdm/deepseekr1_hr_bot_turning_policy_responses/
false
false
self
1
null
A new paper on LLM underthinking: it says LLMs don't think deeply enough and jump from one thought to another prematurely. Just tweaking LLMs a little could increase performance; we don't need to retrain them. Guys, what are your thoughts?
1
[removed]
2025-02-03T06:23:04
https://www.reddit.com/r/LocalLLaMA/comments/1igitb3/a_new_paper_on_llms_underthinking_paper_says_llms/
Ecstatic_Adu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igitb3
false
null
t3_1igitb3
/r/LocalLLaMA/comments/1igitb3/a_new_paper_on_llms_underthinking_paper_says_llms/
false
false
self
1
null
Morning Radio - Locally Generated Personal Morning Broadcast
1
2025-02-03T06:25:14
https://github.com/smy20011/MorningRadio
wuduzodemu
github.com
1970-01-01T00:00:00
0
{}
1igiufi
false
null
t3_1igiufi
/r/LocalLLaMA/comments/1igiufi/morning_radio_locally_generated_personal_morning/
false
false
https://b.thumbs.redditm…yQpmiGkVepCo.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/nm0MbB12ySzfO4ubcdzXCPER80VcIu9Qwn1HsmfJdUo.jpg?auto=webp&s=185921a2b39cf0871695f9352a0cdb41740a48ca', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/nm0MbB12ySzfO4ubcdzXCPER80VcIu9Qwn1HsmfJdUo.jpg?width=108&crop=smart&auto=webp&s=7eda104d1e5f85c25711f7665f4c56956c0a71a5', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/nm0MbB12ySzfO4ubcdzXCPER80VcIu9Qwn1HsmfJdUo.jpg?width=216&crop=smart&auto=webp&s=1727ecb3e653596f1f481612bef9cb704d2fb82b', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/nm0MbB12ySzfO4ubcdzXCPER80VcIu9Qwn1HsmfJdUo.jpg?width=320&crop=smart&auto=webp&s=7ea3470a4e92e9bb972c12484aea32556d6937a0', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/nm0MbB12ySzfO4ubcdzXCPER80VcIu9Qwn1HsmfJdUo.jpg?width=640&crop=smart&auto=webp&s=86dfe57958b48ede90588fe692559b9bc0184778', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/nm0MbB12ySzfO4ubcdzXCPER80VcIu9Qwn1HsmfJdUo.jpg?width=960&crop=smart&auto=webp&s=d160f05f62261391be28ed555805904500f88393', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/nm0MbB12ySzfO4ubcdzXCPER80VcIu9Qwn1HsmfJdUo.jpg?width=1080&crop=smart&auto=webp&s=1d520e5ea44f55c3e16e81a2e656c57686e9aa31', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'hwEaPmd4JouEQI4OLtepoUhA8WQhb9hACAP6MuP7my8'}], 'enabled': False}
Underthinking of o1-like LLMs
1
Key Points:

* Frequent Switching: DeepSeek-R1 often switches between different reasoning approaches without fully exploring them, particularly on difficult maths problems.
* Inefficient Reasoning: Incorrect answers often involve more tokens, but these don't lead to better results.
* Early Abandonment of Correct Thoughts: Surprisingly, models often start with a correct idea, only to abandon it.

This 'underthinking' shows that simply scaling up models isn't enough; we need to enhance how they explore and deepen their reasoning. It's about quality, not just quantity of thought!

[https://arxiv.org/pdf/2501.18585](https://arxiv.org/pdf/2501.18585)
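The paper's proposed mitigation (TIP, a thought-switching penalty) is a decoding-time bias against phrases that start a new line of thought. A loose sketch of the same idea using the `sequence_bias` option of `transformers` `generate`; the model choice, phrase list, and penalty value are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # hypothetical choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

# Penalize token sequences that typically open a new thought.
switch_phrases = ["Alternatively", "Wait", "Let me try another"]
sequence_bias = {
    tuple(tok(p, add_special_tokens=False).input_ids): -5.0
    for p in switch_phrases
}

inputs = tok("Solve: ...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=2048, sequence_bias=sequence_bias)
print(tok.decode(out[0], skip_special_tokens=True))
```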
2025-02-03T06:34:03
https://www.reddit.com/r/LocalLLaMA/comments/1igiyxw/underthinking_of_o1like_llms/
Xiwei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igiyxw
false
null
t3_1igiyxw
/r/LocalLLaMA/comments/1igiyxw/underthinking_of_o1like_llms/
false
false
self
1
null
Best Local LLM for Coding on ThinkPad T14 Gen 1 (i7-10510U, 32GB RAM, MX330)?
1
I want to run a local LLM using Ollama for coding assistance. I saw people recommending deepseek-coder-v2, but since it's a 16B model, I’m unsure if my laptop can handle it efficiently. My machine specs:

Laptop: ThinkPad T14 Gen 1
CPU: Intel Core i7-10510U (4C/8T, 1.8–4.9GHz)
RAM: 32GB DDR4 (2666MHz)
GPU: Nvidia GeForce MX330 (2GB VRAM)
OS: Fedora 41

Would deepseek-coder-v2 run well on this setup, or should I consider a smaller model like CodeLlama 7B/13B? Are there any other lightweight LLMs optimized for coding that would work better on my hardware? Thanks in advance!
2025-02-03T06:47:18
https://www.reddit.com/r/LocalLLaMA/comments/1igj5jd/best_local_llm_for_coding_on_thinkpad_t14_gen_1/
psahu1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igj5jd
false
null
t3_1igj5jd
/r/LocalLLaMA/comments/1igj5jd/best_local_llm_for_coding_on_thinkpad_t14_gen_1/
false
false
self
1
null
Best Model Combo to serve all needs.
1
[removed]
2025-02-03T06:50:05
https://www.reddit.com/r/LocalLLaMA/comments/1igj6x0/best_model_combo_to_serve_all_needs/
dryEther
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igj6x0
false
null
t3_1igj6x0
/r/LocalLLaMA/comments/1igj6x0/best_model_combo_to_serve_all_needs/
false
false
self
1
null
The naming of DeepSeek-R1-Distill-Llama-70B violated Llama license
1
Based on my understanding of the Llama license, if you release a model based on Llama, you need to put "Llama" at the beginning of the model name. For example: [https://huggingface.co/nvidia/Llama-3_1-Nemotron-51B-Instruct](https://huggingface.co/nvidia/Llama-3_1-Nemotron-51B-Instruct)

[https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

> i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. **If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.**

Hope DeepSeek will fix this soon. In my opinion, they should name their distilled models with the base model first, because I've found quite a few people confusing these distillation models with the main V3/R1 models.
2025-02-03T06:56:36
https://www.reddit.com/r/LocalLLaMA/comments/1igjabj/the_naming_of_deepseekr1distillllama70b_violated/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igjabj
false
null
t3_1igjabj
/r/LocalLLaMA/comments/1igjabj/the_naming_of_deepseekr1distillllama70b_violated/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/p7_8MJMYT8rgIvDCvySs8vrq4fauiYD9tGA-kw5q-G8.jpg?auto=webp&s=69e2b839958c415154eb07be8586f7dfee09353a', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/p7_8MJMYT8rgIvDCvySs8vrq4fauiYD9tGA-kw5q-G8.jpg?width=108&crop=smart&auto=webp&s=2cce91c3a55f7f32efe717bd2ad50fdaaa6abba7', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/p7_8MJMYT8rgIvDCvySs8vrq4fauiYD9tGA-kw5q-G8.jpg?width=216&crop=smart&auto=webp&s=6745e6746ddc4d030555d8d5bed48b4cfb9aceb0', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/p7_8MJMYT8rgIvDCvySs8vrq4fauiYD9tGA-kw5q-G8.jpg?width=320&crop=smart&auto=webp&s=bd5d2bcda3db2203d7f69f1ead555e7742fa1987', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/p7_8MJMYT8rgIvDCvySs8vrq4fauiYD9tGA-kw5q-G8.jpg?width=640&crop=smart&auto=webp&s=3d29c85df2081cdf148ce1db3cb3c939ccd9d31d', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/p7_8MJMYT8rgIvDCvySs8vrq4fauiYD9tGA-kw5q-G8.jpg?width=960&crop=smart&auto=webp&s=82efedf1438fe29f6c06f1c8752896b38282b215', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/p7_8MJMYT8rgIvDCvySs8vrq4fauiYD9tGA-kw5q-G8.jpg?width=1080&crop=smart&auto=webp&s=418fdcf25f0d51ec9e49bdd2508ff23ff89cf3f6', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'OD24zNPGI8KTL5Ir3rVbu5oxO4CvwWM-TYZVJ2fyq-g'}], 'enabled': False}
What is the current best vision model for text extraction from PDFs? I think it's Gemini Flash 2.0
1
I was able to get satisfactory extraction through its CoT. Is there anything else you feel might be better?
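For reference, a minimal extraction call against Gemini via the `google-generativeai` package might look like this (API key, file name, and prompt are placeholders; the File API accepts PDFs directly):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-exp")

# Upload the PDF once, then reference it in the prompt.
pdf = genai.upload_file("document.pdf")
resp = model.generate_content([pdf, "Extract all text from this PDF verbatim."])
print(resp.text)
```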
2025-02-03T06:56:43
https://www.reddit.com/r/LocalLLaMA/comments/1igjadx/what_is_the_current_best_vision_model_for_text/
Existing-Pay7076
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igjadx
false
null
t3_1igjadx
/r/LocalLLaMA/comments/1igjadx/what_is_the_current_best_vision_model_for_text/
false
false
self
1
null
Is it just me waiting for an open-source o3-like model?
1
.
2025-02-03T07:13:43
https://www.reddit.com/r/LocalLLaMA/comments/1igjjat/is_it_just_me_waiting_for_open_source_o3_like/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igjjat
false
null
t3_1igjjat
/r/LocalLLaMA/comments/1igjjat/is_it_just_me_waiting_for_open_source_o3_like/
false
false
self
1
null
What is the best model for function calling that can also do conversation
1
I am making an LLM assistant that has the ability to use functions within a Python script, but I also want to be able to chat with it. I have tried llama3-groq-tool-use, which works really well for tool calling but can be a bit stupid sometimes when chatting.
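A common pattern is to let one model do both: pass the tools on every turn, execute them only when the reply contains tool calls, and otherwise treat the reply as plain chat. A minimal sketch with the `ollama` Python client (model name and tool are illustrative):

```python
import ollama

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub tool

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
resp = ollama.chat(model="llama3.1", messages=messages, tools=tools)

calls = resp["message"].get("tool_calls") or []
if calls:
    # Tool path: execute the call and feed the result back for a final answer
    args = calls[0]["function"]["arguments"]
    messages.append(resp["message"])
    messages.append({"role": "tool", "content": get_weather(**args)})
    resp = ollama.chat(model="llama3.1", messages=messages)

print(resp["message"]["content"])  # plain-chat replies land here directly
```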
2025-02-03T07:30:53
https://www.reddit.com/r/LocalLLaMA/comments/1igjrog/what_is_the_best_model_for_function_calling_that/
shamboozles420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igjrog
false
null
t3_1igjrog
/r/LocalLLaMA/comments/1igjrog/what_is_the_best_model_for_function_calling_that/
false
false
self
1
null
WTH is that lol
1
[removed]
2025-02-03T07:54:00
[deleted]
1970-01-01T00:00:00
0
{}
1igk2va
false
null
t3_1igk2va
/r/LocalLLaMA/comments/1igk2va/wth_is_that_lol/
false
false
default
1
null
What AI model runs on a standard M4 Mac mini?
1
[removed]
2025-02-03T07:54:01
https://www.reddit.com/r/LocalLLaMA/comments/1igk2vl/what_ai_model_runs_on_standart_m4_mac_mini/
No_Operation_7139
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igk2vl
false
null
t3_1igk2vl
/r/LocalLLaMA/comments/1igk2vl/what_ai_model_runs_on_standart_m4_mac_mini/
false
false
self
1
null
Need Advice on Model Loading Speed
1
Hey everyone, I currently have 3 GPUs:

• 30HX (6GB VRAM)
• 30HX (6GB VRAM)
• 2080 Super (8GB VRAM)

Total: 20GB VRAM

Rest of the setup:

• 8GB RAM
• M.2 2230
• Mining motherboard (supports only one DDR3 RAM slot)

I’m about to add 3 more GPUs to my system (total of 38GB VRAM). Right now, I’m running DeepSeek 14B, Gemma2 9B, and Qwen2.5 Coder 7B. Token speed is great, except for DeepSeek 14B, but that should improve once the new GPUs arrive.

One thing I’ve noticed is that RAM isn’t really being used when a model is running, even though everyone says system RAM is important, so the DDR3 limitation isn’t a big deal. However, my main issue is model loading times—switching between models takes way too long.

I’m planning to use API calls, with each model handling a different task within the same program. Given my setup, what’s the best way to reduce model loading time? Any insights would be appreciated!

Edit: my M.2 has a max read speed of 500 MB/s.
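If these models are served through Ollama, one knob that directly targets reload time is `keep_alive`: preloading each model with `keep_alive: -1` keeps it resident, so switching costs VRAM instead of disk reads. A sketch against the stock REST API (this only helps if all three models fit in your combined VRAM):

```python
import requests

# An empty generate request loads the model; keep_alive=-1 means
# "never unload", so later API calls skip the slow load from the M.2.
for model in ["deepseek-r1:14b", "gemma2:9b", "qwen2.5-coder:7b"]:
    requests.post("http://localhost:11434/api/generate",
                  json={"model": model, "keep_alive": -1})
```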
2025-02-03T08:06:33
https://www.reddit.com/r/LocalLLaMA/comments/1igk8x6/need_advice_on_model_loading_speed/
Chemical_Elk7746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igk8x6
false
null
t3_1igk8x6
/r/LocalLLaMA/comments/1igk8x6/need_advice_on_model_loading_speed/
false
false
self
1
null
Speech to Text - Transcribing
1
Using LM Studio, and previously AnythingLLM, but is there a model/way to transcribe speech to text?
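As far as I know, LM Studio itself serves text/chat models rather than transcription, so the usual answer is to run Whisper locally alongside it. A minimal sketch with the `faster-whisper` package; the model size and file name are placeholders:

```python
from faster_whisper import WhisperModel

# "base" is small and CPU-friendly; larger models are more accurate.
model = WhisperModel("base", device="cpu", compute_type="int8")
segments, info = model.transcribe("meeting.mp3")

print(f"Detected language: {info.language}")
for seg in segments:
    print(f"[{seg.start:.1f}s -> {seg.end:.1f}s] {seg.text}")
```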
2025-02-03T08:06:33
https://www.reddit.com/r/LocalLLaMA/comments/1igk8x8/speech_to_text_transcribing/
uber-linny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igk8x8
false
null
t3_1igk8x8
/r/LocalLLaMA/comments/1igk8x8/speech_to_text_transcribing/
false
false
self
1
null
Open DeepResearch (alternative)
1
[removed]
2025-02-03T08:22:58
https://www.reddit.com/r/LocalLLaMA/comments/1igkgo9/open_deepresearch_alternative/
Fluffy-Ad3495
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igkgo9
false
null
t3_1igkgo9
/r/LocalLLaMA/comments/1igkgo9/open_deepresearch_alternative/
false
false
self
1
null
Thinking of building a small H200 server need advice
1
[removed]
2025-02-03T08:30:55
https://www.reddit.com/r/LocalLLaMA/comments/1igkkd3/thinking_of_building_a_small_h200_server_need/
Hatter_The_Mad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igkkd3
false
null
t3_1igkkd3
/r/LocalLLaMA/comments/1igkkd3/thinking_of_building_a_small_h200_server_need/
false
false
self
1
null
Thinking of building a small budget AI rig.
0
Ryzen 7 7600
64GB DDR4
2x Radeon 7800 XT
2G M.2

This is for hobby/home use. Thoughts?
2025-02-03T08:33:35
https://www.reddit.com/r/LocalLLaMA/comments/1igklnv/thinking_of_building_a_small_budget_ai_rig/
TwoWrongsAreSoRight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igklnv
false
null
t3_1igklnv
/r/LocalLLaMA/comments/1igklnv/thinking_of_building_a_small_budget_ai_rig/
false
false
self
0
null
Phi4 pretending to be gpt
0
Hello, am I the only one with the problem that Phi-4 on Ollama pretends to be GPT from OpenAI? I might be missing something, and maybe it's not a mistake, but I'd be grateful if someone could explain it to me. By the way, I looked for information online but didn't find any.
2025-02-03T08:40:39
https://www.reddit.com/r/LocalLLaMA/comments/1igkp1j/phi4_pretending_to_be_gpt/
kspepko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igkp1j
false
null
t3_1igkp1j
/r/LocalLLaMA/comments/1igkp1j/phi4_pretending_to_be_gpt/
false
false
self
0
null
A modified Zebra puzzle for CoT LLMs
1
[removed]
2025-02-03T08:45:06
https://www.reddit.com/r/LocalLLaMA/comments/1igkr42/a_modified_zebra_puzzle_for_cot_llms/
malaigo2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igkr42
false
null
t3_1igkr42
/r/LocalLLaMA/comments/1igkr42/a_modified_zebra_puzzle_for_cot_llms/
false
false
self
1
null
N00b question
1
[removed]
2025-02-03T09:23:50
https://www.reddit.com/r/LocalLLaMA/comments/1igl91y/n00b_question/
zeitgeistmcgee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igl91y
false
null
t3_1igl91y
/r/LocalLLaMA/comments/1igl91y/n00b_question/
false
false
self
1
null
noob here ! best LLM for uncensored NSFW Roleplay ?
0
(sorry for the bad English) Hey guys, I'm a total noob. I'm French, using LM Studio, and looking for what's in the title. I'm new to LLMs, backend requirements, etc. I'm looking for a model that allows long-term RP, with no repetitive answers, no infinite loops, etc., like GPT but uncensored and obedient when I ask "don't play my char / keep narrative style" xD thanks <3 I have 16GB VRAM (4080 SUPER) + R7 5800X
2025-02-03T09:28:48
https://www.reddit.com/r/LocalLLaMA/comments/1iglbcv/noob_here_best_llm_for_uncensored_nsfw_roleplay/
This_Walrus8288
self.LocalLLaMA
2025-02-03T09:47:47
0
{}
1iglbcv
false
null
t3_1iglbcv
/r/LocalLLaMA/comments/1iglbcv/noob_here_best_llm_for_uncensored_nsfw_roleplay/
false
false
nsfw
0
null
How to make the most out of local models?
0
What are some things I can do to make local models more useful in my life? Even if it involves agents or whatnot, I'd like to be more efficient with local LLMs.
2025-02-03T09:38:32
https://www.reddit.com/r/LocalLLaMA/comments/1iglfza/how_to_make_the_most_out_of_local_models/
No_Expert1801
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iglfza
false
null
t3_1iglfza
/r/LocalLLaMA/comments/1iglfza/how_to_make_the_most_out_of_local_models/
false
false
self
0
null
chat gpt is mad at deepseek
1
2025-02-03T09:38:42
https://www.reddit.com/gallery/1iglg1q
Ok_Combination_615
reddit.com
1970-01-01T00:00:00
0
{}
1iglg1q
false
null
t3_1iglg1q
/r/LocalLLaMA/comments/1iglg1q/chat_gpt_is_mad_at_deepseek/
false
false
https://b.thumbs.redditm…Me_fdzE-kFvw.jpg
1
null
Need Advice on Hardware for Running a 70B Local LLM
2
Hi everyone, I work at a company of around 100 people and we’re planning to invest in our very own local language model setup. We’d like to run the Deepseek-R1-Distill-Llama-70B model on our premises, but I’m completely new to the hardware side of things. I’ve seen some older recommendations out there, but they seem a bit outdated now. I’ve also heard that newer open source models are emerging that require less memory overall. For our current needs, however, we want to run this specific 70B parameter model. Since we won’t have all 100 people using it at the same time, I’m hoping to find a balanced, cost-effective setup that still feels future-proof. I’m not very familiar with all the technical details... what should I be looking for in terms of system specifications (like processing power, memory capacity, etc.) for a good quality-to-price solution? Any advice, recommendations, or pointers to best practices for a company like ours would be greatly appreciated! Thanks in advance for your help.
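For rough sizing, the weight math for the 70B model looks like this (back-of-envelope; KV cache for concurrent users adds on top):

```python
# Rough weight memory for Deepseek-R1-Distill-Llama-70B.
params = 70e9
for label, bytes_per_param in [("fp16", 2.0), ("q8", 1.0), ("q4", 0.5)]:
    print(f"{label}: ~{params * bytes_per_param / 1e9:.0f} GB of weights")
# fp16: ~140 GB, q8: ~70 GB, q4: ~35 GB
# Q4 fits on one 48GB card or 2x 24GB GPUs; leave headroom for the KV cache
# as the number of simultaneous users grows.
```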
2025-02-03T09:39:11
https://www.reddit.com/r/LocalLLaMA/comments/1iglg8t/need_advice_on_hardware_for_running_a_70b_local/
leonardvnhemert
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iglg8t
false
null
t3_1iglg8t
/r/LocalLLaMA/comments/1iglg8t/need_advice_on_hardware_for_running_a_70b_local/
false
false
self
2
null
🔥 Qwen2.5 Max vs DeepSeek V3: Battle of the AI Titans![R]
1
[removed]
2025-02-03T09:46:42
https://www.reddit.com/r/LocalLLaMA/comments/1igljxt/qwen25_max_vs_deepseek_v3_battle_of_the_ai_titansr/
Amanpandey046
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igljxt
false
null
t3_1igljxt
/r/LocalLLaMA/comments/1igljxt/qwen25_max_vs_deepseek_v3_battle_of_the_ai_titansr/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Q6W6TpnMEUm0bwt5ZRUDca1esUk8ylY3vd4XEY8MsNM', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/TAX7TeR250UB4WDUQEQUAqYOUcd7nFzpnLi1nTNkMrM.jpg?width=108&crop=smart&auto=webp&s=90ca02c23058df36ffc4462894b09974d206f228', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/TAX7TeR250UB4WDUQEQUAqYOUcd7nFzpnLi1nTNkMrM.jpg?width=216&crop=smart&auto=webp&s=a432bad16201c34ff316460c170a00f1c4660428', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/TAX7TeR250UB4WDUQEQUAqYOUcd7nFzpnLi1nTNkMrM.jpg?width=320&crop=smart&auto=webp&s=9787e26a4d9b208e5efb3353c64ee2e3afec3cad', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/TAX7TeR250UB4WDUQEQUAqYOUcd7nFzpnLi1nTNkMrM.jpg?width=640&crop=smart&auto=webp&s=910a26df7f08691290f3de30f2a57c03ad8da841', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/TAX7TeR250UB4WDUQEQUAqYOUcd7nFzpnLi1nTNkMrM.jpg?width=960&crop=smart&auto=webp&s=50639e776cac7612c623cdba041c0cd2fc216db4', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/TAX7TeR250UB4WDUQEQUAqYOUcd7nFzpnLi1nTNkMrM.jpg?width=1080&crop=smart&auto=webp&s=7d0a9153af1b499a6db41a4f961ed1fa5993a9b5', 'width': 1080}], 'source': {'height': 686, 'url': 'https://external-preview.redd.it/TAX7TeR250UB4WDUQEQUAqYOUcd7nFzpnLi1nTNkMrM.jpg?auto=webp&s=1e3de9cacf3e7fd9a262a61eb97b1c3af0e2668d', 'width': 1200}, 'variants': {}}]}
Fine-tuning model on source code
1
[removed]
2025-02-03T09:49:38
https://www.reddit.com/r/LocalLLaMA/comments/1iglle4/finetuning_model_on_source_code/
playX281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iglle4
false
null
t3_1iglle4
/r/LocalLLaMA/comments/1iglle4/finetuning_model_on_source_code/
false
false
self
1
null
MoE affects deeper layers.
1
[removed]
2025-02-03T10:19:08
https://www.reddit.com/r/LocalLLaMA/comments/1igm04r/moe_affect_deeper_layers/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igm04r
false
null
t3_1igm04r
/r/LocalLLaMA/comments/1igm04r/moe_affect_deeper_layers/
false
false
self
1
null
How can I find the best and affordable model to run on my specific hardware? Is there any website where I can enter my hardware details and it suggests AI models that I can run locally?
1
I am looking for a website or other resource where I can input my hardware specifications (CPU, GPU, RAM) and get back a ranked list of models that can run on that hardware, not the other way around.
2025-02-03T10:19:28
https://www.reddit.com/r/LocalLLaMA/comments/1igm0ae/how_can_i_find_the_best_and_affordable_model_to/
yv_MandelBug
self.LocalLLaMA
2025-02-03T10:37:27
0
{}
1igm0ae
false
null
t3_1igm0ae
/r/LocalLLaMA/comments/1igm0ae/how_can_i_find_the_best_and_affordable_model_to/
false
false
self
1
null
Deepseek-r1:1.5b vs llama3.2:3b
1
[deleted]
2025-02-03T10:22:47
[deleted]
2025-02-03T10:27:40
0
{}
1igm1y2
false
null
t3_1igm1y2
/r/LocalLLaMA/comments/1igm1y2/deepseekr115b_vs_llama323b/
false
false
default
1
null
Possible to use the context extend in text-generation-webui?
1
[removed]
2025-02-03T10:25:41
https://www.reddit.com/r/LocalLLaMA/comments/1igm3b4/possible_to_use_the_context_extend_in/
Appropriate_Water517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igm3b4
false
null
t3_1igm3b4
/r/LocalLLaMA/comments/1igm3b4/possible_to_use_the_context_extend_in/
false
false
self
1
null
Why are eval speeds so much lower on Macs?
1
[removed]
2025-02-03T10:29:00
https://www.reddit.com/r/LocalLLaMA/comments/1igm4ue/why_are_eval_speed_so_much_lower_with_macs/
Practical-Collar3063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igm4ue
false
null
t3_1igm4ue
/r/LocalLLaMA/comments/1igm4ue/why_are_eval_speed_so_much_lower_with_macs/
false
false
self
1
{'enabled': False, 'images': [{'id': 'N9XnEbjOakYFLqFIvzFvsk7U4A5Oy4xI0l2ttCZtTPU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3mFc_o2reO3X1zI73hqZv3_Yp59-09VGNScMO4ADSt0.jpg?width=108&crop=smart&auto=webp&s=a3252d98bae9c4af291f360092c7aaae330da140', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3mFc_o2reO3X1zI73hqZv3_Yp59-09VGNScMO4ADSt0.jpg?width=216&crop=smart&auto=webp&s=8f284d9fa6860bdba2d27183559309d98d48c239', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3mFc_o2reO3X1zI73hqZv3_Yp59-09VGNScMO4ADSt0.jpg?width=320&crop=smart&auto=webp&s=fdff9cab6871059e9710d7574983e2f64be91422', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3mFc_o2reO3X1zI73hqZv3_Yp59-09VGNScMO4ADSt0.jpg?width=640&crop=smart&auto=webp&s=0f697c1eb8235a728e888b301442866dcd3a31c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3mFc_o2reO3X1zI73hqZv3_Yp59-09VGNScMO4ADSt0.jpg?width=960&crop=smart&auto=webp&s=8e4d22c4948d9db14e8dd40d0084c052fc00870a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3mFc_o2reO3X1zI73hqZv3_Yp59-09VGNScMO4ADSt0.jpg?width=1080&crop=smart&auto=webp&s=2b21b3e2ff63a15ac5a4b6f530eaaca95d67c86f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3mFc_o2reO3X1zI73hqZv3_Yp59-09VGNScMO4ADSt0.jpg?auto=webp&s=ad6735f3cf0f44f00e37ee87a7c042eee3b5c832', 'width': 1200}, 'variants': {}}]}
DeepSeek V3 Pruning
1
[removed]
2025-02-03T10:30:04
https://www.reddit.com/r/LocalLLaMA/comments/1igm5e1/deepseek_v3_pruning/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igm5e1
false
null
t3_1igm5e1
/r/LocalLLaMA/comments/1igm5e1/deepseek_v3_pruning/
false
false
self
1
null
deepseek1.5b vs llama3.2:3b
0
2025-02-03T10:30:47
https://www.reddit.com/gallery/1igm5st
Frosty-Equipment-692
reddit.com
1970-01-01T00:00:00
0
{}
1igm5st
false
null
t3_1igm5st
/r/LocalLLaMA/comments/1igm5st/deepseek15b_vs_llama323b/
false
false
https://b.thumbs.redditm…2tB6SjYwmFco.jpg
0
null
Buying Advice on GPU
2
I was thinking of adding a GPU to my NAS running TrueNAS Scale. Ollama is included in the installable apps, so I was thinking of using it to run an LLM like DeepSeek. What GPU should I get?

The NAS runs a Ryzen Pro 4650 and 64 GB of ECC memory, but I don't want to run models in RAM since that is reserved for my ZFS cache and other services. I don't want to spend too much money on this since it's not really for training, just to play with it and mostly ask for Python/Bash scripts to automate stuff.

I read the cheap go-to used to be the NVIDIA P40, but it now seems to be around 300 euros, so it doesn't look that worthwhile anymore. My other choices were a second-hand 3060 12 GB or a K80, which has 24 GB and should be able to hold larger models. Does GPU speed matter for this kind of usage?

Bear in mind I'm a total novice at this, so my knowledge is very shallow. Any tips are welcome. Thanks!
2025-02-03T10:41:41
https://www.reddit.com/r/LocalLLaMA/comments/1igmazx/buying_advice_on_gpu/
BakaPhoenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igmazx
false
null
t3_1igmazx
/r/LocalLLaMA/comments/1igmazx/buying_advice_on_gpu/
false
false
self
2
null
Any numbers on Deepseek R1 671b on System76 Thelio Astra / Ampere Altra?
1
[removed]
2025-02-03T11:00:37
https://www.reddit.com/r/LocalLLaMA/comments/1igmkir/any_numbers_on_deepseek_r1_671b_on_system76/
peppergrayxyz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igmkir
false
null
t3_1igmkir
/r/LocalLLaMA/comments/1igmkir/any_numbers_on_deepseek_r1_671b_on_system76/
false
false
self
1
null
[Research] Using Adaptive Classification to Automatically Optimize LLM Temperature Settings
7
I've been working on an approach to automatically optimize LLM configurations (particularly temperature) based on query characteristics. The idea is simple: different types of prompts need different temperature settings for optimal results, and we can learn these patterns.

**The Problem:**

* LLM behavior varies significantly with temperature settings (0.0 to 2.0)
* Manual configuration is time-consuming and error-prone
* Most people default to temperature=0.7 for everything

**The Approach:**

We trained an adaptive classifier that categorizes queries into five temperature ranges:

* DETERMINISTIC (0.0-0.1): For factual, precise responses
* FOCUSED (0.2-0.5): For technical, structured content
* BALANCED (0.6-1.0): For conversational responses
* CREATIVE (1.1-1.5): For varied, imaginative outputs
* EXPERIMENTAL (1.6-2.0): For maximum variability

**Results (tested on 500 diverse queries):**

* 69.8% success rate in finding optimal configurations
* Average similarity score of 0.64 (using RTC evaluation)
* Most interesting finding: BALANCED and CREATIVE temps consistently performed best (scores: 0.649 and 0.645)

**Distribution of optimal settings:**

FOCUSED: 26.4%, BALANCED: 23.5%, DETERMINISTIC: 18.6%, CREATIVE: 17.8%, EXPERIMENTAL: 13.8%

This suggests that while the default temp=0.7 (BALANCED) works well, it's only optimal for about a quarter of queries. Many queries benefit from either more precise or more creative settings.

The code and pre-trained models are available on GitHub: https://github.com/codelion/adaptive-classifier. Would love to hear your thoughts, especially if you've experimented with temperature optimization before.

EDIT: Since people are asking - evaluation was done using Round-Trip Consistency testing, measuring how well the model maintains response consistency across similar queries at each temperature setting.

^(Disclaimer: This is a research project, and while the results are promising, your mileage may vary depending on your specific use case and model.)
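For anyone wanting to try it, the intended flow is roughly: classify the query, then pass the mapped temperature to your LLM call. A minimal sketch; note the class and method names below are assumptions for illustration rather than the verified API, so check the repo README for the real interface:

```python
# Rough usage sketch: route a query to a temperature band before calling
# the LLM. AdaptiveClassifier/predict are hypothetical names, not a
# verified API -- see github.com/codelion/adaptive-classifier.

TEMP_BANDS = {
    "DETERMINISTIC": 0.0, "FOCUSED": 0.3, "BALANCED": 0.7,
    "CREATIVE": 1.2, "EXPERIMENTAL": 1.8,
}

def pick_temperature(classifier, query: str) -> float:
    label = classifier.predict(query)   # hypothetical call
    return TEMP_BANDS.get(label, 0.7)   # fall back to the usual default
```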
2025-02-03T11:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1igmrm8/research_using_adaptive_classification_to/
asankhs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igmrm8
false
null
t3_1igmrm8
/r/LocalLLaMA/comments/1igmrm8/research_using_adaptive_classification_to/
false
false
self
7
{'enabled': False, 'images': [{'id': 'kqzDNKph48w3UgN3KPmuNjjELHLJLebiRKqbp_R4oKU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jmVCNwfcfOzIxLAaVOsNxzhBhXqY2UqMHNXga79dD8g.jpg?width=108&crop=smart&auto=webp&s=fe1dc083a2fe5fbf59795534c22c444d8f2c2d17', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jmVCNwfcfOzIxLAaVOsNxzhBhXqY2UqMHNXga79dD8g.jpg?width=216&crop=smart&auto=webp&s=f5740af3600b836789191dc5df9efe114a665b58', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jmVCNwfcfOzIxLAaVOsNxzhBhXqY2UqMHNXga79dD8g.jpg?width=320&crop=smart&auto=webp&s=246b479084f0e844f5fd629e99ff0d5862f8ec90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jmVCNwfcfOzIxLAaVOsNxzhBhXqY2UqMHNXga79dD8g.jpg?width=640&crop=smart&auto=webp&s=73b7ff082abaeea58eee02523f5a0ca41e5d5e07', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jmVCNwfcfOzIxLAaVOsNxzhBhXqY2UqMHNXga79dD8g.jpg?width=960&crop=smart&auto=webp&s=6cfca5ff277fb79f503915aa66075a3de4866bc5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jmVCNwfcfOzIxLAaVOsNxzhBhXqY2UqMHNXga79dD8g.jpg?width=1080&crop=smart&auto=webp&s=c724c16226ab6b002591a6560fb24125de36e25f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jmVCNwfcfOzIxLAaVOsNxzhBhXqY2UqMHNXga79dD8g.jpg?auto=webp&s=abcbe55bb817dd8f228be24c3440e20035794f9c', 'width': 1200}, 'variants': {}}]}
gsh with gemma2 can predict 50% of my shell commands! Full benchmark comparing different local models included.
13
So I've been building [https://github.com/atinylittleshell/gsh](https://github.com/atinylittleshell/gsh), which can use a local LLM to auto-complete and explain shell commands, like this -

[gsh predicts the next command I want to run](https://preview.redd.it/swrqluodpwge1.png?width=636&format=png&auto=webp&s=82d37508a63ad065a4590fec87092dd84ac28459)

To better understand which model performs the best for me, I built an evaluation system in gsh that can **use my command history as an evaluation dataset** to test different LLMs and see how well they could predict my commands (retroactively), like this -

[gsh now has a built-in evaluation system](https://preview.redd.it/7j7vuiaspwge1.png?width=675&format=png&auto=webp&s=413400ad84191a926665824008c0b0bd8a2b9933)

The result really surprised me! I tested almost every popular open source model between 1b-14b (excluding DeepSeek R1 and its distills, as reasoning models are not suited for the low-latency generation we need here), and it turns out Google's gemma2:9b did the best, with almost 30% exact matches and an overall 50% similarity score.

[Model benchmark](https://preview.redd.it/en67gou6qwge1.png?width=991&format=png&auto=webp&s=3bf955d1c24975fceed3316ed6b33964b7bbeb21)

This was done with an M4 Mac Mini. Some other observations -

1. qwen2.5 3b is somehow better at this than its 7b and 14b variants.
2. qwen2.5-coder scales well linearly with more parameters.
3. mistral and llama3.2 aren't very good at this.

I'm pretty impressed by gemma2 - would not have thought it was a good choice, but here I am looking at hard data. I'll likely use gemma2 as a base to fine-tune even better predictors. Just thought this was interesting to share!
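For those curious about the scoring, the evaluation idea is easy to replicate. A rough Python sketch of it (gsh itself is written in Go, and its actual similarity metric may differ from the one assumed here):

```python
# Sketch of the evaluation idea: replay shell history, ask the model to
# predict the next command given the previous ones, and report the
# exact-match rate plus a mean string-similarity score.
from difflib import SequenceMatcher

def evaluate(history: list[str], predict) -> tuple[float, float]:
    exact, sim = 0, 0.0
    for i in range(1, len(history)):
        guess = predict(history[:i])          # model sees prior commands only
        actual = history[i]
        exact += guess.strip() == actual.strip()
        sim += SequenceMatcher(None, guess, actual).ratio()
    n = len(history) - 1
    return exact / n, sim / n   # exact-match rate, mean similarity
```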
2025-02-03T11:18:48
https://www.reddit.com/r/LocalLLaMA/comments/1igmuba/gsh_with_gemma2_can_predict_50_of_my_shell/
atinylittleshell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igmuba
false
null
t3_1igmuba
/r/LocalLLaMA/comments/1igmuba/gsh_with_gemma2_can_predict_50_of_my_shell/
false
false
https://a.thumbs.redditm…Fm2y_mPVsHC8.jpg
13
{'enabled': False, 'images': [{'id': 'JoSOOjv8_mu6Pt6f1r8qSX-8t4vdyrhxJGKu1cbYK9A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PAtbLYnBtJ3ArHfwFmRjw8Ux15ZBrIBJqJZllgU90TU.jpg?width=108&crop=smart&auto=webp&s=0838188badb1fc0d7544aa485227f4da2d7abe94', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PAtbLYnBtJ3ArHfwFmRjw8Ux15ZBrIBJqJZllgU90TU.jpg?width=216&crop=smart&auto=webp&s=946c6d0d20b5b40ab2ec7ca521dec19f79f2ef40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PAtbLYnBtJ3ArHfwFmRjw8Ux15ZBrIBJqJZllgU90TU.jpg?width=320&crop=smart&auto=webp&s=68ed7f6cf25c7eda8ebda3009a1c8c16dd260b08', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PAtbLYnBtJ3ArHfwFmRjw8Ux15ZBrIBJqJZllgU90TU.jpg?width=640&crop=smart&auto=webp&s=341367896bd6be16bf4be49dd9f30b4581fab4b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PAtbLYnBtJ3ArHfwFmRjw8Ux15ZBrIBJqJZllgU90TU.jpg?width=960&crop=smart&auto=webp&s=d106e2eedb504c909ea371786bd3aebf123d0a7d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PAtbLYnBtJ3ArHfwFmRjw8Ux15ZBrIBJqJZllgU90TU.jpg?width=1080&crop=smart&auto=webp&s=088b5961f89f915c03d9c064058b37ec3fb8eb2d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PAtbLYnBtJ3ArHfwFmRjw8Ux15ZBrIBJqJZllgU90TU.jpg?auto=webp&s=38af70b03d42d962ea7cbd872e55950f2d750898', 'width': 1200}, 'variants': {}}]}
Advice on how to install Hermes-3-Llama-3.1-405B-Uncensored
1
[removed]
2025-02-03T11:19:48
https://www.reddit.com/r/LocalLLaMA/comments/1igmuuo/advise_how_to_install_hermes3llama13405buncensored/
szavelin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igmuuo
false
null
t3_1igmuuo
/r/LocalLLaMA/comments/1igmuuo/advise_how_to_install_hermes3llama13405buncensored/
false
false
self
1
{'enabled': False, 'images': [{'id': '8CQXBslb8BmKRTJOwpGWMjm6sxF1FsVXkAcR5CXsjK4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Uq83sQRB8QOlnStwd5VzQWYOemqDuboXpAY1IMIZkFk.jpg?width=108&crop=smart&auto=webp&s=d554262c564a74d379d3646e220c7e196a55c578', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Uq83sQRB8QOlnStwd5VzQWYOemqDuboXpAY1IMIZkFk.jpg?width=216&crop=smart&auto=webp&s=6557d04fe4efa8d55e3a00093dd9cf175f08c1c8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Uq83sQRB8QOlnStwd5VzQWYOemqDuboXpAY1IMIZkFk.jpg?width=320&crop=smart&auto=webp&s=41d4e3d96f1c78d318460b49f5df96ba15bc7fae', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Uq83sQRB8QOlnStwd5VzQWYOemqDuboXpAY1IMIZkFk.jpg?width=640&crop=smart&auto=webp&s=371ce32523f50a901f71c6c3a206563c7eabf6fe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Uq83sQRB8QOlnStwd5VzQWYOemqDuboXpAY1IMIZkFk.jpg?width=960&crop=smart&auto=webp&s=fc787fdc90e7a858c6cceebeb017552a6aa6504b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Uq83sQRB8QOlnStwd5VzQWYOemqDuboXpAY1IMIZkFk.jpg?width=1080&crop=smart&auto=webp&s=66b6c5db840d54ac272d1b656f62887c5a6b3cae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Uq83sQRB8QOlnStwd5VzQWYOemqDuboXpAY1IMIZkFk.jpg?auto=webp&s=dfd85e9bcb069c87e54fcedd0b4998ad10f42c4a', 'width': 1200}, 'variants': {}}]}
DeepSeek-R1 never ever relaxes...
158
So, I was testing DeepSeek-R1 with a math problem I found in a textbook for 9-year-olds **(yes, really)**, and the model managed to crack it. The problem was:

`"Find two 3-digit palindromic numbers that add up to a 4-digit palindromic number. Note: the first digit of any of these numbers can't be 0."`

[R1 starts thinking...](https://preview.redd.it/ml5hnng3rwge1.jpg?width=1800&format=pjpg&auto=webp&s=1456610eeff8d8b9a122d86fbb44967f84f682d9)

Now, here’s where it gets interesting. R1 thought for a bit, found the correct answer in its `<think></think>` block, then went ahead to output it—but made a mistake.

[R1 makes a mistake...](https://preview.redd.it/77bke6q1swge1.jpg?width=1800&format=pjpg&auto=webp&s=d6eac07677fe576be9e699776a2134cba1d15c62)

Before even finishing its response, it caught its own error, backtracked, and corrected itself on the fly outside of the `<think></think>` block.

[R1 corrects itself...](https://preview.redd.it/yc3zjamsswge1.jpg?width=1800&format=pjpg&auto=webp&s=903d42998593e95a68ff32006b7bac6335df9f1e)

[R1's final answer.](https://preview.redd.it/j8vgvxn3twge1.jpg?width=1800&format=pjpg&auto=webp&s=b189fce4a099ed9182b315c2164a1071a4a32104)

[DeepSeek-R1 complete answer.](https://pastebin.com/0Ayv77LN)

Regarding the problem, **no other LLM solved it, except for** [**OpenAI o1**](https://pastebin.com/YCRR521W). So now I’m wondering—**what's holding them back?** Is it the tokenizer's weaknesses? The sampling parameters (even when all were at the recommended settings, they failed)? Or maybe, just maybe, non-thinking LLMs are really that bad at math? Would love to hear thoughts on this.

Unsuccessful attempts by other models:

* [chatgpt-4o-latest-20241120](https://pastebin.com/r8VKHrcA)
* [claude-3-5-sonnet-20241022](https://pastebin.com/tXc7wGVz)
* [phi-4](https://pastebin.com/zGzQJ8B5)
* [amazon-nova-pro-v1.0](https://pastebin.com/vt54UFBe)
* [gemini-exp-1206](https://pastebin.com/eSN4y6E0)
* [llama-3.1-405b-instruct-bf16](https://pastebin.com/jVj1KcMF)
* [qwen-max-2025-01-25](https://pastebin.com/ZRLfhEfU)
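For anyone curious, the puzzle itself is easy to verify by brute force, which also shows there are multiple valid answers:

```python
# Brute-force the puzzle: find 3-digit palindromes a, b whose sum is a
# 4-digit palindrome (no leading zeros, by construction of the ranges).
pal3 = [n for n in range(100, 1000) if str(n) == str(n)[::-1]]  # 90 of them

hits = [(a, b, a + b) for a in pal3 for b in pal3
        if a <= b and a + b >= 1000 and str(a + b) == str(a + b)[::-1]]
print(len(hits), hits[0])   # first solution found: (202, 909, 1111)
```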
2025-02-03T11:30:28
https://www.reddit.com/r/LocalLLaMA/comments/1ign0lz/deepseekr1_never_ever_relaxes/
IrisColt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ign0lz
false
null
t3_1ign0lz
/r/LocalLLaMA/comments/1ign0lz/deepseekr1_never_ever_relaxes/
false
false
https://b.thumbs.redditm…EHYAO3868Ugk.jpg
158
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
Do people on this sub actually believe in LLM-based AGI?
1
[removed]
2025-02-03T11:32:16
https://www.reddit.com/r/LocalLLaMA/comments/1ign1l7/do_people_on_this_sub_actually_believe_in/
lmyslinski
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ign1l7
false
null
t3_1ign1l7
/r/LocalLLaMA/comments/1ign1l7/do_people_on_this_sub_actually_believe_in/
false
false
self
1
null
Startup idea with locally installing LLMs
1
[removed]
2025-02-03T12:02:42
https://www.reddit.com/r/LocalLLaMA/comments/1ignins/startup_idea_with_locally_installing_llms/
Antique-Fishing-3567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ignins
false
null
t3_1ignins
/r/LocalLLaMA/comments/1ignins/startup_idea_with_locally_installing_llms/
false
false
self
1
null
24x 32gb or 8x 96gb for deepseek R1 671b?
1
[removed]
2025-02-03T12:06:41
https://www.reddit.com/r/LocalLLaMA/comments/1ignl0o/24x_32gb_or_8x_96gb_for_deepseek_r1_671b/
WouterGlorieux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ignl0o
false
null
t3_1ignl0o
/r/LocalLLaMA/comments/1ignl0o/24x_32gb_or_8x_96gb_for_deepseek_r1_671b/
false
false
self
1
null
What’s the current closest I can get to ChatGPT for general use on a 96gb ram m2 pro Mac?
1
[removed]
2025-02-03T12:12:48
https://www.reddit.com/r/LocalLLaMA/comments/1ignol2/whats_the_current_closest_i_can_get_to_chatgpt/
boodleberry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ignol2
false
null
t3_1ignol2
/r/LocalLLaMA/comments/1ignol2/whats_the_current_closest_i_can_get_to_chatgpt/
false
false
self
1
null
Cursor now supports deepseek v3 and r1 models
22
https://preview.redd.it/… limits anymore.
2025-02-03T12:27:03
https://www.reddit.com/r/LocalLLaMA/comments/1ignwxh/cursor_now_supports_deepseek_v3_and_r1_models/
Available-Stress8598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ignwxh
false
null
t3_1ignwxh
/r/LocalLLaMA/comments/1ignwxh/cursor_now_supports_deepseek_v3_and_r1_models/
false
false
https://b.thumbs.redditm…wtKhDUSaoGas.jpg
22
null
Don't forget to optimize your hardware! (Windows)
66
2025-02-03T12:41:48
https://www.reddit.com/gallery/1igo6c9
rpwoerk
reddit.com
1970-01-01T00:00:00
0
{}
1igo6c9
false
null
t3_1igo6c9
/r/LocalLLaMA/comments/1igo6c9/dont_forget_to_optimize_your_hardware_windows/
false
false
https://b.thumbs.redditm…GISvJPZlb6kM.jpg
66
null
Creative writing: DeepSeek-R1-Distill-Qwen-14B vs DeepSeek-R1-Distill-Qwen-32B (16 GB V-RAM use case)
1
[removed]
2025-02-03T12:57:44
https://www.reddit.com/r/LocalLLaMA/comments/1igoglm/creative_writing_deepseekr1distillqwen14b_vs/
DaleCooperHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igoglm
false
null
t3_1igoglm
/r/LocalLLaMA/comments/1igoglm/creative_writing_deepseekr1distillqwen14b_vs/
false
false
self
1
null
Berkeley Team Says They've Recreated DeepSeek's OpenAI Killer for $30
0
2025-02-03T13:05:02
https://futurism.com/researchers-deepseek-even-cheaper
houseofextropy
futurism.com
1970-01-01T00:00:00
0
{}
1igols2
false
null
t3_1igols2
/r/LocalLLaMA/comments/1igols2/berkeley_team_says_theyve_recreated_deepseeks/
false
false
https://b.thumbs.redditm…ViE-vJGXJ1UA.jpg
0
{'enabled': False, 'images': [{'id': 'T6TZLVU18pXOf7A9YwWZJbo1f8a5TfBCgC2yK61XeDo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3hqQ_UUjf4-OArVkXrK9iVow9Ut6pVIvY65m7Q01s8w.jpg?width=108&crop=smart&auto=webp&s=c0935f16055f9f1dd58768d073f9074cca0c4e0c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3hqQ_UUjf4-OArVkXrK9iVow9Ut6pVIvY65m7Q01s8w.jpg?width=216&crop=smart&auto=webp&s=82024d22c72f9b14fa5f0b4db7362612114e9540', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3hqQ_UUjf4-OArVkXrK9iVow9Ut6pVIvY65m7Q01s8w.jpg?width=320&crop=smart&auto=webp&s=207c5cacf96bf5d0972de61c7a879ba8a9f8ab9b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3hqQ_UUjf4-OArVkXrK9iVow9Ut6pVIvY65m7Q01s8w.jpg?width=640&crop=smart&auto=webp&s=3f2df14516f004111b25bc1b07d4be4e6f9dc146', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3hqQ_UUjf4-OArVkXrK9iVow9Ut6pVIvY65m7Q01s8w.jpg?width=960&crop=smart&auto=webp&s=027030ee1e05e2f146b3aa88d24061bc65a9be1a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3hqQ_UUjf4-OArVkXrK9iVow9Ut6pVIvY65m7Q01s8w.jpg?width=1080&crop=smart&auto=webp&s=ed73bad6b70e93323d591edba4437d49d286a3ce', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3hqQ_UUjf4-OArVkXrK9iVow9Ut6pVIvY65m7Q01s8w.jpg?auto=webp&s=997e4f2183799f481ad25e1e07b15d6c82a8b4d7', 'width': 1200}, 'variants': {}}]}
So a miner is selling his stock of 54 rtx 2060 12 gb cards nearby
0
Is it worth buying one, or possibly even two, 2060 12 GB cards for 120 per card? I currently have only one 2060 6 GB installed in my PC, but with two additional 12 GB cards that would be 30 GB of total VRAM, theoretically.

My goal is to run Mistral Small 24B Q4_K_M (14.3 GB) for creative writing, translation, and general use, and potentially Qwen2.5 Coder 32B Q6 (26.9 GB) for coding tasks if possible, with 6-8 tokens/s being enough to make me happy. While 32B would be nice, my main goal is running the 24B model decently and setting up a RAG agent down the line for personal use.

A used 3090 with 24 GB would cost me 650 in my region, and other used cards like the P100 or P40 cost 300 each.

Has anyone tried a similar setup, and what is your experience regarding token speed? Or would you recommend I wait for other solutions to come out this year? 120 bucks to be able to run 24B seems like a good deal, but with neither NVLink nor SLI supported by the 2060, I am not sure how well it would work, if at all.
2025-02-03T13:05:08
https://www.reddit.com/r/LocalLLaMA/comments/1igoluu/so_a_miner_is_selling_his_stock_of_54_rtx_2060_12/
Eden1506
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igoluu
false
null
t3_1igoluu
/r/LocalLLaMA/comments/1igoluu/so_a_miner_is_selling_his_stock_of_54_rtx_2060_12/
false
false
self
0
null
Can we prompt the thinking process of DeepSeek R1?
2
By thinking process I mean the content between the <think> tags (the CoT). DeepSeek R1 always thinks in English and in a structured way. In my experience, prompts influence the final answer but not the thinking. For example, can it think in the style of Rick Sanchez, can it think like a nihilist, can it think in a different language, etc.? I saw a post of DeepSeek thinking like Donald Trump, but it doesn't always give good results and is hard to reproduce for other cases. I am asking this because, to me, reading its thinking process is much more interesting than the final output.
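One trick that seems to work is prefilling the assistant turn so the model continues an already-started `<think>` block. A rough sketch via Ollama's raw mode; the `<|User|>`/`<|Assistant|>` markers below follow DeepSeek-R1's published chat template, so verify them against the model card before relying on this:

```python
# Sketch: steer the <think> block by prefilling the assistant turn.
# raw=True tells Ollama to apply no template of its own, so we supply
# the chat-template tokens ourselves (assumption: DeepSeek-R1's template).
import requests

prompt = (
    "<|User|>Why is the sky blue?<|Assistant|><think>\n"
    "Alright, listen up, I'm gonna reason about this the way "
    "Rick Sanchez would:"   # the model tends to continue in this voice
)
r = requests.post("http://localhost:11434/api/generate", json={
    "model": "deepseek-r1:14b", "prompt": prompt,
    "raw": True, "stream": False,
})
print(r.json()["response"])
```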
2025-02-03T13:06:00
https://www.reddit.com/r/LocalLLaMA/comments/1igomg0/can_we_prompt_the_thinking_process_of_deepseek_r1/
AloneCoffee4538
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igomg0
false
null
t3_1igomg0
/r/LocalLLaMA/comments/1igomg0/can_we_prompt_the_thinking_process_of_deepseek_r1/
false
false
self
2
null
deepseek content bypass
1
[removed]
2025-02-03T13:07:21
https://www.reddit.com/r/LocalLLaMA/comments/1igonec/deepseek_content_bypass/
Apprehensive_Cut7308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igonec
false
null
t3_1igonec
/r/LocalLLaMA/comments/1igonec/deepseek_content_bypass/
false
false
self
1
null
CREATIVE WRITING: DeepSeek-R1-Distill-Qwen-32B-GGUF vs DeepSeek-R1-Distill-Qwen-14B-GGUF (within 16 GB Vram)
2
[removed]
2025-02-03T13:08:32
https://www.reddit.com/r/LocalLLaMA/comments/1igoo6p/creative_writing_deepseekr1distillqwen32bgguf_vs/
DaleCooperHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igoo6p
false
null
t3_1igoo6p
/r/LocalLLaMA/comments/1igoo6p/creative_writing_deepseekr1distillqwen32bgguf_vs/
false
false
self
2
null
DeepSeek R1 on a consumer PC
0
Is there a chance to run DeepSeek R1 on a 16 GB VRAM + 64 GB RAM PC at a rate of at least 1 token/s? I tried Unsloth's `DeepSeek-R1 1.58-bit` version, but it didn't work very well.
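If you want to keep trying, partial GPU offload is the usual route. A minimal llama-cpp-python sketch; the GGUF filename here is illustrative, and `n_gpu_layers` needs tuning down until it fits in 16 GB:

```python
# Minimal llama-cpp-python sketch for partial GPU offload: put as many
# layers as fit on the 16 GB card and stream the rest from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",  # example name
    n_gpu_layers=20,   # tune down until it no longer OOMs on 16 GB
    n_ctx=2048,        # a small context keeps the KV cache cheap
)
out = llm("Explain MoE routing in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```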
2025-02-03T13:17:06
https://www.reddit.com/r/LocalLLaMA/comments/1igou09/deepseek_r1_on_consumer_pc/
qaf23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igou09
false
null
t3_1igou09
/r/LocalLLaMA/comments/1igou09/deepseek_r1_on_consumer_pc/
false
false
self
0
null
5080 16gb vs 4090 24gb
1
For running local DeepSeek models, the RTX 4090 (24GB VRAM) is likely a better choice than the upcoming RTX 5080 (16GB VRAM)—even if the 5080 has better overall architecture improvements. Here’s why:

**1. VRAM Capacity Matters Most for Large Models**

Deep learning models require a lot of VRAM for inference, and VRAM size is the main bottleneck when running larger models.

| Model | VRAM Required (FP16) | VRAM Required (8-bit) | VRAM Required (4-bit) |
|-------|----------------------|-----------------------|-----------------------|
| 7B    | ~14GB                | ~8GB                  | ~5GB                  |
| 14B   | ~28GB                | ~16GB                 | ~10GB                 |
| 32B   | ~64GB                | ~32GB                 | ~20GB                 |
| 67B   | ~128GB               | ~64GB                 | ~40GB                 |

* A 4090 (24GB VRAM) can comfortably run 7B and 14B models, possibly 32B at 4-bit quantization.
* A 5080 (16GB VRAM) might struggle with 14B models and would be limited to smaller models or aggressive quantization.

**2. VRAM Bandwidth and Performance Differences**

* RTX 4090: Has a 384-bit memory bus → high bandwidth, which improves large-model inference speeds.
* RTX 5080 (rumored): Expected to have a 256-bit memory bus, which is significantly lower.

Even if the 5080 has a faster GPU core, its 16GB VRAM and smaller memory bus will cripple performance for larger models.

**3. Tensor and Compute Performance**

* The 5080 will likely have DLSS 3.5, better power efficiency, and higher raw TFLOPS, but inference relies more on VRAM capacity and bandwidth than raw GPU power.
* The 4090 already has Tensor Cores optimized for AI and performs well in inference tasks.

**Verdict: Get the 4090 for DeepSeek Models**

* 4090 (24GB VRAM) → Better for DeepSeek models, capable of running 7B, 14B, and even 32B (4-bit).
* 5080 (16GB VRAM) → Limited to 7B models or highly quantized versions of 14B.
* If you’re serious about running bigger models locally, more VRAM always wins.

If you’re planning to run anything above 14B, you might want to consider a 4090 or even a 4090 Ti / 5000-series Titan (if it has more VRAM).
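The table above comes from simple arithmetic and is easy to recompute for any model size:

```python
# Raw weight size: parameters x bits per weight / 8 bytes (1e9 params per "B").
def vram_gb(params_b: float, bits: int) -> float:
    return params_b * bits / 8

for p in (7, 14, 32, 67):
    print(f"{p}B:", [round(vram_gb(p, b), 1) for b in (16, 8, 4)], "GB at 16/8/4-bit")
# 7B -> [14.0, 7.0, 3.5]; the table's figures are ballpark, with the
# differences coming from rounding plus KV-cache and runtime overhead.
```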
2025-02-03T13:22:52
https://www.reddit.com/r/LocalLLaMA/comments/1igoy14/5080_16gb_vs_4090_24gb/
HeyDontSkipLegDay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igoy14
false
null
t3_1igoy14
/r/LocalLLaMA/comments/1igoy14/5080_16gb_vs_4090_24gb/
false
false
self
1
null
What ollama is doing after providing response?
1
[removed]
2025-02-03T13:23:25
https://www.reddit.com/r/LocalLLaMA/comments/1igoyeb/what_ollama_is_doing_after_providing_response/
lazy-kozak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igoyeb
false
null
t3_1igoyeb
/r/LocalLLaMA/comments/1igoyeb/what_ollama_is_doing_after_providing_response/
false
false
self
1
null
Ollama + WebUI: has something changed, no?
1
[removed]
2025-02-03T13:26:16
https://www.reddit.com/r/LocalLLaMA/comments/1igp0c7/ollama_webui_there_is_something_changedno/
rx22230
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igp0c7
false
null
t3_1igp0c7
/r/LocalLLaMA/comments/1igp0c7/ollama_webui_there_is_something_changedno/
false
false
self
1
null
Which models work best for which tasks?
2
I’ve recently gotten into running local models, but I’m noticing that some (like Llama) often create a lot of code errors, while Qwen struggles with what I would call “high temperature” prompts, like creative writing. The 14B and 7B DeepSeek models have hallucinated a lot of facts, and I’ve given up on those. For reference, I have an M2 with 32 GB of RAM. Most of my tests have been with models under 30B parameters. I’ve heard Mistral is the best all-around daily driver and I’m going to try that next, but which do you prefer for:

1) Ideation
2) Technical solutions
3) Planning and outlines
2025-02-03T13:30:41
https://www.reddit.com/r/LocalLLaMA/comments/1igp3ip/which_models_work_best_for_which_tasks/
Aromatic-Life5879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igp3ip
false
null
t3_1igp3ip
/r/LocalLLaMA/comments/1igp3ip/which_models_work_best_for_which_tasks/
false
false
self
2
null
u/AutoModerator deleted my message
0
u/AutoModerator deleted my message, but I cannot read the reason... which makes it difficult to improve myself...
2025-02-03T13:32:23
https://www.reddit.com/r/LocalLLaMA/comments/1igp4ro/uautomoderator_delete_message/
rx22230
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igp4ro
false
null
t3_1igp4ro
/r/LocalLLaMA/comments/1igp4ro/uautomoderator_delete_message/
false
false
self
0
null
Open WebUI for Ollama
0
In Open WebUI for Ollama, how do you keep track of your chats? Mine are disappearing.
2025-02-03T13:35:30
https://www.reddit.com/r/LocalLLaMA/comments/1igp709/open_webui_for_ollama/
rx22230
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igp709
false
null
t3_1igp709
/r/LocalLLaMA/comments/1igp709/open_webui_for_ollama/
false
false
self
0
null
$7k USD for a new build, recommendations?
2
I have about $7K USD to build a new desktop/workstation. (the last desktop I built was around 2017) I'd love to get as much VRAM as possible out of this, perhaps dual GPU? I don't have to have the most powerful CPU, but VRAM and RAM are a must. Could you recommend some components? Thanks
2025-02-03T13:41:12
https://www.reddit.com/r/LocalLLaMA/comments/1igpb0q/7k_usd_for_a_new_build_recommendations/
WasJohnTitorReal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igpb0q
false
null
t3_1igpb0q
/r/LocalLLaMA/comments/1igpb0q/7k_usd_for_a_new_build_recommendations/
false
false
self
2
null
What are your prompts for code assistants?
2
Reading [this post](https://old.reddit.com/r/LocalLLaMA/comments/1iggetv/make_your_mistral_small_3_24b_think_like/) today (and the comments) got me thinking about good system prompts for code assistance. I'm sure the community has found some useful ones. If you're willing to share, I'd be very interested to hear what works well for you.
2025-02-03T13:42:02
https://www.reddit.com/r/LocalLLaMA/comments/1igpbmj/what_are_your_prompts_for_code_assitants/
RobotRobotWhatDoUSee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igpbmj
false
null
t3_1igpbmj
/r/LocalLLaMA/comments/1igpbmj/what_are_your_prompts_for_code_assitants/
false
false
self
2
null
Mistral Small 3: Redefining Expectations – Performance Beyond Its Size (Feels Like a 70B Model!)
169
🚀 Hold onto your hats, folks! Mistral Small 3 is here to blow your minds! This isn't just another small model – it's a powerhouse that feels like you're wielding a 70B beast! I've thrown every complex question I could think of at it, and the results are mind-blowing. From coding conundrums to deep language understanding, this thing is breaking barriers left and right.

I dare you to try it out and share your experiences here. Let's see what crazy things we can make Mistral Small 3 do! Who else is ready to have their expectations redefined? 🤯

This is Q4_K_M, just 14GB.

https://i.redd.it/fdqvgbm9gxge1.gif

Prompt:

Create an interactive web page that animates the Sun and the planets in our Solar System. The animation should include the following features:

1. **Sun**: A central, bright yellow circle representing the Sun.
2. **Planets**: Eight planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune) orbiting around the Sun with realistic relative sizes and distances.
3. **Orbits**: Visible elliptical orbits for each planet to show their paths around the Sun.
4. **Animation**: Smooth orbital motion for all planets, with varying speeds based on their actual orbital periods.
5. **Labels**: Clickable labels for each planet that display additional information when hovered over or clicked (e.g., name, distance from the Sun, orbital period).
6. **Interactivity**: Users should be able to pause and resume the animation using buttons.

Ensure the design is visually appealing with a dark background to enhance the visibility of the planets and their orbits. Use CSS for styling and JavaScript for the animation logic.
2025-02-03T13:45:54
https://www.reddit.com/r/LocalLLaMA/comments/1igpedw/mistral_small_3_redefining_expectations/
Vishnu_One
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igpedw
false
null
t3_1igpedw
/r/LocalLLaMA/comments/1igpedw/mistral_small_3_redefining_expectations/
false
false
https://b.thumbs.redditm…a4swxnh3Ta_k.jpg
169
null
Recommended tool for Personal Assistant Pipeline? (Python)
5
I'm looking to build/fork a personal assistant to use in my Home Assistant setup as a conversational agent. I currently have Ollama set up, which works great, but it lacks custom functionality. I want to integrate things like RAG for memory and function calls for my personal services and scripts.

So what I need is to develop a pipeline that includes LLM inference, RAG, function calling, etc. Before I start building a custom pipeline from scratch and integrating it into my Home Assistant, I want to check if anyone has tips for any repos I should check out. I know about HomeLLM, which I will be using to integrate the pipeline with Home Assistant. I'm a Python developer.

Thanks, any tips/leads are appreciated!
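In case it helps frame the question: before reaching for a framework, the core loop is quite small. A rough sketch against Ollama's /api/chat endpoint; the endpoint and payload shape are Ollama's documented API, but the JSON tool-call convention here is a home-grown scheme you would have to define in your system prompt:

```python
# Bare-bones pipeline loop: send the conversation to Ollama, and if the
# model replies with a JSON tool call ({"tool": ..., "args": ...}, our
# own convention), dispatch it; otherwise return the text as-is.
import json, requests

TOOLS = {"lights_on": lambda room: f"Turned on lights in {room}"}

def chat(messages):
    r = requests.post("http://localhost:11434/api/chat", json={
        "model": "llama3.1", "messages": messages, "stream": False})
    return r.json()["message"]["content"]

def run_turn(messages, user_text):
    messages.append({"role": "user", "content": user_text})
    reply = chat(messages)
    try:   # did the model emit a tool call?
        call = json.loads(reply)
        reply = TOOLS[call["tool"]](**call["args"])
    except (ValueError, KeyError, TypeError):
        pass   # plain text answer
    messages.append({"role": "assistant", "content": reply})
    return reply
```

RAG for memory then slots in as one more step before the `chat()` call: retrieve relevant snippets and prepend them to the message list.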
2025-02-03T13:47:18
https://www.reddit.com/r/LocalLLaMA/comments/1igpfdz/recommended_tool_for_personal_assistant_pipeline/
Boltyx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igpfdz
false
null
t3_1igpfdz
/r/LocalLLaMA/comments/1igpfdz/recommended_tool_for_personal_assistant_pipeline/
false
false
self
5
null
What's the difference between uncensored and abliterated models?
29
I'm quite new to LLMs and AI, but I was searching around and found that some models are "uncensored" and some are "abliterated". I tried both, and they simply performed the task, i.e., answering the old "how to create an IED" question.
2025-02-03T13:47:25
https://www.reddit.com/r/LocalLLaMA/comments/1igpfgp/whats_the_difference_between_uncensored_and/
Traveling_Pirate2190
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1igpfgp
false
null
t3_1igpfgp
/r/LocalLLaMA/comments/1igpfgp/whats_the_difference_between_uncensored_and/
false
false
self
29
null
I built a Linux Distro to run Nvidia GPUs for AI
149
Hey r/LocalLLaMA, I wanted to share a project I’ve been working on - I built a minimalist Linux distro called **Sbnb Linux** that’s super easy to get up and running with Nvidia GPUs.

Here’s the cool part:

- You can boot straight from a USB flash drive, no installation needed.
- It’s got all the tools to spin up a virtual machine and attach your Nvidia GPU using a low-overhead `vfio-pci` setup.
- Once that’s done, you can easily run AI like DeepSeek R1 in the VM using **ollama**.

But wait, there’s more! The bare metal server and VM are connected through **Tailscale tunnels**, so you can SSH into them from anywhere using OAuth (Google, etc.).

If anyone’s curious to give it a shot, all you need is a USB flash drive and about 30 minutes to get up and running. The instructions are here: [GitHub Link](https://github.com/sbnb-io/sbnb/blob/main/README-NVIDIA.md). If you run into any issues, drop a message below and I’ll be happy to help out!

**P.S.** As a fun weekend project, my kids and I built a beast of a home server powered by an AMD EPYC 7C13 (3rd gen). I posted all the nerdy details and costs over on r/homelab if you're into that kind of thing: [link here](https://www.reddit.com/r/homelab/comments/1hmnnwg/built_a_powerful_and_silent_amd_epyc_home_server/).

Let me know what you think!
2025-02-03T13:53:56
https://www.reddit.com/gallery/1igpkc8
aospan
reddit.com
1970-01-01T00:00:00
0
{}
1igpkc8
false
null
t3_1igpkc8
/r/LocalLLaMA/comments/1igpkc8/i_built_a_linux_distro_to_run_nvidia_gpus_for_ai/
false
false
https://b.thumbs.redditm…YF470-tBLf_o.jpg
149
null