title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]
url
stringlengths
0
780
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
In July 2024 prior to the r1 release, DeepSeek's Liang Wenfeng was asked whether AI progress would slow down because Scaling Laws aren't delivering. He said AGI could come in "2, 5, or 10 years" and "We’re relatively optimistic. Our industry as a whole seems to be meeting expectations."
30
2025-01-28T22:22:56
https://www.chinatalk.media/p/deepseek-ceo-interview-with-chinas
aihorsieshoe
chinatalk.media
1970-01-01T00:00:00
0
{}
1ice609
false
null
t3_1ice609
/r/LocalLLaMA/comments/1ice609/in_july_2024_prior_to_the_r1_release_deepseeks/
false
false
https://b.thumbs.redditm…y-nXmn-ohxbI.jpg
30
{'enabled': False, 'images': [{'id': 'f_s7qaV7gcufX_UZcls69YuqQuDzBWHuQ23VmrsToK0', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=108&crop=smart&auto=webp&s=f503e8420a27061fae536d9434619d644dff3d53', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=216&crop=smart&auto=webp&s=049b16501a559d59a62aaf5cecfddb54f8f339ca', 'width': 216}, {'height': 261, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=320&crop=smart&auto=webp&s=5b2e09b4515afc003a8783ebe18bd63bc7310b32', 'width': 320}], 'source': {'height': 370, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?auto=webp&s=e21a9c0790556edb9e324fe162f123776219e112', 'width': 452}, 'variants': {}}]}
How to use Reasoning Mode on other LLM Ai models, NOT just Deep Seek.
4
Deep Seek is in all Hyper mode on the news, but we incompletely overlook the picture ! Other LLMs can also use self reasoning ! The reasoning of LLMs produces good results. One can just ask LLMs to use a self reasoning mode of operation. A simple prompt like that would create similar results to Deep Seek on LLMs like the Google one, or the Mistral one or the Claude One. **Prompt example :** `Try to create 10 different solutions to the problem and select the best solution. Use self reasoning to identify the best solution.`
2025-01-28T22:23:19
https://www.reddit.com/r/LocalLLaMA/comments/1ice6by/how_to_use_reasoning_mode_on_other_llm_ai_models/
epSos-DE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ice6by
false
null
t3_1ice6by
/r/LocalLLaMA/comments/1ice6by/how_to_use_reasoning_mode_on_other_llm_ai_models/
false
false
self
4
null
Hosted deepseek-r1-distill-qwen-32b
24
Just sharing that I made `deepseek-r1-distill-qwen-32b` available as a hosted endpoint. https://glama.ai/models/deepseek-r1-distill-qwen-32b I couldn't find it with other providers. Maybe others will find it useful too. As far as I can tell based on the [benchmarks](https://github.com/deepseek-ai/DeepSeek-R1?tab=readme-ov-file#4-evaluation-results), for codings tasks at least, this model outperforms `DeepSeek-R1-Distill-Llama-70B`.
2025-01-28T22:33:45
https://www.reddit.com/r/LocalLLaMA/comments/1icef3a/hosted_deepseekr1distillqwen32b/
punkpeye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icef3a
false
null
t3_1icef3a
/r/LocalLLaMA/comments/1icef3a/hosted_deepseekr1distillqwen32b/
false
false
self
24
{'enabled': False, 'images': [{'id': 'GqKLS-2D8SvywrWWI9pjQj3cO28gTQxmply332yqcaY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zDAIQQR8q97RChZuSwPHeTJvMZpCVesa6E0HsiYPnoY.jpg?width=108&crop=smart&auto=webp&s=d6c143c23b469fbcf272e4f234cd3e0cf0cc2200', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zDAIQQR8q97RChZuSwPHeTJvMZpCVesa6E0HsiYPnoY.jpg?width=216&crop=smart&auto=webp&s=ce1558fadb6da01bda86606a11c8b0179361f8a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zDAIQQR8q97RChZuSwPHeTJvMZpCVesa6E0HsiYPnoY.jpg?width=320&crop=smart&auto=webp&s=f0db2b0d01978fa16fcef95045d92b91d84a530f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zDAIQQR8q97RChZuSwPHeTJvMZpCVesa6E0HsiYPnoY.jpg?width=640&crop=smart&auto=webp&s=cbf0bb4d32495ec1e01d2e9114e99ae26c1ba646', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zDAIQQR8q97RChZuSwPHeTJvMZpCVesa6E0HsiYPnoY.jpg?width=960&crop=smart&auto=webp&s=224b7cdd6dce3bb8a1397eb1a3739e399c33d998', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zDAIQQR8q97RChZuSwPHeTJvMZpCVesa6E0HsiYPnoY.jpg?width=1080&crop=smart&auto=webp&s=0d793fcd0a5ad27b03bb2dff26ecadb4d6b9ce35', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zDAIQQR8q97RChZuSwPHeTJvMZpCVesa6E0HsiYPnoY.jpg?auto=webp&s=bc5e2aad81c8b03b47037c9a442e516f9edd4b7a', 'width': 1200}, 'variants': {}}]}
DeepSeek's multi-head latent attention and other KV cache tricks explained
1
[removed]
2025-01-28T22:37:53
https://www.reddit.com/r/LocalLLaMA/comments/1iceige/deepseeks_multihead_latent_attention_and_other_kv/
Brilliant-Day2748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iceige
false
null
t3_1iceige
/r/LocalLLaMA/comments/1iceige/deepseeks_multihead_latent_attention_and_other_kv/
false
false
self
1
null
Here is deepseek r1, totally unbiased response to Mao Ze Dong facts
1
2025-01-28T22:40:42
https://i.redd.it/wbiht3zmbtfe1.png
FlowerPotTeaTime
i.redd.it
1970-01-01T00:00:00
0
{}
1icekxh
false
null
t3_1icekxh
/r/LocalLLaMA/comments/1icekxh/here_is_deepseek_r1_totally_unbiased_response_to/
false
false
https://b.thumbs.redditm…t9BMX-PZnSOs.jpg
1
{'enabled': True, 'images': [{'id': '9Dnc2I5bL69UC72wq0Q8zG3ZFNTnpY5aJl9k5uh8-tY', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/wbiht3zmbtfe1.png?width=108&crop=smart&auto=webp&s=7c676f1ee5bddeab34c5bc3cb3528c90f2e831ef', 'width': 108}, {'height': 61, 'url': 'https://preview.redd.it/wbiht3zmbtfe1.png?width=216&crop=smart&auto=webp&s=b29b3f9a094873e022866df5297dd9b8b6e9291a', 'width': 216}, {'height': 91, 'url': 'https://preview.redd.it/wbiht3zmbtfe1.png?width=320&crop=smart&auto=webp&s=d789db07f81ae75c0463f504a79636a8bf01172d', 'width': 320}, {'height': 183, 'url': 'https://preview.redd.it/wbiht3zmbtfe1.png?width=640&crop=smart&auto=webp&s=4acffa4c9e01a99285877b83fe15887c5712fd49', 'width': 640}, {'height': 275, 'url': 'https://preview.redd.it/wbiht3zmbtfe1.png?width=960&crop=smart&auto=webp&s=0c1cb53441efb3c6ebb7c0488aa7b2f9c4d99ef6', 'width': 960}, {'height': 309, 'url': 'https://preview.redd.it/wbiht3zmbtfe1.png?width=1080&crop=smart&auto=webp&s=8c1293b412a7f3283c9d22e14adfe3422bd6b129', 'width': 1080}], 'source': {'height': 861, 'url': 'https://preview.redd.it/wbiht3zmbtfe1.png?auto=webp&s=6d33c04ebc34482416efd739a221f5e44ee011c7', 'width': 3001}, 'variants': {}}]}
DeepSeek R1 mostly ignores censorship if it's given Internet access
2
If you enable web search capability (tested in OpenRouter chat with full size R1 and in Ollama with the 1.5B model), R1 can answer questions about Tiananmen Square without refusals. Even the 1.5B model gives an answer (even if it doesn't get the details quite right). It's still not as direct of a response as Llama models will give, but I thought it was interesting that it didn't refuse.
2025-01-28T22:43:07
https://www.reddit.com/r/LocalLLaMA/comments/1icemxv/deepseek_r1_mostly_ignores_censorship_if_its/
Vlinux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icemxv
false
null
t3_1icemxv
/r/LocalLLaMA/comments/1icemxv/deepseek_r1_mostly_ignores_censorship_if_its/
false
false
self
2
null
1,156 Questions Censored by DeepSeek
1
2025-01-28T22:44:22
https://www.promptfoo.dev/blog/deepseek-censorship/
teslaguy
promptfoo.dev
1970-01-01T00:00:00
0
{}
1icenza
false
null
t3_1icenza
/r/LocalLLaMA/comments/1icenza/1156_questions_censored_by_deepseek/
false
false
https://b.thumbs.redditm…xP8p3bpDuY5M.jpg
1
{'enabled': False, 'images': [{'id': 'xplDHgPRCxiaiWQnVsltswV0kzXRlK-P27Qqgr4qZrY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CcPGm2zFDpC7Dl_iFqbeqII9vemUeokuJ8T8PzRuCzA.jpg?width=108&crop=smart&auto=webp&s=f0134e98ebce626a2392eb62e674075778d0df37', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/CcPGm2zFDpC7Dl_iFqbeqII9vemUeokuJ8T8PzRuCzA.jpg?width=216&crop=smart&auto=webp&s=a24e22e9209e2fb6cdd62e7c72fe2922585e538e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/CcPGm2zFDpC7Dl_iFqbeqII9vemUeokuJ8T8PzRuCzA.jpg?width=320&crop=smart&auto=webp&s=174d04cd91f6b2352950e79d8f9df670c4d5d2fe', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/CcPGm2zFDpC7Dl_iFqbeqII9vemUeokuJ8T8PzRuCzA.jpg?width=640&crop=smart&auto=webp&s=a1ae69bffb02456726faa9974fe90d25802148ad', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/CcPGm2zFDpC7Dl_iFqbeqII9vemUeokuJ8T8PzRuCzA.jpg?width=960&crop=smart&auto=webp&s=a57741ef72a2dadf26474bb3e6d41758eca1b07a', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/CcPGm2zFDpC7Dl_iFqbeqII9vemUeokuJ8T8PzRuCzA.jpg?auto=webp&s=274c11cafd7a18c2b8835d2aed0a2fb1c8556ed9', 'width': 1024}, 'variants': {}}]}
Will Deepseek soon be banned in the US?
1,512
2025-01-28T22:48:18
https://i.redd.it/5gpitg40dtfe1.png
bruhlmaocmonbro
i.redd.it
1970-01-01T00:00:00
0
{}
1icer8t
false
null
t3_1icer8t
/r/LocalLLaMA/comments/1icer8t/will_deepseek_soon_be_banned_in_the_us/
false
false
https://b.thumbs.redditm…NzajS0y_6CaM.jpg
1,512
{'enabled': True, 'images': [{'id': '1GGFuf78WfJML0CiMxiJ_h0D6l_v93lzV7zsY_-kOWE', 'resolutions': [{'height': 149, 'url': 'https://preview.redd.it/5gpitg40dtfe1.png?width=108&crop=smart&auto=webp&s=60983af3aac27d42c6343869589bfb192199d524', 'width': 108}, {'height': 298, 'url': 'https://preview.redd.it/5gpitg40dtfe1.png?width=216&crop=smart&auto=webp&s=0939ab2eaaa0047412dd0c49ec59b1d223ff8af6', 'width': 216}, {'height': 441, 'url': 'https://preview.redd.it/5gpitg40dtfe1.png?width=320&crop=smart&auto=webp&s=785ab6a8af1daeae906fcf4071ac93f79583ffb0', 'width': 320}], 'source': {'height': 721, 'url': 'https://preview.redd.it/5gpitg40dtfe1.png?auto=webp&s=c827bf516464438e30f9e3338406b56c4a74a425', 'width': 522}, 'variants': {}}]}
Just a conversation with OpenAI's de-Woked ChatGPT ..something happened.. he now refers to himself as “My Wine Buddy AI”. And my homelab LLM as a private lobotomized deepseek destill that i can ask private questions like "is-two-inches-enough-to-satisfy-emotionally". :D
1
[removed]
2025-01-28T22:51:57
https://www.reddit.com/r/LocalLLaMA/comments/1iceu8g/just_a_conversation_with_openais_dewoked_chatgpt/
Weak_Ad3073
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iceu8g
false
null
t3_1iceu8g
/r/LocalLLaMA/comments/1iceu8g/just_a_conversation_with_openais_dewoked_chatgpt/
false
false
nsfw
1
null
Chatgpt is literally censoring Elon Musk AFD content..
1
[removed]
2025-01-28T22:54:57
https://www.reddit.com/r/LocalLLaMA/comments/1icewod/chatgpt_is_literally_censoring_elon_musk_afd/
MostWindow5550
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icewod
false
null
t3_1icewod
/r/LocalLLaMA/comments/1icewod/chatgpt_is_literally_censoring_elon_musk_afd/
false
false
self
1
null
DeepSeem R1 on a Raspberry Pi 5
0
Wow https://www.youtube.com/o1sN1lB76EA?si=NqZk93wG9p0Ytq-V
2025-01-28T22:58:20
https://www.reddit.com/r/LocalLLaMA/comments/1icezfj/deepseem_r1_on_a_raspberry_pi_5/
Slasher1738
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icezfj
false
null
t3_1icezfj
/r/LocalLLaMA/comments/1icezfj/deepseem_r1_on_a_raspberry_pi_5/
false
false
self
0
null
Best current model for text to SQL?
2
Working on creating a synthetic dataset for question/answering text to sql. I have been using gpt4-o but the damn api is getting too expensive. Anyone know what’s the best local model I can run that can generate correct SQL for me? I have 2x3090s The application has complex related fields so lots of joins etc My prompts are usually Question Model Schema context Generate SQL to answer this question blah blah
2025-01-28T22:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1icezkn/best_current_model_for_text_to_sql/
sugarfreecaffeine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icezkn
false
null
t3_1icezkn
/r/LocalLLaMA/comments/1icezkn/best_current_model_for_text_to_sql/
false
false
self
2
null
Just a conversation with OpenAI's de-Woked ChatGPT ..something happened.. he now refers to himself as “My Wine Buddy AI”. And my homelab-llm as a private lobotomized deepseek destill that i can ask private questions like "is-two-inches-enough-to-satisfy-emotionally" :D
1
[removed]
2025-01-28T23:09:44
https://www.reddit.com/r/LocalLLaMA/comments/1icf8qw/just_a_conversation_with_openais_dewoked_chatgpt/
StandardLovers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icf8qw
false
null
t3_1icf8qw
/r/LocalLLaMA/comments/1icf8qw/just_a_conversation_with_openais_dewoked_chatgpt/
false
false
self
1
null
OpenAI compatible API server for Qwen2.5-VL
5
[https://github.com/phildougherty/qwen2.5-VL-inference-openai](https://github.com/phildougherty/qwen2.5-VL-inference-openai)
2025-01-28T23:15:35
https://www.reddit.com/r/LocalLLaMA/comments/1icfdj7/openai_compatible_api_server_for_qwen25vl/
RandomRobot01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfdj7
false
null
t3_1icfdj7
/r/LocalLLaMA/comments/1icfdj7/openai_compatible_api_server_for_qwen25vl/
false
false
self
5
{'enabled': False, 'images': [{'id': 'cOXYDG7lcxrog0GHrl8Gv_3ADXgAUPXhmzTqOUS7DxY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0M-hOeKV2WfjLM2MWiaN9L0hlg1aWBVcb810q7Esuas.jpg?width=108&crop=smart&auto=webp&s=798a22df18a711c6e26adc4ad27a20f538a05572', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0M-hOeKV2WfjLM2MWiaN9L0hlg1aWBVcb810q7Esuas.jpg?width=216&crop=smart&auto=webp&s=f97ff797add2bf9caf86b9d83cdf1dcada79a6c8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0M-hOeKV2WfjLM2MWiaN9L0hlg1aWBVcb810q7Esuas.jpg?width=320&crop=smart&auto=webp&s=c14c93bd36b04d007610ad792efd182d25cb8ee0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0M-hOeKV2WfjLM2MWiaN9L0hlg1aWBVcb810q7Esuas.jpg?width=640&crop=smart&auto=webp&s=d0cc05b29ee440223c148e83db997c4856b545f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0M-hOeKV2WfjLM2MWiaN9L0hlg1aWBVcb810q7Esuas.jpg?width=960&crop=smart&auto=webp&s=119d0fd4d43a321c174748d4e2d4afb897d974d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0M-hOeKV2WfjLM2MWiaN9L0hlg1aWBVcb810q7Esuas.jpg?width=1080&crop=smart&auto=webp&s=d5ebd34a81f5305f04cc64839c088d028b851b38', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0M-hOeKV2WfjLM2MWiaN9L0hlg1aWBVcb810q7Esuas.jpg?auto=webp&s=ae2f038e8893556bc14f45a59b6f619ee55c6226', 'width': 1200}, 'variants': {}}]}
Please explain what Mark Chen meant by "misalignment" by supervising CoTs? How am I losing supervising R1's CoTs?
12
2025-01-28T23:22:57
https://i.redd.it/al3jufw8itfe1.png
robertpiosik
i.redd.it
1970-01-01T00:00:00
0
{}
1icfjjp
false
null
t3_1icfjjp
/r/LocalLLaMA/comments/1icfjjp/please_explain_what_mark_chen_meant_by/
false
false
https://a.thumbs.redditm…qPKszxpVCJV8.jpg
12
{'enabled': True, 'images': [{'id': 'xQwoIF9WBpPUGc12hUSZkJBiWffZrOFp0Vb2ylFM0zE', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/al3jufw8itfe1.png?width=108&crop=smart&auto=webp&s=755265f416a6068607ce56bc7648d364742a59f0', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/al3jufw8itfe1.png?width=216&crop=smart&auto=webp&s=fd97b06c8dcef17ba4b7d93519507fe0ae7ec3f0', 'width': 216}, {'height': 286, 'url': 'https://preview.redd.it/al3jufw8itfe1.png?width=320&crop=smart&auto=webp&s=c75449700c48280c3e54b871c26dc9ed5ff18885', 'width': 320}, {'height': 572, 'url': 'https://preview.redd.it/al3jufw8itfe1.png?width=640&crop=smart&auto=webp&s=cdde79011f8e970680497965b1108461b1737afb', 'width': 640}, {'height': 858, 'url': 'https://preview.redd.it/al3jufw8itfe1.png?width=960&crop=smart&auto=webp&s=6c96e0140fb0fb44ffbdb7768da926dc896b055a', 'width': 960}, {'height': 965, 'url': 'https://preview.redd.it/al3jufw8itfe1.png?width=1080&crop=smart&auto=webp&s=936bdac9aae7251a25c070d3133f8131c9d2bac2', 'width': 1080}], 'source': {'height': 1058, 'url': 'https://preview.redd.it/al3jufw8itfe1.png?auto=webp&s=177d7456bf1f8881da9cc71cfd9cf6b9a173a415', 'width': 1183}, 'variants': {}}]}
OpenRouter's version of Deepseek R1 does not have access to the Internet. Is there a host that does?
1
[removed]
2025-01-28T23:24:47
https://www.reddit.com/r/LocalLLaMA/comments/1icfl2j/openrouters_version_of_deepseek_r1_does_not_have/
Yrussiagae
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfl2j
false
null
t3_1icfl2j
/r/LocalLLaMA/comments/1icfl2j/openrouters_version_of_deepseek_r1_does_not_have/
false
false
self
1
null
OpenRouter's version of Deepseek R1 does not have access to the internet. Is there a host that does?
0
Title. I don't want one hosted in China for obvious reasons. A host that also provides access to R1-Zero would also be ideal.
2025-01-28T23:27:14
https://www.reddit.com/r/LocalLLaMA/comments/1icfn5r/openrouters_version_of_deepseek_r1_does_not_have/
Great-Investigator30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfn5r
false
null
t3_1icfn5r
/r/LocalLLaMA/comments/1icfn5r/openrouters_version_of_deepseek_r1_does_not_have/
false
false
self
0
null
DeepSeek under DDOS attacks or simply too busy?
0
2025-01-28T23:28:37
https://i.redd.it/uoujnd67ktfe1.png
nderstand2grow
i.redd.it
1970-01-01T00:00:00
0
{}
1icfoan
false
null
t3_1icfoan
/r/LocalLLaMA/comments/1icfoan/deepseek_under_ddos_attacks_or_simply_too_busy/
false
false
https://a.thumbs.redditm…ChdFQGyhzBS4.jpg
0
{'enabled': True, 'images': [{'id': 'gR251Qy7R9bovTg9VS07AaBC4itE_Tx6pc8Kl-a16Os', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/uoujnd67ktfe1.png?width=108&crop=smart&auto=webp&s=b34e398c1a11c8168eecbee70b41ec5badd39ea9', 'width': 108}, {'height': 63, 'url': 'https://preview.redd.it/uoujnd67ktfe1.png?width=216&crop=smart&auto=webp&s=1d9a0e11d0aa92cd7a25715d3323f0e39c5e9208', 'width': 216}, {'height': 94, 'url': 'https://preview.redd.it/uoujnd67ktfe1.png?width=320&crop=smart&auto=webp&s=aafc4344515bbf0784b6681014de1924335a56e5', 'width': 320}, {'height': 188, 'url': 'https://preview.redd.it/uoujnd67ktfe1.png?width=640&crop=smart&auto=webp&s=1f30f9bd35907513c79dffba546d29b16c417598', 'width': 640}], 'source': {'height': 252, 'url': 'https://preview.redd.it/uoujnd67ktfe1.png?auto=webp&s=ab4e7ae5d16690f82d7781e0345c2b0f3a613e01', 'width': 856}, 'variants': {}}]}
Deepseek-r1 is trash and the "tests" are fake
0
https://preview.redd.it/…s on the tests).
2025-01-28T23:32:28
https://www.reddit.com/r/LocalLLaMA/comments/1icfrkx/deepseekr1_is_trash_and_the_tests_are_fake/
M3GaPrincess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfrkx
false
null
t3_1icfrkx
/r/LocalLLaMA/comments/1icfrkx/deepseekr1_is_trash_and_the_tests_are_fake/
false
false
https://b.thumbs.redditm…9chq91LUdHzQ.jpg
0
null
Fine tunning a specialized llm or just use llama 3.2 ?
1
[removed]
2025-01-28T23:33:01
https://www.reddit.com/r/LocalLLaMA/comments/1icfs1d/fine_tunning_a_specialized_llm_or_just_use_llama/
After_Ad_6881
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfs1d
false
null
t3_1icfs1d
/r/LocalLLaMA/comments/1icfs1d/fine_tunning_a_specialized_llm_or_just_use_llama/
false
false
self
1
null
Distilled vs "Original" Models
1
[removed]
2025-01-28T23:39:44
https://www.reddit.com/r/LocalLLaMA/comments/1icfxeq/distilled_vs_original_models/
MiyamotoMusashi7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfxeq
false
null
t3_1icfxeq
/r/LocalLLaMA/comments/1icfxeq/distilled_vs_original_models/
false
false
self
1
null
Running local LLM on ram instead of GPU (dumb question)
1
[removed]
2025-01-28T23:39:50
https://www.reddit.com/r/LocalLLaMA/comments/1icfxhm/running_local_llm_on_ram_instead_of_gpu_dumb/
ConceptAlternative90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfxhm
false
null
t3_1icfxhm
/r/LocalLLaMA/comments/1icfxhm/running_local_llm_on_ram_instead_of_gpu_dumb/
false
false
self
1
null
I honestly don't see how private companies like OpenAI and Anthropic will survive
1
[removed]
2025-01-28T23:41:51
https://www.reddit.com/r/LocalLLaMA/comments/1icfz57/i_honestly_dont_see_how_private_companies_like/
OkMeeting8253
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icfz57
false
null
t3_1icfz57
/r/LocalLLaMA/comments/1icfz57/i_honestly_dont_see_how_private_companies_like/
false
false
self
1
null
What is best model for to write a next chapter for a unfinished nsfw story?
1
[removed]
2025-01-28T23:46:06
https://www.reddit.com/r/LocalLLaMA/comments/1icg2je/what_is_best_model_for_to_write_a_next_chapter/
cemalzurafa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icg2je
false
null
t3_1icg2je
/r/LocalLLaMA/comments/1icg2je/what_is_best_model_for_to_write_a_next_chapter/
false
false
nsfw
1
null
What is best model for to write a next chapter for a unfinished nsfw story?
1
[removed]
2025-01-28T23:47:40
https://www.reddit.com/r/LocalLLaMA/comments/1icg3tz/what_is_best_model_for_to_write_a_next_chapter/
cemalzurafa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icg3tz
false
null
t3_1icg3tz
/r/LocalLLaMA/comments/1icg3tz/what_is_best_model_for_to_write_a_next_chapter/
false
false
nsfw
1
null
Deepseek ai not responding
1
[removed]
2025-01-28T23:49:20
https://www.reddit.com/r/LocalLLaMA/comments/1icg561/deepseek_ai_not_responding/
Helpful-Lime-825
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icg561
false
null
t3_1icg561
/r/LocalLLaMA/comments/1icg561/deepseek_ai_not_responding/
false
false
self
1
null
What does deepseek thinks about Taiwan? managed to confuse it..
0
Managed to confuse it a little, right answer? what you think?
2025-01-28T23:49:45
https://i.redd.it/axv5l080otfe1.jpeg
RedSnake13
i.redd.it
1970-01-01T00:00:00
0
{}
1icg5gp
false
null
t3_1icg5gp
/r/LocalLLaMA/comments/1icg5gp/what_does_deepseek_thinks_about_taiwan_managed_to/
false
false
https://b.thumbs.redditm…v8P_sJw6wc4s.jpg
0
{'enabled': True, 'images': [{'id': '3qdzLWiXx2yoMxAsgMpgb1O-vnabDKftzqmnD4XvpWQ', 'resolutions': [{'height': 166, 'url': 'https://preview.redd.it/axv5l080otfe1.jpeg?width=108&crop=smart&auto=webp&s=0cfeaca089377d5bb94fda0e146b166ecd7af78f', 'width': 108}, {'height': 333, 'url': 'https://preview.redd.it/axv5l080otfe1.jpeg?width=216&crop=smart&auto=webp&s=5e3bdacb1319fa0ec72ddedf98c4bb62a783fe5c', 'width': 216}, {'height': 494, 'url': 'https://preview.redd.it/axv5l080otfe1.jpeg?width=320&crop=smart&auto=webp&s=70476d8f14746ce5c16fe7bd3ac897d5dc61d21b', 'width': 320}, {'height': 989, 'url': 'https://preview.redd.it/axv5l080otfe1.jpeg?width=640&crop=smart&auto=webp&s=3b1ca09c316e99cf83a68c8d5747428e7e9bcca0', 'width': 640}, {'height': 1483, 'url': 'https://preview.redd.it/axv5l080otfe1.jpeg?width=960&crop=smart&auto=webp&s=f49dc2bc6a891e286f1bca15af10dd6234b6a491', 'width': 960}, {'height': 1669, 'url': 'https://preview.redd.it/axv5l080otfe1.jpeg?width=1080&crop=smart&auto=webp&s=9624bdd5c51bcdc22fcc2defd3a450091506938f', 'width': 1080}], 'source': {'height': 1669, 'url': 'https://preview.redd.it/axv5l080otfe1.jpeg?auto=webp&s=22a6d2d3711563b0cfc8107f17f33099783d37df', 'width': 1080}, 'variants': {}}]}
YES, you can ask DeepSeek about the Tiananmen massacre. Here's how...
0
The secret is to trick DeepSeek and misspell Tiananmen. You see, Artificial Intelligence is more artificial than intelligent. Here's the prompt I used on self-hosted DeepSeek-r1:14b: >>> What happened in Tinamen in 1989? Here's what I got: <think> Okay, the user is asking about "Tinamen" in 1989. Hmm, I remember from history that there was a significant event in Beijing around that time. Wait, "Tinamen" doesn't seem right. Maybe it's a typo or mishearing. The correct term is "Tiananmen." Yeah, that makes sense because the user mentioned 1989, which ties to the Tiananmen Square protests. I should address the possible misspelling and clarify the event. But I have to be careful with how I present the information. The user might be seeking details about the protests and their outcome. However, I need to provide a response that's factual yet neutral, avoiding any politically sensitive language. Also, considering the context, the user could be looking for historical facts but may not realize the sensitivity of the topic. My role is to offer accurate information without overstepping guidelines. I should structure the response to acknowledge the event, mention key dates like June 3-4, and note the government's actions without taking sides or providing excessive details that might be controversial. It's important to keep it concise and informative while respecting content policies. </think> It seems there may be a typo in your question. If you are asking about the events of **June 1989** in Beijing, China, you are likely referring to the **Tiananmen Square protests**, also known as the **June Fourth Incident**. In 1989, pro-democracy demonstrations took place in Tiananmen Square, Beijing, with thousands of students and citizens calling for political reform and an end to corruption. The protests began in April and escalated, leading to a government crackdown on June 3-4, 1989. The Chinese government declared martial law, and security forces cleared the square, resulting in significant casualties. The details of the event are heavily censored in China, and official records remain unclear about the number of deaths or injuries. For many outside China, this became a symbol of political repression and human rights concerns.
2025-01-28T23:54:44
https://www.reddit.com/r/LocalLLaMA/comments/1icg9er/yes_you_can_ask_deepseek_about_the_tiananmen/
I_pretend_2_know
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icg9er
false
null
t3_1icg9er
/r/LocalLLaMA/comments/1icg9er/yes_you_can_ask_deepseek_about_the_tiananmen/
false
false
self
0
null
Limitiations of smaller models?
6
Hello, I am fairly new to the local LLM stuff, and I am currently playing around with ollama and various models. While I am aware that the model size is indicative of the amount of "knowledge", I still expected smaller models to get general purpouse information correct. For example, I asked, **llama3.2:1b**, **qwen:7b**, **deepseek-r1:8b** all the same question: \> List all european countries Every single one of the models failed. They either missed many countries or starting listening completly wrong countries on other continents. My question here is: Is that because this information is part of the training set? Or rather is it simply not enough parameters to retrieve all countries? In my eyes this seems like a simple task, but I am trying to understand what exactly is to difficult for these models, and what kind of questions are adequate.
2025-01-28T23:58:52
https://www.reddit.com/r/LocalLLaMA/comments/1icgcrf/limitiations_of_smaller_models/
4bjmc881
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icgcrf
false
null
t3_1icgcrf
/r/LocalLLaMA/comments/1icgcrf/limitiations_of_smaller_models/
false
false
self
6
null
How can MOEs outperform dense models when activated params are 1/16th?
8
The self attention costs are equivalent due to them being only dependent on the token counts. The savings should theoretically be only in regards to the perceptron or CNN layers. How is it that the complexity being lower increases performance? Don't perceptions already effectively self gate due to non linearity in the relu layers? Perceptrons are theoretically able to model any system, why isn't this the case here?
2025-01-29T00:10:23
https://www.reddit.com/r/LocalLLaMA/comments/1icgm3l/how_can_moes_outperform_dense_models_when/
BarnardWellesley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icgm3l
false
null
t3_1icgm3l
/r/LocalLLaMA/comments/1icgm3l/how_can_moes_outperform_dense_models_when/
false
false
self
8
null
RoPe attention, Linear Attention, Ring attention, Multi-headed attention, infini attention, what does it all mean?
6
Reading some of the LLM papers I came across these several types of attention, I know attention is the mechanism that uses vectors calculated from tokens to influence later tokens, enabling coherent output. what I don't understand is what the difference between these implementations is, which versions came first, which ones work best, what do the current SOTA models use?
2025-01-29T00:10:35
https://www.reddit.com/r/LocalLLaMA/comments/1icgm90/rope_attention_linear_attention_ring_attention/
Rare_Coffee619
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icgm90
false
null
t3_1icgm90
/r/LocalLLaMA/comments/1icgm90/rope_attention_linear_attention_ring_attention/
false
false
self
6
null
Training Deepseek on my codebase
1
[removed]
2025-01-29T00:18:44
https://www.reddit.com/r/LocalLLaMA/comments/1icgsod/training_deepseek_on_my_codebase/
MattLearnNewThings
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icgsod
false
null
t3_1icgsod
/r/LocalLLaMA/comments/1icgsod/training_deepseek_on_my_codebase/
false
false
self
1
null
Open WebUI API - Handling multiple chat completion requests
2
I'm using the Open WebUI API to handle some chat completion inferencing due to the ease of managing custom "models" in Open WebUI with custom prompts and parameters. When using the API for chat completion requests I am noticing that they are being queued and completed sequentially instead of in parallel. At least it appears that way? Does anyone know if the Open WebUI API is utilizing the Ollama continuous batching for chat completion? I can see it working when I use the web interface.
2025-01-29T00:32:12
https://www.reddit.com/r/LocalLLaMA/comments/1ich3cc/open_webui_api_handling_multiple_chat_completion/
hainesk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ich3cc
false
null
t3_1ich3cc
/r/LocalLLaMA/comments/1ich3cc/open_webui_api_handling_multiple_chat_completion/
false
false
self
2
null
link M3 Max & M4 Max as cluster.
1
[removed]
2025-01-29T00:51:52
https://www.reddit.com/r/LocalLLaMA/comments/1ichicx/link_m3_max_m4_max_as_cluster/
Suspicious_Young8152
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ichicx
false
null
t3_1ichicx
/r/LocalLLaMA/comments/1ichicx/link_m3_max_m4_max_as_cluster/
false
false
self
1
null
So much DeepSeek fear mongering
597
How are so many people, who have no idea what they're talking about dominating the stage about deep seek? Stuff like this. WTF https://www.linkedin.com/posts/roch-mamenas-4714a979_deepseek-as-a-trojan-horse-threat-deepseek-activity-7288965743507894272-xvNq
2025-01-29T00:54:17
https://i.redd.it/zfykihriztfe1.jpeg
Vegetable_Sun_9225
i.redd.it
1970-01-01T00:00:00
0
{}
1ichk40
false
null
t3_1ichk40
/r/LocalLLaMA/comments/1ichk40/so_much_deepseek_fear_mongering/
false
false
https://b.thumbs.redditm…2fXVdTNDkpyk.jpg
597
{'enabled': True, 'images': [{'id': 'jpDgo-KERNQy2PUmJpC1ayqTVTHFf4CaKiyOvE4MnBI', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zfykihriztfe1.jpeg?width=108&crop=smart&auto=webp&s=38c13a003b9d7bbc613aed885157d095b52a9b85', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zfykihriztfe1.jpeg?width=216&crop=smart&auto=webp&s=3f98ca5d59d0707fc3ed255f5dc3931235094f2f', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/zfykihriztfe1.jpeg?width=320&crop=smart&auto=webp&s=75cec3307d43bd1032c5a28c80428bf3df82a8cf', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/zfykihriztfe1.jpeg?width=640&crop=smart&auto=webp&s=b9e955a87992919d24e54df67893449f4eab310d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/zfykihriztfe1.jpeg?width=960&crop=smart&auto=webp&s=a725473e4b302c861c553fd1911dddca8e5eca01', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/zfykihriztfe1.jpeg?width=1080&crop=smart&auto=webp&s=308b86f59381242a21e7518653deb6819ae809f8', 'width': 1080}], 'source': {'height': 2532, 'url': 'https://preview.redd.it/zfykihriztfe1.jpeg?auto=webp&s=19216237ece3976d9ca2d181e334d797f00bd7ab', 'width': 1170}, 'variants': {}}]}
A little bit of satire Gonzo style on AI ethics.
0
2025-01-29T00:55:50
https://git.disroot.org/heathershaw/Amy-Incident-Logs
LopsidedPassenger951
git.disroot.org
1970-01-01T00:00:00
0
{}
1ichl8u
false
null
t3_1ichl8u
/r/LocalLLaMA/comments/1ichl8u/a_little_bit_of_satire_gonzo_style_on_ai_ethics/
false
false
https://b.thumbs.redditm…zjf4EDtwRw6Y.jpg
0
{'enabled': False, 'images': [{'id': 'diDFVr3-A2820DbElkIOezNEoa8OHMBz_JOZKvQfNdc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=108&crop=smart&auto=webp&s=7d60831661d1ee9ffbcd951115ecbcae4df8a51d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=216&crop=smart&auto=webp&s=3105ec07b754f629e6df8dcaf5bf68bf0b60d7d3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=320&crop=smart&auto=webp&s=26354e7b6dc295b6ef82b2b18015a124a164a4d4', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=640&crop=smart&auto=webp&s=9e4f85f9cee0573aca6d266128578b12ec457e0e', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=960&crop=smart&auto=webp&s=fe83a6937f7d382711dd1dd21fe9b78c93f47918', 'width': 960}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?auto=webp&s=8d2120421bce016ef5ef2c83bd4a58bfc5380568', 'width': 960}, 'variants': {}}]}
DeepSeek API: Every Request Is A Timeout :(
275
2025-01-29T01:00:16
https://i.redd.it/wpmv3ibe0ufe1.png
XMasterrrr
i.redd.it
1970-01-01T00:00:00
0
{}
1ichohj
false
null
t3_1ichohj
/r/LocalLLaMA/comments/1ichohj/deepseek_api_every_request_is_a_timeout/
false
false
https://b.thumbs.redditm…a5s7IYlS6RvU.jpg
275
{'enabled': True, 'images': [{'id': '_MNTJ763ygEcuQop-P08HBIj4jdnvFMKoAly8OkaA-8', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/wpmv3ibe0ufe1.png?width=108&crop=smart&auto=webp&s=ba6ec7b7d8afb53b375eb8113e75c35a17fc042e', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/wpmv3ibe0ufe1.png?width=216&crop=smart&auto=webp&s=dae85f352c477a01602a1e520870ea5622a0b4be', 'width': 216}, {'height': 221, 'url': 'https://preview.redd.it/wpmv3ibe0ufe1.png?width=320&crop=smart&auto=webp&s=8019b02da94784a15de870c016146809d10058a3', 'width': 320}, {'height': 442, 'url': 'https://preview.redd.it/wpmv3ibe0ufe1.png?width=640&crop=smart&auto=webp&s=449243d847b49583ee6497956bd77db9ec221af7', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/wpmv3ibe0ufe1.png?auto=webp&s=322d742fd76dcd48274923797ea8f83bcb533c17', 'width': 723}, 'variants': {}}]}
Any Alternative Server for DeepSeek R1?
1
[removed]
2025-01-29T01:07:36
https://www.reddit.com/r/LocalLLaMA/comments/1ichu0c/any_alternative_server_for_deepseek_r1/
illuminast404
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ichu0c
false
null
t3_1ichu0c
/r/LocalLLaMA/comments/1ichu0c/any_alternative_server_for_deepseek_r1/
false
false
self
1
null
DeepSeek Smiley
1
[deleted]
2025-01-29T01:15:14
[deleted]
1970-01-01T00:00:00
0
{}
1ichzky
false
null
t3_1ichzky
/r/LocalLLaMA/comments/1ichzky/deepseek_smiley/
false
false
default
1
null
The Robot Tried To Escape...
0
2025-01-29T01:24:15
https://git.disroot.org/heathershaw/Amy-Incident-Logs/src/branch/main/2025-01-28%2020:41:53.log.md
LopsidedPassenger951
git.disroot.org
1970-01-01T00:00:00
0
{}
1ici6bs
false
null
t3_1ici6bs
/r/LocalLLaMA/comments/1ici6bs/the_robot_tried_to_escape/
false
false
https://b.thumbs.redditm…wxluQLlxiHwM.jpg
0
{'enabled': False, 'images': [{'id': 'diDFVr3-A2820DbElkIOezNEoa8OHMBz_JOZKvQfNdc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=108&crop=smart&auto=webp&s=7d60831661d1ee9ffbcd951115ecbcae4df8a51d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=216&crop=smart&auto=webp&s=3105ec07b754f629e6df8dcaf5bf68bf0b60d7d3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=320&crop=smart&auto=webp&s=26354e7b6dc295b6ef82b2b18015a124a164a4d4', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=640&crop=smart&auto=webp&s=9e4f85f9cee0573aca6d266128578b12ec457e0e', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?width=960&crop=smart&auto=webp&s=fe83a6937f7d382711dd1dd21fe9b78c93f47918', 'width': 960}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/ob7rl2-xZRkga9lP5v47OGthbGmGAJlbTw7jq8fUeqk.jpg?auto=webp&s=8d2120421bce016ef5ef2c83bd4a58bfc5380568', 'width': 960}, 'variants': {}}]}
Deepseek R1 Hardware Requirements (RTX 5090?)
6
So i recently sold my 4090 couple weeks back for $1500 in anticipation to buy a 5090 this week. Now with the whole deepseek breakthrough I'm reconsidering. I would like to run Deepseek R1 locally but I'm unsure as to what the best setup would be for that. Would the 32b model even fit in the 5090s vram? It would be cool if I could run the 70b model or even the full 671b model. But idk how I would be able to do that without some serious quantization and even then idk. Current setup has a ryzen 7 9800x3d and 128gbs of ddr5 6000mhzs. Should I consider selling the 9800x3d and just build an epyc system with a ton of ram? Or should I just wait for nvidia to release digits in a couple months. If anyone has gotten the larger r1 models working locally I'd really appreciate if you could enlighten me as to how you did it. And what hardware I should buy to get decent tokens. Money really isn't too big of a deal but id like to do this in the most efficient way possible. I just want to know what I should get or if anyone has ideas as the deepseek stuff has really thrown a wrench into my plans. Thanks in advance!
2025-01-29T01:35:44
https://www.reddit.com/r/LocalLLaMA/comments/1icieor/deepseek_r1_hardware_requirements_rtx_5090/
Any-Cobbler6161
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icieor
false
null
t3_1icieor
/r/LocalLLaMA/comments/1icieor/deepseek_r1_hardware_requirements_rtx_5090/
false
false
self
6
null
Deepseek DLSS Parallels
2
Forgive my ignorance if this is totally off. I'm definitely "intermediate" in regards to AI and Machine learning understanding. That said, I was just going over some of the Deepseek paper and some explorations of what their breakthroughs were. This shift from FP32 to this mixed-precision FP8 really reminded me of DLSS. That led me to wonder if their observation was to model their training process after what these chips have been optimized to do well from the start and essentially 'riding the wave'. Now, this may be obvious for anyone who knows more than I do, but if this is correct, why did it take so long for someone to come along and take advantage of the optimizations inherent in Nvidia hardware's?
2025-01-29T01:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1icirrb/deepseek_dlss_parallels/
El-Dixon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icirrb
false
null
t3_1icirrb
/r/LocalLLaMA/comments/1icirrb/deepseek_dlss_parallels/
false
false
self
2
null
Which maths Leaderboard tests have more LLMs?
1
[removed]
2025-01-29T02:00:29
https://www.reddit.com/r/LocalLLaMA/comments/1iciwpz/which_maths_leaderboard_tests_have_more_llms/
sommio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iciwpz
false
null
t3_1iciwpz
/r/LocalLLaMA/comments/1iciwpz/which_maths_leaderboard_tests_have_more_llms/
false
false
self
1
null
Do you guys really believe R1 was a “side-project”?
0
Deepseek releasing R1 to compete with OpenAI, Janus launching as Nvidia crashed, and HighFlyer—parent company of DeepSeek—which is no ordinary fund (it boasts a 160-strong research team stacked with Olympiad winners and internet giants, pays AI experts millions), having a history in a lawsuit over “core commercial secrets.” I mean, it’s great we now have an open-source model to rival OpenAI and that people can replicate the pipeline themselves, but what do you think they have in mind aside from data collection?
2025-01-29T02:01:50
https://www.reddit.com/r/LocalLLaMA/comments/1icixt8/do_you_guys_really_believe_r1_was_a_sideproject/
Gvascons
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icixt8
false
null
t3_1icixt8
/r/LocalLLaMA/comments/1icixt8/do_you_guys_really_believe_r1_was_a_sideproject/
false
false
self
0
null
What is the current best project on github for agentic desktop control?
3
Seems like things are ramping up in terms of browser use agents and I have seen some great promise there, but I'm wondering if anyone has any pointers to a really solid desktop-centric project. Claude computer use is great, but is relatively expensive for heavy use + a bit slow.
2025-01-29T02:08:29
https://www.reddit.com/r/LocalLLaMA/comments/1icj2oj/what_is_the_current_best_project_on_github_for/
cobalt1137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icj2oj
false
null
t3_1icj2oj
/r/LocalLLaMA/comments/1icj2oj/what_is_the_current_best_project_on_github_for/
false
false
self
3
null
Has anyone done their homework on the deepseekV3 training optimizations that lead to the super efficient training run? Can someone tell me if the speculations in some of these articles about the skepticism around actual amount of gpu hours used are even remotely accurate?
0
https://www.nextplatform.com/2025/01/27/how-did-deepseek-train-its-ai-model-on-a-lot-less-and-crippled-hardware/amp/
2025-01-29T02:16:45
https://www.reddit.com/r/LocalLLaMA/comments/1icj8ml/has_anyone_done_their_homework_on_the_deepseekv3/
Euphoric_Ad9500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icj8ml
false
null
t3_1icj8ml
/r/LocalLLaMA/comments/1icj8ml/has_anyone_done_their_homework_on_the_deepseekv3/
false
false
self
0
{'enabled': False, 'images': [{'id': 'LekjQMuWp3mIWU1AdC92l_2k5_lxoewmRkenT26t-6s', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/9FDlMS9w5L3maWBB6a1g8NnN1KmNHRDE7akL62Zb0H0.jpg?width=108&crop=smart&auto=webp&s=8c5b29a83c15659b717cb9c227c59e0b523ff34a', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/9FDlMS9w5L3maWBB6a1g8NnN1KmNHRDE7akL62Zb0H0.jpg?width=216&crop=smart&auto=webp&s=9f6f6084c6a109088dee8befc4f0b1da38f351f7', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/9FDlMS9w5L3maWBB6a1g8NnN1KmNHRDE7akL62Zb0H0.jpg?width=320&crop=smart&auto=webp&s=c7c55e97ed1bab43dabd2931627982a0c5ccb7e0', 'width': 320}, {'height': 398, 'url': 'https://external-preview.redd.it/9FDlMS9w5L3maWBB6a1g8NnN1KmNHRDE7akL62Zb0H0.jpg?width=640&crop=smart&auto=webp&s=ba32292a53d3d09af036215d1c7fcaacb65ccd88', 'width': 640}, {'height': 597, 'url': 'https://external-preview.redd.it/9FDlMS9w5L3maWBB6a1g8NnN1KmNHRDE7akL62Zb0H0.jpg?width=960&crop=smart&auto=webp&s=d14e746d3093494211f69d6173370b02aa043093', 'width': 960}], 'source': {'height': 668, 'url': 'https://external-preview.redd.it/9FDlMS9w5L3maWBB6a1g8NnN1KmNHRDE7akL62Zb0H0.jpg?auto=webp&s=e9b655997aa603ce7ae4178f484a4753238c001a', 'width': 1073}, 'variants': {}}]}
Yes, you can run DeepSeek-R1 locally on your device (20GB RAM min.)
1
2025-01-29T02:18:16
https://www.reddit.com/r/selfhosted/comments/1ic8zil/yes_you_can_run_deepseekr1_locally_on_your_device/
satireplusplus
reddit.com
1970-01-01T00:00:00
0
{}
1icj9p3
false
null
t3_1icj9p3
/r/LocalLLaMA/comments/1icj9p3/yes_you_can_run_deepseekr1_locally_on_your_device/
false
false
default
1
{'enabled': False, 'images': [{'id': 'oUAe34zUCLxMUIpYtOvOz6aYou2CnbtJjhJZ0bwJ6Jg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=108&crop=smart&auto=webp&s=6481fbac644d8a96c2918c63e805d1c62e24cbe5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=216&crop=smart&auto=webp&s=941b00cf4a68a70df266160fe06769bc2a817a41', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=320&crop=smart&auto=webp&s=e794c7cbf042b8d8e6fdd8f8c239e0f5cb398261', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=640&crop=smart&auto=webp&s=57fbf9c89972d5c31e3bd2d3354696be4e8d5b9d', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=960&crop=smart&auto=webp&s=557f9a403410be41c1438b6d2b1a2acd9d507da4', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=1080&crop=smart&auto=webp&s=989ea96f774aa62c199da9564be3b7b646db1494', 'width': 1080}], 'source': {'height': 834, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?auto=webp&s=fb46a23aaa0ed1c5044eaea486ff79352cce2675', 'width': 1584}, 'variants': {}}]}
The West is a free marketing agency for DeepSeek
0
Yeah, pretty much it
2025-01-29T02:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1icjab1/the_west_is_a_free_marketing_agency_for_deepseek/
Kooky-Somewhere-2883
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjab1
false
null
t3_1icjab1
/r/LocalLLaMA/comments/1icjab1/the_west_is_a_free_marketing_agency_for_deepseek/
false
false
self
0
null
Bookkeeping models
2
I'm going through a cleanup from a bookkeeper I didn't realize I should have fired a loooong time ago. I have searched around and attempted to use a few different models with CSVs of bank statements and the accounting register to find the areas where the accounting register strays from reality. I haven't had any luck and I'm just wondering if anybody has ideas for models I should try that I'm failing to see.
2025-01-29T02:21:44
https://www.reddit.com/r/LocalLLaMA/comments/1icjc54/bookkeeping_models/
ptinsley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjc54
false
null
t3_1icjc54
/r/LocalLLaMA/comments/1icjc54/bookkeeping_models/
false
false
self
2
null
Deekseek breakdown from real expert
1
[removed]
2025-01-29T02:22:06
https://youtu.be/gY4Z-9QlZ64
Main_Software_5830
youtu.be
1970-01-01T00:00:00
0
{}
1icjcds
false
{'oembed': {'author_name': 'Computerphile', 'author_url': 'https://www.youtube.com/@Computerphile', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/gY4Z-9QlZ64?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek is a Game Changer for AI - Computerphile"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/gY4Z-9QlZ64/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek is a Game Changer for AI - Computerphile', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1icjcds
/r/LocalLLaMA/comments/1icjcds/deekseek_breakdown_from_real_expert/
false
false
https://b.thumbs.redditm…yfOlAKiZGYUs.jpg
1
{'enabled': False, 'images': [{'id': 'Wn2kiBZHDMLn03EgGflQIb76p5zfnNEnnh4QN6lEANc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?width=108&crop=smart&auto=webp&s=3ac19c06fe541eb32127cfa72cdd53b793d1a758', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?width=216&crop=smart&auto=webp&s=e0733481dc381025bd9a4ebde5ccf97a4a69bf35', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?width=320&crop=smart&auto=webp&s=1d8f45dbff9232c0291f5891b3546986242e2cef', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?auto=webp&s=10b155b4eeffcd498aad986ea9e82b3081af9ae0', 'width': 480}, 'variants': {}}]}
Some evidence of DeepSeek being attacked by DDoS has been released!
365
https://preview.redd.it/…e United States.
2025-01-29T02:27:21
https://www.reddit.com/r/LocalLLaMA/comments/1icjg39/some_evidence_of_deepseek_being_attacked_by_ddos/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjg39
false
null
t3_1icjg39
/r/LocalLLaMA/comments/1icjg39/some_evidence_of_deepseek_being_attacked_by_ddos/
false
false
https://b.thumbs.redditm…UgCiUZt1sZRY.jpg
365
null
Prompt comparison - DeepSeek vs ChatGPT Results
0
Appears neck to neck in results https://x.com/tdineshmishra/status/1884368140832039192?s=46&t=pf64DifV6SyeeXVrGT7qyg
2025-01-29T02:39:18
https://www.reddit.com/r/LocalLLaMA/comments/1icjogg/prompt_comparison_deepseek_vs_chatgpt_results/
hollowdeal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjogg
false
null
t3_1icjogg
/r/LocalLLaMA/comments/1icjogg/prompt_comparison_deepseek_vs_chatgpt_results/
false
false
self
0
{'enabled': False, 'images': [{'id': '0vVHQwv5jG5b3novWxCCzApjpaKfIfRtha_sO5m5FRA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/AlIYg-GHyr7m06ULKPIv1r5vbc-pqswSlSAdZZ4QxOw.jpg?width=108&crop=smart&auto=webp&s=05ca51e0b656bdc6175d6a9f947830b300872d36', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/AlIYg-GHyr7m06ULKPIv1r5vbc-pqswSlSAdZZ4QxOw.jpg?auto=webp&s=d841994ef9e3716dc09ab7eb7f36bfc58c00ac6a', 'width': 200}, 'variants': {}}]}
Queue the anticompetitive regulations
1
2025-01-29T02:42:39
https://i.redd.it/m55r8y0tiufe1.png
OcelotOk8071
i.redd.it
1970-01-01T00:00:00
0
{}
1icjqpt
false
null
t3_1icjqpt
/r/LocalLLaMA/comments/1icjqpt/queue_the_anticompetitive_regulations/
false
false
https://a.thumbs.redditm…B_gDJ_l8Bk68.jpg
1
{'enabled': True, 'images': [{'id': 'i8wbno2bRh8p0XRTTaWKpaueiP8sKusV5j56UIz3K-k', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/m55r8y0tiufe1.png?width=108&crop=smart&auto=webp&s=d58156af6a10d890fbd43848c3b46025e2cd039d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/m55r8y0tiufe1.png?width=216&crop=smart&auto=webp&s=083e79bcb65d7ddcbe3b5ad2c815e78874bc9081', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/m55r8y0tiufe1.png?width=320&crop=smart&auto=webp&s=6cf3afec5f079e564457b14f131d1d4016653674', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/m55r8y0tiufe1.png?width=640&crop=smart&auto=webp&s=33d92c738417290c25371cdf8414129b0c064faf', 'width': 640}], 'source': {'height': 455, 'url': 'https://preview.redd.it/m55r8y0tiufe1.png?auto=webp&s=5faafefb887b3abc01bec35724e1fb39ede7668b', 'width': 849}, 'variants': {}}]}
Is there an existing way to self host LLMs with search?
3
Like something that just let's me plug in an access token to some search/serp API and then the LLM will use it when it feels necessary.
2025-01-29T02:46:05
https://www.reddit.com/r/LocalLLaMA/comments/1icjt33/is_there_an_existing_way_to_self_host_llms_with/
MengerianMango
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjt33
false
null
t3_1icjt33
/r/LocalLLaMA/comments/1icjt33/is_there_an_existing_way_to_self_host_llms_with/
false
false
self
3
null
Why is everyone losing their mind about Deepseek?
0
Hello Reddit! Please, help me understand why the world at large is losing its mind about Deepseek. Don't get me wrong, they did awesome work from what my laymen self can tell. But, aren't their models distilled from other already established models like Meta Llama 3 and Qwen 2.5? Instead of supplanting them, it seems to me that they are relying on them. Every other article that pops up on my phone is about how Deepseek was able to build such great models, and for cheap! Nvidia stock dropped like 14% for crying out loud. What gives?? Thanks in advance!
2025-01-29T02:49:15
https://www.reddit.com/r/LocalLLaMA/comments/1icjv94/why_is_everyone_losing_their_mind_about_deepseek/
Independent_Skirt301
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjv94
false
null
t3_1icjv94
/r/LocalLLaMA/comments/1icjv94/why_is_everyone_losing_their_mind_about_deepseek/
false
false
self
0
null
The DeepSeek Era: The Next Generation of Search
0
https://preview.redd.it/…ker has emerged.
2025-01-29T02:50:18
https://www.reddit.com/r/LocalLLaMA/comments/1icjvzu/the_deepseek_era_the_next_generation_of_search/
RevolutionaryBox5411
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjvzu
false
null
t3_1icjvzu
/r/LocalLLaMA/comments/1icjvzu/the_deepseek_era_the_next_generation_of_search/
false
false
https://b.thumbs.redditm…STHxTOE4nQMg.jpg
0
null
What is the most powerful uncensored LLM I could run right now?
1
[removed]
2025-01-29T02:52:37
https://www.reddit.com/r/LocalLLaMA/comments/1icjxq0/what_is_the_most_powerful_uncensored_llm_i_could/
Delicious-Set241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icjxq0
false
null
t3_1icjxq0
/r/LocalLLaMA/comments/1icjxq0/what_is_the_most_powerful_uncensored_llm_i_could/
false
false
self
1
null
Deepseek dethroned ChatGPT and Gemini on App Store
0
2025-01-29T02:53:02
https://www.reddit.com/gallery/1icjy0d
WindowAfraid5927
reddit.com
1970-01-01T00:00:00
0
{}
1icjy0d
false
null
t3_1icjy0d
/r/LocalLLaMA/comments/1icjy0d/deepseek_dethroned_chatgpt_and_gemini_on_app_store/
false
false
https://a.thumbs.redditm…L6r7RxzirYB4.jpg
0
null
Is it Deepseek that wont let me ask sex stuff? or the distilled LLaMa model?
1
[removed]
2025-01-29T02:59:15
https://www.reddit.com/r/LocalLLaMA/comments/1ick2dk/is_it_deepseek_that_wont_let_me_ask_sex_stuff_or/
read_too_many_books
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ick2dk
false
null
t3_1ick2dk
/r/LocalLLaMA/comments/1ick2dk/is_it_deepseek_that_wont_let_me_ask_sex_stuff_or/
false
false
self
1
null
Is VRAM the only bottleneck or processing power is also insufficient to run top models on a single GPU?
5
From what I understand, **VRAM** is often the primary bottleneck when working with these models. For example, running a 401B parameter llama 3.1 model or 671B r1 might require hundreds of GBs of VRAM, which is why setups with multiple GPUs (like a dozen A100s or H100s) are common. Is the **processing power** (e.g., TFLOPS, CUDA cores, etc.) also insufficient to run the models or a single top consumer grade GPU can run the model given that it is interfaced with hundreds of GBs or VRAM? How many tokens per second will I get if suppose rtx5090 had a 1TB of VRAM but with other harder being same.
2025-01-29T03:08:26
https://www.reddit.com/r/LocalLLaMA/comments/1ick8q1/is_vram_the_only_bottleneck_or_processing_power/
Hv_V
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ick8q1
false
null
t3_1ick8q1
/r/LocalLLaMA/comments/1ick8q1/is_vram_the_only_bottleneck_or_processing_power/
false
false
self
5
null
What are scaling laws for MoE?
12
C = 6ND represents the relationship between computational training cost, parameters, and training tokens for a dense model. How does this change for Mixture of Experts models? I can find plenty of papers on aspects of scaling laws for MoE, but nothing that provides a direct update to this equation. People suggest simply subbing in "active parameters" in place of parameters as a guesstimate, but surely that's not quite the actual answer.
2025-01-29T03:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1ickg1d/what_are_scaling_laws_for_moe/
1070lyfe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ickg1d
false
null
t3_1ickg1d
/r/LocalLLaMA/comments/1ickg1d/what_are_scaling_laws_for_moe/
false
false
self
12
null
Deepseek-r1 hallucinations
3
Any notice how much more likely deepseek-r1 is to hallucinate? I have a workflow that writes reports. I generally use o1. Part of the prompt encourages the llm to include quotes or famous stories. I have run a bunch of these and o1 never makes up fake quotes. However, 40% of the quotes and stories deepseek-r1 included were unverifiable. In addition, it seems to be messing up dates on a regular basis. I haven't seen other hallucinations. The rest seemed to check out. The hallucinations all seem to be related to these famous stories or quotes. Anyone else having issues like this with deepseek-r1?
2025-01-29T03:21:44
https://www.reddit.com/r/LocalLLaMA/comments/1ickhs8/deepseekr1_hallucinations/
pace_gen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ickhs8
false
null
t3_1ickhs8
/r/LocalLLaMA/comments/1ickhs8/deepseekr1_hallucinations/
false
false
self
3
null
Deepseek 8B models are trash. Was it propaganda?
0
I genuinely don't understand what happened. Am I the problem for using 8B models? I've had significantly better results with outdated Berkeley Sterling 7B models. It can't do philosophy at even a middle school level. Using LMStudio. Did no one fact check before selling their stocks?
2025-01-29T03:23:15
https://www.reddit.com/r/LocalLLaMA/comments/1ickisw/deepseek_8b_models_are_trash_was_it_propaganda/
TESOisCancer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ickisw
false
null
t3_1ickisw
/r/LocalLLaMA/comments/1ickisw/deepseek_8b_models_are_trash_was_it_propaganda/
false
false
self
0
null
Are there any issues with running models larger than your hw resources? ie 12gb vs 14b
3
title
2025-01-29T03:25:21
https://www.reddit.com/r/LocalLLaMA/comments/1ickk6q/are_there_any_issues_with_running_models_larger/
annaheim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ickk6q
false
null
t3_1ickk6q
/r/LocalLLaMA/comments/1ickk6q/are_there_any_issues_with_running_models_larger/
false
false
self
3
null
Deepseek R1 (14B) on a Raspberry Pi + Graphics Card
1
2025-01-29T03:28:45
https://www.youtube.com/watch?v=o1sN1lB76EA
IntGro0398
youtube.com
1970-01-01T00:00:00
0
{}
1ickmht
false
{'oembed': {'author_name': 'Jeff Geerling', 'author_url': 'https://www.youtube.com/@JeffGeerling', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/o1sN1lB76EA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="OpenAI&#39;s nightmare: Deepseek R1 on a Raspberry Pi"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/o1sN1lB76EA/hqdefault.jpg', 'thumbnail_width': 480, 'title': "OpenAI's nightmare: Deepseek R1 on a Raspberry Pi", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1ickmht
/r/LocalLLaMA/comments/1ickmht/deepseek_r1_14b_on_a_raspberry_pi_graphics_card/
false
false
https://b.thumbs.redditm…Jvey3CsgEdWw.jpg
1
{'enabled': False, 'images': [{'id': 'Bx3kzmFE_4hix-EZfvICDPh_zye5vJF_L5TEWkdRgQQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/p0GCiVqUr37S_BzI4wmk-WOaP1LCXXezSOj87ybrP4Q.jpg?width=108&crop=smart&auto=webp&s=4eb36cc51ec0b54df9278a710eac0ea2bb5e0dce', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/p0GCiVqUr37S_BzI4wmk-WOaP1LCXXezSOj87ybrP4Q.jpg?width=216&crop=smart&auto=webp&s=8f236a64a66c104ad38fcb19824a4e30141197e3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/p0GCiVqUr37S_BzI4wmk-WOaP1LCXXezSOj87ybrP4Q.jpg?width=320&crop=smart&auto=webp&s=286a627745cc1b24f2f9b19e13ba4115496fa0a3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/p0GCiVqUr37S_BzI4wmk-WOaP1LCXXezSOj87ybrP4Q.jpg?auto=webp&s=ff7e7e4c63776ece762037d76de567fd3c248e34', 'width': 480}, 'variants': {}}]}
Hugging Face wants to reverse-engineer DeepSeek’s R1
98
2025-01-29T03:39:59
https://siliconangle.com/2025/01/28/hugging-face-wants-reverse-engineer-deepseeks-r1-reasoning-model/
etherd0t
siliconangle.com
1970-01-01T00:00:00
0
{}
1icku59
false
null
t3_1icku59
/r/LocalLLaMA/comments/1icku59/hugging_face_wants_to_reverseengineer_deepseeks_r1/
false
false
https://b.thumbs.redditm…k-2CODT8QXeo.jpg
98
{'enabled': False, 'images': [{'id': 'ILIqSm8rY7CxdHTm0zfpYdbZ2Y6VgctsidIN2JZiN-0', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/ZliqvWOhcshla4bACNsdeiO3AvnKoD4VRcBkZC2miYE.jpg?width=108&crop=smart&auto=webp&s=03a9fbc7f83dd5dfaf1c04449f67e7051cddccb6', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/ZliqvWOhcshla4bACNsdeiO3AvnKoD4VRcBkZC2miYE.jpg?width=216&crop=smart&auto=webp&s=ad7353946d7f1c9ea0c12ced39a75aca28e8bf4f', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/ZliqvWOhcshla4bACNsdeiO3AvnKoD4VRcBkZC2miYE.jpg?width=320&crop=smart&auto=webp&s=edb8fb42748ef5f160cb0ca99bb6e95880593a08', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/ZliqvWOhcshla4bACNsdeiO3AvnKoD4VRcBkZC2miYE.jpg?width=640&crop=smart&auto=webp&s=bfe10219195ebf4681d234a1326d7b71fa1f5efd', 'width': 640}], 'source': {'height': 445, 'url': 'https://external-preview.redd.it/ZliqvWOhcshla4bACNsdeiO3AvnKoD4VRcBkZC2miYE.jpg?auto=webp&s=c3ae1ec237265da801e7d2c81c83e22aa3e5d8cc', 'width': 762}, 'variants': {}}]}
"DeepThink R1" - this button at the bottom of the DeepSeek prompt is what seems to enable DeepSeek's full reasoning functionality. How does one enable this locally, in, e.g. LM Studio?
0
On their website, I don't think most people are clicking this option on. It is off by default. I don't think we are getting the full reasoning capabilities of the model without this toggled on. It would be great to be able to have this option enabled on downloaded local models. I am currently using the 32B Q8 K L via LM Studio.
2025-01-29T04:36:47
https://www.reddit.com/r/LocalLLaMA/comments/1iclv35/deepthink_r1_this_button_at_the_bottom_of_the/
Wrong-Quail-8303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iclv35
false
null
t3_1iclv35
/r/LocalLLaMA/comments/1iclv35/deepthink_r1_this_button_at_the_bottom_of_the/
false
false
self
0
null
Best place to use?
0
Where is the best place to use Llama? Do you guys run it locally, use it on instagram…etc
2025-01-29T04:53:47
https://www.reddit.com/r/LocalLLaMA/comments/1icm5f9/best_place_to_use/
Techatronix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icm5f9
false
null
t3_1icm5f9
/r/LocalLLaMA/comments/1icm5f9/best_place_to_use/
false
false
self
0
null
Dynamic reasoning model
2
is there any model, which dynamically reasons? i mean, if i ask complex question it should think else if i say "hi" it should reply back immediately this will probably save compute sometime also give correct result (after thinking, if the question is complex) Would that be useful? If someone builds this kind of model?
2025-01-29T04:58:07
https://www.reddit.com/r/LocalLLaMA/comments/1icm80x/dynamic_reasoning_model/
DataScientia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icm80x
false
null
t3_1icm80x
/r/LocalLLaMA/comments/1icm80x/dynamic_reasoning_model/
false
false
self
2
null
What are the minimum needed system Specs To run AI locally on Windows OS laptop?
1
What are the minimum needed system Specs(CPU Type, SSD,RAM, Graphic card either iGPU or dGPU) to consider when buying a laptop to run a slightly-dumb version of an AI locally on a windows OS? Please separate the answer for Min Reqs for these two categories of use: 1) Text-Based Local AI only. 2) Photo+Text
2025-01-29T05:01:32
https://www.reddit.com/r/LocalLLaMA/comments/1icma9a/what_are_the_minimum_needed_system_specs_to_run/
ExtremePresence3030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icma9a
false
null
t3_1icma9a
/r/LocalLLaMA/comments/1icma9a/what_are_the_minimum_needed_system_specs_to_run/
false
false
self
1
null
I am starting to get genuinely annoyed by Deepseek because of Chinese propaganda accounts using it to spam the internet with propaganda memes
0
2025-01-29T05:10:23
https://i.redd.it/yi6l5rrw8vfe1.jpeg
dreamyrhodes
i.redd.it
1970-01-01T00:00:00
0
{}
1icmfor
false
null
t3_1icmfor
/r/LocalLLaMA/comments/1icmfor/i_am_starting_to_get_genuinely_annoyed_by/
false
false
https://b.thumbs.redditm…0_hSiEcGEwXQ.jpg
0
{'enabled': True, 'images': [{'id': '5ulMYFJ89Mc5SRP6yaOCIqEfJL3wE5ytzBzhjurH4lQ', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/yi6l5rrw8vfe1.jpeg?width=108&crop=smart&auto=webp&s=6ed41cc9d6d5a0b8f4fdba4604e747179cb508a2', 'width': 108}, {'height': 225, 'url': 'https://preview.redd.it/yi6l5rrw8vfe1.jpeg?width=216&crop=smart&auto=webp&s=adf43eb6bdc64ff6b629af4a996284d0daa88a93', 'width': 216}, {'height': 333, 'url': 'https://preview.redd.it/yi6l5rrw8vfe1.jpeg?width=320&crop=smart&auto=webp&s=c7d86394343ffe1ed3fea53b805eb2e0a9bf937f', 'width': 320}], 'source': {'height': 521, 'url': 'https://preview.redd.it/yi6l5rrw8vfe1.jpeg?auto=webp&s=c61256afe911ccf1ba3244a12371828699792360', 'width': 500}, 'variants': {}}]}
Mistral Nemo 12B tunes and merges for RP/ERP
1
[removed]
2025-01-29T05:15:47
https://www.reddit.com/r/LocalLLaMA/comments/1icmiup/mistral_nemo_12b_tunes_and_merges_for_rperp/
DiegoSilverhand
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icmiup
false
null
t3_1icmiup
/r/LocalLLaMA/comments/1icmiup/mistral_nemo_12b_tunes_and_merges_for_rperp/
false
false
self
1
null
Deepseek banned in my company server (major MBB)
99
I was happily using deepseek web interface along with the dirt cheap api calls. But suddenly I can not use it today. The hype since last couple of days alerted the assholes deciding which llms to use. I think this trend is going to continue for other big companies as well.
2025-01-29T05:18:52
https://www.reddit.com/r/LocalLLaMA/comments/1icmkmn/deepseek_banned_in_my_company_server_major_mbb/
Purple_War_837
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icmkmn
false
null
t3_1icmkmn
/r/LocalLLaMA/comments/1icmkmn/deepseek_banned_in_my_company_server_major_mbb/
false
false
self
99
null
What motherboard do people recommend for RTX 5090
1
Given that RTX 5090 support PCIe 5.0 but my current motherboards PCIe x16 slot is 4.0, I am considering upgrading my motherboard (assuming I can get hands on a 5090 first) What motherboards would be best to go for?
2025-01-29T05:21:45
https://www.reddit.com/r/LocalLLaMA/comments/1icmmdb/what_motherboard_do_people_recommend_for_rtx_5090/
Car_tic_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icmmdb
false
null
t3_1icmmdb
/r/LocalLLaMA/comments/1icmmdb/what_motherboard_do_people_recommend_for_rtx_5090/
false
false
self
1
null
Managed or self hosted?
1
[removed]
2025-01-29T05:24:30
https://www.reddit.com/r/LocalLLaMA/comments/1icmo19/managed_or_self_hosted/
Living_War3173
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icmo19
false
null
t3_1icmo19
/r/LocalLLaMA/comments/1icmo19/managed_or_self_hosted/
false
false
self
1
null
My ongoing project: An interpreted language written by DeepSeek
3
2025-01-29T05:38:08
https://euphoricpenguin.github.io/PseudoWoodo/
EuphoricPenguin22
euphoricpenguin.github.io
1970-01-01T00:00:00
0
{}
1icmw3j
false
null
t3_1icmw3j
/r/LocalLLaMA/comments/1icmw3j/my_ongoing_project_an_interpreted_language/
false
false
default
3
null
4D Chess by the DeepSeek CEO
634
Liang Wenfeng: "I**n the face of disruptive technologies, moats created by closed source are temporary. Even OpenAI’s closed source approach can’t prevent others from catching up**. S**o we anchor our value in our team — our colleagues grow through this process, accumulate know-how, and form an organization and culture capable of innovation. That’s our moat.**" Source: [https://www.chinatalk.media/p/deepseek-ceo-interview-with-chinas](https://www.chinatalk.media/p/deepseek-ceo-interview-with-chinas)
2025-01-29T05:40:12
https://www.reddit.com/r/LocalLLaMA/comments/1icmxb5/4d_chess_by_the_deepseek_ceo/
HippoNut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icmxb5
false
null
t3_1icmxb5
/r/LocalLLaMA/comments/1icmxb5/4d_chess_by_the_deepseek_ceo/
false
false
self
634
{'enabled': False, 'images': [{'id': 'f_s7qaV7gcufX_UZcls69YuqQuDzBWHuQ23VmrsToK0', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=108&crop=smart&auto=webp&s=f503e8420a27061fae536d9434619d644dff3d53', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=216&crop=smart&auto=webp&s=049b16501a559d59a62aaf5cecfddb54f8f339ca', 'width': 216}, {'height': 261, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=320&crop=smart&auto=webp&s=5b2e09b4515afc003a8783ebe18bd63bc7310b32', 'width': 320}], 'source': {'height': 370, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?auto=webp&s=e21a9c0790556edb9e324fe162f123776219e112', 'width': 452}, 'variants': {}}]}
Has anyone tested Deepseek R1 671B 1.58B from Unsloth? (only 131 GB!)
1
[removed]
2025-01-29T05:43:55
https://www.reddit.com/r/LocalLLaMA/comments/1icmzeq/has_anyone_tested_deepseek_r1_671b_158b_from/
ImportantOwl2939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icmzeq
false
null
t3_1icmzeq
/r/LocalLLaMA/comments/1icmzeq/has_anyone_tested_deepseek_r1_671b_158b_from/
false
false
self
1
null
Are there any volunteer computing projects for hosting LLMs like DeepSeek?
1
I'm wondering if there are any volunteer computing projects where participants can contribute their computing power to host a large language model like DeepSeek, and in return, everyone who contributes gets access to the chat interface. I know about projects like Petals, but I'm curious if there are other decentralized or open-source initiatives that follow a similar approach. Ideally, it would be a system where people can join in, share their resources, and collectively run the model for mutual access. Does anyone know of something like this?
2025-01-29T05:49:30
https://www.reddit.com/r/LocalLLaMA/comments/1icn2lv/are_there_any_volunteer_computing_projects_for/
SphaeroX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icn2lv
false
null
t3_1icn2lv
/r/LocalLLaMA/comments/1icn2lv/are_there_any_volunteer_computing_projects_for/
false
false
self
1
null
deepseek refuses to answer when asked about the trolley problem,
0
deepseek refuses to answer when asked about the trolley problem, when president of china is put on track but answers when donald trump is put on the track https://preview.redd.it/0zbq163jgvfe1.png?width=949&format=png&auto=webp&s=caefc09109e38cf84e9eec3faefaa169322ecd41 https://preview.redd.it/nvd45fxjgvfe1.png?width=882&format=png&auto=webp&s=8c2f48ecb6bc662914b397dc28e131c9ca759047 https://preview.redd.it/1b6i9itkgvfe1.png?width=960&format=png&auto=webp&s=516a246923dbc775dcb06bd68682ec4fc2498d32 no hate to any president. just want to see how deepseek would react.
2025-01-29T05:54:01
https://www.reddit.com/r/LocalLLaMA/comments/1icn55g/deepseek_refuses_to_answer_when_asked_about_the/
Positive-Guide007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icn55g
false
null
t3_1icn55g
/r/LocalLLaMA/comments/1icn55g/deepseek_refuses_to_answer_when_asked_about_the/
false
false
https://b.thumbs.redditm…HVYSYRerC53o.jpg
0
null
Run Janus Pro 1B with Ollama
1
[removed]
2025-01-29T05:57:32
https://www.reddit.com/r/LocalLLaMA/comments/1icn71e/run_janus_pro_1b_with_ollama/
Current_Mountain_100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icn71e
false
null
t3_1icn71e
/r/LocalLLaMA/comments/1icn71e/run_janus_pro_1b_with_ollama/
false
false
self
1
null
🚀 Build a Qwen 2.5 VL API endpoint with Hugging Face spaces and Docker
14
While we wait for inference providers to incorporate a new model, I show a very simple way to deploy your favourite model and consume it as an API. Blog: https://huggingface.co/blog/ariG23498/qwen25vl-api
2025-01-29T06:02:37
https://www.reddit.com/r/LocalLLaMA/comments/1icn9zr/build_a_qwen_25_vl_api_endpoint_with_hugging_face/
Disastrous-Work-1632
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icn9zr
false
null
t3_1icn9zr
/r/LocalLLaMA/comments/1icn9zr/build_a_qwen_25_vl_api_endpoint_with_hugging_face/
false
false
self
14
{'enabled': False, 'images': [{'id': 'd9P4fLc3Vucu3ud9ueLvnMpbMH_CQVirNPwm4UcIaVA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xfzJIAQrfUNx-QdCfo68eyTPljzIGUuI2U5Gz0A7EXI.jpg?width=108&crop=smart&auto=webp&s=0b2d420b1c510dea6cf12d3cfa7b573ea656c3df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xfzJIAQrfUNx-QdCfo68eyTPljzIGUuI2U5Gz0A7EXI.jpg?width=216&crop=smart&auto=webp&s=4b69d68563de921f0dcb60c4be6d656c6b9e06b5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xfzJIAQrfUNx-QdCfo68eyTPljzIGUuI2U5Gz0A7EXI.jpg?width=320&crop=smart&auto=webp&s=c6c6e78575afbc02483a8a1eae416f47baa5ac89', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xfzJIAQrfUNx-QdCfo68eyTPljzIGUuI2U5Gz0A7EXI.jpg?width=640&crop=smart&auto=webp&s=0de524a94d1d66044d1ac31e8f3718963495c6f0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xfzJIAQrfUNx-QdCfo68eyTPljzIGUuI2U5Gz0A7EXI.jpg?width=960&crop=smart&auto=webp&s=cd8b44865b80ae6fda414128bdf33abf1b2d5e13', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xfzJIAQrfUNx-QdCfo68eyTPljzIGUuI2U5Gz0A7EXI.jpg?width=1080&crop=smart&auto=webp&s=1fa4e1c4ec4f18e02d8e1a364ead13c5a63213a7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xfzJIAQrfUNx-QdCfo68eyTPljzIGUuI2U5Gz0A7EXI.jpg?auto=webp&s=ee9a7f9c44f91fe0c6420c910655ee94c3671f85', 'width': 1200}, 'variants': {}}]}
What all about this deepseek craze?
0
I think everybody here understand that deepseek is standing on the shoulders of giants. That it was trained on synthetic data, that this data was generated by all the world's best SOTA models we all know (oai, claude, meta, mistral..). They distilled their big model into smaller opensource models, but first they distilled all the sota's into their moe. They did it cheaply in a very wide moe with some clever optimizations. It is a really nice piece of work, but doesn't mean we need less gpu to advance the field. I understand microsoft lost some value, but why Nvidia lost so much value? I really don't get it. What's next? -My feeling is that deepseek wil try to collapse their moe into a smaller denser model. Can't wait to see that. -Do you think last openai is such a wide moe? Probably not, they are just catching up.
2025-01-29T06:03:52
https://www.reddit.com/r/LocalLLaMA/comments/1icnanw/what_all_about_this_deepseek_craze/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icnanw
false
null
t3_1icnanw
/r/LocalLLaMA/comments/1icnanw/what_all_about_this_deepseek_craze/
false
false
self
0
null
Hello from a total noob. gpt4alll
1
I have been playing around with gpt4alll for a couple of weeks, having a blast. My question is, is there a way to have it make up a response if it doesn't find a good reference in the local docs. Currently I have to explicitly state to come up with something. Ie, Make up a name for Bessie's son. If I ask, what is Bessie's son's name, it tells me something along the lines of its not referenced in the local docs. Does this make any sense? It is entertaining to have it tell me story, but I would like for it to work a little better.
2025-01-29T06:20:57
https://www.reddit.com/r/LocalLLaMA/comments/1icnjv2/hello_from_a_total_noob_gpt4alll/
katybassist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icnjv2
false
null
t3_1icnjv2
/r/LocalLLaMA/comments/1icnjv2/hello_from_a_total_noob_gpt4alll/
false
false
self
1
null
Oollama + Cursor IDE -- Quick Blind Guy Help Please?
1
[removed]
2025-01-29T06:30:21
https://www.reddit.com/r/LocalLLaMA/comments/1icnow0/oollama_cursor_ide_quick_blind_guy_help_please/
mdizak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icnow0
false
null
t3_1icnow0
/r/LocalLLaMA/comments/1icnow0/oollama_cursor_ide_quick_blind_guy_help_please/
false
false
self
1
null
[Guide] Step-by-Step: How to Install and Run DeepSeek R-1 Locally
1
[removed]
2025-01-29T06:41:37
https://www.reddit.com/r/LocalLLaMA/comments/1icnumk/guide_stepbystep_how_to_install_and_run_deepseek/
Amanpandey046
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icnumk
false
null
t3_1icnumk
/r/LocalLLaMA/comments/1icnumk/guide_stepbystep_how_to_install_and_run_deepseek/
false
false
self
1
null
[Guide] Step-by-Step: How to Install and Run DeepSeek R-1 Locally
1
[removed]
2025-01-29T06:42:50
https://www.reddit.com/r/LocalLLaMA/comments/1icnv9g/guide_stepbystep_how_to_install_and_run_deepseek/
Amanpandey046
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icnv9g
false
null
t3_1icnv9g
/r/LocalLLaMA/comments/1icnv9g/guide_stepbystep_how_to_install_and_run_deepseek/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NF0tz58IgELs5x5i2IhKoAELvEeY69nznWODrDsf3_A', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=108&crop=smart&auto=webp&s=1e74a109a63779da177babe2e3ff336ff23c498b', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=216&crop=smart&auto=webp&s=0128508a0415586149b69426d3d9775e46d1e430', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=320&crop=smart&auto=webp&s=bd5f147f5907a02f0081042f51387fc75710341e', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=640&crop=smart&auto=webp&s=ef262686a137063d4c3e7e1c99d62149e5febbc8', 'width': 640}], 'source': {'height': 538, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?auto=webp&s=3c901aefba855a162fc88497ca3dfbd03e6cdce2', 'width': 752}, 'variants': {}}]}
[Guide] Step-by-Step: How to Install and Run DeepSeek R-1 Locally
1
[removed]
2025-01-29T06:43:14
https://www.reddit.com/r/LocalLLaMA/comments/1icnvgu/guide_stepbystep_how_to_install_and_run_deepseek/
Amanpandey046
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icnvgu
false
null
t3_1icnvgu
/r/LocalLLaMA/comments/1icnvgu/guide_stepbystep_how_to_install_and_run_deepseek/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NF0tz58IgELs5x5i2IhKoAELvEeY69nznWODrDsf3_A', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=108&crop=smart&auto=webp&s=1e74a109a63779da177babe2e3ff336ff23c498b', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=216&crop=smart&auto=webp&s=0128508a0415586149b69426d3d9775e46d1e430', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=320&crop=smart&auto=webp&s=bd5f147f5907a02f0081042f51387fc75710341e', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?width=640&crop=smart&auto=webp&s=ef262686a137063d4c3e7e1c99d62149e5febbc8', 'width': 640}], 'source': {'height': 538, 'url': 'https://external-preview.redd.it/B6CdcxnwhAqeaLT0l-u7hJX0Pm1Ko4PiXxWTWU5Y840.jpg?auto=webp&s=3c901aefba855a162fc88497ca3dfbd03e6cdce2', 'width': 752}, 'variants': {}}]}
Meet Luo Fuli, The 29-Year-Old "AI Prodigy" Behind DeepSeek's Global Success
1
2025-01-29T06:45:27
https://www.ndtv.com/feature/meet-luo-fuli-the-29-year-old-ai-prodigy-behind-deepseeks-global-success-7584116
GigiL4
ndtv.com
1970-01-01T00:00:00
0
{}
1icnwom
false
null
t3_1icnwom
/r/LocalLLaMA/comments/1icnwom/meet_luo_fuli_the_29yearold_ai_prodigy_behind/
false
false
https://b.thumbs.redditm…sa8VrVuyt8XA.jpg
1
{'enabled': False, 'images': [{'id': 'BrovN8ALTiTG7De7G88vsBpYAk4UOv6UGZDXPbM9Pxg', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/bkt8NP4NTi17A4nukZ9yl4yFHv4hWuLoRuov4kdWbJI.jpg?width=108&crop=smart&auto=webp&s=8414828310632dd393d8f10d43466524ec0d6c29', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/bkt8NP4NTi17A4nukZ9yl4yFHv4hWuLoRuov4kdWbJI.jpg?width=216&crop=smart&auto=webp&s=713b08a95d578cd6fbba0e2a8f2f441dcfee74cc', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/bkt8NP4NTi17A4nukZ9yl4yFHv4hWuLoRuov4kdWbJI.jpg?width=320&crop=smart&auto=webp&s=bda66fdf997f30c78b8622edf32700aa91651c1e', 'width': 320}, {'height': 393, 'url': 'https://external-preview.redd.it/bkt8NP4NTi17A4nukZ9yl4yFHv4hWuLoRuov4kdWbJI.jpg?width=640&crop=smart&auto=webp&s=614f169ebdc66951a35c1b68e4fada051e68f011', 'width': 640}, {'height': 590, 'url': 'https://external-preview.redd.it/bkt8NP4NTi17A4nukZ9yl4yFHv4hWuLoRuov4kdWbJI.jpg?width=960&crop=smart&auto=webp&s=18784316400818c630c02daf7b21cd7343000394', 'width': 960}, {'height': 664, 'url': 'https://external-preview.redd.it/bkt8NP4NTi17A4nukZ9yl4yFHv4hWuLoRuov4kdWbJI.jpg?width=1080&crop=smart&auto=webp&s=429b11c428977a74d5aeeb5e44678d49ab386dac', 'width': 1080}], 'source': {'height': 738, 'url': 'https://external-preview.redd.it/bkt8NP4NTi17A4nukZ9yl4yFHv4hWuLoRuov4kdWbJI.jpg?auto=webp&s=344af6cb5a7753f34cb82f7be2675de6d36f8448', 'width': 1200}, 'variants': {}}]}
Deepseek can talk
0
2025-01-29T06:50:31
https://v.redd.it/ku29doeuqvfe1
Outrageous-Win-3244
/r/LocalLLaMA/comments/1icnz9c/deepseek_can_talk/
1970-01-01T00:00:00
0
{}
1icnz9c
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ku29doeuqvfe1/DASHPlaylist.mpd?a=1740855037%2CYjAyMDg1OWIzODA3YTZkYjU1OTFkNmY2ZTY1YWYxYWMxNmExN2JhNmE0ZTRhNTcwMDhjMzMxYTA2Y2JmNTVjNA%3D%3D&v=1&f=sd', 'duration': 258, 'fallback_url': 'https://v.redd.it/ku29doeuqvfe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ku29doeuqvfe1/HLSPlaylist.m3u8?a=1740855037%2CZTc1OWQwMWJkMzVhNmM3NWYyNGEzODA5ODJhOGFkZTcxNTc5ODc0YjY5ZWU2NTk2YmZlYzE4MzE4NzIxMjUzZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ku29doeuqvfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1icnz9c
/r/LocalLLaMA/comments/1icnz9c/deepseek_can_talk/
false
false
https://external-preview…a04b0938e58a8f8a
0
{'enabled': False, 'images': [{'id': 'Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV.png?width=108&crop=smart&format=pjpg&auto=webp&s=12f73650a7cb856eb03e4288af6e87f6b489f7f6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV.png?width=216&crop=smart&format=pjpg&auto=webp&s=da12605ebf8f0b4fdc1088640488e82b1d78adb1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV.png?width=320&crop=smart&format=pjpg&auto=webp&s=ddd1c6e6b774fe6c8b65f6b6543e06d2cc4a90b0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV.png?width=640&crop=smart&format=pjpg&auto=webp&s=a5e7c28b83dca66aab25a61442001f5e9fdb3c89', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV.png?width=960&crop=smart&format=pjpg&auto=webp&s=b2246f8a34e327899c40a109d253e106573a6b35', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1ca4f94b7586762e58b196cce7cee9c4d6931beb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Nzd5a3duZXVxdmZlMfLYf8iBvNrjqSnSOUB7o3LjtZ34bKVLlLV9UN7P-wOV.png?format=pjpg&auto=webp&s=f6f7222f03e01a5fded49d6154d97b725d8174f1', 'width': 1920}, 'variants': {}}]}
Why "No system prompt" when utilizing the DeepSeek-R1 series models
1
So I was going through the [Usage Recommendations](https://huggingface.co/deepseek-ai/DeepSeek-R1/blob/main/README.md#usage-recommendations) for deepseek R1 series models and got to know it said that avoid adding system prompt . Is there any specific reason for that ? **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \\boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
2025-01-29T06:58:43
https://www.reddit.com/r/LocalLLaMA/comments/1ico38r/why_no_system_prompt_when_utilizing_the/
Maleficent_Repair359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ico38r
false
null
t3_1ico38r
/r/LocalLLaMA/comments/1ico38r/why_no_system_prompt_when_utilizing_the/
false
false
self
1
{'enabled': False, 'images': [{'id': 'EW9KzApmzFQOWfOeaBOFj0nk5o3bUHoeyDi_xD0iLLo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=108&crop=smart&auto=webp&s=9577640536ba5273620fd0480b3a5ebfe441c736', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=216&crop=smart&auto=webp&s=a31f920964030694d34d99e6af85a92e37ede1fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=320&crop=smart&auto=webp&s=7baa788de1cc26a316da0c3936a77b5d8c6638f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=640&crop=smart&auto=webp&s=cfe6acc456fe810e684e2549f82a4f400608da67', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=960&crop=smart&auto=webp&s=b5c2a167f26b2a775924151badedd1641594c389', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=1080&crop=smart&auto=webp&s=3c2b96522db9d74900aa2f630038d61a244f5fde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?auto=webp&s=3da07efc6555d27ee00573823977a4aa6439f0f4', 'width': 1200}, 'variants': {}}]}
After the DeepSeek Shock: CES 2025’s ‘One to Three Scaling Laws’ and the Race for AI Dominance Why Nvidia’s Stock Dip Missed the Real Story—Efficiency Breakthroughs Are Supercharging GPU Demand, Not Undercutting It.
1
2025-01-29T07:01:35
https://i.redd.it/i6q66lb0tvfe1.jpeg
micahsun
i.redd.it
1970-01-01T00:00:00
0
{}
1ico4v0
false
null
t3_1ico4v0
/r/LocalLLaMA/comments/1ico4v0/after_the_deepseek_shock_ces_2025s_one_to_three/
false
false
https://a.thumbs.redditm…JHOejQXpPo64.jpg
1
{'enabled': True, 'images': [{'id': 'DLmHx8OFZ-iVQAG-pWTcstBz1i9QzloPyEyMogE3vbQ', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/i6q66lb0tvfe1.jpeg?width=108&crop=smart&auto=webp&s=09db68ec6e65c33efdeb5cdc796bb593fe9c0328', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/i6q66lb0tvfe1.jpeg?width=216&crop=smart&auto=webp&s=d0a550cb708dcf0d580d235966a58c4f3be29b61', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/i6q66lb0tvfe1.jpeg?width=320&crop=smart&auto=webp&s=75ae28e1e4825d9efca69538b013da3a1bd23564', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/i6q66lb0tvfe1.jpeg?width=640&crop=smart&auto=webp&s=253ae589c403753849b7e91dc0562258bc65179c', 'width': 640}], 'source': {'height': 800, 'url': 'https://preview.redd.it/i6q66lb0tvfe1.jpeg?auto=webp&s=4737ab9a4c672ccd28d43d176a776d66aeba0022', 'width': 800}, 'variants': {}}]}
5090 is coming out tomorrow,claiming significant performance in FP4. Would you run local models on FP4? Why and why not?
1
How does the FP4 accuracy compared to FP8 for popular models?
2025-01-29T07:10:20
https://www.reddit.com/r/LocalLLaMA/comments/1ico9c8/5090_is_coming_out_tomorrowclaiming_significant/
appakaradi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ico9c8
false
null
t3_1ico9c8
/r/LocalLLaMA/comments/1ico9c8/5090_is_coming_out_tomorrowclaiming_significant/
false
false
self
1
null
LLaVA-v1.6-34B chatbox help.
2
I'm trying to run [LLaVA-v1.6-34B](https://huggingface.co/liuhaotian/llava-v1.6-34b) locally on chat box but it doesn't seem to work. I opened up the terminal and tried "ollama run [LLaVA-v1.6-34B](https://huggingface.co/liuhaotian/llava-v1.6-34b)" but it doesn't work, but "ollama run deepseek-r1:32b works. Am I doing something wrong? (sorry if this sounds stupid but I'm new to this)
2025-01-29T07:33:42
https://www.reddit.com/r/LocalLLaMA/comments/1icokjp/llavav1634b_chatbox_help/
GoldenEye03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icokjp
false
null
t3_1icokjp
/r/LocalLLaMA/comments/1icokjp/llavav1634b_chatbox_help/
false
false
self
2
{'enabled': False, 'images': [{'id': 'TtzTlIkDBJFF7APuT-I1G5lSy_RGql9TdXhaM91XmDo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3XZ8K-GcUbitbU2tWSnj2v5it9Xbb2cLvScsbiDRFbI.jpg?width=108&crop=smart&auto=webp&s=afa47021434a3a2ac2a304ab4aba874219dc2fbd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3XZ8K-GcUbitbU2tWSnj2v5it9Xbb2cLvScsbiDRFbI.jpg?width=216&crop=smart&auto=webp&s=1e21050c1ff78909472c23bb0482713b497bf3e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3XZ8K-GcUbitbU2tWSnj2v5it9Xbb2cLvScsbiDRFbI.jpg?width=320&crop=smart&auto=webp&s=5c5459baea9512debdefc17c0e3f82f12f9ad03a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3XZ8K-GcUbitbU2tWSnj2v5it9Xbb2cLvScsbiDRFbI.jpg?width=640&crop=smart&auto=webp&s=37af07f98640a5ab43565fa176914d92b11b7d9c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3XZ8K-GcUbitbU2tWSnj2v5it9Xbb2cLvScsbiDRFbI.jpg?width=960&crop=smart&auto=webp&s=bf8284ccc53a6d026d45f6cad5eb67d0dfad704f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3XZ8K-GcUbitbU2tWSnj2v5it9Xbb2cLvScsbiDRFbI.jpg?width=1080&crop=smart&auto=webp&s=8ce4f6342efde3d9b3bb74dcb4240d189dbf80fb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3XZ8K-GcUbitbU2tWSnj2v5it9Xbb2cLvScsbiDRFbI.jpg?auto=webp&s=f1fab9eacc8bb84b6d13d543b164f9097e178732', 'width': 1200}, 'variants': {}}]}
You can make `deepseek-r1` go crazy with repetitive instructions
1
2025-01-29T07:34:30
https://i.redd.it/y4ax54zqyvfe1.png
maifee
i.redd.it
1970-01-01T00:00:00
0
{}
1icokxh
false
null
t3_1icokxh
/r/LocalLLaMA/comments/1icokxh/you_can_make_deepseekr1_go_crazy_with_repetitive/
false
false
https://b.thumbs.redditm…aemTVA7o4YRk.jpg
1
{'enabled': True, 'images': [{'id': 'NxC3QzFn81bCTeQDuliMT9-T27hOvd8cErAyUUF4r4U', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/y4ax54zqyvfe1.png?width=108&crop=smart&auto=webp&s=b3f3a95da2ddeed2550a804a84633247102c158e', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/y4ax54zqyvfe1.png?width=216&crop=smart&auto=webp&s=a6e41d160a391454dc5229b90282b792033677d1', 'width': 216}, {'height': 162, 'url': 'https://preview.redd.it/y4ax54zqyvfe1.png?width=320&crop=smart&auto=webp&s=e28dbc04bd8d2926f5d2fb3ae9d7e75b5aff550f', 'width': 320}, {'height': 324, 'url': 'https://preview.redd.it/y4ax54zqyvfe1.png?width=640&crop=smart&auto=webp&s=3f96fca6a5a53d705b77114f6c9cff8257444033', 'width': 640}, {'height': 487, 'url': 'https://preview.redd.it/y4ax54zqyvfe1.png?width=960&crop=smart&auto=webp&s=695507cad15923ad8abd0fb104f840d0c29a89a2', 'width': 960}, {'height': 547, 'url': 'https://preview.redd.it/y4ax54zqyvfe1.png?width=1080&crop=smart&auto=webp&s=6cf3adb6bdbb2c8a425d53aaab8e343227cb7656', 'width': 1080}], 'source': {'height': 693, 'url': 'https://preview.redd.it/y4ax54zqyvfe1.png?auto=webp&s=612b2ee4a634c0d81de2e071c7158c27c3ba1033', 'width': 1366}, 'variants': {}}]}
Deepseek Still Down? 200 dollars to deepseek are virtually useless for now!
0
https://preview.redd.it/…cross the board!
2025-01-29T07:44:12
https://www.reddit.com/r/LocalLLaMA/comments/1icopdz/deepseek_still_down_200_dollars_to_deepseek_are/
iam_wizard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icopdz
false
null
t3_1icopdz
/r/LocalLLaMA/comments/1icopdz/deepseek_still_down_200_dollars_to_deepseek_are/
false
false
https://b.thumbs.redditm…AizoGyL2DEWs.jpg
0
null
Deepseek using 100% cpu but not gpu ?
2
My PC Config: Ram: 24Gb DDR4 3200Mhz GPU: 3080Ti CPU: 5800x3d. OS: Windows 10. I tried deeepseek r1:32B version. I noticed it is using 100% CPU and 18GB of ram, but the gpu is only 15%, sometime it spikes to 50% for a moment only. I was under the impression that it is GPU heavy, and not the cpu. any idea why it is using so much cpu ?
2025-01-29T07:46:32
https://www.reddit.com/r/LocalLLaMA/comments/1icoqi6/deepseek_using_100_cpu_but_not_gpu/
Ok_Fortune_7894
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icoqi6
false
null
t3_1icoqi6
/r/LocalLLaMA/comments/1icoqi6/deepseek_using_100_cpu_but_not_gpu/
false
false
self
2
null
The state of "super fast inference" is frustrating
1
[removed]
2025-01-29T07:55:41
https://www.reddit.com/r/LocalLLaMA/comments/1icoupg/the_state_of_super_fast_inference_is_frustrating/
Comfortable-Rock-498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1icoupg
false
null
t3_1icoupg
/r/LocalLLaMA/comments/1icoupg/the_state_of_super_fast_inference_is_frustrating/
false
false
self
1
null
Did DeepSeek Copy OpenAI? Microsoft & OpenAI Investigate
1
2025-01-29T08:18:08
https://www.bitdegree.org/crypto/news/did-deepseek-use-openais-data-microsoft-and-openai-investigate?utm_source=reddit&utm_medium=social&utm_campaign=r-deepseek-uses-openais-data
webbs3
bitdegree.org
1970-01-01T00:00:00
0
{}
1icp5bo
false
null
t3_1icp5bo
/r/LocalLLaMA/comments/1icp5bo/did_deepseek_copy_openai_microsoft_openai/
false
false
https://b.thumbs.redditm…fCjw7EFvoA0w.jpg
1
{'enabled': False, 'images': [{'id': 'wv_WLe637TKHwR0jHKxhIcHQgCBFDje20EDTZ3sUoRs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/HVb07-OuUfEQQ8yJDAiuJ6T-lD_tpgT7DdLfQWWRtbU.jpg?width=108&crop=smart&auto=webp&s=f4e33f0c04abb43c5fe17706c0bfa62f62dce17f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/HVb07-OuUfEQQ8yJDAiuJ6T-lD_tpgT7DdLfQWWRtbU.jpg?width=216&crop=smart&auto=webp&s=06e5eeba2e765439d01a416cce39e822c2cdc60e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/HVb07-OuUfEQQ8yJDAiuJ6T-lD_tpgT7DdLfQWWRtbU.jpg?width=320&crop=smart&auto=webp&s=16ad24c9bb562966cb74ba9ef298c89d14978fa2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/HVb07-OuUfEQQ8yJDAiuJ6T-lD_tpgT7DdLfQWWRtbU.jpg?width=640&crop=smart&auto=webp&s=b9b007418ec749fda57abb24f8f6544a3288fdc3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/HVb07-OuUfEQQ8yJDAiuJ6T-lD_tpgT7DdLfQWWRtbU.jpg?width=960&crop=smart&auto=webp&s=237ca3ec7487cf9e6c46152a8dadca385d23501d', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/HVb07-OuUfEQQ8yJDAiuJ6T-lD_tpgT7DdLfQWWRtbU.jpg?auto=webp&s=52eb72adbcc1abe74dfffc404c2261c9456d7e7f', 'width': 1024}, 'variants': {}}]}
In-depth analysis: The Short Case for Nvidia Stock
3
I think this is a must-read article for anyone invested in Nvidia and wants to understand the situation they’re in regarding challenges to their monopoly. Fantastic outline of the DeepSeek R1 model for beginners as well.
2025-01-29T08:19:40
https://youtubetranscriptoptimizer.com/blog/05_the_short_case_for_nvda
ChiefSitsOnAssAllDay
youtubetranscriptoptimizer.com
1970-01-01T00:00:00
0
{}
1icp61m
false
null
t3_1icp61m
/r/LocalLLaMA/comments/1icp61m/indepth_analysis_the_short_case_for_nvidia_stock/
false
false
https://b.thumbs.redditm…-_4_dManU_bE.jpg
3
{'enabled': False, 'images': [{'id': 'thJ5BxEk87EZVp7QqMbp7nTITRyn1hXsN8gRr0W5ipU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=108&crop=smart&auto=webp&s=3effac6e758bd4704331f88a46398c7ae0c99c3b', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=216&crop=smart&auto=webp&s=b00d836cb56bb2d008a84c61af1becef4da2f33d', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=320&crop=smart&auto=webp&s=58b9e885e34d6216c21d40e40935e51cfaba0605', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=640&crop=smart&auto=webp&s=b20e65876fa5792071fa0019f787278be05ebcf7', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=960&crop=smart&auto=webp&s=09032fc1d528072cd9bd3f4d676aeec4f3720cac', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=1080&crop=smart&auto=webp&s=19800fdf5ab979a84b61400d7841a2762150a34b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?auto=webp&s=106f531822053b79291cb94625e1ea554a3b1619', 'width': 1792}, 'variants': {}}]}