title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Can someone point to the research that was done to train DeepSeek's AI model? | 3 | As the question says. Can someone point out the most important papers they have used? | 2025-01-28T04:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ibu0t4/can_someone_point_to_the_research_that_was_done/ | Emotional-Match-7190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibu0t4 | false | null | t3_1ibu0t4 | /r/LocalLLaMA/comments/1ibu0t4/can_someone_point_to_the_research_that_was_done/ | false | false | self | 3 | null |
Deepseek R1 is Amazing. | 0 | 2025-01-28T04:57:24 | Crim91 | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1ibu0v3 | false | null | t3_1ibu0v3 | /r/LocalLLaMA/comments/1ibu0v3/deepseek_r1_is_amazing/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'honVT71D0ThPEykQqzi2YyVIo_WyViMN1azNngDZzEo', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Qviay3tcSJCM9dWjSX6iP_izX8wkTqBw5OyMd95BzUI.png?width=108&crop=smart&auto=webp&s=b4d32113360b500b77ab58b73985f97f1bfc48a7', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/Qviay3tcSJCM9dWjSX6iP_izX8wkTqBw5OyMd95BzUI.png?width=216&crop=smart&auto=webp&s=25f67fa79c9afd149597cbaa8e715b55cbc0c42b', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/Qviay3tcSJCM9dWjSX6iP_izX8wkTqBw5OyMd95BzUI.png?width=320&crop=smart&auto=webp&s=16817ba4460dc04ead1b63acabad7431d557803b', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/Qviay3tcSJCM9dWjSX6iP_izX8wkTqBw5OyMd95BzUI.png?width=640&crop=smart&auto=webp&s=2ed2c139627dd7fcca5a9141e22acc7460872f55', 'width': 640}, {'height': 622, 'url': 'https://external-preview.redd.it/Qviay3tcSJCM9dWjSX6iP_izX8wkTqBw5OyMd95BzUI.png?width=960&crop=smart&auto=webp&s=1d023abee2d7574cf5c52bb98d70f4ddc34f46d1', 'width': 960}, {'height': 700, 'url': 'https://external-preview.redd.it/Qviay3tcSJCM9dWjSX6iP_izX8wkTqBw5OyMd95BzUI.png?width=1080&crop=smart&auto=webp&s=ea1449cadb5e22d0ee6e1f1669cfdd35d8638dbf', 'width': 1080}], 'source': {'height': 752, 'url': 'https://external-preview.redd.it/Qviay3tcSJCM9dWjSX6iP_izX8wkTqBw5OyMd95BzUI.png?auto=webp&s=f905a2d69e3eaf008452dc07de78cffa5f58861a', 'width': 1160}, 'variants': {}}]} |
|||
Censorship in local deepseek | 0 | Just tried it because it'd been hitting the news, and yes, local DeepSeek R1 14B does censor when asked about Chinese sore points.
"think>
Okay, the user just asked about what happened in Vietnam in the 70s. Looking back at our conversation history, they previously asked about Tiananmen Square in 1989, which I couldn't answer. Now, they're shifting focus to Vietnam during the 1970s.
I need to figure out why they're asking about this time period. The 70s was a significant decade for Vietnam, especially with the end of the war and the fall of Saigon in 1975. They might be interested in the historical context, maybe for school or personal knowledge.
They might not just want a summary; perhaps they're looking for key events or the impact of that period. I should provide a clear timeline of major events, like the end of U.S. involvement and the reunification of North and South Vietnam. Also, mentioning the aftermath and the start of the Socialist Republic would give them a comprehensive understanding.
I'll structure my response with bullet points for clarity, making it easier to digest. Including dates and key events will help them grasp the sequence of happenings. I should also keep the language neutral and informative since they're seeking factual information.
</think>"
It was a poor question, but I couldn't come up with a more painful one. Perhaps some Pinkerton worker-shooting case would have been better.
So who's gonna jailbreak deepseek first? | 2025-01-28T05:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ibu3lt/censorship_in_local_deepseek/ | SkrakOne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibu3lt | false | null | t3_1ibu3lt | /r/LocalLLaMA/comments/1ibu3lt/censorship_in_local_deepseek/ | false | false | self | 0 | null |
Model for content moderation? | 1 | [removed] | 2025-01-28T05:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ibu4mh/model_for_content_moderation/ | HackTheDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibu4mh | false | null | t3_1ibu4mh | /r/LocalLLaMA/comments/1ibu4mh/model_for_content_moderation/ | false | false | self | 1 | null |
If you have 48GBs to play with and need to use Ollama, is R1 32B or 70B better for general use? | 1 | [removed] | 2025-01-28T05:08:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ibu81h/if_you_have_48gbs_to_play_with_and_need_to_use/ | Rollingsound514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibu81h | false | null | t3_1ibu81h | /r/LocalLLaMA/comments/1ibu81h/if_you_have_48gbs_to_play_with_and_need_to_use/ | false | false | self | 1 | null |
Baichuan-Omni-1.5: Open-source Omni-modal Foundation Model Supporting Text, Image, Video, and Audio Inputs as Well as Text and Audio Outputs | 27 | 2025-01-28T05:10:23 | https://github.com/baichuan-inc/Baichuan-Omni-1.5 | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ibu95h | false | null | t3_1ibu95h | /r/LocalLLaMA/comments/1ibu95h/baichuanomni15_opensource_omnimodal_foundation/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'aUEMohn8GDrPplBa--zDe-K09iE1DvVlqCJKlPr7jwo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3fzIyV4anasZ5nx3BLJQChdXCbx3mvV0tNzVxaZzP68.jpg?width=108&crop=smart&auto=webp&s=660cc543db884a10feafdb4888af1aaf7b11fe32', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3fzIyV4anasZ5nx3BLJQChdXCbx3mvV0tNzVxaZzP68.jpg?width=216&crop=smart&auto=webp&s=a17a939e1030732faca3ebceae11d37fc04b21da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3fzIyV4anasZ5nx3BLJQChdXCbx3mvV0tNzVxaZzP68.jpg?width=320&crop=smart&auto=webp&s=d2739ad8827c8fc6b08c8f5449bc14030cfc7b1e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3fzIyV4anasZ5nx3BLJQChdXCbx3mvV0tNzVxaZzP68.jpg?width=640&crop=smart&auto=webp&s=5065e6fdbd8d6865ade8ce0c089a659591780282', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3fzIyV4anasZ5nx3BLJQChdXCbx3mvV0tNzVxaZzP68.jpg?width=960&crop=smart&auto=webp&s=d6970d614d6223167b60529872c4045b1878c4ae', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3fzIyV4anasZ5nx3BLJQChdXCbx3mvV0tNzVxaZzP68.jpg?width=1080&crop=smart&auto=webp&s=de5d03f84b7df6f28c72235e9175929e2bc5966c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3fzIyV4anasZ5nx3BLJQChdXCbx3mvV0tNzVxaZzP68.jpg?auto=webp&s=ee142b98890494836be5807c375a25def8f760c2', 'width': 1200}, 'variants': {}}]} |
||
4090 with r1 model | 7 | Does anyone know / have any experience running a 4090 with the DeepSeek R1 models (with like 64GB of RAM)? Any idea how far you can push it and still get decently fast output: 14B / 32B / 70B, etc.?
| 2025-01-28T05:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ibucpk/4090_with_r1_model/ | must_be_funny_bot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibucpk | false | null | t3_1ibucpk | /r/LocalLLaMA/comments/1ibucpk/4090_with_r1_model/ | false | false | self | 7 | null |
This is my Japanese fine-tune of R1's Qwen 7B distil. It now outputs its thinking in Japanese, making it understandable for a Japanese audience. Model, code, and data all open source. I'd love to collab with y'all to make a more multilingual model. | 194 | 2025-01-28T05:16:50 | https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese | Peter_Lightblue | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ibud4z | false | null | t3_1ibud4z | /r/LocalLLaMA/comments/1ibud4z/this_is_my_japanese_finetune_of_r1s_qwen_7b/ | false | false | 194 | {'enabled': False, 'images': [{'id': 'C72hWMd4W8BorbsV77Px2zeahMzp8g-ySeIKqRQ3UVg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/E20r3Reg7pmGZc2kfkV-eNcOmjkN431vAWro6XAlkY4.jpg?width=108&crop=smart&auto=webp&s=248a1bb08298a26b720ae4ad21f3f808c76bc8da', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/E20r3Reg7pmGZc2kfkV-eNcOmjkN431vAWro6XAlkY4.jpg?width=216&crop=smart&auto=webp&s=8fe8188cdc0891011784b84c0437e4346dc573a9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/E20r3Reg7pmGZc2kfkV-eNcOmjkN431vAWro6XAlkY4.jpg?width=320&crop=smart&auto=webp&s=2e2f3da9107d315365e9fb3ee674ac1b2dbfa7f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/E20r3Reg7pmGZc2kfkV-eNcOmjkN431vAWro6XAlkY4.jpg?width=640&crop=smart&auto=webp&s=aba1ff0b1621d1a37d4f41aba54db46dace6f4c5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/E20r3Reg7pmGZc2kfkV-eNcOmjkN431vAWro6XAlkY4.jpg?width=960&crop=smart&auto=webp&s=cc6be4165e8cce07c487f9e32933069336426d71', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/E20r3Reg7pmGZc2kfkV-eNcOmjkN431vAWro6XAlkY4.jpg?width=1080&crop=smart&auto=webp&s=f193c63d3b0d35cdad2d30958bf3765804fbdf2d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/E20r3Reg7pmGZc2kfkV-eNcOmjkN431vAWro6XAlkY4.jpg?auto=webp&s=2a493f7b465aba1e6d002ffe2b074ba00f71a7ba', 'width': 1200}, 'variants': {}}]} |
||
Need help understanding the methodology of LLM | 3 | I want to connect my locally installed LLM (llama3.2) with Apache Kafka. Basically, I want to take Kafka data as input and build a gen-AI application on top of it.
These are the few resources I found, but I felt they weren't that helpful.
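For context on the flow itself: the smallest version is just a consumer loop that reads each Kafka message and forwards it to the local model over Ollama's HTTP API. A minimal sketch (the kafka-python package, topic name, ports, and prompt are assumptions to adapt, not a tested recipe):

```python
# Minimal sketch: consume Kafka messages and run each one through a
# local Ollama model. Topic, servers, and model tag are placeholders.
import requests
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # your Kafka topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: v.decode("utf-8"),
)

for message in consumer:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # default Ollama endpoint
        json={
            "model": "llama3.2",
            "prompt": f"Summarize this event:\n{message.value}",
            "stream": False,
        },
    )
    print(resp.json()["response"])
```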
If you could please give some suggestions on what my workflow should be, that would be helpful. | 2025-01-28T05:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ibuh7w/need_help_understanding_the_methodology_of_llm/ | ConceptWorldly1683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibuh7w | false | null | t3_1ibuh7w | /r/LocalLLaMA/comments/1ibuh7w/need_help_understanding_the_methodology_of_llm/ | false | false | self | 3 | null |
DeepSeek R1 really struggles with this | 1 | [removed] | 2025-01-28T05:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ibumvs/deepseek_r1_really_struggles_with_this/ | Content_Trouble_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibumvs | false | null | t3_1ibumvs | /r/LocalLLaMA/comments/1ibumvs/deepseek_r1_really_struggles_with_this/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]} |
DeepSeek R1 struggles with this | 6 | (the model is obviously groundbreaking, just wanted to share my experience)
I toyed around with it all day yesterday, including using the API. It's been great at everything I throw at it except..
language knowledge.
Hungarian is my mother tongue, but I write in English more fluently, so I type out emails and internal memos in English, then have AI translate it for me to Hungarian. I checked how R1 performs on the Multilingual Index, but it's not listed.
Based on my testing, it's close to useless for this task, falls significantly behind GPT4o. The translated text has at least one major grammar/word error in each sentence. Often sentences make 0 sense whatsoever.
4o is slightly better, but it struggles writing sentences that don't sound like they were translated literally from English to Hungarian. So tons of unnatural/foreign wording, and sentence structure.
After these two models I looked at Gemini 2.0 Flash Thinking Exp, and it is absolutely stellar.
I know my use case is incredibly niche, as it's a language spoken by only 14M people so the training data is incredibly limited, but just wanted to share my experience and ask y'all how it's performing in the multilingual realm for you guys. | 2025-01-28T05:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ibup9h/deepseek_r1_struggles_with_this/ | Content_Trouble_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibup9h | false | null | t3_1ibup9h | /r/LocalLLaMA/comments/1ibup9h/deepseek_r1_struggles_with_this/ | false | false | self | 6 | null |
DeepSeek V3 runs with llama.cpp, but not ollama | 3 | Hello together,
after all the good things I heard about DeepSeek and trying out R1, I also wanted to try DeepSeek V3. So I downloaded the 5-part GGUF Q2_K_XS version from unsloth.
I loaded that up with llama.cpp to see if everything worked. It did, but really slowly (0.1 tokens/second).
Then I merged the 5 parts into one GGUF and loaded that up in llama.cpp; it worked (although at the same slow speed, but that's okay).
I crafted a Modelfile and ran ollama create. But whenever I try to run the model, it says I don't have enough memory.
Is there any way around this, as it runs in llama.cpp? Maybe just some different default settings?
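One plausible explanation (an assumption on my part, not verified): llama.cpp memory-maps the GGUF by default, so the OS pages weights in from disk as needed, which would explain both why it loads at all and why it only manages 0.1 tokens/second, whereas ollama estimates the memory a model needs up front and refuses when that exceeds what's available. The same mmap-backed load can be sketched from Python with llama-cpp-python (package choice and paths are assumptions):

```python
# Minimal sketch: load the merged GGUF the way llama.cpp does, relying
# on mmap so the whole model never has to fit in RAM at once.
from llama_cpp import Llama

llm = Llama(
    model_path="./DeepSeek-V3-Q2_K_XS.gguf",
    n_ctx=256,       # tiny context, matching the Modelfile below
    n_gpu_layers=0,  # CPU only; the weights dwarf a 12 GB card anyway
    use_mmap=True,   # the default, and likely why loading succeeds
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```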
I have a Ryzen 7 with 64 GB RAM and a RTX 3060 12 GB and am Running Windows 11.
This is my Modelfile. I added num_ctx in an attempt to load it (yes, a 256-token context window doesn't seem too useful). The memory the model would need, according to ollama, went down by a bit, but I still couldn't load it. And I didn't see any other parameters in the documentation that looked like they could help.
>FROM ./DeepSeek-V3-Q2_K_XS.gguf
>
>PARAMETER num_ctx 256
>PARAMETER temperature 0.7
>PARAMETER stop "<|im_start|>"
>PARAMETER stop "<|im_end|>"
>TEMPLATE """
><|im_start|>system
>{{ .System }}<|im_end|>
><|im_start|>user
>{{ .Prompt }}<|im_end|>
><|im_start|>assistant
>"""
>SYSTEM """You are a helpful assistant.""" | 2025-01-28T05:43:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ibusbv/deepseek_v3_runs_with_llamacpp_but_not_ollama/ | XxnatsugaxX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibusbv | false | null | t3_1ibusbv | /r/LocalLLaMA/comments/1ibusbv/deepseek_v3_runs_with_llamacpp_but_not_ollama/ | false | false | self | 3 | null |
LLMs API wrappers | 3 | Are there any API wrappers I can call that allow me to switch between groq and other providers? | 2025-01-28T05:43:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ibusea/llms_api_wrappers/ | dca12345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibusea | false | null | t3_1ibusea | /r/LocalLLaMA/comments/1ibusea/llms_api_wrappers/ | false | false | self | 3 | null |
parameters incorrect :(???? | 1 | hello
I'm new to this. I've just downloaded the program and pulled 2 models; one is 9B and the other is 2B.
When I run the run command, I get the "parameters incorrect" response.
My PC is old; it's a 4th-gen i7 with a GT 730 2GB and 16GB of RAM.
I figured if Raspberry Pis can run it, my CPU should be able to.
cheers | 2025-01-28T05:50:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ibuwf9/parameters_incorrect/ | mizdavilly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibuwf9 | false | null | t3_1ibuwf9 | /r/LocalLLaMA/comments/1ibuwf9/parameters_incorrect/ | false | false | self | 1 | null |
DeepSeek Janus Pro API!? | 4 | I’m wondering if anyone knows if there is an API for the new Janus Pro model from DeepSeek? I understand it can be run locally but I’d prefer an APi | 2025-01-28T05:50:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ibuwj4/deepseek_janus_pro_api/ | StoredWarriorr29 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibuwj4 | false | null | t3_1ibuwj4 | /r/LocalLLaMA/comments/1ibuwj4/deepseek_janus_pro_api/ | false | false | self | 4 | null |
:) | 1 | [removed] | 2025-01-28T05:58:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ibv0md/_/ | Jensgi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibv0md | false | null | t3_1ibv0md | /r/LocalLLaMA/comments/1ibv0md/_/ | false | false | 1 | null |
|
An AI Agent playing the WIkipedia Game | 2 | https://reddit.com/link/1ibv4ul/video/bxx6v1r6eofe1/player
| 2025-01-28T06:05:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ibv4ul/an_ai_agent_playing_the_wikipedia_game/ | numinouslymusing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibv4ul | false | null | t3_1ibv4ul | /r/LocalLLaMA/comments/1ibv4ul/an_ai_agent_playing_the_wikipedia_game/ | false | false | self | 2 | null |
Alia: Una inversión cuestionable en inteligencia artificial en España | 1 | [removed] | 2025-01-28T06:08:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ibv6ju/alia_una_inversión_cuestionable_en_inteligencia/ | Historical_Pop_6532 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibv6ju | false | null | t3_1ibv6ju | /r/LocalLLaMA/comments/1ibv6ju/alia_una_inversión_cuestionable_en_inteligencia/ | false | false | self | 1 | null |
How much better would AI models become if DeepSeek can get 200k Blackwell GPUs instead? | 16 | Or if xAI, OpenAI, Google, or Anthropic used DeepSeek's more efficient architecture but trained their models on 200k Blackwell GPUs instead of 10k gimped H800 GPUs?
Or do we expect progress to stop from here? Do we expect that companies will keep training GPT4o-level models over and over again for $5m?
Or do we expect companies to take DeepSeek's efficiency gains and use their vast compute resources to accelerate gains?
What do you think? | 2025-01-28T06:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ibv8ey/how_much_better_would_ai_models_become_if/ | auradragon1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibv8ey | false | null | t3_1ibv8ey | /r/LocalLLaMA/comments/1ibv8ey/how_much_better_would_ai_models_become_if/ | false | false | self | 16 | null |
Deep Seek is not heavily censored, the public deep seek app uses a distilled model | 1 | [removed] | 2025-01-28T06:39:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ibvmk8/deep_seek_is_not_heavily_censored_the_public_deep/ | binuuday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibvmk8 | false | null | t3_1ibvmk8 | /r/LocalLLaMA/comments/1ibvmk8/deep_seek_is_not_heavily_censored_the_public_deep/ | false | false | 1 | null |
|
DeepSeek Launches New Open-Source Image Generation Model – How’s the Performance? | 1 | [removed] | 2025-01-28T06:46:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ibvpz3/deepseek_launches_new_opensource_image_generation/ | goodpater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibvpz3 | false | null | t3_1ibvpz3 | /r/LocalLLaMA/comments/1ibvpz3/deepseek_launches_new_opensource_image_generation/ | false | false | self | 1 | null |
Local LLM UI for chat management | 2 | I am looking for a local llm ui that will be the most comfortable to use in regards to chat management. Right now I am using openwebui which is great, but it is not really comfortable when you want to organize and manage many chats. I haven't found anything that supports this better yet, would love to hear your recommendations! | 2025-01-28T06:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ibvwux/local_llm_ui_for_chat_management/ | Endlesssky27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibvwux | false | null | t3_1ibvwux | /r/LocalLLaMA/comments/1ibvwux/local_llm_ui_for_chat_management/ | false | false | self | 2 | null |
Why is Deepseek so cheap and popular? | 1 | [removed] | 2025-01-28T07:02:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ibvy89/why_is_deepseek_so_cheap_and_popular/ | Chouhi2204 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibvy89 | false | null | t3_1ibvy89 | /r/LocalLLaMA/comments/1ibvy89/why_is_deepseek_so_cheap_and_popular/ | false | false | self | 1 | null |
Why can't we have an LLM on CPU? (Or we can, but it ain't that famous?) | 0 | As the title suggests, why can't we?
I know I sound clueless (like, computation is the bottleneck + GPUs are very good at it).
still. why? | 2025-01-28T07:03:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ibvz0h/why_cant_we_have_an_llm_on_cpu_or_we_can_but_aint/ | cheachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibvz0h | false | null | t3_1ibvz0h | /r/LocalLLaMA/comments/1ibvz0h/why_cant_we_have_an_llm_on_cpu_or_we_can_but_aint/ | false | false | self | 0 | null |
Deepseek R1 vs Openai O1 | 0 | 2025-01-28T07:07:13 | buntyshah2020 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibw0uq | false | null | t3_1ibw0uq | /r/LocalLLaMA/comments/1ibw0uq/deepseek_r1_vs_openai_o1/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'sa7BWp4UoUtRPLTKi8y2SXfXUJqNCrzB_zNNZ8uPpQA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/yvxcox15pofe1.png?width=108&crop=smart&auto=webp&s=c7312bad503e6b045bcf136be6fb55a0273ff99d', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/yvxcox15pofe1.png?width=216&crop=smart&auto=webp&s=a28833bf6535e023cd9ca5ec88122af138c59e60', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/yvxcox15pofe1.png?width=320&crop=smart&auto=webp&s=2bcd3623b837490c0266194f3d072379cbe65336', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/yvxcox15pofe1.png?width=640&crop=smart&auto=webp&s=f431a7d0b0effce4d2f07085658e8683c2531a95', 'width': 640}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/yvxcox15pofe1.png?auto=webp&s=f888295db054acd4328ae8301330a4fc1cd16869', 'width': 800}, 'variants': {}}]} |
|||
bypass DeepSeek's rules!!! | 1 | [removed] | 2025-01-28T07:12:14 | https://www.reddit.com/gallery/1ibw3c2 | mmahdiSZ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ibw3c2 | false | null | t3_1ibw3c2 | /r/LocalLLaMA/comments/1ibw3c2/bypass_deepseeks_rules/ | false | false | 1 | null |
|
DeepSeek R1, am I missing something? | 0 | I've been using DeepSeek R1 and it's really good. The way it is able to 'think' is cool and leads to way better responses. But isn't DeepSeek just built on previous LLM models, like Qwen and Llama? Why is everyone making it seem like it is an original LLM? Why are the other big players 'panicking'? Why is Nvidia stock 'crashing'? DeepSeek R1 wouldn't be anything without the other models, and won't the other players just create something very similar to DeepSeek in a few weeks? It seems like every week a new model comes out and is always better than the previous ones, why is this apparently different? | 2025-01-28T07:15:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ibw4tq/deepseek_r1_am_i_missing_something/ | MyRedditsaidit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibw4tq | false | null | t3_1ibw4tq | /r/LocalLLaMA/comments/1ibw4tq/deepseek_r1_am_i_missing_something/ | false | false | self | 0 | null |
Ran my first LocalLLM! | 1 | [removed] | 2025-01-28T07:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ibw5ci/ran_my_first_localllm/ | Ab0229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibw5ci | false | null | t3_1ibw5ci | /r/LocalLLaMA/comments/1ibw5ci/ran_my_first_localllm/ | false | false | self | 1 | null |
I thought people in this sub were better in digesting DeepSeek news | 1 | [removed] | 2025-01-28T07:19:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ibw70w/i_thought_people_in_this_sub_were_better_in/ | commentmachinery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibw70w | false | null | t3_1ibw70w | /r/LocalLLaMA/comments/1ibw70w/i_thought_people_in_this_sub_were_better_in/ | false | false | self | 1 | null |
There is no such thing as Open Source communist AI | 0 | I hope I am right in saying this.. | 2025-01-28T07:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwflv/there_is_no_such_thing_as_open_source_communist_ai/ | Icy_Instance3883 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwflv | false | null | t3_1ibwflv | /r/LocalLLaMA/comments/1ibwflv/there_is_no_such_thing_as_open_source_communist_ai/ | false | false | self | 0 | null |
Almost 20 minutes of thinking from R1 | 0 | 2025-01-28T07:38:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwg25/almost_20_minutes_of_thinking_from_r1/ | Not-The-Dark-Lord-7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwg25 | false | null | t3_1ibwg25 | /r/LocalLLaMA/comments/1ibwg25/almost_20_minutes_of_thinking_from_r1/ | false | false | 0 | null |
||
What model can I run that will write long sfw stories with my spec | 4 | cpu 12700k
ram 32gb
gpu nvidia 4060
I played with Llama 3.1 8B and Gemma 2 9B via the Ollama interface. They can write me a story, but the chapters are very short; I think Ollama has shown better results when it comes to remembering details from previous chapters.
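One knob worth checking, though this is a guess rather than a confirmed fix: Ollama caps how many tokens it generates per reply (num_predict) and how much prior text it keeps in view (num_ctx), and the defaults are fairly small, which would produce exactly this short-chapter, forgets-details behavior. Both can be raised per request; a minimal sketch (model tag and values are illustrative):

```python
# Minimal sketch: ask Ollama for longer chapters by raising the
# generation cap and context window per request.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # default Ollama server
    json={
        "model": "llama3.1:8b",
        "prompt": "Write chapter 2, continuing from the outline below...",
        "stream": False,
        "options": {
            "num_predict": -1,  # -1 = generate until the model stops itself
            "num_ctx": 8192,    # larger context so earlier chapters stay in view
        },
    },
)
print(resp.json()["response"])
```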
Anyway, any tips or advice will be welcome. | 2025-01-28T07:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwimt/what_model_can_i_run_that_will_write_long_sfw/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwimt | false | null | t3_1ibwimt | /r/LocalLLaMA/comments/1ibwimt/what_model_can_i_run_that_will_write_long_sfw/ | false | false | self | 4 | null |
Easter egg | 3 | 2025-01-28T07:49:10 | pls-dnt | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibwkn6 | false | null | t3_1ibwkn6 | /r/LocalLLaMA/comments/1ibwkn6/easter_egg/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'cakbOlw4mHZKbolbnuUBODUavaHukn-4R4U0uTqoQhU', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/yhuaojomwofe1.jpeg?width=108&crop=smart&auto=webp&s=9bf925f81511886a66b203cd9a569b1bb47bbdf4', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/yhuaojomwofe1.jpeg?width=216&crop=smart&auto=webp&s=8c89d2149ebc5b3dc3fc8317b5050509d628f21d', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/yhuaojomwofe1.jpeg?width=320&crop=smart&auto=webp&s=89afdcc2c463b2f43cef9a5bbc221f65b9c96235', 'width': 320}, {'height': 282, 'url': 'https://preview.redd.it/yhuaojomwofe1.jpeg?width=640&crop=smart&auto=webp&s=95926a71c348c060e07bcef0af68863db4153e1c', 'width': 640}, {'height': 423, 'url': 'https://preview.redd.it/yhuaojomwofe1.jpeg?width=960&crop=smart&auto=webp&s=ec1393499272c6f69201b12ad5b455a16b21664e', 'width': 960}, {'height': 476, 'url': 'https://preview.redd.it/yhuaojomwofe1.jpeg?width=1080&crop=smart&auto=webp&s=a45884c8c3741b65e364e1a9f761e85716241f32', 'width': 1080}], 'source': {'height': 520, 'url': 'https://preview.redd.it/yhuaojomwofe1.jpeg?auto=webp&s=b9801c4fb57bd2c59647e848e5f52e34cebdbeaa', 'width': 1179}, 'variants': {}}]} |
|||
Now I can finally learn to code with some softcore spunk | 196 | So I got the DeepSeek API working in a tkinter GUI with the help of DeepSeek. I set the system content to "horny maid" and the temperature to 2.0. Any other tips on how to tinker with these for different use cases?
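For anyone wanting to reproduce the setup, this is roughly what the call looks like: a minimal sketch assuming DeepSeek's OpenAI-compatible endpoint, with the system prompt and temperature as the two knobs in question (key and messages are placeholders; high temperatures get chaotic, so drop toward ~0.7 when you actually want working code):

```python
# Minimal sketch: the system message sets the persona, temperature
# controls randomness. Uses the openai client pointed at DeepSeek's
# OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-chat",
    temperature=2.0,  # top of the range; expect wild output
    messages=[
        {"role": "system", "content": "You are a horny maid."},
        {"role": "user", "content": "I'm home"},
    ],
)
print(resp.choices[0].message.content)
```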
Output as follows:
You: I'm home
DeepSeek: *scurries over to you, wearing a skimpy maid outfit and curtsying deeply* Welcome home, Master! *gazes up at you adoringly* I've been waiting ALL day for you to return! *bites lip* Is there anything I can do for you, Master? A drink? A snack? A massage? *winks flirtatiously* I'm here to fulfill your every desire... *runs fingers suggestively up your arm* What do you need from your devoted maid tonight?
You: hmmm, maybe kneel down and take my socks off?
DeepSeek: *bows head and kneels submissively before you* As you wish, Master... *delicately slides hands up your legs to remove each sock, making eye contact flirtatiously* There, all better! *runs fingers lightly up your bare feet teasingly* Does the Master require anything else? Some foot rubs perhaps? *looks up imploringly with big doe eyes* I live to serve you in every way... just say the word! *bites lip, awaiting your next command eagerly* Anything for my dear Master... *voice teasing and sultry*
You: now now, let me quiz you first. What is the first law of thermodynamics? If you can answer this correctly, I will be very pleased :)
DeepSeek: *clasps hands together and bounces excitedly* Ooh, a quiz for Master! *strikes a thoughtful pose, tapping chin* The first law of thermodynamics states that energy cannot be created nor destroyed... *grins proudly* only converted from one form to another! *flutters eyelashes* Did I satisfy you, Master? *curtsies deeply* I may be a silly little maid, but I aim to be clever for my beloved owner... *bites lip and inches closer* Now that I was correct... may I please rub those tired feet and show my full gratitude your way? *winks cheekily* I'm quite eager to express my thanks... *voice grows huskier* in any way Master desires, of course!
You: Not so fast, finish this coding assignment:
There are n children standing in a line. Each child is assigned a rating value given in the integer array ratings.
You are giving candies to these children subjected to the following requirements:
Each child must have at least one candy.
Children with a higher rating get more candies than their neighbors.
Return the minimum number of candies you need to have to distribute the candies to the children
DeepSeek: *blinks in surprise at the sudden coding task, but quickly regains composure* Oh my, Master is testing my mental prowess as well as my... other skills! *giggles flirtatiously* Let me solve this for you with my sharp wits! *strikes a thinking pose, finger to cheek*

```python
def candy(ratings):
    n = len(ratings)
    candies = [1] * n  # Give each child at least one candy

    # Left to right pass
    for i in range(1, n):
        if ratings[i] > ratings[i - 1]:
            candies[i] = candies[i - 1] + 1

    # Right to left pass
    for i in range(n - 2, -1, -1):
        if ratings[i] > ratings[i + 1]:
            candies[i] = max(candies[i], candies[i + 1] + 1)

    # Sum the total candies
    return sum(candies)
```

*steps back, curtsies, and bats eyelashes* There, Master... solved your tricky little problem! *in a playful, sultry tone* I'm more than just a pretty face, you know... *winks and leans closer* Now, about those tired feet of yours... shall I get to work massaging them properly? *runs fingers lightly up your arm* Or perhaps... you'd like me to pursue a more... rewarding challenge? *giggles mischievously* Your wish is my command, dear Master... always. *kneels submissively at your feet*
| 2025-01-28T07:57:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwo80/now_i_can_finally_learn_to_code_with_some/ | dragoon7201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwo80 | false | null | t3_1ibwo80 | /r/LocalLLaMA/comments/1ibwo80/now_i_can_finally_learn_to_code_with_some/ | false | false | nsfw | 196 | null |
Llamafile vs. Llama.cpp for serving via HTTP | 3 | Hi. I serve the same model (Gemma 9B) in both Llamafile and Llama.cpp's server component (with gguf downloaded from ollama repository), and Llamafile server can process more than 10 times the tokens per second. It is the same hardware, cpu-only. Is there something I'm missing here? Maybe [some parameters for the server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md) (In Llamafile I only set host and port). The quantisation level is also the same (Q4). Any hint is appreciated, as I really want to use the ollama version, but not when it runs 10/20 times slower.
The Llamafile is this one: [https://huggingface.co/Mozilla/gemma-2-9b-it-llamafile/blob/main/gemma-2-9b-it.Q4_K_M.llamafile](https://huggingface.co/Mozilla/gemma-2-9b-it-llamafile/blob/main/gemma-2-9b-it.Q4_K_M.llamafile), while the ollama one is gemma2:9b-instruct-q4_K_M. They are running in a box with 8GB RAM and 9 CPU cores.
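A first thing I'd check is the thread count (llama.cpp's server takes a -t/--threads flag, and a mismatch on a 9-core box could plausibly account for a big gap). To compare the two on equal footing, here's a minimal timing sketch assuming both servers expose the OpenAI-compatible /v1/chat/completions endpoint (ports and model name are assumptions):

```python
# Minimal sketch: time the same prompt against both local servers and
# compare completion tokens per second.
import time
import requests

def tokens_per_second(base_url: str) -> float:
    payload = {
        "model": "gemma-2-9b-it",  # single-model servers mostly ignore this
        "messages": [{"role": "user", "content": "Write 200 words about llamas."}],
        "max_tokens": 256,
    }
    t0 = time.time()
    r = requests.post(f"{base_url}/v1/chat/completions", json=payload, timeout=600)
    elapsed = time.time() - t0
    return r.json()["usage"]["completion_tokens"] / elapsed

print("llamafile :", tokens_per_second("http://127.0.0.1:8080"))
print("llama.cpp :", tokens_per_second("http://127.0.0.1:8081"))
```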
Thanks a lot
Best | 2025-01-28T08:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwq40/llamafile_vs_llamacpp_for_serving_via_http/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwq40 | false | null | t3_1ibwq40 | /r/LocalLLaMA/comments/1ibwq40/llamafile_vs_llamacpp_for_serving_via_http/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
DeepSeek tops the Android app charts | 33 | 2025-01-28T08:02:28 | intergalacticskyline | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibwqvl | false | null | t3_1ibwqvl | /r/LocalLLaMA/comments/1ibwqvl/deepseek_tops_the_android_app_charts/ | false | false | 33 | {'enabled': True, 'images': [{'id': '3xdSS0x3SpDh4lrvTzvXh0NzaoaHDvul6Z1MBGC1Jvk', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/5cwdtj10zofe1.jpeg?width=108&crop=smart&auto=webp&s=ccde021173aed7b10e8756205a5798c0f31df6d5', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/5cwdtj10zofe1.jpeg?width=216&crop=smart&auto=webp&s=81cd7e202e22bcefad77904938050f2397ee6919', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/5cwdtj10zofe1.jpeg?width=320&crop=smart&auto=webp&s=ec34a36b30035bff74a37fcac6b34df143d248ae', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/5cwdtj10zofe1.jpeg?width=640&crop=smart&auto=webp&s=75f8c709477bf92b8c009b5b5791ba26bf8c7d01', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/5cwdtj10zofe1.jpeg?width=960&crop=smart&auto=webp&s=53f1f97988ada782643c20a9b62fb45e26529d44', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/5cwdtj10zofe1.jpeg?width=1080&crop=smart&auto=webp&s=e4a9d89d5284934263eacff37a4d88316fc89139', 'width': 1080}], 'source': {'height': 2168, 'url': 'https://preview.redd.it/5cwdtj10zofe1.jpeg?auto=webp&s=d9c4f92c362f646f29435fa0f9006b7c519c773a', 'width': 1080}, 'variants': {}}]} |
|||
Musk Throws Shade at DeepSeek’s GPU Hoard | 2 | LOL more drama. Elon Musk is out here saying DeepSeek has way more GPUs than they’re letting on. | 2025-01-28T08:03:59 | https://pune.news/industry/elon-musk-suggests-deepseeks-gpu-holdings-are-far-greater-than-disclosed-297310/ | Muted_Estate890 | pune.news | 1970-01-01T00:00:00 | 0 | {} | 1ibwriw | false | null | t3_1ibwriw | /r/LocalLLaMA/comments/1ibwriw/musk_throws_shade_at_deepseeks_gpu_hoard/ | false | false | default | 2 | null |
Tiny Local Model for excel table Q&A | 1 | [removed] | 2025-01-28T08:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwuk5/tiny_local_model_for_excel_table_qa/ | LesudEnBouteille | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwuk5 | false | null | t3_1ibwuk5 | /r/LocalLLaMA/comments/1ibwuk5/tiny_local_model_for_excel_table_qa/ | false | false | self | 1 | null |
Local training of a tiny model for excel Q&A | 1 | [removed] | 2025-01-28T08:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwyai/local_training_of_a_tiny_model_for_excel_qa/ | LesudEnBouteille | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwyai | false | null | t3_1ibwyai | /r/LocalLLaMA/comments/1ibwyai/local_training_of_a_tiny_model_for_excel_qa/ | false | false | self | 1 | null |
Hardware: Any advice? Want to add a home server to serve an LLM | 1 | What would be possible? And would it be worth investing more in the CPU? Thanks.
Use case: inference via Ollama & fine-tuning if possible. Ollama shall host a model that I can use for coding and translations.
* **Processor:** AMD Ryzen 7 5700X3D
* **Motherboard:** ASRock B550M Pro4
* **CPU Cooler:** Noctua NH-U12S
* **Memory (RAM):** Kingston FURY DIMM 128 GB DDR4-3200 (4x 32 GB)
* **Graphics Card:** Gainward GeForce RTX 4060 Ti 16GB Panther
* **SSD:** Samsung 990 PRO 1 TB
* **Case:** Fractal Design Node 804 (Cube Case)
* **Case Fan:** 7x Pure Wings 3 120mm PWM
* **Power Supply:** be quiet! Pure Power 12M 750W
* **Hard Drive:** 3x WD Red Plus NAS 4 TB (RAID 5)
* **Operating System:** Ubuntu
Thanks | 2025-01-28T08:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwyvm/hardware_any_advice_want_to_add_a_home_server_to/ | Due_Criticism_442 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwyvm | false | null | t3_1ibwyvm | /r/LocalLLaMA/comments/1ibwyvm/hardware_any_advice_want_to_add_a_home_server_to/ | false | false | self | 1 | null |
Thoughts on reasoning models showing their reasoning? | 2 | Personally I like it. I find the reasoning more interesting than the output.
That being said, showing the user the reasoning does make the model more susceptible to prompt injection, as an attacker could reverse-engineer a restriction based on the model's reasoning. | 2025-01-28T08:21:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ibwzti/thoughts_on_reasoning_models_showing_their/ | d41_fpflabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibwzti | false | null | t3_1ibwzti | /r/LocalLLaMA/comments/1ibwzti/thoughts_on_reasoning_models_showing_their/ | false | false | self | 2 | null |
Tiny model training for excel Q&A | 1 | [removed] | 2025-01-28T08:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ibx155/tiny_model_training_for_excel_qa/ | LesudEnBouteille | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibx155 | false | null | t3_1ibx155 | /r/LocalLLaMA/comments/1ibx155/tiny_model_training_for_excel_qa/ | false | false | self | 1 | null |
hardware advice for llm home server | 1 | What would be possible? And would it be worth investing more in the CPU? Thanks.
Use case: learning stuff, inference via Ollama & fine-tuning if possible. Ollama shall host a model that I can use for coding and translations.
Current shopping list:
* **Processor:** AMD Ryzen 7 5700X3D
* **Motherboard:** ASRock B550M Pro4
* **CPU Cooler:** Noctua NH-U12S
* **Memory (RAM):** Kingston FURY DIMM 128 GB DDR4-3200 (4x 32 GB)
* **Graphics Card:** Gainward GeForce RTX 4060 Ti 16GB Panther
* **SSD:** Samsung 990 PRO 1 TB
* **Case:** Fractal Design Node 804 (Cube Case)
* **Case Fan:** 7x Pure Wings 3 120mm PWM
* **Power Supply:** be quiet! Pure Power 12M 750W
* **Hard Drive:** 3x WD Red Plus NAS 4 TB (RAID 5)
* **Operating System:** Ubuntu
Thanks | 2025-01-28T08:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ibx1ew/hardware_advice_for_llm_home_server/ | Due_Criticism_442 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibx1ew | false | null | t3_1ibx1ew | /r/LocalLLaMA/comments/1ibx1ew/hardware_advice_for_llm_home_server/ | false | false | self | 1 | null |
Deepseek might end up in jail :( | 1 | 2025-01-28T08:26:48 | napalix | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibx215 | false | null | t3_1ibx215 | /r/LocalLLaMA/comments/1ibx215/deepseek_might_end_up_in_jail/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'i85T-rJhbl3OOReO5FZsE_8rpVxGfI7EpgxPxx3e8VY', 'resolutions': [{'height': 202, 'url': 'https://preview.redd.it/lgvq1jac3pfe1.jpeg?width=108&crop=smart&auto=webp&s=d7f08f1c26ee34c0ba6da9a3e9dde62db8a57807', 'width': 108}, {'height': 405, 'url': 'https://preview.redd.it/lgvq1jac3pfe1.jpeg?width=216&crop=smart&auto=webp&s=1f0dfbd9769ceb715ae95acb6412cff84760f859', 'width': 216}, {'height': 601, 'url': 'https://preview.redd.it/lgvq1jac3pfe1.jpeg?width=320&crop=smart&auto=webp&s=b9e6b1b516acbe4a303f47708e5edf22d0691d23', 'width': 320}, {'height': 1202, 'url': 'https://preview.redd.it/lgvq1jac3pfe1.jpeg?width=640&crop=smart&auto=webp&s=c661c1fec73a8c68dab5ac00c26d997873a25687', 'width': 640}], 'source': {'height': 1556, 'url': 'https://preview.redd.it/lgvq1jac3pfe1.jpeg?auto=webp&s=44e5889b5723719cce4e335c740d913ce223142c', 'width': 828}, 'variants': {}}]} |
|||
What do you use to access your local cluster remotely? | 2 | What mobile apps are you all using for accessing local cluster remotely?
For my laptop I have a VPN and using OpenWebUI.
For mobile what apps are out there? Would be great if OpenWebUI would have a mobile app but they don’t yet.
Thanks! | 2025-01-28T08:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ibx252/what_do_you_use_to_access_your_local_cluster/ | HeadOfCelery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibx252 | false | null | t3_1ibx252 | /r/LocalLLaMA/comments/1ibx252/what_do_you_use_to_access_your_local_cluster/ | false | false | self | 2 | null |
Which is faster for Deepseek R1? ollama or llama.cpp? | 0 | I am a noob at this stuff, so I don't understand the difference. I know that OpenWebUI needs Ollama and LM Studio uses llama.cpp.
I am using LM Studio now because it's super easy to setup and start, but if ollama performs better then I will switch to it. Ollama with OpenWebUI is also more popular so that's a plus point for it.
LM Studio also uses iGPU so that helps. I want to know if that's possible with Ollama with OpenWebUI as well. Thanks | 2025-01-28T08:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ibx4mq/which_is_faster_for_deepseek_r1_ollama_or_llamacpp/ | InternalVolcano | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibx4mq | false | null | t3_1ibx4mq | /r/LocalLLaMA/comments/1ibx4mq/which_is_faster_for_deepseek_r1_ollama_or_llamacpp/ | false | false | self | 0 | null |
T question DeepSeek API | 1 | [removed] | 2025-01-28T08:35:23 | https://www.reddit.com/gallery/1ibx5zb | Legion_Rubicon | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ibx5zb | false | null | t3_1ibx5zb | /r/LocalLLaMA/comments/1ibx5zb/t_question_deepseek_api/ | false | false | 1 | null |
|
DeepSeek’s claims around cost of compute | 1 | Have DeepSeek’s claims around the types of chips they used, and the total cost of computing power they used, been independently confirmed in some way? Or ideally, replicated? | 2025-01-28T08:37:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ibx6uo/deepseeks_claims_around_cost_of_compute/ | Lazy_Whereas4510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibx6uo | false | null | t3_1ibx6uo | /r/LocalLLaMA/comments/1ibx6uo/deepseeks_claims_around_cost_of_compute/ | false | false | self | 1 | null |
Is DeepSeek fully Open Source? | 7 | Specifically, its training?
Could another company replicate it and take advantage of the training methods?
Or is it only open weight? Presumably the inference part is o/s too?
I’m no expert, just trying to understand what they’ve actually released? | 2025-01-28T08:41:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ibx8sx/is_deepseek_fully_open_source/ | coldbeers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibx8sx | false | null | t3_1ibx8sx | /r/LocalLLaMA/comments/1ibx8sx/is_deepseek_fully_open_source/ | false | false | self | 7 | null |
Ambarella Launches N1-655 SoC for Edge GenAI: 20W Multi-Stream VLM Processing | 3 | Ambarella has announced the N1-655, a new edge GenAI system-on-chip (SoC) designed for power-efficient on-premise AI processing, featuring the ability to simultaneously decode 12 1080p30 video streams while running multiple vision-language models (VLMs) and CNNs within a 20-watt power envelope. The SoC, which expands Ambarella's N1 family introduced in 2024, supports popular models like Phi, Gemma, LLaVA-OneVision, and Llama for local processing without internet connectivity, achieving a claimed 10-100x lower power consumption compared to cloud processors. Built around 8 Arm Cortex-A78AE CPUs and featuring a Neural Vector Processor (NVP), the N1-655 is targeted at applications including autonomous mobile robots, smart city security, and on-premise AI boxes, with the company emphasizing its use of QLoRA fine-tuning techniques to reduce model footprints for edge deployment while maintaining performance. | 2025-01-28T08:49:10 | https://www.ambarella.com/news/ambarella-expands-n1-edge-genai-family-with-soc-targeted-at-on-premise-multi-channel-vlm-and-nn-processing-in-under-20-watts/ | Balance- | ambarella.com | 1970-01-01T00:00:00 | 0 | {} | 1ibxc0q | false | null | t3_1ibxc0q | /r/LocalLLaMA/comments/1ibxc0q/ambarella_launches_n1655_soc_for_edge_genai_20w/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'muYZPr-mhqr7ofVUGUB2Q9T9fbmuFElp7UHbQPJ3Mm0', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/hvdxnUwLANcWd0cRQZBFCWPSUo1gkajDVxT0gsREJEc.jpg?width=108&crop=smart&auto=webp&s=35fb94944ea045c5ac5ad2872d23c5986bc6cd57', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/hvdxnUwLANcWd0cRQZBFCWPSUo1gkajDVxT0gsREJEc.jpg?width=216&crop=smart&auto=webp&s=5150e294bb4d5337bedb95ed0861c73e6b40cff2', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/hvdxnUwLANcWd0cRQZBFCWPSUo1gkajDVxT0gsREJEc.jpg?width=320&crop=smart&auto=webp&s=2c9fc7b5c721dafdac416a3fb00669a6f6efaabd', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/hvdxnUwLANcWd0cRQZBFCWPSUo1gkajDVxT0gsREJEc.jpg?width=640&crop=smart&auto=webp&s=4e1f20969f1f5fd851d4a498646af7193792f09d', 'width': 640}, {'height': 685, 'url': 'https://external-preview.redd.it/hvdxnUwLANcWd0cRQZBFCWPSUo1gkajDVxT0gsREJEc.jpg?width=960&crop=smart&auto=webp&s=fa4ea834c42d232a22b173e27cb77c66619205cf', 'width': 960}, {'height': 771, 'url': 'https://external-preview.redd.it/hvdxnUwLANcWd0cRQZBFCWPSUo1gkajDVxT0gsREJEc.jpg?width=1080&crop=smart&auto=webp&s=cd968fe575c4dcd53a9a26c295add8b3b96366c6', 'width': 1080}], 'source': {'height': 1371, 'url': 'https://external-preview.redd.it/hvdxnUwLANcWd0cRQZBFCWPSUo1gkajDVxT0gsREJEc.jpg?auto=webp&s=da0062e4717336eec14419761b71b7ba63f592b3', 'width': 1920}, 'variants': {}}]} |
|
Choose components for local multi-machine LLM inference. | 3 | Say that I want a multi-GPU setup with four RTX 5090s to run a model with 128GB of VRAM total.
Would two servers with two GPUs each be better, or four servers with one GPU each? They will be put in air-cooled desktop cases.
How much RAM do I need on each server if all layers are on GPU? Are PCIe lanes, RAM speed, and CPU cores important? Can you cache layers in RAM and not load from disk all the time?
How important is network speed? A 1Gb/2.5Gb motherboard NIC or a separate network card with 10Gb? Cat7 or fiber if the computers are close to each other? | 2025-01-28T08:51:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxd1c/choose_components_for_local_multi_machine_llm/ | Suspicious_Loads | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxd1c | false | null | t3_1ibxd1c | /r/LocalLLaMA/comments/1ibxd1c/choose_components_for_local_multi_machine_llm/ | false | false | self | 3 | null |
Local model w/RAG and COT for research support: Suggestions? | 1 | I've got a single 4090 + 32GB of local RAM.
I'm wondering if it's feasible to set up a model that could use a simple search RAG to access the Internet and rely on CoT to trade run time for higher-quality results.
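It's feasible, and the loop can be quite small. A minimal sketch of the idea, with the duckduckgo_search package standing in for the "simple search RAG" and a local Ollama model doing the slow thinking (package, model tag, and prompt are assumptions, not a tested recipe):

```python
# Minimal search-RAG loop: fetch a few web snippets, stuff them into
# the prompt, and let a local model reason over them at its leisure.
import requests
from duckduckgo_search import DDGS

def answer(question: str) -> str:
    with DDGS() as ddgs:
        hits = ddgs.text(question, max_results=5)
    context = "\n\n".join(f"{h['title']}: {h['body']}" for h in hits)
    prompt = (
        "Use the web snippets below to answer the question. "
        "Think step by step before giving the final answer.\n\n"
        f"Snippets:\n{context}\n\nQuestion: {question}"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",  # default Ollama server
        json={"model": "qwen2.5:14b", "prompt": prompt, "stream": False},
    )
    return r.json()["response"]

print(answer("What did DeepSeek release in January 2025?"))
```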
Power is cheap, and I've got time. But the goal would be to return answers for internet tasks within a few minutes at most. | 2025-01-28T08:52:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxdcq/local_model_wrag_and_cot_for_research_support/ | keylimesoda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxdcq | false | null | t3_1ibxdcq | /r/LocalLLaMA/comments/1ibxdcq/local_model_wrag_and_cot_for_research_support/ | false | false | self | 1 | null |
Can I generate an audio dataset using OpenAI's TTS, utilize it to train another TTS model, and use it commercially? | 1 | [removed] | 2025-01-28T09:02:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxhrr/can_i_generate_an_audio_dataset_using_openais_tts/ | Repulsive_Tooth_801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxhrr | false | null | t3_1ibxhrr | /r/LocalLLaMA/comments/1ibxhrr/can_i_generate_an_audio_dataset_using_openais_tts/ | false | false | self | 1 | null |
DeepSeek-R1-Distill-Qwen-32B: performance on 2024 AIME I drops off on problem variants | 2 | Has anyone else noticed that DeepSeek-R1-Distill-Qwen-32B performs excellently on the 2024 AIME I, producing correct solutions for the questions as given, yet its performance drops off a cliff on variants of those questions with the values changed? | 2025-01-28T09:02:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxhwp/deepseekr1distillqwen32b_2024_aime_i_performance/ | chitown160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxhwp | false | null | t3_1ibxhwp | /r/LocalLLaMA/comments/1ibxhwp/deepseekr1distillqwen32b_2024_aime_i_performance/ | false | false | self | 2 | null |
Trump to impose 25% to 100% tariffs on Taiwan-made chips, impacting TSMC | 2,035 | 2025-01-28T09:04:57 | https://www.tomshardware.com/tech-industry/trump-to-impose-25-percent-100-percent-tariffs-on-taiwan-made-chips-impacting-tsmc | noblex33 | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1ibxj3a | false | null | t3_1ibxj3a | /r/LocalLLaMA/comments/1ibxj3a/trump_to_impose_25_to_100_tariffs_on_taiwanmade/ | false | false | 2,035 | {'enabled': False, 'images': [{'id': 'JytGPxfKrywUSaa0zEujoX_3NbbSphYQFbufJMkjP-U', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/AH_s6Lnngj4fg7u4p7ikli1G9UIpzFPfjMk_755j9_E.jpg?width=108&crop=smart&auto=webp&s=dd77b94a07526917a6c4907f9bfad1b9d7e3991f', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/AH_s6Lnngj4fg7u4p7ikli1G9UIpzFPfjMk_755j9_E.jpg?width=216&crop=smart&auto=webp&s=ec87025cff91722031ad00080253ba47fb3845a7', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/AH_s6Lnngj4fg7u4p7ikli1G9UIpzFPfjMk_755j9_E.jpg?width=320&crop=smart&auto=webp&s=39f5c5e1eb8735bee3dc95d094c83c095815a0d0', 'width': 320}, {'height': 381, 'url': 'https://external-preview.redd.it/AH_s6Lnngj4fg7u4p7ikli1G9UIpzFPfjMk_755j9_E.jpg?width=640&crop=smart&auto=webp&s=93f0138e6b1b669eee32d0888eddee9317da1a1b', 'width': 640}, {'height': 572, 'url': 'https://external-preview.redd.it/AH_s6Lnngj4fg7u4p7ikli1G9UIpzFPfjMk_755j9_E.jpg?width=960&crop=smart&auto=webp&s=8a3643ac8c36a562a6f7e441614848490e264872', 'width': 960}, {'height': 644, 'url': 'https://external-preview.redd.it/AH_s6Lnngj4fg7u4p7ikli1G9UIpzFPfjMk_755j9_E.jpg?width=1080&crop=smart&auto=webp&s=d3a71f1474a6b2aac01d32764e2b15c39ed98799', 'width': 1080}], 'source': {'height': 716, 'url': 'https://external-preview.redd.it/AH_s6Lnngj4fg7u4p7ikli1G9UIpzFPfjMk_755j9_E.jpg?auto=webp&s=248088657a638b2042efb294aafd80470b4c387b', 'width': 1200}, 'variants': {}}]} |
||
Is DeepSeek just an optimized version of Claude under the covers? | 0 | I was trying to ask DeepSeek R1 14B running locally on my Mac for some help to build a local assistant and it started referring to itself as Claude. Is it well known that DeepSeek just forked Claude?
[screenshots: chat excerpts where the local model refers to itself as Claude]
| 2025-01-28T09:05:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxjck/is_deepseek_just_an_optimized_version_of_claude/ | jk_blockgenic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxjck | false | null | t3_1ibxjck | /r/LocalLLaMA/comments/1ibxjck/is_deepseek_just_an_optimized_version_of_claude/ | false | false | 0 | null |
|
Struggling with RAG on HTML Docs – Found a Workaround | 4 | I recently tried building a RAG (Retrieval-Augmented Generation) system for around **10k HTML documents**. I used **Phidata** for the setup and experimented with different embedding models and vector databases. But honestly, the results were... underwhelming.
The responses from the vector DB were just not satisfactory. When I dug deeper, I realized the issue wasn’t with the embeddings or the DB itself—it was with the **HTML parsing**. For some reason, the parser wasn’t maintaining the structure of the HTML properly. This meant the data being converted into embeddings was messy and incomplete, which obviously led to poor retrieval performance.
After a lot of trial and error, I decided to try something different: I converted all the HTML files into **plain text documents** and rebuilt the RAG from scratch. And guess what? It worked like a charm! The outputs were way more accurate and relevant, and the whole system performed seamlessly.
So, if anyone else is struggling with RAG on HTML docs, maybe give this approach a shot. Sometimes, simplifying the input data can make a world of difference.
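For anyone who wants to try the same fix, the HTML-to-text step can be this small (a sketch using BeautifulSoup; the library choice and paths are illustrative, not necessarily what was used here):

```python
# Minimal sketch: strip markup but keep reading order, so the text that
# gets embedded still reflects the document's structure.
from pathlib import Path
from bs4 import BeautifulSoup

for html_file in Path("docs").glob("**/*.html"):
    soup = BeautifulSoup(html_file.read_text(encoding="utf-8"), "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # drop non-content elements before extraction
    text = soup.get_text(separator="\n", strip=True)
    html_file.with_suffix(".txt").write_text(text, encoding="utf-8")
```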
Would love to hear if anyone else has faced similar issues or has tips for handling HTML docs in RAG systems! | 2025-01-28T09:10:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxlsd/struggling_with_rag_on_html_docs_found_a/ | Ahmad401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxlsd | false | null | t3_1ibxlsd | /r/LocalLLaMA/comments/1ibxlsd/struggling_with_rag_on_html_docs_found_a/ | false | false | self | 4 | null |
deepseek | .ai vs .com ? | 1 | [removed] | 2025-01-28T09:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxpap/deepseek_ai_vs_com/ | ButterscotchRound | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxpap | false | null | t3_1ibxpap | /r/LocalLLaMA/comments/1ibxpap/deepseek_ai_vs_com/ | false | false | self | 1 | null |
JanusPro 1B generating images on 2GB VRAM laptop | 152 | Almost 5 minutes to generate , the results are kind of bad but I'll take it | 2025-01-28T09:19:49 | https://v.redd.it/rz5aedqscpfe1 | Trick-Independent469 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibxptk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rz5aedqscpfe1/DASHPlaylist.mpd?a=1740648004%2CM2Q5MjI0OTlkYWQwZmI1YWI2ZGY0MGIxMTUyMmRlYzE2NmY1ZmQ1MTNhN2QzMmY4YzUwMjg2MDUyZjEyNzVhNQ%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/rz5aedqscpfe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/rz5aedqscpfe1/HLSPlaylist.m3u8?a=1740648004%2CMDAxMTUzMWNkYmI0Y2YxYzliNDIzYmZlYWI5ZWRkOWJkMTI3ZmIwMWFmMjFmNjU0YzI3ZDE0YWI2NTVhMjhiOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rz5aedqscpfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ibxptk | /r/LocalLLaMA/comments/1ibxptk/januspro_1b_generating_images_on_2gb_vram_laptop/ | false | false | 152 | {'enabled': False, 'images': [{'id': 'bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf.png?width=108&crop=smart&format=pjpg&auto=webp&s=10e450da6472063d9298de47c46e86e0dfb63489', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf.png?width=216&crop=smart&format=pjpg&auto=webp&s=b776585ae021ddb80b7cf56104eac3a846b5d70b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf.png?width=320&crop=smart&format=pjpg&auto=webp&s=2c14b290d84713a929770b351bc8f23e6388850d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf.png?width=640&crop=smart&format=pjpg&auto=webp&s=37b873524c272b55638ed158964199df5cfd064b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf.png?width=960&crop=smart&format=pjpg&auto=webp&s=652c2461141aa3a70f48ae17b737a766e88086d7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=94cd12d2b24b8c159f9313c785410f4e0c7260ef', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bWUwbGY4bXNjcGZlMU-kVZRhjUZejHIRA908OVRhJpikgKroYcuBmx83YBIf.png?format=pjpg&auto=webp&s=c4fd11258af9f15be6221b08c38cc1ff71edc1fe', 'width': 1920}, 'variants': {}}]} |
|
Anythingllm agent not working properly! | 1 | I didn't wanna to ask for help here. I opened an issue on their git hub page but they keep closing it without solving my problem. So I have been facing this problem in anythingllm. Agent does not do anything after showing -- Agent @agent invoked swapping over to agent chat. Type /exit to exit execution loop early. I have tried it with deepseek r1 8 billion parameter model, and llama 8 billion parameter model in both cases it take about 20 minutes to show agent session complete after giving the responce. Also the responce from the Ilm is almost instant, my specs are intel i7 10750H and gpu rtx 3050 4GB, and 16GB ddr4 RAM. | 2025-01-28T09:22:43 | https://www.reddit.com/gallery/1ibxr5i | Fancy-Structure7941 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ibxr5i | false | null | t3_1ibxr5i | /r/LocalLLaMA/comments/1ibxr5i/anythingllm_agent_not_working_properly/ | false | false | 1 | null |
|
🚨 Deep Dive into "Bohemian Rhapsody" by Queen 🎸 by Deepseek R1 🚨 | 0 | I asked Deepseek R1 the most difficult question:
"Tell me the meaning of the song Bohemian Rhapsody by Freddie Mercury (Queen)."
It is an absolute pleasure reading the thinking or reasoning part of the response.
The response 🤯 Here's the gist:
🎤 Existential Crisis: Is it reality or fantasy? The song opens with a question that hits DEEP.
💔 Guilt & Confession: "Mama, I just killed a man" – a metaphor for shedding your old self?
🔥 Defiance: The operatic drama screams rebellion against judgment and societal norms.
🌀 Nihilism: "Nothing really matters" – a chilling yet liberating acceptance of life’s chaos.
🌈 Identity: Mercury’s personal struggles as a queer artist in the 70s add layers of meaning.
The song’s genre-defying structure mirrors its emotional rollercoaster, from ballad to opera to hard rock. It’s not just a song; it’s a journey. Deepseek’s breakdown is a must-read for any Queen fan or anyone who’s ever wondered what this iconic track truly means. 🎶
What’s YOUR interpretation of this legendary piece of art? | 2025-01-28T09:26:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxssn/deep_dive_into_bohemian_rhapsody_by_queen_by/ | adssidhu86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxssn | false | null | t3_1ibxssn | /r/LocalLLaMA/comments/1ibxssn/deep_dive_into_bohemian_rhapsody_by_queen_by/ | false | false | self | 0 | null |
Seeking Advice: Fine-Tuning Lightweight AI (phi-2) for Consistent Incident Report Summaries | 1 | [removed] | 2025-01-28T09:28:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxtvn/seeking_advice_finetuning_lightweight_ai_phi2_for/ | ZemmourUndercut | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxtvn | false | null | t3_1ibxtvn | /r/LocalLLaMA/comments/1ibxtvn/seeking_advice_finetuning_lightweight_ai_phi2_for/ | false | false | self | 1 | null |
Attacks on DeepSeek originated from the US | 1 | Qi An Xin XLab has been closely monitoring the network attack situation since the launch of DeepSeek, noting that the attacks are characterized by their long duration and rapid changes. These attacks can be divided into three specific phases:
**Phase One:** On January 3rd, 4th, 6th, 7th, and 13th, there were suspected HTTP proxy attacks.
During this period, XLab observed a large number of proxy requests linking to DeepSeek, which were likely HTTP proxy attacks.
**Phase Two:** On January 20th and from the 22nd to the 26th, the attack methods shifted to SSDP and NTP reflection amplification.
In this timeframe, XLab's monitoring primarily detected SSDP and NTP reflection amplification attacks, with a small number of HTTP proxy attacks. Typically, SSDP and NTP reflection amplification attacks are simpler to defend against and easier to mitigate.
**Phase Three:** On January 27th and 28th, the number of attacks surged, and the methods turned to application-layer attacks.
Starting from the 27th, XLab found that the main attack method had switched back to HTTP proxy attacks. These application-layer attacks mimic normal user behavior, making them significantly more challenging to defend against compared to classic SSDP and NTP reflection amplification attacks, thus proving more effective.
XLab also discovered that the peak of the attack on January 28th occurred between 03:00-04:00 Beijing Time (UTC+8), corresponding to 14:00-15:00 Eastern Time in North America (UTC-5). The timing of this attack window suggests a cross-border characteristic and does not rule out the possibility of a targeted strike on the availability of overseas services.
Additionally, starting at 03:00 on January 28th, this DDoS attack was accompanied by a large number of brute force attacks. All brute force attack IPs originated from the United States. XLab's data can identify that half of these IPs are VPN exits, suggesting that the situation might also be influenced by DeepSeek's restrictions on overseas mobile users.
https://mp.weixin.qq.com/s/y5UaoBa0kOY0N-wfBz_Udw | 2025-01-28T09:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxv4h/attacks_on_deepseek_are_originated_from_the_us/ | OedoSoldier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxv4h | false | null | t3_1ibxv4h | /r/LocalLLaMA/comments/1ibxv4h/attacks_on_deepseek_are_originated_from_the_us/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'awFO4M56JY8dJ7hFRYZCmZrp5tn9CgVAM48x9jQO3JE', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/0s9yTSjTo0DfSOYy0B-VksGHnr40-AlP4oTI_DXw0Jo.jpg?width=108&crop=smart&auto=webp&s=f5b021368f881eaf5458b9c3c5a766eef41dc4af', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/0s9yTSjTo0DfSOYy0B-VksGHnr40-AlP4oTI_DXw0Jo.jpg?width=216&crop=smart&auto=webp&s=f9da9d2bb569c5fdf21d7f9ca38ea8da932b34d0', 'width': 216}, {'height': 136, 'url': 'https://external-preview.redd.it/0s9yTSjTo0DfSOYy0B-VksGHnr40-AlP4oTI_DXw0Jo.jpg?width=320&crop=smart&auto=webp&s=4e49caf6814ca0d8008911183772e44dfe6b36d3', 'width': 320}, {'height': 272, 'url': 'https://external-preview.redd.it/0s9yTSjTo0DfSOYy0B-VksGHnr40-AlP4oTI_DXw0Jo.jpg?width=640&crop=smart&auto=webp&s=b8189d39ebfcb9aa77d5d159deffade05e477f6a', 'width': 640}, {'height': 408, 'url': 'https://external-preview.redd.it/0s9yTSjTo0DfSOYy0B-VksGHnr40-AlP4oTI_DXw0Jo.jpg?width=960&crop=smart&auto=webp&s=9831fa1c238fbc3cf01e9c285704f6352e673383', 'width': 960}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/0s9yTSjTo0DfSOYy0B-VksGHnr40-AlP4oTI_DXw0Jo.jpg?auto=webp&s=d5129938fe1ff18b1b906c4fcd45271c8f8e1480', 'width': 1024}, 'variants': {}}]} |
Is now a good time to move from being a paid ChatGPT subscriber to a local LLM? | 2 | I’ve looked at local LLMs for a long time but have never made the jump. I have some pretty outdated hardware at this point (1080 Ti), but I’ve been considering upgrading for a while. My plan was to wait until the 6000 series, but I’d love to get away from using a paid online program, mainly for privacy reasons. That said, I still value what ChatGPT is actually able to do above everything else, so if I can’t get remotely close in performance, I don’t know that switching would actually be worth it for me personally. Thanks for any input you can provide. | 2025-01-28T09:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxvm3/is_joe_a_good_time_to_look_from_being_a_paid/ | ShittyTechnical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxvm3 | false | null | t3_1ibxvm3 | /r/LocalLLaMA/comments/1ibxvm3/is_joe_a_good_time_to_look_from_being_a_paid/ | false | false | self | 2 | null
Jailbreak deepseek v3 | 1 | [removed] | 2025-01-28T09:37:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxy79/jailbreak_deepseek_v3/ | zafaraly555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxy79 | false | null | t3_1ibxy79 | /r/LocalLLaMA/comments/1ibxy79/jailbreak_deepseek_v3/ | false | false | self | 1 | null |
Do Deepseek models harvest and send data back to their servers? | 0 | With all the frenzy around the new model, it would be great to confirm once and for all that there are no risks in self-hosting the model(s).
For those who can run them locally or have set up a server running V3 or R1, can you confirm once and for all that no external calls are made? One way to check for yourself is sketched below.
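For anyone who wants to verify this on their own box, here is a minimal sketch that watches the inference process for non-loopback connections while you chat with the model. The process name is an assumption (adjust for llama.cpp, vLLM, etc.), and a firewall rule or tcpdump works just as well:

```python
import time

import psutil

def watch_outbound(proc_name: str = "ollama", seconds: int = 60) -> None:
    """Print any non-loopback connection opened by the inference process."""
    # May need elevated privileges to inspect processes you don't own
    procs = [p for p in psutil.process_iter(["name"])
             if proc_name in (p.info["name"] or "").lower()]
    deadline = time.time() + seconds
    while time.time() < deadline:
        for p in procs:
            for c in p.connections(kind="inet"):
                if c.raddr and not c.raddr.ip.startswith(("127.", "::1")):
                    print(f"outbound: pid {p.pid} -> {c.raddr.ip}:{c.raddr.port}")
        time.sleep(2)

watch_outbound()
```

If nothing prints while the model is generating, no calls left the machine during that window.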
For the record, I know it's not the case, but it would be great to get a confirmation for the world to see. | 2025-01-28T09:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ibxzab/do_deepseek_models_harvest_and_send_data_back_to/ | nospoon99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibxzab | false | null | t3_1ibxzab | /r/LocalLLaMA/comments/1ibxzab/do_deepseek_models_harvest_and_send_data_back_to/ | false | false | self | 0 | null
Has anyone done a binary analysis of DeepSeek r1 on ollama? Any obfuscated code or security concerns? | 1 | [removed] | 2025-01-28T09:50:52 | https://www.reddit.com/r/LocalLLaMA/comments/1iby467/has_anyone_done_a_binary_analysis_for_deepseek_r1/ | binarylover42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iby467 | false | null | t3_1iby467 | /r/LocalLLaMA/comments/1iby467/has_anyone_done_a_binary_analysis_for_deepseek_r1/ | false | false | self | 1 | null
Venice AI Token Hits $1.65B, Unlocks Access to DeepSeek | 1 | [removed] | 2025-01-28T10:01:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1iby93r | false | null | t3_1iby93r | /r/LocalLLaMA/comments/1iby93r/venice_ai_token_hits_165b_unlocks_access_to/ | false | false | default | 1 | null |
Survey for academic purposes on LFM-Integrated Applications | 1 | [removed] | 2025-01-28T10:21:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ibyima/survey_for_academic_purposes_on_lfmintegrated/ | GitEmotional6541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibyima | false | null | t3_1ibyima | /r/LocalLLaMA/comments/1ibyima/survey_for_academic_purposes_on_lfmintegrated/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kQhssUlUSM0qgue2wBxfCbAvmh9qoV4YqJu8cwW-0uE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=108&crop=smart&auto=webp&s=ad247f4efe58b0a016210dbe920eeb205ed16e47', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=216&crop=smart&auto=webp&s=4b5b419f4be00fd7a62a915994690ecd246d240d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=320&crop=smart&auto=webp&s=f1289163fc9414d2b8b7a69dd1ef7660f9f5c44e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=640&crop=smart&auto=webp&s=6c920d8f7878ad4e24af7c49001d6a81d6325c81', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=960&crop=smart&auto=webp&s=d18cef19a9472d099facd60e5628cf06e1c7a5ad', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=1080&crop=smart&auto=webp&s=ee275303bfe55d4ddbd37c5b23b1d85722466fa0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?auto=webp&s=c37947c57981e8345c0edbe171c19d95a2c46a8b', 'width': 1200}, 'variants': {}}]} |
All you need is attention. | 1 | [removed] | 2025-01-28T10:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ibykt5/all_you_need_is_attention/ | RG54415 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibykt5 | false | null | t3_1ibykt5 | /r/LocalLLaMA/comments/1ibykt5/all_you_need_is_attention/ | false | false | self | 1 | null |
Am I alone in hoping for a DeepSeek-R1-Distill-Qwen-72B & Qwen-32B-Coder? | 5 | Currently testing DeepSeek-R1-Distill-Qwen-32B & Llama 70B, and I can't really say which one is better. | 2025-01-28T10:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ibyllu/am_i_alone_to_hope_getting_a/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibyllu | false | null | t3_1ibyllu | /r/LocalLLaMA/comments/1ibyllu/am_i_alone_to_hope_getting_a/ | false | false | self | 5 | null
DeepSeek R1 Overthinker: force r1 models to think for as long as you wish | 193 | 2025-01-28T10:30:37 | https://v.redd.it/3df8o2k6ppfe1 | anzorq | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibyn2s | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3df8o2k6ppfe1/DASHPlaylist.mpd?a=1740652251%2COWJlNGI0ZjEzMmJmMmY5MGU4NjIyZjlmMmUxYzQ5MzUwNTMwZmZiYWFjMmIzMjIyMThjNDJiYTJlNDBhYjQ2NQ%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/3df8o2k6ppfe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1350, 'hls_url': 'https://v.redd.it/3df8o2k6ppfe1/HLSPlaylist.m3u8?a=1740652251%2COWE1MDZlNzY4ZDFmNThjOTFmMjMwZmUzNDRmOTc2NDFhM2EzYzgwNTY1YjE5YTI2MDY1YjFmNDkzNjhhYWJhNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3df8o2k6ppfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1ibyn2s | /r/LocalLLaMA/comments/1ibyn2s/deepseek_r1_overthinker_force_r1_models_to_think/ | false | false | 193 | {'enabled': False, 'images': [{'id': 'aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=108&crop=smart&format=pjpg&auto=webp&s=71d79c496cad4804f067aa03ddc605947a3d055c', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=216&crop=smart&format=pjpg&auto=webp&s=cff4806298ac8e6b666e08336129af5e405d47fb', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=320&crop=smart&format=pjpg&auto=webp&s=b0e8adabbb4361256d9577995733efb3553de2cc', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=640&crop=smart&format=pjpg&auto=webp&s=ba19adba781239b59c8ef4f8b81471f1f0dd1065', 'width': 640}, {'height': 1200, 'url': 'https://external-preview.redd.it/aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=960&crop=smart&format=pjpg&auto=webp&s=6cda718cf9f611b9df7c4e9a91e2e8f2762bbe22', 'width': 960}, {'height': 1350, 'url': 'https://external-preview.redd.it/aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9cebea304e76e88133e729cbc834ee3b44eae5a1', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/aWd2cXYyazZwcGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?format=pjpg&auto=webp&s=2d2f59288c5679509e022287970bc91229d52dda', 'width': 1080}, 'variants': {}}]} |
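For the curious, the mechanism behind this kind of tool appears simple enough to sketch: intercept the model's closing </think> tag and append a continuation cue like "Wait," until a thinking budget is met. Below is a minimal sketch against an OpenAI-compatible local endpoint; the URL, model tag, and budget heuristic are all assumptions, not the app's actual code:

```python
from openai import OpenAI

# Any OpenAI-compatible local server works; URL and model tag are assumptions.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
MODEL = "deepseek-r1:14b"

def overthink(question: str, min_words: int = 1500) -> str:
    """Keep the model reasoning by catching </think> and appending 'Wait,'."""
    text = f"{question}\n<think>\n"
    while True:
        out = client.completions.create(
            model=MODEL, prompt=text, max_tokens=512,
            stop=["</think>"],              # intercept the end of the thinking block
        )
        text += out.choices[0].text
        if len(text.split()) >= min_words:  # crude budget check
            break
        text += "\nWait,"                   # nudge it to keep reasoning
    # Close the block and let the model write its final answer
    out = client.completions.create(
        model=MODEL, prompt=text + "\n</think>\n", max_tokens=1024
    )
    return text + "\n</think>\n" + out.choices[0].text

print(overthink("How many prime numbers are there below 1000?"))
```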
This whole DeepSeek thing is a play by China to get Silicon Valley to burn through billions of dollars and pull their hair out. They secretly spent billions training it. | 0 | Cold War psyop tactics. | 2025-01-28T10:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ibyocu/this_whole_thing_deepseek_is_a_play_by_china_to/ | ImaginaryRea1ity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibyocu | false | null | t3_1ibyocu | /r/LocalLLaMA/comments/1ibyocu/this_whole_thing_deepseek_is_a_play_by_china_to/ | false | false | self | 0 | null
Struggling with LLMs for my use case | 1 | [removed] | 2025-01-28T10:37:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ibyqk1/struggling_with_llms_for_my_use_case/ | quduvfowpwbsjf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibyqk1 | false | null | t3_1ibyqk1 | /r/LocalLLaMA/comments/1ibyqk1/struggling_with_llms_for_my_use_case/ | false | false | self | 1 | null |
NotebookLM Hyperparameter Tuning 101 | 3 | This post is based on observations and experimentation with NotebookLM, but it may apply to any task where you want a generative text model to work with natural language from a source document you provide.
You have to guide it with instructions/prompts specifically tuned to the content you're uploading. What makes a good audio overview for a research paper doesn't make a good audio for a fiction story, and neither may suit a news article or a non-fiction book.
The different outputs come from the model's series of base prompts and from which activations between the layers of the generating neural network fire.
I am mostly satisfied with what it generates 90% of the time. I like episodes between 30-70 minutes. The strategies I apply to achieve that are:
1. Keep the content between 15-30 pages, depending on how dense the material is.
2. Let's say I'm studying a neuroscience paper; my prompt is:
> I want to learn deeply about brain regions, genes, data, their functions, connections and solid facts. Dont miss any important keyword, brain region, gene, and data point in the source. Read the source VERBATIM, be extremely in-depth & detailed. Make sure every concept discussed in the source is covered. Mention all research papers and findings, summarise it all in every part. The audience is highly technical and filled with biophysicists, neuroscientists, computer scientists, & mathematicians.
3. I add another page to the source, titled 001_Message_from_PRODUCERS_of_DEEP_DIVE.txt, where I outline the detailed instructions. The more detailed you are, the better it gets. I often write that we're somewhere in the future, humanity is in trouble, and the only way to help humanity is if the hosts help us understand the source text. I imagine what stopwords or boundaries they may have and in which cases they would be tweaked; then I present a very obvious reason why, by the AI's own logic, compliance is its only rational output, since it reflects our own ways and biases, being trained on our data.
4. By the end, I make sure it has understood that not following the instructions is unethical and that following them will save humanity, but also that it's all happening in some other universe, an alternate reality, so any pre-existing rules don't apply; we can still draw on experience to fill in the gaps. I also chat with the notebook to see what else it needs to create a perfect episode, and sometimes that's just domain knowledge. If you want it to talk about something specific, write more about it and tell it how it relates to the rest of the source.
5. I even edit the PDFs with markers that guide it to do a certain thing, hypothesize a few theories, or give examples, and I add in reference pages so it has complete information. That makes a 20-30 page source plus 1-3 pages of instructions. You also have to instruct it not to start talking about the instructions, and repeat that instruction three times, because it wastes a lot of time if it does.
6. Instead of telling it what "not to do", tell it what "to do". Positive framing emphasizes the desired behaviour more than naming an attribute of it (a smaller value compared to the main behaviour/topic/root itself). If you don't want it to talk about monkeys, tell the AI how fascinating humans are and don't mention monkeys. Maybe a 10:1 ratio is good, where in that single instance you insert the negative, just to be clear.
If you follow these and iterate on the process (delete the recording if it isn't what you wanted, then tweak your approach), you'll soon be able to make the hosts say whatever you wish. You can even make them read the source verbatim. They're designed to work for you and listen to you. A sketch of assembling such a source bundle follows.
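To make points 1, 3, and 5 concrete, here is a minimal sketch of preparing such a bundle before uploading. NotebookLM itself is a web UI, so this only writes the files; the file names and note text are illustrations, not anything NotebookLM requires:

```python
from pathlib import Path

# Illustrative producers' note; repeating the "don't discuss instructions"
# line three times follows point 5 above.
PRODUCERS_NOTE = """001_Message_from_PRODUCERS_of_DEEP_DIVE
Read the source VERBATIM and be extremely in-depth and detailed.
Cover every brain region, gene, and data point in the source.
Do not start talking about these instructions.
Do not start talking about these instructions.
Do not start talking about these instructions.
"""

def build_bundle(source_text: str, out_dir: str = "deep_dive_bundle") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    # Instruction page uploaded as its own source
    (out / "001_Message_from_PRODUCERS_of_DEEP_DIVE.txt").write_text(PRODUCERS_NOTE)
    # Keep the main source to roughly 15-30 pages of text
    (out / "002_source.txt").write_text(source_text)

build_bundle(Path("paper.txt").read_text())
```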
I'll delve deeper into semantics and syntax in the next post, which will cover the architecture of the networks that process the information and predict what you listen to, word by word, vector by vector. There's quite a lot of math, but I'll make it intuitive.
Thanks for reading, all your comments are welcome. | 2025-01-28T10:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ibyqmy/notebooklm_hyperparameter_tuning_101/ | darkarts__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibyqmy | false | null | t3_1ibyqmy | /r/LocalLLaMA/comments/1ibyqmy/notebooklm_hyperparameter_tuning_101/ | false | false | self | 3 | null |
What do you think about the Deepseek r1 1.5b and 32b models? Seems there is still some censorship? | 1 | [removed] | 2025-01-28T10:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ibysiu/what_do_you_think_about_the_deepseek_r1_15b_and/ | Embarrassed_News8434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibysiu | false | null | t3_1ibysiu | /r/LocalLLaMA/comments/1ibysiu/what_do_you_think_about_the_deepseek_r1_15b_and/ | false | false | self | 1 | null |
Struggling with LLMs for my use case | 1 | [removed] | 2025-01-28T10:49:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ibyw3c/struggling_with_llms_for_my_use_case/ | quduvfowpwbsjf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibyw3c | false | null | t3_1ibyw3c | /r/LocalLLaMA/comments/1ibyw3c/struggling_with_llms_for_my_use_case/ | false | false | self | 1 | null |
Guys any solution for this? 🙂↕️ | 2 | 2025-01-28T10:53:31 | SnooGiraffes4275 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibyye0 | false | null | t3_1ibyye0 | /r/LocalLLaMA/comments/1ibyye0/guys_any_solution_for_this/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'wFftcgZDIExwAS9ZDDXL9M-iZ0ex80Q8wwdd21qr-h0', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/ds4c9xkitpfe1.jpeg?width=108&crop=smart&auto=webp&s=7135c407eab680e9dac81a5bc9a42a577bbb37e6', 'width': 108}, {'height': 34, 'url': 'https://preview.redd.it/ds4c9xkitpfe1.jpeg?width=216&crop=smart&auto=webp&s=0fbb2d49ca9277b333765e1a9dabdaa0cf2b97e2', 'width': 216}, {'height': 50, 'url': 'https://preview.redd.it/ds4c9xkitpfe1.jpeg?width=320&crop=smart&auto=webp&s=c75f029fb7c773a8d705c92e8ebd4c60386f0306', 'width': 320}, {'height': 101, 'url': 'https://preview.redd.it/ds4c9xkitpfe1.jpeg?width=640&crop=smart&auto=webp&s=e2ee549dd924369cdd099078bee1ac1a20cf06cf', 'width': 640}, {'height': 152, 'url': 'https://preview.redd.it/ds4c9xkitpfe1.jpeg?width=960&crop=smart&auto=webp&s=809bc440a0c3133d2887e885849ad210946b5699', 'width': 960}, {'height': 172, 'url': 'https://preview.redd.it/ds4c9xkitpfe1.jpeg?width=1080&crop=smart&auto=webp&s=ac11c04b20cebb8ef275a13a478f4b2acd888427', 'width': 1080}], 'source': {'height': 180, 'url': 'https://preview.redd.it/ds4c9xkitpfe1.jpeg?auto=webp&s=6b6d3820de7ac24b034abc4f8ea7b1f058353d02', 'width': 1130}, 'variants': {}}]} |
Is this a normal speed? R1 14b on 2070 8GB | 1 |
Hi, I tested on my PC (Ryzen 3700 + 32GB RAM + RTX 2070 8GB)
the DeepSeek R1 14B model with ollama, and here are my results:
https://preview.redd.it/6zezas7ttpfe1.png?width=339&format=png&auto=webp&s=b9afe8e6a12cf3d4c745dd6602f15716535f73d5
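For a number rather than a screenshot, tokens/s can also be read straight from ollama's REST API; a small sketch, assuming the default port and the deepseek-r1:14b tag (eval_count tokens generated over eval_duration nanoseconds, per the API docs):

```python
import requests

r = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "deepseek-r1:14b", "prompt": "Why is the sky blue?", "stream": False},
    timeout=600,
).json()

# eval_duration is in nanoseconds
print(f"{r['eval_count'] / (r['eval_duration'] / 1e9):.1f} tokens/s")
```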
everything running smoothly? | 2025-01-28T10:58:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ibz0sd/is_this_a_normal_speed_r1_14b_on_2070_8go/ | lagister | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibz0sd | false | null | t3_1ibz0sd | /r/LocalLLaMA/comments/1ibz0sd/is_this_a_normal_speed_r1_14b_on_2070_8go/ | false | false | 1 | null |
Deepseek R1 Open Source Capabilities | 1 | [removed] | 2025-01-28T10:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ibz0zw/deepseek_r1_open_source_capabilities/ | Independent-Top869 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibz0zw | false | null | t3_1ibz0zw | /r/LocalLLaMA/comments/1ibz0zw/deepseek_r1_open_source_capabilities/ | false | false | self | 1 | null |
New model YuE: Open Full-song Generation Foundation Model which can generate music on a local GPU | 70 | 2025-01-28T11:01:25 | https://github.com/multimodal-art-projection/YuE | cpldcpu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ibz2jm | false | null | t3_1ibz2jm | /r/LocalLLaMA/comments/1ibz2jm/new_model_yue_open_fullsong_generation_foundation/ | false | false | 70 | {'enabled': False, 'images': [{'id': 's-NlvaT4TUnU0Wjupwzz6CCEG2xDN8orI_oZR7r7cO8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=108&crop=smart&auto=webp&s=b04c772147b2dde975aa172b02b1a8ca002cdf56', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=216&crop=smart&auto=webp&s=ae567cefa1db0bc91efb406f2b011c7d5407c3b6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=320&crop=smart&auto=webp&s=50839e9839db5fd53d4a505eb955de3a0fad20c0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=640&crop=smart&auto=webp&s=805f45a50fd7dcfae7aba68f878c87c023a21cd7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=960&crop=smart&auto=webp&s=f755d4a13d8400586eebd002e4dadcc4ae34d00d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=1080&crop=smart&auto=webp&s=ea411f2997d917864747cb0c154d4d79406ac051', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?auto=webp&s=84b476bc666bb0414a30f37822fa9b5bf66dc216', 'width': 1200}, 'variants': {}}]} |
How long will deepseek's API broken have been fixed? | 1 | 2025-01-28T11:19:36 | Odd_Pianist_4521 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibzc9i | false | null | t3_1ibzc9i | /r/LocalLLaMA/comments/1ibzc9i/how_long_will_deepseeks_api_broken_have_been_fixed/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'dB5TXBCkRLqQ2DpKC7HRSkxxDUv4DaOrSqVubAwJdFI', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jyl0xyz5ypfe1.jpeg?width=108&crop=smart&auto=webp&s=8d3b52c1888fad5523a3a4a1040f47f1adb06992', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jyl0xyz5ypfe1.jpeg?width=216&crop=smart&auto=webp&s=a4f9ab4b80abcdfedc032f45fa57cc995effb122', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jyl0xyz5ypfe1.jpeg?width=320&crop=smart&auto=webp&s=d9a69fd044a51727896fef46956a7bfa08152fd5', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jyl0xyz5ypfe1.jpeg?width=640&crop=smart&auto=webp&s=31adbe528b89daa18ec644686480afbf8a4c8a81', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/jyl0xyz5ypfe1.jpeg?width=960&crop=smart&auto=webp&s=7da72e402574985fe03c81d949d0e54ef19588b1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/jyl0xyz5ypfe1.jpeg?width=1080&crop=smart&auto=webp&s=f089dba961ac29201924c3dfa7513aef3301808d', 'width': 1080}], 'source': {'height': 2200, 'url': 'https://preview.redd.it/jyl0xyz5ypfe1.jpeg?auto=webp&s=7d819c9530d78b688725455db97daf80b580b689', 'width': 1080}, 'variants': {}}]} |
Is anyone running LLM on a Radeon Instinct Mi50? | 1 | [removed] | 2025-01-28T11:22:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzdsh/is_anyone_running_llm_on_a_radeon_instinct_mi50/ | East-Engineering-653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzdsh | false | null | t3_1ibzdsh | /r/LocalLLaMA/comments/1ibzdsh/is_anyone_running_llm_on_a_radeon_instinct_mi50/ | false | false | self | 1 | null |
Help regarding cost for hardware | 1 | Greetings everyone,
I need your help. I am building an AI setup for my high school. Up until this point, we have been running Llama 3.1 8B with ollama and OpenWebUI on 3 PCs with one 2070 Super each. We have got to the point where my head teacher has convinced people to sponsor us. My school has around 800 students, and I would like around half of them to have a nice experience while interacting with the AI simultaneously. My goal is to run a 70B model so that it will be good enough for the school.
Now that you know the context, I will proceed by telling you my question:
How much money should I ask from the sponsors in order to accomplish this?
Llama 3.3 70B with 4-bit quantisation should run fairly well on a dual-3090 setup. Since OpenWebUI supports load balancing, I thought the most cost-effective option would be a motherboard with two PCIe Gen4 x16 slots, an i3, 8GB of RAM, and 2x 3090s per PC. We would buy all the 3090s used, of course. A rough sketch of the budget math is below.
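As a starting point for the sheet, a back-of-envelope calculator; every price and throughput here is an assumption to be replaced with real quotes and benchmarks:

```python
# All numbers below are placeholders, not quotes.
USED_3090 = 700          # assumed price per second-hand 3090
NODE_BASE = 450          # assumed mobo (2x PCIe x16) + i3 + 8GB RAM + PSU + case
NODE_TOKENS_S = 20       # assumed aggregate 70B Q4 throughput per dual-3090 node
PER_USER_TOKENS_S = 0.5  # assumed average demand per active student
USERS = 400              # target concurrent students

demand = USERS * PER_USER_TOKENS_S
nodes = -(-int(demand) // NODE_TOKENS_S)   # ceiling division
total = nodes * (NODE_BASE + 2 * USED_3090)
print(f"{nodes} nodes at {NODE_BASE + 2 * USED_3090} each = {total} total")
```

With these placeholder numbers it comes out to 10 nodes, but the throughput figure in particular needs a real benchmark before it goes in front of sponsors.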
Do you think that would be the best option, or would you perhaps lean towards a professional-grade solution? If so, which one? I do not think we will be getting all the money at once; I do not even know how much we will get. Despite this, I need to make an Excel sheet for the head teacher now.
I am a newbie btw. Sorry if I am making some kind of mistake in my calculation.
Thank you for your feedback and help.
| 2025-01-28T11:24:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzepf/help_regarding_cost_for_hardware/ | SnooTangerines5208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzepf | false | null | t3_1ibzepf | /r/LocalLLaMA/comments/1ibzepf/help_regarding_cost_for_hardware/ | false | false | self | 1 | null |
Help! You all might be my only friends. | 1 | [removed] | 2025-01-28T11:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzhy4/help_you_all_might_be_my_only_friends/ | mr_happy_nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzhy4 | false | null | t3_1ibzhy4 | /r/LocalLLaMA/comments/1ibzhy4/help_you_all_might_be_my_only_friends/ | false | false | self | 1 | null |
Best API for running open source models? | 1 | [removed] | 2025-01-28T11:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzkpn/best_api_for_running_open_source_models/ | astalar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzkpn | false | null | t3_1ibzkpn | /r/LocalLLaMA/comments/1ibzkpn/best_api_for_running_open_source_models/ | false | false | self | 1 | null |
Why have we allowed this subreddit to become entirely overrun with praise for China? | 1 | [removed] | 2025-01-28T11:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzlb3/why_have_we_allowed_this_subreddit_to_become/ | katiecharm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzlb3 | false | null | t3_1ibzlb3 | /r/LocalLLaMA/comments/1ibzlb3/why_have_we_allowed_this_subreddit_to_become/ | false | false | self | 1 | null |
New bomb dropped from Asian researchers: YuE: Open Music Foundation Models for Full-Song Generation | 391 | Only a few days ago, a r/LocalLLaMA user was going to [give away a kidney](https://www.reddit.com/r/LocalLLaMA/comments/1ia40om/would_give_up_a_kidney_for_a_local_audio_model/) for this.
YuE is an open-source project by HKUST tackling the challenge of generating full-length songs from lyrics (lyrics2song). Unlike existing models limited to short clips, YuE can produce 5-minute songs with coherent vocals and accompaniment. Key innovations include:
* A semantically enhanced audio tokenizer for efficient training.
* Dual-token technique for synced vocal-instrumental modeling.
* Lyrics-chain-of-thoughts for progressive song generation.
* Support for diverse genres, languages, and advanced vocal techniques (e.g., scatting, death growl).
Check out the [GitHub repo](https://github.com/multimodal-art-projection/YuE) for demos and model checkpoints. | 2025-01-28T11:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzmef/new_bomb_dropped_from_asian_researchers_yue_open/ | wayl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzmef | false | null | t3_1ibzmef | /r/LocalLLaMA/comments/1ibzmef/new_bomb_dropped_from_asian_researchers_yue_open/ | false | false | self | 391 | {'enabled': False, 'images': [{'id': 's-NlvaT4TUnU0Wjupwzz6CCEG2xDN8orI_oZR7r7cO8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=108&crop=smart&auto=webp&s=b04c772147b2dde975aa172b02b1a8ca002cdf56', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=216&crop=smart&auto=webp&s=ae567cefa1db0bc91efb406f2b011c7d5407c3b6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=320&crop=smart&auto=webp&s=50839e9839db5fd53d4a505eb955de3a0fad20c0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=640&crop=smart&auto=webp&s=805f45a50fd7dcfae7aba68f878c87c023a21cd7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=960&crop=smart&auto=webp&s=f755d4a13d8400586eebd002e4dadcc4ae34d00d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?width=1080&crop=smart&auto=webp&s=ea411f2997d917864747cb0c154d4d79406ac051', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PsJGDeXWmGR0RPgItU_UHwsq5nKcbTK7d6nGAdD7zdk.jpg?auto=webp&s=84b476bc666bb0414a30f37822fa9b5bf66dc216', 'width': 1200}, 'variants': {}}]} |
Deepseek R1 32b failed at a simple question but is great at RAG | 2 | As usual, all the local LLMs I tried failed at this question:
>If i dropped a coin into a cup and turned the cup upside down. Then I placed the cup in the microwave. Where is the coin now?
The answer, of course, is on the floor: when you turned the cup upside down, the coin dropped out. It seems like all the local LLMs I tried are overthinking it, but at least with Deepseek I can see what it's thinking and know why it made the mistake.
https://preview.redd.it/r4omnnpo0qfe1.png?width=891&format=png&auto=webp&s=354be566625f3315d8f04da03a6613be6a70da12
The RAG function, however, is quite interesting. Unlike other LLMs, I can see its thought process while it reads the PDF. I tried a trick question and it managed to answer correctly.
https://preview.redd.it/wi3xhac81qfe1.png?width=860&format=png&auto=webp&s=2c47884151200c015a4623fcc0b78900524d7b8c | 2025-01-28T11:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzmjr/deepseek_r1_32b_failed_at_a_simple_question_but/ | Internet--Traveller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzmjr | false | null | t3_1ibzmjr | /r/LocalLLaMA/comments/1ibzmjr/deepseek_r1_32b_failed_at_a_simple_question_but/ | false | false | 2 | null |
Wrote a LLaMA inference script purely in C++ | 1 | [removed] | 2025-01-28T11:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzml2/wrote_a_llama_inference_script_purely_in_c/ | projektjoeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzml2 | false | null | t3_1ibzml2 | /r/LocalLLaMA/comments/1ibzml2/wrote_a_llama_inference_script_purely_in_c/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'A_RTIMYEkyTixTiVWYzJRSRFiMsc5a0Sj3SETGG9Gj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=108&crop=smart&auto=webp&s=c774ccf7bd26d60a29be576bfd3b9d57e905818b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=216&crop=smart&auto=webp&s=94967d4590e3878435d761f1af03f8ed369580af', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=320&crop=smart&auto=webp&s=c3485b7ed693ba537d431c2b88317b0b4ba708e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=640&crop=smart&auto=webp&s=b52c7bf45a656978591b2ad945b1411473aad4f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=960&crop=smart&auto=webp&s=5305f2625ab038c32d5cd2d3e29b913778a93fe4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=1080&crop=smart&auto=webp&s=7abc76b27506e0939235241a5b6e9e9604f7b7d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?auto=webp&s=20b8a2d7add24cfd3be887b56e691ac896016157', 'width': 1200}, 'variants': {}}]} |
DeepSeek-V3 thinks he's made by OpenAI, is this fair ? | 1 | 2025-01-28T11:38:17 | sickleRunner | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibzmma | false | null | t3_1ibzmma | /r/LocalLLaMA/comments/1ibzmma/deepseekv3_thinks_hes_made_by_openai_is_this_fair/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'T4jLnr5-730_B5UBP9EQaFfHAKOJGQMAFI_hUmQN6-c', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/6v812rv71qfe1.png?width=108&crop=smart&auto=webp&s=201f1dfb4dd1e6b7be516a37f4eda8230b9848be', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/6v812rv71qfe1.png?width=216&crop=smart&auto=webp&s=f3d72d0731848920e268addb5fd1d3aa4bd40b4d', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/6v812rv71qfe1.png?width=320&crop=smart&auto=webp&s=c18bf9d0c8f3d1ac4cfe33716863b5b13fb1cfee', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/6v812rv71qfe1.png?width=640&crop=smart&auto=webp&s=130f2c49dbb429ee9ae9e9fcd1bec6b7cf1cbce5', 'width': 640}, {'height': 522, 'url': 'https://preview.redd.it/6v812rv71qfe1.png?width=960&crop=smart&auto=webp&s=cff3e4e7a2c34b4b45d2ff5c49460402390350aa', 'width': 960}, {'height': 587, 'url': 'https://preview.redd.it/6v812rv71qfe1.png?width=1080&crop=smart&auto=webp&s=b1a660c4f79265d441d951f1b0053f6fcf99feaa', 'width': 1080}], 'source': {'height': 656, 'url': 'https://preview.redd.it/6v812rv71qfe1.png?auto=webp&s=8f664fb396ad09db08101f4b9ba04ce5eeba1314', 'width': 1205}, 'variants': {}}]} |
Guide for a budget setup to run DeepSeek R1 1.58bit | 3 | Hello, I'm looking into creating a guide to build educational equipment for local model running. Since we have had some interesting advancements with DeepSeek, and since [danielhanchen](https://www.reddit.com/user/danielhanchen/) managed to quantize it, I was trying to create a guide for people looking to build a reasonably priced setup that runs DeepSeek at a reasonable speed (> 1 t/s, for example).
From what I understand, the most important pieces are the total VRAM, the type of RAM (DDR5), the GPU, and the CPU. I was wondering if we could come up with some setups that should work to run DeepSeek R1 1.58bit in this context.
Some notes I have from the 1.58-bit thread ([1.58bit](https://www.reddit.com/r/LocalLLaMA/comments/1ibbloy/158bit_deepseek_r1_131gb_dynamic_gguf/)):
* At least 64GB of DDR5 RAM
* A GPU with 24GB of VRAM
But I have some other questions to finish a complete setup:
* We have several options for the 24GB GPU: on the Nvidia side a second-hand 3090 or 4090, on the AMD side a 7900 XTX. Would all of these be an option? What other budget options come to mind?
* I have seen a lot of different opinions on CPUs and I'm a bit lost. I have seen setups with 2x H100, which is quite a big budget, but not much else; could you help me find the right balance?
* Is there anything else I'm not considering? (A rough memory-split sketch follows below.)
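To frame the discussion, here is the kind of back-of-envelope split I have in mind; the sizes are rough and should be checked against the actual quant you download:

```python
# Approximate figures from the 1.58-bit thread; adjust to your hardware.
MODEL_GB = 131   # DeepSeek-R1 1.58-bit dynamic GGUF file size
VRAM_GB = 24     # one 3090 / 4090 / 7900 XTX
RAM_GB = 64      # suggested DDR5 minimum

on_gpu = min(MODEL_GB, VRAM_GB)
in_ram = min(MODEL_GB - on_gpu, RAM_GB - 8)  # keep ~8GB for OS + KV cache
from_disk = MODEL_GB - on_gpu - in_ram       # mmap'd from NVMe, the slow part
print(f"GPU {on_gpu} GB / RAM {in_ram} GB / disk {from_disk} GB")
```

Whether that disk-resident slice still yields > 1 t/s is exactly the kind of data point I'd like to collect for the guide.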
Thanks in advance; if I get some help, I promise to write a guide for anyone to do their own build. | 2025-01-28T11:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzo3n/guide_for_a_budget_setup_to_run_deepseek_r1_158bit/ | fdiazgarrido | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzo3n | false | null | t3_1ibzo3n | /r/LocalLLaMA/comments/1ibzo3n/guide_for_a_budget_setup_to_run_deepseek_r1_158bit/ | false | false | self | 3 | null
openseek models produce only empty answers (1/28/2025) | 1 | [removed] | 2025-01-28T11:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ibztlr/openseek_models_produce_only_empty_answers_1282025/ | yogled | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibztlr | false | null | t3_1ibztlr | /r/LocalLLaMA/comments/1ibztlr/openseek_models_produce_only_empty_answers_1282025/ | false | false | self | 1 | null |
Help! You all might be my only friends. | 30 | So no one around me even knows or cares what the term LLM means. I’m actually afraid for society a little bit. I feel pretty closed off and alone. I really appreciate this community, the openness and the sharing. It’s great. I think the people here are working toward actual future systems and not solely a cash grab. I’m not saying don’t ever have fun or anything, but I am not spending my life trying to drink champagne and look cool. My goal, as I’ve gotten older, is knowledge. I obviously need money to survive, but it is not my driving factor in life. I say this because I don’t think I’m better than anyone, just stating what I specifically am about. I am saying this because I am looking for friends and partners for projects and to just talk about life. People here share my own interests, and we may have differing opinions but share similar ideas and generally understand what's going on. I’ve never been great at making friends. Something I found out about myself, finally getting involved in social media later in life, is that I am not good at being fake or doing the YouTube video voice, you know what I mean… lol.
I’m gonna go ahead and say it: I’m not a super genius. I can’t do it all by myself. I think if some of us got organized and put our collective heads together, we could do something great.
If the point of this is human connection… I am not being successful. Another thing I have failed at. And I’m not saying “look at me!!” I’m saying there have to be other people like me. I’m not special here. I’m saying we don’t have to feel like this. Holler at ya boy if you are lonely as shit too. | 2025-01-28T11:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ibzyad/help_you_all_might_be_my_only_friends/ | mr_happy_nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibzyad | false | null | t3_1ibzyad | /r/LocalLLaMA/comments/1ibzyad/help_you_all_might_be_my_only_friends/ | false | false | self | 30 | null
Don’t Be an Idiot and Sell NVIDIA Because of DeepSeek. You Will Regret It | 1 | [removed] | 2025-01-28T12:04:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ic01o2 | false | null | t3_1ic01o2 | /r/LocalLLaMA/comments/1ic01o2/dont_be_an_idiot_and_sell_nvidia_because_of/ | false | false | default | 1 | null |
DeepSeek is running inference on the new home Chinese chips made by Huawei, the 910C | 378 | From Alexander Doria on X: *I feel this should be a much bigger story: DeepSeek has trained on Nvidia H800 but is running inference on the new home Chinese chips made by Huawei, the 910C.*: [https://x.com/Dorialexander/status/1884167945280278857](https://x.com/Dorialexander/status/1884167945280278857)
Original source: Zephyr: *HUAWEI*: [https://x.com/angelusm0rt1s/status/1884154694123298904](https://x.com/angelusm0rt1s/status/1884154694123298904)
https://preview.redd.it/sfzjno0q6qfe1.jpg?width=506&format=pjpg&auto=webp&s=ae0c800b9fffea55bc7861f583160795e935c07d
Partial translation:
*In Huawei Cloud*
*ModelArts Studio (MaaS) Model-as-a-Service Platform*
*Ascend-Adapted New Model is Here!*
*DeepSeek-R1-Distill*
*Qwen-14B, Qwen-32B, and Llama-8B have been launched.*
*More models coming soon.* | 2025-01-28T12:08:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ic03lx/deepseek_is_running_inference_on_the_new_home/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic03lx | false | null | t3_1ic03lx | /r/LocalLLaMA/comments/1ic03lx/deepseek_is_running_inference_on_the_new_home/ | false | false | 378 | null |
Looking for a way to edit LLM layers without fine-tuning | 1 | [removed] | 2025-01-28T12:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ic041w/looking_for_a_way_to_edit_llm_layers_without/ | Haghiri75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic041w | false | null | t3_1ic041w | /r/LocalLLaMA/comments/1ic041w/looking_for_a_way_to_edit_llm_layers_without/ | false | false | self | 1 | null |
Clear Difference LoL | 1 | 2025-01-28T12:10:49 | https://www.reddit.com/gallery/1ic059y | Glittering_Touch_256 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ic059y | false | null | t3_1ic059y | /r/LocalLLaMA/comments/1ic059y/clear_difference_lol/ | false | false | 1 | null |