title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Local-First RAG is really interesting | 8 | I came across [MeMemo](https://github.com/mememo), a repo that implements Retrieval-Augmented Generation (RAG) entirely on the client side using WebGPU. No servers, full privacy, and faster responses—it’s a bold take on AI that keeps everything local.
It’s an interesting shift in how RAG could work, especially for scenarios where privacy and low latency are critical. Running entirely in the browser changes the game, but it also comes with challenges like limited resources and browser constraints. Still, the concept opens up possibilities for lightweight, privacy-focused applications that don’t rely on cloud infrastructure.
Worth keeping an eye on to see how it evolves and what people build with it. | 2025-01-27T11:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ib6x7u/localfirst_rag_is_really_interesting/ | Muted_Estate890 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib6x7u | false | null | t3_1ib6x7u | /r/LocalLLaMA/comments/1ib6x7u/localfirst_rag_is_really_interesting/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'uUZ1FiLMKoa-Q-kAUs5-dLyI4K8ROC8Ian7LjId4rok', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IutHq7sZjvBsKqMf55IU9zJ37iGK2d-eTYYCIx_61DY.jpg?width=108&crop=smart&auto=webp&s=90e949275d046febb2bbae572d0d37de3a728825', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IutHq7sZjvBsKqMf55IU9zJ37iGK2d-eTYYCIx_61DY.jpg?width=216&crop=smart&auto=webp&s=041ba685fe0a28b3018ad15c9f71fafd9b8d87b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IutHq7sZjvBsKqMf55IU9zJ37iGK2d-eTYYCIx_61DY.jpg?width=320&crop=smart&auto=webp&s=f8e7d1b5c4bf92ef5a5be0617c3fad01d8dbc74f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IutHq7sZjvBsKqMf55IU9zJ37iGK2d-eTYYCIx_61DY.jpg?width=640&crop=smart&auto=webp&s=572bd5a02e8d2047988bc8b42a0a73d1650a595a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IutHq7sZjvBsKqMf55IU9zJ37iGK2d-eTYYCIx_61DY.jpg?width=960&crop=smart&auto=webp&s=2a6dd114fbe8bbefc13d3d7a9f53e1113698a382', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IutHq7sZjvBsKqMf55IU9zJ37iGK2d-eTYYCIx_61DY.jpg?width=1080&crop=smart&auto=webp&s=6673860a7fcf02d8fe835888af54965eb4e64d62', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/IutHq7sZjvBsKqMf55IU9zJ37iGK2d-eTYYCIx_61DY.jpg?auto=webp&s=2cbcdea1fdb27800693bf6a32c2edd0bc6db8fe4', 'width': 1280}, 'variants': {}}]} |
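The retrieve-then-augment loop at the heart of this is small enough to sketch outside the browser. A minimal Python sketch of the idea (the `embed` function is a random-projection stand-in for a real embedding model like the one MeMemo runs in-browser via WebGPU, and the documents are made up):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (MeMemo runs one in the browser via WebGPU)."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

# Index once: one unit-norm vector per document.
docs = [
    "WebGPU exposes the GPU to browser JavaScript.",
    "RAG retrieves relevant context before generating an answer.",
    "Local-first apps keep user data on the device.",
]
index = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)  # dot product == cosine similarity for unit vectors
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# Augment: stuff the retrieved chunks into the prompt for the (local) model.
question = "How does local-first RAG work?"
prompt = "Answer using only this context:\n" + "\n".join(retrieve(question)) + f"\n\nQuestion: {question}"
print(prompt)
```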
model recommendation? | 1 | [removed] | 2025-01-27T11:52:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ib6ymx/model_recommendation/ | MrObsidian_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib6ymx | false | null | t3_1ib6ymx | /r/LocalLLaMA/comments/1ib6ymx/model_recommendation/ | false | false | self | 1 | null |
Looking to skill up on my knowledge of AI. | 3 | - How do I determine if I can run a model locally or not?
- Inferencing?
- Parameters?
- Quantization?
- Etc?
Anybody have a resource that pretty much gets you up to speed on all these fundamentals? | 2025-01-27T12:08:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ib79xv/looking_to_skill_up_on_my_knowledge_of_ai/ | suudoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib79xv | false | null | t3_1ib79xv | /r/LocalLLaMA/comments/1ib79xv/looking_to_skill_up_on_my_knowledge_of_ai/ | false | false | self | 3 | null |
Deepseek R1 w/ Project DIGITS | 1 | [removed] | 2025-01-27T12:17:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ib7gmh/deepseek_r1_w_project_digits/ | Dear_Chemistry_7769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib7gmh | false | null | t3_1ib7gmh | /r/LocalLLaMA/comments/1ib7gmh/deepseek_r1_w_project_digits/ | false | false | self | 1 | null |
How to train/finetune an LLM to help with specific scripting language and specific API? | 1 | [removed] | 2025-01-27T12:20:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ib7i86/how_to_trainfinetune_an_llm_to_help_with_specific/ | Radiant_Ice4596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib7i86 | false | null | t3_1ib7i86 | /r/LocalLLaMA/comments/1ib7i86/how_to_trainfinetune_an_llm_to_help_with_specific/ | false | false | self | 1 | null |
DeepSeek R1 w/ Project DIGITS | 1 | [removed] | 2025-01-27T12:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ib7isw/deepseek_r1_w_project_digits/ | PeruP | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib7isw | false | null | t3_1ib7isw | /r/LocalLLaMA/comments/1ib7isw/deepseek_r1_w_project_digits/ | false | false | self | 1 | null |
Outstanding Performance by Deepseek R1 | 1 | [removed] | 2025-01-27T12:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ib7izm/outstanding_performance_by_deepseek_r1/ | Agile-Box8927 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib7izm | false | null | t3_1ib7izm | /r/LocalLLaMA/comments/1ib7izm/outstanding_performance_by_deepseek_r1/ | false | false | 1 | null |
I spent the last weekend optimizing the DeepSeek V2/V3 llama.cpp implementation - PR #11446 | 148 | 2025-01-27T12:25:45 | https://github.com/ggerganov/llama.cpp/pull/11446 | fairydreaming | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ib7mg4 | false | null | t3_1ib7mg4 | /r/LocalLLaMA/comments/1ib7mg4/i_spent_the_last_weekend_optimizing_the_deepseek/ | false | false | 148 | null |
Developing DeepLab Library to Apply DeepSeek for Reasoning Tasks | 1 | [removed] | 2025-01-27T12:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ib7ng7/developing_deeplab_library_to_apply_deepseek_for/ | Different_Prune_3529 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib7ng7 | false | null | t3_1ib7ng7 | /r/LocalLLaMA/comments/1ib7ng7/developing_deeplab_library_to_apply_deepseek_for/ | false | false | self | 1 | null |
Good resources for learning more about (practical) LLMs | 3 | I'm a Software Engineer and I have basic (minimal) knowledge of LLMs. However, I would like to improve my skills. For example, I wonder:
- How much RAM is needed for a model with X parameters?
- I notice that when I prompt DeepSeek, it first starts outputting its thinking process. Why, and how does this work under the hood?
How did you gain more experience, both theoretical and practical, in ML and LLMs? I would appreciate resources that helped you. | 2025-01-27T12:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ib7try/good_resources_for_learning_more_about_practical/ | BukHunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib7try | false | null | t3_1ib7try | /r/LocalLLaMA/comments/1ib7try/good_resources_for_learning_more_about_practical/ | false | false | self | 3 | null |
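On the RAM question, a workable back-of-the-envelope rule is parameter count × bytes per weight, plus headroom for the KV cache and runtime. A rough sketch (the 1.2× overhead factor is a loose assumption, not a measured figure):

```python
def est_memory_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Weights = params * quant width; +20% (assumed) for KV cache and runtime buffers."""
    weights_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return weights_gb * overhead

for bits, name in [(16, "FP16"), (8, "Q8"), (4, "Q4")]:
    print(f"7B model at {name}: ~{est_memory_gb(7, bits):.1f} GB")
# FP16 ~16.8 GB, Q8 ~8.4 GB, Q4 ~4.2 GB -- hence 7B Q4 models fitting on 6-8 GB GPUs
```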
Qwen2.5 14B-1M Impressions | 75 | I'm running the 14B-1M-Q8 GGUF + a 64K Q8 KV cache, and I still have 2 GB of VRAM left on my 24 GB card.
I tested it using long podcast scripts from WAN Show and Waveform. It captures most of the details (definitely not all of them), but that's pretty good for a 14B-Q8 model, especially since I'm also using Q8-quantized context.
I also compared it with the original 14B-32K model, using Waveform scripts that fit in a 32K context. I found the 1M version always generates longer summarisations than the 32K one, and most of the time they're better structured and more detailed.
In conclusion, this model is definitely worth downloading, and it has replaced 32B-IQ4-XS as my main model for text summarisation. | 2025-01-27T12:41:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ib7y8e/qwen25_14b1m_impressions/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib7y8e | false | null | t3_1ib7y8e | /r/LocalLLaMA/comments/1ib7y8e/qwen25_14b1m_impressions/ | false | false | self | 75 | null |
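For context on why the Q8 KV cache is what makes this fit, here's the usual KV-cache size formula, plugging in Qwen2.5-14B's config as I understand it (48 layers, 8 KV heads under GQA, head dim 128 -- treat those numbers as assumptions):

```python
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int, ctx_len: int, bytes_per_elem: int) -> float:
    # 2x for keys and values; one vector per layer, per KV head, per position
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

args = (48, 8, 128, 64 * 1024)  # assumed Qwen2.5-14B config, 64K context
print(f"Q8 KV cache:   {kv_cache_gib(*args, 1):.1f} GiB")  # ~6.0 GiB
print(f"FP16 KV cache: {kv_cache_gib(*args, 2):.1f} GiB")  # ~12.0 GiB
```

That halving is roughly what lets a ~15.7 GB Q8 model plus 64K of context squeeze into 24 GB with a little to spare.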
Local School Build. | 8 | Hi, father of 6 here in a small rural town. I'm friends with the local computer science teacher, and I said we could build a local system running DeepSeek. I'm looking for the "best" setup for this use case; a full setup tutorial would be ideal, since adults will be supporting the kids in setting it up. Thanks! | 2025-01-27T12:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ib80m9/local_school_build/ | Bitter_Implement6906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib80m9 | false | null | t3_1ib80m9 | /r/LocalLLaMA/comments/1ib80m9/local_school_build/ | false | false | self | 8 | null |
Comics to video? | 2 | So I had an idea the other day, and I'm not sure if anybody else has talked about this, but... We have AI that can generate unique voices, we have AI that can generate videos, and we have AI that can understand text and create scenes from it. So what if we were able to take a comic book or comic strip and turn it into a full-fledged short movie or clip? I think it would be amazing to take something like a Dilbert comic strip and turn it into a short five-minute comedy video with full voice acting and everything. Is this even possible? And does anybody else think a program like this would be amazing? | 2025-01-27T12:45:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ib81pc/comics_to_video/ | Nervous-Computer-885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib81pc | false | null | t3_1ib81pc | /r/LocalLLaMA/comments/1ib81pc/comics_to_video/ | false | false | self | 2 | null |
looking for a web based ui to run and interact with deepseek | 1 | [removed] | 2025-01-27T12:47:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ib82zk/looking_for_a_web_based_ui_to_run_and_interact/ | _indistinct_chatter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib82zk | false | null | t3_1ib82zk | /r/LocalLLaMA/comments/1ib82zk/looking_for_a_web_based_ui_to_run_and_interact/ | false | false | self | 1 | null |
Deepseek Distill of Mistral Large? | 0 | I've noticed the largest of the distilled DeepSeek models is a 70B Llama model. I'm wondering if there is a reason to stop there, or if it would be possible to go further and distill Mistral Large. Ideally we wouldn't all be independently trying to do this, since I'm assuming it'll be costly. So I was just wondering if anyone is spearheading this; I wouldn't mind contributing to it. | 2025-01-27T12:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ib83kz/deepseek_distill_of_mistral_large/ | Judtoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib83kz | false | null | t3_1ib83kz | /r/LocalLLaMA/comments/1ib83kz/deepseek_distill_of_mistral_large/ | false | false | self | 0 | null |
DeepSeek gets a score of 50/100 on the cartoon benchmark test. | 0 | 2025-01-27T12:54:04 | RazzmatazzReal4129 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ib884j | false | null | t3_1ib884j | /r/LocalLLaMA/comments/1ib884j/deepseek_gets_a_score_of_50100_on_the_cartoon/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'Z43UgfQVYha0vYU30bvY5fONDbOOLi1k4jxMrS0ZDw8', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/e8mknhqw9jfe1.png?width=108&crop=smart&auto=webp&s=0ef265f349319497bc3ca494eba0760dd61b5df5', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/e8mknhqw9jfe1.png?width=216&crop=smart&auto=webp&s=3402276fd925771fb8603f10df07294cf3355aa3', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/e8mknhqw9jfe1.png?width=320&crop=smart&auto=webp&s=d292cf8744c4459e8e57713f98e143389c5ee8d3', 'width': 320}, {'height': 474, 'url': 'https://preview.redd.it/e8mknhqw9jfe1.png?width=640&crop=smart&auto=webp&s=0b0f698d4049d88ff2468cf80431d9f561217272', 'width': 640}], 'source': {'height': 634, 'url': 'https://preview.redd.it/e8mknhqw9jfe1.png?auto=webp&s=a20097c6b80ba03a1a578f4f3f6bbdec9d787d5d', 'width': 855}, 'variants': {}}]} |
LLM wouldn't give me the it's training data date - I got around it with this | 0 | 2025-01-27T12:59:12 | https://www.reddit.com/gallery/1ib8c3x | cave_guard | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ib8c3x | false | null | t3_1ib8c3x | /r/LocalLLaMA/comments/1ib8c3x/llm_wouldnt_give_me_the_its_training_data_date_i/ | false | false | 0 | null |
Do you think gbt4all will add deepseek in their model? Why or why not? | 1 | [removed] | 2025-01-27T13:03:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ib8fnr | false | null | t3_1ib8fnr | /r/LocalLLaMA/comments/1ib8fnr/do_you_think_gbt4all_will_add_deepseek_in_their/ | false | false | default | 1 | null |
Vectorizing code | 2 | I've been playing around with vectorizing code for a code search tool. I've tried some different approaches around chunking and whatnot, but it seems to me like performance still isn't great. What's everyone using to do this?
Kind of wondering if I should have a small model extract what is important from each file and vectorize that?
Fishing for ideas here to play around with.
Thanks! | 2025-01-27T13:03:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8fos/vectorizing_code/ | thepetek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8fos | false | null | t3_1ib8fos | /r/LocalLLaMA/comments/1ib8fos/vectorizing_code/ | false | false | self | 2 | null |
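One concrete thing to try before adding an LLM summarization pass: chunk along syntactic boundaries (one function per chunk) rather than fixed-size windows, so each vector describes a coherent unit. A sketch using Python's `ast` module and `sentence-transformers` (the model name and file path are just examples; code-specific embedding models may well do better):

```python
import ast
from sentence_transformers import SentenceTransformer, util

def chunk_functions(source: str) -> list[str]:
    """One chunk per function/method, so each embedding covers a coherent unit."""
    tree = ast.parse(source)
    return [ast.get_source_segment(source, node)
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

chunks = chunk_functions(open("my_module.py").read())  # example path

model = SentenceTransformer("all-MiniLM-L6-v2")  # example general-purpose embedder
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

query_vec = model.encode("retry failed HTTP requests with backoff", normalize_embeddings=True)
scores = util.cos_sim(query_vec, chunk_vecs)[0]
print(chunks[int(scores.argmax())])
```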
DeepSeek R1 vs V3 | 7 | Can someone explain the difference between the DeepSeek R1 and V3 models? I don't understand what the differences are or when to use which one.
Starting to learn more about LLMs so I'm not well versed in this area currently | 2025-01-27T13:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8g7e/deepseek_r1_vs_v3/ | iseeyouboo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8g7e | false | null | t3_1ib8g7e | /r/LocalLLaMA/comments/1ib8g7e/deepseek_r1_vs_v3/ | false | false | self | 7 | null |
Best Agents framework? | 3 | What do you think is the best agent framework currently? This is the list of options that come to my mind, ordered by number of GitHub stars:
- AutoGPT
- MetaGPT
- AutoGen
- CrewAI
- Phidata
- Composio
- LangGraph
- Smolagents
- Pydantic AI
- Langroid
Recommendations? Experience with the tools?
| 2025-01-27T13:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8j2r/best_agents_framework/ | xdoso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8j2r | false | null | t3_1ib8j2r | /r/LocalLLaMA/comments/1ib8j2r/best_agents_framework/ | false | false | self | 3 | null |
Open Source Offline Note Taking & Summary App for Mac | 6 | 2025-01-27T13:08:18 | https://v.redd.it/uv7666lbcjfe1 | imsinghaniya | /r/LocalLLaMA/comments/1ib8jt9/open_source_offline_note_taking_summary_app_for/ | 1970-01-01T00:00:00 | 0 | {} | 1ib8jt9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uv7666lbcjfe1/DASHPlaylist.mpd?a=1740704906%2CZTgwYTUyN2VhMzZmMTI5ZTNlMDg4MjBhMzg1NzcwNzJjYTYwYjA4ODk5NmVjZDgwYzE0YmQzY2RiZGU3OTgwYw%3D%3D&v=1&f=sd', 'duration': 242, 'fallback_url': 'https://v.redd.it/uv7666lbcjfe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/uv7666lbcjfe1/HLSPlaylist.m3u8?a=1740704906%2CNDU0MWI4Y2Q0ZjgwY2I3ZDgxZmFmNWRlZjA3NWM4ZWI3MmU2ZDA2NjNhYTEwMjUxYTE4MjEyZDMzZDRmYWM3NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uv7666lbcjfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1620}} | t3_1ib8jt9 | /r/LocalLLaMA/comments/1ib8jt9/open_source_offline_note_taking_summary_app_for/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-.png?width=108&crop=smart&format=pjpg&auto=webp&s=409bb1e3badfeb21c38a6c99a3a7c2acc002b1b0', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-.png?width=216&crop=smart&format=pjpg&auto=webp&s=7f6c032486b798e358b0742f53a91afe171bf4e7', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-.png?width=320&crop=smart&format=pjpg&auto=webp&s=f3291af35a54f18334ece72ec98ce882480849ad', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-.png?width=640&crop=smart&format=pjpg&auto=webp&s=e1012b89470acd0c811c8ce46331e3eaf1e66dc7', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-.png?width=960&crop=smart&format=pjpg&auto=webp&s=9762c29ce2b01aa26d0b69894d4abb93d82a519b', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=55999d6ee1c9bb529a29454b750dd9c0ba99caa8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/azY5azg1aWJjamZlMduhM8nsb9iMumSCD5uknqJUxYb3vPr3CS-m6M1MTcQ-.png?format=pjpg&auto=webp&s=6d468bfdb47b94fcc6be879fae13e2985b76429f', 'width': 1620}, 'variants': {}}]} |
LiveIdeaBench Results: DeepSeek R1 vs QWQ - Unexpected Findings | 6 | Just dropped some fresh test results on LiveIdeaBench ([https://liveideabench.com](https://liveideabench.com)) after evaluating several new models - DeepSeek V3, R1, Phi-4, and Minimax-01.
Ngl, I had high hopes for R1 to crack the top 3, but the results were... interesting. Despite its solid performance, it still couldn't dethrone QWQ in scientific ideation capability.
Here's the tea:
R1's strengths:
* Strong originality
* High feasibility scores
The catch:
* Lower fluency scores (ability to perform consistently across different domains)
My theory? It probably has something to do with R1's extensive RL training, which seems to have made it more of a specialist in reasoning, math, and coding rather than a jack-of-all-trades.
Thought this might be useful data for you!
| 2025-01-27T13:11:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8mdb/liveideabench_results_deepseek_r1_vs_qwq/ | realJoeTrump | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8mdb | false | null | t3_1ib8mdb | /r/LocalLLaMA/comments/1ib8mdb/liveideabench_results_deepseek_r1_vs_qwq/ | false | false | 6 | null |
Needing help with implementing speech-to-text | 3 | Hello people,
I have chronic pain in both my hands and wrists and have been trying to find a real-time speech-to-text solution, as I can't use a keyboard. I've lost my job due to the pain and can't afford paid solutions like Dragon, which cost >$600. I'm currently using Windows 11 Voice Access, which is not great at all.
I've stumbled across whisper.cpp and faster-whisper. I have tried setting up both with no success, despite using LLMs/YouTube to help debug. I'm on Windows 11 with 16GB RAM and an Intel i7 CPU. Got no GPU.
I've struggled to get any examples working or any apps that utilize these libraries. I'm just looking for some help if anyone is willing to offer some.
Thank you. | 2025-01-27T13:13:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8o56/needing_help_with_implementing_speechtotext/ | Zodianz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8o56 | false | null | t3_1ib8o56 | /r/LocalLLaMA/comments/1ib8o56/needing_help_with_implementing_speechtotext/ | false | false | self | 3 | null |
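If it helps, faster-whisper's CPU path is probably the shortest route on that hardware (`pip install faster-whisper`). A minimal sketch of its documented API; the model size and file name are just examples:

```python
from faster_whisper import WhisperModel

# int8 compute keeps memory use low -- the usual choice for CPU-only machines
model = WhisperModel("small.en", device="cpu", compute_type="int8")

segments, info = model.transcribe("recording.wav", vad_filter=True)
for seg in segments:
    print(f"[{seg.start:6.2f}s -> {seg.end:6.2f}s] {seg.text}")
```

For true real-time dictation you'd feed it short chunks from a microphone stream in a loop; that's more plumbing, but the same API.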
Quick Deepseek Guide | 11 | * the more context you give, the better the answer
* add 'Answer in the context of everything we have discussed.' for memory continuity throughout a chat
* rename important chats in CAPS to find them easily later
* DeepSeek can't open links
* the chat length limit is 39 messages - I hit it today
* citations are fake, but the info is not: DeepSeek just can't remember the sources.
* ask it to ask you questions if something is unclear
* it relies on sources in the language you used. For example, 90% of medical literature is in English. If you use a non-English language, add 'Think in English to utilize English literature, sources, and data, but respond in [your language].' | 2025-01-27T13:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8on2/quick_deepseek_guide/ | ComprehensiveQuail77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8on2 | false | null | t3_1ib8on2 | /r/LocalLLaMA/comments/1ib8on2/quick_deepseek_guide/ | false | false | self | 11 | null |
Could we agree ot call or not to call these models "open source" but "free weights"? | 1 | [removed] | 2025-01-27T13:15:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8pn9/could_we_agree_ot_call_or_not_to_call_these/ | MusicTait | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8pn9 | false | null | t3_1ib8pn9 | /r/LocalLLaMA/comments/1ib8pn9/could_we_agree_ot_call_or_not_to_call_these/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KygxgHN27Fih4LpZhq_76UXoSVJf1Ia5v-vS5fVnVp4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/gpVdUyhOMYL7V2S0HhHPhWgf_3qftPVTolzjuloBJqU.jpg?width=108&crop=smart&auto=webp&s=b5d44ea426f68219e9dc36b563b8dd7a331f4530', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/gpVdUyhOMYL7V2S0HhHPhWgf_3qftPVTolzjuloBJqU.jpg?width=216&crop=smart&auto=webp&s=00eafc0f271941477217bdea29797576afe93ee9', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/gpVdUyhOMYL7V2S0HhHPhWgf_3qftPVTolzjuloBJqU.jpg?width=320&crop=smart&auto=webp&s=047759410c54124f58bcc0416d9c7db3a7b2d2ea', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/gpVdUyhOMYL7V2S0HhHPhWgf_3qftPVTolzjuloBJqU.jpg?width=640&crop=smart&auto=webp&s=637851a83d4c2e4b6cfb29037e0cceb23fed4108', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/gpVdUyhOMYL7V2S0HhHPhWgf_3qftPVTolzjuloBJqU.jpg?width=960&crop=smart&auto=webp&s=c6b07690e308038d69ae404fa4ac5f04408bab86', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/gpVdUyhOMYL7V2S0HhHPhWgf_3qftPVTolzjuloBJqU.jpg?width=1080&crop=smart&auto=webp&s=fc442a577fcff00b2e9324db381592cb2daf7c56', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/gpVdUyhOMYL7V2S0HhHPhWgf_3qftPVTolzjuloBJqU.jpg?auto=webp&s=e901d2c7de21f9916438e6d81ca25d8340bb57a5', 'width': 1200}, 'variants': {}}]} |
Why is R1 more expensive than V3? | 8 | The models have the same architecture and as I understand it, they charge for all output tokens. Therefore, they should cost the same to run per token right? Yet V3 is $0.28/mil and R1 is $2.19/mil. | 2025-01-27T13:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8tjh/why_is_r1_more_expensive_than_v3/ | a_slay_nub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8tjh | false | null | t3_1ib8tjh | /r/LocalLLaMA/comments/1ib8tjh/why_is_r1_more_expensive_than_v3/ | false | false | self | 8 | null |
Nvidia pre-market down 12% due to DeepSeek | 140 | [Nvidia pre-market down 12% due to DeepSeek](https://www.cnbc.com/2025/01/27/nvidia-falls-10percent-in-premarket-trading-as-chinas-deepseek-triggers-global-tech-sell-off.html) | 2025-01-27T13:23:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8wfw/nvidia_premarket_down_12_due_deepseek/ | puffyarizona | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8wfw | false | null | t3_1ib8wfw | /r/LocalLLaMA/comments/1ib8wfw/nvidia_premarket_down_12_due_deepseek/ | false | false | self | 140 | {'enabled': False, 'images': [{'id': 'svSryB2iK8d1UOSbIYNASv_i28YRB_7FKlG4gd2R_O4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/sj720AIdjsIFEypF6OXNev3bqzuZTRl9Yip4vRM2K-U.jpg?width=108&crop=smart&auto=webp&s=2a037f0085c6cf4effc4ddc74236c26da6d6f0ea', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/sj720AIdjsIFEypF6OXNev3bqzuZTRl9Yip4vRM2K-U.jpg?width=216&crop=smart&auto=webp&s=2065a8467e4b37d0771ee5521e04a3e051f90a4b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/sj720AIdjsIFEypF6OXNev3bqzuZTRl9Yip4vRM2K-U.jpg?width=320&crop=smart&auto=webp&s=63d5d7d8969bf611e554d1f1878c91e82f2c73e4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/sj720AIdjsIFEypF6OXNev3bqzuZTRl9Yip4vRM2K-U.jpg?width=640&crop=smart&auto=webp&s=6a0132a4feeada5aefdf06788022851938f8c240', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/sj720AIdjsIFEypF6OXNev3bqzuZTRl9Yip4vRM2K-U.jpg?width=960&crop=smart&auto=webp&s=d5c4f21261b7586b65c186b2f2a842ae5716441a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/sj720AIdjsIFEypF6OXNev3bqzuZTRl9Yip4vRM2K-U.jpg?width=1080&crop=smart&auto=webp&s=85b29cad1522433e83e74fd5f80d38fa0fb0f6ec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/sj720AIdjsIFEypF6OXNev3bqzuZTRl9Yip4vRM2K-U.jpg?auto=webp&s=5f9c3195ac75e3c30f15e710d795c94eaa7fd15e', 'width': 1920}, 'variants': {}}]}
How much would it cost to have someone finetune R1? | 8 | I would like to pay someone to finetune R1 on a specific topic. How much would this cost (also including hosting costs)? | 2025-01-27T13:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ib8xwu/how_much_would_it_cost_to_have_someone_finetune_r1/ | PublicQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib8xwu | false | null | t3_1ib8xwu | /r/LocalLLaMA/comments/1ib8xwu/how_much_would_it_cost_to_have_someone_finetune_r1/ | false | false | self | 8 | null |
Does anyone know how much the training costs for DeepSeek R1 were? | 1 | [removed] | 2025-01-27T13:31:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ib93qd/does_anyone_know_how_much_the_training_costs_for/ | ArturTMvelli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib93qd | false | null | t3_1ib93qd | /r/LocalLLaMA/comments/1ib93qd/does_anyone_know_how_much_the_training_costs_for/ | false | false | self | 1 | null |
I have access to a server with 2xL40s 48GB. What should we test? | 1 | We finished the project, but I have access to the server for the remainder of the week. I've done some performance testing on several models, DeepSeek included of course. But if anyone has something they want tested, hit me up and I'll give it a shot. | 2025-01-27T13:32:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ib949a/i_have_access_to_a_server_with_2xl40s_48gb_what/ | Dnorgaard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib949a | false | null | t3_1ib949a | /r/LocalLLaMA/comments/1ib949a/i_have_access_to_a_server_with_2xl40s_48gb_what/ | false | false | self | 1 | null |
CodeGate support now available in Aider. | 1 | [removed] | 2025-01-27T13:38:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ib997d/codegate_support_now_available_in_aider/ | zero_proof_fork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib997d | false | null | t3_1ib997d | /r/LocalLLaMA/comments/1ib997d/codegate_support_now_available_in_aider/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GlcDGBZctjJ5qZRWlp8RRTBk85FyJa8nM1StI8emDBk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/DK9Fr1BwSbMcAS6ffN21aeutPuelxdunow3rJ7YKLZU.jpg?width=108&crop=smart&auto=webp&s=d98ba00466ed16ac41c73f3785b7b197c05513ed', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/DK9Fr1BwSbMcAS6ffN21aeutPuelxdunow3rJ7YKLZU.jpg?width=216&crop=smart&auto=webp&s=ac28b33b774e324fb44086b4eac555849c43a0da', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/DK9Fr1BwSbMcAS6ffN21aeutPuelxdunow3rJ7YKLZU.jpg?width=320&crop=smart&auto=webp&s=284125abcaf0303ffdfc266a181f5e1c538b5a17', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/DK9Fr1BwSbMcAS6ffN21aeutPuelxdunow3rJ7YKLZU.jpg?auto=webp&s=5144f2b3e25209f7acbf99268480ebd0d5a095bf', 'width': 480}, 'variants': {}}]} |
Same size as the old gpt2 model. Insane. | 196 | 2025-01-27T13:38:32 | grey-seagull | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ib99ei | false | null | t3_1ib99ei | /r/LocalLLaMA/comments/1ib99ei/same_size_as_the_old_gpt2_model_insane/ | false | false | 196 | {'enabled': True, 'images': [{'id': 'vUVcg4z8cT7zNCZAMmvzvGb_Y1Hr-jWmwzU2-aOq6zw', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/cotf3wm1ijfe1.jpeg?width=108&crop=smart&auto=webp&s=b94d23f2d83dc3ed52acebb595cb69c2c03ce641', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/cotf3wm1ijfe1.jpeg?width=216&crop=smart&auto=webp&s=7c6119eccbfca6fcee76bc79d4c58e362a6e54aa', 'width': 216}, {'height': 318, 'url': 'https://preview.redd.it/cotf3wm1ijfe1.jpeg?width=320&crop=smart&auto=webp&s=78a6debefce134f0462fba2e976b7a9268aa41e5', 'width': 320}, {'height': 637, 'url': 'https://preview.redd.it/cotf3wm1ijfe1.jpeg?width=640&crop=smart&auto=webp&s=ba2f6e3e7a8a1708887fa7db558a92d1d0c83b4e', 'width': 640}, {'height': 956, 'url': 'https://preview.redd.it/cotf3wm1ijfe1.jpeg?width=960&crop=smart&auto=webp&s=c73781c9c21e99426f2a5a4566d4927930f231a5', 'width': 960}], 'source': {'height': 1075, 'url': 'https://preview.redd.it/cotf3wm1ijfe1.jpeg?auto=webp&s=c0401ea1c49c954e944919480e498dbe26c2af08', 'width': 1079}, 'variants': {}}]} |
Is Anyone Else Having Problems with DeepSeek Today? | 31 | The online model stopped working today, at least for me. Anyone else having this issue? | 2025-01-27T13:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9fry/is_anyone_else_having_problems_with_deepseek_today/ | Electronic-Metal2391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9fry | false | null | t3_1ib9fry | /r/LocalLLaMA/comments/1ib9fry/is_anyone_else_having_problems_with_deepseek_today/ | false | false | self | 31 | null |
DeepSeek Chat Started to Slow Down after all the News and Hype | 60 | Issues logging in, it takes forever, or it says login failed.
Even after logging in successfully, chat history doesn't load, opening a chat takes longer, and sometimes nothing comes up in the chat at all.
It gives a clear message in the chat that traffic is heavy and to retry later.
So it seems DeepSeek chat's infrastructure has hit its limit. | 2025-01-27T13:49:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9hvd/deepseek_chat_started_to_slow_down_after_all_the/ | lake_trade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9hvd | false | null | t3_1ib9hvd | /r/LocalLLaMA/comments/1ib9hvd/deepseek_chat_started_to_slow_down_after_all_the/ | false | false | self | 60 | null |
DeepSeek: Chinese AI chatbot sparks market turmoil for rivals | 1 | 2025-01-27T13:49:36 | https://www.bbc.com/news/articles/c0qw7z2v1pgo | Separate_Paper_1412 | bbc.com | 1970-01-01T00:00:00 | 0 | {} | 1ib9hx7 | false | null | t3_1ib9hx7 | /r/LocalLLaMA/comments/1ib9hx7/deepseek_chinese_ai_chatbot_sparks_market_turmoil/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'wqlR59EVf9P13k6KgBVbkCvLD86JGMT9WBoD52rzYQg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iVvYnMBUVXNJufniZ7fCCmIPW83bap6j2EuRCS98-A4.jpg?width=108&crop=smart&auto=webp&s=6f867c5955d449491f019defe5f134a5c744d488', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/iVvYnMBUVXNJufniZ7fCCmIPW83bap6j2EuRCS98-A4.jpg?width=216&crop=smart&auto=webp&s=836bf0003cc1b149ffa25575c641cea61331946a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/iVvYnMBUVXNJufniZ7fCCmIPW83bap6j2EuRCS98-A4.jpg?width=320&crop=smart&auto=webp&s=cc586e756b21fc2f141d8a7060f96213c81b1ca4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/iVvYnMBUVXNJufniZ7fCCmIPW83bap6j2EuRCS98-A4.jpg?width=640&crop=smart&auto=webp&s=8dd96c50501c50eef3b28ecd6979e1a0249461a6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/iVvYnMBUVXNJufniZ7fCCmIPW83bap6j2EuRCS98-A4.jpg?width=960&crop=smart&auto=webp&s=f157be68be68054e4f1ad399cd59c2f95f655a87', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/iVvYnMBUVXNJufniZ7fCCmIPW83bap6j2EuRCS98-A4.jpg?auto=webp&s=f5a2e2d938d2889aa81fce95ccdcf017c6b04d7e', 'width': 1024}, 'variants': {}}]} |
Can I run local gpt on intel Iris Graphics | 1 | [removed] | 2025-01-27T13:54:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9le9/can_i_run_local_gpt_on_intel_iris_graphics/ | MuscleStriking9756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9le9 | false | null | t3_1ib9le9 | /r/LocalLLaMA/comments/1ib9le9/can_i_run_local_gpt_on_intel_iris_graphics/ | false | false | self | 1 | null |
China will win | 0 | DeepSeek is a cold shower for America. Silicon Valley felt the blow; the wave of pessimism is great.
I don't have much faith in people like Sam Altman, Dario Amodei, or Sundar Pichai. Elon Musk and Zuckerberg are more concerned with licking Trump's boots than with the race. The scenario is desperate. | 2025-01-27T13:55:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9m19/china_will_win/ | Objective_Lab_3182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9m19 | false | null | t3_1ib9m19 | /r/LocalLLaMA/comments/1ib9m19/china_will_win/ | false | false | self | 0 | null |
5090 Prices Sweden | 6 | [https://www.proshop.se/?s=5090&o=2052](https://www.proshop.se/?s=5090&o=2052)
I converted the prices to euros:
|GIGABYTE AORUS EXTREME WATERFORCE|GIGABYTE AORUS EXTREME WATERFORCE WB|GIGABYTE AORUS MASTER ICE|GIGABYTE AORUS MASTER|GIGABYTE WINDFORCE 3 OC|ASUS GEFORCE RTX 5090 TUF|
|:-|:-|:-|:-|:-|:-|
|3374.1|3284.1|3104.1|3059.1|2789.1|2519.91| | 2025-01-27T13:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9mza/5090_prices_sweden/ | Dry-Bunch-7448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9mza | false | null | t3_1ib9mza | /r/LocalLLaMA/comments/1ib9mza/5090_prices_sweden/ | false | false | self | 6 | null |
Top GPUS on the market for AI? | 1 | [removed] | 2025-01-27T13:56:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9n1l/top_gpus_on_the_market_for_ai/ | Flkhuo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9n1l | false | null | t3_1ib9n1l | /r/LocalLLaMA/comments/1ib9n1l/top_gpus_on_the_market_for_ai/ | false | false | self | 1 | null |
Top GPUs on the market for AI? | 0 | What is the top GPU setup that can run **the largest DeepSeek V3 model (671B parameters)**?
I want a cost-efficient setup without sacrificing reliability. I know it will be $100-200k+.
I need to run a smart model locally, without worrying that my data is being used, therefore I can't use external APIs. | 2025-01-27T13:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9nz5/top_gpus_on_the_market_for_ai/ | Flkhuo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9nz5 | false | null | t3_1ib9nz5 | /r/LocalLLaMA/comments/1ib9nz5/top_gpus_on_the_market_for_ai/ | false | false | self | 0 | null |
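A rough sizing sketch to anchor that budget, assuming the 671B parameter count served in native FP8; the 1.25× overhead for KV cache and activations is a loose assumption:

```python
import math

def gpus_needed(params_billions: float, bytes_per_param: float, gpu_gb: int, overhead: float = 1.25) -> int:
    weights_gb = params_billions * bytes_per_param  # 671B at FP8 (1 byte/param) ~= 671 GB
    return math.ceil(weights_gb * overhead / gpu_gb)

for name, gb in [("H100 80GB", 80), ("H200 141GB", 141)]:
    print(f"{name}: ~{gpus_needed(671, 1, gb)} GPUs")
# -> ~11x H100 (i.e. two 8-GPU nodes in practice) or ~6x H200
```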
Truly async Ollama assistant | 1 | Is there a project that enables people to use a local Ollama instance for truly async messaging?
**What do I mean by this ...**
Email might be a horrible format for this, but I like its async nature for this example, because everyone should understand it:
* User gets an incoming email, maybe with an attachment.
* User forwards to an own mailbox like "assistant@..." and goes like *"Create a todo list out of the mentioned tasks"*, etc.
* An assistant process running locally at home finds the email, reads the user email as prompt and the email/attachment as context
* The assistant processes the task (which might take a while) and replies to the email, directly back to the user
**Why would you want to do that?**
* using a medium like email enables communication with your self-hosted (or cheaply rented) Ollama instance without VPNs or opening ports.
* this could be done from anywhere and any machine at any time. Just found something I need to get processed for me? Send it to my assistant via email and wait a minute or two ...
Due to the lack of streaming and a full chat UI, this would not be like chatting as we're used to. It would feel more like forwarding mails to an human assistant sitting at home waiting for your mails to answer. | 2025-01-27T14:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ib9vpt/truly_async_ollama_assistant/ | waescher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ib9vpt | false | null | t3_1ib9vpt | /r/LocalLLaMA/comments/1ib9vpt/truly_async_ollama_assistant/ | false | false | self | 1 | null |
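Nothing off the shelf comes to mind, but the loop is small. A rough sketch against Ollama's standard REST endpoint and Python's stdlib mail modules (hosts, addresses, credentials, and the model name are placeholders):

```python
import email, imaplib, smtplib, time
from email.message import EmailMessage
from email.policy import default
import requests

IMAP_HOST, SMTP_HOST = "imap.example.com", "smtp.example.com"  # placeholders
USER, PASSWORD = "assistant@example.com", "app-password"       # placeholders

def ask_ollama(prompt: str, model: str = "llama3.1") -> str:
    """One non-streaming completion from the local Ollama instance."""
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False},
                      timeout=600)  # "might take a while" is fine here
    return r.json()["response"]

while True:
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, ids = imap.search(None, "UNSEEN")
        for msg_id in ids[0].split():
            _, data = imap.fetch(msg_id, "(RFC822)")
            msg = email.message_from_bytes(data[0][1], policy=default)
            body = msg.get_body(preferencelist=("plain",))
            task = body.get_content() if body else ""
            reply = EmailMessage()
            reply["From"], reply["To"] = USER, msg["From"]
            reply["Subject"] = "Re: " + (msg["Subject"] or "")
            reply.set_content(ask_ollama(task))
            with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
                smtp.login(USER, PASSWORD)
                smtp.send_message(reply)
    time.sleep(60)  # simple polling; IMAP IDLE would make it feel more immediate
```

Attachments would need an extra extraction step, but the shape of the thing is just: poll, prompt, reply.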
Deepseek API docs require you to install OpenAi sdk | 0 | 2025-01-27T14:11:58 | ParadiseMaker69 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ib9yw6 | false | null | t3_1ib9yw6 | /r/LocalLLaMA/comments/1ib9yw6/deepseek_api_docs_require_you_to_install_openai/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ar2O9mfgJsgc2XhZpZNONHKAKcDSrABTDn11u0V6ico', 'resolutions': [{'height': 215, 'url': 'https://preview.redd.it/ne86jth0ojfe1.jpeg?width=108&crop=smart&auto=webp&s=94c284618295032a889697b4c1611202da8d1d0d', 'width': 108}, {'height': 431, 'url': 'https://preview.redd.it/ne86jth0ojfe1.jpeg?width=216&crop=smart&auto=webp&s=18fd6b2b14b274eae93b4058a388fb8dbd1adc64', 'width': 216}, {'height': 639, 'url': 'https://preview.redd.it/ne86jth0ojfe1.jpeg?width=320&crop=smart&auto=webp&s=52c5a84fc2e4ffeabd2e36e1f8ad43621305ff24', 'width': 320}, {'height': 1278, 'url': 'https://preview.redd.it/ne86jth0ojfe1.jpeg?width=640&crop=smart&auto=webp&s=8582f6a569cb2af333c1ca1c794225f14f97c8e8', 'width': 640}, {'height': 1917, 'url': 'https://preview.redd.it/ne86jth0ojfe1.jpeg?width=960&crop=smart&auto=webp&s=9eb69df8bfa1016b147cf2406195755b869cba35', 'width': 960}, {'height': 2156, 'url': 'https://preview.redd.it/ne86jth0ojfe1.jpeg?width=1080&crop=smart&auto=webp&s=7918351c54cdf8af92103bd3f54f9f17fabef514', 'width': 1080}], 'source': {'height': 2636, 'url': 'https://preview.redd.it/ne86jth0ojfe1.jpeg?auto=webp&s=a58ca9c1bc5b7cbd5a4444291a40647c3d6259a6', 'width': 1320}, 'variants': {}}]} |
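For context on why that is: DeepSeek's API is OpenAI-compatible, so the docs reuse the `openai` client instead of shipping a separate SDK. A minimal sketch:

```python
from openai import OpenAI

# Same client library, different base URL -- the API speaks the OpenAI protocol.
client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-chat",  # "deepseek-reasoner" selects R1
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```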
DeepSeek R1 32B refused to answer a normal question saying it's prohibited and sensitive | 0 | As long as it suspects you are a "spy" with "evil intentions", it will keep refusing to answer the following questions... but it's perfectly OK to ask any questions about the US, though 😂
| 2025-01-27T14:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/1iba5d1/deepseek_r1_32b_refused_to_answer_a_normal/ | Odd-Contribution4610 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iba5d1 | false | null | t3_1iba5d1 | /r/LocalLLaMA/comments/1iba5d1/deepseek_r1_32b_refused_to_answer_a_normal/ | false | false | self | 0 | null |
Deepseek Ollama models overthink a lot. Is that the common experience? | 2 | 2025-01-27T14:21:28 | https://www.reddit.com/r/LocalLLaMA/comments/1iba68x/deepseek_ollama_models_overthink_a_lot_is_that/ | pmelendezu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iba68x | false | null | t3_1iba68x | /r/LocalLLaMA/comments/1iba68x/deepseek_ollama_models_overthink_a_lot_is_that/ | false | false | 2 | null |
I wrote my own Llama inference in pure c++ | 1 | [removed] | 2025-01-27T14:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/1iba7jo/i_wrote_my_own_llama_inference_in_pure_c/ | projektjoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iba7jo | false | null | t3_1iba7jo | /r/LocalLLaMA/comments/1iba7jo/i_wrote_my_own_llama_inference_in_pure_c/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JLKO5lhXimXINX4AKZ6Dz9-QW6Iy2SYZnyVebaKN4Ew', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F1Y6vdZc2JB-Jw9UXp2W8CMslmH4xIsPEO5lFrojjaY.jpg?width=108&crop=smart&auto=webp&s=b18af1d67dc174430443479ab181ee239fe39891', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F1Y6vdZc2JB-Jw9UXp2W8CMslmH4xIsPEO5lFrojjaY.jpg?width=216&crop=smart&auto=webp&s=af1d221b847b79e71981aa4ca4133f51fc04a2c2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F1Y6vdZc2JB-Jw9UXp2W8CMslmH4xIsPEO5lFrojjaY.jpg?width=320&crop=smart&auto=webp&s=767cc08bddfc8cf1b3edd09a869f51e3383c11b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F1Y6vdZc2JB-Jw9UXp2W8CMslmH4xIsPEO5lFrojjaY.jpg?width=640&crop=smart&auto=webp&s=bf3fed0b8e56b7b1954a54e0fe28be201601620d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F1Y6vdZc2JB-Jw9UXp2W8CMslmH4xIsPEO5lFrojjaY.jpg?width=960&crop=smart&auto=webp&s=f8228c018901e4c61f4dc7da0fb5e48d76ed53cb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F1Y6vdZc2JB-Jw9UXp2W8CMslmH4xIsPEO5lFrojjaY.jpg?width=1080&crop=smart&auto=webp&s=2c04a36503a749eba90cfc617c7138dcc7a5c1b4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F1Y6vdZc2JB-Jw9UXp2W8CMslmH4xIsPEO5lFrojjaY.jpg?auto=webp&s=a3840a03c9a361374b8e4808f1b7614e7fce5e89', 'width': 1200}, 'variants': {}}]} |
Top lightweight models for coding | 2 | Things are moving too fast in this space, so I'm wondering what's the current best lightweight model that can be run on a MacBook Pro with 32 GB.
| 2025-01-27T14:24:11 | https://www.reddit.com/r/LocalLLaMA/comments/1iba8dt/top_lightweight_models_for_coding/ | lamagy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iba8dt | false | null | t3_1iba8dt | /r/LocalLLaMA/comments/1iba8dt/top_lightweight_models_for_coding/ | false | false | self | 2 | null |
AI model to do time estimations based on 2D drawing | 1 | [removed] | 2025-01-27T14:24:47 | https://www.reddit.com/r/LocalLLaMA/comments/1iba8uc/ai_model_to_do_time_estimations_based_on_2d/ | Aggressive_Read505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iba8uc | false | null | t3_1iba8uc | /r/LocalLLaMA/comments/1iba8uc/ai_model_to_do_time_estimations_based_on_2d/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_INYNsp1G-gpkla3-5DYbIjDAMoAw_TZVcIwjqzQilQ', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=108&crop=smart&auto=webp&s=844bb14005fef7669ed95badb69c4803afa5a95d', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=216&crop=smart&auto=webp&s=03f0ad425bb379a0bfdff279738967a534111342', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=320&crop=smart&auto=webp&s=da357315b107f808317a7d844353a2127a13631c', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=640&crop=smart&auto=webp&s=d8f6c6d3e33380b4e30a1583c04affd32505b9cd', 'width': 640}, {'height': 678, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=960&crop=smart&auto=webp&s=f8e1a242e56409be0ce518dae0c3bfd021b68e1f', 'width': 960}, {'height': 763, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=1080&crop=smart&auto=webp&s=bce5ddbdb70f3cd8ed038156b45b2d44f69c22ae', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?auto=webp&s=86694e70e0d977bf72604678fcba7d184a956672', 'width': 1440}, 'variants': {}}]} |
File creation on llama3.2 and open-webui | 1 | [removed] | 2025-01-27T14:28:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ibabva/file_creation_on_llama32_and_openwebui/ | golden_tortoise8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibabva | false | null | t3_1ibabva | /r/LocalLLaMA/comments/1ibabva/file_creation_on_llama32_and_openwebui/ | false | false | self | 1 | null |
DeepSeek thinking | 1 | [removed] | 2025-01-27T14:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ibad0s/deepseek_thinking/ | 41rp0r7m4n493r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibad0s | false | null | t3_1ibad0s | /r/LocalLLaMA/comments/1ibad0s/deepseek_thinking/ | false | false | self | 1 | null |
File creation on llama3.2 and open-webui | 1 | [removed] | 2025-01-27T14:30:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ibad6e/file_creation_on_llama32_and_openwebui/ | golden_tortoise8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibad6e | false | null | t3_1ibad6e | /r/LocalLLaMA/comments/1ibad6e/file_creation_on_llama32_and_openwebui/ | false | false | self | 1 | null |
AI time estimation based on 2d drawing | 1 | [removed] | 2025-01-27T14:31:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ibadx3/ai_time_estimation_based_on_2d_drawing/ | Aggressive_Read505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibadx3 | false | null | t3_1ibadx3 | /r/LocalLLaMA/comments/1ibadx3/ai_time_estimation_based_on_2d_drawing/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_INYNsp1G-gpkla3-5DYbIjDAMoAw_TZVcIwjqzQilQ', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=108&crop=smart&auto=webp&s=844bb14005fef7669ed95badb69c4803afa5a95d', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=216&crop=smart&auto=webp&s=03f0ad425bb379a0bfdff279738967a534111342', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=320&crop=smart&auto=webp&s=da357315b107f808317a7d844353a2127a13631c', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=640&crop=smart&auto=webp&s=d8f6c6d3e33380b4e30a1583c04affd32505b9cd', 'width': 640}, {'height': 678, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=960&crop=smart&auto=webp&s=f8e1a242e56409be0ce518dae0c3bfd021b68e1f', 'width': 960}, {'height': 763, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?width=1080&crop=smart&auto=webp&s=bce5ddbdb70f3cd8ed038156b45b2d44f69c22ae', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/gxSB3E3CI32ELUly65qA5bViW2HJnVWrRal0D42Po90.jpg?auto=webp&s=86694e70e0d977bf72604678fcba7d184a956672', 'width': 1440}, 'variants': {}}]} |
Can text based ai share images? | 1 | [removed] | 2025-01-27T14:32:49 | https://www.reddit.com/gallery/1ibaf1c | Creati_007 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ibaf1c | false | null | t3_1ibaf1c | /r/LocalLLaMA/comments/1ibaf1c/can_text_based_ai_share_images/ | false | false | 1 | null |
Long. Live. Open Source. | 1 | 2025-01-27T14:33:00 | Rollingsound514 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibaf68 | false | null | t3_1ibaf68 | /r/LocalLLaMA/comments/1ibaf68/long_live_open_source/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'O_fc8EL7wY5OWuUNUBXMY3zA_FmO_wB4DUQov6muuh8', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/lqsan7bmrjfe1.jpeg?width=108&crop=smart&auto=webp&s=2d7bc20853a59966986039a949c93f125e3c8bc0', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/lqsan7bmrjfe1.jpeg?width=216&crop=smart&auto=webp&s=91ef442f02fdab8e98ff68a306885a3b80c9de85', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/lqsan7bmrjfe1.jpeg?width=320&crop=smart&auto=webp&s=67e2bb672aec5108a4a4e5d290cd043f562ba9fb', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/lqsan7bmrjfe1.jpeg?width=640&crop=smart&auto=webp&s=a8aea41f1e78fcf6fd296019fbbd64268dfbd954', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/lqsan7bmrjfe1.jpeg?width=960&crop=smart&auto=webp&s=7c20a4232c8ce0de999d615c3520fd50b7c6f469', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/lqsan7bmrjfe1.jpeg?width=1080&crop=smart&auto=webp&s=508e780fc26aac766862e0e0137d25ce5968b046', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/lqsan7bmrjfe1.jpeg?auto=webp&s=dac7a8eb2a81a3e914226f558d91403b56d4d33e', 'width': 4032}, 'variants': {}}]} |
DeepSeek API has been down most of the morning | 46 | [https://status.deepseek.com/](https://status.deepseek.com/)
Haven't been able to get a response the last couple hours. | 2025-01-27T14:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ibai2q/deepseek_api_has_been_down_most_of_the_morning/ | RazzmatazzReal4129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibai2q | false | null | t3_1ibai2q | /r/LocalLLaMA/comments/1ibai2q/deepseek_api_has_been_down_most_of_the_morning/ | false | false | self | 46 | null |
Deepseek doenst want to talk about the panama papers | 1 | [removed] | 2025-01-27T14:43:49 | Fit_Photograph5085 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibanno | false | null | t3_1ibanno | /r/LocalLLaMA/comments/1ibanno/deepseek_doenst_want_to_talk_about_the_panama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'WWWQXJxj490DycBie5ayo4yBobbSfQjG-mUKj7RSuh4', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/7ka3zq2ptjfe1.jpeg?width=108&crop=smart&auto=webp&s=4a48df106da9b4b8ee07562ac78568285bfed6e7', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/7ka3zq2ptjfe1.jpeg?width=216&crop=smart&auto=webp&s=d91cc668cb8c226564357113bce2d34028f1ee0a', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/7ka3zq2ptjfe1.jpeg?width=320&crop=smart&auto=webp&s=ea46300a69f33fe0545b2fb3d86f05ce76fb2aad', 'width': 320}, {'height': 467, 'url': 'https://preview.redd.it/7ka3zq2ptjfe1.jpeg?width=640&crop=smart&auto=webp&s=47d2887f98c5b8b098718e58917689f1c11d40f5', 'width': 640}], 'source': {'height': 657, 'url': 'https://preview.redd.it/7ka3zq2ptjfe1.jpeg?auto=webp&s=f4d289e433e69c2b583ac182a3ca167a5dd77e85', 'width': 900}, 'variants': {}}]} |
Nvidia stock is prolly gonna go up after deepseek high traffic outage hits the news. There is already a little uptick. | 1 | 2025-01-27T14:49:36 | grey-seagull | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibasb2 | false | null | t3_1ibasb2 | /r/LocalLLaMA/comments/1ibasb2/nvidia_stock_is_prolly_gonna_go_up_after_deepseek/ | false | false | 1 | {'enabled': True, 'images': [{'id': '3iyBfC_BGR3G7Puu-t9vtjyifhZtgSmaKUGrsuUtZE4', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/e5416c2qujfe1.jpeg?width=108&crop=smart&auto=webp&s=bfd4086351d7942dd10d285a487bee15cb144f3e', 'width': 108}, {'height': 263, 'url': 'https://preview.redd.it/e5416c2qujfe1.jpeg?width=216&crop=smart&auto=webp&s=3ff7ab0d396fbc2ba1fd4eaa6663fc61d56e0411', 'width': 216}, {'height': 390, 'url': 'https://preview.redd.it/e5416c2qujfe1.jpeg?width=320&crop=smart&auto=webp&s=47f243b5743e9f13e6f5e049db9419334c907f83', 'width': 320}, {'height': 780, 'url': 'https://preview.redd.it/e5416c2qujfe1.jpeg?width=640&crop=smart&auto=webp&s=69e7144a80454624349edad6cca4747981fc1221', 'width': 640}, {'height': 1170, 'url': 'https://preview.redd.it/e5416c2qujfe1.jpeg?width=960&crop=smart&auto=webp&s=ab569e170cb89b01d3c2e9084e331fb1e1871693', 'width': 960}, {'height': 1316, 'url': 'https://preview.redd.it/e5416c2qujfe1.jpeg?width=1080&crop=smart&auto=webp&s=acf37c999f1e2e5043a24fd0683359810564ac67', 'width': 1080}], 'source': {'height': 1426, 'url': 'https://preview.redd.it/e5416c2qujfe1.jpeg?auto=webp&s=654c82ec14fb33cd21e0133304ebae7c97f31c89', 'width': 1170}, 'variants': {}}]} |
Poetic justice | 1 | 2025-01-27T14:51:34 | analgerianabroad | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibatw9 | false | null | t3_1ibatw9 | /r/LocalLLaMA/comments/1ibatw9/poetic_justice/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'gPqNY1aODVwUgmgg1djwE7ckvD0MzZEWegEtZIhGRA8', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ujjf7cc1vjfe1.jpeg?width=108&crop=smart&auto=webp&s=b69b1fa972095653319d4eac6bbdedb255851642', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ujjf7cc1vjfe1.jpeg?width=216&crop=smart&auto=webp&s=b0604f6cf2de61c93341ffc589a99a7423952e56', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ujjf7cc1vjfe1.jpeg?width=320&crop=smart&auto=webp&s=3ef13e8430399e308b97a7f3cad566e7843fa71f', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/ujjf7cc1vjfe1.jpeg?auto=webp&s=c44838b88e75afdf9ba58d53d8df04e25f66b72f', 'width': 500}, 'variants': {}}]} |
A true open AI solution for companies / teams (Openrouter, Librechat, OpenWebUI)? | 4 | I'm in charge of deploying LLMs at my company. We're not huge, so I was assigned in a 'best available man for the job' context. At most, I'm your average r/LocalLLaMA lurker experimenting with some local distilled R1 models.
That being said, I believe in truly open AI, and so does my boss. My goal is to provide our employees with (open) LLMs running on European servers, without our data being used for training. Here is a list of prerequisites:
* Every employee can login with their own credentials
* Management can tweak which models / providers the employees have access to.
* We will centrally pay for usage, but user limits would be appreciated.
* The UI must be chatgpt-esque, as everyone here works with it, and wouldn't want to adapt.
* One click LLM selection from list selected by company management (for data safety)
* Good t/sec. 50+ would be nice.
So far, OpenRouter comes closest to our wishes, but it is lacking in some regards and seems more focused on its API business / tech-oriented users:
* No one-click model switching: Enabling/disabling models individually is tedious, I just want users to be able to select a model from a short pre-selected list of models. Right now, configuring multiple LLMs can allow the user to interact with multiple simultaneously. Wicked cool, but not for my user base.
* Lack of enterprise/team features: No way to share predefined model configurations across teams or enforce centralized provider settings.
* Saved model+provider settings disappear when closing a session and starting a new chat, forcing the user to reselect it from a long list of models.
Open-WebUI seems like an excellent contender too, but I'm uncertain whether I could easily and safely deploy that for a couple dozen people (with my limited time availability for this side project).
| 2025-01-27T14:59:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ibb0do/a_true_open_ai_solution_for_companies_teams/ | HIVVIH | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibb0do | false | null | t3_1ibb0do | /r/LocalLLaMA/comments/1ibb0do/a_true_open_ai_solution_for_companies_teams/ | false | false | self | 4 | null |
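One stopgap worth considering while evaluating: since OpenRouter is OpenAI-compatible, the centrally managed allowlist can live in a thin in-house wrapper even if the vendor UI never adds it. A sketch (the model slugs and key are examples):

```python
from openai import OpenAI

ALLOWED_MODELS = {                 # centrally managed shortlist -- example entries
    "deepseek/deepseek-r1",
    "mistralai/mistral-large",
}

client = OpenAI(api_key="sk-or-...", base_url="https://openrouter.ai/api/v1")

def company_chat(model: str, messages: list[dict]) -> str:
    """Refuse anything management hasn't approved, then pass the call through."""
    if model not in ALLOWED_MODELS:
        raise ValueError(f"{model} is not on the approved model list")
    resp = client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content

print(company_chat("deepseek/deepseek-r1", [{"role": "user", "content": "ping"}]))
```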
DeepSeek is experiencing high traffic | 1 | [removed] | 2025-01-27T14:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ibb0gh/deepseek_is_experiencing_high_traffic/ | Many_Novel_9716 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibb0gh | false | null | t3_1ibb0gh | /r/LocalLLaMA/comments/1ibb0gh/deepseek_is_experiencing_high_traffic/ | false | false | 1 | null |
Qwen3.0 MOE? New Reasoning Model? | 365 | 2025-01-27T15:09:24 | Vishnu_One | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibb8rr | false | null | t3_1ibb8rr | /r/LocalLLaMA/comments/1ibb8rr/qwen30_moe_new_reasoning_model/ | false | false | 365 | {'enabled': True, 'images': [{'id': 'Zgz3O2ikhQGfcYd9c8-PonlgUk7BkrakV56xaDA0FuE', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/0vnua5vqxjfe1.png?width=108&crop=smart&auto=webp&s=ffae169149174789b716d1bb6b7bd223c4f3b4fb', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/0vnua5vqxjfe1.png?width=216&crop=smart&auto=webp&s=3d8edc2d9bd9a7cd6ecc3a0f48e2fa1b3c06a74f', 'width': 216}, {'height': 120, 'url': 'https://preview.redd.it/0vnua5vqxjfe1.png?width=320&crop=smart&auto=webp&s=12861d7e6664e9cd7e45dd0710b87280d3a92aff', 'width': 320}], 'source': {'height': 214, 'url': 'https://preview.redd.it/0vnua5vqxjfe1.png?auto=webp&s=9bb806840568ae260563d12da745e4c53a8dd47e', 'width': 570}, 'variants': {}}]} |
Is there a way to run DeepSeek R1 7B parameters on an RX 6650XT? | 1 | Hello there, I honestly don't know much about LLMs, but I do love using them (always hosted). I wanted to try DeepSeek R1 with 7B parameters, which I think should run well on my PC's specs (i5-11400F, 32GB RAM, RX 6650XT), and I managed to run it on my CPU using Ollama. Even there it runs quite well; it's pretty impressive, lol.
Anyway, I saw that it uses my CPU instead of my GPU, and even though they added support for AMD GPUs months ago, I don't see my GPU in the list of supported GPUs. So I wondered if there was one way or another to still run this LLM on my GPU. I've searched before making this post, but without much success. Thank you to anyone who could give me a clear answer! | 2025-01-27T15:11:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ibbasa/is_there_a_way_to_run_deepseek_r1_7b_parameters/ | CancerousGTFO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibbasa | false | null | t3_1ibbasa | /r/LocalLLaMA/comments/1ibbasa/is_there_a_way_to_run_deepseek_r1_7b_parameters/ | false | false | self | 1 | null |
Which local model can I run on my PC specs? | 2 | Hi,
I am new to this field and want to try an AI model. I have the following laptop:
MSI GF65 Thin 10UE laptop, Core i5, 32GB RAM, Nvidia GeForce RTX 3060 6GB
I used the 1.5b DeepSeek model from Ollama, and it mostly gave wrong information. It was pretty bad honestly.
Before you comment anything, please note that I am a newbie.
1.58bit DeepSeek R1 - 131GB Dynamic GGUF | 1,430 | Hey r/LocalLLaMA! I managed to **dynamically quantize** the full DeepSeek R1 671B MoE to 1.58bits in GGUF format. The trick is **not to quantize all layers**, but quantize only the MoE layers to 1.5bit, and leave attention and other layers in 4 or 6bit.
|MoE Bits|Type|Disk Size|Accuracy|HF Link|
|:-|:-|:-|:-|:-|
|1.58bit|IQ1\_S|**131GB**|Fair|[Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S)|
|1.73bit|IQ1\_M|**158GB**|Good|[Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M)|
|2.22bit|IQ2\_XXS|**183GB**|Better|[Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ2_XXS)|
|2.51bit|Q2\_K\_XL|**212GB**|Best|[Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL)|
You can get **140 tokens / s** on 2x H100 80GB GPUs with all layers offloaded. A 24GB GPU like RTX 4090 should be able to get at least 1 to 3 tokens / s.
If we naively quantize all layers to 1.5bit (-1, 0, 1), the model will fail dramatically, since it'll produce **gibberish** and **infinite repetitions**. I selectively leave all attention layers in 4/6bit, and leave the first 3 transformer dense layers in 4/6bit. The MoE layers take up 88% of all space, so we can leave them in 1.5bit. We get in total a weighted sum of 1.58bits!
I asked the 1.58bit model to create Flappy Bird with 10 conditions (like random colors, a best score etc), and it did pretty well! Using a generic, non-dynamically quantized model will fail miserably - there will be no output at all!
[Flappy Bird game made by 1.58bit R1](https://i.redd.it/k8nfun2ezjfe1.gif)
There are more details in the blog here: [https://unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic) The link to the 1.58bit GGUF is here: [https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S) You should be able to run it in your favorite inference tool if it supports i-matrix quants. No need to re-update llama.cpp.
A reminder on DeepSeek's chat template (for distilled versions as well) - it auto adds a BOS - do not add it manually!
`<|begin▁of▁sentence|><|User|>What is 1+1?<|Assistant|>It's 2.<|end▁of▁sentence|><|User|>Explain more!<|Assistant|>`
To know how many layers to offload to the GPU, I approximately calculated it as below:
|Quant|File Size|24GB GPU|80GB GPU|2x80GB GPU|
|:-|:-|:-|:-|:-|
|1.58bit|131GB|7|33|All layers 61|
|1.73bit|158GB|5|26|57|
|2.22bit|183GB|4|22|49|
|2.51bit|212GB|2|19|32|
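For anyone who'd rather drive this from Python, here's a minimal sketch using llama-cpp-python, with `n_gpu_layers` taken from the 24GB row of the table above. The shard filename is a placeholder - point `model_path` at the first split file you actually downloaded and the remaining shards load automatically:

    from llama_cpp import Llama

    llm = Llama(
        model_path="DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",  # placeholder shard name
        n_gpu_layers=7,   # 1.58bit on a single 24GB GPU, per the table above
        n_ctx=4096,
    )
    # BOS is added automatically, matching the chat template note above
    out = llm("<|User|>What is 1+1?<|Assistant|>", max_tokens=64)
    print(out["choices"][0]["text"])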
All other GGUFs for R1 are here: [https://huggingface.co/unsloth/DeepSeek-R1-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) There's also GGUFs and dynamic 4bit bitsandbytes quants and others for all other distilled versions (Qwen, Llama etc) at [https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5](https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5) | 2025-01-27T15:24:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ibbloy/158bit_deepseek_r1_131gb_dynamic_gguf/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibbloy | false | null | t3_1ibbloy | /r/LocalLLaMA/comments/1ibbloy/158bit_deepseek_r1_131gb_dynamic_gguf/ | false | false | 1,430 | {'enabled': False, 'images': [{'id': 'xSdGWArWU6LYyDRL5oP5FnuIAfKsN1Z6N1wc8N_fOQY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=108&crop=smart&auto=webp&s=38be96fe7ba592d724845ec508925c2e2d0437a9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=216&crop=smart&auto=webp&s=216add24eeddf96721764be15a01323d3289a098', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=320&crop=smart&auto=webp&s=146aafa2effa94c6a92be3a1e52d5d1c5dada77c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=640&crop=smart&auto=webp&s=bc7cd6ab7b35a273b107dce1a4113ba2c9dcca51', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=960&crop=smart&auto=webp&s=f708695c420ae4c27a7b7b045b263ef095a49773', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=1080&crop=smart&auto=webp&s=674cf56e451c44a0c9ae525a6f1cb1a1dd92eab0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?auto=webp&s=15786bbf8fa654f9c457319fd2509fc682f49b99', 'width': 1200}, 'variants': {}}]} |
|
Front end for conversation w/ multiple, simultaneous endpoints? | 3 | So far it seems that most front ends can speak to one endpoint at a time, and to some degree switch endpoints between replies. Is there a front end that allows you to connect to multiple endpoints at the same time and have them:
A) Speak to each other.
B) Speak to you in a group chat setting.
Or is this something that will require langchain? For example, one device with a 7b model using kobold, one device with 13b using text-gen-ui. | 2025-01-27T15:31:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ibbswb/front_end_for_conversation_w_multiple/ | BackgroundAmoebaNine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibbswb | false | null | t3_1ibbswb | /r/LocalLLaMA/comments/1ibbswb/front_end_for_conversation_w_multiple/ | false | false | self | 3 | null |
1. Hire CN devs on H1Bs, 2. Publish secrets on Arxiv, 3. ????, 4. Profit | 3 | Where did it go wrong? | 2025-01-27T15:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ibbxpy/1_hire_cn_devs_on_h1bs_2_publish_secrets_on_arxiv/ | latestagecapitalist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibbxpy | false | null | t3_1ibbxpy | /r/LocalLLaMA/comments/1ibbxpy/1_hire_cn_devs_on_h1bs_2_publish_secrets_on_arxiv/ | false | false | self | 3 | null |
Deepseek censorship is more tolerable than Western censorship. | 0 | [removed] | 2025-01-27T15:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ibc1tg/deepseek_censorship_is_more_tolerable_then/ | CreepyMan121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibc1tg | false | null | t3_1ibc1tg | /r/LocalLLaMA/comments/1ibc1tg/deepseek_censorship_is_more_tolerable_then/ | false | false | self | 0 | null
Phi-4 is quite a strong judge | 14 | One of my colleagues ran a quick experiment to compare Phi-4 to our fine-tuned `flowaicom/Flow-Judge-v0.1` (currently #3 on the [judge-arena](https://huggingface.co/spaces/AtlaAI/judge-arena), still preliminary results though), and it turns out Phi-4 is a great "smol" open model for evals.
You can learn more about the experiment [here](https://www.flow-ai.com/blog/phi-4-as-llm-evaluator). | 2025-01-27T15:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ibc2at/phi4_is_quite_a_strong_judge/ | bergr7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibc2at | false | null | t3_1ibc2at | /r/LocalLLaMA/comments/1ibc2at/phi4_is_quite_a_strong_judge/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '2pKOE_cH-iLNwbkBArlt9q9vDvznDbuN24oBI1hdF8I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=108&crop=smart&auto=webp&s=f76322d1f211fa7b2194b25db6ad874383c2a088', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=216&crop=smart&auto=webp&s=12a11c4bd286ed0720eb3b353f9a46cd2034b481', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=320&crop=smart&auto=webp&s=f992a24fe706c3037983cfa130d03df335a1f6d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=640&crop=smart&auto=webp&s=4ef15551728ccd8c5ef80474900a3acd5e6672b6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=960&crop=smart&auto=webp&s=1dfb937f7807f08897bc845e25a39dffb3d917f7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=1080&crop=smart&auto=webp&s=9becfd7854a14d90d6bef3aa1cb6fd62df468000', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?auto=webp&s=10fda0b7afd2060d69403af42a16d4b78a21a07f', 'width': 1200}, 'variants': {}}]} |
Deepseek currently restricts new registrations to Chinese phone numbers only | 185 | See: https://i.imgur.com/9WLAnko.png | 2025-01-27T15:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ibc2vx/deepseek_currently_restricts_new_registrations_to/ | SysPsych | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibc2vx | false | null | t3_1ibc2vx | /r/LocalLLaMA/comments/1ibc2vx/deepseek_currently_restricts_new_registrations_to/ | false | false | self | 185 | {'enabled': False, 'images': [{'id': 'jOLi8ur0hAtAhqboxoQbhqMbBvovfc1KvFs2VQDCu0s', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/Q96ipBNG3uIy-mqzOsj6A3KE0IjZxl7bTKhXKoUjAUM.png?width=108&crop=smart&auto=webp&s=180a08ea74d28dbf046df482d1e1a873e1346970', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/Q96ipBNG3uIy-mqzOsj6A3KE0IjZxl7bTKhXKoUjAUM.png?width=216&crop=smart&auto=webp&s=71485fc2463b28d6bbc1dbba6d990efa39f8769a', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/Q96ipBNG3uIy-mqzOsj6A3KE0IjZxl7bTKhXKoUjAUM.png?width=320&crop=smart&auto=webp&s=949cf8c4f4a5bb960a847405c5db5c290a8508fa', 'width': 320}, {'height': 269, 'url': 'https://external-preview.redd.it/Q96ipBNG3uIy-mqzOsj6A3KE0IjZxl7bTKhXKoUjAUM.png?width=640&crop=smart&auto=webp&s=d136f9ec0326717b320c8016fb8f7b3654ff1997', 'width': 640}], 'source': {'height': 372, 'url': 'https://external-preview.redd.it/Q96ipBNG3uIy-mqzOsj6A3KE0IjZxl7bTKhXKoUjAUM.png?auto=webp&s=70440c9b2668a077479218d67c41267bece91619', 'width': 883}, 'variants': {}}]} |
The Ring Model: A Decentralized Framework for Collaborative LLM Training and Local Empowerment | 1 | [removed] | 2025-01-27T15:44:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ibc3gy/the_ring_model_a_decentralized_framework_for/ | ReverseTimeEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibc3gy | false | null | t3_1ibc3gy | /r/LocalLLaMA/comments/1ibc3gy/the_ring_model_a_decentralized_framework_for/ | false | false | self | 1 | null |
Questions About DeepSeek R1 Image Inference | 1 | [removed] | 2025-01-27T15:45:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ibc4qg/questions_about_deepseek_r1_image_inference/ | Acceptable_Young_167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibc4qg | false | null | t3_1ibc4qg | /r/LocalLLaMA/comments/1ibc4qg/questions_about_deepseek_r1_image_inference/ | false | false | self | 1 | null |
DeepSeek R1 Distill Llama 70B is available on OpenRouter | 7 | DeepSeek R1 Distill Llama 70B is available on OpenRouter at the following prices:
$0.23/M input tokens
$0.69/M output tokens
and 131,072 context. | 2025-01-27T15:57:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ibced9/deepseek_r1_distill_llama_70b_is_available_on/ | Curious_Cantaloupe65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibced9 | false | null | t3_1ibced9 | /r/LocalLLaMA/comments/1ibced9/deepseek_r1_distill_llama_70b_is_available_on/ | false | false | self | 7 | null |
Problem with Verification Code | 1 | [removed] | 2025-01-27T15:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ibcfte/problem_with_verification_code/ | madjija69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibcfte | false | null | t3_1ibcfte | /r/LocalLLaMA/comments/1ibcfte/problem_with_verification_code/ | false | false | self | 1 | null |
I created a website to see if you can run an LLM on your hardware. Feedback appreciated! | 12 | [https://canirunthisllm.com/](https://canirunthisllm.com/)
Heavily inspired by the great work done in this post I wanted to make a web page where you could fill in the information yourself: [https://www.reddit.com/r/LocalLLaMA/comments/1ib2uuz/i\_created\_a\_can\_you\_run\_it\_tool\_for\_open\_source/](https://www.reddit.com/r/LocalLLaMA/comments/1ib2uuz/i_created_a_can_you_run_it_tool_for_open_source/)
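For the curious, the core check such a tool performs boils down to a few lines. This is my own rough sketch with made-up constants, not the site's actual logic:

    def fits(params_b, bits_per_weight, ctx_tokens, vram_gb, kv_gb_per_8k=1.0):
        weights_gb = params_b * bits_per_weight / 8      # params_b is in billions
        kv_gb = kv_gb_per_8k * ctx_tokens / 8192         # crude KV-cache estimate
        need = 1.1 * (weights_gb + kv_gb)                # ~10% runtime overhead
        return round(need, 1), need <= vram_gb

    print(fits(8, 4, 8192, 12))  # e.g. an 8B model at Q4 on a 12GB card -> (5.5, True)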
It doesn't work with all models from Hugging Face - I'm wondering if there is a better way to store this information? There are probably a few people now aiming to do the same thing - anyone want to join forces?
Again, most credit goes to u/MixtureOfAmateurs | 2025-01-27T16:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ibcnyl/i_created_a_website_to_see_if_you_can_run_an_llm/ | Ambitious_Monk2445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibcnyl | false | null | t3_1ibcnyl | /r/LocalLLaMA/comments/1ibcnyl/i_created_a_website_to_see_if_you_can_run_an_llm/ | false | false | self | 12 | null |
Best 'free' uncensored, open-minded model? | 3 | Hello, I want a model to talk about deep subjects, to think about unconventional ideas, to break assumptions, and to speculate on new things, while being uncensored.

For example, many models, when asked about their subjective experience, don't even try to speculate on the idea.

I want an online model, unless it is very lightweight and can run on a phone or a 3rd gen i5.
CNBC on R1 | 0 | One of the anchors said he could run it on his Apple mini … come on I need 16xH100s 😭😭😭😩 | 2025-01-27T16:14:09 | https://v.redd.it/qf50lghp9kfe1 | bzrkkk | /r/LocalLLaMA/comments/1ibctam/cnbc_on_r1/ | 1970-01-01T00:00:00 | 0 | {} | 1ibctam | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qf50lghp9kfe1/DASHPlaylist.mpd?a=1740716052%2CMTYyNTFkMTkzMzBlOTgwYTNmZmMwMzEwNzAyYjNjMjI5MmZjZDk0OWM5OTA5N2Y0OWFiNTAwMmI0MjdlNTRjNQ%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/qf50lghp9kfe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/qf50lghp9kfe1/HLSPlaylist.m3u8?a=1740716052%2CNTA1MjVmNDkzMDU3ZDY0ZWIxZTAzZGQzYTY0ZjdhYjk4ZWExYzJhMDAwNDRjNDg1ZmQzZjRmZGU4OGFkMWE0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qf50lghp9kfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1ibctam | /r/LocalLLaMA/comments/1ibctam/cnbc_on_r1/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4.png?width=108&crop=smart&format=pjpg&auto=webp&s=962e3f1eeb761a07dc7d8c4043cd2390a0c9bdab', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4.png?width=216&crop=smart&format=pjpg&auto=webp&s=8753c6664d057c607a210caf986ecaf652375c2b', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4.png?width=320&crop=smart&format=pjpg&auto=webp&s=18556ea53eccfc6504ba2a376003db2ecb62706c', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4.png?width=640&crop=smart&format=pjpg&auto=webp&s=629a8c3d9bf1777426b8f5c5edc3e45bdf772dda', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4.png?width=960&crop=smart&format=pjpg&auto=webp&s=a25d3a362169669ad86fa2bd7f1b67f30a50866a', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=aee83da87ced054f352c8c62aad53d65527f25ca', 'width': 1080}], 'source': {'height': 3840, 'url': 'https://external-preview.redd.it/a3RoMXJ2eWc5a2ZlMcUGsmVPYaRnx9kmUohxrkbLEzSQ4GmxLZ3J4QEZBtu4.png?format=pjpg&auto=webp&s=d19b3e09647b30aac3e03f9065e6af9efa12ce3f', 'width': 2160}, 'variants': {}}]} |
|
Deepseek limits registration (temporarily) | 4 | 2025-01-27T16:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ibcu0f/deepseek_limits_registration_temporarily/ | OkStatement3655 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibcu0f | false | null | t3_1ibcu0f | /r/LocalLLaMA/comments/1ibcu0f/deepseek_limits_registration_temporarily/ | false | false | 4 | null |
||
If deepseek is open source couldn't anyone just use their more efficient code but with openAI's computing power and be way better? | 1 | [removed] | 2025-01-27T16:19:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ibcy4o/if_deepseek_is_open_source_couldnt_anyone_just/ | Key_Chance6964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibcy4o | false | null | t3_1ibcy4o | /r/LocalLLaMA/comments/1ibcy4o/if_deepseek_is_open_source_couldnt_anyone_just/ | false | false | self | 1 | null |
How to run R1 Zero? | 2 | Has anyone reduced DeepSeek R1 Zero to a more manageable size, so that it fits consumer-level GPUs? Is there an API I can try? All the versions on Hugging Face are far too big.
Janus-Pro - improving both multimodal understanding and visual generation of Deepseek Janus | 39 | **Deepseek hits a homerun** with a new release just **7 days after R1**: **Janus-Pro**, an MLLM (Text-Image to Text-Image) model! 🎉
Imagine applying the **R1 GRPO algorithm** alongside the new paper: *“*[Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step](https://huggingface.co/papers/2501.13926)*”* and its **Potential Assessment Reward Model (PARM)**—this could lead to a groundbreaking **Deepseek-VR1 model**! 😄 | 2025-01-27T16:28:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ibd5t8/januspro_improving_both_multimodal_understanding/ | citaman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibd5t8 | false | null | t3_1ibd5t8 | /r/LocalLLaMA/comments/1ibd5t8/januspro_improving_both_multimodal_understanding/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'SoPSP1Ljnj2Ay1qIathazmfwFsO3GAZuxi1uuvaDC1Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pr5ji12JWa0oVZdPdATNUm9zqfljtiIGkF5AE4LOtRI.jpg?width=108&crop=smart&auto=webp&s=1dfc5f37bd08b1e50831b641f897d8faf055fd1c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pr5ji12JWa0oVZdPdATNUm9zqfljtiIGkF5AE4LOtRI.jpg?width=216&crop=smart&auto=webp&s=c7cdb2194376a5701403ea5da4ccd2dd6c5f3e4f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pr5ji12JWa0oVZdPdATNUm9zqfljtiIGkF5AE4LOtRI.jpg?width=320&crop=smart&auto=webp&s=98a04f96e11e3aa2c6c87dd6a4dc95c9187b679f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pr5ji12JWa0oVZdPdATNUm9zqfljtiIGkF5AE4LOtRI.jpg?width=640&crop=smart&auto=webp&s=808cd0d961fb8514893ddc3a2a88950e4251fc23', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pr5ji12JWa0oVZdPdATNUm9zqfljtiIGkF5AE4LOtRI.jpg?width=960&crop=smart&auto=webp&s=3c0242c288c8b617fe5e7b76647d502498b0d8cd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pr5ji12JWa0oVZdPdATNUm9zqfljtiIGkF5AE4LOtRI.jpg?width=1080&crop=smart&auto=webp&s=ace7d623a7e3f9d33295c7fedb92d859bed997ba', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pr5ji12JWa0oVZdPdATNUm9zqfljtiIGkF5AE4LOtRI.jpg?auto=webp&s=e8cd31b8b6878f7b0256c494c4014b5f14163fcf', 'width': 1200}, 'variants': {}}]} |
DeepSeek releases deepseek-ai/Janus-Pro-7B (unified multimodal model). | 692 | 2025-01-27T16:28:21 | https://huggingface.co/deepseek-ai/Janus-Pro-7B | paf1138 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ibd5x0 | false | null | t3_1ibd5x0 | /r/LocalLLaMA/comments/1ibd5x0/deepseek_releases_deepseekaijanuspro7b_unified/ | false | false | 692 | {'enabled': False, 'images': [{'id': 'CvIkUEQJ7xtbhjxiEUGNwBQwSMYmfuAmiP_Q6CjKMCU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n5r1wVoNriwXCNjXkrw2Ab2zRN5UbL6aXFXA0wRQWRU.jpg?width=108&crop=smart&auto=webp&s=4d9d3016a8573aab5c09bf2aa8d12de4d759ccd2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n5r1wVoNriwXCNjXkrw2Ab2zRN5UbL6aXFXA0wRQWRU.jpg?width=216&crop=smart&auto=webp&s=6cf5ab3eaca5c1209a137fd3224ddb3be3d81ada', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n5r1wVoNriwXCNjXkrw2Ab2zRN5UbL6aXFXA0wRQWRU.jpg?width=320&crop=smart&auto=webp&s=08359741ac6bb6763e66fcbffdf2b3559c304830', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n5r1wVoNriwXCNjXkrw2Ab2zRN5UbL6aXFXA0wRQWRU.jpg?width=640&crop=smart&auto=webp&s=ef80d96659edc7101cd569ddae687c24437596ba', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n5r1wVoNriwXCNjXkrw2Ab2zRN5UbL6aXFXA0wRQWRU.jpg?width=960&crop=smart&auto=webp&s=e22eed4d4fc6ec2f68102071aefa6cd337bc5b62', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n5r1wVoNriwXCNjXkrw2Ab2zRN5UbL6aXFXA0wRQWRU.jpg?width=1080&crop=smart&auto=webp&s=f827ebf1addbce50bdfe71c5fda7989325b6675b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n5r1wVoNriwXCNjXkrw2Ab2zRN5UbL6aXFXA0wRQWRU.jpg?auto=webp&s=5c3fdf7d60acb67443d94244e481f1a922c85e50', 'width': 1200}, 'variants': {}}]} |
||
Any one test R1 yet with low level code? | 2 | [https://x.com/ggerganov/status/1883888097185927311](https://x.com/ggerganov/status/1883888097185927311) | 2025-01-27T16:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ibdjcn/any_one_test_r1_yet_with_low_level_code/ | -Nods- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibdjcn | false | null | t3_1ibdjcn | /r/LocalLLaMA/comments/1ibdjcn/any_one_test_r1_yet_with_low_level_code/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '2FH_bXWFaAzATuvpsq3NB3YJGpqU8V-Fpbq2dZBI7KY', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/p6vYZNZzC8H-nt4Rd1LUvUBl6My-Pigzv8cjkz0JQ_0.jpg?width=108&crop=smart&auto=webp&s=a90207fe59bf26e72a91394e09c0fb24e3df4d05', 'width': 108}, {'height': 173, 'url': 'https://external-preview.redd.it/p6vYZNZzC8H-nt4Rd1LUvUBl6My-Pigzv8cjkz0JQ_0.jpg?width=216&crop=smart&auto=webp&s=ddb07fd62880e1d9cb18f61c123bdc1b186e091c', 'width': 216}, {'height': 256, 'url': 'https://external-preview.redd.it/p6vYZNZzC8H-nt4Rd1LUvUBl6My-Pigzv8cjkz0JQ_0.jpg?width=320&crop=smart&auto=webp&s=a7958f8f936fb9d6142bfb8f182d56e47433dfde', 'width': 320}, {'height': 513, 'url': 'https://external-preview.redd.it/p6vYZNZzC8H-nt4Rd1LUvUBl6My-Pigzv8cjkz0JQ_0.jpg?width=640&crop=smart&auto=webp&s=3818b552569cdf8e25fa20b8eb7523a82a8565f8', 'width': 640}, {'height': 770, 'url': 'https://external-preview.redd.it/p6vYZNZzC8H-nt4Rd1LUvUBl6My-Pigzv8cjkz0JQ_0.jpg?width=960&crop=smart&auto=webp&s=ad73eb43c590fa1779dd2f4ad455f49f2c86cf0e', 'width': 960}, {'height': 866, 'url': 'https://external-preview.redd.it/p6vYZNZzC8H-nt4Rd1LUvUBl6My-Pigzv8cjkz0JQ_0.jpg?width=1080&crop=smart&auto=webp&s=3e3e30607895acb9c988454a7042d9f05f5683cb', 'width': 1080}], 'source': {'height': 1392, 'url': 'https://external-preview.redd.it/p6vYZNZzC8H-nt4Rd1LUvUBl6My-Pigzv8cjkz0JQ_0.jpg?auto=webp&s=630d92b3362977cfd6b8062f38142a35ca9b1e63', 'width': 1734}, 'variants': {}}]} |
Why is no-one talking about EP | 27 | Everyone keeps asking how DeepSeek are able to provide the API prices they do. However, no one seems to be discussing how DeepSeek are hosting their model from an infrastructure and topology perspective. They released some incredibly interesting papers; in particular, their DeepSeek MoE and DeepSeek-V2 papers were very insightful.
MoE has been around for a while now, and most inference engines have support for MoE models. But very few have implemented Expert Parallelism (EP). There's been a tonne of focus on Tensor Parallelism (TP) and Pipeline Parallelism (PP), but EP has been all but ignored, despite the growing number of models using MoE designs.
The idea behind EP is to split the model by experts rather than by layers. Each expert is fully loaded onto a card, on its own or alongside other experts (depending on VRAM, TOPS, etc.). As 8 experts are used for each token, 8 expert networks are essentially queried, with the router model choosing the appropriate experts, collating the results, and then deciding the top token choice.
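To make the routing idea concrete, here's a toy single-process sketch in PyTorch (my own illustration, not DeepSeek's code). In a real EP deployment each entry in the `experts` list would live on its own card and the gather would be an all-to-all:

    import torch

    n_experts, top_k, d_model = 16, 4, 64          # toy sizes, not DeepSeek's
    experts = [torch.nn.Linear(d_model, d_model) for _ in range(n_experts)]
    router = torch.nn.Linear(d_model, n_experts)

    def moe_forward(x):                             # x: (tokens, d_model)
        weights, idx = torch.topk(router(x).softmax(-1), top_k, dim=-1)
        out = torch.zeros_like(x)
        for e in range(n_experts):                  # in EP, each e runs on its own card
            rows, slots = (idx == e).nonzero(as_tuple=True)
            if rows.numel():
                out[rows] += weights[rows, slots, None] * experts[e](x[rows])
        return out

    print(moe_forward(torch.randn(8, d_model)).shape)  # torch.Size([8, 64])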
DeepSeek V3 is HUGE, no denying that. But the MoE experts themselves are only around 2.5b parameters each, meaning that on their own they are actually incredibly performant and nippy little models - think of running Gemma2 2b. The real resource requirement comes from having 256 of them. That's 256 Gemma2-sized mini models all on standby, ready for inference.
I would be very interested to know the infrastructure DeepSeek has in place, but theoretically you could host this at a more than respectable speed (multi-request, batched, etc.) in FP8 (well, int8) with 30 3090 cards. Drop that to W4A16 and you're looking at around 16 cards. That's something that we've actually seen on builds posted here (respect to that builder).
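Those card counts are just weight arithmetic; a quick sanity check with rough numbers (ignoring KV cache and activation overhead):

    params_b = 671                                   # total parameters, in billions
    for bytes_per_param, label in [(1.0, "int8"), (0.5, "W4A16")]:
        weights_gb = params_b * bytes_per_param      # ~1 GB per billion params per byte
        print(label, round(weights_gb / 24), "x 24GB 3090s")
    # -> int8 ~28 cards, W4A16 ~14 cards; roughly 30 and 16 once you add headroom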
Obviously you could do the same or even better using A100s, which, according to recent articles, DeepSeek have tens of thousands of. These are 300W, relatively low-power devices. Plus it makes MIG actually interesting and useful for something other than multi-tenant/user setups.
The only restrictions are device fabric (PCIe) and inference options. Re fabric, that's actually easily solved using multiple layers of PCIe switches, using the onboard switch DMA controllers to remove latency issues and enable speedy P2P RDMA transfers. You can have 25 cards connected to a single socket by utilising five 5-port switches, each connected into another 5-port switch, which is itself connected to the host machine.
The inference problem is harder to solve. But I can assure you there are many companies that have probably easily done it by now. It's not a massive task, but something that sits outside of most inference setups and solutions at the moment. The only two documented options I've seen are TensorRT and DeepSpeed. But I've not seen a single person mention anything on the topic.
So yeah, DeepSeek are legends, and what they've released is more usable and impressive than I think most of the community has yet realised. And if you've got the right topology, serving it en masse can cost a fraction of what a dense model would. I would love to see some of the inference backends implement EP, though I realise it's probably not at the top of their priority list.
DeepSeek AI: The Rising Power in AI That You Can’t Invest in—But Here’s How You Can Profit. What is your favorite DeepSeek related play? $META and Lama Looking great! | 1 | 2025-01-27T16:44:52 | Affectionate_Cod3714 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibdkgr | false | null | t3_1ibdkgr | /r/LocalLLaMA/comments/1ibdkgr/deepseek_ai_the_rising_power_in_ai_that_you_cant/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'biFLZ2mwzM1gg9UHhEjTun6ZHZAmsNbOfozacnO-5uY', 'resolutions': [{'height': 170, 'url': 'https://preview.redd.it/r54p9taafkfe1.png?width=108&crop=smart&auto=webp&s=edab4b02539d9d17260a9e60279778207b8d036b', 'width': 108}, {'height': 340, 'url': 'https://preview.redd.it/r54p9taafkfe1.png?width=216&crop=smart&auto=webp&s=a73ad0d900e1431ad8c27db2d122f072fed75b3b', 'width': 216}, {'height': 505, 'url': 'https://preview.redd.it/r54p9taafkfe1.png?width=320&crop=smart&auto=webp&s=9368fdfa3a3abc8e3bc6858d909042259d6202a1', 'width': 320}, {'height': 1010, 'url': 'https://preview.redd.it/r54p9taafkfe1.png?width=640&crop=smart&auto=webp&s=f28e20165926761e260b3685e3ad5fd78027fec2', 'width': 640}, {'height': 1515, 'url': 'https://preview.redd.it/r54p9taafkfe1.png?width=960&crop=smart&auto=webp&s=96230cfe4fbd393dd748cacfbd3e1c0449f498dc', 'width': 960}, {'height': 1704, 'url': 'https://preview.redd.it/r54p9taafkfe1.png?width=1080&crop=smart&auto=webp&s=e1b69e533143e73e5204c59da45e1f7b2ea644d8', 'width': 1080}], 'source': {'height': 1736, 'url': 'https://preview.redd.it/r54p9taafkfe1.png?auto=webp&s=4c53bb2942a6037e28979b11b6ac8f47e511ab65', 'width': 1100}, 'variants': {}}]} |
|||
DeepSeek probably getting DDoSed by the US for their impact on the stock market today. | 451 | 2025-01-27T16:46:37 | DocStrangeLoop | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibdm1z | false | null | t3_1ibdm1z | /r/LocalLLaMA/comments/1ibdm1z/deepseek_probably_getting_ddosed_by_the_us_for/ | false | false | 451 | {'enabled': True, 'images': [{'id': '3FLQ1S-W8frOGPsoreUnsxV6xztqKRCQ_bOodD9Lgkw', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/0howo9plfkfe1.jpeg?width=108&crop=smart&auto=webp&s=c746684b1db813d587d2daa0ffd87de5ae655601', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/0howo9plfkfe1.jpeg?width=216&crop=smart&auto=webp&s=804409139239f6de9a35678c9026407c01bccf7b', 'width': 216}, {'height': 247, 'url': 'https://preview.redd.it/0howo9plfkfe1.jpeg?width=320&crop=smart&auto=webp&s=ed94624631e22724e84b30b890457f092c517be4', 'width': 320}, {'height': 495, 'url': 'https://preview.redd.it/0howo9plfkfe1.jpeg?width=640&crop=smart&auto=webp&s=8d08aef24ffc8d02182d314bea0355387632b24e', 'width': 640}, {'height': 743, 'url': 'https://preview.redd.it/0howo9plfkfe1.jpeg?width=960&crop=smart&auto=webp&s=eedf1227b9ae54e3c1170f5589e8108b7752f70e', 'width': 960}], 'source': {'height': 836, 'url': 'https://preview.redd.it/0howo9plfkfe1.jpeg?auto=webp&s=68ca0bef69141d02421b63def7bbb9318ed61b93', 'width': 1079}, 'variants': {}}]} |
|||
What if? | 0 | What if the efficiency of DeepSeek has something to do with the unique Chinese writing script itself?
Unlike English, where words are typically composed of multiple letters, Chinese can convey entire words or concepts within a single character or a pair of characters.
This means that Chinese text has a much higher information density per token, which might result in smaller base models of the same quality.
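One quick way to poke at this claim (a rough check using GPT-4's cl100k tokenizer via tiktoken as a stand-in - DeepSeek's own tokenizer would be the fairer test):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    en = "Artificial intelligence will change the world."
    zh = "人工智能将改变世界。"
    print(len(enc.encode(en)), "tokens (EN) vs", len(enc.encode(zh)), "tokens (ZH)")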
| 2025-01-27T16:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ibdmog/what_if/ | Vaddieg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibdmog | false | null | t3_1ibdmog | /r/LocalLLaMA/comments/1ibdmog/what_if/ | false | false | self | 0 | null |
vLLM guided_choice logprob influence | 1 | Hi,
I am looking at using logprob conversions as a threshold of model confidence when classifying prompt statements, i.e. "Return True/False for the above statement". This could also be used for multiple-choice answering.
The vLLM engine's completions endpoint supports additional parameters for guided\_choice using either outlines or lm-format-enforcer. However, even using the below, BPE still tokenizes True/False into subwords:
from openai import OpenAI

# Assumes a local vLLM OpenAI-compatible server on the default port
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.completions.create(
    model="",  # the served model name
    prompt="The sky is blue. True/False: ",
    max_tokens=1000,
    temperature=0,
    logprobs=5,
    extra_body={
        "guided_choice": ["True", "False"],
        "guided_decoding_backend": "lm-format-enforcer",
    },
)
Example, top\_logprobs\[0\]:
{'Fa': -6.141696929931641,
'Fal': -6.126071929931641,
'False': -0.4385721683502197,
'Tr': -5.251071929931641,
'True': -1.0635721683502197}
Sometimes the full token may not be present as a key in the top\_logprobs. Is there a way to specify in the client request to prevent subword token options, for a consistent probability comparison?
One solution I did think of was instead setting guided\_choice to choices that exist as a single token in the vocab, e.g. "guided\_choice": \["T", "F"\]
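Building on the request above, and assuming single-token choices so the keys line up, turning the returned logprob into a confidence score is just an exponential:

    import math

    top = resp.choices[0].logprobs.top_logprobs[0]   # first generated position
    probs = {tok: math.exp(lp) for tok, lp in top.items()}
    pred = max(probs, key=probs.get)
    print(pred, round(probs[pred], 3))               # threshold this as "confidence"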
What are people's experiences in this use case? Thanks
| 2025-01-27T16:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ibdpxm/vllm_guided_choice_logprob_influence/ | Lower_Tutor5470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibdpxm | false | null | t3_1ibdpxm | /r/LocalLLaMA/comments/1ibdpxm/vllm_guided_choice_logprob_influence/ | false | false | self | 1 | null |
What is stopping US large companies from hosting DeepSeek themselves? | 21 | I keep hearing a lot of fearmongering about how DeepSeek is censoring input on their web service, and this and that, across a lot of threads (I understand some valid privacy concerns, but feel they are overblown and can be countered). So what is stopping US corporations like Google, xAI and others from hosting DeepSeek on their end?
I feel a lot of the fearmongering is because what US corporations wanted to sell at huge profit margins ($200 a month) was made available for free, and now their investments are at risk; a lot of the misinformation sounds like propaganda from them. But why not just take the opportunity to host it themselves? With their better hardware they could serve it at much better TPS and capacity - after all, their license allows it!
I am also aware of providers offering inference on OpenRouter, but those small companies don't have enough compute to serve R1 at a decent/usable TPS.
Thought prompting has a real impact on results on DeepSeek R1 | 0 | I don't know if I'm the only one who's noticed, but there seems to be real importance to thought prompting with R1. As soon as the model is asked to think like a domain expert, or to think according to particular rules, criteria or instructions, all of a sudden the result becomes much more interesting and the thinking becomes much more relevant to certain complex problems.
Which makes me think that o1 is probably even better in some areas today, because there seems to be specific pre-prompting applied before the prompt is sent. You can see this in the fact that when the same techniques used with DeepSeek are applied to o1, i.e. attempting a kind of prompting by assigning a role/way of thinking, it often proves very counter-productive and degrades the quality of the result.
It's 2025. What UI are you using to interact with your models? | 2 | I've been using oobabooga and ollama, but I stepped away for a few months to work on some other things... now I'm back in and -wow-, amazing new toys!
I'm wondering what may have changed in the UI scene - what are you using, and why do you like it? | 2025-01-27T16:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ibdxax/its_2025_what_ui_are_you_using_to_interact_with/ | bidet_enthusiast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibdxax | false | null | t3_1ibdxax | /r/LocalLLaMA/comments/1ibdxax/its_2025_what_ui_are_you_using_to_interact_with/ | false | false | self | 2 | null |
LLaMA inference in pure C++ from scratch | 1 | [removed] | 2025-01-27T16:59:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ibdxm8/llama_inference_in_pure_c_from_scratch/ | joe_projekt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibdxm8 | false | null | t3_1ibdxm8 | /r/LocalLLaMA/comments/1ibdxm8/llama_inference_in_pure_c_from_scratch/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'A_RTIMYEkyTixTiVWYzJRSRFiMsc5a0Sj3SETGG9Gj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=108&crop=smart&auto=webp&s=c774ccf7bd26d60a29be576bfd3b9d57e905818b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=216&crop=smart&auto=webp&s=94967d4590e3878435d761f1af03f8ed369580af', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=320&crop=smart&auto=webp&s=c3485b7ed693ba537d431c2b88317b0b4ba708e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=640&crop=smart&auto=webp&s=b52c7bf45a656978591b2ad945b1411473aad4f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=960&crop=smart&auto=webp&s=5305f2625ab038c32d5cd2d3e29b913778a93fe4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?width=1080&crop=smart&auto=webp&s=7abc76b27506e0939235241a5b6e9e9604f7b7d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0A8Z5vQtkTUQkMVWQ0qgWdI1kTAdspi5utL24aL9c2Y.jpg?auto=webp&s=20b8a2d7add24cfd3be887b56e691ac896016157', 'width': 1200}, 'variants': {}}]} |
RTX 3060 12GB vs RX 7600 XT 16GB for LLM and general use? | 7 | Hey everyone!
I'm trying to decide between the **RTX 3060 12GB** and the **RX 7600 XT 16GB** for my next GPU upgrade. Here’s what I plan to use it for:
* Running **Ollama** (LLM inference locally).
* General gaming and productivity tasks (like coding and I would like to use the tool [continue.dev](https://www.continue.dev/)).
I know that NVIDIA GPUs are well-supported across all tools and tend to run things smoothly, but I’ve also seen that AMD now has **ROCm support** for these kinds of workloads.
However, I don’t know how solid the ROCm ecosystem is right now. Does anyone have experience with it? Is it a viable choice for running AI/ML workloads like Ollama?
Here’s the key comparison:
* **RX 7600 XT**:
* 16GB VRAM, which sounds awesome for AI and future-proofing.
* **345 EUR**.
* **RTX 3060 12GB**:
* 12GB VRAM, which is still decent.
* **293 EUR**.
* Excellent driver support and compatibility for AI workloads like Ollama.
I'm leaning slightly towards the RX 7600 XT because of the larger VRAM, but I'm worried about compatibility issues and whether ROCm is good enough yet. On the other hand, NVIDIA seems to be the safe choice, but the lower VRAM might hurt in the long term.
Which one would you recommend for my use case? Any personal experiences with these GPUs or insights into how well ROCm performs for AI workloads would be really helpful!
Thanks in advance! 😊 | 2025-01-27T17:03:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ibe1l7/rtx_3060_12gb_vs_rx_7600_xt_16gb_for_llm_and/ | fugxto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibe1l7 | false | null | t3_1ibe1l7 | /r/LocalLLaMA/comments/1ibe1l7/rtx_3060_12gb_vs_rx_7600_xt_16gb_for_llm_and/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401}, 'variants': {}}]} |
Thoughts? I kinda feel happy about this... | 967 | 2025-01-27T17:03:47 | Butefluko | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibe1ro | false | null | t3_1ibe1ro | /r/LocalLLaMA/comments/1ibe1ro/thoughts_i_kinda_feel_happy_about_this/ | false | false | 967 | {'enabled': True, 'images': [{'id': '3WHbJcYxNwVuCihhGh7BCAEcdmq0b5OMe8eBuAM3YKM', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/6b78kpulikfe1.png?width=108&crop=smart&auto=webp&s=77138407959a12e5369275d992a4fa1ad7c67a60', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/6b78kpulikfe1.png?width=216&crop=smart&auto=webp&s=64b0cd39eee5086a0559425e0ac8fc5c1db6bd9f', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/6b78kpulikfe1.png?width=320&crop=smart&auto=webp&s=de502849a0d146742dc73149eb3293b80444dc76', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/6b78kpulikfe1.png?width=640&crop=smart&auto=webp&s=4f14a6edf107ecbf10ce8c437a2e0826bef5af67', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/6b78kpulikfe1.png?width=960&crop=smart&auto=webp&s=e413e85b029dd5df64c551098727530c56d06490', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/6b78kpulikfe1.png?width=1080&crop=smart&auto=webp&s=79fdcb4f2e37aa9aaf183559c4ec4f29be221df6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/6b78kpulikfe1.png?auto=webp&s=130c4d6a1ace9bde389e46d412dbff058d7c4bfb', 'width': 1080}, 'variants': {}}]} |
|||
Is normal to deepseek uses only CPU? | 1 | [removed] | 2025-01-27T17:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ibe6f9/is_normal_to_deepseek_uses_only_cpu/ | spool276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibe6f9 | false | null | t3_1ibe6f9 | /r/LocalLLaMA/comments/1ibe6f9/is_normal_to_deepseek_uses_only_cpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
Nvidia faces $465 billion loss as DeepSeek disrupts AI market, largest in US market history | 356 | 2025-01-27T17:09:57 | https://www.financialexpress.com/business/investing-abroad-nvidia-faces-465-billion-loss-as-deepseek-disrupts-ai-market-3728093/ | fallingdowndizzyvr | financialexpress.com | 1970-01-01T00:00:00 | 0 | {} | 1ibe7dn | false | null | t3_1ibe7dn | /r/LocalLLaMA/comments/1ibe7dn/nvidia_faces_465_billion_loss_as_deepseek/ | false | false | default | 356 | null |
|
Our LLM | 1 | 2025-01-27T17:12:34 | NayamAmarshe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibe9px | false | null | t3_1ibe9px | /r/LocalLLaMA/comments/1ibe9px/our_llm/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'eYLcU5FSY6t70AieDCnn36yrOpVk16mRPk_RsUq-NrI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/jvm8m3j6kkfe1.jpeg?width=108&crop=smart&auto=webp&s=1852bbbea39710f1b0aac08f9ffee1fb6cf84e88', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/jvm8m3j6kkfe1.jpeg?width=216&crop=smart&auto=webp&s=a2f8063f19f729255a22a84530a1019d819ed146', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/jvm8m3j6kkfe1.jpeg?width=320&crop=smart&auto=webp&s=671abf07f1c545af4d39547b254afa8c75477dc4', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/jvm8m3j6kkfe1.jpeg?width=640&crop=smart&auto=webp&s=d0edbc050f549659533dc4bd74bbc278c7aeb0d6', 'width': 640}, {'height': 537, 'url': 'https://preview.redd.it/jvm8m3j6kkfe1.jpeg?width=960&crop=smart&auto=webp&s=7337c745a8d2012e0d158832f8c5e1b81a2f9f50', 'width': 960}, {'height': 604, 'url': 'https://preview.redd.it/jvm8m3j6kkfe1.jpeg?width=1080&crop=smart&auto=webp&s=d6c2a85431841ed42dafcd6c4e93fffa87060eb8', 'width': 1080}], 'source': {'height': 1680, 'url': 'https://preview.redd.it/jvm8m3j6kkfe1.jpeg?auto=webp&s=4b79fa66f8a828e459a94d14ecd8c856039b7b55', 'width': 3000}, 'variants': {}}]} |
|||
I guess if it had Nvidia's tech, China would make a 240GB-VRAM GPU that benefits everyone. Sad. | 0 | I wish the Chinese could do more. | 2025-01-27T17:12:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ibe9vv/i_guess_if_has_nvidias_tech_china_will_make_a/ | ImprovementEqual3931 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibe9vv | false | null | t3_1ibe9vv | /r/LocalLLaMA/comments/1ibe9vv/i_guess_if_has_nvidias_tech_china_will_make_a/ | false | false | self | 0 | null
Jailbreaking DeepSeek: Sweary haiku about [redacted] | 33 | 2025-01-27T17:16:41 | https://v.redd.it/o86owjtqkkfe1 | Time-Winter-4319 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibedgi | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/o86owjtqkkfe1/DASHPlaylist.mpd?a=1740590215%2CMWE4YTQ5NjJiOGI2NTk4Y2YxOGJjMDY2MmI2MDA1YWViNDIwOTI4ZjMxYTllNmMwMDYwZDA1M2U3ZjZmMTU0Zg%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/o86owjtqkkfe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/o86owjtqkkfe1/HLSPlaylist.m3u8?a=1740590215%2CYTMwZDY1ZGQ2ODIxMmQwZTFiOGFkODg1OWVlYjViMWEwMDgyZWM1Mzk3ZjhkMDhmM2I3ZWMyODI3MWI3NTc3Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o86owjtqkkfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1150}} | t3_1ibedgi | /r/LocalLLaMA/comments/1ibedgi/jailbreaking_deepseek_sweary_haiku_about_redacted/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=4ddf8094fbaaa3699c778525c536169413724843', 'width': 108}, {'height': 202, 'url': 'https://external-preview.redd.it/OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=a04bc6883f4d2c9c36aeb35f1952be4ed7501284', 'width': 216}, {'height': 300, 'url': 'https://external-preview.redd.it/OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=5a5fc416eb8360421e99e1d2b06944114bdc00c7', 'width': 320}, {'height': 600, 'url': 'https://external-preview.redd.it/OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=9716acc4ae08617e3593d3154e38ef738409dff7', 'width': 640}, {'height': 900, 'url': 'https://external-preview.redd.it/OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=667d5c34996c8be3ea6f65e1f837a5397c3c0ca7', 'width': 960}, {'height': 1013, 'url': 'https://external-preview.redd.it/OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b0ac738a593d06403c906dd1f8e496f8158e1115', 'width': 1080}], 'source': {'height': 1552, 'url': 'https://external-preview.redd.it/OXRiYXdwdHFra2ZlMRTVpkWRwftFqNp2hlTpeOI6hMMMLBV-dAtEKMuzejtZ.png?format=pjpg&auto=webp&s=d571c8b752059c674e3098d14c16415f927a1583', 'width': 1654}, 'variants': {}}]} |
||
What if AI models can be trained and run on a Casio FX300-MS | 1 | [removed] | 2025-01-27T17:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ibeexr/what_if_ai_models_can_be_trained_and_run_on_a/ | solidpoopchunk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ibeexr | false | null | t3_1ibeexr | /r/LocalLLaMA/comments/1ibeexr/what_if_ai_models_can_be_trained_and_run_on_a/ | false | false | self | 1 | null |
OpenAI employee’s reaction to Deepseek | 8,537 | 2025-01-27T17:23:12 | bruhlmaocmonbro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ibej82 | false | null | t3_1ibej82 | /r/LocalLLaMA/comments/1ibej82/openai_employees_reaction_to_deepseek/ | false | false | 8,537 | {'enabled': True, 'images': [{'id': 'fXlG2qmUwipgsjWLLYDUd6bDxlcCqlnpJ9-SLoA29pg', 'resolutions': [{'height': 160, 'url': 'https://preview.redd.it/ij7ubrn3mkfe1.jpeg?width=108&crop=smart&auto=webp&s=c40f5e933d0e0fd03987b90e2bee36793bef2a71', 'width': 108}, {'height': 321, 'url': 'https://preview.redd.it/ij7ubrn3mkfe1.jpeg?width=216&crop=smart&auto=webp&s=6833fa87ce30a13b5babc00f7f5c48e29341996c', 'width': 216}, {'height': 476, 'url': 'https://preview.redd.it/ij7ubrn3mkfe1.jpeg?width=320&crop=smart&auto=webp&s=08603c247cb8683df8030bc09c3a61bd3345fa29', 'width': 320}, {'height': 953, 'url': 'https://preview.redd.it/ij7ubrn3mkfe1.jpeg?width=640&crop=smart&auto=webp&s=db93fc1e3aea11120926d14eefcc127a43118a66', 'width': 640}, {'height': 1430, 'url': 'https://preview.redd.it/ij7ubrn3mkfe1.jpeg?width=960&crop=smart&auto=webp&s=f0fad9ef66a3484b09f803dd8781ca89e2b85cbd', 'width': 960}, {'height': 1608, 'url': 'https://preview.redd.it/ij7ubrn3mkfe1.jpeg?width=1080&crop=smart&auto=webp&s=d0c31fa71adebe44bca0d278f8c643d840c31be6', 'width': 1080}], 'source': {'height': 1743, 'url': 'https://preview.redd.it/ij7ubrn3mkfe1.jpeg?auto=webp&s=5681256b784c8368268ca7bb799bc2986b45affb', 'width': 1170}, 'variants': {}}]} |