title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns]) | url (string, 0–780 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns]) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DeepSeek Reasoning API | 1 | [removed] | 2025-01-29T08:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1icp9ws/deepseek_reasoning_api/ | mysteryhumpf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icp9ws | false | null | t3_1icp9ws | /r/LocalLLaMA/comments/1icp9ws/deepseek_reasoning_api/ | false | false | self | 1 | null |
Its hard to find information plz help. | 1 | [removed] | 2025-01-29T08:37:15 | https://www.reddit.com/r/LocalLLaMA/comments/1icpdzj/its_hard_to_find_information_plz_help/ | GutterGuy0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpdzj | false | null | t3_1icpdzj | /r/LocalLLaMA/comments/1icpdzj/its_hard_to_find_information_plz_help/ | false | false | self | 1 | null |
Scalability and High Performance LLM Inferencing | 1 | [removed] | 2025-01-29T08:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1icpfdx/scalability_and_high_performance_llm_inferencing/ | Secret_Dog8438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpfdx | false | null | t3_1icpfdx | /r/LocalLLaMA/comments/1icpfdx/scalability_and_high_performance_llm_inferencing/ | false | false | self | 1 | null |
Downloading chinese models before USA blocks | 1 | 2025-01-29T08:42:08 | https://imgflip.com/gif/9ibjok | Leflakk | imgflip.com | 1970-01-01T00:00:00 | 0 | {} | 1icpg8z | false | null | t3_1icpg8z | /r/LocalLLaMA/comments/1icpg8z/downloading_chinese_models_before_usa_blocks/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'HgV1Njo2dzzsDUZ0yNrUznrEyPYw-5MGqgED_-mgtnM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?width=108&crop=smart&auto=webp&s=d0450d84083793a3b99d401dfdb6fadba9b2442d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?width=216&crop=smart&auto=webp&s=3f12125a000a114db7f184edd91d05fe37344e3f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?width=320&crop=smart&auto=webp&s=c8a5ac2fe4db46e6d4d18f0e30203a42d1b054b6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?auto=webp&s=b19dce6e1cb4badfc1a10cefdb515f932b8d09b2', 'width': 360}, 'variants': {}}]} |
||
How are closed API companies functioning? | 1 | [removed] | 2025-01-29T08:43:44 | https://www.reddit.com/r/LocalLLaMA/comments/1icpgyj/how_are_closed_api_companies_functioning/ | According_Fig_4784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpgyj | false | null | t3_1icpgyj | /r/LocalLLaMA/comments/1icpgyj/how_are_closed_api_companies_functioning/ | false | false | self | 1 | null |
Downloading chinese models | 1 | [removed] | 2025-01-29T08:44:19 | https://www.reddit.com/r/LocalLLaMA/comments/1icph84/downloading_chinese_models/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icph84 | false | null | t3_1icph84 | /r/LocalLLaMA/comments/1icph84/downloading_chinese_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HgV1Njo2dzzsDUZ0yNrUznrEyPYw-5MGqgED_-mgtnM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?width=108&crop=smart&auto=webp&s=d0450d84083793a3b99d401dfdb6fadba9b2442d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?width=216&crop=smart&auto=webp&s=3f12125a000a114db7f184edd91d05fe37344e3f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?width=320&crop=smart&auto=webp&s=c8a5ac2fe4db46e6d4d18f0e30203a42d1b054b6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fyc0AQHDNg4dJJHhLZw7rjBw9Bcq5J72z_MYL292jLA.jpg?auto=webp&s=b19dce6e1cb4badfc1a10cefdb515f932b8d09b2', 'width': 360}, 'variants': {}}]} |
Any AMD or Intel desktop PC processors with unified RAM for desktop for LLM inference and gaming launching before Q2 2025? | 4 | https://www.amd.com/en/products/processors/laptop/ryzen/ai-300-series/amd-ryzen-ai-max-plus-395.html
Are these Ryzen AI MAX processors with unified RAM compatible with desktop PCs?
Currently, I am building a PC for VR gaming and LLM inference. I am about to buy a 9800X3D; if I wait a couple of quarters, can I use Ryzen AI processors in my new desktop build?
Which motherboards are compatible with these new processors? I was planning on an AMD B850 motherboard.
Not having 3D cache and bit lower FPS for much accomodating larger LLM is fine. | 2025-01-29T08:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1icphbw/any_amd_or_intel_desktop_pc_processors_with/ | meta_voyager7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icphbw | false | null | t3_1icphbw | /r/LocalLLaMA/comments/1icphbw/any_amd_or_intel_desktop_pc_processors_with/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 't_rTRi3XCwrMxkajToMRkbrp2TBYLgSPfN2upBwtBhk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/69cO50X6ahYHGsvNGVWykMhzK0IsiTfS5KgJ-C-YduQ.jpg?width=108&crop=smart&auto=webp&s=973d81209beb65eb94060e80bc6a1b7a296af203', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/69cO50X6ahYHGsvNGVWykMhzK0IsiTfS5KgJ-C-YduQ.jpg?width=216&crop=smart&auto=webp&s=8c2f4c12b2a703da4f691e3a74d9cea81cdb6f77', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/69cO50X6ahYHGsvNGVWykMhzK0IsiTfS5KgJ-C-YduQ.jpg?width=320&crop=smart&auto=webp&s=e1f644b1e8eff2d1b49bd19cbd1638c9c7235557', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/69cO50X6ahYHGsvNGVWykMhzK0IsiTfS5KgJ-C-YduQ.jpg?width=640&crop=smart&auto=webp&s=ef0d28c07c8d18201b60d6fb0df8095eb4b1f2da', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/69cO50X6ahYHGsvNGVWykMhzK0IsiTfS5KgJ-C-YduQ.jpg?width=960&crop=smart&auto=webp&s=c92238beba4492ccd42162017fbc88f7c27ae797', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/69cO50X6ahYHGsvNGVWykMhzK0IsiTfS5KgJ-C-YduQ.jpg?width=1080&crop=smart&auto=webp&s=50e4c1c2de76667004ef678a175cf72e5d770626', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/69cO50X6ahYHGsvNGVWykMhzK0IsiTfS5KgJ-C-YduQ.jpg?auto=webp&s=c6302c1b9add022cd972c054adc1669a397edaf8', 'width': 1200}, 'variants': {}}]} |
How to run deepseek r1 on 4xH100 | 22 | https://github.com/sanjay920/run_deepseek_r1
---
## Throughput Achieved
- DeepSeek R1 running on a 4×H100 setup reached a generation rate of **25 tokens/second**.
- Over an hour, that amounts to **90,000 output tokens**.
## Compute Costs on Lambda Cloud
- Running 4×H100 GPUs on Lambda Cloud costs **$12.36 per hour**.
- Generating 90k tokens in one hour results in an estimated **$137 per 1 million tokens** (based on 11.1 hours needed to generate 1M tokens).
## Comparison to OpenAI O1 Pricing
- OpenAI O1 charges **$60 per 1 million output tokens**, making it roughly **2× to 2.5× cheaper** than this self-hosted setup at the current throughput.
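As a sanity check of the arithmetic above, in Python (all figures are the post's own):

```python
tokens_per_second = 25
gpu_cost_per_hour = 12.36  # 4x H100 on Lambda Cloud, USD

tokens_per_hour = tokens_per_second * 3600           # 90,000 tokens/hour
hours_per_million = 1_000_000 / tokens_per_hour      # ~11.1 hours
cost_per_million = hours_per_million * gpu_cost_per_hour

print(f"${cost_per_million:.0f} per 1M output tokens")  # ~$137
```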
| 2025-01-29T08:45:26 | https://www.reddit.com/r/LocalLLaMA/comments/1icphqa/how_to_run_deepseek_r1_on_4xh100/ | sanjay920 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icphqa | false | null | t3_1icphqa | /r/LocalLLaMA/comments/1icphqa/how_to_run_deepseek_r1_on_4xh100/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'gWlC6aQDpsCUMJY8fcGqaZPpbm0khpWYOkklRGfpN5U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OB8GXbqghLVegpemCucgX1cr5JZtLEOOLNTQ-dF73sU.jpg?width=108&crop=smart&auto=webp&s=f3910ba676336a68cf5e3649d65bf49782fd6cc6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OB8GXbqghLVegpemCucgX1cr5JZtLEOOLNTQ-dF73sU.jpg?width=216&crop=smart&auto=webp&s=b0433a2cbb29e9b69ff83d758dc54cf1090016aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OB8GXbqghLVegpemCucgX1cr5JZtLEOOLNTQ-dF73sU.jpg?width=320&crop=smart&auto=webp&s=48271a008e121bf8e78d1d522a07b8c2b0ac7a6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OB8GXbqghLVegpemCucgX1cr5JZtLEOOLNTQ-dF73sU.jpg?width=640&crop=smart&auto=webp&s=0ed8eeb3e829b65febbffaa2ec1e7f7a559dba23', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OB8GXbqghLVegpemCucgX1cr5JZtLEOOLNTQ-dF73sU.jpg?width=960&crop=smart&auto=webp&s=ad1e9670f0576071bf856b9be50cb31575661413', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OB8GXbqghLVegpemCucgX1cr5JZtLEOOLNTQ-dF73sU.jpg?width=1080&crop=smart&auto=webp&s=fbc278169c27b8947fda989d402753e9f270862b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OB8GXbqghLVegpemCucgX1cr5JZtLEOOLNTQ-dF73sU.jpg?auto=webp&s=51f26d2a87b1bf99fb510c594a0abada4b3342da', 'width': 1200}, 'variants': {}}]} |
How are closed API companies functioning? | 1 | [removed] | 2025-01-29T08:46:51 | https://www.reddit.com/r/LocalLLaMA/comments/1icpicx/how_are_closed_api_companies_functioning/ | According_Fig_4784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpicx | false | null | t3_1icpicx | /r/LocalLLaMA/comments/1icpicx/how_are_closed_api_companies_functioning/ | false | false | self | 1 | null |
How to Import DeepSeek 1.5B Model into LM Studio? + Best Studios for Running LLMs | 1 | [removed] | 2025-01-29T08:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1icpiwh/how_to_import_deepseek_15b_model_into_lm_studio/ | zennobody | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpiwh | false | null | t3_1icpiwh | /r/LocalLLaMA/comments/1icpiwh/how_to_import_deepseek_15b_model_into_lm_studio/ | false | false | self | 1 | null |
Me downloading chinese models | 1 | 2025-01-29T08:48:37 | Leflakk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icpj3x | false | null | t3_1icpj3x | /r/LocalLLaMA/comments/1icpj3x/me_downloading_chinese_models/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'AKZaEIyd9pkNYiQ-ExXZrH18dgKfEHxTliBH5kA67ZE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=108&crop=smart&format=png8&s=c68e09c15dc9a03ef6cb00a5ebf16fd816068495', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=216&crop=smart&format=png8&s=68a916561332372df1a828294f4fb23aba0103bb', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=320&crop=smart&format=png8&s=bf11b18562b0d102fd8721bf9611ecb0ddb2eab4', 'width': 320}], 'source': {'height': 360, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?format=png8&s=45ace4dbfa16d9cb3544134b6f85e14da21c284b', 'width': 360}, 'variants': {'gif': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=108&crop=smart&s=6f4ada131b9be5a23ae4700ba57013f25749389e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=216&crop=smart&s=d63e3551cf46414539541dd5bab3d17cc2dc5a8a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=320&crop=smart&s=1085d313842890e0c971d6f94a9ca364ea2329d3', 'width': 320}], 'source': {'height': 360, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?s=ac9810ef2bd08a78094b5de8884743042b8ca491', 'width': 360}}, 'mp4': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=108&format=mp4&s=c2f9559f2f5902b34014a93c18d44a88bf99681c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=216&format=mp4&s=a210828eddc3e5cc237e42635a38408ea8def4d7', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?width=320&format=mp4&s=37337f790008af785aa2bd683e345e56c321b194', 'width': 320}], 'source': {'height': 360, 'url': 'https://preview.redd.it/0g3bom73cwfe1.gif?format=mp4&s=d7df9c2c08813cf6f6c40376317999a922f0393b', 'width': 360}}}}]} |
|||
NobodyWho 4.4 | 6 | Hey there, NobodyWho here. Apart from the last week of December, we have been working hard on the stability of our plugin.
That means we recently released 4.4 with some great features, better performance, and QOL changes:
* Context shifting, which basically allows you to have infinite conversations with your character, regardless of context length
* In-editor documentation
* Support for custom chat templates
* Better examples in our readme
* Lots of sampler variations and configuration types
* a bunch of bug fixes
We will also be adding a small QOL feature with the arrival of the new R1 models, which allows you to hide the thinking tags from your responses.
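(If you want that behavior before the release ships, stripping the tags yourself is a one-liner; this is a generic sketch, not NobodyWho's actual API:)

```python
import re

def strip_thinking(text: str) -> str:
    # Drop <think>...</think> blocks, including multiline reasoning.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_thinking("<think>pondering...</think>The actual answer."))  # The actual answer.
```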
If you want to know more check out our repo and give us a star, that would be very appreciated!
Also, we are doing a game jam next weekend with prizes. So if you haven't tried our plugin that is a great opportunity to check it out! | 2025-01-29T08:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1icpjp1/nobodywho_44/ | No_Abbreviations_532 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpjp1 | false | null | t3_1icpjp1 | /r/LocalLLaMA/comments/1icpjp1/nobodywho_44/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'GbzKr1H0z4y_2V9qPtu_61DbbYPRxiKam5T62Y9_znI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ly7-CYwL3sMmTI0Lp1xjnDbGC-nWPQDGjZTiSsDLYKU.jpg?width=108&crop=smart&auto=webp&s=1c9e31ba373055f8ed2bdd319120439abc91f58e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ly7-CYwL3sMmTI0Lp1xjnDbGC-nWPQDGjZTiSsDLYKU.jpg?width=216&crop=smart&auto=webp&s=7a636c58fa54cab2a08d056e6bd84fe410ae100a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ly7-CYwL3sMmTI0Lp1xjnDbGC-nWPQDGjZTiSsDLYKU.jpg?width=320&crop=smart&auto=webp&s=7c2f866fc22681bc8ac5670901ca3a46d70c703f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ly7-CYwL3sMmTI0Lp1xjnDbGC-nWPQDGjZTiSsDLYKU.jpg?width=640&crop=smart&auto=webp&s=2f01c11cefd4a8a508f9621065ad87d2e7bd11c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ly7-CYwL3sMmTI0Lp1xjnDbGC-nWPQDGjZTiSsDLYKU.jpg?width=960&crop=smart&auto=webp&s=ac2e9b4dcbdda3995732c84dbb925af5af1a1d2c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ly7-CYwL3sMmTI0Lp1xjnDbGC-nWPQDGjZTiSsDLYKU.jpg?width=1080&crop=smart&auto=webp&s=c4fd34e56b84a3dcb6bd25ad2a4da9505f36b11c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ly7-CYwL3sMmTI0Lp1xjnDbGC-nWPQDGjZTiSsDLYKU.jpg?auto=webp&s=177c39a2876c8f063b3b47453b9db933f8ecafae', 'width': 1200}, 'variants': {}}]} |
Microsoft Probing If DeepSeek-Linked Group Improperly Obtained OpenAI Data | 15 | 2025-01-29T08:52:51 | https://www.bloomberg.com/news/articles/2025-01-29/microsoft-probing-if-deepseek-linked-group-improperly-obtained-openai-data | VanillaSecure405 | bloomberg.com | 1970-01-01T00:00:00 | 0 | {} | 1icpl14 | false | null | t3_1icpl14 | /r/LocalLLaMA/comments/1icpl14/microsoft_probing_if_deepseeklinked_group/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'LInMQsb7idhOUL_9-Di29Mw5k9uN4mBGbBlZ1a11uJ4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/PrnJotfWqNic_OjuUg48v_ZxFAHmxMDus-CXmMuFxwM.jpg?width=108&crop=smart&auto=webp&s=2a798033619389fc510956ea7e1ad0c5ce5160d4', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/PrnJotfWqNic_OjuUg48v_ZxFAHmxMDus-CXmMuFxwM.jpg?width=216&crop=smart&auto=webp&s=737dc065903da7519b2a0c7603b339f1ff9b3403', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/PrnJotfWqNic_OjuUg48v_ZxFAHmxMDus-CXmMuFxwM.jpg?width=320&crop=smart&auto=webp&s=34c596e48c715fc47b6c414f8cd9eea9377a75d7', 'width': 320}, {'height': 427, 'url': 'https://external-preview.redd.it/PrnJotfWqNic_OjuUg48v_ZxFAHmxMDus-CXmMuFxwM.jpg?width=640&crop=smart&auto=webp&s=2400f373cf3cbfb8922104aee6264ff927cc0041', 'width': 640}, {'height': 641, 'url': 'https://external-preview.redd.it/PrnJotfWqNic_OjuUg48v_ZxFAHmxMDus-CXmMuFxwM.jpg?width=960&crop=smart&auto=webp&s=20c6649168e99842e5d34e1a5fa4a749efcd62b4', 'width': 960}, {'height': 721, 'url': 'https://external-preview.redd.it/PrnJotfWqNic_OjuUg48v_ZxFAHmxMDus-CXmMuFxwM.jpg?width=1080&crop=smart&auto=webp&s=18f7ea002330bff0cf70b29b3143e818842557f9', 'width': 1080}], 'source': {'height': 802, 'url': 'https://external-preview.redd.it/PrnJotfWqNic_OjuUg48v_ZxFAHmxMDus-CXmMuFxwM.jpg?auto=webp&s=b27a7cfa4c0e55e796447d946364753e47e07d3e', 'width': 1200}, 'variants': {}}]} |
||
About DeepSeek's "censorship" | 1 | [removed] | 2025-01-29T08:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/1icpm00/about_deepseeks_censorship/ | Zalathustra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpm00 | false | null | t3_1icpm00 | /r/LocalLLaMA/comments/1icpm00/about_deepseeks_censorship/ | false | false | self | 1 | null |
Downloading chinese models | 1 | 2025-01-29T08:55:14 | https://www.reddit.com/gallery/1icpm5p | Leflakk | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1icpm5p | false | null | t3_1icpm5p | /r/LocalLLaMA/comments/1icpm5p/downloading_chinese_models/ | false | false | 1 | null |
||
Swapping MOE into VRAM | 1 | Why doesn't MoE allow us to load the full model into RAM, and only load the active experts into VRAM as needed? Why can't an MoE model be structured as discrete, split parts? | 2025-01-29T08:58:54 | https://www.reddit.com/r/LocalLLaMA/comments/1icpns9/swapping_moe_into_vram/ | kaisurniwurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpns9 | false | null | t3_1icpns9 | /r/LocalLLaMA/comments/1icpns9/swapping_moe_into_vram/ | false | false | self | 1 | null |
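The catch with that approach is that the active experts change on every token, so naive swapping turns PCIe bandwidth into the bottleneck. A rough back-of-envelope sketch (the bandwidth and quantization figures are ballpark assumptions):

```python
# If the ~37B active parameters of DeepSeek-V3/R1 had to cross PCIe for each token:
active_params = 37e9
bytes_per_param = 0.5      # 4-bit quantization (assumption)
pcie_gb_per_s = 32         # PCIe 4.0 x16, roughly

gb_per_token = active_params * bytes_per_param / 1e9  # ~18.5 GB moved per token
print(f"{gb_per_token / pcie_gb_per_s:.2f} s/token")  # ~0.6 s/token on transfers alone
```

In practice, runtimes avoid the per-token transfer by computing the RAM-resident layers on the CPU instead of streaming them to the GPU, which is why MoE-on-small-VRAM works but is slow.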
How DeepSeek is able to compete with Ai startups like Open Ai. | 1 | [removed] | 2025-01-29T09:12:43 | https://www.reddit.com/r/LocalLLaMA/comments/1icpu62/how_deepseek_is_able_to_compete_with_ai_startups/ | unknownstudentoflife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpu62 | false | null | t3_1icpu62 | /r/LocalLLaMA/comments/1icpu62/how_deepseek_is_able_to_compete_with_ai_startups/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]} |
How DeepSeek is able to compete with Ai startups like Open Ai. | 1 | [removed] | 2025-01-29T09:19:03 | https://www.reddit.com/r/LocalLLaMA/comments/1icpx66/how_deepseek_is_able_to_compete_with_ai_startups/ | unknownstudentoflife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icpx66 | false | null | t3_1icpx66 | /r/LocalLLaMA/comments/1icpx66/how_deepseek_is_able_to_compete_with_ai_startups/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]} |
Arm64 AI code assistant with no GPU? | 0 | Greetings all,
I'm considering buying an Ampere Altra Arm64 system like [System76's offering](https://system76.com/desktops/thelio-astra-a1-n1/configure) or something similar, purely for local code-assistant purposes using [Tabby](https://github.com/TabbyML/tabby) or something similar.
I'm considering not buying a GPU for these reasons.
* Fan noise
* Hot climate
* CPU/GPU heating up the room
I've had systems with big graphics cards before, and living in a hot climate, it is uncomfortable due to noise and heat.
I'm planning on a large memory bank to accommodate a large model.
Is it fine to use a local LLM without a GPU? | 2025-01-29T09:29:00 | https://www.reddit.com/r/LocalLLaMA/comments/1icq1uq/arm64_ai_code_assistant_with_no_gpu/ | lickety-split1800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icq1uq | false | null | t3_1icq1uq | /r/LocalLLaMA/comments/1icq1uq/arm64_ai_code_assistant_with_no_gpu/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4EJaYmYuZmwK6svmQEMf7CvJvst8qZ6CneCVqtvOgcQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/4mTpERLyhoaU27vqlC4Jmudp-XC04iugeoUgQKY3iDI.jpg?width=108&crop=smart&auto=webp&s=473cf5def47cde392a34c5d506200103cde373d2', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/4mTpERLyhoaU27vqlC4Jmudp-XC04iugeoUgQKY3iDI.jpg?width=216&crop=smart&auto=webp&s=2e7953439e6db2e47782f3bd4f8d2972a7421c2f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/4mTpERLyhoaU27vqlC4Jmudp-XC04iugeoUgQKY3iDI.jpg?width=320&crop=smart&auto=webp&s=6ec8ddbab9485075e7c9ca98fa3b4708b8422e9d', 'width': 320}], 'source': {'height': 386, 'url': 'https://external-preview.redd.it/4mTpERLyhoaU27vqlC4Jmudp-XC04iugeoUgQKY3iDI.jpg?auto=webp&s=f9bad45e818d6d010014b7381f0fb366dcb4e2ae', 'width': 386}, 'variants': {}}]} |
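For reference, CPU-only inference does work if you can live with lower tokens/sec, and many-core Arm chips with lots of memory bandwidth are one of the better ways to do it. A minimal CPU-only sketch with llama-cpp-python (the model file and thread count are placeholders to adjust):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-14b-instruct-q4_k_m.gguf",  # placeholder GGUF file
    n_ctx=4096,
    n_threads=64,     # tune to your Altra's core count
    n_gpu_layers=0,   # pure CPU
)
out = llm("Write a Python function that reverses a string.\n", max_tokens=256)
print(out["choices"][0]["text"])
```

Expect single-digit tokens/sec on larger models; for code completion, smaller (7B-class) models keep latency tolerable.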
I have a budget of 40k USD I need to setup machine to host deepseek r1 - what options do I have | 73 | Hello,
Looking for some tips/directions on hardware choices to host DeepSeek R1 locally (my budget is up to $40k). | 2025-01-29T09:30:39 | https://www.reddit.com/r/LocalLLaMA/comments/1icq2mf/i_have_a_budget_of_40k_usd_i_need_to_setup/ | zibenmoka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icq2mf | false | null | t3_1icq2mf | /r/LocalLLaMA/comments/1icq2mf/i_have_a_budget_of_40k_usd_i_need_to_setup/ | false | false | self | 73 | null |
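A rough sizing pass before picking hardware (quantization sizes are approximate):

```python
# Memory needed for DeepSeek R1 (671B parameters), weights alone.
params = 671e9
for name, bits in [("FP8 (native)", 8), ("Q4", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{name}: ~{gb:.0f} GB, plus KV cache and overhead")
# FP8: ~671 GB -> e.g. 8x H200 141 GB, or a large multi-GPU box
# Q4:  ~336 GB -> e.g. 8x 48 GB GPUs, or 384+ GB of system RAM for CPU inference
```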
Liang Wenfeng, founder of DeepSeek. gigachad | 1 | [removed] | 2025-01-29T09:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/1icq3kd/liang_wenfeng_founder_of_deepseek_gigachad/ | chansumpoh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icq3kd | false | null | t3_1icq3kd | /r/LocalLLaMA/comments/1icq3kd/liang_wenfeng_founder_of_deepseek_gigachad/ | false | false | self | 1 | null |
Autonomous AI coder with local LLM support and Todoist project management | 1 | 2025-01-29T09:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1icq3vj/autonomous_ai_coder_with_local_llm_support_and/ | Grigorij_127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icq3vj | false | null | t3_1icq3vj | /r/LocalLLaMA/comments/1icq3vj/autonomous_ai_coder_with_local_llm_support_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'tmJtT9vNgEVRGxiWVWLFW_DcYvD7IBDCrU7dyNmjOpU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pztyCxMP_WAI7TUKFuZGCXMB5AMhsM9GZLlV6wcsJFs.jpg?width=108&crop=smart&auto=webp&s=227cd69b65f7cb6d75965d7e537e20a98acde6d2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pztyCxMP_WAI7TUKFuZGCXMB5AMhsM9GZLlV6wcsJFs.jpg?width=216&crop=smart&auto=webp&s=bcf30a417843652c2da7b02479cb0a948424faaf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pztyCxMP_WAI7TUKFuZGCXMB5AMhsM9GZLlV6wcsJFs.jpg?width=320&crop=smart&auto=webp&s=e9dc685e7b4415ed7493f3d41a5ecc9ab598f414', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pztyCxMP_WAI7TUKFuZGCXMB5AMhsM9GZLlV6wcsJFs.jpg?width=640&crop=smart&auto=webp&s=baad01042467c014ec95bd09a6e6d5a38a00035a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pztyCxMP_WAI7TUKFuZGCXMB5AMhsM9GZLlV6wcsJFs.jpg?width=960&crop=smart&auto=webp&s=4acc56a524369d018965b4610a11c944ab38f932', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pztyCxMP_WAI7TUKFuZGCXMB5AMhsM9GZLlV6wcsJFs.jpg?width=1080&crop=smart&auto=webp&s=92d15a62ca1df2701262559be4864aaafdc25b9e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pztyCxMP_WAI7TUKFuZGCXMB5AMhsM9GZLlV6wcsJFs.jpg?auto=webp&s=a09c406f3d66c65862fe0bfc6ff92580128cd1ce', 'width': 1200}, 'variants': {}}]} |
||
Deepseek model is really fast | 0 | 2025-01-29T09:35:53 | https://v.redd.it/u7kb8ygdkwfe1 | rpwoerk | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icq54z | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/u7kb8ygdkwfe1/DASHPlaylist.mpd?a=1740735400%2CNjA5OGMwMWU1OThjMDE3NzU4ODQ5ZTk5NWRmYTk2ZjI4YzE1OGIyZTExOTg1MDRjNzRhMTA2MTY5ZTI4MjFlYQ%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/u7kb8ygdkwfe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 726, 'hls_url': 'https://v.redd.it/u7kb8ygdkwfe1/HLSPlaylist.m3u8?a=1740735400%2CNzA0MWU2ZGY1NjZjN2Q5MTlhMjViZTFmY2U2MDk4NzA0N2E5Y2M0NDg5MThmODBkN2M5NjRlNGVkN2MyOTFiNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/u7kb8ygdkwfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1icq54z | /r/LocalLLaMA/comments/1icq54z/deepseek_model_is_really_fast/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZDFsamt5Z2Rrd2ZlMX1JjN7KoZC24N04qZKFc_b-AbMuvU095bENUynOottF', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZDFsamt5Z2Rrd2ZlMX1JjN7KoZC24N04qZKFc_b-AbMuvU095bENUynOottF.png?width=108&crop=smart&format=pjpg&auto=webp&s=c184565b48e3b1fc5af32feb9a3bc4eb5e5a5e36', 'width': 108}, {'height': 217, 'url': 'https://external-preview.redd.it/ZDFsamt5Z2Rrd2ZlMX1JjN7KoZC24N04qZKFc_b-AbMuvU095bENUynOottF.png?width=216&crop=smart&format=pjpg&auto=webp&s=c83160ff3f0227c4e4bd89587a465ca85a2506ce', 'width': 216}, {'height': 322, 'url': 'https://external-preview.redd.it/ZDFsamt5Z2Rrd2ZlMX1JjN7KoZC24N04qZKFc_b-AbMuvU095bENUynOottF.png?width=320&crop=smart&format=pjpg&auto=webp&s=6250b69748e6c74813aece30de245439c9a59623', 'width': 320}, {'height': 645, 'url': 'https://external-preview.redd.it/ZDFsamt5Z2Rrd2ZlMX1JjN7KoZC24N04qZKFc_b-AbMuvU095bENUynOottF.png?width=640&crop=smart&format=pjpg&auto=webp&s=edc081a430d297b40604f2f1a29b14a8faed558f', 'width': 640}], 'source': {'height': 952, 'url': 'https://external-preview.redd.it/ZDFsamt5Z2Rrd2ZlMX1JjN7KoZC24N04qZKFc_b-AbMuvU095bENUynOottF.png?format=pjpg&auto=webp&s=117b767ba9aaddd4467ba704ac10732a7843ed54', 'width': 944}, 'variants': {}}]} |
||
If i was to run DeepSeek locally can i run web search. | 1 | [removed] | 2025-01-29T09:39:45 | https://www.reddit.com/r/LocalLLaMA/comments/1icq6v9/if_i_was_to_run_deepseek_locally_can_i_run_web/ | Key_Ad640 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icq6v9 | false | null | t3_1icq6v9 | /r/LocalLLaMA/comments/1icq6v9/if_i_was_to_run_deepseek_locally_can_i_run_web/ | false | false | self | 1 | null |
MoE for GPU poor | 2 | As one of the GPU poor people, I don't think highly of MoE architecture that much. It requires a lot of memory resources since non-active parameters also need to be loaded into GPU. I'm not an expert, but isn't dense model better for the same number of parameters that can be loaded into fixed GPU? | 2025-01-29T09:48:52 | https://www.reddit.com/r/LocalLLaMA/comments/1icqazl/moe_for_gpu_poor/ | always_newbee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqazl | false | null | t3_1icqazl | /r/LocalLLaMA/comments/1icqazl/moe_for_gpu_poor/ | false | false | self | 2 | null |
I've been testing AI models with this puzzle and they all keep failing | 1 | [removed] | 2025-01-29T09:52:55 | https://www.reddit.com/r/LocalLLaMA/comments/1icqcw8/ive_been_testing_ai_models_with_this_puzzle_and/ | schlyza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqcw8 | false | null | t3_1icqcw8 | /r/LocalLLaMA/comments/1icqcw8/ive_been_testing_ai_models_with_this_puzzle_and/ | false | false | self | 1 | null |
How exactly does DeepSeek train its models? | 0 | From what I understand, they use a method called Distillation. While ChatGPT trains using all possible sources, DeepSeek has selected the training data used by ChatGPT through a procedure that extracts the most impactful data for learning, thus reducing the number of input parameters and training time.
But basically, doesn’t this mean they took OpenAI’s base data and put it on a smaller model? Wouldn’t that violate OpenAI’s usage agreements? | 2025-01-29T10:00:22 | https://www.reddit.com/r/LocalLLaMA/comments/1icqgjg/how_exactly_does_deepseek_train_its_models/ | Atlantis1910 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqgjg | false | null | t3_1icqgjg | /r/LocalLLaMA/comments/1icqgjg/how_exactly_does_deepseek_train_its_models/ | false | false | self | 0 | null |
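For reference, textbook distillation means training a student to match a teacher's *output distribution*, not copying its training data; here is a minimal PyTorch sketch of the classic soft-label loss (note that DeepSeek's R1 report describes its distilled models as plain SFT on R1-generated samples rather than logit matching):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T, then pull the student
    # toward the teacher with KL divergence (scaled by T^2, per Hinton et al.).
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```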
Throughput and Latency on vLLM | 1 | [removed] | 2025-01-29T10:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1icqiae/throughput_and_latency_on_vllm/ | avatar903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqiae | false | null | t3_1icqiae | /r/LocalLLaMA/comments/1icqiae/throughput_and_latency_on_vllm/ | false | false | self | 1 | null |
Model Selector | 1 | [removed] | 2025-01-29T10:15:42 | https://www.reddit.com/r/LocalLLaMA/comments/1icqo1p/model_selector/ | Top_Drop_157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqo1p | false | null | t3_1icqo1p | /r/LocalLLaMA/comments/1icqo1p/model_selector/ | false | false | self | 1 | null |
Deepsick | 0 | Hello LocalLLaMA people,
Am I the only one sick of the current hype for DeepSeek? I mean, sure, the model is open weight and ranking pretty well, but the latest snapshots of Gemini are still ranking better... Sure, the API cost might be lower, but is it worth the current spamming?
Plus I'm also pretty bored by the hype from the self-hosters who are actually using the Qwen models. How many of you can actually run a 600B+ parameter model? (Except for the guy with his 15+ 3090s ;) ) Could we step back a little and have actually interesting discussions about actual advancements in the local AI field? Or is this thread condemned to become hype AI...
| 2025-01-29T10:18:36 | https://www.reddit.com/r/LocalLLaMA/comments/1icqpen/deepsick/ | pydehon1606 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqpen | false | null | t3_1icqpen | /r/LocalLLaMA/comments/1icqpen/deepsick/ | false | false | self | 0 | null |
Hi, why is deepseek ddosed? | 1 | [removed] | 2025-01-29T10:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/1icqwaq/hi_why_is_deepseek_ddosed/ | Amazing-Incident-391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqwaq | false | null | t3_1icqwaq | /r/LocalLLaMA/comments/1icqwaq/hi_why_is_deepseek_ddosed/ | false | false | self | 1 | null |
What is the best open source model for American history? | 0 | I'm looking at 32B-70B parameters.
I also want a long context window, preferably 32K or more (for rag).
What should I use?
Considering llama 3.1 70B, mistral 8x7B, aya expanse 32B, qwen 2.5 32B, command r 35B. | 2025-01-29T10:33:05 | https://www.reddit.com/r/LocalLLaMA/comments/1icqwil/what_is_the_best_open_source_model_for_american/ | Glittering-Bag-4662 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqwil | false | null | t3_1icqwil | /r/LocalLLaMA/comments/1icqwil/what_is_the_best_open_source_model_for_american/ | false | false | self | 0 | null |
Stanford project "CodeMonkeys" scores 57.4%-66.2% on SWE-bench Verified using Claude Sonnet 3.5 | 10 | 2025-01-29T10:38:06 | https://scalingintelligence.stanford.edu/blogs/codemonkeys/ | Reddit1396 | scalingintelligence.stanford.edu | 1970-01-01T00:00:00 | 0 | {} | 1icqywh | false | null | t3_1icqywh | /r/LocalLLaMA/comments/1icqywh/stanford_project_codemonkeys_scores_574662_on/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'QLNgN3m3dsv7AGmi27RZ_Gz6PqT5HPdqwv02rzFWD6A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CCC9Zo0A5x6kkNqzPeUIqyy7Hj2DP0wRtoBmwpVJoIE.jpg?width=108&crop=smart&auto=webp&s=c586ef18b3aea616ec96c6e9ac19e910cdbce4f9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/CCC9Zo0A5x6kkNqzPeUIqyy7Hj2DP0wRtoBmwpVJoIE.jpg?width=216&crop=smart&auto=webp&s=0d466d9015ad88b874a8ef8c57389131321e52c5', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/CCC9Zo0A5x6kkNqzPeUIqyy7Hj2DP0wRtoBmwpVJoIE.jpg?width=320&crop=smart&auto=webp&s=818d045d74e8c586454a02cc73dae046efde34b9', 'width': 320}], 'source': {'height': 606, 'url': 'https://external-preview.redd.it/CCC9Zo0A5x6kkNqzPeUIqyy7Hj2DP0wRtoBmwpVJoIE.jpg?auto=webp&s=1923a8055deca1de42e38f0eae74312fa2bc5ecb', 'width': 606}, 'variants': {}}]} |
||
DeepSeek-R1 evolving a Game of Life pattern really feels like a breakthrough | 191 | I’m truly amazed. I've just discovered that DeepSeek-R1 has managed to correctly compute one generation of Conway's Game of Life (starting from a simple five-cell row pattern)—a first for any LLM I've tested. While it required a significant amount of reasoning (749.31 seconds of thought), the model got it right on the first try. It felt just like using a bazooka to kill a fly (5596 tokens at 7 tk/s).
While this might sound modest, I’ve long viewed this challenge as the “strawberry problem” but on steroids. DeepSeek-R1 had to understand cellular automata rules, visualize a grid, track multiple cells simultaneously, and apply specific survival and birth rules to each position—all while maintaining spatial reasoning.
[Pattern at gen 0.](https://preview.redd.it/vup8iom0vwfe1.png?width=138&format=png&auto=webp&s=61bcf0740f9a0b8f6bb64525ce64e293e6253fe4)
[Pattern at gen 1.](https://preview.redd.it/zgzeawc2vwfe1.png?width=138&format=png&auto=webp&s=5886ae4cefba04201dd1a847800f0004333f3bbb)
**Prompt:**
`Simulate one generation of Conway's Game of Life starting from the following initial configuration: ....... ....... ....... .OOOOO. ....... ....... ....... Use a 7x7 grid for the simulation. Represent alive cells with "O" and dead cells with ".". Apply the rules of Conway's Game of Life to calculate each generation. Provide diagrams of the initial state, and first generation, in the same format as shown above.`
**Answer:**
[<think></think> and answer (Pastebin)](https://pastebin.com/JTveEkXg) | 2025-01-29T10:39:01 | https://www.reddit.com/r/LocalLLaMA/comments/1icqzcz/deepseekr1_evolving_a_game_of_life_pattern_really/ | IrisColt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqzcz | false | null | t3_1icqzcz | /r/LocalLLaMA/comments/1icqzcz/deepseekr1_evolving_a_game_of_life_pattern_really/ | false | false | 191 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
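For anyone who wants to check the model's answer, one generation is a few lines of Python:

```python
def step(grid):
    # One Game of Life generation on a fixed (non-wrapping) grid of "O"/"." strings.
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(
            grid[r + dr][c + dc] == "O"
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols
        )
    return [
        "".join(
            "O" if (grid[r][c] == "O" and live_neighbors(r, c) in (2, 3))
                or (grid[r][c] == "." and live_neighbors(r, c) == 3)
            else "."
            for c in range(cols)
        )
        for r in range(rows)
    ]

gen0 = [".......", ".......", ".......", ".OOOOO.", ".......", ".......", "......."]
print("\n".join(step(gen0)))  # compare against R1's gen-1 grid
```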
|
When new S2T model? Whisper large V4? | 5 | It has been ages on the LLM timeline since the last release of Whisper or any good speech-to-text model. I stumbled upon the Chinese model [MinMo](https://funaudiollm.github.io/minmo/), but never found the weights.
Everything else seems to have made a big step forward (video, images, math, coding, writing) except specialised speech-to-text models. Even general chat LLMs have added another 100+ points of Arena Elo since the previous Whisper release.
Am I missing something? | 2025-01-29T10:39:25 | https://www.reddit.com/r/LocalLLaMA/comments/1icqzjr/when_new_s2t_model_whisper_large_v4/ | Similar-Ingenuity-36 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icqzjr | false | null | t3_1icqzjr | /r/LocalLLaMA/comments/1icqzjr/when_new_s2t_model_whisper_large_v4/ | false | false | self | 5 | null |
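In the meantime, large-v3 is still straightforward to run locally; a minimal sketch with the openai-whisper package (the audio file is a placeholder):

```python
import whisper

model = whisper.load_model("large-v3")    # or "turbo" for the faster large-v3-turbo
result = model.transcribe("meeting.mp3")  # placeholder audio file
print(result["text"])
```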
Laptop buying question - 4090 or wait for strix halo? | 1 | For context, I'm in Canada. With 16GB of VRAM from a mobile RTX 4090, would I be able to run a 22B or 30B model at Q4? | 2025-01-29T10:44:29 | https://www.reddit.com/r/LocalLLaMA/comments/1icr23j/laptop_buying_question_4090_or_wait_for_strix_halo/ | Sea-Spot-1113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icr23j | false | null | t3_1icr23j | /r/LocalLLaMA/comments/1icr23j/laptop_buying_question_4090_or_wait_for_strix_halo/ | false | false | self | 1 | null |
Deepseek answers but not my question???? | 1 | [removed] | 2025-01-29T10:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/1icr2a6/deepseek_answers_but_not_my_question/ | purplewater0o0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icr2a6 | false | null | t3_1icr2a6 | /r/LocalLLaMA/comments/1icr2a6/deepseek_answers_but_not_my_question/ | false | false | self | 1 | null |
deepseek answers but not my question??? | 1 | [removed] | 2025-01-29T10:46:17 | https://www.reddit.com/r/LocalLLaMA/comments/1icr2xr/deepseek_answers_but_not_my_question/ | purplewater0o0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icr2xr | false | null | t3_1icr2xr | /r/LocalLLaMA/comments/1icr2xr/deepseek_answers_but_not_my_question/ | false | false | self | 1 | null |
Hybrid search with Qdrant | 0 | Hello I have a doubt in my implementati on of agents.
Specifically i wanttoo perform hybrid retrieve search. It is suggested that I use the following:
https://huggingface.co/Qdrant/all_miniLM_L6_v2_with_attentions
Since the language im using is not english I wanted to have a feedback on all_miniLM_L6
From HF i see that the model is tagged with the english language and reading through the paper of the originale model (not the porta from Qdrant) I see that it doesnt have multilingua capability.
Should i use it or search for another model in order to perform the retrieval part? | 2025-01-29T10:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/1icr4df/hybrid_search_with_qdrant/ | Ambitious-Most4485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icr4df | false | null | t3_1icr4df | /r/LocalLLaMA/comments/1icr4df/hybrid_search_with_qdrant/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2DSq_NZXPN4yWz23mlSzUKgCdZ1cTLyjlqJ3tccXCEQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yXB1yGqX5qoWM-POIn1Ph5GpjIAiTJ4yMt_II7lz7DA.jpg?width=108&crop=smart&auto=webp&s=18a3978463a89a3eb3680e34bfc03c1a1916de16', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yXB1yGqX5qoWM-POIn1Ph5GpjIAiTJ4yMt_II7lz7DA.jpg?width=216&crop=smart&auto=webp&s=4b511b0bbea834b103daeda66cb6d110b1edde01', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yXB1yGqX5qoWM-POIn1Ph5GpjIAiTJ4yMt_II7lz7DA.jpg?width=320&crop=smart&auto=webp&s=7a890713ffe0d33810db3e84898eea579512bfda', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yXB1yGqX5qoWM-POIn1Ph5GpjIAiTJ4yMt_II7lz7DA.jpg?width=640&crop=smart&auto=webp&s=4013815bd851e080f9d997e18153eaccdefbed4c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yXB1yGqX5qoWM-POIn1Ph5GpjIAiTJ4yMt_II7lz7DA.jpg?width=960&crop=smart&auto=webp&s=deb2c92522973d304c7d98b94872d326431c41e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yXB1yGqX5qoWM-POIn1Ph5GpjIAiTJ4yMt_II7lz7DA.jpg?width=1080&crop=smart&auto=webp&s=508eee094454bd79a60d69e32e8122e9f007708c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yXB1yGqX5qoWM-POIn1Ph5GpjIAiTJ4yMt_II7lz7DA.jpg?auto=webp&s=151bb5146aa5008761cc777f11934483924014de', 'width': 1200}, 'variants': {}}]} |
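If the corpus isn't English, a multilingual embedder is the safer pick for the dense side; a minimal sentence-transformers sketch (the model choice is a suggestion, not the only option):

```python
from sentence_transformers import SentenceTransformer

# Multilingual sibling of the MiniLM family, trained on 50+ languages.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
docs = ["Questo è un documento di esempio.", "Un altro testo di prova."]
embeddings = model.encode(docs, normalize_embeddings=True)
print(embeddings.shape)  # (2, 384)
```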
Deepseek answers but not my question??? | 1 | [removed] | 2025-01-29T10:50:02 | https://www.reddit.com/r/LocalLLaMA/comments/1icr4ol/deepseek_answers_but_not_my_question/ | purplewater0o0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icr4ol | false | null | t3_1icr4ol | /r/LocalLLaMA/comments/1icr4ol/deepseek_answers_but_not_my_question/ | false | false | self | 1 | null |
How come we dont see many people spinning up R1 671b in the cloud, selling access and making bank? | 175 | What am I missing? I'm not too knowledgeable about deploying big models like these, but for people that are, shouldn't it be quite easy to deploy it in the cloud?
That's the cool thing about open weights, no? If you have the hardware (which is nothing crazy if you're already using VPS), you can run and scale it dynamically.
And since it's so efficient, it should be quite cheap when spread out over several users. Why aren't we seeing everyone and their grandma selling us a subscription to their website? | 2025-01-29T10:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/1icr6md/how_come_we_dont_see_many_people_spinning_up_r1/ | linkcharger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icr6md | false | null | t3_1icr6md | /r/LocalLLaMA/comments/1icr6md/how_come_we_dont_see_many_people_spinning_up_r1/ | false | false | self | 175 | null |
Images Generated by Deepseek Janus Pro 7B | 1 | 2025-01-29T10:54:04 | Sam_Tech1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icr6nx | false | null | t3_1icr6nx | /r/LocalLLaMA/comments/1icr6nx/images_generated_by_deepseek_janus_pro_7b/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'gDr16BM3skd09-60gmSzXdGRv_b6KWQX6A-mHSX00r4', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/6nqy4rvgywfe1.png?width=108&crop=smart&auto=webp&s=fd0630b11b364204363485e5856dae89238624b7', 'width': 108}, {'height': 76, 'url': 'https://preview.redd.it/6nqy4rvgywfe1.png?width=216&crop=smart&auto=webp&s=53d9d9e49f674d22c5cc76b62da3cae2ab644bc7', 'width': 216}, {'height': 113, 'url': 'https://preview.redd.it/6nqy4rvgywfe1.png?width=320&crop=smart&auto=webp&s=862cd02a0ec5466c7fb3f92b88b5e4094c0aaacf', 'width': 320}, {'height': 226, 'url': 'https://preview.redd.it/6nqy4rvgywfe1.png?width=640&crop=smart&auto=webp&s=09e81477f6c3d0d79089a659f73b85303e857a41', 'width': 640}, {'height': 339, 'url': 'https://preview.redd.it/6nqy4rvgywfe1.png?width=960&crop=smart&auto=webp&s=3c684b62c214b6fba6b6f8b02ff00c4d3952dec1', 'width': 960}, {'height': 381, 'url': 'https://preview.redd.it/6nqy4rvgywfe1.png?width=1080&crop=smart&auto=webp&s=20aaa54746323c7ad2839425a6694ee4724257bd', 'width': 1080}], 'source': {'height': 506, 'url': 'https://preview.redd.it/6nqy4rvgywfe1.png?auto=webp&s=f08f93e504c6ed7589ad319147da502d03ef70e1', 'width': 1432}, 'variants': {}}]} |
|||
Thoughts about role-playing LLMs | 1 | [removed] | 2025-01-29T10:54:33 | https://www.reddit.com/r/LocalLLaMA/comments/1icr6xo/thoughts_about_roleplaying_llms/ | Megalith01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icr6xo | false | null | t3_1icr6xo | /r/LocalLLaMA/comments/1icr6xo/thoughts_about_roleplaying_llms/ | false | false | self | 1 | null |
Hmmmm.... | 1 | [removed] | 2025-01-29T11:00:44 | https://www.reddit.com/gallery/1icra5g | mousicle27 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1icra5g | false | null | t3_1icra5g | /r/LocalLLaMA/comments/1icra5g/hmmmm/ | false | false | 1 | null |
|
Why don't we use NVMe instead of VRAM | 1 | Why don't we use NVMe on PCIe lanes to directly serve the GPU instead of loading huge models to VRAM? Yes, it will be slower and will have more latency, but being able to run something vs nothing is better, right? | 2025-01-29T11:03:51 | https://www.reddit.com/r/LocalLLaMA/comments/1icrc2l/why_dont_we_use_nvme_instead_of_vram/ | infinity6570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrc2l | false | null | t3_1icrc2l | /r/LocalLLaMA/comments/1icrc2l/why_dont_we_use_nvme_instead_of_vram/ | false | false | self | 1 | null |
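The short answer is bandwidth: generation has to read the active weights once per token, so the storage tier's read speed caps tokens/sec. A ballpark comparison (figures are rough assumptions):

```python
# tokens/sec upper bound ~= bandwidth / bytes read per token (== model size, for a dense model)
model_gb = 20  # e.g. a ~40B dense model at 4-bit (assumption)
tiers_gb_per_s = {
    "NVMe (PCIe 4.0 x4)": 7,
    "DDR5 dual-channel RAM": 90,
    "GDDR6X VRAM (RTX 4090)": 1008,
}
for name, bw in tiers_gb_per_s.items():
    print(f"{name}: ~{bw / model_gb:.2f} tokens/sec ceiling")
```

NVMe lands well under one token/sec, so "something vs nothing" in practice means minutes per response; that said, llama.cpp's mmap path does let you run models larger than RAM this way.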
What is the difference between DeepSeek R1 Model Local vs Web Chat Option? | 1 | [removed] | 2025-01-29T11:04:00 | https://www.reddit.com/r/LocalLLaMA/comments/1icrc5k/what_is_the_difference_between_deepseek_r1_model/ | jaffer3650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrc5k | false | null | t3_1icrc5k | /r/LocalLLaMA/comments/1icrc5k/what_is_the_difference_between_deepseek_r1_model/ | false | false | self | 1 | null |
My deepseek isn't responding. For now. Does anyone have same issue? Maybe the cyber attack they facing now is too much? | 2 | [removed] | 2025-01-29T11:13:47 | https://www.reddit.com/r/LocalLLaMA/comments/1icrh6i/my_deepseek_isnt_responding_for_now_does_anyone/ | Bubbly-Entry1110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrh6i | false | null | t3_1icrh6i | /r/LocalLLaMA/comments/1icrh6i/my_deepseek_isnt_responding_for_now_does_anyone/ | false | false | self | 2 | null |
Any good uncensored Gemma 2 version except for tiger gemma? | 1 | [removed] | 2025-01-29T11:25:57 | https://www.reddit.com/r/LocalLLaMA/comments/1icrnng/any_good_uncensored_gemma_2_version_except_for/ | Appropriate_Water517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrnng | false | null | t3_1icrnng | /r/LocalLLaMA/comments/1icrnng/any_good_uncensored_gemma_2_version_except_for/ | false | false | nsfw | 1 | null |
This is how you coax Deepseek to talk about Tiananmen | 0 | You can study its thought process and formulate a prompt to change its behavior:
https://preview.redd.it/o57p9mn64xfe1.png?width=825&format=png&auto=webp&s=359f65f3ea22db242165b3251c8c82c02165ba89
| 2025-01-29T11:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/1icro6k/this_is_how_you_coax_deepseek_to_talk_about/ | Internet--Traveller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icro6k | false | null | t3_1icro6k | /r/LocalLLaMA/comments/1icro6k/this_is_how_you_coax_deepseek_to_talk_about/ | false | false | 0 | null |
|
PSA: Bypassing the DeepSeek signup freeze + insane prices + TPM rates devs are sleeping on | 0 | hey,
If your pipeline is getting choked by rate limits or waiting rooms on mainstream providers, I just found a perfect storm of:
🔹 10M tokens/min *approved* throughput (on request)
🔹 EU-hosted DeepSeek R1 at $0.8/M (vs MUCH higher elsewhere)
🔹 Working access while [DeepSeek.com](http://DeepSeek.com) blocks new signups
**Context**
1. DeepSeek's direct service is "temporarily unavailable" (security reasons)
2. Major providers are either:
* Rate-limited to 50k TPM 🐌
* Price-gouging ($7/M → $14/M roundtrip) 💸
* Both
**Why Nebius AI Studio matters right now:**
✅ Approved 10M TPM = Process 10,000 pages/sec
✅ Free $25 trial = 31M tokens (with voucher TEXT2IMAGE)
✅ Actual available capacity (no 5xx errors)
✅ GDPR-safe API that doesn't leak to third parties
[https://studio.nebius.ai/playground?models=deepseek-ai/DeepSeek-R1](https://studio.nebius.ai/playground?models=deepseek-ai/DeepSeek-R1) | 2025-01-29T11:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1icrqey/psa_bypassing_the_deepseek_signup_freeze_insane/ | medi6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrqey | false | null | t3_1icrqey | /r/LocalLLaMA/comments/1icrqey/psa_bypassing_the_deepseek_signup_freeze_insane/ | false | false | self | 0 | null |
Use Deepseek (locally) without a super computer? | 1 | [removed] | 2025-01-29T11:35:31 | https://www.reddit.com/r/LocalLLaMA/comments/1icrsuj/use_deepseek_locally_without_a_super_computer/ | Peopuzzle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrsuj | false | null | t3_1icrsuj | /r/LocalLLaMA/comments/1icrsuj/use_deepseek_locally_without_a_super_computer/ | false | false | self | 1 | null |
Do you save chinese models (just in case)? | 9 | The US could decide to block Chinese platforms and models; I'm wondering if I should download and save some. | 2025-01-29T11:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/1icrzh6/do_you_save_chinese_models_just_in_case/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrzh6 | false | null | t3_1icrzh6 | /r/LocalLLaMA/comments/1icrzh6/do_you_save_chinese_models_just_in_case/ | false | false | self | 9 | null |
Deepseek in local machine | Ollama | javascript AI App | 2 | 2025-01-29T11:48:19 | https://youtube.com/watch?v=xd2nhBAbxXk&si=gab8eAZEVn6eHeH5 | zorefcode | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1icrzu4 | false | {'oembed': {'author_name': 'Zoref Code', 'author_url': 'https://www.youtube.com/@zorefcode', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/xd2nhBAbxXk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Deepseek in local machine | Ollama | javascript AI App"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/xd2nhBAbxXk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Deepseek in local machine | Ollama | javascript AI App', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1icrzu4 | /r/LocalLLaMA/comments/1icrzu4/deepseek_in_local_machine_ollama_javascript_ai_app/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'xwkITcRg8UltfEIlVr3begEAqoaREhLaJ3wE7JztA4c', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/AUi1tx7ZCjH8Ditd45ZIYxaiJY5Zz-Nq2iixX7kUDjM.jpg?width=108&crop=smart&auto=webp&s=81ce419291e75103c4b9a362360ee4a4a23a0153', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/AUi1tx7ZCjH8Ditd45ZIYxaiJY5Zz-Nq2iixX7kUDjM.jpg?width=216&crop=smart&auto=webp&s=bc701e8ecaddb3d6416ed0e8a848bee48f2426c8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/AUi1tx7ZCjH8Ditd45ZIYxaiJY5Zz-Nq2iixX7kUDjM.jpg?width=320&crop=smart&auto=webp&s=ee078bb976224720a20541280cea3c78afd38b2b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/AUi1tx7ZCjH8Ditd45ZIYxaiJY5Zz-Nq2iixX7kUDjM.jpg?auto=webp&s=7d9c09f02012f1095cf3facc75e4a8829fa40dcc', 'width': 480}, 'variants': {}}]} |
||
Best table reading (LLM) ocr model as of now? | 1 | [removed] | 2025-01-29T11:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/1icrzxu/best_table_reading_llm_ocr_model_as_of_now/ | DeusExWolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icrzxu | false | null | t3_1icrzxu | /r/LocalLLaMA/comments/1icrzxu/best_table_reading_llm_ocr_model_as_of_now/ | false | false | self | 1 | null |
New to LocalLLama, what size model should I be running on my pc? | 1 | [removed] | 2025-01-29T11:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ics2vj/new_to_localllama_what_size_model_should_i_be/ | horendus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ics2vj | false | null | t3_1ics2vj | /r/LocalLLaMA/comments/1ics2vj/new_to_localllama_what_size_model_should_i_be/ | false | false | self | 1 | null |
PSA: your 7B/14B/32B/70B "R1" is NOT DeepSeek. | 1,468 | It's not even an MoE, for that matter. It's a finetune of an existing dense model (Qwen 2.5 for most, Llama 3.3 for 70B). *ONLY* the full, 671B model is the real stuff.
(Making a post about this because I'm getting really tired of having to explain this under every "R1 on a potato" and "why is my R1 not as smart as people say" post separately.) | 2025-01-29T12:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/1icsa5o/psa_your_7b14b32b70b_r1_is_not_deepseek/ | Zalathustra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icsa5o | false | null | t3_1icsa5o | /r/LocalLLaMA/comments/1icsa5o/psa_your_7b14b32b70b_r1_is_not_deepseek/ | false | false | self | 1,468 | null |
The censorship is real, they don't want you knowing the truth. DeepSeek #1 in 167 Countries | 12 | 2025-01-29T12:07:39 | RevolutionaryBox5411 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icsav6 | false | null | t3_1icsav6 | /r/LocalLLaMA/comments/1icsav6/the_censorship_is_real_they_dont_want_you_knowing/ | false | false | 12 | {'enabled': True, 'images': [{'id': 'sOBrXLgGQ0mVTBAJukIeHLkuV2gnMgknkO2zj4MH1So', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/w77n99u8bxfe1.png?width=108&crop=smart&auto=webp&s=ce3753ef4da833ed427b4ae9de286be2ca5d7ab9', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/w77n99u8bxfe1.png?width=216&crop=smart&auto=webp&s=02d8bce3e102984d85f0c17bce51f3a12b707dd8', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/w77n99u8bxfe1.png?width=320&crop=smart&auto=webp&s=1fee2b1e4aef32f0ed3dec391d117959011e3014', 'width': 320}, {'height': 411, 'url': 'https://preview.redd.it/w77n99u8bxfe1.png?width=640&crop=smart&auto=webp&s=feb45e372378eb4a9d092f36d1e7c03eaaebf28c', 'width': 640}, {'height': 617, 'url': 'https://preview.redd.it/w77n99u8bxfe1.png?width=960&crop=smart&auto=webp&s=644d97f69321f339206024605a86661b26e46ad6', 'width': 960}, {'height': 694, 'url': 'https://preview.redd.it/w77n99u8bxfe1.png?width=1080&crop=smart&auto=webp&s=f0e8b94d1adbe965a0a8e00bf432ede167317b08', 'width': 1080}], 'source': {'height': 1130, 'url': 'https://preview.redd.it/w77n99u8bxfe1.png?auto=webp&s=3dfa4a7efa1d601527d2fdad85b95f5ac8b2c659', 'width': 1757}, 'variants': {}}]} |
|||
RTX 3060 12GB or RX 6800 16GB for LLM and general gaming | 3 | I'm going to build a PC in February and I'm a bit confused about which GPU I should buy.
A bit of a breakdown:
**RX 6800 16GB (253-300 USED):**
* **Pros:**
* **16GB VRAM** – Huge advantage for larger LLMs and future-proofing.
* Better raw gaming performance (\~30–40% faster than the 3060 at 1440p).
* Competitive price-to-performance ratio.
* **Cons:**
* **ROCm support** for AI workloads is improving but still patchy compared to CUDA.
* No Tensor Cores (slower for some AI tasks like FP16 inference).
**RTX 3060 12GB (200-215 USED):**
* **Pros:**
* **CUDA/NVIDIA ecosystem** – Broad support for AI frameworks (PyTorch, TensorFlow, Ollama).
* Mature drivers and better stability for productivity apps (Blender, DaVinci Resolve).
* DLSS for gaming (upscaling advantage in supported titles).
* **Cons:**
* Only 12GB VRAM – might limit LLM model size (e.g., 7B models work, but 13B+ could struggle).
* Weaker raw gaming performance compared to the 6800.
**Key Questions:**
1. **For LLMs:** Will the RX 6800’s 16GB VRAM outweigh its weaker ROCm support? I’ve heard ROCm 5.6+ works on RDNA2 GPUs, but how reliable is it for **Ollama / Hugging face**?
2. **Longevity:** Can the 6800’s 16GB be combined with another amd/nvidia gpu for combined vram?
There's also 3080 10gb, 3070 8gb, or 6700 xt 12gb. but i dont think they are really worth it. I can't go higher than 300$.
Note that I'm not really an expert in AI terminology. I'm pretty familiar with Ollama, but not with Hugging Face and other AI tools.
https://preview.redd.it/gb2unsu8bxfe1.png?width=574&format=png&auto=webp&s=40028f86cb64aa9a132071ca4dbdb3d51ab7817e
Full list. I will buy this in February. If the tray version is sold out I will not buy the Thermalright and will instead use the stock cooler. The GPU is used. I already checked that the mobo will fit the NR200. $1 is 15k. The case comes with a riser, so no need to worry about adding a 2nd GPU.
| 2025-01-29T12:07:44 | https://www.reddit.com/r/LocalLLaMA/comments/1icsawr/rtx_3060_12gb_or_rx_6800_16gb_for_llm_and_general/ | Dhonnan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icsawr | false | null | t3_1icsawr | /r/LocalLLaMA/comments/1icsawr/rtx_3060_12gb_or_rx_6800_16gb_for_llm_and_general/ | false | false | 3 | null |
|
3 new reasoning datasets using R1 - High-quality CoTs (from Maxime Labonne on X) | 24 | Bespoke-Stratos-17k: [https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)
OpenThoughts-114k: [https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
R1-Distill-SFT (1.8M samples): [https://huggingface.co/datasets/ServiceNow-AI/R1-Distill-SFT](https://huggingface.co/datasets/ServiceNow-AI/R1-Distill-SFT)
Maxime Labonne: *Dataset review: lots of new reasoning datasets, but very few use R1 yet*: [https://x.com/maximelabonne/status/1884565062708543572](https://x.com/maximelabonne/status/1884565062708543572)
https://preview.redd.it/t5oftpp9exfe1.png?width=950&format=png&auto=webp&s=994e0594e9860fedec383b1acde4cc4e6a248b7b
| 2025-01-29T12:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1icsk3w/3_new_reasoning_datasets_using_r1_highquality/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icsk3w | false | null | t3_1icsk3w | /r/LocalLLaMA/comments/1icsk3w/3_new_reasoning_datasets_using_r1_highquality/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'MKAF9pgrXcmpk4wLR6XUiU609ibzJCV_QiPf-CceFaA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pXALc_V-tAIQTXDrrHGn0OLNGuLlNPdMvDRZSfsMWeQ.jpg?width=108&crop=smart&auto=webp&s=aec7d310b0f6d6e94f3eb8d6c0de2e789e99073c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pXALc_V-tAIQTXDrrHGn0OLNGuLlNPdMvDRZSfsMWeQ.jpg?width=216&crop=smart&auto=webp&s=60037af24fabd169088407f2381e3737eb52898f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pXALc_V-tAIQTXDrrHGn0OLNGuLlNPdMvDRZSfsMWeQ.jpg?width=320&crop=smart&auto=webp&s=731d2644c4ef6d0bfe9c03689fabe875cb695951', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pXALc_V-tAIQTXDrrHGn0OLNGuLlNPdMvDRZSfsMWeQ.jpg?width=640&crop=smart&auto=webp&s=25de674d750578ef139c2374aa784ae1110fd4b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pXALc_V-tAIQTXDrrHGn0OLNGuLlNPdMvDRZSfsMWeQ.jpg?width=960&crop=smart&auto=webp&s=308857ac0423e907b23b62673a0887cd09b77345', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pXALc_V-tAIQTXDrrHGn0OLNGuLlNPdMvDRZSfsMWeQ.jpg?width=1080&crop=smart&auto=webp&s=7b2857b95e85e585bb8583711d574308703d4157', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pXALc_V-tAIQTXDrrHGn0OLNGuLlNPdMvDRZSfsMWeQ.jpg?auto=webp&s=fed65a2040a00497ee823c4b21ae910cb294dea5', 'width': 1200}, 'variants': {}}]} |
|
DeepSeek's multi-head latent attention and other KV cache tricks explained | 4 | 2025-01-29T12:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/1icskl1/deepseeks_multihead_latent_attention_and_other_kv/ | Brilliant-Day2748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icskl1 | false | null | t3_1icskl1 | /r/LocalLLaMA/comments/1icskl1/deepseeks_multihead_latent_attention_and_other_kv/ | false | false | 4 | null |
||
DeepSeek is unusable... | 0 | Is DeepSeek also so overloaded for you that it's unusable?
It gets worse every day.
Currently can't even get any response...😭 | 2025-01-29T12:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/1icsp84/deepseek_is_unusable/ | Healthy-Nebula-3603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icsp84 | false | null | t3_1icsp84 | /r/LocalLLaMA/comments/1icsp84/deepseek_is_unusable/ | false | false | self | 0 | null |
"Average AI researcher: there’s a 16% chance AI causes extinction" | 0 | 2025-01-29T12:31:34 | omnisvosscio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icspdn | false | null | t3_1icspdn | /r/LocalLLaMA/comments/1icspdn/average_ai_researcher_theres_a_16_chance_ai/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'L2hy2RzpShkDY0xttLJLSxDJ1_2BFZQZz9ufClpiNRQ', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/9sdjel4wfxfe1.png?width=108&crop=smart&auto=webp&s=44d405a03c9d1f5315fbea4350e8faf7b33c6e21', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/9sdjel4wfxfe1.png?width=216&crop=smart&auto=webp&s=36720b479585d461e908428c25ec26d280f4ace4', 'width': 216}, {'height': 296, 'url': 'https://preview.redd.it/9sdjel4wfxfe1.png?width=320&crop=smart&auto=webp&s=49f576e67c93d5e3cbdb2b10c82de4aa08c638f4', 'width': 320}, {'height': 593, 'url': 'https://preview.redd.it/9sdjel4wfxfe1.png?width=640&crop=smart&auto=webp&s=7e97d0ef2592bec7be59e1166c31f0e492dea690', 'width': 640}, {'height': 890, 'url': 'https://preview.redd.it/9sdjel4wfxfe1.png?width=960&crop=smart&auto=webp&s=f67debc44d5e3486b6cc81925a35db2743bc9515', 'width': 960}, {'height': 1001, 'url': 'https://preview.redd.it/9sdjel4wfxfe1.png?width=1080&crop=smart&auto=webp&s=f931e8a183ad25e6828446e3f4e4d0d5e14cb9de', 'width': 1080}], 'source': {'height': 1250, 'url': 'https://preview.redd.it/9sdjel4wfxfe1.png?auto=webp&s=12fba5c3cf0378d50a7657c56e34c114d8b14f97', 'width': 1348}, 'variants': {}}]} |
|||
Optimizing Large Language Model Training Using FP4 Quantization | 12 | https://arxiv.org/abs/2501.17116
Abstract:
>The growing computational demands of training large language models (LLMs) necessitate more efficient methods. Quantized training presents a promising solution by enabling low-bit arithmetic operations to reduce these costs. While FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge due to significant quantization errors and limited representational capacity. This work introduces the first FP4 training framework for LLMs, addressing these challenges with two key innovations: a differentiable quantization estimator for precise weight updates and an outlier clamping and compensation strategy to prevent activation collapse. To ensure stability, the framework integrates a mixed-precision training scheme and vector-wise quantization. Experimental results demonstrate that our FP4 framework achieves accuracy comparable to BF16 and FP8, with minimal degradation, scaling effectively to 13B-parameter LLMs trained on up to 100B tokens. With the emergence of next-generation hardware supporting FP4, our framework sets a foundation for efficient ultra-low precision training. | 2025-01-29T12:40:11 | https://www.reddit.com/r/LocalLLaMA/comments/1icsuii/optimizing_large_language_model_training_using/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icsuii | false | null | t3_1icsuii | /r/LocalLLaMA/comments/1icsuii/optimizing_large_language_model_training_using/ | false | false | self | 12 | null |
Best Small Model for RAG | 1 | I have a bunch of unstructured data (10k HTML documents) and I need to search over it and get back a structured response.
Instead of cleaning the data, converting it to structured data, and then applying NLP to queries, etc., I thought it might be better to use an LLM?
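For reference, a minimal sketch of the embedding-based retrieval step I'm imagining, assuming `sentence-transformers` for embeddings; the model name, file path, and chunk size are just illustrative:

```python
# Rough sketch, not a production pipeline: embed fixed-size chunks of the
# stripped HTML, then retrieve the most similar chunks at query time.
from bs4 import BeautifulSoup
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly

def html_to_chunks(html: str, size: int = 800) -> list[str]:
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    return [text[i:i + size] for i in range(0, len(text), size)]

chunks = html_to_chunks(open("doc.html").read())   # repeat over all 10k docs
vecs = embedder.encode(chunks, normalize_embeddings=True)

def top_k(query: str, k: int = 5) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    return [chunks[i] for i in np.argsort(-(vecs @ q))[:k]]
```

The top-k chunks would then go into the prompt of whatever small instruct model produces the structured answer.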
What would be the best model for this for fast inference? | 2025-01-29T12:42:49 | https://www.reddit.com/r/LocalLLaMA/comments/1icsw4r/best_small_model_for_rag/ | webdevop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icsw4r | false | null | t3_1icsw4r | /r/LocalLLaMA/comments/1icsw4r/best_small_model_for_rag/ | false | false | self | 1 | null |
Current state of ai, summarised | 1 | 2025-01-29T12:48:01 | Ok-Two697 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icszex | false | null | t3_1icszex | /r/LocalLLaMA/comments/1icszex/current_state_of_ai_summarised/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'rw4gBTs9YFm3_VPNAmr7JIcanOcp_RU-JrUwsbmUr-U', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/vctrj7xuixfe1.jpeg?width=108&crop=smart&auto=webp&s=dfbb6cbf72cae7d7fa3ee3fb4100134491c5d24d', 'width': 108}, {'height': 363, 'url': 'https://preview.redd.it/vctrj7xuixfe1.jpeg?width=216&crop=smart&auto=webp&s=68a4f9ebc874f3a2275a1567b9b0dc32b473adae', 'width': 216}, {'height': 538, 'url': 'https://preview.redd.it/vctrj7xuixfe1.jpeg?width=320&crop=smart&auto=webp&s=dac3e9d50b6d3a955ede6df51021361df500d6fa', 'width': 320}, {'height': 1077, 'url': 'https://preview.redd.it/vctrj7xuixfe1.jpeg?width=640&crop=smart&auto=webp&s=4f203ad93faa8ab3c975c57ef3e01e4583358739', 'width': 640}, {'height': 1616, 'url': 'https://preview.redd.it/vctrj7xuixfe1.jpeg?width=960&crop=smart&auto=webp&s=f0f0371c965bf1c63c6f356039ec4df4c8728a0d', 'width': 960}, {'height': 1818, 'url': 'https://preview.redd.it/vctrj7xuixfe1.jpeg?width=1080&crop=smart&auto=webp&s=a1825b04586797e90452d067acfc2f180207b350', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://preview.redd.it/vctrj7xuixfe1.jpeg?auto=webp&s=c66a213a54c2c3e0d0ed4d8d7af0108b7611e500', 'width': 1216}, 'variants': {}}]} |
|||
Oh boy do local R1 values matter! | 102 | I had mixed results with the local 7B, 8B and 32B models, but I sure didn't know that the parameters matter this much. I suck at reading READMEs, but this time I took a bit of time and found these super important instructions:
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \\boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
I apply step 3 to everything, even code generation, with success. After increasing the context window to 32768, I have had very consistently solid results.
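For example, here is roughly how I apply points 1-3 through Ollama's native chat endpoint (the model tag and URL are whatever your local setup uses):

```python
# Minimal sketch: temperature 0.6, no system message, boxed-answer directive,
# and a 32768-token context window via Ollama's options field.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1:8b",   # the 8B Llama distill in my case
        "stream": False,
        # Point 2: no system message; everything goes in the user prompt.
        "messages": [{
            "role": "user",
            "content": "Please reason step by step, and put your final "
                       "answer within \\boxed{}. Question: what is 17 * 23?",
        }],
        "options": {
            "temperature": 0.6,   # point 1: middle of the 0.5-0.7 range
            "num_ctx": 32768,     # room for the long <think> sections
        },
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```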
8B llama is my favorite for instructions, do you guys use different settings? | 2025-01-29T12:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/1icszj8/oh_boy_do_local_r1_values_matter/ | Zundrium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icszj8 | false | null | t3_1icszj8 | /r/LocalLLaMA/comments/1icszj8/oh_boy_do_local_r1_values_matter/ | false | false | self | 102 | null |
Could R1's 8 bit MoE + kernels allow for efficient 100K GPU-hour training epochs for long term memory recall via "retraining sleeps" without knowledge degradation? | 0 | 100k GPU-hour epochs for the full 14T-token dataset are impressive, equating to 48 hours on a 2048-GPU H800 cluster, or 24 hours on a 4096-GPU cluster. New knowledge from both the world and user interactions could be folded in very quickly, every 24 hours or so, for a very low price. Using 10% randomized data for test/validation would yield 3-hour epochs, allowing for updated knowledge sets every day.
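A quick sanity check of those numbers (the ~$2/H800-hour rental figure is the one DeepSeek's own report used, so treat the costs as rough estimates):

```python
# Back-of-envelope check of the epoch times and costs claimed above.
gpu_hours_full = 100_000                   # one full 14T-token epoch
for gpus in (2048, 4096):
    print(f"{gpus} GPUs: ~{gpu_hours_full / gpus:.0f} h per full epoch")

gpu_hours_refresh = gpu_hours_full * 0.10  # 10% randomized refresh pass
print(f"4096 GPUs, 10% data: ~{gpu_hours_refresh / 4096:.1f} h")
print(f"Cost at $2/GPU-hour: ~${gpu_hours_refresh * 2:,.0f}")
```

That lands in the same ballpark as the figures above: ~49 h and ~24 h for full epochs, and a ~2.4 h refresh pass at roughly $20k.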
This costs only $25k x 3 per day, without the knowledge-overwrite degradation issues of fine-tuning. | 2025-01-29T12:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/1icszzk/could_r1s_8_bit_moe_kernals_allow_for_efficient/ | BarnardWellesley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icszzk | false | null | t3_1icszzk | /r/LocalLLaMA/comments/1icszzk/could_r1s_8_bit_moe_kernals_allow_for_efficient/ | false | false | self | 0 | null |
Qwen-7b as a little shopkeeper - demo on github | 1 | [deleted] | 2025-01-29T12:54:37 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ict3ly | false | null | t3_1ict3ly | /r/LocalLLaMA/comments/1ict3ly/qwen7b_as_a_little_shopkeeper_demo_on_github/ | false | false | default | 1 | null |
||
Qwen-7B shopkeeper - demo on github | 62 | 2025-01-29T12:55:25 | No_Abbreviations_532 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ict448 | false | null | t3_1ict448 | /r/LocalLLaMA/comments/1ict448/qwen7b_shopkeeper_demo_on_github/ | false | false | default | 62 | {'enabled': True, 'images': [{'id': 't5hnxmj5kxfe1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=108&crop=smart&format=png8&s=345bc98c9b0c627f332947b850394c2396fe8d45', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=216&crop=smart&format=png8&s=292b8dfab478103431b4bca1efc7816223bf674c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=320&crop=smart&format=png8&s=6f13da27a0826709a8c72d28a4684d64be55f07c', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=640&crop=smart&format=png8&s=8b83eacc283927c6f785a7e8d4b4e89a945c934c', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=960&crop=smart&format=png8&s=b29b478835f307dbbf3bff8b11c802cda2f69b48', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=1080&crop=smart&format=png8&s=1eb776235e485be58c6b25f2f63e984612234414', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?format=png8&s=2e974cf3ed0ab322defcc741dc826314b5e97e7c', 'width': 1920}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=108&crop=smart&s=a67c77e60c59a1e5d4fc35c9e1c9d868d499b658', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=216&crop=smart&s=94d74ed307c21f9c040f03cd8fe27956209ddecf', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=320&crop=smart&s=5132f0f77314496f0cbb2cd833e65b7851f5c38e', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=640&crop=smart&s=838f65cfc990d76d84f5a977a244d7fe8eb37415', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=960&crop=smart&s=ea4dd6e1ed5eaf01d12cae5a03454cddb8daf855', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=1080&crop=smart&s=83d3a2552ba788a2cbeb3df221e178e63098fbe5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?s=0394213960cf1d1825382f3d2c2802a9cee5f415', 'width': 1920}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=108&format=mp4&s=40c4cdbce6c7dd1251288fe48049cb12acac7f91', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=216&format=mp4&s=9412b5d25b4cf0f7fe27679a1c9d4261d683c8d0', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=320&format=mp4&s=6b66f4aedc9ab2df6591979023ed0cf648f2e2ac', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=640&format=mp4&s=d68dfaec733034d123f90e7a71ecf06e0215a318', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=960&format=mp4&s=e993878b281e1021aaeb316367ee2b419d82a3d9', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?width=1080&format=mp4&s=4305509fc2342bb02632de8d808d602c2f850d60', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/t5hnxmj5kxfe1.gif?format=mp4&s=7bafb5b5b6d6f6873ac02ebc17207d6f49e3eda4', 'width': 
1920}}}}]} |
||
Believe | 0 | 2025-01-29T12:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ict6gl/believe/ | rodlib | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ict6gl | false | null | t3_1ict6gl | /r/LocalLLaMA/comments/1ict6gl/believe/ | false | false | 0 | null |
||
Question: Can Nvidia Digits (supercomputer) run the largest DeepSeek model? | 1 | [removed] | 2025-01-29T13:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ict988/question_can_nvidia_digits_supercomputer_run_the/ | barnlk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ict988 | false | null | t3_1ict988 | /r/LocalLLaMA/comments/1ict988/question_can_nvidia_digits_supercomputer_run_the/ | false | false | self | 1 | null |
What’s the Best AI for Text-to-Video Generation Using Real Images from its stock (with API Support)? | 1 | [removed] | 2025-01-29T13:03:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ict9qm/whats_the_best_ai_for_texttovideo_generation/ | Lonely_Culture_6907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ict9qm | false | null | t3_1ict9qm | /r/LocalLLaMA/comments/1ict9qm/whats_the_best_ai_for_texttovideo_generation/ | false | false | self | 1 | null |
Why do people like Ollama more than LM Studio? | 250 | I'm just curious. I see a ton of people discussing Ollama, but as an LM Studio user, don't see a lot of people talking about it.
But LM Studio seems so much better to me. It uses arbitrary GGUFs, not whatever that weird proprietary format is that Ollama uses. It has a really nice GUI, not mysterious opaque headless commands. If I want to try a new model, it's super easy to search for it, download it, try it, and throw it away or serve it up to AnythingLLM for some RAG or foldering.
(Before you raise KoboldCPP, yes, absolutely KoboldCPP, it just doesn't run on my machine.)
So why the Ollama obsession on this board? Help me understand. | 2025-01-29T13:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1icta5y/why_do_people_like_ollama_more_than_lm_studio/ | Intelligent-Gift4519 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icta5y | false | null | t3_1icta5y | /r/LocalLLaMA/comments/1icta5y/why_do_people_like_ollama_more_than_lm_studio/ | false | false | self | 250 | null |
Frustrated with VRAM Intervals of Distilled Models | 0 | Why is there no standard yet?
For instance, on my 4070 Ti Super, DeepSeek-R1-Distill-Qwen-32B consumes too much VRAM, so inference is split between CPU and GPU, whereas DeepSeek-R1-Distill-Qwen-14B, the next step down, really only needs a GPU with 12GB of VRAM.
I don't know, I'm just frustrated that we're not tailoring distilled models to consumer hardware specs. I'm essentially losing out on 4-6 GB of my VRAM.
Token generation speed falls off if you exceed the VRAM on your GPU and have to split inference with the CPU using Ollama.
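For anyone else trying to size this, a back-of-envelope estimate (the bits-per-weight values are rough averages for common GGUF quants, and real usage adds KV cache and overhead on top):

```python
# Approximate GGUF weight size, which is close to the VRAM needed for weights.
def approx_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * 1e9 * bits_per_weight / 8 / 1024**3

for params in (14, 32):
    for quant, bpw in (("Q4_K_M", 4.8), ("Q8_0", 8.5)):
        print(f"{params}B {quant}: ~{approx_gb(params, bpw):.1f} GB")
```

A 32B model at Q4 comes out around 18 GB, which is exactly why it spills past a 16 GB card while a 14B at Q4 leaves several gigabytes idle.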
I am a noob though - let me know if there's a workaround. | 2025-01-29T13:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ictden/frustrated_with_vram_intervals_of_distilled_models/ | foundoutimanadult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ictden | false | null | t3_1ictden | /r/LocalLLaMA/comments/1ictden/frustrated_with_vram_intervals_of_distilled_models/ | false | false | self | 0 | null |
Need Advice on Document Processing (PDFs, Excel, TXT) – Best Free & Open-Source Tools? | 1 | [removed] | 2025-01-29T13:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/1icteac/need_advice_on_document_processing_pdfs_excel_txt/ | OceanHarmonies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icteac | false | null | t3_1icteac | /r/LocalLLaMA/comments/1icteac/need_advice_on_document_processing_pdfs_excel_txt/ | false | false | self | 1 | null |
Is the DeepSeek model poisoned at the data level? | 0 | I ran the DeepSeek model locally and asked, "What is Taiwan?" I received a response stating that Taiwan is a historical and inalienable part of China.
For comparison, I also ran Meta's LLaMA 3 locally—the same model on which DeepSeek is based—and got a response saying that Taiwan is an independent island nation.
Check out the screenshot.
This raises a question: Essentially, the Chinese government's perspective is embedded at the model level, meaning it doesn’t matter whether DeepSeek is running in the cloud or locally.
Now, further questions:
How can this be "rooted out"? Would fine-tuning the model help, or would some RAG-based approach with prompts on disputed topics be more effective?
What other kinds of "backdoors" like this might be embedded? 😎 | 2025-01-29T13:11:17 | aospan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ictev1 | false | null | t3_1ictev1 | /r/LocalLLaMA/comments/1ictev1/is_the_deepseek_model_poisoned_at_the_data_level/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'qYh2HCqsLxwQW_OtI_wBJvLAXB1fz6N_1UaVnilp5HA', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/npnc6ja0nxfe1.jpeg?width=108&crop=smart&auto=webp&s=31daad4c8793a3596200042ccae5a47d85002df1', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/npnc6ja0nxfe1.jpeg?width=216&crop=smart&auto=webp&s=51399c6a2f04bd54d735f6bf55e095f98b29177f', 'width': 216}, {'height': 130, 'url': 'https://preview.redd.it/npnc6ja0nxfe1.jpeg?width=320&crop=smart&auto=webp&s=e35b5c9db154462d614b2bb247c1755e34b0d360', 'width': 320}, {'height': 260, 'url': 'https://preview.redd.it/npnc6ja0nxfe1.jpeg?width=640&crop=smart&auto=webp&s=3843639f8331fa0fd00483a100895773650fa439', 'width': 640}, {'height': 391, 'url': 'https://preview.redd.it/npnc6ja0nxfe1.jpeg?width=960&crop=smart&auto=webp&s=7bdeaf67d7ee04d6d573d9467e2966c06471ff3b', 'width': 960}, {'height': 440, 'url': 'https://preview.redd.it/npnc6ja0nxfe1.jpeg?width=1080&crop=smart&auto=webp&s=b193fc0752d311d37c069dd6b05945936c6ea853', 'width': 1080}], 'source': {'height': 714, 'url': 'https://preview.redd.it/npnc6ja0nxfe1.jpeg?auto=webp&s=52f193201f40038f0e07fdb1f3cba2a985643366', 'width': 1752}, 'variants': {}}]} |
||
Need Advice on Document Processing (PDFs, Excel, TXT) – Best Free & Open-Source Tools? | 1 | [removed] | 2025-01-29T13:11:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ictf3i/need_advice_on_document_processing_pdfs_excel_txt/ | OceanHarmonies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ictf3i | false | null | t3_1ictf3i | /r/LocalLLaMA/comments/1ictf3i/need_advice_on_document_processing_pdfs_excel_txt/ | false | false | self | 1 | null |
How do LLMs generate grammatically-correct sentences every time? | 1 | I get that an LLM predicts the next token using weights learned while training on a large amount of text scraped from the internet. But a lot of people make typos and grammar mistakes, and even poorly written HTML can leave junk in the scraped content.
So did they have to do some trickery to compensate for all this, or are the rules of grammar part of the text-generation pipeline itself? | 2025-01-29T13:13:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ictgbf/how_do_llms_generate_grammaticallycorrect/ | sunshine-and-sorrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ictgbf | false | null | t3_1ictgbf | /r/LocalLLaMA/comments/1ictgbf/how_do_llms_generate_grammaticallycorrect/ | false | false | self | 1 | null |
Instead of multilingual is there any only English based llm? | 1 | [removed] | 2025-01-29T13:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ictgg1/instead_of_multilingual_is_there_any_only_english/ | EmmaMartian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ictgg1 | false | null | t3_1ictgg1 | /r/LocalLLaMA/comments/1ictgg1/instead_of_multilingual_is_there_any_only_english/ | false | false | self | 1 | null |
Look what they need.. | 1 | 2025-01-29T13:16:28 | MAXFlRE | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icti94 | false | null | t3_1icti94 | /r/LocalLLaMA/comments/1icti94/look_what_they_need/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'tCehchyBX8FCofSXMH6P_KIwS7xYwPJQJbUjfWD5dKI', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/kv6zf4ysnxfe1.png?width=108&crop=smart&auto=webp&s=4523fb55ee60f42dfb6c8894d0b572a765d22212', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/kv6zf4ysnxfe1.png?width=216&crop=smart&auto=webp&s=877a37a17e734a6fb6c233bdbebc13e169369f7e', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/kv6zf4ysnxfe1.png?width=320&crop=smart&auto=webp&s=c3123d34eda434beb5590f2321823d5f937a9084', 'width': 320}], 'source': {'height': 558, 'url': 'https://preview.redd.it/kv6zf4ysnxfe1.png?auto=webp&s=e6570cb8934459d47edbbbeb0f96854df616adde', 'width': 500}, 'variants': {}}]} |
|||
Distributed inference in kubernetes | 1 | What is your setup for running distributed inference in Kubernetes? We have 6 Supermicro SYS-821GE-TNHR servers, each containing 8 H100 GPUs. The GPU operator is set up correctly, but when running distributed inference with, for example, vLLM, it's very slow: around 2 tokens per second.
What enhancements do you recommend? Is the network operator helpful? I'm kinda lost on how to set it up with our servers.
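One thing worth doing if you haven't: a single-node sanity check with vLLM's offline API. If one 8x H100 box is fast with tensor parallelism, the bottleneck is probably the inter-node network rather than vLLM itself (the model name below is just a placeholder):

```python
# Minimal single-node check: shard the model across one node's 8 GPUs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    tensor_parallel_size=8,
)
out = llm.generate(["Hello"], SamplingParams(max_tokens=128))
print(out[0].outputs[0].text)
```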
Any guidance is much appreciated. | 2025-01-29T13:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ictilz/distributed_inference_in_kubernetes/ | No-Emphasis6569 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ictilz | false | null | t3_1ictilz | /r/LocalLLaMA/comments/1ictilz/distributed_inference_in_kubernetes/ | false | false | self | 1 | null |
Can I allocate SDRAM (CPU) + VRAM (GPU) to try and run full R1, or am I better off running 48GB VRAM for 70B R1? | 1 | (Preface: I've not run local AI before, and I'm a noob in this area.)
It seems the best bang for your buck is 2x 3090s to run the 70B version of R1, which is not the R1 everyone's raving about.
GPU-wise, it'd take $20-30k ($10-15k if you're clever) to set up enough VRAM to run the full raw R1 model.
I saw a [post](https://www.reddit.com/r/LocalLLaMA/comments/1ic8cjf/6000_computer_to_run_deepseek_r1_670b_q8_locally/) where someone suggested running R1 on CPUs and RAM instead of CUDA.
I was wondering: is there any point in going above 48GB of VRAM if the next step up (the full R1 model) requires 20-40 GPUs?
Is there a way to make running R1 easier by, say, quantizing it? Or is that what the 70B model is?
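(From what I've read, quantized GGUF builds plus partial GPU offload are exactly how runtimes like llama.cpp mix VRAM with system RAM; something like this llama-cpp-python sketch, where the model path and layer count are placeholders:)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf",  # placeholder
    n_gpu_layers=60,  # as many layers as fit in VRAM; the rest run on CPU/RAM
    n_ctx=8192,
)
print(llm("Q: What is 2+2? A:", max_tokens=32)["choices"][0]["text"])
```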
Or is there a way to run GPUs alongside a high-performance CPU + DDR4 in order to get close to the RAM (VRAM + system RAM) needed to run full R1? Or is it not feasible to mix VRAM and system RAM? From what I saw before, if you mix a 3090 and a 3080, you only get up to the performance of a 3080. | 2025-01-29T13:25:06 | https://www.reddit.com/r/LocalLLaMA/comments/1icto2i/can_i_allocate_sdramcpu_vram_gpu_to_try_and_run/ | CertainlyBright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icto2i | false | null | t3_1icto2i | /r/LocalLLaMA/comments/1icto2i/can_i_allocate_sdramcpu_vram_gpu_to_try_and_run/ | false | false | self | 1 | null |
Deepseek business model is not unique. | 0 | Everybody is scratching their heads over why a quant firm is investing in LLMs. But an important thing to remember is that it is a philanthropy project, a pet project of the founder. In fact, this is not the first time quant firms have done this: D. E. Shaw did the same with molecular biology, and Jim Simons spent a ton on math research grants. So for those wondering how they plan to make money: they really aren't interested in making a ton of money. It's a billionaire's pet project. | 2025-01-29T13:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ictqfs/deepseek_buisness_model_is_not_unique/ | indicisivedivide | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ictqfs | false | null | t3_1ictqfs | /r/LocalLLaMA/comments/1ictqfs/deepseek_buisness_model_is_not_unique/ | false | false | self | 0 | null |
good shit | 558 | 2025-01-29T13:32:50 | diligentgrasshopper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icttm7 | false | null | t3_1icttm7 | /r/LocalLLaMA/comments/1icttm7/good_shit/ | false | false | 558 | {'enabled': True, 'images': [{'id': 'gYBVeRQOlNsVo9ZBJ8mJmbrI0IVx-ld5j9Z9KqeztKU', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/azitnmgpqxfe1.png?width=108&crop=smart&auto=webp&s=fc0a7ab99aa896b902a1c8fba07c7586bb8b7acc', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/azitnmgpqxfe1.png?width=216&crop=smart&auto=webp&s=933c65132536629fd1b45cb2388abff74ca2a25c', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/azitnmgpqxfe1.png?width=320&crop=smart&auto=webp&s=b9e6efecf725368cd6cff47229dbc1474bbef0ea', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/azitnmgpqxfe1.png?width=640&crop=smart&auto=webp&s=c237cfc71f4b87b6b0a5d08b0631615eefa83d9a', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/azitnmgpqxfe1.png?width=960&crop=smart&auto=webp&s=72a62109c1373874c1749a39613a2b09ecfc4743', 'width': 960}, {'height': 565, 'url': 'https://preview.redd.it/azitnmgpqxfe1.png?width=1080&crop=smart&auto=webp&s=4c33299cb0d47d46aae6e060d481c37e1d633135', 'width': 1080}], 'source': {'height': 1508, 'url': 'https://preview.redd.it/azitnmgpqxfe1.png?auto=webp&s=8301af24ce6ba13cc00431855fcdaea8a3fe579a', 'width': 2880}, 'variants': {}}]} |
|||
Optimizing Large Language Model Training Using FP4 Quantization | 4 | The growing computational demands of training large language models (LLMs) necessitate more efficient methods. Quantized training presents a promising solution by enabling low-bit arithmetic operations to reduce these costs. While FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge due to significant quantization errors and limited representational capacity. This work introduces the first FP4 training framework for LLMs, addressing these challenges with two key innovations: a differentiable quantization estimator for precise weight updates and an outlier clamping and compensation strategy to prevent activation collapse. To ensure stability, the framework integrates a mixed-precision training scheme and vector-wise quantization. Experimental results demonstrate that our FP4 framework achieves accuracy comparable to BF16 and FP8, with minimal degradation, scaling effectively to 13B-parameter LLMs trained on up to 100B tokens. With the emergence of next-generation hardware supporting FP4, our framework sets a foundation for efficient ultra-low precision training.
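To make the two key ideas concrete, here is a toy PyTorch fake-quantization sketch (my own illustration, not the paper's code): clamp outliers, snap weights onto the signed E2M1 FP4 grid, and use a straight-through estimator so gradients still flow. The paper uses vector-wise scales; this toy uses one scale per tensor.

```python
import torch

# Signed E2M1 (FP4) representable magnitudes.
POS = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = torch.cat([-POS.flip(0), POS])

def fp4_fake_quant(w: torch.Tensor, clamp_pct: float = 0.999) -> torch.Tensor:
    hi = torch.quantile(w.abs().flatten(), clamp_pct)  # outlier clamping
    w_c = w.clamp(-hi, hi)
    scale = hi / 6.0            # map the clamp point to the largest grid value
    idx = torch.argmin((w_c.unsqueeze(-1) / scale - FP4_GRID).abs(), dim=-1)
    q = FP4_GRID[idx] * scale
    # Straight-through estimator: forward uses q, backward acts like identity.
    return w_c + (q - w_c).detach()
```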
[https://arxiv.org/abs/2501.17116](https://arxiv.org/abs/2501.17116) | 2025-01-29T13:36:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ictw5m/optimizing_large_language_model_training_using/ | Won3wan32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ictw5m | false | null | t3_1ictw5m | /r/LocalLLaMA/comments/1ictw5m/optimizing_large_language_model_training_using/ | false | false | self | 4 | null |
Comparing Performance of AMD Ryzen AI Max+ 395, NVIDIA DIGITS, and RTX 5090 for Local LLMs | 1 | Hello everyone,
I’m looking for insights re: the expected performance and capability of the AMD Ryzen AI Max+ 395 (lol) and NVIDIA’s DIGITS compared to the soon-to-be-released RTX 5090 when it comes to running local LLMs.
I know that none of these are out yet and that I should wait for release benchmarks, etc., but I am trying to decide whether to buy the 5090 in a couple of days' time (that is, if scalpers don't beat me to it).
From what I’ve gathered:
- AMD Ryzen AI Max+ 395 claims to outperform the RTX 4090 by up to 2.2 times in specific AI workloads while drawing up to 87% less power. This sounds really dubious to me, even if it has the purported 96gb unified memory and architecture optimisations etc.
- DIGITS claims 1 petaflop of performance at FP4 precision and 128 GB of unified memory; I'm not entirely sure what this translates to in real-world use. I'm curious how it might stack up against AMD's offering in practical scenarios, and whether to be skeptical of both.
Given these points, I’d love to hear your opinions on:
1. Which option do you think will provide the best performance-to-cost ratio for hosting local LLMs?
2. How do you expect each of these systems to handle RAG tasks involving scientific papers and textbooks?
3. Are there any other considerations or alternatives I should keep in mind when choosing hardware for this purpose?
All will cost a decent chunk of cash and I really don’t want to be wasting money needlessly or have buyer’s remorse!
Thanks and hope this also helps others given how rapidly everything is evolving in this space. | 2025-01-29T13:42:29 | https://www.reddit.com/r/LocalLLaMA/comments/1icu0o5/comparing_performance_of_amd_ryzen_ai_max_395/ | Big_Yak9983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icu0o5 | false | null | t3_1icu0o5 | /r/LocalLLaMA/comments/1icu0o5/comparing_performance_of_amd_ryzen_ai_max_395/ | false | false | self | 1 | null |
Kaggle installing vllm | 0 | I am unable to get vLLM 0.7.0 working in Kaggle's environment.
If anyone is able to get it fully up and running through a utility script, I'd be happy to buy you a coffee. | 2025-01-29T13:42:40 | https://www.reddit.com/r/LocalLLaMA/comments/1icu0t2/kaggle_installing_vllm/ | Wonderful_Alfalfa115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icu0t2 | false | null | t3_1icu0t2 | /r/LocalLLaMA/comments/1icu0t2/kaggle_installing_vllm/ | false | false | self | 0 | null |
Should I buy refurb 3090 to add to 4x1070 | 0 | I have a system with 4x 1070s, 8GB each. I can buy a 3090 with 24GB for $700 and add it to the system.
Currently I am running about the easiest setup possible: Arch Linux, LM Studio 3.8, and WebUI in a Docker container with OPENAI_API_BASE_URL pointing to LM Studio for local network access.
| 2025-01-29T13:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/1icu1f9/should_i_buy_refurb_3090_to_add_to_4x1070/ | regjoe13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icu1f9 | false | null | t3_1icu1f9 | /r/LocalLLaMA/comments/1icu1f9/should_i_buy_refurb_3090_to_add_to_4x1070/ | false | false | self | 0 | null |
Can't run Llama | 1 | [removed] | 2025-01-29T14:00:10 | https://www.reddit.com/r/LocalLLaMA/comments/1icudc2/cant_run_llama/ | Money_Argument9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icudc2 | false | null | t3_1icudc2 | /r/LocalLLaMA/comments/1icudc2/cant_run_llama/ | false | false | self | 1 | null |
deepseek R1 70b on the mac mini m4 | 1 | [removed] | 2025-01-29T14:01:33 | https://www.reddit.com/r/LocalLLaMA/comments/1icuekr/deepseek_r1_70b_on_the_mac_mini_m4/ | Strange-Composer-951 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icuekr | false | null | t3_1icuekr | /r/LocalLLaMA/comments/1icuekr/deepseek_r1_70b_on_the_mac_mini_m4/ | false | false | self | 1 | null |
Comparing expected performance of AMD Ryzen AI Max+ 395, NVIDIA DIGITS, and RTX 5090 for Local LLMs | 29 | Hello everyone,
I’m looking for opinions from more knowledgable folk on the expected performance of the AMD Ryzen AI Max+ 395 (lol) and NVIDIA’s DIGITS vs the RTX 5090 when it comes to running local LLMs.
For context, I'm asking this question now because I'm trying to decide whether to battle it out with scalpers and see if I can buy an RTX 5090 tomorrow, or to just chill and avoid wasting money if superior tools are around the corner.
From what I’ve gathered:
AMD Ryzen AI Max+ 395 claims to outperform the RTX 4090 by up to 2.2 times in specific AI workloads while drawing up to 87% less power. 96 GB of RAM can be dedicated to graphics tasks which means bigger models. This seems promising for personal use, especially as I’m doing a lot of RAG with medical textbooks and articles.
DIGITS reportedly offers 1 petaflop of performance at FP4 precision (4-bit floating point, so the figure isn't directly comparable to FP16 throughput numbers) and 128 GB of unified memory, and NVIDIA is marketing it as optimised for running large models locally.
I’m curious about how both would stack up against the RTX 5090. I know it “only” has 32gb VRAM so would be more limited in what models it can run, but if there is a huge inference speed advantage then I would prefer that over having a bigger model.
1. Which option do you think will provide the best performance:cost ratio for hosting local LLMs?
2. How quick do you expect inference to be on each of these systems when handling RAG tasks with scientific papers, books, etc.?
3. Are there any other considerations or alternatives I should keep in mind? I should state here that I don’t want to buy any Apple product.
Wildcard question:
Have DeepSeek and Chinese researchers changed the game completely, and I need to shift my focus away from optimising what hardware I have entirely??
Thanks in advance for your insights! Hope this also helps others in the same boat as me. | 2025-01-29T14:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/1icuff7/comparing_expected_performance_of_amd_ryzen_ai/ | Big_Yak9983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icuff7 | false | null | t3_1icuff7 | /r/LocalLLaMA/comments/1icuff7/comparing_expected_performance_of_amd_ryzen_ai/ | false | false | self | 29 | null |
Running DeepSeek R1 on AWS: Cost & Hardware Recommendations? | 1 | **Running DeepSeek R1 on AWS: Cost & Hardware Recommendations?**
Hey everyone,
I’m running a startup with 60 developers, and we’re looking to deploy **DeepSeek R1** on AWS. I have a few questions regarding the setup:
1. **What kind of AWS hardware** (GPUs, instance types, storage, etc.) is recommended for running DeepSeek R1 efficiently?
2. **Estimated cost** for running inference vs. fine-tuning (if feasible)?
3. **Any cost-saving tips** (e.g., spot instances, reserved instances, or alternative setups)?
4. Would you recommend **SageMaker, EC2, or other AWS services** for this use case?
5. Any **real-world experience** with latency/performance when serving DeepSeek R1 at scale?
We’re looking for a balance between performance and cost-efficiency. Would love to hear from anyone who has experience running similar LLMs on AWS.
I'm thinking of having all devs share the same instance at first, with some sort of request-queueing mechanism.
Please feel free to recommend other solutions and/or hardware.
Thanks in advance! | 2025-01-29T14:09:04 | https://www.reddit.com/r/LocalLLaMA/comments/1icukfm/running_deepseek_r1_on_aws_cost_hardware/ | alamkh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icukfm | false | null | t3_1icukfm | /r/LocalLLaMA/comments/1icukfm/running_deepseek_r1_on_aws_cost_hardware/ | false | false | self | 1 | null |
If GPT 4 can speak Chinese, doesn't that mean they have stolen Chinese copyrighted works/intellectual property rights without permission? | 1 | [removed]
[View Poll](https://www.reddit.com/poll/1icul95) | 2025-01-29T14:10:08 | https://www.reddit.com/r/LocalLLaMA/comments/1icul95/if_gpt_4_can_speak_chinese_doesnt_that_mean_they/ | dahara111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icul95 | false | null | t3_1icul95 | /r/LocalLLaMA/comments/1icul95/if_gpt_4_can_speak_chinese_doesnt_that_mean_they/ | false | false | self | 1 | null |
Looking for Advice on Best Open-Source Tools for Document Processing (PDFs, Excel, TXT) | 1 | [removed] | 2025-01-29T14:20:43 | https://www.reddit.com/r/LocalLLaMA/comments/1icutfv/looking_for_advice_on_best_opensource_tools_for/ | OceanHarmonies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icutfv | false | null | t3_1icutfv | /r/LocalLLaMA/comments/1icutfv/looking_for_advice_on_best_opensource_tools_for/ | false | false | self | 1 | null |
What do you guys think? | 0 | 2025-01-29T14:23:35 | Vincentkk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icuvmc | false | null | t3_1icuvmc | /r/LocalLLaMA/comments/1icuvmc/what_do_you_guys_think/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'NmBY0mTe-BfIU5ej1vrDJfo84JDoyOTstGtxTr4qjC4', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/yr6np9mwzxfe1.jpeg?width=108&crop=smart&auto=webp&s=67dfb53bf013b24c99db3130c705f304140edeb3', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/yr6np9mwzxfe1.jpeg?width=216&crop=smart&auto=webp&s=d145d0f5fb26ac13b7f0607320aaad467d73ff7e', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/yr6np9mwzxfe1.jpeg?width=320&crop=smart&auto=webp&s=e5f15e7af01b3e7720cf2a92542290594ecd1818', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/yr6np9mwzxfe1.jpeg?width=640&crop=smart&auto=webp&s=58ad397a154dc5b250990aef7a531ce51d49dbe6', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/yr6np9mwzxfe1.jpeg?width=960&crop=smart&auto=webp&s=a845c705899551a6389dce8201d0d8919c2c6c04', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/yr6np9mwzxfe1.jpeg?width=1080&crop=smart&auto=webp&s=115257aed3a3d26fa81626a9e7b04f69e0b9aae3', 'width': 1080}], 'source': {'height': 2436, 'url': 'https://preview.redd.it/yr6np9mwzxfe1.jpeg?auto=webp&s=66172910f4161bd4cf7380c285bdc9acd533a50b', 'width': 1125}, 'variants': {}}]} |
|||
Deepseek R1 1.5B local model is also censored btw | 1 | 2025-01-29T14:26:37 | Delicious-Farmer-234 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icuxxu | false | null | t3_1icuxxu | /r/LocalLLaMA/comments/1icuxxu/deepseek_r1_15b_local_model_is_also_censored_btw/ | false | false | 1 | {'enabled': True, 'images': [{'id': '7IHfeujtL7eh67NJg5qPw1x3p133Kvl3Om5wa_WWrwM', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/p1a4s0lkzxfe1.png?width=108&crop=smart&auto=webp&s=20243ae8a5bf12525539617f83fd81b8ea32da23', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/p1a4s0lkzxfe1.png?width=216&crop=smart&auto=webp&s=9069dae1f56d1f5b2fa36d7d036ad2c6619deb56', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/p1a4s0lkzxfe1.png?width=320&crop=smart&auto=webp&s=e94cfd44b64714dc81055e067b68519314b21279', 'width': 320}, {'height': 537, 'url': 'https://preview.redd.it/p1a4s0lkzxfe1.png?width=640&crop=smart&auto=webp&s=2b2650a932af9bfe7656e865e08e59a7b97937e5', 'width': 640}, {'height': 805, 'url': 'https://preview.redd.it/p1a4s0lkzxfe1.png?width=960&crop=smart&auto=webp&s=617d380f7c4ffcd76501c6710f14ca3fdc981190', 'width': 960}, {'height': 906, 'url': 'https://preview.redd.it/p1a4s0lkzxfe1.png?width=1080&crop=smart&auto=webp&s=5babe16b94e7ece90a1bf5ae78f3fefb723ea245', 'width': 1080}], 'source': {'height': 1377, 'url': 'https://preview.redd.it/p1a4s0lkzxfe1.png?auto=webp&s=cc0dbacf4d8ea740f2220506d5fbea9613f43bd5', 'width': 1641}, 'variants': {}}]} |
|||
DeepSeek authors, past collaborators, and their affiliations | 1 | 2025-01-29T14:26:44 | osint_for_good | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icuy11 | false | null | t3_1icuy11 | /r/LocalLLaMA/comments/1icuy11/deepseek_authors_past_collaborators_and_their/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Y6SHeH7yDK8qEtOijZ2vCKWXDow9pG04_h6AIKFmz0M', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ko683u870yfe1.png?width=108&crop=smart&auto=webp&s=20e054ea3f5815bc1b79a97c6c4de3f391cf8c25', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ko683u870yfe1.png?width=216&crop=smart&auto=webp&s=386e595527d8c92beee7663a9791124ff3906f3d', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ko683u870yfe1.png?width=320&crop=smart&auto=webp&s=7bc59543e54599396b11e6d1fa60fa06a10b5e7f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ko683u870yfe1.png?width=640&crop=smart&auto=webp&s=d61828b83322619dad00d6221f2356109535b2bd', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/ko683u870yfe1.png?width=960&crop=smart&auto=webp&s=04159326ceaf2f80555ca99de1a2a1458a14ce29', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/ko683u870yfe1.png?width=1080&crop=smart&auto=webp&s=26df80df05ec1ce5abed9cd1a957e86c0a45e549', 'width': 1080}], 'source': {'height': 5000, 'url': 'https://preview.redd.it/ko683u870yfe1.png?auto=webp&s=323ccb7f2dde10a050a9bf762196a8f6b955b62e', 'width': 5000}, 'variants': {}}]} |
|||
Deepseek R1 1.5B response, thoughts? | 0 | 2025-01-29T14:28:52 | Delicious-Farmer-234 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icuzq2 | false | null | t3_1icuzq2 | /r/LocalLLaMA/comments/1icuzq2/deepseek_r1_15b_response_thoughts/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'cSXo53Z9jAIu9Sqr60mAslfBK_IPJdBQCOxcMchTX-c', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/d36i38ft0yfe1.png?width=108&crop=smart&auto=webp&s=16add514e5fedbcf4e838cc39107cb234cd1f7d6', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/d36i38ft0yfe1.png?width=216&crop=smart&auto=webp&s=299218a7826e4fbe8bd7c355485b2a8afeefe3c1', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/d36i38ft0yfe1.png?width=320&crop=smart&auto=webp&s=15f3e6b7b198e15e915130492817a2ea4c9bddd8', 'width': 320}, {'height': 537, 'url': 'https://preview.redd.it/d36i38ft0yfe1.png?width=640&crop=smart&auto=webp&s=9cb76b3ac6edc15300b06df4c788ba237f25a0a8', 'width': 640}, {'height': 805, 'url': 'https://preview.redd.it/d36i38ft0yfe1.png?width=960&crop=smart&auto=webp&s=36dbe406294dd885d584d52bcf09b648bd562a2a', 'width': 960}, {'height': 906, 'url': 'https://preview.redd.it/d36i38ft0yfe1.png?width=1080&crop=smart&auto=webp&s=41c35f0aa44aab455dfc77b02b857d76329c36a1', 'width': 1080}], 'source': {'height': 1377, 'url': 'https://preview.redd.it/d36i38ft0yfe1.png?auto=webp&s=95debe483a3fb01535f3f54dab31cdebe5abeecb', 'width': 1641}, 'variants': {}}]} |
|||
Let's not tell them about Qwen 2.5 Max | 1 | 2025-01-29T14:33:13 | https://v.redd.it/1kal94mm1yfe1 | one-escape-left | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icv38m | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/1kal94mm1yfe1/DASHPlaylist.mpd?a=1740753209%2CYjA4MmU2NDBmZGIxY2UyMDRkMTcyNDc2NjAxMDQxNmJhN2FmOTgzNzYxNWQzMzFhMDkyNDI4Y2JmZjRjY2IwYg%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/1kal94mm1yfe1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/1kal94mm1yfe1/HLSPlaylist.m3u8?a=1740753209%2CZDk1MzcyYjQxMjI0YmE0OWYzZDIyMmU2NGI5NmUxMzIzYWMwODQyOWZlYjFjZDMxOTVjN2U4MTJmYTJlNzQyNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1kal94mm1yfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}} | t3_1icv38m | /r/LocalLLaMA/comments/1icv38m/lets_not_tell_them_about_qwen_25_max/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aDBhcnM2a20xeWZlMZCmzFIuA7ViNagL80xkpAjwlYplS4KnSznLso8v0o-l', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aDBhcnM2a20xeWZlMZCmzFIuA7ViNagL80xkpAjwlYplS4KnSznLso8v0o-l.png?width=108&crop=smart&format=pjpg&auto=webp&s=aea6f511a5b456177fd586c8a0fe3891ee7c6518', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aDBhcnM2a20xeWZlMZCmzFIuA7ViNagL80xkpAjwlYplS4KnSznLso8v0o-l.png?width=216&crop=smart&format=pjpg&auto=webp&s=85fcc861610f6289f858e77433060974f39f99cc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aDBhcnM2a20xeWZlMZCmzFIuA7ViNagL80xkpAjwlYplS4KnSznLso8v0o-l.png?width=320&crop=smart&format=pjpg&auto=webp&s=9fcf0d36abc0b66436dca359c89814aacaf8f3ab', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/aDBhcnM2a20xeWZlMZCmzFIuA7ViNagL80xkpAjwlYplS4KnSznLso8v0o-l.png?width=640&crop=smart&format=pjpg&auto=webp&s=1299be329010c55a0376d1556d947ea1e1a71266', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/aDBhcnM2a20xeWZlMZCmzFIuA7ViNagL80xkpAjwlYplS4KnSznLso8v0o-l.png?format=pjpg&auto=webp&s=58de2501e0016a02b403ce0c055933e4918bf22b', 'width': 640}, 'variants': {}}]} |
||
Alibaba's own charts show conflicting numbers for MMLU-Pro | 15 | 2025-01-29T14:42:41 | splityoassintwo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icvakq | false | null | t3_1icvakq | /r/LocalLLaMA/comments/1icvakq/alibabas_own_charts_show_conflicting_numbers_for/ | false | false | 15 | {'enabled': True, 'images': [{'id': 'J1flLH5LecAr6llG8bOGWZofrnH1oxC-JAghebB5MtY', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/bp317ky93yfe1.jpeg?width=108&crop=smart&auto=webp&s=bd776ffef200e078c417adcc1eebc894fea36da2', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/bp317ky93yfe1.jpeg?width=216&crop=smart&auto=webp&s=f2de0f946c99b59dd59c64de3939e65746155bde', 'width': 216}, {'height': 374, 'url': 'https://preview.redd.it/bp317ky93yfe1.jpeg?width=320&crop=smart&auto=webp&s=69e826a5ebceb8a51e910a8621beca84c91e8309', 'width': 320}, {'height': 749, 'url': 'https://preview.redd.it/bp317ky93yfe1.jpeg?width=640&crop=smart&auto=webp&s=d28243434a732efe14d3296468b292b280421803', 'width': 640}, {'height': 1124, 'url': 'https://preview.redd.it/bp317ky93yfe1.jpeg?width=960&crop=smart&auto=webp&s=048a834516e585b8b1fb4af42fbdf1a42f3afa1d', 'width': 960}, {'height': 1265, 'url': 'https://preview.redd.it/bp317ky93yfe1.jpeg?width=1080&crop=smart&auto=webp&s=0dd48e18468f9f9346ad2ae2c633ca1cdb3eb617', 'width': 1080}], 'source': {'height': 1544, 'url': 'https://preview.redd.it/bp317ky93yfe1.jpeg?auto=webp&s=3be69d30a6e4cbbf84e951106d95f88b68acfa95', 'width': 1318}, 'variants': {}}]} |