title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Nvidia cuts FP8 training performance in half on RTX 40 and 50 series GPUs | 430 | According to their new RTX Blackwell GPU architecture whitepaper, Nvidia appears to have cut FP8 training performance in half on RTX 40 and 50 series GPUs after DeepSeek successfully trained their SOTA V3 and R1 models using FP8.
In their original Ada Lovelace whitepaper, table 2 in Appendix A shows the 4090 having **660.6 TFlops** of FP8 with FP32 accumulate without sparsity, which is the same as FP8 with FP16 accumulate. The new Blackwell paper shows half the performance for the 4090 at just **330.3 TFlops** of FP8 with FP32 accumulate, and the 5090 has just **419 TFlops** vs **838 TFlops** for FP8 with FP16 accumulate.
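As a quick sanity check, both deltas are exactly 2x, which points to a deliberate rate cap rather than a remeasurement (figures copied from the two whitepapers above):

```python
# FP8 tensor-core throughput (dense, TFLOPS) from the two whitepapers.
ada_4090_fp32acc = 660.6        # Ada whitepaper, Table 2, Appendix A
blackwell_4090_fp32acc = 330.3  # Blackwell whitepaper
rtx5090_fp16acc = 838.0         # 5090, FP8 with FP16 accumulate
rtx5090_fp32acc = 419.0         # 5090, FP8 with FP32 accumulate

print(ada_4090_fp32acc / blackwell_4090_fp32acc)  # 2.0
print(rtx5090_fp16acc / rtx5090_fp32acc)          # 2.0
```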
FP32 accumulate is a must for training because FP16 doesn't have the precision and dynamic range required.
If this isn't a mistake, then it means Nvidia lobotomized their GeForce lineup to further dissuade us from using these cards for AI/ML training. The cap could potentially be reversed, at least on the RTX 40 series, since it was likely imposed through a driver update.
This is quite unfortunate but not unexpected, as Nvidia has a known history of artificially limiting GeForce GPUs for AI training since the Turing architecture, while their Quadro and datacenter GPUs continue to have the full performance.
https://preview.redd.it/x3qfea1352ge1.jpg?width=2007&format=pjpg&auto=webp&s=6c20a53057eb2bf15bbf65db4900af638fef9955
https://preview.redd.it/lk3ch91352ge1.jpg?width=1934&format=pjpg&auto=webp&s=d267c0312fe0be00175e616512101dce69113134
Sources:
RTX Blackwell GPU Architecture Whitepaper:
[https://images.nvidia.com/aem-dam/Solutions/geforce/blackwell/nvidia-rtx-blackwell-gpu-architecture.pdf](https://images.nvidia.com/aem-dam/Solutions/geforce/blackwell/nvidia-rtx-blackwell-gpu-architecture.pdf)
RTX Ada Lovelace GPU Architecture Whitepaper:
[https://images.nvidia.com/aem-dam/Solutions/Data-Center/l4/nvidia-ada-gpu-architecture-whitepaper-v2.1.pdf](https://images.nvidia.com/aem-dam/Solutions/Data-Center/l4/nvidia-ada-gpu-architecture-whitepaper-v2.1.pdf) | 2025-01-30T04:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ideaxu/nvidia_cuts_fp8_training_performance_in_half_on/ | Emergency-Map9861 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ideaxu | false | null | t3_1ideaxu | /r/LocalLLaMA/comments/1ideaxu/nvidia_cuts_fp8_training_performance_in_half_on/ | false | false | 430 | null |
What is the best around 12-15B param models for coding? | 3 | I have been using the qwen 2.5 14B for this so far. Is it amongst the best in this class? Have also installed DeepSeek V2 Lite instruct which is 16B params large, would it be better, if yes are these the best in this class? | 2025-01-30T04:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/1idedqn/what_is_the_best_around_1215b_param_models_for/ | LibraryComplex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idedqn | false | null | t3_1idedqn | /r/LocalLLaMA/comments/1idedqn/what_is_the_best_around_1215b_param_models_for/ | false | false | self | 3 | null |
Latitude is so slow on self hosting on M3 | 1 | [removed] | 2025-01-30T04:46:02 | https://www.reddit.com/r/LocalLLaMA/comments/1idergk/latitude_is_so_slow_on_self_hosting_on_m3/ | addimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idergk | false | null | t3_1idergk | /r/LocalLLaMA/comments/1idergk/latitude_is_so_slow_on_self_hosting_on_m3/ | false | false | self | 1 | null |
Looks like there is finally more info on Arx-0.3 | 3 | https://x.com/appliedgeneral/status/1884738566645018932?s=46 | 2025-01-30T04:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ides7x/looks_like_there_is_finally_more_info_on_arx03/ | AccountantDry2483 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ides7x | false | null | t3_1ides7x | /r/LocalLLaMA/comments/1ides7x/looks_like_there_is_finally_more_info_on_arx03/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'qYUt47r28LYZrUAQwtLdKfDBw1fMLM90cVeENOv8fJM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/1BJsv_tpvQ3X_0BasdVPtytz-BUqCJ3SYWENcBWpuqo.jpg?width=108&crop=smart&auto=webp&s=a36e2ed0e52838c4e724590b523b521ca224c650', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/1BJsv_tpvQ3X_0BasdVPtytz-BUqCJ3SYWENcBWpuqo.jpg?auto=webp&s=58e5c1b851193cbaa9385b9a99c4e94a9c9d3976', 'width': 200}, 'variants': {}}]} |
I asked DeepSeek if our data is shared with the Chinese government, and they said, "Yes" | 1 | 2025-01-30T04:51:07 | https://www.youtube.com/watch?v=17LDxEMT4q8 | ThalyaSparkle | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ideupf | false | {'oembed': {'author_name': 'BigSmilesMovies', 'author_url': 'https://www.youtube.com/@BigSmilesMovies', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/17LDxEMT4q8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek Says Chinese Govt Has Access to your Data"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/17LDxEMT4q8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek Says Chinese Govt Has Access to your Data', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ideupf | /r/LocalLLaMA/comments/1ideupf/i_asked_deepseek_if_our_data_is_shared_with_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NhUivaqJt9bRpbeAQQXEMVMmwDttYXsOHXaKUKDCl4Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MC9HKloDrYQ1CEi4Z0fHOgU2MxShB8t1Rha8hQLzeFU.jpg?width=108&crop=smart&auto=webp&s=b7b93dc3864cab134ac4cb2fedd72baf84a54a91', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/MC9HKloDrYQ1CEi4Z0fHOgU2MxShB8t1Rha8hQLzeFU.jpg?width=216&crop=smart&auto=webp&s=f243224f3d62ad64943949fe8625699066c6f012', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/MC9HKloDrYQ1CEi4Z0fHOgU2MxShB8t1Rha8hQLzeFU.jpg?width=320&crop=smart&auto=webp&s=5ada10611acae05461a9c9497c4f4f9983c19167', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/MC9HKloDrYQ1CEi4Z0fHOgU2MxShB8t1Rha8hQLzeFU.jpg?auto=webp&s=eda5f072bece32b751634e9eedf806877e3d025f', 'width': 480}, 'variants': {}}]} |
Reach Sam Altman | 1 | [removed] | 2025-01-30T04:57:43 | https://www.reddit.com/r/LocalLLaMA/comments/1idez4a/reach_sam_altman/ | Vegetable-College353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idez4a | false | null | t3_1idez4a | /r/LocalLLaMA/comments/1idez4a/reach_sam_altman/ | false | false | self | 1 | null |
Microsoft yesterday: DeepSeek illegally stole OpenAI's intellectual property.😤 Microsoft today: DeepSeek is now available on our AI platforms and welcome everyone trying it.🤩 | 1 | 2025-01-30T05:00:41 | bruhlmaocmonbro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idf19s | false | null | t3_1idf19s | /r/LocalLLaMA/comments/1idf19s/microsoft_yesterday_deepseek_illegally_stole/ | false | false | 1 | {'enabled': True, 'images': [{'id': '2-WiKJMFdXghVHLr9WzGyOVSFhp688A8LgKbcmLiwEg', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/wd6gf2kdc2ge1.jpeg?width=108&crop=smart&auto=webp&s=730b26da70fef44576d2f0b5bef09ed3613f6619', 'width': 108}, {'height': 252, 'url': 'https://preview.redd.it/wd6gf2kdc2ge1.jpeg?width=216&crop=smart&auto=webp&s=5ef0d3a1121fa854ee44eedc8e7f57b475137048', 'width': 216}, {'height': 373, 'url': 'https://preview.redd.it/wd6gf2kdc2ge1.jpeg?width=320&crop=smart&auto=webp&s=4edafbc002ba11837e718b5c1a2f128881b6df77', 'width': 320}, {'height': 746, 'url': 'https://preview.redd.it/wd6gf2kdc2ge1.jpeg?width=640&crop=smart&auto=webp&s=858604ad99803d38ca5ab33247ac6132e1c6310d', 'width': 640}, {'height': 1120, 'url': 'https://preview.redd.it/wd6gf2kdc2ge1.jpeg?width=960&crop=smart&auto=webp&s=3d6ee5554e592df9e8ce8bd872a4a180cbf0d9e5', 'width': 960}, {'height': 1260, 'url': 'https://preview.redd.it/wd6gf2kdc2ge1.jpeg?width=1080&crop=smart&auto=webp&s=85377551ea8386d1844fe350cc221e66e5603d2d', 'width': 1080}], 'source': {'height': 1365, 'url': 'https://preview.redd.it/wd6gf2kdc2ge1.jpeg?auto=webp&s=4241102c06b0ca365152330381498633585f3c67', 'width': 1170}, 'variants': {}}]} |
Roo Code 3.3.6 Released - Meet the Powerful "New Task" Tool | 1 | [removed] | 2025-01-30T05:02:21 | https://www.reddit.com/r/LocalLLaMA/comments/1idf2nf/roo_code_336_released_meet_the_powerful_new_task/ | hannesrudolph | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idf2nf | false | null | t3_1idf2nf | /r/LocalLLaMA/comments/1idf2nf/roo_code_336_released_meet_the_powerful_new_task/ | false | false | self | 1 | null |
Which version of R1 can I run on 2xA100 computer? | 1 | [removed] | 2025-01-30T05:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1idf4nx/which_version_of_r1_can_i_run_on_2xa100_computer/ | sobolanul11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idf4nx | false | null | t3_1idf4nx | /r/LocalLLaMA/comments/1idf4nx/which_version_of_r1_can_i_run_on_2xa100_computer/ | false | false | self | 1 | null |
Is the Qwen-2.5 Max on chat and API different? | 1 | [removed] | 2025-01-30T05:14:42 | https://www.reddit.com/r/LocalLLaMA/comments/1idfaxn/is_the_qwen25_max_on_chat_and_api_different/ | lazylurker999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idfaxn | false | null | t3_1idfaxn | /r/LocalLLaMA/comments/1idfaxn/is_the_qwen25_max_on_chat_and_api_different/ | false | false | self | 1 | null |
New to LocalLLAma and Need help | 1 | [removed] | 2025-01-30T05:18:07 | https://www.reddit.com/r/LocalLLaMA/comments/1idfd7h/new_to_localllama_and_need_help/ | AsrielPlay52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idfd7h | false | null | t3_1idfd7h | /r/LocalLLaMA/comments/1idfd7h/new_to_localllama_and_need_help/ | false | false | self | 1 | null |
R1 Reasoning Effort for the Open-Webui | 5 | https://reddit.com/link/1idflkk/video/q1vfq9n1h2ge1/player
| 2025-01-30T05:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1idflkk/r1_reasoning_effort_for_the_openwebui/ | onil_gova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idflkk | false | null | t3_1idflkk | /r/LocalLLaMA/comments/1idflkk/r1_reasoning_effort_for_the_openwebui/ | false | false | self | 5 | null |
Any reviews or thoughts on msi 5090 ventus. | 1 | [removed] | 2025-01-30T05:33:17 | https://www.reddit.com/r/LocalLLaMA/comments/1idfn8w/any_reviews_or_thoughts_on_msi_5090_ventus/ | Dry-Bunch-7448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idfn8w | false | null | t3_1idfn8w | /r/LocalLLaMA/comments/1idfn8w/any_reviews_or_thoughts_on_msi_5090_ventus/ | false | false | self | 1 | null |
Autopen: a text editor for exploring language model behaviour | 1 | [removed] | 2025-01-30T05:35:40 | https://www.reddit.com/r/LocalLLaMA/comments/1idfovf/autopen_a_text_editor_for_exploring_language/ | disposableoranges | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idfovf | false | null | t3_1idfovf | /r/LocalLLaMA/comments/1idfovf/autopen_a_text_editor_for_exploring_language/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LybIDHfltM2oNpSwYOw3Csfh7ZCRUUvtC6WpBtV6YtY', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/IjNNmv9dhtQ685O6AbgLltThHYVrbg6SO55yhI_iOcI.png?width=108&crop=smart&auto=webp&s=f477d0a97c379f1c0efa47b5e93920e221f59897', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/IjNNmv9dhtQ685O6AbgLltThHYVrbg6SO55yhI_iOcI.png?width=216&crop=smart&auto=webp&s=bc8ddd16444d2e27a04e6b4d334822ebc8d92541', 'width': 216}, {'height': 221, 'url': 'https://external-preview.redd.it/IjNNmv9dhtQ685O6AbgLltThHYVrbg6SO55yhI_iOcI.png?width=320&crop=smart&auto=webp&s=3130143fff8060777d9f9e8dbfbb5e84c54b3f4e', 'width': 320}, {'height': 443, 'url': 'https://external-preview.redd.it/IjNNmv9dhtQ685O6AbgLltThHYVrbg6SO55yhI_iOcI.png?width=640&crop=smart&auto=webp&s=ea1003303a45faa97620ac8668edc97c47552eea', 'width': 640}, {'height': 664, 'url': 'https://external-preview.redd.it/IjNNmv9dhtQ685O6AbgLltThHYVrbg6SO55yhI_iOcI.png?width=960&crop=smart&auto=webp&s=2e5701642a36e3c61cb0973aee81f00efc144a7f', 'width': 960}, {'height': 747, 'url': 'https://external-preview.redd.it/IjNNmv9dhtQ685O6AbgLltThHYVrbg6SO55yhI_iOcI.png?width=1080&crop=smart&auto=webp&s=a24baa7595b33c030b02b22dd6727de2c2fd2518', 'width': 1080}], 'source': {'height': 1046, 'url': 'https://external-preview.redd.it/IjNNmv9dhtQ685O6AbgLltThHYVrbg6SO55yhI_iOcI.png?auto=webp&s=bc82017e39cc92d8aa9f1ab3517e2658ed13b28a', 'width': 1511}, 'variants': {}}]} |
Which version of R1 can I run on 2xA100 computer? | 0 |
I would need a bit of help setting up R1 on my 2xA100 machine. I have 160 GB of VRAM and 256 GB of system memory.
What version of R1 can I run?
Is there a tutorial on how to set it up (running Ubuntu on the machine) and access it via API?
Should I use Oobabooga? | 2025-01-30T05:44:05 | https://www.reddit.com/r/LocalLLaMA/comments/1idfuip/which_version_of_r1_can_i_run_on_2xa100_computer/ | Significant_Bike9759 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idfuip | false | null | t3_1idfuip | /r/LocalLLaMA/comments/1idfuip/which_version_of_r1_can_i_run_on_2xa100_computer/ | false | false | self | 0 | null |
I'm promoting running local LLM in my country | 0 | So I can tank Nvidia and AMD stock. 🤣🤣 | 2025-01-30T06:01:18 | https://www.reddit.com/r/LocalLLaMA/comments/1idg5wh/im_promoting_running_local_llm_in_my_country/ | Reasonable-Climate66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idg5wh | false | null | t3_1idg5wh | /r/LocalLLaMA/comments/1idg5wh/im_promoting_running_local_llm_in_my_country/ | false | false | self | 0 | null |
5 Open Source Small Language Models (SLMs) and Their Use Cases | 1 | [removed] | 2025-01-30T06:03:31 | https://www.reddit.com/r/LocalLLaMA/comments/1idg7ai/5_open_source_small_language_models_slms_and/ | 0xhbam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idg7ai | false | null | t3_1idg7ai | /r/LocalLLaMA/comments/1idg7ai/5_open_source_small_language_models_slms_and/ | false | false | self | 1 | null |
OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us | 1 | [removed] | 2025-01-30T06:15:57 | https://www.404media.co/openai-furious-deepseek-might-have-stolen-all-the-data-openai-stole-from-us/?fbclid=PAY2xjawIH-B5leHRuA2FlbQIxMQABpmVKvuKJWUrbRQKplSX6cz10QTwr7dAU2qKAs02SC0Bj0nvMIobr_Eysdw_aem_1dUK39P8sjjFzkM95HUXrw | cern_unnosi | 404media.co | 1970-01-01T00:00:00 | 0 | {} | 1idgffe | false | null | t3_1idgffe | /r/LocalLLaMA/comments/1idgffe/openai_furious_deepseek_might_have_stolen_all_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'qP5WzHXAeHERL_94FJOWikcgk-lAKrN1hxioVDY-R8U', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ad5A3Aby7q78-LvckYlzZrTjSyKk5Hz7IV2YnncN75A.jpg?width=108&crop=smart&auto=webp&s=966489df927e644000c7544f915f811cc0356b9c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ad5A3Aby7q78-LvckYlzZrTjSyKk5Hz7IV2YnncN75A.jpg?width=216&crop=smart&auto=webp&s=e59440d164a89b38700b63cbed2b206cee90d11e', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/ad5A3Aby7q78-LvckYlzZrTjSyKk5Hz7IV2YnncN75A.jpg?width=320&crop=smart&auto=webp&s=ee33d9382ed23dd993ec58b55ec97f327e759475', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/ad5A3Aby7q78-LvckYlzZrTjSyKk5Hz7IV2YnncN75A.jpg?width=640&crop=smart&auto=webp&s=cb23d117939c035c9fb3ea7d4358370143b8fe1b', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/ad5A3Aby7q78-LvckYlzZrTjSyKk5Hz7IV2YnncN75A.jpg?width=960&crop=smart&auto=webp&s=06ffd3ec235740e378537c8fd81f1b75bba8e0c9', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/ad5A3Aby7q78-LvckYlzZrTjSyKk5Hz7IV2YnncN75A.jpg?width=1080&crop=smart&auto=webp&s=e8e35b76d6ee4fa48656363fe1c5185487a89e44', 'width': 1080}], 'source': {'height': 673, 'url': 'https://external-preview.redd.it/ad5A3Aby7q78-LvckYlzZrTjSyKk5Hz7IV2YnncN75A.jpg?auto=webp&s=8fce363e24195ee9cfe96e859d18be73008c82fb', 'width': 1200}, 'variants': {}}]} |
High-end Desktop GPU (4090/5090) vs Server Setup? | 1 | [removed] | 2025-01-30T06:17:03 | https://www.reddit.com/r/LocalLLaMA/comments/1idgg50/highend_desktop_gpu_40905090_vs_server_setup/ | ImportantSpeed7224 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idgg50 | false | null | t3_1idgg50 | /r/LocalLLaMA/comments/1idgg50/highend_desktop_gpu_40905090_vs_server_setup/ | false | false | self | 1 | null |
So this is what it comes down to? | 0 | 2025-01-30T06:21:00 | Glanble | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idgiq9 | false | null | t3_1idgiq9 | /r/LocalLLaMA/comments/1idgiq9/so_this_is_what_it_comes_down_to/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'UbjeCQv3DsQaJeewrVxWk8c2aMX6Hb9uwt35b2ZiQgo', 'resolutions': [{'height': 125, 'url': 'https://preview.redd.it/fhhynp2lq2ge1.jpeg?width=108&crop=smart&auto=webp&s=d8263981c357348e34487c7dbe2c3250502defa4', 'width': 108}, {'height': 251, 'url': 'https://preview.redd.it/fhhynp2lq2ge1.jpeg?width=216&crop=smart&auto=webp&s=6d8a1f978b011a92e1faed2d417d8c1ab80a6e7e', 'width': 216}, {'height': 371, 'url': 'https://preview.redd.it/fhhynp2lq2ge1.jpeg?width=320&crop=smart&auto=webp&s=b7f229b4a8f1ac8ab3f92818aece303ce9360c09', 'width': 320}, {'height': 743, 'url': 'https://preview.redd.it/fhhynp2lq2ge1.jpeg?width=640&crop=smart&auto=webp&s=71bd586d6d1bc7af05423faf15de35874e6a3226', 'width': 640}, {'height': 1115, 'url': 'https://preview.redd.it/fhhynp2lq2ge1.jpeg?width=960&crop=smart&auto=webp&s=fa4ee1c65899bf0f619b917058c449e5f3f991b3', 'width': 960}, {'height': 1255, 'url': 'https://preview.redd.it/fhhynp2lq2ge1.jpeg?width=1080&crop=smart&auto=webp&s=ab956afd340c7b1b9867dba88ab609310e346d0f', 'width': 1080}], 'source': {'height': 1312, 'url': 'https://preview.redd.it/fhhynp2lq2ge1.jpeg?auto=webp&s=643cf70df297ce63829a7405de93ad9f6ab0bfb9', 'width': 1129}, 'variants': {}}]} |
Beginner Friendly Python tutorials for Agentic AI using different frameworks. | 1 | [removed] | 2025-01-30T06:31:37 | https://www.reddit.com/r/LocalLLaMA/comments/1idgp5z/beginner_friendly_python_tutorials_for_agentic_ai/ | IntelligentCreme3407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idgp5z | false | null | t3_1idgp5z | /r/LocalLLaMA/comments/1idgp5z/beginner_friendly_python_tutorials_for_agentic_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'T3KT0gWpxh02N5Aa-eIX05scsWapFNNv1I7FNkx9W4A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GlZa8ExU_BZAQXxTB7TWIRLETpekE-QLrK3J9JdCGfg.jpg?width=108&crop=smart&auto=webp&s=937b93d1b3956e3d9b7b107c30d9b56fea5d6b30', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GlZa8ExU_BZAQXxTB7TWIRLETpekE-QLrK3J9JdCGfg.jpg?width=216&crop=smart&auto=webp&s=3c8589c15525041bdc09541d48ccb43cda580dde', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GlZa8ExU_BZAQXxTB7TWIRLETpekE-QLrK3J9JdCGfg.jpg?width=320&crop=smart&auto=webp&s=e18de4af73fd22aa84ca3d78142fb194ce9cb71f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GlZa8ExU_BZAQXxTB7TWIRLETpekE-QLrK3J9JdCGfg.jpg?width=640&crop=smart&auto=webp&s=a5ee0daaa19a32f4b08321e21c7e1c35dfc320dc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GlZa8ExU_BZAQXxTB7TWIRLETpekE-QLrK3J9JdCGfg.jpg?width=960&crop=smart&auto=webp&s=c04f5d174649a3750f18297546102bbda7cee053', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GlZa8ExU_BZAQXxTB7TWIRLETpekE-QLrK3J9JdCGfg.jpg?width=1080&crop=smart&auto=webp&s=31e2e235a6e2444a2a392ae1711cb3c775c9431c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GlZa8ExU_BZAQXxTB7TWIRLETpekE-QLrK3J9JdCGfg.jpg?auto=webp&s=7b8dca1ba92e00cb6d01031ce6baf99af4ee0f59', 'width': 1200}, 'variants': {}}]} |
DeepSeek Says chinese govt have access to your data | 1 | [removed] | 2025-01-30T06:33:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1idgq8y | false | {'oembed': {'author_name': 'BigSmilesMovies', 'author_url': 'https://www.youtube.com/@BigSmilesMovies', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/17LDxEMT4q8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek Says Chinese Govt Has Access to your Data"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/17LDxEMT4q8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek Says Chinese Govt Has Access to your Data', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1idgq8y | /r/LocalLLaMA/comments/1idgq8y/deepseek_says_chinese_govt_have_access_to_your/ | false | false | default | 1 | null |
What are you *actually* using R1 for? | 121 | Honest question. I see the hype around R1, and I’ve even downloaded and played with a couple distills myself. It’s definitely an achievement, if not for the models, then for the paper and detailed publication of the training methodology. No argument there.
However, I'm having difficulty understanding the mad rush to download and use these models. They are reasoning models, and as such, all they want to do is output long chains of thought full of `<think>` tokens to solve a problem, even if the problem is simple, e.g. 2+2. As such, my assumption is they aren't meant to be used for quick daily interactions like GPT-4o and company, but rather only to solve complex problems.
So I ask, what are you actually doing with R1 (other than toy “how many R’s in strawberry” reasoning problems) that you were previously doing with other models? What value have they added to your daily workload? I’m honestly curious, as maybe I have a misconception about their utility. | 2025-01-30T06:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1idgrh4/what_are_you_actually_using_r1_for/ | PataFunction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idgrh4 | false | null | t3_1idgrh4 | /r/LocalLLaMA/comments/1idgrh4/what_are_you_actually_using_r1_for/ | false | false | self | 121 | null |
Handling split tables in PDFs | 1 | [removed] | 2025-01-30T06:50:40 | https://www.reddit.com/r/LocalLLaMA/comments/1idh0lb/handling_split_tables_in_pdfs/ | MacaronExcellent4772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idh0lb | false | null | t3_1idh0lb | /r/LocalLLaMA/comments/1idh0lb/handling_split_tables_in_pdfs/ | false | false | self | 1 | null |
YuE Music Generator GGUF - Will try soon! | 33 | Hey guys!
I just found a quantized version of YuE on Huggingface: https://huggingface.co/tensorblock/YuE-s1-7B-anneal-en-cot-GGUF
Will try soon and report back if I can make a full song on 32GB VRAM 😍
Anyone tested it yet? | 2025-01-30T07:01:04 | https://www.reddit.com/r/LocalLLaMA/comments/1idh6su/yue_music_generator_gguf_will_try_soon/ | quantier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idh6su | false | null | t3_1idh6su | /r/LocalLLaMA/comments/1idh6su/yue_music_generator_gguf_will_try_soon/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'TbBISFRQiekt1K9gWpwrIXeP-_TbdshkmMiu66Y8OBQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nrZ1KusXbREGbgO5POZUp6FxQswG9Zr789bgBePc5tM.jpg?width=108&crop=smart&auto=webp&s=2226cea9f54536b713a14679d9a8d13616e36733', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nrZ1KusXbREGbgO5POZUp6FxQswG9Zr789bgBePc5tM.jpg?width=216&crop=smart&auto=webp&s=2581fef4e4f9c40338d2d4251f2b9f9826dbcce6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nrZ1KusXbREGbgO5POZUp6FxQswG9Zr789bgBePc5tM.jpg?width=320&crop=smart&auto=webp&s=64640055f421d5d1f51e7ec504cf009309803c50', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nrZ1KusXbREGbgO5POZUp6FxQswG9Zr789bgBePc5tM.jpg?width=640&crop=smart&auto=webp&s=3d6ad0212e8f6f936ca425a4d936c6bbb7aa32c8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nrZ1KusXbREGbgO5POZUp6FxQswG9Zr789bgBePc5tM.jpg?width=960&crop=smart&auto=webp&s=89c258a00926749a8796ad292a0b59927c230849', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nrZ1KusXbREGbgO5POZUp6FxQswG9Zr789bgBePc5tM.jpg?width=1080&crop=smart&auto=webp&s=866873f78930d4956c4af4179c2f95326be97c59', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nrZ1KusXbREGbgO5POZUp6FxQswG9Zr789bgBePc5tM.jpg?auto=webp&s=a26b3c6b4d545f8ab1fe096ecf02d3fe0fbc6581', 'width': 1200}, 'variants': {}}]} |
Handling split tables in PDFs | 1 | [removed] | 2025-01-30T07:02:04 | https://www.reddit.com/r/LocalLLaMA/comments/1idh7js/handling_split_tables_in_pdfs/ | MacaronExcellent4772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idh7js | false | null | t3_1idh7js | /r/LocalLLaMA/comments/1idh7js/handling_split_tables_in_pdfs/ | false | false | self | 1 | null |
Handling split tables in PDFs | 1 | I'm currently working on a project where I'm trying to build a RAG agent on top of a PDF that contains a budget table. The problem is that the table isn't whole: it's split between two pages. For example, the first two rows are on page 2 and the rest continues on page 3.
I've used LlamaParse to handle the PDF parsing since it performed better than PyPDF. I've tried to build a QA pipeline on the parsed chunks using Llama 3, but it's not able to capture the table as a whole.
Has anyone encountered this issue? I'm actively looking into this and I'd appreciate it if you can add your suggestions on how to get around this. TIA. | 2025-01-30T07:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/1idh8a9/handling_split_tables_in_pdfs/ | Admirable-Session648 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idh8a9 | false | null | t3_1idh8a9 | /r/LocalLLaMA/comments/1idh8a9/handling_split_tables_in_pdfs/ | false | false | self | 1 | null |
Combining GPUs vs 1 expensive GPU? | 8 | Where I live, I can find the 3060 12GB for $500, but the cheapest 3090 24GB I can find is $3000 (all in my local currency).
This makes me think: I've seen rig videos where people put in 4x3090, so does that mean I could buy 6x3060 for the price of 1x3090, and that it would perform significantly better on LLM/SD because of the much larger total VRAM? Or is there something the 3090 has that multiple 3060s still can't match?
Also, when I browse the web, some topics say VRAM cannot be combined and any model using more than 12GB will just overflow, while others say VRAM can be combined. I'm confused about which is actually valid and hope to get some clarification.
I am very new to the space so would appreciate any advice/comment. | 2025-01-30T07:08:24 | https://www.reddit.com/r/LocalLLaMA/comments/1idhbcv/combining_gpus_vs_1_expensive_gpu/ | jimmyspinsggez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idhbcv | false | null | t3_1idhbcv | /r/LocalLLaMA/comments/1idhbcv/combining_gpus_vs_1_expensive_gpu/ | false | false | self | 8 | null |
LLM Hardware Calculator | 1 | Can anyone here link me to an LLM hardware sizing calculator? Something that takes in parameters like:
1) Model
2) Quantization
3) T/s
4) Context
5) GPU or CPU inferencing option
and then suggests hardware requirements. | 2025-01-30T07:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1idhnb2/llm_hardware_calculator/ | heybigeyes123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idhnb2 | false | null | t3_1idhnb2 | /r/LocalLLaMA/comments/1idhnb2/llm_hardware_calculator/ | false | false | self | 1 | null |
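Not aware of a polished one, but the core estimate such a calculator performs is easy to sketch. A rough model, where the layer counts and shapes below are illustrative assumptions (Llama-3-8B-like), not looked-up values:

```python
# Rough sizing model: VRAM = weights + KV cache. Single-stream decode speed
# is approximately memory bandwidth / bytes of weights touched per token.

def vram_gib(params_b, bits_per_weight, n_layers, n_kv_heads, head_dim,
             context, kv_bits=16):
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    kv_bytes = 2 * n_layers * n_kv_heads * head_dim * context * kv_bits / 8
    return (weight_bytes + kv_bytes) / 1024**3

# Example: 8B model at ~4.5 bits/weight (Q4_K_M-ish), 8192-token context,
# 32 layers, 8 KV heads, head_dim 128 -> about 5.2 GiB.
print(f"{vram_gib(8, 4.5, 32, 8, 128, 8192):.1f} GiB")

# First-order tokens/s on a GPU with ~1 TB/s memory bandwidth: ~222 tok/s.
print(f"~{1e12 / (8e9 * 4.5 / 8):.0f} tok/s upper bound")
```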
Lightweight Semantic Search for Docs and Structured Data | 2 | Just published a functional preview of a portable semantic search tool that you can use with your local LMs:
[https://github.com/Independent-AI-Labs/local-super-agents/tree/main/hype](https://github.com/Independent-AI-Labs/local-super-agents/tree/main/hype)
Although still quite basic, it's optimized for consumer hardware and has a built-in benchmark for you to flex your x3Ds and Gen5 SSDs with!
Multiple-term fuzzy matching clocks in at about 5M rows/second on high-end desktop systems, but I'm sure we can top that with some planned improvements.
Love to hear what you guys use for on-device document RAG and other similar use cases. | 2025-01-30T07:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1idhtd1/lightweight_semantic_search_for_docs_and/ | Ragecommie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idhtd1 | false | null | t3_1idhtd1 | /r/LocalLLaMA/comments/1idhtd1/lightweight_semantic_search_for_docs_and/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '-6Wbd56rnbZhCMSZaQYdBdMa4SkBbTpi2a-6UuqZ8DI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ndLI5HlH2OBtaQQywJWrEf0zQ3yvFCj2vaoy87GZytM.jpg?width=108&crop=smart&auto=webp&s=a2865380df77aebd6d33f12c8a50da4c98912c0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ndLI5HlH2OBtaQQywJWrEf0zQ3yvFCj2vaoy87GZytM.jpg?width=216&crop=smart&auto=webp&s=0f54e9b7c9c8aa2492d0f9281f4df21a6e5a62b1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ndLI5HlH2OBtaQQywJWrEf0zQ3yvFCj2vaoy87GZytM.jpg?width=320&crop=smart&auto=webp&s=05922d1a88c3161f71f9b48152a55419c7d2e6ba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ndLI5HlH2OBtaQQywJWrEf0zQ3yvFCj2vaoy87GZytM.jpg?width=640&crop=smart&auto=webp&s=0a82a0e3bc0ebeb80330e5b933cc902f59ef100b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ndLI5HlH2OBtaQQywJWrEf0zQ3yvFCj2vaoy87GZytM.jpg?width=960&crop=smart&auto=webp&s=e62a1d8e072f70d182f4f888b22b396e0d20738f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ndLI5HlH2OBtaQQywJWrEf0zQ3yvFCj2vaoy87GZytM.jpg?width=1080&crop=smart&auto=webp&s=2dcd0fdaf3d63847f2a690c796cffc2ba7cba772', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ndLI5HlH2OBtaQQywJWrEf0zQ3yvFCj2vaoy87GZytM.jpg?auto=webp&s=d3d3d706a6e71183d6968aa75654e5d5be8dbecd', 'width': 1200}, 'variants': {}}]} |
I did a very short perplexity test with DeepSeek R1 with different numbers of experts and also some of the distilled models | 3 | First: this test only ran 8 blocks (out of ~560), so it should be taken with a massive grain of salt. In my experience running perplexity on models, you usually don't end up somewhere completely different from the early trend, but it's definitely not impossible. You also shouldn't compare the perplexity here with that of unrelated models, and perplexity probably isn't a very fair test for chain-of-thought models since they don't get to do any thinking.
Experts | PPL
-|-
8 | 3.4155, 4.2311, 3.0817, 2.8601, 2.6933, 2.5792, 2.5123, 2.5239
16 | 3.5350, 4.3594, 3.0307, 2.8619, 2.7227, 2.6664, 2.6288, 2.6568
6 | 3.4227, 4.2400, 3.1610, 2.9933, 2.8307, 2.7110, 2.6253, 2.6488
4 | 3.5790, 4.5984, 3.5135, 3.4490, 3.2952, 3.2563, 3.1883, 3.2978
VMv2 | 4.6217, 6.3318, 4.8642, 3.6984, 3.0867, 2.8033, 2.6044, 2.5547
3 | 3.9209, 4.9318, 4.0944, 4.2450, 4.2071, 4.3095, 4.3150, 4.6082
LR170B | 4.1261, 4.9672, 5.0192, 5.1777, 5.3557, 5.6300, 5.8582, 6.2350
QR132B | 5.9082, 7.5575, 6.0677, 5.0672, 4.8776, 4.8903, 4.7712, 4.7167
2 | 6.2387, 7.7455
Legend:
* Numeric rows (expert counts) = DeepSeek-R1-UD-IQ1_M run with that many experts - https://unsloth.ai/blog/deepseekr1-dynamic
* `LR170B` = DeepSeek-R1-Distill-Llama-70B-Q5_K_M
* `QR132B` = DeepSeek-R1-Distill-Qwen-32B-Q6_K
* `VMv2` = Virtuoso-Medium-v2-Q6_K (32B model) - https://huggingface.co/arcee-ai/Virtuoso-Medium-v2-GGUF
Table sorted by average PPL, lower PPL is better. Perplexity test run with block size 512. You can override the number of experts for the llama.cpp command-line apps (`llama-cli`, `llama-perplexity`, etc.) using `--override-kv deepseek2.expert_used_count=int:4` or whatever. This is only meaningful on actual MoE models, not the distills.
Again, this really isn't a scientific test, at most it should be considered a place to start discussion. To the extent that we can actually trust these results, the full DS model even with very aggressive quantization seems to beat the normal distills until you limit it to 2 experts. The Virtuoso Medium V2 distill looks pretty strong, ending up between full DS R1 with 3 and 4 experts.
I tried with 10 and 12 experts and generating perplexity failed with NaNs. | 2025-01-30T08:00:48 | https://www.reddit.com/r/LocalLLaMA/comments/1idi5cr/i_did_a_very_short_perplexity_test_with_deepseek/ | alwaysbeblepping | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idi5cr | false | null | t3_1idi5cr | /r/LocalLLaMA/comments/1idi5cr/i_did_a_very_short_perplexity_test_with_deepseek/ | false | false | self | 3 | null |
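For anyone double-checking the ordering, the sort key is just the mean of each row; this reproduces the table order from the raw numbers:

```python
# Mean perplexity per configuration (values copied from the table above);
# lower is better, and sorting by the mean reproduces the table order.
runs = {
    "8 experts":  [3.4155, 4.2311, 3.0817, 2.8601, 2.6933, 2.5792, 2.5123, 2.5239],
    "16 experts": [3.5350, 4.3594, 3.0307, 2.8619, 2.7227, 2.6664, 2.6288, 2.6568],
    "6 experts":  [3.4227, 4.2400, 3.1610, 2.9933, 2.8307, 2.7110, 2.6253, 2.6488],
    "4 experts":  [3.5790, 4.5984, 3.5135, 3.4490, 3.2952, 3.2563, 3.1883, 3.2978],
    "VMv2":       [4.6217, 6.3318, 4.8642, 3.6984, 3.0867, 2.8033, 2.6044, 2.5547],
    "3 experts":  [3.9209, 4.9318, 4.0944, 4.2450, 4.2071, 4.3095, 4.3150, 4.6082],
    "LR170B":     [4.1261, 4.9672, 5.0192, 5.1777, 5.3557, 5.6300, 5.8582, 6.2350],
    "QR132B":     [5.9082, 7.5575, 6.0677, 5.0672, 4.8776, 4.8903, 4.7712, 4.7167],
    "2 experts":  [6.2387, 7.7455],  # only two blocks reported
}
for name, ppl in sorted(runs.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    print(f"{name:<10} {sum(ppl) / len(ppl):.4f}")
```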
Has Anyone Successfully Fine-Tuned Whisper for a Local Language? | 6 | Hi everyone,
I am fairly new to AI and coding, and I’m curious about fine-tuning OpenAI’s Whisper model to improve its accuracy for a local language.
Has anyone here successfully fine-tuned Whisper? If so, how did you do it? What tools, frameworks, or techniques did you use? Would transfer learning or some other method work best?
I tried doing it myself on Colab but couldn't seem to make it work. To begin with, I just used Mozilla's Common Voice to see if it was even possible. Maybe it's my own limitation, but I wanted to ask whether anyone has done it and could guide me a bit :)
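For reference, the preprocessing half of the standard recipe I was trying to follow looks roughly like this (the Hugging Face "Fine-Tune Whisper" blog post covers it end to end; the dataset folder and target language below are placeholders, not what I actually used):

```python
# Sketch of the usual transformers/datasets setup for Whisper fine-tuning.
from datasets import load_dataset, Audio
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="yo", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Local folder of clips plus a metadata.csv mapping file_name -> transcription.
ds = load_dataset("audiofolder", data_dir="my_clips", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    batch["input_features"] = processor(
        batch["audio"]["array"], sampling_rate=16_000).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)
# From here: Seq2SeqTrainer with a padding data collator, as in the blog post.
```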
I’d really appreciate any insights, experiences, or resources that could help!
Thanks in advance! | 2025-01-30T08:01:05 | https://www.reddit.com/r/LocalLLaMA/comments/1idi5j1/has_anyone_successfully_finetuned_whisper_for_a/ | jumnopol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idi5j1 | false | null | t3_1idi5j1 | /r/LocalLLaMA/comments/1idi5j1/has_anyone_successfully_finetuned_whisper_for_a/ | false | false | self | 6 | null |
Can an LLM be customized to act as a chatbot? | 0 | Greetings,
Is it possible to make an LLM act as a guide to my website? We have plenty of sections and thousands of pre-written and customizable documents (single-page documents, nothing complicated).
Could I feed the LLM all of the sections (alongside their purpose) and all of the contracts/documents so that it can recommend one on the fly, rather than making the client search through the entire database?
Is there a service that suits this use case? Could I just tell it "You are \[X\] entity's chatbot. Your purpose is to do \[X\]. When you enumerate documents, wrap them with <doc> </doc> so my front-end can detect and present them", and upload my entire knowledge base/documents somewhere (or give it access to my database)?
What service and model size would satisfy these requirements? Would hosting it myself even be feasible? | 2025-01-30T08:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/1idicks/can_an_llm_be_customized_to_act_as_a_chatbot/ | Nervous-Positive-431 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idicks | false | null | t3_1idicks | /r/LocalLLaMA/comments/1idicks/can_an_llm_be_customized_to_act_as_a_chatbot/ | false | false | self | 0 | null |
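The prompt-assembly half of this is simple with any OpenAI-compatible or local model. A minimal sketch, where the retrieval of `candidate_docs` (e.g., embedding search over the document database) is omitted and all names are placeholders:

```python
# Hypothetical prompt assembly for a site-guide chatbot.
SYSTEM = (
    "You are Acme's website assistant. Help users find the right section or "
    "document. When you recommend documents, wrap each one in <doc></doc> "
    "tags so the front-end can detect and render them."
)

def build_messages(user_query, candidate_docs):
    context = "\n".join(
        f"[{d['id']}] {d['title']}: {d['summary']}" for d in candidate_docs)
    return [
        {"role": "system", "content": f"{SYSTEM}\n\nCandidate documents:\n{context}"},
        {"role": "user", "content": user_query},
    ]
```

Keeping the document list in the system prompt (rather than baking it into the model) is what lets the recommendations track the database as it changes.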
Handling split tables in PDFs | 6 | I'm currently working on a project where I'm trying to build a RAG agent on top of a PDF that contains a budget table. The problem is that the table isn't whole: it's split between two pages. For example, the first two rows are on page 2 and the rest continues on page 3.
I've used LlamaParse to handle the PDF parsing since it performed better than PyPDF. I've tried to build a QA pipeline on the parsed chunks using Llama 3, but it's not able to capture the table as a whole.
Has anyone encountered this issue? I'm actively looking into this and I'd appreciate it if you can add your suggestions on how to get around this. TIA. | 2025-01-30T08:19:14 | https://www.reddit.com/r/LocalLLaMA/comments/1idiffn/handling_split_tables_in_pdfs/ | MacaronExcellent4772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idiffn | false | null | t3_1idiffn | /r/LocalLLaMA/comments/1idiffn/handling_split_tables_in_pdfs/ | false | false | self | 6 | null |
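One pragmatic workaround, assuming the parser can give you per-page markdown: detect a table that ends one page and continues at the top of the next, and stitch the rows back together before chunking. A minimal sketch:

```python
# Stitch tables that were split across page boundaries.
# Assumes `pages` is a list of per-page markdown strings from the parser.
def stitch_split_tables(pages):
    merged = []
    for page in pages:
        lines = page.strip().splitlines()
        prev = merged[-1].rstrip().splitlines() if merged else []
        if lines and lines[0].lstrip().startswith("|") and prev and prev[-1].startswith("|"):
            # Continuation rows: append them to the previous page's table.
            merged[-1] = merged[-1].rstrip() + "\n" + "\n".join(lines)
        else:
            merged.append(page)
    return merged
```

With the table reassembled into one chunk, the QA step sees all the rows together instead of two fragments.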
Advice for generating test cases on smaller models | 1 | [removed] | 2025-01-30T08:26:11 | https://www.reddit.com/r/LocalLLaMA/comments/1idiii6/advice_for_generating_test_cases_on_smaller_models/ | KarimAbdelQader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idiii6 | false | null | t3_1idiii6 | /r/LocalLLaMA/comments/1idiii6/advice_for_generating_test_cases_on_smaller_models/ | false | false | self | 1 | null |
The real DeepSeek-R1 schematic | 21 | https://i.redd.it/cpj3a0cpe3ge1.gif
Forget flashy headlines, here's the actual DeepSeek-R1 schematic.
It cannot be explained in one news headline or 1 paragraph. We need deep videos and hands on modules to truly understand the DeepSeek-R1 pipeline. | 2025-01-30T08:35:54 | https://www.reddit.com/r/LocalLLaMA/comments/1idimum/the_real_deepseekr1_schematic/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idimum | false | null | t3_1idimum | /r/LocalLLaMA/comments/1idimum/the_real_deepseekr1_schematic/ | false | false | 21 | null |
Deep Seek Trick I recently discovered! | 0 | 2025-01-30T08:40:47 | https://www.reddit.com/r/LocalLLaMA/comments/1idip0c/deep_seek_trick_i_recently_discovered/ | iam_wizard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idip0c | false | null | t3_1idip0c | /r/LocalLLaMA/comments/1idip0c/deep_seek_trick_i_recently_discovered/ | false | false | 0 | null |
Would you fund open research? | 1 | [removed]
[View Poll](https://www.reddit.com/poll/1idiun7) | 2025-01-30T08:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1idiun7/would_you_fund_open_research/ | StevenSamAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idiun7 | false | null | t3_1idiun7 | /r/LocalLLaMA/comments/1idiun7/would_you_fund_open_research/ | false | false | self | 1 | null |
What about 1 TB Sys RAM system with the 7995WX to run LLMs ? | 30 | Today, I tried running the DeepSeek R1 2.58-bit Quant version on a 24 vCPU, 192 GB RAM server without a GPU. I achieved a speed of about 11 tokens/second in the pg512 test. Meanwhile, four A40 GPUs produced around 33 tokens/second.
This got me thinking about a possible setup. For my personal needs, 11 tokens/second seems adequate. However, for a very large LLM such as R1 Q8\_0, which requires 700 GB of VRAM, one would typically need eight A100 GPUs (H100s are even more expensive) and would also have to offload some layers to the CPU. That setup costs around $177,840.
In contrast, a Ryzen Threadripper PRO 7995WX costs around $11,500, and 1 TB of RAM is about $2,400, so the total would be roughly $14,000—about twelve times cheaper. Of course, the inference speed would be significantly slower, and performance might suffer as the context window grows, but it’s still feasible to own a personal system.
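A back-of-the-envelope decode estimate supports the feasibility, assuming R1 activates about 37B of its 671B parameters per token and the 7995WX platform's 8-channel DDR5-5200 delivers roughly 332 GB/s of theoretical bandwidth:

```python
# Bandwidth-bound decode estimate for an MoE model on CPU (assumed figures).
active_params = 37e9   # ~37B active parameters per token for R1
bits_per_weight = 8    # Q8_0
bandwidth = 332e9      # bytes/s, theoretical 8-channel DDR5-5200
bytes_per_token = active_params * bits_per_weight / 8
print(f"~{bandwidth / bytes_per_token:.1f} tokens/s upper bound")  # ~9 tok/s
```

Real-world throughput would land below that ceiling, and prompt processing and long contexts would hurt further, but it's in the usable range.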
I’m new to LLMs, so I’d love to hear any additional thoughts or suggestions. | 2025-01-30T08:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/1idiurl/what_about_1_tb_sys_ram_system_with_the_7995wx_to/ | MatrixEternal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idiurl | false | null | t3_1idiurl | /r/LocalLLaMA/comments/1idiurl/what_about_1_tb_sys_ram_system_with_the_7995wx_to/ | false | false | self | 30 | null |
The Mac M2 Ultra faster than 2xH100s in running Deepseek R1 IQ1_S. | 1 | Over on the llama.cpp github, people have been benchmarking R1 IQ1_s. The M2 Ultra is faster than two H100s for TG. The M2 Ultra gets 13.88t/s. 2xH100 gets 11.53t/s. That's surprising.
As for PP processing, that's all over the place on the 2xH100s. From 0.41 to 137.66. For the M2 Ultra it's 24.05.
https://github.com/ggerganov/llama.cpp/issues/11474 | 2025-01-30T08:54:23 | https://www.reddit.com/r/LocalLLaMA/comments/1idiv5m/the_mac_m2_ultra_faster_than_2xh100s_in_running/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idiv5m | false | null | t3_1idiv5m | /r/LocalLLaMA/comments/1idiv5m/the_mac_m2_ultra_faster_than_2xh100s_in_running/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TYQ9m_pCrgsy8AFhXDiHnOgvgkbnGkWnHwvMukfCyh0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=108&crop=smart&auto=webp&s=7ea56a965b2ec920d90bcdca3f57e6dd2c224742', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=216&crop=smart&auto=webp&s=a1a86127caf3dba4f6b460bb259a771498e644d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=320&crop=smart&auto=webp&s=33f8da2c59c7201663ffe450280f8c6d1785b813', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=640&crop=smart&auto=webp&s=b01c4b4be1f88b62ce270ca22c56f84c2a12cea4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=960&crop=smart&auto=webp&s=f845610125ce9dc0d27e7f13a31e86fdb3c904ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=1080&crop=smart&auto=webp&s=578c4b8245b03dafb85effbe1a3fc2808fe8f374', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?auto=webp&s=80c0ec6992ab70024df04486991b0c66b03f2a6a', 'width': 1200}, 'variants': {}}]} |
The Mac M2 Ultra is faster than 2xH100s in running Deepseek R1 IQ1_S. | 71 | Over on the llama.cpp github, people have been benchmarking R1 IQ1_s. The M2 Ultra is faster than two H100s for TG. The M2 Ultra gets 13.88t/s. 2xH100 gets 11.53t/s. That's surprising.
As for PP processing, that's all over the place on the 2xH100s. From 0.41 to 137.66. For the M2 Ultra it's 24.05.
https://github.com/ggerganov/llama.cpp/issues/11474 | 2025-01-30T08:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/1idivqe/the_mac_m2_ultra_is_faster_than_2xh100s_in/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idivqe | false | null | t3_1idivqe | /r/LocalLLaMA/comments/1idivqe/the_mac_m2_ultra_is_faster_than_2xh100s_in/ | false | false | self | 71 | {'enabled': False, 'images': [{'id': 'TYQ9m_pCrgsy8AFhXDiHnOgvgkbnGkWnHwvMukfCyh0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=108&crop=smart&auto=webp&s=7ea56a965b2ec920d90bcdca3f57e6dd2c224742', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=216&crop=smart&auto=webp&s=a1a86127caf3dba4f6b460bb259a771498e644d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=320&crop=smart&auto=webp&s=33f8da2c59c7201663ffe450280f8c6d1785b813', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=640&crop=smart&auto=webp&s=b01c4b4be1f88b62ce270ca22c56f84c2a12cea4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=960&crop=smart&auto=webp&s=f845610125ce9dc0d27e7f13a31e86fdb3c904ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?width=1080&crop=smart&auto=webp&s=578c4b8245b03dafb85effbe1a3fc2808fe8f374', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n-jN5tf67feziDuR34Ds_KqdYWGR7nMT4qV8D9TY21w.jpg?auto=webp&s=80c0ec6992ab70024df04486991b0c66b03f2a6a', 'width': 1200}, 'variants': {}}]} |
PSA #2: No, R1 isn't telling you to talk about something else. | 1 | [removed] | 2025-01-30T08:58:48 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1idix3n | false | null | t3_1idix3n | /r/LocalLLaMA/comments/1idix3n/psa_2_no_r1_isnt_telling_you_to_talk_about/ | false | false | default | 1 | null |
How to prepare datasets to fine tuning deepseek reasoning model? | 1 | [removed] | 2025-01-30T09:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1idj3ds/how_to_prepare_datasets_to_fine_tuning_deepseek/ | Present-Tourist6487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idj3ds | false | null | t3_1idj3ds | /r/LocalLLaMA/comments/1idj3ds/how_to_prepare_datasets_to_fine_tuning_deepseek/ | false | false | self | 1 | null |
R1 hallucinating | 1 | [removed] | 2025-01-30T09:17:08 | thatoneploomer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idj5g0 | false | null | t3_1idj5g0 | /r/LocalLLaMA/comments/1idj5g0/r1_hallucinating/ | false | false | 1 | {'enabled': True, 'images': [{'id': '_Ya0GhGR1BBADT9yD-wL0Fha3sJ8Qj6zJNctFNoIGSc', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/4z4e8685m3ge1.png?width=108&crop=smart&auto=webp&s=e33244caa7e991b66f996cdc8f45290140eff149', 'width': 108}, {'height': 54, 'url': 'https://preview.redd.it/4z4e8685m3ge1.png?width=216&crop=smart&auto=webp&s=dcae7fd0161f8a672289e16f91dff1a9373bc890', 'width': 216}, {'height': 80, 'url': 'https://preview.redd.it/4z4e8685m3ge1.png?width=320&crop=smart&auto=webp&s=2463f88db77d092a8d325b531d110d17b8ef5a1d', 'width': 320}, {'height': 160, 'url': 'https://preview.redd.it/4z4e8685m3ge1.png?width=640&crop=smart&auto=webp&s=441c5989fd56d785dea946af7ece13a6294cce60', 'width': 640}], 'source': {'height': 206, 'url': 'https://preview.redd.it/4z4e8685m3ge1.png?auto=webp&s=aa3a8e3d8765c5790c5e272197f17d24d3f562b2', 'width': 819}, 'variants': {}}]} |
KV cache performance - unexpected issue | 5 | Hi,
I'm trying to implement a simple decoder-only LLM for educational purposes and have been struggling with an issue related to KV caching. For some reason, the implementation below results in **lower** performance when using the KV cache. Profiling the code reveals that despite slightly faster matmuls (both for KQV generation and for the actual self-attention mechanism), the read/write slicing of the KV cache actually makes the whole thing slower.
Am I doing something really dumb here? I implemented the KV cache as a circular buffer, and I have a K/V cache for each SelfAttentionHead:
```python
import torch
import torch.nn.functional as F

# n_embedding, block_size and device are module-level config in my script.

class SelfAttentionHead(torch.nn.Module):
    def __init__(self, head_size):
        super().__init__()
        self.head_size = head_size
        self.key = torch.nn.Linear(n_embedding, head_size, bias=False)
        self.query = torch.nn.Linear(n_embedding, head_size, bias=False)
        self.value = torch.nn.Linear(n_embedding, head_size, bias=False)
        self.register_buffer('tril', torch.tril(torch.ones(block_size, block_size)))
        self.register_buffer('k_cache', torch.zeros(0))
        self.register_buffer('v_cache', torch.zeros(0))
        self.last_index = None
        self.use_cache = False

    def train(self, mode=True):
        super().train(mode)
        if not mode:
            self.use_cache = True
            self.last_index = None
        else:
            self.use_cache = False
        self.k_cache = torch.zeros(0, device=device)
        self.v_cache = torch.zeros(0, device=device)
        torch.cuda.empty_cache()

    def eval(self):
        super().eval()
        self.use_cache = True
        self.last_index = None
        self.k_cache = torch.zeros(0, device=device)
        self.v_cache = torch.zeros(0, device=device)
        torch.cuda.empty_cache()

    def forward(self, x):
        B, T, _ = x.shape
        if self.use_cache:
            x_new = x[:, -1, :]
            if self.k_cache.shape[0] == 0 and self.v_cache.shape[0] == 0:
                self.k_cache = torch.zeros(size=[B, block_size, self.head_size], device=device)
                self.v_cache = torch.zeros(size=[B, block_size, self.head_size], device=device)
            k_new = self.key(x_new)    # batch_size, 1, head_size
            q_new = self.query(x_new)  # batch_size, 1, head_size
            v_new = self.value(x_new)  # batch_size, 1, head_size
            if self.last_index is None:
                self.last_index = 0
            else:
                self.last_index += 1
            update_index = self.last_index % block_size
            self.k_cache[:, update_index, :] = k_new
            self.v_cache[:, update_index, :] = v_new
            # Retrieve appropriate K, V by fetching the KV cache
            valid_start = max(0, self.last_index - block_size + 1)
            cache_indices = torch.arange(valid_start, self.last_index + 1, device=device) % block_size
            K = self.k_cache[:, cache_indices, :]
            V = self.v_cache[:, cache_indices, :]
            QKt = (q_new @ K.transpose(-1, -2)) * self.head_size**-0.5
            QKt[:, T:, :] = float('-inf')
            wei = F.softmax(QKt, dim=-1)
            out = wei @ V
            return out
        else:
            k = self.key(x)    # batch_size, block_size, head_size
            q = self.query(x)  # batch_size, block_size, head_size
            v = self.value(x)  # batch_size, block_size, head_size
            if self.last_index is None:
                self.last_index = 0
            else:
                self.last_index += 1
            update_index = self.last_index % block_size
            QKt = (q @ k.transpose(-1, -2)) * (self.head_size**-0.5)
            wei = QKt.masked_fill(self.tril[:T, :T] == 0, float('-inf'))
            wei = F.softmax(wei, dim=-1)
            out = wei @ v
            return out
```
| 2025-01-30T09:29:06 | https://www.reddit.com/r/LocalLLaMA/comments/1idjavc/kv_cache_performance_unexpected_issue/ | henker92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idjavc | false | null | t3_1idjavc | /r/LocalLLaMA/comments/1idjavc/kv_cache_performance_unexpected_issue/ | false | false | self | 5 | null |
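One suspicion: the cached read path. `K = self.k_cache[:, cache_indices, :]` uses advanced indexing, which gathers (copies) the selected rows on every token, and the modulo addressing rules out a plain contiguous slice. A minimal sketch of the non-circular variant for comparison, assuming the sequence never exceeds `block_size` so the slice stays a zero-copy view:

```python
# Non-circular cache read for comparison (assumes t < block_size):
# writes go to position t, reads are contiguous views, no per-token gather.
def forward_cached(self, x, t):
    x_new = x[:, -1, :]
    self.k_cache[:, t, :] = self.key(x_new)
    self.v_cache[:, t, :] = self.value(x_new)
    K = self.k_cache[:, : t + 1, :]      # basic slice -> view, no copy
    V = self.v_cache[:, : t + 1, :]
    q = self.query(x_new).unsqueeze(1)   # (B, 1, head_size)
    att = (q @ K.transpose(-1, -2)) * self.head_size**-0.5
    return F.softmax(att, dim=-1) @ V    # (B, 1, head_size)
```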
Query | 1 | [removed] | 2025-01-30T09:43:09 | https://www.reddit.com/r/LocalLLaMA/comments/1idjh52/query/ | Master-Article7603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idjh52 | false | null | t3_1idjh52 | /r/LocalLLaMA/comments/1idjh52/query/ | false | false | self | 1 | null |
Did I cause the DeepSeek outage? | 0 | 2025-01-30T09:48:29 | https://www.reddit.com/gallery/1idjjly | ConcernedCitizen_KM | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1idjjly | false | null | t3_1idjjly | /r/LocalLLaMA/comments/1idjjly/did_i_cause_the_deepseek_outage/ | false | false | 0 | null |
GRPO for VLMs? | 1 | Is there any example of using the huggingface GRPO trainer for VLMs? I'm not sure how to format my dataset to use it with a VLM. | 2025-01-30T09:56:40 | https://www.reddit.com/r/LocalLLaMA/comments/1idjnd5/grpo_for_vlms/ | LiquidGunay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idjnd5 | false | null | t3_1idjnd5 | /r/LocalLLaMA/comments/1idjnd5/grpo_for_vlms/ | false | false | self | 1 | null |
Deepseek are clever fuckers | 0 | I wrote this about how DeepSeek is pushing decision makers at large financial institutions to seriously consider running their own models instead of calling out to Microsoft, Amazon & Google.
[https://blog.helix.ml/p/you-should-run-local-models-run-deepseek](https://blog.helix.ml/p/you-should-run-local-models-run-deepseek) | 2025-01-30T10:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/1idjqdt/deepseek_are_clever_fuckers/ | lewqfu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idjqdt | false | null | t3_1idjqdt | /r/LocalLLaMA/comments/1idjqdt/deepseek_are_clever_fuckers/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ruPKh6HgyDP2Z8-L_wBICT12by0jgIL_JyoMmHHMsZM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1pepvrwiuHUWlcPJD5z4WbLOZG_XPnrDopuby_LY1pg.jpg?width=108&crop=smart&auto=webp&s=fb54a193bd0a19669fae503f3c3c81fb8b91259d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1pepvrwiuHUWlcPJD5z4WbLOZG_XPnrDopuby_LY1pg.jpg?width=216&crop=smart&auto=webp&s=d12141f7cb9f8c919cd65026e4914e98e0eda913', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1pepvrwiuHUWlcPJD5z4WbLOZG_XPnrDopuby_LY1pg.jpg?width=320&crop=smart&auto=webp&s=53c877a7d6785637fccb2264cab87aa93b3a1211', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1pepvrwiuHUWlcPJD5z4WbLOZG_XPnrDopuby_LY1pg.jpg?width=640&crop=smart&auto=webp&s=cbbf749e5d3bfde3c5390e856b0091ef0a9f98db', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1pepvrwiuHUWlcPJD5z4WbLOZG_XPnrDopuby_LY1pg.jpg?width=960&crop=smart&auto=webp&s=8b6e920580a0a136c1247165318f4e71208b9714', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1pepvrwiuHUWlcPJD5z4WbLOZG_XPnrDopuby_LY1pg.jpg?width=1080&crop=smart&auto=webp&s=a307552a277e3bb781cd739653208fe65474bf98', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1pepvrwiuHUWlcPJD5z4WbLOZG_XPnrDopuby_LY1pg.jpg?auto=webp&s=75c63502e14384108a5180bbec663b4780e39dec', 'width': 1200}, 'variants': {}}]} |
Took long enough | 0 | 2025-01-30T10:05:12 | Own_Bet_9292 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idjrjh | false | null | t3_1idjrjh | /r/LocalLLaMA/comments/1idjrjh/took_long_enough/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'P8RFaxhcW77jdnposcqVJ6f5brxCn33chr86nI0o7gs', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/lgypv886u3ge1.png?width=108&crop=smart&auto=webp&s=f7b28a2f9656cac8205adb5e9aea9cf70fc514e8', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/lgypv886u3ge1.png?width=216&crop=smart&auto=webp&s=4abd15c9d4b69855267131d0771aec051f8ca96a', 'width': 216}, {'height': 158, 'url': 'https://preview.redd.it/lgypv886u3ge1.png?width=320&crop=smart&auto=webp&s=fa0f0f67c9caa56e4eac4f784b91ba3eff76a753', 'width': 320}, {'height': 317, 'url': 'https://preview.redd.it/lgypv886u3ge1.png?width=640&crop=smart&auto=webp&s=8799cf9306584f6a4ac5b7efa277d7f92f39bf2c', 'width': 640}, {'height': 476, 'url': 'https://preview.redd.it/lgypv886u3ge1.png?width=960&crop=smart&auto=webp&s=8f57e650c4f7ff7e01b1233caa70d6c6be653545', 'width': 960}, {'height': 535, 'url': 'https://preview.redd.it/lgypv886u3ge1.png?width=1080&crop=smart&auto=webp&s=1d11808902210c7f509489253bb5b0a7ef015c60', 'width': 1080}], 'source': {'height': 759, 'url': 'https://preview.redd.it/lgypv886u3ge1.png?auto=webp&s=549c7f5e09d88a4bcb4c7dacbb5238ceb942d61b', 'width': 1530}, 'variants': {}}]} |
how to use Ollama with C++ | 1 | [removed] | 2025-01-30T10:14:40 | https://www.reddit.com/r/LocalLLaMA/comments/1idjw04/how_to_use_ollama_with_c/ | Reasonable-Falcon470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idjw04 | false | null | t3_1idjw04 | /r/LocalLLaMA/comments/1idjw04/how_to_use_ollama_with_c/ | false | false | self | 1 | null |
how to use Ollama with C++ | 1 | [removed] | 2025-01-30T10:16:45 | https://www.reddit.com/r/LocalLLaMA/comments/1idjwyw/how_to_use_ollama_with_c/ | Reasonable-Falcon470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idjwyw | false | null | t3_1idjwyw | /r/LocalLLaMA/comments/1idjwyw/how_to_use_ollama_with_c/ | false | false | self | 1 | null |
An Interesting Watch: DeepSeek vs. Open AI - The State of AI w/ Emad Mostaque & Salim Ismail | 0 | I believe that Emad does a good job in this podcast of explaining why DeepSeek R1 is actually an engineering revolution in how models are trained.
[https://www.youtube.com/watch?v=lY8Ja00PCQM](https://www.youtube.com/watch?v=lY8Ja00PCQM) | 2025-01-30T10:34:05 | https://www.reddit.com/r/LocalLLaMA/comments/1idk5ad/an_interesting_watch_deepseek_vs_open_ai_the/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idk5ad | false | null | t3_1idk5ad | /r/LocalLLaMA/comments/1idk5ad/an_interesting_watch_deepseek_vs_open_ai_the/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'M8OUnhiUUgZmNFjolvfAdQaWhAsWGwDsXQ-eQxtlTJ4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/8vQbz5p0b128mv30HLtu4dKQJAbTw__cuP9z7k2LBwE.jpg?width=108&crop=smart&auto=webp&s=2cbc5116e2b73de88ff46668cf4fd93242454000', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/8vQbz5p0b128mv30HLtu4dKQJAbTw__cuP9z7k2LBwE.jpg?width=216&crop=smart&auto=webp&s=389d7542e28a0c12d285eac9d7670892885b6f38', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/8vQbz5p0b128mv30HLtu4dKQJAbTw__cuP9z7k2LBwE.jpg?width=320&crop=smart&auto=webp&s=2ddd88a09b5a753dca099c15d68998bf2b955835', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/8vQbz5p0b128mv30HLtu4dKQJAbTw__cuP9z7k2LBwE.jpg?auto=webp&s=992b4f64f010bccc658fa1576b18340fc6ce6f54', 'width': 480}, 'variants': {}}]} |
Deep seek on frustrated #deepseek | 0 | 2025-01-30T10:36:27 | NormalPitch5769 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idk6fu | false | null | t3_1idk6fu | /r/LocalLLaMA/comments/1idk6fu/deep_seek_on_frustrated_deepseek/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'eUqXREOV1LLIYoSYODFi1xqNZHhkUT1oVxxG8z4M5Y0', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/mhunm2ka04ge1.jpeg?width=108&crop=smart&auto=webp&s=331dd36bcdd66814f4f48680cf1f13867e3ef873', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/mhunm2ka04ge1.jpeg?width=216&crop=smart&auto=webp&s=f9040d8be1cebdff6101b3ae799da8ed3ab204fb', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/mhunm2ka04ge1.jpeg?width=320&crop=smart&auto=webp&s=37295d86114feaa0be23a6dc6f333f22b678d023', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/mhunm2ka04ge1.jpeg?width=640&crop=smart&auto=webp&s=fbdd82bbf8245c4973f6d752136320d79a958192', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/mhunm2ka04ge1.jpeg?width=960&crop=smart&auto=webp&s=1495fc5eed831042d6c2b46c72eb723754d28789', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/mhunm2ka04ge1.jpeg?width=1080&crop=smart&auto=webp&s=cf07d8e5d2559687696e46577ca5f8317e470416', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/mhunm2ka04ge1.jpeg?auto=webp&s=4f6573deda85adb5ec6d7d2e939a11a0caf3f15a', 'width': 3024}, 'variants': {}}]} |
Built a Lightning-Fast DeepSeek RAG Chatbot – Reads PDFs, Uses FAISS, and Runs on GPU! 🚀 | 8 | 2025-01-30T10:38:01 | https://github.com/SaiAkhil066/DeepSeek-RAG-Chatbot.git | akhilpanja | github.com | 1970-01-01T00:00:00 | 0 | {} | 1idk78y | false | null | t3_1idk78y | /r/LocalLLaMA/comments/1idk78y/built_a_lightningfast_deepseek_rag_chatbot_reads/ | false | false | 8 | {'enabled': False, 'images': [{'id': '-c5qad-_Gb_G2gZc_Ayzpk4CYHrHMEhQ0REkCoX2Knc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tUx5_IJmvUr2ThZPtrihHQ9kI5AASNBWXyd4_vSU98g.jpg?width=108&crop=smart&auto=webp&s=20411351a332548f281d01fa3dbd669e1ed58a7e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tUx5_IJmvUr2ThZPtrihHQ9kI5AASNBWXyd4_vSU98g.jpg?width=216&crop=smart&auto=webp&s=b2085472ae41ace2aef8042bef17418309d9b9f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tUx5_IJmvUr2ThZPtrihHQ9kI5AASNBWXyd4_vSU98g.jpg?width=320&crop=smart&auto=webp&s=fa8df1680efa5efc035d783e638d733b4d2a267a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tUx5_IJmvUr2ThZPtrihHQ9kI5AASNBWXyd4_vSU98g.jpg?width=640&crop=smart&auto=webp&s=d5062c7b85ff0d68e2cb22c69a986c14fb86fc50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tUx5_IJmvUr2ThZPtrihHQ9kI5AASNBWXyd4_vSU98g.jpg?width=960&crop=smart&auto=webp&s=11fceccce9fe4f58b4fdccb4fe7f752ee9340b39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tUx5_IJmvUr2ThZPtrihHQ9kI5AASNBWXyd4_vSU98g.jpg?width=1080&crop=smart&auto=webp&s=505576f45a67851f84b3c345613d5c8e25fe6e55', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tUx5_IJmvUr2ThZPtrihHQ9kI5AASNBWXyd4_vSU98g.jpg?auto=webp&s=254288463996a9109c7af6caa966b073164937c6', 'width': 1200}, 'variants': {}}]} |
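For readers who want the shape of the retrieval step before opening the repo: the FAISS part of a RAG chatbot like this typically looks like the sketch below. The embedding model and chunking here are illustrative assumptions, not taken from the linked project.

```python
# Minimal sketch of FAISS-based retrieval for a RAG pipeline (illustrative only).
import faiss
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")       # assumed embedding model
chunks = ["PDF chunk one ...", "PDF chunk two ..."]      # text split from the PDFs

vecs = embedder.encode(chunks).astype("float32")
index = faiss.IndexFlatL2(vecs.shape[1])                 # exact L2 index
index.add(vecs)                                          # store all chunk vectors

query = embedder.encode(["user question"]).astype("float32")
_, ids = index.search(query, 2)                          # top-2 nearest chunks
context = "\n".join(chunks[i] for i in ids[0])           # goes into the LLM prompt
```

The retrieved `context` is then prepended to the user's question so the LLM answers from the documents instead of from memory.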
ZeroCoT: a simple method to bootstrap CoT from zero | 18 | Author: @BlinkDL_AI
https://x.com/BlinkDL_AI/status/1884768989743882276 | 2025-01-30T10:50:10 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idkdan | false | null | t3_1idkdan | /r/LocalLLaMA/comments/1idkdan/zerocot_a_simple_method_to_bootstrap_cot_from_zero/ | false | false | 18 | {'enabled': True, 'images': [{'id': 'JDd1ZkOlHnVIZiTV6CGAGg3dLhb8if-4R3Lcmvx2x_g', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/0jtwa3nq24ge1.png?width=108&crop=smart&auto=webp&s=94c33081032b50552a7622920c8bbeafc07e7f42', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/0jtwa3nq24ge1.png?width=216&crop=smart&auto=webp&s=6814f8db079b1cd9a7bdc82a177540feb5f67112', 'width': 216}, {'height': 222, 'url': 'https://preview.redd.it/0jtwa3nq24ge1.png?width=320&crop=smart&auto=webp&s=0ba7223dbd7526ee3c30c44d01930e4426733f1a', 'width': 320}, {'height': 445, 'url': 'https://preview.redd.it/0jtwa3nq24ge1.png?width=640&crop=smart&auto=webp&s=266146fde56b15038f4b881fa361d78640dff652', 'width': 640}, {'height': 668, 'url': 'https://preview.redd.it/0jtwa3nq24ge1.png?width=960&crop=smart&auto=webp&s=ab14a8ea529c4aa3714fb7614a213484d59bad58', 'width': 960}, {'height': 751, 'url': 'https://preview.redd.it/0jtwa3nq24ge1.png?width=1080&crop=smart&auto=webp&s=730b6d25358793fd30d2f4d3c6aa174f16b4169a', 'width': 1080}], 'source': {'height': 1042, 'url': 'https://preview.redd.it/0jtwa3nq24ge1.png?auto=webp&s=8812b6243d4d760f842dc43d5cdef6f723380ce1', 'width': 1497}, 'variants': {}}]} |
DeepSeek now refuses marketing tasks? | 0 | 2025-01-30T10:56:15 | omnisvosscio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idkg5m | false | null | t3_1idkg5m | /r/LocalLLaMA/comments/1idkg5m/deepseek_now_refuses_marketing_tasks/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'meJELUoGxzgfX2HsNQ4mQcXfkWig7j9WEFB-FsfXMHg', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/fxr9k9ur34ge1.png?width=108&crop=smart&auto=webp&s=40eaff2407aa8b19a430ab278e400216befe4b5a', 'width': 108}, {'height': 36, 'url': 'https://preview.redd.it/fxr9k9ur34ge1.png?width=216&crop=smart&auto=webp&s=94a5ecb098e9b14b52939c38ec5140b134761d53', 'width': 216}, {'height': 53, 'url': 'https://preview.redd.it/fxr9k9ur34ge1.png?width=320&crop=smart&auto=webp&s=c4f9829f6f9080a95a75a9f7fd56aaa20af34478', 'width': 320}, {'height': 107, 'url': 'https://preview.redd.it/fxr9k9ur34ge1.png?width=640&crop=smart&auto=webp&s=9069696a4c8870ee2f4e0c0a082c908e8845eeb6', 'width': 640}, {'height': 160, 'url': 'https://preview.redd.it/fxr9k9ur34ge1.png?width=960&crop=smart&auto=webp&s=493b33779fb5d11e2436c4178ffb8175bf16b04c', 'width': 960}, {'height': 180, 'url': 'https://preview.redd.it/fxr9k9ur34ge1.png?width=1080&crop=smart&auto=webp&s=cff2f98244bcbae06e745e64a1c0a49fd5c66ae2', 'width': 1080}], 'source': {'height': 286, 'url': 'https://preview.redd.it/fxr9k9ur34ge1.png?auto=webp&s=7fcf09dc689d10f405f98a136300860f37b340bd', 'width': 1710}, 'variants': {}}]} |
PSA: your 7B/14B/32B/70B "R1" is NOT DeepSeek. | 1 | [removed] | 2025-01-30T11:06:14 | https://www.reddit.com/r/LocalLLaMA/comments/1idklhv/psa_your_7b14b32b70b_r1_is_not_deepseek/ | Zalathustra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idklhv | false | null | t3_1idklhv | /r/LocalLLaMA/comments/1idklhv/psa_your_7b14b32b70b_r1_is_not_deepseek/ | false | false | self | 1 | null |
Deepseek brought retards here | 65 | Our LocalLLaMA community is (or was) a highly technical community, not one talking about trends (shit like langchain, politics, etc.).
Right now it's mostly people showing screenshots of DeepSeek chat and other hype.
I hope the hype is over soon and I start seeing highly technical content here again. | 2025-01-30T11:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/1idknoy/deepseek_brought_retards_here/ | Armym | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idknoy | false | null | t3_1idknoy | /r/LocalLLaMA/comments/1idknoy/deepseek_brought_retards_here/ | false | false | self | 65 | null
The absolute state of things | 1 | [removed] | 2025-01-30T11:11:16 | https://www.reddit.com/r/LocalLLaMA/comments/1idknyy/the_absolute_state_of_things/ | Zalathustra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idknyy | false | null | t3_1idknyy | /r/LocalLLaMA/comments/1idknyy/the_absolute_state_of_things/ | false | false | self | 1 | null |
Qwen LLM page not loading in Firefox, because of the DuckDuckGo Extension | 1 | [removed] | 2025-01-30T11:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/1idkqul/qwen_llm_page_not_loading_in_firefox_because_of/ | Significant-Owl2580 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idkqul | false | null | t3_1idkqul | /r/LocalLLaMA/comments/1idkqul/qwen_llm_page_not_loading_in_firefox_because_of/ | false | false | self | 1 | null |
COS(M+O)S: Curiosity and RL-Enhanced MCTS for Exploring Story Space via Language Models | 1 | 2025-01-30T11:17:23 | https://v.redd.it/fwnfxmgd74ge1 | cosmos-llm | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idkr2y | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fwnfxmgd74ge1/DASHPlaylist.mpd?a=1740827859%2CNWQyMjY4OGVmNDI1ZDZlOWMxNzRhYzY1ZWI3ZGVjNDI5N2JlZmI4NjdhOGJjNTIzZmRlMDNhODQzM2NmY2U4Ng%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/fwnfxmgd74ge1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/fwnfxmgd74ge1/HLSPlaylist.m3u8?a=1740827859%2CY2FiZDJjNWI3NDdhM2U3YzkxOWE1ODEzOTFhZGRmOTA5MmM4ZGQyOTdmMzk4OTI4MjAyOWQyOWFiOWNmMTQ5Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fwnfxmgd74ge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1idkr2y | /r/LocalLLaMA/comments/1idkr2y/cosmos_curiosity_and_rlenhanced_mcts_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=108&crop=smart&format=pjpg&auto=webp&s=3892b40d0b4fc68b1e8148d6a4062a2012413e5d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=216&crop=smart&format=pjpg&auto=webp&s=5214d3ad976719a1da0060ef6b7729d00668aa18', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=320&crop=smart&format=pjpg&auto=webp&s=88826f4db0f456bb26c24b68481f8f9636441a92', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=640&crop=smart&format=pjpg&auto=webp&s=a352cbc50980aef635d6da5d1fc94637ed38a3ce', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=960&crop=smart&format=pjpg&auto=webp&s=819d2a1c3cd98173fc72cc27376044705e413f6a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7fd10336cb0eee1fe5905e3395f0aef2b6f593a0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZDdsaXV3Z2Q3NGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?format=pjpg&auto=webp&s=de5931eae677333aa479e74fc4d1fcce799ec476', 'width': 1920}, 'variants': {}}]} |
Inline Image Generation w/ Stable Diffusion | 1 | If anyone's interested, I just added image generation support to the Mac version of my LLM frontend. You can generate images by just asking the model, or by using /image. The smaller models will sometimes say they can't create images, but if you push them on it they will ;). Currently working on adding this to the iOS and visionOS apps, but it's a little less straightforward. I've also added support for some of the DeepSeek models. If anyone is interested in collaborating on this project, DM me! | 2025-01-30T11:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/1idktml/inline_image_generation_w_stable_diffusion/ | kenech_io | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idktml | false | null | t3_1idktml | /r/LocalLLaMA/comments/1idktml/inline_image_generation_w_stable_diffusion/ | false | false | self | 1 | null
Can we use Llama 3.3-70B-instruct as a base model for creating a model like DeepSeek R1 | 0 | I think I have figured out a way to train open-source LLMs like Llama 3.3 to reach DeepSeek-R1-level performance.
My approach is this:
✅ Llama 3.3 Fine-Tuning
✅ Matroid Constraint Optimization for Logical Structuring
✅ Reinforcement Learning with Self-Verification (a minimal reward sketch follows this list)
✅ Evaluation Against DeepSeek-R1
✅ Optimized Deployment with INT4 Quantization | 2025-01-30T11:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1idkvlw/can_we_use_llama_3370binstruct_as_a_base_model/ | Secure_Echo_971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idkvlw | false | null | t3_1idkvlw | /r/LocalLLaMA/comments/1idkvlw/can_we_use_llama_3370binstruct_as_a_base_model/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
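The post itself contains no code, so as a rough illustration of what the "Reinforcement Learning with Self-Verification" step could mean in practice, here is a minimal rule-based reward sketch in the spirit of R1's accuracy and format rewards. The tag format, \boxed{} answer convention, weights, and function names are all assumptions for illustration, not the poster's actual method.

```python
import re

def format_reward(completion: str) -> float:
    """Reward completions that wrap reasoning in <think> tags and emit a boxed answer."""
    has_think = bool(re.search(r"<think>.*?</think>", completion, re.DOTALL))
    has_answer = "\\boxed{" in completion
    return 0.5 * has_think + 0.5 * has_answer

def accuracy_reward(completion: str, reference: str) -> float:
    """Self-verification: score 1.0 only if the extracted final answer matches the reference."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == reference.strip() else 0.0

def total_reward(completion: str, reference: str) -> float:
    # fed to a policy-gradient trainer (e.g. GRPO/PPO) as the scalar reward
    return format_reward(completion) + accuracy_reward(completion, reference)

print(total_reward("<think>2+2=4</think> Final answer: \\boxed{4}", "4"))  # 2.0
```

Because this reward is computable without a learned judge, it scales cheaply to huge numbers of rollouts, which is a big part of why rule-based RL became practical for reasoning training.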
Running deepseek v3 model in Open WebUI | 1 | [removed] | 2025-01-30T11:29:35 | https://www.reddit.com/r/LocalLLaMA/comments/1idkxhn/running_deepseek_v3_model_in_open_webui/ | quantimx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idkxhn | false | null | t3_1idkxhn | /r/LocalLLaMA/comments/1idkxhn/running_deepseek_v3_model_in_open_webui/ | false | false | self | 1 | null |
The cheapest way to run Openwebui | 2 | Hi, I want to run Open WebUI on an online server. What is the cheapest way to do it? Which hosting service would suit me? I will be the only one using it, but I want to access it from any device. | 2025-01-30T11:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/1idkyds/the_cheapest_way_to_run_openwebui/ | pifmu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idkyds | false | null | t3_1idkyds | /r/LocalLLaMA/comments/1idkyds/the_cheapest_way_to_run_openwebui/ | false | false | self | 2 | null
Help me understand why the better version is cheaper to use. | 0 | Why is the 32B version cheaper than the 14B version on OpenRouter?
https://preview.redd.it/4p8rk3epb4ge1.png?width=679&format=png&auto=webp&s=1f01cfc791914b8feb1eb0345bb575943224c408
| 2025-01-30T11:41:08 | https://www.reddit.com/r/LocalLLaMA/comments/1idl3ln/help_me_understand_why_the_better_version_is/ | -x-Spike-x- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idl3ln | false | null | t3_1idl3ln | /r/LocalLLaMA/comments/1idl3ln/help_me_understand_why_the_better_version_is/ | false | false | 0 | null |
Can we get back to actually talking about LLMs instead of circlejerking about Deepseek? | 1 | [removed] | 2025-01-30T11:43:19 | https://www.reddit.com/r/LocalLLaMA/comments/1idl4q2/can_we_get_back_to_actually_talking_about_llms/ | MerePotato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idl4q2 | false | null | t3_1idl4q2 | /r/LocalLLaMA/comments/1idl4q2/can_we_get_back_to_actually_talking_about_llms/ | false | false | self | 1 | null |
Can we get back to actually talking about LLMs instead of kneeling at the altar of Deepseek? | 1 | [removed] | 2025-01-30T11:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/1idl5wb/can_we_get_back_to_actually_talking_about_llms/ | MerePotato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idl5wb | false | null | t3_1idl5wb | /r/LocalLLaMA/comments/1idl5wb/can_we_get_back_to_actually_talking_about_llms/ | false | false | self | 1 | null |
Can we get back to actually talking about LLMs instead of circlejerking about Deepseek? | 1 | [removed] | 2025-01-30T11:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/1idl6sr/can_we_get_back_to_actually_talking_about_llms/ | MerePotato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idl6sr | false | null | t3_1idl6sr | /r/LocalLLaMA/comments/1idl6sr/can_we_get_back_to_actually_talking_about_llms/ | false | false | self | 1 | null |
Can we get back to actually talking about LLMs now? | 1 | [removed] | 2025-01-30T11:49:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1idl8bd | false | null | t3_1idl8bd | /r/LocalLLaMA/comments/1idl8bd/can_we_get_back_to_actually_talking_about_llms_now/ | false | false | default | 1 | null |
Can we get back to actually talking about LLMs now? | 1 | [removed] | 2025-01-30T11:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1idl8tp/can_we_get_back_to_actually_talking_about_llms_now/ | MerePotato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idl8tp | false | null | t3_1idl8tp | /r/LocalLLaMA/comments/1idl8tp/can_we_get_back_to_actually_talking_about_llms_now/ | false | false | self | 1 | null |
Processing Whisper transcription via a local LLM | 1 | [removed] | 2025-01-30T11:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/1idlbxj/processing_whisper_transcription_via_an_local_llm/ | Separate-Power-1881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idlbxj | false | null | t3_1idlbxj | /r/LocalLLaMA/comments/1idlbxj/processing_whisper_transcription_via_an_local_llm/ | false | false | self | 1 | null
What really happened | 1 | 2025-01-30T11:56:28 | Crazy_Ninja6559 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idlbzy | false | null | t3_1idlbzy | /r/LocalLLaMA/comments/1idlbzy/what_really_happened/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'aHkvI30QE0onEulXJLg1H-tQwC6mueShJy1JCuvzWbU', 'resolutions': [{'height': 125, 'url': 'https://preview.redd.it/lq33rwihe4ge1.jpeg?width=108&crop=smart&auto=webp&s=7f3cda3505e0e9bc9c155589aeabb06175519b64', 'width': 108}, {'height': 250, 'url': 'https://preview.redd.it/lq33rwihe4ge1.jpeg?width=216&crop=smart&auto=webp&s=b221b7dc56f1d655928eecc85c9ca506474e1ece', 'width': 216}, {'height': 371, 'url': 'https://preview.redd.it/lq33rwihe4ge1.jpeg?width=320&crop=smart&auto=webp&s=f34c04820a4824fdcc27ff9bca165c48c4505b5c', 'width': 320}, {'height': 743, 'url': 'https://preview.redd.it/lq33rwihe4ge1.jpeg?width=640&crop=smart&auto=webp&s=262844feb659ba8b36dec630cd89a9ce5ee0b5ac', 'width': 640}], 'source': {'height': 813, 'url': 'https://preview.redd.it/lq33rwihe4ge1.jpeg?auto=webp&s=7a2d9a70df52b773ab59c09ad121f5accebcba6d', 'width': 700}, 'variants': {}}]} |
DeepSeek-R1 for Cline over ai.azure | 1 | [removed] | 2025-01-30T12:03:14 | https://www.reddit.com/r/LocalLLaMA/comments/1idlg0m/deepseekr1_for_cline_over_aiazure/ | BudgetDelivery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idlg0m | false | null | t3_1idlg0m | /r/LocalLLaMA/comments/1idlg0m/deepseekr1_for_cline_over_aiazure/ | false | false | 1 | null |
GPT4All/LMStudio - Do any companies actually use their enterprise offering? | 2 | I saw that GPT4All/LMStudio both have enterprise versions (at least they have one of those "contact us" forms).
But I'm wondering if you've actually heard of any enterprises that have formally provisioned these apps to their employees? And if so, what was the reason? Like why did that enterprise decide not to self-host an internal AI service (which would also avoid sending sensitive data to OpenAI or whatever)?
On another note, I can *maybe* see middle managers telling their direct team to use GPT4All/LocalLlama as a workaround when their slow/backward enterprise blocks ChatGPT but also doesn't have any other internal solution yet.
But even that feels like a stretch - like does anyone know any middle managers that have actually gone out of their way to buy a handful of seats for GPT4All/LMStudio? I imagine 99.9% of people/teams in that situation just use their personal ChatGPT, sending that enterprise data to OpenAI without the enterprise knowing lol. | 2025-01-30T12:06:05 | https://www.reddit.com/r/LocalLLaMA/comments/1idlhn3/gpt4alllmstudio_do_any_companies_actually_use/ | intofuture | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idlhn3 | false | null | t3_1idlhn3 | /r/LocalLLaMA/comments/1idlhn3/gpt4alllmstudio_do_any_companies_actually_use/ | false | false | self | 2 | null |
PSA #3: mass-reporting actual informative posts won't make you less wrong. | 1 | [removed] | 2025-01-30T12:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/1idlhr6/psa_3_massreporting_actual_informative_posts_wont/ | Zalathustra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idlhr6 | false | null | t3_1idlhr6 | /r/LocalLLaMA/comments/1idlhr6/psa_3_massreporting_actual_informative_posts_wont/ | false | false | self | 1 | null |
Okay, this is a test. | 1 | At this point I'm wondering if *everything* I post gets insta-nuked. | 2025-01-30T12:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1idligf/okay_this_is_a_test/ | Zalathustra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idligf | false | null | t3_1idligf | /r/LocalLLaMA/comments/1idligf/okay_this_is_a_test/ | false | false | self | 1 | null |
COS(M+O)S: 3B LLM + MCTS Approaches 70B-Level Plot Quality Using Curiosity-Based Rewards | 1 | 2025-01-30T12:12:39 | https://v.redd.it/7xa35t09h4ge1 | Busy_Talk8788 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idlljo | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7xa35t09h4ge1/DASHPlaylist.mpd?a=1740831173%2CMzVlZTZmMDM4NzNlYzc2Mzk0MzIyZDVhMTczZDRlYWZlMWQ1NjU2NDJiMzhkZTc5M2IzODg0N2IyOTFjN2U1Mg%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/7xa35t09h4ge1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/7xa35t09h4ge1/HLSPlaylist.m3u8?a=1740831173%2CM2I2ZWIzNGRmNThmNGMwNGYyNTBlMWE1ZTk4YTNjMzljOTJiZWUyYTQ0ZjA1MzgzZGI3NjZiZGJkOGIyMTBkMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7xa35t09h4ge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1idlljo | /r/LocalLLaMA/comments/1idlljo/cosmos_3b_llm_mcts_approaches_70blevel_plot/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=108&crop=smart&format=pjpg&auto=webp&s=534d1dcbb2590bdc65c40b9d5f2362ac584b03cd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=216&crop=smart&format=pjpg&auto=webp&s=817f23cf72e86db63bed79970d7865c7a1313cbf', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=320&crop=smart&format=pjpg&auto=webp&s=71c28ce58716881bd117c0570922ffdec27bc095', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=640&crop=smart&format=pjpg&auto=webp&s=d4acb1468058188143b4cdfdeaa4a4462dfebf91', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=960&crop=smart&format=pjpg&auto=webp&s=4a485789dcb1d24e071ac17178d7683a3df46cab', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c425dfb5586a2ca29d9d6c13c80f88df69d159d6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejM1c3JyMDloNGdlMRKOnAQhI4ITm2Pm_w-pI6dsSPy0LvriPTXAm2mk3rur.png?format=pjpg&auto=webp&s=492da648aad844853521f99cf3df28bc22171529', 'width': 1920}, 'variants': {}}]} |
PSA #3: mass-reporting actual informative posts won't make you less wrong | 1 | [removed] | 2025-01-30T12:13:53 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1idlman | false | null | t3_1idlman | /r/LocalLLaMA/comments/1idlman/psa_3_massreporting_actual_informative_posts_wont/ | false | false | default | 1 | null |
Ok but can your western AI do this? | 15 | 2025-01-30T12:24:39 | https://www.reddit.com/gallery/1idlskj | CH1997H | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1idlskj | false | null | t3_1idlskj | /r/LocalLLaMA/comments/1idlskj/ok_but_can_your_western_ai_do_this/ | false | false | 15 | null |
Fantastic summary of DeepSeek R1 and why it's such a big deal by Computerphile | 51 | 2025-01-30T12:26:30 | https://youtu.be/gY4Z-9QlZ64 | CrasHthe2nd | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1idltqu | false | {'oembed': {'author_name': 'Computerphile', 'author_url': 'https://www.youtube.com/@Computerphile', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/gY4Z-9QlZ64?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek is a Game Changer for AI - Computerphile"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/gY4Z-9QlZ64/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek is a Game Changer for AI - Computerphile', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1idltqu | /r/LocalLLaMA/comments/1idltqu/fantastic_summary_of_deepseek_r1_and_why_its_such/ | false | false | 51 | {'enabled': False, 'images': [{'id': 'Wn2kiBZHDMLn03EgGflQIb76p5zfnNEnnh4QN6lEANc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?width=108&crop=smart&auto=webp&s=3ac19c06fe541eb32127cfa72cdd53b793d1a758', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?width=216&crop=smart&auto=webp&s=e0733481dc381025bd9a4ebde5ccf97a4a69bf35', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?width=320&crop=smart&auto=webp&s=1d8f45dbff9232c0291f5891b3546986242e2cef', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-2yTeJWjY7Xy7hJ5TsvAGSJhZsAdkXzGD9s5XXxFxaY.jpg?auto=webp&s=10b155b4eeffcd498aad986ea9e82b3081af9ae0', 'width': 480}, 'variants': {}}]} |
Is this scene the reason why DeepSeek's logo is a whale? The CoT... | 1 | 2025-01-30T12:26:32 | https://www.youtube.com/watch?v=Qrv9c-udCrg | Extraaltodeus | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1idltsc | false | {'oembed': {'author_name': 'BadfishKoo', 'author_url': 'https://www.youtube.com/@BadfishKoo', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Qrv9c-udCrg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="The Hitchhikers Guide to the Galaxy - The Whale"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Qrv9c-udCrg/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'The Hitchhikers Guide to the Galaxy - The Whale', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1idltsc | /r/LocalLLaMA/comments/1idltsc/is_this_scene_the_reason_why_deepseeks_logo_is_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'DsVX_7uLI1WfHSPM6aYJytJxTP1MbwDj5wHb6IgWaq0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/yJi-kiAjc3CAZuCi79JCHCVJ32Bgf0-aiCJJyF0JmgE.jpg?width=108&crop=smart&auto=webp&s=e0bab4d7aac6555d3f2578787abd6deac1cd3893', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/yJi-kiAjc3CAZuCi79JCHCVJ32Bgf0-aiCJJyF0JmgE.jpg?width=216&crop=smart&auto=webp&s=7052f04af207186152f7d32ee3a72abbed785b55', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/yJi-kiAjc3CAZuCi79JCHCVJ32Bgf0-aiCJJyF0JmgE.jpg?width=320&crop=smart&auto=webp&s=9afde55987154450f048bbdfb3326f43136b75b6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/yJi-kiAjc3CAZuCi79JCHCVJ32Bgf0-aiCJJyF0JmgE.jpg?auto=webp&s=453021f354374495e1386842005b80afc6c0ccc0', 'width': 480}, 'variants': {}}]} |
Exploring User Privacy in Ollama: Are Local LLMs Truly Private? | 0 | I've spent the past couple of days looking into Ollama. Findings are listed at the beginning of the article, with the technical breakdown and hardening methods below.
[https://loopbreaker.substack.com/p/exploring-user-privacy-in-ollama](https://loopbreaker.substack.com/p/exploring-user-privacy-in-ollama) | 2025-01-30T12:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1idlz1x/exploring_user_privacy_in_ollama_are_local_llms/ | WasJohnTitorReal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idlz1x | false | null | t3_1idlz1x | /r/LocalLLaMA/comments/1idlz1x/exploring_user_privacy_in_ollama_are_local_llms/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'KztZ6JntfL0mpDV9f4jCgbPxRNJATHrec-lt_RBrFDs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9gc8Z3GsmWEe9WQXWeUdaleuypPNbMdrY1zXdtW6OnQ.jpg?width=108&crop=smart&auto=webp&s=cf386a7a5bb1009a886b10be49c5e1355f1ba12f', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/9gc8Z3GsmWEe9WQXWeUdaleuypPNbMdrY1zXdtW6OnQ.jpg?width=216&crop=smart&auto=webp&s=c196d4e1ca0ffdb081eeac8e73673a5ba05bf5ca', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/9gc8Z3GsmWEe9WQXWeUdaleuypPNbMdrY1zXdtW6OnQ.jpg?width=320&crop=smart&auto=webp&s=9d140782c688c7b145a1cc79eca318ef378ababc', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/9gc8Z3GsmWEe9WQXWeUdaleuypPNbMdrY1zXdtW6OnQ.jpg?width=640&crop=smart&auto=webp&s=69c1607879dbc5914eee44fa40c75af7a8c4ce73', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/9gc8Z3GsmWEe9WQXWeUdaleuypPNbMdrY1zXdtW6OnQ.jpg?width=960&crop=smart&auto=webp&s=e076d312fb0929c3191b757e316b533e21d06324', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/9gc8Z3GsmWEe9WQXWeUdaleuypPNbMdrY1zXdtW6OnQ.jpg?width=1080&crop=smart&auto=webp&s=f780ec93e5868fcb97e2a374683270931f65e466', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9gc8Z3GsmWEe9WQXWeUdaleuypPNbMdrY1zXdtW6OnQ.jpg?auto=webp&s=84a200a6ac5484b2461922aebb1a92f982c3bf8a', 'width': 1080}, 'variants': {}}]} |
'agentic' library for scraping websites | 0 | Hi, I am searching for a library to scrape websites using local LLMs.
Basically, what I desire is the ability to:
1. broadly define a task (e.g. search for news about X)
2. give a target domain (and possibly a max number of links to follow within that domain)
3. Put relevant data in the LLM and get structured data out
I know there are several options to get the structured data, but I am not aware of libraries covering all three aspects (a rough sketch of the loop I have in mind is below).
Any suggestions, along with a (local) LLM to be used in combination (7-14B)?
| 2025-01-30T12:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/1idmc51/agentic_library_for_scraping_websites/ | BenXavier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmc51 | false | null | t3_1idmc51 | /r/LocalLLaMA/comments/1idmc51/agentic_library_for_scraping_websites/ | false | false | self | 0 | null |
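Under stated assumptions (a local OpenAI-compatible endpoint such as Ollama's, and nothing but requests plus the standard library), the whole three-step loop can be sketched as follows. Every name in it (TASK, crawl, the model tag) is illustrative, not an existing package.

```python
# A rough sketch of the agentic scraping loop: crawl within one domain,
# feed each page to a local LLM, and collect structured JSON answers.
import json
import re
import requests

LLM_URL = "http://localhost:11434/v1/chat/completions"  # assumed local endpoint
TASK = "Search for news about X and return {title, date, summary} as JSON."

def ask_llm(prompt: str) -> str:
    resp = requests.post(LLM_URL, json={
        "model": "qwen2.5:14b",  # any 7-14B instruct model
        "messages": [{"role": "user", "content": prompt}],
    })
    return resp.json()["choices"][0]["message"]["content"]

def crawl(start_url: str, max_links: int = 10) -> list[dict]:
    seen, queue, results = set(), [start_url], []
    while queue and len(seen) < max_links:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = requests.get(url, timeout=10).text
        # step 3: feed page text to the LLM and ask for structured output
        answer = ask_llm(f"{TASK}\n\nPage content:\n{html[:8000]}")
        try:
            results.append(json.loads(answer))
        except json.JSONDecodeError:
            pass  # model didn't return valid JSON; skip or retry
        # step 2: stay on the target domain by following same-domain links only
        queue += [l for l in re.findall(r'href="(https?://[^"]+)"', html)
                  if start_url.split("/")[2] in l]
    return results
```

For the structured-output step specifically, constrained decoding (JSON schema / grammar support in llama.cpp-based servers) is usually more reliable than hoping the model emits valid JSON.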
Local AI to transcribe videos? | 1 | Hey folks. Quick question. I have not worked with any multimodal stuff yet. Is there a good local model/interface to transcribe videos (or the audio tracks from videos?) I have something like 80 hours of video which I'd like to search for certain proper names. As you may understand from my other threads, I am not a developer, so I am more looking for a 'smart user' solution (can run scripts someone else wrote, navigate an interface, etc). I can strip the audio from the videos down to MP3 if I need to (but would be great to not have to). Thank you! | 2025-01-30T12:58:32 | https://www.reddit.com/r/LocalLLaMA/comments/1idmdlf/local_ai_to_transcribe_videos/ | Intelligent-Gift4519 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmdlf | false | null | t3_1idmdlf | /r/LocalLLaMA/comments/1idmdlf/local_ai_to_transcribe_videos/ | false | false | self | 1 | null |
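For a task like this, one well-trodden local route is the open-source openai-whisper package (pip install openai-whisper, with ffmpeg on the PATH). A minimal sketch, assuming the videos sit in a videos/ folder and the names to find are known; whisper decodes the audio track itself, so stripping to MP3 first is unnecessary.

```python
# Transcribe each video locally and grep the transcript for proper names.
import glob
import whisper

model = whisper.load_model("small")  # "medium"/"large" are slower but more accurate

for path in glob.glob("videos/*.mp4"):           # assumed folder layout
    result = model.transcribe(path)              # ffmpeg extracts the audio track
    text = result["text"]
    for name in ["Alice", "Bob"]:                # the proper names to search for (assumed)
        if name.lower() in text.lower():
            print(f"{name} mentioned in {path}")
    # per-segment timestamps are in result["segments"] if positions are needed
```

For 80 hours of footage, the faster-whisper reimplementation or a GPU build will cut runtime substantially.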
Model to train troubleshooting document | 2 | I have a bunch of troubleshooting documents and API documents, and i want to train a model to answer troubleshooting questions and api related questions. Some of the documents contain screenshots. Which model would be suitable for that kind of data? I’ll be running on 4070 Super 12G. | 2025-01-30T13:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/1idmh8o/model_to_train_troubleshooting_document/ | Confident-Mistake400 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmh8o | false | null | t3_1idmh8o | /r/LocalLLaMA/comments/1idmh8o/model_to_train_troubleshooting_document/ | false | false | self | 2 | null |
Will Quantum Computers make LLMs better? | 0 | I am a heavy LLM user, but I have a very superficial knowledge of how LLMs work. I think they use probability to predict what to say next. Quantum computers, from what I understand, can go through many different outcomes very quickly depending on the problem. Does this mean Quantum computers will be useful for LLMs? | 2025-01-30T13:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/1idmkaf/will_quantum_computers_make_llms_better/ | Mysterious_Comb9550 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmkaf | false | null | t3_1idmkaf | /r/LocalLLaMA/comments/1idmkaf/will_quantum_computers_make_llms_better/ | false | false | self | 0 | null |
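For context on the mechanism the poster describes, "using probability to predict what to say next" concretely means sampling the next token from a softmax over the model's output scores; a toy illustration with a three-token vocabulary:

```python
# Next-token prediction mechanically: logits -> softmax -> sample.
import torch

logits = torch.tensor([2.0, 1.0, 0.1])           # toy scores for a 3-token vocab
probs = torch.softmax(logits, dim=-1)            # -> tensor([0.6590, 0.2424, 0.0986])
next_token = torch.multinomial(probs, 1).item()  # sample the next token id
```

Whether quantum hardware speeds up the underlying matrix multiplications at useful scale is an open research question, not something current LLM stacks exploit.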
Meet Lumigator: Your Tool to Model Selection | 16 | 2025-01-30T13:09:09 | https://blog.mozilla.ai/lumigator-is-here-2/ | ab2377 | blog.mozilla.ai | 1970-01-01T00:00:00 | 0 | {} | 1idmktj | false | null | t3_1idmktj | /r/LocalLLaMA/comments/1idmktj/meet_lumigator_your_tool_to_model_selection/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'NM674NWJKaBnCL72_ZttpWM-PREHBzwJ4OFOUBS7QlM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/tV6ZWhwUBkpHY8BxkRsUtbte_afeAUNybUzSyDQKxKI.jpg?width=108&crop=smart&auto=webp&s=6c59a598cf0176c36fa61cf408186e14b17b3b57', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/tV6ZWhwUBkpHY8BxkRsUtbte_afeAUNybUzSyDQKxKI.jpg?width=216&crop=smart&auto=webp&s=f88c74995295c72c7cda1ee7853aafd77df662ac', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/tV6ZWhwUBkpHY8BxkRsUtbte_afeAUNybUzSyDQKxKI.jpg?width=320&crop=smart&auto=webp&s=0912a1b96cec981a6fbb88a73300755881ae60c6', 'width': 320}, {'height': 316, 'url': 'https://external-preview.redd.it/tV6ZWhwUBkpHY8BxkRsUtbte_afeAUNybUzSyDQKxKI.jpg?width=640&crop=smart&auto=webp&s=cd98f5b59267054dbf1db0c9cf7e0fbc03df623d', 'width': 640}, {'height': 474, 'url': 'https://external-preview.redd.it/tV6ZWhwUBkpHY8BxkRsUtbte_afeAUNybUzSyDQKxKI.jpg?width=960&crop=smart&auto=webp&s=46ec56e4a3efe931471160def4d38e0b32e7321b', 'width': 960}, {'height': 533, 'url': 'https://external-preview.redd.it/tV6ZWhwUBkpHY8BxkRsUtbte_afeAUNybUzSyDQKxKI.jpg?width=1080&crop=smart&auto=webp&s=761d3cf2b191724b8e0162eae1ef32b85a8666cf', 'width': 1080}], 'source': {'height': 593, 'url': 'https://external-preview.redd.it/tV6ZWhwUBkpHY8BxkRsUtbte_afeAUNybUzSyDQKxKI.jpg?auto=webp&s=3db380b49d9e7065e2dd4783b829acd56881a803', 'width': 1200}, 'variants': {}}]} |
What it's like to use DeepSeek with -200 social credit | 1 | 2025-01-30T13:15:01 | Content_Trouble_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idmork | false | null | t3_1idmork | /r/LocalLLaMA/comments/1idmork/what_its_like_to_use_deepseek_with_200_social/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'K-0m3Yf87wWU5LVyYYvFsR5KYxSrA_XCK5vCENHhC1c', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/ylz1c1wfs4ge1.png?width=108&crop=smart&auto=webp&s=0b844b5f46e0a715a14a2cd1f83dead31185ce0c', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/ylz1c1wfs4ge1.png?width=216&crop=smart&auto=webp&s=a63372ef7e959c370c3e9473157547dcb3ed5a88', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/ylz1c1wfs4ge1.png?width=320&crop=smart&auto=webp&s=f3f6c0bddc474985959fb5982db545b5f22f4390', 'width': 320}, {'height': 476, 'url': 'https://preview.redd.it/ylz1c1wfs4ge1.png?width=640&crop=smart&auto=webp&s=18217632e7b8c1bf1ef318d9c20678598bce8708', 'width': 640}, {'height': 714, 'url': 'https://preview.redd.it/ylz1c1wfs4ge1.png?width=960&crop=smart&auto=webp&s=d98d91f9468157bdfec1c91b36abc59f57aa1729', 'width': 960}, {'height': 803, 'url': 'https://preview.redd.it/ylz1c1wfs4ge1.png?width=1080&crop=smart&auto=webp&s=4616fcb66452269d806f0cfd69542d0608594e34', 'width': 1080}], 'source': {'height': 1277, 'url': 'https://preview.redd.it/ylz1c1wfs4ge1.png?auto=webp&s=b05cf7c8052b87d3201e06258dffa1fc49d9ab9a', 'width': 1716}, 'variants': {}}]} |
Memory allocation for MoE's. | 5 | Sup, so...
When loading a model exceeds your VRAM capacity, it spills into your regular RAM, creating a bottleneck since part of the active inference happens with data pulled from said RAM.
Since MoEs split up their cognitive work into internal specialists, wouldn't it make sense to let a model decide, before or during inference, which specialists to prioritize and swap into VRAM? (A rough sketch of the idea is below.)
Is that already a thing?
If not; wouldn't it help massively speed up inference on MoE's like R1, that could fit into ram for the bulk of it, and run its specialists on GPU memory? Those 37B of MoE would fit into higher end GPU setups, and depending on what tradeoff between intelligence and context length you need, you can quant your way to your optimal setup. | 2025-01-30T13:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/1idmr0n/memory_allocation_for_moes/ | GirthusThiccus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmr0n | false | null | t3_1idmr0n | /r/LocalLLaMA/comments/1idmr0n/memory_allocation_for_moes/ | false | false | self | 5 | null |
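A toy sketch of the swapping idea from the post above: an LRU cache that pages experts between system RAM and VRAM as the router requests them. Purely illustrative; real engines such as llama.cpp or vLLM manage weights very differently, and the class and names here are invented for the example.

```python
# LRU cache of MoE experts: keep the N hottest experts on the GPU,
# page the rest from system RAM on demand.
from collections import OrderedDict
import torch

class ExpertCache:
    def __init__(self, experts_cpu: dict[int, torch.nn.Module], vram_slots: int):
        self.cpu = experts_cpu            # all experts resident in system RAM
        self.gpu = OrderedDict()          # expert_id -> module on CUDA, in LRU order
        self.slots = vram_slots

    def get(self, expert_id: int) -> torch.nn.Module:
        if expert_id in self.gpu:
            self.gpu.move_to_end(expert_id)      # mark as recently used
            return self.gpu[expert_id]
        if len(self.gpu) >= self.slots:          # evict least-recently-used expert
            old_id, old = self.gpu.popitem(last=False)
            self.cpu[old_id] = old.to("cpu")
        self.gpu[expert_id] = self.cpu[expert_id].to("cuda")
        return self.gpu[expert_id]

# per token: the router picks top-k experts, the cache pulls them onto the GPU
# outputs = [cache.get(i)(hidden_state) for i in router_topk_ids]
```

The catch is that R1's router can pick different experts every token, so without locality in the routing pattern the PCIe transfers eat the speedup; that is why production engines prefer keeping shared/dense layers on GPU instead.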
best llm model for code review | 1 | [removed] | 2025-01-30T13:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/1idmtfl/best_llm_model_for_code_review/ | Hedi-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmtfl | false | null | t3_1idmtfl | /r/LocalLLaMA/comments/1idmtfl/best_llm_model_for_code_review/ | false | false | self | 1 | null |
why deepseek is not working today? | 1 | [removed] | 2025-01-30T13:25:30 | https://www.reddit.com/r/LocalLLaMA/comments/1idmvug/why_deepseek_is_not_working_today/ | Fit-Business-7912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmvug | false | null | t3_1idmvug | /r/LocalLLaMA/comments/1idmvug/why_deepseek_is_not_working_today/ | false | false | self | 1 | null |
Running FULL Deepseek-R1 671B 2.51-bit quants locally at 7token/s setup | 1 | [removed] | 2025-01-30T13:31:33 | https://www.reddit.com/r/LocalLLaMA/comments/1idmzzx/running_full_deepseekr1_671b_251bit_quants/ | lyc8503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idmzzx | false | null | t3_1idmzzx | /r/LocalLLaMA/comments/1idmzzx/running_full_deepseekr1_671b_251bit_quants/ | false | false | 1 | null |
CPU-only DeepSeek-R1 671B local infer at 7token/s (2.51-bit quant) | 1 | [removed] | 2025-01-30T13:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/1idn5oz/cpuonly_deepseekr1_671b_local_infer_at_7tokens/ | lyc8503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idn5oz | false | null | t3_1idn5oz | /r/LocalLLaMA/comments/1idn5oz/cpuonly_deepseekr1_671b_local_infer_at_7tokens/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uTH7CSuJT7xESL5xe7-CfxFMSg_JTf4Nma4bQtm-hmo', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=108&crop=smart&auto=webp&s=e3cd0740d84b19d7a2f89abe699010990f86feba', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=216&crop=smart&auto=webp&s=8f5f44e359aeedaabbccbc32ce33767386f46c66', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=320&crop=smart&auto=webp&s=2c7283aa3b6fb2f53d23b42ab2a81687cd0cb89d', 'width': 320}, {'height': 492, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=640&crop=smart&auto=webp&s=eae05015ec695280f7aec2dd5e5959755977ca83', 'width': 640}, {'height': 739, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=960&crop=smart&auto=webp&s=961c74461a7f004a24e74fcfbd31f7ffd5328f85', 'width': 960}, {'height': 831, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=1080&crop=smart&auto=webp&s=47c4e7fa92f45dddf33f34f3105efa57e20c158d', 'width': 1080}], 'source': {'height': 1107, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?auto=webp&s=06a6d7e68b81b39bb13f7081ea313fe02f2a2edf', 'width': 1438}, 'variants': {}}]} |
Bing-style meltdowns in Open-Source projects? | 6 | Do we know what ultimately caused Bing/Sydney to have such seemingly emotional breakdowns back in the early days of Bing? Wasn't it supposed to be just a GPT wrapper/base? Did they fine-tune it with bad data (maybe leftover Tay interaction data)? Maybe RLHF over-rewarding more emotional-sounding content? I don't believe any Microsoft papers addressed the issue -- have we observed anything else remotely like that in the open-source models? Really curious about what was being attempted and didn't work out.
Of course, one can now induce such behavior on purpose as RP via prompting or few-shot examples trivially, but since back then it was accidental, I'm just curious whether there were bumps in the road that can be reproduced/studied. | 2025-01-30T13:39:52 | https://www.reddit.com/r/LocalLLaMA/comments/1idn5x5/bingstyle_meltdowns_in_opensource_projects/ | Legumbrero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idn5x5 | false | null | t3_1idn5x5 | /r/LocalLLaMA/comments/1idn5x5/bingstyle_meltdowns_in_opensource_projects/ | false | false | self | 6 | null
Slow on local? | 1 | [removed] | 2025-01-30T13:41:04 | MangyanCoding | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idn6u0 | false | null | t3_1idn6u0 | /r/LocalLLaMA/comments/1idn6u0/slow_on_local/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'PqoTZTSOM7VEjLM11ng7Up1ZJdEX_9p7E7jDrapATYc', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/1qob5p68x4ge1.png?width=108&crop=smart&auto=webp&s=6128c5881f8b30626555069f1d549d3f9b45494a', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/1qob5p68x4ge1.png?width=216&crop=smart&auto=webp&s=d1ad3c9f315dbfac0e666e0f9c7bc0c8e331436e', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/1qob5p68x4ge1.png?width=320&crop=smart&auto=webp&s=667bb887084c10204e3110d474890f0deecf04ab', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/1qob5p68x4ge1.png?width=640&crop=smart&auto=webp&s=c9f847c6065161ede3b4e9929630aeb21b6b5f0e', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/1qob5p68x4ge1.png?width=960&crop=smart&auto=webp&s=a2b26cbe5235e92cf26faa4f5eef05137018950c', 'width': 960}, {'height': 811, 'url': 'https://preview.redd.it/1qob5p68x4ge1.png?width=1080&crop=smart&auto=webp&s=178bf34c71ec5c8b7455645df24fd699ea9e2af3', 'width': 1080}], 'source': {'height': 811, 'url': 'https://preview.redd.it/1qob5p68x4ge1.png?auto=webp&s=6d3a39283de8f09fcd478fb8a8a8135178e643c9', 'width': 1080}, 'variants': {}}]} |
Chinese AI Chatbot (DeepSeek or Qwen)! Are They Really Worth Their Claims? | 1 | 2025-01-30T13:41:29 | https://tweaklibrary.com/chinese-ai-chatbot-deepseek-vs-qwen-vs-chatgpt/ | ankush011 | tweaklibrary.com | 1970-01-01T00:00:00 | 0 | {} | 1idn75o | false | null | t3_1idn75o | /r/LocalLLaMA/comments/1idn75o/chinese_ai_chatbot_deepseek_or_qwen_are_they/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'qYs-9ptQtqNS94DP2aNv6dEmVTgW8h1qSAWQVPIKpv0', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/9OcCpZUoEYyQpLWaY1XAyltbaaojggqR3qr4X4vxnIE.jpg?width=108&crop=smart&auto=webp&s=cb2b6d0b91e91ed318a9b95680c6db44c04c7bae', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/9OcCpZUoEYyQpLWaY1XAyltbaaojggqR3qr4X4vxnIE.jpg?width=216&crop=smart&auto=webp&s=02c8aea4dceaf3e27395f0452447d50d01f17382', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/9OcCpZUoEYyQpLWaY1XAyltbaaojggqR3qr4X4vxnIE.jpg?width=320&crop=smart&auto=webp&s=02d0bfbf1eff1f72db1d7dbcad6704b9bd1739df', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/9OcCpZUoEYyQpLWaY1XAyltbaaojggqR3qr4X4vxnIE.jpg?width=640&crop=smart&auto=webp&s=0ad3f9c0ac4d2ed02b5173c6213be61f3d822d17', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/9OcCpZUoEYyQpLWaY1XAyltbaaojggqR3qr4X4vxnIE.jpg?width=960&crop=smart&auto=webp&s=720fef40235f895ad637ffbdd464d9f405c159f7', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/9OcCpZUoEYyQpLWaY1XAyltbaaojggqR3qr4X4vxnIE.jpg?width=1080&crop=smart&auto=webp&s=b0cae6e27ca62fd37624f3391fc85c49f536b062', 'width': 1080}], 'source': {'height': 750, 'url': 'https://external-preview.redd.it/9OcCpZUoEYyQpLWaY1XAyltbaaojggqR3qr4X4vxnIE.jpg?auto=webp&s=a2bf3e04eaa6549ae42b2cc2b247da2d1ccc8200', 'width': 1200}, 'variants': {}}]} |
CPU-only DeepSeek-R1 671B local infer at 7token/s (2.51-bit quant) | 1 | [removed] | 2025-01-30T13:41:37 | https://www.reddit.com/r/LocalLLaMA/comments/1idn79c/cpuonly_deepseekr1_671b_local_infer_at_7tokens/ | lyc8503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idn79c | false | null | t3_1idn79c | /r/LocalLLaMA/comments/1idn79c/cpuonly_deepseekr1_671b_local_infer_at_7tokens/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uTH7CSuJT7xESL5xe7-CfxFMSg_JTf4Nma4bQtm-hmo', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=108&crop=smart&auto=webp&s=e3cd0740d84b19d7a2f89abe699010990f86feba', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=216&crop=smart&auto=webp&s=8f5f44e359aeedaabbccbc32ce33767386f46c66', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=320&crop=smart&auto=webp&s=2c7283aa3b6fb2f53d23b42ab2a81687cd0cb89d', 'width': 320}, {'height': 492, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=640&crop=smart&auto=webp&s=eae05015ec695280f7aec2dd5e5959755977ca83', 'width': 640}, {'height': 739, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=960&crop=smart&auto=webp&s=961c74461a7f004a24e74fcfbd31f7ffd5328f85', 'width': 960}, {'height': 831, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=1080&crop=smart&auto=webp&s=47c4e7fa92f45dddf33f34f3105efa57e20c158d', 'width': 1080}], 'source': {'height': 1107, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?auto=webp&s=06a6d7e68b81b39bb13f7081ea313fe02f2a2edf', 'width': 1438}, 'variants': {}}]} |
Deepseek r1 distilled with tools support, when? | 3 | It would be awesome if these distilled models supported tools. Anyone know if they're going to do this? | 2025-01-30T13:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1idn9hc/deepseek_r1_distilled_with_tools_support_when/ | cypherbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idn9hc | false | null | t3_1idn9hc | /r/LocalLLaMA/comments/1idn9hc/deepseek_r1_distilled_with_tools_support_when/ | false | false | self | 3 | null
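"Tool support" in practice means the model was trained to emit structured tool_calls and the server accepts an OpenAI-style tools schema. A hedged sketch of what that request looks like against a local endpoint; the URL and model tag here are assumptions:

```python
# OpenAI-style tool-calling request against a local OpenAI-compatible server.
import requests

payload = {
    "model": "deepseek-r1:14b",  # assumed model tag
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
resp = requests.post("http://localhost:11434/v1/chat/completions", json=payload)
# a tool-capable model replies with choices[0].message.tool_calls instead of plain text
print(resp.json()["choices"][0]["message"])
```

Models without tool training will just answer in prose here, which is exactly the gap the post is asking about.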