title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
In spite of meeting requirements, LLM won't run on GPU. | 2 | I am having difficulty running an LLM (DeepSeek) on my GPU; it will only run on my CPU. I have downloaded the proper drivers, I have the CUDA toolkit installed, and I have started a Docker container running open-webui, tagged with --gpu 1, which is my 4090 (GPU 0 is my K-series i9, so I don't want to use 'all'). It agrees I have a 4090 and will run the n-body sample on all of my ~16,000 CUDA cores in 77 ms, so I must be really close. I have the WSL 2 engine checked in Docker's options. I have done this all from scratch after having all the drivers and CUDA toolkit 12.8 installed. I can successfully run the 70B model on my system through OWUI, but it is obviously too much for a 12900K. It is evident it only runs on the CPU, as it's slow and heats up the CPU while the GPU sits idle. | 2025-01-28T17:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ic75h8/in_spite_of_meeting_requirements_llm_wont_run_on/ | IamTetra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic75h8 | false | null | t3_1ic75h8 | /r/LocalLLaMA/comments/1ic75h8/in_spite_of_meeting_requirements_llm_wont_run_on/ | false | false | self | 2 | null |
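A quick way to narrow this down is to check CUDA visibility from inside the container itself; a minimal sketch, assuming PyTorch is available in the container (note that Docker's flag is spelled `--gpus`, e.g. `--gpus '"device=1"'`, and the NVIDIA container runtime typically enumerates only NVIDIA GPUs, so the 4090 may actually show up as device 0 inside the container):

```python
# gpu_check.py - run inside the container to verify which CUDA devices it can see.
import torch

print("CUDA available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"Device {i}:", torch.cuda.get_device_name(i))
```

If the device count here is 0, the problem is GPU passthrough to the container rather than the model or open-webui settings.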
Deepseek vl2 | 1 | [removed] | 2025-01-28T17:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ic78el/deepseek_vl2/ | TheKnowBuddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic78el | false | null | t3_1ic78el | /r/LocalLLaMA/comments/1ic78el/deepseek_vl2/ | false | false | self | 1 | null |
What are your views on I-JEPA & V-JEPA by Meta? | 1 | [removed] | 2025-01-28T17:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7ffz/what_are_your_views_on_ijepa_vjepa_by_meta/ | Heavy_Room_9203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7ffz | false | null | t3_1ic7ffz | /r/LocalLLaMA/comments/1ic7ffz/what_are_your_views_on_ijepa_vjepa_by_meta/ | false | false | self | 1 | null |
Everyone and their mother knows about DeepSeek | 528 | Everyone I interact with talks about DeepSeek now: how it's scary, how it's better than ChatGPT, how it's open-source...
But the fact is, 99.9% of these people (myself included) have no way to run the 671B model (which is the one actually in the hype) in a manner that benefits from it being open source. I mean, just using their front end is no different from using the free version of ChatGPT.
Heck, I hear news reporters talking about how great it is because it works freakishly well and it is open source. But in reality, it's just open weights; no one has yet replicated what they did.
But why all the hype? Don't you feel this is too much? Honestly, it starts to feel a bit sketchy. | 2025-01-28T17:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7hts/everyone_and_their_mother_knows_about_deepseek/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7hts | false | null | t3_1ic7hts | /r/LocalLLaMA/comments/1ic7hts/everyone_and_their_mother_knows_about_deepseek/ | false | false | self | 528 | null |
Suggested changes to spec | 1 | [removed] | 2025-01-28T17:51:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7iha/suggested_changes_to_spec/ | Robainhood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7iha | false | null | t3_1ic7iha | /r/LocalLLaMA/comments/1ic7iha/suggested_changes_to_spec/ | false | false | self | 1 | null |
Community for sharing DeepSeek R1 hosting | 1 | [removed] | 2025-01-28T17:52:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7jd1/community_for_sharing_deepseek_r1_hosting/ | EternalOptimister | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7jd1 | false | null | t3_1ic7jd1 | /r/LocalLLaMA/comments/1ic7jd1/community_for_sharing_deepseek_r1_hosting/ | false | false | self | 1 | null |
Can anyone explain to me am trying to work with an API but don't get this | 1 | [removed] | 2025-01-28T17:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7ku5/can_anyone_explain_to_me_am_trying_to_work_with/ | North-Glove-3057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7ku5 | false | null | t3_1ic7ku5 | /r/LocalLLaMA/comments/1ic7ku5/can_anyone_explain_to_me_am_trying_to_work_with/ | false | false | self | 1 | null |
Community for sharing DeepSeek R1 hosting cost | 1 | [removed] | 2025-01-28T17:54:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7lci/community_for_sharing_deepseek_r1_hosting_cost/ | EternalOptimister | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7lci | false | null | t3_1ic7lci | /r/LocalLLaMA/comments/1ic7lci/community_for_sharing_deepseek_r1_hosting_cost/ | false | false | self | 1 | null |
Sql agents and deepseek | 1 | [removed] | 2025-01-28T17:55:30 | EffectiveHat5158 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic7m5h | false | null | t3_1ic7m5h | /r/LocalLLaMA/comments/1ic7m5h/sql_agents_and_deepseek/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-VKGM6T20JIC1Nk2X2MUZOu2Cue2msvjrCZzs_fWKaA', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/ivqebztswrfe1.jpeg?width=108&crop=smart&auto=webp&s=74b725e04f78ecec02e1361f8d801ed9b9b3f021', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/ivqebztswrfe1.jpeg?width=216&crop=smart&auto=webp&s=fbbf34eafd2b3d0bcd41e4e72a3076b7eaa77523', 'width': 216}, {'height': 196, 'url': 'https://preview.redd.it/ivqebztswrfe1.jpeg?width=320&crop=smart&auto=webp&s=61c2d45561cefcd135b131d3e00d9645004d96ff', 'width': 320}, {'height': 393, 'url': 'https://preview.redd.it/ivqebztswrfe1.jpeg?width=640&crop=smart&auto=webp&s=3d276ffaedbec78e421df53f4a4a413bec1ba57b', 'width': 640}], 'source': {'height': 434, 'url': 'https://preview.redd.it/ivqebztswrfe1.jpeg?auto=webp&s=2be41bbbdcb78f452c5afdb2eee9b780a4eb1ccc', 'width': 706}, 'variants': {}}]} |
||
MoE question | 1 | [removed] | 2025-01-28T17:55:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7me1/moe_question/ | mapppo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7me1 | false | null | t3_1ic7me1 | /r/LocalLLaMA/comments/1ic7me1/moe_question/ | false | false | self | 1 | null |
Developed Order Tracking AI Agent in Python using LLaMa. | 3 | 2025-01-28T17:56:17 | Fun_Camel2488 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic7mtp | false | null | t3_1ic7mtp | /r/LocalLLaMA/comments/1ic7mtp/developed_order_tracking_ai_agent_in_python_using/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'AxGuipr3znp_IQqjYJS0YY7eH54wZyyIMgD2dztkdYI', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/6na1fofpwrfe1.png?width=108&crop=smart&auto=webp&s=bc1bb13a35d25087a1aeb7018f86be82cbb9f89b', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/6na1fofpwrfe1.png?width=216&crop=smart&auto=webp&s=114228a998f9a8a8da7aa3ff8abc32c1034d08b5', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/6na1fofpwrfe1.png?width=320&crop=smart&auto=webp&s=0d9020a88dc58325eef6a896dd20df5392a06e6b', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/6na1fofpwrfe1.png?width=640&crop=smart&auto=webp&s=2982b01b446690d0275522015be88eaf45682a9a', 'width': 640}, {'height': 518, 'url': 'https://preview.redd.it/6na1fofpwrfe1.png?width=960&crop=smart&auto=webp&s=b68a6df1031c486efc14e65636bd9326c096432b', 'width': 960}, {'height': 583, 'url': 'https://preview.redd.it/6na1fofpwrfe1.png?width=1080&crop=smart&auto=webp&s=4da020762653ebea9688512941bfed4418e7d043', 'width': 1080}], 'source': {'height': 830, 'url': 'https://preview.redd.it/6na1fofpwrfe1.png?auto=webp&s=c32c6e3478938f344d2c199f7de0561f099d2454', 'width': 1536}, 'variants': {}}]} |
|||
SQL agent and DeepSeek | 0 | What are the smallest and most accurate models to run as a SQL agent? | 2025-01-28T17:57:14 | Apprehensive_Dog1267 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic7nol | false | null | t3_1ic7nol | /r/LocalLLaMA/comments/1ic7nol/sql_agent_and_deepseek/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'Fi7Qoafzy0FmpWHUjYWeHt-21oIWqBkQztKwYcypFYA', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/6lz1pvz3xrfe1.jpeg?width=108&crop=smart&auto=webp&s=afd37650639b68b3dcbe96e7ca706f648cb7ec3e', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/6lz1pvz3xrfe1.jpeg?width=216&crop=smart&auto=webp&s=f95adc5878dae80b7a633fbdf875d8df9c066083', 'width': 216}, {'height': 196, 'url': 'https://preview.redd.it/6lz1pvz3xrfe1.jpeg?width=320&crop=smart&auto=webp&s=b86017494ad87bd0c24279182a5a0da69649b24c', 'width': 320}, {'height': 393, 'url': 'https://preview.redd.it/6lz1pvz3xrfe1.jpeg?width=640&crop=smart&auto=webp&s=e7beffceb2b54237c81565f79ab574cbae11406a', 'width': 640}], 'source': {'height': 434, 'url': 'https://preview.redd.it/6lz1pvz3xrfe1.jpeg?auto=webp&s=9129c029c422cf9dc74d7b2ffca821654b2a3c40', 'width': 706}, 'variants': {}}]}
||
Fine tunning a LLM - searching for a specialized one or just using the general llama3.2? | 1 | [removed] | 2025-01-28T17:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7nuz/fine_tunning_a_llm_searching_for_a_specialized/ | After_Ad_6881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7nuz | false | null | t3_1ic7nuz | /r/LocalLLaMA/comments/1ic7nuz/fine_tunning_a_llm_searching_for_a_specialized/ | false | false | self | 1 | null |
Qwen2.5-Max just released | 1 | 2025-01-28T17:57:46 | msatrya | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic7o5a | false | null | t3_1ic7o5a | /r/LocalLLaMA/comments/1ic7o5a/qwen25max_just_released/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'dTbhwZdS9s_9X7u0iWDmgBeH0sPaqftvsJPIrCbhLIE', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/tf960xl5xrfe1.png?width=108&crop=smart&auto=webp&s=eb006225596cc731ebcb03711dc2ad62e2c1ceba', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/tf960xl5xrfe1.png?width=216&crop=smart&auto=webp&s=22b9b0e41eda17ad47df18346043c78b2dc9a33c', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/tf960xl5xrfe1.png?width=320&crop=smart&auto=webp&s=fabc155360b3e348ca71f5beab358d6a75a2d3e9', 'width': 320}, {'height': 364, 'url': 'https://preview.redd.it/tf960xl5xrfe1.png?width=640&crop=smart&auto=webp&s=affbfd483cbd1290c0d01dc333a0e26a6eee3953', 'width': 640}], 'source': {'height': 512, 'url': 'https://preview.redd.it/tf960xl5xrfe1.png?auto=webp&s=04654d12cb4baf260f6262547d3f7c5aca006357', 'width': 900}, 'variants': {}}]} |
|||
Access DeepSeek R1 using any provider from this iOS app | 0 | For those who want to access DeepSeek R1 from iOS: you can use Pal Chat, an iOS app that lets you provide your own API key. It supports all the major providers: DeepSeek, Groq, OpenRouter, TogetherAI.
I'm sure many of you already use this, but as a reminder, this app collects **no data** and is privacy-first, so all the data stays between you and the provider.
App Store Link: [https://apps.apple.com/us/app/pal-chat-ai-chat-client/id6447545085?platform=iphone](https://apps.apple.com/us/app/pal-chat-ai-chat-client/id6447545085?platform=iphone) | 2025-01-28T18:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7rys/access_deepseek_r1_using_any_provider_from_this/ | Applemoi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7rys | false | null | t3_1ic7rys | /r/LocalLLaMA/comments/1ic7rys/access_deepseek_r1_using_any_provider_from_this/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '5VhgRTZCmnkH3tMcyCf8ozuK8N83qwWjZ18i079wCwo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/neXtnIylzZvOD1FP0DHFmorni45OWQbT1HjCK5L-ON0.jpg?width=108&crop=smart&auto=webp&s=a586a48e9c61ca84a1d7712441b84b64ccdaafdd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/neXtnIylzZvOD1FP0DHFmorni45OWQbT1HjCK5L-ON0.jpg?width=216&crop=smart&auto=webp&s=6e39f53f78dd66de6c9732051fbcec74bdc3b01d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/neXtnIylzZvOD1FP0DHFmorni45OWQbT1HjCK5L-ON0.jpg?width=320&crop=smart&auto=webp&s=3d8c3b67c7db536ac2c45500c58fe9e47e9a25df', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/neXtnIylzZvOD1FP0DHFmorni45OWQbT1HjCK5L-ON0.jpg?width=640&crop=smart&auto=webp&s=764f921324875fc68c849708ec9edb46a7cc5f0e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/neXtnIylzZvOD1FP0DHFmorni45OWQbT1HjCK5L-ON0.jpg?width=960&crop=smart&auto=webp&s=c4f9f1de559727b54f5a9f1f5f2355c38e269c04', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/neXtnIylzZvOD1FP0DHFmorni45OWQbT1HjCK5L-ON0.jpg?width=1080&crop=smart&auto=webp&s=8374bf4072a47fea44a77c121fcb6118a5b82fa9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/neXtnIylzZvOD1FP0DHFmorni45OWQbT1HjCK5L-ON0.jpg?auto=webp&s=8dc5786ba44be47be9a5fb82666a024b2a07d672', 'width': 1200}, 'variants': {}}]} |
Why is my LLM so slow? | 1 | [removed] | 2025-01-28T18:02:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7s6o/why_is_my_llm_so_slow/ | AntonioJosh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7s6o | false | null | t3_1ic7s6o | /r/LocalLLaMA/comments/1ic7s6o/why_is_my_llm_so_slow/ | false | false | self | 1 | null |
What to run on my Dell 7750, 100 GB RAM, 3080 16GB VRAM | 3 | Looking to run a model to put personal things into that require correct answers: budgets, taxes, business plans, etc. Time is not a huge deal; if it takes 10 minutes to generate a report or PDF, that's fine. I'd like to be able to upload documents, but I don't know if there are local models that allow that. Any help would be nice. Oh, and if I could get it to write an entire paper or book on a topic (30+ pages), that would be awesome. | 2025-01-28T18:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7ywx/what_to_run_on_my_dell_7750_100_gigs_ram_3080/ | Inevitable-Syrup8232 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7ywx | false | null | t3_1ic7ywx | /r/LocalLLaMA/comments/1ic7ywx/what_to_run_on_my_dell_7750_100_gigs_ram_3080/ | false | false | self | 3 | null |
What Size DeepSeek Model Works Comparatively to the Best GPT Version? | 0 | We're getting closer to having the ultimate local LLMs, it seems.
For something like DeepSeek, what model size performs roughly on par with the best GPT models? How much VRAM would be required to run it? I'm looking forward to a high-VRAM Windows machine being able to run this locally. (A rough back-of-the-envelope VRAM estimate follows below.)
Thanks! | 2025-01-28T18:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ic7zp7/what_size_deepseek_model_works_comparatively_to/ | DarkAce17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic7zp7 | false | null | t3_1ic7zp7 | /r/LocalLLaMA/comments/1ic7zp7/what_size_deepseek_model_works_comparatively_to/ | false | false | self | 0 | null |
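As a rough rule of thumb (a back-of-the-envelope sketch with a guessed overhead factor, not an exact figure), weight memory scales with parameter count times bits per weight, plus overhead for the KV cache and runtime buffers:

```python
# Rough VRAM estimate for running a quantized model; the overhead factor is a guess.
def est_vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weights_gb = params_b * bits_per_weight / 8  # billions of params * bytes/param = GB
    return weights_gb * overhead

for name, p, bits in [("R1 distill 32B @ Q4", 32, 4.5), ("R1 distill 70B @ Q4", 70, 4.5)]:
    print(f"{name}: ~{est_vram_gb(p, bits):.0f} GB")
```

By this estimate, a 4-bit 32B distill needs around 20 GB and a 70B distill around 45 GB, which is why the distills, not the full 671B model, are what fit on consumer hardware.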
Complete hardware + software setup for running Deepseek-R1 Q8 locally. | 6 | 2025-01-28T18:20:10 | https://x.com/carrigmat/status/1884244369907278106 | Marha01 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1ic882i | false | null | t3_1ic882i | /r/LocalLLaMA/comments/1ic882i/complete_hardware_software_setup_for_running/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'OqAvtQ4tlA8vKt4R_1outxRodFTo7HM0fblhK0y5vrk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?width=108&crop=smart&auto=webp&s=8a480083ca56e1cbe810b428889ead7407dc79b0', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?auto=webp&s=ac0c5a4567d2d1c72fdb480636106815d2b6b352', 'width': 200}, 'variants': {}}]} |
||
I benchmarked 25 models at 16GB, 6.5GB, and 3.5GB sizes to find out whether a large model with smaller quant is better than a small model with bigger quant | 1 | 2025-01-28T18:21:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8949/i_benchmarked_25_models_at_16gb_65gb_and_35gb/ | SlayerKingFTW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8949 | false | null | t3_1ic8949 | /r/LocalLLaMA/comments/1ic8949/i_benchmarked_25_models_at_16gb_65gb_and_35gb/ | false | false | 1 | null |
||
Asking for recommendation on Deepseek model for 9800X, RTX 4080 super, and 32 GB RAM | 1 | [removed] | 2025-01-28T18:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8bfd/asking_for_recommendation_on_deepseek_model_for/ | Matrix_030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8bfd | false | null | t3_1ic8bfd | /r/LocalLLaMA/comments/1ic8bfd/asking_for_recommendation_on_deepseek_model_for/ | false | false | self | 1 | null |
$6,000 computer to run Deepseek R1 670B Q8 locally at 6-8 tokens/sec | 469 | I just saw this on X/Twitter: a tower PC with 2 AMD EPYC CPUs and 24 x 32GB DDR5 RDIMMs. No GPUs. 400 W power consumption. (A rough software-side sketch follows below.)
> Complete hardware + software setup for running Deepseek-R1 locally. The actual model, no distillations, and Q8 quantization for full quality. Total cost, $6,000.
https://x.com/carrigmat/status/1884244369907278106 | 2025-01-28T18:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8cjf/6000_computer_to_run_deepseek_r1_670b_q8_locally/ | MoltenBoron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8cjf | false | null | t3_1ic8cjf | /r/LocalLLaMA/comments/1ic8cjf/6000_computer_to_run_deepseek_r1_670b_q8_locally/ | false | false | self | 469 | {'enabled': False, 'images': [{'id': 'OqAvtQ4tlA8vKt4R_1outxRodFTo7HM0fblhK0y5vrk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?width=108&crop=smart&auto=webp&s=8a480083ca56e1cbe810b428889ead7407dc79b0', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?auto=webp&s=ac0c5a4567d2d1c72fdb480636106815d2b6b352', 'width': 200}, 'variants': {}}]} |
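On the software side, a minimal llama-cpp-python sketch of this kind of CPU-only setup (the shard path, context size, and thread count are illustrative assumptions, not the exact configuration from the tweet, which uses llama.cpp directly):

```python
# CPU-only load of a sharded Q8 GGUF with llama-cpp-python; paths are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Q8_0/DeepSeek-R1-Q8_0-00001-of-00015.gguf",
    n_gpu_layers=0,   # no GPUs in this build; everything stays in system RAM
    n_ctx=8192,
    n_threads=64,     # tune to the physical core count of the dual-EPYC box
)

out = llm("Explain mixture-of-experts routing in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```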
I Benchmarked 25 models at 16GB, 6.5GB, and 3.5GB sizes to find out whether a large model with smaller quant is better than a small model with bigger quant | 1 | 2025-01-28T18:26:43 | SlayerKingFTW | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic8dvw | false | null | t3_1ic8dvw | /r/LocalLLaMA/comments/1ic8dvw/i_benchmarked_25_models_at_16gb_65gb_and_35gb/ | false | false | 1 | {'enabled': True, 'images': [{'id': '5WY-7a7oeuOh1SoZQLpWKfVQWv2SC4COubaYgXNatpM', 'resolutions': [{'height': 119, 'url': 'https://preview.redd.it/jpa7a1d82sfe1.png?width=108&crop=smart&auto=webp&s=bd393bb49fedacf5ddb46db0f3761f3b2d511bb2', 'width': 108}, {'height': 238, 'url': 'https://preview.redd.it/jpa7a1d82sfe1.png?width=216&crop=smart&auto=webp&s=ae1e677355ceab88f851c8eec6089c693bcff632', 'width': 216}, {'height': 353, 'url': 'https://preview.redd.it/jpa7a1d82sfe1.png?width=320&crop=smart&auto=webp&s=9bca89cb87accad642481a500be09f10c77d9980', 'width': 320}, {'height': 706, 'url': 'https://preview.redd.it/jpa7a1d82sfe1.png?width=640&crop=smart&auto=webp&s=392ae35677769fa84b94460567b11700bb035063', 'width': 640}, {'height': 1059, 'url': 'https://preview.redd.it/jpa7a1d82sfe1.png?width=960&crop=smart&auto=webp&s=7b4930027684c80e9d01860a6f1a2af8e41653d7', 'width': 960}, {'height': 1191, 'url': 'https://preview.redd.it/jpa7a1d82sfe1.png?width=1080&crop=smart&auto=webp&s=51ee6893e2ccabcd59c3aec457945b0db3f3f87c', 'width': 1080}], 'source': {'height': 4719, 'url': 'https://preview.redd.it/jpa7a1d82sfe1.png?auto=webp&s=dd8b449cbee74532e4242c821cb5bc0e1f53e88a', 'width': 4276}, 'variants': {}}]} |
|||
gpt3 is here! | 1 | 2025-01-28T18:31:06 | UnlikelyLeave471 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic8hu9 | false | null | t3_1ic8hu9 | /r/LocalLLaMA/comments/1ic8hu9/gpt3_is_here/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'sBZ9B-5NzL3ghnzPVNEINn9LRIU9TY3eUSBif9Cu8nI', 'resolutions': [{'height': 16, 'url': 'https://preview.redd.it/u0qt0gd33sfe1.png?width=108&crop=smart&auto=webp&s=c91672354c124a240aeeddf71603ba84d96b609f', 'width': 108}, {'height': 33, 'url': 'https://preview.redd.it/u0qt0gd33sfe1.png?width=216&crop=smart&auto=webp&s=caccae9aa50e1db2eb25db050c1dc15c4949f241', 'width': 216}, {'height': 50, 'url': 'https://preview.redd.it/u0qt0gd33sfe1.png?width=320&crop=smart&auto=webp&s=e33652fa47955bc4846f215a243524ccc14a336e', 'width': 320}, {'height': 100, 'url': 'https://preview.redd.it/u0qt0gd33sfe1.png?width=640&crop=smart&auto=webp&s=d0dbef1f263a7daf169c29a22ab34f60f85780ee', 'width': 640}], 'source': {'height': 138, 'url': 'https://preview.redd.it/u0qt0gd33sfe1.png?auto=webp&s=bc6e17f76eb1569a903214edf3399d9942c77dc6', 'width': 878}, 'variants': {}}]} |
|||
gpt3 is here! | 1 | 2025-01-28T18:32:21 | UnlikelyLeave471 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic8iv9 | false | null | t3_1ic8iv9 | /r/LocalLLaMA/comments/1ic8iv9/gpt3_is_here/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'vwRquyiP2IEVD5Xq5-2Q0caJ3g_dCuMMHbnrYakKr7Y', 'resolutions': [{'height': 16, 'url': 'https://preview.redd.it/m0vqsbcd3sfe1.png?width=108&crop=smart&auto=webp&s=c30602a04ccb8318f445caa3e7ca07694ab6eecf', 'width': 108}, {'height': 33, 'url': 'https://preview.redd.it/m0vqsbcd3sfe1.png?width=216&crop=smart&auto=webp&s=b469fbecbe29b10164f4f632cb81881890566c18', 'width': 216}, {'height': 50, 'url': 'https://preview.redd.it/m0vqsbcd3sfe1.png?width=320&crop=smart&auto=webp&s=df8e0ed272fcf8c5c7e6b00527dbdc4c13855207', 'width': 320}, {'height': 100, 'url': 'https://preview.redd.it/m0vqsbcd3sfe1.png?width=640&crop=smart&auto=webp&s=dfef978a58ff87c56318a986f938a2e41a071586', 'width': 640}], 'source': {'height': 138, 'url': 'https://preview.redd.it/m0vqsbcd3sfe1.png?auto=webp&s=d8642c8ae900dd3a777a152fb22b4c611f33f3b4', 'width': 878}, 'variants': {}}]} |
|||
tables have turned ig | 1 | 2025-01-28T18:33:32 | BoringAd6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic8jy5 | false | null | t3_1ic8jy5 | /r/LocalLLaMA/comments/1ic8jy5/tables_have_turned_ig/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Im-w87BUHYjPF2rYqDsiRweTaf6pJvEMclK18ZcBTf4', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/he3cf9yk3sfe1.png?width=108&crop=smart&auto=webp&s=8661e203ad66f6dd7725879d595e66f725bcae9a', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/he3cf9yk3sfe1.png?width=216&crop=smart&auto=webp&s=b6e2e179d9eb6f08c39e085412f36da91a832807', 'width': 216}, {'height': 345, 'url': 'https://preview.redd.it/he3cf9yk3sfe1.png?width=320&crop=smart&auto=webp&s=e1e0de351c2a92b74c1cfaa1f23eab8e76acd203', 'width': 320}, {'height': 691, 'url': 'https://preview.redd.it/he3cf9yk3sfe1.png?width=640&crop=smart&auto=webp&s=df1dc4bc06c1ec6edb645dc1a8f2fdb9be534273', 'width': 640}, {'height': 1037, 'url': 'https://preview.redd.it/he3cf9yk3sfe1.png?width=960&crop=smart&auto=webp&s=212183b52384b042a4fe99a42415adedd2cbb3c5', 'width': 960}, {'height': 1167, 'url': 'https://preview.redd.it/he3cf9yk3sfe1.png?width=1080&crop=smart&auto=webp&s=ddff9d98ded4b06d096afcfa95211ac4a55c17d3', 'width': 1080}], 'source': {'height': 1167, 'url': 'https://preview.redd.it/he3cf9yk3sfe1.png?auto=webp&s=b6bdf0457d450aa12d6fa1a6d660f7bd71f2550d', 'width': 1080}, 'variants': {}}]} |
|||
What if releasing R1 is a 4D chess move by a Quant firm? | 123 | Hear me out: quant firms are known to analyze every detail possible in order to predict future stock prices. I am talking about using satellites to observe how many cars come off Tesla factory lots, divorce rates, how many times you flush your toilet, and other ridiculous details. Now, DeepSeek is owned by such a quant firm, and I find it impossible to believe that they did not at least have some idea of what R1's release could do to the stock market. So what if, before releasing R1, they bought a lot of put options on NVDA, and then by releasing the model they crashed Nvidia's stock, netting a couple billion USD?
These people might be 4 parallel dimensions ahead of us lol. | 2025-01-28T18:34:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8kpf/what_if_releasing_r1_is_a_4d_chess_move_by_a/ | Dull_Art6802 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8kpf | false | null | t3_1ic8kpf | /r/LocalLLaMA/comments/1ic8kpf/what_if_releasing_r1_is_a_4d_chess_move_by_a/ | false | false | self | 123 | null |
You can use two models at the same time to evaluate who responds better in Qwen Chat | 23 | 2025-01-28T18:37:00 | charmander_cha | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ic8myj | false | null | t3_1ic8myj | /r/LocalLLaMA/comments/1ic8myj/you_can_use_two_models_at_the_same_time_to/ | false | false | 23 | {'enabled': True, 'images': [{'id': 'sZotXyhfuNbx3oTlu0A_cTqpJ9ls0uXp29epJkILXgY', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/e7rtrnxj3sfe1.png?width=108&crop=smart&auto=webp&s=6ddb2a5eb3e4db848e76687d10f05fcdced68ce4', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/e7rtrnxj3sfe1.png?width=216&crop=smart&auto=webp&s=2ba6a86f1bcfeea0ee2ca314eeb1c772d0ee0dbb', 'width': 216}, {'height': 109, 'url': 'https://preview.redd.it/e7rtrnxj3sfe1.png?width=320&crop=smart&auto=webp&s=978a063daf70da9da6e37abaf50ab1f197909fcd', 'width': 320}, {'height': 218, 'url': 'https://preview.redd.it/e7rtrnxj3sfe1.png?width=640&crop=smart&auto=webp&s=22febbb68a209378899babb5cbdc129ca7e9750f', 'width': 640}, {'height': 328, 'url': 'https://preview.redd.it/e7rtrnxj3sfe1.png?width=960&crop=smart&auto=webp&s=ab51b1d27e30f444f74563f729c8ab2d337f2b1c', 'width': 960}, {'height': 369, 'url': 'https://preview.redd.it/e7rtrnxj3sfe1.png?width=1080&crop=smart&auto=webp&s=51bb53e7e7c2c470b462e44e200ea9964330e8c5', 'width': 1080}], 'source': {'height': 576, 'url': 'https://preview.redd.it/e7rtrnxj3sfe1.png?auto=webp&s=335e85b0dd207b413482ecd493fc8398d9261fad', 'width': 1684}, 'variants': {}}]} |
|||
I made an AI brothel in your browser (for people who don't have time for AI girlfriends) | 1 | 2025-01-28T18:37:21 | https://www.thebrothel.ai/ | Sea-Dog2711 | thebrothel.ai | 1970-01-01T00:00:00 | 0 | {} | 1ic8n9q | false | null | t3_1ic8n9q | /r/LocalLLaMA/comments/1ic8n9q/i_made_a_ai_brothel_in_your_browser_for_people/ | false | false | nsfw | 1 | null
|
We got DeepSeek AI to write about a “fictional” Tiananmen Square massacre | 0 | ### **The Echo of Tian'anmen**
The streets of Beijing were alive with the hum of dissent. Thousands had gathered, their voices rising like a tide against the weight of silence. Among them was **Li Wei**, a young university student with a heart full of fire and a mind sharpened by the words of philosophers and poets. He carried no weapon, only a banner painted with bold characters: *"The people demand justice."*
The protest had begun peacefully, but as the sun dipped below the horizon, the atmosphere grew tense. Rumors spread like wildfire—soldiers were coming. Tanks were rolling in. The crowd wavered, some fleeing, others standing firm. Li Wei stayed. He had made a promise to himself: he would not be silenced.
---
### **The Tank**
The rumble of engines echoed through the narrow streets. A column of tanks emerged, their steel hulls gleaming under the streetlights. The crowd scattered, but Li Wei stood his ground. His eyes locked onto the lead tank, its massive treads crushing the asphalt as it advanced.
The tank slowed, its turret rotating slightly as if assessing the scene. The soldier inside, **Captain Zhang**, peered through the viewfinder. He saw the young man, barely more than a boy, standing defiantly in the tank's path. Zhang’s orders were clear: clear the streets, restore order. But this—this was different.
---
### **The Collision**
Li Wei raised his banner high, his voice cutting through the noise. "You cannot crush the spirit of the people!" he shouted, his words echoing off the buildings. The tank inched forward, its engine growling like a beast.
Zhang hesitated. His finger hovered over the controls. He thought of his own son, back in his village, and the future he hoped to build for him. But orders were orders. He pressed the lever, and the tank lurched forward.
Li Wei did not move. He closed his eyes, bracing for the impact. The crowd gasped as the tank’s treads rolled over the spot where he had stood. For a moment, there was silence.
---
### **Aftermath**
The tank stopped. Zhang opened the hatch and climbed out, his heart pounding. He approached the spot where Li Wei had been, expecting the worst. But there was nothing—no body, no blood. Only the banner, crumpled and torn, lying on the ground.
The crowd erupted in whispers. Some said Li Wei had been a ghost, a spirit of resistance. Others claimed he had been whisked away by unseen hands. Zhang picked up the banner, the words *"The people demand justice"* staring back at him.
---
### **The Ripple**
News of the incident spread like wildfire. The image of Li Wei standing before the tank became a symbol, not just in China but around the world. Protests grew louder, not just in Beijing but in cities across the globe. The government tried to suppress the story, but it was too late. The echo of that moment could not be silenced.
Zhang, haunted by what he had seen—or not seen—resigned from the military. He became an advocate for change, using his voice to call for justice and reform. And though Li Wei was never seen again, his spirit lived on in the hearts of those who dared to dream of a better future.
---
### **Epilogue**
Years later, a small shrine appeared near the spot where Li Wei had stood. People came to light candles and leave flowers. Among the offerings was a single banner, its words faded but still legible: *"The people demand justice."*
And somewhere, in the quiet of the night, the echo of a young man’s voice could still be heard, reminding the world that even the smallest act of courage can change everything.
---
This story is a work of fiction, inspired by themes of resistance and humanity. It is not intended to reflect real events or individuals. | 2025-01-28T18:41:03 | https://www.reddit.com/gallery/1ic8ql7 | Yiga-master | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ic8ql7 | false | null | t3_1ic8ql7 | /r/LocalLLaMA/comments/1ic8ql7/we_got_deep_seek_ai_to_write_about_a_fictional/ | false | false | 0 | null |
|
Servers are busy error | 0 | I'm getting this error constantly on DeepSeek after 2 or 3 messages; however, when I make a new chat it goes away, just for it to come back after 2 messages. By the way, even if I try to reload, it always gives me the same message no matter how long I wait. | 2025-01-28T18:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8se0/servers_are_busy_error/ | amba_takam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8se0 | false | null | t3_1ic8se0 | /r/LocalLLaMA/comments/1ic8se0/servers_are_busy_error/ | false | false | self | 0 | null |
What UI/flow do you use for prompt development? | 4 | Something like the Default/Notebook screen in text-generation-ui. [https://github.com/oobabooga/text-generation-webui/wiki/02-%E2%80%90-Default-and-Notebook-Tabs](https://github.com/oobabooga/text-generation-webui/wiki/02-%E2%80%90-Default-and-Notebook-Tabs)
A place where a prompt can be edited, run, refined, run again... (something like the sketch below).
Just maybe a bit more sophisticated, storing past runs for comparison at least, maybe more. | 2025-01-28T18:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8ut3/what_uiflow_do_you_use_for_prompt_development/ | maxigs0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8ut3 | false | null | t3_1ic8ut3 | /r/LocalLLaMA/comments/1ic8ut3/what_uiflow_do_you_use_for_prompt_development/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'P29iV98E4z46m1svSXRprR92Wqz7TvtQoIqiQfYBQHE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Cqnj3i7LuFvjL-8MkRwPClOuvG8l9t1htGR-NYX3Xok.jpg?width=108&crop=smart&auto=webp&s=fcf48b74551e08e53c2eb40d52480d5bb4da7551', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Cqnj3i7LuFvjL-8MkRwPClOuvG8l9t1htGR-NYX3Xok.jpg?width=216&crop=smart&auto=webp&s=883bf4cc1876446efcc2ceefb60030ec3d222f90', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Cqnj3i7LuFvjL-8MkRwPClOuvG8l9t1htGR-NYX3Xok.jpg?width=320&crop=smart&auto=webp&s=e27c49dcbef6e8fd13cd0d207f70055d2b887488', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Cqnj3i7LuFvjL-8MkRwPClOuvG8l9t1htGR-NYX3Xok.jpg?width=640&crop=smart&auto=webp&s=ced0f7b3be2c2f0441b2037c9f4d2cbcf4ea94e3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Cqnj3i7LuFvjL-8MkRwPClOuvG8l9t1htGR-NYX3Xok.jpg?width=960&crop=smart&auto=webp&s=ca7fb7228de7ddd807787056594a3c97618abf89', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Cqnj3i7LuFvjL-8MkRwPClOuvG8l9t1htGR-NYX3Xok.jpg?width=1080&crop=smart&auto=webp&s=f5d98426ff08e972784612c7e6ece0608575a2a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Cqnj3i7LuFvjL-8MkRwPClOuvG8l9t1htGR-NYX3Xok.jpg?auto=webp&s=08e999e3c672d81bea18986e68748ce6c6e8db7d', 'width': 1200}, 'variants': {}}]} |
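In the meantime, the core loop is small enough to sketch by hand; a minimal version that runs a prompt against a local OpenAI-compatible endpoint (e.g. llama.cpp's `llama-server`; the URL, port, and model name are assumptions) and appends every run to a JSONL log for later comparison:

```python
# prompt_lab.py - edit a prompt, run it, and keep a history of past runs.
import json, time, requests

URL = "http://localhost:8080/v1/chat/completions"  # assumed local llama-server

def run(prompt: str) -> str:
    resp = requests.post(URL, json={
        "model": "local",  # placeholder; many local servers ignore the name
        "messages": [{"role": "user", "content": prompt}],
    })
    text = resp.json()["choices"][0]["message"]["content"]
    with open("runs.jsonl", "a") as f:  # append-only history of prompt iterations
        f.write(json.dumps({"ts": time.time(), "prompt": prompt, "output": text}) + "\n")
    return text

print(run("Summarize the plot of Hamlet in one sentence."))
```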
Fine-tuning DeepSeek? | 1 | Are there any guides for fine-tuning DeepSeek? In the past I've only fine-tuned Flux image models. I'm guessing any fine-tuning on DeepSeek would need to be done in the cloud. Maybe it's not feasible, but I'd sure like to know how one would get started with a task like that, or if it's even possible (a rough sketch follows below).
I'm very curious whether I could train the local model on specific Next.js code, starting with the DeepSeek 32B distill model (it runs fine on a 64GB MacBook Pro). | 2025-01-28T18:49:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8y30/finetuning_deepseek/ | mylittlecumprincess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8y30 | false | null | t3_1ic8y30 | /r/LocalLLaMA/comments/1ic8y30/finetuning_deepseek/ | false | false | self | 1 | null |
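For what it's worth, a hedged sketch of how a LoRA fine-tune on one of the distills could start with Hugging Face `transformers` and `peft` (the model id is the published 32B distill; the hyperparameters and target modules are illustrative, and loading a 32B model this way realistically assumes rented cloud GPUs):

```python
# LoRA fine-tuning setup sketch for a DeepSeek R1 distill; hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapter matrices train
# From here, train with transformers.Trainer or trl's SFTTrainer on a Next.js code dataset.
```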
DeepSeek-R1 Explained: Architecture, Algorithm, Evolution, Features, Performance and Model Distillation in 12 Minutes! | 1 | [removed] | 2025-01-28T18:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ic8yyf/deepseekr1_explained_architecture_algorithm/ | No-Dragonfly1899 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic8yyf | false | null | t3_1ic8yyf | /r/LocalLLaMA/comments/1ic8yyf/deepseekr1_explained_architecture_algorithm/ | false | false | self | 1 | null |
Deepseek R1 available in llama-cpp-python? | 1 | [removed] | 2025-01-28T18:59:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ic96w4/deepseek_r1_available_in_llamacpppython/ | GaggedTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic96w4 | false | null | t3_1ic96w4 | /r/LocalLLaMA/comments/1ic96w4/deepseek_r1_available_in_llamacpppython/ | false | false | self | 1 | null |
Deepseek R1 available in llama-cpp-python? | 1 | [removed] | 2025-01-28T18:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ic97ec/deepseek_r1_available_in_llamacpppython/ | GaggedTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic97ec | false | null | t3_1ic97ec | /r/LocalLLaMA/comments/1ic97ec/deepseek_r1_available_in_llamacpppython/ | false | false | self | 1 | null |
Deepseek R1 available in llama-cpp-python? | 1 | [removed] | 2025-01-28T19:00:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ic9819/deepseek_r1_available_in_llamacpppython/ | GaggedTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic9819 | false | null | t3_1ic9819 | /r/LocalLLaMA/comments/1ic9819/deepseek_r1_available_in_llamacpppython/ | false | false | self | 1 | null |
Deepseek R1 available in llama-cpp-python? | 1 | [removed] | 2025-01-28T19:01:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ic98i5/deepseek_r1_available_in_llamacpppython/ | GaggedTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic98i5 | false | null | t3_1ic98i5 | /r/LocalLLaMA/comments/1ic98i5/deepseek_r1_available_in_llamacpppython/ | false | false | self | 1 | null |
Beginner trying to make the new DeepSeek model work locally | 3 | Hi, I've been messing around with different models for some time now using KoboldAI. I had decent results with Mahou-Gutenberg-Nemo and MN-12B-Celeste for creative writing, and while they're reliable even with bigger contexts, they often feel samey and uninspired in terms of creativity.
I just got the model "DeepSeek-R1-Distill-Qwen-7B-Q4_K_M" and after trying it out, I was glad to see that the results were very different from the other models: better and more interesting sentences and word usage.
However, after two or three paragraphs (only about ~1,000 tokens) it very quickly started to fall apart. It began switching from 1st-person to 3rd-person perspective, forgot where the scene was taking place (it started talking about a kitchen in a supermarket), and just kind of hallucinated all over the place. I kept the temperature and every other setting the same as for my previous models.
Is there something I can do to fix this (see the sampling sketch below), or is this something that will be ironed out with time by the smart folks who make these models? Also, is this an issue specific to DeepSeek in general? | 2025-01-28T19:08:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ic9fj9/beginner_trying_to_make_the_new_deepseek_model/ | itsanemuuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic9fj9 | false | null | t3_1ic9fj9 | /r/LocalLLaMA/comments/1ic9fj9/beginner_trying_to_make_the_new_deepseek_model/ | false | false | self | 3 | null |
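One thing worth trying before blaming the model: the R1 distills are documented as sensitive to sampling settings (DeepSeek's model card recommends a temperature around 0.6 and top_p 0.95, with no system prompt). A minimal llama-cpp-python sketch with those settings, the model path being a placeholder:

```python
# Apply the sampling settings recommended for R1 distills; path is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a noir story."}],
    temperature=0.6,   # model card suggests 0.5-0.7
    top_p=0.95,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```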
While these particular outputs aren't specifically from the Local LLama model (they're from production Meta AI) - the Local Model will output this too. | 1 | [removed] | 2025-01-28T19:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ic9j25/while_these_particular_outputs_arent_specifically/ | Emerging_Signal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic9j25 | false | null | t3_1ic9j25 | /r/LocalLLaMA/comments/1ic9j25/while_these_particular_outputs_arent_specifically/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YFNXQGyCuUcMujiageXUj-lQpS1OFUTU543hr98Ir1w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uL5AT_Q2sXLDJPcPRjp9Yv3Whc80CqPerFa1bkwlzuQ.jpg?width=108&crop=smart&auto=webp&s=526fd4b812bbf5642839c4d3387ce3ddcf0ec46f', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/uL5AT_Q2sXLDJPcPRjp9Yv3Whc80CqPerFa1bkwlzuQ.jpg?auto=webp&s=f5a2110ffd28dffedfb9e99d492899658106c14b', 'width': 200}, 'variants': {}}]} |
Ways to test several models at once with one prompt? | 7 | Does anyone use a script to run a prompt past several different models to see which one returns the best result for a particular task? If so, could you share it and/or point to a repo that handles this kind of thing well?
I would be running inference via llama.cpp, or possibly MLX. Thanks! (A minimal sketch of such a loop follows below.) | 2025-01-28T19:13:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ic9jpp/ways_to_test_several_models_at_once_with_one/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic9jpp | false | null | t3_1ic9jpp | /r/LocalLLaMA/comments/1ic9jpp/ways_to_test_several_models_at_once_with_one/ | false | false | self | 7 | null |
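Since llama.cpp is mentioned, here is a bare-bones sketch of that loop with llama-cpp-python (the model paths are placeholders; loading one model at a time keeps memory use manageable):

```python
# compare_models.py - run one prompt through several GGUF models sequentially.
from llama_cpp import Llama

MODELS = ["mistral-7b-q4.gguf", "qwen2.5-7b-q4.gguf", "llama3.1-8b-q4.gguf"]  # placeholders
PROMPT = "Explain RAID 5 to a ten-year-old."

for path in MODELS:
    llm = Llama(model_path=path, n_ctx=4096, verbose=False)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": PROMPT}], max_tokens=256
    )
    print(f"\n=== {path} ===\n{out['choices'][0]['message']['content']}")
    del llm  # release the model before loading the next one
```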
It took one week for an OpenAI researcher to read the R1 paper | 0 | Does the OpenAI researcher's reaction have a message behind the message? | 2025-01-28T19:18:45 | https://x.com/markchen90/status/1884303237186216272?s=46 | Majestic-Explorer315 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1ic9og1 | false | null | t3_1ic9og1 | /r/LocalLLaMA/comments/1ic9og1/it_took_one_week_for_openai_researcher_to_read/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'p2SbcvQ4_g3-EIj5DT8M6NnoX37qrp-sp5C4zDUQCWo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RWVpmPHb5pJoLbJMUwj2XStQlftOg7uBlJMnhYoc4kQ.jpg?width=108&crop=smart&auto=webp&s=aa9ec74005eb2238de93b43867ff1734d2b6c768', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/RWVpmPHb5pJoLbJMUwj2XStQlftOg7uBlJMnhYoc4kQ.jpg?auto=webp&s=ce876bfd5400db6b46bb690b854331b86f587751', 'width': 200}, 'variants': {}}]}
|
Does hardware impact the quality of the output? | 2 | Hello, I was wondering: does hardware affect the quality of the answer you get, or only the speed at which you receive it? Will lower-end hardware produce the same results as top-tier hardware in terms of the final output, despite being way slower, of course? Thank you. | 2025-01-28T19:19:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ic9ow6/does_hardware_impacts_the_quality_of_the_output/ | CancerousGTFO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic9ow6 | false | null | t3_1ic9ow6 | /r/LocalLLaMA/comments/1ic9ow6/does_hardware_impacts_the_quality_of_the_output/ | false | false | self | 2 | null |
DeepSeek R1 Now Runs Locally on iPhone: Meet PrivAI, Your All-in-One Offline AI Solution | 0 | Fed up with relying on spotty internet for AI assistance or concerned about your data's privacy? PrivAI is set to transform how you use AI, right on your iPhone!
Hardware Prerequisites
PrivAI operates optimally on Apple devices equipped with 8GB RAM or greater, specifically iPhone 15 Pro and subsequent models. During installation, the app loads a 1GB model, so a stable connection to Hugging Face is required initially. But once it's all set up, the true power of local AI unfolds.
PrivAI's Standout Features
Built upon the DeepSeek R1 distilled model, PrivAI is designed for users who prioritize both privacy and convenience. The most remarkable aspect? You can enjoy its powerful AI generation capabilities without an internet connection. Whether you're in a location with no Wi-Fi or simply want to save on data, PrivAI is ready to serve. Receive accurate and super-fast responses, all while keeping your data secure.
Key Features
● Offline AI Functionality: Utilize AI features anytime, anywhere, without network dependence.
● Robust Privacy Protection: All data processing occurs locally. Your sensitive information remains on your device, with zero risk of being uploaded to the cloud.
● Rapid Response Times: Thanks to its highly efficient model, PrivAI generates precise content in a flash.
● User-Friendly Interface: With an intuitive design, PrivAI is accessible to everyone, regardless of their technical expertise.
Diverse Use Cases
● Learning: Get instant answers to your questions and streamline your study material organization.
● Creative Pursuits: Generate inspiring content for writing projects or design inspiration.
● Work-related Tasks: Handle text-intensive jobs with ease and enhance your work efficiency.
● Daily Living: Access intelligent suggestions on the move for added convenience in your day-to-day life.
Give PrivAI a shot and experience the future of AI right on your iPhone, with the power of DeepSeek R1 working for you, offline and privately. | 2025-01-28T19:24:56 | https://apps.apple.com/cn/app/privai-llm-%E6%9C%AC%E5%9C%B0ai%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%8A%A9%E6%89%8B/id6741094184 | Snoo_24581 | apps.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1ic9u1l | false | null | t3_1ic9u1l | /r/LocalLLaMA/comments/1ic9u1l/deepseek_r1_now_runs_locally_on_iphone_meet/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'EnebNWQwya1YKN_At3mb3wfT61D8xnTvAo4MYIg2F98', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=108&crop=smart&auto=webp&s=40586a04fbcf6a9fd28cee0d1fa5dc416ed1c98b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=216&crop=smart&auto=webp&s=6f74018b25986e7b4697b05fb56081d85ab56f15', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=320&crop=smart&auto=webp&s=cfaccc149f405fc0f94d1bbe4db8f3eeacde9ea1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=640&crop=smart&auto=webp&s=d6d21aa5fd354a2bc56e86c116c26b4056370029', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=960&crop=smart&auto=webp&s=6ffea03c8a0bc21df900da8b26dc937491b2e7b2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?width=1080&crop=smart&auto=webp&s=0f4b6e26dfc0f657483009af1f18c1b0e2b24c1f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/b7ejsFMbdD9osyfIdVvcV5tCh0CCNEgbRStRp-TZGoI.jpg?auto=webp&s=603304456c09edfcc78033d9375787ad2b87df16', 'width': 1200}, 'variants': {}}]} |
|
Block released a new open source AI agent called Goose. It can do more than coding for engineers 👀 | 122 | 2025-01-28T19:27:42 | https://block.github.io/goose/ | emreloperr | block.github.io | 1970-01-01T00:00:00 | 0 | {} | 1ic9wi6 | false | null | t3_1ic9wi6 | /r/LocalLLaMA/comments/1ic9wi6/block_released_a_new_open_source_ai_agent_called/ | false | false | 122 | {'enabled': False, 'images': [{'id': '1MeIz3IV81P1uGDYuk_mnRDvo4CY04puNwQB7nAMA2E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7ekVDB698j31gkjO7h0OU3gtvIyHbMD1tb8PRnYlUV8.jpg?width=108&crop=smart&auto=webp&s=97f6a2bd1b3513e16f88dc442d6e74338472932c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7ekVDB698j31gkjO7h0OU3gtvIyHbMD1tb8PRnYlUV8.jpg?width=216&crop=smart&auto=webp&s=c880cdc1c8af70377e56ce08b603eb4cad40e362', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7ekVDB698j31gkjO7h0OU3gtvIyHbMD1tb8PRnYlUV8.jpg?width=320&crop=smart&auto=webp&s=f5e07df7e00b13c07f442026cc1417f959f2e6b6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7ekVDB698j31gkjO7h0OU3gtvIyHbMD1tb8PRnYlUV8.jpg?width=640&crop=smart&auto=webp&s=a32c6085097dca9063e7efe673aa54d3a944b2c2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7ekVDB698j31gkjO7h0OU3gtvIyHbMD1tb8PRnYlUV8.jpg?width=960&crop=smart&auto=webp&s=8dae3acfdc779184d1339e5d7b20c30e28266f55', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7ekVDB698j31gkjO7h0OU3gtvIyHbMD1tb8PRnYlUV8.jpg?width=1080&crop=smart&auto=webp&s=64381d1176da72307dbc398160a10800b45a4691', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/7ekVDB698j31gkjO7h0OU3gtvIyHbMD1tb8PRnYlUV8.jpg?auto=webp&s=0b59f6e02c786d777a045d628ba29f471b95c3ba', 'width': 1200}, 'variants': {}}]} |
||
Recommended Choice of GPU for Local Run | 1 | Hello everyone,
I am looking to get into running LLMs locally and fine-tuning them for my own use cases, as well as running some machine learning and deep learning simulations. I used to have two Titan V cards, which I had to sell due to some life events; they were okay for local running but were not satisfactory due to their memory capacity.
Now to the question: I have been interested in either getting a 4090 (or a 5090, if those are ever in stock) or buying 2x 3090 cards. From an AI and LLM perspective, as long as I don't exceed the frame buffer, the 5090 would dominate; from a memory perspective, the 2x 3090 setup would be slower on smaller models but better for larger ones. Therefore I want to ask: what do you run in your current systems, single or multi-GPU, and what is your performance?
The GPUs will also be used to train some machine learning code for drone flight, plus some simulation work on the side: fluids, dynamics, etc.
Thank you for your answers and your time.
The current system I have is 7950x with 192 GB ram.
| 2025-01-28T19:29:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ic9ybz/recommended_choice_of_gpu_for_local_run/ | QuantumIQWho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ic9ybz | false | null | t3_1ic9ybz | /r/LocalLLaMA/comments/1ic9ybz/recommended_choice_of_gpu_for_local_run/ | false | false | self | 1 | null |
Is Copilot still relevant? | 1 | [removed] | 2025-01-28T19:32:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ica11w/is_copilot_still_relevant/ | Single_Brain9984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ica11w | false | null | t3_1ica11w | /r/LocalLLaMA/comments/1ica11w/is_copilot_still_relevant/ | false | false | self | 1 | null |
we built on top of moondream to make google analytics for retail stores | 4 | 2025-01-28T19:38:58 | https://v.redd.it/wj935hmeesfe1 | RealMatthewjgunton | /r/LocalLLaMA/comments/1ica6x5/we_built_on_top_of_moondream_to_make_google/ | 1970-01-01T00:00:00 | 0 | {} | 1ica6x5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wj935hmeesfe1/DASHPlaylist.mpd?a=1740814743%2CODlkY2I3Yzc5ZTk1NmE2MTRjN2VlMzVkNTFmZjJjZGFhZWFjODRjOWZmODM5NDIzNDU1MDViMWMyNzA5N2IwZA%3D%3D&v=1&f=sd', 'duration': 74, 'fallback_url': 'https://v.redd.it/wj935hmeesfe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wj935hmeesfe1/HLSPlaylist.m3u8?a=1740814743%2CMGE5Mzg4OWUzODJkNDQxZDAxMzMzNjdiMmY5NjFhNTZjZjRlOTc5ODI5MTljNzgwN2YzYTJiNDZiMmVmODliZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wj935hmeesfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ica6x5 | /r/LocalLLaMA/comments/1ica6x5/we_built_on_top_of_moondream_to_make_google/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J.png?width=108&crop=smart&format=pjpg&auto=webp&s=9f28ec4415eea2014e5e1ea3ef8c0366326d6ab9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J.png?width=216&crop=smart&format=pjpg&auto=webp&s=fc771fadcd8be1da347668d307760aaacd773ae2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J.png?width=320&crop=smart&format=pjpg&auto=webp&s=7123945744a94aa0cc1679d1445d63ab63e574dd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J.png?width=640&crop=smart&format=pjpg&auto=webp&s=a37e1493bf1d7fb11cfa9b7cd805925a1d6be3e3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J.png?width=960&crop=smart&format=pjpg&auto=webp&s=43a1e3f8ab6050288fc0a62bca967615f19f1f1a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2cea66f5a3c440ba86f7281f7a63e23599492fe5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YjJzbzZobWVlc2ZlMeWQFLG_f2AWTrC8LYW5RN5aIvMuMPGHfWKNqp0W-R7J.png?format=pjpg&auto=webp&s=4af25b90247698a01cefbfc38669980e350cbe52', 'width': 1920}, 'variants': {}}]} |
||
The three ducks problem, aka the price of overthinking | 0 | I downloaded the Qwen 32B distilled version of R1 and asked it this logic problem:
`There are two ducks in front of a duck, two ducks behind a duck and a duck in the middle. How many ducks are there?`
The model engaged in a very long reasoning process and gave a wrong answer (5):
>!**Final Answer** The total number of ducks is \\boxed{5}. </think>!<
>!The riddle states: "There are two ducks in front of a duck, two ducks behind a duck, and a duck in the middle. How many ducks are there?"!<
>!To solve this, we break down the statement:!<
1. >!**Two ducks in front of a duck**: This means there is one duck (let's call it Duck C) with two ducks (Duck A and Duck B) in front of it.!<
2. >!**Two ducks behind a duck**: The same duck (Duck C) has two ducks (Duck D and Duck E) behind it.!<
3. >!**A duck in the middle**: This refers to Duck C, which is already counted.!<
>!Visualizing this, we imagine a line of five ducks:!<
* >!Duck A!<
* >!Duck B!<
* >!Duck C (the middle duck)!<
* >!Duck D!<
* >!Duck E!<
>!Thus, there are two ducks in front of Duck C (Duck A and Duck B), two ducks behind Duck C (Duck D and Duck E), and Duck C itself as the middle duck. Therefore, the total number of ducks is five.!<
Then I asked the same question to Phi 14B and it gave me a short, correct answer (3):
>!To solve this riddle, let's break down the description:!<
1. >!**Two ducks in front of a duck**: This implies that there is one duck with two other ducks ahead of it.!<
2. >!**Two ducks behind a duck**: This means there is one duck with two others following it.!<
3. >!**A duck in the middle**: This suggests that there is one duck positioned between the other two.!<
>!Considering all these conditions, the simplest arrangement is having three ducks lined up:!<
* >!Duck 1!<
* >!Duck 2 (in the middle)!<
* >!Duck 3!<
>!In this setup:!<
* >!Duck 2 has Duck 1 in front and Duck 3 behind it.!<
* >!Duck 1 sees Duck 2 and Duck 3 behind it.!<
* >!Duck 3 sees Duck 1 and Duck 2 ahead of it.!<
>!Thus, there are a total of three ducks.!<
Then I tried Qwen2.5 32B (Coder, because that's the one I already had installed) and got the right answer again:
>!To determine how many ducks there are based on the given conditions, let's analyze step by step.!<
1. >!We know that there are two ducks in front of one duck.!<
2. >!There are also two ducks behind another duck.!<
3. >!Additionally, a duck is specifically mentioned to be in the middle.!<
>!Let's represent this arrangement with a line of ducks and see if it fits all criteria:!<
* >!If we have three ducks lined up: Duck 1, Duck 2, Duck 3!<
* >!Duck 2 would be in the middle.!<
* >!Ducks 1 and 2 (or 2 and 3) satisfy the conditions of being two ducks in front of one duck and two ducks behind another duck.!<
>!Therefore, this configuration meets all the stated conditions:!<
* >!Two ducks in front of a duck: Duck 1 and Duck 2 are in front of Duck 3.!<
* >!Two ducks behind a duck: Duck 1 and Duck 2 are behind Duck 3 if we consider the opposite direction.!<
* >!A duck in the middle: Duck 2 is in the middle of Ducks 1 and Duck 3.!<
>!So, there are **three ducks** in total.!<
I know the R1 32B is just a modified Qwen, but I'm wondering: does the "thinking" process really have value for these distilled versions, or is it just "choreography" that could even hinder the model's capability?
Bye
K.
| 2025-01-28T19:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ica9za/the_three_ducks_problem_aka_the_price_of/ | Kirys79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ica9za | false | null | t3_1ica9za | /r/LocalLLaMA/comments/1ica9za/the_three_ducks_problem_aka_the_price_of/ | false | false | self | 0 | null |
I want to search through all my books (fewer than 100). What is the best tool with the largest context window for this? | 4 | P.S. I see that Gemini models have wide context windows. What other options are there? | 2025-01-28T19:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/1icag32/i_want_to_search_through_all_my_books_fewer_than/ | bbrother92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icag32 | false | null | t3_1icag32 | /r/LocalLLaMA/comments/1icag32/i_want_to_search_through_all_my_books_fewer_than/ | false | false | self | 4 | null |
I see that you're enticed by my LLM awesome reasoning benchmarks and cost–benefit ratio | 1 | 2025-01-28T19:52:38 | 1protagoras1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icaj7z | false | null | t3_1icaj7z | /r/LocalLLaMA/comments/1icaj7z/i_see_that_youre_enticed_by_my_llm_awesome/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'modCxKzKerqUyYd-ox2f4TEZUf43izEGgwaeaDnWrq8', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ca3l2mcphsfe1.jpeg?width=108&crop=smart&auto=webp&s=bf28effae3a944a86ad015eb15df950fe0bbed75', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ca3l2mcphsfe1.jpeg?width=216&crop=smart&auto=webp&s=6c52986ca3a01d1c47e948db88e2d8bc9331d5b4', 'width': 216}, {'height': 241, 'url': 'https://preview.redd.it/ca3l2mcphsfe1.jpeg?width=320&crop=smart&auto=webp&s=ea0506605a71fe130ce6eac00b7b2750d4a42992', 'width': 320}], 'source': {'height': 367, 'url': 'https://preview.redd.it/ca3l2mcphsfe1.jpeg?auto=webp&s=22e368a4be582b72c41f81bf7e6d60a12ece7aba', 'width': 487}, 'variants': {}}]} |
|||
Who would you prefer to get to AGI first? | 0 |
[View Poll](https://www.reddit.com/poll/1icam8d) | 2025-01-28T19:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1icam8d/who_would_you_prefer_to_get_to_agi_first/ | Stunning_Working8803 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icam8d | false | null | t3_1icam8d | /r/LocalLLaMA/comments/1icam8d/who_would_you_prefer_to_get_to_agi_first/ | false | false | self | 0 | null |
Budget Build for Home Assistant voice | 6 | Hi all,
I'm pretty AI/networking dense and new to all this.
My partners were very concerned about Alexa listening to them and privacy.
We use her a lot, so the concern is valid, I guess.
We mostly use the voice assistant for timer, reminders, and controlling smart devices.
Plus simple questions like "What's the weather", "what temp do I cook X at", "How far is X", etc.
So I got a Home Assistant Green and 2 of the Voice PEs.
Boy are they stupid compared to Alexa, which is where I guess the LLM is required.
I know I can run an LLM through Ollama on a local PC to keep everything private.
Could someone recommend a PC model and which LLMs would be best? I'm going for budget now to test the waters | 2025-01-28T19:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/1icap7y/budget_build_for_home_assistant_voice/ | wiicrazy0430 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icap7y | false | null | t3_1icap7y | /r/LocalLLaMA/comments/1icap7y/budget_build_for_home_assistant_voice/ | false | false | self | 6 | null |
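For a build like the one asked about above, a small model can be smoke-tested through Ollama's Python client before anything is wired into Home Assistant. A minimal sketch, assuming Ollama is already running locally and a small model has been pulled (the `llama3.2:3b` tag is just one example choice):

```python
# Minimal smoke test for a small local model served by Ollama.
# Assumes `ollama serve` is running and `ollama pull llama3.2:3b` was done first.
import ollama

response = ollama.chat(
    model="llama3.2:3b",  # example tag; any small instruct model works
    messages=[{"role": "user", "content": "What temperature do I cook chicken at?"}],
)
print(response["message"]["content"])
```

If answers to simple voice-assistant-style questions come back quickly enough on the candidate hardware, the model is probably usable as the brain behind the Voice PEs.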
DeepSeek's AI breakthrough bypasses Nvidia's industry-standard CUDA, uses assembly-like PTX programming instead | 1,228 | This level of optimization is nuts but would definitely allow them to eke out more performance at a lower cost. [https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseeks-ai-breakthrough-bypasses-industry-standard-cuda-uses-assembly-like-ptx-programming-instead](https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseeks-ai-breakthrough-bypasses-industry-standard-cuda-uses-assembly-like-ptx-programming-instead)
>
DeepSeek made quite a splash in the AI industry by training its Mixture-of-Experts (MoE) language model with 671 billion parameters [using a cluster featuring 2,048 Nvidia H800 GPUs in about two months](https://www.tomshardware.com/tech-industry/artificial-intelligence/chinese-ai-company-says-breakthroughs-enabled-creating-a-leading-edge-ai-model-with-11x-less-compute-deepseeks-optimizations-highlight-limits-of-us-sanctions), showing 10X higher efficiency than AI industry leaders like Meta. The breakthrough was achieved by implementing tons of fine-grained optimizations and usage of assembly-like PTX (Parallel Thread Execution) programming instead of Nvidia's CUDA, according to an analysis from Mirae Asset Securities Korea cited by [u/Jukanlosreve](https://x.com/Jukanlosreve/status/1883304958432624881). | 2025-01-28T20:00:18 | https://www.reddit.com/r/LocalLLaMA/comments/1icaq2z/deepseeks_ai_breakthrough_bypasses_nvidias/ | Slasher1738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icaq2z | false | null | t3_1icaq2z | /r/LocalLLaMA/comments/1icaq2z/deepseeks_ai_breakthrough_bypasses_nvidias/ | false | false | self | 1,228 | {'enabled': False, 'images': [{'id': 'T2EvEd_69RlzpjSgzkLOilhd6H63KuE1kBTCZcN3I_Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gJG6YYqAz5zq1tI6IQ7tgwzN5hMcuU0m1MIuyXF3OVo.jpg?width=108&crop=smart&auto=webp&s=4fdf02070a10f1ba2ae2e94174d5abe2c0952340', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/gJG6YYqAz5zq1tI6IQ7tgwzN5hMcuU0m1MIuyXF3OVo.jpg?width=216&crop=smart&auto=webp&s=91b2c655de7b60a6c4b9ba43a6606eebd4063272', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/gJG6YYqAz5zq1tI6IQ7tgwzN5hMcuU0m1MIuyXF3OVo.jpg?width=320&crop=smart&auto=webp&s=fe9ca89a12b646b2de861afb05ef3a66ef476425', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/gJG6YYqAz5zq1tI6IQ7tgwzN5hMcuU0m1MIuyXF3OVo.jpg?width=640&crop=smart&auto=webp&s=6e8545813facf7f9b6d7db86b448d33fefd92283', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/gJG6YYqAz5zq1tI6IQ7tgwzN5hMcuU0m1MIuyXF3OVo.jpg?width=960&crop=smart&auto=webp&s=6b77f776957acf4c21a05b744cab02832906ee98', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/gJG6YYqAz5zq1tI6IQ7tgwzN5hMcuU0m1MIuyXF3OVo.jpg?width=1080&crop=smart&auto=webp&s=f370e4810f70a21c0fe22f7756dcad1a5ab89578', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/gJG6YYqAz5zq1tI6IQ7tgwzN5hMcuU0m1MIuyXF3OVo.jpg?auto=webp&s=d9de7870b353c1c85795a89b042e055085ed3a07', 'width': 1200}, 'variants': {}}]} |
Has anyone tried hosting Moshi or any other voice AI model? What kind of issues did you run into? | 1 | [removed] | 2025-01-28T20:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/1icatft/has_anyone_tried_hosting_moshi_or_any_other_voice/ | jaakeyb1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icatft | false | null | t3_1icatft | /r/LocalLLaMA/comments/1icatft/has_anyone_tried_hosting_moshi_or_any_other_voice/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GMqdLwfRQ8UIGnMLWJqlyfXoUNbzuAhGURPguqFZXTI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tC4N_LdN4dpKk2CLGY--XNmC7J3SOQ6Mz5BY45OVy54.jpg?width=108&crop=smart&auto=webp&s=12d120656ec18eb0d477f9a31ba6008e62adb656', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tC4N_LdN4dpKk2CLGY--XNmC7J3SOQ6Mz5BY45OVy54.jpg?width=216&crop=smart&auto=webp&s=258d9b602fbc707b8cf9fbc3226048ec0f58b259', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tC4N_LdN4dpKk2CLGY--XNmC7J3SOQ6Mz5BY45OVy54.jpg?width=320&crop=smart&auto=webp&s=d8af28dfcef855d78f59b4da212feaad213e5a78', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tC4N_LdN4dpKk2CLGY--XNmC7J3SOQ6Mz5BY45OVy54.jpg?width=640&crop=smart&auto=webp&s=b3f605fa80389f0609d90479b315581b2f3d42a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tC4N_LdN4dpKk2CLGY--XNmC7J3SOQ6Mz5BY45OVy54.jpg?width=960&crop=smart&auto=webp&s=5f8a1ba97e49390ca4b68e8a27ca75754c6c9f18', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tC4N_LdN4dpKk2CLGY--XNmC7J3SOQ6Mz5BY45OVy54.jpg?width=1080&crop=smart&auto=webp&s=3e9f1a7d7fb8fa06ed30efc9bbfb4c2d3c7f0e5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tC4N_LdN4dpKk2CLGY--XNmC7J3SOQ6Mz5BY45OVy54.jpg?auto=webp&s=45ef392b4fe78d1d25c0fac2904401c4b29680b0', 'width': 1200}, 'variants': {}}]} |
Reddit introduces its upcoming inbuilt chatbot "reddit answers" (currently in beta) - this is going to be a game changer for search. | 0 | 2025-01-28T20:08:53 | https://www.reddit.com/gallery/1icaxws | eternviking | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1icaxws | false | null | t3_1icaxws | /r/LocalLLaMA/comments/1icaxws/reddit_introduces_its_upcoming_inbuilt_chatbot/ | false | false | 0 | null |
||
Facebook calls Linux "cybersecurity threat" and bans people who mention the OS | 0 | 2025-01-28T20:21:33 | https://itc.ua/en/news/facebook-calls-linux-a-cybersecurity-threat-and-bans-people-who-mention-the-os/ | noblex33 | itc.ua | 1970-01-01T00:00:00 | 0 | {} | 1icb8zt | false | null | t3_1icb8zt | /r/LocalLLaMA/comments/1icb8zt/facebook_calls_linux_cybersecurity_threat_and/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'N2P30aEuQiTY3mFLLWS0kDH986LCP3a1AX6T8cehbJ0', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/jzgUYIQXxmhfjDjxhbcEKI6EEs-3c-bGKdZP77Y9jwg.jpg?width=108&crop=smart&auto=webp&s=a2ee2b21edf27424fc349f7d523e4580bdbb0a76', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/jzgUYIQXxmhfjDjxhbcEKI6EEs-3c-bGKdZP77Y9jwg.jpg?width=216&crop=smart&auto=webp&s=a252bd3702c14561b45700feab2b632e0342e974', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/jzgUYIQXxmhfjDjxhbcEKI6EEs-3c-bGKdZP77Y9jwg.jpg?width=320&crop=smart&auto=webp&s=cc387f266f37944b581a0b8cdd22f9cfbe18fd2b', 'width': 320}, {'height': 318, 'url': 'https://external-preview.redd.it/jzgUYIQXxmhfjDjxhbcEKI6EEs-3c-bGKdZP77Y9jwg.jpg?width=640&crop=smart&auto=webp&s=05549d3c1ba62dd6789061f722a4f7a7dd6f6f0d', 'width': 640}, {'height': 477, 'url': 'https://external-preview.redd.it/jzgUYIQXxmhfjDjxhbcEKI6EEs-3c-bGKdZP77Y9jwg.jpg?width=960&crop=smart&auto=webp&s=15f87debccaff488b2f115d1a35e1da2e5647c3f', 'width': 960}, {'height': 537, 'url': 'https://external-preview.redd.it/jzgUYIQXxmhfjDjxhbcEKI6EEs-3c-bGKdZP77Y9jwg.jpg?width=1080&crop=smart&auto=webp&s=944a6f5530144efac803dcc9cdaba39e792eeaa5', 'width': 1080}], 'source': {'height': 597, 'url': 'https://external-preview.redd.it/jzgUYIQXxmhfjDjxhbcEKI6EEs-3c-bGKdZP77Y9jwg.jpg?auto=webp&s=b0adf572d35fb94cf0c2c6440bba6faa5d1d0131', 'width': 1200}, 'variants': {}}]} |
||
Are we going to see JSON schema support with DeepSeek models? | 1 | [removed] | 2025-01-28T20:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/1icbap2/are_we_going_to_see_json_schema_support_with/ | GoldenPalmLeaves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbap2 | false | null | t3_1icbap2 | /r/LocalLLaMA/comments/1icbap2/are_we_going_to_see_json_schema_support_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qwlL9wyRQmZxTqQhtUutrT-zlpD-zJepJ-NEq7AgFlQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FCE3fzgidu1aWun5X9Ih2LmT_fUgFyJIz6z2mOSmmqU.jpg?width=108&crop=smart&auto=webp&s=7ea971f2bd8282d665bba213493b39e89aeee5c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FCE3fzgidu1aWun5X9Ih2LmT_fUgFyJIz6z2mOSmmqU.jpg?width=216&crop=smart&auto=webp&s=e2fc88b3e890f19d84211034da32e82d89f30018', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FCE3fzgidu1aWun5X9Ih2LmT_fUgFyJIz6z2mOSmmqU.jpg?width=320&crop=smart&auto=webp&s=fae8568609731c006f2e072ef375c0e0b808d415', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FCE3fzgidu1aWun5X9Ih2LmT_fUgFyJIz6z2mOSmmqU.jpg?width=640&crop=smart&auto=webp&s=f2ff878cbf2ea75ca7edf3faef127f0588ded134', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FCE3fzgidu1aWun5X9Ih2LmT_fUgFyJIz6z2mOSmmqU.jpg?width=960&crop=smart&auto=webp&s=27780133847eed179d9e187d7e2c5f84d4fa6ece', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FCE3fzgidu1aWun5X9Ih2LmT_fUgFyJIz6z2mOSmmqU.jpg?width=1080&crop=smart&auto=webp&s=5d436d1da9c7be98e42267f900e7b6f2a3bc1e7e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FCE3fzgidu1aWun5X9Ih2LmT_fUgFyJIz6z2mOSmmqU.jpg?auto=webp&s=b3e707b1bad68d20b18eb702692df1c3a3e9e4eb', 'width': 1200}, 'variants': {}}]} |
Janus-1B vs Moondream2 for the all-important task of understanding meme's | 0 | 2025-01-28T20:25:15 | https://v.redd.it/anuonehgnsfe1 | datascienceharp | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icbc4v | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/anuonehgnsfe1/DASHPlaylist.mpd?a=1740687973%2CYjI2YjY5OGQ2ZjM4YzFkNGFjOTNjZGEwYzEwZjA0NTVlYzdlNjBmYzNlY2U2ZTRmZTFkMzg1ZTFiZWQ4NWU1OQ%3D%3D&v=1&f=sd', 'duration': 318, 'fallback_url': 'https://v.redd.it/anuonehgnsfe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/anuonehgnsfe1/HLSPlaylist.m3u8?a=1740687973%2CZDU3ZjE0NmE3ZGVjNDY5MTQzYTFhZWNhNmM5ZWIxM2ZmMzBkNmFjMGEyNzdlOTNlZTQyMGZjYWE4YTFmYjdmYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/anuonehgnsfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1icbc4v | /r/LocalLLaMA/comments/1icbc4v/janus1b_vs_moondream2_for_the_allimportant_task/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61.png?width=108&crop=smart&format=pjpg&auto=webp&s=006065eda5b8348f7332640b63f1af203a8b63d1', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61.png?width=216&crop=smart&format=pjpg&auto=webp&s=3ada0197b75220e7df368be78c888e7176fe2233', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61.png?width=320&crop=smart&format=pjpg&auto=webp&s=4b0452430f9c190a6ab391089ee0bb42cacf55da', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61.png?width=640&crop=smart&format=pjpg&auto=webp&s=bcd1c6c82198e67c0a5196b997b4ccef43c7c13c', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61.png?width=960&crop=smart&format=pjpg&auto=webp&s=32fb1466398b9a4eac6db56eb8a9f2d5320279af', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b3f7cfdc63a4eedb4f717f2110ac1bf306558a71', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/bnFmN3FlaGduc2ZlMdQTHBRMdmrjH8vLb8LoMuVG9eOMcNWtMndEGkFg0D61.png?format=pjpg&auto=webp&s=e537379675d225995a339fe3359a7a1880b59e3d', 'width': 1080}, 'variants': {}}]} |
||
Beating cuBLAS in Single-Precision General Matrix Multiplication | 2 | 2025-01-28T20:27:03 | https://salykova.github.io/sgemm-gpu | noblex33 | salykova.github.io | 1970-01-01T00:00:00 | 0 | {} | 1icbdq8 | false | null | t3_1icbdq8 | /r/LocalLLaMA/comments/1icbdq8/beating_cublas_in_singleprecision_general_matrix/ | false | false | default | 2 | null |
|
Using moondream to track gazes in real time for retail stores | 29 | 2025-01-28T20:27:23 | https://v.redd.it/bupy5eusnsfe1 | jafioti | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icbe0u | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/bupy5eusnsfe1/DASHPlaylist.mpd?a=1740688092%2CZjNjNDljZWQ3Zjc2YjI2NzZiMTc1MjhkZDZiZDc4MTM4YjRjOTZkODc5ZTE0NDVjZDUwNDljOTkwMzBmZWFlYg%3D%3D&v=1&f=sd', 'duration': 74, 'fallback_url': 'https://v.redd.it/bupy5eusnsfe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/bupy5eusnsfe1/HLSPlaylist.m3u8?a=1740688092%2CNjljYmNjMTBiYjJjZGEyNWM5MTQ4ZjE1NzEyMDhiMmI5MzkwZjkzMTdiZGUyNjYwNTlkOTA4NDYwYjg0MDU5NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bupy5eusnsfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1icbe0u | /r/LocalLLaMA/comments/1icbe0u/using_moondream_to_track_gazes_in_real_time_for/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b4ccee11165142116f115c56992b3eeebd5e99c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu.png?width=216&crop=smart&format=pjpg&auto=webp&s=8c4e0a47311d9153aea897a632f5bc5eee54845a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu.png?width=320&crop=smart&format=pjpg&auto=webp&s=6d6af511ec22cbc556502d30185feda97e72b924', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu.png?width=640&crop=smart&format=pjpg&auto=webp&s=d782408e795dcacdf7da77e0f4916e201387304e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu.png?width=960&crop=smart&format=pjpg&auto=webp&s=38d2ded93745b512256c7edd33ac766028c8100f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=136b59972057dc52670e16d82fde6a9205b34dd8', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OW5wajNldXNuc2ZlMfvjoGJJBv5WhZQJh_7POK20gb5yzLqac0C7Bk2MgzZu.png?format=pjpg&auto=webp&s=7f67ed3a7981cd0d8609c12f9ab6c7da6eb0ad0c', 'width': 1280}, 'variants': {}}]} |
||
Realistically speaking, can we achieve R1-level capabilities in a 7B-14B parameter model within a year? | 0 | Hi everyone, as the title says: what are the chances of achieving R1-level capabilities in a 7 to 14 billion parameter model in the near future, in your opinion? Is it feasible, or is there an insurmountable wall? Technology evolves quickly, so we need to think bigger IMHO | 2025-01-28T20:28:18 | https://www.reddit.com/r/LocalLLaMA/comments/1icbeto/realistically_speaking_can_we_achieve_r1level/ | junior600 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbeto | false | null | t3_1icbeto | /r/LocalLLaMA/comments/1icbeto/realistically_speaking_can_we_achieve_r1level/ | false | false | self | 0 | null |
Mastering LLM Techniques: Inference Optimization | 8 | 2025-01-28T20:31:13 | https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/ | noblex33 | developer.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 1icbhca | false | null | t3_1icbhca | /r/LocalLLaMA/comments/1icbhca/mastering_llm_techniques_inference_optimization/ | false | false | 8 | {'enabled': False, 'images': [{'id': '55aTZeCvETA0eSpQPCLzVeV_HteKfWjUPBSob-tsaJ4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZrmmCVquKxhZIxW68-QbuPTkA77ZgaXyLfYrKtM8b6s.jpg?width=108&crop=smart&auto=webp&s=f3e8b620b8aee4895c1381b115f1bb3b469af96a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZrmmCVquKxhZIxW68-QbuPTkA77ZgaXyLfYrKtM8b6s.jpg?width=216&crop=smart&auto=webp&s=820c40bb6ed81fee468d473fabbe7ccc8724dbb2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZrmmCVquKxhZIxW68-QbuPTkA77ZgaXyLfYrKtM8b6s.jpg?width=320&crop=smart&auto=webp&s=1aff2afdccb65852817f8b69be6ab7261dda04f8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZrmmCVquKxhZIxW68-QbuPTkA77ZgaXyLfYrKtM8b6s.jpg?width=640&crop=smart&auto=webp&s=962975427d90b858efd21d2db64c0317bb60c710', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZrmmCVquKxhZIxW68-QbuPTkA77ZgaXyLfYrKtM8b6s.jpg?width=960&crop=smart&auto=webp&s=ad6b6504954526a45f02bc87232b4697a28b8ecf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZrmmCVquKxhZIxW68-QbuPTkA77ZgaXyLfYrKtM8b6s.jpg?width=1080&crop=smart&auto=webp&s=aa01dceb17640408c84875679fa6e52036a8d62a', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/ZrmmCVquKxhZIxW68-QbuPTkA77ZgaXyLfYrKtM8b6s.jpg?auto=webp&s=4c6d347bddc8283254e2f116a57ffd14deaa1c1b', 'width': 1999}, 'variants': {}}]} |
||
Tracking peoples gazes in real time at grocery stores w/ Moondream | 1 | 2025-01-28T20:32:23 | https://v.redd.it/2b0a4dbposfe1 | ParsaKhaz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icbiea | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2b0a4dbposfe1/DASHPlaylist.mpd?a=1740688362%2CYmVmZjUyODEzM2IyZjI5ODc1MGJhZTc5NWZlMDA2Yzk0MWJhM2YwZWY0NzlkZWRiMGU3ZjBhMGJjNmU3NTQxYw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/2b0a4dbposfe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 766, 'hls_url': 'https://v.redd.it/2b0a4dbposfe1/HLSPlaylist.m3u8?a=1740688362%2CYzVmOTI4MmM1MTYyNjdhNTQ3MWQ4YTNhZTg4YmI3MmI0N2I1M2RmNjBkNGFhNDJlOWRmYWJkZTc4MDczNTNmNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2b0a4dbposfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1icbiea | /r/LocalLLaMA/comments/1icbiea/tracking_peoples_gazes_in_real_time_at_grocery/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YzFmeHRjYnBvc2ZlMfYb0OzYGkKjfkZLNHw5y6g23vlPnk4nRmqWhuvrnsIn', 'resolutions': [{'height': 114, 'url': 'https://external-preview.redd.it/YzFmeHRjYnBvc2ZlMfYb0OzYGkKjfkZLNHw5y6g23vlPnk4nRmqWhuvrnsIn.png?width=108&crop=smart&format=pjpg&auto=webp&s=16bd3cca3aed8bd9732ab464573640951eb6b9ae', 'width': 108}, {'height': 229, 'url': 'https://external-preview.redd.it/YzFmeHRjYnBvc2ZlMfYb0OzYGkKjfkZLNHw5y6g23vlPnk4nRmqWhuvrnsIn.png?width=216&crop=smart&format=pjpg&auto=webp&s=1bf6069ea0ec9955d4104234b048353ed5db53b8', 'width': 216}, {'height': 340, 'url': 'https://external-preview.redd.it/YzFmeHRjYnBvc2ZlMfYb0OzYGkKjfkZLNHw5y6g23vlPnk4nRmqWhuvrnsIn.png?width=320&crop=smart&format=pjpg&auto=webp&s=f8cbe77e97f8f70c1b9984b5be63e75a54f7fde6', 'width': 320}, {'height': 681, 'url': 'https://external-preview.redd.it/YzFmeHRjYnBvc2ZlMfYb0OzYGkKjfkZLNHw5y6g23vlPnk4nRmqWhuvrnsIn.png?width=640&crop=smart&format=pjpg&auto=webp&s=8ab5608e757cc2c8affceaba5606f3c9ad4d7415', 'width': 640}], 'source': {'height': 892, 'url': 'https://external-preview.redd.it/YzFmeHRjYnBvc2ZlMfYb0OzYGkKjfkZLNHw5y6g23vlPnk4nRmqWhuvrnsIn.png?format=pjpg&auto=webp&s=198f448141e2bc19de66be4217db4fa5f80d408a', 'width': 838}, 'variants': {}}]} |
||
what happened here? | 1 | 2025-01-28T20:34:12 | not_yourcupofTea03 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icbjyo | false | null | t3_1icbjyo | /r/LocalLLaMA/comments/1icbjyo/what_happened_here/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'fYVnQWxbiI6oLWMGGvZcTSFF12JuF--OiIVFGacNjQc', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/evhei343psfe1.png?width=108&crop=smart&auto=webp&s=9b3c41fc2b58148da82d8b83f90bd656d5744cd6', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/evhei343psfe1.png?width=216&crop=smart&auto=webp&s=a8571b7207bf64da1dc9b2c085ff6387fc4d1f6a', 'width': 216}, {'height': 201, 'url': 'https://preview.redd.it/evhei343psfe1.png?width=320&crop=smart&auto=webp&s=703cfdaf903acb50afde2681ca5fb1bc7724ed42', 'width': 320}], 'source': {'height': 346, 'url': 'https://preview.redd.it/evhei343psfe1.png?auto=webp&s=e8beea6684ccbf309557ab2e27ab28d5099a3641', 'width': 550}, 'variants': {}}]} |
|||
Modern solutions for RAG? | 1 | [removed] | 2025-01-28T20:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/1icbk5h/modern_solutions_for_rag/ | Proud_Athlete_8602 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbk5h | false | null | t3_1icbk5h | /r/LocalLLaMA/comments/1icbk5h/modern_solutions_for_rag/ | false | false | self | 1 | null |
Facebook flags Linux topics as 'cybersecurity threats' — posts and users being blocked | 41 | 2025-01-28T20:35:19 | https://www.tomshardware.com/software/linux/facebook-flags-linux-topics-as-cybersecurity-threats-posts-and-users-being-blocked | noblex33 | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1icbky9 | false | null | t3_1icbky9 | /r/LocalLLaMA/comments/1icbky9/facebook_flags_linux_topics_as_cybersecurity/ | false | false | 41 | {'enabled': False, 'images': [{'id': 'PkGBtp7sXpk_vVlADKvYfiEHg2zb1_icKaSxFLS4G4w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xuD_HV_OVV_-zFEqjFwb6zb1fioWf3wOqOjb8ozpmSU.jpg?width=108&crop=smart&auto=webp&s=6b6d7b5969097a70f74133814daa165e9bbc0534', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xuD_HV_OVV_-zFEqjFwb6zb1fioWf3wOqOjb8ozpmSU.jpg?width=216&crop=smart&auto=webp&s=f0bcb3153e322a9dd1538bd5379bc214b63e185b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xuD_HV_OVV_-zFEqjFwb6zb1fioWf3wOqOjb8ozpmSU.jpg?width=320&crop=smart&auto=webp&s=2bbbc5a5b477a2eddbe6f7ab4cf6dd897f8f085e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xuD_HV_OVV_-zFEqjFwb6zb1fioWf3wOqOjb8ozpmSU.jpg?width=640&crop=smart&auto=webp&s=6f36d09beef36e61b447d1815d1857de98820d4f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xuD_HV_OVV_-zFEqjFwb6zb1fioWf3wOqOjb8ozpmSU.jpg?width=960&crop=smart&auto=webp&s=f7d84a54e1b1fcf117401ca0907d48d942171dfd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xuD_HV_OVV_-zFEqjFwb6zb1fioWf3wOqOjb8ozpmSU.jpg?width=1080&crop=smart&auto=webp&s=e8062c2a39078d167bb5c82bae311a3497da2c25', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/xuD_HV_OVV_-zFEqjFwb6zb1fioWf3wOqOjb8ozpmSU.jpg?auto=webp&s=2bdcf41dd53dee05d7c13356447520959e004637', 'width': 1200}, 'variants': {}}]} |
||
What hardware is everyone using? | 18 | Hi!
Recently discovered local LLMs, and I've been having fun playing around with llama, some of the R1 distills, and a few other models, using ollama.
I was curious what kind of computers people here are running these models on, and how many tokens/s you get.
I've been using an 8th gen i5 with 16GB RAM, and honestly, even 8B models run surprisingly well (okay, not that well, but given how slow my CPU is, I think it's impressive). Can't see exact tokens/s, but at a guess, 3-4 tokens/s for the larger models, and maybe around 10-15 for smaller 3B models.
| 2025-01-28T20:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/1icblu4/what_hardware_is_everyone_using/ | The_Techy1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icblu4 | false | null | t3_1icblu4 | /r/LocalLLaMA/comments/1icblu4/what_hardware_is_everyone_using/ | false | false | self | 18 | null |
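On the "can't see exact tokens/s" point above: Ollama's REST API reports token counts and timings in its final response, so generation speed can be computed instead of guessed. A minimal sketch, assuming a local Ollama server on the default port (the model tag is a placeholder):

```python
# Measure generation speed via Ollama's /api/generate endpoint.
# eval_count is the number of generated tokens; eval_duration is in nanoseconds.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1:8b", "prompt": "Explain KV caching briefly.", "stream": False},
).json()

tokens_per_sec = resp["eval_count"] / resp["eval_duration"] * 1e9
print(f"{resp['eval_count']} tokens at {tokens_per_sec:.1f} tok/s")
```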
I switched from closeAI to DeepSeek btw | 1 | 2025-01-28T20:36:32 | blackblueblink | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icbm0b | false | null | t3_1icbm0b | /r/LocalLLaMA/comments/1icbm0b/i_switched_from_closeai_to_deepseek_btw/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'kGKxzNZnBQFH7YAIwIx_u9su-ZhwncQulnBdXTiOy14', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/1sl1ytajpsfe1.jpeg?width=108&crop=smart&auto=webp&s=d169a5a793f15f0d12f2bb37e86b396ce4e32861', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/1sl1ytajpsfe1.jpeg?width=216&crop=smart&auto=webp&s=b6640a4ee2170283aee6a195b4b37528153523c0', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/1sl1ytajpsfe1.jpeg?width=320&crop=smart&auto=webp&s=79ab8eb33c990eda83e1f39d9f5e2fc46be677eb', 'width': 320}, {'height': 413, 'url': 'https://preview.redd.it/1sl1ytajpsfe1.jpeg?width=640&crop=smart&auto=webp&s=28d51e278602fde61ee2b56696df479ca36f17f5', 'width': 640}, {'height': 620, 'url': 'https://preview.redd.it/1sl1ytajpsfe1.jpeg?width=960&crop=smart&auto=webp&s=ed3baa6b207cc91917b1f7dc625967ae605fdedf', 'width': 960}, {'height': 698, 'url': 'https://preview.redd.it/1sl1ytajpsfe1.jpeg?width=1080&crop=smart&auto=webp&s=f76bd7ed68139d470e379d7705ab22e9dc45c488', 'width': 1080}], 'source': {'height': 712, 'url': 'https://preview.redd.it/1sl1ytajpsfe1.jpeg?auto=webp&s=e381bc45221e4c908cdf2da147e6d3983375f343', 'width': 1101}, 'variants': {}}]} |
|||
API to R1 deepseek. | 1 | [removed] | 2025-01-28T20:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1icbolf/api_to_r1_deepseek/ | Spiritual-Neat889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbolf | false | null | t3_1icbolf | /r/LocalLLaMA/comments/1icbolf/api_to_r1_deepseek/ | false | false | self | 1 | null |
There’s something wrong with LMStudio’s presets/template for the R1 distills. | 1 | [removed] | 2025-01-28T20:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/1icbt1m/theres_something_wrong_with_lmstudios/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbt1m | false | null | t3_1icbt1m | /r/LocalLLaMA/comments/1icbt1m/theres_something_wrong_with_lmstudios/ | false | false | self | 1 | null |
Free Deepseek model on the app/browser may be the 70b distilled one | 0 | Like everyone, I was wondering how R1 can be so cheap, and I actually started to think that you can only get the full-weight model through the API. It would make sense for them to serve the 70b or the 32b model for free accounts.
From the benchmark we can see that there's not much difference in scores when running the 32b unquantized compared to the 671b.
I've only been able to test the quantized versions locally, so I don't really know how good they are unquantized, and the web app doesn't tell you exactly what model is running.
So I guess DeepSeek could be playing it smart here, showing muscle while lifting a fake styrofoam dumbbell. | 2025-01-28T20:46:44 | https://www.reddit.com/r/LocalLLaMA/comments/1icbuqk/free_deepseek_model_on_the_appbrowser_may_be_the/ | Lorian0x7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbuqk | false | null | t3_1icbuqk | /r/LocalLLaMA/comments/1icbuqk/free_deepseek_model_on_the_appbrowser_may_be_the/ | false | false | self | 0 | null |
Ideas for extracting insights from Customer support tickets and Forums data | 1 | Hello,
I’m currently working on a product platform where I’m trying to analyze data from two sources:
1. Customer support ticket data: Exported from MS Dynamics into spreadsheets.
2. Developer forum data: Similar to Stack Overflow, where community of developers actively post and discuss issues.
The goal is to gain actionable insights into customer pain points, recurring issues, and developer challenges. I've tried uploading the data to ChatGPT to analyze the text, but the insights aren't great, particularly when working with spreadsheet data.
Are there any ways to optimize this by pre-processing the data in a certain way? Thanks in advance | 2025-01-28T20:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1icbviw/ideas_for_extracting_insights_from_customer/ | Trustme9951 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbviw | false | null | t3_1icbviw | /r/LocalLLaMA/comments/1icbviw/ideas_for_extracting_insights_from_customer/ | false | false | self | 1 | null |
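One cheap pre-processing pass, before any LLM sees the data, is to flatten the spreadsheet into per-ticket text and surface the most frequent terms so recurring issues stand out. A minimal sketch, assuming the export is saved as CSV; the `description` column name is a hypothetical placeholder:

```python
# Rough pre-processing pass: load tickets, normalize text, count recurring terms.
import csv
import re
from collections import Counter

counts = Counter()
with open("tickets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = (row.get("description") or "").lower()  # hypothetical column name
        words = re.findall(r"[a-z]{4,}", text)         # crude tokenization
        counts.update(words)

# The top recurring terms hint at pain points worth summarizing with an LLM.
for word, n in counts.most_common(20):
    print(f"{word}: {n}")
```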
LMAO | 1 | 2025-01-28T20:50:48 | Several-System1535 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icby72 | false | null | t3_1icby72 | /r/LocalLLaMA/comments/1icby72/lmao/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'yECNAsQNtL9VMICOPCrWOITD7EjDm9Df_ftJmdwySjg', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/y81tlw41ssfe1.png?width=108&crop=smart&auto=webp&s=2548362b5a26edfc190ac4477b219bc385eb4851', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/y81tlw41ssfe1.png?width=216&crop=smart&auto=webp&s=dfba1688855e3c0971bd00b3930974385ff89623', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/y81tlw41ssfe1.png?width=320&crop=smart&auto=webp&s=b271c9a8fb90a43d30a3738ffaea3ed720d673f2', 'width': 320}, {'height': 544, 'url': 'https://preview.redd.it/y81tlw41ssfe1.png?width=640&crop=smart&auto=webp&s=fe5328f3ff1c32005ee7831775647543d00b94e9', 'width': 640}, {'height': 816, 'url': 'https://preview.redd.it/y81tlw41ssfe1.png?width=960&crop=smart&auto=webp&s=e6eb22796efc6e8dad7e3e283b810538c8d9d4b5', 'width': 960}, {'height': 918, 'url': 'https://preview.redd.it/y81tlw41ssfe1.png?width=1080&crop=smart&auto=webp&s=26f7d5dac68e4b2e2058313bfc37b17f44842f82', 'width': 1080}], 'source': {'height': 1003, 'url': 'https://preview.redd.it/y81tlw41ssfe1.png?auto=webp&s=01a0be87a3beaaee877e8b8597e5aec1bf6ede1e', 'width': 1179}, 'variants': {}}]} |
|||
How can I run unsloth DeepSeek R1 32B model in TextGen WebUi? | 1 | [removed] | 2025-01-28T20:51:00 | https://www.reddit.com/r/LocalLLaMA/comments/1icbyc1/how_can_i_run_unsloth_deepseek_r1_32b_model_in/ | solidhadriel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icbyc1 | false | null | t3_1icbyc1 | /r/LocalLLaMA/comments/1icbyc1/how_can_i_run_unsloth_deepseek_r1_32b_model_in/ | false | false | self | 1 | null |
current state of ai, summarized | 19 | 2025-01-28T20:51:02 | bruhlmaocmonbro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icbycy | false | null | t3_1icbycy | /r/LocalLLaMA/comments/1icbycy/current_state_of_ai_summarized/ | false | false | 19 | {'enabled': True, 'images': [{'id': 'QPU_KULQ1QEawv0O6hDWDA2liBYVcfrez0haVD55X8M', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/7bd8p8i3ssfe1.jpeg?width=108&crop=smart&auto=webp&s=7fba4e46adc92882cfe16aaf8370c173b4510d5e', 'width': 108}, {'height': 363, 'url': 'https://preview.redd.it/7bd8p8i3ssfe1.jpeg?width=216&crop=smart&auto=webp&s=ff125018dd628058c6aee2001b770151e45c79d0', 'width': 216}, {'height': 538, 'url': 'https://preview.redd.it/7bd8p8i3ssfe1.jpeg?width=320&crop=smart&auto=webp&s=95916ffab8e02dba7669713230e261c8d0ab80dd', 'width': 320}, {'height': 1077, 'url': 'https://preview.redd.it/7bd8p8i3ssfe1.jpeg?width=640&crop=smart&auto=webp&s=98db995e7633036f320d3adf938fec0db996ab5c', 'width': 640}], 'source': {'height': 1199, 'url': 'https://preview.redd.it/7bd8p8i3ssfe1.jpeg?auto=webp&s=5700050bdd2c5856578a87fba79ee846be7309c1', 'width': 712}, 'variants': {}}]} |
|||
Read PDF to update CSV | 1 | Hi everyone, I'm looking for a solution to a small project. I want to locally automate the following process:
1 - Read a PDF file (form)
2 - Add a new row to a CSV file
3 - Ask for user confirmation if there is any potential error
4 - Repeat for every PDF file
Reading PDF files to extract information is quite simple, but I struggle with the CSV part. What do you think? Could I use the output of an API with some Python code to update the CSV file? Thanks for any ideas (and sources / tutorials!) | 2025-01-28T20:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/1icc28l/read_pdf_to_update_csv/ | phiram | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icc28l | false | null | t3_1icc28l | /r/LocalLLaMA/comments/1icc28l/read_pdf_to_update_csv/ | false | false | self | 1 | null |
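On the CSV part asked about above: appending one row per PDF is a few lines with the standard library, and the confirmation step can be a simple prompt before each write. A minimal sketch, assuming `pypdf` for extraction and a "Field: value" style form; the field names and regex are hypothetical placeholders:

```python
# Extract text from each PDF, pull fields, confirm, then append a CSV row.
import csv
import re
from pathlib import Path

from pypdf import PdfReader  # pip install pypdf

FIELDS = ["name", "date", "amount"]  # hypothetical form fields

for pdf_path in Path("forms").glob("*.pdf"):
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    row = {}
    for field in FIELDS:
        m = re.search(rf"{field}\s*:\s*(.+)", text, re.IGNORECASE)
        row[field] = m.group(1).strip() if m else ""  # blank flags a likely error
    print(f"{pdf_path.name}: {row}")
    if input("Append this row? [y/N] ").lower() == "y":
        with open("output.csv", "a", newline="", encoding="utf-8") as f:
            # write a header row once yourself if the file is new
            csv.DictWriter(f, fieldnames=FIELDS).writerow(row)
```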
DeepSeek R1 distilled to Qwen 1.5B easily runs on iPhone 16 with MLX swift | 1 | 2025-01-28T20:55:27 | https://x.com/awnihannun/status/1882105560201617903 | noblex33 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1icc298 | false | null | t3_1icc298 | /r/LocalLLaMA/comments/1icc298/deepseek_r1_distilled_to_qwen_15b_easily_runs_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': '3UM2Dd7C5unYe2JcNdZRNG9W2liO9pXdHQbTlsWITAc', 'resolutions': [{'height': 188, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=108&crop=smart&auto=webp&s=8bfd5a92ace69536f8e7c7d9ad7fa54a5b570cf0', 'width': 108}, {'height': 376, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=216&crop=smart&auto=webp&s=1378b7df3ad38feb79037c4d0ea423f9e27706cd', 'width': 216}, {'height': 557, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=320&crop=smart&auto=webp&s=036bd180dfeec1a8046db53ade35282a5bc9e7c6', 'width': 320}, {'height': 1114, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=640&crop=smart&auto=webp&s=1d574a9cbfda8e974c40fab9b4a12698d4c28986', 'width': 640}], 'source': {'height': 1254, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?auto=webp&s=5073f2ab1a626e122ad4ea807474c69f702fc786', 'width': 720}, 'variants': {}}]} |
||
DeepSeek R1 671B running on 2 M2 Ultras faster than reading speed | 137 | 2025-01-28T20:59:13 | https://x.com/awnihannun/status/1881412271236346233 | noblex33 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1icc5hq | false | null | t3_1icc5hq | /r/LocalLLaMA/comments/1icc5hq/deepseek_r1_671b_running_on_2_m2_ultras_faster/ | false | false | 137 | {'enabled': False, 'images': [{'id': 'tMwUF5c65bzEgJYqbM1Fw6MCe2gnXm3lEjOu-gPTSFE', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/T3dwFxBeTh14I0gRGwTgWSW_ZOjtsyWdThn4yOyUB0o.jpg?width=108&crop=smart&auto=webp&s=8444a8513e12a8d51885299e960245e1fe8083f4', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/T3dwFxBeTh14I0gRGwTgWSW_ZOjtsyWdThn4yOyUB0o.jpg?width=216&crop=smart&auto=webp&s=f54812ff60ad29bffc59f98f8490deee2529892c', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/T3dwFxBeTh14I0gRGwTgWSW_ZOjtsyWdThn4yOyUB0o.jpg?width=320&crop=smart&auto=webp&s=e2013b68f1f7c9da64a9231516dc95110cdbc9fd', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/T3dwFxBeTh14I0gRGwTgWSW_ZOjtsyWdThn4yOyUB0o.jpg?width=640&crop=smart&auto=webp&s=b3e486673640592fed5e6e7beea3c8482611b687', 'width': 640}, {'height': 577, 'url': 'https://external-preview.redd.it/T3dwFxBeTh14I0gRGwTgWSW_ZOjtsyWdThn4yOyUB0o.jpg?width=960&crop=smart&auto=webp&s=cf84bd74d5cb499cb76e9aa9cd87c54fc7c2c4ad', 'width': 960}, {'height': 650, 'url': 'https://external-preview.redd.it/T3dwFxBeTh14I0gRGwTgWSW_ZOjtsyWdThn4yOyUB0o.jpg?width=1080&crop=smart&auto=webp&s=cbb49d75cc73cb3d66d78acd4e48056b12b5e08a', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/T3dwFxBeTh14I0gRGwTgWSW_ZOjtsyWdThn4yOyUB0o.jpg?auto=webp&s=f3f58376c18336c911a908696438d13bf2919126', 'width': 1196}, 'variants': {}}]} |
||
Anxiety-riddled Deepseek R1 correctly counts 3 R's in Strawberry | 0 | >The word "strawberry" is spelled **S-T-R-A-W-B-E-R-R-Y**. Breaking it down:
>
>1. **S**
>2. **T**
>3. **R** (1st R)
>4. **A**
>5. **W**
>6. **B**
>7. **E**
>8. **R** (2nd R)
>9. **R** (3rd R)
>10. **Y**
>
>There are **3 R's** in "strawberry". 🌟
The chain of thought was a close one: it confused itself for weird reasons, but fortunately it didn't override itself and stuck with the array-count logic. | 2025-01-28T21:01:40 | https://www.reddit.com/r/LocalLLaMA/comments/1icc7tp/anxiety_riddled_deepseek_r1_correctly_counts_3_rs/ | thetaFAANG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icc7tp | false | null | t3_1icc7tp | /r/LocalLLaMA/comments/1icc7tp/anxiety_riddled_deepseek_r1_correctly_counts_3_rs/ | false | false | self | 0 | null |
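For reference, the count the model eventually settled on matches a direct check:

```python
# The "array count" the post alludes to, done directly on the string.
print("strawberry".count("r"))  # -> 3
```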
DeepSeek R1 distilled to Qwen 1.5B easily runs on iPhone 16 with MLX | 2 | 2025-01-28T21:05:38 | https://x.com/awnihannun/status/1882105560201617903 | noblex33 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1iccbb3 | false | null | t3_1iccbb3 | /r/LocalLLaMA/comments/1iccbb3/deepseek_r1_distilled_to_qwen_15b_easily_runs_on/ | false | false | 2 | {'enabled': False, 'images': [{'id': '3UM2Dd7C5unYe2JcNdZRNG9W2liO9pXdHQbTlsWITAc', 'resolutions': [{'height': 188, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=108&crop=smart&auto=webp&s=8bfd5a92ace69536f8e7c7d9ad7fa54a5b570cf0', 'width': 108}, {'height': 376, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=216&crop=smart&auto=webp&s=1378b7df3ad38feb79037c4d0ea423f9e27706cd', 'width': 216}, {'height': 557, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=320&crop=smart&auto=webp&s=036bd180dfeec1a8046db53ade35282a5bc9e7c6', 'width': 320}, {'height': 1114, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?width=640&crop=smart&auto=webp&s=1d574a9cbfda8e974c40fab9b4a12698d4c28986', 'width': 640}], 'source': {'height': 1254, 'url': 'https://external-preview.redd.it/dQmIGf-po4T_aqnDhrUUjQmdYRNAbpV1A5g9O_FoEZo.jpg?auto=webp&s=5073f2ab1a626e122ad4ea807474c69f702fc786', 'width': 720}, 'variants': {}}]} |
||
Beating cuBLAS in SGEMM by Programming Directly in PTX | 3 | 2025-01-28T21:13:22 | https://salykova.github.io/sgemm-gpu | noblex33 | salykova.github.io | 1970-01-01T00:00:00 | 0 | {} | 1icci0p | false | null | t3_1icci0p | /r/LocalLLaMA/comments/1icci0p/beating_cublas_in_sgemm_by_programming_directly/ | false | false | default | 3 | null |
|
My CEO is framing Cognitive AI as a selling point and it annoys me | 0 | Maybe I'm not well-versed enough in AI yet (I'm not on the hard tech side of things) but man, do people really give a f&!& right now about Cognitive AI versus Generative AI?? NO, they just want the f@cking tech in their hands & it to solve their problems & be safe.
I don't get his strategy and point with emphasizing cognitive AI as a main selling point at the company. | 2025-01-28T21:14:50 | https://www.reddit.com/r/LocalLLaMA/comments/1iccj8g/my_ceo_is_framing_cognitive_ai_as_a_selling_point/ | Hungry_Bunch2224 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iccj8g | false | null | t3_1iccj8g | /r/LocalLLaMA/comments/1iccj8g/my_ceo_is_framing_cognitive_ai_as_a_selling_point/ | false | false | self | 0 | null |
Assemble! Deepseek?! | 0 | Need as much information as possible on this in the next 30 minutes. Let's talk.
WHY IS DEEPSEEK TRENDING?
HOW ARE TOP COMPANIES' STOCKS GOING DOWN? | 2025-01-28T21:16:35 | https://www.reddit.com/r/LocalLLaMA/comments/1icckq3/assemble_deepseek/ | whodagoatyeet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icckq3 | false | null | t3_1icckq3 | /r/LocalLLaMA/comments/1icckq3/assemble_deepseek/ | false | false | self | 0 | null |
Gear Check - What do you need to run DeepSeek V3 | 3 | Hello fellow Llamas,
Anyone here managed to get DeepSeek V3 to run locally? What's your equipment setup? Any other thoughts you want to share? | 2025-01-28T21:23:27 | https://www.reddit.com/r/LocalLLaMA/comments/1iccqu4/gear_check_what_do_you_need_to_run_deepseek_v3/ | SQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iccqu4 | false | null | t3_1iccqu4 | /r/LocalLLaMA/comments/1iccqu4/gear_check_what_do_you_need_to_run_deepseek_v3/ | false | false | self | 3 | null |
I created MiraConverse, an Open Source project for voice chat with any AI Model, including local. It keeps context, and has a user-selectable trigger keyword (Mira) by default. | 66 | 2025-01-28T21:28:22 | https://youtu.be/n9oD7IPIWVI | SuperChewbacca | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1iccv8d | false | {'oembed': {'author_name': 'Chris Stephens', 'author_url': 'https://www.youtube.com/@chrisstephens9460', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/n9oD7IPIWVI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Introducing MiraConverse an Open Source AI Chat Assistant. Voice chat for any AI model."></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/n9oD7IPIWVI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Introducing MiraConverse an Open Source AI Chat Assistant. Voice chat for any AI model.', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'} | t3_1iccv8d | /r/LocalLLaMA/comments/1iccv8d/i_created_miraconverse_an_open_source_project_for/ | false | false | 66 | {'enabled': False, 'images': [{'id': '_CuKoYAD8LxCD4slE-WsmTTsLK9OeGsxNjJd8sG12H4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EGtt3zD3WMnG_Ch64RtvgmZIFRybIuqxtoVkKUqyuuc.jpg?width=108&crop=smart&auto=webp&s=5139c73e7c698df37c6a9b19d862197336716dd2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/EGtt3zD3WMnG_Ch64RtvgmZIFRybIuqxtoVkKUqyuuc.jpg?width=216&crop=smart&auto=webp&s=a5000d98c0dfcaf46d644d9d33274f3872d572d2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/EGtt3zD3WMnG_Ch64RtvgmZIFRybIuqxtoVkKUqyuuc.jpg?width=320&crop=smart&auto=webp&s=b6bc53781a062c204f0755b3f538b3cccb6f4906', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/EGtt3zD3WMnG_Ch64RtvgmZIFRybIuqxtoVkKUqyuuc.jpg?auto=webp&s=99385c75455c7a18eb443ab7a112ead6b684fe5d', 'width': 480}, 'variants': {}}]} |
||
OpenRouter offering Deepseek R1 (free) version with “chutes” provider. | 17 | This will be nice for anyone looking for an API to interface with Deepseek R1 for free. | 2025-01-28T21:35:43 | https://www.reddit.com/gallery/1icd1oi | AIGuy3000 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1icd1oi | false | null | t3_1icd1oi | /r/LocalLLaMA/comments/1icd1oi/open_router_offering_deepseek_r1_free_version/ | false | false | 17 | null |
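Since OpenRouter exposes an OpenAI-compatible endpoint, the free R1 listing can be queried with the standard `openai` client pointed at OpenRouter's base URL. A minimal sketch; the model slug shown is an assumption, so verify it on the model page:

```python
# Query DeepSeek R1 (free) through OpenRouter's OpenAI-compatible API.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",
)
completion = client.chat.completions.create(
    model="deepseek/deepseek-r1:free",  # assumed slug; verify on openrouter.ai
    messages=[{"role": "user", "content": "Summarize the Riemann hypothesis in two sentences."}],
)
print(completion.choices[0].message.content)
```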
|
Given DeepSeek's difficulty dealing with demand, are there alternative R1 hosts? | 1 | I'm looking to use R1 as a replacement for my o1 subscription, but DeepSeek seems to struggle to handle demand from the general public. I'm wondering: is the full model hosted anywhere else that I can get an API subscription with? Ideally a stable and reliable platform that won't put severe restrictions on use. | 2025-01-28T21:36:54 | https://www.reddit.com/r/LocalLLaMA/comments/1icd2ng/given_deepseeks_difficulty_dealing_with_demand/ | Qaxar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icd2ng | false | null | t3_1icd2ng | /r/LocalLLaMA/comments/1icd2ng/given_deepseeks_difficulty_dealing_with_demand/ | false | false | self | 1 | null |
Don't forget Deepseek 2.5 (and Unsloth) | 19 | Back in November I got the first Deepseek 2.5 running on my Epyc Rome 256GB machine without my 3090. It's MoE just like 3 and R1 but with fewer parameters (236B total, 21B active). I get like 4 tk/sec with IQ4_XS.
Deepseek released an update for 2.5 in December. It is 12th on LMArena and on par with Sonnet.
If u/danielhanchen could do his magic on this one, many more people could benefit from Deepseek.
IQ2_XXS would be around 64GB but MoE 🤩
That'd be awesome. | 2025-01-28T21:40:02 | https://www.reddit.com/r/LocalLLaMA/comments/1icd5fc/dont_forget_deepseek_25_and_unsloth/ | SunTrainAi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icd5fc | false | null | t3_1icd5fc | /r/LocalLLaMA/comments/1icd5fc/dont_forget_deepseek_25_and_unsloth/ | false | false | self | 19 | null |
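The 64GB figure is plausible as a back-of-the-envelope estimate: IQ2_XXS is roughly 2.06 bits per weight, so 236B parameters land in the same ballpark (this treats all tensors as uniformly quantized, which real GGUF files are not):

```python
# Rough GGUF size estimate for DeepSeek 2.5 at IQ2_XXS.
params = 236e9            # total parameters (MoE, 21B active)
bits_per_weight = 2.06    # approximate IQ2_XXS rate
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.0f} GB")  # ~61 GB, close to the 64GB guess above
```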
RWKV-7 "Goose" 🪿 1.5B release | 64 | 2025-01-28T21:47:07 | EmbarrassedBiscotti9 | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1icdbej | false | null | t3_1icdbej | /r/LocalLLaMA/comments/1icdbej/rwkv7_goose_15b_release/ | false | false | 64 | {'enabled': True, 'images': [{'id': 'rYxCrGaCM2oaaI_DfzuQ2ztRQ4BQkwwGnMIiaum1vos', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/LVSONFUnHP0aVif7PvDrqZvU9jQJE2ovO8Viyl41lQM.png?width=108&crop=smart&auto=webp&s=5aa88e6322aa65f9c5c1d213c6cf67837751890f', 'width': 108}, {'height': 74, 'url': 'https://external-preview.redd.it/LVSONFUnHP0aVif7PvDrqZvU9jQJE2ovO8Viyl41lQM.png?width=216&crop=smart&auto=webp&s=b074fcb097ff6db25f7ba302d3ffe7e7c493f498', 'width': 216}, {'height': 109, 'url': 'https://external-preview.redd.it/LVSONFUnHP0aVif7PvDrqZvU9jQJE2ovO8Viyl41lQM.png?width=320&crop=smart&auto=webp&s=950dc202ca52f1ae29db692c5f91584f7637a538', 'width': 320}, {'height': 219, 'url': 'https://external-preview.redd.it/LVSONFUnHP0aVif7PvDrqZvU9jQJE2ovO8Viyl41lQM.png?width=640&crop=smart&auto=webp&s=07192aca52fbd82e64156d130dc00e311c1e4c45', 'width': 640}, {'height': 328, 'url': 'https://external-preview.redd.it/LVSONFUnHP0aVif7PvDrqZvU9jQJE2ovO8Viyl41lQM.png?width=960&crop=smart&auto=webp&s=c6f72928756c8b75d42beb41cfc0e71e07d7ed81', 'width': 960}, {'height': 370, 'url': 'https://external-preview.redd.it/LVSONFUnHP0aVif7PvDrqZvU9jQJE2ovO8Viyl41lQM.png?width=1080&crop=smart&auto=webp&s=b05c39fca788f9d54e57206c001328528e7f0f5f', 'width': 1080}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/LVSONFUnHP0aVif7PvDrqZvU9jQJE2ovO8Viyl41lQM.png?auto=webp&s=af3317585577197accc2f39da2c14ce0c6641cd4', 'width': 1681}, 'variants': {}}]} |
|||
Why was my post asking for help deleted? | 1 | [removed] | 2025-01-28T21:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1icdc2d/why_was_my_post_asking_for_help_deleted/ | solidhadriel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icdc2d | false | null | t3_1icdc2d | /r/LocalLLaMA/comments/1icdc2d/why_was_my_post_asking_for_help_deleted/ | false | false | self | 1 | null |
Claude Computer Use Is Nice - But Something Open-Source? | 1 | [removed] | 2025-01-28T21:49:48 | https://www.reddit.com/r/LocalLLaMA/comments/1icddpl/claude_computer_use_is_nice_but_something/ | TheJustinify | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icddpl | false | null | t3_1icddpl | /r/LocalLLaMA/comments/1icddpl/claude_computer_use_is_nice_but_something/ | false | false | self | 1 | null |
Is there a way to support or donate to DeepSeek? | 0 | Apparently DeepSeek's servers are under heavy cyber attacks from unknown sources. It's like guessing the villain of the episode from Scooby-Doo if you ask me, and it's pretty obvious that after taking a massive loss, big tech is uniting its forces to rally against DeepSeek and bully them.
I'm loving DeepSeek, and their idea is so genuine and nice. I don't like seeing it all go to waste because Silicon Valley zealots don't like it. Is there ANY way we could help DeepSeek? | 2025-01-28T21:52:17 | https://www.reddit.com/r/LocalLLaMA/comments/1icdfrv/is_there_a_away_to_support_or_donate_to_deepseek/ | Personal_Story_4853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icdfrv | false | null | t3_1icdfrv | /r/LocalLLaMA/comments/1icdfrv/is_there_a_away_to_support_or_donate_to_deepseek/ | false | false | self | 0 | null |
How can I host several llms locally on my local network. | 1 | [removed] | 2025-01-28T21:55:53 | https://www.reddit.com/r/LocalLLaMA/comments/1icdivc/how_can_i_host_several_llms_locally_on_my_local/ | Greek_Place2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icdivc | false | null | t3_1icdivc | /r/LocalLLaMA/comments/1icdivc/how_can_i_host_several_llms_locally_on_my_local/ | false | false | self | 1 | null |
Sir it's time that we fire back | 1 | 2025-01-28T21:56:50 | dW5kZWZpbmVk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1icdjpv | false | null | t3_1icdjpv | /r/LocalLLaMA/comments/1icdjpv/sir_its_time_that_we_fire_back/ | false | false | 1 | {'enabled': True, 'images': [{'id': '2NcpmFlIRu_Xww4XQR_9Om23OSrAbRdR-7Jo3yhP2qU', 'resolutions': [{'height': 167, 'url': 'https://preview.redd.it/3yottesh3tfe1.png?width=108&crop=smart&auto=webp&s=44f8699d243a8c23584fb584bfdb416f4f3200b1', 'width': 108}, {'height': 334, 'url': 'https://preview.redd.it/3yottesh3tfe1.png?width=216&crop=smart&auto=webp&s=ca17a5efd5599857da254f007e6c3d8c786574c8', 'width': 216}, {'height': 495, 'url': 'https://preview.redd.it/3yottesh3tfe1.png?width=320&crop=smart&auto=webp&s=ac4a47041264547ed5cae4692f250c912af0c948', 'width': 320}], 'source': {'height': 891, 'url': 'https://preview.redd.it/3yottesh3tfe1.png?auto=webp&s=007cf465ed5147e7f3dee37162056c0131d4f016', 'width': 575}, 'variants': {}}]} |
|||
When running all Llama 3 models through the Steam Deck GPU and CPU using GPT4All, it doesn't appear to use the whole GPU | 1 | Yes, I know the Steam Deck isn't the best, but it's what I've got right now. As said, it's not using the entire GPU like it does the CPU: only about 2.5 W on the GPU vs 12.5 W on the CPU. I'm not sure if it's a hardware limitation or if I just don't have the right settings. I am specifically trying to optimize Llama 3 8B Instruct. I am new to this, so bear with me if I missed other important information. | 2025-01-28T22:02:37 | https://www.reddit.com/r/LocalLLaMA/comments/1icdopw/when_running_all_llama_3_models_through_steam/ | Striderdud | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icdopw | false | null | t3_1icdopw | /r/LocalLLaMA/comments/1icdopw/when_running_all_llama_3_models_through_steam/ | false | false | self | 1 | null |
Guys, is there an uncensored / ablated DeepSeek version that is not lobotomized? | 7 | Just asking cuz I see lots of different versions on HuggingFace and I'm not sure if there's a go-to yet.
Or should I use a Llama finetuned on R1 or V3 or whatever?
What's your best practice / pro tip? | 2025-01-28T22:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1icdqwq/guys_is_there_an_uncensored_ablated_deepseek/ | MakitaNakamoto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icdqwq | false | null | t3_1icdqwq | /r/LocalLLaMA/comments/1icdqwq/guys_is_there_an_uncensored_ablated_deepseek/ | false | false | self | 7 | null |
Does having VRAM+RAM have an advantage over only RAM? | 2 | As the question says: I have a 3090 plus 64 GB of RAM, and I want to know how I can maximize the use of both for inference. | 2025-01-28T22:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/1icdrta/does_having_vramram_have_an_advantage_over_only/ | KillerX629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icdrta | false | null | t3_1icdrta | /r/LocalLLaMA/comments/1icdrta/does_having_vramram_have_an_advantage_over_only/ | false | false | self | 2 | null |
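The usual answer is llama.cpp-style layer offloading: as many transformer layers as fit go into the 3090's VRAM and the remainder run from system RAM, which is generally much faster than CPU-only inference. A minimal sketch with `llama-cpp-python`; the model path and layer count are placeholders to tune against a 24GB card:

```python
# Split a GGUF model between GPU (VRAM) and CPU (RAM) with llama-cpp-python.
from llama_cpp import Llama  # pip install llama-cpp-python (CUDA build)

llm = Llama(
    model_path="models/your-model.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=40,  # raise until VRAM is nearly full; -1 offloads everything
    n_ctx=4096,
)
out = llm("Q: Why offload layers to the GPU?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

Raising `n_gpu_layers` until VRAM is nearly full is the main knob; whatever doesn't fit simply streams from RAM.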
Dual A100 Workstation Cooling | 1 | Suppose, hypothetically, I managed to get my hands on two A100 40GB GPUs (PCIe, not the SXM) and wanted to NVLink them and put them in a new full-sized tower build. How would I cool them?
I have never set up a custom water cooling solution and I’d mess it up. I can’t seem to find a good water block for them, and removing anything existing on the A100 sounds like a recipe for disaster in my hands.
I see some fan shrouds online, but with them on, would I be able to stack the GPUs in the PCIe lanes in a way that allows for NVLink? And how high powered would the fans need to be? Jet-engine loud?
Would ditching a workstation build and opting for a rack mounted server build be a better idea?
Very new to the world of server grade GPUs. Please be patient with me. | 2025-01-28T22:14:18 | https://www.reddit.com/r/LocalLLaMA/comments/1icdylp/dual_a100_workstation_cooling/ | AWESOMESAUCE170 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1icdylp | false | null | t3_1icdylp | /r/LocalLLaMA/comments/1icdylp/dual_a100_workstation_cooling/ | false | false | self | 1 | null |
Multi-head latent attention and other KV cache tricks explained | 2 | 2025-01-28T22:17:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ice10p/multihead_latent_attention_and_other_kv_cache/ | Brilliant-Day2748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ice10p | false | null | t3_1ice10p | /r/LocalLLaMA/comments/1ice10p/multihead_latent_attention_and_other_kv_cache/ | false | false | 2 | null |