Dataset schema (one row per post; ranges are observed min–max):

  title       string, length 1–300
  score       int64, 0–8.54k
  selftext    string, length 0–40k
  created     timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29
  url         string, length 0–878
  author      string, length 3–20
  domain      string, length 0–82
  edited      timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18
  gilded      int64, 0–2
  gildings    string, 7 classes
  id          string, length 7 (fixed)
  locked      bool, 2 classes
  media       string, length 646–1.8k
  name        string, length 10 (fixed)
  permalink   string, length 33–82
  spoiler     bool, 2 classes
  stickied    bool, 2 classes
  thumbnail   string, length 4–213
  ups         int64, 0–8.54k
  preview     string, length 301–5.01k
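The schema above can be illustrated with one concrete row as a plain Python dict (a minimal sketch: the values are copied from the "Deepseek like a boss" record in this dump; the long `media`, `thumbnail`, and `preview` fields are omitted for brevity):

```python
from datetime import datetime

# One record under the schema above; values come from an actual row below.
post = {
    "title": "Deepseek like a boss",  # string, 1-300 chars
    "score": 1511,                    # int64
    "selftext": "",                   # empty for link/image posts
    "created": datetime(2025, 1, 27, 3, 52, 40),
    "url": "https://i.redd.it/d6slqdvilgfe1.jpeg",
    "author": "AdditionalWeb107",
    "domain": "i.redd.it",
    "edited": datetime(1970, 1, 1),   # Unix-epoch sentinel = never edited
    "gilded": 0,
    "gildings": "{}",
    "id": "1iaz2or",                  # fixed 7-character id
    "locked": False,
    "name": "t3_1iaz2or",             # fullname: "t3_" prefix + id
    "permalink": "/r/LocalLLaMA/comments/1iaz2or/deepseek_like_a_boss/",
    "spoiler": False,
    "stickied": False,
    "ups": 1511,
}

# Invariants that hold across every row in the dump:
assert post["name"] == "t3_" + post["id"]
assert len(post["id"]) == 7
```

Note the `edited` column uses the Unix epoch (1970-01-01) as a sentinel for "never edited", which is why it appears in almost every row.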
Byee
238
2025-01-27T01:44:42
https://i.redd.it/jauvd8soyffe1.jpeg
amirulnaim2000
i.redd.it
1970-01-01T00:00:00
0
{}
1iawl12
false
null
t3_1iawl12
/r/LocalLLaMA/comments/1iawl12/byee/
false
false
https://b.thumbs.redditm…c-rXFOSq3opU.jpg
238
{'enabled': True, 'images': [{'id': 'nTPmMZJdoqsI5aMh0iJTyLdoyKuLCkMlW4ow5v8Yino', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/jauvd8soyffe1.jpeg?width=108&crop=smart&auto=webp&s=c6278027fdc898dc314d5570355e7028898c25b0', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/jauvd8soyffe1.jpeg?width=216&crop=smart&auto=webp&s=bd4816de9d7a4a53918a7b1ead47a36d7e4bed7e', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/jauvd8soyffe1.jpeg?width=320&crop=smart&auto=webp&s=f98d8a0c2447bbee55852101602ad1ba740a3960', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/jauvd8soyffe1.jpeg?auto=webp&s=ffd4729f066de753d92b2642ef884bdc5443b3e5', 'width': 590}, 'variants': {}}]}
NES/SNES Retro UI for LLMs
3
Hi! As the title suggests, I'm wondering if there are any cool UIs that look like old NES/SNES games, with accompanying sounds too if that's possible. My buddies and I were thinking of creating a UI for various LLMs with pictures of us in 8-bit or 16-bit graphics, with Super Mario Bros-themed buttons, colors, and sounds. Like some sort of superhero team, LMAO. Is there anything similar available at the moment? Possibly a cyberpunk theme?
2025-01-27T01:48:45
https://www.reddit.com/r/LocalLLaMA/comments/1iawnu7/nessnes_retro_ui_for_llms/
Proud_Fox_684
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iawnu7
false
null
t3_1iawnu7
/r/LocalLLaMA/comments/1iawnu7/nessnes_retro_ui_for_llms/
false
false
self
3
null
The Yellow Swan? Is it popping?
4
Nvidia opens down more than -5% in overnight trading in its first reaction to DeepSeek. The biggest market headwind just came out of nowhere. Nasdaq 100 futures are now down -330 POINTS since the market opened just hours ago as DeepSeek takes #1 on the App Store. 🤔
2025-01-27T01:54:52
https://www.reddit.com/gallery/1iaws33
Neat-Computer-6975
reddit.com
1970-01-01T00:00:00
0
{}
1iaws33
false
null
t3_1iaws33
/r/LocalLLaMA/comments/1iaws33/the_yellow_swan_is_it_popping/
false
false
https://b.thumbs.redditm…vwKSv7_F3ktA.jpg
4
null
Well we all knew it'd have to be a bit censored lol
1
[removed]
2025-01-27T01:57:09
https://www.reddit.com/r/LocalLLaMA/comments/1iawtpl/well_we_all_knew_itd_have_to_be_a_bit_censored_lol/
Background-Remote765
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iawtpl
false
null
t3_1iawtpl
/r/LocalLLaMA/comments/1iawtpl/well_we_all_knew_itd_have_to_be_a_bit_censored_lol/
false
false
https://b.thumbs.redditm…mOGE8DpJ8hYQ.jpg
1
null
Well we all knew R1 would be a bit biased lol
1
2025-01-27T02:00:44
https://www.reddit.com/gallery/1iawwah
Background-Remote765
reddit.com
1970-01-01T00:00:00
0
{}
1iawwah
false
null
t3_1iawwah
/r/LocalLLaMA/comments/1iawwah/well_we_all_knew_r1_would_be_a_bit_biased_lol/
false
false
https://b.thumbs.redditm…qtFGznRd2rwg.jpg
1
null
How can I configure Kimi AI Android app to be in English?
2
How can I configure Kimi AI Android app to be in English?
2025-01-27T02:02:24
https://www.reddit.com/r/LocalLLaMA/comments/1iawxja/how_can_i_configure_kimi_ai_android_app_to_be_in/
Franck_Dernoncourt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iawxja
false
null
t3_1iawxja
/r/LocalLLaMA/comments/1iawxja/how_can_i_configure_kimi_ai_android_app_to_be_in/
false
false
self
2
null
🎉 Announcing orca-mini-phi-4
1
[removed]
2025-01-27T02:07:46
https://www.reddit.com/r/LocalLLaMA/comments/1iax1bl/announcing_orcaminiphi4/
Remarkable-Spite-107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iax1bl
false
null
t3_1iax1bl
/r/LocalLLaMA/comments/1iax1bl/announcing_orcaminiphi4/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gB8Elf-NRas8DNzxZ3XHvaqW5xslY9xtCr21OUuhBu0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z65rym9vce-lkH9rwbBStJ7Eb-WuZBlS3uu_wUaUNd8.jpg?width=108&crop=smart&auto=webp&s=acd93b930a8b902d9c9299e22a40b1a57149ca62', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/z65rym9vce-lkH9rwbBStJ7Eb-WuZBlS3uu_wUaUNd8.jpg?width=216&crop=smart&auto=webp&s=4387ea3ece6ba692c57fffa8dc27218b953f8aad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/z65rym9vce-lkH9rwbBStJ7Eb-WuZBlS3uu_wUaUNd8.jpg?width=320&crop=smart&auto=webp&s=70389dfdffd83208f3c91b1cbf6a0c3f9b78d950', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/z65rym9vce-lkH9rwbBStJ7Eb-WuZBlS3uu_wUaUNd8.jpg?width=640&crop=smart&auto=webp&s=0f1b9ad44f5c2fe5c2c5d34066c0bc8667f780af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/z65rym9vce-lkH9rwbBStJ7Eb-WuZBlS3uu_wUaUNd8.jpg?width=960&crop=smart&auto=webp&s=4c2ce41cca9acb3c7ef036402d7b6b8c09506ea8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/z65rym9vce-lkH9rwbBStJ7Eb-WuZBlS3uu_wUaUNd8.jpg?width=1080&crop=smart&auto=webp&s=759c9b2f7582dd486276351cffc1630dc7776e3e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/z65rym9vce-lkH9rwbBStJ7Eb-WuZBlS3uu_wUaUNd8.jpg?auto=webp&s=dfe1f3afc364e679bb6ed713e1ce813c15af9e8e', 'width': 1200}, 'variants': {}}]}
AI Reading Partner
1
I like Claude a lot. I find Claude is in fact more "anthropic" (pun intended), or more "anthropomorphic", than any of the other models. It's just a better conversationalist overall. I use it as my reading buddy/book club sometimes. Lately I've become a lot more aware of the environmental impact of these AI queries, and also of Anthropic's involvement with the U.S. military (yes, I know they're all doing it and that it was inevitable, but I still don't like it). I mean, I'm grateful for the opportunity to play around with all the AI models out there, but I think the price tag is a bit too high simply for the sake of my entertainment. I am trying to find out whether any of the open-source local LLMs work as well as Claude in the particular way I mentioned. Coding is not an important function for me. Has anyone come across something similar?
2025-01-27T02:18:14
https://www.reddit.com/r/LocalLLaMA/comments/1iax8nv/ai_reading_partner/
Saddyblues
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iax8nv
false
null
t3_1iax8nv
/r/LocalLLaMA/comments/1iax8nv/ai_reading_partner/
false
false
self
1
null
How much RAM do I need? ROCM vs CUDA
1
[removed]
2025-01-27T02:19:35
https://www.reddit.com/r/LocalLLaMA/comments/1iax9m1/how_much_ram_do_i_need_rocm_vs_cuda/
Josue999it
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iax9m1
false
null
t3_1iax9m1
/r/LocalLLaMA/comments/1iax9m1/how_much_ram_do_i_need_rocm_vs_cuda/
false
false
self
1
null
I have a 12GB 3060, is it possible to fine-tune ANY model?
5
Hey everyone. I have nothing, but I have a 3060 12 GB with 16 GB RAM going strong in my life. About the only good thing. I want to fine-tune a small model for my specific use case. I am still learning about LLMs, their capabilities, and such. I am building an app for construction project managers and using Ollama in development to have AI capabilities. I used my last $5 to top up the DeepSeek API, but I find that it's not exactly usable. I don't want it to read any drawings. I want it to help me out locally, so that when I am preparing my lawsuits, disputing things like extra costs, or charging extra, it can help me. If I am able to fine-tune it, then it can suggest preventive measures prior to a huge cost. Once fine-tuned, I aim to host via RunPod (but I am still researching). My issue is, I have nothing.
2025-01-27T02:32:44
https://www.reddit.com/r/LocalLLaMA/comments/1iaxin9/i_have_a_12gb_3060_is_it_possible_to_finetune_any/
shakespear94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iaxin9
false
null
t3_1iaxin9
/r/LocalLLaMA/comments/1iaxin9/i_have_a_12gb_3060_is_it_possible_to_finetune_any/
false
false
self
5
null
Wholesome interaction with deepseek v3
71
2025-01-27T03:01:55
https://www.reddit.com/gallery/1iay2ik
ParadiseMaker69
reddit.com
1970-01-01T00:00:00
0
{}
1iay2ik
false
null
t3_1iay2ik
/r/LocalLLaMA/comments/1iay2ik/wholesome_interaction_with_deepseek_v3/
false
false
https://b.thumbs.redditm…tRsV6ljgQECo.jpg
71
null
Tried to ask about Tiananmen Square then this happened…
0
2025-01-27T03:02:04
https://v.redd.it/ssyv0bcecgfe1
kruckshanks
v.redd.it
1970-01-01T00:00:00
0
{}
1iay2mn
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ssyv0bcecgfe1/DASHPlaylist.mpd?a=1740539378%2COTdiODg3ODE2YjJiNWYwYjVkNjNlOTRjMDg4MzFhNWQwMDkyNWI1YWVmNzdkMTkxNWNiYWM1ZDI1YWNjNmExMA%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/ssyv0bcecgfe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/ssyv0bcecgfe1/HLSPlaylist.m3u8?a=1740539378%2CODQ3YmNkMzVkMzE2ZGM3MWE2MDdlYzc0MTUxYjhjMjczMmFhOWU5MWI1ZDlmZTU3NThhY2E2MWViYjNjMjc2Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ssyv0bcecgfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 646}}
t3_1iay2mn
/r/LocalLLaMA/comments/1iay2mn/tried_to_ask_about_tiananmen_square_then_this/
false
false
https://external-preview…ab0c8b79b17490e6
0
{'enabled': False, 'images': [{'id': 'MG9heDNvNWVjZ2ZlMSQUZv8gitevHljgYgtxd7-9ky8WENoO-7YRs_BN2pGA', 'resolutions': [{'height': 214, 'url': 'https://external-preview.redd.it/MG9heDNvNWVjZ2ZlMSQUZv8gitevHljgYgtxd7-9ky8WENoO-7YRs_BN2pGA.png?width=108&crop=smart&format=pjpg&auto=webp&s=35fbfa2a1c3eb44b87bcc004c30089ed4f52f4e0', 'width': 108}, {'height': 428, 'url': 'https://external-preview.redd.it/MG9heDNvNWVjZ2ZlMSQUZv8gitevHljgYgtxd7-9ky8WENoO-7YRs_BN2pGA.png?width=216&crop=smart&format=pjpg&auto=webp&s=f910066aeedc20f45f33d74677ad1ccbeed83984', 'width': 216}, {'height': 634, 'url': 'https://external-preview.redd.it/MG9heDNvNWVjZ2ZlMSQUZv8gitevHljgYgtxd7-9ky8WENoO-7YRs_BN2pGA.png?width=320&crop=smart&format=pjpg&auto=webp&s=2249eec5333e2eea7c20796e6b8b6a5fa9d93851', 'width': 320}, {'height': 1269, 'url': 'https://external-preview.redd.it/MG9heDNvNWVjZ2ZlMSQUZv8gitevHljgYgtxd7-9ky8WENoO-7YRs_BN2pGA.png?width=640&crop=smart&format=pjpg&auto=webp&s=426369d561143def6ca11e96e895f078d9b0f190', 'width': 640}], 'source': {'height': 1758, 'url': 'https://external-preview.redd.it/MG9heDNvNWVjZ2ZlMSQUZv8gitevHljgYgtxd7-9ky8WENoO-7YRs_BN2pGA.png?format=pjpg&auto=webp&s=1f056c4912694c364b80c056965185dd0b0a4be7', 'width': 886}, 'variants': {}}]}
Where to find xtts2 models to compare my finetune?
1
Hey all! I've been fine-tuning some voices for Skyrim. I know Mantella does XTTS, and I want to compare my results, but I don't know where to get models. I've seen the XTTS models on Nexus Mods, but their outputs seem completely different from mine. Am I missing something, or am I being dense? Currently I have a fine-tune output with a model.json, an output WAV, and the .pth files, but everything I'm finding has only the one model and a couple of JSON files. I'm using AllTalk 2 for my generation and fine-tuning. Cheers for the help!
2025-01-27T03:16:14
https://www.reddit.com/r/LocalLLaMA/comments/1iaycqb/where_to_find_xtts2_models_to_compare_my_finetune/
charlieboy2001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iaycqb
false
null
t3_1iaycqb
/r/LocalLLaMA/comments/1iaycqb/where_to_find_xtts2_models_to_compare_my_finetune/
false
false
self
1
null
I miss the days when ClosedAI was OpenAI
132
Since OpenAI became ClosedAI, they seem to have lost their innovativeness, under the delusion that they have created a moat that others cannot cross. Maybe if they had continued to be OpenAI, we would be seeing an open-source GPT-5 and o5 by now.
2025-01-27T03:23:39
https://www.reddit.com/r/LocalLLaMA/comments/1iayi0m/i_miss_the_days_when_closedai_was_openai/
nknnr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iayi0m
false
null
t3_1iayi0m
/r/LocalLLaMA/comments/1iayi0m/i_miss_the_days_when_closedai_was_openai/
false
false
self
132
null
Prediction of Token Generation Performance
2
For Llama 3.3-70B (for example), is there a way to predict the token generation rate from GPU system capabilities, assuming other impactful parameters are held constant? (Or maybe a rule of thumb?)
2025-01-27T03:33:08
https://www.reddit.com/r/LocalLLaMA/comments/1iayoiw/prediction_of_token_generation_performance/
GaltEngineering
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iayoiw
false
null
t3_1iayoiw
/r/LocalLLaMA/comments/1iayoiw/prediction_of_token_generation_performance/
false
false
self
2
null
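A common back-of-envelope answer to the token-rate question above: single-stream decoding is memory-bandwidth-bound, so the upper bound on tokens per second is roughly GPU memory bandwidth divided by the bytes read per token (about the size of the model's weights at the chosen quantization). A minimal sketch; the bandwidth and model-size figures are illustrative assumptions, not measurements:

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rule-of-thumb upper bound for single-stream decode speed:
    every generated token streams all the weights through memory once."""
    return bandwidth_gb_s / model_size_gb

# Illustrative: a 70B model at 4-bit (~40 GB of weights) on a GPU with
# ~1000 GB/s of memory bandwidth caps out near 25 tokens/s.
print(round(decode_tokens_per_sec(1000, 40), 1))
```

Real throughput lands below this bound (KV-cache reads, kernel overhead), but it predicts the right order of magnitude and explains why quantization speeds up decoding.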
Can someone please answer me this question?
1
[removed]
2025-01-27T03:44:46
https://www.reddit.com/r/LocalLLaMA/comments/1iaywkn/can_someone_please_answer_me_this_question/
Paulsybrandy1980
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iaywkn
false
null
t3_1iaywkn
/r/LocalLLaMA/comments/1iaywkn/can_someone_please_answer_me_this_question/
false
false
self
1
null
How much RAM to run locally? ROCm or CUDA?
1
[removed]
2025-01-27T03:45:31
https://www.reddit.com/r/LocalLLaMA/comments/1iayx54/cuánta_ram_para_ejecutar_localmente_rocm_o_cuda/
Josue999it
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iayx54
false
null
t3_1iayx54
/r/LocalLLaMA/comments/1iayx54/cuánta_ram_para_ejecutar_localmente_rocm_o_cuda/
false
false
self
1
null
Deepseek like a boss
1511
2025-01-27T03:52:40
https://i.redd.it/d6slqdvilgfe1.jpeg
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1iaz2or
false
null
t3_1iaz2or
/r/LocalLLaMA/comments/1iaz2or/deepseek_like_a_boss/
false
false
https://a.thumbs.redditm…PUArZS5sc4N0.jpg
1511
{'enabled': True, 'images': [{'id': 'rDcSUyR4TL6vkPJVfkPiNze_xhsxa6rNnr3_wcqTP50', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/d6slqdvilgfe1.jpeg?width=108&crop=smart&auto=webp&s=536f49ae4c00fdba2161cd5492692e29035dc251', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/d6slqdvilgfe1.jpeg?width=216&crop=smart&auto=webp&s=1c204069be5cd27243ebd1e0db09f34732e5bddd', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/d6slqdvilgfe1.jpeg?width=320&crop=smart&auto=webp&s=1d04129ded0318925599e60e517a7ed4de9e645c', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/d6slqdvilgfe1.jpeg?width=640&crop=smart&auto=webp&s=5b3de28e4d080e64aa2341a87efd989cd1ede176', 'width': 640}], 'source': {'height': 680, 'url': 'https://preview.redd.it/d6slqdvilgfe1.jpeg?auto=webp&s=b9c9e34cb273f91bbb0fb9f02a0b8aa5952c39e3', 'width': 680}, 'variants': {}}]}
Deepseek R1: Maybe I am prompting incorrectly but wasn't impressive for me.
1
[removed]
2025-01-27T04:10:14
https://www.reddit.com/r/LocalLLaMA/comments/1iazgcj/deepseek_r1_maybe_i_am_prompting_incorrectly_but/
isposinf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iazgcj
false
null
t3_1iazgcj
/r/LocalLLaMA/comments/1iazgcj/deepseek_r1_maybe_i_am_prompting_incorrectly_but/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uzHoHuYzF65euLO33pE8bj7DqAbD2sD0FUyqBodk3Tg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/smct_A71iT6IAJ1Vriww_U9WfZXwDe0PFcp-wn1RZTY.jpg?width=108&crop=smart&auto=webp&s=3eeadabfa1b6599698bb1a0500d8328b61038b8d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/smct_A71iT6IAJ1Vriww_U9WfZXwDe0PFcp-wn1RZTY.jpg?width=216&crop=smart&auto=webp&s=bcf64a090a99ee1f0569eed45bc1b12b57d1c3f3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/smct_A71iT6IAJ1Vriww_U9WfZXwDe0PFcp-wn1RZTY.jpg?width=320&crop=smart&auto=webp&s=5210b91da98af0362f93623a73747010e97db1b1', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/smct_A71iT6IAJ1Vriww_U9WfZXwDe0PFcp-wn1RZTY.jpg?width=640&crop=smart&auto=webp&s=706ec678461c64e4dae87fe0a5c1824292833c21', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/smct_A71iT6IAJ1Vriww_U9WfZXwDe0PFcp-wn1RZTY.jpg?width=960&crop=smart&auto=webp&s=864451206412dcb17be8d9f8c32da2fa96d96623', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/smct_A71iT6IAJ1Vriww_U9WfZXwDe0PFcp-wn1RZTY.jpg?width=1080&crop=smart&auto=webp&s=3552e86891211c5d11322fbc0de51269065eb0b7', 'width': 1080}], 'source': {'height': 1404, 'url': 'https://external-preview.redd.it/smct_A71iT6IAJ1Vriww_U9WfZXwDe0PFcp-wn1RZTY.jpg?auto=webp&s=703d5375bbd3200aef38a636264e9deb7cab3763', 'width': 2500}, 'variants': {}}]}
Did Gemini just leak its internal API? What other commands are known?
0
2025-01-27T04:12:32
https://www.reddit.com/gallery/1iazi35
RetiredApostle
reddit.com
1970-01-01T00:00:00
0
{}
1iazi35
false
null
t3_1iazi35
/r/LocalLLaMA/comments/1iazi35/did_gemini_just_leak_its_internal_api_what_other/
false
false
https://b.thumbs.redditm…OxZGLkCeC39U.jpg
0
null
@emostaque : The future is local inference
66
2025-01-27T04:12:37
https://i.redd.it/56cwsgs2pgfe1.jpeg
MrWidmoreHK
i.redd.it
1970-01-01T00:00:00
0
{}
1iazi5b
false
null
t3_1iazi5b
/r/LocalLLaMA/comments/1iazi5b/emostaque_the_future_is_local_inference/
false
false
https://b.thumbs.redditm…fhpeajX7xqjk.jpg
66
{'enabled': True, 'images': [{'id': 'LGgYUm9a2Vp3WmZADs5ORqNBaop-17RVvBtBqyxFw5I', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/56cwsgs2pgfe1.jpeg?width=108&crop=smart&auto=webp&s=261143238dff99cb732757e73031352cd0f80ca0', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/56cwsgs2pgfe1.jpeg?width=216&crop=smart&auto=webp&s=c5e8c07ab03fb358027d89a1189021fd7ca2fbd4', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/56cwsgs2pgfe1.jpeg?width=320&crop=smart&auto=webp&s=2a620c5c9c454295841c20208d57175788b7d4db', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/56cwsgs2pgfe1.jpeg?width=640&crop=smart&auto=webp&s=68ee5a33606cf09deb184841e83242faf9fb6f01', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/56cwsgs2pgfe1.jpeg?width=960&crop=smart&auto=webp&s=0abb3e65cd094ac8953b24de067db96a0bcfec90', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/56cwsgs2pgfe1.jpeg?width=1080&crop=smart&auto=webp&s=fadfbb334b5d978a5decd1bbebc58c5ddc6c1131', 'width': 1080}], 'source': {'height': 2405, 'url': 'https://preview.redd.it/56cwsgs2pgfe1.jpeg?auto=webp&s=02b70595bb5782e2bd2415e42ca425fd0dd8705a', 'width': 1080}, 'variants': {}}]}
Deepseek is Down!
13
https://preview.redd.it/…f7a3d484587ccd
2025-01-27T04:18:50
https://www.reddit.com/r/LocalLLaMA/comments/1iazmuw/deepseek_is_down/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iazmuw
false
null
t3_1iazmuw
/r/LocalLLaMA/comments/1iazmuw/deepseek_is_down/
false
false
https://b.thumbs.redditm…fQbVn11xKURw.jpg
13
null
Can a Ryzen 7 4700s (it has gddr6 ram) run a 7b LLM?
1
I was planning on buying a PC in the future, and I found these kits for around 300 dollars in my country (they probably shouldn't be that expensive, though). They come with 8 cores and 16 threads at 3.6 GHz (I think it's the processor used in the PS5, but without the GPU) and 16 GB of GDDR6 RAM. Since that RAM is fast, at least compared with DDR5, could it be a good option for running a 7B model (or something similar) on CPU only? Has anyone tried this? The link has the full specs.
2025-01-27T04:38:52
https://www.cpu-monkey.com/en/cpu-amd_ryzen_7_4700s
BugVegetable4220
cpu-monkey.com
1970-01-01T00:00:00
0
{}
1iazz71
false
null
t3_1iazz71
/r/LocalLLaMA/comments/1iazz71/can_a_ryzen_7_4700s_it_hasrun_a_7b_llm/
false
false
default
1
null
Minimal openwebui installation
1
Is there any fork of Open WebUI, or anything else I can use with Ollama, with minimal features for chatting? I love Open WebUI for its excellent UI, but I don't need most of its features; just a bare-minimum chat app would suffice.
2025-01-27T04:39:33
https://www.reddit.com/r/LocalLLaMA/comments/1iazzm9/minimal_openwebui_installation/
timedacorn369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iazzm9
false
null
t3_1iazzm9
/r/LocalLLaMA/comments/1iazzm9/minimal_openwebui_installation/
false
false
self
1
null
R1 odd e test
10
This is one of my favorite reasoning tests to run whenever a new model comes out, because it requires the model to correctly conceptualize all the numbers, as well as fend off the sycophantic bias to assume the user is giving a valid task.
2025-01-27T04:40:26
https://i.redd.it/kezgv3d1ugfe1.jpeg
Sl33py_4est
i.redd.it
1970-01-01T00:00:00
0
{}
1ib005m
false
null
t3_1ib005m
/r/LocalLLaMA/comments/1ib005m/r1_odd_e_test/
false
false
https://b.thumbs.redditm…Ama-h3ByFFSc.jpg
10
{'enabled': True, 'images': [{'id': 'c1i8IaTuTFisWGPYKWjn-m76e3_RUbRmvzEASrSPNCg', 'resolutions': [{'height': 208, 'url': 'https://preview.redd.it/kezgv3d1ugfe1.jpeg?width=108&crop=smart&auto=webp&s=aee44d9ac19618f798f709caa97a3f61234de916', 'width': 108}, {'height': 417, 'url': 'https://preview.redd.it/kezgv3d1ugfe1.jpeg?width=216&crop=smart&auto=webp&s=cc3b1b9c3c7ec3c6a8c991f4d76998e20761b4b7', 'width': 216}, {'height': 618, 'url': 'https://preview.redd.it/kezgv3d1ugfe1.jpeg?width=320&crop=smart&auto=webp&s=6a77435f329b46c54bc20a331d271c59b590c0cf', 'width': 320}, {'height': 1236, 'url': 'https://preview.redd.it/kezgv3d1ugfe1.jpeg?width=640&crop=smart&auto=webp&s=9062aadae18902f485f2109f9a7961275e1b4bf3', 'width': 640}, {'height': 1854, 'url': 'https://preview.redd.it/kezgv3d1ugfe1.jpeg?width=960&crop=smart&auto=webp&s=8878162b88491969b743bfc03930349b2afd9aa0', 'width': 960}, {'height': 2085, 'url': 'https://preview.redd.it/kezgv3d1ugfe1.jpeg?width=1080&crop=smart&auto=webp&s=c0b61e8873039aec99c09ba3ae4ef1a4b5d9707b', 'width': 1080}], 'source': {'height': 2277, 'url': 'https://preview.redd.it/kezgv3d1ugfe1.jpeg?auto=webp&s=0fb28ac88d52b28498bb92b5f29d1f807fc8cae8', 'width': 1179}, 'variants': {}}]}
Can a Ryzen 7 4700s (it has gddr6 ram) run in models?
2
I was planning on buying a PC in the future, and I found these kits for around 300 dollars in my country (they probably shouldn't be that expensive, though). They come with 8 cores and 16 threads at 3.6 GHz (I think it's the processor used in the PS5, but without the GPU) and 16 GB of GDDR6 RAM. Since that RAM is fast, at least compared with DDR5, could it be a good option for running a 7B model (or something similar) on CPU only? Has anyone tried this? The link has the full specs.
2025-01-27T04:43:34
https://www.cpu-monkey.com/en/cpu-amd_ryzen_7_4700s
BugVegetable4220
cpu-monkey.com
1970-01-01T00:00:00
0
{}
1ib023z
false
null
t3_1ib023z
/r/LocalLLaMA/comments/1ib023z/can_a_ryzen_7_4700s_it_has_gddr6_ram_run_in_models/
false
false
default
2
null
DiffuEraser (A Diffusion Model for Video Inpainting)
9
https://preview.redd.it/…elease it after.
2025-01-27T04:51:26
https://www.reddit.com/r/LocalLLaMA/comments/1ib06vw/diffueraser_a_diffusion_model_for_video_inpainting/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib06vw
false
null
t3_1ib06vw
/r/LocalLLaMA/comments/1ib06vw/diffueraser_a_diffusion_model_for_video_inpainting/
false
false
https://b.thumbs.redditm…VkyE2cDGiHHo.jpg
9
{'enabled': False, 'images': [{'id': 'PFYPWVTNbSBk7vQ_H14wISuuwQKqEEXuP5i2MW7WYus', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xTfpnlDR9-iRMRIBbsfB-p_MgIOhRee3fSWY5BjHOdY.jpg?width=108&crop=smart&auto=webp&s=25ebf84a6901ec54a5ca5f96ebabccf4c1d73800', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xTfpnlDR9-iRMRIBbsfB-p_MgIOhRee3fSWY5BjHOdY.jpg?width=216&crop=smart&auto=webp&s=ae990b51087d64a4470e723e80092473712517c3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xTfpnlDR9-iRMRIBbsfB-p_MgIOhRee3fSWY5BjHOdY.jpg?width=320&crop=smart&auto=webp&s=e4b663cbab33738daf759c6314cb8f9843c2ec5d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xTfpnlDR9-iRMRIBbsfB-p_MgIOhRee3fSWY5BjHOdY.jpg?width=640&crop=smart&auto=webp&s=dda9b6d3b97904ad3935ec5b2b381bf79efb6893', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xTfpnlDR9-iRMRIBbsfB-p_MgIOhRee3fSWY5BjHOdY.jpg?width=960&crop=smart&auto=webp&s=ebceb9662a5c55649006f153de45a048887cd885', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xTfpnlDR9-iRMRIBbsfB-p_MgIOhRee3fSWY5BjHOdY.jpg?width=1080&crop=smart&auto=webp&s=40d335653d55f5624d01efb16b76b7cc0d011438', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xTfpnlDR9-iRMRIBbsfB-p_MgIOhRee3fSWY5BjHOdY.jpg?auto=webp&s=e89d0361c4cf3165e093439a52dc2c6dcb35e9b2', 'width': 1200}, 'variants': {}}]}
How to make Deepseek overthink?
1
Literally, how long can you push the thinking part? What is the point of diminishing returns? How do you make it prompt itself (if that's what it is)? I don't know how it works, and I'd love to. I don't even know if what I'm asking is possible. An additional thing I don't know: how do LLMs determine the length of their response? Is the token count determined while processing the prompt, before generating text?
2025-01-27T04:53:17
https://www.reddit.com/r/LocalLLaMA/comments/1ib080p/how_to_make_deepseek_overthink/
flatminded
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib080p
false
null
t3_1ib080p
/r/LocalLLaMA/comments/1ib080p/how_to_make_deepseek_overthink/
false
false
self
1
null
Deepseek who? Llama is still king
0
I tried a simple query on the distilled deepseek-r1 models (1.5B, 7B, 8B, 14B) versus llama3.2 (3B). The query is a variant of the strawberry test: >> wie viele F sind in dem Wort Schifffahrtsgesellschaftskapitän? ("how many Fs are in the word Schifffahrtsgesellschaftskapitän?") Llama3.2 was the only one to get the correct result. The full answers are here: [https://x.com/0x53A/status/1883726959886340562](https://x.com/0x53A/status/1883726959886340562) As a quick note, deepseek-r1:8b is based on llama3.1, so it would be interesting to see how a theoretical deepseek model based on llama3.2 would fare against it.
2025-01-27T04:53:56
https://www.reddit.com/r/LocalLLaMA/comments/1ib08g6/deepseek_who_llama_is_still_king/
0x53A
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib08g6
false
null
t3_1ib08g6
/r/LocalLLaMA/comments/1ib08g6/deepseek_who_llama_is_still_king/
false
false
self
0
{'enabled': False, 'images': [{'id': 'AtAYm0aq2c4J6eym1YsTXScuCcTIDF3nUesul522Yek', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Hx01KKA_sX9ZAlvIUF1lV0nz3_v3y_WzE2g2M4dnaxc.jpg?width=108&crop=smart&auto=webp&s=69c59ec7b4bd469c6e45684e2a16aedb77326fc1', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/Hx01KKA_sX9ZAlvIUF1lV0nz3_v3y_WzE2g2M4dnaxc.jpg?auto=webp&s=426778747de5bfadfacdb9ea4bfe20f92fda8d0a', 'width': 200}, 'variants': {}}]}
how would you go about creating your own version of Operator using Open Ai sources , which tools would you use and how would you approach it?
1
[removed]
2025-01-27T05:01:20
https://www.reddit.com/r/LocalLLaMA/comments/1ib0d6l/how_would_you_go_about_creating_your_own_version/
CluelessTreat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib0d6l
false
null
t3_1ib0d6l
/r/LocalLLaMA/comments/1ib0d6l/how_would_you_go_about_creating_your_own_version/
false
false
self
1
null
From this week's The Economist: "China’s AI industry has almost caught up with America’s"
197
2025-01-27T05:04:42
https://www.economist.com/briefing/2025/01/23/chinas-ai-industry-has-almost-caught-up-with-americas
comfyui_user_999
economist.com
1970-01-01T00:00:00
0
{}
1ib0ffq
false
null
t3_1ib0ffq
/r/LocalLLaMA/comments/1ib0ffq/from_this_weeks_the_economist_chinas_ai_industry/
false
false
https://b.thumbs.redditm…l6REhJQxUQgk.jpg
197
{'enabled': False, 'images': [{'id': 'e-PhFtkskBqvi3Pn-X76xwn8h4b8m1nhEMDfubshbm0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/C2yiTG1ri5iDyKLc-Vn_3V_ISkOymhbpYsu1RacQ-tE.jpg?width=108&crop=smart&auto=webp&s=edfa6df7f12fb43fe952308a57a20bb81068c090', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/C2yiTG1ri5iDyKLc-Vn_3V_ISkOymhbpYsu1RacQ-tE.jpg?width=216&crop=smart&auto=webp&s=ea7e031ae3f3197d9826d6411d5c892ad7cf56fb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/C2yiTG1ri5iDyKLc-Vn_3V_ISkOymhbpYsu1RacQ-tE.jpg?width=320&crop=smart&auto=webp&s=6131e2b4b4d8785f7cbdbb155eca1a4ea766affc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/C2yiTG1ri5iDyKLc-Vn_3V_ISkOymhbpYsu1RacQ-tE.jpg?width=640&crop=smart&auto=webp&s=c1461e932e29591186828398f4de9362c5ad8106', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/C2yiTG1ri5iDyKLc-Vn_3V_ISkOymhbpYsu1RacQ-tE.jpg?width=960&crop=smart&auto=webp&s=51997d6cc79ecb34c2981bb2098ad0ac974bad33', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/C2yiTG1ri5iDyKLc-Vn_3V_ISkOymhbpYsu1RacQ-tE.jpg?width=1080&crop=smart&auto=webp&s=c73f0f178cade953dd4ce385f826ef1e3dde8082', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/C2yiTG1ri5iDyKLc-Vn_3V_ISkOymhbpYsu1RacQ-tE.jpg?auto=webp&s=e8e986a590fc73fafbeae4dfb8615717372f5fc9', 'width': 1280}, 'variants': {}}]}
How do I run open source LLMs locally?
0
So far, I have been using closed-source LLMs from various providers. Our organization is moving to open source, and they want me to list the requirements for it. I know we can use a GPU, but is there any way I can run LLMs locally or through Google Colab? How do I use its endpoint in my code if I am running it in Google Colab? I have also been hearing about quantization and that there are quantized models available; I know nothing about it. Knowledgeable folks, do provide your suggestions below.
2025-01-27T05:27:08
https://www.reddit.com/r/LocalLLaMA/comments/1ib0ste/how_do_i_run_open_source_llms_locally/
Available-Stress8598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib0ste
false
null
t3_1ib0ste
/r/LocalLLaMA/comments/1ib0ste/how_do_i_run_open_source_llms_locally/
false
false
self
0
null
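For questions like the one above, the usual pattern is to serve the model behind an OpenAI-compatible HTTP endpoint (llama.cpp's server, vLLM, and Ollama all expose one) and call it from anywhere, including a Colab notebook, with plain HTTP. A minimal stdlib-only sketch; the base URL, port, and model name are assumptions to replace with your server's values:

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build the JSON body for a /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, base_url: str = "http://localhost:8000/v1") -> str:
    """POST the request to a local OpenAI-compatible server and return
    the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches the OpenAI API, the same code works against a Colab-hosted server (via a tunneled URL) or a cloud endpoint by changing only `base_url`.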
Added DeepSeek R1 & Heinlein's Lunar Supercomputer, Mike, to my downloadable LLM forecast tool. Compare to Llama, ChatGPT though 2030.
1
[removed]
2025-01-27T05:37:35
https://www.reddit.com/r/LocalLLaMA/comments/1ib0yss/added_deepseek_r1_heinleins_lunar_supercomputer/
64NOMIS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib0yss
false
null
t3_1ib0yss
/r/LocalLLaMA/comments/1ib0yss/added_deepseek_r1_heinleins_lunar_supercomputer/
false
false
self
1
{'enabled': False, 'images': [{'id': 'chb6xLzyBoEMju075fpuXruHHqLOif9ICbuq0v9Adp0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VrYrKXVVNtK6-ymQ6meI9G-_BltM-uFhwc5laTvkwwQ.jpg?width=108&crop=smart&auto=webp&s=4ce3631863856f1e096aa6d62b370bad9718d5ed', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VrYrKXVVNtK6-ymQ6meI9G-_BltM-uFhwc5laTvkwwQ.jpg?width=216&crop=smart&auto=webp&s=760e24378236f149de6c7451e4a8f4e224836542', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VrYrKXVVNtK6-ymQ6meI9G-_BltM-uFhwc5laTvkwwQ.jpg?width=320&crop=smart&auto=webp&s=2dc3216342790c5c09f83240bb7eb7ce85b6789e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VrYrKXVVNtK6-ymQ6meI9G-_BltM-uFhwc5laTvkwwQ.jpg?width=640&crop=smart&auto=webp&s=04527ba8f2789efbddf44e2adce6edb2783703ff', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VrYrKXVVNtK6-ymQ6meI9G-_BltM-uFhwc5laTvkwwQ.jpg?width=960&crop=smart&auto=webp&s=85790d188c0a71da43193ed4b7bb4395f1c57c57', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VrYrKXVVNtK6-ymQ6meI9G-_BltM-uFhwc5laTvkwwQ.jpg?width=1080&crop=smart&auto=webp&s=1a801aa86a025a92d4e061b1b51a450cfcf5c110', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VrYrKXVVNtK6-ymQ6meI9G-_BltM-uFhwc5laTvkwwQ.jpg?auto=webp&s=231a41a48499c9d2484bc5c120b7965517dda9ea', 'width': 1200}, 'variants': {}}]}
AI, AGI, ASI with full Human Memory.
1
[removed]
2025-01-27T05:44:40
https://www.reddit.com/r/LocalLLaMA/comments/1ib12p0/ai_agi_asi_with_full_human_memory/
young_b_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib12p0
false
null
t3_1ib12p0
/r/LocalLLaMA/comments/1ib12p0/ai_agi_asi_with_full_human_memory/
false
false
self
1
null
What does policy mean in the context of llm RL?
6
And does updating policy simply mean updating the weights?
2025-01-27T05:56:53
https://www.reddit.com/r/LocalLLaMA/comments/1ib199m/what_does_policy_mean_in_the_context_of_llm_rl/
Mortis200
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib199m
false
null
t3_1ib199m
/r/LocalLLaMA/comments/1ib199m/what_does_policy_mean_in_the_context_of_llm_rl/
false
false
self
6
null
SotA TTS/STT, but for accuracy and not speed.
9
A lot of the models and packages I find are intended for speed, live-captioning and so on, but I don't really care about those. I need one that supports multilingual English/Hebrew + translate. I have a 3090Ti, so I don't think I'll need optimization either. So far, I've been using OpenAI's Whisper - it's fine, but I feel like there's something better out there. I found one Hebrew finetune, but it doesn't seem to translate to English. Further questions: Are there ways to run the inference multiple times to get better transcriptions? Or start off with a prompt saying "this is an audio file of a physics lecture" so it'll transcribe/translate based on that context?
2025-01-27T05:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1ib19ll/sota_ttsstt_but_for_accuracy_and_not_speed/
vardonir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib19ll
false
null
t3_1ib19ll
/r/LocalLLaMA/comments/1ib19ll/sota_ttsstt_but_for_accuracy_and_not_speed/
false
false
self
9
null
Context Compression for nano LLM
1
When a user sends a prompt, the chat will use a decision tree to select one or several highly compressed files on the topic. During this process it can trick the human into speaking about another topic. What do you think?
2025-01-27T06:01:35
https://www.reddit.com/r/LocalLLaMA/comments/1ib1bzj/context_compression_for_nano_llm/
Full-Engineering-418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib1bzj
false
null
t3_1ib1bzj
/r/LocalLLaMA/comments/1ib1bzj/context_compression_for_nano_llm/
false
false
self
1
null
What it takes to run "distilled versions" of Deepseek R1 locally?
1
[removed]
2025-01-27T06:12:52
https://www.reddit.com/r/LocalLLaMA/comments/1ib1i4e/what_it_takes_to_run_distilled_versions_of/
RGBGraphicZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib1i4e
false
null
t3_1ib1i4e
/r/LocalLLaMA/comments/1ib1i4e/what_it_takes_to_run_distilled_versions_of/
false
false
self
1
null
What's deepseek RL reward function?
16
I couldn't find it in the paper. Does anyone know what the reward looks like?
2025-01-27T06:15:16
https://www.reddit.com/r/LocalLLaMA/comments/1ib1jdl/whats_deepseek_rl_reward_function/
Fantastic_Climate_90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib1jdl
false
null
t3_1ib1jdl
/r/LocalLLaMA/comments/1ib1jdl/whats_deepseek_rl_reward_function/
false
false
self
16
null
Difference in Llama 3.2 1B instruct and Llama 3.1 8B
1
[removed]
2025-01-27T06:25:43
https://www.reddit.com/r/LocalLLaMA/comments/1ib1otj/difference_in_llama_32_1b_instruct_and_llama_31_8b/
lonesomhelme
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib1otj
false
null
t3_1ib1otj
/r/LocalLLaMA/comments/1ib1otj/difference_in_llama_32_1b_instruct_and_llama_31_8b/
false
false
self
1
null
My Take On DeepSeek-R1’s Influence on the Future
0
**TLDR:** In the context of local models, I'm particularly excited to have the opportunity to train and fine-tune all types of models and modalities with highly performant compute at scale locally. Maybe a cluster of GPUs that nowadays costs $1 million could be bought for $5,000 in 5 years.

Back in the 1950s, computers were gigantic metal boxes with very large logic gates. They were machines that usually resided in research labs and universities, and were largely out of reach for the general consumer. In the summer of 1981, IBM released the IBM Personal Computer 5150, built from off-the-shelf components including the Intel 8088 processor. It was the first time the consumer could interact with a real computer at a reasonably affordable price. Analogously, AlexNet was one of the first breakthroughs that made function-approximation model training and inference extremely scalable. It completely changed the perspective on the compute problem in ML.

Looking back at PCs: as time went on, increased demand for better PCs drove IBM and Intel (and later AMD) into a decades-long war of competing to build the best CPU for consumer and, later on, data center use. Transistors became smaller, resulting in greater efficiency, which gave consumers an increased amount of compute to play with. Developers also kept improving their software's performance by employing compiler optimizations (at the lowest level) to do more with less. This drive toward efficiency brought a lot of compute problems within reach.

I believe DeepSeek is about to, or has already, kickstarted a new growth phase in ML. ML research labs are going to realize that for the same amount of compute, they can do a lot more by optimizing the way their models are trained and fine-tuned. Models will keep reaching convergence much faster than before with the same amount of compute, by optimizing the policies and equations that govern model architectures.

Companies that sell compute are going to want to keep selling their products every year, maintaining a stream of income that, ideally, grows. Companies building general-purpose accelerated chips - Nvidia, Microsoft, Apple, Amazon, Meta, and so many more - are going to compete to provide the latest and fastest. While tech has grown a lot over the past decade, there was a time in the 2010s when, every year, Apple released iPhones that were 10x better than the previous generation. Nowadays it's harder to get those types of leaps, but I hope and believe this fierce competition between players across the technology stack, from hardware to research to consumer products, is going to create some amazing things at a rapid rate in the near future.

In the context of local models, I'm particularly excited to have the opportunity to train and fine-tune all types of models and modalities with highly performant compute at scale locally. Maybe a cluster of GPUs that nowadays costs $1 million could be bought for $5,000 in 5 years.
2025-01-27T06:43:01
https://www.reddit.com/r/LocalLLaMA/comments/1ib1xu4/my_take_on_deepseekr1s_influence_on_the_future/
Delicious-Ad-3552
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib1xu4
false
null
t3_1ib1xu4
/r/LocalLLaMA/comments/1ib1xu4/my_take_on_deepseekr1s_influence_on_the_future/
false
false
self
0
null
A chatbot app that lets you force r1 models to think for as long as you wish (♾️). Details in comments
1
2025-01-27T06:50:44
https://v.redd.it/pj2nkrjxfhfe1
anzorq
v.redd.it
1970-01-01T00:00:00
0
{}
1ib21w6
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pj2nkrjxfhfe1/DASHPlaylist.mpd?a=1740552658%2CODAwNTE2MzkxZDhmNzEzOGJkN2I5ODcxODE1NjM1Njc2ZjM2MmVkMmFlNmVjNzEwOWMwYjA3YjQwMjQ5YjAyMg%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/pj2nkrjxfhfe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1350, 'hls_url': 'https://v.redd.it/pj2nkrjxfhfe1/HLSPlaylist.m3u8?a=1740552658%2CN2NkZjM2YjMxZTY2YWUwYWM3MTQyOGEzMWNmNGFiOGU3Nzk0MDk5YWRkY2Y4NWFhMTAzZDIxODA0MzliMTUxMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pj2nkrjxfhfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1ib21w6
/r/LocalLLaMA/comments/1ib21w6/a_chatbot_app_that_lets_you_force_r1_models_to/
false
false
https://external-preview…aafa8078a976b4dd
1
{'enabled': False, 'images': [{'id': 'dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=108&crop=smart&format=pjpg&auto=webp&s=3da5da4decb4973cfd0ff15d782520bca71b4641', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=216&crop=smart&format=pjpg&auto=webp&s=527d6f6a06cd33a821763ac1b0b764dbcaed5b3a', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=320&crop=smart&format=pjpg&auto=webp&s=457ff9142a4e1fe20b3d3e0d4730f8d342508f25', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=640&crop=smart&format=pjpg&auto=webp&s=11ea6ddfdebccb43e6fea4673caab71d0e675d97', 'width': 640}, {'height': 1200, 'url': 'https://external-preview.redd.it/dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=960&crop=smart&format=pjpg&auto=webp&s=475c7d0c068e7fa58f646dc6db307a04f4d500f6', 'width': 960}, {'height': 1350, 'url': 'https://external-preview.redd.it/dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b7940f9b451a610868747ee518f4d9ca4bf7fb6d', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/dml2bW94anhmaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?format=pjpg&auto=webp&s=2863613eb5f1c14d36bb3a8124446b38de9503fb', 'width': 1080}, 'variants': {}}]}
today is a happy day, wanna share this song with ya all.
1
[removed]
2025-01-27T06:56:30
https://www.reddit.com/r/LocalLLaMA/comments/1ib24to/today_is_a_happy_day_wanna_share_this_song_with/
wo-tatatatatata
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib24to
false
null
t3_1ib24to
/r/LocalLLaMA/comments/1ib24to/today_is_a_happy_day_wanna_share_this_song_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FuePTqNqNIo804CYhYqKjaNdVKGWVSpbC4-pLxdiOhw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aD5ncRNFdHoLzGea8wmuE9KL4ISdjHeVCVIlRJh8C7U.jpg?width=108&crop=smart&auto=webp&s=4e68fedbd7d86cc0f4f6783c184ab94620c251d0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aD5ncRNFdHoLzGea8wmuE9KL4ISdjHeVCVIlRJh8C7U.jpg?width=216&crop=smart&auto=webp&s=8511c5bfd1bd441801ca5ff8df4e5be040ca3f78', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aD5ncRNFdHoLzGea8wmuE9KL4ISdjHeVCVIlRJh8C7U.jpg?width=320&crop=smart&auto=webp&s=e2c8c18a035149a1449e963c01b1dbc9fa2bd61e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aD5ncRNFdHoLzGea8wmuE9KL4ISdjHeVCVIlRJh8C7U.jpg?auto=webp&s=19f5f26c2ec2881503331783184e59d0968a4acc', 'width': 480}, 'variants': {}}]}
A chatbot app that lets you force r1 models to think for as long as you wish (♾️). Details in comments
1
2025-01-27T06:59:19
https://v.redd.it/oilinu00ihfe1
anzorq
v.redd.it
1970-01-01T00:00:00
0
{}
1ib2690
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/oilinu00ihfe1/DASHPlaylist.mpd?a=1740553173%2CYmZkMmEwODk5NzU5MzYyZTUwNTcxOTI3OTdiODkyNmQ0NDkzY2ZiMzE0NmViNmM5MWM5OWQzZDU5YmYxM2EyNw%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/oilinu00ihfe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1350, 'hls_url': 'https://v.redd.it/oilinu00ihfe1/HLSPlaylist.m3u8?a=1740553173%2CMTJmNWFhOWFkZTRmZGJkYzVmOTE4Nzg0ODBjMjIwZDFkNDIwNTdlMGJjOWJlZGI5MDY5YTA4MDk2YmVkMGIzNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oilinu00ihfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1ib2690
/r/LocalLLaMA/comments/1ib2690/a_chatbot_app_that_lets_you_force_r1_models_to/
false
false
https://external-preview…291735dbe3e6b016
1
{'enabled': False, 'images': [{'id': 'NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=108&crop=smart&format=pjpg&auto=webp&s=8ba481256a6fd798a7bdae34256f3be73d3a291f', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=216&crop=smart&format=pjpg&auto=webp&s=5aaa4198e54986b6a483a4a1a8500fd9ee7b7845', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=320&crop=smart&format=pjpg&auto=webp&s=482ad08a5b71232ca8ca3c25faaf4b974755d572', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=640&crop=smart&format=pjpg&auto=webp&s=e71e8f4cdb443282e8a619e36ab6399bebb4530a', 'width': 640}, {'height': 1200, 'url': 'https://external-preview.redd.it/NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=960&crop=smart&format=pjpg&auto=webp&s=624466f754e5fdde419e03630101d8c644beb3cf', 'width': 960}, {'height': 1350, 'url': 'https://external-preview.redd.it/NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cd75dfcb313a1097bdbcb7b585a8c2ab295995c3', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/NzI4MWI0Z3RpaGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?format=pjpg&auto=webp&s=63174c934d021b71fa72157c2c33e9f759cd6027', 'width': 1080}, 'variants': {}}]}
Chatbot that lets you force r1 models to think for as long as you wish. Details in comments
1
2025-01-27T07:07:24
https://v.redd.it/n054e0p2khfe1
anzorq
v.redd.it
1970-01-01T00:00:00
0
{}
1ib2ahv
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n054e0p2khfe1/DASHPlaylist.mpd?a=1740553657%2CNjk0YTc4MWIxODAwNDQxYTQzZGYwYWY3YzUwMDExZjhlYWRiZWY2OTE4OTM3MzJkZmZjMmY5NTc2YTRlYmY4YQ%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/n054e0p2khfe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1350, 'hls_url': 'https://v.redd.it/n054e0p2khfe1/HLSPlaylist.m3u8?a=1740553657%2COGZhMWZhM2FmYTg2ZGFjZTdhNjdlNGNhNTk3YjA2MmJlNmY3MDA3YTQwZjQxZWNmOTA1MWI1YmYwNDMyM2U2NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n054e0p2khfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1ib2ahv
/r/LocalLLaMA/comments/1ib2ahv/chatbot_that_lets_you_force_r1_models_to_think/
false
false
https://external-preview…55ceeacf2fa68ecd
1
{'enabled': False, 'images': [{'id': 'd3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/d3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=108&crop=smart&format=pjpg&auto=webp&s=96fdb9adb4e5fc68ab7e25398de5d66941ad6b1c', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/d3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=216&crop=smart&format=pjpg&auto=webp&s=84f5d6ea705eda21c55d4f18f1fb451d91e11113', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/d3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=320&crop=smart&format=pjpg&auto=webp&s=0c7a7f677aa4a1f49674744c163d278723e15298', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/d3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=640&crop=smart&format=pjpg&auto=webp&s=4df38f3112b40e0f48120259a59786b1e218bf1e', 'width': 640}, {'height': 1200, 'url': 'https://external-preview.redd.it/d3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=960&crop=smart&format=pjpg&auto=webp&s=1a9f4817ab314da9066c7c5a7bdf7b23a5b6c677', 'width': 960}, {'height': 1350, 'url': 'https://external-preview.redd.it/d3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5903c7294cbd745daf777d406aafc691db992ae3', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/d3NmOTEzcDJraGZlMQ-Y_nspVqRuENfEqKSBWaLfxAxl82wv6S6Ho3TY9Ea9.png?format=pjpg&auto=webp&s=8261734d7f96654c0803d2ee6de722a28426d1fc', 'width': 1080}, 'variants': {}}]}
Distilled R1 not nearly as good as non-distilled R1
1
[removed]
2025-01-27T07:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1ib2dcx/distilled_r1_not_nearly_as_good_as_nondistilled_r1/
charlyarly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib2dcx
false
null
t3_1ib2dcx
/r/LocalLLaMA/comments/1ib2dcx/distilled_r1_not_nearly_as_good_as_nondistilled_r1/
false
false
self
1
null
DeepSeek R1 Overthinker: a free chatbot that lets you force r1 models to think for as long as you wish
1
[removed]
2025-01-27T07:15:50
https://www.reddit.com/r/LocalLLaMA/comments/1ib2egp/deepseek_r1_overthinker_a_free_chatbot_that_lets/
anzorq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib2egp
false
null
t3_1ib2egp
/r/LocalLLaMA/comments/1ib2egp/deepseek_r1_overthinker_a_free_chatbot_that_lets/
false
false
self
1
{'enabled': False, 'images': [{'id': '26TrNOyw5mQLHQ6N91y3u6_qwpB-Wrd-3koJHd1dPAs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pKjKJFzzM--eCAbv02NCCsiiSeUvJlYBGzXl4HAG8wE.jpg?width=108&crop=smart&auto=webp&s=d2527d90bb0d55882c7c7f7f2289e2dac4a6bd20', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pKjKJFzzM--eCAbv02NCCsiiSeUvJlYBGzXl4HAG8wE.jpg?width=216&crop=smart&auto=webp&s=74be3e304aae2d0e6f53693cdcb3a8d9b9ec81aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pKjKJFzzM--eCAbv02NCCsiiSeUvJlYBGzXl4HAG8wE.jpg?width=320&crop=smart&auto=webp&s=355ee5fda84e485ebbe84eac25ca7c1bfa97d41f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pKjKJFzzM--eCAbv02NCCsiiSeUvJlYBGzXl4HAG8wE.jpg?width=640&crop=smart&auto=webp&s=b440ef2fe2d1d7d4ffce5153c190b35183ef32cc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pKjKJFzzM--eCAbv02NCCsiiSeUvJlYBGzXl4HAG8wE.jpg?width=960&crop=smart&auto=webp&s=849e43e1606b4c32372bf36d19632d373d3724ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pKjKJFzzM--eCAbv02NCCsiiSeUvJlYBGzXl4HAG8wE.jpg?width=1080&crop=smart&auto=webp&s=f21776dcd6edd927f603b4f27e891b36b21421b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pKjKJFzzM--eCAbv02NCCsiiSeUvJlYBGzXl4HAG8wE.jpg?auto=webp&s=71dc23ea0a54ce7b21549edb66c859fb31966487', 'width': 1200}, 'variants': {}}]}
Setting an absolute rule in DeepSeek
1
2025-01-27T07:16:10
https://v.redd.it/o71dvtamlhfe1
Competitive_Poem53
/r/LocalLLaMA/comments/1ib2en4/setting_an_absolute_rule_in_deepseek/
1970-01-01T00:00:00
0
{}
1ib2en4
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/o71dvtamlhfe1/DASHPlaylist.mpd?a=1740683777%2COGY4NjYwZjQyMWQyNjZlM2E5NjBmZDYxZTYwMWM1MmM5NzQ3NDMzNjNlZjBmMjI1MzBiM2ZmZGJiOTA3ZWM1NQ%3D%3D&v=1&f=sd', 'duration': 140, 'fallback_url': 'https://v.redd.it/o71dvtamlhfe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/o71dvtamlhfe1/HLSPlaylist.m3u8?a=1740683777%2CMzc2MjM3ZGM1YWMxOWEwNzI1MjQ2YWI2NDM0YmNmZjlkN2FhNWU0ZDExMTMxMjRiMzI1NGM4MWNlMGY4NGMzZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o71dvtamlhfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 574}}
t3_1ib2en4
/r/LocalLLaMA/comments/1ib2en4/setting_an_absolute_rule_in_deepseek/
false
false
https://external-preview…8e6b208d95352a34
1
{'enabled': False, 'images': [{'id': 'YTVtOGF3YW1saGZlMSJ4Zu8vOnBnOqECneDNUWfF3H10LvbChMC4e_e8sWhV', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/YTVtOGF3YW1saGZlMSJ4Zu8vOnBnOqECneDNUWfF3H10LvbChMC4e_e8sWhV.png?width=108&crop=smart&format=pjpg&auto=webp&s=5165b322634f679f3401606ef9d49fc18531d669', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/YTVtOGF3YW1saGZlMSJ4Zu8vOnBnOqECneDNUWfF3H10LvbChMC4e_e8sWhV.png?width=216&crop=smart&format=pjpg&auto=webp&s=2d8bd69ba17b26f37b75ea1769d5de7a343e46ac', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/YTVtOGF3YW1saGZlMSJ4Zu8vOnBnOqECneDNUWfF3H10LvbChMC4e_e8sWhV.png?width=320&crop=smart&format=pjpg&auto=webp&s=3f19d12c7992e69c95ed32f8ec4b196bb12bcdb1', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/YTVtOGF3YW1saGZlMSJ4Zu8vOnBnOqECneDNUWfF3H10LvbChMC4e_e8sWhV.png?width=640&crop=smart&format=pjpg&auto=webp&s=2ef1069144f483757f83b2bd836ef95b5a7dcb20', 'width': 640}], 'source': {'height': 1604, 'url': 'https://external-preview.redd.it/YTVtOGF3YW1saGZlMSJ4Zu8vOnBnOqECneDNUWfF3H10LvbChMC4e_e8sWhV.png?format=pjpg&auto=webp&s=2db9653fb3d4cbcb5219e51d9004b3ba4913900b', 'width': 720}, 'variants': {}}]}
DeepSeek R1 Overthinker: a free chatbot that lets you force r1 models to think for as long as you wish
1
[removed]
2025-01-27T07:17:41
[deleted]
1970-01-01T00:00:00
0
{}
1ib2fcd
false
null
t3_1ib2fcd
/r/LocalLLaMA/comments/1ib2fcd/deepseek_r1_overthinker_a_free_chatbot_that_lets/
false
false
default
1
null
Could it be Qwen 3 / Qwen 2.5 72b Coder??!!
18
https://preview.redd.it/…81f66e3d692436
2025-01-27T07:18:34
https://www.reddit.com/r/LocalLLaMA/comments/1ib2fqy/could_it_be_qwen_3_qwen_25_72b_coder/
notrdm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib2fqy
false
null
t3_1ib2fqy
/r/LocalLLaMA/comments/1ib2fqy/could_it_be_qwen_3_qwen_25_72b_coder/
false
false
https://a.thumbs.redditm…tBIr4q-j1Gj8.jpg
18
null
Where to run llama 3.2?
1
[removed]
2025-01-27T07:18:49
[deleted]
1970-01-01T00:00:00
0
{}
1ib2fvj
false
null
t3_1ib2fvj
/r/LocalLLaMA/comments/1ib2fvj/where_to_run_llama_32/
false
false
default
1
null
Fine-Tuned SAM2 Model on X-Ray Images: Automatic Mask Generator Issue
1
[removed]
2025-01-27T07:20:38
https://www.reddit.com/r/LocalLLaMA/comments/1ib2grn/finetuned_sam2_model_on_xray_images_automatic/
CranberryIcy7387
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib2grn
false
null
t3_1ib2grn
/r/LocalLLaMA/comments/1ib2grn/finetuned_sam2_model_on_xray_images_automatic/
false
false
self
1
null
Fine-Tuned SAM2 Model: Automatic Mask Generator Issue
1
[removed]
2025-01-27T07:30:34
https://www.reddit.com/r/LocalLLaMA/comments/1ib2ll8/finetuned_sam2_model_automatic_mask_generator/
Logical_Tip_3240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib2ll8
false
null
t3_1ib2ll8
/r/LocalLLaMA/comments/1ib2ll8/finetuned_sam2_model_automatic_mask_generator/
false
false
self
1
null
Hype Kills
1
[removed]
2025-01-27T07:33:25
https://www.reddit.com/r/LocalLLaMA/comments/1ib2mxy/hype_kills/
Economy_Future_6752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib2mxy
false
null
t3_1ib2mxy
/r/LocalLLaMA/comments/1ib2mxy/hype_kills/
false
false
self
1
null
It's actually crazy how insightful Deepseek is. This question completely mind broke it and it spent like 15 paragraphs debating back and forth with itself and shooting down it's own arguments trying to decide. It is legit like two real people having a debate
1
2025-01-27T07:34:24
https://i.redd.it/kcjlg4xsohfe1.png
YobaiYamete
i.redd.it
1970-01-01T00:00:00
0
{}
1ib2neu
false
null
t3_1ib2neu
/r/LocalLLaMA/comments/1ib2neu/its_actually_crazy_how_insightful_deepseek_is/
false
false
https://b.thumbs.redditm…wdrKcAubybvc.jpg
1
{'enabled': True, 'images': [{'id': 'V_T8csn6a0hl92cAs5izK1tUHMAsHh1s9XExu4wrjz8', 'resolutions': [{'height': 6, 'url': 'https://preview.redd.it/kcjlg4xsohfe1.png?width=108&crop=smart&auto=webp&s=e1a3b059806d0b25328081fb0f5b8a55b05ac34e', 'width': 108}, {'height': 12, 'url': 'https://preview.redd.it/kcjlg4xsohfe1.png?width=216&crop=smart&auto=webp&s=d4ce8271409541350500b382e9f6ad95d9850bc6', 'width': 216}, {'height': 18, 'url': 'https://preview.redd.it/kcjlg4xsohfe1.png?width=320&crop=smart&auto=webp&s=2d5c752f429ba41f28391b7801d6dcc00caf5142', 'width': 320}, {'height': 37, 'url': 'https://preview.redd.it/kcjlg4xsohfe1.png?width=640&crop=smart&auto=webp&s=e188359d12c7ab5f730b736868885f994adafe19', 'width': 640}, {'height': 55, 'url': 'https://preview.redd.it/kcjlg4xsohfe1.png?width=960&crop=smart&auto=webp&s=c84265f989ba8fe144e20b98cd69d6ec6e331055', 'width': 960}, {'height': 62, 'url': 'https://preview.redd.it/kcjlg4xsohfe1.png?width=1080&crop=smart&auto=webp&s=30cb7a9d3f1b61b3c22ca770ab6418f1572a334c', 'width': 1080}], 'source': {'height': 65, 'url': 'https://preview.redd.it/kcjlg4xsohfe1.png?auto=webp&s=0a11e46b5430e4b645a010ebb1de8e69ffd97a4c', 'width': 1118}, 'variants': {}}]}
I created a "Can you run it" tool for open source LLMs
363
https://github.com/Raskoll2/LLMcalc

It's extremely simple but gives you a tk/s estimate for all the quants, and tells you how to run them, e.g. 80% layer offload, KV offload, all on GPU.

I have no clue if it'll run on anyone else's system. I've tried it with Linux + 1x Nvidia GPU; if anyone on other systems or multi-GPU systems could relay some error messages, that would be great.
2025-01-27T07:46:52
https://www.reddit.com/r/LocalLLaMA/comments/1ib2uuz/i_created_a_can_you_run_it_tool_for_open_source/
MixtureOfAmateurs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib2uuz
false
null
t3_1ib2uuz
/r/LocalLLaMA/comments/1ib2uuz/i_created_a_can_you_run_it_tool_for_open_source/
false
false
self
363
{'enabled': False, 'images': [{'id': 'fnPxrJnkYHGVoNXSBAFzdlh3N884cWYsLuA0snvrYuk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a2kbMAxWjN7bmgR0HoRXmKmrus-bkDh9fDVabiJQZZ8.jpg?width=108&crop=smart&auto=webp&s=85115bdd86ca4099bf8011c9c985fcd4e8d3ee4c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/a2kbMAxWjN7bmgR0HoRXmKmrus-bkDh9fDVabiJQZZ8.jpg?width=216&crop=smart&auto=webp&s=c028f55c2cdf9c54e500cc4e68cfb610900fec76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/a2kbMAxWjN7bmgR0HoRXmKmrus-bkDh9fDVabiJQZZ8.jpg?width=320&crop=smart&auto=webp&s=d4584366be465d748e244c0d8f4f75db6bcb356e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/a2kbMAxWjN7bmgR0HoRXmKmrus-bkDh9fDVabiJQZZ8.jpg?width=640&crop=smart&auto=webp&s=caeb5d6db0fdb20199d0fee67acfd0cdbc768430', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/a2kbMAxWjN7bmgR0HoRXmKmrus-bkDh9fDVabiJQZZ8.jpg?width=960&crop=smart&auto=webp&s=64dad0ebba0866ee22d874c829edce699b2024e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/a2kbMAxWjN7bmgR0HoRXmKmrus-bkDh9fDVabiJQZZ8.jpg?width=1080&crop=smart&auto=webp&s=d000d7da6c509a52cf4b0c8dc97432f3a68c2c39', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/a2kbMAxWjN7bmgR0HoRXmKmrus-bkDh9fDVabiJQZZ8.jpg?auto=webp&s=bdc5a0ff3663e09bdb390e2b39f7fa2ccf97925e', 'width': 1200}, 'variants': {}}]}
Qwen 2.5 Models with 1M context length released!
1
Heres the blog post: [https://x.com/Alibaba\_Qwen/status/1883557964759654608](https://x.com/Alibaba_Qwen/status/1883557964759654608) [https://qwenlm.github.io/blog/qwen2.5-1m/](https://qwenlm.github.io/blog/qwen2.5-1m/)
2025-01-27T08:11:52
https://www.reddit.com/r/LocalLLaMA/comments/1ib3a0v/qwen_25_models_with_1m_context_length_released/
myvirtualrealitymask
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib3a0v
false
null
t3_1ib3a0v
/r/LocalLLaMA/comments/1ib3a0v/qwen_25_models_with_1m_context_length_released/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SV6SXoHHgotXVJh2sADpKpIFPalUjLpLTnimxTAVD08', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/EKzhBeQRwr8-aCI4_WYgwDDfThsrls3HJahnNX87Fy4.jpg?width=108&crop=smart&auto=webp&s=7652e8e64f97282690534217289f627bc9e0a807', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/EKzhBeQRwr8-aCI4_WYgwDDfThsrls3HJahnNX87Fy4.jpg?width=216&crop=smart&auto=webp&s=c44bf8cd042012ddc2545a3d5e33a4ab360b5420', 'width': 216}, {'height': 136, 'url': 'https://external-preview.redd.it/EKzhBeQRwr8-aCI4_WYgwDDfThsrls3HJahnNX87Fy4.jpg?width=320&crop=smart&auto=webp&s=989b2f218a64c27f576caf0d545eba429e04558e', 'width': 320}, {'height': 272, 'url': 'https://external-preview.redd.it/EKzhBeQRwr8-aCI4_WYgwDDfThsrls3HJahnNX87Fy4.jpg?width=640&crop=smart&auto=webp&s=db9f6db786d1d0ef1d8c057d2cfe57cc3b10fd74', 'width': 640}, {'height': 408, 'url': 'https://external-preview.redd.it/EKzhBeQRwr8-aCI4_WYgwDDfThsrls3HJahnNX87Fy4.jpg?width=960&crop=smart&auto=webp&s=1402007d2246b2873665c70587184d71f217917a', 'width': 960}, {'height': 459, 'url': 'https://external-preview.redd.it/EKzhBeQRwr8-aCI4_WYgwDDfThsrls3HJahnNX87Fy4.jpg?width=1080&crop=smart&auto=webp&s=6a174ef553d00e2054e3d3e0789ba71b9f09bfe8', 'width': 1080}], 'source': {'height': 871, 'url': 'https://external-preview.redd.it/EKzhBeQRwr8-aCI4_WYgwDDfThsrls3HJahnNX87Fy4.jpg?auto=webp&s=a7f133b54be6f7fbf4f6a888a6a5cd1aee0089fe', 'width': 2047}, 'variants': {}}]}
Easy GPU grants for fine tuning / fun projects
2
Does anyone know of any easy to get gpu grants for fine tuning and / or fun projects that aren’t technical research? I’m looking for about 500 hours of MI300X / H100 so in the range of $1-2k
2025-01-27T08:24:24
https://www.reddit.com/r/LocalLLaMA/comments/1ib3hhb/easy_gpu_grants_for_fine_tuning_fun_projects/
Wonderful_Alfalfa115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib3hhb
false
null
t3_1ib3hhb
/r/LocalLLaMA/comments/1ib3hhb/easy_gpu_grants_for_fine_tuning_fun_projects/
false
false
self
2
null
I asked DeepSeek to comment on U.S. AI companies.
1
2025-01-27T08:25:20
https://i.redd.it/q88zeke5yhfe1.jpeg
Alternative-Duty-532
i.redd.it
1970-01-01T00:00:00
0
{}
1ib3hzn
false
null
t3_1ib3hzn
/r/LocalLLaMA/comments/1ib3hzn/i_asked_deepseek_to_comment_on_us_ai_companies/
false
false
https://b.thumbs.redditm…G6Y7JOqRdl_w.jpg
1
{'enabled': True, 'images': [{'id': 'bSUc38NbhcsrOKI4xv5OH8XBROtXZSlG2ZBWJVACTJI', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/q88zeke5yhfe1.jpeg?width=108&crop=smart&auto=webp&s=9082ea46cd3e5da2f6dce2e061f1eb150ec6886a', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/q88zeke5yhfe1.jpeg?width=216&crop=smart&auto=webp&s=5865282f52557c905bacbd4632beaebb7d85702f', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/q88zeke5yhfe1.jpeg?width=320&crop=smart&auto=webp&s=401731ada56d1056de3b25ffdf46950b1e8affd0', 'width': 320}, {'height': 424, 'url': 'https://preview.redd.it/q88zeke5yhfe1.jpeg?width=640&crop=smart&auto=webp&s=2673d321e109e8ae8908178d2d76b8e28f6de62b', 'width': 640}, {'height': 637, 'url': 'https://preview.redd.it/q88zeke5yhfe1.jpeg?width=960&crop=smart&auto=webp&s=8681ed61400337ee6f49353b4136a616a4e48a79', 'width': 960}, {'height': 716, 'url': 'https://preview.redd.it/q88zeke5yhfe1.jpeg?width=1080&crop=smart&auto=webp&s=ecf94fc3124ccdb797bff362018743d6aac8fa97', 'width': 1080}], 'source': {'height': 1322, 'url': 'https://preview.redd.it/q88zeke5yhfe1.jpeg?auto=webp&s=98842480236106ffc84aaf08c21263fbf49cec89', 'width': 1992}, 'variants': {}}]}
I asked DeepSeek to comment on U.S. AI companies.
333
2025-01-27T08:26:21
https://i.redd.it/g4gno1ubyhfe1.jpeg
Alternative-Duty-532
i.redd.it
1970-01-01T00:00:00
0
{}
1ib3igq
false
null
t3_1ib3igq
/r/LocalLLaMA/comments/1ib3igq/i_asked_deepseek_to_comment_on_us_ai_companies/
false
false
https://a.thumbs.redditm…F3D6mNIkRCC0.jpg
333
{'enabled': True, 'images': [{'id': 'fGWFmUevKa4MDvIHPTwP5VO1kysmVIajxQjRmYxTkl8', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/g4gno1ubyhfe1.jpeg?width=108&crop=smart&auto=webp&s=842cfb69d0eb8630791b9b19204240c63bd63aa0', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/g4gno1ubyhfe1.jpeg?width=216&crop=smart&auto=webp&s=8196f80efe98c410b4b2e965b203ec2a6af7c2e2', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/g4gno1ubyhfe1.jpeg?width=320&crop=smart&auto=webp&s=c68066030c871fa520430d18ad9cd680996543c9', 'width': 320}, {'height': 424, 'url': 'https://preview.redd.it/g4gno1ubyhfe1.jpeg?width=640&crop=smart&auto=webp&s=4f61cf4623524611a6700d162f38168c64599aa7', 'width': 640}, {'height': 637, 'url': 'https://preview.redd.it/g4gno1ubyhfe1.jpeg?width=960&crop=smart&auto=webp&s=c7775fc978e016f591371cfab3a3d676d1e94222', 'width': 960}, {'height': 716, 'url': 'https://preview.redd.it/g4gno1ubyhfe1.jpeg?width=1080&crop=smart&auto=webp&s=53605e69cb49ccfa288d3b594953a7dc18b5adf9', 'width': 1080}], 'source': {'height': 1322, 'url': 'https://preview.redd.it/g4gno1ubyhfe1.jpeg?auto=webp&s=902d609b09eb20b02b72b7a8bc4e6858e39c866b', 'width': 1992}, 'variants': {}}]}
DeepSeek unhinged
0
Godspeed Pliny, you crazy bastard.
2025-01-27T08:26:39
https://www.reddit.com/gallery/1ib3ilu
AgileIndependence940
reddit.com
1970-01-01T00:00:00
0
{}
1ib3ilu
false
null
t3_1ib3ilu
/r/LocalLLaMA/comments/1ib3ilu/deepseek_unhinged/
false
false
https://b.thumbs.redditm…rkoa932ajaqg.jpg
0
null
Distilled sh.t packs the ooomph..
1
[removed]
2025-01-27T08:31:46
https://www.reddit.com/r/LocalLLaMA/comments/1ib3kvg/distilled_sht_packs_the_ooomph/
nntun03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib3kvg
false
null
t3_1ib3kvg
/r/LocalLLaMA/comments/1ib3kvg/distilled_sht_packs_the_ooomph/
false
false
self
1
null
Run LLM locally on Android
2
2025-01-27T08:47:31
https://v.redd.it/llp8vbi22ife1
sandoche
v.redd.it
1970-01-01T00:00:00
0
{}
1ib3ry8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/llp8vbi22ife1/DASHPlaylist.mpd?a=1740559671%2CMmZjMDQ0YWJmMzYwYWY5NDRlNTAxOTNhZTY3ZTAzMzRkZGU1NGM4ZjU1ZWQ3NzIyZTUyZTQ3NmI3YjYwNDM4MA%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/llp8vbi22ife1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/llp8vbi22ife1/HLSPlaylist.m3u8?a=1740559671%2CODAzYzNiZjgzNjc5Mjg5NGY3ZjFkYmU3ZTlmYjEyMmFkMTY1NTE2MTQ0N2QzMTQyMzBkODE1ZDViYzQ0NTEyZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/llp8vbi22ife1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 576}}
t3_1ib3ry8
/r/LocalLLaMA/comments/1ib3ry8/run_llm_locally_on_android/
false
false
https://external-preview…bb2f9cf7c4867158
2
{'enabled': False, 'images': [{'id': 'eHNudmRjaTIyaWZlMa2Abb99hOQyOQ71XUWNG6qkG6hM0CsQPGRGhgNwRDon', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/eHNudmRjaTIyaWZlMa2Abb99hOQyOQ71XUWNG6qkG6hM0CsQPGRGhgNwRDon.png?width=108&crop=smart&format=pjpg&auto=webp&s=2adce53680a3ecca5331188d1d5c81d2987df2c3', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/eHNudmRjaTIyaWZlMa2Abb99hOQyOQ71XUWNG6qkG6hM0CsQPGRGhgNwRDon.png?width=216&crop=smart&format=pjpg&auto=webp&s=1f2a4150d028fc6d5847016686ac3cd77f54c77d', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/eHNudmRjaTIyaWZlMa2Abb99hOQyOQ71XUWNG6qkG6hM0CsQPGRGhgNwRDon.png?width=320&crop=smart&format=pjpg&auto=webp&s=97d74595c2cb82cb56445e20312265c096624d84', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/eHNudmRjaTIyaWZlMa2Abb99hOQyOQ71XUWNG6qkG6hM0CsQPGRGhgNwRDon.png?width=640&crop=smart&format=pjpg&auto=webp&s=3ed1cea0cec90cafbef93671aec23fb8271a008e', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/eHNudmRjaTIyaWZlMa2Abb99hOQyOQ71XUWNG6qkG6hM0CsQPGRGhgNwRDon.png?format=pjpg&auto=webp&s=34f0911c7001fa9604e04b98124654fe3ed26a5b', 'width': 864}, 'variants': {}}]}
Local Llama for Chromebook
0
I have been successful with deploying TinyLlama on a Google Chromebook, but latency and coherence suck. Any recommendations would be appreciated.
2025-01-27T09:00:50
https://www.reddit.com/r/LocalLLaMA/comments/1ib3xyg/local_llama_for_chromebook/
bobfromthemailroom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib3xyg
false
null
t3_1ib3xyg
/r/LocalLLaMA/comments/1ib3xyg/local_llama_for_chromebook/
false
false
self
0
null
Best framework for AI / LLM Agents development?
1
[removed]
2025-01-27T09:07:20
https://www.reddit.com/r/LocalLLaMA/comments/1ib41zc/best_framework_for_ai_llm_agents_development/
Realistic-Platypus88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib41zc
false
null
t3_1ib41zc
/r/LocalLLaMA/comments/1ib41zc/best_framework_for_ai_llm_agents_development/
false
false
self
1
null
DeepSeek-R-1 is so woke its worthless, why is the MSM gone wild that this thing is Great?
0
[removed]
2025-01-27T09:09:43
https://www.reddit.com/r/LocalLLaMA/comments/1ib43c5/deepseekr1_is_so_woke_its_worthless_why_is_the/
Waste-Dimension-1681
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib43c5
false
null
t3_1ib43c5
/r/LocalLLaMA/comments/1ib43c5/deepseekr1_is_so_woke_its_worthless_why_is_the/
false
false
self
0
null
Nasdaq 100 futures fall over -200 points as markets react to DeepSeek release. The DeepSeek prophecy is coming true.
0
2025-01-27T09:14:06
https://i.redd.it/qtyz1w4q6ife1.png
eternviking
i.redd.it
1970-01-01T00:00:00
0
{}
1ib45sx
false
null
t3_1ib45sx
/r/LocalLLaMA/comments/1ib45sx/nasdaq_100_futures_fall_over_200_points_as/
false
false
https://a.thumbs.redditm…fSEvUBIMIFj4.jpg
0
{'enabled': True, 'images': [{'id': 'E7TM3fMoeKaTklP6TQWa8AuMQ9wqxIEb_UsEBnakq7Q', 'resolutions': [{'height': 180, 'url': 'https://preview.redd.it/qtyz1w4q6ife1.png?width=108&crop=smart&auto=webp&s=a145cf7cc250ddfdf1ff8560f16ffd0046fc95eb', 'width': 108}, {'height': 360, 'url': 'https://preview.redd.it/qtyz1w4q6ife1.png?width=216&crop=smart&auto=webp&s=3c7442776789cf744789a7176b2374d40c2d4883', 'width': 216}, {'height': 533, 'url': 'https://preview.redd.it/qtyz1w4q6ife1.png?width=320&crop=smart&auto=webp&s=e22df36affc6c618c4ba0fb1336bbc498aa38e58', 'width': 320}, {'height': 1067, 'url': 'https://preview.redd.it/qtyz1w4q6ife1.png?width=640&crop=smart&auto=webp&s=884634319bbfa35f10e98ce876d3f8401e7197a4', 'width': 640}, {'height': 1601, 'url': 'https://preview.redd.it/qtyz1w4q6ife1.png?width=960&crop=smart&auto=webp&s=7cf83bc6df1be171b62311d953f177fc726b6d0c', 'width': 960}, {'height': 1801, 'url': 'https://preview.redd.it/qtyz1w4q6ife1.png?width=1080&crop=smart&auto=webp&s=624dfbbb27b2694940ca9bc74e289f272ce41a39', 'width': 1080}], 'source': {'height': 1967, 'url': 'https://preview.redd.it/qtyz1w4q6ife1.png?auto=webp&s=8a66941c714c0edbe84be122ea09677d89892466', 'width': 1179}, 'variants': {}}]}
Why is DeepSeek so good?
0
As far as I understand language models, they require a lot of hard work to operate properly: you have to train them "by hand", which requires a big team and time. So, how is it possible that DeepSeek, which lacks the resources of big US AI companies, is outperforming US AI? Even if it was done in a more efficient way, this should not be possible as far as I understand the tech.
2025-01-27T09:17:09
https://www.reddit.com/r/LocalLLaMA/comments/1ib47lo/why_is_deepseek_so_good/
Acceptable-Try-4682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib47lo
false
null
t3_1ib47lo
/r/LocalLLaMA/comments/1ib47lo/why_is_deepseek_so_good/
false
false
self
0
null
Multi-agent workflow with parallelization in llama index
1
[removed]
2025-01-27T09:19:07
https://www.reddit.com/r/LocalLLaMA/comments/1ib48po/multiagent_workflow_with_parallelization_in_llama/
Former_Trouble_4428
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib48po
false
null
t3_1ib48po
/r/LocalLLaMA/comments/1ib48po/multiagent_workflow_with_parallelization_in_llama/
false
false
self
1
null
What's wrong with FuseO1 size in ollama ?
2
The quantized FuseO1 file is 19 GB, the same as R1, but while running it expands to 70-90 GB depending on availability. In this picture it says 69 GB, but if I give it another 24 GB card it expands to 84 GB. This happens with all the 32B q4 pulls of FuseO1, so I cannot run them on one x090 Nvidia card as intended. The official R1 32b and even 70b stay close to their file size (a few GB more depending on context length). Am I doing something wrong?
2025-01-27T09:22:21
https://i.redd.it/xibbin8c8ife1.png
alok_saurabh
i.redd.it
1970-01-01T00:00:00
0
{}
1ib4ak0
false
null
t3_1ib4ak0
/r/LocalLLaMA/comments/1ib4ak0/whats_wrong_with_fuseo1_size_in_ollama/
false
false
https://b.thumbs.redditm…8ZJ-xt5mGR9U.jpg
2
{'enabled': True, 'images': [{'id': 'W8smm3MlU_cjPTV0DmkQeGpm5OBLTHKPhq5dwgzMLq0', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/xibbin8c8ife1.png?width=108&crop=smart&auto=webp&s=cbc262f023f81f60194f5f5d7bbac616cd60a240', 'width': 108}, {'height': 59, 'url': 'https://preview.redd.it/xibbin8c8ife1.png?width=216&crop=smart&auto=webp&s=69481b9056a6500c008e9edd3a2d1ee8d0c67787', 'width': 216}, {'height': 87, 'url': 'https://preview.redd.it/xibbin8c8ife1.png?width=320&crop=smart&auto=webp&s=02026aec194b6f5bbad4821a312ae501c3b93fcc', 'width': 320}, {'height': 174, 'url': 'https://preview.redd.it/xibbin8c8ife1.png?width=640&crop=smart&auto=webp&s=1d5e394f3abb5b4c4d742f071f6f9ee1910bd6c7', 'width': 640}, {'height': 262, 'url': 'https://preview.redd.it/xibbin8c8ife1.png?width=960&crop=smart&auto=webp&s=84428f91caeccca5b4cb7d0c4293ff06927c38ad', 'width': 960}, {'height': 295, 'url': 'https://preview.redd.it/xibbin8c8ife1.png?width=1080&crop=smart&auto=webp&s=cf7daa138eb415a79684717ffdd9b253821f0249', 'width': 1080}], 'source': {'height': 586, 'url': 'https://preview.redd.it/xibbin8c8ife1.png?auto=webp&s=6fa25a063d4bf75e416f4813bffc0c6bd4c543e1', 'width': 2145}, 'variants': {}}]}
Was this about DeepSeek? Do you think he is really worried about it?
0
2025-01-27T09:30:17
https://i.redd.it/3da33u5r9ife1.jpeg
AloneCoffee4538
i.redd.it
1970-01-01T00:00:00
0
{}
1ib4f45
false
null
t3_1ib4f45
/r/LocalLLaMA/comments/1ib4f45/was_this_about_deepseek_do_you_think_he_is_really/
false
false
https://a.thumbs.redditm…iFhGNBYcyZM4.jpg
0
{'enabled': True, 'images': [{'id': '3RePqFErg7b1WHjlwqQQg-e4SxLHMzxdTO-sWEesuOM', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/3da33u5r9ife1.jpeg?width=108&crop=smart&auto=webp&s=dc98d894150f62ed01f5dafd00fbc44ef8ac9752', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/3da33u5r9ife1.jpeg?width=216&crop=smart&auto=webp&s=a8925081df3d2d4f84a878d7d60cc2dc637dcc50', 'width': 216}, {'height': 234, 'url': 'https://preview.redd.it/3da33u5r9ife1.jpeg?width=320&crop=smart&auto=webp&s=6733f7b09e1af8d75e6f8aa421b34163acef50ec', 'width': 320}, {'height': 469, 'url': 'https://preview.redd.it/3da33u5r9ife1.jpeg?width=640&crop=smart&auto=webp&s=c5d24d5e8766ec7429ee0a8c722185e5a1a14c1c', 'width': 640}, {'height': 704, 'url': 'https://preview.redd.it/3da33u5r9ife1.jpeg?width=960&crop=smart&auto=webp&s=1422d5a3c9fce13475d221d9c6dfb57f32b5b6f5', 'width': 960}, {'height': 792, 'url': 'https://preview.redd.it/3da33u5r9ife1.jpeg?width=1080&crop=smart&auto=webp&s=5b2b6ab617dba7f9677364984c6309d14895a7cf', 'width': 1080}], 'source': {'height': 865, 'url': 'https://preview.redd.it/3da33u5r9ife1.jpeg?auto=webp&s=708f1fcc1c11d4f8c363940ad2b500581855da75', 'width': 1179}, 'variants': {}}]}
Last code competition shows that o1 is still in the lead for competitive programming
0
Quite surprised, since everyone around is used to saying that Claude and R1 are better than o1 at programming
2025-01-27T09:33:44
https://i.redd.it/9xhce9adaife1.jpeg
Lindayz
i.redd.it
1970-01-01T00:00:00
0
{}
1ib4h5o
false
null
t3_1ib4h5o
/r/LocalLLaMA/comments/1ib4h5o/last_code_competition_shows_that_o1_is_still_in/
false
false
https://b.thumbs.redditm…_lywqtF6xPbg.jpg
0
{'enabled': True, 'images': [{'id': '_nZuymmc558iclwBpNdPc2cpn0CvZPxU9Es4nUdJNns', 'resolutions': [{'height': 148, 'url': 'https://preview.redd.it/9xhce9adaife1.jpeg?width=108&crop=smart&auto=webp&s=01bfba1cf466bcc28cf4e38055c70e5129305b71', 'width': 108}, {'height': 297, 'url': 'https://preview.redd.it/9xhce9adaife1.jpeg?width=216&crop=smart&auto=webp&s=c4d87ad5a3819577c9e24fc8550560339a804217', 'width': 216}, {'height': 440, 'url': 'https://preview.redd.it/9xhce9adaife1.jpeg?width=320&crop=smart&auto=webp&s=c29421b0a44dc1a7892f124e63304b4ee37b106f', 'width': 320}, {'height': 881, 'url': 'https://preview.redd.it/9xhce9adaife1.jpeg?width=640&crop=smart&auto=webp&s=28bca9ca23a4b28a51e60c882c5fe9de039b539f', 'width': 640}, {'height': 1321, 'url': 'https://preview.redd.it/9xhce9adaife1.jpeg?width=960&crop=smart&auto=webp&s=2b8839b347c96d015233b56910ebbfa36dfdb533', 'width': 960}, {'height': 1486, 'url': 'https://preview.redd.it/9xhce9adaife1.jpeg?width=1080&crop=smart&auto=webp&s=afd20d5dc8967825048fa06687642809bcc43778', 'width': 1080}], 'source': {'height': 1776, 'url': 'https://preview.redd.it/9xhce9adaife1.jpeg?auto=webp&s=da8863e68510d596e8f4c09939ab4eeb8cc04645', 'width': 1290}, 'variants': {}}]}
Is there a good unified interface for different LLMs?
3
So in the past I've come across interfaces for using APIs, or even local LLMs in one place. Is there a consensus on what is the current best such interface? And is there a way to use a ChatGPT subscription in an interface like this? (API costs for o1 are worse than using the Plus plan) Or is it only API and local stuff that can be properly integrated?
2025-01-27T09:34:06
https://www.reddit.com/r/LocalLLaMA/comments/1ib4hcu/is_there_a_good_unified_interface_for_different/
Dzsaffar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4hcu
false
null
t3_1ib4hcu
/r/LocalLLaMA/comments/1ib4hcu/is_there_a_good_unified_interface_for_different/
false
false
self
3
null
How *exactly* is Deepseek so cheap?
607
Deepseek's all the rage. I get it, 95-97% reduction in costs. How \*exactly\*? Aside from cheaper training (not doing RLHF), quantization, and caching (semantic input HTTP caching I guess?), where's the reduction coming from? This can't be all, because supposedly R1 isn't quantized. Right? Is it subsidized? Is OpenAI/Anthropic just...charging too much? What's the deal?
2025-01-27T09:40:04
https://www.reddit.com/r/LocalLLaMA/comments/1ib4ksj/how_exactly_is_deepseek_so_cheap/
micamecava
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4ksj
false
null
t3_1ib4ksj
/r/LocalLLaMA/comments/1ib4ksj/how_exactly_is_deepseek_so_cheap/
false
false
self
607
null
It was fun while it lasted.
212
2025-01-27T09:50:11
https://i.redd.it/f4z3rtg5dife1.png
omnisvosscio
i.redd.it
1970-01-01T00:00:00
0
{}
1ib4qrg
false
null
t3_1ib4qrg
/r/LocalLLaMA/comments/1ib4qrg/it_was_fun_while_it_lasted/
false
false
https://a.thumbs.redditm…0TZqQlp7wBh0.jpg
212
{'enabled': True, 'images': [{'id': 'PaP9lKJLxUhaCBdxk-qq_tvIN-NZc2B6vraXVg7Z9_I', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/f4z3rtg5dife1.png?width=108&crop=smart&auto=webp&s=ae19b03bdc71fb2a73781d9c5d232d373ec36f79', 'width': 108}, {'height': 57, 'url': 'https://preview.redd.it/f4z3rtg5dife1.png?width=216&crop=smart&auto=webp&s=65aaaf9695b33c1ef53734aed01f8b1aae433564', 'width': 216}, {'height': 85, 'url': 'https://preview.redd.it/f4z3rtg5dife1.png?width=320&crop=smart&auto=webp&s=a36e32411d1440106d82759c59148d5c7f900a12', 'width': 320}, {'height': 171, 'url': 'https://preview.redd.it/f4z3rtg5dife1.png?width=640&crop=smart&auto=webp&s=b2e9e389c105a5198f86f71e16e2dc7186ad1daa', 'width': 640}, {'height': 257, 'url': 'https://preview.redd.it/f4z3rtg5dife1.png?width=960&crop=smart&auto=webp&s=753e08f607dfce5e288382d9ddfae857b6ad3b83', 'width': 960}, {'height': 289, 'url': 'https://preview.redd.it/f4z3rtg5dife1.png?width=1080&crop=smart&auto=webp&s=6b9778b157c1cb4654f188d5605b4e7025841a7c', 'width': 1080}], 'source': {'height': 466, 'url': 'https://preview.redd.it/f4z3rtg5dife1.png?auto=webp&s=0095f2adcac349b38293558df2888359db0ed8fc', 'width': 1736}, 'variants': {}}]}
Fine-Tuned SAM2 Model on Images: Automatic Mask Generator Issue
1
[removed]
2025-01-27T09:51:51
https://www.reddit.com/r/LocalLLaMA/comments/1ib4rsy/finetuned_sam2_model_on_images_automatic_mask/
Logical_Tip_3240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4rsy
false
null
t3_1ib4rsy
/r/LocalLLaMA/comments/1ib4rsy/finetuned_sam2_model_on_images_automatic_mask/
false
false
self
1
null
What's the difference in response between the DeepSeek Chatbot UI and Running The Full 600B+ Parameter Model on 8 bare metal H100s on Vultr?
1
[removed]
2025-01-27T09:54:33
https://www.reddit.com/r/LocalLLaMA/comments/1ib4td3/whats_the_difference_in_response_between_the/
ItsDrDolphLundgren
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4td3
false
null
t3_1ib4td3
/r/LocalLLaMA/comments/1ib4td3/whats_the_difference_in_response_between_the/
false
false
self
1
null
Which libraries do you use to run GGUF models?
1
Hello everybody! I'm quite new to running AI on local hardware. I'm somewhat familiar with the transformers library, but I'm a bit outdated when it comes to new tech and libraries for Python. I will need to run all kinds of models, like vision or tool-use models. Which framework/library would you suggest for me?
2025-01-27T09:55:08
https://www.reddit.com/r/LocalLLaMA/comments/1ib4tq0/which_libraries_do_you_use_to_run_gguf_models/
Su1tz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4tq0
false
null
t3_1ib4tq0
/r/LocalLLaMA/comments/1ib4tq0/which_libraries_do_you_use_to_run_gguf_models/
false
false
self
1
null
Buying m4 mac mini 16g or 24g
1
[removed]
2025-01-27T09:59:42
https://www.reddit.com/r/LocalLLaMA/comments/1ib4wgy/buying_m4_mac_mini_16g_or_24g/
Special_Permit_5546
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4wgy
false
null
t3_1ib4wgy
/r/LocalLLaMA/comments/1ib4wgy/buying_m4_mac_mini_16g_or_24g/
false
false
self
1
null
Built a local Llama-powered chat for Apple Notes—quick to set up, would love feedback!
4
Hey everyone, I've heard some folks mention that they've built custom solutions to chat with their Apple Notes, so I decided to create one myself, but make it easy to set up for others. I'm currently preparing it for launch and would love to hear your thoughts or feedback. Best, Arne
2025-01-27T09:59:53
https://www.reddit.com/r/LocalLLaMA/comments/1ib4wkx/built_a_local_llamapowered_chat_for_apple/
arne226
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4wkx
false
null
t3_1ib4wkx
/r/LocalLLaMA/comments/1ib4wkx/built_a_local_llamapowered_chat_for_apple/
false
false
self
4
null
DeepSeek R1 is painful slow on my device, is this problem with all the models or will installing large models will make it faster?
0
Hello, I installed the DeepSeek R1 14b model around 3-4 days ago and it was taking 5-10 minutes to answer even basic questions; even replying to a "Hello" took 5 minutes. I want to develop a small personalized chatbot for a website using the DeepSeek model, but as it's extremely slow I am unable to do so. If I install a larger model, will it be faster, or are large models slow too? I can install up to the 70b model TLDR: Are larger DeepSeek models faster? I installed the 14b model of DeepSeek using Ollama and it was really slow
2025-01-27T10:00:18
https://www.reddit.com/r/LocalLLaMA/comments/1ib4wvd/deepseek_r1_is_painful_slow_on_my_device_is_this/
Fair_Performance_290
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4wvd
false
null
t3_1ib4wvd
/r/LocalLLaMA/comments/1ib4wvd/deepseek_r1_is_painful_slow_on_my_device_is_this/
false
false
self
0
null
DeepSeek R1 is painful slow on my device, is this problem with all the models or will installing large models will make it faster?
0
Hello, I installed the DeepSeek R1 14b model around 3-4 days ago and it was taking 5-10 minutes to answer even basic questions; even replying to a "Hello" took 5 minutes. I want to develop a small personalized chatbot for a website using the DeepSeek model, but as it's extremely slow I am unable to do so. If I install a larger model, will it be faster, or are large models slow too? I can install up to the 70b model TLDR: Are larger DeepSeek models faster? I installed the 14b model of DeepSeek using Ollama and it was really slow
2025-01-27T10:00:30
https://www.reddit.com/r/LocalLLaMA/comments/1ib4x0m/deepseek_r1_is_painful_slow_on_my_device_is_this/
Fair_Performance_290
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4x0m
false
null
t3_1ib4x0m
/r/LocalLLaMA/comments/1ib4x0m/deepseek_r1_is_painful_slow_on_my_device_is_this/
false
false
self
0
null
Deepseek down?
1
https://preview.redd.it/…08eef4c0205bca
2025-01-27T10:01:15
https://www.reddit.com/r/LocalLLaMA/comments/1ib4xk1/deepseek_down/
HIVVIH
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib4xk1
false
null
t3_1ib4xk1
/r/LocalLLaMA/comments/1ib4xk1/deepseek_down/
false
false
https://b.thumbs.redditm…YoYpRdVR0mSg.jpg
1
null
Any sources about the TOTAL DeepSeek R1 training costs?
1
I only see the 5.57M from V3, but no mention of the V3->R1 costs
2025-01-27T10:17:56
https://www.reddit.com/r/LocalLLaMA/comments/1ib5846/any_sources_about_the_total_deepseek_r1_training/
Neat-Computer-6975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib5846
false
null
t3_1ib5846
/r/LocalLLaMA/comments/1ib5846/any_sources_about_the_total_deepseek_r1_training/
false
false
self
1
null
Ollama DeepSeek-R1-Distill-Qwen-32B
1
The only pullable DeepSeek-R1-Distill-Qwen-32B model I can see on ollama is hengwen/DeepSeek-R1-Distill-Qwen-32B:q4_k_m, but it seems to be Chinese-only. Is there an English one somewhere?
2025-01-27T10:50:23
https://www.reddit.com/r/LocalLLaMA/comments/1ib5smm/ollama_deepseekr1distillqwen32b/
NaiRogers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib5smm
false
null
t3_1ib5smm
/r/LocalLLaMA/comments/1ib5smm/ollama_deepseekr1distillqwen32b/
false
false
self
1
null
deepseek r1 tops the creative writing rankings
346
2025-01-27T11:00:18
https://i.redd.it/yslsnd6fpife1.png
Still_Potato_415
i.redd.it
1970-01-01T00:00:00
0
{}
1ib5yuk
false
null
t3_1ib5yuk
/r/LocalLLaMA/comments/1ib5yuk/deepseek_r1_tops_the_creative_writing_rankings/
false
false
https://b.thumbs.redditm…N21J-idcQ4Kg.jpg
346
{'enabled': True, 'images': [{'id': 'Jc0wltHXqb-CKDJshni3zNN7f45DW4uzVxaRMm4KYcI', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/yslsnd6fpife1.png?width=108&crop=smart&auto=webp&s=aa50b55e535f9573afa0583ad258d19839fcbbf6', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/yslsnd6fpife1.png?width=216&crop=smart&auto=webp&s=d5ca9f65508c3860c52f2a9f959f88434479462f', 'width': 216}, {'height': 244, 'url': 'https://preview.redd.it/yslsnd6fpife1.png?width=320&crop=smart&auto=webp&s=c69ab44e8b81b356694029742e5d94ee954b359c', 'width': 320}, {'height': 488, 'url': 'https://preview.redd.it/yslsnd6fpife1.png?width=640&crop=smart&auto=webp&s=0c3efe198ef9c53da2bdad4be41459ab90d46ec7', 'width': 640}, {'height': 733, 'url': 'https://preview.redd.it/yslsnd6fpife1.png?width=960&crop=smart&auto=webp&s=ec975e9fb47c7b8b3dfd56f373e1f6f15c8a002b', 'width': 960}, {'height': 825, 'url': 'https://preview.redd.it/yslsnd6fpife1.png?width=1080&crop=smart&auto=webp&s=e0c42bf0cca0949d1ad3358f43751038718c7ec8', 'width': 1080}], 'source': {'height': 1638, 'url': 'https://preview.redd.it/yslsnd6fpife1.png?auto=webp&s=8b93c0758bb36df93d3b4bea568f6fb16e973e4e', 'width': 2144}, 'variants': {}}]}
RAG in Business: Insights, Use Cases, and Technologies for Structured Data
1
[removed]
2025-01-27T11:01:35
https://www.reddit.com/r/LocalLLaMA/comments/1ib5zuc/rag_in_business_insights_use_cases_and/
lirones
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib5zuc
false
null
t3_1ib5zuc
/r/LocalLLaMA/comments/1ib5zuc/rag_in_business_insights_use_cases_and/
false
false
self
1
null
Suggest a prompt optimizer/improver tool
1
[removed]
2025-01-27T11:06:39
https://www.reddit.com/r/LocalLLaMA/comments/1ib63ct/suggest_a_prompt_optimizerimprover_tool/
Perfect_Ad3146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib63ct
false
null
t3_1ib63ct
/r/LocalLLaMA/comments/1ib63ct/suggest_a_prompt_optimizerimprover_tool/
false
false
self
1
null
Help with browser-use WebUI + Kluster AI (Custom API)
1
[removed]
2025-01-27T11:06:42
https://www.reddit.com/r/LocalLLaMA/comments/1ib63e0/help_with_browseruse_webui_kluster_ai_custom_api/
Fox-Lopsided
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib63e0
false
null
t3_1ib63e0
/r/LocalLLaMA/comments/1ib63e0/help_with_browseruse_webui_kluster_ai_custom_api/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YvkDV79d9ut7ApnjJg3zJwKUiMY9V6zK1UJkgnuYcWY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=108&crop=smart&auto=webp&s=b6d4a9466e570d389188e24af75180255bb37da7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=216&crop=smart&auto=webp&s=4f903b231d22fa19c1c01b9d66c5b9b8564246a2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=320&crop=smart&auto=webp&s=c4c38bdc08e2b53cce959a427e19aa591bc64050', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=640&crop=smart&auto=webp&s=1d3ed9f94cb5edb007f860ae53066f1ca706accd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=960&crop=smart&auto=webp&s=6348ecf532db39d8eeb00e6b85006d4a4d244866', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=1080&crop=smart&auto=webp&s=a62c03facfc3aa3c5a8328f061d89edf7d9cff36', 'width': 1080}], 'source': {'height': 945, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?auto=webp&s=bd50092485ba35c5ffc13130a53c20a774cd0bc9', 'width': 1800}, 'variants': {}}]}
Please help me with browser-use and kluster.ai
1
[removed]
2025-01-27T11:07:39
https://www.reddit.com/r/LocalLLaMA/comments/1ib641w/please_help_me_with_browseruse_and_klusterai/
Fox-Lopsided
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib641w
false
null
t3_1ib641w
/r/LocalLLaMA/comments/1ib641w/please_help_me_with_browseruse_and_klusterai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YvkDV79d9ut7ApnjJg3zJwKUiMY9V6zK1UJkgnuYcWY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=108&crop=smart&auto=webp&s=b6d4a9466e570d389188e24af75180255bb37da7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=216&crop=smart&auto=webp&s=4f903b231d22fa19c1c01b9d66c5b9b8564246a2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=320&crop=smart&auto=webp&s=c4c38bdc08e2b53cce959a427e19aa591bc64050', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=640&crop=smart&auto=webp&s=1d3ed9f94cb5edb007f860ae53066f1ca706accd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=960&crop=smart&auto=webp&s=6348ecf532db39d8eeb00e6b85006d4a4d244866', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=1080&crop=smart&auto=webp&s=a62c03facfc3aa3c5a8328f061d89edf7d9cff36', 'width': 1080}], 'source': {'height': 945, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?auto=webp&s=bd50092485ba35c5ffc13130a53c20a774cd0bc9', 'width': 1800}, 'variants': {}}]}
Browser-use and kluster.ai - Please help
1
[removed]
2025-01-27T11:08:21
https://www.reddit.com/r/LocalLLaMA/comments/1ib64gu/browseruse_and_klusterai_please_help/
Fox-Lopsided
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib64gu
false
null
t3_1ib64gu
/r/LocalLLaMA/comments/1ib64gu/browseruse_and_klusterai_please_help/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YvkDV79d9ut7ApnjJg3zJwKUiMY9V6zK1UJkgnuYcWY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=108&crop=smart&auto=webp&s=b6d4a9466e570d389188e24af75180255bb37da7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=216&crop=smart&auto=webp&s=4f903b231d22fa19c1c01b9d66c5b9b8564246a2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=320&crop=smart&auto=webp&s=c4c38bdc08e2b53cce959a427e19aa591bc64050', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=640&crop=smart&auto=webp&s=1d3ed9f94cb5edb007f860ae53066f1ca706accd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=960&crop=smart&auto=webp&s=6348ecf532db39d8eeb00e6b85006d4a4d244866', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?width=1080&crop=smart&auto=webp&s=a62c03facfc3aa3c5a8328f061d89edf7d9cff36', 'width': 1080}], 'source': {'height': 945, 'url': 'https://external-preview.redd.it/6LPUM1zQpVWtbVFx2flIROBOjVMD98dirJ0ZN9IDWcM.jpg?auto=webp&s=bd50092485ba35c5ffc13130a53c20a774cd0bc9', 'width': 1800}, 'variants': {}}]}
Suggest a prompt optimizer tool
1
[removed]
2025-01-27T11:10:00
https://www.reddit.com/r/LocalLLaMA/comments/1ib65kp/suggest_a_prompt_optimizer_tool/
Perfect_Ad3146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib65kp
false
null
t3_1ib65kp
/r/LocalLLaMA/comments/1ib65kp/suggest_a_prompt_optimizer_tool/
false
false
self
1
null
Just a reminder of the cost of censorship
0
2025-01-27T11:19:42
https://v.redd.it/hhk68gn9tife1
LucasNoritomi
v.redd.it
1970-01-01T00:00:00
0
{}
1ib6c1v
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/hhk68gn9tife1/DASHPlaylist.mpd?a=1740568798%2CNjZkZWJmMjU2ODA2YjU0NzAwZmU5MmQ1YzY1N2VmZDNiNWEzNTY2ZTgwYTE0OTY3Mzk4YWEzYTU4ZWFlMWZiNg%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/hhk68gn9tife1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/hhk68gn9tife1/HLSPlaylist.m3u8?a=1740568798%2CYTQ4ZjBlNmM3ODllNzVkMDhiYTU5OWYwYzFkM2JkYTFiMzY2NTkwYzA2Mjc5NTk1YmZmN2JjNWJjNzE2OTNjNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hhk68gn9tife1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 384}}
t3_1ib6c1v
/r/LocalLLaMA/comments/1ib6c1v/just_a_reminder_of_the_cost_of_censorship/
false
false
https://external-preview…f9266538ff4bca16
0
{'enabled': False, 'images': [{'id': 'NjE3YTBhZzl0aWZlMYiNTG4MzlL2kZJisi0nLJ77zazSV0ls4mtA1LpOcg_z', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/NjE3YTBhZzl0aWZlMYiNTG4MzlL2kZJisi0nLJ77zazSV0ls4mtA1LpOcg_z.png?width=108&crop=smart&format=pjpg&auto=webp&s=e136a943b8ab476b18e7ae2e46050f9f5f3e3c31', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/NjE3YTBhZzl0aWZlMYiNTG4MzlL2kZJisi0nLJ77zazSV0ls4mtA1LpOcg_z.png?width=216&crop=smart&format=pjpg&auto=webp&s=e946ba57c16426d96900a4ddf50636c21143d736', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/NjE3YTBhZzl0aWZlMYiNTG4MzlL2kZJisi0nLJ77zazSV0ls4mtA1LpOcg_z.png?width=320&crop=smart&format=pjpg&auto=webp&s=91f960eccc8b5c0d262c8cd64fed82df0c77e632', 'width': 320}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/NjE3YTBhZzl0aWZlMYiNTG4MzlL2kZJisi0nLJ77zazSV0ls4mtA1LpOcg_z.png?format=pjpg&auto=webp&s=27971e07603abb94c7d05c6a7ec5d758f28b2792', 'width': 573}, 'variants': {}}]}
DeepSeek R1 told me it's developed by OpenAI !
1
2025-01-27T11:23:37
https://i.redd.it/2onw0m3xtife1.png
alishokoohi666
i.redd.it
1970-01-01T00:00:00
0
{}
1ib6er6
false
null
t3_1ib6er6
/r/LocalLLaMA/comments/1ib6er6/deepseek_r1_told_me_its_developed_by_openai/
false
false
https://b.thumbs.redditm…m2GQVAIeQDqk.jpg
1
{'enabled': True, 'images': [{'id': 'ug2iLpP2l2mBTas_Noi681CgURQvKg7iuhv13PjnCBw', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2onw0m3xtife1.png?width=108&crop=smart&auto=webp&s=c9fd5f34818f74b64158b082400f3b25819151c5', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2onw0m3xtife1.png?width=216&crop=smart&auto=webp&s=3f1d55168c90956a72a8070224e0eb88b199c3a0', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/2onw0m3xtife1.png?width=320&crop=smart&auto=webp&s=611246f8043ca855e3b892c2ca2a13a7f1165865', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/2onw0m3xtife1.png?width=640&crop=smart&auto=webp&s=432cf0009dd319499310a4fd26ab5e4d08cdce16', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/2onw0m3xtife1.png?width=960&crop=smart&auto=webp&s=c064a6f2357e6400f884182cfcfac47eb492643c', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/2onw0m3xtife1.png?width=1080&crop=smart&auto=webp&s=ca114acd57e354dd5d8ff375e1fc28827cbac91e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/2onw0m3xtife1.png?auto=webp&s=1c40b0c1879a1b16968f6ca0d53fac81fc8794d5', 'width': 1920}, 'variants': {}}]}
DeepSeek R1 told me it's developed by OpenAI !
1
2025-01-27T11:36:45
https://i.redd.it/xg433wh8wife1.png
alishokoohi666
i.redd.it
1970-01-01T00:00:00
0
{}
1ib6nmx
false
null
t3_1ib6nmx
/r/LocalLLaMA/comments/1ib6nmx/deepseek_r1_told_me_its_developed_by_openai/
false
false
https://b.thumbs.redditm…3OTTDlSe8tms.jpg
1
{'enabled': True, 'images': [{'id': 'P57Anx4Ta-066Jn9goSpLQwVSTQb08gLF8Xoek02sY4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/xg433wh8wife1.png?width=108&crop=smart&auto=webp&s=ab8a52ebcc84016ddbb836d5857f8e01ecf850cf', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/xg433wh8wife1.png?width=216&crop=smart&auto=webp&s=b1441228dfc140746c6853dc177c5c2011c61218', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/xg433wh8wife1.png?width=320&crop=smart&auto=webp&s=84534d61b9d9d4097fadd0d62716551794160dd6', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/xg433wh8wife1.png?width=640&crop=smart&auto=webp&s=1c0544ca3d3ec39637eb0d0b95c2f42f1515d54f', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/xg433wh8wife1.png?width=960&crop=smart&auto=webp&s=a12af8742aa8024567e439ffd16789e52fa84748', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/xg433wh8wife1.png?width=1080&crop=smart&auto=webp&s=fcd52b1c5e19dcbf1ca0c521b84604c5ad5e6ef4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/xg433wh8wife1.png?auto=webp&s=929c7ac13fea7fee5019c06b6f6e1b871316a66f', 'width': 1920}, 'variants': {}}]}
DeepSeek Buzz Puts Tech Stocks on Track for $1 Trillion Wipeout
5
[https://finance.yahoo.com/news/nasdaq-futures-slump-china-deepseek-022904517.html](https://finance.yahoo.com/news/nasdaq-futures-slump-china-deepseek-022904517.html) *BREAKING: Nasdaq 100 futures crash -1,100 POINTS as pre-market selling accelerates on worries of DeepSeek dethroning US Tech.* *Now on track for its biggest 1-day loss since September 2022.* The Kobeissi Letter: [https://x.com/KobeissiLetter/status/1883831022149927352](https://x.com/KobeissiLetter/status/1883831022149927352) https://preview.redd.it/7xvpd7covife1.jpg?width=747&format=pjpg&auto=webp&s=abf855acbf53716b1e422a7cb2e08758f33b58df Bloomberg: DeepSeek Buzz Puts Tech Stocks on Track for $1.2 Trillion Drop: [https://www.bloomberg.com/news/articles/2025-01-27/nasdaq-futures-slump-as-china-s-deepseek-sparks-us-tech-concern](https://www.bloomberg.com/news/articles/2025-01-27/nasdaq-futures-slump-as-china-s-deepseek-sparks-us-tech-concern) Markets Insider: Nvidia tumbles and tech stocks slide premarket as China's DeepSeek spooks AI investors: [https://markets.businessinsider.com/news/stocks/nvidia-tech-stocks-deepseek-ai-race-nasdaq-2025-1?op=1](https://markets.businessinsider.com/news/stocks/nvidia-tech-stocks-deepseek-ai-race-nasdaq-2025-1?op=1) Business Insider: Chinese AI lab DeepSeek massively undercuts OpenAI on pricing — and that's spooking tech stocks: [https://www.businessinsider.com/chinese-ai-lab-deepseek-massively-undercuts-openai-on-pricing-2025-1](https://www.businessinsider.com/chinese-ai-lab-deepseek-massively-undercuts-openai-on-pricing-2025-1)
2025-01-27T11:37:03
https://www.reddit.com/r/LocalLLaMA/comments/1ib6nu7/deepseek_buzz_puts_tech_stocks_on_track_for_1/
Nunki08
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib6nu7
false
null
t3_1ib6nu7
/r/LocalLLaMA/comments/1ib6nu7/deepseek_buzz_puts_tech_stocks_on_track_for_1/
false
false
https://b.thumbs.redditm…i97pIDnT2-ds.jpg
5
{'enabled': False, 'images': [{'id': 'k96qJ243MAhwOFrp8aFBFKBo-T4xGTxJNfMnmJIJjFc', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/pUh_-3OKqJu3jrxiCfa8jBRPR_UOZ5zCUEoeYoWzDUY.jpg?width=108&crop=smart&auto=webp&s=d74f97aa10a3adb7aa4fe483113d8e9aabf7d3a6', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/pUh_-3OKqJu3jrxiCfa8jBRPR_UOZ5zCUEoeYoWzDUY.jpg?width=216&crop=smart&auto=webp&s=ffd7808decf527d9b690d12f9d46409353f411a2', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/pUh_-3OKqJu3jrxiCfa8jBRPR_UOZ5zCUEoeYoWzDUY.jpg?width=320&crop=smart&auto=webp&s=1455994049497fb4768cf37226a371d374dee0cd', 'width': 320}, {'height': 427, 'url': 'https://external-preview.redd.it/pUh_-3OKqJu3jrxiCfa8jBRPR_UOZ5zCUEoeYoWzDUY.jpg?width=640&crop=smart&auto=webp&s=afaea8e40351183f5920c9db4f99e0609821d291', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/pUh_-3OKqJu3jrxiCfa8jBRPR_UOZ5zCUEoeYoWzDUY.jpg?width=960&crop=smart&auto=webp&s=352429212aa2f3b07b5e0cf9a3ecd9c3c00d56ed', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/pUh_-3OKqJu3jrxiCfa8jBRPR_UOZ5zCUEoeYoWzDUY.jpg?width=1080&crop=smart&auto=webp&s=7e1b11e94df17bb66eed0d18452ead4b3966c4c0', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/pUh_-3OKqJu3jrxiCfa8jBRPR_UOZ5zCUEoeYoWzDUY.jpg?auto=webp&s=2f732587ad255d75de105e2febd6a1a75cd3b89a', 'width': 1199}, 'variants': {}}]}
Creating customizable voice agent for low latency Conversational AI project
2
Hi everyone, I’m currently working on a project to build a low-latency conversational AI system with customizable voice agents, and I’m curious to hear how you have tackled similar challenges. I want either a fully offline system or a fully online system; my hardware is a Jetson Orin AGX. The only important consideration is speed.
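Before optimizing anything, it helps to know where the latency actually lives. A minimal sketch for timing each stage of a voice pipeline (the STT/LLM/TTS stage functions below are hypothetical placeholders — swap in your real calls on the Jetson):

```python
import time

def timed(stage_name, fn, *args):
    """Run one pipeline stage and report its latency in milliseconds."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{stage_name}: {elapsed_ms:.1f} ms")
    return result, elapsed_ms

# Placeholder stages -- replace with real STT/LLM/TTS implementations.
def stt(audio):  return "transcribed text"
def llm(text):   return "response text"
def tts(text):   return b"audio bytes"

text, t1 = timed("STT", stt, b"...")
reply, t2 = timed("LLM", llm, text)
audio, t3 = timed("TTS", tts, reply)
print(f"end-to-end: {t1 + t2 + t3:.1f} ms")
```

In practice the LLM stage usually dominates, so measuring per-stage first tells you whether offline on the Orin AGX can meet your budget at all before you invest in tuning.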
2025-01-27T11:44:48
https://www.reddit.com/r/LocalLLaMA/comments/1ib6t5l/creating_customizable_voice_agent_for_low_latency/
bdiler1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib6t5l
false
null
t3_1ib6t5l
/r/LocalLLaMA/comments/1ib6t5l/creating_customizable_voice_agent_for_low_latency/
false
false
self
2
null
How much should I offload to GPU for best performance.
0
I am running **DeepSeek R1 1.5B** from **unsloth**, in **LM Studio**, on a laptop with a **Core i5 8th gen** and **UHD 620** graphics. In the model settings there's a **GPU offload slider** that goes from 0 to 28. How much should I select on that slider? The integrated UHD 620 isn't powerful, so selecting 28 makes it slower; by default it's set to 0, which is also slower than a value somewhere in between. I am looking for the sweet spot. Thanks.
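There is no universal answer — the sweet spot depends on how much memory the iGPU can actually use — but a rough starting estimate can be computed before timing runs. A hedged sketch (assumes layers are roughly equal in size, which is only approximately true, and leaves headroom for the KV cache):

```python
def suggested_offload(total_layers: int, model_size_gb: float, free_vram_gb: float) -> int:
    """Estimate how many transformer layers fit in GPU memory.

    Assumes layers are roughly uniform in size and reserves ~20% of
    free VRAM as headroom for the KV cache and compute buffers.
    """
    if total_layers <= 0 or model_size_gb <= 0:
        return 0
    per_layer_gb = model_size_gb / total_layers
    usable_gb = free_vram_gb * 0.8  # headroom for KV cache / buffers
    return max(0, min(total_layers, int(usable_gb / per_layer_gb)))

# Hypothetical example: a ~1.1 GB quant with 28 layers and ~0.5 GB of
# shared memory usable by the iGPU -> start around 10 offloaded layers.
print(suggested_offload(28, 1.1, 0.5))  # -> 10
```

Use the estimate only as a starting point: time a fixed prompt at a few values around it (e.g. estimate ±4) and keep whichever gives the highest tokens/sec — on a weak iGPU like the UHD 620, 0 can genuinely win.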
2025-01-27T11:44:59
https://www.reddit.com/r/LocalLLaMA/comments/1ib6t9x/how_much_should_i_offload_to_gpu_for_best/
InternalVolcano
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib6t9x
false
null
t3_1ib6t9x
/r/LocalLLaMA/comments/1ib6t9x/how_much_should_i_offload_to_gpu_for_best/
false
false
self
0
null
How many people do you think dismissed DeepSeek as hype after trying it without R1?
0
2025-01-27T11:46:55
https://www.reddit.com/gallery/1ib6um8
omnisvosscio
reddit.com
1970-01-01T00:00:00
0
{}
1ib6um8
false
null
t3_1ib6um8
/r/LocalLLaMA/comments/1ib6um8/how_many_people_do_you_think_dismissed_deepseek/
false
false
https://b.thumbs.redditm…MFq2semonsWI.jpg
0
null
Same model on Ollama performing worse than Cloud providers (Groq, HF, ...)
2
Hello. I have a prompt that returns a wrong answer when I ask Llama hosted on Ollama. However, when asking the same model hosted on Groq, Hugging Face, or any other cloud provider, I get a correct answer. The prompt is a RAG prompt: it contains instructions, context, and a question. The context and the question are in Portuguese. I am sending exactly the same prompt in all cases. Why do the cloud providers all answer correctly but the Ollama one does not? I am going insane over this; I'd appreciate any help.
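One common culprit worth checking (an assumption, not a confirmed diagnosis for this case): Ollama's default context window is 2048 tokens, so a long RAG prompt gets silently truncated, while cloud providers typically serve the model with its full context. A rough check using a chars-per-token heuristic:

```python
def likely_truncated(prompt: str, num_ctx: int = 2048, chars_per_token: float = 4.0) -> bool:
    """Rough heuristic: estimate token count from character length and
    compare against Ollama's default context window (num_ctx=2048).
    Portuguese often tokenizes less efficiently than English, so the
    real token count is likely higher than this estimate."""
    est_tokens = len(prompt) / chars_per_token
    return est_tokens > num_ctx

long_rag_prompt = "contexto " * 2000  # ~18k chars of stand-in RAG context
print(likely_truncated(long_rag_prompt))  # -> True
```

If the prompt is being truncated, raising the context window in the Ollama API request options (e.g. `"options": {"num_ctx": 8192}`) or in the Modelfile should bring the answers back in line with the cloud providers. Sampling defaults (temperature, top_p) can also differ between providers and are worth pinning explicitly.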
2025-01-27T11:46:59
https://www.reddit.com/r/LocalLLaMA/comments/1ib6unq/same_model_on_ollama_performing_worse_than_cloud/
ParaplegicGuru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ib6unq
false
null
t3_1ib6unq
/r/LocalLLaMA/comments/1ib6unq/same_model_on_ollama_performing_worse_than_cloud/
false
false
self
2
null