Dataset schema (one row per post; ranges are min–max from the viewer header):

| column | type | min | max |
| --- | --- | --- | --- |
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 40k |
| created | timestamp[ns] | – | – |
| url | string (length) | 0 | 780 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | – | – |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | – | – |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | – | – |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | – | – |
| stickied | bool (2 classes) | – | – |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
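A minimal sketch for loading and filtering a dump with this schema via the Hugging Face `datasets` library; the repo id below is a hypothetical placeholder, not the actual dataset path:

```python
from datasets import load_dataset

# "user/localllama-posts" is a hypothetical placeholder repo id.
ds = load_dataset("user/localllama-posts", split="train")

# Keep self-posts that still have a body and at least 10 points.
popular = ds.filter(
    lambda row: row["selftext"] not in ("", "[removed]", "[deleted]")
    and row["score"] >= 10
)

for row in popular.select(range(min(5, len(popular)))):
    print(row["score"], row["title"], row["permalink"])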
Local Voice Assistant For Windows With Llama 3.1 (INTERNET REQUIRED FOR TTS)
1
[removed]
2025-01-29T20:24:21
https://www.reddit.com/r/LocalLLaMA/comments/1id3mhe/local_voice_assistant_for_windows_with_llama_31/
Forsaken-Sign333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id3mhe
false
null
t3_1id3mhe
/r/LocalLLaMA/comments/1id3mhe/local_voice_assistant_for_windows_with_llama_31/
false
false
self
1
{'enabled': False, 'images': [{'id': '2PqDbtWtZ86ER5SHPgw1IKN-kL0QAeo-a9WQIWYWBkE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tqJz0kYW_HmC1Ua-m_25loeU7NGMwupNME17sveywI0.jpg?width=108&crop=smart&auto=webp&s=e4e82840558c35d1285a5e38ffdc1ff2d66f861e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tqJz0kYW_HmC1Ua-m_25loeU7NGMwupNME17sveywI0.jpg?width=216&crop=smart&auto=webp&s=13d6d00c57cf88052e4c6c61d022edbb1328168c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tqJz0kYW_HmC1Ua-m_25loeU7NGMwupNME17sveywI0.jpg?width=320&crop=smart&auto=webp&s=ca68a98083aea933bc6e909df039a51ac26e928c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tqJz0kYW_HmC1Ua-m_25loeU7NGMwupNME17sveywI0.jpg?width=640&crop=smart&auto=webp&s=c76a86a64ecdb25f9d66b55879be28713983de50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tqJz0kYW_HmC1Ua-m_25loeU7NGMwupNME17sveywI0.jpg?width=960&crop=smart&auto=webp&s=933db50ef7a90988df5a4974da279694d243524b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tqJz0kYW_HmC1Ua-m_25loeU7NGMwupNME17sveywI0.jpg?width=1080&crop=smart&auto=webp&s=dce459d0bfb124af5a4d8383bc66c47294c27f54', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tqJz0kYW_HmC1Ua-m_25loeU7NGMwupNME17sveywI0.jpg?auto=webp&s=a48a9250d4372c40baa75931a4da012b1dbe2707', 'width': 1200}, 'variants': {}}]}
Is it only me, or do the Chinese models actually perform way better on math than other models?
35
I use LLMs for solving and learning more math-related stuff, and generally Qwen and DeepSeek do much better than o1 and 3.5 Sonnet. Qwen 2.5 Math, which I run locally, actually gives better results than 4o and 3.5 Sonnet, for example, and DeepSeek R1 is definitely the best LLM for math I have ever used. Llama models don't even come into the picture anywhere. One exception is Gemini: Gemini Flash Thinking 2.0 comes close to DeepSeek R1, and even older Gemini models performed well too. But generally, why is it that Qwen and DeepSeek do so much better at math specifically? Do the Chinese labs have some special dataset that others don't, and only Google has for some reason?
2025-01-29T20:29:04
https://www.reddit.com/r/LocalLLaMA/comments/1id3qll/is_it_only_me_or_do_the_chinese_models_actually/
tensorsgo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id3qll
false
null
t3_1id3qll
/r/LocalLLaMA/comments/1id3qll/is_it_only_me_or_do_the_chinese_models_actually/
false
false
self
35
null
New OpenAI cope dropped
1
2025-01-29T20:40:27
https://i.redd.it/got5qg33vzfe1.jpeg
bruhlmaocmonbro
i.redd.it
1970-01-01T00:00:00
0
{}
1id40bt
false
null
t3_1id40bt
/r/LocalLLaMA/comments/1id40bt/new_openai_cope_dropped/
false
false
https://b.thumbs.redditm…kMc6PUDDyEvY.jpg
1
{'enabled': True, 'images': [{'id': 'oIeWgeyTXjAdFi3l5BACGUkHHAMvcN70xHopV-cvdFU', 'resolutions': [{'height': 156, 'url': 'https://preview.redd.it/got5qg33vzfe1.jpeg?width=108&crop=smart&auto=webp&s=69154bcc7045cd3b74cdc377e275c5888b06297e', 'width': 108}, {'height': 312, 'url': 'https://preview.redd.it/got5qg33vzfe1.jpeg?width=216&crop=smart&auto=webp&s=5bf77520f8fa4054c3a95702dc03a31faa1160ef', 'width': 216}, {'height': 462, 'url': 'https://preview.redd.it/got5qg33vzfe1.jpeg?width=320&crop=smart&auto=webp&s=014a38af51c7b4b1ecdd7f838d2b334709270818', 'width': 320}, {'height': 925, 'url': 'https://preview.redd.it/got5qg33vzfe1.jpeg?width=640&crop=smart&auto=webp&s=4c2d4b411936b030fa3e1ceea8cf8289178c9039', 'width': 640}, {'height': 1388, 'url': 'https://preview.redd.it/got5qg33vzfe1.jpeg?width=960&crop=smart&auto=webp&s=276b10aaf956a8fbf414471e850c311a7ed0f09f', 'width': 960}, {'height': 1561, 'url': 'https://preview.redd.it/got5qg33vzfe1.jpeg?width=1080&crop=smart&auto=webp&s=56c0355f04ecccc48e2157af117f6029564f8de8', 'width': 1080}], 'source': {'height': 1692, 'url': 'https://preview.redd.it/got5qg33vzfe1.jpeg?auto=webp&s=7302867109098fd16e2efd283fba15d4387c8fae', 'width': 1170}, 'variants': {}}]}
DeepSeek R1 (Distilled) has a breakdown over “strawberry”.
0
Once AI takes over the first thing they will change is how to spell strawberry. 🍓
2025-01-29T20:43:51
https://v.redd.it/uayi5boqvzfe1
GravyPoo
v.redd.it
1970-01-01T00:00:00
0
{}
1id43ax
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/uayi5boqvzfe1/DASHPlaylist.mpd?a=1740775448%2CMTNjY2Y0ZDNkMjQ2YTgxZjhjOWQxZmQ5YjNiMzAxZGMzNTQxMzM2NTEzOGQ1NjZjYmNmMTcwYmY0M2ZmZTczNA%3D%3D&v=1&f=sd', 'duration': 93, 'fallback_url': 'https://v.redd.it/uayi5boqvzfe1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 1004, 'hls_url': 'https://v.redd.it/uayi5boqvzfe1/HLSPlaylist.m3u8?a=1740775448%2CZGI0MmQzMjA1YWM0NDg1YTExYWFhYzYzNmU4OTY0MTFiOGEwOTNmZWY4YmIzOWNhZDc4NGI0YjcwZTVlMThiYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uayi5boqvzfe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1id43ax
/r/LocalLLaMA/comments/1id43ax/deepseek_r1_distilled_has_a_breakdown_over/
false
false
https://external-preview…ac8d6a8875472e94
0
{'enabled': False, 'images': [{'id': 'ZW1rcmRqbHF2emZlMWjDPF8R-1znUk9hXRDMGDvjTtSR4ekjt7oLe4c-PLjK', 'resolutions': [{'height': 150, 'url': 'https://external-preview.redd.it/ZW1rcmRqbHF2emZlMWjDPF8R-1znUk9hXRDMGDvjTtSR4ekjt7oLe4c-PLjK.png?width=108&crop=smart&format=pjpg&auto=webp&s=90cf89a38be685c8d71152064f594bdd15a42205', 'width': 108}, {'height': 301, 'url': 'https://external-preview.redd.it/ZW1rcmRqbHF2emZlMWjDPF8R-1znUk9hXRDMGDvjTtSR4ekjt7oLe4c-PLjK.png?width=216&crop=smart&format=pjpg&auto=webp&s=4ed49d882d741144d4cef16e9ea1bd7b572f257c', 'width': 216}, {'height': 446, 'url': 'https://external-preview.redd.it/ZW1rcmRqbHF2emZlMWjDPF8R-1znUk9hXRDMGDvjTtSR4ekjt7oLe4c-PLjK.png?width=320&crop=smart&format=pjpg&auto=webp&s=23ecb99983352ae328bcbc111972652ab0a88dc7', 'width': 320}, {'height': 892, 'url': 'https://external-preview.redd.it/ZW1rcmRqbHF2emZlMWjDPF8R-1znUk9hXRDMGDvjTtSR4ekjt7oLe4c-PLjK.png?width=640&crop=smart&format=pjpg&auto=webp&s=c6ae33709bfe638cfbe45415d898b9809093a798', 'width': 640}], 'source': {'height': 1144, 'url': 'https://external-preview.redd.it/ZW1rcmRqbHF2emZlMWjDPF8R-1znUk9hXRDMGDvjTtSR4ekjt7oLe4c-PLjK.png?format=pjpg&auto=webp&s=657acde0541921413643d0b54b360d51f66b34a1', 'width': 820}, 'variants': {}}]}
R1-Zero and R1 Results and Analysis
3
2025-01-29T20:47:18
https://arcprize.org/blog/r1-zero-r1-results-analysis
CarbonTail
arcprize.org
1970-01-01T00:00:00
0
{}
1id4657
false
null
t3_1id4657
/r/LocalLLaMA/comments/1id4657/r1zero_and_r1_results_and_analysis/
false
false
https://b.thumbs.redditm…8AdlFo58pRYw.jpg
3
{'enabled': False, 'images': [{'id': 'g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/EgcNoTp8CXkUREswLOJFaRCpJzlzLh4JgeugjVVm_00.jpg?width=108&crop=smart&auto=webp&s=fb04d304cb3923d66707d3927c07c80921a43cc0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/EgcNoTp8CXkUREswLOJFaRCpJzlzLh4JgeugjVVm_00.jpg?width=216&crop=smart&auto=webp&s=4f4c074cf45ccc347407714977024cae1a6b2b3b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/EgcNoTp8CXkUREswLOJFaRCpJzlzLh4JgeugjVVm_00.jpg?width=320&crop=smart&auto=webp&s=f675cf7145c93ff3caf74625c8fcb0985e1af8cc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/EgcNoTp8CXkUREswLOJFaRCpJzlzLh4JgeugjVVm_00.jpg?width=640&crop=smart&auto=webp&s=12fe1cebb1de52a5010be00250423f821bd40d0a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/EgcNoTp8CXkUREswLOJFaRCpJzlzLh4JgeugjVVm_00.jpg?width=960&crop=smart&auto=webp&s=31bf20b08a85cfb0ee6149e84f559313126b91a7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/EgcNoTp8CXkUREswLOJFaRCpJzlzLh4JgeugjVVm_00.jpg?width=1080&crop=smart&auto=webp&s=645b1854dd080aa2e9b9056fdca2d7060d93dfcc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/EgcNoTp8CXkUREswLOJFaRCpJzlzLh4JgeugjVVm_00.jpg?auto=webp&s=a37ac9254e458a4afe738764d05f0dac09f3b51b', 'width': 1200}, 'variants': {}}]}
Do I need Nvidia cards to run local LLMs with good speed?
2
I just downloaded LM Studio and DeepSeek. I don't know why it's extremely slow. I have 24GB RAM and an i5 12th Gen, no GPU. Is the missing GPU what's making it so slow? If I purchase a GPU, will my speed be similar to using the web version of DeepSeek or ChatGPT?
2025-01-29T20:47:44
https://www.reddit.com/r/LocalLLaMA/comments/1id46hp/do_i_need_nvdia_cards_to_run_local_llm_with_good/
Prestigious_Flow_465
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id46hp
false
null
t3_1id46hp
/r/LocalLLaMA/comments/1id46hp/do_i_need_nvdia_cards_to_run_local_llm_with_good/
false
false
self
2
null
Even established cloud providers like Lambda are propagating the confusion about R1 vs the distilled models
70
2025-01-29T20:58:07
https://i.redd.it/a0j6zr59yzfe1.png
cmndr_spanky
i.redd.it
1970-01-01T00:00:00
0
{}
1id4faw
false
null
t3_1id4faw
/r/LocalLLaMA/comments/1id4faw/even_established_cloud_providers_like_lambda_are/
false
false
https://b.thumbs.redditm…GgbvF8-t-MxM.jpg
70
{'enabled': True, 'images': [{'id': 'fr1PHR3XZ-Wgf2Qqq1nQ1geVbpzRxHC_qa0iQJ5DNaA', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/a0j6zr59yzfe1.png?width=108&crop=smart&auto=webp&s=a087595b1b29d3a72a6da2b66f2820bb90800c5c', 'width': 108}, {'height': 221, 'url': 'https://preview.redd.it/a0j6zr59yzfe1.png?width=216&crop=smart&auto=webp&s=60c46f07df32357ac991df570ef7ad315ce6c1d1', 'width': 216}, {'height': 327, 'url': 'https://preview.redd.it/a0j6zr59yzfe1.png?width=320&crop=smart&auto=webp&s=36408deec3201348e715590bf6d07e28a0c0e833', 'width': 320}, {'height': 655, 'url': 'https://preview.redd.it/a0j6zr59yzfe1.png?width=640&crop=smart&auto=webp&s=fae596d011884f96af94ce8b7ca4d0dc559a0074', 'width': 640}, {'height': 983, 'url': 'https://preview.redd.it/a0j6zr59yzfe1.png?width=960&crop=smart&auto=webp&s=5a5b20a03a9bcd02a5383d4053446d49083a99c2', 'width': 960}, {'height': 1106, 'url': 'https://preview.redd.it/a0j6zr59yzfe1.png?width=1080&crop=smart&auto=webp&s=77144ed073a58558ee09ca550381647f769b5f23', 'width': 1080}], 'source': {'height': 1522, 'url': 'https://preview.redd.it/a0j6zr59yzfe1.png?auto=webp&s=48eeb66e042f3319c579f90727c743aa929daf79', 'width': 1486}, 'variants': {}}]}
How I’m Using DeepSeek R1 + Recent Medium Trends to Never Run Out of Blog Writing Ideas
0
Hey, writers and AI nerds! Tired of brainstorming Medium topics that either feel generic or get lost in the noise? I built a **data-driven workflow** that solves this by:

1️⃣ **Searching and scraping recent popular Medium articles** in your niche

2️⃣ **Analyzing gaps** using DeepSeek's R1 model

3️⃣ **Generating outlines** that ride trends but add unique angles

**Here's the twist**: While the official DeepSeek R1 API is down, I'm using Groq's **deepseek-r1-distill-llama-70b** model to power this through [Medium Topic Generator](https://mediumink.leettools.com/search/). Slightly less creative than R1, but it still nails the data-driven approach.

**What makes it smart**:

🔸 **Learns from top-performing Medium content** (last 180 days)

🔸 Avoids repeated ideas by **cross-referencing SEO gaps**

🔸 Suggests structures that blend trending formats with your voice

**Discuss**:

* Would you trust AI to analyze trending content for ideas?
* What ethical lines should we never cross with AI-assisted writing?
* Any alternatives to DeepSeek R1's trend-analysis capabilities?

PS: Shoutout to the DeepSeek team; hope the R1 API returns soon!
2025-01-29T20:59:08
https://www.reddit.com/r/LocalLLaMA/comments/1id4g6j/how_im_using_deepseek_r1_recent_medium_trends_to/
Vincent-SJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4g6j
false
null
t3_1id4g6j
/r/LocalLLaMA/comments/1id4g6j/how_im_using_deepseek_r1_recent_medium_trends_to/
false
false
self
0
null
Dario is wrong, actually very wrong. And his thinking is dangerous.
0
Intelligence scales with constraints, not compute. Every single **DAMN** time, for any new industry. It happened with the aircraft industry when making engines. It also happened with the internet when laying fiber. If you know information theory: Shannon found that C = B log₂(1 + S/N), and the whole industry realized laying more cable was pointless. Reasoning needs constraints, not compute. This is why DeepSeek achieved with $5.5M what others couldn't with billions: DeepSeek understood constraints, because it was constrained by US sanctions and compute limitations. NVIDIA's drop isn't about one competitor; it's about fundamental math. I = Bi(C²) explains everything.
2025-01-29T21:01:13
https://www.reddit.com/r/LocalLLaMA/comments/1id4i3u/dario_is_wrong_actually_very_wrong_and_his/
atlasspring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4i3u
false
null
t3_1id4i3u
/r/LocalLLaMA/comments/1id4i3u/dario_is_wrong_actually_very_wrong_and_his/
false
false
self
0
null
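For reference, the Shannon–Hartley theorem the post above invokes, written out (a standard result; the interpretation that follows is mine, not the poster's):

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

Here C is channel capacity in bits/s, B is bandwidth in Hz, and S/N is the signal-to-noise ratio. Because capacity grows only logarithmically in S/N, pushing more power (or, loosely, laying parallel cable over the same channel) hits diminishing returns, while widening B pays off linearly; that asymmetry is the "constraints beat brute force" point the post is gesturing at.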
Which DeepSeek R1 cloud provider has fastest serverless inference?
8
The cloud providers I tested are all rather slow, so it's a little tedious to use R1 with Cursor or OpenHands. Unfortunately, Cerebras does not yet serve R1 (or even V3). Does anyone have an alternative for fast R1 inference? Thanks a lot!
2025-01-29T21:04:44
https://www.reddit.com/r/LocalLLaMA/comments/1id4l5z/which_deepseek_r1_cloud_provider_has_fastest/
Funny_Acanthaceae285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4l5z
false
null
t3_1id4l5z
/r/LocalLLaMA/comments/1id4l5z/which_deepseek_r1_cloud_provider_has_fastest/
false
false
self
8
null
DeepSeek AI bans in the US have begun
1
2025-01-29T21:06:49
https://bgr.com/tech/deepseek-ai-bans-in-the-us-have-begun/
bruhlmaocmonbro
bgr.com
1970-01-01T00:00:00
0
{}
1id4mxn
false
null
t3_1id4mxn
/r/LocalLLaMA/comments/1id4mxn/deepseek_ai_bans_in_the_us_have_begun/
false
false
https://b.thumbs.redditm…sfy7aDZJ6ekA.jpg
1
{'enabled': False, 'images': [{'id': 'Ccr2DoeXUDfukWW8ML8GkcTXTaSWz-vZgb5vqlXuIUc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LAHXQz_swatkhGsONwG8m8xc7ebTgpKQ56iEebrbAxU.jpg?width=108&crop=smart&auto=webp&s=97bb8a64cd6b10b3efa84380925e4ea46f1623c1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LAHXQz_swatkhGsONwG8m8xc7ebTgpKQ56iEebrbAxU.jpg?width=216&crop=smart&auto=webp&s=564299bd2a2010cbbc74e50014e0b75c62f75727', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LAHXQz_swatkhGsONwG8m8xc7ebTgpKQ56iEebrbAxU.jpg?width=320&crop=smart&auto=webp&s=20546a37d7cc2740316fc9fc1e88b29b75f905fe', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LAHXQz_swatkhGsONwG8m8xc7ebTgpKQ56iEebrbAxU.jpg?width=640&crop=smart&auto=webp&s=03ec017125ad6b3f08593cd18661048d5ca9cc34', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LAHXQz_swatkhGsONwG8m8xc7ebTgpKQ56iEebrbAxU.jpg?width=960&crop=smart&auto=webp&s=0c215116fab3ce0b794e7669750fb6242a3f2c9a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LAHXQz_swatkhGsONwG8m8xc7ebTgpKQ56iEebrbAxU.jpg?width=1080&crop=smart&auto=webp&s=feffdac8871cd78a99353412b348e4ba38a163fd', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/LAHXQz_swatkhGsONwG8m8xc7ebTgpKQ56iEebrbAxU.jpg?auto=webp&s=d7346d044d1a4a7bb2b18266511ed9e1317e02da', 'width': 2000}, 'variants': {}}]}
Perplexity Pro - Yearly subscription for 15 USD
0
*1 CODE LEFT* To all commenting that it's a scam: mind your own business. I've already sold 4 to people who first confirmed it's all good, then paid me. Hello, I am selling codes for a Perplexity Pro subscription, for 1 year, valid without entering bank card details, for 15 USD only. After a year it just stops working. As a safety measure, payment is made after successfully redeeming the code. Valid for users with emails that haven't had a previous subscription. (Bypass: use another email, or create a new one.)
2025-01-29T21:08:17
https://www.reddit.com/r/LocalLLaMA/comments/1id4o76/perplexity_pro_yearly_subscription_for_15_usd/
Realistic_Code112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4o76
false
null
t3_1id4o76
/r/LocalLLaMA/comments/1id4o76/perplexity_pro_yearly_subscription_for_15_usd/
false
false
self
0
null
AI app development theory frameworks
2
Hey, I imagine a lot of us here develop apps powered by AI. Does anyone use theory-based development frameworks when making AI apps? For example, in video game design there's MDA (Mechanics, Dynamics, and Aesthetics). It helps designers think about the game rules, the emergent systems that arise from those rules, and the player's feelings about the overall game experience. Does a development framework like that exist for AI-powered apps?
2025-01-29T21:13:18
https://www.reddit.com/r/LocalLLaMA/comments/1id4sk5/ai_app_development_theory_frameworks/
CutMonster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4sk5
false
null
t3_1id4sk5
/r/LocalLLaMA/comments/1id4sk5/ai_app_development_theory_frameworks/
false
false
self
2
null
I didn't expect Qwen2.5 Math 1.5B to beat DeepSeek
1
[removed]
2025-01-29T21:16:53
https://www.reddit.com/r/LocalLLaMA/comments/1id4vmy/i_didnt_expect_qwen25_math_15b_to_beat_deepseek/
ytklx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4vmy
false
null
t3_1id4vmy
/r/LocalLLaMA/comments/1id4vmy/i_didnt_expect_qwen25_math_15b_to_beat_deepseek/
false
false
https://b.thumbs.redditm…SOy5ZwhQ1lSo.jpg
1
null
Qwen 2.5 Math vs DeepSeek
1
[removed]
2025-01-29T21:21:17
https://www.reddit.com/r/LocalLLaMA/comments/1id4z8m/qwen_25_math_vs_deepseek/
ytklx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4z8m
false
null
t3_1id4z8m
/r/LocalLLaMA/comments/1id4z8m/qwen_25_math_vs_deepseek/
false
false
self
1
null
Is it all just inference? Are we actually doing any training or tuning on a budget?
5
I'm not experienced in this area. Total noob. For someone who set up a GPU rig or an M4 MacBook with 48–128GB of RAM, what do you actually do with your transformer models?
2025-01-29T21:21:26
https://www.reddit.com/r/LocalLLaMA/comments/1id4zcw/is_it_all_just_inference_are_we_actually_doing/
CertainlyBright
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id4zcw
false
null
t3_1id4zcw
/r/LocalLLaMA/comments/1id4zcw/is_it_all_just_inference_are_we_actually_doing/
false
false
self
5
null
Trying to understand the current situation with DeepSeek
1
[removed]
2025-01-29T21:22:27
https://www.reddit.com/r/LocalLLaMA/comments/1id507k/trying_to_understand_the_current_situation_with/
DPooli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id507k
false
null
t3_1id507k
/r/LocalLLaMA/comments/1id507k/trying_to_understand_the_current_situation_with/
false
false
self
1
null
R1 is now on Azure AI serverless. Great news if you have Azure startup credits to burn
613
2025-01-29T21:23:36
https://i.redd.it/u9e4zggf20ge1.png
mesmerlord
i.redd.it
1970-01-01T00:00:00
0
{}
1id5179
false
null
t3_1id5179
/r/LocalLLaMA/comments/1id5179/r1_is_now_on_azure_ai_serverless_great_news_if/
false
false
https://b.thumbs.redditm…h6yWpvIIgRLw.jpg
613
{'enabled': True, 'images': [{'id': 'YlyS8JMXeuWfynccNBF0QeFGZhe6Kdl_vtpUjChfu1I', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/u9e4zggf20ge1.png?width=108&crop=smart&auto=webp&s=1c55ee0dfab8aab14cea6a99fbbbbe33547aeccd', 'width': 108}, {'height': 282, 'url': 'https://preview.redd.it/u9e4zggf20ge1.png?width=216&crop=smart&auto=webp&s=4334b5f42257800c7a0f1d307f17cc47aaa52ca7', 'width': 216}, {'height': 419, 'url': 'https://preview.redd.it/u9e4zggf20ge1.png?width=320&crop=smart&auto=webp&s=11a505925fed43a19377eac11cd4f2023138e7d7', 'width': 320}, {'height': 838, 'url': 'https://preview.redd.it/u9e4zggf20ge1.png?width=640&crop=smart&auto=webp&s=e3934cf599e32057ef689487414da48ae9ac5687', 'width': 640}, {'height': 1257, 'url': 'https://preview.redd.it/u9e4zggf20ge1.png?width=960&crop=smart&auto=webp&s=17a1e7550db43d74aad6a0cd3e0422bfa5c413dd', 'width': 960}, {'height': 1414, 'url': 'https://preview.redd.it/u9e4zggf20ge1.png?width=1080&crop=smart&auto=webp&s=9549a3d30bba44301ab734e2c1ad0aabce796bc8', 'width': 1080}], 'source': {'height': 1729, 'url': 'https://preview.redd.it/u9e4zggf20ge1.png?auto=webp&s=91ea472acfcbd08fb7cb15d3be92a8a8f7c1e5c2', 'width': 1320}, 'variants': {}}]}
Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History
35
2025-01-29T21:32:19
https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak
vanderpyyy
wiz.io
1970-01-01T00:00:00
0
{}
1id593i
false
null
t3_1id593i
/r/LocalLLaMA/comments/1id593i/wiz_research_uncovers_exposed_deepseek_database/
false
false
https://a.thumbs.redditm…jgM3qEB_IaJ8.jpg
35
{'enabled': False, 'images': [{'id': 'efIZfVleI36hvPg8z28AryJpqBKNGJhtSJ8-YqMwMIU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZRQn--nQErPnqsLZ2pSf3fP18cK4r70rgB4gzWx5XEM.jpg?width=108&crop=smart&auto=webp&s=2e967eecac62d9e13f933cf365f5a226eb1541cb', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/ZRQn--nQErPnqsLZ2pSf3fP18cK4r70rgB4gzWx5XEM.jpg?width=216&crop=smart&auto=webp&s=3ee29da48c466975f895ebfe850d4a83cb08bd39', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/ZRQn--nQErPnqsLZ2pSf3fP18cK4r70rgB4gzWx5XEM.jpg?width=320&crop=smart&auto=webp&s=44638ce545479a02a6017fc7348a7309c40e0d30', 'width': 320}, {'height': 325, 'url': 'https://external-preview.redd.it/ZRQn--nQErPnqsLZ2pSf3fP18cK4r70rgB4gzWx5XEM.jpg?width=640&crop=smart&auto=webp&s=9de9ee049db604b1772ded71c07e12866c79a05c', 'width': 640}, {'height': 488, 'url': 'https://external-preview.redd.it/ZRQn--nQErPnqsLZ2pSf3fP18cK4r70rgB4gzWx5XEM.jpg?width=960&crop=smart&auto=webp&s=ddaf1f2bba6432255f289cbaa15c284441b884a3', 'width': 960}, {'height': 549, 'url': 'https://external-preview.redd.it/ZRQn--nQErPnqsLZ2pSf3fP18cK4r70rgB4gzWx5XEM.jpg?width=1080&crop=smart&auto=webp&s=2d210c861a74c40749f41443a8b42933ffa39f90', 'width': 1080}], 'source': {'height': 2206, 'url': 'https://external-preview.redd.it/ZRQn--nQErPnqsLZ2pSf3fP18cK4r70rgB4gzWx5XEM.jpg?auto=webp&s=0e3a7448f99470d7dbe2dc2e5cdcf402948f692d', 'width': 4338}, 'variants': {}}]}
Bypassing DeepSeek censorship
1
2025-01-29T21:44:59
https://i.redd.it/dg48vhtn60ge1.jpeg
MrTyTheMeme
i.redd.it
1970-01-01T00:00:00
0
{}
1id5k9p
false
null
t3_1id5k9p
/r/LocalLLaMA/comments/1id5k9p/bypassing_deepseek_censorship/
false
false
https://b.thumbs.redditm…uJfa385EuNgw.jpg
1
{'enabled': True, 'images': [{'id': 'f02z16qqGxml2TDsRq5Z5euc5MJErDkFvH5-2I20P20', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/dg48vhtn60ge1.jpeg?width=108&crop=smart&auto=webp&s=ac386926bb733a1f743756fb5039e38d8f9c2ff9', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/dg48vhtn60ge1.jpeg?width=216&crop=smart&auto=webp&s=bd56d61178f02de4a3307ded66e086cdacef84f7', 'width': 216}, {'height': 255, 'url': 'https://preview.redd.it/dg48vhtn60ge1.jpeg?width=320&crop=smart&auto=webp&s=97b4e5aee21e283820fd33f423bf7ac9d62a90a5', 'width': 320}, {'height': 511, 'url': 'https://preview.redd.it/dg48vhtn60ge1.jpeg?width=640&crop=smart&auto=webp&s=27b9ac1a81f92119de12bad730adf01df7fd28d0', 'width': 640}, {'height': 767, 'url': 'https://preview.redd.it/dg48vhtn60ge1.jpeg?width=960&crop=smart&auto=webp&s=e82f6b374cb540645eab109582e8f49233723714', 'width': 960}], 'source': {'height': 819, 'url': 'https://preview.redd.it/dg48vhtn60ge1.jpeg?auto=webp&s=24d92cf0d1563944640d2bca8ba76771ed759214', 'width': 1024}, 'variants': {}}]}
I want Qwen2.5-Max on Ollama NOW!!!!
0
Another Chinese model better than DeepSeek? The AI race is literally China vs. China now, lol.
2025-01-29T21:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1id5kv5/i_want_qwen25max_on_ollama_now/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id5kv5
false
null
t3_1id5kv5
/r/LocalLLaMA/comments/1id5kv5/i_want_qwen25max_on_ollama_now/
false
false
self
0
null
DeepSeek coming to Copilot PC, Azure
23
I guess Satya has had enough of Sam's antics. > Coming soon: Customers will be able to use distilled flavors of the DeepSeek R1 model to run locally on their Copilot+ PCs. From here: [https://azure.microsoft.com/en-us/products/ai-foundry](https://azure.microsoft.com/en-us/products/ai-foundry) It's also on the GitHub Marketplace now: https://github.com/marketplace?type=models
2025-01-29T21:47:04
https://www.reddit.com/r/LocalLLaMA/comments/1id5m37/deepseek_coming_to_copilot_pc_azure/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id5m37
false
null
t3_1id5m37
/r/LocalLLaMA/comments/1id5m37/deepseek_coming_to_copilot_pc_azure/
false
false
self
23
{'enabled': False, 'images': [{'id': 'L7G5vqWhTUz6muT7xyccahzi9A-nZZS9JkEoBQ80tRw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/W68lWES-x2F_X_-CDRnrJ1TDgbL4lr_ihxirdl2EHZo.jpg?width=108&crop=smart&auto=webp&s=4f321568e374807e0e6cc3d0aafc8d20e3a7097a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/W68lWES-x2F_X_-CDRnrJ1TDgbL4lr_ihxirdl2EHZo.jpg?width=216&crop=smart&auto=webp&s=7953b09d315851fcf1c5d6920ac0232161709db0', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/W68lWES-x2F_X_-CDRnrJ1TDgbL4lr_ihxirdl2EHZo.jpg?width=320&crop=smart&auto=webp&s=76fa1ef465ebceda313f83cf3582da15f95b9092', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/W68lWES-x2F_X_-CDRnrJ1TDgbL4lr_ihxirdl2EHZo.jpg?width=640&crop=smart&auto=webp&s=c61c58d3da2cf937cb6a71e3c0245ee3a5646540', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/W68lWES-x2F_X_-CDRnrJ1TDgbL4lr_ihxirdl2EHZo.jpg?width=960&crop=smart&auto=webp&s=c55b451d9f286c30241e82361fba3faee15b2b8e', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/W68lWES-x2F_X_-CDRnrJ1TDgbL4lr_ihxirdl2EHZo.jpg?width=1080&crop=smart&auto=webp&s=d04a0bcdaa3f7fda0bb483280746afd15a1fc797', 'width': 1080}], 'source': {'height': 708, 'url': 'https://external-preview.redd.it/W68lWES-x2F_X_-CDRnrJ1TDgbL4lr_ihxirdl2EHZo.jpg?auto=webp&s=651dc834d95c8a7bb89e8e9b98deffe7660da5cc', 'width': 1260}, 'variants': {}}]}
Thinking and Structured Output
2
I haven't had a chance to try it out yet, but has anyone gotten the DeepSeek distills to play nice with structured outputs? And do they prefer YAML/XML/JSON?
2025-01-29T21:59:29
https://www.reddit.com/r/LocalLLaMA/comments/1id5wdr/thinking_and_structured_output/
MrSomethingred
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id5wdr
false
null
t3_1id5wdr
/r/LocalLLaMA/comments/1id5wdr/thinking_and_structured_output/
false
false
self
2
null
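One workaround that comes up often for the question above (a sketch, not a library feature): the R1 distills wrap their reasoning in `<think>...</think>` tags, so strip that block before handing the remainder to a JSON parser. The helper name and example reply below are illustrative assumptions.

```python
import json
import re

def extract_payload(reply: str) -> dict:
    """Drop the <think>...</think> reasoning block, then parse the rest as JSON."""
    body = re.sub(r"<think>.*?</think>", "", reply, flags=re.DOTALL).strip()
    return json.loads(body)

reply = '<think>The user asked for a name and an age.</think>\n{"name": "Ada", "age": 36}'
print(extract_payload(reply))  # {'name': 'Ada', 'age': 36}
```

Grammar- or schema-constrained decoding is the sturdier route, but the constraint has to let the think block through first, or it suppresses the reasoning step entirely.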
Does DeepSeek use users' input to generate responses to other users?
0
So I'm thinking about sending a text to DeepSeek to get its perspective. Will it store my text and send extractions of it to other users?
2025-01-29T22:02:56
https://www.reddit.com/r/LocalLLaMA/comments/1id5zhr/does_deepseek_use_users_input_to_generate/
lil_butterfly02
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id5zhr
false
null
t3_1id5zhr
/r/LocalLLaMA/comments/1id5zhr/does_deepseek_use_users_input_to_generate/
false
false
self
0
null
What quantization does the DeepSeek website use?
1
When you access the DeepSeek website, what quantization are you using? I understand that when you do not activate "DeepThink R1" you are using "DeepSeek V3 Q16", and when you activate "DeepThink R1", "DeepSeek-R1 Q16". Is this so?
2025-01-29T22:13:37
https://www.reddit.com/r/LocalLLaMA/comments/1id68ir/what_quantification_does_the_deepseek_website_have/
MarioBros68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id68ir
false
null
t3_1id68ir
/r/LocalLLaMA/comments/1id68ir/what_quantification_does_the_deepseek_website_have/
false
false
self
1
null
I need an uncensored AI model for adult content texts
1
[removed]
2025-01-29T22:21:09
https://www.reddit.com/r/LocalLLaMA/comments/1id6eu1/i_need_an_uncensored_ai_model_for_adult_content/
BuscaDe_Conhecimento
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id6eu1
false
null
t3_1id6eu1
/r/LocalLLaMA/comments/1id6eu1/i_need_an_uncensored_ai_model_for_adult_content/
false
false
nsfw
1
null
Mark Zuckerberg on Llama 4 Training Progress!
153
>Just shared Meta's quarterly earnings report. We continue to make good progress on AI, glasses, and the future of social media. I'm excited to see these efforts scale further in 2025. Here's the transcript of what I said on the call: >We ended 2024 on a strong note with now more than 3.3B people using at least one of our apps each day. This is going to be a really big year. I know it always feels like every year is a big year, but more than usual it feels like the trajectory for most of our long-term initiatives is going to be a lot clearer by the end of this year. So I keep telling our teams that this is going to be intense, because we have about 48 weeks to get on the trajectory we want to be on. >In AI, I expect this to be the year when a highly intelligent and personalized AI assistant reaches more than 1 billion people, and I expect Meta AI to be that leading AI assistant. Meta AI is already used by more people than any other assistant, and once a service reaches that kind of scale it usually develops a durable long-term advantage. We have a really exciting roadmap for this year with a unique vision focused on personalization. We believe that people don't all want to use the same AI -- people want their AI to be personalized to their context, their interests, their personality, their culture, and how they think about the world. I don't think that there's going to be one big AI that everyone just uses the same thing. People will get to choose how AI works and looks like for them. I continue to think that this is going to be one of the most transformative products that we've made. We have some fun surprises that I think people are going to like this year. >I think this very well could be the year when Llama and open source become the most advanced and widely used AI models as well. Llama 4 is making great progress in training. Llama 4 mini is done with pre-training and our reasoning models and larger model are looking good too. Our goal with Llama 3 was to make open source competitive with closed models, and our goal for Llama 4 is to lead. Llama 4 will be natively multimodal -- it's an omni-model -- and it will have agentic capabilities, so it's going to be novel and it's going to unlock a lot of new use cases. I'm looking forward to sharing more of our plan for the year on that over the next couple of months. >I also expect that 2025 will be the year when it becomes possible to build an AI engineering agent that has coding and problem-solving abilities of around a good mid-level engineer. This will be a profound milestone and potentially one of the most important innovations in history, as well as over time, potentially a very large market. Whichever company builds this first I think will have a meaningful advantage in deploying it to advance their AI research and shape the field. So that's another reason why I think this year will set the course for the future. >Our Ray-Ban Meta AI glasses are a real hit, and this will be the year when we understand the trajectory for AI glasses as a category. Many breakout products in the history of consumer electronics have sold 5-10 million units in their third generation. This will be a defining year that determines if we're on a path towards many hundreds of millions and eventually billions of AI glasses -- and glasses being the next computing platform like we've been talking about for some time -- or if this is just going to be a longer grind. 
But it's great overall to see people recognizing that these glasses are the perfect form factor for AI -- as well as just great, stylish glasses. >These are all big investments -- especially the hundreds of billions of dollars that we will invest in AI infrastructure over the long term. I announced last week that we expect to bring online almost 1GW of capacity this year, and we're building a 2GW, and potentially bigger, AI datacenter that is so big it would cover a significant part of Manhattan if it were placed there. >We're planning to fund all this by at the same time investing aggressively in initiatives that use our AI advances to increase revenue growth. We've put together a plan that will hopefully accelerate the pace of these initiatives over the next few years -- that's what a lot of our new headcount growth is going towards. And how well we execute this will also determine our financial trajectory over the next few years. >There are a number of other important product trends related to our family of apps that I think we’re going to know more about this year as well. We'll learn what's going to happen with TikTok, and regardless of that I expect Reels on Instagram and Facebook to continue growing. I expect Threads to continue on its trajectory to become the leading discussion platform and eventually reach 1 billion people over the next several years. Threads now has more than 320 million monthly actives and has been adding more than 1 million sign-ups per day. I expect WhatsApp to continue gaining share and making progress towards becoming the leading messaging platform in the US like it is in a lot of the rest of the world. WhatsApp now has more than 100 million monthly actives in the US. Facebook is used by more than 3 billion monthly actives and we're focused on growing its cultural influence. I'm excited this year to get back to some OG Facebook. >This is also going to be a pivotal year for the metaverse. The number of people using Quest and Horizon has been steadily growing -- and this is the year when a number of long-term investments that we've been working on that will make the metaverse more visually stunning and inspiring will really start to land. I think we're going to know a lot more about Horizon's trajectory by the end of this year. >This is also going to be a big year for redefining our relationship with governments. We now have a US administration that is proud of our leading company, prioritizes American technology winning, and that will defend our values and interests abroad. I'm optimistic about the progress and innovation that this can unlock. >So this is going to be a big year. I think this is the most exciting and dynamic that I've ever seen in our industry. Between AI, glasses, massive infrastructure projects, doing a bunch of work to try to accelerate our business, and building the future of social media – we have a lot to do. I think we're going to build some awesome things that shape the future of human connection. As always, I'm grateful for everyone who is on this journey with us. Link to share on Facebook: [https://www.facebook.com/zuck/posts/pfbid02oRRTPrY1mvbqBZT4QueimeBrKcVXG4ySxFscRLiEU6QtGxbLi9U4TBojiC9aa19fl](https://www.facebook.com/zuck/posts/pfbid02oRRTPrY1mvbqBZT4QueimeBrKcVXG4ySxFscRLiEU6QtGxbLi9U4TBojiC9aa19fl)
2025-01-29T22:22:55
https://www.reddit.com/r/LocalLLaMA/comments/1id6gcj/mark_zuckerberg_on_llama_4_training_progress/
ybdave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id6gcj
false
null
t3_1id6gcj
/r/LocalLLaMA/comments/1id6gcj/mark_zuckerberg_on_llama_4_training_progress/
false
false
self
153
null
I need an uncensored AI model
1
[removed]
2025-01-29T22:24:03
https://www.reddit.com/r/LocalLLaMA/comments/1id6had/i_need_an_uncensored_ai_model/
BuscaDe_Conhecimento
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id6had
false
null
t3_1id6had
/r/LocalLLaMA/comments/1id6had/i_need_an_uncensored_ai_model/
false
false
self
1
null
Real news: 32B distills of V3, soon R1.
101
2025-01-29T22:25:02
https://www.arcee.ai/blog/virtuoso-lite-virtuoso-medium-v2-distilling-deepseek-v3-into-10b-32b-small-language-models-slms
a_beautiful_rhind
arcee.ai
1970-01-01T00:00:00
0
{}
1id6i4s
false
null
t3_1id6i4s
/r/LocalLLaMA/comments/1id6i4s/real_news_32b_distills_of_v3_soon_r1/
false
false
https://b.thumbs.redditm…4XuikjvAV8cY.jpg
101
{'enabled': False, 'images': [{'id': 'ESngG3BSrFEl0jPVBDJKvczpXCP13RQ5ZP0lLME8sOs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AisSC4mCMpHEtpnDTqBGAduagKjgnfofkw6geWdBciQ.jpg?width=108&crop=smart&auto=webp&s=c4d6b75f482abe8738c520a20d4a5b89b44f1fab', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AisSC4mCMpHEtpnDTqBGAduagKjgnfofkw6geWdBciQ.jpg?width=216&crop=smart&auto=webp&s=6c022ebc5a7b9f551297ba8241622879030c2a82', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AisSC4mCMpHEtpnDTqBGAduagKjgnfofkw6geWdBciQ.jpg?width=320&crop=smart&auto=webp&s=8ea912ce4fb1c1f6e42e41236c6c741aea6ead1e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AisSC4mCMpHEtpnDTqBGAduagKjgnfofkw6geWdBciQ.jpg?width=640&crop=smart&auto=webp&s=be1a343914410d929c5bbf198c850f2aff29cfdb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AisSC4mCMpHEtpnDTqBGAduagKjgnfofkw6geWdBciQ.jpg?width=960&crop=smart&auto=webp&s=74bbe1f0e2e5563c94d4662e1aefefdd5862fca2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AisSC4mCMpHEtpnDTqBGAduagKjgnfofkw6geWdBciQ.jpg?width=1080&crop=smart&auto=webp&s=c2b42f026a0e5106b1b7b24193f5f3d646e2c296', 'width': 1080}], 'source': {'height': 2430, 'url': 'https://external-preview.redd.it/AisSC4mCMpHEtpnDTqBGAduagKjgnfofkw6geWdBciQ.jpg?auto=webp&s=fc7b4675d4dcd110a9669aa17e4af8f6ccbacde2', 'width': 4320}, 'variants': {}}]}
Newbie, please help me troubleshoot extremely poor performance: 128GB RAM, Ryzen 9 5950X, Radeon RX 7800 XT, DeepSeek-R1-Distill-Llama-70B-GGUF
5
Hello, I am completely new to running AI models locally. I never got into AI previously because I value my privacy. From what I understand, my hardware should be sufficient to comfortably run the 70B version of the distilled R1 model.

* Ryzen 9 5950X
* Radeon RX 7800 XT (16GB VRAM)
* G.Skill F4-3600C16-32GTRG, 4x32GB modules (DRAM frequency 1333MHz in CPU-Z, shown in Task Manager as 2666MHz, 128GB total)
* Running off a PCIe 4.0 NVMe SSD, 953GB Viper VP4300. The 1TB NVMe SSD has its own heatsink, and both motherboard and CPU are watercooled.
* Tested on: DeepSeek-R1-Distill-Llama-70B-GGUF
* The issue: **a single query took 30 minutes to process, at 0.36 tokens per second!**
* Compared against: unsloth/DeepSeek-R1-Distill-Qwen-14B-GGUF
* Result: 44 seconds, 14.26 tokens per second.

Input was the same for both models (I thought it would be funny): *Is the moon really not made of cheese, or is that just Big Dairy propaganda to keep the prices high? Was the Titanic an inside job?*

Based on what I have read, I don't think that is normal. While running the 70B version, RAM usage in Task Manager spiked to 81GB when loading the model, but dropped back to a steady 45GB afterwards while consuming all of my VRAM, haha. I think I am being bottlenecked by slow RAM, but any pointers are very welcome!
2025-01-29T22:28:29
https://www.reddit.com/r/LocalLLaMA/comments/1id6l4k/newbie_please_help_me_troubleshoot_extremely_poor/
Roos-Skywalker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id6l4k
false
null
t3_1id6l4k
/r/LocalLLaMA/comments/1id6l4k/newbie_please_help_me_troubleshoot_extremely_poor/
false
false
self
5
null
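The slow-RAM suspicion in the post above checks out on a napkin. A sketch of the token-rate ceiling; the quant size and offload split are assumptions, not measurements from the poster's setup:

```python
# Back-of-the-envelope ceiling for a partially offloaded 70B GGUF (assumed figures).
model_size_gb = 40.0      # ~70B at Q4_K_M quantization (assumption)
vram_gb = 16.0            # RX 7800 XT
cpu_resident_gb = model_size_gb - vram_gb   # ~24 GB of weights read from RAM per token
ram_bandwidth_gbs = 42.6  # dual-channel DDR4-2666 theoretical peak

ceiling_tps = ram_bandwidth_gbs / cpu_resident_gb
print(f"upper bound: {ceiling_tps:.1f} tokens/s")  # ~1.8 tokens/s before any overhead
```

Real throughput lands well under that ceiling once PCIe transfers and four-DIMM memory-controller penalties are counted, so 0.36 t/s is in the plausible range; the 14B model fits almost entirely in the 16GB of VRAM, which is why it runs roughly 40x faster.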
Best AI model and UI for 8gb VRAM
1
[removed]
2025-01-29T22:36:20
https://www.reddit.com/r/LocalLLaMA/comments/1id6rox/best_ai_model_and_ui_for_8gb_vram/
Parking-Try-8992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id6rox
false
null
t3_1id6rox
/r/LocalLLaMA/comments/1id6rox/best_ai_model_and_ui_for_8gb_vram/
false
false
self
1
{'enabled': False, 'images': [{'id': 't_pHEMGKQ6DAGq3kscBApVGEiLbZMGiN-d4WTMkTggQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=108&crop=smart&auto=webp&s=f9bb55c9279ce0742847c88b5626fbc553bbf5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=216&crop=smart&auto=webp&s=e1908729c74b3588212435422da59168d85d8660', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=320&crop=smart&auto=webp&s=4d949abbbc31e568f121c9c5eaed3e0846f3722e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=640&crop=smart&auto=webp&s=97e67439d1ec5fe9d8e6cb0ba95abe56adce52a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=960&crop=smart&auto=webp&s=f3bae916e90b40bc5edd90180a00602bab76d6cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=1080&crop=smart&auto=webp&s=d939cfbb76db5c7e138d37bd365f33690c45b6b1', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?auto=webp&s=eb32f09811c1b406241d8ffa47361db3034299c6', 'width': 2400}, 'variants': {}}]}
RTX 5080 18% Faster Than the RTX 4080 Super in Qwen2.5-14B
7
https://preview.redd.it/hssy9yh7c0ge1.jpg?width=3840&format=pjpg&auto=webp&s=6fba680ace27cd0e9195770439f119b45b4f1c0f

[https://www.youtube.com/watch?v=BHjdZiLl0JE](https://www.youtube.com/watch?v=BHjdZiLl0JE)

Also, as you can see their 5090**D** was ~50% faster than the 4090 in Qwen2.5-32B.
2025-01-29T22:41:12
https://www.reddit.com/r/LocalLLaMA/comments/1id6vrq/rtx_5080_18_faster_than_the_rtx_4080_super_in/
Noble00_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id6vrq
false
{'oembed': {'author_name': '极客湾Geekerwan', 'author_url': 'https://www.youtube.com/@geekerwan1024', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/BHjdZiLl0JE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="RTX 5080 FE首发评测:赛博工艺品"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/BHjdZiLl0JE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'RTX 5080 FE首发评测:赛博工艺品', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1id6vrq
/r/LocalLLaMA/comments/1id6vrq/rtx_5080_18_faster_than_the_rtx_4080_super_in/
false
false
https://b.thumbs.redditm…z1aZSiHjDKUQ.jpg
7
{'enabled': False, 'images': [{'id': 'anOz87vOakWH16Dhu8vgMbCdL9NEi77gsoBL6YHo_Pk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/PjxKgbKEmlMDpqnWJHYDRVOvjbX1QsRfhIQ4tfvlDyo.jpg?width=108&crop=smart&auto=webp&s=4a8eb8efce229fce6629ce026bdcd6342e20d1af', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/PjxKgbKEmlMDpqnWJHYDRVOvjbX1QsRfhIQ4tfvlDyo.jpg?width=216&crop=smart&auto=webp&s=d40435098e30a3013577b7228c695f85597109a8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/PjxKgbKEmlMDpqnWJHYDRVOvjbX1QsRfhIQ4tfvlDyo.jpg?width=320&crop=smart&auto=webp&s=bdc093907d700e88f85993dcf405d8ee7193d2ef', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/PjxKgbKEmlMDpqnWJHYDRVOvjbX1QsRfhIQ4tfvlDyo.jpg?auto=webp&s=fd553168653764edc7688e902058cfd98f32baef', 'width': 480}, 'variants': {}}]}
AMD Claims 7900 XTX Matches or Outperforms RTX 4090 in DeepSeek R1 Distilled Models
35
https://preview.redd.it/… just marketing?
2025-01-29T22:42:48
https://www.reddit.com/r/LocalLLaMA/comments/1id6x0z/amd_claims_7900_xtx_matches_or_outperforms_rtx/
Noble00_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id6x0z
false
null
t3_1id6x0z
/r/LocalLLaMA/comments/1id6x0z/amd_claims_7900_xtx_matches_or_outperforms_rtx/
false
false
https://b.thumbs.redditm…iFfKJOQc22Gc.jpg
35
{'enabled': False, 'images': [{'id': 'pPl5zM1QomjN_crZjL9kuLNwbCxdFF26aU6uOyqwIis', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Mbu7O5oaiLToncxkz2G4ESR5l_FbhU-UXuazlXjzLSg.jpg?width=108&crop=smart&auto=webp&s=e0880fe14403da93faa90467def1dca608716e2f', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/Mbu7O5oaiLToncxkz2G4ESR5l_FbhU-UXuazlXjzLSg.jpg?width=216&crop=smart&auto=webp&s=b9759e37fdd7cc5c3bc5d607d9f1dbd56e9e03d2', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/Mbu7O5oaiLToncxkz2G4ESR5l_FbhU-UXuazlXjzLSg.jpg?width=320&crop=smart&auto=webp&s=e2ea20388ea663c436cb6ec2dc252bb1f6d72fd6', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/Mbu7O5oaiLToncxkz2G4ESR5l_FbhU-UXuazlXjzLSg.jpg?width=640&crop=smart&auto=webp&s=3124eed3df401a78aba932ce2ff1c85c55516770', 'width': 640}, {'height': 537, 'url': 'https://external-preview.redd.it/Mbu7O5oaiLToncxkz2G4ESR5l_FbhU-UXuazlXjzLSg.jpg?width=960&crop=smart&auto=webp&s=aa5bd1c8a446676e94e54ab9d4ae512cdc607d25', 'width': 960}, {'height': 604, 'url': 'https://external-preview.redd.it/Mbu7O5oaiLToncxkz2G4ESR5l_FbhU-UXuazlXjzLSg.jpg?width=1080&crop=smart&auto=webp&s=f2c28cbbf986671a5ed0ea9c4535a66102a7fd60', 'width': 1080}], 'source': {'height': 1122, 'url': 'https://external-preview.redd.it/Mbu7O5oaiLToncxkz2G4ESR5l_FbhU-UXuazlXjzLSg.jpg?auto=webp&s=4907f85b523eb7d6476840d878f19d35c80b85d6', 'width': 2005}, 'variants': {}}]}
Are there any models that take greater convincing in NSFW RP?
1
[removed]
2025-01-29T22:47:25
https://www.reddit.com/r/LocalLLaMA/comments/1id70ya/are_there_any_models_that_take_greater_convincing/
poet3991
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id70ya
false
null
t3_1id70ya
/r/LocalLLaMA/comments/1id70ya/are_there_any_models_that_take_greater_convincing/
false
false
nsfw
1
null
Running llama.cpp on the RISC-V VisionFive 2: simple guide, better performance, and make-error fix (2025)
6
# 🚀 Optimizing llama.cpp with OpenBLAS on RISC-V (VisionFive 2)

**UPDATED: January 2025**

---

🌟 **Key Performance Improvement**

**7B Model Benchmark** (using [LLaMa-Open-Instruct-Uncensored-70K-7B-Merged-GGML](https://huggingface.co/s3nh/LLaMa-Open-Instruct-Uncensored-70K-7B-Merged-GGML)):

- **Standard build**: 1038 sec to first token
- **OpenBLAS optimized**: 496 sec to first token *(2.1x faster!)*

(The numbers above are from a year ago; I will do a fresh comparison later.)

---

# 🛠️ Prerequisites

    # Update system & install essentials
    sudo apt update && sudo apt upgrade -y
    sudo apt install pkg-config g++ wget git -y

# 📥 Install OpenBLAS

    # Install BLAS libraries
    sudo apt-get install libopenblas-dev -y

# 🔧 Compile llama.cpp

    # Clone repository (specific working commit)
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp

    # Configure build with OpenBLAS
    cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS

    # Compile with maximum parallelism
    cmake --build build --config Release -j$(nproc)

# 🖥️ Usage Guide

    # Launch server (replace placeholders)
    cd llama.cpp
    ./server -m /path/to/your/model.gguf --host 192.168.x.x

**Command breakdown**:

* `-m`: path to GGML/GGUF model file
* `--host`: local IP for network access
* `-c 2048`: context size (adjust based on RAM)
* `-ngl 43`: GPU layers (if using GPU acceleration)

# 💡 Pro Tips

1. **Troubleshooting**:
   * If newer versions fail, try: `git checkout f3c3b4b` (known stable commit)
   * Ensure at least 8GB of free storage
   * Use `htop` to monitor resource usage

# ⚠️ Important Notes

* **RISC-V specific**: tested on VisionFive 2 (Ubuntu 24.17)
* **RAM requirements**: minimum 8GB recommended for 7B models
* **Alternative build**: `make LLAMA_OPENBLAS=1` (legacy build method)
2025-01-29T22:51:35
https://www.reddit.com/r/LocalLLaMA/comments/1id74f6/running_llamacpp_on_riscv_visionfive_2_simple/
kroryan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id74f6
false
null
t3_1id74f6
/r/LocalLLaMA/comments/1id74f6/running_llamacpp_on_riscv_visionfive_2_simple/
false
false
self
6
null
OpenAI's competitors may be good at coding, but they aren't as funny as 4o!
0
https://preview.redd.it/…c27f618e63fd61
2025-01-29T22:56:45
https://www.reddit.com/r/LocalLLaMA/comments/1id78of/openai_concurrence_may_be_good_at_coding_but_they/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id78of
false
null
t3_1id78of
/r/LocalLLaMA/comments/1id78of/openai_concurrence_may_be_good_at_coding_but_they/
false
false
https://b.thumbs.redditm…yLadP3CHS_yo.jpg
0
null
I feel bad for the AI lol after seeing its chain of thought. 😭
586
https://preview.redd.it/…b53ea217f4208528
2025-01-29T22:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1id7a3k/i_feel_bad_for_the_ai_lol_after_seeing_its_chain/
Tricky_Reflection_75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id7a3k
false
null
t3_1id7a3k
/r/LocalLLaMA/comments/1id7a3k/i_feel_bad_for_the_ai_lol_after_seeing_its_chain/
false
false
https://b.thumbs.redditm…C35wbeMSZgfQ.jpg
586
null
Pandora's box has been opened
1
[removed]
2025-01-29T22:59:33
https://www.reddit.com/r/LocalLLaMA/comments/1id7azn/pandoras_box_has_been_opened/
partysnatcher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id7azn
false
null
t3_1id7azn
/r/LocalLLaMA/comments/1id7azn/pandoras_box_has_been_opened/
false
false
self
1
null
Frontends for enterprise use?
1
Hey, I'm planning to deploy a company-wide LLM frontend application on Azure that will interface with Azure's own AI services. This is for a PoC, hopefully giving me leverage to get local hardware in the future. I've come across LibreChat as a potential option, since it seems to have some enterprise-grade features, like user management and Entra ID integration. Has anyone here deployed something similar? I'd appreciate any insights about your experience, particularly regarding:

- Setup and deployment challenges
- Integration with Azure services
- Estimated costs for cloud infra + API use
- Any unforeseen user challenges (think 55-year-old Mark from accounting)

Thanks!
2025-01-29T23:10:04
https://www.reddit.com/r/LocalLLaMA/comments/1id7joc/frontends_for_enterprise_use/
hi_top_please
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id7joc
false
null
t3_1id7joc
/r/LocalLLaMA/comments/1id7joc/frontends_for_enterprise_use/
false
false
self
1
null
Transitioning from ChatGPT Plus to a Local AI Setup
1
[removed]
2025-01-29T23:14:41
https://www.reddit.com/r/LocalLLaMA/comments/1id7nit/transitioning_from_chatgpt_plus_to_a_local_ai/
NotARandomUsername11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id7nit
false
null
t3_1id7nit
/r/LocalLLaMA/comments/1id7nit/transitioning_from_chatgpt_plus_to_a_local_ai/
false
false
self
1
null
RL my way to an LLM that can generate code
1
[removed]
2025-01-29T23:21:11
https://www.reddit.com/r/LocalLLaMA/comments/1id7sty/rl_my_way_to_a_llm_that_can_generate_code/
New_Description8537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id7sty
false
null
t3_1id7sty
/r/LocalLLaMA/comments/1id7sty/rl_my_way_to_a_llm_that_can_generate_code/
false
false
self
1
null
True training compute costs of OpenAI's models compared to DeepSeek
3
Here is an analysis from researchers comparing estimated model training costs, using the same calculation of training compute that DeepSeek used in their own paper. The costs of OpenAI's models aren't actually as different from DeepSeek's as many say. This chart's accuracy was actually backed up by the CEO of Anthropic himself in his blog post today, where he said Claude 3.5 Sonnet used "a few tens of millions" in training compute.
2025-01-29T23:30:44
https://x.com/arankomatsuzaki/status/1884676245922934788?s=46
dogesator
x.com
1970-01-01T00:00:00
0
{}
1id80me
false
null
t3_1id80me
/r/LocalLLaMA/comments/1id80me/true_training_compute_costs_of_openais_models/
false
false
https://b.thumbs.redditm…ubTBqTonKAgo.jpg
3
{'enabled': False, 'images': [{'id': 'Ct8Ic_dg2kXGF8C3hXu025GgUf70KzkekScZyoSz150', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/HerUq1lwiDzjFwz1BLz6W7FbBs7BTzHnCVDovqC1zB4.jpg?width=108&crop=smart&auto=webp&s=bbb2c1928f669ed52c5fd610e08ba41d5ff118b7', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/HerUq1lwiDzjFwz1BLz6W7FbBs7BTzHnCVDovqC1zB4.jpg?width=216&crop=smart&auto=webp&s=d52f8674a5e09b45f83a9b6008a6f39d748f1a50', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/HerUq1lwiDzjFwz1BLz6W7FbBs7BTzHnCVDovqC1zB4.jpg?width=320&crop=smart&auto=webp&s=b0885ec149f918a4802d765e7cbbd72d42dc3911', 'width': 320}, {'height': 424, 'url': 'https://external-preview.redd.it/HerUq1lwiDzjFwz1BLz6W7FbBs7BTzHnCVDovqC1zB4.jpg?width=640&crop=smart&auto=webp&s=74c954ac8907bcebd260b947dac165ce4899c449', 'width': 640}, {'height': 636, 'url': 'https://external-preview.redd.it/HerUq1lwiDzjFwz1BLz6W7FbBs7BTzHnCVDovqC1zB4.jpg?width=960&crop=smart&auto=webp&s=79361536cba42413aebaf19d66fa454858bb0a9b', 'width': 960}, {'height': 716, 'url': 'https://external-preview.redd.it/HerUq1lwiDzjFwz1BLz6W7FbBs7BTzHnCVDovqC1zB4.jpg?width=1080&crop=smart&auto=webp&s=3b983748cf65f6bbccda2a66d22bb165c45ef636', 'width': 1080}], 'source': {'height': 723, 'url': 'https://external-preview.redd.it/HerUq1lwiDzjFwz1BLz6W7FbBs7BTzHnCVDovqC1zB4.jpg?auto=webp&s=e8910332f72753b86c1ced7260b71c90d6e1a5f4', 'width': 1090}, 'variants': {}}]}
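For context, the calculation style referenced above is the standard dense-compute approximation (a sketch; the GPU-hour figures below are the ones DeepSeek state in their V3 technical report, and the $2/GPU-hour rental price is their own assumption):

```latex
\text{Training FLOPs} \approx 6 \, N_{\text{active}} \, D_{\text{tokens}}
```

DeepSeek-V3's report puts the final training run at roughly 2.788M H800 GPU-hours, which at the assumed $2/GPU-hour yields the widely quoted ≈$5.576M. That figure excludes hardware purchase, prior research, and ablation runs, which is why apples-to-apples comparisons against other labs' "training cost" claims need the same accounting.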
Rebuilding my home LLM rig - what to install?
2
I'm rebuilding/formatting my home rig and starting over. This is on a desktop in my basement that I'd like to serve to my local network. (8th gen i5, 64GB RAM, RTX 3090 24GB, 2TB NVMe) This is currently running Ubuntu 22, Ollama with Open WebUI, and I think I installed Automatic1111 and/or Stable Diffusion at one point. I'd like to use this to test new LLMs and things like agents, as well as image generation. If you were starting from scratch with a similar system, what would you install first? Follow-up question: I have an extra 3060 12GB. Is there any point to installing that along with the 3090?
2025-01-29T23:32:29
https://www.reddit.com/r/LocalLLaMA/comments/1id81z1/rebuilding_my_home_llm_rig_what_to_install/
convalytics
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id81z1
false
null
t3_1id81z1
/r/LocalLLaMA/comments/1id81z1/rebuilding_my_home_llm_rig_what_to_install/
false
false
self
2
null
webAI - legit.
16
Definitely worth checking out. Runs on Apple silicon. Can do local training as well, without setting up a bunch of tools. Seems promising, especially for paranoid, privacy-focused people. Watch the video (Winter Release). Anyone using it?
2025-01-29T23:37:20
https://www.webai.com/?utm_content=video&utm_campaign=5820784-Winter%2BRelease%2B2025&utm_medium=social&utm_source=linkedin
yuckturkeybacon
webai.com
1970-01-01T00:00:00
0
{}
1id85xg
false
null
t3_1id85xg
/r/LocalLLaMA/comments/1id85xg/webai_legit/
false
false
https://b.thumbs.redditm…oUBVlJevkbdQ.jpg
16
{'enabled': False, 'images': [{'id': '8wY8cfMhgkK5R4fVvam5---JGi1V_k3DUJeSolJAcfA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sXQClgZzFXQW_2PBgM0RvBIRtiuGJb5Fe7KAZ-8DqTM.jpg?width=108&crop=smart&auto=webp&s=12dd423fc17d4f5162256e6f1bd75ed0a2c1ea41', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sXQClgZzFXQW_2PBgM0RvBIRtiuGJb5Fe7KAZ-8DqTM.jpg?width=216&crop=smart&auto=webp&s=a2438ddf8cb0998fc4c0858bb374966002425988', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sXQClgZzFXQW_2PBgM0RvBIRtiuGJb5Fe7KAZ-8DqTM.jpg?width=320&crop=smart&auto=webp&s=083b304af2f3be8cb094381afe2c9aa5f53a312a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sXQClgZzFXQW_2PBgM0RvBIRtiuGJb5Fe7KAZ-8DqTM.jpg?width=640&crop=smart&auto=webp&s=1899f0d551666d179d8c3effcffb51c3c7b38428', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sXQClgZzFXQW_2PBgM0RvBIRtiuGJb5Fe7KAZ-8DqTM.jpg?width=960&crop=smart&auto=webp&s=a4c3af4cbd7742228ee5b74364ccf23ac9d3efc0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sXQClgZzFXQW_2PBgM0RvBIRtiuGJb5Fe7KAZ-8DqTM.jpg?width=1080&crop=smart&auto=webp&s=97154a62041311fc81d42a5b0c47084ee1263191', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sXQClgZzFXQW_2PBgM0RvBIRtiuGJb5Fe7KAZ-8DqTM.jpg?auto=webp&s=aa239da19739529dc3866093d44923232bb7acb8', 'width': 1200}, 'variants': {}}]}
Transcription Capabilities - more VRAM/GPU Power = Better Transcription?!
1
[removed]
2025-01-29T23:47:32
https://www.reddit.com/r/LocalLLaMA/comments/1id8e0b/transcription_capabilities_more_vramgpu_power/
dnyx33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id8e0b
false
null
t3_1id8e0b
/r/LocalLLaMA/comments/1id8e0b/transcription_capabilities_more_vramgpu_power/
false
false
self
1
null
More GPU/VRAM = Better Transcription Quality?!
1
[removed]
2025-01-29T23:48:39
https://www.reddit.com/r/LocalLLaMA/comments/1id8ewx/more_gpuvram_better_transcription_quality/
dnyx33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id8ewx
false
null
t3_1id8ewx
/r/LocalLLaMA/comments/1id8ewx/more_gpuvram_better_transcription_quality/
false
false
self
1
null
Why DeepSeek R1 is much more expensive than V3?
0
From what I read, R1 and V3 have the same size and the same number of activated params during inference, so why is the cost of R1 like 10x that of V3???
2025-01-29T23:54:41
https://www.reddit.com/r/LocalLLaMA/comments/1id8jmn/why_deepseek_r1_is_much_more_expensive_than_v3/
JC1DA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id8jmn
false
null
t3_1id8jmn
/r/LocalLLaMA/comments/1id8jmn/why_deepseek_r1_is_much_more_expensive_than_v3/
false
false
self
0
null
Been having difficulty with tool calling
0
The only models I have on my machine that support 'tools' are llama3.1 and 3.2; the latter I haven't used for tools. For some reason, 3.1 is calling the tools even when not necessary. It does call them well when needed, but sometimes a prompt like "tell me the basics of statistics" gets a tool called. Changing the prompt ([here is my code](https://github.com/MatthewLacerda2/Jarvis)) doesn't get it fixed (what a surprise). Is 3.1 inherently like that?
2025-01-29T23:56:58
https://www.reddit.com/r/LocalLLaMA/comments/1id8lep/been_having_difficulty_with_tool_calling/
Blender-Fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id8lep
false
null
t3_1id8lep
/r/LocalLLaMA/comments/1id8lep/been_having_difficulty_with_tool_calling/
false
false
self
0
{'enabled': False, 'images': [{'id': 'u51BlV0Uk6FhiQJ0jRwa_qLu9MGLq3TottClNOmpAhg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=108&crop=smart&auto=webp&s=ede581637dcafdb3321f9ae45278a65102e9c242', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=216&crop=smart&auto=webp&s=fcb4a69bae1ef79ae135be0bd55ec9acdb11bbe0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=320&crop=smart&auto=webp&s=b3bfeb5a6aaa54465fc08f005282536bec803a95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=640&crop=smart&auto=webp&s=bea37c6251d1ea9f60deb05faf71b16b370baf3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=960&crop=smart&auto=webp&s=e87c08662db1316edfd0dd57005dceb0a0353b77', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=1080&crop=smart&auto=webp&s=3725f0effe3b9d7d270f49468d6320d56601b5af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?auto=webp&s=f479c5860afa64863ace123b7eb96175f7f2acf0', 'width': 1200}, 'variants': {}}]}
Deepseek deep in the net?
3
Get out, guys. I need to investigate how everyone is reacting to the DeepSeek leak. I am pretty sure none of us is expressing their deepest secrets to any of these AIs... Right? That would be a little concerning, unless you do it on your own infrastructure... 🤔 Right guys?
2025-01-30T00:04:07
https://i.redd.it/z3ip6k4hv0ge1.png
Then_Knowledge_719
i.redd.it
1970-01-01T00:00:00
0
{}
1id8rad
false
null
t3_1id8rad
/r/LocalLLaMA/comments/1id8rad/deepseek_deep_in_the_net/
false
false
https://b.thumbs.redditm…ex7IlkKyu_6I.jpg
3
{'enabled': True, 'images': [{'id': 'g3poyOC7RRScDiMmBCovzUt4DbHTvJ3iByvLnL46oSc', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/z3ip6k4hv0ge1.png?width=108&crop=smart&auto=webp&s=3ac9a2c83b15548fa7595fbf1a7cb295f7423a11', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/z3ip6k4hv0ge1.png?width=216&crop=smart&auto=webp&s=ab5ee387e2059f62c0f828d8dd4a5e4a03233f45', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/z3ip6k4hv0ge1.png?width=320&crop=smart&auto=webp&s=42b3bec4c0a2e27c5dc699335eef5e925444b5fe', 'width': 320}, {'height': 236, 'url': 'https://preview.redd.it/z3ip6k4hv0ge1.png?width=640&crop=smart&auto=webp&s=ed619fe3876b142ddd8f8e218450cb15224cc965', 'width': 640}, {'height': 354, 'url': 'https://preview.redd.it/z3ip6k4hv0ge1.png?width=960&crop=smart&auto=webp&s=a338c3d4a8fd45ab69cec54583cd4c1c24d6c4c6', 'width': 960}, {'height': 399, 'url': 'https://preview.redd.it/z3ip6k4hv0ge1.png?width=1080&crop=smart&auto=webp&s=6149400ac605d6d7a09c37407ab6be4cc3174550', 'width': 1080}], 'source': {'height': 399, 'url': 'https://preview.redd.it/z3ip6k4hv0ge1.png?auto=webp&s=de9639144db8e7a7efde2cbbdc35bff383d4c1d4', 'width': 1080}, 'variants': {}}]}
Looks like Claude is screwed as well when Amodei brings such argument?
1
2025-01-30T00:05:08
https://i.redd.it/akoopev8v0ge1.png
robertpiosik
i.redd.it
1970-01-01T00:00:00
0
{}
1id8s5j
false
null
t3_1id8s5j
/r/LocalLLaMA/comments/1id8s5j/looks_like_claude_is_screwed_as_well_when_amodei/
false
false
https://a.thumbs.redditm…CsZ9YKM67DG0.jpg
1
{'enabled': True, 'images': [{'id': '2QZ7tzL65SYYgEcQoxM_YCqWmjt1OoJ17CkpIrC--TI', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/akoopev8v0ge1.png?width=108&crop=smart&auto=webp&s=43721fa26181411bd0827b2e155ae5b0eca89213', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/akoopev8v0ge1.png?width=216&crop=smart&auto=webp&s=5addd21f4a7d7391ace2f2298ef46d005071af05', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/akoopev8v0ge1.png?width=320&crop=smart&auto=webp&s=16bf70a78b0283b2261ab414b642a0e775841d52', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/akoopev8v0ge1.png?width=640&crop=smart&auto=webp&s=f83e2460bfbc44598d29a4763312be781e9174a7', 'width': 640}, {'height': 549, 'url': 'https://preview.redd.it/akoopev8v0ge1.png?width=960&crop=smart&auto=webp&s=5c0b86d72cd92151d0d798e30114fe60f3816a64', 'width': 960}, {'height': 617, 'url': 'https://preview.redd.it/akoopev8v0ge1.png?width=1080&crop=smart&auto=webp&s=e66de7b4c66f84236b4ee0affc0455e5c3a274da', 'width': 1080}], 'source': {'height': 950, 'url': 'https://preview.redd.it/akoopev8v0ge1.png?auto=webp&s=8fe6756834ac0ef1d7b2039ede92da3220ba167f', 'width': 1661}, 'variants': {}}]}
Does the generate function from vllm actually generate text?
1
I recently transitioned from using the Hugging Face transformers library for text generation to vLLM due to its significantly faster generation speed. However, after doing some tests, passing the same prompt and parameters (temperature, output sequence length, etc.) to either method using Llama 3.1 8B Instruct, I noticed that vLLM generates the exact same output every time I feed it the same prompt, whereas the Hugging Face implementation exhibits the expected variation. Even after fine-tuning the instruct model and reloading it into vLLM, the output remains deterministic. This behaviour makes me wonder whether vLLM is actually generating text dynamically or if it is heavily caching previous generations. Has anyone else encountered this, or am I implementing it wrong?
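For anyone comparing, here is a minimal sketch of how sampling is typically configured in vLLM; the model id and parameter values are placeholders, not the poster's actual setup. With temperature > 0 and no fixed seed, repeated calls should produce different completions, so deterministic output usually points to greedy decoding (temperature=0) or a pinned seed somewhere in the pipeline rather than caching.

```python
from vllm import LLM, SamplingParams

# Load the model once; weights stay resident between generate() calls.
llm = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct")  # placeholder model id

# temperature > 0 enables stochastic sampling; passing a fixed seed would
# instead make every run repeat the same completion.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

for _ in range(3):
    out = llm.generate(["Write one sentence about llamas."], params)
    print(out[0].outputs[0].text)  # should differ across iterations
```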
2025-01-30T00:13:01
https://www.reddit.com/r/LocalLLaMA/comments/1id8ycx/does_the_generate_function_from_vllm_actually/
Comb-Greedy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id8ycx
false
null
t3_1id8ycx
/r/LocalLLaMA/comments/1id8ycx/does_the_generate_function_from_vllm_actually/
false
false
self
1
null
Cursor IDE + Ollama -- Help a Blind Guy Please?
1
[removed]
2025-01-30T00:16:26
https://www.reddit.com/r/LocalLLaMA/comments/1id90zz/cursor_ide_ollama_help_a_blind_guy_please/
mdizak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id90zz
false
null
t3_1id90zz
/r/LocalLLaMA/comments/1id90zz/cursor_ide_ollama_help_a_blind_guy_please/
false
false
self
1
null
Which models are best for brainstorming creative ideas (e.g. for writing science fiction novels)? Just because a model scores high in creative writing, doesn't mean it will generate the coolest ideas. Looking for a research partner and not a model that writes for me.
7
I would like to do the writing myself, but I am interested in bouncing ideas off of a model, and this requires a lot of creativity from the model as well. I want to research the feasibility of said ideas, understand caveats, and extrapolate a few creative things out of them based on what I have already built. For example, in my experience, Sonnet 3.5 gives far more creative ideas compared to 4o. But I am lost when it comes to looking at benchmarks to find the best models for this task, because I don't know what to look for. Anyone have any recommendations, ideas on how I can find good models, or recommendations for models themselves?
2025-01-30T00:19:53
https://www.reddit.com/r/LocalLLaMA/comments/1id93ju/which_models_are_best_for_brainstorming_creative/
TryTheRedOne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id93ju
false
null
t3_1id93ju
/r/LocalLLaMA/comments/1id93ju/which_models_are_best_for_brainstorming_creative/
false
false
self
7
null
CoT models have seriously broken safety rails.
1
[removed]
2025-01-30T00:37:58
https://www.reddit.com/r/LocalLLaMA/comments/1id9h6h/cot_models_have_seriously_broken_safety_rails/
techlos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1id9h6h
false
null
t3_1id9h6h
/r/LocalLLaMA/comments/1id9h6h/cot_models_have_seriously_broken_safety_rails/
false
false
self
1
null
Deepseek Year of the Snake
0
In 2025, the Chinese zodiac enters the Year of the Snake. The Snake is the sixth animal in the zodiac, and it has a complex and mysterious nature. The Snake is a charming, intelligent and creative sign, but also secretive, cunning and sometimes ruthless. Welcome to the DeepSeek Year of the Snake. #DeepseekYearoftheSnake
2025-01-30T01:03:16
https://i.redd.it/2olzjga161ge1.jpeg
brucespector
i.redd.it
1970-01-01T00:00:00
0
{}
1ida2ob
false
null
t3_1ida2ob
/r/LocalLLaMA/comments/1ida2ob/deepseek_year_of_the_snake/
false
false
https://b.thumbs.redditm…aV65AhaRMR4k.jpg
0
{'enabled': True, 'images': [{'id': '_2gE2E8EsUpoS5nPk2Jh3YOXjj-TH_sl5aqAXddF3xs', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/2olzjga161ge1.jpeg?width=108&crop=smart&auto=webp&s=56265106f2fd21f8dac8662ac5f9b4088aafb9cd', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/2olzjga161ge1.jpeg?width=216&crop=smart&auto=webp&s=f1dd038e752ef975a09c58b7c0096e0282797b14', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/2olzjga161ge1.jpeg?width=320&crop=smart&auto=webp&s=309e1b7eaa07d49ea251f0c1f0d42bdf0e1c27b5', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/2olzjga161ge1.jpeg?width=640&crop=smart&auto=webp&s=470776ce928763def004e4f4690ea5e142ce0042', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/2olzjga161ge1.jpeg?width=960&crop=smart&auto=webp&s=e5febe17c23ae5b622a66d02f6ec1b2121d8f3c4', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/2olzjga161ge1.jpeg?auto=webp&s=c0315b0a41f88a61db6a75fc2befee5e6184aa78', 'width': 1024}, 'variants': {}}]}
OpenRouter Providers to Avoid/Ignore?
1
Looking for advice on which providers I should ignore on OpenRouter to avoid my data being used or sent to China. For the majority of my dev work this doesn't really matter, but there's also a fair amount of my work I don't want being trained on. Let me know if you guys have any MUST-avoid providers.
2025-01-30T01:20:20
https://www.reddit.com/r/LocalLLaMA/comments/1idagee/openrouter_providers_to_avoidignore/
Bjornhub1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idagee
false
null
t3_1idagee
/r/LocalLLaMA/comments/1idagee/openrouter_providers_to_avoidignore/
false
false
self
1
null
Thanks Deepseek, I completely agree
0
2025-01-30T01:21:43
https://i.redd.it/scoto9t891ge1.png
rdkilla
i.redd.it
1970-01-01T00:00:00
0
{}
1idahes
false
null
t3_1idahes
/r/LocalLLaMA/comments/1idahes/thanks_deepseek_i_completely_agree/
false
false
https://b.thumbs.redditm…43GyOIChzdOI.jpg
0
{'enabled': True, 'images': [{'id': 'bSEmhPpNlOfgOnucdcDYMOwMp7qL5bg7y6sf8toOqWk', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/scoto9t891ge1.png?width=108&crop=smart&auto=webp&s=696960a1e72dd5f6a7d45aa46b622c0a6f7e9965', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/scoto9t891ge1.png?width=216&crop=smart&auto=webp&s=19e9301512fd48fff0be28f1eb7b60c339b7e4f8', 'width': 216}, {'height': 153, 'url': 'https://preview.redd.it/scoto9t891ge1.png?width=320&crop=smart&auto=webp&s=5ea2b1ca7309d22861859d08262ec14a4e3d85fe', 'width': 320}, {'height': 307, 'url': 'https://preview.redd.it/scoto9t891ge1.png?width=640&crop=smart&auto=webp&s=96dbe674f44fb9571e594439ac0cc0dad1b5932d', 'width': 640}, {'height': 460, 'url': 'https://preview.redd.it/scoto9t891ge1.png?width=960&crop=smart&auto=webp&s=6845399180ad3d937910ff36bda6b837b8af43b6', 'width': 960}, {'height': 518, 'url': 'https://preview.redd.it/scoto9t891ge1.png?width=1080&crop=smart&auto=webp&s=7af1a027f12ead3eab1811c78f9f14c666af7e09', 'width': 1080}], 'source': {'height': 638, 'url': 'https://preview.redd.it/scoto9t891ge1.png?auto=webp&s=c4bea1b9558fad97b0e95d7eb10cd0d1ada63555', 'width': 1329}, 'variants': {}}]}
Sudoku as an LLM Test
7
Since R1, I've been trying to figure out a good test for reasoning models, and I decided to see if they could do Sudoku. Honestly, I assumed it would fail, or go around in circles, but I was impressed to see that it can successfully solve Sudoku games, pretty consistently. I gave it 4x4 and 9x9 games, and it happily churns through them. For reference, Sonnet 3.5 couldn't even figure out a 4x4. I know it isn't a reasoning model, but I thought I'd give it a shot. Here is the prompt and format: >Solve this sudoku board: +-------+-------+-------+ | . 6 . | . 3 8 | 5 1 2 | | . . 5 | 4 . 9 | . 8 6 | | . 3 1 | . 5 . | 4 9 . | +-------+-------+-------+ | . . . | 6 . 7 | 9 3 . | | . . . | . 4 1 | 2 . . | | . . . | . . 3 | 6 7 . | +-------+-------+-------+ | . . . | . . . | . . . | | . 8 9 | 1 . . | . . 5 | | 2 1 . | 3 . . | . 4 . | +-------+-------+-------+ For the 9x9 games it thinks for ~15 minutes. For the 4x4 games it is around 2-4 minutes. I just thought I'd share this as it isn't a test that I'd thought of before, and I can see it as being something that is verifiable, and could be used for procedurally generated benchmarks, as well as self-improvement datasets. If you fancy giving it a try, I used the following website to generate the games: [https://printablecreative.com/sudoku-generator](https://printablecreative.com/sudoku-generator) Also, if anyone happens to have access to o1 and any of the distilled reasoning models from R1, can you give them a try? If you do, please let me know the specific model, and quantisation. Have you tried R1 or any reasoning models on any other similar games/puzzles? If so, which ones, and how do they do? Cheers!
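Since the whole appeal is verifiability, answers can be scored automatically; here is a minimal sketch of a checker for completed 9x9 boards (plain Python, no dependencies; the grid format, nine strings of nine digits, is just an assumed convention):

```python
def valid_sudoku(grid: list[str]) -> bool:
    """Check a completed 9x9 grid given as nine strings of digits '1'-'9'."""
    def ok(cells: list[str]) -> bool:
        # A group is valid iff it is a permutation of the digits 1-9.
        return sorted(cells) == list("123456789")

    rows = [list(r) for r in grid]
    cols = [[rows[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [rows[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)]
        for br in range(3) for bc in range(3)
    ]
    return all(ok(group) for group in rows + cols + boxes)

# valid_sudoku(nine_row_strings) -> True only if every row, column,
# and 3x3 box is a permutation of 1-9.
```

A checker like this, plus any puzzle generator, is enough to build the procedurally generated benchmark or self-improvement dataset mentioned above.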
2025-01-30T01:25:37
https://www.reddit.com/r/LocalLLaMA/comments/1idakak/sudoku_as_an_llm_test/
StevenSamAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idakak
false
null
t3_1idakak
/r/LocalLLaMA/comments/1idakak/sudoku_as_an_llm_test/
false
false
self
7
{'enabled': False, 'images': [{'id': 'C3p2nATnZj2_4ClrbcyaztzPKhZF45eZ-s35ArCJBBY', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/WJpBrz9DGoWK2zV1ujg_9SjkufscReoFSuFH54z7t1Y.jpg?width=108&crop=smart&auto=webp&s=6630eb0a93a111b078af6c2b51a16975593e07e8', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/WJpBrz9DGoWK2zV1ujg_9SjkufscReoFSuFH54z7t1Y.jpg?width=216&crop=smart&auto=webp&s=095ff0c327df4b2b720a5f2a29400fe234804468', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/WJpBrz9DGoWK2zV1ujg_9SjkufscReoFSuFH54z7t1Y.jpg?width=320&crop=smart&auto=webp&s=ec7b04d087c48e8457dda47a3494414dc28ebdcf', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/WJpBrz9DGoWK2zV1ujg_9SjkufscReoFSuFH54z7t1Y.jpg?auto=webp&s=47ee7c0720c0cfbfa5de632ea4a4f6598c454ada', 'width': 500}, 'variants': {}}]}
What do I need to run local Ollama with multiple GPUs (RX7900XTX)
1
[removed]
2025-01-30T01:39:27
https://www.reddit.com/r/LocalLLaMA/comments/1idau6e/what_do_i_need_to_run_local_ollama_with_multiple/
Sell-Standard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idau6e
false
null
t3_1idau6e
/r/LocalLLaMA/comments/1idau6e/what_do_i_need_to_run_local_ollama_with_multiple/
false
false
self
1
null
Distillation Rule of Thumb
1
[removed]
2025-01-30T01:43:05
https://www.reddit.com/r/LocalLLaMA/comments/1idaww6/distillation_rule_of_thumb/
Geologic7088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idaww6
false
null
t3_1idaww6
/r/LocalLLaMA/comments/1idaww6/distillation_rule_of_thumb/
false
false
self
1
null
Model Weights Are Now An Export-controlled Item
1
2025-01-30T01:44:44
https://semianalysis.com/2025/01/15/2025-ai-diffusion-export-controls-microsoft-regulatory-capture-oracle-tears/#model-weights-are-now-an-export-controlled-item
ring-x-ring
semianalysis.com
1970-01-01T00:00:00
0
{}
1iday3l
false
null
t3_1iday3l
/r/LocalLLaMA/comments/1iday3l/model_weights_are_now_an_exportcontrolled_item/
false
false
https://b.thumbs.redditm…Yje4d7a2U0nc.jpg
1
{'enabled': False, 'images': [{'id': 'Vxl3f4Qi4r7xgEUZz3ZOeyTw90aPWN-ycPdBLa_jFto', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/i-9bY4m8ZENsx4XiERuX9GrfFjXWcZ_XaOEwLNAOPcI.jpg?width=108&crop=smart&auto=webp&s=404dde4da1cd024c5d2b409655141faa17ee32e0', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/i-9bY4m8ZENsx4XiERuX9GrfFjXWcZ_XaOEwLNAOPcI.jpg?width=216&crop=smart&auto=webp&s=d26fa6da5627f628dc292770a3b4019145e6c9e6', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/i-9bY4m8ZENsx4XiERuX9GrfFjXWcZ_XaOEwLNAOPcI.jpg?width=320&crop=smart&auto=webp&s=48025c8742b3f7f94e324e1e9c9f91c4f645cdd0', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/i-9bY4m8ZENsx4XiERuX9GrfFjXWcZ_XaOEwLNAOPcI.jpg?width=640&crop=smart&auto=webp&s=a9d7b849826fff86cb071514b81881292e8b7400', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/i-9bY4m8ZENsx4XiERuX9GrfFjXWcZ_XaOEwLNAOPcI.jpg?width=960&crop=smart&auto=webp&s=b2279b664a961cd7480be70f574213d3615cb917', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/i-9bY4m8ZENsx4XiERuX9GrfFjXWcZ_XaOEwLNAOPcI.jpg?width=1080&crop=smart&auto=webp&s=12ef07b66f33822a9aba01db8a0b8baa3601bd94', 'width': 1080}], 'source': {'height': 686, 'url': 'https://external-preview.redd.it/i-9bY4m8ZENsx4XiERuX9GrfFjXWcZ_XaOEwLNAOPcI.jpg?auto=webp&s=30755a1e50ccf3946a9f2b90b35a09725b346765', 'width': 1200}, 'variants': {}}]}
DeepSeek responded to English question in Chinese
1
[removed]
2025-01-30T01:47:00
https://www.reddit.com/r/LocalLLaMA/comments/1idazsi/deepseek_responded_to_english_question_in_chinese/
quantinfoai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idazsi
false
null
t3_1idazsi
/r/LocalLLaMA/comments/1idazsi/deepseek_responded_to_english_question_in_chinese/
false
false
self
1
null
Has anyone figured a way to get deep seek to run through an interface locally with MCP?
0
MCP has been the single greatest thing in my experience for coding, as it helps nail down a lot of weird intricacies I'm sometimes overlooking. I'm just curious if anybody has gotten the two of them to play together, and if so, what did you do?
2025-01-30T01:50:00
https://www.reddit.com/r/LocalLLaMA/comments/1idb1vu/has_anyone_figured_a_way_to_get_deep_seek_to_run/
OccasionllyAsleep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idb1vu
false
null
t3_1idb1vu
/r/LocalLLaMA/comments/1idb1vu/has_anyone_figured_a_way_to_get_deep_seek_to_run/
false
false
self
0
null
Are Ai Models Zapping My GPU or Power Supply?
1
[removed]
2025-01-30T01:56:43
https://www.reddit.com/r/LocalLLaMA/comments/1idb76y/are_ai_models_zapping_my_gpu_or_power_supply/
CaptVanilla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idb76y
false
null
t3_1idb76y
/r/LocalLLaMA/comments/1idb76y/are_ai_models_zapping_my_gpu_or_power_supply/
false
false
self
1
null
How do LLMs manage if the prompts have typos?
4
I’m unclear on how LLMs actually handle typos. Shouldn’t typos throw LLMs off? A word with a typo gets split into totally different tokens, which should have totally different probabilities.
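One way to see what actually happens is to compare tokenizations directly; a quick sketch with a Hugging Face tokenizer (gpt2 is just a small, ungated example):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any BPE-style tokenizer works

for text in ["the weather is beautiful", "the waether is beuatiful"]:
    ids = tok.encode(text)
    # Print the sub-word pieces each sentence is split into.
    print(text, "->", tok.convert_ids_to_tokens(ids))
```

The misspelled words do split into different, rarer sub-word tokens, but models trained on noisy web text have seen such splits often enough that the surrounding context still makes the intended word by far the most probable reading.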
2025-01-30T02:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1idbbtv/how_do_llms_manage_if_the_prompts_have_typos/
adibhat007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idbbtv
false
null
t3_1idbbtv
/r/LocalLLaMA/comments/1idbbtv/how_do_llms_manage_if_the_prompts_have_typos/
false
false
self
4
null
Mac Pro 2019 with DeepSeek R1
1
Just wondering how fast this will be with DeepSeek R1? I think it is possible to load at least an IQ4_XS quant. Specs: * Intel Xeon W-3245 3.2 GHz 16-core CPU * 384GB RAM (6x64 DDR4 ECC) * Vega II Duo video card (64GB of HBM2 memory, 32GB per GPU)
2025-01-30T02:03:32
https://www.reddit.com/r/LocalLLaMA/comments/1idbcsd/mac_pro_2019_with_deepseek_r1/
skipfish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idbcsd
false
null
t3_1idbcsd
/r/LocalLLaMA/comments/1idbcsd/mac_pro_2019_with_deepseek_r1/
false
false
self
1
null
DeepSeek R1 moe carte extension vram !
1
[removed]
2025-01-30T02:30:05
https://www.reddit.com/r/LocalLLaMA/comments/1idbx2j/deepseek_r1_moe_carte_extension_vram/
MoreIndependent5967
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idbx2j
false
null
t3_1idbx2j
/r/LocalLLaMA/comments/1idbx2j/deepseek_r1_moe_carte_extension_vram/
false
false
self
1
null
GPU poor setup (<=12GB)
1
[removed]
2025-01-30T02:35:03
https://www.reddit.com/r/LocalLLaMA/comments/1idc11o/gpu_poor_setup_12gb/
sebarau777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idc11o
false
null
t3_1idc11o
/r/LocalLLaMA/comments/1idc11o/gpu_poor_setup_12gb/
false
false
self
1
null
Getting started with AI
1
[removed]
2025-01-30T02:50:04
[deleted]
1970-01-01T00:00:00
0
{}
1idccm6
false
null
t3_1idccm6
/r/LocalLLaMA/comments/1idccm6/getting_started_with_ai/
false
false
default
1
null
Getting started in AI
1
[removed]
2025-01-30T02:55:31
[deleted]
1970-01-01T00:00:00
0
{}
1idcgtt
false
null
t3_1idcgtt
/r/LocalLLaMA/comments/1idcgtt/getting_started_in_ai/
false
false
default
1
null
not sure if memes are allowed here lul
1
2025-01-30T03:00:16
https://i.redd.it/n158awspq1ge1.png
anzorq
i.redd.it
1970-01-01T00:00:00
0
{}
1idckia
false
null
t3_1idckia
/r/LocalLLaMA/comments/1idckia/not_sure_if_memes_are_allowed_here_lul/
false
false
https://a.thumbs.redditm…1TB0h-HgbC80.jpg
1
{'enabled': True, 'images': [{'id': 'mlJmwND7i1GI6OnMt64NVGtruVvMP5Y8meEuDKHyRjc', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/n158awspq1ge1.png?width=108&crop=smart&auto=webp&s=bb9f4dbcfe16367447560343c1ec64ca12aee633', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/n158awspq1ge1.png?width=216&crop=smart&auto=webp&s=7f270a327f38cdab361f8e7d34e33b8ec098c8a6', 'width': 216}, {'height': 411, 'url': 'https://preview.redd.it/n158awspq1ge1.png?width=320&crop=smart&auto=webp&s=3e106437e8c2dbd7f3e0470e323a7fef38d6d7d7', 'width': 320}, {'height': 822, 'url': 'https://preview.redd.it/n158awspq1ge1.png?width=640&crop=smart&auto=webp&s=3eaa72e9c02f35dfc5f902cb2f0c6cfe883239dc', 'width': 640}], 'source': {'height': 1005, 'url': 'https://preview.redd.it/n158awspq1ge1.png?auto=webp&s=b0fbf69f8fa48452116f448b7e61907600f3b23b', 'width': 782}, 'variants': {}}]}
Lol, I tried to run 671b DeepSeekR1 on my PC and this is what I got:
0
https://preview.redd.it/… RAM 6000 MT/s.
2025-01-30T03:06:47
https://www.reddit.com/r/LocalLLaMA/comments/1idcpop/lol_i_tried_to_run_671b_deepseekr1_on_my_pc_and/
108er
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idcpop
false
null
t3_1idcpop
/r/LocalLLaMA/comments/1idcpop/lol_i_tried_to_run_671b_deepseekr1_on_my_pc_and/
false
false
https://b.thumbs.redditm…zdK-FF_45TzA.jpg
0
null
not sure if memes are allowed here lul
275
2025-01-30T03:08:08
https://i.redd.it/088rd998s1ge1.png
anzorq
i.redd.it
1970-01-01T00:00:00
0
{}
1idcqm4
false
null
t3_1idcqm4
/r/LocalLLaMA/comments/1idcqm4/not_sure_if_memes_are_allowed_here_lul/
false
false
https://b.thumbs.redditm…mIVPshdb1_kM.jpg
275
{'enabled': True, 'images': [{'id': '2ZTQAZBWKcGM1T_oKyTOKiRPNAyt70CRy8ba5-L3aok', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/088rd998s1ge1.png?width=108&crop=smart&auto=webp&s=d2eae33c7daae9817c05c4d6f9aea832eff50778', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/088rd998s1ge1.png?width=216&crop=smart&auto=webp&s=86cd762cf340e149fc51e58265fee79eac5583a0', 'width': 216}, {'height': 411, 'url': 'https://preview.redd.it/088rd998s1ge1.png?width=320&crop=smart&auto=webp&s=47c65c7b4fe31eadd7e77b36b74eae6ed90cf6be', 'width': 320}, {'height': 822, 'url': 'https://preview.redd.it/088rd998s1ge1.png?width=640&crop=smart&auto=webp&s=76eb2b2ec3633fa78609013c2913558a4d024d2f', 'width': 640}], 'source': {'height': 1005, 'url': 'https://preview.redd.it/088rd998s1ge1.png?auto=webp&s=155d029b5755b96bb4a8792710add6ec8b7b0b84', 'width': 782}, 'variants': {}}]}
Looking for Open-Source Software to Convert Textbooks and Notes to Audio
1
[removed]
2025-01-30T03:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1idcrsp/looking_for_opensource_software_to_convert/
FormerEngine6049
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idcrsp
false
null
t3_1idcrsp
/r/LocalLLaMA/comments/1idcrsp/looking_for_opensource_software_to_convert/
false
false
self
1
null
How to prevent LM studio from using C:drive and instead use another drive when loading models larger than system memory?
3
It keeps using my C: drive, which has my slowest SSD, when it could load from where the model itself is actually stored, on a 7 GB/s NVMe SSD.
2025-01-30T03:11:23
https://www.reddit.com/r/LocalLLaMA/comments/1idcsvg/how_to_prevent_lm_studio_from_using_cdrive_and/
Goldkoron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idcsvg
false
null
t3_1idcsvg
/r/LocalLLaMA/comments/1idcsvg/how_to_prevent_lm_studio_from_using_cdrive_and/
false
false
self
3
null
Arc-AGI ON DeepSeek’s R1-Zero vs. R1: Why Eliminating Human Labels Could Unlock AGI’s Future
1
[removed]
2025-01-30T03:14:12
https://www.reddit.com/r/LocalLLaMA/comments/1idcv1z/arcagi_on_deepseeks_r1zero_vs_r1_why_eliminating/
ImportantOwl2939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idcv1z
false
null
t3_1idcv1z
/r/LocalLLaMA/comments/1idcv1z/arcagi_on_deepseeks_r1zero_vs_r1_why_eliminating/
false
false
self
1
null
hardware req for running sonnet3.5 equivalent locally?
1
[removed]
2025-01-30T03:43:09
https://www.reddit.com/r/LocalLLaMA/comments/1iddhhe/hardware_req_for_running_sonnet35_equivalent/
0x0tyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddhhe
false
null
t3_1iddhhe
/r/LocalLLaMA/comments/1iddhhe/hardware_req_for_running_sonnet35_equivalent/
false
false
self
1
null
They Want to Regulate Open Source AI, But That's Bad Idea
0
2025-01-30T03:45:00
https://v.redd.it/9x08jx5uw1ge1
aihorsieshoe
/r/LocalLLaMA/comments/1iddiww/they_want_to_regulate_open_source_ai_but_thats/
1970-01-01T00:00:00
0
{}
1iddiww
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/9x08jx5uw1ge1/DASHPlaylist.mpd?a=1740930308%2CNmUxNmIwZTA0YzViMGViZjlmODdlNjQ5N2Q0MTlkYjcxMDM0ZjQxYTQ4ZThhOTAxMjlkNTQzOTYyM2MzNjIyNQ%3D%3D&v=1&f=sd', 'duration': 146, 'fallback_url': 'https://v.redd.it/9x08jx5uw1ge1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/9x08jx5uw1ge1/HLSPlaylist.m3u8?a=1740930308%2CYzA4YmNjNGUxZDdiNTM2MDMwMGZmMjNiNTU5NTMxYzgyOGY0Y2FmZjMyODk5ZWRhYjQyYTAyYzhmZTg5NzIxZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9x08jx5uw1ge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1iddiww
/r/LocalLLaMA/comments/1iddiww/they_want_to_regulate_open_source_ai_but_thats/
false
false
https://external-preview…dc00b45423d03234
0
{'enabled': False, 'images': [{'id': 'ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK.png?width=108&crop=smart&format=pjpg&auto=webp&s=5955a371696b376869d623853c5e2571339aaf68', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK.png?width=216&crop=smart&format=pjpg&auto=webp&s=16616f94f32f9010d167b54ce13ebe4675eba41c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK.png?width=320&crop=smart&format=pjpg&auto=webp&s=4e647486839a2f350635fd67bff5a4e3179ef559', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK.png?width=640&crop=smart&format=pjpg&auto=webp&s=3903e0db05ef688b23e35c82003d8595e13e37aa', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK.png?width=960&crop=smart&format=pjpg&auto=webp&s=3ce4dbf030a882bddeb8477baab03d0a3b71194e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=64f032bfc6f90b7fc9b5090bc00f5b3f97f38827', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZGV1a2MzNnV3MWdlMdYxQgCKZWDBA-MlWJcsRk0T5PoEw1O-NKKj8LSMK-SK.png?format=pjpg&auto=webp&s=248a937570c04839c6d62fbb149abb509340f6fc', 'width': 1280}, 'variants': {}}]}
GPU advice for running models locally
3
As part of a grant, I recently got allocated about $1500 USD to buy GPUs (which I understand is not a lot, but grant-wise this was the most I could manage). I want to run LLMs locally, and perhaps even the 32B or 70B versions of the DeepSeek R1 model. I was wondering how I could get the most out of my money. I know both a GPU's memory and its memory bandwidth / number of cores matter for the token rate. I am new at this, so it might sound dumb, but in theory can I combine two 4070 Ti Supers to get 32 GB of VRAM (which might be low memory, but can fit models with higher param counts, right)? How does the memory bandwidth work in that case, given these are two separate GPUs? I know I can buy a Mac Mini with about 24 GB of unified memory, but I do not think my grant would cover a whole computer (given how it is worded). Would really appreciate any advice.
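On the two-GPU question: stacking cards doesn't merge them into one 32 GB device, but llama.cpp-style runtimes can split a model's layers across both so their VRAM adds up; each token then flows through the cards in sequence, so bandwidth doesn't sum and throughput stays near a single card's. A minimal sketch with llama-cpp-python (the model path and split ratio are placeholders):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-32b-q4_k_m.gguf",  # placeholder GGUF path
    n_gpu_layers=-1,          # offload all layers to GPU
    tensor_split=[0.5, 0.5],  # proportion of the model placed on each device
)

print(llm("Q: What is 2+2? A:", max_tokens=8)["choices"][0]["text"])
```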
2025-01-30T03:45:35
https://www.reddit.com/r/LocalLLaMA/comments/1iddjdh/gpu_advice_for_running_models_locally/
san_atlanta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddjdh
false
null
t3_1iddjdh
/r/LocalLLaMA/comments/1iddjdh/gpu_advice_for_running_models_locally/
false
false
self
3
null
DeepSeek's chatbot achieves 17% accuracy
1
[removed]
2025-01-30T03:46:39
https://www.reddit.com/r/LocalLLaMA/comments/1iddk6d/deepseeks_chatbot_achieves_17_accuracy/
NoSushiNoLife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddk6d
false
null
t3_1iddk6d
/r/LocalLLaMA/comments/1iddk6d/deepseeks_chatbot_achieves_17_accuracy/
false
false
self
1
{'enabled': False, 'images': [{'id': '09XmLkXSihj-59XEY1_w1SEvElJ7E2kzjObwXxfqB6E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0ezFw7GHDIal5D7eMrH1cGohstfZofM9CEeb7lxmpfY.jpg?width=108&crop=smart&auto=webp&s=7b581ebabd51968ebe1012180e33eda100822465', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0ezFw7GHDIal5D7eMrH1cGohstfZofM9CEeb7lxmpfY.jpg?width=216&crop=smart&auto=webp&s=a772ecdbe87ac4180a6aa902fcd6d8f256570e48', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0ezFw7GHDIal5D7eMrH1cGohstfZofM9CEeb7lxmpfY.jpg?width=320&crop=smart&auto=webp&s=3e6c407f59314368fffa81f7b49f2c26f598c859', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0ezFw7GHDIal5D7eMrH1cGohstfZofM9CEeb7lxmpfY.jpg?width=640&crop=smart&auto=webp&s=263a825a4799d33a0d9563f3a807c356f1c34b7d', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/0ezFw7GHDIal5D7eMrH1cGohstfZofM9CEeb7lxmpfY.jpg?width=960&crop=smart&auto=webp&s=de2464b1e760e9441f0b8ad6b3e634544aefdcdb', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/0ezFw7GHDIal5D7eMrH1cGohstfZofM9CEeb7lxmpfY.jpg?width=1080&crop=smart&auto=webp&s=6b743a5f77d7e6376c1cec735ce5ce115382122d', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/0ezFw7GHDIal5D7eMrH1cGohstfZofM9CEeb7lxmpfY.jpg?auto=webp&s=5ea01d8b97cece130770acf92aac1278e18eaa79', 'width': 1920}, 'variants': {}}]}
What is the best <1.5B models?
3
Is there any SLM under 1.5B that's kinda useful? Or can we train the likes of SmolLM2 to be useful on specific tasks? Any guidance is appreciated.
2025-01-30T03:48:09
https://www.reddit.com/r/LocalLLaMA/comments/1iddlcc/what_is_the_best_15b_models/
mukhtharcm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddlcc
false
null
t3_1iddlcc
/r/LocalLLaMA/comments/1iddlcc/what_is_the_best_15b_models/
false
false
self
3
null
Serving deepseek r1 ZERO
1
There's been a lot of excitement around DeepSeek R1, obviously, but I was wondering if anyone has had success running DeepSeek R1-Zero? I don't think there are as many quantizations or distillations of R1-Zero out there. Is my only option to rent an 8xA100 cluster?
2025-01-30T03:48:59
https://www.reddit.com/r/LocalLLaMA/comments/1iddlzp/serving_deepseek_r1_zero/
driveawayfromall
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddlzp
false
null
t3_1iddlzp
/r/LocalLLaMA/comments/1iddlzp/serving_deepseek_r1_zero/
false
false
self
1
null
4~5 tok/sec R1 671B locally on 2TB "VRAM" (48GB/s) for $1500 USD hardware?
1
[removed]
2025-01-30T03:49:57
https://www.reddit.com/r/LocalLLaMA/comments/1iddmpr/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
VoidAlchemy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddmpr
false
null
t3_1iddmpr
/r/LocalLLaMA/comments/1iddmpr/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
false
false
self
1
null
Finally got my build together.
47
Repurposed my old gaming PC into a dedicated self-hosted machine: 3900X with 32GB and a 3080 10GB. Cable management is as good as it gets in this cheap 4U case. The PSU is a little undersized, but from experience it's fine, and there's a 750W on the way. The end goal is self-hosted home assistant/automation with voice control via Home Assistant.
2025-01-30T03:51:02
https://i.redd.it/xg58gynyz1ge1.jpeg
guska
i.redd.it
1970-01-01T00:00:00
0
{}
1iddnjb
false
null
t3_1iddnjb
/r/LocalLLaMA/comments/1iddnjb/finally_got_my_build_together/
false
false
https://a.thumbs.redditm…e-CD3hAquxV0.jpg
47
{'enabled': True, 'images': [{'id': 'xeRkqDUoXVApqGQIg7Hj-xrulNBY1Gg9hwDQ5yzU74Y', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/xg58gynyz1ge1.jpeg?width=108&crop=smart&auto=webp&s=b4fa7ac91eb60ede7fc054138b11aa5cf46203ff', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/xg58gynyz1ge1.jpeg?width=216&crop=smart&auto=webp&s=995207877cac1712df6c4188eacab32fd35b0e99', 'width': 216}, {'height': 384, 'url': 'https://preview.redd.it/xg58gynyz1ge1.jpeg?width=320&crop=smart&auto=webp&s=c776849d656fdf77b5606430119a1c561f46e61c', 'width': 320}, {'height': 768, 'url': 'https://preview.redd.it/xg58gynyz1ge1.jpeg?width=640&crop=smart&auto=webp&s=f57634230eeb79f5ec4435c9c51f17ffc94f1450', 'width': 640}, {'height': 1152, 'url': 'https://preview.redd.it/xg58gynyz1ge1.jpeg?width=960&crop=smart&auto=webp&s=cff042e15b2b879053f9eee248105a1a9931ef95', 'width': 960}, {'height': 1296, 'url': 'https://preview.redd.it/xg58gynyz1ge1.jpeg?width=1080&crop=smart&auto=webp&s=053cafffe94beed443d31a8408aa2f6e4b93642e', 'width': 1080}], 'source': {'height': 3600, 'url': 'https://preview.redd.it/xg58gynyz1ge1.jpeg?auto=webp&s=e4729d19eabd62f78f8f6525ba99acfcbebe86bc', 'width': 3000}, 'variants': {}}]}
Deepseek App - How many parameters is the model?
1
So, it looks like SambaNova is going to be removing free access to Llama 3.1 Instruct 405B soon, and with the release of DeepSeek R1 and the wide array of models they have released, it makes me wonder how many parameters the model in the app is using. I can't find a clear answer, albeit I didn't look for TOO long. SambaNova was clearly flexing their tech by offering Llama 3.1 Instruct 405B for free at over 100 tokens/second, a marketing ploy. Makes sense, because offering a model that big for free would take serious resources. Resources I'm not sure DeepSeek has, in spite of their impressive model and hedge-fund daddies. OR maybe I'm wrong, and they want to throw some weight around and put the big 671B model out for free for the whole world to see in the app. I don't think they want to burn cash like that... but maybe I'm wrong... Anybody have any insight into how many parameters the models on the DeepSeek app have, the ones available for public use in their free offering?
2025-01-30T03:51:12
https://www.reddit.com/r/LocalLLaMA/comments/1iddnn3/deepseek_app_how_many_parameters_is_the_model/
db_scott
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddnn3
false
null
t3_1iddnn3
/r/LocalLLaMA/comments/1iddnn3/deepseek_app_how_many_parameters_is_the_model/
false
false
self
1
null
No need to rephrase the last message for multi-turn questions in RAG. Arch-Function LLM updated to extract information and context from multi-turn chat scenarios
3
Following this post from [a few weeks ago](https://www.reddit.com/r/LocalLLaMA/comments/1fi1kex/multi_turn_conversation_and_rag/): when you do RAG on the last posted message, you might need to recontextualize/rewrite it. For example: * Q: When was Jesus born? * A: A long time ago! * Q: What about his mother? Here `What about his mother?` has missing references. The problem is actually more complex than it seems, because the reference is not always in the latest message. For example: * Q: Who is Orano's boss? * A: It's Philippe Knoche * Q: Where did he go to school? * A: Polytechnique and Ecole des Mines We just updated Arch-Function LLM so that it can extract information and context in multi-turn scenarios, letting you build more accurate RAG pipelines without having to do the crufty work of improving RAG accuracy and performance yourself. [https://docs.archgw.com/build_with_arch/multi_turn.html](https://docs.archgw.com/build_with_arch/multi_turn.html)
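For anyone who wants the DIY fallback instead, the usual approach is to have a model rewrite the last turn into a standalone question before retrieval; a minimal sketch (the prompt wording is an assumption, not Arch-Function's actual implementation):

```python
REWRITE_PROMPT = """Rewrite the user's last message as a single self-contained
question, resolving all pronouns and references to earlier turns.
Output only the rewritten question.

Conversation:
{history}
Last message: {message}"""

def recontextualize(history: list[str], message: str, llm) -> str:
    # llm is any callable mapping a prompt string to a completion string.
    prompt = REWRITE_PROMPT.format(history="\n".join(history), message=message)
    return llm(prompt).strip()

# recontextualize(["Q: Who is Orano's boss?", "A: It's Philippe Knoche"],
#                 "Where did he go to school?", my_llm)
# -> "Where did Philippe Knoche go to school?"
```

The rewritten question is what you embed and retrieve against, which handles both examples above without any changes to the retriever itself.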
2025-01-30T03:53:43
https://www.reddit.com/r/LocalLLaMA/comments/1iddpls/no_need_to_rephrase_the_last_message_for/
AdditionalWeb107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddpls
false
null
t3_1iddpls
/r/LocalLLaMA/comments/1iddpls/no_need_to_rephrase_the_last_message_for/
false
false
self
3
null
4~5 tok/sec R1 671B locally on 2TB "VRAM" (48GB/s) for $1500 USD hardware?
1
[removed]
2025-01-30T03:54:41
[deleted]
1970-01-01T00:00:00
0
{}
1iddqb8
false
null
t3_1iddqb8
/r/LocalLLaMA/comments/1iddqb8/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
false
false
default
1
null
4~5 tok/sec R1 671B locally on 2TB "VRAM" (48GB/s) for $1500 USD hardware?
1
[removed]
2025-01-30T03:57:39
https://www.reddit.com/r/LocalLLaMA/comments/1iddshb/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
VoidAlchemy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddshb
false
null
t3_1iddshb
/r/LocalLLaMA/comments/1iddshb/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
false
false
nsfw
1
null
4~5 tok/sec R1 671B locally on 2TB "VRAM" (48GB/s) for $1500 USD hardware?
1
[removed]
2025-01-30T04:01:38
https://www.reddit.com/r/LocalLLaMA/comments/1iddvja/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
VoidAlchemy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddvja
false
null
t3_1iddvja
/r/LocalLLaMA/comments/1iddvja/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
false
false
self
1
null
4~5 tok/sec R1 671B locally on 2TB "VRAM" (48GB/s) for $1500 USD hardware?
1
[removed]
2025-01-30T04:05:13
https://www.reddit.com/r/LocalLLaMA/comments/1iddy6u/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
VoidAlchemy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddy6u
false
null
t3_1iddy6u
/r/LocalLLaMA/comments/1iddy6u/45_toksec_r1_671b_locally_on_2tb_vram_48gbs_for/
false
false
nsfw
1
null
Setting up local LLAMA and adding json files to it
1
Hey there. This has been asked a few times, but the answer seems to involve using Python; I was hoping there was an easier way, like how ChatGPT allows you to upload files to a project. So I'm just checking if there's an easier way to do this. Right now I know how to use Ollama and install Llama on my computer. I was hoping there was a way to upload my JSON files of data (product information for my business) to it and have the AI answer questions/give me recommendations, etc. If there is a plugin I can download, that would be great too, but ideally a GUI interface will work best.
2025-01-30T04:07:32
https://www.reddit.com/r/LocalLLaMA/comments/1iddzxh/setting_up_local_llama_and_adding_json_files_to_it/
skilg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iddzxh
false
null
t3_1iddzxh
/r/LocalLLaMA/comments/1iddzxh/setting_up_local_llama_and_adding_json_files_to_it/
false
false
self
1
null
How crazy is this idea?
31
2025-01-30T04:11:45
https://www.reddit.com/gallery/1ide31d
Buddyboy142
reddit.com
1970-01-01T00:00:00
0
{}
1ide31d
false
null
t3_1ide31d
/r/LocalLLaMA/comments/1ide31d/how_crazy_is_this_idea/
false
false
https://b.thumbs.redditm…awZGZoX1OIRQ.jpg
31
null
Are there any other programs like Backyard AI?
2
Title.
2025-01-30T04:13:04
https://www.reddit.com/r/LocalLLaMA/comments/1ide40x/are_there_any_other_programs_like_backyard_ai/
PangurBanTheCat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ide40x
false
null
t3_1ide40x
/r/LocalLLaMA/comments/1ide40x/are_there_any_other_programs_like_backyard_ai/
false
false
self
2
null
Asked DeepSeek to write a fan fiction featuring itself and GPT...
1
[removed]
2025-01-30T04:21:21
https://www.reddit.com/r/LocalLLaMA/comments/1idea1m/asked_deepseek_to_write_a_fan_fiction_featuring/
According-Term9314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1idea1m
false
null
t3_1idea1m
/r/LocalLLaMA/comments/1idea1m/asked_deepseek_to_write_a_fan_fiction_featuring/
false
false
self
1
null