Dataset schema (column name, dtype, and observed min/max from the dataset viewer):

| column | dtype | min | max |
|:-|:-|:-|:-|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Lack of community communication
1
[removed]
2025-01-07T15:44:13
https://www.reddit.com/r/LocalLLaMA/comments/1hvu8re/lack_of_community_communication/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvu8re
false
null
t3_1hvu8re
/r/LocalLLaMA/comments/1hvu8re/lack_of_community_communication/
false
false
self
1
null
My Prediction: This will be the AI year. Hardware is coming out on all fronts, but remember this is just V1.
0
Title + hardware will only keep getting better. Hopefully models will keep getting better as well. Remember not to jump the gun on stuff, and do your research.
2025-01-07T15:48:54
https://www.reddit.com/r/LocalLLaMA/comments/1hvucnd/my_prediction_this_will_be_the_ai_year_hardware/
Jesus359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvucnd
false
null
t3_1hvucnd
/r/LocalLLaMA/comments/1hvucnd/my_prediction_this_will_be_the_ai_year_hardware/
false
false
self
0
null
Video transcription and facial recognition
1
[removed]
2025-01-07T15:52:39
https://www.reddit.com/r/LocalLLaMA/comments/1hvufps/video_transcription_and_facial_recognition/
rawrchaq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvufps
false
null
t3_1hvufps
/r/LocalLLaMA/comments/1hvufps/video_transcription_and_facial_recognition/
false
false
self
1
null
Agentic flow vs RAG
1
[removed]
2025-01-07T15:55:45
https://www.reddit.com/r/LocalLLaMA/comments/1hvui9u/agentic_flow_vs_rag/
Ok_Requirement3346
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvui9u
false
null
t3_1hvui9u
/r/LocalLLaMA/comments/1hvui9u/agentic_flow_vs_rag/
false
false
self
1
null
Should I buy a 5090 or an M4 Max?
0
Has anyone done much comparison between a maxed-out M4 Max and a 4090/5090? I could easily trade in my existing M3 Max and get a maxed-out M4 Max for the same price as buying a 5090. Could I get comparable speeds for AI on both? I'd love to run DeepSeek, but I don't think either solution would give me that.
2025-01-07T16:06:41
https://www.reddit.com/r/LocalLLaMA/comments/1hvurky/should_i_buy_a_5090_or_an_m4_max/
PositiveEnergyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvurky
false
null
t3_1hvurky
/r/LocalLLaMA/comments/1hvurky/should_i_buy_a_5090_or_an_m4_max/
false
false
self
0
null
Gpu hybrid mode iGPU + dGPU to save 500 to 700mb on dGPU
1
[removed]
2025-01-07T16:10:22
https://www.reddit.com/r/LocalLLaMA/comments/1hvuuny/gpu_hybrid_mode_igpu_dgpu_to_save_500_to_700mb_on/
DeathRabit86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvuuny
false
null
t3_1hvuuny
/r/LocalLLaMA/comments/1hvuuny/gpu_hybrid_mode_igpu_dgpu_to_save_500_to_700mb_on/
false
false
self
1
null
Introducing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset
1
[removed]
2025-01-07T16:15:03
https://www.reddit.com/r/LocalLLaMA/comments/1hvuyo6/introducing_longtalkcot_v01_a_very_long/
Financial_Counter199
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvuyo6
false
null
t3_1hvuyo6
/r/LocalLLaMA/comments/1hvuyo6/introducing_longtalkcot_v01_a_very_long/
false
false
self
1
{'enabled': False, 'images': [{'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200}, 'variants': {}}]}
Phi-4 is insanely good at rephrasing the last message for multi-turn rag questions
112
Following [this post from a few weeks ago](https://www.reddit.com/r/LocalLLaMA/comments/1fi1kex/multi_turn_conversation_and_rag/): when you do RAG on the last posted message, you might need to recontextualize it. For example:

- Q: When was Jesus born?
- A: A long time ago!
- Q: What about his mother?

Here `What about his mother ?` has missing references. This problem is more complex than it seems, because the reference is not always in the latest message. For example:

- Q: Who is Orano's boss?
- A: It's Philippe Knoche
- Q: Where did he go to school?
- A: Polytechnique and Ecole des Mines

Here we can have multiple tricky questions that require good reasoning to be correctly rephrased:

- `What about his wife ?` implies combining Philippe Knoche and the school question to rephrase it
- `Where is the HQ ?` implies the company HQ, not the two schools' "HQs"

Long story short, I tried multiple models (Qwen 2.5 7B and 14B, Llama 3.1, Mistral models); while Qwen is really good across the whole spectrum, it's not good enough at this, and the [phi-4 leaked model](https://huggingface.co/matteogeniaccio/phi-4) is FAR BEYOND every other model tested so far.
2025-01-07T16:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1hvv39z/phi4_in_insanely_good_at_rephrasing_the_last/
LinkSea8324
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvv39z
false
null
t3_1hvv39z
/r/LocalLLaMA/comments/1hvv39z/phi4_in_insanely_good_at_rephrasing_the_last/
false
false
self
112
{'enabled': False, 'images': [{'id': 'LVOG-Ma4sVt7-GCtsGzHFYEd3xPduTj9AavI9bXwmV4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=108&crop=smart&auto=webp&s=e237b41d9f130ec3ceb0f930a826cfcb0ca9b96e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=216&crop=smart&auto=webp&s=a72d3d812c1d5e0696b24e1a1d6b6ca62c984164', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=320&crop=smart&auto=webp&s=9890b9af6c8c143a3afab629e8c620f6486c05d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=640&crop=smart&auto=webp&s=1938c52ca744654d08f36b6e5ef4675f9783cee1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=960&crop=smart&auto=webp&s=011d4eb1e5d88639566be50330522d5039c98d6a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=1080&crop=smart&auto=webp&s=daf49c729822a9e279ab3b2c38f2f10f8688a836', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?auto=webp&s=1a3a80dca5f60cc754b1e04f863a64ef4ff36ccd', 'width': 1200}, 'variants': {}}]}
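The rephrasing step this post describes is usually called standalone-question rewriting. Below is a minimal sketch of how it might look against any OpenAI-compatible local server (llama.cpp server, vLLM, Ollama, ...); the base URL, model name, and prompt wording are my assumptions, not taken from the post:

```python
# Sketch of multi-turn query rephrasing for RAG, assuming an
# OpenAI-compatible local endpoint. URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

REPHRASE_PROMPT = """Given the conversation history, rewrite the last user \
question so it is fully self-contained, resolving all pronouns and implicit \
references. Output only the rewritten question.

History:
{history}

Last question: {question}"""

def rephrase(history: list[tuple[str, str]], question: str) -> str:
    formatted = "\n".join(f"{role}: {text}" for role, text in history)
    resp = client.chat.completions.create(
        model="phi-4",  # placeholder model name
        messages=[{"role": "user",
                   "content": REPHRASE_PROMPT.format(history=formatted,
                                                     question=question)}],
        temperature=0.0,
    )
    return resp.choices[0].message.content.strip()

history = [("Q", "Who is Orano's boss?"),
           ("A", "It's Philippe Knoche"),
           ("Q", "Where did he go to school?"),
           ("A", "Polytechnique and Ecole des Mines")]
print(rephrase(history, "What about his wife?"))
# A good model should produce something like:
# "Where did Philippe Knoche's wife go to school?"
```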
Run 2 GPU’s on the same computer
2
I’ve got an RTX 3090 and a 3090TI. How can I run them both on the same computer so I can combine their VRAM in order to run a large LLM? Would I need a special motherboard or is there another viable method?
2025-01-07T16:21:31
https://www.reddit.com/r/LocalLLaMA/comments/1hvv476/run_2_gpus_on_the_same_computer/
basemaly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvv476
false
null
t3_1hvv476
/r/LocalLLaMA/comments/1hvv476/run_2_gpus_on_the_same_computer/
false
false
self
2
null
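On the software side, no special motherboard is needed for inference: most runtimes can shard a model's layers across both cards over ordinary PCIe. A rough sketch with Hugging Face transformers/accelerate (the model choice and memory caps are illustrative assumptions):

```python
# Sketch: shard one model across two local GPUs (e.g. a 3090 + 3090 Ti).
# Requires `pip install transformers accelerate`; model is an example pick.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-70B-Instruct"  # example (gated) model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                    # spread layers over visible GPUs
    max_memory={0: "23GiB", 1: "23GiB"},  # leave headroom on each 24GB card
)
inputs = tok("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0]))
```

llama.cpp and vLLM have equivalent knobs (tensor/layer splits), so the two cards' combined 48GB is usable without NVLink, just slower across the PCIe hop.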
Reviewing Post-Training Techniques from Recent Open LLMs
11
2025-01-07T16:23:30
https://brianfitzgerald.xyz/dpo-review
sonderemawe
brianfitzgerald.xyz
1970-01-01T00:00:00
0
{}
1hvv5xa
false
null
t3_1hvv5xa
/r/LocalLLaMA/comments/1hvv5xa/reviewing_posttraining_techniques_from_recent/
false
false
default
11
null
Is CrewAI About to Become the Go-To Framework for Enterprise AI Agents?
0
So I just came across NVIDIA's announcement about teaming up with CrewAI, and it's got me thinking: are we finally seeing AI agent frameworks like CrewAI become *the* standard for enterprise-scale production use?

The partnership seems focused on making agentic workflows more scalable and easier to deploy, especially for complex enterprise environments. With NVIDIA's backing, CrewAI could suddenly have the edge over frameworks like LangChain and AutoGPT, at least when it comes to reliability and tooling.

But I'm curious, does this solve the big hurdles people have faced with AI agents so far? Stuff like:

* Making them *actually* reliable in production.
* Handling multi-agent workflows without spiraling costs or complexity.
* Avoiding the constant "glue code" mess that's been needed to stitch frameworks together.

Are people already moving towards CrewAI, or are we still in the "let's test this and see" phase with all these frameworks?
2025-01-07T16:29:00
https://www.reddit.com/r/LocalLLaMA/comments/1hvvaj9/is_crewai_about_to_become_the_goto_framework_for/
Fit_Jelly_5346
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvvaj9
false
null
t3_1hvvaj9
/r/LocalLLaMA/comments/1hvvaj9/is_crewai_about_to_become_the_goto_framework_for/
false
false
self
0
null
Will There Be Movement on AI Legislation in the United States?
0
2025-01-07T16:38:26
https://sourcingjournal.com/topics/technology/artificial-intelligence-donald-trump-elon-musk-state-legislation-ftc-deregulation-david-sacks-1234729588/
tensorsgo
sourcingjournal.com
1970-01-01T00:00:00
0
{}
1hvvihb
false
null
t3_1hvvihb
/r/LocalLLaMA/comments/1hvvihb/will_there_be_movement_on_ai_legislation_in_the/
false
false
https://b.thumbs.redditm…86KSzdvKiKVw.jpg
0
{'enabled': False, 'images': [{'id': 'MmoSXA8MwtlnTflW2m8xA-xuFS_T113KL2XZorp8d0c', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/18tPNqveU4yPJl7feEFCdk0KD2WwMzckHS5M-cixSxM.jpg?width=108&crop=smart&auto=webp&s=13a63949f2532e9a92b04b422fa277d023ac145b', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/18tPNqveU4yPJl7feEFCdk0KD2WwMzckHS5M-cixSxM.jpg?width=216&crop=smart&auto=webp&s=93c2369cb459a8ad9c5a1b448ff1b2466059d8f4', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/18tPNqveU4yPJl7feEFCdk0KD2WwMzckHS5M-cixSxM.jpg?width=320&crop=smart&auto=webp&s=830c07f013a0cef13c16a60db44b1a443a860a71', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/18tPNqveU4yPJl7feEFCdk0KD2WwMzckHS5M-cixSxM.jpg?width=640&crop=smart&auto=webp&s=1a9d625bf66dfadc70d48b7d1ce52fa9a7e8b90c', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/18tPNqveU4yPJl7feEFCdk0KD2WwMzckHS5M-cixSxM.jpg?width=960&crop=smart&auto=webp&s=c79a255f3d2860dfb9fc231e5ab40fe137c40ab9', 'width': 960}], 'source': {'height': 682, 'url': 'https://external-preview.redd.it/18tPNqveU4yPJl7feEFCdk0KD2WwMzckHS5M-cixSxM.jpg?auto=webp&s=776498825ce4aa6671d9e9290cc6b088ed342d18', 'width': 1024}, 'variants': {}}]}
best local browser agent for Firefox?
9
Is there an AI agent similar to Claude Computer-Use for Firefox? I looked everywhere but haven't seen any good open-source options. Something like [https://github.com/normal-computing/fuji-web](https://github.com/normal-computing/fuji-web) that can work with and without vision models and be instructed mid browser use.
2025-01-07T16:42:43
https://www.reddit.com/r/LocalLLaMA/comments/1hvvm7e/best_local_browser_agent_for_firefox/
jeremiahn4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvvm7e
false
null
t3_1hvvm7e
/r/LocalLLaMA/comments/1hvvm7e/best_local_browser_agent_for_firefox/
false
false
self
9
{'enabled': False, 'images': [{'id': '4IDFfTTXciNbbObUtHJ8coqbE8hlvNm2nbPpFr3-1wo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Crdx_R1_defDjCUnu7qb3S6kppaAg5Ez2AYFZLLjjXo.jpg?width=108&crop=smart&auto=webp&s=06566f1ceb4193629ba40f7e2b9b1b685d1cc3a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Crdx_R1_defDjCUnu7qb3S6kppaAg5Ez2AYFZLLjjXo.jpg?width=216&crop=smart&auto=webp&s=c97acd56d4c0ffca1227d56cab611b242884f905', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Crdx_R1_defDjCUnu7qb3S6kppaAg5Ez2AYFZLLjjXo.jpg?width=320&crop=smart&auto=webp&s=9d9613e477773af3ba432f65abd99e1527b1f903', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Crdx_R1_defDjCUnu7qb3S6kppaAg5Ez2AYFZLLjjXo.jpg?width=640&crop=smart&auto=webp&s=b9135a609b2c25fe644cdbae0d367b1235bedcc4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Crdx_R1_defDjCUnu7qb3S6kppaAg5Ez2AYFZLLjjXo.jpg?width=960&crop=smart&auto=webp&s=5bf00f896079df6d3e7b0318268df5c0dadccd01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Crdx_R1_defDjCUnu7qb3S6kppaAg5Ez2AYFZLLjjXo.jpg?width=1080&crop=smart&auto=webp&s=a881a2f3a1e8c4b7a60e356e9cd0e77202c0cb68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Crdx_R1_defDjCUnu7qb3S6kppaAg5Ez2AYFZLLjjXo.jpg?auto=webp&s=879f6fa0fbfa4d709a947512177776698458492d', 'width': 1200}, 'variants': {}}]}
is there any LLM that you can download that is completely uncensored?
9
any that have no guard rails on them?
2025-01-07T17:05:35
https://www.reddit.com/r/LocalLLaMA/comments/1hvw65m/is_there_any_llm_that_you_can_download_that_is/
Ok_Calendar_851
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvw65m
false
null
t3_1hvw65m
/r/LocalLLaMA/comments/1hvw65m/is_there_any_llm_that_you_can_download_that_is/
false
false
self
9
null
What Could Be the HackerRank or LeetCode Equivalent for Prompt Engineers?
3
Lately, I've noticed a significant increase in both courses and job openings for prompt engineers. However, assessing their skills can be challenging. Many job listings require prompt engineers to provide proof of their work, but those employed in private organizations often find it difficult to share proprietary projects. What platform could be developed to effectively showcase the abilities of prompt engineers?
2025-01-07T17:05:36
https://www.reddit.com/r/LocalLLaMA/comments/1hvw65x/what_could_be_the_hackerrank_or_leetcode/
Sky-Is-Kind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvw65x
false
null
t3_1hvw65x
/r/LocalLLaMA/comments/1hvw65x/what_could_be_the_hackerrank_or_leetcode/
false
false
self
3
null
LLM evals book chapter
0
2025-01-07T17:09:41
https://open.substack.com/pub/tamingllm/p/chapter-1-the-evals-gap?utm_campaign=post&utm_medium=web
HighlanderNJ
open.substack.com
1970-01-01T00:00:00
0
{}
1hvw9ln
false
null
t3_1hvw9ln
/r/LocalLLaMA/comments/1hvw9ln/llm_evals_book_chapter/
false
false
default
0
null
Are you gonna wait for Digits or get the 5090?
94
Digits seems on paper like it’s better bang for the buck, but there are a lot more unknown unknowns about it. And it’s releasing later. Thoughts?
2025-01-07T17:10:46
https://www.reddit.com/r/LocalLLaMA/comments/1hvwaj9/are_you_gonna_wait_for_digits_or_get_the_5090/
lxe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvwaj9
false
null
t3_1hvwaj9
/r/LocalLLaMA/comments/1hvwaj9/are_you_gonna_wait_for_digits_or_get_the_5090/
false
false
self
94
null
How to Extract Data from Telegram for Sentiment and Graph Analysis? Feasibility, Tools, and Requirements?
1
[removed]
2025-01-07T17:13:11
https://www.reddit.com/r/LocalLLaMA/comments/1hvwclg/how_to_extract_data_from_telegram_for_sentiment/
DataaWolff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvwclg
false
null
t3_1hvwclg
/r/LocalLLaMA/comments/1hvwclg/how_to_extract_data_from_telegram_for_sentiment/
false
false
self
1
null
Why are there no open source or common Post-Training Quantization techniques for embedding models?
4
I have noticed that for LLMs it is absolutely standard not to use the original weights (fp32). For embedding models, however, it seems not to be common at all. Most models at the top of the MTEB leaderboard are bigger than 8B parameters, which sometimes means more than 20 GB. The only PTQ route for embedding models I found is ONNX, but it seems to be uncommon. I get that there might be less demand for these models than for LLMs, but still: why are embedding models so often deployed in their original precision when it's known that going from 32 to 16 and even 8 bits will probably only make a minor difference?
2025-01-07T17:14:04
https://www.reddit.com/r/LocalLLaMA/comments/1hvwdbv/why_are_there_no_open_source_or_common/
StayStonk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvwdbv
false
null
t3_1hvwdbv
/r/LocalLLaMA/comments/1hvwdbv/why_are_there_no_open_source_or_common/
false
false
self
4
null
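For what it's worth, halving an embedding model's precision is a one-liner in PyTorch, and the effect on the vectors can be measured directly. A rough sketch with sentence-transformers (the model name is an example choice, not from the post):

```python
# Sketch: compare fp32 vs fp16 embeddings from the same model.
# `pip install sentence-transformers`; the model name is an example.
import torch
from sentence_transformers import SentenceTransformer

texts = ["post-training quantization", "embedding models", "the weather today"]

model = SentenceTransformer("BAAI/bge-small-en-v1.5", device="cuda")
emb32 = model.encode(texts, convert_to_tensor=True, normalize_embeddings=True)

model.half()  # cast all weights to fp16 in place
emb16 = model.encode(texts, convert_to_tensor=True, normalize_embeddings=True)

# Cosine similarity between each text's fp32 and fp16 embedding;
# values near 1.0 mean the cast barely moved the vectors.
cos = torch.nn.functional.cosine_similarity(emb32.float(), emb16.float(), dim=-1)
print(cos)
```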
Exolab: NVIDIA's Digits Outperforms Apple's M4 Chips in AI Inference
378
2025-01-07T17:35:44
https://x.com/alexocheema/status/1876676954549620961?s=46
nderstand2grow
x.com
1970-01-01T00:00:00
0
{}
1hvwwsq
false
null
t3_1hvwwsq
/r/LocalLLaMA/comments/1hvwwsq/exolab_nvidias_digits_outperforms_apples_m4_chips/
false
false
https://b.thumbs.redditm…8wYvQkmgCggI.jpg
378
{'enabled': False, 'images': [{'id': 'szMUn4oA0LBpVjZuYRdLJvR4-rCu4_3VO8b7RiN9RYE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=108&crop=smart&auto=webp&s=aa3d0f32059934bdc8facdc90251ece46466d220', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=216&crop=smart&auto=webp&s=c7d7cff1346adb93d9ee0e0806b52148ec948b46', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=320&crop=smart&auto=webp&s=907fe8af91752d34d0e7a5a5bd128149064e0cc3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=640&crop=smart&auto=webp&s=4ffe16868d431044dc8975ae38ca0056c5252984', 'width': 640}], 'source': {'height': 680, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?auto=webp&s=c83dfe150e33e17d0c3afdfea334393c9c3ea1e0', 'width': 680}, 'variants': {}}]}
Best tts to use for hebrew
1
[removed]
2025-01-07T18:21:09
https://www.reddit.com/r/LocalLLaMA/comments/1hvy0xb/best_tts_to_use_for_hebrew/
Weak_Recognition6432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvy0xb
false
null
t3_1hvy0xb
/r/LocalLLaMA/comments/1hvy0xb/best_tts_to_use_for_hebrew/
false
false
self
1
null
[dataset] Almost all of Reddit's r/RoastMe
1
2025-01-07T18:25:31
https://huggingface.co/datasets/gus-gustavo/reddit_roastme
GoryRamsy
huggingface.co
1970-01-01T00:00:00
0
{}
1hvy4mo
false
null
t3_1hvy4mo
/r/LocalLLaMA/comments/1hvy4mo/dataset_almost_all_of_reddits_rroastme/
false
false
https://a.thumbs.redditm…mR_p98j0ssK8.jpg
1
{'enabled': False, 'images': [{'id': 'h5Sj8MBy4nP-akS8Rw8FGs5a1fZ2srB36laNFRKsgvc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JTA4-HTN47FCxnvgyDpgm8yMBs6wfpU6uZ3VBN4N-bg.jpg?width=108&crop=smart&auto=webp&s=f55621da53b233e3ed8291ba7ab87284e2a29089', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JTA4-HTN47FCxnvgyDpgm8yMBs6wfpU6uZ3VBN4N-bg.jpg?width=216&crop=smart&auto=webp&s=f816aef2da5f0e6d3eabc2608224743e956603ea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JTA4-HTN47FCxnvgyDpgm8yMBs6wfpU6uZ3VBN4N-bg.jpg?width=320&crop=smart&auto=webp&s=d86d7a84c0ef0639e8f292d98746c3bf02e9ed2e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JTA4-HTN47FCxnvgyDpgm8yMBs6wfpU6uZ3VBN4N-bg.jpg?width=640&crop=smart&auto=webp&s=4f9c8f40e9db44a64961b4dc19759f139ec93941', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JTA4-HTN47FCxnvgyDpgm8yMBs6wfpU6uZ3VBN4N-bg.jpg?width=960&crop=smart&auto=webp&s=1137d622b8143cbda1b7d86cc3f3b219dc96b5ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JTA4-HTN47FCxnvgyDpgm8yMBs6wfpU6uZ3VBN4N-bg.jpg?width=1080&crop=smart&auto=webp&s=56220d5f9085345dc1d6ab99404ae4bb3b301eb0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JTA4-HTN47FCxnvgyDpgm8yMBs6wfpU6uZ3VBN4N-bg.jpg?auto=webp&s=cf0255e1a872d8bf0b8c602b3803bc5ed21ade25', 'width': 1200}, 'variants': {}}]}
Can someone explain the real world TFLOPS difference of consumer vs enterprise GPUs?
1
[removed]
2025-01-07T18:45:51
https://www.reddit.com/r/LocalLLaMA/comments/1hvym84/can_someone_explain_the_real_world_tflops/
TaloSi_II
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvym84
false
null
t3_1hvym84
/r/LocalLLaMA/comments/1hvym84/can_someone_explain_the_real_world_tflops/
false
false
self
1
null
Best suggestion for anyone who wants to newly setup the llm rig? I never did it before
1
[removed]
2025-01-07T18:49:28
https://www.reddit.com/r/LocalLLaMA/comments/1hvypaj/best_suggestion_for_anyone_who_wants_to_newly/
FigPsychological3731
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvypaj
false
null
t3_1hvypaj
/r/LocalLLaMA/comments/1hvypaj/best_suggestion_for_anyone_who_wants_to_newly/
false
false
self
1
null
Hi Gamers! We are creating a game with LLM AI NPCs that can play with you, chat, be your friend, go on many adventures with you and remember everything. Our idea with the LLM AI is to help MMO players with loneliness. May I ask what you guys think about Generative Agents in games?
1
2025-01-07T18:53:37
https://edforson.substack.com/p/paper-review-generative-agents
AetherianChronicles
edforson.substack.com
1970-01-01T00:00:00
0
{}
1hvysvo
false
null
t3_1hvysvo
/r/LocalLLaMA/comments/1hvysvo/hi_gamers_we_are_creating_a_game_with_llm_ai_npcs/
false
false
https://b.thumbs.redditm…FdiZYbqR9TGM.jpg
1
{'enabled': False, 'images': [{'id': 'es7OD-5NqAhWKe6RoHfbd1v5GWBmIV4G0Ad4-nkQK30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TPlD6Z39ZTaDD0C_pWEwUeAgl3IJuOjWArk2p9Mbeho.jpg?width=108&crop=smart&auto=webp&s=5845299ae85fdb866bf62cfaa6c3e488f6f6f1cf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TPlD6Z39ZTaDD0C_pWEwUeAgl3IJuOjWArk2p9Mbeho.jpg?width=216&crop=smart&auto=webp&s=4f3cd1f680a3ac75a8ea26a4f01cda02e2dce5d5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TPlD6Z39ZTaDD0C_pWEwUeAgl3IJuOjWArk2p9Mbeho.jpg?width=320&crop=smart&auto=webp&s=d7c623165aff19ac9db8805d2e6583809029a471', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TPlD6Z39ZTaDD0C_pWEwUeAgl3IJuOjWArk2p9Mbeho.jpg?width=640&crop=smart&auto=webp&s=ee1915b83d91c1306e060486e9af2bcd52041d6e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TPlD6Z39ZTaDD0C_pWEwUeAgl3IJuOjWArk2p9Mbeho.jpg?width=960&crop=smart&auto=webp&s=1b9e3b9a12f1ffbcb547aabf7aa17df08835a7d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TPlD6Z39ZTaDD0C_pWEwUeAgl3IJuOjWArk2p9Mbeho.jpg?width=1080&crop=smart&auto=webp&s=595f590c90340b3333d03905bab995a3f0702c9b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TPlD6Z39ZTaDD0C_pWEwUeAgl3IJuOjWArk2p9Mbeho.jpg?auto=webp&s=371a4a0946d62cd33ee8e5d05ca2f4bd9ad3f0ea', 'width': 1200}, 'variants': {}}]}
Anybody used Swagger specs as tools on Function Calling?
2
I'm studying and spitballing, and it seems I could take my Swagger spec, maybe process it, and pass it as tools for Llama to use in function calling. The tools as you pass them are very similar anyway; I could even use a mix of scripting and Llama itself to convert the Swagger spec to proper tool formatting and then pass it to Llama. I was even thinking of asking Llama to generate shell and Python scripts for a task I give it, and then run those. "Open my downloads folder" would return a shell script with `cd Downloads && start .`
2025-01-07T18:59:08
https://www.reddit.com/r/LocalLLaMA/comments/1hvyxns/anybody_used_swagger_specs_as_tools_on_function/
Blender-Fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvyxns
false
null
t3_1hvyxns
/r/LocalLLaMA/comments/1hvyxns/anybody_used_swagger_specs_as_tools_on_function/
false
false
self
2
null
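Since OpenAPI already describes each operation's parameters as JSON Schema, the conversion to a function-calling tool spec is mostly reshaping, no LLM needed. A sketch of the idea (the output follows the common OpenAI-style tool format; the example operation is invented for illustration):

```python
# Sketch: turn one OpenAPI/Swagger operation into an OpenAI-style tool spec.
# The example operation is invented; real ones come from your swagger.json.
def operation_to_tool(path: str, method: str, op: dict) -> dict:
    params = {"type": "object", "properties": {}, "required": []}
    for p in op.get("parameters", []):
        params["properties"][p["name"]] = {
            "type": p.get("schema", {}).get("type", "string"),
            "description": p.get("description", ""),
        }
        if p.get("required"):
            params["required"].append(p["name"])
    return {
        "type": "function",
        "function": {
            "name": op.get("operationId", f"{method}_{path.strip('/')}"),
            "description": op.get("summary", ""),
            "parameters": params,
        },
    }

spec_op = {  # invented example operation
    "operationId": "getUserById",
    "summary": "Fetch a user record by id",
    "parameters": [{"name": "userId", "required": True,
                    "description": "Numeric user id",
                    "schema": {"type": "integer"}}],
}
print(operation_to_tool("/users/{userId}", "get", spec_op))
```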
Dual (or triple) 3090s still a good option?
6
Hey guys! Noob here... hope you all are well. I'm wanting to build a server that's more powerful than my M4 Max MacBook Pro (64GB RAM). I am contemplating returning my 64GB and upgrading to 128GB, but I'm already maxing out my GPU at about 8 tokens/s when I run Llama 3.3 70B q4_k_m. 64GB is enough for everything up to quantized 70B models, and I think if I run anything larger than this my Mac's GPU just won't be strong enough to be fast.

I'm thinking to build a server with the following specs (roughly):

- Mobo: Gigabyte MZ32-AR0 (this will allow me to upgrade to more GPUs in the future)
- RAM: 4x 32GB DDR4
- PSU: Corsair HX1500i or similar
- CPU: TBD
- GPU: 2 or 3x used RTX 3090
- Frame/case: [https://www.amazon.com/AAAwave-12GPU-Mining-Chassis-Cryptocurrency/dp/B08VDPYJPM?tag=eli5jk9b-20&geniuslink=true&th=1](https://www.amazon.com/AAAwave-12GPU-Mining-Chassis-Cryptocurrency/dp/B08VDPYJPM?tag=eli5jk9b-20&geniuslink=true&th=1)

Do you guys have any input on whether two 3090s are still a good option? From what I've seen, 4090s are not really that big of a step up for local LLMs, and 5090s are going to be impossible to get for a while (and super expensive). Also, any input on a better motherboard, and any CPU suggestions? I'm thinking to build this on a GPU rack for further expandability, but does anyone have better suggestions?

Any input at all is very much appreciated because I'm sure you guys are more knowledgeable than I am! Thanks a lot, guys!
2025-01-07T19:00:18
https://www.reddit.com/r/LocalLLaMA/comments/1hvyypv/dual_or_triple_3090s_still_a_good_option/
No_Switch5015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvyypv
false
null
t3_1hvyypv
/r/LocalLLaMA/comments/1hvyypv/dual_or_triple_3090s_still_a_good_option/
false
false
self
6
null
Did A100 40GB price come down? I see multiple listings for ~6000USD. Worth investing in A100 now or should go for H100?
0
Can someone give me a trustworthy source on where to check the current market price for actual A100 40GB and A100 80GB?
2025-01-07T19:04:38
https://www.reddit.com/r/LocalLLaMA/comments/1hvz2tn/did_a100_40gb_price_come_down_i_see_multiple/
kitkatmafia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvz2tn
false
null
t3_1hvz2tn
/r/LocalLLaMA/comments/1hvz2tn/did_a100_40gb_price_come_down_i_see_multiple/
false
false
self
0
null
distillKitPlus: Compute Efficient Knowledge Distillation for LLMs
29
Larger LLMs generalize better and faster; this is a great way to leverage that and transfer the best of a 70B model to a 7B model without breaking the bank or sacrificing performance. GitHub link: [https://github.com/agokrani/distillkitplus](https://github.com/agokrani/distillkitplus)
2025-01-07T19:34:50
https://www.reddit.com/r/LocalLLaMA/comments/1hvzswb/distillkitplus_compute_efficient_knowledge/
__XploR__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvzswb
false
null
t3_1hvzswb
/r/LocalLLaMA/comments/1hvzswb/distillkitplus_compute_efficient_knowledge/
false
false
self
29
{'enabled': False, 'images': [{'id': 'tdfcJ7iGXLzFWWl3fJ6iOVxHMMk-Pk6alqL5usLQ3VM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zs9GFrxHWvJ3Vt9EhbIuL7g3z8oRwRBhvTOEc6xiE9Y.jpg?width=108&crop=smart&auto=webp&s=f9f56899c1afcfaf9faa19125e3bb20fae831d2e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zs9GFrxHWvJ3Vt9EhbIuL7g3z8oRwRBhvTOEc6xiE9Y.jpg?width=216&crop=smart&auto=webp&s=4cf8d25b11a837647d1cdb825f0fca39bf2eb1da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zs9GFrxHWvJ3Vt9EhbIuL7g3z8oRwRBhvTOEc6xiE9Y.jpg?width=320&crop=smart&auto=webp&s=409faaaf524eba63b9feea387730c725a56da589', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zs9GFrxHWvJ3Vt9EhbIuL7g3z8oRwRBhvTOEc6xiE9Y.jpg?width=640&crop=smart&auto=webp&s=cc26fca9b3dbd811cca4a4008b8c2d190b4236bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zs9GFrxHWvJ3Vt9EhbIuL7g3z8oRwRBhvTOEc6xiE9Y.jpg?width=960&crop=smart&auto=webp&s=e3632f2b908daca8c4b84339a598039eaa0d9ac8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zs9GFrxHWvJ3Vt9EhbIuL7g3z8oRwRBhvTOEc6xiE9Y.jpg?width=1080&crop=smart&auto=webp&s=bf0089576a322f60296425bb6f4b32f69f95c483', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zs9GFrxHWvJ3Vt9EhbIuL7g3z8oRwRBhvTOEc6xiE9Y.jpg?auto=webp&s=61aee7f8bb0fba9eb960d89470d19b82085a2c1c', 'width': 1200}, 'variants': {}}]}
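The core of logit-based knowledge distillation is a temperature-scaled KL divergence pulling the student's token distribution toward the teacher's. A minimal PyTorch sketch of that classic loss (illustrative, not code taken from the linked repo):

```python
# Sketch: classic knowledge-distillation loss (Hinton-style), not taken
# from distillkitplus itself. Teacher/student logits share the vocab dim.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # Soften both distributions; F.kl_div(input=log-probs, target=probs)
    # computes KL(teacher || student). Scale by T^2 so gradient magnitudes
    # stay comparable as the temperature grows.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature**2

student = torch.randn(4, 32000)  # (batch*seq, vocab) example shapes
teacher = torch.randn(4, 32000)
print(distillation_loss(student, teacher))
```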
M1 MacBook Air 8gb, whats the best local LLM I can run?
1
[removed]
2025-01-07T19:42:06
https://www.reddit.com/r/LocalLLaMA/comments/1hvzzdf/m1_macbook_air_8gb_whats_the_best_local_llm_i_can/
MobileEnvironment840
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hvzzdf
false
null
t3_1hvzzdf
/r/LocalLLaMA/comments/1hvzzdf/m1_macbook_air_8gb_whats_the_best_local_llm_i_can/
false
false
self
1
null
Mac Mini M4 for DeepSeek and Qwen?
2
Just wondering because I've read many mixed opinions about going the Mac way for Local LLM compared to assembling a Windows PC. I currently have a Mac Mini and a Macbook from the mid 2010s, they're both great for my everyday usage, but I really want to get into local models and stop paying for ChatGPT Plus. I'm not interested in image generation, and I don't need everything to be as fast as possible, but I do a lot of writing and tend to stick to GPT o1 for its good understanding of the specific stuff I ask. I also would love if I could train a model on specific books and have it reply according to that knowledge. I understand DeepSeek and Qwen are similar in performance to o1, so do you think I would be able to run them with the M4 Mac, either with 16 or 24gb of RAM? Thank you :)
2025-01-07T19:51:53
https://www.reddit.com/r/LocalLLaMA/comments/1hw07zl/mac_mini_m4_for_deepseek_and_qwen/
skypeaks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw07zl
false
null
t3_1hw07zl
/r/LocalLLaMA/comments/1hw07zl/mac_mini_m4_for_deepseek_and_qwen/
false
false
self
2
null
Rumors about 01.AI laying off its entire pre-training algorithm and Infra teams, including its team in Silicon Valley
54
2025-01-07T20:04:13
https://technode.com/2025/01/07/01-ai-refutes-rumors-of-selling-teams-to-alibaba/
cpldcpu
technode.com
1970-01-01T00:00:00
0
{}
1hw0itx
false
null
t3_1hw0itx
/r/LocalLLaMA/comments/1hw0itx/rumors_about_01ai_laying_off_its_entire/
false
false
https://a.thumbs.redditm…nuwNqkVmufM8.jpg
54
{'enabled': False, 'images': [{'id': 'iLGD7TjzL_F0G-cIwDWIvffKmkGx4rezdjhFtgR2Apo', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/twKj1q9KpPirHPA7dDRZ83AKHbg7yJq5582Pj0hjJcE.jpg?width=108&crop=smart&auto=webp&s=aed38013e96ed01f73180f516e8b8e0f3caa13a1', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/twKj1q9KpPirHPA7dDRZ83AKHbg7yJq5582Pj0hjJcE.jpg?width=216&crop=smart&auto=webp&s=5762f5ff3a0fc739f0331f9804721d71db0c5dc6', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/twKj1q9KpPirHPA7dDRZ83AKHbg7yJq5582Pj0hjJcE.jpg?width=320&crop=smart&auto=webp&s=5010a3799676682043908c66e1f789203a50f02e', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/twKj1q9KpPirHPA7dDRZ83AKHbg7yJq5582Pj0hjJcE.jpg?width=640&crop=smart&auto=webp&s=0a51b65024fe3771d3017caa7173152d7f99f201', 'width': 640}, {'height': 578, 'url': 'https://external-preview.redd.it/twKj1q9KpPirHPA7dDRZ83AKHbg7yJq5582Pj0hjJcE.jpg?width=960&crop=smart&auto=webp&s=841ca7f7e9b25c25198e14ed5095175b0babff08', 'width': 960}, {'height': 650, 'url': 'https://external-preview.redd.it/twKj1q9KpPirHPA7dDRZ83AKHbg7yJq5582Pj0hjJcE.jpg?width=1080&crop=smart&auto=webp&s=59f788078d0311c1b88fa61b5d5ce717ed188df4', 'width': 1080}], 'source': {'height': 1508, 'url': 'https://external-preview.redd.it/twKj1q9KpPirHPA7dDRZ83AKHbg7yJq5582Pj0hjJcE.jpg?auto=webp&s=18c022e385e6b75845662a70e4716a4562080a6a', 'width': 2504}, 'variants': {}}]}
Use LLM in browser extension, but with the NPU in a copilot plus PC
3
I want to run an LLM locally inside a browser extension, and it needs to be quick, so I want to use the NPU inside a Copilot+ PC. I'm looking for a path forward.
2025-01-07T20:13:08
https://www.reddit.com/r/LocalLLaMA/comments/1hw0qon/use_llm_in_browser_extension_but_with_the_npu_in/
xpingu69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw0qon
false
null
t3_1hw0qon
/r/LocalLLaMA/comments/1hw0qon/use_llm_in_browser_extension_but_with_the_npu_in/
false
false
self
3
null
Anyone have experience using a single H200 or 2x H100? Which would you go for when setting up a physical GPU server?
0
We are planning to fine-tune large Llama models and perform a lot of inference. Which is more efficient for fine-tuning and inference? Any experience?
2025-01-07T20:28:23
https://www.reddit.com/r/LocalLLaMA/comments/1hw13uv/anyone_has_any_experience_using_single_h200_or_2/
Lazy_Wedding_1383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw13uv
false
null
t3_1hw13uv
/r/LocalLLaMA/comments/1hw13uv/anyone_has_any_experience_using_single_h200_or_2/
false
false
self
0
null
Small business with LLM needs in the next 6 months... What hardware should we aim for?
1
[removed]
2025-01-07T20:28:46
https://www.reddit.com/r/LocalLLaMA/comments/1hw145r/small_business_with_llm_needs_in_the_next_6/
Lost_Fox__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw145r
false
null
t3_1hw145r
/r/LocalLLaMA/comments/1hw145r/small_business_with_llm_needs_in_the_next_6/
false
false
self
1
null
DeepSeek V3 GGUF 2-bit surprisingly works! + BF16, other quants
207
Hey guys, we uploaded GGUFs including 2, 3, 4, 5, 6 and 8-bit quants for DeepSeek V3. We've also de-quantized DeepSeek V3 to upload the [bf16 version](https://huggingface.co/unsloth/DeepSeek-V3-bf16) so you guys can experiment with it (1.3TB).

**Minimum hardware requirements** to run DeepSeek V3 in 2-bit: 48GB RAM + 250GB of disk space.

See how to run DeepSeek V3 with examples and our full collection here: [https://huggingface.co/collections/unsloth/deepseek-v3-all-versions-677cf5cfd7df8b7815fc723c](https://huggingface.co/collections/unsloth/deepseek-v3-all-versions-677cf5cfd7df8b7815fc723c)

|DeepSeek V3 version|**Links**|
|:-|:-|
|GGUF|2-bit: [Q2_K_XS](https://huggingface.co/unsloth/DeepSeek-V3-GGUF?show_file_info=DeepSeek-V3-Q2_K_XS%2FDeepSeek-V3-Q2_K_XS-00001-of-00005.gguf) and [Q2_K_L](https://huggingface.co/unsloth/DeepSeek-V3-GGUF?show_file_info=DeepSeek-V3-Q2_K_L%2FDeepSeek-V3-Q2_K_L-00001-of-00005.gguf)|
|GGUF|[3](https://huggingface.co/unsloth/DeepSeek-V3-GGUF?show_file_info=DeepSeek-V3-Q3_K_M%2FDeepSeek-V3-Q3_K_M-00001-of-00007.gguf), [4](https://huggingface.co/unsloth/DeepSeek-V3-GGUF?show_file_info=DeepSeek-V3-Q4_K_M%2FDeepSeek-V3-Q4_K_M-00001-of-00009.gguf), [5](https://huggingface.co/unsloth/DeepSeek-V3-GGUF?show_file_info=DeepSeek-V3-Q5_K_M%2FDeepSeek-V3-Q5_K_M-00001-of-00010.gguf), [6](https://huggingface.co/unsloth/DeepSeek-V3-GGUF?show_file_info=DeepSeek-V3-Q6_K%2FDeepSeek-V3-Q6_K-00001-of-00012.gguf) and [8-bit](https://huggingface.co/unsloth/DeepSeek-V3-GGUF?show_file_info=DeepSeek-V3-Q8_0%2FDeepSeek-V3-BF16-256x20B-Q8_0-00001-of-00016.gguf)|
|bf16|[dequantized 16-bit](https://huggingface.co/unsloth/DeepSeek-V3-bf16)|

The [Unsloth](https://github.com/unslothai/unsloth) GGUF model details:

|Quant Type|Disk Size|Details|
|:-|:-|:-|
|[Q2_K_XS](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q2_K_XS)|207GB|Q2 everything, Q4 embed, Q6 lm_head|
|[Q2_K_L](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q2_K_L)|228GB|Q3 down_proj, Q2 rest, Q4 embed, Q6 lm_head|
|[Q3_K_M](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q3_K_M)|298GB|Standard Q3_K_M|
|[Q4_K_M](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q4_K_M)|377GB|Standard Q4_K_M|
|[Q5_K_M](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q5_K_M)|443GB|Standard Q5_K_M|
|[Q6_K](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q6_K)|513GB|Standard Q6_K|
|[Q8_0](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q8_0)|712GB|Standard Q8_0|

* [Q2_K_XS](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q2_K_XS) should run ok in ~40GB of CPU / GPU VRAM with automatic llama.cpp offloading.
* Use K quantization (not V quantization).
* Do not forget about `<|User|>` and `<|Assistant|>` tokens! Or use a chat template formatter.

Example with Q5_0 K-quantized cache (V-quantized cache doesn't work):

    ./llama.cpp/llama-cli --model unsloth/DeepSeek-V3-GGUF/DeepSeek-V3-Q2_K_XS/DeepSeek-V3-Q2_K_XS-00001-of-00005.gguf --cache-type-k q5_0 --prompt '<|User|>What is 1+1?<|Assistant|>'

and running the above generates:

    The sum of 1 and 1 is **2**. Here's a simple step-by-step breakdown:
    1. **Start with the number 1.**
    2. **Add another 1 to it.**
    3. **The result is 2.**
    So, **1 + 1 = 2**. [end of text]
2025-01-07T20:51:39
https://www.reddit.com/r/LocalLLaMA/comments/1hw1nze/deepseek_v3_gguf_2bit_surprisingly_works_bf16/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw1nze
false
null
t3_1hw1nze
/r/LocalLLaMA/comments/1hw1nze/deepseek_v3_gguf_2bit_surprisingly_works_bf16/
false
false
self
207
{'enabled': False, 'images': [{'id': 'q-PIWyKbIVoK2UDgnz7oPl7RQCVlkbQQkfe9gC181cQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/90izfJU5m8L092ixzoxf_26rk0-ApZnqaCNs0lLPaw8.jpg?width=108&crop=smart&auto=webp&s=67ed06b7a8d9020498f077b9d09b75255d69d88a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/90izfJU5m8L092ixzoxf_26rk0-ApZnqaCNs0lLPaw8.jpg?width=216&crop=smart&auto=webp&s=c52474870b66d14eaecb7e079a24bd0a42c432cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/90izfJU5m8L092ixzoxf_26rk0-ApZnqaCNs0lLPaw8.jpg?width=320&crop=smart&auto=webp&s=1c0f3f4e11ce04aefae856fdc30345316bd201d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/90izfJU5m8L092ixzoxf_26rk0-ApZnqaCNs0lLPaw8.jpg?width=640&crop=smart&auto=webp&s=db42b4ade4d1159e1f4d5e8dbac020f0701400dd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/90izfJU5m8L092ixzoxf_26rk0-ApZnqaCNs0lLPaw8.jpg?width=960&crop=smart&auto=webp&s=70cfb045c5a081ba843375df22868a56fe46e7ba', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/90izfJU5m8L092ixzoxf_26rk0-ApZnqaCNs0lLPaw8.jpg?width=1080&crop=smart&auto=webp&s=0ea16ba7cf7a08ae510530034606876a2a480b75', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/90izfJU5m8L092ixzoxf_26rk0-ApZnqaCNs0lLPaw8.jpg?auto=webp&s=c0ec27049754bfe54500f4ad02449fbeca3ae83f', 'width': 1200}, 'variants': {}}]}
MicroServing of LLM Engines with Sub-Request-Level APIs
1
[removed]
2025-01-07T21:01:29
https://www.reddit.com/r/LocalLLaMA/comments/1hw1wl7/microserving_of_llm_engines_with_subrequestlevel/
SnooMachines3070
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw1wl7
false
null
t3_1hw1wl7
/r/LocalLLaMA/comments/1hw1wl7/microserving_of_llm_engines_with_subrequestlevel/
false
false
https://b.thumbs.redditm…WqT-BQlfGptw.jpg
1
null
I'm getting a GPU server with two NVIDIA H100s. Below is my configuration, which comes to around 150,000 USD. Is it possible to bring the cost down?
0
2025-01-07T21:24:26
https://www.dell.com/en-us/shop/dell-poweredge-servers/poweredge-r760xa-rack-server/spd/poweredge-r760xa/pe_r760xa_16902_vi_vp?configurationid=93df2617-616a-4f57-b8a4-ee2bd86c7088
Lazy_Wedding_1383
dell.com
1970-01-01T00:00:00
0
{}
1hw2gir
false
null
t3_1hw2gir
/r/LocalLLaMA/comments/1hw2gir/im_getting_a_gpu_server_with_two_nvidia_h100/
false
false
https://b.thumbs.redditm…vkSXr8Al0UOg.jpg
0
{'enabled': False, 'images': [{'id': '3P3A4sPIu4N-jXmoX4tcoha32ajC03KvyKUUnsnnnmQ', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/58_ASI2f78nsbNBIPTyu2c8Mrmy3NPTCSNcpkkM0MNU.jpg?width=108&crop=smart&auto=webp&s=f41d2c61f69afe47f920585641f5876c8c4c5077', 'width': 108}, {'height': 81, 'url': 'https://external-preview.redd.it/58_ASI2f78nsbNBIPTyu2c8Mrmy3NPTCSNcpkkM0MNU.jpg?width=216&crop=smart&auto=webp&s=d99973f577e46f3af294e61c35da3c16909d466c', 'width': 216}, {'height': 120, 'url': 'https://external-preview.redd.it/58_ASI2f78nsbNBIPTyu2c8Mrmy3NPTCSNcpkkM0MNU.jpg?width=320&crop=smart&auto=webp&s=e30277636e8a069e75d4ab62b7349c882151fe71', 'width': 320}], 'source': {'height': 180, 'url': 'https://external-preview.redd.it/58_ASI2f78nsbNBIPTyu2c8Mrmy3NPTCSNcpkkM0MNU.jpg?auto=webp&s=cb718aea901c3e2f726d35c2393653c64a070527', 'width': 480}, 'variants': {}}]}
Who offers Highest token / (second * $) on local machines?
3
Let's say a max $20k stack. Models: Llama 3.3 70B, 405B, or DeepSeek V3. Who gets the most bang? And where do you assume Digits will land? Inspire me!
2025-01-07T21:27:52
https://www.reddit.com/r/LocalLLaMA/comments/1hw2jg5/who_offers_highest_token_second_on_local_machines/
Funny_Acanthaceae285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw2jg5
false
null
t3_1hw2jg5
/r/LocalLLaMA/comments/1hw2jg5/who_offers_highest_token_second_on_local_machines/
false
false
self
3
null
Is Digits just a repurposing of the Jetson AGX Thor?
4
Comparing the one solid spec we have for both, 128GB of RAM, they seem very similar. Unfortunately, further details for both are fuzzy. Thor was also slated to be released in 2025.
2025-01-07T22:04:42
https://www.reddit.com/r/LocalLLaMA/comments/1hw3ez1/is_digits_just_a_repurposing_of_the_jetson_agx/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw3ez1
false
null
t3_1hw3ez1
/r/LocalLLaMA/comments/1hw3ez1/is_digits_just_a_repurposing_of_the_jetson_agx/
false
false
self
4
null
What is your LLM stack? Subscriptions? Tools? self hosted?
56
Hi! I have been a ChatGPT subscriber for 2 years, mostly using it for coding and personal searches (instead of Google), but now I am thinking of stopping and replacing it with other tools. What I am thinking is to buy some API credits from OpenAI/Anthropic/DeepSeek for ad-hoc queries and get a Cursor subscription for coding. What is your stack? Do you have any recommendations?
2025-01-07T22:10:33
https://www.reddit.com/r/LocalLLaMA/comments/1hw3jzx/what_is_your_llm_stack_subscriptions_tools_self/
vazma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw3jzx
false
null
t3_1hw3jzx
/r/LocalLLaMA/comments/1hw3jzx/what_is_your_llm_stack_subscriptions_tools_self/
false
false
self
56
null
Yann LeCun interview. He explains I-JEPA and V-JEPA. Hopefully Meta integrates JEPA into LLaMA 4 Vision models.
1
2025-01-07T22:26:40
https://youtu.be/u7e0YUcZYbE?feature=shared
Powerful-Solution646
youtu.be
1970-01-01T00:00:00
0
{}
1hw3xin
false
{'oembed': {'author_name': 'Dr Brian Keating', 'author_url': 'https://www.youtube.com/@DrBrianKeating', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/u7e0YUcZYbE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Yann LeCun: AI Doomsday Fears Are Overblown [Ep. 473]"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/u7e0YUcZYbE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Yann LeCun: AI Doomsday Fears Are Overblown [Ep. 473]', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hw3xin
/r/LocalLLaMA/comments/1hw3xin/yann_lecun_interview_he_explains_ijepa_and_vjepa/
false
false
https://b.thumbs.redditm…gNLRi__1rzpI.jpg
1
{'enabled': False, 'images': [{'id': 'lX1zCQbM2ygpuUORLWTlbUkFQ2LzYr6wuXWWVbfnam8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/mIjJu8jFqkDER9GVRibgyLraO4Q6_mju1bJKh0UI74I.jpg?width=108&crop=smart&auto=webp&s=3cf8f5f48f0d7c78b655c541727a5345f384d1d4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/mIjJu8jFqkDER9GVRibgyLraO4Q6_mju1bJKh0UI74I.jpg?width=216&crop=smart&auto=webp&s=bac00cf7e8ed482a99bb8750cebe514350d37c61', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/mIjJu8jFqkDER9GVRibgyLraO4Q6_mju1bJKh0UI74I.jpg?width=320&crop=smart&auto=webp&s=e539b51ac864db084ec5c9b29789d5eafe877a46', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/mIjJu8jFqkDER9GVRibgyLraO4Q6_mju1bJKh0UI74I.jpg?auto=webp&s=66710dda101e1352bc332f696f8e0ca906d85fec', 'width': 480}, 'variants': {}}]}
Ray-Ban Meta Glasses
7
Blind user here who wants to understand the technology behind the glasses.

1. Is this how it works: the Ray-Ban Meta is the microphone, data is processed in the Meta View app, then uploaded to a Meta server running Llama, and finally the output is downloaded and sent to the glasses?
2. Will Meta update the version of Llama that underpins the glasses? Currently the glasses say that they're Llama 3.1, but the latest version of Llama is 3.3.
3. If I understand the process correctly, in that the glasses merely talk to a Meta server running Llama, does this mean that the glasses will give better results every quarter that Llama is updated with more training data?
2025-01-07T22:29:46
https://www.reddit.com/r/LocalLLaMA/comments/1hw405w/rayban_meta_glasses/
Alarmed-Instance5356
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw405w
false
null
t3_1hw405w
/r/LocalLLaMA/comments/1hw405w/rayban_meta_glasses/
false
false
self
7
null
Recipe for 3.3 70B 4-bit in vLLM docker?
1
[removed]
2025-01-07T22:52:30
https://www.reddit.com/r/LocalLLaMA/comments/1hw4jex/recipe_for_33_70b_4bit_in_vllm_docker/
e-rox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw4jex
false
null
t3_1hw4jex
/r/LocalLLaMA/comments/1hw4jex/recipe_for_33_70b_4bit_in_vllm_docker/
false
false
self
1
null
Tips for 6U rack case for 8x GPUs?
45
2025-01-07T22:56:57
https://v.redd.it/6fpzv25bjnbe1
Armym
v.redd.it
1970-01-01T00:00:00
0
{}
1hw4n4d
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6fpzv25bjnbe1/DASHPlaylist.mpd?a=1738882632%2CZjFlNTM0ZTA5M2Y1NzAyZjdlMGQ2ZDVlYTJkMjBkMTc0YTZmMzA5NzY4MmIwMzM3YWFkNmE0MDc1NDQ0ZDM5MA%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/6fpzv25bjnbe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/6fpzv25bjnbe1/HLSPlaylist.m3u8?a=1738882632%2CMmYzZjNhMDQxNWI1ZDhlMmY2ZThmNmM3MjI4MjkyOTU4YjVlYThkYjUxNWYwMjJhYzI3OTkxYjUxMjYxNzBhYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6fpzv25bjnbe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1hw4n4d
/r/LocalLLaMA/comments/1hw4n4d/tips_for_6u_rack_case_for_8x_gpus/
false
false
https://external-preview…2f1d4d7e0127ad46
45
{'enabled': False, 'images': [{'id': 'NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0.png?width=108&crop=smart&format=pjpg&auto=webp&s=433602fdccff002f738f727ad51fe786d809c0a1', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0.png?width=216&crop=smart&format=pjpg&auto=webp&s=5c4345f230a1c246a693aa7de3f109c60b3ae075', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0.png?width=320&crop=smart&format=pjpg&auto=webp&s=43d6dc5fb200a5aaf56a2397358e2950f140ceba', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0.png?width=640&crop=smart&format=pjpg&auto=webp&s=d4278b16459ff6354d5bba0e976f48dd5e3e0ff1', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0.png?width=960&crop=smart&format=pjpg&auto=webp&s=423a008eb36856467188c8623939e3c5f8302d37', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ec4a293fa57d9697083534bc1ec4b2ef876345c0', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/NTBwaDEzNWJqbmJlMY0gsNGuLHs7SOaXKNigvqdMKBR1N3ZyusSYRWdOjKQ0.png?format=pjpg&auto=webp&s=4ca4090b644e7495862a5751842b861fdc6d8520', 'width': 1080}, 'variants': {}}]}
Experiment: An assistant for IT Support Technicians
1
[removed]
2025-01-07T23:01:25
https://www.reddit.com/r/LocalLLaMA/comments/1hw4qy2/experiment_an_assistant_for_it_support_technicians/
Ruin-Capable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw4qy2
false
null
t3_1hw4qy2
/r/LocalLLaMA/comments/1hw4qy2/experiment_an_assistant_for_it_support_technicians/
false
false
self
1
null
Data extraction from diagrams using Vision Language Model
5
Looking for some ideas to accurately extract data flows from a system context diagram. I've tried a number of models and prompt engineering techniques, but I'm still getting missing flows, hallucinated non-existent flows, and incorrect data flows.

**What I've tried:**

1. Prompt engineering with vision models (Phi-3-vision-128k-instruct, llama-3.2-90b-vision-instruct)
2. Splitting the diagram into smaller parts
3. Using OCR, then feeding the data back into the vision model

**Example diagram:** [https://i.sstatic.net/MBj4Zjdp.png](https://i.sstatic.net/MBj4Zjdp.png)
2025-01-07T23:13:33
https://www.reddit.com/r/LocalLLaMA/comments/1hw50ut/data_extraction_from_diagrams_using_vision/
Sensitive-Feed-4411
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw50ut
false
null
t3_1hw50ut
/r/LocalLLaMA/comments/1hw50ut/data_extraction_from_diagrams_using_vision/
false
false
self
5
{'enabled': False, 'images': [{'id': 'DmTdzD2Y7yv1dNs-zUGRGlVcvTwmtRhAd7CIDJsiiCc', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/cIFvDcSkpcAn_N24fZBV5ffZP3nrgKPknTN8BbIfR60.png?width=108&crop=smart&auto=webp&s=9cad880c750914210a985127f9c883015bebd10c', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/cIFvDcSkpcAn_N24fZBV5ffZP3nrgKPknTN8BbIfR60.png?width=216&crop=smart&auto=webp&s=d1bd789a84d5694290919eac1aba55ed7f498ac9', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/cIFvDcSkpcAn_N24fZBV5ffZP3nrgKPknTN8BbIfR60.png?width=320&crop=smart&auto=webp&s=657a2ea2547ac21d3ad9a3552acde0ab70082433', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/cIFvDcSkpcAn_N24fZBV5ffZP3nrgKPknTN8BbIfR60.png?width=640&crop=smart&auto=webp&s=ec26c802626105c752be1951e5643f2e57d33078', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/cIFvDcSkpcAn_N24fZBV5ffZP3nrgKPknTN8BbIfR60.png?width=960&crop=smart&auto=webp&s=7b4149aa0eaadf35a69ce8a9ca35e57ab48ddcc3', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/cIFvDcSkpcAn_N24fZBV5ffZP3nrgKPknTN8BbIfR60.png?width=1080&crop=smart&auto=webp&s=bbac670ce9576f7670286fc95d8c98b80600f723', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://external-preview.redd.it/cIFvDcSkpcAn_N24fZBV5ffZP3nrgKPknTN8BbIfR60.png?auto=webp&s=61c588aa7fbae4e35f19b1a33cb80c4b951c0d01', 'width': 1712}, 'variants': {}}]}
What is your favourite benchmark/leaderboard for math?
2
I am always interested to see what progress is being made in LLMs for solving hard math problems. Of course, what I really want is an open-source clone of AlphaProof.
2025-01-07T23:15:17
https://www.reddit.com/r/LocalLLaMA/comments/1hw52ah/what_is_your_favourite_benchmarkleaderboard_for/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw52ah
false
null
t3_1hw52ah
/r/LocalLLaMA/comments/1hw52ah/what_is_your_favourite_benchmarkleaderboard_for/
false
false
self
2
null
Newbie question: How can I make an AI “think” harder?
2
Newbie here (no dev experience). I've been thinking about o3 high compute being so much better than o3 low compute, and it got me thinking: if we were to spend 100x more compute on existing models, would the outcome be significantly better? Imagine a legal case and you create 4 agents: 1. Legal counsel - solution giver 2. Opposing legal counsel - critic/adversary 3. Judge - final validator 4. Jury - representing the jury. Imagine you let it run on a 5090 for a whole night - endlessly making arguments, finding their flaws, and evaluating the impact on the judge and jury. Wouldn't it make the end result better? And if yes, what would be needed to get started?
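For what it's worth, this is basically a generator/critic loop with a final judge, and it can be prototyped against any local OpenAI-compatible server in a few lines. A rough sketch (the base_url, model name, role prompts, and round count are all assumptions):

```python
# Rough sketch of a generate -> critique -> revise loop over a local model.
# The loop count stands in for "run all night": more rounds = more compute spent.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
MODEL = "llama-3.3-70b-instruct"  # assumption: whatever your server exposes

def ask(system: str, user: str) -> str:
    r = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return r.choices[0].message.content

case = "Summary of the legal case goes here."
argument = ask("You are legal counsel. Make the strongest possible argument.", case)

for _ in range(10):  # each round is one attack/revision cycle
    attack = ask("You are opposing counsel. Find every flaw in this argument:\n"
                 + argument, case)
    argument = ask("You are legal counsel. Revise your argument to survive "
                   "these attacks:\n" + attack, case)

print(ask("You are the judge. Evaluate the final argument for the jury.", argument))
```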
2025-01-07T23:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1hw55en/newbie_question_how_can_i_make_an_ai_think_harder/
HauntingCar5813
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw55en
false
null
t3_1hw55en
/r/LocalLLaMA/comments/1hw55en/newbie_question_how_can_i_make_an_ai_think_harder/
false
false
self
2
null
I just released Notate – Open-source AI research assistant with local LLM support
119
2025-01-07T23:25:52
https://notate.hairetsu.com
Hairetsu
notate.hairetsu.com
1970-01-01T00:00:00
0
{}
1hw5amg
false
null
t3_1hw5amg
/r/LocalLLaMA/comments/1hw5amg/i_just_released_notate_opensource_ai_research/
false
false
default
119
null
Autopod, Automate Your Reading List to Podcasts
3
Hi folks, I've been reading so much content lately around LLMs for my work and research, so I built a simple tool to turn my "read later" links into podcasts to keep up with the super-fast changes happening in the industry. It uses n8n, OpenAI, and [Raindrop.io](http://Raindrop.io) to pull content, generate scripts, convert them to audio, and save the result to Google Drive. It's cost-effective (~$0.20 or less for a 20-minute podcast) and fully customizable - you can tweak prompts, adjust lengths, or swap components. It is optimized for cost, so the voice quality isn't perfect as-is, but it can easily be improved by swapping in a better TTS model. You can also switch everything to use local models instead. I [open-sourced the workflow](https://github.com/falmanna/autopod), and the setup is straightforward. Check it out and let me know your feedback or ideas for improvements!
2025-01-07T23:32:05
https://www.reddit.com/r/LocalLLaMA/comments/1hw5fjn/autopod_automate_your_reading_list_to_podcasts/
abol3z
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw5fjn
false
null
t3_1hw5fjn
/r/LocalLLaMA/comments/1hw5fjn/autopod_automate_your_reading_list_to_podcasts/
false
false
self
3
null
WHY LLM-2 Q4_0 PERFORMS BETTER?
1
2025-01-07T23:32:41
https://i.redd.it/kxwhqepnpnbe1.png
Legal_Department5475
i.redd.it
1970-01-01T00:00:00
0
{}
1hw5g0i
false
null
t3_1hw5g0i
/r/LocalLLaMA/comments/1hw5g0i/why_llm2_q4_0_performs_better/
false
false
https://b.thumbs.redditm…HMJQbFN3AGuY.jpg
1
{'enabled': True, 'images': [{'id': '1QSWzuUlJfqdLGFWEP4IHnOS0ltteUBOeeVn51dE9ig', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/kxwhqepnpnbe1.png?width=108&crop=smart&auto=webp&s=da83990671660617277c2a3ce09fd909e1b2c86d', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/kxwhqepnpnbe1.png?width=216&crop=smart&auto=webp&s=17d0f66fa28e1fbb7014997533914ac1d9529b87', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/kxwhqepnpnbe1.png?width=320&crop=smart&auto=webp&s=0ae931f55936a051b29815869b97edaab7f07ea9', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/kxwhqepnpnbe1.png?width=640&crop=smart&auto=webp&s=307b2d72fc412769079963dd0b08615abd84b5a3', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/kxwhqepnpnbe1.png?width=960&crop=smart&auto=webp&s=a01880406d07753591467c1af58fc069d3fc6e08', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/kxwhqepnpnbe1.png?width=1080&crop=smart&auto=webp&s=7245428d334d24b555399c14a35bd90af2f71a42', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/kxwhqepnpnbe1.png?auto=webp&s=b1f695004be74b1929c7ede86e426309470059ee', 'width': 2400}, 'variants': {}}]}
WHY LLM-2 Q4_0 PERFORMS BETTER?
1
It took its time, but the 2nd edition planned for 2025-01-07 (which ended 30 minutes ago) is finally ready. Starting from the conclusions of its 1st edition, this paper explores more deeply the consequences of the image above (models fork after quantisation) and proposes a recipe for the best candidate model for the most performant chatbot suitable for gpt4all. I hope this helps, R-
2025-01-07T23:35:09
https://www.reddit.com/r/LocalLLaMA/comments/1hw5hz0/why_llm2_q4_0_performs_better/
Legal_Department5475
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw5hz0
false
null
t3_1hw5hz0
/r/LocalLLaMA/comments/1hw5hz0/why_llm2_q4_0_performs_better/
false
false
self
1
null
WHY LLMA-2 Q4_0 PERFORMS BETTER?
2
It took its time, but the 2nd edition planned for 2025-01-07 (which ended 30 minutes ago) is finally ready. * [chatbots-for-fun paper #15](https://robang74.github.io/chatbots-for-fun/html/neutrality-vs-biases-for-chatbots.html#conclusion) Starting from the conclusions of its 1st edition, this paper explores more deeply the consequences of the image above (models fork after quantisation) and proposes a recipe for the best candidate model for the most performant chatbot suitable for gpt4all. *** [Performance gap between LLMA-3 and LLMA-2 is inversely forking after aggressive quantisation](https://preview.redd.it/udf2t512rnbe1.png?width=2400&format=png&auto=webp&s=54c6ea861e961f1d1b0602198b0ee7ea5f4cb998)
2025-01-07T23:42:48
https://www.reddit.com/r/LocalLLaMA/comments/1hw5nzn/why_llma2_q4_0_performs_better/
Legal_Department5475
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw5nzn
false
null
t3_1hw5nzn
/r/LocalLLaMA/comments/1hw5nzn/why_llma2_q4_0_performs_better/
false
false
https://a.thumbs.redditm…Rnb7GIcPu8p4.jpg
2
null
Friendly heads up: Tencent (DeepSeek) is now considered a Chinese military company in the US
17
I have no opinion on this myself, but I just wanted to mention it because some people might be integrating DeepSeek into their software, and I figured you might want a heads-up so you don't get into trouble with the law. Disclaimer: I am not a lawyer, have no idea how this affects things, and am not making any claims. Here is the notice: [https://www.federalregister.gov/documents/2025/01/07/2025-00070/notice-of-availability-of-designation-of-chinese-military-companies](https://www.federalregister.gov/documents/2025/01/07/2025-00070/notice-of-availability-of-designation-of-chinese-military-companies) Here is the prior list from October, so you can see that it wasn't there last time: [https://www.federalregister.gov/documents/2024/10/29/2024-25169/notice-of-designation-of-chinese-military-company](https://www.federalregister.gov/documents/2024/10/29/2024-25169/notice-of-designation-of-chinese-military-company) Credit to "*The Lunduke Journal of Technology*" for pointing this out.
2025-01-07T23:46:18
https://www.reddit.com/r/LocalLLaMA/comments/1hw5qoq/friendly_heads_up_tencent_deepseek_is_now/
Many_SuchCases
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw5qoq
false
null
t3_1hw5qoq
/r/LocalLLaMA/comments/1hw5qoq/friendly_heads_up_tencent_deepseek_is_now/
false
false
self
17
null
Having a hard time trying to find the ideal <9B Wiki model
3
I'm trying to find a model that: avoids surpassing 8B parameters (Gemma 2 9B is really tight on my device); ideally doesn't need quants lower than Q4_K_S (ideally Q4_K_M) to run; is good at being my "offline Wikipedia" (I'm not worried at all about RP, I just want a knowledgeable wiki model; math skills are welcome too); has decent capabilities at solving crossword puzzles; works well in Portuguese (Qwen 2.5 7B Instruct, for example, starts spitting out Chinese phrases randomly). My tested models so far (IIRC): Qwen 2.5 7B Instruct Q4_K_L (not awesome, and too much Chinese for some reason); Llama 3.1 8B Instruct Q4_K_M (my best bet for now); Ministral 8B Instruct 2410 Q4_K_M (not the best); Mistral 7B Instruct v0.3 Q4_K_M (not the best); Gemma 2 9B it Q4_K_M (does not load, or it's extremely sluggish); Gemma 2 9B bnb Q4_K_M (does not load, or it's extremely sluggish). My device (and app used): POCO X6 Pro (12 GB RAM with about 6-7 GB of usable RAM; the system eats the rest; + 512 GB of storage) + ChatterUI (thanks, Val)
2025-01-08T00:15:14
https://www.reddit.com/r/LocalLLaMA/comments/1hw6dt3/having_a_hard_time_trying_to_find_the_ideal_9b/
Azeitonius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw6dt3
false
null
t3_1hw6dt3
/r/LocalLLaMA/comments/1hw6dt3/having_a_hard_time_trying_to_find_the_ideal_9b/
false
false
self
3
null
Llama 4 compute estimates & timeline
37
From some quick searching, Llama 4 was already training as early as October 28th. They have 100k H100s and are reportedly using 10x more compute than Llama 3 (which was ~8MM GPU hours from what I could find), and even 100MM GPU hours on 100k GPUs is only ~1.4 months (100MM / 100k = 1,000 hours per GPU ≈ 42 days). Unless I am completely out of the ballpark, shouldn't they have finished pre-training by now? Perhaps they're at the fine-tuning stage? What about DeepSeek: if Meta takes anything for inspiration, it should be DeepSeek's ~$5.4MM budget and what they did with it. I'm really hopeful for what Meta can do with their budget if they take a similar approach, especially considering they're (again, hopefully) training a natively multimodal Llama 4.
2025-01-08T00:33:59
https://www.reddit.com/r/LocalLLaMA/comments/1hw6sa0/llama_4_compute_estimates_timeline/
dp3471
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw6sa0
false
null
t3_1hw6sa0
/r/LocalLLaMA/comments/1hw6sa0/llama_4_compute_estimates_timeline/
false
false
self
37
null
Using a small footprint RAG solution as a chatbot interface?
1
I've put together a selection of internal tools for developer-productivity purposes, with a Streamlit web UI. Nobody, including myself, likes reading documentation, so I was thinking of incorporating a small RAG solution to augment or replace the interface. It'll certainly be running on a server with modest resources and without a GPU. Is this currently possible given the resource constraints? Any good git repos to review? Thanks all
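It is possible on CPU: for a docs-sized corpus you don't even need a vector database, just a small embedding model and brute-force cosine similarity. A minimal sketch (the embedding model choice is an assumption):

```python
# Minimal CPU-only retrieval sketch: embed docs once, retrieve by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs fine on CPU

docs = ["How to configure the deploy tool...", "Troubleshooting build failures..."]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(retrieve("my build is failing"))
```

The retrieved chunks can then be stuffed into the prompt of whatever small instruct model fits on the box, or shown directly in the Streamlit UI as doc snippets.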
2025-01-08T00:38:21
https://www.reddit.com/r/LocalLLaMA/comments/1hw6vh6/using_a_small_footprint_rag_solution_as_a_chatbot/
Success-Dependent
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw6vh6
false
null
t3_1hw6vh6
/r/LocalLLaMA/comments/1hw6vh6/using_a_small_footprint_rag_solution_as_a_chatbot/
false
false
self
1
null
Recipe for 3.3 70B 4-bit in vLLM docker?
2
I've been running hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 on dual 3090s, under vLLM in Docker, for quite some time, and want to try 3.3. I can't seem to find a model and configuration that starts up. Before I share all the things I've tried and the errors I got, which could be a lot of false starts: does anyone have a recipe that's worked for them?
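Not a verified recipe, but for reference, here is how the working 3.1 setup maps onto vLLM's offline Python API; swapping in whichever 3.3 AWQ-INT4 upload you trust should look the same (the repo id below is a placeholder, and the context length is a guess to keep the KV cache within 48 GB):

```python
# Sketch: loading a Llama 3.3 70B AWQ quant across 2x 3090 with vLLM's Python API.
# The repo id is a placeholder; substitute any 4-bit AWQ upload of 3.3 70B.
from vllm import LLM, SamplingParams

llm = LLM(
    model="someuser/Llama-3.3-70B-Instruct-AWQ-INT4",  # assumption: placeholder id
    quantization="awq",
    tensor_parallel_size=2,       # split across both 3090s
    max_model_len=8192,           # keep the KV cache within 48 GB total VRAM
    gpu_memory_utilization=0.95,
)

out = llm.generate(["Hello"], SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```

The Docker OpenAI server takes the same settings as flags: `--quantization awq --tensor-parallel-size 2 --max-model-len 8192`.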
2025-01-08T00:48:45
https://www.reddit.com/r/LocalLLaMA/comments/1hw737i/recipe_for_33_70b_4bit_in_vllm_docker/
e-rox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw737i
false
null
t3_1hw737i
/r/LocalLLaMA/comments/1hw737i/recipe_for_33_70b_4bit_in_vllm_docker/
false
false
self
2
null
Created a video with text prompt using Cosmos-1.0-7B-Text2World
41
It was generated from the following command on a single 3090: `PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py --checkpoint_dir /workspace/checkpoints --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World --prompt "water drop hitting the floor" --seed 547312549 --video_save_name Cosmos-1.0-Diffusion-7B-Text2World_memory_efficient --offload_tokenizer --offload_diffusion_transformer --offload_text_encoder_model --offload_prompt_upsampler --offload_guardrail_models` It was converted to a GIF, so there is probably some color loss. Cosmos's rival Genesis still hasn't released its generative model, so there is nothing to compare against. I couldn't get it to work with Cosmos-1.0-Diffusion-7B-Video2World. Did anyone manage to get that running on a single 3090? https://i.redd.it/zv2y4p9vaobe1.gif
2025-01-08T01:36:41
https://www.reddit.com/r/LocalLLaMA/comments/1hw82ty/created_a_video_with_text_prompt_using/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw82ty
false
null
t3_1hw82ty
/r/LocalLLaMA/comments/1hw82ty/created_a_video_with_text_prompt_using/
false
false
https://b.thumbs.redditm…jeivBFZtdgBE.jpg
41
null
Dual different graphics cards?
1
[removed]
2025-01-08T01:39:16
https://www.reddit.com/r/LocalLLaMA/comments/1hw84ph/dual_different_graphics_cards/
PuzzleheadedPomelo14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw84ph
false
null
t3_1hw84ph
/r/LocalLLaMA/comments/1hw84ph/dual_different_graphics_cards/
false
false
self
1
null
Running LLama3.3 on Bolt.diy and its super slow
1
[removed]
2025-01-08T02:35:34
https://www.reddit.com/r/LocalLLaMA/comments/1hw9a6f/running_llama33_on_boltdiy_and_its_super_slow/
Adept-Monk2661
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw9a6f
false
null
t3_1hw9a6f
/r/LocalLLaMA/comments/1hw9a6f/running_llama33_on_boltdiy_and_its_super_slow/
false
false
self
1
null
Which model should I use as a ChatGPT alternative?
0
I am fairly new to LLMs. I have set up openwebui with Mistral and Llama, and auto1111 for SD before, but I never learned anything in depth about LLMs. My specific use cases are coding (Python, C++, HTML, CSS, JS), PDF/image scanning (I think it's called RAG?), and it needs to understand context and not feel like a bot. It sounds like I am looking for too many things in a single small-scale model, but I don't know how far LLMs have come, which is why my expectations may be too high. My GPU is a 4070 Super, so I am thinking something like a 7B model? Or something better that has been quantized to run smoothly on my GPU. I am eager to learn more about these LLMs, and I would really appreciate people correcting me in case I got some of these terms wrong. I've barely scratched the surface of learning about LLMs.
2025-01-08T02:40:32
https://www.reddit.com/r/LocalLLaMA/comments/1hw9dn2/which_model_should_i_use_as_a_chatgpt_alternative/
NightcoreSpectrum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw9dn2
false
null
t3_1hw9dn2
/r/LocalLLaMA/comments/1hw9dn2/which_model_should_i_use_as_a_chatgpt_alternative/
false
false
self
0
null
Remember yi-lightning? It's only 20B!
1
[removed]
2025-01-08T02:46:27
https://www.reddit.com/r/LocalLLaMA/comments/1hw9hpo/remember_yilightning_its_only_20b/
No-Condition-696
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hw9hpo
false
null
t3_1hw9hpo
/r/LocalLLaMA/comments/1hw9hpo/remember_yilightning_its_only_20b/
false
false
https://b.thumbs.redditm…bQ14eaz1o8Uk.jpg
1
{'enabled': False, 'images': [{'id': 'otuvt0w70GgTfmvxjNjeLEQieQlJjc6agJ52qWKwiVE', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/xB3cSyDkvTKJe-OATuupOXaxK4Dee2m2ueKOid5-l0c.jpg?width=108&crop=smart&auto=webp&s=ccd1030357bc185360d4257069e83ed3ef6f5dcb', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/xB3cSyDkvTKJe-OATuupOXaxK4Dee2m2ueKOid5-l0c.jpg?width=216&crop=smart&auto=webp&s=4748f930cb027fc782d6d68f5157598158da3cf5', 'width': 216}, {'height': 136, 'url': 'https://external-preview.redd.it/xB3cSyDkvTKJe-OATuupOXaxK4Dee2m2ueKOid5-l0c.jpg?width=320&crop=smart&auto=webp&s=aa2e447449cc1c278a1325417aab007bf842c3af', 'width': 320}, {'height': 272, 'url': 'https://external-preview.redd.it/xB3cSyDkvTKJe-OATuupOXaxK4Dee2m2ueKOid5-l0c.jpg?width=640&crop=smart&auto=webp&s=13eab5fdd125f4321b914382a0351235c33972e6', 'width': 640}], 'source': {'height': 383, 'url': 'https://external-preview.redd.it/xB3cSyDkvTKJe-OATuupOXaxK4Dee2m2ueKOid5-l0c.jpg?auto=webp&s=7651c3f9e5599128ab8a7bb3cfc5ca251fbae6aa', 'width': 900}, 'variants': {}}]}
Remote LLM Authentication
1
[removed]
2025-01-08T03:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1hwadrs/remote_llm_authentication/
Downtown_Abrocoma398
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwadrs
false
null
t3_1hwadrs
/r/LocalLLaMA/comments/1hwadrs/remote_llm_authentication/
false
false
self
1
null
Remote LLM Authentication
1
[removed]
2025-01-08T03:32:50
https://www.reddit.com/r/LocalLLaMA/comments/1hwadxs/remote_llm_authentication/
Downtown_Abrocoma398
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwadxs
false
null
t3_1hwadxs
/r/LocalLLaMA/comments/1hwadxs/remote_llm_authentication/
false
false
self
1
null
best setup/config for 3090+3060
1
[removed]
2025-01-08T03:45:16
https://www.reddit.com/r/LocalLLaMA/comments/1hwamc5/best_setupconfig_for_30903060/
kahdeg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwamc5
false
null
t3_1hwamc5
/r/LocalLLaMA/comments/1hwamc5/best_setupconfig_for_30903060/
false
false
self
1
null
Help with 3090 and Windows Server ! Lack of drivers.
1
Hi there, I'm trying to set up an AI farm server with 3090s, but I simply can't install the 3090 driver on Server 2019! Does anyone have any secret to make it viable? Thanks for the help!
2025-01-08T03:55:03
https://www.reddit.com/r/LocalLLaMA/comments/1hwasvq/help_with_3090_and_windows_server_lack_of_drivers/
DouglasteR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwasvq
false
null
t3_1hwasvq
/r/LocalLLaMA/comments/1hwasvq/help_with_3090_and_windows_server_lack_of_drivers/
false
false
self
1
null
Cloud GPU + storage hosting for low intensity projects?
2
I'm trying to figure out the best way to keep 100–200 GB hosted somewhere (code, random datasets) and attach to a 4090 or A100 for a few hours when I'm making progress on something. I'm not looking forward to spending 10 minutes redownloading datasets and pulling stuff from GitHub every time I want to sit down and do some benchmarking/fine-tuning, etc. Essentially, I want to replicate the "we have a 4090 at home" experience, but with the option to scale up to an A100, etc. I also don't want to burn $$ on storage/machines when I'm not actually working on something. Obviously, low cost per active hour is very important (assume, say, 20 hours/week), but so is the speed to "get back to it." I'm very surprised that most providers don't really offer an option for this. The best choices I can see so far are going to a big cloud provider and paying very high rates for GPU time or RunPod's network volumes at $0.07/GB/month. Do folks have other recommendations?
2025-01-08T04:13:13
https://www.reddit.com/r/LocalLLaMA/comments/1hwb4ul/cloud_gpu_storage_hosting_for_low_intensity/
gofiend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwb4ul
false
null
t3_1hwb4ul
/r/LocalLLaMA/comments/1hwb4ul/cloud_gpu_storage_hosting_for_low_intensity/
false
false
self
2
null
The real use case for DIGITS is SLM training
4
Because of the bandwidth of the unified memory, most people who just want to run inference might be better off with something like 2x 4090s (unless you are okay with running a very large model at 7 tok/s). But the 128 GB of memory and the high FLOPS mean that this machine might be very cost-effective for fine-tuning smaller models.
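A back-of-envelope check on why 128 GB matters for training rather than inference (the 16 bytes/param figure is the usual mixed-precision Adam rule of thumb, ignoring activations):

```python
# Rough full fine-tuning memory with Adam in mixed precision:
# bf16 weights (2) + bf16 grads (2) + fp32 master weights (4) + Adam moments (4+4)
bytes_per_param = 16
params = 7e9  # a 7B model
print(params * bytes_per_param / 1e9)  # ~112 GB: fits in 128 GB, not in 2x24 GB
```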
2025-01-08T04:17:47
https://www.reddit.com/r/LocalLLaMA/comments/1hwb7v2/the_real_use_case_for_digits_is_slm_training/
LiquidGunay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwb7v2
false
null
t3_1hwb7v2
/r/LocalLLaMA/comments/1hwb7v2/the_real_use_case_for_digits_is_slm_training/
false
false
self
4
null
Is Next Token Prediction Really a Universal Solution for Multimodal AI?
1
[removed]
2025-01-08T04:56:07
https://www.reddit.com/r/LocalLLaMA/comments/1hwbw5c/is_next_token_prediction_really_a_universal/
Panchhhh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwbw5c
false
null
t3_1hwbw5c
/r/LocalLLaMA/comments/1hwbw5c/is_next_token_prediction_really_a_universal/
false
false
self
1
{'enabled': False, 'images': [{'id': '7L-jR0i_6IcFTUqYEOCv31PJoLNv7UWeNMb65fdu6B8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fCobdad191ycDXV8-mWU-qpST7f4iO47GvtKaZ9Kg7M.jpg?width=108&crop=smart&auto=webp&s=651a3354a219876332dadb487786db71d1d1a205', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fCobdad191ycDXV8-mWU-qpST7f4iO47GvtKaZ9Kg7M.jpg?width=216&crop=smart&auto=webp&s=0fa27a55cdc059d2e34f381f5a22b83e68921185', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fCobdad191ycDXV8-mWU-qpST7f4iO47GvtKaZ9Kg7M.jpg?width=320&crop=smart&auto=webp&s=005b7ee337ed417370e5e0dfc375ac48545a5531', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fCobdad191ycDXV8-mWU-qpST7f4iO47GvtKaZ9Kg7M.jpg?width=640&crop=smart&auto=webp&s=f9031c0faa4bfeef5372374f27f82a5d1303ef9b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fCobdad191ycDXV8-mWU-qpST7f4iO47GvtKaZ9Kg7M.jpg?width=960&crop=smart&auto=webp&s=8cf043fd287a2cf321e722a04e2d3831af069dee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fCobdad191ycDXV8-mWU-qpST7f4iO47GvtKaZ9Kg7M.jpg?width=1080&crop=smart&auto=webp&s=747a369db2121229c18bd3062604c630e69beb0e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fCobdad191ycDXV8-mWU-qpST7f4iO47GvtKaZ9Kg7M.jpg?auto=webp&s=9f3f77ec1bb867b3da7c773517907b2443442a8b', 'width': 1200}, 'variants': {}}]}
Large LLM task, local or api?
5
I need to evaluate 20,000 emails and see if they meet certain criteria for a client. I downloaded Llama 3.3 70B and my 3060 was struggling. Switched to Llama 3.1 8B and it's still pretty slow. I am now at a decision point: keep trying smaller/different models, buy a better GPU, or just use an API. Any specific recommendations would be greatly appreciated.
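Before buying hardware, it may help to size the job: 20,000 emails at a few hundred tokens each is only a few million tokens, which is cheap through most APIs and also feasible locally overnight if you keep outputs short. A sketch of a loop that works against either route (base_url, model name, criteria, and file layout are all placeholders):

```python
# Sketch: classify emails against fixed criteria via any OpenAI-compatible API.
# Point base_url at a hosted API or a local server; model name is a placeholder.
import csv
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

CRITERIA = "Answer YES or NO: does this email meet the client's criteria?"

def check(email_text: str) -> str:
    r = client.chat.completions.create(
        model="llama-3.1-8b-instruct",  # assumption: whatever your server exposes
        messages=[{"role": "user", "content": f"{CRITERIA}\n\n{email_text}"}],
        max_tokens=3,      # short outputs keep both cost and latency low
        temperature=0,
    )
    return r.choices[0].message.content.strip()

with open("emails.csv") as f, open("results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for row in csv.reader(f):       # assumes one email body per row
        writer.writerow([check(row[0]), row[0]])
```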
2025-01-08T05:11:56
https://www.reddit.com/r/LocalLLaMA/comments/1hwc684/large_llm_task_local_or_api/
Any-Exchange5678
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwc684
false
null
t3_1hwc684
/r/LocalLLaMA/comments/1hwc684/large_llm_task_local_or_api/
false
false
self
5
null
Considering Dana White flips out everytime a UFC fighter wants to fight or box outside the "closed source UFC", I hope this won't affect the weights releases or licenses of future LLaMA models.
0
https://www.theguardian.com/technology/2025/jan/07/dana-white-meta-board
2025-01-08T05:16:53
https://i.redd.it/mwyy4v28fpbe1.jpeg
Powerful-Solution646
i.redd.it
1970-01-01T00:00:00
0
{}
1hwc95m
false
null
t3_1hwc95m
/r/LocalLLaMA/comments/1hwc95m/considering_dana_white_flips_out_everytime_a_ufc/
false
false
https://b.thumbs.redditm…xnl8kU9qnKLg.jpg
0
{'enabled': True, 'images': [{'id': 'IQobAFuv9ezddKsB2FPHwFUlRSANq9T6EmxJr8fXAbw', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/mwyy4v28fpbe1.jpeg?width=108&crop=smart&auto=webp&s=cfee41ca576a11cfe67efb973b088b30156caf47', 'width': 108}, {'height': 248, 'url': 'https://preview.redd.it/mwyy4v28fpbe1.jpeg?width=216&crop=smart&auto=webp&s=33de5d7d3c9c078842d46c4b414054ad86c8ae9a', 'width': 216}, {'height': 368, 'url': 'https://preview.redd.it/mwyy4v28fpbe1.jpeg?width=320&crop=smart&auto=webp&s=d056e72799dc22326885fd7a752acaddb7d65ab2', 'width': 320}, {'height': 737, 'url': 'https://preview.redd.it/mwyy4v28fpbe1.jpeg?width=640&crop=smart&auto=webp&s=b0be5618ade9256ddd5bc1bda413463f246aacbf', 'width': 640}], 'source': {'height': 954, 'url': 'https://preview.redd.it/mwyy4v28fpbe1.jpeg?auto=webp&s=2eb18e757cffe307d7ae40775a8c4c0e9d5eef15', 'width': 828}, 'variants': {}}]}
Scientific Idea Generation Model: QwQ-32B-Preview-IdeaWhiz-v1
50
https://preview.redd.it/…estions welcome.
2025-01-08T05:19:19
https://www.reddit.com/r/LocalLLaMA/comments/1hwcamp/scientific_idea_generation_model/
realJoeTrump
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwcamp
false
null
t3_1hwcamp
/r/LocalLLaMA/comments/1hwcamp/scientific_idea_generation_model/
false
false
https://b.thumbs.redditm…D1xXe46X2oAc.jpg
50
{'enabled': False, 'images': [{'id': 'pUjVJlG7wCOhWWfnh0veShIwMuk7v5VLmRNacfF9GoE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/O0xLvJp44hwyg917PUrV8IBhbRmX6FOqSfwmMF1eaGc.jpg?width=108&crop=smart&auto=webp&s=66761f3d2bfd798a2fe2649345833aaa9326d527', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/O0xLvJp44hwyg917PUrV8IBhbRmX6FOqSfwmMF1eaGc.jpg?width=216&crop=smart&auto=webp&s=3de141129a23857024bcf2c88535c86a06da88ab', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/O0xLvJp44hwyg917PUrV8IBhbRmX6FOqSfwmMF1eaGc.jpg?width=320&crop=smart&auto=webp&s=7394a75dd8bbeea33b61c00aa16528131fba6aa3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/O0xLvJp44hwyg917PUrV8IBhbRmX6FOqSfwmMF1eaGc.jpg?width=640&crop=smart&auto=webp&s=fc29f48e087f031064db96ec7d094a8c35ff53b5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/O0xLvJp44hwyg917PUrV8IBhbRmX6FOqSfwmMF1eaGc.jpg?width=960&crop=smart&auto=webp&s=e07af2b446149e4c9741decd50758de876c5c686', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/O0xLvJp44hwyg917PUrV8IBhbRmX6FOqSfwmMF1eaGc.jpg?width=1080&crop=smart&auto=webp&s=93a5afe75c259a6aa472d82d012497e5bfcba609', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/O0xLvJp44hwyg917PUrV8IBhbRmX6FOqSfwmMF1eaGc.jpg?auto=webp&s=fa7b8f26c6c38d6e0e22f50709a54b87b31864a8', 'width': 1200}, 'variants': {}}]}
Is anyone actually running an exo cluster?
1
I have seen many posts about configurations and things it can do, but none of them end in someone actually running an exo cluster.
2025-01-08T05:21:58
https://www.reddit.com/r/LocalLLaMA/comments/1hwcc6d/is_anyone_actually_running_an_exo_cluster/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwcc6d
false
null
t3_1hwcc6d
/r/LocalLLaMA/comments/1hwcc6d/is_anyone_actually_running_an_exo_cluster/
false
false
self
1
null
Experience implementing agent workflows with Llama 3.2
0
A big part of implementing an agent workflow in code is for the LLM to be able to call tools (aka functions). But Llama 3.2 does not natively support structured outputs, so there's no strict guarantee that the model will output something that matches a function call. So, could anyone share some experiences with trying to implement an agent workflow for a local model like Llama 3.2? Is the idea just to hope that the model spits back correct JSON? That feels risky to me because there's no "strictness" to it. I would also prefer not to use any frameworks.
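One framework-free pattern is validate-and-retry: parse the reply against a schema and, on failure, feed the validation error back so the model can self-correct. A minimal sketch using pydantic against an OpenAI-compatible local server (base_url and model name are assumptions; llama.cpp GBNF grammars or vLLM guided decoding are the options if you want a hard guarantee instead):

```python
# Validate-and-retry tool calling without a framework: pydantic checks the JSON,
# and on failure the validation error is fed back so the model can correct itself.
from openai import OpenAI
from pydantic import BaseModel, ValidationError

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
MODEL = "llama-3.2-3b-instruct"  # assumption: whatever your server exposes

class ToolCall(BaseModel):
    name: str
    arguments: dict

def get_tool_call(task: str, retries: int = 3) -> ToolCall:
    messages = [{"role": "user", "content":
                 f'Respond ONLY with JSON {{"name": ..., "arguments": {{...}}}}.\n'
                 f"Task: {task}"}]
    for _ in range(retries):
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        text = reply.choices[0].message.content
        try:
            return ToolCall.model_validate_json(text)  # raises on bad JSON/schema
        except ValidationError as e:
            messages += [{"role": "assistant", "content": text},
                         {"role": "user", "content": f"Invalid JSON: {e}. Try again."}]
    raise RuntimeError("model never produced valid JSON")
```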
2025-01-08T05:27:22
https://www.reddit.com/r/LocalLLaMA/comments/1hwcf9o/experience_implementing_agent_workflows_with/
Tonqer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwcf9o
false
null
t3_1hwcf9o
/r/LocalLLaMA/comments/1hwcf9o/experience_implementing_agent_workflows_with/
false
false
self
0
null
Kokoro is an 82M-param TTS model, Apache
75
[https://hf.co/hexgrad/Kokoro-82M](https://hf.co/hexgrad/Kokoro-82M) This is my first post and also self-promo since I trained this model; go check it out if you'd like. From the README: >**Kokoro** is a frontier TTS model for its size of **82 million parameters** (text in/audio out). On 25 Dec 2024, Kokoro v0.19 weights were permissively released in full fp32 precision under an Apache 2.0 license. As of 2 Jan 2025, 10 unique Voicepacks have been released, and a `.onnx` version of v0.19 is available. The architecture is mostly StyleTTS 2, and I'm not affiliated with the paper author: [https://github.com/yl4579/StyleTTS2](https://github.com/yl4579/StyleTTS2) If/when I get more synthetic audio, I do intend to release a sequel along with some major changes to tokenization (G2P). **To that end, if anyone is sitting on a large trove of synthetic audio—especially OpenAI's Advanced Voice Mode—please refer to this call for data if interested**: [https://hf.co/posts/hexgrad/418806998707773](https://hf.co/posts/hexgrad/418806998707773)
2025-01-08T05:40:28
https://www.reddit.com/r/LocalLLaMA/comments/1hwcn8b/kokoro_is_an_82mparam_tts_model_apache/
rzvzn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwcn8b
false
null
t3_1hwcn8b
/r/LocalLLaMA/comments/1hwcn8b/kokoro_is_an_82mparam_tts_model_apache/
false
false
self
75
{'enabled': False, 'images': [{'id': '7eehV2XcSO66YwpZvbmmLkh4WEE7p2glytEMg55eeAw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=108&crop=smart&auto=webp&s=92cfc3df092163df6e76edecf249be0243c6fb17', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=216&crop=smart&auto=webp&s=4fda7fe6de500bd32173f9eaf4c1dbeb853a0355', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=320&crop=smart&auto=webp&s=8867f0eb88644560eabc8a6b216bc7e5d8fe9521', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=640&crop=smart&auto=webp&s=0ddc1534af5f00261cdb1ed1b5eaa475f95f839a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=960&crop=smart&auto=webp&s=a5fd09c695aaeb1af8eecc126abf1d3a5ad39621', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=1080&crop=smart&auto=webp&s=9f97cff2f09fdbc84e403b7c09690957f67df949', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?auto=webp&s=fbec1ab60ce313a08de3b4e6b2ee1f34d687170a', 'width': 1200}, 'variants': {}}]}
HP announced a AMD based Generative AI machine with 128 GB Unified RAM (96GB VRAM) ahead of Nvidia Digits - We just missed it
560
96 GB of the 128 GB can be allocated as VRAM, making it able to run 70B models at q8 with ease. I am pretty sure Digits will use CUDA and/or TensorRT for optimized inferencing. I am wondering if this will use ROCm, or if we can just use CPU inferencing - wondering what the acceleration will be here. Anyone able to share insights?
2025-01-08T07:20:14
https://aecmag.com/workstations/hp-amd-ryzen-ai-max-pro-hp-zbook-ultra-g1a-hp-z2-mini-g1a/
quantier
aecmag.com
1970-01-01T00:00:00
0
{}
1hwe9mf
false
null
t3_1hwe9mf
/r/LocalLLaMA/comments/1hwe9mf/hp_announced_a_amd_based_generative_ai_machine/
false
false
https://a.thumbs.redditm…GrGIL2Cu3w_4.jpg
560
{'enabled': False, 'images': [{'id': 'XmXqBnfULREJd5RnK-U27WRZalAZz9PfODYN73ykEOY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2Uxr2fZXgwYpxUcnSif2gZmNvP23o2dpwlhS4x1dHZA.jpg?width=108&crop=smart&auto=webp&s=33b0278d8ab2893d81ac0da1e47aa5fbc4bf9aee', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2Uxr2fZXgwYpxUcnSif2gZmNvP23o2dpwlhS4x1dHZA.jpg?width=216&crop=smart&auto=webp&s=e27556b0ec1b47dd2ca32e289ace409a36245bf7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2Uxr2fZXgwYpxUcnSif2gZmNvP23o2dpwlhS4x1dHZA.jpg?width=320&crop=smart&auto=webp&s=69a22f27b1730d05fe74f7fe6e8b1caa724db995', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2Uxr2fZXgwYpxUcnSif2gZmNvP23o2dpwlhS4x1dHZA.jpg?width=640&crop=smart&auto=webp&s=54d46261045d9a2cee779ef1547c528c90021757', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2Uxr2fZXgwYpxUcnSif2gZmNvP23o2dpwlhS4x1dHZA.jpg?width=960&crop=smart&auto=webp&s=ca81301a4f5d959448e0c6b874d7e4284f7bbcca', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/2Uxr2fZXgwYpxUcnSif2gZmNvP23o2dpwlhS4x1dHZA.jpg?width=1080&crop=smart&auto=webp&s=8a96b6a28f42c143978821ef33f99622d87d18b6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/2Uxr2fZXgwYpxUcnSif2gZmNvP23o2dpwlhS4x1dHZA.jpg?auto=webp&s=ff684ef688ff8a4e72c25735d44526179bcacd49', 'width': 1920}, 'variants': {}}]}
Planning on experimenting with LLMs to build out chatbots. More expensive macbook vs saving for cloud credits?
1
[removed]
2025-01-08T07:21:39
https://www.reddit.com/r/LocalLLaMA/comments/1hweado/planning_on_experimenting_with_llms_to_build_out/
BhaiMadadKarde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hweado
false
null
t3_1hweado
/r/LocalLLaMA/comments/1hweado/planning_on_experimenting_with_llms_to_build_out/
false
false
self
1
null
Meta + Apple = Unlikely match made in heaven?
0
We all love local LLMs, open source and everything AI. Right now Anthropic, OpenAI + Microsoft, and Google are all in on proprietary large language models, apps, and developer tools. Apple, however, is basically just buying into Google (Search) and ChatGPT (LLM/AI) without really a chance of picking up the battle axe and winning over any of these players. Meta is betting on open source and going developer-friendly to enhance their underlying products instead of betting it all on "just AI", but they do not have Apple's hardware capacity/infrastructure; in this area Apple reigns supreme, closely followed by Google and Microsoft. Apple also produces some fantastic phones, makes computers and an OS loved by its users, and is often praised for both its design and marketing - areas which, tbh, Meta doesn't do half as well in. All of the above makes Apple and Meta such a perfect match right now. I could imagine a Mac Studio M4 with enhanced AI capabilities running a 200B+ parameter Meta Llama model to do incredible stuff locally. They could distill these huge models into smaller ones to run on consumer-grade M4 MacBooks and iPhones. A co-venture between Meta and Apple would definitely be a contender to the rest of the bunch and bring Apple up to speed. The collective power/compute and resources of Apple and Meta would be hard to stop. wdyt?
2025-01-08T08:15:34
https://www.reddit.com/r/LocalLLaMA/comments/1hwf48j/meta_apple_unlikely_match_made_in_heaven/
AlarBlip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwf48j
false
null
t3_1hwf48j
/r/LocalLLaMA/comments/1hwf48j/meta_apple_unlikely_match_made_in_heaven/
false
false
self
0
null
[Second Take] Kokoro-82M is an Apache TTS model
137
I trained this model recently: [https://huggingface.co/hexgrad/Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) Everything is in the README there, TLDR: Kokoro is a TTS model that is very good for its size. Apologies for the double-post, but the first one was cooking, and it suddenly got `ledeted` by `domeration` (yes, I'm `simpelling` on purpose, it will make sense soon). Last time I tried giving longer, meaningful replies to people in the comments, which kept getting `dashow-nabbed`, and when I edited the OP to include that word which must not be named, the whole post was poofed. This time I will shut up and let the post speak for itself, and you can find me on `sidcord` where we can speak more freely, since I appear to have GTA 5 stars over here. Finally, I am also collecting synthetic audio, see [https://hf.co/posts/hexgrad/418806998707773](https://hf.co/posts/hexgrad/418806998707773) if interested.
2025-01-08T08:16:07
https://www.reddit.com/r/LocalLLaMA/comments/1hwf4jm/second_take_kokoro82m_is_an_apache_tts_model/
rzvzn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwf4jm
false
null
t3_1hwf4jm
/r/LocalLLaMA/comments/1hwf4jm/second_take_kokoro82m_is_an_apache_tts_model/
false
false
self
137
{'enabled': False, 'images': [{'id': '7eehV2XcSO66YwpZvbmmLkh4WEE7p2glytEMg55eeAw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=108&crop=smart&auto=webp&s=92cfc3df092163df6e76edecf249be0243c6fb17', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=216&crop=smart&auto=webp&s=4fda7fe6de500bd32173f9eaf4c1dbeb853a0355', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=320&crop=smart&auto=webp&s=8867f0eb88644560eabc8a6b216bc7e5d8fe9521', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=640&crop=smart&auto=webp&s=0ddc1534af5f00261cdb1ed1b5eaa475f95f839a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=960&crop=smart&auto=webp&s=a5fd09c695aaeb1af8eecc126abf1d3a5ad39621', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?width=1080&crop=smart&auto=webp&s=9f97cff2f09fdbc84e403b7c09690957f67df949', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-2pWYGCifvFehZFZkfzyU3hR0oeCaVRb9B6PInkTYVI.jpg?auto=webp&s=fbec1ab60ce313a08de3b4e6b2ee1f34d687170a', 'width': 1200}, 'variants': {}}]}
I Tested Aider vs Cline using DeepSeek 3: Codebase >20k LOC...
80
Testing the very best open-source AI coding tools with medium-sized codebases. TL;DR: - this is especially focused on how the tools perform in 10k+ LOC codebases, not snake games - the two (Aider and Cline) are close (for my use cases); I prefer Aider - Aider is more flexible: it can run as a dev version allowing custom modifications (not custom instructions) - Qwen 2.5 Coder 32B is nowhere close to DeepSeek 3 in terms of coding in medium-to-large codebases - Aider is portable: I jump between IDEs and tools and don't want to be limited to VSCode/forks - Aider has scripting, enabling use in external agentic environments - Aider is more economical: it uses fewer tokens, even though Cline tried adding diffs - I can work with Aider on the same codebase concurrently - Claude 3.5 Sonnet is somehow clearly better at larger codebases than DeepSeek 3, though it's closer otherwise. I think we are ready to move away from benchmarking good coding LLMs and coding tools against simple tasks and start to think organizational/enterprise. I'm working on CrewAI + Aider; it looks promising. If interested, here's the test video: [https://youtu.be/e1oDWeYvPbY](https://youtu.be/e1oDWeYvPbY) Please let me know about your experience with using AI coding in more challenging environments.
2025-01-08T08:18:08
https://www.reddit.com/r/LocalLLaMA/comments/1hwf5lv/i_tested_aider_vs_cline_using_deepseek_3_codebase/
marvijo-software
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwf5lv
false
null
t3_1hwf5lv
/r/LocalLLaMA/comments/1hwf5lv/i_tested_aider_vs_cline_using_deepseek_3_codebase/
false
false
self
80
{'enabled': False, 'images': [{'id': '2b7gmA7YnFfV3glGbDubWBFCsbzfsBkv8MV3lRHApzo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/K8oTk9RNFwFeRXAn4lPQzLcZvxRf_cygeRKk5sXAaSY.jpg?width=108&crop=smart&auto=webp&s=78732e38ad8892b08b77a68cf635a4f96050d95a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/K8oTk9RNFwFeRXAn4lPQzLcZvxRf_cygeRKk5sXAaSY.jpg?width=216&crop=smart&auto=webp&s=c75768ec5c08054283b81b76071349ca68a3ac7e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/K8oTk9RNFwFeRXAn4lPQzLcZvxRf_cygeRKk5sXAaSY.jpg?width=320&crop=smart&auto=webp&s=93e413b6b56962344421e4adf29d6174d3c7df80', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/K8oTk9RNFwFeRXAn4lPQzLcZvxRf_cygeRKk5sXAaSY.jpg?auto=webp&s=3377de0b094bbcc325ffc5c2391b6ce0ca7454db', 'width': 480}, 'variants': {}}]}
Is coding more than a job skill today?
0
Is it more like a cultural way of life?
2025-01-08T08:23:26
https://www.reddit.com/r/LocalLLaMA/comments/1hwf8dz/is_coding_more_than_a_job_skill_today/
arkofthecovet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwf8dz
false
null
t3_1hwf8dz
/r/LocalLLaMA/comments/1hwf8dz/is_coding_more_than_a_job_skill_today/
false
false
self
0
null
NVIDIA Open Model License: NVIDIA Cosmos is a world foundation model trained on 20 million hours of video to build virtual worlds and generate photo-real, physically-based synthetic data for scientific and industrial testing.
185
https://www.nvidia.com/en-in/ai/cosmos/
2025-01-08T08:29:57
https://v.redd.it/lfzohbxndqbe1
Powerful-Solution646
/r/LocalLLaMA/comments/1hwfblu/nvidia_open_model_license_nvidia_cosmos_is_a/
1970-01-01T00:00:00
0
{}
1hwfblu
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lfzohbxndqbe1/DASHPlaylist.mpd?a=1739046608%2CN2QxZTVkYjFiOGM0Zjk0MjdmM2NiNGUwYjkzOGRiMmU3NGUzMjQxMjIwZDA0ZGYwMzI3ZDdkMmNlMDg0MGQ2Yw%3D%3D&v=1&f=sd', 'duration': 117, 'fallback_url': 'https://v.redd.it/lfzohbxndqbe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/lfzohbxndqbe1/HLSPlaylist.m3u8?a=1739046608%2CODYyM2E3MmUzNWJiOWZhNzhjNGJhNTI2M2YyNWI3M2I0M2RiOGYxMDdjMWE5YWQyNjc5NGJhNTE0NWM5MWI4Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lfzohbxndqbe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1hwfblu
/r/LocalLLaMA/comments/1hwfblu/nvidia_open_model_license_nvidia_cosmos_is_a/
false
false
https://external-preview…7a8c77f65449f6f3
185
{'enabled': False, 'images': [{'id': 'dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7.png?width=108&crop=smart&format=pjpg&auto=webp&s=4c6e16738af1295aea143eb5716d044d0a899cff', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7.png?width=216&crop=smart&format=pjpg&auto=webp&s=2fdbcf3a35decb347db287e22ba9659f040b0ec8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7.png?width=320&crop=smart&format=pjpg&auto=webp&s=56b6cf1ad2493c12cf59e7448f7873d2fd8e850b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7.png?width=640&crop=smart&format=pjpg&auto=webp&s=409556274e4d78232844d05b3a1c978f8fba453e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7.png?width=960&crop=smart&format=pjpg&auto=webp&s=bf07ab0b6795448627c07086e2677cc0725d6524', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ff6126170fcf2bde6fb7adff1c057afeb252b0d5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dGFrdTNlbm5kcWJlMeFSSXTYDvjzDDIYxHTRsBuU24PYEoa111CihFQLGiR7.png?format=pjpg&auto=webp&s=8fae73485a7b1d51be207d20a761b217107a6ce8', 'width': 1920}, 'variants': {}}]}
Tech lead of Qwen Team, Alibaba Group: "I often recommend people to read the blog of Anthropic to learn more about what agent really is. Then you will realize you should invest on it as much as possible this year." Blog linked in body text.
399
https://x.com/JustinLin610/status/1876324689657954413?t=rQiJk8V8N9-Rd8dcWJedww&s=19 https://www.anthropic.com/research/building-effective-agents
2025-01-08T08:50:35
https://i.redd.it/5lmmx4qchqbe1.jpeg
Powerful-Solution646
i.redd.it
1970-01-01T00:00:00
0
{}
1hwfm8k
false
null
t3_1hwfm8k
/r/LocalLLaMA/comments/1hwfm8k/tech_lead_of_qwen_team_alibaba_group_i_often/
false
false
https://b.thumbs.redditm…6CyryQ4K469k.jpg
399
{'enabled': True, 'images': [{'id': 'TkJkqOr4ioz-e6XUJaRxp3E3ckaA_ST522IeU-xBu3U', 'resolutions': [{'height': 173, 'url': 'https://preview.redd.it/5lmmx4qchqbe1.jpeg?width=108&crop=smart&auto=webp&s=ebd88d692072fb078207a7f38275c3c007d06bba', 'width': 108}, {'height': 346, 'url': 'https://preview.redd.it/5lmmx4qchqbe1.jpeg?width=216&crop=smart&auto=webp&s=920ff77019d1d38ea30832846eae045670977d4c', 'width': 216}, {'height': 513, 'url': 'https://preview.redd.it/5lmmx4qchqbe1.jpeg?width=320&crop=smart&auto=webp&s=49d570d73acb24f3b727dde0ba59e309044bb81f', 'width': 320}, {'height': 1027, 'url': 'https://preview.redd.it/5lmmx4qchqbe1.jpeg?width=640&crop=smart&auto=webp&s=5f0c18af44997be0f0628d875475982b7bf3b877', 'width': 640}, {'height': 1541, 'url': 'https://preview.redd.it/5lmmx4qchqbe1.jpeg?width=960&crop=smart&auto=webp&s=45f64a35a4f174bc58fc3d9e02e5f2206cf70861', 'width': 960}, {'height': 1734, 'url': 'https://preview.redd.it/5lmmx4qchqbe1.jpeg?width=1080&crop=smart&auto=webp&s=dedcff557fbb5748b8df1f3e4aafb56417e3349a', 'width': 1080}], 'source': {'height': 1734, 'url': 'https://preview.redd.it/5lmmx4qchqbe1.jpeg?auto=webp&s=57e07b5c48134671cf49e51e7f1d728bc0a4e6f9', 'width': 1080}, 'variants': {}}]}
Can vLLM offload (parts of) KV cache to CPU/RAM?
1
[removed]
2025-01-08T08:53:20
https://www.reddit.com/r/LocalLLaMA/comments/1hwfnpv/can_vllm_offload_parts_of_kv_cache_to_cpuram/
gartin336
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwfnpv
false
null
t3_1hwfnpv
/r/LocalLLaMA/comments/1hwfnpv/can_vllm_offload_parts_of_kv_cache_to_cpuram/
false
false
self
1
null
M1 ultra or project Digits by NVIDIA
1
[removed]
2025-01-08T09:03:50
https://www.reddit.com/r/LocalLLaMA/comments/1hwfth6/m1_ultra_or_project_digits_by_nvidia/
LeastExperience1579
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwfth6
false
null
t3_1hwfth6
/r/LocalLLaMA/comments/1hwfth6/m1_ultra_or_project_digits_by_nvidia/
false
false
self
1
null
Can't create DeepSeek account
1
[removed]
2025-01-08T09:07:01
https://www.reddit.com/r/LocalLLaMA/comments/1hwfv7t/cant_create_deepseek_account/
Funbben
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwfv7t
false
null
t3_1hwfv7t
/r/LocalLLaMA/comments/1hwfv7t/cant_create_deepseek_account/
false
false
https://b.thumbs.redditm…U2lxvrDVzYLM.jpg
1
null
Llama 3.1-405B - Write Sci-Fi story about cognitive wars of cryogenic chambers with people.
1
[removed]
2025-01-08T09:33:13
https://www.reddit.com/r/LocalLLaMA/comments/1hwg9ac/llama_31405b_write_scifi_story_about_cognitive/
Worldly_Evidence9113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwg9ac
false
null
t3_1hwg9ac
/r/LocalLLaMA/comments/1hwg9ac/llama_31405b_write_scifi_story_about_cognitive/
false
false
self
1
null
what is the state of art for local LLMs on android?
9
Hi, what is the current state of the art for LLMs that I can run on an Android phone (ARM64, maybe)? Which ones work well on devices with 8-16 GB of RAM? I want to run a voice assistant in some villages which don't have internet. I wanted to wire up offline Wikipedia (https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Offline_Wikipedia_for_Indian_Schools) to a voice LLM.
2025-01-08T09:43:00
https://www.reddit.com/r/LocalLLaMA/comments/1hwgeke/what_is_the_state_of_art_for_local_llms_on_android/
sandys1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwgeke
false
null
t3_1hwgeke
/r/LocalLLaMA/comments/1hwgeke/what_is_the_state_of_art_for_local_llms_on_android/
false
false
self
9
null
I want to buy a laptop with the RTX 5090 to run AI, or should I take a look at AMD new Strix Point ?
0
So I am a newbie with AI. I want to run an LLM on my local machine; I can't build a desktop because of my job, and I don't really like cloud solutions. So an RTX 5090 laptop with 24 GB of VRAM should be fine if I want to run small-to-medium-size models, as long as I can fit the whole thing inside its VRAM, right? I want to play games and run VR too, so a gaming laptop would be nicer compared to something like the new $3,000 Nvidia Digits (it's also running on ARM, so yeah). What about the AMD option? I have heard that you can run 70B models with it. How important is CUDA for running LLMs? Thank you very much!
2025-01-08T09:45:32
https://www.reddit.com/r/LocalLLaMA/comments/1hwgfvy/i_want_to_buy_a_laptop_with_the_rtx_5090_to_run/
Zerohero2112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwgfvy
false
null
t3_1hwgfvy
/r/LocalLLaMA/comments/1hwgfvy/i_want_to_buy_a_laptop_with_the_rtx_5090_to_run/
false
false
self
0
null
Llama 3.1 Nemotron - Write Sci-Fi story about cognitive wars of cryogenic chambers with people.
1
[removed]
2025-01-08T09:49:34
https://www.reddit.com/r/LocalLLaMA/comments/1hwgi0i/llama_31_nemotron_write_scifi_story_about/
Worldly_Evidence9113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwgi0i
false
null
t3_1hwgi0i
/r/LocalLLaMA/comments/1hwgi0i/llama_31_nemotron_write_scifi_story_about/
false
false
self
1
null
I made VS Code extension that connects the editor with AI Studio!
4
Gemini Coder got new powers: it now integrates with AI Studio for Copilot-like assistance. How it works: - copies context (open files, selection in the repository tree) and your custom instruction to the clipboard - opens AI Studio/DeepSeek in your default browser, pastes the copied text into the field, and submits - all hands-free. All extension code is open source (MIT) and really, super, super lightweight. https://preview.redd.it/d4l3b7p8sqbe1.png?width=2405&format=png&auto=webp&s=52ed0da25323fc062327dcf98384bd4f7a68ce1b I hope someone finds it useful.
2025-01-08T09:51:43
https://www.reddit.com/r/LocalLLaMA/comments/1hwgj51/i_made_vs_code_extension_that_connects_the_editor/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwgj51
false
null
t3_1hwgj51
/r/LocalLLaMA/comments/1hwgj51/i_made_vs_code_extension_that_connects_the_editor/
false
false
https://b.thumbs.redditm…8jKbshjwYpjE.jpg
4
null
I'm thinking to upgrade my laptop's RAM to run better models
1
[removed]
2025-01-08T10:19:21
https://www.reddit.com/r/LocalLLaMA/comments/1hwgyby/im_thinking_to_upgrade_my_laptops_ram_to_run/
Abody7077
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwgyby
false
null
t3_1hwgyby
/r/LocalLLaMA/comments/1hwgyby/im_thinking_to_upgrade_my_laptops_ram_to_run/
false
false
self
1
null
Distilled Financial LLMs
1
[removed]
2025-01-08T10:20:57
https://www.reddit.com/r/LocalLLaMA/comments/1hwgz7g/distilled_financial_llms/
Mindless-Tea7163
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwgz7g
false
null
t3_1hwgz7g
/r/LocalLLaMA/comments/1hwgz7g/distilled_financial_llms/
false
false
self
1
null