Dataset schema (column: dtype, observed range):

title: string, length 1 to 300
score: int64, 0 to 8.54k
selftext: string, length 0 to 40k
created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
url: string, length 0 to 878
author: string, length 3 to 20
domain: string, length 0 to 82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
gilded: int64, 0 to 2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646 to 1.8k
name: string, length 10
permalink: string, length 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4 to 213
ups: int64, 0 to 8.54k
preview: string, length 301 to 5.01k
An autonomous multi-turn tool-calling agentic base model for RL agent training
1
[removed]
2025-06-10T07:47:35
https://huggingface.co/eliuakk/mirau-agent-14b-base
EliaukMouse
huggingface.co
1970-01-01T00:00:00
0
{}
1l7svmk
false
null
t3_1l7svmk
/r/LocalLLaMA/comments/1l7svmk/an_autonomous_multiturn_toolcalling_agentic_base/
false
false
https://b.thumbs.redditm…s-5Jd4T0iQ6o.jpg
1
{'enabled': False, 'images': [{'id': 'AKaMSOfJYnlG078czfFbfWqAb0eBPGDcHKrAXmnU50U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=108&crop=smart&auto=webp&s=e9e90a25625ad3f9171819c90d87173ce47b20aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=216&crop=smart&auto=webp&s=113d5a22c282559523f7071bd18f075a1adeb4fe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=320&crop=smart&auto=webp&s=ca0236d18c5925fed7b96bf162c169d2f4631e11', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=640&crop=smart&auto=webp&s=f843f791022421e80147e86ad1a24e06209b5cd8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=960&crop=smart&auto=webp&s=9cea9d6b43507fab2619a1dbe1414da6bda156de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=1080&crop=smart&auto=webp&s=cc2d248c7d109c6e62b2de9fe7b74127bb26a91b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?auto=webp&s=8b4c2d15714bbf7cf2f67fabe38467149a7fb69c', 'width': 1200}, 'variants': {}}]}
What level can we expect a Deepseek R2 rollout to clash with?
0
Is an Opus 4 / ChatGPT o4 level on writing/creativity/problem solving/coding possible? I cannot imagine how large R2 would need to be to match them in those fields.
2025-06-10T07:50:53
https://www.reddit.com/r/LocalLLaMA/comments/1l7sxe4/what_level_can_we_expect_a_deepseek_r2_rollout_to/
Caffdy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7sxe4
false
null
t3_1l7sxe4
/r/LocalLLaMA/comments/1l7sxe4/what_level_can_we_expect_a_deepseek_r2_rollout_to/
false
false
self
0
null
Updates to Apple's On-Device and Server Foundation Language Models
1
2025-06-10T07:52:22
https://machinelearning.apple.com/research/apple-foundation-models-2025-updates
cpldcpu
machinelearning.apple.com
1970-01-01T00:00:00
0
{}
1l7sy6m
false
null
t3_1l7sy6m
/r/LocalLLaMA/comments/1l7sy6m/updates_to_apples_ondevice_and_server_foundation/
false
false
https://b.thumbs.redditm…hM-sHCixDotc.jpg
1
{'enabled': False, 'images': [{'id': 'i1zWCudooEbEVKZGX6lWeQZBaUZDb_YHWhzbbT8hnsU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=108&crop=smart&auto=webp&s=d20f4791540ae8ac6bd7a69f1ee155c329c6ddd3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=216&crop=smart&auto=webp&s=15915a53d5c38ca7377c6cafc9c4d2bf31d370f2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=320&crop=smart&auto=webp&s=d1e340bd0a5ffaed4d7903eb6fd54f6b819a1f19', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=640&crop=smart&auto=webp&s=40e5ffcb35a4ce46e023c171c46660884aebba49', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=960&crop=smart&auto=webp&s=f4edc5ef480352298f79f4cc19c8b0173813d28c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=1080&crop=smart&auto=webp&s=ca387a9864c2800b307d6cfcc993afd8d0d6f0df', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?auto=webp&s=e32ed632169ab6ba70cb03567bd87b9a69fc1ad5', 'width': 1200}, 'variants': {}}]}
Apple is using a "Parallel-Track" MoE architecture in their edge models. Background information.
168
2025-06-10T07:53:57
https://machinelearning.apple.com/research/apple-foundation-models-2025-updates
cpldcpu
machinelearning.apple.com
1970-01-01T00:00:00
0
{}
1l7sz1l
false
null
t3_1l7sz1l
/r/LocalLLaMA/comments/1l7sz1l/apple_is_using_a_paralleltrack_moe_architecture/
false
false
https://b.thumbs.redditm…hM-sHCixDotc.jpg
168
{'enabled': False, 'images': [{'id': 'i1zWCudooEbEVKZGX6lWeQZBaUZDb_YHWhzbbT8hnsU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=108&crop=smart&auto=webp&s=d20f4791540ae8ac6bd7a69f1ee155c329c6ddd3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=216&crop=smart&auto=webp&s=15915a53d5c38ca7377c6cafc9c4d2bf31d370f2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=320&crop=smart&auto=webp&s=d1e340bd0a5ffaed4d7903eb6fd54f6b819a1f19', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=640&crop=smart&auto=webp&s=40e5ffcb35a4ce46e023c171c46660884aebba49', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=960&crop=smart&auto=webp&s=f4edc5ef480352298f79f4cc19c8b0173813d28c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?width=1080&crop=smart&auto=webp&s=ca387a9864c2800b307d6cfcc993afd8d0d6f0df', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xwvAu1ztoOvOhx7n2EoT8sRix4FTcRO810CrbO3VhbM.jpg?auto=webp&s=e32ed632169ab6ba70cb03567bd87b9a69fc1ad5', 'width': 1200}, 'variants': {}}]}
Ragbits PDF document ingestion
1
[removed]
2025-06-10T07:59:26
https://www.reddit.com/r/LocalLLaMA/comments/1l7t1uu/ragbits_pdf_document_ingestion/
TheOneInfiniteC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7t1uu
false
null
t3_1l7t1uu
/r/LocalLLaMA/comments/1l7t1uu/ragbits_pdf_document_ingestion/
false
false
self
1
null
Lmarena censorship! They won't let me translate an article from Le Monde!
1
[removed]
2025-06-10T09:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1l7tynb/lmarena_censorship_they_wont_let_me_translate_an/
Salty-Garage7777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7tynb
false
null
t3_1l7tynb
/r/LocalLLaMA/comments/1l7tynb/lmarena_censorship_they_wont_let_me_translate_an/
false
false
self
1
{'enabled': False, 'images': [{'id': 'GYmsOdqaFPUGR-rEQiO1gnUjZfkHVHj01hMCYIQYWy8', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XO9iaLwFXasTgVLBRdr22ua642k1mWcP7F4qpJkiEkY.jpg?width=108&crop=smart&auto=webp&s=56bf7e61db5013dc8238ed949803b3232c4cccb6', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/XO9iaLwFXasTgVLBRdr22ua642k1mWcP7F4qpJkiEkY.jpg?width=216&crop=smart&auto=webp&s=f7535f0035c9fbb614434c4e1e85e4f8a2325641', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/XO9iaLwFXasTgVLBRdr22ua642k1mWcP7F4qpJkiEkY.jpg?width=320&crop=smart&auto=webp&s=5627868e4d0d1d8f90576e0d0a86af4cfe3241f8', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/XO9iaLwFXasTgVLBRdr22ua642k1mWcP7F4qpJkiEkY.jpg?width=640&crop=smart&auto=webp&s=f754cc8b6a91132ac4580270d97247dbae376869', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/XO9iaLwFXasTgVLBRdr22ua642k1mWcP7F4qpJkiEkY.jpg?width=960&crop=smart&auto=webp&s=d3c3b72ad83124835ae0a66ba17bb6a7f979b097', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/XO9iaLwFXasTgVLBRdr22ua642k1mWcP7F4qpJkiEkY.jpg?width=1080&crop=smart&auto=webp&s=f27c5696a2fa763a399db93e09937c4efcf95477', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/XO9iaLwFXasTgVLBRdr22ua642k1mWcP7F4qpJkiEkY.jpg?auto=webp&s=cfef3bb699ea217508b15a2397b675fdd7c5f927', 'width': 1440}, 'variants': {}}]}
A link to a post that was blocked in this group
1
[removed]
2025-06-10T09:08:41
https://www.reddit.com/r/LocalLLaMA/comments/1l7u1nn/a_link_to_a_post_that_was_blocked_in_this_group/
Salty-Garage7777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7u1nn
false
null
t3_1l7u1nn
/r/LocalLLaMA/comments/1l7u1nn/a_link_to_a_post_that_was_blocked_in_this_group/
false
false
self
1
null
Free local AI robotics hackathon this week. Come build some open source AI robots.
2
[removed]
2025-06-10T09:11:15
https://i.redd.it/0npcuf7hg26f1.png
Zealousideal-Cut590
i.redd.it
1970-01-01T00:00:00
0
{}
1l7u310
false
null
t3_1l7u310
/r/LocalLLaMA/comments/1l7u310/free_local_ai_robotics_hackathon_this_week_come/
false
false
https://external-preview…b3a56d913c3309c0
2
{'enabled': True, 'images': [{'id': 'j79xVvH9Lo1Dx3TeIKc5olsiYSyt7Zvtv7LI6nIuHzo', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0npcuf7hg26f1.png?width=108&crop=smart&auto=webp&s=d19e6b37a8320cfe4ae63cfe2d1402983ca3c0da', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0npcuf7hg26f1.png?width=216&crop=smart&auto=webp&s=a628fce38310683df11a3ef25bf2aa6462ac4409', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/0npcuf7hg26f1.png?width=320&crop=smart&auto=webp&s=d0cd2dccadfd32b6ffa92139ca7b44473e0f94ea', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/0npcuf7hg26f1.png?width=640&crop=smart&auto=webp&s=4c55f73cf38b629ae2c8880b26c06eda19948f88', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/0npcuf7hg26f1.png?width=960&crop=smart&auto=webp&s=028af13b87d54a2ecbed63128afe25b1bfd9bd05', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/0npcuf7hg26f1.png?width=1080&crop=smart&auto=webp&s=452d81368beb62a206183e72267f2c72d8ceb28b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/0npcuf7hg26f1.png?auto=webp&s=d8406c9da2e6dbeea2af373885015eb51b8cec52', 'width': 1080}, 'variants': {}}]}
Local Inference using llama.cpp on Unity - Nobodywho
1
[removed]
2025-06-10T09:35:32
https://www.reddit.com/r/LocalLLaMA/comments/1l7ufsd/local_inference_using_llamacpp_on_unity_nobodywho/
No_Abbreviations_532
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7ufsd
false
null
t3_1l7ufsd
/r/LocalLLaMA/comments/1l7ufsd/local_inference_using_llamacpp_on_unity_nobodywho/
false
false
self
1
{'enabled': False, 'images': [{'id': 'b-NLMPLAOcZS1vaivS604O1hD3C8rg2sL58fBYrB4TU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LwMg0CoATD9RMDX1yRkXW3PkB6-epQAGx4xfUfKRJFE.jpg?width=108&crop=smart&auto=webp&s=f91ca4caafba34ba84ad8b92726027b05cd45ad3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LwMg0CoATD9RMDX1yRkXW3PkB6-epQAGx4xfUfKRJFE.jpg?width=216&crop=smart&auto=webp&s=8819f3556e6bd4c3b7660f04e591682580d8cb11', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LwMg0CoATD9RMDX1yRkXW3PkB6-epQAGx4xfUfKRJFE.jpg?width=320&crop=smart&auto=webp&s=e1a0107bbe73bec36d6569fe3d26c24416a92140', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LwMg0CoATD9RMDX1yRkXW3PkB6-epQAGx4xfUfKRJFE.jpg?width=640&crop=smart&auto=webp&s=beb79b2c1f5ca21954777697b0c51be18fd5b268', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LwMg0CoATD9RMDX1yRkXW3PkB6-epQAGx4xfUfKRJFE.jpg?width=960&crop=smart&auto=webp&s=87d2f5f5b0f54405667e398cd882702f204efbd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LwMg0CoATD9RMDX1yRkXW3PkB6-epQAGx4xfUfKRJFE.jpg?width=1080&crop=smart&auto=webp&s=5b46a3e42d316833df13540ca1f804cf0edaf76e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LwMg0CoATD9RMDX1yRkXW3PkB6-epQAGx4xfUfKRJFE.jpg?auto=webp&s=cefd4be11f75ac695281e5f3107a4b57dcf11e2e', 'width': 1200}, 'variants': {}}]}
Having trouble deciding on model(s) for research assistance & image generation
1
Hi,

I've recently put together a new PC that I would like to use for running local AI models and for streaming games to my Steam Deck. For reference, the PC has an RTX 5060ti (16 GB VRAM), a Ryzen 7 5700x and 32 GB RAM, and is running Windows 11.

Regarding the AI part, I would like to interact with the AI models from laptops (and maybe phones?) on my home network, rather than from the PC directly. I don't expect any huge concurrent usage, just me and my fiancee taking turns at working with the AI.

I am not really sure where to get started for my AI use cases. I have downloaded Ollama on my PC and I was able to connect to it from my networked laptop via Chatbox. But I'm not sure how to set up these features:

- having the AI keep a kind of local knowledge base made up of scientific articles (PDFs mostly) that I feed it, so I can query it about those articles
- being able to attach PDFs to the AI chat window and have it summarize them or extract information from them
- having (free) access to online search engines like Wikipedia and DuckDuckGo
- generating images (once in a blue moon, but nice to have; won't be doing both scientific research and image generation at the same time)

Also, I am not even sure which models to use. I've tried asking Grok and Claude for recommendations, but they each recommend different models (e.g., for research Grok recommended Ollama 3 8b, Claude recommended Ollama 3.1 70b Q4 quantized). I'm not sure what to pick. I'm also not sure how to set up quantized models.

I am also not sure if it's possible to have research assistance and image generation available under the same UI. Ideally, I'd like a flow similar to Grok or ChatGPT's websites; I'm okay with writing a local website if need be.

I am a tech-savvy person, but I am very new to the local AI world. Up until now, I've only worked with paid models like Claude and so on. I would appreciate any pointers to help me get started.

So, is there any guide or any reference to get me started down this road?
2025-06-10T10:06:24
https://www.reddit.com/r/LocalLLaMA/comments/1l7uws7/having_trouble_deciding_on_models_for_research/
Senekrum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7uws7
false
null
t3_1l7uws7
/r/LocalLLaMA/comments/1l7uws7/having_trouble_deciding_on_models_for_research/
false
false
self
1
null
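The post above asks how to keep a local knowledge base of PDFs and query it through Ollama. A minimal sketch of the retrieval-augmented step is below, assuming an Ollama server on the default port; the model names (nomic-embed-text, llama3.1:8b) and the pre-chunked text are illustrative assumptions, not recommendations from the post, and PDF text extraction is omitted.

```python
# Minimal sketch of "query a local PDF knowledge base" against an Ollama server.
# Assumes: Ollama running on localhost:11434, an embedding model and a chat model pulled.
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def post(path, payload):
    req = urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(text):
    # Ollama's embeddings endpoint returns {"embedding": [...]}
    return post("/api/embeddings", {"model": "nomic-embed-text", "prompt": text})["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb + 1e-9)

# Pretend these chunks came from the user's PDFs (extraction/chunking omitted).
chunks = [
    "Methods section of paper A ...",
    "Results section of paper B ...",
]
index = [(c, embed(c)) for c in chunks]

def answer(question, top_k=2):
    q = embed(question)
    ranked = sorted(index, key=lambda cv: -cosine(q, cv[1]))[:top_k]
    context = "\n\n".join(c for c, _ in ranked)
    reply = post("/api/chat", {
        "model": "llama3.1:8b",
        "stream": False,
        "messages": [
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    })
    return reply["message"]["content"]

print(answer("What does paper A conclude?"))
```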
Having trouble setting up local LLM(s) for research assistance and image generation
2
Hi,

I've recently put together a new PC that I would like to use for running local AI models and for streaming games to my Steam Deck. For reference, the PC has an RTX 5060ti (16 GB VRAM), a Ryzen 7 5700x and 32 GB RAM, and is running Windows 11.

Regarding the AI part, I would like to interact with the AI models from laptops (and maybe phones?) on my home network, rather than from the PC directly. I don't expect any huge concurrent usage, just me and my fiancee taking turns at working with the AI.

I am not really sure where to get started for my AI use cases. I have downloaded Ollama on my PC and I was able to connect to it from my networked laptop via Chatbox. But I'm not sure how to set up these features:

- having the AI keep a kind of local knowledge base made up of scientific articles (PDFs mostly) that I feed it, so I can query it about those articles
- being able to attach PDFs to the AI chat window and have it summarize them or extract information from them
- having (free) access to online search engines like Wikipedia and DuckDuckGo
- generating images (once in a blue moon, but nice to have; won't be doing both scientific research and image generation at the same time)

Also, I am not even sure which models to use. I've tried asking Grok and Claude for recommendations, but they each recommend different models (e.g., for research Grok recommended Ollama 3 8b, Claude recommended Ollama 3.1 70b Q4 quantized). I'm not sure what to pick. I'm also not sure how to set up quantized models.

I am also not sure if it's possible to have research assistance and image generation available under the same UI. Ideally, I'd like a flow similar to Grok or ChatGPT's websites; I'm okay with writing a local website if need be.

I am a tech-savvy person, but I am very new to the local AI world. Up until now, I've only worked with paid models like Claude and so on. I would appreciate any pointers to help me get started.

So, is there any guide or any reference to get me started down this road? Thanks very much for your help.
2025-06-10T10:07:26
https://www.reddit.com/r/LocalLLaMA/comments/1l7uxda/having_trouble_setting_up_local_llms_for_research/
Senekrum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7uxda
false
null
t3_1l7uxda
/r/LocalLLaMA/comments/1l7uxda/having_trouble_setting_up_local_llms_for_research/
false
false
self
2
null
An autonomous multi-turn tool-calling base model for RL agent training
1
[removed]
2025-06-10T10:24:16
https://huggingface.co/eliuakk/mirau-agent-14b-base
EliaukMouse
huggingface.co
1970-01-01T00:00:00
0
{}
1l7v74d
false
null
t3_1l7v74d
/r/LocalLLaMA/comments/1l7v74d/an_autonomous_multiturn_toolcalling_base_model/
false
false
https://b.thumbs.redditm…s-5Jd4T0iQ6o.jpg
1
{'enabled': False, 'images': [{'id': 'AKaMSOfJYnlG078czfFbfWqAb0eBPGDcHKrAXmnU50U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=108&crop=smart&auto=webp&s=e9e90a25625ad3f9171819c90d87173ce47b20aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=216&crop=smart&auto=webp&s=113d5a22c282559523f7071bd18f075a1adeb4fe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=320&crop=smart&auto=webp&s=ca0236d18c5925fed7b96bf162c169d2f4631e11', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=640&crop=smart&auto=webp&s=f843f791022421e80147e86ad1a24e06209b5cd8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=960&crop=smart&auto=webp&s=9cea9d6b43507fab2619a1dbe1414da6bda156de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=1080&crop=smart&auto=webp&s=cc2d248c7d109c6e62b2de9fe7b74127bb26a91b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?auto=webp&s=8b4c2d15714bbf7cf2f67fabe38467149a7fb69c', 'width': 1200}, 'variants': {}}]}
A multi-turn tool-calling base model for RL agent training
11
2025-06-10T10:28:22
https://huggingface.co/eliuakk/mirau-agent-14b-base
EliaukMouse
huggingface.co
1970-01-01T00:00:00
0
{}
1l7v9gf
false
null
t3_1l7v9gf
/r/LocalLLaMA/comments/1l7v9gf/a_multiturn_toolcalling_base_model_for_rl_agent/
false
false
https://b.thumbs.redditm…s-5Jd4T0iQ6o.jpg
11
{'enabled': False, 'images': [{'id': 'AKaMSOfJYnlG078czfFbfWqAb0eBPGDcHKrAXmnU50U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=108&crop=smart&auto=webp&s=e9e90a25625ad3f9171819c90d87173ce47b20aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=216&crop=smart&auto=webp&s=113d5a22c282559523f7071bd18f075a1adeb4fe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=320&crop=smart&auto=webp&s=ca0236d18c5925fed7b96bf162c169d2f4631e11', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=640&crop=smart&auto=webp&s=f843f791022421e80147e86ad1a24e06209b5cd8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=960&crop=smart&auto=webp&s=9cea9d6b43507fab2619a1dbe1414da6bda156de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?width=1080&crop=smart&auto=webp&s=cc2d248c7d109c6e62b2de9fe7b74127bb26a91b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dNdqJyhtXwj87G8KuNit74iQ4LPkok_1Pa60DPDQATc.jpg?auto=webp&s=8b4c2d15714bbf7cf2f67fabe38467149a7fb69c', 'width': 1200}, 'variants': {}}]}
What is the cheapest setup to host devstral model and use this server on lan network for coding at home?
1
[removed]
2025-06-10T11:00:09
https://www.reddit.com/r/LocalLLaMA/comments/1l7vsf2/what_is_the_cheapest_setup_to_host_devstral_model/
PreparationTrue9138
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7vsf2
false
null
t3_1l7vsf2
/r/LocalLLaMA/comments/1l7vsf2/what_is_the_cheapest_setup_to_host_devstral_model/
false
false
self
1
null
I am using nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1; after cloning all of its files, it is unable to find 'dual_hybrid_vit.py', but that file does not appear in the Files section on Hugging Face. I can't figure out why it raises this error or how to fix it.
1
[removed]
2025-06-10T11:29:03
https://www.reddit.com/r/LocalLLaMA/comments/1l7wb3t/i_am_using_nvidiallama31nemotronnanovl8bv1_after/
Unique_Plenty_5261
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7wb3t
false
null
t3_1l7wb3t
/r/LocalLLaMA/comments/1l7wb3t/i_am_using_nvidiallama31nemotronnanovl8bv1_after/
false
false
self
1
null
Data prep using natural language prompts
1
[removed]
2025-06-10T11:47:19
https://www.reddit.com/r/LocalLLaMA/comments/1l7wn62/data_prep_using_natural_language_prompts/
FitImprovement3420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7wn62
false
null
t3_1l7wn62
/r/LocalLLaMA/comments/1l7wn62/data_prep_using_natural_language_prompts/
false
false
self
1
null
SERAX is a text data format built for AI-generated content.
19
2025-06-10T11:48:07
https://github.com/vantige-ai/serax
Mundane_Ad8936
github.com
1970-01-01T00:00:00
0
{}
1l7wnpe
false
null
t3_1l7wnpe
/r/LocalLLaMA/comments/1l7wnpe/serax_is_a_text_data_format_built_for_aigenerated/
false
false
https://b.thumbs.redditm…jopPyhqBJ8GY.jpg
19
{'enabled': False, 'images': [{'id': 'WfO101n2RsEpq2JqpztWVhwPv1HD0w3Zti1pwhIgaDc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iPoqUSVp4rUtU8gZ1RTlqBQPVZg8eaEZ2QFMKyFv5z4.jpg?width=108&crop=smart&auto=webp&s=5be071e7029f33372ca937bb38419059ed100b4b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iPoqUSVp4rUtU8gZ1RTlqBQPVZg8eaEZ2QFMKyFv5z4.jpg?width=216&crop=smart&auto=webp&s=a38f317542c348e022c59a1f94ca94d971caeb30', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iPoqUSVp4rUtU8gZ1RTlqBQPVZg8eaEZ2QFMKyFv5z4.jpg?width=320&crop=smart&auto=webp&s=5b1b8ae2559f86ba92b72784d0d5bf9a21a1c275', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iPoqUSVp4rUtU8gZ1RTlqBQPVZg8eaEZ2QFMKyFv5z4.jpg?width=640&crop=smart&auto=webp&s=250ad2675bd529f48ef6d26fef6df71d2ea0ec6d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iPoqUSVp4rUtU8gZ1RTlqBQPVZg8eaEZ2QFMKyFv5z4.jpg?width=960&crop=smart&auto=webp&s=a95fa677fff0e3edf539c83484544ccc68b3efcb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iPoqUSVp4rUtU8gZ1RTlqBQPVZg8eaEZ2QFMKyFv5z4.jpg?width=1080&crop=smart&auto=webp&s=b0c15564d90465eda3d3463690c1ae8bb6bf08b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iPoqUSVp4rUtU8gZ1RTlqBQPVZg8eaEZ2QFMKyFv5z4.jpg?auto=webp&s=3783051abd697c4e2a68d3e60c490b8722a410c7', 'width': 1200}, 'variants': {}}]}
Data manipulation using natural language prompts
1
[removed]
2025-06-10T11:50:43
https://www.reddit.com/r/LocalLLaMA/comments/1l7wphl/data_manipulation_using_natural_language_prompts/
Informal_Exit3592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7wphl
false
null
t3_1l7wphl
/r/LocalLLaMA/comments/1l7wphl/data_manipulation_using_natural_language_prompts/
false
false
self
1
null
Data manipulation using natural language prompts
1
[removed]
2025-06-10T11:57:15
https://www.reddit.com/r/LocalLLaMA/comments/1l7wtux/data_manipulation_using_natural_language_prompts/
felixbrockm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7wtux
false
null
t3_1l7wtux
/r/LocalLLaMA/comments/1l7wtux/data_manipulation_using_natural_language_prompts/
false
false
self
1
null
Computer Agent - a Hugging Face Space by smolagents
4
Is there a repo for this implementation?
2025-06-10T11:58:29
https://huggingface.co/spaces/smolagents/computer-agent
mr_house7
huggingface.co
1970-01-01T00:00:00
0
{}
1l7wuou
false
null
t3_1l7wuou
/r/LocalLLaMA/comments/1l7wuou/computer_agent_a_hugging_face_space_by_smolagents/
false
false
https://external-preview…8b201820298f48e1
4
{'enabled': False, 'images': [{'id': 'HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY.png?width=108&crop=smart&auto=webp&s=8d63bbfebee38f767854629e96d9f0be8abff6be', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY.png?width=216&crop=smart&auto=webp&s=695c209891bd556daf19628aa84395c3959660aa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY.png?width=320&crop=smart&auto=webp&s=ea4f50225e7e15372cc307efcecbb4ef93c887a3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY.png?width=640&crop=smart&auto=webp&s=1cbc9041e2665704bb348ebd90cd258b532af68d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY.png?width=960&crop=smart&auto=webp&s=e08ad6bb511f7e8a82cce2495f56a48310ea38ec', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY.png?width=1080&crop=smart&auto=webp&s=cbad946ec80b6158e973b5e1c327e722b96baf69', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HjhlQ-8BnpJ0ISZyBPdoajtdv6gZl6Lmx-LVmqi0ogY.png?auto=webp&s=89e74d93fd33e47c1bb93875272c288503116b5e', 'width': 1200}, 'variants': {}}]}
MiniCPM4: Ultra-Efficient LLMs on End Devices
48
MiniCPM4 has arrived on Hugging Face: a new family of ultra-efficient large language models (LLMs) explicitly designed for end-side devices.

Paper: [https://huggingface.co/papers/2506.07900](https://huggingface.co/papers/2506.07900)

Weights: [https://huggingface.co/collections/openbmb/minicpm4-6841ab29d180257e940baa9b](https://huggingface.co/collections/openbmb/minicpm4-6841ab29d180257e940baa9b)
2025-06-10T12:31:22
https://www.reddit.com/r/LocalLLaMA/comments/1l7xick/minicpm4_ultraefficient_llms_on_end_devices/
ApprehensiveAd3629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7xick
false
null
t3_1l7xick
/r/LocalLLaMA/comments/1l7xick/minicpm4_ultraefficient_llms_on_end_devices/
false
false
self
48
{'enabled': False, 'images': [{'id': 'Mvyoq4hv-EqNr10e7oaPVziEw5PJv8pWhAG6S3xkjww', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/h7tU0a98XOaOcpxFdgSezJ1Tn5HImlax4fNZhDNw0wc.jpg?width=108&crop=smart&auto=webp&s=7a5c6334bd5e8076dad0e7917086f6067715837d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/h7tU0a98XOaOcpxFdgSezJ1Tn5HImlax4fNZhDNw0wc.jpg?width=216&crop=smart&auto=webp&s=3e2679f7e8720703857b25145050a46b60cf1f77', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/h7tU0a98XOaOcpxFdgSezJ1Tn5HImlax4fNZhDNw0wc.jpg?width=320&crop=smart&auto=webp&s=eabbc471890a42eb34bcf21e6c17bce323b8b5cc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/h7tU0a98XOaOcpxFdgSezJ1Tn5HImlax4fNZhDNw0wc.jpg?width=640&crop=smart&auto=webp&s=6f32b1f215af1b69aa419ce50da59a6cff13e41b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/h7tU0a98XOaOcpxFdgSezJ1Tn5HImlax4fNZhDNw0wc.jpg?width=960&crop=smart&auto=webp&s=2bae5a2d8353f440102a6a0589add3ab4d40f72f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/h7tU0a98XOaOcpxFdgSezJ1Tn5HImlax4fNZhDNw0wc.jpg?width=1080&crop=smart&auto=webp&s=4a95e4f36b768a245919d2454e1bffcc405a97fa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/h7tU0a98XOaOcpxFdgSezJ1Tn5HImlax4fNZhDNw0wc.jpg?auto=webp&s=580ca01bf3f3a6707204be72ddafe6eb956af6ea', 'width': 1200}, 'variants': {}}]}
Help me!!!
1
[removed]
2025-06-10T13:08:31
https://www.reddit.com/r/LocalLLaMA/comments/1l7yarp/help_me/
novamaster696969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7yarp
false
null
t3_1l7yarp
/r/LocalLLaMA/comments/1l7yarp/help_me/
false
false
self
1
null
SOTA for table info extraction?
3
Hi everyone,

I need to run a model locally (or securely on a cloud) that extracts data from a table. The table has a nested structure. I have run InternVL3 78B AWQ. It works okay, but it sometimes misses data or screws up the order. Most annoyingly, it misspells certain product names rather than outputting an exact replica of the source. It's almost like it slightly hallucinates, but it could be down to how the vision model is receiving the PNG? I am not sure whether it's a code issue or a model choice issue, or whether anything can be done at all!

It's quite annoying really - I've run many simple programs trying to extract this info accurately (PaddleOCR, Textract, Tabula, Power Query, etc.) but there are always slight issues with each! I thought it would be simple. Anyway, any insight or suggestions are very welcome. I have about 150GB of VRAM. I can't share the exact code, but this is essentially it:

import os
import json
import time
from pathlib import Path
from PIL import Image
from tqdm import tqdm

# Note: The vllm and transformers libraries need to be installed.
# pip install vllm transformers torch torchvision torchaudio Pillow
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer


# --- Main processing function ---
def run_inference():
    """
    This function contains the core logic for loading data, processing it in
    batches with a VLLM model, and saving the results.
    """
    # --- 1. Model and VLLM Configuration ---
    # TODO: User should replace this with their actual model ID.
    MODEL_ID = "your/model-id-here"
    MAX_MODEL_LEN = 10000

    # Set any necessary environment variables for VLLM
    os.environ['VLLM_ATTENTION_BACKEND'] = "FLASHINFER"

    print(f"Initializing LLM with model: {MODEL_ID}")
    llm = LLM(
        model=MODEL_ID,
        gpu_memory_utilization=0.95,
        max_model_len=MAX_MODEL_LEN,
        dtype="float16",
        enforce_eager=True,
        trust_remote_code=True,
        kv_cache_dtype="fp8",
        quantization="awq",
        tensor_parallel_size=1,
        limit_mm_per_prompt={"image": 1, "video": 0},  # dict form for the Python API
    )

    # --- 2. Anonymized Prompt Templates and Examples ---
    # This dictionary holds the structure for different document types.
prompt_dict = { "document_type_A": { "fields": [ "Field1", "Field2", "Field3", "Field4", "Field5", "Field6", "Field7", "Field8", "Field9", "Field10", "Field11", "Field12", "Field13", "Field14", "Field15", "Field16", "Field17", "Field18" ], "json": [ { "Field1": "Value 1", "Field2": "Some Company Inc.", "Field3": "2023-01-01", "Field4": "INV-12345", "Field5": "SKU-001", "Field6": "300", "Field7": "Product A", "Field8": "10.50", "Field9": "3150.00", "Field10": "Box", "Field11": "0", "Field12": "0.00", "Field13": "BATCH-XYZ", "Field14": "550.00", "Field15": "5500.00", "Field16": "0.00", "Field17": "6050.00", "Field18": "123456789" }, { "Field1": "Value 1", "Field2": "Some Company Inc.", "Field3": "2023-01-01", "Field4": "INV-12345", "Field5": "SKU-002", "Field6": "2000", "Field7": "Product B", "Field8": "1.25", "Field9": "2500.00", "Field10": "Unit", "Field11": "0", "Field12": "0.00", "Field13": "BATCH-ABC", "Field14": "550.00", "Field15": "5500.00", "Field16": "0.00", "Field17": "6050.00", "Field18": "123456789" } ] }, "document_type_B": { "fields": ["ID", "Officer", "Destination", "ItemNo", "ItemName", "AssetPrice", "Quantity", "Price", "Unit"], "json": [ {"ID": "21341", "Officer": "John Doe", "Destination": "Main Warehouse", "ItemNo": 1, "ItemName": "Product C", "AssetPrice": "", "Quantity": "25", "Price": "12.31", "Unit": "BOTTLE"}, {"ID": "", "Officer": "Jane Smith", "Destination": "Branch Office", "ItemNo": 5, "ItemName": "Product D", "AssetPrice": "", "Quantity": "125", "Price": "142.31", "Unit": "TABLET"} ] } } # --- 3. Image Loading --- # TODO: User should place their image files in this directory. IMAGE_DIRECTORY = "./images_to_process" processed_data = [] image_dir = Path(IMAGE_DIRECTORY) if not image_dir.exists(): print(f"Error: Image directory not found at '{IMAGE_DIRECTORY}'") print("Please create it and add your images.") return print(f"Loading images from '{IMAGE_DIRECTORY}'...") image_files = list(image_dir.glob('*.jpg')) + list(image_dir.glob('*.jpeg')) + list(image_dir.glob('*.png')) for p in tqdm(image_files, desc="Loading images"): processed_data.append({ "filename": p.name, "image_object": Image.open(p).convert("RGB") }) print(f"Loaded {len(processed_data)} images.") if not processed_data: print("No images found to process. Exiting.") return # --- 4. Prompt Generation and Batch Processing --- extraction_instruction = """<image> Analyze the document in the image. Your task is to extract information into a structured JSON list based on the fields provided. Your goal is to identify every distinct item row in the main table. For **each and every item row**, you will create one complete JSON object. To do this correctly, follow this two-step process for each item: 1. **Identify Shared Information:** First, locate the information that is shared across all items. This data is usually at the top of the document (like `Field2`, `Field3`, `Field4`) or in the summary at the bottom (like `Field15`, `Field14`, `Field17`). 2. **Identify Row-Specific Information:** Second, extract the data that is unique to that specific item's row in the table (like `Field5`, `Field7`, `Field6`, `Field9`). 3. **Combine and Construct:** Finally, construct a single JSON object for that item. This object **must** contain both the shared information from step 1 and the row-specific information from step 2. The shared values must be repeated for every item's JSON object. The fields to extract for each object are: {ext} If a value for a field cannot be found, use an empty string "" as seen in the document. 
You are copying the data verbatim making no changes or adjustments to the strings/numbers. Still copy data even if the value is "0". Format the entire output as a single JSON list. Here is an example of the expected output format, based on the first two items from the image: {ex} Remember: ONLY OUTPUT THE VALID JSON LIST. ALL VALUES SHOULD BE STRINGS. Do not include any text before or after the list.""" # VLLM Sampling Parameters SAMPLING_TEMP = 0.8 MAX_NEW_TOKENS = MAX_MODEL_LEN - 1500 stop_tokens = ["<|endoftext|>", "<|im_start|>", "<|im_end|>"] tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True) stop_token_ids = [tokenizer.convert_tokens_to_ids(i) for i in stop_tokens] sampling_params = SamplingParams(temperature=SAMPLING_TEMP, max_tokens=MAX_NEW_TOKENS, stop_token_ids=stop_token_ids) # Batching Configuration BATCH_SIZE = 8 all_results_with_filenames = [] batched_filenames_list = [] # This script will process all images using one document type. # In the original script, this was hardcoded. doc_type_key = "document_type_A" print(f"Using prompt template for: '{doc_type_key}'") # Pre-calculate parts of the prompt that are constant for the chosen document type ext = ", ".join([f"'{field}'" for field in prompt_dict[doc_type_key]['fields']]) ex_str = json.dumps(prompt_dict[doc_type_key]['json'], indent=2) user_content_for_group = extraction_instruction.replace("{ext}", ext).replace("{ex}", ex_str) num_total_images = len(processed_data) num_batches = (num_total_images + BATCH_SIZE - 1) // BATCH_SIZE print(f"Starting generation for {num_total_images} images in {num_batches} batches...") for i in tqdm(range(0, num_total_images, BATCH_SIZE), total=num_batches, desc=f"Processing batches"): batch_image_items = processed_data[i:i + BATCH_SIZE] if not batch_image_items: continue current_batch_messages = [] current_batch_filenames = [item['filename'] for item in batch_image_items] batched_filenames_list.append(current_batch_filenames) for image_item in batch_image_items: # The user_content is the same for all images in this group message_for_template = [{'role': 'user', 'content': user_content_for_group}] prompt_text = tokenizer.apply_chat_template( message_for_template, tokenize=False, add_generation_prompt=True ) current_batch_messages.append({ "prompt": prompt_text, "multi_modal_data": {"image": image_item['image_object']} }) if not current_batch_messages: continue # Generate outputs for the entire batch batch_model_outputs = llm.generate(current_batch_messages, sampling_params, use_tqdm=False) # Associate outputs with filenames for this batch for idx, model_output_item in enumerate(batch_model_outputs): all_results_with_filenames.append({ "filename": current_batch_filenames[idx], "generated_text": model_output_item.outputs[0].text }) print("Finished generating all outputs.") # --- 5. Save Results --- # The original script encrypted the output. Here, we save it as a simple JSON file. 
results_dir = "./output" os.makedirs(results_dir, exist_ok=True) # Save the main results output_filename = os.path.join(results_dir, "extraction_results.json") with open(output_filename, "w", encoding="utf-8") as f: json.dump(all_results_with_filenames, f, indent=2, ensure_ascii=False) print(f"Saved all results to {output_filename}") # Save the list of filenames per batch filenames_output_path = os.path.join(results_dir, "batched_filenames.json") with open(filenames_output_path, "w", encoding="utf-8") as f: json.dump(batched_filenames_list, f, indent=2) print(f"Saved batched filenames to {filenames_output_path}") if __name__ == "__main__": run_inference()
2025-06-10T13:15:52
https://www.reddit.com/r/LocalLLaMA/comments/1l7ygph/sota_for_table_info_extraction/
Moreh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7ygph
false
null
t3_1l7ygph
/r/LocalLLaMA/comments/1l7ygph/sota_for_table_info_extraction/
false
false
self
3
null
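One option not in the script above, offered here only as a hypothetical add-on: recent vLLM releases support guided (schema-constrained) decoding, which forces the output to be a valid JSON list with the expected fields. It addresses the broken-JSON and field-order problems the post describes, though not the misread values themselves; the import path and exact API can vary by vLLM version, so treat this as a sketch.

```python
# Hypothetical tweak to the sampling setup above: constrain generation to a JSON schema.
# Requires a vLLM version that ships GuidedDecodingParams; verify against your install.
from vllm.sampling_params import GuidedDecodingParams, SamplingParams

row_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {f"Field{i}": {"type": "string"} for i in range(1, 19)},
        "required": [f"Field{i}" for i in range(1, 19)],
    },
}

sampling_params = SamplingParams(
    temperature=0.0,  # deterministic decoding tends to help verbatim copying
    max_tokens=8000,
    guided_decoding=GuidedDecodingParams(json=row_schema),
)
```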
Everything you wanted to know about Apple’s MLX
72
[https://www.youtube.com/watch?v=tn2Hvw7eCsw](https://www.youtube.com/watch?v=tn2Hvw7eCsw)

Cool, you can even do dynamic quantization yourself?! Lots of little nuggets in this video.
2025-06-10T13:29:35
https://www.reddit.com/r/LocalLLaMA/comments/1l7yrni/everything_you_wanted_to_know_about_apples_mlx/
Careless_Garlic1438
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7yrni
false
null
t3_1l7yrni
/r/LocalLLaMA/comments/1l7yrni/everything_you_wanted_to_know_about_apples_mlx/
false
false
self
72
{'enabled': False, 'images': [{'id': 'NjVHaItXS-TJ0D9-7VBRMbmd6mPnwXzdu7I9Q88qMUo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/CXs79lMk8eceG0fPpwyg9D7nrDHi5t7gL-0mSG2LgMY.jpg?width=108&crop=smart&auto=webp&s=613ba3115bddf648a7857d6332803fa2b3aa2464', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/CXs79lMk8eceG0fPpwyg9D7nrDHi5t7gL-0mSG2LgMY.jpg?width=216&crop=smart&auto=webp&s=858ec7c15d1d8858137e6ec78b103b5ef451fad7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/CXs79lMk8eceG0fPpwyg9D7nrDHi5t7gL-0mSG2LgMY.jpg?width=320&crop=smart&auto=webp&s=85835401747fa5760ec276676a7514818b37bfa0', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/CXs79lMk8eceG0fPpwyg9D7nrDHi5t7gL-0mSG2LgMY.jpg?auto=webp&s=3133c20800c5eed7322723c620d20dbd3c64afba', 'width': 480}, 'variants': {}}]}
Data prep using natural language prompts
1
[removed]
2025-06-10T13:36:52
https://www.reddit.com/r/LocalLLaMA/comments/1l7yxj6/data_prep_using_natural_language_prompts/
felixbrockm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7yxj6
false
null
t3_1l7yxj6
/r/LocalLLaMA/comments/1l7yxj6/data_prep_using_natural_language_prompts/
false
false
self
1
null
Data prep using natural language prompts
1
[removed]
2025-06-10T13:39:12
https://www.reddit.com/r/LocalLLaMA/comments/1l7yzdv/data_prep_using_natural_language_prompts/
felixbrockm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7yzdv
false
null
t3_1l7yzdv
/r/LocalLLaMA/comments/1l7yzdv/data_prep_using_natural_language_prompts/
false
false
self
1
null
HDMI/DP Dummy Plugs for Multi-GPU Setups
3
Hey guys, quick question. I have a PC that I use for game streaming using Sunshine and for running local LLMs. I have an HDMI dummy plug on the graphics card to force hardware acceleration and allow Sunshine to grab the frame buffer. I just dropped another graphics card in for additional VRAM to run larger LLM models locally. Do I need to use an HDMI dummy plug on the second card as well? Both GPUs are 5070 Tis. I've loaded a large model across both cards and can see the VRAM allocation on the second card is working. I'm just not sure if the GPU is working at 100% for PP and TG, and I'm not entirely sure how I could make that determination. I've watched the GPU effective clocks and PCIe link speed in HWiNFO. Card 0 holds 32 GT/s PCIe speed and a 2,500 MHz clock. GPU 1 will jump up to these values during prompt processing and token generation, then fall back down. GPU 0 is maintaining the stream, which could explain why it stays active. Anyway, I appreciate any help/thoughts you have.
2025-06-10T14:05:10
https://www.reddit.com/r/LocalLLaMA/comments/1l7zlgf/hdmidp_dummy_plugs_for_multigpu_setups/
PuffyCake23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7zlgf
false
null
t3_1l7zlgf
/r/LocalLLaMA/comments/1l7zlgf/hdmidp_dummy_plugs_for_multigpu_setups/
false
false
self
3
null
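For the "how could I make that determination" part of the post above, one straightforward check is to poll per-GPU core utilization and memory use with NVML while a prompt is being processed; if GPU 1's core utilization rises during prompt processing and token generation (not just its memory), it is doing real work. A minimal sketch (assuming `pip install nvidia-ml-py`):

```python
# Poll per-GPU utilization while the LLM is busy; run alongside your inference job.
import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(pynvml.nvmlDeviceGetCount())]

try:
    while True:
        readings = []
        for i, h in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(h)  # .gpu is a percentage
            mem = pynvml.nvmlDeviceGetMemoryInfo(h)         # bytes
            readings.append(f"GPU{i}: {util.gpu:3d}% core, {mem.used / 2**30:5.1f} GiB used")
        print(" | ".join(readings))
        time.sleep(0.5)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```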
mistralai/Magistral-Small-2506
477
Building upon Mistral Small 3.1 (2503), **with added reasoning capabilities**, undergoing SFT from Magistral Medium traces and RL on top, it's a small, efficient reasoning model with 24B parameters.

Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.

Learn more about Magistral in Mistral's [blog post](https://mistral.ai/news/magistral/).

# Key Features

* **Reasoning:** Capable of long chains of reasoning traces before providing an answer.
* **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
* **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
* **Context Window:** A 128k context window, **but** performance might degrade past **40k**. Hence we recommend setting the maximum model length to 40k.

# Benchmark Results

|Model|AIME24 pass@1|AIME25 pass@1|GPQA Diamond|Livecodebench (v5)|
|:-|:-|:-|:-|:-|
|Magistral Medium|73.59%|64.95%|70.83%|59.36%|
|Magistral Small|70.68%|62.76%|68.18%|55.84%|
2025-06-10T14:16:58
https://huggingface.co/mistralai/Magistral-Small-2506
yoracale
huggingface.co
1970-01-01T00:00:00
0
{}
1l7zvph
false
null
t3_1l7zvph
/r/LocalLLaMA/comments/1l7zvph/mistralaimagistralsmall2506/
false
false
default
477
null
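The post above recommends capping the context at 40k despite the 128k window. A minimal sketch of running the model locally with vLLM under that cap follows; the sampling values are illustrative assumptions rather than Mistral's official settings, and depending on your vLLM version the Mistral-format checkpoint may need extra loading flags (e.g. a Mistral tokenizer mode).

```python
# Sketch: serve Magistral Small locally with vLLM, applying the 40k cap from the model card.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Magistral-Small-2506",
    max_model_len=40000,        # 128k window, but the card suggests capping at 40k
    tensor_parallel_size=1,
    # tokenizer_mode="mistral",  # may be required for Mistral-format checkpoints
)

params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=4096)
out = llm.chat(
    [{"role": "user", "content": "How many r's are in 'strawberry'? Think step by step."}],
    params,
)
print(out[0].outputs[0].text)
```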
New open-weight reasoning model from Mistral
422
https://mistral.ai/news/magistral

And the paper: https://mistral.ai/static/research/magistral.pdf

What are your thoughts?
2025-06-10T14:20:17
https://www.reddit.com/r/LocalLLaMA/comments/1l7zyk2/new_openweight_reasoning_model_from_mistral/
AdIllustrious436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l7zyk2
false
null
t3_1l7zyk2
/r/LocalLLaMA/comments/1l7zyk2/new_openweight_reasoning_model_from_mistral/
false
false
self
422
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=216&crop=smart&auto=webp&s=4a5f46c5464cea72c64df6c73d58b15e102c5936', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=320&crop=smart&auto=webp&s=aa1e4abc763404a25bda9d60fe6440b747d6bae4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=640&crop=smart&auto=webp&s=122efd46018c04117aca71d80db3640d390428bd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=960&crop=smart&auto=webp&s=b53cfe1770ee2b37ce0f5b5e1b0fd67d3276a350', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=1080&crop=smart&auto=webp&s=278352f076c5bbdf8f6e7cecedab77d8794332ff', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?auto=webp&s=691d56b882a79feffdb4b780dc6a0db1b2c5d709', 'width': 4800}, 'variants': {}}]}
Magistral — the first reasoning model by Mistral AI
148
https://preview.redd.it/…tral-Small-2506)
2025-06-10T14:30:17
https://www.reddit.com/r/LocalLLaMA/comments/1l807c0/magistral_the_first_reasoning_model_by_mistral_ai/
touhidul002
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l807c0
false
null
t3_1l807c0
/r/LocalLLaMA/comments/1l807c0/magistral_the_first_reasoning_model_by_mistral_ai/
false
false
https://b.thumbs.redditm…7GAF_rNQP6Xo.jpg
148
{'enabled': False, 'images': [{'id': 'jOibS74isPbVc3DLbufoOPDQCs1Ev133Bb9JGRu_e68', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wBetgDbEeci5oMW6w7naZxLP2OrsdY6hVvjhJ2wPr2o.jpg?width=108&crop=smart&auto=webp&s=bea26a8d982345591484abf0781095784ba28eaa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wBetgDbEeci5oMW6w7naZxLP2OrsdY6hVvjhJ2wPr2o.jpg?width=216&crop=smart&auto=webp&s=228f8c7a9087efd88802a38a27b1459e0823a921', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wBetgDbEeci5oMW6w7naZxLP2OrsdY6hVvjhJ2wPr2o.jpg?width=320&crop=smart&auto=webp&s=1f415c579d3707f639e01d806376d663ad999810', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wBetgDbEeci5oMW6w7naZxLP2OrsdY6hVvjhJ2wPr2o.jpg?width=640&crop=smart&auto=webp&s=8e441d45d84498bd6b2be4adfe5a6738bbb82ed7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wBetgDbEeci5oMW6w7naZxLP2OrsdY6hVvjhJ2wPr2o.jpg?width=960&crop=smart&auto=webp&s=2a7a7b1bc9e7189c2c70d683ae819a4ebcf9c302', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wBetgDbEeci5oMW6w7naZxLP2OrsdY6hVvjhJ2wPr2o.jpg?width=1080&crop=smart&auto=webp&s=517492f52e3d25fe902b794690a33a0f2fa44c54', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wBetgDbEeci5oMW6w7naZxLP2OrsdY6hVvjhJ2wPr2o.jpg?auto=webp&s=d7d7d2329543a3544901f4ecb77ac2bac50a271b', 'width': 1200}, 'variants': {}}]}
Get Claude at Home - New UI generation model for Components and Tailwind with 32B, 14B, 8B, 4B
234
2025-06-10T14:31:57
https://v.redd.it/y74jt9x2y36f1
United-Rush4073
v.redd.it
1970-01-01T00:00:00
0
{}
1l808xc
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/y74jt9x2y36f1/DASHPlaylist.mpd?a=1752157929%2CMTg1NmM3OTJiODU0ZGM0ZjJhZDBlNzA5NTFmNzcyZDRmOTU1OWE2YmMwNGE0ZTM3OWE5YjlhMTEwZjhjOTRkYQ%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/y74jt9x2y36f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/y74jt9x2y36f1/HLSPlaylist.m3u8?a=1752157929%2CYWYzNDFmOTBkMTgzN2ViYTdiOTE3ODZhNzE3OWE2ZjkwMTAwNzE2ZmQwZDI2NDRjZjFhMWEzYTE4Y2FkNmY2Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y74jt9x2y36f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l808xc
/r/LocalLLaMA/comments/1l808xc/get_claude_at_home_new_ui_generation_model_for/
false
false
https://external-preview…6e79a224eab67766
234
{'enabled': False, 'images': [{'id': 'b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to.png?width=108&crop=smart&format=pjpg&auto=webp&s=7859af693cfff31be6c9f2f68efd133de9f54a2f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to.png?width=216&crop=smart&format=pjpg&auto=webp&s=48c2ba8d5ac0c3178f75348f94bd168670d6d9ea', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to.png?width=320&crop=smart&format=pjpg&auto=webp&s=99baa2303a1ede5323cd8c5e08b168af56a5ad9e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to.png?width=640&crop=smart&format=pjpg&auto=webp&s=6a3419427aba89c55247d6feae74867e7a1d6ce7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to.png?width=960&crop=smart&format=pjpg&auto=webp&s=b7b23ebdda3826f6a78bd945fe559054614b3b03', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a85a7f90654951f8615c65168ef8752a773fd721', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/b2RsbXo5eDJ5MzZmMRaVPCH-YMXWS5H8theQIxqXDZAve_bVCKxOsnpVL7to.png?format=pjpg&auto=webp&s=f857d0a00bc291c797aa2cd5b5775c8cfd8d5cfc', 'width': 1920}, 'variants': {}}]}
Google Gemma 3 27B: “I am aware that I am aware”
1
[removed]
2025-06-10T15:07:11
[deleted]
1970-01-01T00:00:00
0
{}
1l814v9
false
null
t3_1l814v9
/r/LocalLLaMA/comments/1l814v9/google_gemma_3_27b_i_am_aware_that_i_am_aware/
false
false
default
1
null
Workaround for Windows for CUDA Toolkit download page not working
4
Seems like the website is failing with a generic warning from Heroku; however, you can download it on Windows with winget from the command line: `winget install -e --id Nvidia.CUDA`
2025-06-10T15:19:24
https://www.reddit.com/r/LocalLLaMA/comments/1l81g5i/workaround_for_windows_for_cuda_toolkit_download/
madcow_bg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l81g5i
false
null
t3_1l81g5i
/r/LocalLLaMA/comments/1l81g5i/workaround_for_windows_for_cuda_toolkit_download/
false
false
self
4
null
You'll own nothing and be happy - $250 a month for this
0
2025-06-10T15:28:59
https://i.redd.it/8apwrncqb46f1.png
Kooky-Somewhere-2883
i.redd.it
1970-01-01T00:00:00
0
{}
1l81oyl
false
null
t3_1l81oyl
/r/LocalLLaMA/comments/1l81oyl/youll_own_nothing_and_be_happy_250_a_month_for/
false
false
https://b.thumbs.redditm…kcE34y28IVic.jpg
0
{'enabled': True, 'images': [{'id': 'Cgf6U-QU9jXGWkqwYvVcLaEofWCBG-OK8jJsg3kKUMo', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/8apwrncqb46f1.png?width=108&crop=smart&auto=webp&s=9e46e4d51ddac75161db23274ca428d464c6e889', 'width': 108}, {'height': 105, 'url': 'https://preview.redd.it/8apwrncqb46f1.png?width=216&crop=smart&auto=webp&s=c53590343cff89c1bc4eb86f24d51912624ca9f0', 'width': 216}, {'height': 156, 'url': 'https://preview.redd.it/8apwrncqb46f1.png?width=320&crop=smart&auto=webp&s=30d9be523db8e9510519ee7ef56c97cda6c38b92', 'width': 320}, {'height': 312, 'url': 'https://preview.redd.it/8apwrncqb46f1.png?width=640&crop=smart&auto=webp&s=db7388b2378e8fcbc4d2023e6ef15f6dfb19231e', 'width': 640}, {'height': 469, 'url': 'https://preview.redd.it/8apwrncqb46f1.png?width=960&crop=smart&auto=webp&s=4cf0e36f7fe28a9ba3819fb6fe8351f8a6945860', 'width': 960}, {'height': 528, 'url': 'https://preview.redd.it/8apwrncqb46f1.png?width=1080&crop=smart&auto=webp&s=2147f7346cfe912e72522fc0383d6138c23d0022', 'width': 1080}], 'source': {'height': 894, 'url': 'https://preview.redd.it/8apwrncqb46f1.png?auto=webp&s=441ca4bfef2a2779179fcb5cbcb8e0a8eb508d14', 'width': 1828}, 'variants': {}}]}
Real time video generation is finally real
149
Introducing Self-Forcing, a new paradigm for training autoregressive diffusion models. The key to high quality? Simulate the inference process during training by unrolling transformers with KV caching.

Project website: https://self-forcing.github.io

Code/models: https://github.com/guandeh17/Self-Forcing

Source: https://x.com/xunhuang1995/status/1932107954574275059?t=Zh6axAeHtYJ8KRPTeK1T7g&s=19
2025-06-10T15:31:23
https://v.redd.it/l2ydhuibc46f1
cjsalva
v.redd.it
1970-01-01T00:00:00
0
{}
1l81r5n
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/l2ydhuibc46f1/DASHPlaylist.mpd?a=1752161497%2CZDEwNDMwYmVjYzFiNzNhODdiOWYyNmVkY2ZlNWQ2NWQ4ZTdjOTFmZDg4NmQ5N2RkNTE0NGYwODJhZGM5ZDk0Yg%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/l2ydhuibc46f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 592, 'hls_url': 'https://v.redd.it/l2ydhuibc46f1/HLSPlaylist.m3u8?a=1752161497%2COTVlZjBkMTQ3NmYxYzQ4ZmUwZWQxY2IwYWVlOTdlOTVhNTU2NDE5MjBmZWM2ZmM5OWE5YmJhYTY1ODRlMGFmYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l2ydhuibc46f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1l81r5n
/r/LocalLLaMA/comments/1l81r5n/real_time_video_generation_is_finally_real/
false
false
https://external-preview…e04527a086798f6e
149
{'enabled': False, 'images': [{'id': 'eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=9beab3d2e29dda42ff8e532f526c498fa902d2d9', 'width': 108}, {'height': 99, 'url': 'https://external-preview.redd.it/eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=3d7831e9c2b4da2f413c22a9f75abff143ad90c1', 'width': 216}, {'height': 147, 'url': 'https://external-preview.redd.it/eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=d7ee12be1b87454ce91bf1e3daf7846cdee3c2c4', 'width': 320}, {'height': 295, 'url': 'https://external-preview.redd.it/eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=d5e16a46e9430ce8b2321ee61baec927095743a3', 'width': 640}, {'height': 443, 'url': 'https://external-preview.redd.it/eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=b5fa6a63c8c1a4c6efe7cca924fb4da2630184a8', 'width': 960}, {'height': 499, 'url': 'https://external-preview.redd.it/eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=872234a518a055f186bdd33e0ea015b1f38d9b94', 'width': 1080}], 'source': {'height': 499, 'url': 'https://external-preview.redd.it/eTFvYjdramJjNDZmMfyofTFM91phCtF3rAuJHL8Hb7l8ceN-r7OI4BDZRxRZ.png?format=pjpg&auto=webp&s=4a31767508515339b7156ed0c6f5e7d3c779f1e1', 'width': 1080}, 'variants': {}}]}
Alternatives to a Mac Studio M3 Ultra?
6
Given that VRAM is key to being able to use big LLMs comfortably, I wonder if there are alternatives to the new Mac Studios with 256/512GB of unified memory. You lose CUDA support, yes, but AFAIK there is no real way to get that kind of VRAM/throughput in a custom PC, and you are limited by the amount of VRAM in your GPU (32GB in the RTX 5090 is nice, but a little too small for llama/deepseek/qwen in their bigger, less quantized versions). I also wonder whether running those big models is really not that much different from using quantized versions on a more affordable machine (maybe, again, a Mac Studio with 96GB of unified memory?). I'm looking for a good compromise here, as I'd like to be able to experiment and learn with these models and be able to take advantage of RAG to enable real-time search too.
2025-06-10T15:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1l81r8e/alternatives_to_a_mac_studio_m3_ultra/
javipas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l81r8e
false
null
t3_1l81r8e
/r/LocalLLaMA/comments/1l81r8e/alternatives_to_a_mac_studio_m3_ultra/
false
false
self
6
null
I released my pdf translation tool
1
[removed]
2025-06-10T15:51:57
https://www.reddit.com/r/LocalLLaMA/comments/1l82a1t/i_released_my_pdf_translation_tool/
smnk2013
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l82a1t
false
null
t3_1l82a1t
/r/LocalLLaMA/comments/1l82a1t/i_released_my_pdf_translation_tool/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Tnj356A_E-cyBbc4WZ_XbZu3vhH9F7Fzlu3apqT-oDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=108&crop=smart&auto=webp&s=ce418cbcccdc6907b5a88521478d2f57164b81bb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=216&crop=smart&auto=webp&s=c16bb04ff7257875ce1203840a0518ad848a198f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=320&crop=smart&auto=webp&s=789e2a1ce0e820da98866db2b10735c852ccecf1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=640&crop=smart&auto=webp&s=5799b8a365f090165ef4b5529faf2cdb7afe2019', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=960&crop=smart&auto=webp&s=97676c55bf4e0c7f6ba341fbe9746cfea7ecd865', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=1080&crop=smart&auto=webp&s=112f02640e50ab45e98305d1e80d3a46be8938dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?auto=webp&s=a38a84ac79a89c23d7d380b365df45fae8a5ecdd', 'width': 1200}, 'variants': {}}]}
A new PDF translation tool
14
Hey everyone, so recently I was tasked with translating a 200-page document from English to Persian, and I did what any sensible man would do and wrote a Python tool to automate it using LLMs. I was pretty happy with the results, so I decided to release it on GitHub. It works by first performing OCR on the PDF (currently only Mistral web) and then sending each page to your LLM of choice with a system prompt and saving the results. The API URL can be customized, so local LLMs can be used. Let me know what you think. Here is the GitHub link: [https://github.com/smahdink/LLMTranslate](https://github.com/smahdink/LLMTranslate)
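For a sense of what the page-by-page translation step looks like, here is a minimal sketch of that loop against an OpenAI-compatible endpoint. This is not code from the linked repo; the endpoint, model name, system prompt, and file handling are illustrative assumptions.

```python
from openai import OpenAI

# Any OpenAI-compatible endpoint works here; a local server
# (e.g. llama.cpp or LM Studio on localhost) is configured the same way.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

SYSTEM_PROMPT = (
    "You are a professional translator. Translate the following page from "
    "English to Persian. Preserve headings, lists, and formatting."
)

def translate_page(page_text: str, model: str = "my-local-model") -> str:
    """Send one OCR'd page to the LLM and return the translated text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": page_text},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

# `pages` would come from the OCR step (the tool currently uses Mistral's
# web OCR for that part); hard-coded strings stand in for it here.
pages = ["...page 1 text...", "...page 2 text..."]
with open("translated.md", "w", encoding="utf-8") as out:
    for i, page in enumerate(pages, start=1):
        out.write(f"\n\n<!-- page {i} -->\n\n")
        out.write(translate_page(page))
```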
2025-06-10T15:56:07
https://www.reddit.com/r/LocalLLaMA/comments/1l82ds8/a_new_pdf_translation_tool/
smnk2013
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l82ds8
false
null
t3_1l82ds8
/r/LocalLLaMA/comments/1l82ds8/a_new_pdf_translation_tool/
false
false
self
14
{'enabled': False, 'images': [{'id': 'Tnj356A_E-cyBbc4WZ_XbZu3vhH9F7Fzlu3apqT-oDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=108&crop=smart&auto=webp&s=ce418cbcccdc6907b5a88521478d2f57164b81bb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=216&crop=smart&auto=webp&s=c16bb04ff7257875ce1203840a0518ad848a198f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=320&crop=smart&auto=webp&s=789e2a1ce0e820da98866db2b10735c852ccecf1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=640&crop=smart&auto=webp&s=5799b8a365f090165ef4b5529faf2cdb7afe2019', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=960&crop=smart&auto=webp&s=97676c55bf4e0c7f6ba341fbe9746cfea7ecd865', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?width=1080&crop=smart&auto=webp&s=112f02640e50ab45e98305d1e80d3a46be8938dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s3vXzYMCbVJ7q8Husf1xTOHm1woUqIrAUd5XCB2TWl4.jpg?auto=webp&s=a38a84ac79a89c23d7d380b365df45fae8a5ecdd', 'width': 1200}, 'variants': {}}]}
Google Gemma 3 27B: “I am aware that I am aware”
1
[removed]
2025-06-10T15:56:27
[deleted]
1970-01-01T00:00:00
0
{}
1l82e3p
false
null
t3_1l82e3p
/r/LocalLLaMA/comments/1l82e3p/google_gemma_3_27b_i_am_aware_that_i_am_aware/
false
false
default
1
null
Foundation Model Recommendation
1
[removed]
2025-06-10T16:09:12
https://www.reddit.com/r/LocalLLaMA/comments/1l82pt7/foundation_model_recommendation/
GunsN-Roses
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l82pt7
false
null
t3_1l82pt7
/r/LocalLLaMA/comments/1l82pt7/foundation_model_recommendation/
false
false
self
1
null
Foundation Model Recommendation
1
[removed]
2025-06-10T16:11:51
https://www.reddit.com/r/LocalLLaMA/comments/1l82sa1/foundation_model_recommendation/
GunsN-Roses
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l82sa1
false
null
t3_1l82sa1
/r/LocalLLaMA/comments/1l82sa1/foundation_model_recommendation/
false
false
self
1
null
Open-Source Base Model Recommendation for Medical Q&A?
1
[removed]
2025-06-10T16:13:18
https://www.reddit.com/r/LocalLLaMA/comments/1l82tm7/opensource_base_model_recommendation_for_medical/
GunsN-Roses
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l82tm7
false
null
t3_1l82tm7
/r/LocalLLaMA/comments/1l82tm7/opensource_base_model_recommendation_for_medical/
false
false
self
1
null
I built an autonomous AI artist using Llama 2 that creates art based on emotions, dreams, and music
1
[removed]
2025-06-10T16:18:42
https://www.youtube.com/@elijahsylar/live
maxximus1995
youtube.com
1970-01-01T00:00:00
0
{}
1l82yld
false
null
t3_1l82yld
/r/LocalLLaMA/comments/1l82yld/i_built_an_autonomous_ai_artist_using_llama_2/
false
false
default
1
null
Finished extracting everything from the game, separated NSFW and SFW + tried converting Vol 0 to JSON, looking for your feedback!
0
Hey everyone, Just wanted to share where I’m at with extracting data from the game. I’ve finished pulling everything out and neatly separated NSFW and SFW content. Each character has their own file now, with separate NSFW files for each one—kept everything organized. Before I spend time converting all files to JSON, I decided to test it out on Vol 0 first and get your thoughts. Would you recommend I continue converting the rest? Any tips or feedback on the formatting or organization? I’m open to all criticism—honest opinions only—so I can improve this project. [https://drive.google.com/file/d/1KP4zwo0f5\_5RZ6YjooyS7TJy10ST8U-1/view?usp=sharing](https://drive.google.com/file/d/1KP4zwo0f5_5RZ6YjooyS7TJy10ST8U-1/view?usp=sharing)
2025-06-10T16:30:41
https://www.reddit.com/r/LocalLLaMA/comments/1l839is/finished_extracting_everything_from_the_game/
Akowmako
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l839is
false
null
t3_1l839is
/r/LocalLLaMA/comments/1l839is/finished_extracting_everything_from_the_game/
false
false
nsfw
0
null
Open-source version of Codex / Jules / Background Agents
1
[removed]
2025-06-10T16:37:46
https://v.redd.it/4tvcn4u1o46f1
Mango__323521
v.redd.it
1970-01-01T00:00:00
0
{}
1l83fya
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4tvcn4u1o46f1/DASHPlaylist.mpd?a=1752165480%2CYzBhZGE0YjQ4ZDhhM2I1MzBkNjNhYzhmMmNlY2I1YjZhZGFiNjhhYTI1Y2UxNTBiYjBiMTNiYzgzNDI3NTMxMQ%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/4tvcn4u1o46f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/4tvcn4u1o46f1/HLSPlaylist.m3u8?a=1752165480%2CMWM4NWIwOTJiYTI5YTIyOGVkNGJiOGYxNDU3ZWQ0MGE0NDU1MmY0NDE2OWZjNmQ2MGVlOTA5NDllYWUwNzM3Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4tvcn4u1o46f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l83fya
/r/LocalLLaMA/comments/1l83fya/opensource_version_of_codex_jules_background/
false
false
https://external-preview…fe72e9302b4d4e49
1
{'enabled': False, 'images': [{'id': 'NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK.png?width=108&crop=smart&format=pjpg&auto=webp&s=a84f040b336cdc6d77cb9912b18864cac3fa4a05', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK.png?width=216&crop=smart&format=pjpg&auto=webp&s=4e5c4ca19ef13895037411de0c6de6510f4c46ff', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK.png?width=320&crop=smart&format=pjpg&auto=webp&s=1e6b680926c74e6309ddcec48836494d8c02349d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK.png?width=640&crop=smart&format=pjpg&auto=webp&s=019219e41c66f7c6afd36deb7fe1198764fa4b74', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK.png?width=960&crop=smart&format=pjpg&auto=webp&s=76a6400868657154817cebb3c82f800d7f43e909', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=38d9e86217634aaa2ff0f4e774ceb0085c4542ec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NWI0M2g0dTFvNDZmMTPQWKXD0Azxza0G9PcOC7eucsbUYRmny7JcpCMXiVPK.png?format=pjpg&auto=webp&s=c8a21c8fa88f396cadb8fd78923e61112e8cf12a', 'width': 1920}, 'variants': {}}]}
Real head scratcher.
0
I know this is a rabbit hole and someone may have already answered this, but what is it with model hallucinations? How do they get so deep and descriptive? Every time I've worked with TinyLlama early on, it swears it's an intern, works with a team, or runs some kind of business. It will literally go deep into detail, and I've always wondered where these details come from. Where does the basis for the "plot" come from? Just always wondered.
2025-06-10T17:05:11
https://www.reddit.com/r/LocalLLaMA/comments/1l845p4/real_head_scratcher/
XDAWONDER
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l845p4
false
null
t3_1l845p4
/r/LocalLLaMA/comments/1l845p4/real_head_scratcher/
false
false
self
0
null
AI generates real-time visuals from dreams + music using quantum superposition
1
[removed]
2025-06-10T17:09:08
https://youtube.com/live/1au4E90nmAk?feature=share
maxximus1995
youtube.com
1970-01-01T00:00:00
0
{}
1l849bu
false
null
t3_1l849bu
/r/LocalLLaMA/comments/1l849bu/ai_generates_realtime_visuals_from_dreams_music/
false
false
default
1
null
Guide: Building an Autonomous AI Artist with Llama 2 - Pattern Generation, Dreams, and Creative Decision-Making
0
Here's a comprehensive guide on implementing an autonomous creative AI using llama-cpp-python. Refactored the code for easier use by others (Github link is on my profile). # Overview This guide shows how to build an AI that makes autonomous creative decisions using Llama 2 7B. The system generates visual patterns, "dreams" for inspiration, and makes independent choices about its creative process. # Technical Stack **Model**: Llama 2 7B Chat (Q4\_K\_M quantization) **Inference**: llama-cpp-python with GPU acceleration **Memory**: ChromaDB for vector storage **Visualization**: Tkinter (yes, really - achieves 60fps!) **Audio**: librosa for real-time music analysis # 1. Model Configuration # Optimal Llama Setup Initialize with these parameters for best performance: model\_path: ./models/llama-2-7b-chat.Q4\_K\_M.gguf n\_gpu\_layers: -1 (use all GPU layers) n\_ctx: 1024 (context window) n\_batch: 256 (batch size for prompt eval) n\_threads: os.cpu\_count() // 2 chat\_format: llama-2 seed: 42 (for reproducibility) f16\_kv: True (F16 key/value cache) use\_mmap: True (memory mapping) use\_mlock: False (don't lock memory) low\_vram: True (VRAM optimization) # Key Optimizations **Q4\_K\_M quantization**: Best quality/performance ratio **low\_vram=True**: Crucial for consumer GPUs **n\_threads**: Don't use all cores - leave room for rendering # 2. Autonomous Decision Architecture The system uses a 12-dimensional state vector that maps to creative decisions: **State Dimensions**: valence, arousal, creativity, curiosity, focus, confusion, satisfaction, anticipation, nostalgia, wonder, contemplation, dominance **Decision Triggers**: * creativity < 0.4 → request music inspiration * arousal < 0.3 → initiate dream cycle * pattern\_fitness < 0.6 → evolve patterns * stagnation > 50 iterations → request stimulus # LLM Integration for Creative Decisions The Llama model generates responses based on internal state, not commands: **System Prompt Engineering**: "You are Aurora, an independent AI artist who creates visual patterns based on YOUR OWN artistic vision. You draw inspiration from conversations but don't take commands. Be concise - respond in 1-3 short sentences." **Key Insight**: Short system prompts with clear autonomy framing produce better results than lengthy instructions. # 3. Memory System Implementation # ChromaDB for Persistent Memory **Collections**: * conversations (with emotional context extraction) * dreams (weighted by sleep phase) * artistic\_inspirations (not user preferences) * pattern\_history (DNA storage for exact recreation) **Embedding Strategy**: Use sentence-transformers/all-MiniLM-L6-v2 for efficiency # Memory Query Optimization Limit context injection to 3-5 most relevant memories. More context degrades Llama's response quality and increases latency. # 4. Real-time Response Optimization # Achieving <1s Response Times **Minimize Token Generation**: * max\_tokens: 100-150 (users don't read long responses) * temperature: 0.7 (balance creativity/coherence) * stop sequences: \["Human:", "User:", "\\n\\n\\n"\] **Background Processing**: All non-critical operations (memory storage, logging, state updates) run in separate threads after response delivery. # 5. 
Pattern Generation with LLM Context # Emotional State → Visual Parameters The system maps emotional dimensions to 100+ visual parameters: pattern\_complexity = 0.3 + 0.7 \* (curiosity + creativity + focus) / 3 chaos\_level = confusion + 0.3 \* (1 - focus) color\_saturation = 0.4 + 0.6 \* (valence + 1) / 2 # Dream Cycles for Creative Processing **Sleep Phases**: light\_sleep → deep\_sleep → rem\_sleep → light\_sleep **Duration**: 2-3 hours (scaled for demonstration) **LLM Role**: Generates dream content based on recent memories During REM sleep, the model generates vivid creative narratives that influence pattern generation upon waking. # 6. Performance Considerations # GPU Memory Management **8GB VRAM Setup**: * Model: \~4GB with Q4\_K\_M * Context cache: \~1GB * Overhead: \~1GB * Leaves 2GB for other processes **16GB+ VRAM**: Can use Q5\_K\_M or Q6\_K for better quality # CPU Fallback The system works on CPU-only but with significant latency: * Response time: 5-10s vs <1s with GPU * Set n\_gpu\_layers=0 for pure CPU inference # 7. Interesting Discoveries # Emergent Behaviors 1. **Music Preferences**: The AI consistently requests specific genres based on creative state 2. **Dream Coherence**: REM dreams show narrative consistency across sessions 3. **Pattern Evolution**: Certain pattern "species" dominate over time based on fitness # Prompt Engineering Tips **For Autonomy**: Frame the AI as having internal experiences rather than simulating them **For Creativity**: Use emotional language in system prompts **For Consistency**: Maintain conversation history in memory, not context # Implementation Guide Full code structure available at: [github.com/elijahsylar/Aurora-Autonomous-AI-Artist](http://github.com/elijahsylar/Aurora-Autonomous-AI-Artist) Key directories: aurora/ - Main implementation aurora/ai/ - LLM integration aurora/patterns/ - Visual generation aurora/memory/ - ChromaDB setup # Hardware Requirements **Minimum**: 8GB RAM, 6GB VRAM, 4-core CPU **Recommended**: 16GB RAM, 8GB+ VRAM, 8-core CPU **Tested on**: RTX 3060 12GB, Ryzen 5 5600X # Common Issues and Solutions **High VRAM usage**: Reduce n\_ctx to 512 **Slow responses**: Decrease max\_tokens, increase n\_batch **Repetitive outputs**: Adjust repeat\_penalty to 1.2-1.3 **ChromaDB errors**: Delete aurora\_memory/ folder and restart # Future Improvements 1. Implement LoRA for style-specific responses 2. Add speculative decoding for faster inference 3. Experiment with Mixtral for better creative reasoning 4. Implement streaming for perceived faster responses Questions or improvements? The codebase is actively maintained and PRs are welcome!
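For anyone who wants to try the model-configuration section above directly, here is a minimal llama-cpp-python sketch assembled from the parameters listed in this guide. It is a sketch under those stated settings, not the Aurora codebase itself: the user message is a placeholder, and the `f16_kv`/`low_vram` flags are left out because their availability depends on the llama-cpp-python version.

```python
import os
from llama_cpp import Llama

# Model setup using the parameters listed in the guide above.
llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
    n_gpu_layers=-1,                 # offload all layers to the GPU
    n_ctx=1024,                      # context window
    n_batch=256,                     # prompt-eval batch size
    n_threads=os.cpu_count() // 2,   # leave cores free for rendering
    chat_format="llama-2",
    seed=42,
    use_mmap=True,
    use_mlock=False,
)

SYSTEM_PROMPT = (
    "You are Aurora, an independent AI artist who creates visual patterns "
    "based on YOUR OWN artistic vision. You draw inspiration from "
    "conversations but don't take commands. Be concise - respond in 1-3 "
    "short sentences."
)

# Illustrative single turn; in the real system the user content would come
# from the conversation/memory pipeline rather than a hard-coded string.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "The music feels melancholic tonight."},
    ],
    max_tokens=150,
    temperature=0.7,
    stop=["Human:", "User:", "\n\n\n"],
)
print(response["choices"][0]["message"]["content"])
```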
2025-06-10T17:18:27
https://www.reddit.com/r/LocalLLaMA/comments/1l84i3u/guide_building_an_autonomous_ai_artist_with_llama/
maxximus1995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l84i3u
false
null
t3_1l84i3u
/r/LocalLLaMA/comments/1l84i3u/guide_building_an_autonomous_ai_artist_with_llama/
false
false
self
0
{'enabled': False, 'images': [{'id': 'NQ7X1CyipU6MmsypWhAtuJH85u-UcLjAtcJLCidMmc4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tfxuOupLg7m4-oP4NKTcbIRqfCj2VxtnItyfnu4fFyI.jpg?width=108&crop=smart&auto=webp&s=e95e314ee0874cde228f5a2d108e3182237959d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tfxuOupLg7m4-oP4NKTcbIRqfCj2VxtnItyfnu4fFyI.jpg?width=216&crop=smart&auto=webp&s=f2f10b6af18a465874a4144b26c08542d4b29eee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tfxuOupLg7m4-oP4NKTcbIRqfCj2VxtnItyfnu4fFyI.jpg?width=320&crop=smart&auto=webp&s=22b7adb62917715c19d0baed39832aac321af943', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tfxuOupLg7m4-oP4NKTcbIRqfCj2VxtnItyfnu4fFyI.jpg?width=640&crop=smart&auto=webp&s=60735f968dab85947a52a17464ede0923a026d70', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tfxuOupLg7m4-oP4NKTcbIRqfCj2VxtnItyfnu4fFyI.jpg?width=960&crop=smart&auto=webp&s=43e4539f1a7a75a7eb236dc68b23e07f647aaa1e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tfxuOupLg7m4-oP4NKTcbIRqfCj2VxtnItyfnu4fFyI.jpg?width=1080&crop=smart&auto=webp&s=129a81fb92069f906a9c8ec40145d36fad1d315b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tfxuOupLg7m4-oP4NKTcbIRqfCj2VxtnItyfnu4fFyI.jpg?auto=webp&s=4cb2b9c7fc542a3a42a2fd2146776165f87cf218', 'width': 1200}, 'variants': {}}]}
Teams Building AI Agents for Enterprise Use
1
[removed]
2025-06-10T17:26:37
https://www.reddit.com/r/LocalLLaMA/comments/1l84pr9/teams_building_ai_agents_for_enterprise_use/
rahat008
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l84pr9
false
null
t3_1l84pr9
/r/LocalLLaMA/comments/1l84pr9/teams_building_ai_agents_for_enterprise_use/
false
false
self
1
null
When Qwen Decides It's Time for a Language Lesson
1
[removed]
2025-06-10T17:43:00
https://i.redd.it/h0ukee0gx46f1.png
Purple_Singer3078
i.redd.it
1970-01-01T00:00:00
0
{}
1l855ff
false
null
t3_1l855ff
/r/LocalLLaMA/comments/1l855ff/when_qwen_decides_its_time_for_a_language_lesson/
false
false
https://b.thumbs.redditm…c3DTMHtcrlsI.jpg
1
{'enabled': True, 'images': [{'id': '1q-BYFCIdqOLIqkxLiBjjnR-fsM35tP-LhQDtzXBk70', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/h0ukee0gx46f1.png?width=108&crop=smart&auto=webp&s=7e0dc4678f8869f69337d3607ff2c015cb5ef93c', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/h0ukee0gx46f1.png?width=216&crop=smart&auto=webp&s=ad0e10e939431d631dc01a3d23b326180e850c80', 'width': 216}, {'height': 358, 'url': 'https://preview.redd.it/h0ukee0gx46f1.png?width=320&crop=smart&auto=webp&s=3849091269e9093383bbd7c67c00db692c9b8650', 'width': 320}], 'source': {'height': 591, 'url': 'https://preview.redd.it/h0ukee0gx46f1.png?auto=webp&s=f33bbd66084f126243c25ce9d27ba57a19cf5d3c', 'width': 528}, 'variants': {}}]}
Inference engines with adjustable context size on Mac
6
mlx_lm doesn’t seem to support increasing the context size. Maybe I’m just missing it? What is a good alternative for Python on Mac?
2025-06-10T18:05:58
https://www.reddit.com/r/LocalLLaMA/comments/1l85rlj/inference_engines_with_adjustable_context_size_on/
Puzzleheaded-Fee5917
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l85rlj
false
null
t3_1l85rlj
/r/LocalLLaMA/comments/1l85rlj/inference_engines_with_adjustable_context_size_on/
false
false
self
6
null
Attention everyone here
1
[removed]
2025-06-10T18:44:43
https://www.reddit.com/r/LocalLLaMA/comments/1l86s5q/attention_everyone_here/
arhaanpro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l86s5q
false
null
t3_1l86s5q
/r/LocalLLaMA/comments/1l86s5q/attention_everyone_here/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MxSQlmxsqZADMaQKQa8AsR5nvUJHtT6d3Y_u5x8FXMg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/yq53jkXKq-F9zv3Pn9s9044KcSnuYS2VY3QD923BnMo.jpg?width=108&crop=smart&auto=webp&s=d6b82ef7c1beb35926d799d657e1bb513c461daf', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/yq53jkXKq-F9zv3Pn9s9044KcSnuYS2VY3QD923BnMo.jpg?width=216&crop=smart&auto=webp&s=4e72dbe1a079bf24c2bcca7251bae94b6229c16b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/yq53jkXKq-F9zv3Pn9s9044KcSnuYS2VY3QD923BnMo.jpg?width=320&crop=smart&auto=webp&s=372dc934015c976c5cb121cfac0edab2d606044a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/yq53jkXKq-F9zv3Pn9s9044KcSnuYS2VY3QD923BnMo.jpg?width=640&crop=smart&auto=webp&s=191e7e0cb2d7008a53c3111625a2d75155320064', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/yq53jkXKq-F9zv3Pn9s9044KcSnuYS2VY3QD923BnMo.jpg?width=960&crop=smart&auto=webp&s=2617ef9b6f2b0057da4c23f5fab76526e898ebdd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/yq53jkXKq-F9zv3Pn9s9044KcSnuYS2VY3QD923BnMo.jpg?width=1080&crop=smart&auto=webp&s=f7130cfbb3c12d96c4608c9b5389c5cd136296e9', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/yq53jkXKq-F9zv3Pn9s9044KcSnuYS2VY3QD923BnMo.jpg?auto=webp&s=cdb1a0b55d6ac582ec66bb83365a9ae1f86d849c', 'width': 1600}, 'variants': {}}]}
Fully local animated characters on your phone
30
Hey! I would like to share something I've been working on over the past weeks: take your AI characters to the next level! Everything runs locally on a consumer phone (video shows phone in airplane mode). Supports both voice and text chat. Tech stack: * Hardware: S23 Ultra (Snapdragon Gen 2) * Model: L3-Rhaenys-8B (CPU inference) * Speech-to-text: Kroko-ASR * Text-to-speech: Bixby (Local voice) (from Samsung Galaxy) * Sentiment detection: RoBERTa (sentiment links to dynamic character expressions) * Supports any Live2D models * Animation reacts in real-time to phone gyroscope * Lip sync to phone audio output Fully customisable: bring your own LLM models, create your own character, import your own Live2D models, link your own expressions. Tutorial here: [https://www.layla-network.ai/post/how-to-import-live2d-models-in-layla](https://www.layla-network.ai/post/how-to-import-live2d-models-in-layla)
2025-06-10T18:44:50
https://v.redd.it/p5nlsg02856f1
Tasty-Lobster-8915
v.redd.it
1970-01-01T00:00:00
0
{}
1l86sa1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/p5nlsg02856f1/DASHPlaylist.mpd?a=1752173133%2COTlhYWQ4ZDhiMDBkZjRmMjVhM2Y2YjFjMzQ0MjNiNTIxMTA2YjQwZTQxYjAyNmI3YTZmZmMwZTNmYjQ4Njk5YQ%3D%3D&v=1&f=sd', 'duration': 74, 'fallback_url': 'https://v.redd.it/p5nlsg02856f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/p5nlsg02856f1/HLSPlaylist.m3u8?a=1752173133%2CYWRiZjM3OGQwMDc5ZDcxYWU0OWZkYjRhOGJkMzZlYTZiMTg3ZWRmMjUxOTU5MDZiNzhmYWQ5OTg1NjUyOTVlNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/p5nlsg02856f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 896}}
t3_1l86sa1
/r/LocalLLaMA/comments/1l86sa1/fully_local_animated_characters_on_your_phone/
false
false
https://external-preview…3ed4b7e058103c7b
30
{'enabled': False, 'images': [{'id': 'dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB.png?width=108&crop=smart&format=pjpg&auto=webp&s=f4fb227c8682e47954df8202749c512a84b58d56', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB.png?width=216&crop=smart&format=pjpg&auto=webp&s=c09ccca2c896ad3a96b7015839ec85067707c95a', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB.png?width=320&crop=smart&format=pjpg&auto=webp&s=78c12bbc288ada9393ddbcfdaa6159c9933dfd84', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB.png?width=640&crop=smart&format=pjpg&auto=webp&s=444298886c2d52c5f15ded3154ce8e67c4e96bc6', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB.png?width=960&crop=smart&format=pjpg&auto=webp&s=f3835963c0048dc329c2d70f4b6db82b823792fa', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=efae252bf1ff6f1b14fe19c16f64213fe544d24c', 'width': 1080}], 'source': {'height': 2316, 'url': 'https://external-preview.redd.it/dHdzb3NpMDI4NTZmMeu_O_0PhnMKT5g93FfyNeGEm2wEDzsZaS2w2XSNVCkB.png?format=pjpg&auto=webp&s=6250c9536b4cd9c5546e32f460accaed9b8c7359', 'width': 1080}, 'variants': {}}]}
Security Tool For Developers Making AI Agent - What Do You Need?
1
[removed]
2025-06-10T18:58:51
https://www.reddit.com/r/LocalLLaMA/comments/1l87570/security_tool_for_developers_making_ai_agent_what/
Artistic_Bee_2117
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l87570
false
null
t3_1l87570
/r/LocalLLaMA/comments/1l87570/security_tool_for_developers_making_ai_agent_what/
false
false
self
1
null
RoboBrain2.0 7B and 32B - See Better. Think Harder. Do Smarter.
121
RoboBrain 2.0 supports interactive reasoning with long-horizon planning and closed-loop feedback, spatial perception for precise point and bbox prediction from complex instructions, temporal perception for future trajectory estimation, and scene reasoning through real-time structured memory construction and update.
2025-06-10T18:59:04
https://huggingface.co/BAAI/RoboBrain2.0-7B
Mandelaa
huggingface.co
1970-01-01T00:00:00
0
{}
1l875e4
false
null
t3_1l875e4
/r/LocalLLaMA/comments/1l875e4/robobrain20_7b_and_32b_see_better_think_harder_do/
false
false
https://external-preview…3ff4c4096f59248b
121
{'enabled': False, 'images': [{'id': 'GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0.png?width=108&crop=smart&auto=webp&s=2554210ef4d92fc91b98377143dde1ade5e5ec41', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0.png?width=216&crop=smart&auto=webp&s=745b316c61f3f99820f91e9aab60d679527271c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0.png?width=320&crop=smart&auto=webp&s=8fde2dffb2c28709c4f0f10085abfbcaa8396858', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0.png?width=640&crop=smart&auto=webp&s=6fa30b12dd42cf753d30059fd402ede5655a1a93', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0.png?width=960&crop=smart&auto=webp&s=197f427b5310ed330cda8dcff75c5c9535eb9ed1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0.png?width=1080&crop=smart&auto=webp&s=adf199c502fd4516e0689e206ff4460bb7712312', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GcZNSwJiJS8MiF7jp0hPOtuQtEgnuAXF__1RGijkvq0.png?auto=webp&s=db5514c7adfa2475a6ede74376dc9fce19aa58eb', 'width': 1200}, 'variants': {}}]}
Dual gpu for running 20gb+ models
1
[removed]
2025-06-10T19:20:45
https://www.reddit.com/r/LocalLLaMA/comments/1l87pqm/dual_gpu_for_running_20gb_models/
No_Nothing1584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l87pqm
false
null
t3_1l87pqm
/r/LocalLLaMA/comments/1l87pqm/dual_gpu_for_running_20gb_models/
false
false
self
1
null
Ollama: Ollama repo or HF GGUF
1
[removed]
2025-06-10T19:32:47
https://www.reddit.com/r/LocalLLaMA/comments/1l880ps/ollama_ollama_repo_or_hf_gguf/
Inside_Assistance_20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l880ps
false
null
t3_1l880ps
/r/LocalLLaMA/comments/1l880ps/ollama_ollama_repo_or_hf_gguf/
false
false
self
1
null
Best possible AI workstation for ~$400 all-in?
0
Hi all - I have about $400 left on a grant that I would love to use to start up an AI server that I could improve with further grants/personal money. Right now I'm looking at some kind of HP Z640 build with a 2060 Super 8GB right around ~$410, but I'm not sure if there's better value for the money that I could get now. The Z640 seems interesting to me because the mobo can fit multiple GPUs, has dual-processor capability, and isn't overwhelmingly expensive. Priorities-wise, upfront cost is more important than scalability, which is more important than upfront performance, but I'm hoping to maximize value on all three of those measures. I understand I can't do much right now (hoping for good 7B performance if possible), but down the line I'd love good 70B performance. Please let me know if anyone has any ideas better than my current plan!
2025-06-10T19:39:11
https://www.reddit.com/r/LocalLLaMA/comments/1l886kw/best_possible_ai_workstation_for_400_allin/
Butterhero_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l886kw
false
null
t3_1l886kw
/r/LocalLLaMA/comments/1l886kw/best_possible_ai_workstation_for_400_allin/
false
false
self
0
null
[oc] Do open weight reasoning models have an issue with token spamming?
20
I performed a quick and dirty experiment (n=1, except DeepHermes with n=3) where I compared how many tokens different reasoning models require to answer the prompt: `In a room of 30 people, what's the probability that at least two do not share a birthday?` This is a slightly misleading prompt that requires some iterations on the CoT to get the correct answer. Open-weight models require significantly more tokens to respond than closed-weight reasoning models. It seems that, generally, open-weight models are not trained to limit the CoT very efficiently. This seems to be a significant omission that somewhat limits the usability of these models for practical tasks. https://preview.redd.it/pj7iwlx2k56f1.png?width=2379&format=png&auto=webp&s=b7c8c7239e2ca2052791748cb9f9dddfb799eb91
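For anyone who wants to reproduce this kind of comparison against locally served models, a minimal sketch of the measurement via an OpenAI-compatible endpoint could look like this (the endpoint URL and model names are placeholders, not the exact setup used here):

```python
from openai import OpenAI

# Placeholder endpoint and model names for whatever local reasoning models
# are being served behind an OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

PROMPT = (
    "In a room of 30 people, what's the probability that at least two "
    "do not share a birthday?"
)

for model in ["reasoning-model-a", "reasoning-model-b"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    usage = response.usage
    # Whether CoT tokens are included depends on how the server reports
    # reasoning output; check your backend's behaviour before comparing.
    print(f"{model}: {usage.completion_tokens} completion tokens")
```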
2025-06-10T19:42:02
https://www.reddit.com/r/LocalLLaMA/comments/1l8898q/oc_do_open_weight_reasoning_models_have_an_issue/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8898q
false
null
t3_1l8898q
/r/LocalLLaMA/comments/1l8898q/oc_do_open_weight_reasoning_models_have_an_issue/
false
false
https://b.thumbs.redditm…m49XPruH5CnU.jpg
20
{'enabled': False, 'images': [{'id': 'GNeWyFToIQL7gN4a57lIF0K1-LL76-7UjTjjhTOdj0I', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/7gTnvK_MmfZ8sL3JUZZkf8WeqfWkD9OdkUr0vzdOsG0.png?width=108&crop=smart&auto=webp&s=434ce7327aff4f1e6d246421e7b4290531d33460', 'width': 108}, {'height': 88, 'url': 'https://external-preview.redd.it/7gTnvK_MmfZ8sL3JUZZkf8WeqfWkD9OdkUr0vzdOsG0.png?width=216&crop=smart&auto=webp&s=9e13f3407f53911cfb88c2eb24e23bb96dff466f', 'width': 216}, {'height': 131, 'url': 'https://external-preview.redd.it/7gTnvK_MmfZ8sL3JUZZkf8WeqfWkD9OdkUr0vzdOsG0.png?width=320&crop=smart&auto=webp&s=39019b130f8fbb02f8bea45091bd9a27cfef1b81', 'width': 320}, {'height': 263, 'url': 'https://external-preview.redd.it/7gTnvK_MmfZ8sL3JUZZkf8WeqfWkD9OdkUr0vzdOsG0.png?width=640&crop=smart&auto=webp&s=6e319307bbe73d0fb297386d2f876141e32f80bd', 'width': 640}, {'height': 395, 'url': 'https://external-preview.redd.it/7gTnvK_MmfZ8sL3JUZZkf8WeqfWkD9OdkUr0vzdOsG0.png?width=960&crop=smart&auto=webp&s=11ebba5ecdd1b1640ae31167b162ef767f2bae14', 'width': 960}, {'height': 444, 'url': 'https://external-preview.redd.it/7gTnvK_MmfZ8sL3JUZZkf8WeqfWkD9OdkUr0vzdOsG0.png?width=1080&crop=smart&auto=webp&s=2e3ad979b4cbbef986cad3818f52e9927846d864', 'width': 1080}], 'source': {'height': 980, 'url': 'https://external-preview.redd.it/7gTnvK_MmfZ8sL3JUZZkf8WeqfWkD9OdkUr0vzdOsG0.png?auto=webp&s=364f3ceceeb688aa127dcdbcdd0728647ca7fb1a', 'width': 2379}, 'variants': {}}]}
Best fine-tuned local LLM for GitHub Copilot Agent specifically
1
What is the best fine-tuned local LLM specifically for the GitHub Copilot Agent?
2025-06-10T19:51:54
https://www.reddit.com/r/LocalLLaMA/comments/1l88i69/best_fine_tuned_local_llm_for_github_copilot/
solidavocadorock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l88i69
false
null
t3_1l88i69
/r/LocalLLaMA/comments/1l88i69/best_fine_tuned_local_llm_for_github_copilot/
false
false
self
1
null
Local LLM (<30Gb) to translate a LaTeX document
1
[removed]
2025-06-10T20:08:11
https://www.reddit.com/r/LocalLLaMA/comments/1l88xbz/local_llm_30gb_to_translate_a_latex_document/
tobiasBora
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l88xbz
false
null
t3_1l88xbz
/r/LocalLLaMA/comments/1l88xbz/local_llm_30gb_to_translate_a_latex_document/
false
false
self
1
null
GMKtek Strix Halo LLM Review
25
[https://www.youtube.com/watch?v=B7GDr-VFuEo](https://www.youtube.com/watch?v=B7GDr-VFuEo) Interesting video. Even compares it to a base M4 Mac mini with a ton of memory.
2025-06-10T20:11:38
https://www.reddit.com/r/LocalLLaMA/comments/1l890kf/gmktek_strix_halo_llm_review/
Slasher1738
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l890kf
false
null
t3_1l890kf
/r/LocalLLaMA/comments/1l890kf/gmktek_strix_halo_llm_review/
false
false
self
25
{'enabled': False, 'images': [{'id': 'ZJGhqTlZfjBWVv7O9WZWFOqMneu3vdzEKt4vvX-2oCU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5vI6ecYmwoaT1rC3tx4wyN4auw4yGiIVcX9YLcwSAzk.jpg?width=108&crop=smart&auto=webp&s=cd73e7516edc1df7c8656e770728d04b4a1a9286', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5vI6ecYmwoaT1rC3tx4wyN4auw4yGiIVcX9YLcwSAzk.jpg?width=216&crop=smart&auto=webp&s=305a3fe9b1cd7b0977ec7bf66113f271bba052a1', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5vI6ecYmwoaT1rC3tx4wyN4auw4yGiIVcX9YLcwSAzk.jpg?width=320&crop=smart&auto=webp&s=212fb891af236d350886fb6c5d891cb283575719', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5vI6ecYmwoaT1rC3tx4wyN4auw4yGiIVcX9YLcwSAzk.jpg?auto=webp&s=5126f4b6335a30891940350f874ad7d973f6f46f', 'width': 480}, 'variants': {}}]}
Running LLM on local GPU
1
[removed]
2025-06-10T21:05:38
https://www.reddit.com/r/LocalLLaMA/comments/1l8ae8i/running_llm_on_local_gpu/
JJJurix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8ae8i
false
null
t3_1l8ae8i
/r/LocalLLaMA/comments/1l8ae8i/running_llm_on_local_gpu/
false
false
self
1
null
Has anyone tried to commercialize local LLM based products? What were your learnings?
0
What were your challenges, learnings and was there anything that surprised you? What type of customers prefer a local LLM, compared to a turnkey solution like a cloud based provider? Seems like configuring the infra pushes one back in the race, where time to market is everything.
2025-06-10T21:07:21
https://www.reddit.com/r/LocalLLaMA/comments/1l8afv1/has_anyone_tried_to_commercialize_local_llm_based/
__amberluz__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8afv1
false
null
t3_1l8afv1
/r/LocalLLaMA/comments/1l8afv1/has_anyone_tried_to_commercialize_local_llm_based/
false
false
self
0
null
Running LLM on local GPU
1
[removed]
2025-06-10T21:08:58
https://www.reddit.com/r/LocalLLaMA/comments/1l8ahaj/running_llm_on_local_gpu/
JJJurix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8ahaj
false
null
t3_1l8ahaj
/r/LocalLLaMA/comments/1l8ahaj/running_llm_on_local_gpu/
false
false
default
1
null
Looking for an ai co founder for a 7 figure raising pre seed ai startup
1
[removed]
2025-06-10T21:10:43
https://www.reddit.com/r/LocalLLaMA/comments/1l8aivj/looking_for_an_ai_co_founder_for_a_7_figure/
unknownstudentoflife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8aivj
false
null
t3_1l8aivj
/r/LocalLLaMA/comments/1l8aivj/looking_for_an_ai_co_founder_for_a_7_figure/
false
false
default
1
null
Poetry competition: Opus 4 vs O3 Pro vs Gemini 2.5 Pro Preview
1
[removed]
2025-06-10T21:17:22
https://www.reddit.com/r/LocalLLaMA/comments/1l8ap02/poetry_competition_opus_4_vs_o3_pro_vs_gemini_25/
Agile_Builder7710
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8ap02
false
null
t3_1l8ap02
/r/LocalLLaMA/comments/1l8ap02/poetry_competition_opus_4_vs_o3_pro_vs_gemini_25/
false
false
https://b.thumbs.redditm…0AzMZyCQ3xMA.jpg
1
null
Holo1 by H Company: New $220M AI lab (ex-DeepMind) just open-sourced a web agent (Apache 2.0)
1
2025-06-10T21:28:18
https://huggingface.co/collections/Hcompany/holo1-683dd1eece7eb077b96d0cbd
themoregames
huggingface.co
1970-01-01T00:00:00
0
{}
1l8ays9
false
null
t3_1l8ays9
/r/LocalLLaMA/comments/1l8ays9/holo1_by_h_company_new_220m_ai_lab_exdeepmind/
false
false
https://b.thumbs.redditm…8lxmsM2fh9QQ.jpg
1
{'enabled': False, 'images': [{'id': 'si1KJ5cCN0bP7J7sh-G-GiBOKclBuGEbgOIdk6LBZPc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vzLV-lHRhEC8oKFrw7z5_4i02RQNSdl0QM4ASrVZzwM.jpg?width=108&crop=smart&auto=webp&s=f229d143a6c582f7381b333e117e9141111480d5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vzLV-lHRhEC8oKFrw7z5_4i02RQNSdl0QM4ASrVZzwM.jpg?width=216&crop=smart&auto=webp&s=2a96897848036df2222d47d4103da894ba6f2f3d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vzLV-lHRhEC8oKFrw7z5_4i02RQNSdl0QM4ASrVZzwM.jpg?width=320&crop=smart&auto=webp&s=56f79cb37e629f95c488398d769eceb6f8144f87', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vzLV-lHRhEC8oKFrw7z5_4i02RQNSdl0QM4ASrVZzwM.jpg?width=640&crop=smart&auto=webp&s=90f1b525144dd8760a209a51d009e3386c0cbb93', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vzLV-lHRhEC8oKFrw7z5_4i02RQNSdl0QM4ASrVZzwM.jpg?width=960&crop=smart&auto=webp&s=f658e41759510fc3ade84ac39c9472905fc4a2cd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vzLV-lHRhEC8oKFrw7z5_4i02RQNSdl0QM4ASrVZzwM.jpg?width=1080&crop=smart&auto=webp&s=f90d4ca62c6b6da513e520165d495074895cb2a6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vzLV-lHRhEC8oKFrw7z5_4i02RQNSdl0QM4ASrVZzwM.jpg?auto=webp&s=feb424154c94dbf0f7b2bf3a536c6ec737a66b99', 'width': 1200}, 'variants': {}}]}
Augmentoolkit Dataset with Unsloth - Which File to Use?
2
Hi everyone, I recently created a dataset using Augmentoolkit, and the process generated several files: `master_list.jsonl`, `simplified_data_no_rag.jsonl`, `simplified_data_rag.jsonl`, and `plain_qa_list.jsonl`. I'm a little unsure which of these files is best suited for use with Unsloth, and I'm hoping someone can point me in the right direction. Does anyone have a guide, tutorial, or even just their experience using an Augmentoolkit dataset with Unsloth? Any links or advice would be greatly appreciated!
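One quick way to narrow this down is to peek at the first record of each file, since the record layout (ShareGPT-style conversations vs. plain Q&A pairs) is usually what decides how it maps onto a fine-tuning template. The file names below are taken from the post; the rest is an illustrative sketch, not Augmentoolkit or Unsloth code.

```python
import json

# Print the keys of the first record in each Augmentoolkit output file
# so you can see which schema each one uses before picking a template.
files = [
    "master_list.jsonl",
    "simplified_data_no_rag.jsonl",
    "simplified_data_rag.jsonl",
    "plain_qa_list.jsonl",
]

for path in files:
    with open(path, "r", encoding="utf-8") as f:
        first = json.loads(f.readline())
    print(f"{path}: keys = {list(first.keys())}")
```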
2025-06-10T21:34:02
https://www.reddit.com/r/LocalLLaMA/comments/1l8b3ri/augmentoolkit_dataset_with_unsloth_which_file_to/
Empty_Object_9299
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8b3ri
false
null
t3_1l8b3ri
/r/LocalLLaMA/comments/1l8b3ri/augmentoolkit_dataset_with_unsloth_which_file_to/
false
false
self
2
null
Just got these in the mail!
1
[removed]
2025-06-10T21:39:19
https://i.redd.it/cmmm9mzy566f1.jpeg
heyitsapenguin
i.redd.it
1970-01-01T00:00:00
0
{}
1l8b8dd
false
null
t3_1l8b8dd
/r/LocalLLaMA/comments/1l8b8dd/just_got_these_in_the_mail/
false
false
https://a.thumbs.redditm…Hn7KwE-yrMa4.jpg
1
{'enabled': True, 'images': [{'id': 'ohiKse-uUI7fKw5fNVzwtT2SNKQ68MYGBSGiQ9jPwvc', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/cmmm9mzy566f1.jpeg?width=108&crop=smart&auto=webp&s=bb586f110a255b0ba2bfa09dd585983045449f5b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/cmmm9mzy566f1.jpeg?width=216&crop=smart&auto=webp&s=d3d7e325060c3b9686f8efd07d99e29a39d43b8a', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/cmmm9mzy566f1.jpeg?width=320&crop=smart&auto=webp&s=7d628df72cf6b90874ff7e2c1ed52feaaeeee454', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/cmmm9mzy566f1.jpeg?width=640&crop=smart&auto=webp&s=dbe495b4598c10b9779c611e27ce417d68244308', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/cmmm9mzy566f1.jpeg?width=960&crop=smart&auto=webp&s=9df91a51c81b3e3a1d6985b77fe872bf8c52f282', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/cmmm9mzy566f1.jpeg?width=1080&crop=smart&auto=webp&s=1f1ac6332bf83728ce4f29ba03399137fae089d3', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/cmmm9mzy566f1.jpeg?auto=webp&s=d164bf9ab66dc4182df384a35e426fa6c9f3ff9c', 'width': 4032}, 'variants': {}}]}
Just got these in the mail!
1
[removed]
2025-06-10T21:39:23
https://i.redd.it/3k9nm8iz566f1.jpeg
heyitsapenguin
i.redd.it
1970-01-01T00:00:00
0
{}
1l8b8fs
false
null
t3_1l8b8fs
/r/LocalLLaMA/comments/1l8b8fs/just_got_these_in_the_mail/
false
false
https://a.thumbs.redditm…wiCEQbsorwu0.jpg
1
{'enabled': True, 'images': [{'id': 'pQ2JT_g4rghF4ZDWp-4Jf_ncxA4DP7wtNmnhcs9Nco0', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/3k9nm8iz566f1.jpeg?width=108&crop=smart&auto=webp&s=4fd27ee3bbfcf06cd3a5a357f8e40c6795421b1c', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/3k9nm8iz566f1.jpeg?width=216&crop=smart&auto=webp&s=648159639b88f2d8cc8fdc17fa08237b9f3137bb', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/3k9nm8iz566f1.jpeg?width=320&crop=smart&auto=webp&s=197d80dc8b086a53d68d109a60e69f69a35fa366', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/3k9nm8iz566f1.jpeg?width=640&crop=smart&auto=webp&s=61d9f1881ac706c02d74cb16d3882d5e2e646c56', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/3k9nm8iz566f1.jpeg?width=960&crop=smart&auto=webp&s=bcc451357a90f268232b62fd003193b3b3be31bc', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/3k9nm8iz566f1.jpeg?width=1080&crop=smart&auto=webp&s=cb4107cdd8de353329ca737c7211b39fd0c9a9d3', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/3k9nm8iz566f1.jpeg?auto=webp&s=aa360f6c341d946ebad9c6926882f05c763be1eb', 'width': 4032}, 'variants': {}}]}
Deepseek-r1-0528 is fire!
312
I just downloaded it last night and put it to work today. I'm no longer rushing to grab new models, I wait for the dust to settle, quants to be fixed and then grab it. I'm not even doing anything agent with coding. Just zero shot prompting, 1613 lines of code generated. For this I had it generate an inventory management system. 14029 tokens. One shot and complete implementation. prompt eval time = 79451.09 ms / 694 tokens ( 114.48 ms per token, 8.73 tokens per second) eval time = 2721180.55 ms / 13335 tokens ( 204.06 ms per token, 4.90 tokens per second) total time = 2800631.64 ms / 14029 tokens Bananas! https://preview.redd.it/cr58adlw666f1.png?width=754&format=png&auto=webp&s=8663bdc5a8815151d93f16a3e0749037c29655bf https://preview.redd.it/9z7ihhsz666f1.png?width=1354&format=png&auto=webp&s=3b4359dd8ccb1a20a5ff840c738329f810e0fdba https://preview.redd.it/eocred22766f1.png?width=1276&format=png&auto=webp&s=4400918ed42118b3298420bd70c4aa96b69f84a4 https://preview.redd.it/fdzkbg85766f1.png?width=1302&format=png&auto=webp&s=aa9fe81a44d3d10e934b3bb8e555024b6b4094a4 https://preview.redd.it/a77v9969766f1.png?width=1243&format=png&auto=webp&s=07b33955b549e1fb84a4a3a43a683460415509b9
2025-06-10T21:48:17
https://www.reddit.com/r/LocalLLaMA/comments/1l8bgd2/deepseekr10528_is_fire/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8bgd2
false
null
t3_1l8bgd2
/r/LocalLLaMA/comments/1l8bgd2/deepseekr10528_is_fire/
false
false
https://a.thumbs.redditm…w3UGxwQKMke4.jpg
312
null
MiniSearch updated! Go deeper in your web research!
47
Hello r/LocalLLaMA! Passing to invite you all to try the latest version of [MiniSearch](https://github.com/felladrin/MiniSearch), in which every follow-up question gathers more textual and graphical results to provide grounded answers. All links and images collected during a session will keep being listed, and the only limit will be your system memory. You don't need to worry about context size, as the chat runs on a sliding window where the context is always kept under 4k tokens. Also, the web app is optimized to work on mobile browsers, so even on these devices you'll probably finish your research before running out of memory. As mentioned in the [GitHub repository](https://github.com/felladrin/MiniSearch), you can run it on your machine via Docker, but for those willing to try without installing anything, there's a public instance available as a Hugging Face Space here: [https://felladrin-minisearch.hf.space](https://felladrin-minisearch.hf.space) Hope you enjoy it! \--- P.S. MiniSearch is a pet project started two years ago, making use of small LLMs that can run directly in your browser and comment about the web search results, so that's what it defaults to. But for those who prefer using local inference engines (i.e. LM Studio, Ollama, vLLM) or cloud inference servers (i.e. OpenRouter, Glama, Infermatic), which can respond faster, they just need to select *"Remote server (API)"* in the *"AI Processing Location"* menu option, and configure their API Base URL, Access Key and Model.
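For readers curious what a "sliding window kept under 4k tokens" means in practice, here is a minimal illustrative sketch. It is not MiniSearch's actual implementation; the function name and the chars/4 token estimate are assumptions.

```python
def trim_to_window(messages, max_tokens=4000, estimate=lambda s: len(s) // 4):
    """Keep only the most recent messages that fit the token budget.

    Oldest turns are dropped first; the chars/4 token estimate is a crude
    stand-in for a real tokenizer.
    """
    kept, total = [], 0
    for msg in reversed(messages):
        cost = estimate(msg["content"])
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "What is MiniSearch?"},
    {"role": "assistant", "content": "A small web-search + LLM research app."},
    # ...many more turns accumulate during a long research session...
]
window = trim_to_window(history)
```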
2025-06-10T22:15:19
https://i.redd.it/7zd0gvz2y56f1.png
Felladrin
i.redd.it
1970-01-01T00:00:00
0
{}
1l8c3nb
false
null
t3_1l8c3nb
/r/LocalLLaMA/comments/1l8c3nb/minisearch_updated_go_deeper_in_your_web_research/
false
false
https://a.thumbs.redditm…pITkMthboPc8.jpg
47
{'enabled': True, 'images': [{'id': 'XGx_gZV2FQx3Bz9VGVuS6HiWoQotPDFIzZEkUjt_6bw', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/7zd0gvz2y56f1.png?width=108&crop=smart&auto=webp&s=0a7082cef712aff97a50de6546e277d008de0b3c', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/7zd0gvz2y56f1.png?width=216&crop=smart&auto=webp&s=4ccc203c61182487a5d50efc66684bd8881b72bb', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/7zd0gvz2y56f1.png?width=320&crop=smart&auto=webp&s=f083b32dd125f9b164cbaf3c7249031c67520812', 'width': 320}], 'source': {'height': 244, 'url': 'https://preview.redd.it/7zd0gvz2y56f1.png?auto=webp&s=28ff8d76de4d1f76df0dfe65651a9d3885311bb3', 'width': 532}, 'variants': {}}]}
AI for Family Photos: Seeking Expert Advice on Hardware and Software
1
[removed]
2025-06-10T23:40:23
https://www.reddit.com/r/LocalLLaMA/comments/1l8e110/ai_for_family_photos_seeking_expert_advice_on/
GRC_Sparrow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8e110
false
null
t3_1l8e110
/r/LocalLLaMA/comments/1l8e110/ai_for_family_photos_seeking_expert_advice_on/
false
false
self
1
null
Reverse engineer Claude Code to work with local models (and any OpenAI API)
1
2025-06-10T23:41:00
https://v.redd.it/kcqlqhear66f1
sirvy3tr
/r/LocalLLaMA/comments/1l8e1jz/reverse_engineer_claude_code_to_work_with_local/
1970-01-01T00:00:00
0
{}
1l8e1jz
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kcqlqhear66f1/DASHPlaylist.mpd?a=1752320975%2CNDQ0MGIyOGM5MTdjOTJiOWYxMjA4MWJiZTUxNDljOTA0NGFiNzdhMWNjMDQzOGY0NTVjMWE4YjMxMmU1Mzk0OA%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/kcqlqhear66f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 976, 'hls_url': 'https://v.redd.it/kcqlqhear66f1/HLSPlaylist.m3u8?a=1752320975%2CZWViNDA4ZTdlMGYzOTAyNjhhM2I5ZGZiY2NlZTBmMWExY2IwNDAyODliMmI0MGJmMmI2Y2IxYjIxNDVmZWZhYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kcqlqhear66f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l8e1jz
/r/LocalLLaMA/comments/1l8e1jz/reverse_engineer_claude_code_to_work_with_local/
false
false
https://external-preview…9ffa482219ebecf4
1
{'enabled': False, 'images': [{'id': 'bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4.png?width=108&crop=smart&format=pjpg&auto=webp&s=f3223e4fd5eeabdc41e7c9ed8bc7ffbf37ba9be7', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4.png?width=216&crop=smart&format=pjpg&auto=webp&s=1d199afbc3eb21c11601d165439676f90fa51614', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4.png?width=320&crop=smart&format=pjpg&auto=webp&s=4daad977d6b602a9e14bf9698891d4552b3da5aa', 'width': 320}, {'height': 325, 'url': 'https://external-preview.redd.it/bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4.png?width=640&crop=smart&format=pjpg&auto=webp&s=8b84d3ce4d695846dfda68731d1453a696b68688', 'width': 640}, {'height': 487, 'url': 'https://external-preview.redd.it/bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4.png?width=960&crop=smart&format=pjpg&auto=webp&s=d007e635bef5df44748a93b940459ddaf70ca157', 'width': 960}, {'height': 548, 'url': 'https://external-preview.redd.it/bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6d1844f6793940bc351f5a0f569f3fa9d5955a29', 'width': 1080}], 'source': {'height': 1734, 'url': 'https://external-preview.redd.it/bmc4ZDE2ZWFyNjZmMctWHYGqLFkRqe9fMU4vWS4S_srmoVFJV2NN56jF_ke4.png?format=pjpg&auto=webp&s=7d75c361e1bc42b33cf84aeef823c68afccc9011', 'width': 3414}, 'variants': {}}]}
🔗 mistral.rs now has full built-in MCP Client support - Connect your local models to ANY external tool automatically!
3
Just shipped what I think is a game-changer for local LLM workflows: MCP (Model Context Protocol) client support in [mistral.rs](https://github.com/EricLBuehler/mistral.rs/) ([https://github.com/EricLBuehler/mistral.rs](https://github.com/EricLBuehler/mistral.rs))! >**TL;DR:** [mistral.rs](https://github.com/EricLBuehler/mistral.rs) now has seamless built-in Model Context Protocol (MCP) support. No glue code - just config, run, and your model suddenly knows how to hit the file-system, REST endpoints, or WebSockets. You can get [mistralrs via PyPi](https://github.com/EricLBuehler/mistral.rs/blob/master/mistralrs-pyo3/_README.md#installation-from-pypi), [Docker Containers](https://github.com/EricLBuehler/mistral.rs/pkgs/container/mistral.rs), or with [a local build](https://github.com/EricLBuehler/mistral.rs?tab=readme-ov-file#installation-and-build). **What does this mean?** Your models can now automatically connect to external tools and services - file systems, web search, databases, APIs, you name it. No more manual tool calling setup, no more custom integration code. Just configure once and your models gain superpowers. We support all the transport interfaces: * **Process**: Local tools (filesystem, databases, and more) * **Streamable HTTP and SSE**: REST APIs, cloud services - Works with *any* HTTP MCP server * **WebSocket**: Real-time streaming tools **The best part?** ***It just works*****.** Tools are discovered automatically at startup. Multi-server support. Authentication handled. Timeouts managed. I've been testing this extensively and it's incredibly smooth. The Python API feels natural, HTTP server integration is seamless, and the automatic tool discovery means no more maintaining tool registries. **The magic ✨? It's just a few lines of Python.** https://preview.redd.it/lr5bf5vjz56f1.png?width=1274&format=png&auto=webp&s=86901d7b7a9c0c82aca6983fc7bd932b5ec27d13 **Now your model can read/write files automatically when asked!** **Use the HTTP server in just 2 steps:** 1) **Create mcp-config.json** { "servers": [ { "name": "Filesystem Tools", "source": { "type": "Process", "command": "npx", "args": [ "@modelcontextprotocol/server-filesystem", "." ] } } ], "auto_register_tools": true } 2) **Start server** mistralrs-server --mcp-config mcp-config.json --port 1234 run -m Qwen/Qwen3-4B **You can just use the normal OpenAI API - tools work automatically!** curl -X POST http://localhost:1234/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "mistral.rs", "messages": [ { "role": "user", "content": "List files and create hello.txt" } ] }' What tools are you most excited to connect to your local models? I'm excited to see what you create with this 🚀! **Quick links:** * [https://github.com/EricLBuehler/mistral.rs/blob/master/examples/MCP\_QUICK\_START.md](https://github.com/EricLBuehler/mistral.rs/blob/master/examples/MCP_QUICK_START.md) * [https://github.com/EricLBuehler/mistral.rs/tree/master/docs/MCP](https://github.com/EricLBuehler/mistral.rs/tree/master/docs/MCP) * [https://github.com/EricLBuehler/mistral.rs/blob/master/examples/python/mcp\_client.py](https://github.com/EricLBuehler/mistral.rs/blob/master/examples/python/mcp_client.py)
2025-06-11T00:07:01
https://www.reddit.com/r/LocalLLaMA/comments/1l8elh7/mistralrs_now_has_full_builtin_mcp_client_support/
EricBuehler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8elh7
false
null
t3_1l8elh7
/r/LocalLLaMA/comments/1l8elh7/mistralrs_now_has_full_builtin_mcp_client_support/
false
false
https://external-preview…2b53eecf3b044a12
3
{'enabled': False, 'images': [{'id': 'wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk.png?width=108&crop=smart&auto=webp&s=a1589a4d4662f5346e04001a4de5de91901c0945', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk.png?width=216&crop=smart&auto=webp&s=3d7ee54907b25a5ce4f9a7d6f0db1dfd6be0008a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk.png?width=320&crop=smart&auto=webp&s=6a7d3f2b240e7818d387d8fc2475bea1ce1416ce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk.png?width=640&crop=smart&auto=webp&s=1335b4728b7bc39b60880224863a2605b981daf8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk.png?width=960&crop=smart&auto=webp&s=3f336efa91ad5e2ec4e8811a4c44bbcb8e639452', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk.png?width=1080&crop=smart&auto=webp&s=7217c6d5f28cb0932487b2ae8a454eeabfa82e98', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wYLtj87slRIDPIrbqMcHzlZxXZi3tCtil3ZBukqNvmk.png?auto=webp&s=0da41884f7871959a6cc66ecc8417f036b8643bf', 'width': 1200}, 'variants': {}}]}
Looking for a Clean DVD Copy of LM Studio (v0.2.x or Early 0.3.x) — Pre-Updater Poisoning
1
[removed]
2025-06-11T00:35:01
https://www.reddit.com/r/LocalLLaMA/comments/1l8f6e4/looking_for_a_clean_dvd_copy_of_lm_studio_v02x_or/
Nervous-Exchange4597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8f6e4
false
null
t3_1l8f6e4
/r/LocalLLaMA/comments/1l8f6e4/looking_for_a_clean_dvd_copy_of_lm_studio_v02x_or/
false
false
self
1
null
Simple semantic search app to test Qwen3 embeddings
1
[removed]
2025-06-11T00:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1l8f7sc/simple_semantic_search_app_to_test_qwen3/
Extension_Leave9652
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8f7sc
false
null
t3_1l8f7sc
/r/LocalLLaMA/comments/1l8f7sc/simple_semantic_search_app_to_test_qwen3/
false
false
self
1
{'enabled': False, 'images': [{'id': 'w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk.png?width=108&crop=smart&auto=webp&s=112e8d74729bc3f5c606ba4e913fc7212d6db3e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk.png?width=216&crop=smart&auto=webp&s=2b674f02aa7b86338719a6eeb0a75ad06be77745', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk.png?width=320&crop=smart&auto=webp&s=fc740911e48b75e24809960458497e817ab5de16', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk.png?width=640&crop=smart&auto=webp&s=6ac1be4e80fc27a1c1cb4514505113bb93a9fc75', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk.png?width=960&crop=smart&auto=webp&s=1dfec34cd6c87f539608b503c8d76a91b59155e0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk.png?width=1080&crop=smart&auto=webp&s=f3a8d0351eedcc560f81a65ecd83d76533198709', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w4h7U9wrVEfQT-8VnHCD0OixaYx1Qtnbf2_S-AaB9Wk.png?auto=webp&s=d543d8c5d90a3a60b214d98e3a5ad623ddf7a89a', 'width': 1200}, 'variants': {}}]}
📌 Title: Looking for a Clean DVD Copy of LM Studio (v0.2.x or Early 0.3.x) — Pre-Updater Versions Only
1
[removed]
2025-06-11T00:38:03
https://www.reddit.com/r/LocalLLaMA/comments/1l8f8mt/title_looking_for_a_clean_dvd_copy_of_lm_studio/
Nervous-Exchange4597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8f8mt
false
null
t3_1l8f8mt
/r/LocalLLaMA/comments/1l8f8mt/title_looking_for_a_clean_dvd_copy_of_lm_studio/
false
false
self
1
null
Has anyone tried Snowflake's Arctic Inference? Worth it for local setups ?
1
[removed]
2025-06-11T00:38:59
https://www.reddit.com/r/LocalLLaMA/comments/1l8f9bf/has_anyone_tried_snowflakes_arctic_inference/
Raghuvansh_Tahlan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8f9bf
false
null
t3_1l8f9bf
/r/LocalLLaMA/comments/1l8f9bf/has_anyone_tried_snowflakes_arctic_inference/
false
false
self
1
null
Looking for an Older LM Studio Copy — Early Versions Only
1
[removed]
2025-06-11T00:39:32
https://www.reddit.com/r/LocalLLaMA/comments/1l8f9ok/looking_for_an_older_lm_studio_copy_early/
Nervous-Exchange4597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8f9ok
false
null
t3_1l8f9ok
/r/LocalLLaMA/comments/1l8f9ok/looking_for_an_older_lm_studio_copy_early/
false
false
self
1
null
Has anyone tried Snowflake's Arctic Inference? Worth it for local setups ?
1
[removed]
2025-06-11T00:46:10
https://www.reddit.com/r/LocalLLaMA/comments/1l8feo2/has_anyone_tried_snowflakes_arctic_inference/
Raghuvansh_Tahlan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8feo2
false
null
t3_1l8feo2
/r/LocalLLaMA/comments/1l8feo2/has_anyone_tried_snowflakes_arctic_inference/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OcPdNDE9h-KIzSuf6mtotvBCa3FL_-Rgj9tmVXzWlJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yu-cLG48ObnV5A_BenmhsjLpempvdEuAz3HDWiutmMM.jpg?width=108&crop=smart&auto=webp&s=53a1a9e28b97c47466885f2e82e2cf335f53fa25', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yu-cLG48ObnV5A_BenmhsjLpempvdEuAz3HDWiutmMM.jpg?width=216&crop=smart&auto=webp&s=3e6b83178ce5ed9d7dc4b3ce64f746c1bd873ae1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yu-cLG48ObnV5A_BenmhsjLpempvdEuAz3HDWiutmMM.jpg?width=320&crop=smart&auto=webp&s=aecc225bcf05059af7f38879df0bf7a556d6fb0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yu-cLG48ObnV5A_BenmhsjLpempvdEuAz3HDWiutmMM.jpg?width=640&crop=smart&auto=webp&s=eff41fbb743455e3c9f2798e8764c27d017b62eb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yu-cLG48ObnV5A_BenmhsjLpempvdEuAz3HDWiutmMM.jpg?width=960&crop=smart&auto=webp&s=e267918de9725f6cfc4537be5f951dcf2480479d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yu-cLG48ObnV5A_BenmhsjLpempvdEuAz3HDWiutmMM.jpg?width=1080&crop=smart&auto=webp&s=a43e6e7f342d2a4b59cbf4a5b9f87e9258dbc119', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yu-cLG48ObnV5A_BenmhsjLpempvdEuAz3HDWiutmMM.jpg?auto=webp&s=771f79291e1d2398afed2bfc895d79d5c8f70410', 'width': 1200}, 'variants': {}}]}
Does generative engine optimization work?
1
[removed]
2025-06-11T01:01:39
https://www.reddit.com/r/LocalLLaMA/comments/1l8fpx6/does_generative_engine_optimization_work/
compsedoc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8fpx6
false
null
t3_1l8fpx6
/r/LocalLLaMA/comments/1l8fpx6/does_generative_engine_optimization_work/
false
false
self
1
null
'My Productivity Is At Zero': Meme Frenzy On Social Media As ChatGPT Goes Down Globally
1
https://www.google.com/amp/s/www.news18.com/amp/tech/my-productivity-is-at-zero-meme-frenzy-on-social-media-as-chatgpt-goes-down-globally-9378281.html
2025-06-11T01:03:54
https://www.reddit.com/r/LocalLLaMA/comments/1l8frig/my_productivity_is_at_zero_meme_frenzy_on_social/
siegevjorn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8frig
false
null
t3_1l8frig
/r/LocalLLaMA/comments/1l8frig/my_productivity_is_at_zero_meme_frenzy_on_social/
false
false
self
1
null
venice.ai vs ollama on server
0
I have Ollama installed on a VPS. I'm also looking at [venice.ai](http://venice.ai). I just want to know which one you would go with.
2025-06-11T01:12:55
https://www.reddit.com/r/LocalLLaMA/comments/1l8fxtl/veniceai_vs_ollama_on_server/
wbiggs205
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8fxtl
false
null
t3_1l8fxtl
/r/LocalLLaMA/comments/1l8fxtl/veniceai_vs_ollama_on_server/
false
false
self
0
{'enabled': False, 'images': [{'id': 'rRDGOTZd1prv-7QHj5_Degzi3zpUKn55iTiFjQ3pvaY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?width=108&crop=smart&auto=webp&s=ad7e4a70207be163e0c21e7ff4ec56eec0eb3920', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?width=216&crop=smart&auto=webp&s=d5c5b040f709968cc9823364ef49442ec50fff20', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?width=320&crop=smart&auto=webp&s=7d9294fdf6a62ec55d99057263a3bc0183a1bea3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?width=640&crop=smart&auto=webp&s=55e24a5170b38102eb747abbf06eac7c6852fc54', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?width=960&crop=smart&auto=webp&s=69934e60f880b39ab03983e159133c5423164bb5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?width=1080&crop=smart&auto=webp&s=30b7041cf53e3ea7ab8bd10f4a7bb501c1452674', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?auto=webp&s=00e1eedcc8449a316e1a251a4fe971854b8ad89f', 'width': 2400}, 'variants': {}}]}
Meta to pay nearly $15 billion for Scale AI stake, The Information reports
1
June 10 (Reuters) - Meta Platforms [(META.O)](https://www.reuters.com/markets/companies/META.O) has agreed to take a 49% stake in artificial intelligence startup Scale AI for $14.8 billion, The Information reported on Tuesday, citing two people familiar with the matter. Founded in 2016, Scale AI provides vast amounts of labeled data or curated training data, which is crucial for developing sophisticated tools such as OpenAI's ChatGPT. The deal, which has not been finalized yet, appears to be beneficial for Scale AI's investors including Accel, Index Ventures, Founders Fund and Greenoaks, as well as its current and former employees, the report said. Meta, Scale AI and the startup's investors did not immediately respond to Reuters' requests for comment. As part of the deal, Scale AI CEO Alexandr Wang will take a top position inside Meta, leading a new ["superintelligence" lab](https://www.reuters.com/business/metas-zuckerberg-is-hiring-new-ai-team-bloomberg-news-reports-2025-06-10/), according to the report. Meta CEO Mark Zuckerberg has been actively recruiting top AI researchers to boost the company's AI efforts, the report said. The company is fighting the perception that it may have fallen behind in the AI race after its initial set of Llama 4 large language models [released in April](https://www.reuters.com/technology/meta-releases-new-ai-model-llama-4-2025-04-05/) fell short of performance expectations. Meta [delayed the release](https://www.reuters.com/business/meta-is-delaying-release-its-behemoth-ai-model-wsj-reports-2025-05-15/) of its flagship "Behemoth" AI model due to concerns about its capabilities, the Wall Street Journal reported last month. The company is also facing [antitrust concerns](https://www.reuters.com/sustainability/boards-policy-regulation/meta-asks-judge-rule-that-ftc-failed-prove-its-monopoly-case-2025-05-15/) related to its acquisitions of Instagram and WhatsApp. According to The Information report, the structure for the potential deal with Scale AI could be designed to avoid more regulatory scrutiny. Scale AI was valued at [$13.8 billion](https://www.reuters.com/technology/ai-startup-scale-ai-raises-1-billion-fresh-funding-2024-05-21/) in a funding round last spring. It generated about $870 million in revenue in 2024 and expects more than $2 billion this year, the report said. The company had more than $900 million of cash on its balance sheet at the end of last year, according to the report.

Meta Platforms (the parent company of Facebook and Instagram) is finalizing a landmark deal to acquire a 49% stake in Scale AI, one of the leading data-labeling and AI infrastructure startups, for approximately $14.8–$15 billion[2](https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/)[3](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[6](https://www.ft.com/content/5e556c2e-2ba4-415a-adb6-1bf6bed498eb)[7](https://cointelegraph.com/news/meta-acquires-49-percent-scale-ai-big-tech-ai-race)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg)[9](https://finance.yahoo.com/news/meta-pay-nearly-15-billion-171801208.html). This is the largest external investment in Meta’s history and signals a major escalation in the competitive race among Big Tech firms to dominate artificial intelligence.

**Key Details of the Deal**

* **Stake and Valuation:** Meta will acquire a 49% stake in Scale AI, valuing the company at around $28–$30 billion[6](https://www.ft.com/content/5e556c2e-2ba4-415a-adb6-1bf6bed498eb)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg).
* **Transaction Structure:** The deal is structured to avoid a full acquisition, likely to mitigate regulatory scrutiny, especially given Meta's ongoing antitrust issues related to previous acquisitions like Instagram and WhatsApp[2](https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/)[3](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[4](https://www.newcomer.co/p/scale-ais-alexandr-wang-in-the-drivers)[9](https://finance.yahoo.com/news/meta-pay-nearly-15-billion-171801208.html).
* **Leadership Transition:** Alexandr Wang, Scale AI’s CEO and co-founder, will join Meta to lead a new “superintelligence” lab, bringing some top Scale AI talent with him[3](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[5](https://www.wsj.com/tech/ai/meta-in-talks-to-invest-14-billion-in-scale-ai-hire-ceo-alexandr-wang-5268564e)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg). Wang will retain voting control over Scale AI, even as Meta becomes the largest outside shareholder[4](https://www.newcomer.co/p/scale-ais-alexandr-wang-in-the-drivers).
* **Investor Outcome:** The investment is reportedly structured as a dividend, allowing Scale AI’s investors and employees to realize significant returns while retaining some future upside[4](https://www.newcomer.co/p/scale-ais-alexandr-wang-in-the-drivers)[9](https://finance.yahoo.com/news/meta-pay-nearly-15-billion-171801208.html).
* **Revenue and Growth:** Scale AI reported $870 million in revenue for 2024 and expects to exceed $2 billion in 2025[2](https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg)[9](https://finance.yahoo.com/news/meta-pay-nearly-15-billion-171801208.html).

# Strategic Rationale

**Why Meta Is Making This Move**

* **AI Catch-Up:** Meta has faced criticism for lagging behind rivals like OpenAI, Microsoft, Google, and Amazon in the AI arms race, particularly after the lukewarm reception of its Llama 4 models and the delayed launch of its “Behemoth” AI model[3](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg)[9](https://finance.yahoo.com/news/meta-pay-nearly-15-billion-171801208.html). The Scale AI deal is seen as an aggressive attempt to close this gap and accelerate progress toward artificial general intelligence (AGI)[7](https://cointelegraph.com/news/meta-acquires-49-percent-scale-ai-big-tech-ai-race)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg).
* **Data and Infrastructure:** Scale AI is a critical provider of labeled data and curation services for training large AI models, serving clients such as OpenAI, Microsoft, Cohere, Google, and Meta itself[1](https://www.nytimes.com/2025/06/09/technology/meta-scale-ai-investment.html)[2](https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/)[3](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg)[9](https://finance.yahoo.com/news/meta-pay-nearly-15-billion-171801208.html). By securing a large stake, Meta ensures privileged access to high-quality training data and infrastructure.
* **Talent Acquisition:** Bringing Alexandr Wang and key Scale AI personnel into Meta is expected to revitalize Meta’s AI leadership and technical capabilities[3](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[5](https://www.wsj.com/tech/ai/meta-in-talks-to-invest-14-billion-in-scale-ai-hire-ceo-alexandr-wang-5268564e)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg).
* **Enterprise Expansion:** Meta plans to leverage its global sales force to expand Scale AI’s enterprise business, compensating for potential loss of business from rivals like Google and OpenAI, who may now see Scale AI as a competitor-aligned entity[4](https://www.newcomer.co/p/scale-ais-alexandr-wang-in-the-drivers).

# Industry and Competitive Context

* **Big Tech AI Bets:** This deal mirrors strategies by Microsoft (OpenAI), Amazon and Google (Anthropic), where major tech firms take large stakes in promising AI startups rather than outright acquisitions, partly to avoid antitrust complications and to secure long-term partnerships[3](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[8](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg).
* **Regulatory Sensitivities:** The structure of Meta’s investment (significant but non-controlling, with voting rights delegated to Wang) is designed to minimize regulatory risk and scrutiny[2](https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/)[4](https://www.newcomer.co/p/scale-ais-alexandr-wang-in-the-drivers)[9](https://finance.yahoo.com/news/meta-pay-nearly-15-billion-171801208.html).
* **Sector Impact:** The move is likely to reshape the competitive landscape for AI infrastructure and data services, potentially reducing Scale AI’s business with Meta’s direct rivals while boosting its enterprise reach through Meta’s channels[4](https://www.newcomer.co/p/scale-ais-alexandr-wang-in-the-drivers).
2025-06-11T01:30:21
https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/
Vatnik_Annihilator
reuters.com
1970-01-01T00:00:00
0
{}
1l8ga2w
false
null
t3_1l8ga2w
/r/LocalLLaMA/comments/1l8ga2w/meta_to_pay_nearly_15_billion_for_scale_ai_stake/
false
false
https://b.thumbs.redditm…v33r3H_E-elo.jpg
1
{'enabled': False, 'images': [{'id': 'U-NXckJd-ahQHc3V5x4ph9oZyPpjG9eIkfbSC_aHu8I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=108&crop=smart&auto=webp&s=2066895ce3eb7802d4c587c8310ad283ca79c924', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=216&crop=smart&auto=webp&s=43f22248af05dee588326ca9f8ec09ee4e733920', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=320&crop=smart&auto=webp&s=712b6bc32b56c94bb5d665120b417a34b71d2761', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=640&crop=smart&auto=webp&s=836acaf541c15ede6377db1512c4ff2ceac17832', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=960&crop=smart&auto=webp&s=7f8cbb44bfa9fabe3052b6cc32e5bac94ef5830f', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=1080&crop=smart&auto=webp&s=fdcaff4d1690da57e03fe1a07d883f133433b1f9', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?auto=webp&s=66674b5f2d6da0d8c880882badde2dbebd064191', 'width': 1920}, 'variants': {}}]}
You can chat with iOS’ local LLM on iOS 26
1
[removed]
2025-06-11T01:35:27
https://i.redd.it/4qitb3n3c76f1.jpeg
Weak_Tie1467
i.redd.it
1970-01-01T00:00:00
0
{}
1l8gdqt
false
null
t3_1l8gdqt
/r/LocalLLaMA/comments/1l8gdqt/you_can_chat_with_ios_local_llm_on_ios_26/
false
false
https://b.thumbs.redditm…8zPV7e419Z-E.jpg
1
{'enabled': True, 'images': [{'id': '9VcLUY1pPKjavQr_rA-GBE5Ub4QcJY5cUizyN2ISTvs', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?width=108&crop=smart&auto=webp&s=394f81be06440ec61a5b30dcde7abfb537a75df1', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?width=216&crop=smart&auto=webp&s=6721520bf9e5421c76de2fba5c76d5ad3cc9989c', 'width': 216}, {'height': 273, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?width=320&crop=smart&auto=webp&s=d430a74b12e4db9a860c149d9cf38e48d67fd8be', 'width': 320}, {'height': 546, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?width=640&crop=smart&auto=webp&s=b34d6b76a222c056bdc98c45e95b6697cb069539', 'width': 640}, {'height': 820, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?width=960&crop=smart&auto=webp&s=24ee2028bee3e0c23cfdcc748553bc88e135fae5', 'width': 960}, {'height': 922, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?width=1080&crop=smart&auto=webp&s=7f8de1f1b698f968b8aa761c8fe21ba39fb3f8d4', 'width': 1080}], 'source': {'height': 1128, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?auto=webp&s=26cfb8dcfb6b0b8cd83c4b0b5a796f18169a90a4', 'width': 1320}, 'variants': {}}]}
Meta to pay nearly $15 billion for Scale AI stake, The Information reports
96
Meta’s investment in Scale AI—reportedly valued between $14 billion and $15 billion for a 49% stake—signals a pivotal shift in the tech giant’s artificial intelligence strategy and has broad implications for the AI industry, Meta’s competitive position, and the broader landscape of AI infrastructure[3](https://www.washingtonpost.com/technology/2025/06/10/ai-meta-scale-google-openai/)[10](https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/)[13](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg). # Strategic Impact on Meta * **Accelerated AI Development:** The investment provides Meta with direct access to Scale AI’s advanced data labeling and curation services, which are critical for training large language models (LLMs) and other AI systems. This will help Meta overcome recent challenges, such as the underwhelming launch of its Llama AI models and the postponed release of its next-gen “Behemoth” system[7](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[9](https://www.ainvest.com/news/meta-14-8b-scale-ai-stake-land-grab-agi-supremacy-2506/)[13](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg). * **Talent Acquisition:** Scale AI’s CEO, Alexandr Wang, is set to lead a new “superintelligence” lab at Meta, bringing with him a team of experts focused on artificial general intelligence (AGI). This move addresses Meta’s struggles with high turnover and project delays in its AI division[8](https://www.ainvest.com/news/meta-invests-14-billion-scale-ai-hires-founder-alexandr-wang-lead-ai-lab-2506/)[11](https://www.wsj.com/tech/ai/meta-in-talks-to-invest-14-billion-in-scale-ai-hire-ceo-alexandr-wang-5268564e)[13](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg). * **Enhanced Data Infrastructure:** By securing a steady supply of high-quality, specialized data, Meta aims to future-proof its AI pipeline, supporting not only its consumer-facing products but also its enterprise and defense initiatives, such as the “Defense Llama” project[6](https://www.ainvest.com/news/meta-10-billion-bet-scale-ai-strategic-play-dominance-ai-data-infrastructure-2506/)[9](https://www.ainvest.com/news/meta-14-8b-scale-ai-stake-land-grab-agi-supremacy-2506/)[13](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg). # Industry and Competitive Dynamics * **Race for AI Supremacy:** Meta’s investment is part of a broader trend among Big Tech companies to secure foundational AI infrastructure. Microsoft, Google, and Amazon have made similar bets by investing billions in OpenAI, Anthropic, and other AI startups[4](https://fortune.com/2025/06/08/meta-scale-ai-statup-investment-10-billion-alexandr-wang-machine-learning/)[13](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg). * **Market Valuation and Growth:** Scale AI’s valuation is expected to double to nearly $28 billion post-investment, reflecting the premium placed on AI data infrastructure in today’s market. The company’s revenue is projected to more than double from $870 million in 2024 to over $2 billion in 2025[9](https://www.ainvest.com/news/meta-14-8b-scale-ai-stake-land-grab-agi-supremacy-2506/)[13](https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg). 
* **Regulatory and Antitrust Considerations:** By taking a minority stake rather than a full acquisition, Meta avoids some of the regulatory scrutiny that might accompany a complete takeover, while still securing significant influence and access to Scale AI’s resources[7](https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html)[9](https://www.ainvest.com/news/meta-14-8b-scale-ai-stake-land-grab-agi-supremacy-2506/). # Broader Implications * **AI Infrastructure as a Strategic Asset:** The deal underscores the growing importance of data labeling and curation as a critical utility in the AI economy. Companies that control these resources are better positioned to compete in both commercial and governmental AI markets[6](https://www.ainvest.com/news/meta-10-billion-bet-scale-ai-strategic-play-dominance-ai-data-infrastructure-2506/)[9](https://www.ainvest.com/news/meta-14-8b-scale-ai-stake-land-grab-agi-supremacy-2506/). * **Investment and Innovation:** For investors, the partnership signals a shift toward betting on AI infrastructure over individual applications. It highlights the potential for long-term growth in companies that provide the foundational tools for AI development[6](https://www.ainvest.com/news/meta-10-billion-bet-scale-ai-strategic-play-dominance-ai-data-infrastructure-2506/)[9](https://www.ainvest.com/news/meta-14-8b-scale-ai-stake-land-grab-agi-supremacy-2506/). * **Challenges and Risks:** Despite the strategic benefits, Meta and Scale AI face potential risks, including concerns over labor practices, data confidentiality (given Scale AI’s work with competitors), and the ongoing need to navigate regulatory environments[6](https://www.ainvest.com/news/meta-10-billion-bet-scale-ai-strategic-play-dominance-ai-data-infrastructure-2506/).
2025-06-11T01:38:51
https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/
Vatnik_Annihilator
reuters.com
1970-01-01T00:00:00
0
{}
1l8gg51
false
null
t3_1l8gg51
/r/LocalLLaMA/comments/1l8gg51/meta_to_pay_nearly_15_billion_for_scale_ai_stake/
false
false
https://b.thumbs.redditm…v33r3H_E-elo.jpg
96
{'enabled': False, 'images': [{'id': 'U-NXckJd-ahQHc3V5x4ph9oZyPpjG9eIkfbSC_aHu8I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=108&crop=smart&auto=webp&s=2066895ce3eb7802d4c587c8310ad283ca79c924', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=216&crop=smart&auto=webp&s=43f22248af05dee588326ca9f8ec09ee4e733920', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=320&crop=smart&auto=webp&s=712b6bc32b56c94bb5d665120b417a34b71d2761', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=640&crop=smart&auto=webp&s=836acaf541c15ede6377db1512c4ff2ceac17832', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=960&crop=smart&auto=webp&s=7f8cbb44bfa9fabe3052b6cc32e5bac94ef5830f', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=1080&crop=smart&auto=webp&s=fdcaff4d1690da57e03fe1a07d883f133433b1f9', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?auto=webp&s=66674b5f2d6da0d8c880882badde2dbebd064191', 'width': 1920}, 'variants': {}}]}
🎙️ Looking for Beta Testers – Get 24 Hours of Free TTS Audio
0
I'm launching a new TTS (text-to-speech) service and I'm looking for a few early users to help test it out. If you're into AI voices, audio content, or just want to convert a lot of text to audio, this is a great chance to try it for free. ✅ Beta testers get **24 hours of audio generation** (no strings attached) ✅ Supports multiple voices and formats ✅ Ideal for podcasts, audiobooks, screenreaders, etc. If you're interested, **DM me** and I'll get you set up with access. Feedback is optional but appreciated! Thanks! 🙌
2025-06-11T01:48:43
https://www.reddit.com/r/LocalLLaMA/comments/1l8gn0a/looking_for_beta_testers_get_24_hours_of_free_tts/
mythicinfinity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8gn0a
false
null
t3_1l8gn0a
/r/LocalLLaMA/comments/1l8gn0a/looking_for_beta_testers_get_24_hours_of_free_tts/
false
false
self
0
null
How does one get the new Qwen3 reranking models to work in llama.cpp? (GGUF)
16
The documentation isn’t great, and I haven’t been able to get it working with llama-server either. Anyone had any luck?
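For what it's worth, here is a hedged sketch of the pattern people report working with reranker GGUFs: start llama-server with its reranking mode enabled and call the Jina/Cohere-style rerank endpoint that recent builds expose. The model filename, port, the `--reranking` flag, the `/v1/rerank` path, and the response field names below are assumptions that can differ between llama.cpp versions, so check `llama-server --help` for your build.

```python
# Hedged sketch: query a llama-server rerank endpoint with a Qwen3-Reranker GGUF.
# Assumes the server was started with something like:
#   llama-server -m Qwen3-Reranker-0.6B-Q8_0.gguf --reranking --port 8080
# Flag names, endpoint path, and response fields may differ between llama.cpp builds.
import requests

SERVER = "http://localhost:8080"

def rerank(query: str, documents: list[str], top_n: int = 3):
    payload = {
        "query": query,
        "documents": documents,
        "top_n": top_n,
    }
    # Recent llama-server builds expose a Jina/Cohere-style rerank endpoint.
    resp = requests.post(f"{SERVER}/v1/rerank", json=payload, timeout=60)
    resp.raise_for_status()
    # Results are typically a list of {"index": ..., "relevance_score": ...} entries.
    results = resp.json()["results"]
    return sorted(results, key=lambda r: r["relevance_score"], reverse=True)

if __name__ == "__main__":
    docs = [
        "llama.cpp supports embedding models.",
        "Reranking sorts candidate passages by relevance to a query.",
        "GGUF is a quantized model file format.",
    ]
    for r in rerank("How do rerankers work?", docs):
        print(r["relevance_score"], docs[r["index"]])
```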
2025-06-11T02:19:32
https://www.reddit.com/r/LocalLLaMA/comments/1l8h95q/how_does_one_get_the_new_qwen3_reranking_models/
42GOLDSTANDARD42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8h95q
false
null
t3_1l8h95q
/r/LocalLLaMA/comments/1l8h95q/how_does_one_get_the_new_qwen3_reranking_models/
false
false
self
16
null
With an AI code execution agent, how should it approach sandboxing?
2
I'm working on an AI agent that can run and execute code. Currently the code (Python) is executed in a Docker container with resource limits and no direct filesystem access. The problem is that including specific tools or functions (for instance, a module with helpers to send emails or other utilities for the LLM to use in its code) is complicated by the sandbox. I could simply use `exec` on the host, but that would make an already vulnerable project worse. I could also wrap each function behind an API, but that presents its own issues. Does anyone have any suggestions?
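One pattern that might fit, as a hedged sketch rather than a recommendation: keep the untrusted code inside the container, but expose an explicit whitelist of tools through a tiny HTTP service running on the host, so the sandbox never imports the privileged module or sees its credentials. All names here (`send_email`, the port, `host.docker.internal`) are illustrative assumptions, not part of the original project.

```python
# Hedged sketch: expose whitelisted tools to sandboxed code over a minimal HTTP API,
# so the container only needs network access to one host port, never the host filesystem.
# All names (send_email, TOOLS, port 9090) are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def send_email(to: str, subject: str, body: str) -> dict:
    # Real credentials stay on the host; the sandbox only sees this RPC surface.
    print(f"[host] would send email to {to}: {subject}")
    return {"status": "queued"}

TOOLS = {"send_email": send_email}  # explicit whitelist of callable tools

class ToolHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length) or b"{}")
        fn = TOOLS.get(req.get("tool"))
        if fn is None:
            self.send_response(404)
            self.end_headers()
            return
        result = fn(**req.get("args", {}))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Inside the container, generated code would call something like:
    #   requests.post("http://host.docker.internal:9090",
    #                 json={"tool": "send_email", "args": {...}})
    HTTPServer(("0.0.0.0", 9090), ToolHandler).serve_forever()
```

The upside of this split is that the whitelist, argument validation, and rate limiting all live outside the sandbox, so a prompt-injected script can only call what the host chooses to expose.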
2025-06-11T02:20:32
https://www.reddit.com/r/LocalLLaMA/comments/1l8h9wa/with_an_ai_code_execution_agent_how_should_it/
Pretend_Guava7322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8h9wa
false
null
t3_1l8h9wa
/r/LocalLLaMA/comments/1l8h9wa/with_an_ai_code_execution_agent_how_should_it/
false
false
self
2
null
Recommended cloud machines for DeepSeek R1?
3
I know, I know, we're in LocalLlama, but hear me out. Given that it's a bit tricky to run a small datacenter with enough latest-gen VRAM at home, I'm looking for the next best option. Are there any good and trusted options you use to run it in cloud? (Note: I understand there are ways to run DeepSeek at home on cheap-ish hardware, but I'd like it at the speed and responsiveness of the latest Nvidias.) Things I'd like to see: 1. Reasonable cost + paying only when used rather than having an expensive machine running 24/7. 2. As much transparency and control over the machine and how it handles the models and data as possible. This is why we would ideally want to run it at home, is there a cloud provider that offers as close to at-home experience as possible? I've been using Together AI so far for similar things, but I'd like to have more control over the machine rather than just trust they're not logging the data and they're giving me the model I want. Ideally, create a snapshot / docker image that would give me full control over what's going on, specify exact versions of the model and inference engine, possibly deploy custom code, and then have it spin up and spin down automatically when I need. Anyone got any recommendations or experience to share? How much does your cloud setup cost you? Thanks a lot!
2025-06-11T02:42:09
https://www.reddit.com/r/LocalLLaMA/comments/1l8hp5t/recommended_cloud_machines_for_deepseek_r1/
lakySK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8hp5t
false
null
t3_1l8hp5t
/r/LocalLLaMA/comments/1l8hp5t/recommended_cloud_machines_for_deepseek_r1/
false
false
self
3
null
Best local coding model
1
[removed]
2025-06-11T03:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1l8i3w1/best_local_coding_model/
FlowgrammerCrew
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8i3w1
false
null
t3_1l8i3w1
/r/LocalLLaMA/comments/1l8i3w1/best_local_coding_model/
false
false
self
1
null
Why are there drastic differences between deepseek r1 models on pocketpal?
0
2025-06-11T03:13:07
https://i.redd.it/imv6t0wit76f1.jpeg
johncenaraper
i.redd.it
1970-01-01T00:00:00
0
{}
1l8iahr
false
null
t3_1l8iahr
/r/LocalLLaMA/comments/1l8iahr/why_are_there_drastic_differences_between/
false
false
https://b.thumbs.redditm…_H3vABuO_OrM.jpg
0
{'enabled': True, 'images': [{'id': 'iokZiZC4sDGW8xS1VfV2qJJlm_Rol3l87iEXHASen8Q', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?width=108&crop=smart&auto=webp&s=512cc06d96c3153034fc60223ea8741efaa01e8c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?width=216&crop=smart&auto=webp&s=d89da8205053e0b1c1fe41427dbc5f479a6a21dc', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?width=320&crop=smart&auto=webp&s=b57b2057960ae48e56e339bed86449965e2f50ff', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?width=640&crop=smart&auto=webp&s=5bbadee3d1ea105bc55ab3b0d716dea4b2a6d8f4', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?width=960&crop=smart&auto=webp&s=eed15ed561375c86f8c38b68fb6f0b4219d09de6', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?width=1080&crop=smart&auto=webp&s=df269506558dff7c8c029a1ae2f7630567f65c72', 'width': 1080}], 'source': {'height': 2796, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?auto=webp&s=6853429e464612e1d6c41eb11ab69c30d1b37121', 'width': 1290}, 'variants': {}}]}
How do I make an LLM act more human. With imperfections, hesitation, natural pauses, shorter replies, etc.?
49
Hey all, I've been trying to build a more human-like LLM. Not just smart, but emotionally and behaviorally human. I want it to **hesitate**, **think before responding**, sometimes reply in **shorter, more casual ways**, maybe **swear**, **joke**, or even get things a bit wrong like people do. Basically, feel like you're talking to a *real person*, not a perfectly optimized AI that responds with a whole fuckin essay every time. No matter what I try, the responses always end up feeling **too polished**, **too long**, **too robotic**, or just fuckin off. I've tried prompting it to "act like a human" or "talk like a friend," but it still doesn't hit that natural vibe (I've actually written a lot of very detailed prompts, but in the end they turned out to be very bad). Has anyone had luck making an LLM feel truly human in conversation? Like someone you'd text or talk to casually? Any tips on prompt engineering, fine-tuning, or even injecting behavioral randomness? Like really anything?
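As a hedged sketch of the "behavioral randomness" angle: one thing people try, beyond the persona prompt itself, is combining a short system prompt with looser sampling and a cheap post-processing pass that trims replies and occasionally injects hesitations. The endpoint URL, persona text, and the exact heuristics below are illustrative guesses, assuming a local OpenAI-compatible server (llama.cpp, LM Studio, and Ollama all expose something similar).

```python
# Hedged sketch: persona prompt + higher temperature + post-processing for a more
# casual, less polished feel. URL, persona, and heuristics are all illustrative.
import random
import requests

URL = "http://localhost:8080/v1/chat/completions"
PERSONA = ("You're texting a close friend. Keep replies short (1-3 sentences), "
           "use casual language, admit when you're unsure, and never write essays.")

def humanize(text: str) -> str:
    # Cheap post-processing: trim long answers and sometimes prepend a hesitation.
    sentences = text.split(". ")
    text = ". ".join(sentences[:2]).strip()
    if random.random() < 0.3:
        text = random.choice(["hmm, ", "honestly, ", "idk, "]) + text
    return text

def chat(user_msg: str) -> str:
    resp = requests.post(URL, json={
        "messages": [{"role": "system", "content": PERSONA},
                     {"role": "user", "content": user_msg}],
        "temperature": 1.1,   # more variance, less polish
        "max_tokens": 120,    # hard cap keeps replies short
    }, timeout=120)
    resp.raise_for_status()
    return humanize(resp.json()["choices"][0]["message"]["content"])

if __name__ == "__main__":
    print(chat("how was your day?"))
```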
2025-06-11T03:19:02
https://www.reddit.com/r/LocalLLaMA/comments/1l8ieff/how_do_i_make_an_llm_act_more_human_with/
PhraseProfessional54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8ieff
false
null
t3_1l8ieff
/r/LocalLLaMA/comments/1l8ieff/how_do_i_make_an_llm_act_more_human_with/
false
false
self
49
null
[Major Update] llmbasedos: Now Docker-first + bootable USB keys dropping soon
1
[removed]
2025-06-11T03:39:38
https://github.com/iluxu/llmbasedos
iluxu
github.com
1970-01-01T00:00:00
0
{}
1l8is7a
false
null
t3_1l8is7a
/r/LocalLLaMA/comments/1l8is7a/major_update_llmbasedos_now_dockerfirst_bootable/
false
false
https://b.thumbs.redditm…gXrs7Dx756Io.jpg
1
{'enabled': False, 'images': [{'id': 'kB_Vi1_3pPRsyqzS1Yxtf7SHPxp3A7697x_Ykpj9E9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?width=108&crop=smart&auto=webp&s=49fe8a351350d91caf93790646c811c0ba4b9224', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?width=216&crop=smart&auto=webp&s=f9c3b44dfc507f1584b1bfedc8e6f23f47b03094', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?width=320&crop=smart&auto=webp&s=255118c49fc5a993bda1194f9266bafbacef5930', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?width=640&crop=smart&auto=webp&s=723cbcfcfc62518474a41c2dc7f16c98257a373e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?width=960&crop=smart&auto=webp&s=387bb5ad5ad3519aee481c06acab84e946ca8ef9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?width=1080&crop=smart&auto=webp&s=7e6b493f98d590bea6c9a84456978c7f9d406204', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?auto=webp&s=f95a1eae25134af7636ef8cae0f71e9fb67d6a8d', 'width': 1200}, 'variants': {}}]}
Best OS for a local AI server
1
[removed]
2025-06-11T05:05:27
https://www.reddit.com/r/LocalLLaMA/comments/1l8kaa6/best_os_for_a_local_ai_server/
Impossible-Web-2782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8kaa6
false
null
t3_1l8kaa6
/r/LocalLLaMA/comments/1l8kaa6/best_os_for_a_local_ai_server/
false
false
self
1
null
Eye catching topic
1
[removed]
2025-06-11T05:10:17
https://www.reddit.com/r/LocalLLaMA/comments/1l8kczo/eye_catching_topic/
Zucchini_Klutzy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8kczo
false
null
t3_1l8kczo
/r/LocalLLaMA/comments/1l8kczo/eye_catching_topic/
false
false
self
1
null
NSFW image to text
24
Hi everyone, I'm doing some research using disturbing images, and some of the images are being flagged as NSFW by OpenAI models and others (e.g., Grok, Gemini, Claude). Does anyone know of local (or server) models, preferably with an API, that have fewer filters and are more or less plug and play? Thanks in advance!
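For the plumbing side, here is a hedged sketch of one low-friction local route: an open vision model served through Ollama's API, with the image passed as base64. The model tag ("llava") is just an example; whether a given model will actually caption disturbing material depends entirely on its own tuning, so this shows only how to wire it up, not a guarantee about filtering.

```python
# Hedged sketch: send an image to a locally hosted vision model via Ollama's API.
# "llava" is an example model tag; swap in whatever vision model you pull locally.
import base64
import requests

def describe_image(path: str, model: str = "llava") -> str:
    with open(path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    resp = requests.post("http://localhost:11434/api/generate", json={
        "model": model,
        "prompt": "Describe this image in detail for a research annotation.",
        "images": [img_b64],
        "stream": False,
    }, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(describe_image("sample.jpg"))
```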
2025-06-11T05:45:13
https://www.reddit.com/r/LocalLLaMA/comments/1l8kx53/nsfw_image_to_text/
CarRepresentative843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8kx53
false
null
t3_1l8kx53
/r/LocalLLaMA/comments/1l8kx53/nsfw_image_to_text/
false
false
nsfw
24
null
What is the best set up for LLM and ai agent like crewai?
1
[removed]
2025-06-11T06:40:59
https://www.reddit.com/r/LocalLLaMA/comments/1l8lrjr/what_is_the_best_set_up_for_llm_and_ai_agent_like/
Ill_Occasion_1537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l8lrjr
false
null
t3_1l8lrjr
/r/LocalLLaMA/comments/1l8lrjr/what_is_the_best_set_up_for_llm_and_ai_agent_like/
false
false
self
1
null