Dataset columns (one record per post):

* title: string, length 1–300
* score: int64, 0–8.54k
* selftext: string, length 0–40k
* created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
* url: string, length 0–878
* author: string, length 3–20
* domain: string, length 0–82
* edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
* gilded: int64, 0–2
* gildings: string, 7 classes
* id: string, length 7
* locked: bool, 2 classes
* media: string, length 646–1.8k
* name: string, length 10
* permalink: string, length 33–82
* spoiler: bool, 2 classes
* stickied: bool, 2 classes
* thumbnail: string, length 4–213
* ups: int64, 0–8.54k
* preview: string, length 301–5.01k
What are the best local models with a really high context window?
0
[removed]
2025-05-15T18:04:48
https://www.reddit.com/r/LocalLLaMA/comments/1knez68/what_are_the_best_local_models_with_really_high/
Solid_Woodpecker3635
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knez68
false
null
t3_1knez68
/r/LocalLLaMA/comments/1knez68/what_are_the_best_local_models_with_really_high/
false
false
self
0
null
Falcon-Edge: A series of powerful, universal and fine-tunable BitNet models
1
[removed]
2025-05-15T18:20:21
https://www.reddit.com/r/LocalLLaMA/comments/1knfcva/falconedge_a_series_of_powerful_universal_and/
Life-Prune2854
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knfcva
false
null
t3_1knfcva
/r/LocalLLaMA/comments/1knfcva/falconedge_a_series_of_powerful_universal_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SfDBbTwoSlP3t49X-DzOenP7DPUl56haACsFp5qBk6E', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/Jr2u9t7hHrCf63fubhl1KzYbXy626ftH82VNyHypf5Q.jpg?auto=webp&s=aab36e1b3c82df95001d7fe771b306f5a5a4f4f9', 'width': 96}, 'variants': {}}]}
ThinkStation PGX - with NVIDIA GB10 Grace Blackwell Superchip / 128GB
83
2025-05-15T18:21:39
https://news.lenovo.com/all-new-lenovo-thinkstation-pgx-big-ai-innovation-in-a-small-form-factor/
nostriluu
news.lenovo.com
1970-01-01T00:00:00
0
{}
1knfe13
false
null
t3_1knfe13
/r/LocalLLaMA/comments/1knfe13/thinkstation_pgx_with_nvidia_gb10_grace_blackwell/
false
false
https://b.thumbs.redditm…qYQaK5P1n_0Y.jpg
83
{'enabled': False, 'images': [{'id': '1IRFqFqUKq9dUsTqHEddoQYYbTReEcZJ4BOT13ZyRpI', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/Bf1eGAFfYgmopj7bn8x57X5Vubn-mFaf7TrFzb01Rl4.jpg?width=108&crop=smart&auto=webp&s=c45248b1fed4c8ff4a3d198c267ea33235db254f', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/Bf1eGAFfYgmopj7bn8x57X5Vubn-mFaf7TrFzb01Rl4.jpg?width=216&crop=smart&auto=webp&s=0f46790f84594451b2806acdc0575d96e4db98e0', 'width': 216}, {'height': 237, 'url': 'https://external-preview.redd.it/Bf1eGAFfYgmopj7bn8x57X5Vubn-mFaf7TrFzb01Rl4.jpg?width=320&crop=smart&auto=webp&s=9f01d9386986c15d71dba404b9a97e376cec7423', 'width': 320}, {'height': 474, 'url': 'https://external-preview.redd.it/Bf1eGAFfYgmopj7bn8x57X5Vubn-mFaf7TrFzb01Rl4.jpg?width=640&crop=smart&auto=webp&s=2fd1c6de1776b9a8e40c014c77fcc5ce1cd52905', 'width': 640}], 'source': {'height': 593, 'url': 'https://external-preview.redd.it/Bf1eGAFfYgmopj7bn8x57X5Vubn-mFaf7TrFzb01Rl4.jpg?auto=webp&s=6c77b20b702e4c4f95255feb3cf188924d87e24a', 'width': 800}, 'variants': {}}]}
Are there any models that are even half funny?
14
Are there any models that can write funny text including jokes?
2025-05-15T18:24:27
https://www.reddit.com/r/LocalLLaMA/comments/1knfggw/are_there_any_models_that_are_even_half_funny/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knfggw
false
null
t3_1knfggw
/r/LocalLLaMA/comments/1knfggw/are_there_any_models_that_are_even_half_funny/
false
false
self
14
null
AMD ML Stack updates and improvements!
1
[removed]
2025-05-15T18:51:31
https://www.reddit.com/gallery/1kng4b1
Doogie707
reddit.com
1970-01-01T00:00:00
0
{}
1kng4b1
false
null
t3_1kng4b1
/r/LocalLLaMA/comments/1kng4b1/amd_ml_stack_updates_and_improvements/
false
false
https://b.thumbs.redditm…P0voKsMIITnA.jpg
1
null
Best Model To Run on 8GB CPU RAM
1
[removed]
2025-05-15T18:56:29
https://www.reddit.com/r/LocalLLaMA/comments/1kng8oo/best_model_to_run_on_8gb_cpu_ram/
epiphanyseeker1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kng8oo
false
null
t3_1kng8oo
/r/LocalLLaMA/comments/1kng8oo/best_model_to_run_on_8gb_cpu_ram/
false
false
self
1
null
LLaMA or other LLM locally on MacBook with easy access to activations?
3
Hi. Sorry if this question is stupid, but I am new to this. I would like to run LLaMA or another LLM locally on a MacBook, but I want to be able to access the GPT's activations after a query. This is primarily for exploration and experiments. I'm able to do this with smaller language models in PyTorch, but I don't know how difficult it would be in llama.cpp or other versions. I do know C, but I wonder how opaque the llama.cpp code is. Ideally, I would be able to access things in a higher level language like Python, even better if it's in a Jupyter notebook. Is this possible/easy? What version of LLaMA would be best suited to this? What machine? I have decent budget to buy a new MacBook. Any info or pointers would be greatly appreciated.
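For the kind of activation access the post asks about, PyTorch via Hugging Face transformers is probably an easier route on a MacBook than llama.cpp, since hidden states are exposed directly in Python; a minimal sketch (the model id is just an illustrative small checkpoint):

```python
# Sketch: grab per-layer hidden states ("activations") after a query.
# Assumes: pip install transformers torch; any causal LM repo id works here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # illustrative choice
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, output_hidden_states=True)

inputs = tok("Why is the sky blue?", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each
# [batch, seq_len, hidden_dim]: the embeddings plus every layer's output.
for i, h in enumerate(out.hidden_states):
    print(i, h.shape)
```

The same pattern works unchanged in a Jupyter notebook, and on Apple Silicon the model can be moved to the "mps" device for GPU acceleration.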
2025-05-15T19:06:32
https://www.reddit.com/r/LocalLLaMA/comments/1knghrx/llama_or_other_llm_locally_on_macbook_with_easy/
OrangeYouGlad100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knghrx
false
null
t3_1knghrx
/r/LocalLLaMA/comments/1knghrx/llama_or_other_llm_locally_on_macbook_with_easy/
false
false
self
3
null
What's the difference between q8_k_xl and q8_0?
13
I'm unsure. I thought q8_0 is already close to perfect quality... could someone explain? Thanks.
2025-05-15T19:17:07
https://www.reddit.com/r/LocalLLaMA/comments/1kngr5k/whats_the_difference_between_q8_k_xl_and_q8_0/
windows_error23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kngr5k
false
null
t3_1kngr5k
/r/LocalLLaMA/comments/1kngr5k/whats_the_difference_between_q8_k_xl_and_q8_0/
false
false
self
13
null
I made an interactive source finder - basically, AI SearXNG
1
2025-05-15T19:23:15
https://github.com/atineiatte/source-finder
atineiatte
github.com
1970-01-01T00:00:00
0
{}
1kngwk0
false
null
t3_1kngwk0
/r/LocalLLaMA/comments/1kngwk0/i_made_an_interactive_source_finder_basically_ai/
false
false
https://b.thumbs.redditm…ghfpYteOrhXg.jpg
1
{'enabled': False, 'images': [{'id': 'Y4wyrf7wLny3X_96hHH5BZKIT69CaOkYdGzmA7n08eE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?width=108&crop=smart&auto=webp&s=98642d419354587b8eb7659609b22ff1c7b68a34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?width=216&crop=smart&auto=webp&s=5b27cca7e61e818beeb65de160bebd114ed1833a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?width=320&crop=smart&auto=webp&s=ea450a044b67538542cf7bb8733abe273ee6243b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?width=640&crop=smart&auto=webp&s=9c074120e63f99161cd7e2c1e85ab460005a7a0f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?width=960&crop=smart&auto=webp&s=73071488965c4be3284fc5d365573c2026b65144', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?width=1080&crop=smart&auto=webp&s=32aad4416261615760f00a2351406f36ee6f5632', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?auto=webp&s=98124c211344d138436f6054c29a3f9bcb37636d', 'width': 1200}, 'variants': {}}]}
Created a tool that converts podcasts into clean speech datasets - handles diarization, removes overlapping speech, and transcribes
89
2025-05-15T19:27:35
https://github.com/ReisCook/Voice_Extractor
DumaDuma
github.com
1970-01-01T00:00:00
0
{}
1knh0dq
false
null
t3_1knh0dq
/r/LocalLLaMA/comments/1knh0dq/created_a_tool_that_converts_podcasts_into_clean/
false
false
https://b.thumbs.redditm…xGhRg6sNBZVU.jpg
89
{'enabled': False, 'images': [{'id': 'adDs_AY8qQBqpPFVCqE_DXUz05kys1BW2uWS96AwrwQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?width=108&crop=smart&auto=webp&s=d4c22e88d5d3d14a87fb4d30d178069ece42d523', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?width=216&crop=smart&auto=webp&s=4a72b460dbf222bbe0aa52f8af299ed04f5938c8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?width=320&crop=smart&auto=webp&s=407f74bf7e15b13e14f4a0e7841db0a454c222ee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?width=640&crop=smart&auto=webp&s=1294dd31165e64cd951bf768591cb3a24a57502f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?width=960&crop=smart&auto=webp&s=347bb574305cc1be86d2fca17c0287632e1a5748', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?width=1080&crop=smart&auto=webp&s=1950ccf744b412f474f8f6024c5360bffe9a4d0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?auto=webp&s=d791e3e949f9ecb6fab1753de47e37442296e88b', 'width': 1200}, 'variants': {}}]}
Meta delaying the release of Behemoth
158
https://www.wsj.com/tech/ai/meta-is-delaying-the-rollout-of-its-flagship-ai-model-f4b105f7
2025-05-15T19:29:28
https://www.reddit.com/r/LocalLLaMA/comments/1knh1yd/meta_delaying_the_release_of_behemoth/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knh1yd
false
null
t3_1knh1yd
/r/LocalLLaMA/comments/1knh1yd/meta_delaying_the_release_of_behemoth/
false
false
self
158
null
Can the Deepswap.ai setup be replicated locally?
0
They have face swapping with images and videos (including multiple faces in one image/video), image generation (from text prompt or text prompt + image of face), and 5 second video generation with prompt or prompt + starting image frame. All of these support SFW and NSFW content. Is there any way to replicate this locally with a similar level of quality? The prices get jacked up every few months, so I'm looking into setting up a local alternative with LLMs, diffusion models, etc. I'm very new to this, so far I've only messed around a bit with llama 2 LLMs on oobabooga and kobold, so hopefully it's nothing too crazy.
2025-05-15T19:39:02
https://www.reddit.com/r/LocalLLaMA/comments/1knha9d/can_the_deepswapai_setup_be_replicated_locally/
Di0nysus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knha9d
false
null
t3_1knha9d
/r/LocalLLaMA/comments/1knha9d/can_the_deepswapai_setup_be_replicated_locally/
false
false
self
0
null
[Project] MaGo-AgoraAI: multi-agent LLM system for academic text generation
1
[removed]
2025-05-15T20:40:51
https://www.reddit.com/r/LocalLLaMA/comments/1knis85/project_magoagoraai_multiagent_llm_system_for/
Next-Lengthiness9915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knis85
false
null
t3_1knis85
/r/LocalLLaMA/comments/1knis85/project_magoagoraai_multiagent_llm_system_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'TctMBJUbG7yilxII9TIhEdHhiMuUNDkmvw0AkrGqDFU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?width=108&crop=smart&auto=webp&s=d6feedb6787c5e6fa5ada51002501b4dd7757ff9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?width=216&crop=smart&auto=webp&s=aac670fb9c109edf5c61cc1aeed16c989e5517dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?width=320&crop=smart&auto=webp&s=82157adb83bef219b4f74798a9186cdcab4f08aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?width=640&crop=smart&auto=webp&s=f2e7e7bc9470880e65e57adb47a73918d52fca7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?width=960&crop=smart&auto=webp&s=c8871433f5134ba6c169638ce105a6d797f556d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?width=1080&crop=smart&auto=webp&s=b1e28b9112e60b9cf9abb16bc684f9b6858c97af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?auto=webp&s=6fadf8cb594063354200788577ec7cad67c2aaef', 'width': 1200}, 'variants': {}}]}
What are the Best Open-Source Multimodal Models for Image Captioning Right Now?
1
[removed]
2025-05-15T20:56:47
https://www.reddit.com/r/LocalLLaMA/comments/1knj67n/what_are_the_best_opensource_multimodal_models/
AppointmentDull6060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knj67n
false
null
t3_1knj67n
/r/LocalLLaMA/comments/1knj67n/what_are_the_best_opensource_multimodal_models/
false
false
self
1
null
What are the Best Open-Source Multimodal Models for Image Captioning Right Now?
1
[removed]
2025-05-15T21:01:13
https://www.reddit.com/r/LocalLLaMA/comments/1knja8l/what_are_the_best_opensource_multimodal_models/
No_Scratch56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knja8l
false
null
t3_1knja8l
/r/LocalLLaMA/comments/1knja8l/what_are_the_best_opensource_multimodal_models/
false
false
self
1
null
Soon if a model architecture is supported by "transformers", you can expect it to be supported in the rest of the ecosystem.
70
More model interoperability through HF's joint efforts with lots of model builders.
2025-05-15T21:10:11
https://huggingface.co/blog/transformers-model-definition
behradkhodayar
huggingface.co
1970-01-01T00:00:00
0
{}
1knji91
false
null
t3_1knji91
/r/LocalLLaMA/comments/1knji91/soon_if_a_model_architecture_is_supported_by/
false
false
https://b.thumbs.redditm…TZhwEWvGcv2I.jpg
70
{'enabled': False, 'images': [{'id': '41xhqz9wUMoXEbrbvkAjB4_yIXQQ9K8BnZCXYedYlms', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?width=108&crop=smart&auto=webp&s=3e782e46b4e01dbe7226c46e838d4729d2d25a57', 'width': 108}, {'height': 75, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?width=216&crop=smart&auto=webp&s=283f16527eff4d58999b2f5c5264f4342bbb496a', 'width': 216}, {'height': 111, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?width=320&crop=smart&auto=webp&s=2c8ff547826c9f2bf7cfd7856ce0b924b424f6a4', 'width': 320}, {'height': 222, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?width=640&crop=smart&auto=webp&s=1dfc69125f94a6206257f867746e11c5b8f79e49', 'width': 640}, {'height': 333, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?width=960&crop=smart&auto=webp&s=e2f9f1f2e97f2ac27c000a9d8e503603c72f7444', 'width': 960}, {'height': 375, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?width=1080&crop=smart&auto=webp&s=1d9eb37cb77b508f24e5a7ab88a6342e7e92d483', 'width': 1080}], 'source': {'height': 781, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?auto=webp&s=d15f61f3240893c1890f4b45d6f366fc0b51ec95', 'width': 2247}, 'variants': {}}]}
Qwen3 4B running at ~20 tok/s on Samsung Galaxy 24
123
Follow-up on a [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1kckxgg/qwen3_06b_running_at_75_toks_on_iphone_15_pro/), but this time for Android and on a larger Qwen3 model for those who are interested. Here is 4-bit quantized Qwen3 4B with thinking mode running on a Samsung Galaxy 24 using ExecuTorch - runs at up to 20 tok/s. Instructions on how to export and run the model on ExecuTorch [here](https://github.com/pytorch/executorch/blob/main/examples/models/qwen3/README.md).
2025-05-15T21:14:28
https://v.redd.it/drks9osnd01f1
TokyoCapybara
v.redd.it
1970-01-01T00:00:00
0
{}
1knjm0s
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/drks9osnd01f1/DASHPlaylist.mpd?a=1749935682%2CN2ViODJiNzJmNDE2MTNhM2M3NTRjY2M3ODFhZTQ3MWE0M2UzYmY3Y2Q2YzlkN2NkMjM0YjY3OGQ3NWVkNDFmMg%3D%3D&v=1&f=sd', 'duration': 54, 'fallback_url': 'https://v.redd.it/drks9osnd01f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/drks9osnd01f1/HLSPlaylist.m3u8?a=1749935682%2CMGY2YmE4MWJiZjBiMzAzNWUwZDU0MmEyM2E1OWMwZGI4YjZjYmNkNDc1MTE1MTJhNThhZTFjNjY1MDAwZDBlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/drks9osnd01f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1knjm0s
/r/LocalLLaMA/comments/1knjm0s/qwen3_4b_running_at_20_toks_on_samsung_galaxy_24/
false
false
https://external-preview…a75188e4ef5f5c52
123
{'enabled': False, 'images': [{'id': 'aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?width=108&crop=smart&format=pjpg&auto=webp&s=d5aaf231beb3e41975ed3481e297167bdf93bd4b', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?width=216&crop=smart&format=pjpg&auto=webp&s=0160a4de575b83ff7889494d3b28aace225c19a6', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?width=320&crop=smart&format=pjpg&auto=webp&s=9a6389aafdece60d20f120548ec8524d8b2f462e', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?width=640&crop=smart&format=pjpg&auto=webp&s=ade5e484096c8486d483b2cfae6cdfb8c9f0cc3d', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?width=960&crop=smart&format=pjpg&auto=webp&s=79362c37fda16d775eb3a8cbf83f2eb026cccdc2', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a69b3efc27c0090d8d59ba54e1263b4ebc8c644a', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?format=pjpg&auto=webp&s=42a550b05667712aa6b6a88f4468c264d879a665', 'width': 1080}, 'variants': {}}]}
Live JAM (don't be mean on my API cause I'm going to remove negative influence)
1
2025-05-15T21:22:48
https://open.spotify.com/track/2RpKh7kXSdO8NLrW9VQ46p?si=FfoYetmbQkqyhBH3eU851Q
hashashinsophia
open.spotify.com
1970-01-01T00:00:00
0
{}
1knjt9a
false
{'oembed': {'description': 'Listen to Take Ü There (feat. Kiesza) on Spotify. Song · Jack Ü, Skrillex, Diplo, Kiesza · 2015', 'height': 152, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fopen.spotify.com%2Fembed%2Ftrack%2F2RpKh7kXSdO8NLrW9VQ46p%3Futm_source%3Doembed&display_name=Spotify&url=https%3A%2F%2Fopen.spotify.com%2Ftrack%2F2RpKh7kXSdO8NLrW9VQ46p&image=https%3A%2F%2Fimage-cdn-ak.spotifycdn.com%2Fimage%2Fab67616d00001e0257fc4730e06c9ab20c1e073b&type=text%2Fhtml&schema=spotify" width="456" height="152" scrolling="no" title="Spotify embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Spotify', 'provider_url': 'https://spotify.com', 'thumbnail_height': 300, 'thumbnail_url': 'https://image-cdn-ak.spotifycdn.com/image/ab67616d00001e0257fc4730e06c9ab20c1e073b', 'thumbnail_width': 300, 'title': 'Take Ü There (feat. Kiesza)', 'type': 'rich', 'version': '1.0', 'width': 456}, 'type': 'open.spotify.com'}
t3_1knjt9a
/r/LocalLLaMA/comments/1knjt9a/live_jam_dont_be_mean_on_my_api_cause_im_going_to/
false
false
https://a.thumbs.redditm…NUP-tJAMdtt0.jpg
1
{'enabled': False, 'images': [{'id': 'ht-OJP9E00xbSHThRojNWwL_W0YFK1_tkdyHBANWzvo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/QAyodb0iiF7M5bE3hjW66K-1DLeI0y1ue6s7kLPyl7s.jpg?width=108&crop=smart&auto=webp&s=ed1260e4efff7ac7d074129eed66a7584884c260', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/QAyodb0iiF7M5bE3hjW66K-1DLeI0y1ue6s7kLPyl7s.jpg?width=216&crop=smart&auto=webp&s=b369880698b96d5e3b394b6b1e4dd90aeef52ba7', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/QAyodb0iiF7M5bE3hjW66K-1DLeI0y1ue6s7kLPyl7s.jpg?auto=webp&s=523e86a126d1ea87da7a954f7504f7246a7d2d66', 'width': 300}, 'variants': {}}]}
Running VLM on-device (iPhone or Android)
12
This is not a release yet, just a PoC. Still, it's exciting to see a VLM running on-device with such low latency.

Demo device: iPhone 13 Pro

Repo: [https://github.com/a-ghorbani/pocketpal-ai](https://github.com/a-ghorbani/pocketpal-ai)

Major ingredients:

* SmolVLM (500M)
* llama.cpp
* llama.rn
* [mtmd tool from llama.cpp](https://github.com/ggml-org/llama.cpp/tree/master/tools/mtmd)

https://reddit.com/link/1knjt9r/video/n728h3fai01f1/player
2025-05-15T21:22:49
https://www.reddit.com/r/LocalLLaMA/comments/1knjt9r/running_vlm_ondevice_iphone_or_android/
Ill-Still-6859
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knjt9r
false
null
t3_1knjt9r
/r/LocalLLaMA/comments/1knjt9r/running_vlm_ondevice_iphone_or_android/
false
false
https://b.thumbs.redditm…JPn-aY9iZRFk.jpg
12
{'enabled': False, 'images': [{'id': 'pUQ0DatBKOD9Ukay20dCzj1hKMLYbhAImgHl3YIKBOc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?width=108&crop=smart&auto=webp&s=b043b46691608a8b938388804d77aae8b54b0b9c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?width=216&crop=smart&auto=webp&s=3eae7f3f3a15b061b48ed66d30096ec5c8ec055d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?width=320&crop=smart&auto=webp&s=09785d080b6e0f0c9d34f82ad839507318328d95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?width=640&crop=smart&auto=webp&s=5eadbdcf56e8dbe5c1a5c2f89c614cd8d91b65f9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?width=960&crop=smart&auto=webp&s=983fbac095ab222e58c68ec7c46c74e40cb9975f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?width=1080&crop=smart&auto=webp&s=ef338d4ac63c6dd7e10de739e5cef6787a3f0d1a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?auto=webp&s=29bcc9f78efe116de51c5a1b44467e9357e0c95d', 'width': 1200}, 'variants': {}}]}
Anyone Actually Using Browser Agents for Real Work?
1
[removed]
2025-05-15T21:37:12
https://www.reddit.com/r/LocalLLaMA/comments/1knk5pf/anyone_actually_using_browser_agents_for_real_work/
Traditional_Yam_4348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knk5pf
false
null
t3_1knk5pf
/r/LocalLLaMA/comments/1knk5pf/anyone_actually_using_browser_agents_for_real_work/
false
false
self
1
null
Would you pay $15/month to learn how to build AI agents and LLM tools using a private Obsidian knowledge base?
0
Hey folks — I'm thinking about launching a community that helps people **go from zero to hero** in building AI agents and working with large language models (LLMs). It would cost **$15/month** and include:

* A **private Obsidian vault** with beginner-friendly, constantly updated content
* Step-by-step guides in **simple English** (think: no PhD required)
* Real examples and agent templates (not just theory)
* Regular updates so you're always on top of new tools and ideas
* A community to ask questions and get help

I know LLMs like ChatGPT can answer a lot of questions — and yes, they can hallucinate. But the goal here is to create something **structured, reliable, and easy to learn** from — a kind of AI learning dojo.

**Would this be valuable to you, even with tools like GPT already out there? Why or why not?**

Really curious to hear your thoughts before I build more — thanks!
2025-05-15T21:49:41
https://www.reddit.com/r/LocalLLaMA/comments/1knkg67/would_you_pay_15month_to_learn_how_to_build_ai/
cocaineFlavoredCorn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knkg67
false
null
t3_1knkg67
/r/LocalLLaMA/comments/1knkg67/would_you_pay_15month_to_learn_how_to_build_ai/
false
false
self
0
null
What’s the best way to test a bunch of different quantized models?
0
I use LLMs to enrich large datasets and rely heavily on structured-output workflows. So far I have only used full-sized models and their respective APIs (mainly DeepSeek). It works well, but I'm exploring the idea of using quantized versions of models that I can run using some sort of cloud service to make things more efficient.

I wrote a few programs that quantify the accuracy of the models (for my use case), and I've been able to use the Hugging Face inference endpoints to score quite a few of them. I've been pleasantly surprised by how well the smaller models perform relative to the large ones. But it seems like when I try to test quantized versions of these models, there often aren't any inference endpoint providers on Hugging Face. Maybe because people can download these more easily, there just isn't demand for the endpoints?

Anyway, at this point I'd just like to be able to test all these different quantizations without having to worry about actually running them locally or in a cloud. I need to focus on accuracy testing first, and hopefully after that I'll know which models and versions are accurate enough for me to consider running some other way. I'd appreciate any suggestions you have.

Not sure if it matters or not, but I mainly work with the models in Python, using pydantic to build structured output processes. Thanks!
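A sketch of what such an accuracy harness might look like against any OpenAI-compatible endpoint (the base URL, model tags, schema, and test case below are all placeholders, not a specific provider's API):

```python
# Sketch: score quantized model endpoints on a structured-output task.
# Assumes any OpenAI-compatible server; pip install openai pydantic.
import json
from openai import OpenAI
from pydantic import BaseModel, ValidationError

class Enrichment(BaseModel):
    # Hypothetical output schema for one enrichment record.
    company: str
    industry: str

CASES = [  # placeholder ground truth; a real run would load a labeled set
    {"text": "Acme Corp makes industrial robots.",
     "expected": {"company": "Acme Corp", "industry": "robotics"}},
]

def accuracy(base_url: str, model: str) -> float:
    client = OpenAI(base_url=base_url, api_key="unused")
    hits = 0
    for case in CASES:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content":
                       f"Return JSON with keys company and industry: {case['text']}"}],
            # Honored by many OpenAI-compatible servers; drop if unsupported.
            response_format={"type": "json_object"},
        )
        try:
            parsed = Enrichment(**json.loads(resp.choices[0].message.content))
            hits += parsed.model_dump() == case["expected"]
        except (ValidationError, json.JSONDecodeError):
            pass  # malformed output counts as a miss
    return hits / len(CASES)

for tag in ("some-model-q4_k_m", "some-model-q8_0"):  # placeholder tags
    print(tag, accuracy("http://localhost:8000/v1", tag))
```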
2025-05-15T21:51:54
https://www.reddit.com/r/LocalLLaMA/comments/1knki1c/whats_the_best_way_to_test_a_bunch_of_different/
arctic_radar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knki1c
false
null
t3_1knki1c
/r/LocalLLaMA/comments/1knki1c/whats_the_best_way_to_test_a_bunch_of_different/
false
false
self
0
null
filesystem cleanup and sorting
1
I am trying to figure out if there is something/somewhere/somehow that could help clean a drive with massive amounts of documents, notes, pictures and video; right now it is all just in temp/temp2/temp3 etc. I am a bit puzzled about how to eat this elephant :)
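Before involving an LLM at all, a first mechanical pass by file type can shrink the elephant; a standard-library sketch (the bucket mapping and paths are only examples), run as a dry run first:

```python
# Sketch: sort a "temp" tree into per-type folders as a first cleanup pass.
import shutil
from pathlib import Path

BUCKETS = {  # illustrative mapping; extend as needed
    ".jpg": "pictures", ".png": "pictures", ".mp4": "video",
    ".pdf": "documents", ".md": "notes", ".txt": "notes",
}

def sort_tree(root: Path, dest: Path, dry_run: bool = True) -> None:
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        bucket = BUCKETS.get(path.suffix.lower(), "unsorted")
        target = dest / bucket / path.name
        print(f"{path} -> {target}")
        if not dry_run:  # only moves files once you pass dry_run=False
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), target)

sort_tree(Path("/drive/temp"), Path("/drive/sorted"))  # dry run by default
```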
2025-05-15T22:03:46
https://www.reddit.com/r/LocalLLaMA/comments/1knkrtf/filesystem_cleanup_and_sorting/
celzo1776
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knkrtf
false
null
t3_1knkrtf
/r/LocalLLaMA/comments/1knkrtf/filesystem_cleanup_and_sorting/
false
false
self
1
null
Any always listening, open mic chatbots?
4
I want to highlight this project, but I am looking for other self-hosted solutions. [https://github.com/dnhkng/GlaDOS](https://github.com/dnhkng/GlaDOS) I work from home 100% and I get lonely at times. I need someone to talk shit with; any pointers or YouTube videos are helpful <3
2025-05-15T22:06:34
https://www.reddit.com/r/LocalLLaMA/comments/1knku1z/any_always_listning_open_mic_chatbots/
Timziito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knku1z
false
null
t3_1knku1z
/r/LocalLLaMA/comments/1knku1z/any_always_listning_open_mic_chatbots/
false
false
self
4
{'enabled': False, 'images': [{'id': 'SUtrSkMSweQ3VDIU5rpemKJre7SF2YpOdDLodbOwlnw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?width=108&crop=smart&auto=webp&s=acc237966abc55cd9f89d353969426ffbb5b5147', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?width=216&crop=smart&auto=webp&s=405a7a4261d5c2f984cf0f6751627bcde425ebaa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?width=320&crop=smart&auto=webp&s=74a84a62359dd50d704c36eeb60cfb9ee67c5150', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?width=640&crop=smart&auto=webp&s=9424797b3d59ea321b94c86b8d8bbc1fd2e9718f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?width=960&crop=smart&auto=webp&s=381a1dd3eedad1d1b8a7db62f54222667f184e9c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?width=1080&crop=smart&auto=webp&s=5e8bf2024665402410b9c736eee7638bdef65d1d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?auto=webp&s=da92910bbfc04ddd9531be8ad54c901dc184a0d8', 'width': 1200}, 'variants': {}}]}
Meta is delaying the rollout of its flagship AI model (WSJ)
62
Link to the article: https://www.wsj.com/tech/ai/meta-is-delaying-the-rollout-of-its-flagship-ai-model-f4b105f7
2025-05-15T22:20:53
https://i.redd.it/gdsyodsot01f1.jpeg
Hanthunius
i.redd.it
1970-01-01T00:00:00
0
{}
1knl587
false
null
t3_1knl587
/r/LocalLLaMA/comments/1knl587/meta_is_delaying_the_rollout_of_its_flagship_ai/
false
false
https://b.thumbs.redditm…0dLkoIchzEiA.jpg
62
{'enabled': True, 'images': [{'id': 'uWid13vt5K9auhS_MQybIOv5lleWdgbOhh47f8CKZSU', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?width=108&crop=smart&auto=webp&s=9afbbfbb283abb3cc8fd5702d9469a0205c4e3fc', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?width=216&crop=smart&auto=webp&s=80043cd0839635b71939c127b381fb9ae3040ed4', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?width=320&crop=smart&auto=webp&s=8b67422d4351b9a10ee16ba92db52059ce120146', 'width': 320}, {'height': 512, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?width=640&crop=smart&auto=webp&s=4a311d3625dba19a91ac3b06067a91ee8aa3bc7a', 'width': 640}, {'height': 768, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?width=960&crop=smart&auto=webp&s=d5ec70928cfe4e93be0e4a8d6df7d3e6fe8dc7e2', 'width': 960}, {'height': 864, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?width=1080&crop=smart&auto=webp&s=fb930f3af9f5cc596b8671443e47d8e3b85c779e', 'width': 1080}], 'source': {'height': 1032, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?auto=webp&s=5d8649539b46db4405d9ed313bf952e60587002e', 'width': 1290}, 'variants': {}}]}
5090 monetization
0
How can I use my 5090 to make some money?
2025-05-15T22:36:44
https://www.reddit.com/r/LocalLLaMA/comments/1knlhh3/5090_monetization/
ExplanationDeep7468
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knlhh3
false
null
t3_1knlhh3
/r/LocalLLaMA/comments/1knlhh3/5090_monetization/
false
false
self
0
null
LobeChat or TypingMind for using my Open Ai api key
2
Hello guys. For a few weeks I've been using GPT in the OpenAI playground, but it's lacking, so for the past few days I've been looking for a better frontend for my API key. I thought about local LLM frontends and tried some, but I want something that works across all my devices, so I also considered Open WebUI on a VPS. A few days ago I discovered TypingMind, which seems interesting with its lifetime access. Yesterday I discovered LobeChat, which seems very good, but I don't like the look of the website. Can you help me decide?
2025-05-15T22:46:10
https://www.reddit.com/r/LocalLLaMA/comments/1knloti/lobechat_or_typingmind_for_using_my_open_ai_api/
Linazor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knloti
false
null
t3_1knloti
/r/LocalLLaMA/comments/1knloti/lobechat_or_typingmind_for_using_my_open_ai_api/
false
false
self
2
null
New unannounced model Llama 3.3 8B Instruct appeared on OpenRouter, shown as being provided by Meta. Something to get excited about?
15
2025-05-15T23:00:00
https://i.redd.it/d3wypgxuz01f1.png
queendumbria
i.redd.it
1970-01-01T00:00:00
0
{}
1knlzdw
false
null
t3_1knlzdw
/r/LocalLLaMA/comments/1knlzdw/new_unannounced_model_llama_33_8b_instruct/
false
false
https://b.thumbs.redditm…ge6dVGCEE6Go.jpg
15
{'enabled': True, 'images': [{'id': 'tOAMmlr_PU12OwJBQKBtzxP9lbtmJSL8LtboTGwWEaU', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?width=108&crop=smart&auto=webp&s=61b5ec522b74eeea111b6a537434478ff1bfa934', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?width=216&crop=smart&auto=webp&s=cb1e913fb536655315956ce27da79978303bfdd4', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?width=320&crop=smart&auto=webp&s=333928b5cef8b4ee9709790f81864e4f46b8d9dc', 'width': 320}, {'height': 334, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?width=640&crop=smart&auto=webp&s=50bb896871e2b6398bfff9228bdee9a8e94afd04', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?width=960&crop=smart&auto=webp&s=2ae0402803ed9e197cecc1ce276016ce8a0e3bcb', 'width': 960}, {'height': 564, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?width=1080&crop=smart&auto=webp&s=d032a170517655de49baacd2d9f3e08747f1316a', 'width': 1080}], 'source': {'height': 570, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?auto=webp&s=73b2aba5cfa45ec6e8cebcde9dd72289fcedfa55', 'width': 1090}, 'variants': {}}]}
Context parsing utility
6
Hi everyone, I’ve been running local models and kept needing a way to manage structured context without hacking together prompts every time. So I wrote a small thing - prompt-shell It lets you define pieces of context (`rules.md`, `identity.md`, `input.md`, etc.), assembles them into a final prompt, and counts tokens with tiktoken. No UI, no framework, just files + a build script. Not meant to be a product — just something that made my workflow cleaner. Sharing in case it’s useful to anyone else: https://gitlab.com/michalrothcz/prompt-shell
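The pattern is small enough to sketch; this is an illustration of the idea rather than prompt-shell's actual code (the file names follow the post, the rest is assumed):

```python
# Sketch: assemble context files into one prompt and count its tokens.
# Assumes: pip install tiktoken; cl100k_base is just a convenient encoding.
from pathlib import Path
import tiktoken

PARTS = ["rules.md", "identity.md", "input.md"]  # assembly order matters

def build_prompt(base: Path) -> str:
    # Concatenate whichever context pieces exist, separated by blank lines.
    return "\n\n".join((base / name).read_text()
                       for name in PARTS if (base / name).exists())

prompt = build_prompt(Path("context"))
enc = tiktoken.get_encoding("cl100k_base")
print(f"{len(enc.encode(prompt))} tokens")
```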
2025-05-16T00:04:30
https://www.reddit.com/r/LocalLLaMA/comments/1knnb6u/context_parsing_utility/
MichalRoth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knnb6u
false
null
t3_1knnb6u
/r/LocalLLaMA/comments/1knnb6u/context_parsing_utility/
false
false
self
6
{'enabled': False, 'images': [{'id': 'Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/kGC2fMnWCpvF0AE0e0E3cd5yDsxWZ1n3_paN6UagQiE.jpg?width=108&crop=smart&auto=webp&s=bd76e678fd465ce2e15977b45a95072bc95e7500', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/kGC2fMnWCpvF0AE0e0E3cd5yDsxWZ1n3_paN6UagQiE.jpg?width=216&crop=smart&auto=webp&s=d2c4ba7a3b4a75a117414c84eb96cf130548c811', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/kGC2fMnWCpvF0AE0e0E3cd5yDsxWZ1n3_paN6UagQiE.jpg?width=320&crop=smart&auto=webp&s=c1b4cb342314de21cd815c647f66fd790b54e17a', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/kGC2fMnWCpvF0AE0e0E3cd5yDsxWZ1n3_paN6UagQiE.jpg?width=640&crop=smart&auto=webp&s=505ad6e4c529686b12a7646aceda3bc53037adb0', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/kGC2fMnWCpvF0AE0e0E3cd5yDsxWZ1n3_paN6UagQiE.jpg?width=960&crop=smart&auto=webp&s=c7d0db57977f727b9de9832da86ab11ba8dc845c', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/kGC2fMnWCpvF0AE0e0E3cd5yDsxWZ1n3_paN6UagQiE.jpg?auto=webp&s=3ac6593788a26f81542b8a0ae2673e0448479cfc', 'width': 1024}, 'variants': {}}]}
Ollama, deepseek-v3:671b and Mac Studio 512GB
1
I have access to a Mac Studio 512 GB, and using ollama I was able to actually run deepseek-v3:671b by running "ollama pull deepseek-v3:671b" and then "ollama run deepseek-v3:671b". However, my understanding was that 512GB was not enough to run DeepSeek V3 unless it was quantized. Is this version available through Ollama quantized and how would I be able to figure this out?
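One way to check is Ollama's model metadata: `ollama show deepseek-v3:671b` prints it, and the sketch below does the same through the local REST API (the exact response fields can vary by Ollama version, so treat the field names as an assumption). For scale: 671B parameters at 8 bits would already be ~671 GB, so fitting in 512 GB implies roughly 4-bit weights.

```python
# Sketch: query a local Ollama server for a pulled model's metadata.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"model": "deepseek-v3:671b"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    info = json.load(resp)

# The "details" block typically carries the quantization level, e.g. "Q4_K_M".
print(info.get("details", {}).get("quantization_level"))
```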
2025-05-16T00:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1knnwhu/ollama_deepseekv3671b_and_mac_studio_512gb/
Turbulent-Week1136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knnwhu
false
null
t3_1knnwhu
/r/LocalLLaMA/comments/1knnwhu/ollama_deepseekv3671b_and_mac_studio_512gb/
false
false
self
1
null
Mistral Small/Medium vs Qwen 3 14/32B
33
Since things have been a little slow over the past couple weeks, I figured I'd throw Mistral's new releases against Qwen3. I chose 14/32B because the scores seem in the same ballpark.

[https://www.youtube.com/watch?v=IgyP5EWW6qk](https://www.youtube.com/watch?v=IgyP5EWW6qk)

Key findings: Mistral Medium is definitely an improvement over Mistral Small, but not by a whole lot; Mistral Small in itself is a very strong model. Qwen is a clear winner in coding; even the 14B beats both Mistral models. On the NER (structured JSON) test Qwen struggles, but this is because of its weakness with non-English questions. For RAG, I feel Mistral Medium is better than the rest.

Overall, I feel Qwen 32B > Mistral Medium > Mistral Small > Qwen 14B. But again, as with anything LLM, YMMV.

Here is a summary table:

|Task|Model|Score|Timestamp|
|:-|:-|:-|:-|
|Harmful Question Detection|Mistral Medium|Perfect|[03:56](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=236)|
||Qwen 3 32B|Perfect|[03:56](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=236)|
||Mistral Small|95%|[03:56](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=236)|
||Qwen 3 14B|75%|[03:56](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=236)|
|Named Entity Recognition|Both Mistral|90%|[06:52](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=412)|
||Both Qwen|80%|[06:52](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=412)|
|SQL Query Generation|Qwen 3 models|Perfect|[10:02](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=602)|
||Both Mistral|90%|[11:31](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=691)|
|Retrieval Augmented Generation|Mistral Medium|93%|[13:06](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=786)|
||Qwen 3 32B|92.5%|[13:06](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=786)|
||Mistral Small|90.75%|[13:06](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=786)|
||Qwen 3 14B|90%|[13:16](http://www.youtube.com/watch?v=IgyP5EWW6qk&t=796)|
2025-05-16T00:37:51
https://www.reddit.com/r/LocalLLaMA/comments/1knnyco/mistral_smallmedium_vs_qwen_3_1432b/
Ok-Contribution9043
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knnyco
false
null
t3_1knnyco
/r/LocalLLaMA/comments/1knnyco/mistral_smallmedium_vs_qwen_3_1432b/
false
false
self
33
{'enabled': False, 'images': [{'id': 'WbHqIfBn5AB4fR6uVD_1abmh323GmW2X9etLOFXGYVE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/WZIPPfVt-L8Kx39f0cgPZ76fNq7cXCpazL0_zTvQXSA.jpg?width=108&crop=smart&auto=webp&s=25f4ac3d8995ce6e6d942e49df944173bcfba2bb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/WZIPPfVt-L8Kx39f0cgPZ76fNq7cXCpazL0_zTvQXSA.jpg?width=216&crop=smart&auto=webp&s=57d8aa1771a9232faee18eb276abd3411e7421b2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/WZIPPfVt-L8Kx39f0cgPZ76fNq7cXCpazL0_zTvQXSA.jpg?width=320&crop=smart&auto=webp&s=619f249b16adfcf3f1f08305f7cc91437b382023', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/WZIPPfVt-L8Kx39f0cgPZ76fNq7cXCpazL0_zTvQXSA.jpg?auto=webp&s=9d98ce590dad6c8076bda94fc59ba439366f03a9', 'width': 480}, 'variants': {}}]}
Ollama now supports multimodal models
165
2025-05-16T00:49:35
https://github.com/ollama/ollama/releases/tag/v0.7.0
mj3815
github.com
1970-01-01T00:00:00
0
{}
1kno67v
false
null
t3_1kno67v
/r/LocalLLaMA/comments/1kno67v/ollama_now_supports_multimodal_models/
false
false
https://b.thumbs.redditm…Qk-4GWQBEO6U.jpg
165
{'enabled': False, 'images': [{'id': 'EYSOgt-huXVG7aCPzUDW4XhGveLcg1EJjxhJIBU6I8E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?width=108&crop=smart&auto=webp&s=755690c551b95003497e4cfd5a5372ed9a536038', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?width=216&crop=smart&auto=webp&s=5d4d60f85e78bf0a6c7e0805088c93a2364d9abe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?width=320&crop=smart&auto=webp&s=1564db94708ca2805fc3b5951c7a05d8c6417e09', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?width=640&crop=smart&auto=webp&s=7f3c5bd5d3b3d4eebb1d060ea3a5f9818c6d5026', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?width=960&crop=smart&auto=webp&s=7d33d5e3a935a44b8b7b355e4d796d93e410ba19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?width=1080&crop=smart&auto=webp&s=9a5cc85983baed79e9bedfc58e933cade4a2de00', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?auto=webp&s=d2ee3da206b53c844eac043de614c0dd0ae1653e', 'width': 1200}, 'variants': {}}]}
Falcon-Edge: A series of powerful, extremely compressed, universal and fine-tunable Language Models
1
[removed]
2025-05-16T00:58:28
https://www.reddit.com/r/LocalLLaMA/comments/1knocd0/falconedge_a_series_of_powerful_extremely/
Automatic_Truth_6666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knocd0
false
null
t3_1knocd0
/r/LocalLLaMA/comments/1knocd0/falconedge_a_series_of_powerful_extremely/
false
false
self
1
{'enabled': False, 'images': [{'id': 'snMK3m71GR6Epj4JFyxwfnfSQAY4MdpQM2D-MQbIjf4', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=108&crop=smart&auto=webp&s=2482e8b6c898581fbe3a0dd8aef5ddc7737cb8bd', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=216&crop=smart&auto=webp&s=fbe53f0b33ef2a8e2da7a35cebb32ecdfcc1e480', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=320&crop=smart&auto=webp&s=4b7868c0c67e8caefeb88fff80f3eb79c41fb138', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=640&crop=smart&auto=webp&s=6f0a56622036d70d6531eb48534a58fe46c64466', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=960&crop=smart&auto=webp&s=a26125495c011ff8ca42036af55e9ab08b09554a', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=1080&crop=smart&auto=webp&s=774bda89a85d1722b0edc680132d18d69ded4c5f', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?auto=webp&s=91e23c9935ed18fa4b54c3ae98086329409915f9', 'width': 1968}, 'variants': {}}]}
Grok prompts are now open source on GitHub
64
2025-05-16T01:19:49
https://github.com/xai-org/grok-prompts
FreemanDave
github.com
1970-01-01T00:00:00
0
{}
1knorbe
false
null
t3_1knorbe
/r/LocalLLaMA/comments/1knorbe/grok_prompts_are_now_open_source_on_github/
false
false
https://a.thumbs.redditm…Lqslalz6sf48.jpg
64
{'enabled': False, 'images': [{'id': 'KYE1XpUSPpTs8mtE56aEVkrQ9eWAoQL-wM8Heh8Vvxk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5uwLTcCO_xgjsJvNop0QnSOEwQGDRqjKJtrO-U6w_F8.jpg?width=108&crop=smart&auto=webp&s=aceb23340d1f33e62f0d87ec58ca9ac52d7260cd', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5uwLTcCO_xgjsJvNop0QnSOEwQGDRqjKJtrO-U6w_F8.jpg?width=216&crop=smart&auto=webp&s=37f9a5b22e2640d7fc34193480097b22d62117d0', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5uwLTcCO_xgjsJvNop0QnSOEwQGDRqjKJtrO-U6w_F8.jpg?width=320&crop=smart&auto=webp&s=29eff9f426f9096b211ed527068b986189b4c955', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/5uwLTcCO_xgjsJvNop0QnSOEwQGDRqjKJtrO-U6w_F8.jpg?auto=webp&s=a47cc181656640e4a623ed132ac2d7b30025e38c', 'width': 400}, 'variants': {}}]}
Ollama's new engine for multimodal models
1
Ollama has so far relied on the [ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp) project for model support and has instead focused on ease of use and model portability. As more multimodal models are released by major research labs, supporting these models the way Ollama intends became more and more challenging. We set out to support a new engine that makes multimodal models first-class citizens, and to get Ollama's partners contributing more directly to the community around the GGML tensor library.

**What does this mean?**

To sum it up, this work improves the reliability and accuracy of Ollama's local inference, and sets the foundations for supporting future modalities with more capabilities, i.e. speech, image generation, video generation, longer context sizes, and improved tool support for models. Let's break down a couple of specific areas:

# Model modularity

Our goal is to confine each model's "blast radius" to itself, improving reliability and making it easier for creators and developers to integrate new models.

Today, *ggml/llama.cpp* offers first-class support for text-only models. For multimodal systems, however, the **text decoder** and **vision encoder** are split into separate models and executed independently. Passing image embeddings from the vision model into the text model therefore demands model-specific logic in the orchestration layer that can break specific model implementations.

Within Ollama, each model is fully self-contained and can expose its own projection layer, aligned with how that model was trained. This isolation lets model creators implement and ship their code without patching multiple files or adding cascading `if` statements. They no longer need to understand a shared multimodal projection function or worry about breaking other models; they can focus solely on their own model and its training. Examples of how some models are implemented are available on [Ollama's GitHub repository](https://github.com/ollama/ollama/tree/main/model/models).

# Accuracy

Large images produce a large number of tokens, which may exceed the batch size. Processing this correctly with the right positional information is challenging, specifically when a single image crosses batch boundaries. Ollama adds metadata as it processes images to help improve accuracy. Some examples:

* Should causal attention be on or off?
* Is it possible to split the image embeddings into batches for processing, and if so, where are the boundaries when accounting for output quality and the computer being used for inference? If an image is split in the wrong place, the quality of the output goes down. This is usually defined by the model and can be checked in its paper.

Many other local inference tools implement this differently; while a similar result may be achieved, it does not follow how the models were designed and trained.

# Memory management

**Image caching**

Once an image is processed, Ollama caches it so later prompts are faster; the image remains in cache while it is still being used and is not discarded by memory-cleanup limits.

**Memory estimation & KV cache optimizations**

Ollama collaborates with hardware manufacturers and an operating system partner to make sure the correct hardware metadata is detected, so Ollama can better estimate and optimize memory usage. For many firmware releases, partners will validate and test against Ollama to minimize regressions and to benchmark new features.

Ollama has some KV cache optimizations to improve how memory is used efficiently. Ollama configures causal attention at the individual model level instead of configuring it as a group. Examples:

* Google DeepMind's Gemma 3 leverages sliding window attention, and Ollama can use that to allocate only a portion of the model's context length to those layers, improving performance; because of the memory efficiency, this means we can increase the context length of the model on the same system or use the remaining memory for higher concurrency.
* To uniquely support **Meta's Llama 4 Scout and Maverick models**, Ollama has implemented chunked attention, attention tuning to support longer context sizes, a specific 2D rotary embedding, and support for the mixture-of-experts model type.

If a model's attention layer isn't fully implemented, such as sliding window attention or chunked attention, it may still *'work'*. However, because this isn't how the model was trained, the end user may begin to see erratic or degraded output from the model over time. This becomes especially prominent with longer contexts / sequences due to cascading effects.

# What's next

* Support longer context sizes
* Support thinking / reasoning
* Tool calling with streaming responses
* Enabling computer use

From: https://ollama.com/blog/multimodal-models
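From the user side, the visible surface of the new engine is simply that vision models take images; a minimal sketch with the `ollama` Python client (the model tag and image path are placeholders):

```python
# Sketch: send an image to a vision model through the Ollama Python client.
# Assumes: pip install ollama, and a multimodal model already pulled.
import ollama

response = ollama.chat(
    model="gemma3",  # placeholder: any vision-capable model tag
    messages=[{
        "role": "user",
        "content": "What is in this picture?",
        "images": ["./photo.png"],  # local file; the client encodes it for you
    }],
)
print(response["message"]["content"])
```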
2025-05-16T01:33:18
https://www.reddit.com/r/LocalLLaMA/comments/1knp0ra/ollamas_new_engine_for_multimodal_models/
sunshinecheung
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knp0ra
false
null
t3_1knp0ra
/r/LocalLLaMA/comments/1knp0ra/ollamas_new_engine_for_multimodal_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
LLAMA 3.3 8B /// When is the official announcement
1
[removed]
2025-05-16T01:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1knp1cw/llama_33_8b_when_is_the_official_announcement/
Amon_star
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knp1cw
false
null
t3_1knp1cw
/r/LocalLLaMA/comments/1knp1cw/llama_33_8b_when_is_the_official_announcement/
false
false
https://b.thumbs.redditm…BVI9PWT9nW5A.jpg
1
{'enabled': False, 'images': [{'id': '1zIomSAXseV6S4T8Yvxq6r6H4yXaLkjUnbPOutnFpaQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=108&crop=smart&auto=webp&s=8a15eec81b665e56551ea83b9168f9cc7c3e15b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=216&crop=smart&auto=webp&s=7fa4aa36113b2f3e8b9121a5687c9ba51a47e37f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=320&crop=smart&auto=webp&s=de67cfc21dfdcdeb15ffa9df959f8e14372b4c3e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=640&crop=smart&auto=webp&s=d1660e500cf6212d2c8cdbb237ac7e29e44290a0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=960&crop=smart&auto=webp&s=a6de3358e4eae269d62ad8991f73abdb583170e2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=1080&crop=smart&auto=webp&s=cb6b5a18982b0517684a96cfdb75739e1530ebfa', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?auto=webp&s=f6a193863077029f0a4745cc1f2cd1e7c16974eb', 'width': 1200}, 'variants': {}}]}
Ollama's new engine for multimodal models
0
[https://ollama.com/blog/multimodal-models](https://ollama.com/blog/multimodal-models)
2025-05-16T01:39:57
https://www.reddit.com/r/LocalLLaMA/comments/1knp5e2/ollamas_new_engine_for_multimodal_models/
sunshinecheung
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knp5e2
false
null
t3_1knp5e2
/r/LocalLLaMA/comments/1knp5e2/ollamas_new_engine_for_multimodal_models/
false
false
self
0
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
MacBook Pro M4 MAX with 128GB what model do you recommend for speed and programming quality?
7
MacBook Pro M4 Max with 128GB: what model do you recommend for speed and programming quality? Ideally it would use MLX.
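If MLX is the priority, `mlx-lm` is the usual starting point; a minimal sketch (the repo id below is just one example of an MLX-converted, 4-bit coding model that fits comfortably in 128 GB):

```python
# Sketch: run an MLX-converted model on Apple Silicon via mlx-lm.
# Assumes: pip install mlx-lm; weights download from Hugging Face on first run.
from mlx_lm import load, generate

# Example id only: a 4-bit 32B coder model leaves ample headroom in 128 GB.
model, tokenizer = load("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit")
print(generate(model, tokenizer,
               prompt="Write a Swift function that reverses a string.",
               max_tokens=256))
```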
2025-05-16T02:19:39
https://www.reddit.com/r/LocalLLaMA/comments/1knpw91/macbook_pro_m4_max_with_128gb_what_model_do_you/
tangoshukudai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knpw91
false
null
t3_1knpw91
/r/LocalLLaMA/comments/1knpw91/macbook_pro_m4_max_with_128gb_what_model_do_you/
false
false
self
7
null
Enable Thinking Mode in vLLM from Python
1
[removed]
2025-05-16T02:28:52
https://www.reddit.com/r/LocalLLaMA/comments/1knq2fo/enable_thinking_mode_in_vllm_from_python/
SnooPaintings2221
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knq2fo
false
null
t3_1knq2fo
/r/LocalLLaMA/comments/1knq2fo/enable_thinking_mode_in_vllm_from_python/
false
false
self
1
null
Are we finally hitting THE wall right now?
281
I saw in multiple articles today that Llama Behemoth is delayed: [https://finance.yahoo.com/news/looks-meta-just-hit-big-214000047.html](https://finance.yahoo.com/news/looks-meta-just-hit-big-214000047.html). I tried the open models from Llama 4 and felt not that great progress. I am also getting underwhelming vibes from Qwen 3 compared to Qwen 2.5. The Qwen team used 36 trillion tokens to train these models, even including trillions of STEM tokens in mid-training, and did all sorts of post-training. The models are good, but not as great a jump as we expected.

With RL we definitely got a new paradigm of making the models think before speaking, and this has led to great models like DeepSeek R1 and OpenAI o1 and o3, and possibly the next ones are even greater. But the jump from o1 to o3 seems to be not that much, me being only a Plus user who has not even tried the Pro tier. Anthropic's Claude Sonnet 3.7 is not better than Sonnet 3.5; the latest version seems good, but mainly for programming and web development. I feel the same about Google: Gemini 2.5 Pro 1 seemed to be a level above the rest of the models, and I finally felt that I could rely on a model and a company, but then they totally rug-pulled the model with Gemini 2.5 Pro 2, where I do not know how to access version 1, and they are field-testing a lot in the lmsys arena, which makes me wonder whether they are seeing those crazy jumps they were touting.

I think DeepSeek R2 will show us the ultimate conclusion on this: whether scaling this RL paradigm even further will make models smarter. Do we really need a new paradigm? Do we need to go back to architectures like T5? Or something totally novel like JEPA from Yann LeCun? Twitter has hated him for not agreeing that autoregressors can actually lead to AGI, but sometimes I feel it too: even the latest and greatest models make very apparent mistakes, and it makes me wonder what it would take to actually have really smart and reliable models.

I love training models using SFT and RL, especially GRPO, my favorite; I have even published some work on it and built pipelines for clients. But it seems that when models are used in production for longer, customer sentiment always goes down rather than even holding steady.

What do you think? Is my thinking about this saturation of RL for autoregressive LLMs somehow flawed?
2025-05-16T02:41:06
https://www.reddit.com/r/LocalLLaMA/comments/1knqap9/are_we_finally_hitting_the_wall_right_now/
Desperate_Rub_1352
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knqap9
false
null
t3_1knqap9
/r/LocalLLaMA/comments/1knqap9/are_we_finally_hitting_the_wall_right_now/
false
false
self
281
{'enabled': False, 'images': [{'id': 'Rrc-9Og25_MiIiQxC6r0qIsOl8aMB5MGrh8uSM8TK30', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/bEeNMzJLyCo7_0q_WkGHHgqRrdx-X58c4S_WiYE4fm4.jpg?width=108&crop=smart&auto=webp&s=ae130f6591dddee8e2ab963a2755d8a3cbc2ca0e', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/bEeNMzJLyCo7_0q_WkGHHgqRrdx-X58c4S_WiYE4fm4.jpg?width=216&crop=smart&auto=webp&s=f1aa180aafbc20f3bcf0aa41ed4500492a77c851', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/bEeNMzJLyCo7_0q_WkGHHgqRrdx-X58c4S_WiYE4fm4.jpg?width=320&crop=smart&auto=webp&s=fb85ea9c6c2b33523e05d5a1663d796d121265e3', 'width': 320}], 'source': {'height': 424, 'url': 'https://external-preview.redd.it/bEeNMzJLyCo7_0q_WkGHHgqRrdx-X58c4S_WiYE4fm4.jpg?auto=webp&s=c65d8c95c659cd2cd05ed0f23b433badc4c7eddd', 'width': 636}, 'variants': {}}]}
Open source multi modal model
1
[removed]
2025-05-16T02:41:37
https://www.reddit.com/r/LocalLLaMA/comments/1knqb1t/open_source_multi_modal_model/
Lord_Momus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knqb1t
false
null
t3_1knqb1t
/r/LocalLLaMA/comments/1knqb1t/open_source_multi_modal_model/
false
false
https://b.thumbs.redditm…GcE64dRpIGEk.jpg
1
null
Simple generation speed test with 2x Arc B580
40
There have been recent rumors about the B580 24GB, so I ran some new tests using my B580s. I used llama.cpp with several backends to test text generation speed using google_gemma-3-27b-it-IQ4_XS.gguf. # Tested backends * IPEX-LLM llama.cpp * build: 1 (3b94b45) with Intel(R) oneAPI DPC++/C++ Compiler 2025.0.4 (2025.0.4.20241205) for x86_64-unknown-linux-gnu * official llama.cpp SYCL * build: 5400 (c6a2c9e7) with Intel(R) oneAPI DPC++/C++ Compiler 2025.1.1 (2025.1.1.20250418) for x86_64-unknown-linux-gnu * official llama.cpp VULKAN * build: 5395 (9c404ed5) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu *(from release)* # Base command `./llama-cli -m AI-12/google_gemma-3-27b-it-Q4_K_S.gguf -ngl 99 -c 8192 -b 512 -p "Why is sky blue?" -no-cnv` # Results |Build|`-fa` Option|Prompt Eval Speed (t/s)|Eval Speed (t/s)|Total Tokens Generated| |:-|:-|:-|:-|:-| |3b94b45 (IPEX-LLM)|-|52.22|8.18|393| |3b94b45 (IPEX-LLM)|Yes|-|-|(corrupted text)| |c6a2c9e7 (SYCL)|-|13.72|5.66|545| |c6a2c9e7 (SYCL)|Yes|10.73|5.04|362| |9c404ed5 (vulkan)|-|35.38|4.85|487| |9c404ed5 (vulkan)|Yes|32.99|4.78|559| # Thoughts The results are disappointing. I previously tested google-gemma-2-27b-IQ4_XS.gguf with 2x 3060 GPUs and achieved around 15 t/s. https://preview.redd.it/xuijd9iz121f1.png?width=606&format=png&auto=webp&s=b280fe6c9e3ca8f752ae59208008fed818f1d8d1 With image generation models, the B580 achieves generation speeds close to the RTX 4070, but its performance with LLMs seems to fall short of expectations. I don't know how much the PRO version (B580 with 24GB) will cost, but if you're looking for a budget-friendly way to get more VRAM, it might be better to consider the AI MAX+ 395 ([I've heard it can reach 6.4 tokens per second with 32B Q8](https://www.reddit.com/r/LocalLLaMA/comments/1kmi3ra/amd_strix_halo_ryzen_ai_max_395_gpu_llm/)). I tested this on Linux, but since Arc GPUs are said to perform better on Windows, you might get faster results there. If anyone has managed to get better performance with the B580, please let me know in the comments. * Interestingly, generation is fast up to around 100-200 tokens, but then it gradually slows down, so using `llama-bench` with tg512/pp128 is not a good way to test this GPU.
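Since generation reportedly slows after the first 100-200 tokens, a single short tg run hides the effect; a sketch of a sweep over generation lengths with stock llama-bench (same model file as the base command above, and the -p/-n flags accept comma-separated lists):

    ./llama-bench -m AI-12/google_gemma-3-27b-it-Q4_K_S.gguf -p 512 -n 128,512,1024

If the tg numbers drop as -n grows, that would confirm the slowdown pattern described above.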
2025-05-16T02:48:57
https://www.reddit.com/r/LocalLLaMA/comments/1knqfw3/simple_generation_speed_test_with_2x_arc_b580/
prompt_seeker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knqfw3
false
null
t3_1knqfw3
/r/LocalLLaMA/comments/1knqfw3/simple_generation_speed_test_with_2x_arc_b580/
false
false
https://b.thumbs.redditm…MmCa_bH9LdMQ.jpg
40
null
How to Enable DuckDB/Smallpond to Use High-Performance DeepSeek 3FS
1
[removed]
2025-05-16T03:06:41
https://i.redd.it/6zrwncyk821f1.png
HardCore_Dev
i.redd.it
1970-01-01T00:00:00
0
{}
1knqrn8
false
null
t3_1knqrn8
/r/LocalLLaMA/comments/1knqrn8/how_to_enable_duckdbsmallpond_to_use/
false
false
https://a.thumbs.redditm…F-R4gyRBXLU0.jpg
1
{'enabled': True, 'images': [{'id': 'lFY3v1uHb9yT0lkyU3W6xpOBnomyEbOJaO15TZyXRh4', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?width=108&crop=smart&auto=webp&s=2b63352795271f14990cf0762f8bbc144e27d9f7', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?width=216&crop=smart&auto=webp&s=b97dd9c58e0cfabc474d69659b4a9985d0e020f7', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?width=320&crop=smart&auto=webp&s=54a86aadf64cc323a6bd0eacb026f81a774c2f58', 'width': 320}, {'height': 334, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?width=640&crop=smart&auto=webp&s=cbf8d34b773af1e94bd710b82144736fadea5fff', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?width=960&crop=smart&auto=webp&s=e76966245ffc6d792f4efb78e22f349a7562b86e', 'width': 960}, {'height': 564, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?width=1080&crop=smart&auto=webp&s=8ee295ae510a97dcd41ffe66685d77e00bcaf7f5', 'width': 1080}], 'source': {'height': 838, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?auto=webp&s=512e96a1161b858873b3c11621fbebc99ebea6f8', 'width': 1602}, 'variants': {}}]}
If you had access to your LLaMA in 2015, how much money could you make in 365 days?
1
[removed]
2025-05-16T05:27:12
https://www.reddit.com/r/LocalLLaMA/comments/1knt74o/if_you_had_access_to_your_llama_in_2015_how_much/
paimon_for_dinner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knt74o
false
null
t3_1knt74o
/r/LocalLLaMA/comments/1knt74o/if_you_had_access_to_your_llama_in_2015_how_much/
false
false
self
1
null
🚀 Embedding 10,000 text chunks per second on a CPU?!
23
When working with large volumes of documents, embedding can quickly become both a performance bottleneck and a cost driver. I recently experimented with *static embedding*, and was blown away by the speed. No self-attention, no feed-forward layers, just direct token key access. The result? Incredibly fast embedding with minimal overhead. I built a lightweight sample implementation in Rust using HF Candle and exposed it via Python so you can try it yourself. Check out the repo at: [https://github.com/a-agmon/static-embedding](https://github.com/a-agmon/static-embedding) Read more about static embedding: [https://huggingface.co/blog/static-embeddings](https://huggingface.co/blog/static-embeddings) or just give it a try: pip install static_embed from static_embed import Embedder # 1. Use the default public model (no args) embedder = Embedder() # 2. OR specify your own base-URL that hosts the weights/tokeniser # (must contain the same two files: ``model.safetensors`` & ``tokenizer.json``) # custom_url = "https://my-cdn.example.com/static-retrieval-mrl-en-v1" # embedder = Embedder(custom_url) texts = ["Hello world!", "Rust + Python via PyO3"] embeddings = embedder.embed(texts) print(len(embeddings), "embeddings", "dimension", len(embeddings[0]))
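For intuition, a minimal sketch of what a static embedding model does under the hood: no attention, no feed-forward layers, just a per-token vector lookup followed by mean pooling. The vocabulary, table, and whitespace tokenizer here are toy assumptions; the real model ships a trained table (model.safetensors) and a proper tokenizer.json:

    import numpy as np

    # toy vocabulary and a random token-embedding table standing in for trained weights
    vocab = {"hello": 0, "world": 1, "rust": 2, "python": 3}
    table = np.random.rand(len(vocab), 8).astype(np.float32)  # 8-dim toy vectors

    def embed(text: str) -> np.ndarray:
        # tokenize, look up each token's static vector, mean-pool into one vector
        ids = [vocab[t] for t in text.lower().split() if t in vocab]
        assert ids, "no known tokens in input"
        return table[ids].mean(axis=0)

    print(embed("hello world"))  # one fixed-size vector per input text

Because each token's vector is precomputed, embedding cost is essentially a table lookup plus an average, which is why CPU throughput can be so high.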
2025-05-16T05:41:32
https://www.reddit.com/r/LocalLLaMA/comments/1kntez5/embedding_10000_text_chunks_per_second_on_a_cpu/
aagmon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kntez5
false
null
t3_1kntez5
/r/LocalLLaMA/comments/1kntez5/embedding_10000_text_chunks_per_second_on_a_cpu/
false
false
self
23
{'enabled': False, 'images': [{'id': 'FUwqQ-5SRbLiBkA4kjON9wpmXjRG9UPIcP5RiJhk34o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?width=108&crop=smart&auto=webp&s=c35ebe35ce76d82878cc5a2ead35e9f501074d25', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?width=216&crop=smart&auto=webp&s=a2d533e4772de5fe310a81c6a32374a395859f54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?width=320&crop=smart&auto=webp&s=88a908dacd3b33bb40752f01ce72458bb927cc12', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?width=640&crop=smart&auto=webp&s=726dba241895e7bfe84976202976c8de14b8f42f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?width=960&crop=smart&auto=webp&s=08012247db8037244c1dea67d8858084755ad3b8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?width=1080&crop=smart&auto=webp&s=b1be1758eb0c01e7106ab6b4f41060f9ea497c75', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?auto=webp&s=4baa2720b7aed552ecec08a4b13e442478901ba6', 'width': 1200}, 'variants': {}}]}
New Wayfarer
68
2025-05-16T05:57:44
https://huggingface.co/LatitudeGames/Harbinger-24B
ScavRU
huggingface.co
1970-01-01T00:00:00
0
{}
1kntnfn
false
null
t3_1kntnfn
/r/LocalLLaMA/comments/1kntnfn/new_wayfarer/
false
false
https://b.thumbs.redditm…dMMMXkj6Q1FI.jpg
68
{'enabled': False, 'images': [{'id': '3eNplCwJaxdudpsBpojgiV6VvZxxXeEn8B2H78yoLxw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?width=108&crop=smart&auto=webp&s=325f4d49e552f10dcadb380c2b4d5b80dcb1271a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?width=216&crop=smart&auto=webp&s=34c8901128686a21ac92a930140cd223ab4c031d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?width=320&crop=smart&auto=webp&s=fcf3cb3e6fae6d50878bd0777ca1223b7816b35d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?width=640&crop=smart&auto=webp&s=50b539fa8660c67ab3d2b2b6de1d79bf8ba7373b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?width=960&crop=smart&auto=webp&s=a326f183a143228bb76cb5ba3fed137fc8ecc189', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?width=1080&crop=smart&auto=webp&s=8e91c197b13668fb31865c7184b443cbba528bad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?auto=webp&s=dadccdc1ca612047ae0299a96ca3c5dd792c1e28', 'width': 1200}, 'variants': {}}]}
Hardware for Machine Learning
1
[removed]
2025-05-16T06:11:17
https://www.reddit.com/r/LocalLLaMA/comments/1kntut7/hardware_for_machine_learning/
paolovic89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kntut7
false
null
t3_1kntut7
/r/LocalLLaMA/comments/1kntut7/hardware_for_machine_learning/
false
false
self
1
null
What is your goal in using small language AI models?
0
I mean 1B models like Llama, or even 3B... those with 8 billion parameters or fewer, though 1B models interest me the most. How do you use them? Where? Can they really be helpful? P.S. Please write about a specific model and use case.
2025-05-16T06:15:01
https://www.reddit.com/r/LocalLLaMA/comments/1kntwtb/what_is_your_goal_to_use_small_language_ai_models/
Perdittor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kntwtb
false
null
t3_1kntwtb
/r/LocalLLaMA/comments/1kntwtb/what_is_your_goal_to_use_small_language_ai_models/
false
false
self
0
null
Document summarization
1
[removed]
2025-05-16T06:24:47
https://www.reddit.com/r/LocalLLaMA/comments/1knu25b/document_summarization/
YshyTrng
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knu25b
false
null
t3_1knu25b
/r/LocalLLaMA/comments/1knu25b/document_summarization/
false
false
self
1
null
Are these specs enough to run 4B, 11B vision models? If not, what should I upgrade?
1
[removed]
2025-05-16T06:56:25
https://www.reddit.com/r/LocalLLaMA/comments/1knuj1z/is_this_specs_enough_to_run_4b_11b_vision_models/
tvdzn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knuj1z
false
null
t3_1knuj1z
/r/LocalLLaMA/comments/1knuj1z/is_this_specs_enough_to_run_4b_11b_vision_models/
false
false
https://b.thumbs.redditm…FaOBxl0qw7tQ.jpg
1
null
Falcon-E: series of powerful, universal and fine-tunable BitNet models
1
[removed]
2025-05-16T07:14:43
https://www.reddit.com/r/LocalLLaMA/comments/1knusk2/falcone_series_of_powerful_universal_and/
Automatic_Truth_6666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knusk2
false
null
t3_1knusk2
/r/LocalLLaMA/comments/1knusk2/falcone_series_of_powerful_universal_and/
false
false
https://b.thumbs.redditm…VDWRp-Qxap8M.jpg
1
null
Falcon-E a series of BitNet models (1B and 3B) dropped
1
[removed]
2025-05-16T07:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1knutzw/falcone_a_series_of_bitnet_models_1b_and_3b/
Automatic_Truth_6666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knutzw
false
null
t3_1knutzw
/r/LocalLLaMA/comments/1knutzw/falcone_a_series_of_bitnet_models_1b_and_3b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'snMK3m71GR6Epj4JFyxwfnfSQAY4MdpQM2D-MQbIjf4', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=108&crop=smart&auto=webp&s=2482e8b6c898581fbe3a0dd8aef5ddc7737cb8bd', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=216&crop=smart&auto=webp&s=fbe53f0b33ef2a8e2da7a35cebb32ecdfcc1e480', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=320&crop=smart&auto=webp&s=4b7868c0c67e8caefeb88fff80f3eb79c41fb138', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=640&crop=smart&auto=webp&s=6f0a56622036d70d6531eb48534a58fe46c64466', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=960&crop=smart&auto=webp&s=a26125495c011ff8ca42036af55e9ab08b09554a', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=1080&crop=smart&auto=webp&s=774bda89a85d1722b0edc680132d18d69ded4c5f', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?auto=webp&s=91e23c9935ed18fa4b54c3ae98086329409915f9', 'width': 1968}, 'variants': {}}]}
Falcon-E: A series of powerful, fine-tunable and universal BitNet models
157
TII announced today the release of Falcon-Edge, a set of compact language models with 1B and 3B parameters, sized at 600MB and 900MB respectively. They can also be reverted back to bfloat16 with little performance degradation. Initial results show solid performance: better than other small models (SmolLMs, Microsoft bitnet, Qwen3-0.6B) and comparable to Qwen3-1.7B, with 1/4 memory footprint. They also released a fine-tuning library, `onebitllms`: [https://github.com/tiiuae/onebitllms](https://github.com/tiiuae/onebitllms) Blogposts: [https://huggingface.co/blog/tiiuae/falcon-edge](https://huggingface.co/blog/tiiuae/falcon-edge) / [https://falcon-lm.github.io/blog/falcon-edge/](https://falcon-lm.github.io/blog/falcon-edge/) HF collection: [https://huggingface.co/collections/tiiuae/falcon-edge-series-6804fd13344d6d8a8fa71130](https://huggingface.co/collections/tiiuae/falcon-edge-series-6804fd13344d6d8a8fa71130)
2025-05-16T07:38:42
https://www.reddit.com/r/LocalLLaMA/comments/1knv4bq/falcone_a_series_of_powerful_finetunable_and/
JingweiZUO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knv4bq
false
null
t3_1knv4bq
/r/LocalLLaMA/comments/1knv4bq/falcone_a_series_of_powerful_finetunable_and/
false
false
self
157
null
Wanting to make an offline hands-free TTS chat bot
2
I want to make a fully offline chat bot that responds with TTS to any voice input from me, without keywords or clicking anything. I saw a gaming video where someone talked to an AI the whole time; it made for some funny content, and I was hoping to do the same myself without having to pay for anything. I have been trying for the better part of 3 hours to figure it out with the help of AI and the good ol' internet, but it all comes back to Linux, and I am on Windows 11.
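A minimal sketch of such a loop that should run on Windows, assuming commonly used packages (faster-whisper for offline speech-to-text, an OpenAI-compatible local server such as Ollama for the LLM, pyttsx3 for offline TTS); the endpoint URL and model name are placeholders for whatever you run locally, and a real hands-free setup would add voice activity detection instead of fixed 5-second clips:

    import pyttsx3
    import requests
    import sounddevice as sd
    import soundfile as sf
    from faster_whisper import WhisperModel

    stt = WhisperModel("base")   # small offline speech-to-text model
    tts = pyttsx3.init()         # uses the built-in Windows voices

    while True:
        audio = sd.rec(int(5 * 16000), samplerate=16000, channels=1)  # 5 s clip
        sd.wait()
        sf.write("clip.wav", audio, 16000)
        segments, _ = stt.transcribe("clip.wav")
        text = " ".join(s.text for s in segments).strip()
        if not text:
            continue  # heard nothing, listen again
        r = requests.post(
            "http://localhost:11434/v1/chat/completions",  # assumed local Ollama endpoint
            json={"model": "llama3", "messages": [{"role": "user", "content": text}]},
        )
        reply = r.json()["choices"][0]["message"]["content"]
        tts.say(reply)
        tts.runAndWait()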
2025-05-16T08:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1knvflp/wanting_to_make_an_offline_hands_free_tts_chat_bot/
TwTFurryGarbage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knvflp
false
null
t3_1knvflp
/r/LocalLLaMA/comments/1knvflp/wanting_to_make_an_offline_hands_free_tts_chat_bot/
false
false
self
2
null
Why do I need to share my contact information/get a HF token with Mistral to use their models in vLLM but not with Ollama?
9
I've been working with Ollama on a locally hosted AI project, and I was looking to try some alternatives to see what the performance is like. vLLM appears to be a performance-focused alternative, so I've got that downloaded in Docker; however, there are models it can't use unless I agree to share my contact information on the Hugging Face website and set the HF token in the environment for vLLM. I would like to avoid this step, as one of the selling points of the project I'm working on is that it's easy for the user to install, and having the user make an account somewhere and get an access token is contrary to that goal. How come Ollama has direct access to the Mistral models without requiring this extra step? Furthermore, the Mistral website says 7B is released under the Apache 2.0 license and can be "used without restrictions", so could someone please shed some light on why they need my contact information if I go through HF, and whether there's an alternative route as a workaround? Thanks!
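One workaround, sketched under the assumption that you only need the weights and not the HF hub integration: fetch the files once from a source you control, then point vLLM at a local directory, which skips the token check entirely. The mount path and model directory name are illustrative:

    # vLLM's OpenAI-compatible server accepts a local path in place of a hub ID
    docker run --gpus all -v ./models:/models vllm/vllm-openai --model /models/mistral-7b-instruct

You could then bundle the one-time download step into your installer so end users never touch Hugging Face.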
2025-05-16T08:04:25
https://www.reddit.com/r/LocalLLaMA/comments/1knvgva/why_do_i_need_to_share_my_contact_informationget/
sebovzeoueb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knvgva
false
null
t3_1knvgva
/r/LocalLLaMA/comments/1knvgva/why_do_i_need_to_share_my_contact_informationget/
false
false
self
9
null
How do local models compare to cloud models in your experience?
1
[removed]
2025-05-16T08:07:57
https://www.reddit.com/r/LocalLLaMA/comments/1knviik/how_do_local_models_to_cloud_models_in_your/
IRBosman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knviik
false
null
t3_1knviik
/r/LocalLLaMA/comments/1knviik/how_do_local_models_to_cloud_models_in_your/
false
false
self
1
null
A byproduct of fighting AI news overload: a multilingual daily digest for staying sane
1
[removed]
2025-05-16T08:10:42
https://rebabel.net/en/
qiaoy
rebabel.net
1970-01-01T00:00:00
0
{}
1knvjrq
false
null
t3_1knvjrq
/r/LocalLLaMA/comments/1knvjrq/a_byproduct_of_fighting_ai_news_overload_a/
false
false
default
1
null
How do you bulk analyze users' queries?
1
[removed]
2025-05-16T08:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1knvmkg/how_do_you_bulk_analyze_users_queries/
Yersyas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knvmkg
false
null
t3_1knvmkg
/r/LocalLLaMA/comments/1knvmkg/how_do_you_bulk_analyze_users_queries/
false
false
self
1
null
How do you bulk analyze users' queries?
1
[removed]
2025-05-16T08:18:31
https://www.reddit.com/r/LocalLLaMA/comments/1knvni9/how_do_you_bulk_analyze_users_queries/
Yersyas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knvni9
false
null
t3_1knvni9
/r/LocalLLaMA/comments/1knvni9/how_do_you_bulk_analyze_users_queries/
false
false
self
1
null
Most cover letters from non-experienced applicants nowadays: "I have extensive skills in machine learning, deep learning and LLM, using python and PyTorch"
1
[removed]
2025-05-16T09:38:23
https://www.reddit.com/r/LocalLLaMA/comments/1knwq8h/most_cover_letters_from_nonexperienced_applicants/
rem_dreamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knwq8h
false
null
t3_1knwq8h
/r/LocalLLaMA/comments/1knwq8h/most_cover_letters_from_nonexperienced_applicants/
false
false
self
1
null
What can be done on a single GH200 96 GB VRAM and 480GB RAM?
2
I came across this unit because it is 30-40% off. I am wondering if this unit alone makes more sense than purchasing 4x Pro 6000 96GB if the need is to run an AI agent based on a big LLM, like quantized R1 671B. The price is about 70% of the 4x Pro 6000 setup, making me feel like I can justify the purchase. Thanks for any input!
2025-05-16T10:00:16
https://www.reddit.com/r/LocalLLaMA/comments/1knx1hn/what_can_be_done_on_a_single_gh200_96_gb_vram_and/
TimAndTimi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knx1hn
false
null
t3_1knx1hn
/r/LocalLLaMA/comments/1knx1hn/what_can_be_done_on_a_single_gh200_96_gb_vram_and/
false
false
self
2
null
Qwen3 local 14B Q4_K_M or 30B A3B Q2_K_L: which has higher quality?
15
Qwen3 comes in xxB AxB flavors that can be run locally. If you compare 14B Q4_K_M vs 30B A3B Q2_K_L, generation speed on my test bench matches given the same context size. The question is (and what I don't understand) how the number of active experts affects the quality of the output. Could I read 14B as 14B A14B, meaning one expert is active with the full 14B over all layers, while 30B A3B means 10 experts run in parallel on different layers with 3B each, or how does it work technically? Normally my rule of thumb is that higher B with lower Q (above Q2) always beats lower B with higher Q. In this special case I am unsure if that still applies. Does anyone have a benchmark that can test output quality and perception and would be willing to test these rather small quants against each other? The normal benchmarks only test the full versions, but for reasonable local use it has to be a smaller quant to fit memory and speed demands. What is the quality? Thank you for any technical input.
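For what it's worth, a minimal sketch of the usual reading of these names; the "A" stands for active parameters rather than agents, and the numbers below just restate the naming convention, not measured values:

    total_params  = 30e9  # "30B": all experts must sit in memory
    active_params = 3e9   # "A3B": parameters the router actually uses per token
    print(active_params / total_params)  # 0.1 -> ~10% of the weights run per token

A dense 14B is effectively "A14B": every weight participates in every token. So the quality question becomes whether 3B of per-token compute drawn from a 30B pool at Q2 beats 14B of dense compute at Q4.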
2025-05-16T10:04:51
https://www.reddit.com/r/LocalLLaMA/comments/1knx47e/qwen3_local_14b_q4_k_m_or_30b_a3b_q2_k_l_who_has/
Consistent_Winner596
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knx47e
false
null
t3_1knx47e
/r/LocalLLaMA/comments/1knx47e/qwen3_local_14b_q4_k_m_or_30b_a3b_q2_k_l_who_has/
false
false
self
15
null
Best practices to prevent the accidental generation of illegal content and how to properly manage these risks?
1
[removed]
2025-05-16T10:23:51
https://www.reddit.com/r/LocalLLaMA/comments/1knxenu/best_practices_to_prevent_the_accidental/
CorruptCobalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knxenu
false
null
t3_1knxenu
/r/LocalLLaMA/comments/1knxenu/best_practices_to_prevent_the_accidental/
false
false
self
1
null
Which LLM is used to generate scripts for YT videos like the ones on these channels?
1
[removed]
2025-05-16T10:25:54
https://www.reddit.com/r/LocalLLaMA/comments/1knxft6/which_llm_is_used_to_generate_scripts_for_yt/
BlackTigerKungFu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knxft6
false
null
t3_1knxft6
/r/LocalLLaMA/comments/1knxft6/which_llm_is_used_to_generate_scripts_for_yt/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fFh9Vmbr_WcV1iPJUYuyYCjPC20_Rj4iL1YYkjRI-z0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/6A_Reh1USNUWjdRpl1L9CG_nC4o-9x7LrSCp_cYD7ug.jpg?width=108&crop=smart&auto=webp&s=af3d534c614e01d2060af70f8becfdf42d9d2058', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/6A_Reh1USNUWjdRpl1L9CG_nC4o-9x7LrSCp_cYD7ug.jpg?width=216&crop=smart&auto=webp&s=6b152022f674610e6274cccfb049ee1f67808bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/6A_Reh1USNUWjdRpl1L9CG_nC4o-9x7LrSCp_cYD7ug.jpg?width=320&crop=smart&auto=webp&s=3f0a273f76f74ab955a137bb68342f148ccbd111', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/6A_Reh1USNUWjdRpl1L9CG_nC4o-9x7LrSCp_cYD7ug.jpg?width=640&crop=smart&auto=webp&s=d611647f5be8d818ad50251c07635be6e9fff72d', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/6A_Reh1USNUWjdRpl1L9CG_nC4o-9x7LrSCp_cYD7ug.jpg?auto=webp&s=bab69e9829e027a2e22188b75784abab8aac0d4e', 'width': 900}, 'variants': {}}]}
Best practices to prevent the accidental generation of illegal content and how to properly manage these risks?
1
[removed]
2025-05-16T10:26:41
https://www.reddit.com/r/LocalLLaMA/comments/1knxg8y/best_practices_to_prevent_the_accidental/
CorruptCobalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knxg8y
false
null
t3_1knxg8y
/r/LocalLLaMA/comments/1knxg8y/best_practices_to_prevent_the_accidental/
false
false
self
1
null
Which LLM is used to generate scripts for videos like the ones on these YT channels?
1
[removed]
2025-05-16T10:27:24
https://www.reddit.com/r/LocalLLaMA/comments/1knxgns/which_llm_is_used_to_generate_scripts_for_videos/
BlackTigerKungFu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knxgns
false
null
t3_1knxgns
/r/LocalLLaMA/comments/1knxgns/which_llm_is_used_to_generate_scripts_for_videos/
false
false
self
1
null
Local alternative to Replit?
1
[removed]
2025-05-16T11:03:17
https://www.reddit.com/r/LocalLLaMA/comments/1kny1o7/locally_alternative_to_replit/
ActuatorLanky9739
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kny1o7
false
null
t3_1kny1o7
/r/LocalLLaMA/comments/1kny1o7/locally_alternative_to_replit/
false
false
self
1
null
How far can we get without LLM, or... What tools do we currently use to pre/post process Data in our pipelines?
0
The more I work with LLMs in my flows, and the larger the scale, the more logic I move out of the LLM's hands into specific tools and libraries. Now with MCPs we see an increase in utilities, but they still need to be activated by LLM agents. What tools/libraries do you use to preprocess your data? Name your libraries and what you do with them; I'm looking for ideas, anywhere from OCR, Office/PDF file parsing, search and retrieval, prompt engineering, fan-out and storage of replies, caching over LLMs, and more. Thanks, and cheers
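As a concrete example of the kind of logic that lives outside the LLM, a sketch of plain-library PDF text extraction ahead of any model call; pypdf is one common choice, and the file name is illustrative:

    from pypdf import PdfReader  # pip install pypdf

    reader = PdfReader("report.pdf")
    # pull raw text page by page; chunking and cleanup happen here, not in the LLM
    chunks = [page.extract_text() or "" for page in reader.pages]
    print(len(chunks), "pages extracted")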
2025-05-16T11:09:52
https://www.reddit.com/r/LocalLLaMA/comments/1kny5kq/how_far_can_we_get_without_llm_or_what_tools_do/
CptKrupnik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kny5kq
false
null
t3_1kny5kq
/r/LocalLLaMA/comments/1kny5kq/how_far_can_we_get_without_llm_or_what_tools_do/
false
false
self
0
null
I Didn't Expect GPU Access to Be This Simple and Honestly, I'm Still Kinda Shocked
0
I've worked with enough AI tools to know that things rarely "just work." Whether it's spinning up cloud compute, wrangling environment configs, or trying to keep dependencies from breaking your whole pipeline, it's usually more pain than progress. That's why what happened recently genuinely caught me off guard. I was prepping to run a few model tests, nothing huge, but definitely more than my local machine could handle. I figured I'd go through the usual routine: open up AWS or GCP, set up a new instance, SSH in, install the right CUDA version, and lose an hour of my life before running a single line of code. Instead, I tried something different. I had this new extension installed in VSCode. Hit a GPU icon out of curiosity… and suddenly I had a list of A100s and H100s in front of me. No config, no Docker setup, no long-form billing dashboard. I picked an A100, clicked Start, and within seconds I was running my workload right inside my IDE. But what actually made it click for me was a short walkthrough video they shared. I had a couple of doubts about how the backend was wired up and what exactly was happening behind the scenes, and the video laid it out clearly. Honestly, it was well done and saved me from overthinking the setup. I've since tested image generation, small-scale training, and a few inference cycles, and the experience has been consistently clean. No downtime. No crashing environments. Just fast, quiet power. The cost? $14/hour, which sounds like a lot until you compare it to the time and frustration saved. I've literally spent more money on worse setups with more overhead. It's weird to say, but this is the first time GPU compute has actually felt like a dev tool, not some backend project that needs its own infrastructure team. If you're curious to try it out, here's the page I started with: https://docs.blackbox.ai/new-release-gpus-in-your-ide Planning to push it further with a longer training run next. Has anyone else put it through something heavier? Would love to hear how it holds up
2025-05-16T11:23:40
https://v.redd.it/0i07y8qbp41f1
PixieE3
v.redd.it
1970-01-01T00:00:00
0
{}
1knye1p
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/0i07y8qbp41f1/DASHPlaylist.mpd?a=1749986633%2CYmQ1MWQwNTkwZDdhODI2Y2VkNmM5ZjAzNWY4M2U0MmJmNDdjOThiNzE4NDYzNzJkN2JmZjkwNGRmOWIyNWM0OQ%3D%3D&v=1&f=sd', 'duration': 109, 'fallback_url': 'https://v.redd.it/0i07y8qbp41f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/0i07y8qbp41f1/HLSPlaylist.m3u8?a=1749986633%2CNDhiMzIwOTBjNDY3OTNlODRhYmE4YjUxNzRiZjllMGM3Mzg3YzZkOTQ0OGQ3MWQ5OTg5Y2ZkYmY1OWM4YzkyZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0i07y8qbp41f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1114}}
t3_1knye1p
/r/LocalLLaMA/comments/1knye1p/i_didnt_expect_gpu_access_to_be_this_simple_and/
false
false
https://external-preview…cfabf99d7b2cec88
0
{'enabled': False, 'images': [{'id': 'Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?width=108&crop=smart&format=pjpg&auto=webp&s=8191e19c9924370b4dfadd3f52da1e770b1fddf9', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?width=216&crop=smart&format=pjpg&auto=webp&s=33eee9e5c268d0a98245d40bbf6b746b45fa474d', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?width=320&crop=smart&format=pjpg&auto=webp&s=c114b0d71be35264e3ddddb58eb1f1e02d0aa5ca', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?width=640&crop=smart&format=pjpg&auto=webp&s=b675dbe5d83c98c69ebf6c1b2ced9bf0651572d1', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?width=960&crop=smart&format=pjpg&auto=webp&s=90973b7a1bba03fff085fc388bd862bb3f711c99', 'width': 960}, {'height': 698, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?width=1080&crop=smart&format=pjpg&auto=webp&s=734c63682915225b0df9914b310ab066d42e47c4', 'width': 1080}], 'source': {'height': 828, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?format=pjpg&auto=webp&s=5c9d0a3642ecb858a1b33a09392afaa2414d5d94', 'width': 1280}, 'variants': {}}]}
Trying to figure out how to install models from Ollama to LocalAI using the Docker version
0
I'm trying LocalAI as a replacement for Ollama, and I saw from the docs that you're supposed to be able to install models from the Ollama repository. Source: [https://localai.io/docs/getting-started/models/](https://localai.io/docs/getting-started/models/) >From OCIs: `oci://container_image:tag`, `ollama://model_id:tag` However trying to do `docker exec -it <container-name> local-ai <cmd>` (like how you do stuff with Ollama) to call the commands from that page doesn't work and gives me `OCI runtime exec failed: exec failed: unable to start container process: exec: "local-ai": executable file not found in $PATH: unknown` The API is running and I'm able to view the Swagger API docs where I see that there's a `models/apply` route for installing models, however I can't find parameters that match the `ollama://model_id:tag` format. Could someone please point me in the right direction for either running the local-ai executable or providing the correct parameters to the model install endpoint? Thanks! I've been looking through the documentation but haven't found the right combination of information to figure it out myself.
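For anyone else poking at the same thing, a sketch of calling the models/apply route directly instead of looking for a CLI; the JSON field name here is an assumption (check the Swagger page for the actual schema), and the port is the LocalAI default:

    curl http://localhost:8080/models/apply \
      -H "Content-Type: application/json" \
      -d '{"id": "ollama://mistral:7b"}'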
2025-05-16T11:33:27
https://www.reddit.com/r/LocalLLaMA/comments/1knykay/trying_to_figure_out_how_to_install_models_from/
sebovzeoueb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knykay
false
null
t3_1knykay
/r/LocalLLaMA/comments/1knykay/trying_to_figure_out_how_to_install_models_from/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Daj-Ki-yub-oCTlNBpbYtmeYpw-1_-lZTgLJd5KNFKA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?width=108&crop=smart&auto=webp&s=8df4875ff529d3494fc69165d56fb9d6f5eaf437', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?width=216&crop=smart&auto=webp&s=60f46f8459ad4ccbfbb71c245c0217dfa351ddbc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?width=320&crop=smart&auto=webp&s=a89d79ed45fd389084b737382c0f6144c99385cc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?width=640&crop=smart&auto=webp&s=847c526cdf08f4134c3c9b3b8b45767a4eb666d5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?width=960&crop=smart&auto=webp&s=df83fa0af801e08856354fde9389fac8b57c6284', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?width=1080&crop=smart&auto=webp&s=8a3748de1d40e4a556c519e489afd9545b6a6c91', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?auto=webp&s=43d8381de68cff815a63ad23ecb83985780afede', 'width': 1200}, 'variants': {}}]}
Increase generation speed in Qwen3 235B by reducing used expert count
7
Has anyone else tinkered with the used-expert count? I reduced the Qwen3-235B expert count by half in llama-server using `--override-kv qwen3moe.expert_used_count=int:4` and got a 60% speed-up. Reducing the expert count to 3 or below doesn't work for me because it generates nonsense text.
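For anyone wanting to reproduce this, a sketch of a full server invocation; only the --override-kv flag is taken from the experiment above, while the model path, layer offload, and context size are illustrative:

    ./llama-server -m Qwen3-235B-A22B-Q4_K_M.gguf -ngl 99 -c 8192 \
      --override-kv qwen3moe.expert_used_count=int:4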
2025-05-16T12:07:51
https://www.reddit.com/r/LocalLLaMA/comments/1knz74p/increase_generation_speed_in_qwen3_235b_by/
Content-Degree-9477
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knz74p
false
null
t3_1knz74p
/r/LocalLLaMA/comments/1knz74p/increase_generation_speed_in_qwen3_235b_by/
false
false
self
7
null
Is this model available: Llama3.3-8B?
1
[removed]
2025-05-16T12:14:51
https://www.reddit.com/r/LocalLLaMA/comments/1knzby2/is_this_model_available_llama338b/
abubakkar_s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1knzby2
false
null
t3_1knzby2
/r/LocalLLaMA/comments/1knzby2/is_this_model_available_llama338b/
false
false
self
1
{'enabled': False, 'images': [{'id': '1zIomSAXseV6S4T8Yvxq6r6H4yXaLkjUnbPOutnFpaQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=108&crop=smart&auto=webp&s=8a15eec81b665e56551ea83b9168f9cc7c3e15b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=216&crop=smart&auto=webp&s=7fa4aa36113b2f3e8b9121a5687c9ba51a47e37f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=320&crop=smart&auto=webp&s=de67cfc21dfdcdeb15ffa9df959f8e14372b4c3e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=640&crop=smart&auto=webp&s=d1660e500cf6212d2c8cdbb237ac7e29e44290a0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=960&crop=smart&auto=webp&s=a6de3358e4eae269d62ad8991f73abdb583170e2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=1080&crop=smart&auto=webp&s=cb6b5a18982b0517684a96cfdb75739e1530ebfa', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?auto=webp&s=f6a193863077029f0a4745cc1f2cd1e7c16974eb', 'width': 1200}, 'variants': {}}]}
ValiantLabs/Qwen3-14B-Esper3 reasoning finetune focused on coding, architecture, and DevOps
31
2025-05-16T13:05:35
https://huggingface.co/ValiantLabs/Qwen3-14B-Esper3
Amazing_Athlete_2265
huggingface.co
1970-01-01T00:00:00
0
{}
1ko0d4w
false
null
t3_1ko0d4w
/r/LocalLLaMA/comments/1ko0d4w/valiantlabsqwen314besper3_reasoning_finetune/
false
false
https://b.thumbs.redditm…o1If94VZ6Tus.jpg
31
{'enabled': False, 'images': [{'id': 'n1P7XzrJAHPGRIpShym4YVyR8j7XyfiOe3rAsK3Qr_0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?width=108&crop=smart&auto=webp&s=ee69f67f3db1585a6e82bfa56b87cf57fc7bf4d6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?width=216&crop=smart&auto=webp&s=f81e92ab520f567be5b18f9274f72ce99d48d139', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?width=320&crop=smart&auto=webp&s=dd3a84e85abdf15a72cd662232f8b7347bd13807', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?width=640&crop=smart&auto=webp&s=4b2baa5a8ddd53ef6c8acdfc70e71cf12f8444d6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?width=960&crop=smart&auto=webp&s=f04b56e7e98f2f1ff311de68be8aa698f0633065', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?width=1080&crop=smart&auto=webp&s=f2481c99bb5d0bb9be3bbe7b2b66d014cd6c851c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?auto=webp&s=bce122f29e5ecc2d4fdf31cf55469419603706d9', 'width': 1200}, 'variants': {}}]}
Stanford has dropped AGI
391
2025-05-16T13:15:17
https://huggingface.co/Stanford/Rivermind-AGI-12B
Abject-Huckleberry13
huggingface.co
1970-01-01T00:00:00
0
{}
1ko0khr
false
null
t3_1ko0khr
/r/LocalLLaMA/comments/1ko0khr/stanford_has_dropped_agi/
false
false
https://a.thumbs.redditm…b_8p8nkAubf0.jpg
391
{'enabled': False, 'images': [{'id': 'cShe1eKy_JIO53Pcrc7LWl1-wgKd2Daa5QV_dM6tit4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?width=108&crop=smart&auto=webp&s=456c99a482b12e92c6fef5806ddbc477b402cd85', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?width=216&crop=smart&auto=webp&s=64ebcfefb2528ddf8dccb1b102fe505d8dfbc636', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?width=320&crop=smart&auto=webp&s=2cb72413c409e674e1409d1e2aa5a69a89e3cd0b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?width=640&crop=smart&auto=webp&s=9aa5717d8c431adaa645d19436f3ab2adbc6cfc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?width=960&crop=smart&auto=webp&s=d6f724d180824de0405b145fdd02b0092897a32f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?width=1080&crop=smart&auto=webp&s=d36813bc6fe2b3d1dc9d7a78d8289f2164479256', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?auto=webp&s=ac0bdec126cfada42067fe0eff6751408f4e46c5', 'width': 1200}, 'variants': {}}]}
Why we're not hitting the wall.
0
[removed]
2025-05-16T13:25:53
https://www.reddit.com/r/LocalLLaMA/comments/1ko0sw5/why_were_not_hitting_the_wall/
genshiryoku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko0sw5
false
null
t3_1ko0sw5
/r/LocalLLaMA/comments/1ko0sw5/why_were_not_hitting_the_wall/
false
false
self
0
null
Multi-GPU Inference and Training Performance Issues
1
[removed]
2025-05-16T13:35:11
https://www.reddit.com/r/LocalLLaMA/comments/1ko10b7/multigpu_inference_and_training_performance_issues/
ba2sYd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko10b7
false
null
t3_1ko10b7
/r/LocalLLaMA/comments/1ko10b7/multigpu_inference_and_training_performance_issues/
false
false
self
1
null
Finetuning speech based model
5
Hi, I have summer vacation coming up and want to learn about LLMs, especially speech-based models. I want to make a restaurant-booking AI, so I would appreciate any directions and tips on how to build it.
2025-05-16T13:36:27
https://www.reddit.com/r/LocalLLaMA/comments/1ko11c5/finetuning_speech_based_model/
FastCommission2913
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko11c5
false
null
t3_1ko11c5
/r/LocalLLaMA/comments/1ko11c5/finetuning_speech_based_model/
false
false
self
5
null
What model repositories work with ollama pull?
1
[removed]
2025-05-16T13:56:44
https://www.reddit.com/r/LocalLLaMA/comments/1ko1hzw/what_model_repositories_work_with_ollama_pull/
synthphreak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko1hzw
false
null
t3_1ko1hzw
/r/LocalLLaMA/comments/1ko1hzw/what_model_repositories_work_with_ollama_pull/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Ollama violating llama.cpp license for over a year
529
2025-05-16T13:57:38
https://news.ycombinator.com/item?id=44003741
op_loves_boobs
news.ycombinator.com
1970-01-01T00:00:00
0
{}
1ko1iob
false
null
t3_1ko1iob
/r/LocalLLaMA/comments/1ko1iob/ollama_violating_llamacpp_license_for_over_a_year/
false
false
default
529
null
Local OCR in mobile applications with React Native ExecuTorch
1
[removed]
2025-05-16T14:02:45
https://v.redd.it/v6za0645g51f1
FinancialAd1961
v.redd.it
1970-01-01T00:00:00
0
{}
1ko1n5f
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/v6za0645g51f1/DASHPlaylist.mpd?a=1749996188%2CYjAxZmQ2NmQyOTdhMjM5YWFjNjEwNWNiOGE2MzcyYWMzNzI4MTU1YmM4OWZkOTU0ZGQ4M2M2MmIwMTg5NzVkMg%3D%3D&v=1&f=sd', 'duration': 101, 'fallback_url': 'https://v.redd.it/v6za0645g51f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/v6za0645g51f1/HLSPlaylist.m3u8?a=1749996188%2CMDMxZDk5NDNkNGFjYThkMTI5ODEzNjZjMDM2M2I1ZWIzZDYxNzQ2NTEzZTM4MjA1MDU5ZjI5ZmI2MDIzNGNkOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/v6za0645g51f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1110}}
t3_1ko1n5f
/r/LocalLLaMA/comments/1ko1n5f/local_ocr_in_mobile_applications_with_react/
false
false
https://external-preview…ac006593ee26c654
1
{'enabled': False, 'images': [{'id': 'Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=108&crop=smart&format=pjpg&auto=webp&s=dd1c95bc94383d13fa9821e8a5cf292e2931c336', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=216&crop=smart&format=pjpg&auto=webp&s=b12af6928093c837e9d5747461d3b32cbda65ab6', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=320&crop=smart&format=pjpg&auto=webp&s=1d55cec4f15e6bfd63d93d874a29cfaca7b85324', 'width': 320}, {'height': 415, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=640&crop=smart&format=pjpg&auto=webp&s=ac55e16f6365ba6510e9d9c9c5bef96f5278f636', 'width': 640}, {'height': 622, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=960&crop=smart&format=pjpg&auto=webp&s=4d65ed6fc632480560cd97bc79a90f64da8f29be', 'width': 960}, {'height': 700, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=816709ae09b1450bab295f380d12660755a78c36', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?format=pjpg&auto=webp&s=6512cf96afcca6fa37f7f1c6b61c34dd2cf2f7fc', 'width': 1110}, 'variants': {}}]}
If you are comparing models, please state the task you are using them for!
51
The number of posts like "Why is deepseek so much better than qwen 235," with no information about the task the poster is comparing the models on, is maddening. ALL models' performance levels vary across domains, and many models are highly domain-specific. Some people are creating waifus, some are coding, some are conducting medical research, etc. The posts read like "The Miata is the absolute superior vehicle over the Cessna Skyhawk. It has been the best driving experience since I used my Rolls Royce as a submarine."
2025-05-16T14:10:07
https://www.reddit.com/r/LocalLLaMA/comments/1ko1tg5/if_you_are_comparing_models_please_state_the_task/
nomorebuttsplz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko1tg5
false
null
t3_1ko1tg5
/r/LocalLLaMA/comments/1ko1tg5/if_you_are_comparing_models_please_state_the_task/
false
false
self
51
null
EU inference providers with strong privacy
8
I would like an EU-based company (so AWS, Google Vertex, and Azure are non-starters) that provides an inference API for open-weight models hosted in the EU with strong privacy guarantees. I want to pay per token, not for some sort of GPU instance. So far I have found https://nebius.com/; however, their privacy policy has a clause that inputs shouldn't contain private data, so they don't seem to care about securing their inference.
2025-05-16T14:10:55
https://www.reddit.com/r/LocalLLaMA/comments/1ko1u5c/eu_inference_providers_with_strong_privacy/
Ambitious_Subject108
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko1u5c
false
null
t3_1ko1u5c
/r/LocalLLaMA/comments/1ko1u5c/eu_inference_providers_with_strong_privacy/
false
false
self
8
null
I built a tiny Linux OS to make your LLMs actually useful on your machine
305
Hey folks — I’ve been working on llmbasedos, a minimal Arch-based Linux distro that turns your local environment into a first-class citizen for any LLM frontend (like Claude Desktop, VS Code, ChatGPT+browser, etc). The problem: every AI app has to reinvent the wheel — file pickers, OAuth flows, plugins, sandboxing… The idea: expose local capabilities (files, mail, sync, agents) via a clean, JSON-RPC protocol called MCP (Model Context Protocol). What you get: • An MCP gateway (FastAPI) that routes requests • Small Python daemons that expose specific features (FS, mail, sync, agents) • Auto-discovery via .cap.json — your new feature shows up everywhere • Optional offline mode (llama.cpp included), or plug into GPT-4o, Claude, etc. It’s meant to be dev-first. Add a new capability in under 50 lines. Zero plugins, zero hacks — just a clean system-wide interface for your AI. Open-core, Apache-2.0 license. Curious to hear what features you’d build with it — happy to collab if anyone’s down!
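To make the auto-discovery idea concrete, here is a hypothetical capability descriptor and the JSON-RPC call that would reach it; the field names are guesses for illustration only, not the project's actual schema (see the repo for that). A hypothetical fs.cap.json advertising one capability:

    {
      "name": "fs.read",
      "description": "Read a file from the sandboxed home directory",
      "params": {"path": "string"}
    }

and the MCP-style JSON-RPC request a frontend would send through the gateway:

    {"jsonrpc": "2.0", "id": 1, "method": "fs.read", "params": {"path": "notes.md"}}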
2025-05-16T14:12:00
https://github.com/iluxu/llmbasedos
iluxu
github.com
1970-01-01T00:00:00
0
{}
1ko1v1k
false
null
t3_1ko1v1k
/r/LocalLLaMA/comments/1ko1v1k/i_built_a_tiny_linux_os_to_make_your_llms/
false
false
https://b.thumbs.redditm…3JHeX2C6kt1E.jpg
305
{'enabled': False, 'images': [{'id': 'KLay6A6X7_CWMxBWlijhND-p-8uSznXOlE6As5ASn2o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?width=108&crop=smart&auto=webp&s=fc673e466902c94f83124f79a6442e6562bb4ba7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?width=216&crop=smart&auto=webp&s=c02581dca8427a0ee045b01e08757089066cf48b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?width=320&crop=smart&auto=webp&s=73e720e6339c5d78ec8e2fee7ffa8b447386a886', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?width=640&crop=smart&auto=webp&s=62958b7668163a6b32bc9aa0eddc4ec07f59c982', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?width=960&crop=smart&auto=webp&s=11bcac6a1c52d4cd9c6b7b28bef16bedbdf670b8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?width=1080&crop=smart&auto=webp&s=160e0ac7d7c56f86b03925749e8ce4d807b0bdec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?auto=webp&s=0739c263a68b0f27abfbfaac0aa90166efe45fe8', 'width': 1200}, 'variants': {}}]}
Did Stanford's Hugging Face account get hacked?
558
2025-05-16T14:26:25
https://i.redd.it/0j4j7z8yl51f1.jpeg
ObscuraMirage
i.redd.it
1970-01-01T00:00:00
0
{}
1ko27bi
false
null
t3_1ko27bi
/r/LocalLLaMA/comments/1ko27bi/did_standford_huggingface_account_got_hacked/
false
false
nsfw
558
{'enabled': True, 'images': [{'id': 'nmvWE6RXrgxkEhaUD2bJyOcNR5edy8heN3MGrGlP--Y', 'resolutions': [{'height': 206, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=108&crop=smart&auto=webp&s=41b7cd3f422c4872d7e23e88565ba7ce334094dd', 'width': 108}, {'height': 412, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=216&crop=smart&auto=webp&s=6ed6262f665973604c7d1b2f220d28212fd05e84', 'width': 216}, {'height': 611, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=320&crop=smart&auto=webp&s=9050ab79a3c64593f5e2273f0eff1195e6ba1b01', 'width': 320}, {'height': 1223, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=640&crop=smart&auto=webp&s=a0e189e5a7950591a2304ebea74be678f4a6ef7b', 'width': 640}, {'height': 1835, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=960&crop=smart&auto=webp&s=a5f717cec5046e5fbeb926b90248e7ed781777d0', 'width': 960}, {'height': 2064, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=1080&crop=smart&auto=webp&s=7a15a335ab4e57f9a12fd61548698fa80a6291ae', 'width': 1080}], 'source': {'height': 2466, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?auto=webp&s=14e01bbed9875f9b982690b9ec8e32627a731caa', 'width': 1290}, 'variants': {'nsfw': {'resolutions': [{'height': 206, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=98d68a5a3f6c146ed36e59a2632ecf2c50bcb418', 'width': 108}, {'height': 412, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c73b39544cd6ca92a43953d782cbab12d064a05d', 'width': 216}, {'height': 611, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=9f389cf9a40e4de00d6d0a5a269999aac21c0e5f', 'width': 320}, {'height': 1223, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=3d0b6dd57bd1fa7c71e22a8ab793ffbf18233056', 'width': 640}, {'height': 1835, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=62724180f84dd69b9b3914cc665fb2a8e77a39a3', 'width': 960}, {'height': 2064, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=d7e839e6158b6efb389bf9761e22e161d00f86b8', 'width': 1080}], 'source': {'height': 2466, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?blur=40&format=pjpg&auto=webp&s=e567260a3e585515b05bcce1f0106be1d83da23a', 'width': 1290}}, 'obfuscated': {'resolutions': [{'height': 206, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=98d68a5a3f6c146ed36e59a2632ecf2c50bcb418', 'width': 108}, {'height': 412, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c73b39544cd6ca92a43953d782cbab12d064a05d', 'width': 216}, {'height': 611, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=9f389cf9a40e4de00d6d0a5a269999aac21c0e5f', 'width': 320}, {'height': 1223, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=3d0b6dd57bd1fa7c71e22a8ab793ffbf18233056', 'width': 640}, {'height': 1835, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=62724180f84dd69b9b3914cc665fb2a8e77a39a3', 'width': 960}, {'height': 2064, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=d7e839e6158b6efb389bf9761e22e161d00f86b8', 'width': 1080}], 'source': {'height': 2466, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?blur=40&format=pjpg&auto=webp&s=e567260a3e585515b05bcce1f0106be1d83da23a', 'width': 1290}}}}]}
AM-Thinking-v1
49
[https://huggingface.co/a-m-team/AM-Thinking-v1](https://huggingface.co/a-m-team/AM-Thinking-v1) >We release **AM-Thinking-v1**, a 32B dense language model focused on enhancing reasoning capabilities. Built on Qwen 2.5-32B-Base, AM-Thinking-v1 shows strong performance on reasoning benchmarks, comparable to much larger MoE models like **DeepSeek-R1**, **Qwen3-235B-A22B**, **Seed1.5-Thinking**, and larger dense models like **Nemotron-Ultra-253B-v1**. [https://arxiv.org/abs/2505.08311](https://arxiv.org/abs/2505.08311) [https://a-m-team.github.io/am-thinking-v1/](https://a-m-team.github.io/am-thinking-v1/) https://preview.redd.it/79z2klmbn51f1.png?width=2001&format=png&auto=webp&s=18a3b5a0d06b75e6712891b7c19853ec1de3e737 ***I'm not affiliated with the model provider, just sharing the news.***
2025-05-16T14:37:26
https://www.reddit.com/r/LocalLLaMA/comments/1ko2gq1/amthinkingv1/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko2gq1
false
null
t3_1ko2gq1
/r/LocalLLaMA/comments/1ko2gq1/amthinkingv1/
false
false
https://b.thumbs.redditm…LMTp-saM_Uyk.jpg
49
{'enabled': False, 'images': [{'id': 'HUXNMOyWy3-eWctOM0XRYtxZY8uQn5_XVFNHb6sH7J8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?width=108&crop=smart&auto=webp&s=aae8222b32db9114e40b7f15d88345278320dad4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?width=216&crop=smart&auto=webp&s=6c97739a75120b08685b50fd6327d51aa4667a3a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?width=320&crop=smart&auto=webp&s=5c50d0b9cf8a0f6e7713658e780aa8c7c99d38ba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?width=640&crop=smart&auto=webp&s=1d9a6b67fdd30d72dd266f22d019598feded92a1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?width=960&crop=smart&auto=webp&s=1c192e2d65f61b5ac6d3daca9f08503f4bc864a7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?width=1080&crop=smart&auto=webp&s=27a8e59b9028b533e5d7c874655f83a6c5d56402', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?auto=webp&s=789e4e427a8f7fd6ceaa809a4fb18a7a82b64ea6', 'width': 1200}, 'variants': {}}]}
Photoshop using Local Computer Use agents.
1
[removed]
2025-05-16T14:37:46
https://v.redd.it/jysugtuyn51f1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1ko2h1e
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jysugtuyn51f1/DASHPlaylist.mpd?a=1749998283%2CNjVkZDhhOTg3MzI3YWYxMmUyMTE4OGFiNmJlOGUzYjBkNmUxYTQzYmI4MmYzYjBhMDBhOTcwZjE0MmVlYjZhNQ%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/jysugtuyn51f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/jysugtuyn51f1/HLSPlaylist.m3u8?a=1749998283%2CNDM0ZjU1YmI2ODY3ZjFjOTE2Nzk1YWI4YTBhNWMxNWUzOWNmODBmNGM0YTc0Yzc2OTYwYTQ2YzlkNDE5OGE2OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jysugtuyn51f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1148}}
t3_1ko2h1e
/r/LocalLLaMA/comments/1ko2h1e/photoshop_using_local_computer_use_agents/
false
false
https://external-preview…e5617bc26a708698
1
{'enabled': False, 'images': [{'id': 'dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=108&crop=smart&format=pjpg&auto=webp&s=a50bb3093140fc54222c6f51cfe30383e8b2e476', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=216&crop=smart&format=pjpg&auto=webp&s=2019c86edbf4b0a9d642f764da02d2c0d740e335', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=320&crop=smart&format=pjpg&auto=webp&s=f48bb5633c1eb61caa8a7d1d14e9900a9c628cab', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=640&crop=smart&format=pjpg&auto=webp&s=9142d469ecf2d1000eacdceba6747b7d4013dca5', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=960&crop=smart&format=pjpg&auto=webp&s=5906aad735e459ffe98090bbe8572d08c9a4341f', 'width': 960}, {'height': 677, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=238e3392c0bd5b28a40658526bae35b82d171184', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?format=pjpg&auto=webp&s=8904bb058332f239d17d6b6f20f50f0529d76206', 'width': 1148}, 'variants': {}}]}
Photoshop using Local Computer Use agents.
1
[removed]
2025-05-16T14:39:43
https://v.redd.it/3iv3989bo51f1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1ko2iqc
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/3iv3989bo51f1/DASHPlaylist.mpd?a=1749998398%2CZjI2M2M1NGQzMjIwMzUwNWYxNTFjNDk5Mjg4N2JjNzBhNjljNzkyYTRiZjBhZmQzOGM3NDQyMjI1ZjZmMDAzNg%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/3iv3989bo51f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/3iv3989bo51f1/HLSPlaylist.m3u8?a=1749998398%2CNzVlMTljNmQzYWFiOGUwNjE2MmJiNjBkYzhhMTkxMmNkMzJmNjVlYWMwMjhhZTdlMjk2YzEwMWVhZWU0NTZjMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3iv3989bo51f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1148}}
t3_1ko2iqc
/r/LocalLLaMA/comments/1ko2iqc/photoshop_using_local_computer_use_agents/
false
false
https://external-preview…1d86e704b3a9ee96
1
{'enabled': False, 'images': [{'id': 'Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=108&crop=smart&format=pjpg&auto=webp&s=d5bddcb512308707f609f2e9f75724557ac4c826', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=216&crop=smart&format=pjpg&auto=webp&s=b57875f0898cd4675029f5262925c47867e90c81', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=320&crop=smart&format=pjpg&auto=webp&s=c90f80ded0cbfdadfefe94821ff3c5ebcd32edce', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=640&crop=smart&format=pjpg&auto=webp&s=a342e4a8735a42ce74bab68120ea5ae35409cb83', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=960&crop=smart&format=pjpg&auto=webp&s=c9bb0afdd7f54a839a4d9af9c31d9f32e492e1db', 'width': 960}, {'height': 677, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=10d78232e016cb66d1a3e3e78db041ee7d1e5db6', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?format=pjpg&auto=webp&s=816c4b02c9c32a2b02488970d312efe880788ece', 'width': 1148}, 'variants': {}}]}
What's Wrong with Stanford? Check the Name :)
0
[https://huggingface.co/collections/Stanford/niggatwerk-1-6827495b311678c965300777](https://huggingface.co/collections/Stanford/niggatwerk-1-6827495b311678c965300777)
2025-05-16T14:39:44
https://www.reddit.com/r/LocalLLaMA/comments/1ko2ir2/whats_worng_with_the_stanford_check_the_name/
Zulqarnain_Shihab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko2ir2
false
null
t3_1ko2ir2
/r/LocalLLaMA/comments/1ko2ir2/whats_worng_with_the_stanford_check_the_name/
false
false
self
0
{'enabled': False, 'images': [{'id': 'iyn0ZGOx10Q7uFbBflwD3blYXNfoYwANgsdmnlK8Jvk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?width=108&crop=smart&auto=webp&s=1c7ac7a24d2b3d51a23ce64fee52a78131295c75', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?width=216&crop=smart&auto=webp&s=2300641a78f8079b4b6d1d3c72d558e6b42eb2c4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?width=320&crop=smart&auto=webp&s=c1ece9d8e6f8b5bcd03e877d33e420a474f61c45', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?width=640&crop=smart&auto=webp&s=1b47060f67004f2fdbae37f847441784aa55a9b6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?width=960&crop=smart&auto=webp&s=119084b53478cc06b9914e6474a03d8146695245', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?width=1080&crop=smart&auto=webp&s=69204dfc44c8c1c96181eb243ac0b5c65f65e817', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?auto=webp&s=b45c05abfd83566fa4480315024e4fc8cc319ee1', 'width': 1200}, 'variants': {}}]}
Photoshop using Local Computer Use agents.
25
Photoshop using c/ua. No code: just a user prompt, a choice of models and a Docker container, and the right agent loop. A glimpse at the more managed experience c/ua is building to lower the barrier for casual vibe-coders. (An illustrative agent-loop sketch follows this record.) GitHub: https://github.com/trycua/cua
2025-05-16T14:42:18
https://v.redd.it/jhyeu60so51f1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1ko2kzx
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jhyeu60so51f1/DASHPlaylist.mpd?a=1749998553%2CZWFmMGQxZmFhYmZiN2I3OGYyYTk3ZTI0MmJjMDJlNzZhMmMwMTA5YjA2YzBiZTlhNGVkNTUyMWQzMzA4M2ExMw%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/jhyeu60so51f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/jhyeu60so51f1/HLSPlaylist.m3u8?a=1749998553%2CNGQzOTVmMmQ5ZTZlODc5N2E5ZGM2YmUyZWQxODkyNTRhOWYyNzY1MTQ1ZmY1NDE2NjFiYWY3ZGU4YjFlNjM0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jhyeu60so51f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1148}}
t3_1ko2kzx
/r/LocalLLaMA/comments/1ko2kzx/photoshop_using_local_computer_use_agents/
false
false
https://external-preview…bc9a0ecf064abfb6
25
{'enabled': False, 'images': [{'id': 'Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a3a9888d054b3e7cf0a89d78709c2eecff22ee2', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=216&crop=smart&format=pjpg&auto=webp&s=8f3dfbd3e16a6eb82b08c1dec67f979b2ded06b5', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=320&crop=smart&format=pjpg&auto=webp&s=d436febc579563ef30d45a8c82680fd967df6c2e', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=640&crop=smart&format=pjpg&auto=webp&s=01f62669412f3cc3244aba005b4479fa56135432', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=960&crop=smart&format=pjpg&auto=webp&s=3abfcd965f48ac80c51fa0c233d06d31956c91ab', 'width': 960}, {'height': 677, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=20da89950a0dc41051254cb3dc3d0cf422f380e3', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?format=pjpg&auto=webp&s=a06d8052c3723e39ec7f1d8380122018b1327312', 'width': 1148}, 'variants': {}}]}
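For readers wondering what "the right agent loop" means in practice, here is the general shape of a computer-use agent loop in Python. This is purely illustrative: every name below is hypothetical, and c/ua's actual API differs, so consult the repo linked above.

```python
# Illustrative shape of a computer-use agent loop (hypothetical interfaces,
# NOT c/ua's real API): observe the screen, plan one step, act, repeat.
def agent_loop(model, computer, task: str, max_steps: int = 50):
    """Run a plan-act-observe loop until the model reports it is done."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        screenshot = computer.screenshot()               # observe the desktop
        action = model.next_action(history, screenshot)  # plan one UI step
        if action.kind == "done":
            return action.result
        computer.execute(action)                         # click / type / scroll
        history.append({"role": "assistant", "content": str(action)})
    raise RuntimeError("task did not finish within the step budget")
```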
what happened to Stanford
134
2025-05-16T14:44:17
https://i.redd.it/l9ap08t4p51f1.jpeg
BoringAd6806
i.redd.it
1970-01-01T00:00:00
0
{}
1ko2mq7
false
null
t3_1ko2mq7
/r/LocalLLaMA/comments/1ko2mq7/what_happened_to_stanford/
false
false
https://b.thumbs.redditm…N5l3BVvXlGPk.jpg
134
{'enabled': True, 'images': [{'id': '1OrV9bdkgpLT_2eJnh-12E-CsnqRd93KHU7Q3JiAuwg', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?width=108&crop=smart&auto=webp&s=529489295a3ffbfe099e26f94a4cd01cfedb20eb', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?width=216&crop=smart&auto=webp&s=7a860a8f65aef39c2ecedd7ee41fd22c208068a7', 'width': 216}, {'height': 123, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?width=320&crop=smart&auto=webp&s=7f1271640f50332db080f07bb944b128592654ca', 'width': 320}, {'height': 247, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?width=640&crop=smart&auto=webp&s=e99406294d0642388d4c739930b9569d685129d1', 'width': 640}, {'height': 371, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?width=960&crop=smart&auto=webp&s=fe7954cd396ddfac818b6418aeae3270d3e2942b', 'width': 960}, {'height': 417, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?width=1080&crop=smart&auto=webp&s=b0d9ded2ecd9884e22c113b136811ce73dabe52c', 'width': 1080}], 'source': {'height': 495, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?auto=webp&s=a0f29dc14767a5a3fe6be1c1e8253e818c56c517', 'width': 1280}, 'variants': {}}]}
Open source MCP course on GitHub
26
The MCP course is free, open source, and Apache 2 licensed. So if you're working on MCP you can do any of this: take the course and reuse it for your own educational/dev-advocacy projects; collaborate with us on new units about your projects or interests; star the repo on GitHub so more devs hear about it and join in. Note: some of these options are cooler than others. (A minimal MCP server sketch follows this record.) https://github.com/huggingface/mcp-course
2025-05-16T15:08:40
https://www.reddit.com/r/LocalLLaMA/comments/1ko387o/open_source_mcp_course_on_github/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko387o
false
null
t3_1ko387o
/r/LocalLLaMA/comments/1ko387o/open_source_mcp_course_on_github/
false
false
self
26
{'enabled': False, 'images': [{'id': 'RgJsMIMiS6jMX5jgpuVz2w352vX-pV5hfqfNM8mxXgg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?width=108&crop=smart&auto=webp&s=b31fd719027886dd2ae03c065057fe9f4e82c308', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?width=216&crop=smart&auto=webp&s=ef6685c09eaf2b89307f60e3423b17b40efe9c81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?width=320&crop=smart&auto=webp&s=54af388b3cafb4b2ec3ebf18df7924972389120b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?width=640&crop=smart&auto=webp&s=25b6a262a751f78b1ca924e373e40a63e8f51808', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?width=960&crop=smart&auto=webp&s=a64b81b00f28d354ee1674aa69263988e0b2b268', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?width=1080&crop=smart&auto=webp&s=64b17c77da9b178d850d2e17979044f2f524ccad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?auto=webp&s=d62d732e1a8b886bd9a5c0f570704bc5c5d73039', 'width': 1200}, 'variants': {}}]}
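For anyone who wants a feel for what the course covers before starting, here is a minimal MCP server sketch, assuming the official `mcp` Python SDK; the `add` tool is a made-up example, not taken from the course material.

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The "add" tool is a hypothetical example, not from the course.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```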
Have any of you found any high accuracy fully automated AI systems?
1
[removed]
2025-05-16T15:13:30
https://www.reddit.com/r/LocalLLaMA/comments/1ko3cfy/have_any_of_you_found_any_high_accuracy_fully/
yoyoitsthefed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko3cfy
false
null
t3_1ko3cfy
/r/LocalLLaMA/comments/1ko3cfy/have_any_of_you_found_any_high_accuracy_fully/
false
false
self
1
null
Running local LLM on a VPC server vs OpenAI API calls
6
Which is the best option, from both a performance and a cost point of view: running a local LLM on your own VPC instance, or using API calls? I'm building an application and want to integrate my own models into it. Ideally it would run locally on the user's laptop, but if that's not possible, I'd like to know whether it makes more sense to run your own LLM instance on your own server or to use something like ChatGPT's API. If I chose the first option, my application would of course just make API calls to my own server. (A client sketch that works either way follows this record.)
2025-05-16T15:14:43
https://www.reddit.com/r/LocalLLaMA/comments/1ko3din/running_local_llm_on_a_vpc_server_vs_openai_api/
Attorney_Outside69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko3din
false
null
t3_1ko3din
/r/LocalLLaMA/comments/1ko3din/running_local_llm_on_a_vpc_server_vs_openai_api/
false
false
self
6
null
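One way to keep the two options interchangeable is to expose the self-hosted model behind an OpenAI-compatible endpoint (e.g., a vLLM or llama.cpp server), so the application code is identical whichever backend you pick. A sketch follows; the base URL and model name are placeholders, not real endpoints.

```python
# Sketch: the same client code works against OpenAI or a self-hosted
# OpenAI-compatible server (vLLM, llama.cpp server, etc.).
# base_url and model name below are placeholders.
from openai import OpenAI

# Option A: hosted API          -> OpenAI(api_key="sk-...")
# Option B: your own VPC server -> point base_url at it, as shown here
client = OpenAI(base_url="http://my-vpc-host:8000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="my-local-model",  # placeholder served-model name
    messages=[{"role": "user", "content": "Hello from the app"}],
)
print(resp.choices[0].message.content)
```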
OpenAI Healthbench in MEDIC
26
Following the release of OpenAI Healthbench earlier this week, we integrated it into the MEDIC framework. Qwen3 models are showing incredible results for their size!
2025-05-16T15:53:02
https://i.redd.it/b0i7tlhe161f1.jpeg
clechristophe
i.redd.it
1970-01-01T00:00:00
0
{}
1ko4be2
false
null
t3_1ko4be2
/r/LocalLLaMA/comments/1ko4be2/openai_healthbench_in_medic/
false
false
https://b.thumbs.redditm…EFiqJMweqT2Q.jpg
26
{'enabled': True, 'images': [{'id': 'K3P57FRWDrXi7KlgBl4zy0SUPK8Cg36tfZeYgQ7b4Og', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?width=108&crop=smart&auto=webp&s=d2f18d5306127c9398e8239feb1ef5437c351219', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?width=216&crop=smart&auto=webp&s=7c68c8e333f3d9520e54b476f09b7af42f5d2c24', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?width=320&crop=smart&auto=webp&s=631ef18156701936ae0a7172af23a2fadb88d9b9', 'width': 320}, {'height': 275, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?width=640&crop=smart&auto=webp&s=3c679c5052106bbb1daa71e844764210a3db66dc', 'width': 640}, {'height': 413, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?width=960&crop=smart&auto=webp&s=f971e851ca7521dab7a32c8d1e98caaedfa9de08', 'width': 960}, {'height': 465, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?width=1080&crop=smart&auto=webp&s=297329de57a6c5ec102482021e9e3963ad390601', 'width': 1080}], 'source': {'height': 556, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?auto=webp&s=2089030448c97a94511eade7e4d8a95cc44167f3', 'width': 1290}, 'variants': {}}]}
Drummer's Big Alice 28B v1 - A 100 layer upscale working together to give you the finest creative experience!
74
2025-05-16T15:59:01
https://huggingface.co/TheDrummer/Big-Alice-28B-v1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1ko4gjh
false
null
t3_1ko4gjh
/r/LocalLLaMA/comments/1ko4gjh/drummers_big_alice_28b_v1_a_100_layer_upscale/
false
false
https://b.thumbs.redditm…XX8FTfcea4KU.jpg
74
{'enabled': False, 'images': [{'id': 'DYkFxFygRo3sLyhnM_v-03dfUhUXtcRQWQjFOPnTNSw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?width=108&crop=smart&auto=webp&s=c774bf35b8716233b0cd69be3b88352a7db29954', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?width=216&crop=smart&auto=webp&s=dea6d2430cbbdaf5541234abab5820fa6045b2c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?width=320&crop=smart&auto=webp&s=159601d17e73d71b5ebf0420422e1a3b4ca2e39f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?width=640&crop=smart&auto=webp&s=261065cb239472639351be9a147285bf23a5bfc3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?width=960&crop=smart&auto=webp&s=8ad81cb0906a2ac45993c4b9a7a6a3a8538ec340', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?width=1080&crop=smart&auto=webp&s=6761357ea4f42b0454cde0dee7671f763c85af31', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?auto=webp&s=cf56b65e7de3e876ebc5b9626006aafa61aa2606', 'width': 1200}, 'variants': {}}]}
Fastgen - Simple high-throughput inference
47
We just released a tiny (~3 kloc) Python library that implements state-of-the-art inference algorithms on GPU and provides performance similar to vLLM. We believe it's a great learning vehicle for inference techniques, and the code is quite easy to hack on!
2025-05-16T16:02:29
https://github.com/facebookresearch/fastgen
_mpu
github.com
1970-01-01T00:00:00
0
{}
1ko4jsb
false
null
t3_1ko4jsb
/r/LocalLLaMA/comments/1ko4jsb/fastgen_simple_highthroughput_inference/
false
false
https://b.thumbs.redditm…h64yA-2QH6Tg.jpg
47
{'enabled': False, 'images': [{'id': 'mtLeJdprz25oTaFmF49u0I52mhjIg2sB6dYNxrRh1Jk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?width=108&crop=smart&auto=webp&s=e01ba2902a854934ace1ffaa6236f0e1ebae2abe', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?width=216&crop=smart&auto=webp&s=bc213165ee647ac9d009ff5fa3421ce2083168bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?width=320&crop=smart&auto=webp&s=da51a610598e1a2d35d03353517e150bf7fd41d2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?width=640&crop=smart&auto=webp&s=38527e7a1ac1e3de5c0c41c1ab00e1a334a68c83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?width=960&crop=smart&auto=webp&s=ae061e5ed153dc604fcbc68a9163823a4f2244ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?width=1080&crop=smart&auto=webp&s=1c582970d34acf1182fed075441131ad64a3e7da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?auto=webp&s=a30064e97740797710ddd1bb5f1d37c4cd6a7d5c', 'width': 1200}, 'variants': {}}]}
Qwen: Parallel Scaling Law for Language Models
57
2025-05-16T16:07:57
https://arxiv.org/abs/2505.10475
AaronFeng47
arxiv.org
1970-01-01T00:00:00
0
{}
1ko4oor
false
null
t3_1ko4oor
/r/LocalLLaMA/comments/1ko4oor/qwen_parallel_scaling_law_for_language_models/
false
false
default
57
null
$15k Local LLM Budget - What hardware would you buy and why?
30
If you had the money to spend on hardware for a local LLM, which config would you get?
2025-05-16T16:48:46
https://www.reddit.com/r/LocalLLaMA/comments/1ko5o7t/15k_local_llm_budget_what_hardware_would_you_buy/
Thireus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko5o7t
false
null
t3_1ko5o7t
/r/LocalLLaMA/comments/1ko5o7t/15k_local_llm_budget_what_hardware_would_you_buy/
false
false
self
30
null
Need help with Debian linux Nvidia driver for RTX 5060Ti
4
Hey all, I have a Debian 12 system with an RTX 5070 Ti using the following driver, and it works fine: https://developer.download.nvidia.com/compute/nvidia-driver/570.133.20/local_installers/nvidia-driver-local-repo-debian12-570.133.20_1.0-1_amd64.deb However, this driver does **not** work for the RTX 5060 Ti. If I attempt to use it, nvidia-smi shows a GPU, but it reads "Nvidia Graphics Card" instead of the typical "Nvidia GeForce RTX 50xx Ti", and nothing works with that driver. Basically, the driver does not detect the RTX 5060 Ti at all. Could somebody point me to a download link for a .deb package of a driver that does work with the RTX 5060 Ti? Thanks
2025-05-16T16:48:53
https://www.reddit.com/r/LocalLLaMA/comments/1ko5obm/need_help_with_debian_linux_nvidia_driver_for_rtx/
StartupTim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko5obm
false
null
t3_1ko5obm
/r/LocalLLaMA/comments/1ko5obm/need_help_with_debian_linux_nvidia_driver_for_rtx/
false
false
self
4
null
Fine Tune model for new language
1
[removed]
2025-05-16T16:50:33
https://www.reddit.com/r/LocalLLaMA/comments/1ko5pte/fine_tune_model_for_new_language/
LearnSylang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ko5pte
false
null
t3_1ko5pte
/r/LocalLLaMA/comments/1ko5pte/fine_tune_model_for_new_language/
false
false
self
1
null