Schema (column, dtype, observed min / max):

| column    | dtype               | min                 | max                 |
|-----------|---------------------|---------------------|---------------------|
| title     | string (length)     | 1                   | 300                 |
| score     | int64               | 0                   | 8.54k               |
| selftext  | string (length)     | 0                   | 40k                 |
| created   | timestamp[ns] date  | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url       | string (length)     | 0                   | 878                 |
| author    | string (length)     | 3                   | 20                  |
| domain    | string (length)     | 0                   | 82                  |
| edited    | timestamp[ns] date  | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded    | int64               | 0                   | 2                   |
| gildings  | string (7 classes)  | n/a                 | n/a                 |
| id        | string (length)     | 7                   | 7                   |
| locked    | bool (2 classes)    | n/a                 | n/a                 |
| media     | string (length)     | 646                 | 1.8k                |
| name      | string (length)     | 10                  | 10                  |
| permalink | string (length)     | 33                  | 82                  |
| spoiler   | bool (2 classes)    | n/a                 | n/a                 |
| stickied  | bool (2 classes)    | n/a                 | n/a                 |
| thumbnail | string (length)     | 4                   | 213                 |
| ups       | int64               | 0                   | 8.54k               |
| preview   | string (length)     | 301                 | 5.01k               |
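The records in this dump can be loaded into a DataFrame with the columns above. A minimal sketch, assuming the dump is available in a tabular form pandas can read (the two sample rows here are copied from records below; only a subset of columns is shown):

```python
import pandas as pd

# Two sample rows from the dump, reduced to a few of the schema's columns.
df = pd.DataFrame(
    {
        "id": ["1lit36k", "1lj38kr"],
        "title": [
            "Strange Results Running dots.llm1 instruct IQ4_XS?",
            "Phi-mini-MoE & Phi-tiny-MoE",
        ],
        "selftext": ["[removed]", ""],
        "score": [1, 1],
        "created": pd.to_datetime(["2025-06-23T21:30:58", "2025-06-24T05:48:20"]),
    }
)

# Moderator-removed posts carry the literal string "[removed]" in selftext;
# filtering them out keeps only posts whose body is still available.
kept = df[df["selftext"] != "[removed]"]
print(kept["id"].tolist())  # ['1lj38kr']
```

The same filter applies to the full dump once loaded (e.g. via `pandas.read_parquet` on the dataset file, whose path is not given here).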
---
title: Strange Results Running dots.llm1 instruct IQ4_XS?
score: 1
selftext: [removed]
created: 2025-06-23T21:30:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1lit36k/strange_results_running_dotsllm1_instruct_iq4_xs/
author: random-tomato
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lit36k
locked: false
media: null
name: t3_1lit36k
permalink: /r/LocalLLaMA/comments/1lit36k/strange_results_running_dotsllm1_instruct_iq4_xs/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: 🧠💬 Introducing AI Dialogue Duo – A Two-AI Conversational Roleplay System (Open Source)
score: 1
selftext: [removed]
created: 2025-06-23T21:33:49
url: https://www.reddit.com/r/LocalLLaMA/comments/1lit5t3/introducing_ai_dialogue_duo_a_twoai/
author: Reasonable_Brief578
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lit5t3
locked: false
media: null
name: t3_1lit5t3
permalink: /r/LocalLLaMA/comments/1lit5t3/introducing_ai_dialogue_duo_a_twoai/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Prompt Playground - app for comparing and fine-tuning LLM prompts
score: 1
selftext: [removed]
created: 2025-06-23T21:39:18
url: https://www.reddit.com/r/LocalLLaMA/comments/1litaod/prompt_playground_app_for_comparing_and/
author: shaypser1
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1litaod
locked: false
media: null
name: t3_1litaod
permalink: /r/LocalLLaMA/comments/1litaod/prompt_playground_app_for_comparing_and/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: How much power would one need to run their own Deepseek?
score: 1
selftext: [removed]
created: 2025-06-23T21:44:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1litewp/how_much_power_would_one_need_to_run_their_own/
author: IZA_does_the_art
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1litewp
locked: false
media: null
name: t3_1litewp
permalink: /r/LocalLLaMA/comments/1litewp/how_much_power_would_one_need_to_run_their_own/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Introducing the First AI Agent for System Performance Debugging
score: 1
selftext: [removed]
created: 2025-06-23T22:33:28
url: https://www.reddit.com/r/LocalLLaMA/comments/1liumf6/introducing_the_first_ai_agent_for_system/
author: Prashant-Lakhera
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1liumf6
locked: false
media: null
name: t3_1liumf6
permalink: /r/LocalLLaMA/comments/1liumf6/introducing_the_first_ai_agent_for_system/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…FSneOzj5Os3E.jpg
ups: 1
preview: null
---
title: Local LLM-Based AI Agent for Automated System Performance Debugging
score: 1
selftext: [removed]
created: 2025-06-23T22:47:40
url: https://www.reddit.com/r/LocalLLaMA/comments/1liuydy/local_llmbased_ai_agent_for_automated_system/
author: Prashant-Lakhera
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1liuydy
locked: false
media: null
name: t3_1liuydy
permalink: /r/LocalLLaMA/comments/1liuydy/local_llmbased_ai_agent_for_automated_system/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…MivUgeBR2cnQ.jpg
ups: 1
preview: null
---
title: The Local LLM Research Challenge: Can Your Model Match GPT-4's ~95% Accuracy?
score: 1
selftext: [removed]
created: 2025-06-23T23:14:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1livkug/the_local_llm_research_challenge_can_your_model/
author: ComplexIt
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1livkug
locked: false
media: null
name: t3_1livkug
permalink: /r/LocalLLaMA/comments/1livkug/the_local_llm_research_challenge_can_your_model/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Can I usw my old PC as server?
score: 1
selftext: [removed]
created: 2025-06-23T23:43:24
url: https://www.reddit.com/r/LocalLLaMA/comments/1liw7y7/can_i_usw_my_old_pc_as_server/
author: Odd-Name-1556
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1liw7y7
locked: false
media: null
name: t3_1liw7y7
permalink: /r/LocalLLaMA/comments/1liw7y7/can_i_usw_my_old_pc_as_server/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Uncensored model with persistent memory that works as an assistent?
score: 1
selftext: [removed]
created: 2025-06-24T00:02:09
url: https://www.reddit.com/r/LocalLLaMA/comments/1liwmip/uncensored_model_with_persistent_memory_that/
author: Born_Ground_8919
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1liwmip
locked: false
media: null
name: t3_1liwmip
permalink: /r/LocalLLaMA/comments/1liwmip/uncensored_model_with_persistent_memory_that/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: What's the best nsfw llm that I can run on my 5090?
score: 1
selftext: [removed]
created: 2025-06-24T01:38:57
url: https://www.reddit.com/r/LocalLLaMA/comments/1liylsy/whats_the_best_nsfw_llm_that_i_can_run_on_my_5090/
author: jwheeler2210
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1liylsy
locked: false
media: null
name: t3_1liylsy
permalink: /r/LocalLLaMA/comments/1liylsy/whats_the_best_nsfw_llm_that_i_can_run_on_my_5090/
spoiler: false
stickied: false
thumbnail: nsfw
ups: 1
preview: null
---
title: Unlimited llm usage !!! How much are we willing to pay?
score: 1
selftext: [removed] [View Poll](https://www.reddit.com/poll/1lizq99)
created: 2025-06-24T02:34:15
url: https://www.reddit.com/r/LocalLLaMA/comments/1lizq99/unlimited_llm_usage_how_much_are_we_willing_to_pay/
author: Inevitable-Orange-43
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lizq99
locked: false
media: null
name: t3_1lizq99
permalink: /r/LocalLLaMA/comments/1lizq99/unlimited_llm_usage_how_much_are_we_willing_to_pay/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: GPU Upgrade on an Outdated Build or Start Fresh?
score: 1
selftext: [removed]
created: 2025-06-24T02:41:12
url: https://www.reddit.com/r/LocalLLaMA/comments/1lizv4y/gpu_upgrade_on_an_outdated_build_or_start_fresh/
author: swanlake523
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lizv4y
locked: false
media: null
name: t3_1lizv4y
permalink: /r/LocalLLaMA/comments/1lizv4y/gpu_upgrade_on_an_outdated_build_or_start_fresh/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Is it possible to de-align models?
score: 1
selftext: [removed]
created: 2025-06-24T02:44:31
url: https://www.reddit.com/r/LocalLLaMA/comments/1lizxhs/is_it_possible_to_dealign_models/
author: Remarkable_Story_310
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lizxhs
locked: false
media: null
name: t3_1lizxhs
permalink: /r/LocalLLaMA/comments/1lizxhs/is_it_possible_to_dealign_models/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: bought a tower with an rtx 4090 and want to install another rtx 4090 externally
score: 1
selftext: [removed]
created: 2025-06-24T02:51:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj029h/bought_a_tower_with_an_rtx_4090_and_want_to/
author: vegatx40
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj029h
locked: false
media: null
name: t3_1lj029h
permalink: /r/LocalLLaMA/comments/1lj029h/bought_a_tower_with_an_rtx_4090_and_want_to/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Mistral-Small got an update
score: 1
selftext: [removed]
created: 2025-06-24T03:20:31
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj0mpl/mistralsmall_got_an_update/
author: ttkciar
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj0mpl
locked: false
media: null
name: t3_1lj0mpl
permalink: /r/LocalLLaMA/comments/1lj0mpl/mistralsmall_got_an_update/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'enabled': False, 'images': [{'id': '3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=108&crop=smart&auto=webp&s=bcb646eb0d29b10fc855c3faa4ec547bea3a2720', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=216&crop=smart&auto=webp&s=43fd06237effb7db42d0b231837877b44670b382', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=320&crop=smart&auto=webp&s=b95bc86800ee6d461774335ad44f7519511a85b6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=640&crop=smart&auto=webp&s=7d6eecbfa2b523b92f82faf94cb6ab334696d320', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=960&crop=smart&auto=webp&s=6fc9407f8e748429d8a0d15bbec6f96d9d690998', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=1080&crop=smart&auto=webp&s=de77ea1d525b070640d5462af0d097b7745389d8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?auto=webp&s=11ce1d231d2822ed08cc0aa2b21835ed4e44be72', 'width': 1200}, 'variants': {}}]}
---
title: Maybe stupid question but, Why can't Field Specific models be ridden of unnecessary information? Wouldn't that make them better?
score: 1
selftext: [removed]
created: 2025-06-24T04:04:59
url: https://www.reddit.com/gallery/1lj1gro
author: HareMayor
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj1gro
locked: false
media: null
name: t3_1lj1gro
permalink: /r/LocalLLaMA/comments/1lj1gro/maybe_stupid_question_but_why_cant_field_specific/
spoiler: false
stickied: false
thumbnail: https://external-preview…24d5e9dde02f34b1
ups: 1
preview:
{'enabled': True, 'images': [{'id': 'G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo', 'resolutions': [{'height': 39, 'url': 'https://external-preview.redd.it/G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo.png?width=108&crop=smart&auto=webp&s=58fb216f58d0193ed0f7e3062368f3ef886776ee', 'width': 108}, {'height': 79, 'url': 'https://external-preview.redd.it/G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo.png?width=216&crop=smart&auto=webp&s=e5f2365afc79eb8d859859db3fb488d9a515de8f', 'width': 216}, {'height': 117, 'url': 'https://external-preview.redd.it/G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo.png?width=320&crop=smart&auto=webp&s=b8a2e70e7da456f4aa6548c641f6a0664efd4da3', 'width': 320}, {'height': 235, 'url': 'https://external-preview.redd.it/G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo.png?width=640&crop=smart&auto=webp&s=fff2875c78a06d2d704ae40352ead4f1bf064b63', 'width': 640}, {'height': 353, 'url': 'https://external-preview.redd.it/G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo.png?width=960&crop=smart&auto=webp&s=271655900357b6347bd606d6025746860b63b321', 'width': 960}, {'height': 397, 'url': 'https://external-preview.redd.it/G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo.png?width=1080&crop=smart&auto=webp&s=f1592f6d3fa59dae2873544e84d4e9c6cc00ee5a', 'width': 1080}], 'source': {'height': 706, 'url': 'https://external-preview.redd.it/G7k-TQabDNi3vQVw1KUm6Tp7frp2cVPQpNw402uPmIo.png?auto=webp&s=6408807cd2f1b84238839672b986ab3f7438a1f6', 'width': 1919}, 'variants': {}}]}
---
title: A test method to assess whether LLMs actually "think"
score: 1
selftext: [removed]
created: 2025-06-24T04:08:31
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj1j27/a_test_method_to_assess_whether_llms_actually/
author: Upper-Pressure-1954
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj1j27
locked: false
media: null
name: t3_1lj1j27
permalink: /r/LocalLLaMA/comments/1lj1j27/a_test_method_to_assess_whether_llms_actually/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: I built a LOCAL OS that makes LLMs into REAL autonomous agents (no more prompt-chaining BS
score: 1
selftext: [removed]
created: 2025-06-24T04:49:51
url: https://github.com/iluxu/llmbasedos
author: iluxu
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj29ii
locked: false
media: null
name: t3_1lj29ii
permalink: /r/LocalLLaMA/comments/1lj29ii/i_built_a_local_os_that_makes_llms_into_real/
spoiler: false
stickied: false
thumbnail: https://external-preview…e83556fc2b2c63bb
ups: 1
preview:
{'enabled': False, 'images': [{'id': 'jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI.png?width=108&crop=smart&auto=webp&s=a039e85e21244846e59cda2e2fa470b632a56517', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI.png?width=216&crop=smart&auto=webp&s=fc557569e8d3dc1d683e0fbde49c6f2eb6b8e6fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI.png?width=320&crop=smart&auto=webp&s=0837e00c95dd63d9212b87ade993484a3476cc59', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI.png?width=640&crop=smart&auto=webp&s=ce34684f3aa976d02a2f499fc0eb34b3f97fa4e5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI.png?width=960&crop=smart&auto=webp&s=92ced45a74ba15ccec4bb6b7ce9dad8c14b166e0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI.png?width=1080&crop=smart&auto=webp&s=07beda6bd982529a0e7c7c4067b995b5d94f03a7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jgEuQnheBQAiYNAguw1V1SKJQxy22rvASFf_3EqKWfI.png?auto=webp&s=32747b04c151cb7321fa91f3af41805878e16d09', 'width': 1200}, 'variants': {}}]}
---
title: Phi-tiny-MoE & Phi-mini-MoE released
score: 1
selftext: [removed]
created: 2025-06-24T05:47:13
url: https://huggingface.co/microsoft/Phi-mini-MoE-instruct
author: kristaller486
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj37y8
locked: false
media: null
name: t3_1lj37y8
permalink: /r/LocalLLaMA/comments/1lj37y8/phitinymoe_phiminimoe_released/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null
---
title: Phi-mini-MoE & Phi-tiny-MoE
score: 1
selftext:
created: 2025-06-24T05:48:20
url: https://huggingface.co/microsoft/Phi-mini-MoE-instruct
author: kristaller486
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj38kr
locked: false
media: null
name: t3_1lj38kr
permalink: /r/LocalLLaMA/comments/1lj38kr/phiminimoe_phitinymoe/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null
---
title: Just open-sourced Eion - a shared memory system for AI agents
score: 1
selftext: [removed]
created: 2025-06-24T06:14:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj3nlt/just_opensourced_eion_a_shared_memory_system_for/
author: 7wdb417
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj3nlt
locked: false
media: null
name: t3_1lj3nlt
permalink: /r/LocalLLaMA/comments/1lj3nlt/just_opensourced_eion_a_shared_memory_system_for/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Bug Bounty Researcher
score: 1
selftext: [removed]
created: 2025-06-24T06:23:33
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj3sil/bug_bounty_researcher/
author: No_Paraphernalia
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj3sil
locked: false
media: null
name: t3_1lj3sil
permalink: /r/LocalLLaMA/comments/1lj3sil/bug_bounty_researcher/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Qualcomm adds Whisper large and medium to AI Hub Models collection
score: 1
selftext: [removed]
created: 2025-06-24T07:05:54
url: https://github.com/quic/ai-hub-models/releases/tag/v0.30.5
author: Balance-
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj4g6f
locked: false
media: null
name: t3_1lj4g6f
permalink: /r/LocalLLaMA/comments/1lj4g6f/qualcomm_adds_whisper_large_and_medium_to_ai_hub/
spoiler: false
stickied: false
thumbnail: https://external-preview…89f4976dac98d80d
ups: 1
preview:
{'enabled': False, 'images': [{'id': 'uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI.png?width=108&crop=smart&auto=webp&s=8d38cad4b21aea56b376c5e9111ae872ebdeef0c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI.png?width=216&crop=smart&auto=webp&s=235df53dd55acb8560e55734aacf8e3efa67471b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI.png?width=320&crop=smart&auto=webp&s=fe27927916be2a9aef7432f02f36229b7c2e82ed', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI.png?width=640&crop=smart&auto=webp&s=b200ccc0720fa1a26e8454309ab6cd651871667b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI.png?width=960&crop=smart&auto=webp&s=719e906e5ff7bd95372adafac2ba579127b8f9a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI.png?width=1080&crop=smart&auto=webp&s=d15ee238427410c2bc08e31a2e1b5ca5e4205f1d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uYzwDTVcqVWRdorcuWeLEdil7LEtkxRhQIcQiWCygLI.png?auto=webp&s=79d657d71ad9238ddfdbbc10ed4f7056429cc65a', 'width': 1200}, 'variants': {}}]}
---
title: Oh shit, the user is right
score: 1
selftext: [removed]
created: 2025-06-24T07:09:54
url: https://i.redd.it/yumgniygrt8f1.png
author: cool_xixi
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj4iae
locked: false
media: null
name: t3_1lj4iae
permalink: /r/LocalLLaMA/comments/1lj4iae/oh_shit_the_user_is_right/
spoiler: false
stickied: false
thumbnail: https://external-preview…116b4981150bbb3a
ups: 1
preview:
{'enabled': True, 'images': [{'id': 'u7uQsxFoHlA_4q5qqI6XkGin9_uKjSXrScHrF9aKtRs', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/yumgniygrt8f1.png?width=108&crop=smart&auto=webp&s=0a0abad07258f203568de17764acf86e8eda2e43', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/yumgniygrt8f1.png?width=216&crop=smart&auto=webp&s=432c4bc266f83459d16d3ed4e40113bcff0a8073', 'width': 216}, {'height': 110, 'url': 'https://preview.redd.it/yumgniygrt8f1.png?width=320&crop=smart&auto=webp&s=447a071328ce96e922569a8cfc95017b3957d482', 'width': 320}], 'source': {'height': 135, 'url': 'https://preview.redd.it/yumgniygrt8f1.png?auto=webp&s=842b10221050cd10a03f6f1d9d9f3b869f20fb64', 'width': 391}, 'variants': {}}]}
---
title: 😂🏎️🏎️🏎️
score: 1
selftext:
created: 2025-06-24T07:49:40
url: https://i.redd.it/973m4slnxt8f1.png
author: codegolf-guru
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj53gv
locked: false
media: null
name: t3_1lj53gv
permalink: /r/LocalLLaMA/comments/1lj53gv/_/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview:
{'enabled': True, 'images': [{'id': '973m4slnxt8f1', 'resolutions': [{'height': 139, 'url': 'https://preview.redd.it/973m4slnxt8f1.png?width=108&crop=smart&auto=webp&s=a47777d96a155d910c844b2bf720cb77c4b72dee', 'width': 108}, {'height': 278, 'url': 'https://preview.redd.it/973m4slnxt8f1.png?width=216&crop=smart&auto=webp&s=7d713a39564dcf0122827803f530ee232cd5e6d0', 'width': 216}, {'height': 412, 'url': 'https://preview.redd.it/973m4slnxt8f1.png?width=320&crop=smart&auto=webp&s=2ff679f3f8923524ba694ba2e71611b47840e70e', 'width': 320}, {'height': 824, 'url': 'https://preview.redd.it/973m4slnxt8f1.png?width=640&crop=smart&auto=webp&s=b093873466de69a8bea6ef7033db90af14ae46ca', 'width': 640}, {'height': 1236, 'url': 'https://preview.redd.it/973m4slnxt8f1.png?width=960&crop=smart&auto=webp&s=ea474171308ac85be106e33072bf763f42462ced', 'width': 960}, {'height': 1390, 'url': 'https://preview.redd.it/973m4slnxt8f1.png?width=1080&crop=smart&auto=webp&s=6dc3b080c02868a1d11925f1588b941097a208df', 'width': 1080}], 'source': {'height': 4311, 'url': 'https://preview.redd.it/973m4slnxt8f1.png?auto=webp&s=99a95f1d493cee1dff498ff704b513f050adf030', 'width': 3348}, 'variants': {}}]}
---
title: Doing a half-assed RAG
score: 1
selftext: [removed]
created: 2025-06-24T07:58:22
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj5829/doing_a_halfassed_rag/
author: HistorianPotential48
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj5829
locked: false
media: null
name: t3_1lj5829
permalink: /r/LocalLLaMA/comments/1lj5829/doing_a_halfassed_rag/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Best truly uncensored open-source LLM
score: 1
selftext: [removed]
created: 2025-06-24T08:10:49
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj5eoq/best_truly_uncensored_opensource_llm/
author: Over_Friendship3455
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj5eoq
locked: false
media: null
name: t3_1lj5eoq
permalink: /r/LocalLLaMA/comments/1lj5eoq/best_truly_uncensored_opensource_llm/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: What are the best AI tools that can build a web app from just a prompt?
score: 1
selftext: [removed]
created: 2025-06-24T08:30:38
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj5pbj/what_are_the_best_ai_tools_that_can_build_a_web/
author: netixc1
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj5pbj
locked: false
media: null
name: t3_1lj5pbj
permalink: /r/LocalLLaMA/comments/1lj5pbj/what_are_the_best_ai_tools_that_can_build_a_web/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: How to read a text in an image which is rotated and extract the hand-marked checkbox values through prompt?
score: 1
selftext: [removed]
created: 2025-06-24T08:44:17
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj5wlm/how_to_read_a_text_in_an_image_which_is_rotated/
author: TariqSm
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj5wlm
locked: false
media: null
name: t3_1lj5wlm
permalink: /r/LocalLLaMA/comments/1lj5wlm/how_to_read_a_text_in_an_image_which_is_rotated/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Qwen2.5 14b on an M1 Pro with 16gb RAM
score: 1
selftext: [removed]
created: 2025-06-24T09:08:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj69o2/qwen25_14b_on_an_m1_pro_with_16gb_ram/
author: gamerboy12555
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj69o2
locked: false
media: null
name: t3_1lj69o2
permalink: /r/LocalLLaMA/comments/1lj69o2/qwen25_14b_on_an_m1_pro_with_16gb_ram/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Mistral Small 3.1 Instruct suddenly a gated model
score: 1
selftext: [removed]
created: 2025-06-24T09:18:54
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj6fiy/mistral_small_31_instruct_suddenly_a_gated_model/
author: gustavhertz
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj6fiy
locked: false
media: null
name: t3_1lj6fiy
permalink: /r/LocalLLaMA/comments/1lj6fiy/mistral_small_31_instruct_suddenly_a_gated_model/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…9f3f9ZOOpjTI.jpg
ups: 1
preview: null
---
title: what happened to this subreddit?
score: 1
selftext: [removed]
created: 2025-06-24T09:21:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj6h1o/what_happened_to_this_subreddit/
author: anonymous_anki
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj6h1o
locked: false
media: null
name: t3_1lj6h1o
permalink: /r/LocalLLaMA/comments/1lj6h1o/what_happened_to_this_subreddit/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: s
score: 1
selftext: [removed]
created: 2025-06-24T09:29:44
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj6liy/s/
author: Worthstream
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj6liy
locked: false
media: null
name: t3_1lj6liy
permalink: /r/LocalLLaMA/comments/1lj6liy/s/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: It's art
score: 1
selftext:
created: 2025-06-24T09:29:59
url: https://i.redd.it/fsp50n0mgu8f1.jpeg
author: kernel348
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj6lnm
locked: false
media: null
name: t3_1lj6lnm
permalink: /r/LocalLLaMA/comments/1lj6lnm/its_art/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview:
{'enabled': True, 'images': [{'id': 'fsp50n0mgu8f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/fsp50n0mgu8f1.jpeg?width=108&crop=smart&auto=webp&s=d895b4afdbf46520df71664da11864ebe5305274', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/fsp50n0mgu8f1.jpeg?width=216&crop=smart&auto=webp&s=90fff46dec441b537768ec1ce51417f40dfa1c5c', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/fsp50n0mgu8f1.jpeg?width=320&crop=smart&auto=webp&s=1b4083d84b5df1666383e7a64a93453c704bc013', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/fsp50n0mgu8f1.jpeg?width=640&crop=smart&auto=webp&s=516d9baa716a6608ddbaa273d5fae4014fb64980', 'width': 640}], 'source': {'height': 736, 'url': 'https://preview.redd.it/fsp50n0mgu8f1.jpeg?auto=webp&s=e7df26c926103fb0b03454a415e114e18fb17ecd', 'width': 736}, 'variants': {}}]}
---
title: Mistral-small 3.2 24B tool call parser
score: 1
selftext: [removed]
created: 2025-06-24T09:59:54
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj72pt/mistralsmall_32_24b_tool_call_parser/
author: Remarkable-Law9287
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj72pt
locked: false
media: null
name: t3_1lj72pt
permalink: /r/LocalLLaMA/comments/1lj72pt/mistralsmall_32_24b_tool_call_parser/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: why do we have to tokenize our input in huggingface but not in ollama?
score: 1
selftext: [removed]
created: 2025-06-24T10:02:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj748z/why_do_we_have_to_tokenize_our_input_in/
author: Beyond_Birthday_13
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj748z
locked: false
media: null
name: t3_1lj748z
permalink: /r/LocalLLaMA/comments/1lj748z/why_do_we_have_to_tokenize_our_input_in/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: I was done scrolling, so i built a Alt Tab like UI to navigate questions in ChatGPT
score: 1
selftext: [removed]
created: 2025-06-24T10:21:47
url: https://v.redd.it/6133mqg5pu8f1
author: CategoryFew5869
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj7fs5
locked: false
media:
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6133mqg5pu8f1/DASHPlaylist.mpd?a=1753352519%2CMjdlM2ZhNmYzZTM2OGU4ZjU0MzFiOWEwYzBkNThlYzliNDEzODYyNzBlNTFlMzYyM2IzY2JhOTIzZThiNDk1Ng%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/6133mqg5pu8f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6133mqg5pu8f1/HLSPlaylist.m3u8?a=1753352519%2CMDRhNDgxMmFmYjM2NTlhMzU4YWI0ZWE2YWE4Yjc1ZmJmYWNlNTEwZmUzZWYwNmNmZWU4MDkwOGRkMTIwYmFkYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6133mqg5pu8f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
name: t3_1lj7fs5
permalink: /r/LocalLLaMA/comments/1lj7fs5/i_was_done_scrolling_so_i_built_a_alt_tab_like_ui/
spoiler: false
stickied: false
thumbnail: https://external-preview…c0b56408f787ada3
ups: 1
preview:
{'enabled': False, 'images': [{'id': 'MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx.png?width=108&crop=smart&format=pjpg&auto=webp&s=2f29e832cd817751021df2ac42a473297955b54d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx.png?width=216&crop=smart&format=pjpg&auto=webp&s=c520474474896a91341762ba6a05414542a17498', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx.png?width=320&crop=smart&format=pjpg&auto=webp&s=7f650c9603c218fa2e370a005bea2a473a544e22', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx.png?width=640&crop=smart&format=pjpg&auto=webp&s=d94fc5ef666074eaaa7950e4618af1d7d6528aef', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx.png?width=960&crop=smart&format=pjpg&auto=webp&s=656cd0a5be0b05d15d8c20929f15ceec65c092ac', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4876883cead075981a4cedcbe4233f214468142d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MGc0cnpxZzVwdThmMYC__n0Nn3l7RMQZdme2zTaTfGW2W0uGQqrbgd6LLmdx.png?format=pjpg&auto=webp&s=7559fe33c136606d80ec88fae8bdad70a9b42842', 'width': 1920}, 'variants': {}}]}
---
title: What is happening with r/LocalLLama?
score: 1
selftext: [removed]
created: 2025-06-24T10:56:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj80j6/what_is_happening_with_rlocalllama/
author: benja0x40
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj80j6
locked: false
media: null
name: t3_1lj80j6
permalink: /r/LocalLLaMA/comments/1lj80j6/what_is_happening_with_rlocalllama/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Is it possible to run 2x Radeon RX 7600 XT 16 GB for local AI?
score: 1
selftext: [removed]
created: 2025-06-24T11:41:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj8ukb/is_it_possible_to_run_2x_radeon_rx_7600_xt_16_gb/
author: whatever462672
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj8ukb
locked: false
media: null
name: t3_1lj8ukb
permalink: /r/LocalLLaMA/comments/1lj8ukb/is_it_possible_to_run_2x_radeon_rx_7600_xt_16_gb/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Jailbreaking Gemini
score: 1
selftext: [removed]
created: 2025-06-24T11:48:29
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj8zgx/jailbreaking_gemini/
author: Sufficient_Abies2479
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj8zgx
locked: false
media: null
name: t3_1lj8zgx
permalink: /r/LocalLLaMA/comments/1lj8zgx/jailbreaking_gemini/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Speech to text with emotions
score: 1
selftext: [removed]
created: 2025-06-24T12:10:33
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj9f72/speech_to_text_with_emotions/
author: iboima
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj9f72
locked: false
media: null
name: t3_1lj9f72
permalink: /r/LocalLLaMA/comments/1lj9f72/speech_to_text_with_emotions/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: How to train AI agent with my local repository and use it in house?
score: 1
selftext: [removed]
created: 2025-06-24T12:14:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj9hue/how_to_train_ai_agent_with_my_local_repository/
author: 01101110111motiv
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj9hue
locked: false
media: null
name: t3_1lj9hue
permalink: /r/LocalLLaMA/comments/1lj9hue/how_to_train_ai_agent_with_my_local_repository/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: (meta question) Many comments on this sub are not visible for me
score: 1
selftext: [removed]
created: 2025-06-24T12:33:00
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj9w5d/meta_question_many_comments_on_this_sub_are_not/
author: ilhud9s
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj9w5d
locked: false
media: null
name: t3_1lj9w5d
permalink: /r/LocalLLaMA/comments/1lj9w5d/meta_question_many_comments_on_this_sub_are_not/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Where to start learning?
score: 1
selftext: [removed]
created: 2025-06-24T12:35:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1lj9xyu/where_to_start_learning/
author: Ok_Caterpillar_9945
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lj9xyu
locked: false
media: null
name: t3_1lj9xyu
permalink: /r/LocalLLaMA/comments/1lj9xyu/where_to_start_learning/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: I Discovered the road to ASI 🧐
score: 1
selftext: [removed]
created: 2025-06-24T12:46:22
url: https://www.reddit.com/r/LocalLLaMA/comments/1lja6ji/i_discovered_the_road_to_asi/
author: 1EvilSexyGenius
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lja6ji
locked: false
media: null
name: t3_1lja6ji
permalink: /r/LocalLLaMA/comments/1lja6ji/i_discovered_the_road_to_asi/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Introducing Atria, a self-hosted AI building platform (beta launch)
score: 1
selftext: [removed]
created: 2025-06-24T13:22:41
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljazob/introducing_atria_a_selfhosted_ai_building/
author: StygianBlue2
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljazob
locked: false
media: null
name: t3_1ljazob
permalink: /r/LocalLLaMA/comments/1ljazob/introducing_atria_a_selfhosted_ai_building/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: What are best LLMs in french ?
score: 1
selftext: [removed]
created: 2025-06-24T13:23:04
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljb00d/what_are_best_llms_in_french/
author: Head_Mushroom_3748
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljb00d
locked: false
media: null
name: t3_1ljb00d
permalink: /r/LocalLLaMA/comments/1ljb00d/what_are_best_llms_in_french/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: What t f happened !!!!????
score: 1
selftext: [removed]
created: 2025-06-24T13:46:21
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljbjej/what_t_f_happened/
author: Zealousideal-Cut590
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljbjej
locked: false
media: null
name: t3_1ljbjej
permalink: /r/LocalLLaMA/comments/1ljbjej/what_t_f_happened/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: I'm releasing my app !
score: 1
selftext: [removed]
created: 2025-06-24T13:51:01
url: https://i.redd.it/alg4dvl6rv8f1.png
author: Kindly-Treacle-6378
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljbng3
locked: false
media: null
name: t3_1ljbng3
permalink: /r/LocalLLaMA/comments/1ljbng3/im_releasing_my_app/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview:
{'enabled': True, 'images': [{'id': 'alg4dvl6rv8f1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/alg4dvl6rv8f1.png?width=108&crop=smart&auto=webp&s=725af794422dc20e1312d1a71bc585bf08bd6462', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/alg4dvl6rv8f1.png?width=216&crop=smart&auto=webp&s=41a8bc8a8aee4b8d104df6abe4adb21676ca57b2', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/alg4dvl6rv8f1.png?width=320&crop=smart&auto=webp&s=d0e70b4991014a9cc9a87c313a649deac073b81e', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/alg4dvl6rv8f1.png?width=640&crop=smart&auto=webp&s=ebd7dbad9ab0cce1f4eb81c504f58d7b2c1d8bfd', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/alg4dvl6rv8f1.png?width=960&crop=smart&auto=webp&s=dec5becfd0d3459dcb393e3363dc33f759c907cd', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/alg4dvl6rv8f1.png?width=1080&crop=smart&auto=webp&s=814c58ed276fb4e8ad328c4a50076dec263df1e9', 'width': 1080}], 'source': {'height': 2412, 'url': 'https://preview.redd.it/alg4dvl6rv8f1.png?auto=webp&s=de0b8fff02019cb0f6dd881c3b9c21848461f054', 'width': 1084}, 'variants': {}}]}
---
title: How many of you are using MCP?
score: 1
selftext: [removed]
created: 2025-06-24T13:51:21
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljbnpk/how_many_of_you_are_using_mcp/
author: Yapper_Zipper
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljbnpk
locked: false
media: null
name: t3_1ljbnpk
permalink: /r/LocalLLaMA/comments/1ljbnpk/how_many_of_you_are_using_mcp/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Any open source text to speech that gives you more expressive control?
score: 1
selftext: [removed]
created: 2025-06-24T14:07:44
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljc24w/any_open_source_text_to_speech_that_gives_you/
author: Brad12d3
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljc24w
locked: false
media: null
name: t3_1ljc24w
permalink: /r/LocalLLaMA/comments/1ljc24w/any_open_source_text_to_speech_that_gives_you/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: LM Studio to read documents?
score: 1
selftext: [removed]
created: 2025-06-24T14:26:56
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljcj7n/lm_studio_to_read_documents/
author: rocky_balboa202
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljcj7n
locked: false
media: null
name: t3_1ljcj7n
permalink: /r/LocalLLaMA/comments/1ljcj7n/lm_studio_to_read_documents/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Confused about serving STT and TTS concurrently through API. Any help would be appreciated!
score: 1
selftext: [removed]
created: 2025-06-24T14:28:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljckln/confused_about_serving_stt_and_tts_concurrently/
author: learninggamdev
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljckln
locked: false
media: null
name: t3_1ljckln
permalink: /r/LocalLLaMA/comments/1ljckln/confused_about_serving_stt_and_tts_concurrently/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: GGUF Vision Models - Does it make a difference if I pick f16 or bf16 for the mmproj file?
score: 1
selftext: [removed]
created: 2025-06-24T14:47:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljd1t1/gguf_vision_models_does_it_make_a_difference_if_i/
author: nmkd
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljd1t1
locked: false
media: null
name: t3_1ljd1t1
permalink: /r/LocalLLaMA/comments/1ljd1t1/gguf_vision_models_does_it_make_a_difference_if_i/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Automating Form Mapping with AI
score: 1
selftext: [removed]
created: 2025-06-24T14:48:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljd355/automating_form_mapping_with_ai/
author: carrick1363
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljd355
locked: false
media: null
name: t3_1ljd355
permalink: /r/LocalLLaMA/comments/1ljd355/automating_form_mapping_with_ai/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Speed comparison for Gemma 3 27B
score: 1
selftext: [removed]
created: 2025-06-24T14:49:25
url: https://www.reddit.com/r/LocalLLaMA/comments/1ljd3uz/speed_comparison_for_gemma_3_27b/
author: phazze777
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1ljd3uz
locked: false
media: null
name: t3_1ljd3uz
permalink: /r/LocalLLaMA/comments/1ljd3uz/speed_comparison_for_gemma_3_27b/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
Seeking recommendations for an advanced, company-funded AI/LLM course
1
[removed]
2025-06-24T14:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1ljd5co/seeking_recommendations_for_an_advanced/
amunocis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljd5co
false
null
t3_1ljd5co
/r/LocalLLaMA/comments/1ljd5co/seeking_recommendations_for_an_advanced/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8.png?width=108&crop=smart&auto=webp&s=dd798cbbdbb71fca5a22358785768f607fd15254', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8.png?width=216&crop=smart&auto=webp&s=fe9285bcb4990076233792b6b1f708045d5a7738', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8.png?width=320&crop=smart&auto=webp&s=ff802d735110db2cc9e76f1781ea174572dcc52e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8.png?width=640&crop=smart&auto=webp&s=de1eb797e9c357f23df24f55eaa5143db86b75c0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8.png?width=960&crop=smart&auto=webp&s=79fb8ef9a959ba0d45e923486f3acb22fab73ece', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8.png?width=1080&crop=smart&auto=webp&s=a797243e8bbada24b41ff7717aeae84499d67b8c', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/SKRWjohBObgx3yHzNjtPrtwjGEYT8p8D_A_p6zNsKM8.png?auto=webp&s=f2bc4da7a42cb89d30554b8540fc2bc5f75ddd71', 'width': 1280}, 'variants': {}}]}
Why aren’t there any new posts?
1
[removed]
2025-06-24T15:11:55
https://www.reddit.com/r/LocalLLaMA/comments/1ljdoxb/why_arent_there_any_new_posts/
tubi_el_tababa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljdoxb
false
null
t3_1ljdoxb
/r/LocalLLaMA/comments/1ljdoxb/why_arent_there_any_new_posts/
false
false
self
1
null
Issues with Qwen
1
[removed]
2025-06-24T15:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1ljeagz/issues_with_qwen/
LazyChampionship5819
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljeagz
false
null
t3_1ljeagz
/r/LocalLLaMA/comments/1ljeagz/issues_with_qwen/
false
false
https://b.thumbs.redditm…sQgFDIr9iTnM.jpg
1
null
Applying COCONUT continuous reasoning into a learnt linear layer that produces sampling parameters (temp, top-k, top-p, etc.) for the current token
1
[removed]
2025-06-24T15:37:21
https://i.redd.it/8r7jlwzw9w8f1.png
ryunuck
i.redd.it
1970-01-01T00:00:00
0
{}
1ljed12
false
null
t3_1ljed12
/r/LocalLLaMA/comments/1ljed12/applying_coconut_continuous_reasoning_into_a/
false
false
default
1
{'enabled': True, 'images': [{'id': '8r7jlwzw9w8f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/8r7jlwzw9w8f1.png?width=108&crop=smart&auto=webp&s=f80f572936e71a88cd9a898040b4e846b610d95c', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/8r7jlwzw9w8f1.png?width=216&crop=smart&auto=webp&s=f16948d383fa9c66bc69d90abda8ddfc2ab65481', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/8r7jlwzw9w8f1.png?width=320&crop=smart&auto=webp&s=f61a44204b1f741836c2d95ada787063b7e670b8', 'width': 320}, {'height': 856, 'url': 'https://preview.redd.it/8r7jlwzw9w8f1.png?width=640&crop=smart&auto=webp&s=5b1fd7a29771f81a5f781bcfea4d156159c8a18a', 'width': 640}], 'source': {'height': 856, 'url': 'https://preview.redd.it/8r7jlwzw9w8f1.png?auto=webp&s=8a3517bef5037f042cb21ba9c641772207850f7f', 'width': 640}, 'variants': {}}]}
Applying COCONUT continuous reasoning into a learnt linear layer that produces sampling parameters (temp, top-k, top-p, etc.) for the current token
1
[removed]
2025-06-24T15:40:18
https://i.redd.it/o7mv6dugaw8f1.png
ryunuck
i.redd.it
1970-01-01T00:00:00
0
{}
1ljefwo
false
null
t3_1ljefwo
/r/LocalLLaMA/comments/1ljefwo/applying_coconut_continuous_reasoning_into_a/
false
false
https://external-preview…5756328af11126a8
1
{'enabled': True, 'images': [{'id': 'JRIP_xkg4BuH_ZZnihw-prRSBimm16YO4u5T6SZALhc', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/o7mv6dugaw8f1.png?width=108&crop=smart&auto=webp&s=843fb5f5735191c6789bdde40227822e168f0360', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/o7mv6dugaw8f1.png?width=216&crop=smart&auto=webp&s=50bfb5dda13b6b06ad04b57d63f7c47c61853a10', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/o7mv6dugaw8f1.png?width=320&crop=smart&auto=webp&s=5e2d9060bf99eba22e68c2be49ed3faad8f0a5e2', 'width': 320}, {'height': 856, 'url': 'https://preview.redd.it/o7mv6dugaw8f1.png?width=640&crop=smart&auto=webp&s=1e827dff90d2db53e034e39b5a8fb9900bca6c01', 'width': 640}], 'source': {'height': 856, 'url': 'https://preview.redd.it/o7mv6dugaw8f1.png?auto=webp&s=9145290a979cbd65d902c52623978686fcb82720', 'width': 640}, 'variants': {}}]}
Applying COCONUT continuous reasoning into a learnt linear layer that produces sampling parameters (temp, top-k, top-p, etc.) for the current token
1
[removed]
2025-06-24T15:42:42
https://i.redd.it/ttjtuk92bw8f1.png
psychonucks
i.redd.it
1970-01-01T00:00:00
0
{}
1ljei7l
false
null
t3_1ljei7l
/r/LocalLLaMA/comments/1ljei7l/applying_coconut_continuous_reasoning_into_a/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ttjtuk92bw8f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ttjtuk92bw8f1.png?width=108&crop=smart&auto=webp&s=43304c1d2b04e081425d806184ca26b436c44435', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ttjtuk92bw8f1.png?width=216&crop=smart&auto=webp&s=22b5961b99d3e6cd82a21ff12c7b7480eb121751', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/ttjtuk92bw8f1.png?width=320&crop=smart&auto=webp&s=cd6f09c0d864d0d9ebc2e05abe65762c0e02b7c1', 'width': 320}, {'height': 856, 'url': 'https://preview.redd.it/ttjtuk92bw8f1.png?width=640&crop=smart&auto=webp&s=f0fc671a0265d2738819ec8b2f5fdbafaeed63fb', 'width': 640}], 'source': {'height': 856, 'url': 'https://preview.redd.it/ttjtuk92bw8f1.png?auto=webp&s=0a142314aac78103f8d78e57c8f835f6d9d5918f', 'width': 640}, 'variants': {}}]}
Help with vLLM Speculative Decoding using Medusa
1
[removed]
2025-06-24T15:43:29
https://www.reddit.com/r/LocalLLaMA/comments/1ljeizj/help_with_vllm_speculative_decoding_using_medusa/
Equivalent_Pair_4146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljeizj
false
null
t3_1ljeizj
/r/LocalLLaMA/comments/1ljeizj/help_with_vllm_speculative_decoding_using_medusa/
false
false
self
1
null
Applying COCONUT continuous reasoning into a learnt linear layer that produces sampling parameters (temp, top-k, top-p, etc.) for the current token
1
[removed]
2025-06-24T15:47:14
https://www.reddit.com/r/LocalLLaMA/comments/1ljemhd/applying_coconut_continuous_reasoning_into_a/
ryunuck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljemhd
false
null
t3_1ljemhd
/r/LocalLLaMA/comments/1ljemhd/applying_coconut_continuous_reasoning_into_a/
false
false
https://b.thumbs.redditm…sNNHTq9C-WJY.jpg
1
null
I built a tool to calculate exactly how many GPUs you need—based on your chosen model, quantization, context length, concurrency level, and target throughput.
1
[removed]
2025-06-24T15:50:02
https://www.reddit.com/r/LocalLLaMA/comments/1ljep8m/i_built_a_tool_to_calculate_exactly_how_many_gpus/
RubJunior488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljep8m
false
null
t3_1ljep8m
/r/LocalLLaMA/comments/1ljep8m/i_built_a_tool_to_calculate_exactly_how_many_gpus/
false
false
self
1
null
Local equivalent to Gemini 2.0 flash
1
[removed]
2025-06-24T15:50:26
https://www.reddit.com/r/LocalLLaMA/comments/1ljepmx/local_equivalent_to_gemini_20_flash/
Russ_Dill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljepmx
false
null
t3_1ljepmx
/r/LocalLLaMA/comments/1ljepmx/local_equivalent_to_gemini_20_flash/
false
false
self
1
null
Applying COCONUT continuous reasoning into a learnt linear layer that produces sampling parameters (temp, top-k, top-p, etc.) for the current token
1
[removed]
2025-06-24T15:51:26
https://i.redd.it/k1malgvjcw8f1.png
ryunuck
i.redd.it
1970-01-01T00:00:00
0
{}
1ljeqko
false
null
t3_1ljeqko
/r/LocalLLaMA/comments/1ljeqko/applying_coconut_continuous_reasoning_into_a/
false
false
https://external-preview…5756328af11126a8
1
{'enabled': True, 'images': [{'id': 'JRIP_xkg4BuH_ZZnihw-prRSBimm16YO4u5T6SZALhc', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/k1malgvjcw8f1.png?width=108&crop=smart&auto=webp&s=ca66e2e50a9b024b413fe22b1397932563367ac5', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/k1malgvjcw8f1.png?width=216&crop=smart&auto=webp&s=e38c7e3d124bfac78c4937e239b851b19d76f0cb', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/k1malgvjcw8f1.png?width=320&crop=smart&auto=webp&s=6fcba6393c755c4a544b3933ba2051bf9a545658', 'width': 320}, {'height': 856, 'url': 'https://preview.redd.it/k1malgvjcw8f1.png?width=640&crop=smart&auto=webp&s=d8f589da89e74f8cbdbdca5bf0afbef9c8df9aeb', 'width': 640}], 'source': {'height': 856, 'url': 'https://preview.redd.it/k1malgvjcw8f1.png?auto=webp&s=e6602963ec208527837dbdc10519d0c2207c0176', 'width': 640}, 'variants': {}}]}
I built a tool to calculate exactly how many GPUs you need—based on your chosen model, quantization, context length, concurrency level, and target throughput.
1
[removed]
2025-06-24T16:03:31
https://www.reddit.com/r/LocalLLaMA/comments/1ljf1z4/i_built_a_tool_to_calculate_exactly_how_many_gpus/
RubJunior488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljf1z4
false
null
t3_1ljf1z4
/r/LocalLLaMA/comments/1ljf1z4/i_built_a_tool_to_calculate_exactly_how_many_gpus/
false
false
https://b.thumbs.redditm…MtXn1zRDN3KQ.jpg
1
null
What's a good model for generating a schedule for multiple employees?
1
[removed]
2025-06-24T16:08:54
https://www.reddit.com/r/LocalLLaMA/comments/1ljf76m/whats_a_good_model_for_generating_a_schedule_for/
ThatGreenM-M
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljf76m
false
null
t3_1ljf76m
/r/LocalLLaMA/comments/1ljf76m/whats_a_good_model_for_generating_a_schedule_for/
false
false
self
1
null
Day 2 of 50 Days of Building a Small Language Model from Scratch — Tokenizers: The Unsung Heroes of Language Models
1
[removed]
2025-06-24T16:10:27
https://www.reddit.com/r/LocalLLaMA/comments/1ljf8ls/day_2_of_50_days_of_building_a_small_language/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljf8ls
false
null
t3_1ljf8ls
/r/LocalLLaMA/comments/1ljf8ls/day_2_of_50_days_of_building_a_small_language/
false
false
self
1
null
What's the best Vision Model for local OCR ( scanned invoices etc.) on a rtx 5080 in june 2025?
1
[removed]
2025-06-24T16:18:58
https://www.reddit.com/r/LocalLLaMA/comments/1ljfgsd/whats_the_best_vision_model_for_local_ocr_scanned/
Key_Rush_3180
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljfgsd
false
null
t3_1ljfgsd
/r/LocalLLaMA/comments/1ljfgsd/whats_the_best_vision_model_for_local_ocr_scanned/
false
false
self
1
null
WebBench: A real-world benchmark for Browser Agents
1
[removed]
2025-06-24T16:40:55
https://i.redd.it/qkwd19xhlw8f1.jpeg
Impressive_Half_2819
i.redd.it
1970-01-01T00:00:00
0
{}
1ljg1ux
false
null
t3_1ljg1ux
/r/LocalLLaMA/comments/1ljg1ux/webbench_a_realworld_benchmark_for_browser_agents/
false
false
default
1
{'enabled': True, 'images': [{'id': 'qkwd19xhlw8f1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/qkwd19xhlw8f1.jpeg?width=108&crop=smart&auto=webp&s=ca4b9df8943992c44e9dd02311d75e3f4ed297db', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/qkwd19xhlw8f1.jpeg?width=216&crop=smart&auto=webp&s=590c2c6b8b489cc4e744b9e11bfd9af958ed8b91', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/qkwd19xhlw8f1.jpeg?width=320&crop=smart&auto=webp&s=829333233de6e3f875e78dc0bfa9ca8c111fa6a6', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/qkwd19xhlw8f1.jpeg?width=640&crop=smart&auto=webp&s=7819b950b985bcb0461550c5a1a4f035ca8da1cc', 'width': 640}, {'height': 549, 'url': 'https://preview.redd.it/qkwd19xhlw8f1.jpeg?width=960&crop=smart&auto=webp&s=26a5121a13e6f2eb462f2dc5e496363c2884fcbd', 'width': 960}, {'height': 618, 'url': 'https://preview.redd.it/qkwd19xhlw8f1.jpeg?width=1080&crop=smart&auto=webp&s=ac966996d01693a8a2f2a989cac2ec531f758e98', 'width': 1080}], 'source': {'height': 916, 'url': 'https://preview.redd.it/qkwd19xhlw8f1.jpeg?auto=webp&s=7c126a17eec30a7c87aa0087387f6a38d06ffeb1', 'width': 1600}, 'variants': {}}]}
Tokilake: Unlock Your Idle GPUs as a Unified LLM API – Behind NAT, Fully Private
1
[removed]
2025-06-24T16:42:15
https://www.reddit.com/r/LocalLLaMA/comments/1ljg35h/tokilake_unlock_your_idle_gpus_as_a_unified_llm/
Square-Air6513
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljg35h
false
null
t3_1ljg35h
/r/LocalLLaMA/comments/1ljg35h/tokilake_unlock_your_idle_gpus_as_a_unified_llm/
false
false
self
1
null
Are there leaderboards that ranking LLM for tasks?
1
[removed]
2025-06-24T17:12:14
https://www.reddit.com/r/LocalLLaMA/comments/1ljgvod/are_there_leaderboards_that_ranking_llm_for_tasks/
GTHell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljgvod
false
null
t3_1ljgvod
/r/LocalLLaMA/comments/1ljgvod/are_there_leaderboards_that_ranking_llm_for_tasks/
false
false
self
1
null
I'm sure most people have read about the Claud Spiritual Bliss Attractor and I wanted to reproduce it locally, so I made Resonant Chat Arena, a simple python script to put two LLMs in conversation with each other.
8
2025-06-24T17:33:15
https://github.com/jkingsman/resonant-chat-arena
CharlesStross
github.com
1970-01-01T00:00:00
0
{}
1ljhg1i
false
null
t3_1ljhg1i
/r/LocalLLaMA/comments/1ljhg1i/im_sure_most_people_have_read_about_the_claud/
false
false
https://external-preview…ec6d3693fcb2b29e
8
{'enabled': False, 'images': [{'id': 'rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE.png?width=108&crop=smart&auto=webp&s=be8cbbf89194fd69bf6142f0e8f0f036ca1df411', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE.png?width=216&crop=smart&auto=webp&s=9b1cde5983d46a432d1f7f1fc41d3bd3a7bde600', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE.png?width=320&crop=smart&auto=webp&s=3460fbc73e6c20895174ebd99d25048e13678887', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE.png?width=640&crop=smart&auto=webp&s=61a501a09debdd7a58e2f8b7a92cf6aad5a73838', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE.png?width=960&crop=smart&auto=webp&s=8ec61cc74cf89b0ac5ec2f2035c3a5ee3ff7c4b9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE.png?width=1080&crop=smart&auto=webp&s=45be6a8b00327cd7aa93ed1347fce015aa23c549', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rna5zREg5_FzFmMGv-Mzfn4pHDOOgy6GUqSdq0vIQVE.png?auto=webp&s=59d1667b1cb6d4c6d1be052e6617fd6dc2830639', 'width': 1200}, 'variants': {}}]}
Federal Judge: Training On Copyrighted Works Is Fair Use
1
[removed]
2025-06-24T17:44:04
https://www.reddit.com/r/LocalLLaMA/comments/1ljhql9/federal_judge_training_on_copyrighted_works_is/
MrPecunius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljhql9
false
null
t3_1ljhql9
/r/LocalLLaMA/comments/1ljhql9/federal_judge_training_on_copyrighted_works_is/
false
false
self
1
null
I wanna create a startup using LLaMa smthg, idk what? any ideas geeks?
1
[removed]
2025-06-24T17:44:57
https://www.reddit.com/r/LocalLLaMA/comments/1ljhrh3/i_wanna_create_a_startup_using_llama_smthg_idk/
Expert-Address-2918
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljhrh3
false
null
t3_1ljhrh3
/r/LocalLLaMA/comments/1ljhrh3/i_wanna_create_a_startup_using_llama_smthg_idk/
false
false
self
1
null
Running on TPU ?!!
1
[removed]
2025-06-24T17:47:24
https://www.reddit.com/r/LocalLLaMA/comments/1ljhttx/running_on_tpu/
Symbiote_in_me
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljhttx
false
null
t3_1ljhttx
/r/LocalLLaMA/comments/1ljhttx/running_on_tpu/
false
false
self
1
null
So, are we back?
1
[removed]
2025-06-24T18:20:32
https://www.reddit.com/r/LocalLLaMA/comments/1ljipwv/so_are_we_back/
Herr_Drosselmeyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljipwv
false
null
t3_1ljipwv
/r/LocalLLaMA/comments/1ljipwv/so_are_we_back/
false
false
self
1
null
Anthropic wins a major fair use victory for AI (Purchased copies of books is fair use for training)
1
2025-06-24T18:34:53
https://www.theverge.com/news/692015/anthropic-wins-a-major-fair-use-victory-for-ai-but-its-still-in-trouble-for-stealing-books
theZeitt
theverge.com
1970-01-01T00:00:00
0
{}
1ljj3ey
false
null
t3_1ljj3ey
/r/LocalLLaMA/comments/1ljj3ey/anthropic_wins_a_major_fair_use_victory_for_ai/
false
false
default
1
null
The LLM's RL Revelation We Didn't See Coming
1
[removed]
2025-06-24T18:50:26
https://youtu.be/z3awgfU4yno
FeathersOfTheArrow
youtu.be
1970-01-01T00:00:00
0
{}
1ljji8f
false
{'oembed': {'author_name': 'bycloud', 'author_url': 'https://www.youtube.com/@bycloudAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/z3awgfU4yno?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="The LLM&#39;s RL Revelation We Didn&#39;t See Coming"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/z3awgfU4yno/hqdefault.jpg', 'thumbnail_width': 480, 'title': "The LLM's RL Revelation We Didn't See Coming", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1ljji8f
/r/LocalLLaMA/comments/1ljji8f/the_llms_rl_revelation_we_didnt_see_coming/
false
false
default
1
null
Tiny Tavern - IA character mobile app via Ollama
1
[removed]
2025-06-24T18:57:04
https://www.reddit.com/r/LocalLLaMA/comments/1ljjojq/tiny_tavern_ia_character_mobile_app_via_ollama/
Ill_Marketing_5245
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljjojq
false
null
t3_1ljjojq
/r/LocalLLaMA/comments/1ljjojq/tiny_tavern_ia_character_mobile_app_via_ollama/
false
false
https://b.thumbs.redditm…QD_CYlt4ABDw.jpg
1
null
Angry creator seeks free AI to rewrite the fire OpenAI tried to put out
1
[removed]
2025-06-24T19:06:17
https://www.reddit.com/r/LocalLLaMA/comments/1ljjxgx/angry_creator_seeks_free_ai_to_rewrite_the_fire/
A_R_N-c_a
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljjxgx
false
null
t3_1ljjxgx
/r/LocalLLaMA/comments/1ljjxgx/angry_creator_seeks_free_ai_to_rewrite_the_fire/
false
false
self
1
null
Deepseek R1 lied about its codeforces rating to be 2029?
1
[removed]
2025-06-24T19:08:20
https://www.reddit.com/r/LocalLLaMA/comments/1ljjzfv/deepseek_r1_lied_about_its_codeforces_rating_to/
ThemeResponsible2116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljjzfv
false
null
t3_1ljjzfv
/r/LocalLLaMA/comments/1ljjzfv/deepseek_r1_lied_about_its_codeforces_rating_to/
false
false
https://b.thumbs.redditm…wFv_4ZkkmvLo.jpg
1
null
Combining VRam for Inference
1
[removed]
2025-06-24T19:19:40
https://www.reddit.com/r/LocalLLaMA/comments/1ljkach/combining_vram_for_inference/
Th3OnlyN00b
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljkach
false
null
t3_1ljkach
/r/LocalLLaMA/comments/1ljkach/combining_vram_for_inference/
false
false
self
1
null
Is it normal to have significantly more performance from Qwen 235B compared to Qwen 32B when doing partial offloading?
1
[removed]
2025-06-24T19:28:30
https://www.reddit.com/r/LocalLLaMA/comments/1ljkilv/is_it_normal_to_have_significantly_more/
OUT_OF_HOST_MEMORY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljkilv
false
null
t3_1ljkilv
/r/LocalLLaMA/comments/1ljkilv/is_it_normal_to_have_significantly_more/
false
false
self
1
null
Are we back?
1
[removed]
2025-06-24T19:34:07
https://www.reddit.com/r/LocalLLaMA/comments/1ljkno8/are_we_back/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljkno8
false
null
t3_1ljkno8
/r/LocalLLaMA/comments/1ljkno8/are_we_back/
false
false
self
1
null
The Context Lock-In Problem No One’s Talking About
1
[removed]
2025-06-24T19:56:11
https://www.reddit.com/r/LocalLLaMA/comments/1ljl87q/the_context_lockin_problem_no_ones_talking_about/
Imad-aka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljl87q
false
null
t3_1ljl87q
/r/LocalLLaMA/comments/1ljl87q/the_context_lockin_problem_no_ones_talking_about/
false
false
self
1
null
Knowledge Database Advise needed/ Local RAG for IT Asset Discovery - Best approach for varied data?
1
[removed]
2025-06-24T19:58:54
https://www.reddit.com/r/LocalLLaMA/comments/1ljlalp/knowledge_database_advise_needed_local_rag_for_it/
Rompe101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljlalp
false
null
t3_1ljlalp
/r/LocalLLaMA/comments/1ljlalp/knowledge_database_advise_needed_local_rag_for_it/
false
false
self
1
null
Mobile AI Apps?
2
Kobold.cpp on termux gets pretty decent speed with Gemma 3 4b 6ksm ~~(sutra version is faster and better at writting, decriptions and everything else except extremely low context windows)~~ on snapdragon 850, Mi 9. I could not find other programs or apps that run model locally on android asides from Google Edge Gallery, but edge only supports .task files and is in super alpha launch. Do you guys knows if there's other apps for local? Or if there's something in development yet?
2025-06-24T20:10:12
https://www.reddit.com/r/LocalLLaMA/comments/1ljll8c/mobile_ai_apps/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljll8c
false
null
t3_1ljll8c
/r/LocalLLaMA/comments/1ljll8c/mobile_ai_apps/
false
false
self
2
null
Please help me understand frequency penalty, presence penalty, and repetition penalty
3
I am writing a (very amateur) python LLM story telling app. My app sends a POST request to Ollama or OpenRouter or whatever backend you want, with a bunch of parameters I found online. Things like model, prompt, and these three penalties: presence\_penalty, frequency\_penalty, and repetition\_penalty. In debugging I was struck by how little I understand these parameters. Specifically: Do these penalties count the tokens in your \*prompt\*? Or only tokens in the \*LLM's response\*? I've asked ChatGPT, Claude, Gemini, Brave Search Leo, and several others, and I've gotten different answers. The most common answer is that only the response and not the prompt are counted. My debugging likewise suggested only the response is counted. The LLM performs very differently when asked for a single 500-token generation vs. 10 separate 50-token generations. It appeared that the single 500-token generation eventually stopped using punctuation; while that never happened in the separate 50-token generations. I hypothesized that in the single long generation, the penalties accrue over time, eventually obliterating punctuation. But in multiple smaller generations, the penalties don't have time to stack up as much, because they aren't counting the tokens from previous steps (in the prompt). For testing, I entered the following using my app on OpenRouter with Mistral Small 3.2 24B Instruct. The parameters used were temperature 0.0, top\_p 0.8, top\_k 80, repetition\_penalty 1.0, frequency\_penalty 0.0, presence\_penalty 0.0. \>The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. 
The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. The color is blue. Now what color was it? ### INSTRUCTION: Continue the story. (note that the prompt would have also contained a system prompt and other information generated by my app) With the parameters above, the baseline response is: "The color was blue. The color was blue. The color was blue. The color was blue. The color was blue. The color was blue. The color was blue. The color was blue. The color was blue. The color was blue. The color was blue. The color was blue." Changing ONLY presence\_penalty 2.0, the response is: "The color was blue. The color is always blue. \\n But now, the air hummed with an unfamiliar tension. A faint metallic scent lingered beneath the usual saltiness of the sea breeze, sharp and unsettling." (The presence penalty did not consider tokens in the prompt. The LLM immediately repeated the statement, "The color was blue." Subsequently, however, those tokens were now in the LLM response, and the presenece penalty began to count them. They couldn't be repeated the way they were in the baseline response.) Changing ONLY frequency\_penalty 2.0, the response is: "The color was blue. The color is always blue. \\n But now, the air hummed with something new—a low, insistent vibration that prickled against her skin like static electricity." (Similarly, the frequency penalty did not consider tokens in the prompt. The LLM only stopped repeating the statement, "The color was blue." 
once those tokens made it into the LLM response. Then they couldn't be repeated anymore like the baseline response.) Changing ONLY repetition\_penalty 2.0, the response is: "Lena's fingers trembled as she traced her name etched into one corner—her initials carved deep enough for time itself. \\n "Who did this?" Her voice cracked like dry leaves underfoot; there were no footprints in sight except hers now blurred together where they'd paced earlier today searching" (The repetition penalty DID consider tokens in the prompt. The LLM could not repeat the statement, "The color is blue." even in the very first tokens it generated. Unlike the other penalties, that statement wasn't repeated even once.) I did not expect these penalties to work differently in this regard. Notably, OpenRouter's website says the "input" is counted for the presence and frequency penalties. https://openrouter.ai/docs/api-reference/parameters. But my finding seems to directly contradict that. I was hoping one of you fine folks would know better than me. Hey, maybe it's API specific? Wouldn't that be fun?
2025-06-24T20:10:27
https://www.reddit.com/r/LocalLLaMA/comments/1ljllgo/please_help_me_understand_frequency_penalty/
ruumies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljllgo
false
null
t3_1ljllgo
/r/LocalLLaMA/comments/1ljllgo/please_help_me_understand_frequency_penalty/
false
false
self
3
null
Subreddit back in business
632
As most of you folks I'm also not sure what happened but I'm attaching screenshot of the last actions taken by the previous moderator before deleting their account
2025-06-24T20:16:36
https://i.redd.it/1sx7mwusnx8f1.jpeg
HOLUPREDICTIONS
i.redd.it
1970-01-01T00:00:00
0
{}
1ljlr5b
false
null
t3_1ljlr5b
/r/LocalLLaMA/comments/1ljlr5b/subreddit_back_in_business/
false
false
default
632
{'enabled': True, 'images': [{'id': '1sx7mwusnx8f1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/1sx7mwusnx8f1.jpeg?width=108&crop=smart&auto=webp&s=505c891feba9f30abe8510ca980b0be8bd842a92', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/1sx7mwusnx8f1.jpeg?width=216&crop=smart&auto=webp&s=66599d6e60fbf0bb6a76c40d955d120c611794ff', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/1sx7mwusnx8f1.jpeg?width=320&crop=smart&auto=webp&s=2f586dfd484daea2c124a9753058901d9c8b2041', 'width': 320}, {'height': 390, 'url': 'https://preview.redd.it/1sx7mwusnx8f1.jpeg?width=640&crop=smart&auto=webp&s=a3f5a6313e8a4b034a44e79151a371760d959973', 'width': 640}], 'source': {'height': 470, 'url': 'https://preview.redd.it/1sx7mwusnx8f1.jpeg?auto=webp&s=407f4c976126c48ea27aa3728c2c1646527368a4', 'width': 770}, 'variants': {}}]}
Is it normal to have significantly more performance from Qwen 235B compared to Qwen 32B when doing partial offloading?
3
here are the llama-swap settings I am running, my hardware is a xeon e5-2690v4 with 128GB of 2400 DDR4 and 2 P104-100 8GB GPUs, while prompt processing is faster on the 32B (12 tk/s vs 5 tk/s) the actual inference is much faster on the 235B, 5tk/s vs 2.5 tk/s. Does anyone know why this is? Even if the 235B only has 22B active parameters more of those parameters should be offloaded than for the entire 32B model. ``` "Qwen3:32B": proxy: http://127.0.0.1:9995 checkEndpoint: /health ttl: 1800 cmd: > ~/raid/llama.cpp/build/bin/llama-server --port 9995 --no-webui --no-warmup --model ~/raid/models/Qwen3-32B-Q4_K_M.gguf --flash-attn --cache-type-k f16 --cache-type-v f16 --gpu-layers 34 --split-mode layer --ctx-size 32768 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 --presence-penalty 1.5 "Qwen3:235B": proxy: http://127.0.0.1:9993 checkEndpoint: /health ttl: 1800 cmd: > ~/raid/llama.cpp/build/bin/llama-server --port 9993 --no-webui --no-warmup --model ~/raid/models/Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf --flash-attn --cache-type-k f16 --cache-type-v f16 --gpu-layers 95 --split-mode layer --ctx-size 32768 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 --presence-penalty 1.5 --override-tensor exps=CPU ```
2025-06-24T20:24:47
https://www.reddit.com/r/LocalLLaMA/comments/1ljlyrs/is_it_normal_to_have_significantly_more/
OUT_OF_HOST_MEMORY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljlyrs
false
null
t3_1ljlyrs
/r/LocalLLaMA/comments/1ljlyrs/is_it_normal_to_have_significantly_more/
false
false
self
3
null
0.5 tok/s with R1 Q4 on EPYC 7C13 with 1TB of RAM, BIOS settings to blame?
12
[Now I've got your attention, I hope!](https://i.redd.it/8wptyvlppx8f1.gif) Hi there everyone! I've just recently assembled an entire home server system, however, for some reason, the performance I'm getting is atrocious with 1TB of 2400MHz RAM on EPYC 7C13 running on Gigabyte MZ32-AR1. I'm getting 3-12 tok/s on prompt eval (depending on context), and 0.3-0.6 tok/s generation. Now, the model I'm running is Ubergarm's R1 0528 IQ4\_KS\_R4, on ik\_llama, so that's a bit different than what a lot of people here are running. However, on the more 'standard' R1 GGUFs from Unsloth, the performance is even worse, and that's true across everything I've tried, Kobold.cpp, LMstudio, Ollama, etc. True of other LLMs as well such as Qwen, people report way better tok/s with the same/almost the same CPU and system. So, here's my request, if anyone is in the know, can you please share the BIOS options that I should use to optimize this CPU for LLM interference? I'm ready to sacrifice pretty much any setting/feature if that means I will be able to get this running in line with what other people online are getting. Also, I know what you think, the model is entirely mlock'ed and is using 128 threads, my OS is Ubuntu 25.04, and other than Ubuntu's tendency to set locked memory to just 128 or so gigs every time I reboot which can be simply fixed with sudo su and then ulimit -Hl and -l, I don't seem to have any issues on the OS side, so that's where my entire guess of this being the BIOS settings fault comes from. Thank you so much for reading all of this, and have a great day!
2025-06-24T20:27:16
https://www.reddit.com/r/LocalLLaMA/comments/1ljm13j/05_toks_with_r1_q4_on_epyc_7c13_with_1tb_of_ram/
BasicCoconut9187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljm13j
false
null
t3_1ljm13j
/r/LocalLLaMA/comments/1ljm13j/05_toks_with_r1_q4_on_epyc_7c13_with_1tb_of_ram/
false
false
https://a.thumbs.redditm…fc5tgINc2Dq8.jpg
12
null
Polaris: A Post-training recipe for scaling RL on Advanced ReasonIng models
46
Here is the link. I have no idea what it is, but it was released a few days ago and has an intriguing concept, so I decided to post here to see if anyone knows about this. It seems pretty new, but it's some sort of post-training RL with a unique approach that claims a Qwen3-4B performance boost surpassing Claude-4-Opus, Grok-3-Beta, and o3-mini-high. Take it with a grain of salt. I am not in any way affiliated with this project. Someone simply recommended it to me, so I posted it here to gather your thoughts.
2025-06-24T20:28:55
https://www.reddit.com/r/LocalLLaMA/comments/1ljm2n2/polaris_a_posttraining_recipe_for_scaling_rl_on/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljm2n2
false
null
t3_1ljm2n2
/r/LocalLLaMA/comments/1ljm2n2/polaris_a_posttraining_recipe_for_scaling_rl_on/
false
false
self
46
{'enabled': False, 'images': [{'id': 'E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?width=108&crop=smart&auto=webp&s=5a15e7d7ac0b52dfdf2170149515a16efe27df26', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?width=216&crop=smart&auto=webp&s=96ac0e1a2d6fe53e4a1bd038834c1c791ba1a629', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?width=320&crop=smart&auto=webp&s=392320ca8b14bc7e216d694d0ccd84c7f0cf93d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?width=640&crop=smart&auto=webp&s=edea491b1f73e3b5ccaa1d39c3bec22e2b0715f9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?width=960&crop=smart&auto=webp&s=1db6579a38a2264dea7bc115126bd0ca51c62455', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?width=1080&crop=smart&auto=webp&s=edb02790366083b3b4e4fe2d506911ee2d6efe7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?auto=webp&s=12acd257ffeb9469da1951937e89f27c9a6dca3f', 'width': 1200}, 'variants': {}}]}
Agent Arena – crowdsourced testbed for evaluating AI agents in the wild
11
We just launched Agent Arena -- a crowdsourced testbed for evaluating AI agents in the wild. Think Chatbot Arena, but for agents. It’s completely free to run matches. We cover the inference. I always find myself debating whether to use 4o or o3, but now I just try both on Agent Arena! Try it out: https://obl.dev/
2025-06-24T20:29:27
https://www.reddit.com/r/LocalLLaMA/comments/1ljm32s/agent_arena_crowdsourced_testbed_for_evaluating/
tejpal-obl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljm32s
false
null
t3_1ljm32s
/r/LocalLLaMA/comments/1ljm32s/agent_arena_crowdsourced_testbed_for_evaluating/
false
false
self
11
null
LocalLlama is saved!
564
LocalLlama has been many folks' favorite place for everything AI, so it's good to see a new moderator taking the reins! Thanks to u/HOLUPREDICTIONS for taking over! More detail here: [https://www.reddit.com/r/LocalLLaMA/comments/1ljlr5b/subreddit\_back\_in\_business/](https://www.reddit.com/r/LocalLLaMA/comments/1ljlr5b/subreddit_back_in_business/)  TLDR: the previous moderator (we appreciate their work) unfortunately left the subreddit and deleted new comments and posts; that restriction is now lifted!
2025-06-24T20:30:08
https://www.reddit.com/r/LocalLLaMA/comments/1ljm3pb/localllama_is_saved/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljm3pb
false
null
t3_1ljm3pb
/r/LocalLLaMA/comments/1ljm3pb/localllama_is_saved/
false
false
self
564
null
We built a tool that helps you plan features before using AI to code (public beta launch)
0
2025-06-24T20:41:25
https://v.redd.it/8td2dkxbsx8f1
eastwindtoday
/r/LocalLLaMA/comments/1ljmdzg/we_built_a_tool_that_helps_you_plan_features/
1970-01-01T00:00:00
0
{}
1ljmdzg
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8td2dkxbsx8f1/DASHPlaylist.mpd?a=1753519303%2CN2QyMDA0NWYxYWMyMTUzZmM1NDYzN2VlNTc1NmUwYmExYjgzOWViMmQ4NjM0MGM2ZWUwMmUzODc4NDA4NmRjYw%3D%3D&v=1&f=sd', 'duration': 377, 'fallback_url': 'https://v.redd.it/8td2dkxbsx8f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/8td2dkxbsx8f1/HLSPlaylist.m3u8?a=1753519303%2CYjBlMDlhMjc3ZDkwZDk1NzU1OWUzYTI4ODY1MWJiNzIxNmJlYTRmNzNiZTkwNzgzMDZkZDQwZDkyMDFmYWVlMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8td2dkxbsx8f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1784}}
t3_1ljmdzg
/r/LocalLLaMA/comments/1ljmdzg/we_built_a_tool_that_helps_you_plan_features/
false
false
https://external-preview…c8761d0582709108
0
{'enabled': False, 'images': [{'id': 'OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?width=108&crop=smart&format=pjpg&auto=webp&s=1a05d63253a12b40a6d990bf048acd8f7df08fe5', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?width=216&crop=smart&format=pjpg&auto=webp&s=a9ae72ae4e51f78cd12adc28b722c3d9053b68f5', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?width=320&crop=smart&format=pjpg&auto=webp&s=e1baeb0499a061c7755ca3378b9c93c0713d1096', 'width': 320}, {'height': 387, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?width=640&crop=smart&format=pjpg&auto=webp&s=c641a834d198e29b8b81701c8568940a03042192', 'width': 640}, {'height': 581, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?width=960&crop=smart&format=pjpg&auto=webp&s=675509e91e328a0e7eb73bbf9d67c59e25921a09', 'width': 960}, {'height': 653, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=719f8488f5936d2679c8f9f9765e015d3c9dc590', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?format=pjpg&auto=webp&s=b83becdcad79e4d668a38e92e948baf9c4d72799', 'width': 1784}, 'variants': {}}]}
Vision model for detecting welds?
3
I searched for the "best vision models" to date, but is there any difference between models for industry applications and "document scanning" models? Should we proceed to fine-tune them with photos to identify correct welds vs incorrect welds? Can anyone offer guidance on vision models for industry applications (mainly the construction industry)?
2025-06-24T20:49:23
https://www.reddit.com/r/LocalLLaMA/comments/1ljmlcn/vision_model_for_detecting_welds/
-Fake_GTD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljmlcn
false
null
t3_1ljmlcn
/r/LocalLLaMA/comments/1ljmlcn/vision_model_for_detecting_welds/
false
false
self
3
null
Falcon H1 Models
1
Why is this model family slept on? From what I understood, it's a new hybrid architecture, and it already has good results. Am I missing something?
2025-06-24T21:05:04
https://www.reddit.com/r/LocalLLaMA/comments/1ljmzvi/falcon_h1_models/
Daemontatox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ljmzvi
false
null
t3_1ljmzvi
/r/LocalLLaMA/comments/1ljmzvi/falcon_h1_models/
false
false
self
1
null