Dataset columns (reconstructed from the flattened viewer header):

| field | dtype | observed range |
|:-|:-|:-|
| title | string | length 1-300 |
| score | int64 | 0-8.54k |
| selftext | string | length 0-40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29 |
| url | string | length 0-878 |
| author | string | length 3-20 |
| domain | string | length 0-82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18 |
| gilded | int64 | 0-2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646-1.8k |
| name | string | length 10 |
| permalink | string | length 33-82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4-213 |
| ups | int64 | 0-8.54k |
| preview | string | length 301-5.01k |
C# Flash Card Generator
4
I'm posting this here mainly as an example app for the .NET lovers out there. Public domain. [https://github.com/dpmm99/Faxtract](https://github.com/dpmm99/Faxtract) is a rather simple ASP.NET web app using LLamaSharp (a llama.cpp wrapper) to perform batched inference. It accepts PDF, HTML, or TXT files and breaks them into fairly small chunks, but you can use the Extra Context checkbox to add a course, chapter title, page title, or whatever context you think would keep the generated flash cards consistent.

A few screenshots:

[Upload form and inference progress display](https://preview.redd.it/33ovon2np05f1.png?width=1945&format=png&auto=webp&s=cfa2ee2fdd2585f641fd5db8eca3b02252c41c41)

[Download button and chunks/generated flash card counts display](https://preview.redd.it/793ui5mrp05f1.png?width=662&format=png&auto=webp&s=c859c8ed78928e5f219404082e36a35faef2bbb1)

[Reviewing a chunk and its generated flash cards](https://preview.redd.it/ddfkskv3q05f1.png?width=994&format=png&auto=webp&s=86629c7df7c4b0df4a98665e843a63a9ec2f4e0a)
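For the gist without opening the repo: the core pattern is just "chunk the document, prepend the optional extra context, ask for flash cards." A hypothetical Python sketch of that loop (not Faxtract's actual C# implementation; the chunk size and prompt wording are invented):

```python
# Hypothetical sketch of the chunk + "Extra Context" prompting pattern;
# not Faxtract's actual C# code. Chunk size and prompt text are invented.
def make_flashcard_prompts(document_text: str, extra_context: str = "",
                           chunk_chars: int = 1500):
    chunks = [document_text[i:i + chunk_chars]
              for i in range(0, len(document_text), chunk_chars)]
    for chunk in chunks:
        yield (f"Context: {extra_context}\n\n"
               f"Source text:\n{chunk}\n\n"
               "Write flash cards as Q:/A: pairs covering this text.")
```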
2025-06-05T02:18:51
https://www.reddit.com/r/LocalLLaMA/comments/1l3nwic/c_flash_card_generator/
DeProgrammer99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3nwic
false
null
t3_1l3nwic
/r/LocalLLaMA/comments/1l3nwic/c_flash_card_generator/
false
false
https://b.thumbs.redditm…4OKe9CLeYgHQ.jpg
4
[image preview metadata omitted]
Local AI smart speaker
7
I was wondering if there were any low cost options for a Bluetooth speaker/microphone to connect to my server for voice chat with a local llm. Can an old echo or something be repurposed?
2025-06-05T02:53:13
https://www.reddit.com/r/LocalLLaMA/comments/1l3ok95/local_ai_smart_speaker/
Llamapants
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3ok95
false
null
t3_1l3ok95
/r/LocalLLaMA/comments/1l3ok95/local_ai_smart_speaker/
false
false
self
7
null
why aren’t we seeing more real products built with local LLMs?
1
[removed]
2025-06-05T02:55:38
https://www.reddit.com/r/LocalLLaMA/comments/1l3olw3/why_arent_we_seeing_more_real_products_built_with/
mindfulbyte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3olw3
false
null
t3_1l3olw3
/r/LocalLLaMA/comments/1l3olw3/why_arent_we_seeing_more_real_products_built_with/
false
false
self
1
null
why isn’t anyone building legit tools with local LLMs?
54
Asked this in a recent comment but curious what others think: I could be missing it, but why aren't more niche on-device products being built? Not talking wrappers or playgrounds; I mean real, useful tools powered by local LLMs. Models are getting small enough that 3B and below is workable for a lot of tasks. The potential upside is clear to me, so what's the blocker? Compute? Distribution? User experience?
2025-06-05T03:00:37
https://www.reddit.com/r/LocalLLaMA/comments/1l3op8b/why_isnt_anyone_building_legit_tools_with_local/
mindfulbyte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3op8b
false
null
t3_1l3op8b
/r/LocalLLaMA/comments/1l3op8b/why_isnt_anyone_building_legit_tools_with_local/
false
false
self
54
null
HP Z440 5x GPU build
6
Hello everyone, I was about to build a very expensive machine with a brand-new EPYC Milan CPU and a ROMED8-2T in a mining rack with 5 3090s mounted via risers, since I couldn't find any used EPYC CPUs or motherboards here in India. I had a spare Z440, and it has 2 x16 slots and 1 x8 slot.

Q.1 Is this a good idea? The Z440 was the cheapest X99-era system around here.

Q.2 Can I split the x16s to x8/x8 and mount 5 GPUs at x8 PCIe 3.0 speeds on a Z440? I was planning to put this in an 18U rack with PCIe extensions coming out of the Z440 chassis and somehow mounting the GPUs in the rack.

Q.3 What's the best way of mounting the GPUs above the chassis? I would also need at least 1 external PSU to be mounted somewhere outside the chassis.
2025-06-05T03:15:30
https://www.reddit.com/r/LocalLLaMA/comments/1l3oz8o/hp_z440_5x_gpu_build/
BeeNo7094
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3oz8o
false
null
t3_1l3oz8o
/r/LocalLLaMA/comments/1l3oz8o/hp_z440_5x_gpu_build/
false
false
self
6
null
OpenAI should open source GPT3.5 turbo
124
Don't have a real point here, just the title; food for thought. I think it would be a pretty cool thing to do. At this point it's extremely out of date, so they wouldn't be losing any "edge"; it would just be a cool thing to do/have and would be a nice throwback. OpenAI's 10th anniversary is coming up in December, so it would be a pretty cool thing to do, just sayin'.
2025-06-05T03:18:48
https://www.reddit.com/r/LocalLLaMA/comments/1l3p1f0/openai_should_open_source_gpt35_turbo/
Expensive-Apricot-25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3p1f0
false
null
t3_1l3p1f0
/r/LocalLLaMA/comments/1l3p1f0/openai_should_open_source_gpt35_turbo/
false
false
self
124
null
Qwen3 235B Q2_K_L Repeats Letters Despite Penalties.
1
My intended use case is as a backend for Cline. For this, I am using the Qwen3 235B Q2_K_L model. I keep encountering repetition issues (specifically, endless repetition of the last letter), even after adding penalty parameters. I'm not sure if my launch method is correct. Here's my current launch command:

```
./llama-server.exe -m "E:\Qwen3-235B-A22B-GGUF\Q2_K_L\Qwen3-235B-A22B-Q2_K_L-00001-of-00002.gguf" -c 2048 -ngl 95 --no-mmap --dry-multiplier 0.8 --dry-base 1.75 --dry-allowed-length 2 --temp 0.6 --min-p 0.01 --top-p 0.9 --presence-penalty 1.5 --frequency-penalty 1.0
```

Any suggestions? Thanks!
2025-06-05T04:06:36
https://www.reddit.com/r/LocalLLaMA/comments/1l3pwjg/qwen3_235b_q2_k_l_repeats_letters_despite/
realJoeTrump
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3pwjg
false
null
t3_1l3pwjg
/r/LocalLLaMA/comments/1l3pwjg/qwen3_235b_q2_k_l_repeats_letters_despite/
false
false
self
1
null
Niche Q but want to ask in an active community: what’s the cheapest transcription tool for audio that contains medical terminology?
1
[removed]
2025-06-05T04:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1l3qaia/niche_q_but_want_to_ask_in_an_active_community/
adrenalinsufficiency
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qaia
false
null
t3_1l3qaia
/r/LocalLLaMA/comments/1l3qaia/niche_q_but_want_to_ask_in_an_active_community/
false
false
self
1
null
RTX PRO 6000 machine for 12k?
12
Hi, is there a company that sells a complete machine (CPU, RAM, GPU, drive, motherboard, case, power supply, etc., all wired up) with an RTX 6000 Pro for 12k USD or less? The card itself is around 7-8k I think, which leaves 4k for the other components. Is this economically possible?

Bonus point: the machine supports adding another RTX 6000 GPU in the future to get 2x 96 GB of VRAM.
2025-06-05T04:37:26
https://www.reddit.com/r/LocalLLaMA/comments/1l3qfhh/rtx_pro_6000_machine_for_12k/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qfhh
false
null
t3_1l3qfhh
/r/LocalLLaMA/comments/1l3qfhh/rtx_pro_6000_machine_for_12k/
false
false
self
12
null
how good is local llm compared with claude / chatgpt?
0
Just curious: is it worth the effort to set up a local LLM?
2025-06-05T05:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1l3qvbu/how_good_is_local_llm_compared_with_claude_chatgpt/
anonymous_2600
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qvbu
false
null
t3_1l3qvbu
/r/LocalLLaMA/comments/1l3qvbu/how_good_is_local_llm_compared_with_claude_chatgpt/
false
false
self
0
null
Dealing with tool_calls hallucinations
5
Hi all, I have a specific prompt that should output JSON, but for some reason the LLM decides to use a made-up tool call. llama.cpp running Qwen 30B. How do you handle these things? I tried passing an empty array to tools: [] and begged the LLM not to use tool calls. Driving me mad!
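One workaround worth knowing about (a known llama.cpp feature, not something claimed in this post): instead of begging, constrain decoding. llama-server's OpenAI-compatible endpoint accepts a response_format field that restricts sampling to valid JSON, so the model can't wander off into an invented tool call. A minimal sketch, assuming llama-server on its default port:

```python
import requests

# Assumes llama-server is running locally on its default port 8080.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "Reply only with a JSON object."},
            {"role": "user", "content": "Summarize this: the sky is blue."},
        ],
        # Grammar-constrained sampling: output must parse as JSON, so the
        # model cannot emit a made-up tool call.
        "response_format": {"type": "json_object"},
        "temperature": 0.2,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```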
2025-06-05T05:06:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3qxas/dealing_with_tool_calls_hallucinations/
EstebanGee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qxas
false
null
t3_1l3qxas
/r/LocalLLaMA/comments/1l3qxas/dealing_with_tool_calls_hallucinations/
false
false
self
5
null
Deal of the century - or at least great value for money
0
[https://www.ebay.com/str/ipowerresaleinc](https://www.ebay.com/str/ipowerresaleinc)
2025-06-05T05:56:37
https://www.reddit.com/r/LocalLLaMA/comments/1l3rps3/deal_of_the_century_or_atleast_great_value_for/
weight_matrix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3rps3
false
null
t3_1l3rps3
/r/LocalLLaMA/comments/1l3rps3/deal_of_the_century_or_atleast_great_value_for/
false
false
self
0
null
Mix and Match
2
I have a 4070 Super in my current computer, and I still have an old 3060 Ti from my last upgrade. Can it run at the same time as my 4070 to add more VRAM?
2025-06-05T06:08:03
https://www.reddit.com/r/LocalLLaMA/comments/1l3rwit/mix_and_match/
Doomkeepzor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3rwit
false
null
t3_1l3rwit
/r/LocalLLaMA/comments/1l3rwit/mix_and_match/
false
false
self
2
null
Interactive Results Browser for Misguided Attention Eval
6
Thanks to Gemini 2.5 Pro, there is now an [interactive results browser](https://cpldcpu.github.io/MisguidedAttention/) for the [misguided attention eval](https://github.com/cpldcpu/MisguidedAttention). The last wave of new models got significantly better at correctly responding to the prompts, especially reasoning models. Currently, DS-R1-0528 is leading the pack. Claude Opus 4 is almost at the top of the chart even in non-thinking mode. I haven't run it in thinking mode yet (it's not available on OpenRouter), but I assume it would jump ahead.
2025-06-05T06:24:23
https://www.reddit.com/r/LocalLLaMA/comments/1l3s5wh/interactive_results_browser_for_misguided/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3s5wh
false
null
t3_1l3s5wh
/r/LocalLLaMA/comments/1l3s5wh/interactive_results_browser_for_misguided/
false
false
self
6
null
Need your Feedback
1
[removed]
2025-06-05T07:08:37
https://www.reddit.com/r/LocalLLaMA/comments/1l3su8c/need_your_feedback/
Careless_Werewolf148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3su8c
false
null
t3_1l3su8c
/r/LocalLLaMA/comments/1l3su8c/need_your_feedback/
false
false
self
1
null
Easiest way to access multiple Social Medias with LLMs
1
[removed]
2025-06-05T07:36:30
https://www.reddit.com/r/LocalLLaMA/comments/1l3t9a7/easiest_way_to_access_multiple_social_medias_with/
Ok_GreyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3t9a7
false
null
t3_1l3t9a7
/r/LocalLLaMA/comments/1l3t9a7/easiest_way_to_access_multiple_social_medias_with/
false
false
self
1
null
vLLM with 4x 7900 XTX and Qwen3-235B-A22B-UD-Q2_K_XL
20
Hello Reddit! Our "AI" computer now has 4x RX 7900 XTX and 1x RX 7800 XT. llama-server works well, and we successfully launched Qwen3-235B-A22B-UD-Q2_K_XL with a 40,960 context length.

| GPU | Backend | Input | Output |
|:-|:-|:-|:-|
| 4x 7900 XTX | HIP llama-server, -fa | 160 t/s (356 tokens) | 20 t/s (328 tokens) |
| 4x 7900 XTX | HIP llama-server, -fa --parallel 2 (2 requests at once) | 130 t/s (58 t/s + 72 t/s) | 13.5 t/s (7 t/s + 6.5 t/s) |
| 3x 7900 XTX + 1x 7800 XT | HIP llama-server, -fa | ... | 16-18 t/s |

**Question to discuss:** Is it possible to run this model from Unsloth AI faster using vLLM on AMD, or is there no way to launch GGUF? Can we offload layers to each GPU in a smarter way?

If you've run a similar model (even on different GPUs), please share your results. If you're considering setting up a test (perhaps even on AMD hardware), feel free to ask any relevant questions here.
2025-06-05T07:41:40
https://www.reddit.com/r/LocalLLaMA/comments/1l3tby7/vllm_with_4x7900xtx_with_qwen3235ba22budq2_k_xl/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3tby7
false
null
t3_1l3tby7
/r/LocalLLaMA/comments/1l3tby7/vllm_with_4x7900xtx_with_qwen3235ba22budq2_k_xl/
false
false
self
20
null
Best TTS
1
[removed]
2025-06-05T08:39:35
https://www.reddit.com/r/LocalLLaMA/comments/1l3u59g/best_tts/
SmoothRock54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3u59g
false
null
t3_1l3u59g
/r/LocalLLaMA/comments/1l3u59g/best_tts/
false
false
self
1
null
I organized a 100-game Town of Salem competition featuring the best models as players. Game logs are available too.
113
As many of you probably know, Town of Salem is a popular game. If you don't know what I'm talking about, you can read the game_rules.yaml in the repo. My personal preference has always been to moderate rather than play among friends. Two weeks ago, I had the idea to make LLMs play this game to have fun and see who is the best. Imo, this is a great way to measure LLM capabilities across several crucial areas: contextual understanding, managing information privacy, developing sophisticated strategies, employing deception, and demonstrating persuasive skills.

I'll be sharing charts based on a simulation of 100 games. For a deeper dive into the methodology, more detailed results, and more charts, please visit the repo: https://github.com/summersonnn/Town-Of-Salem-with-LLMs

Total dollars spent: ~$60, half of which went to the new Claude models. Looking at the results, I see those $30 were spent for nothing :D

Vampire points are calculated as follows (see the sketch below):
- If vampires win and a vampire is alive at the end, that vampire earns 1 point.
- If vampires win but the vampire is dead, they receive 0.5 points.

Peasant survival rate is calculated as follows: sum the total number of rounds survived across all games that this model/player has participated in, and divide by the total number of rounds played in those same games. Win ratios are self-explanatory.

Quick observations:
- The new DeepSeek, even the distilled Qwen version, is very good at this game.
- The Claude models and Grok are the worst.
- GPT-4.1 is also very successful.
- The Gemini models are average in general but perform best when playing peasant.

Overall win ratios:
- Vampires: 34/100 = 34%
- Peasants: 45/100 = 45%
- Clown: 21/100 = 21%
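A minimal sketch of how those two metrics could be computed, assuming a hypothetical per-game record layout (the field names are illustrative, not the repo's actual schema):

```python
# Hypothetical game records; field names are illustrative only.
games = [
    {"winner": "vampires", "total_rounds": 5,
     "players": {"model_a": {"role": "vampire", "alive_at_end": True, "rounds_survived": 5},
                 "model_b": {"role": "peasant", "alive_at_end": False, "rounds_survived": 3}}},
]

def vampire_points(model: str) -> float:
    """1 point per vampire win while alive, 0.5 if the winning vampire died."""
    pts = 0.0
    for g in games:
        p = g["players"].get(model)
        if p and p["role"] == "vampire" and g["winner"] == "vampires":
            pts += 1.0 if p["alive_at_end"] else 0.5
    return pts

def peasant_survival_rate(model: str) -> float:
    """Rounds survived across this player's games / total rounds in those games."""
    played = [g for g in games if model in g["players"]]
    survived = sum(g["players"][model]["rounds_survived"] for g in played)
    total = sum(g["total_rounds"] for g in played)
    return survived / total if total else 0.0

print(vampire_points("model_a"))          # 1.0
print(peasant_survival_rate("model_b"))   # 0.6
```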
2025-06-05T08:43:52
https://www.reddit.com/gallery/1l3u7e9
kyazoglu
reddit.com
1970-01-01T00:00:00
0
{}
1l3u7e9
false
null
t3_1l3u7e9
/r/LocalLLaMA/comments/1l3u7e9/i_organized_a_100game_town_of_salem_competition/
false
false
https://b.thumbs.redditm…P1vzWdIvOaqs.jpg
113
null
Aider & Full Automation: Seeking direct system command execution (not just simulation)
1
[removed]
2025-06-05T09:56:51
https://www.reddit.com/r/LocalLLaMA/comments/1l3v9h8/aider_full_automation_seeking_direct_system/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3v9h8
false
null
t3_1l3v9h8
/r/LocalLLaMA/comments/1l3v9h8/aider_full_automation_seeking_direct_system/
false
false
self
1
null
On Prem LLM plug-and-play ‘package’ for SME organisational context
1
[removed]
2025-06-05T10:03:36
https://www.reddit.com/r/LocalLLaMA/comments/1l3vdht/on_prem_llm_plugandplay_package_for_sme/
jon18476
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3vdht
false
null
t3_1l3vdht
/r/LocalLLaMA/comments/1l3vdht/on_prem_llm_plugandplay_package_for_sme/
false
false
self
1
null
AI Linter VS Code suggestions
3
What is a good extension to use a local model as a linter? I do not want AI-generated code; I only want the AI to act as a linter and say, "hey, you seem to be missing a zero in this integer," and catch obvious problems like that, but problems not so obvious that a normal linter can find them. Ideally it would trigger a warning at a line in the code rather than open a big chat box for all problems, which can be annoying to shuffle through.
2025-06-05T10:26:43
https://www.reddit.com/r/LocalLLaMA/comments/1l3vqut/ai_linter_vs_code_suggestions/
DoggoChann
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3vqut
false
null
t3_1l3vqut
/r/LocalLLaMA/comments/1l3vqut/ai_linter_vs_code_suggestions/
false
false
self
3
null
New embedding model "Qwen3-Embedding-0.6B-GGUF" just dropped.
446
Anyone tested it yet?
2025-06-05T10:30:53
https://huggingface.co/Qwen/Qwen3-Embedding-0.6B-GGUF
Proto_Particle
huggingface.co
1970-01-01T00:00:00
0
{}
1l3vt95
false
null
t3_1l3vt95
/r/LocalLLaMA/comments/1l3vt95/new_embedding_model_qwen3embedding06bgguf_just/
false
false
https://b.thumbs.redditm…JYYUsB3eiQXw.jpg
446
[image preview metadata omitted]
Enterprise AI agents
1
[removed]
2025-06-05T10:35:49
https://www.reddit.com/r/LocalLLaMA/comments/1l3vw33/enterprise_ai_agents/
yecohn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3vw33
false
null
t3_1l3vw33
/r/LocalLLaMA/comments/1l3vw33/enterprise_ai_agents/
false
false
self
1
null
Check out this new VSCode Extension! Query multiple BitNet servers from within GitHub Copilot via the Model Context Protocol all locally!
4
[https://marketplace.visualstudio.com/items?itemName=nftea-gallery.bitnet-vscode-extension](https://marketplace.visualstudio.com/items?itemName=nftea-gallery.bitnet-vscode-extension)
2025-06-05T11:17:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3wloi/check_out_this_new_vscode_extension_query/
ufos1111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3wloi
false
null
t3_1l3wloi
/r/LocalLLaMA/comments/1l3wloi/check_out_this_new_vscode_extension_query/
false
false
self
4
null
Best simple model for local fine tuning?
19
Back in the day I used to use GPT-2, but TensorFlow has moved on and it's no longer properly supported. Are there any good replacements? I don't need an excellent model at all; something as simple and weak as GPT-2 is ideal (I would much rather have faster training). It'll be unlearning all its written language anyway: I'm tackling a project similar to the one a while back that generated Pokemon sprites by fine-tuning GPT-2.
2025-06-05T11:17:53
https://www.reddit.com/r/LocalLLaMA/comments/1l3wlwy/best_simple_model_for_local_fine_tuning/
Lucario1296
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3wlwy
false
null
t3_1l3wlwy
/r/LocalLLaMA/comments/1l3wlwy/best_simple_model_for_local_fine_tuning/
false
false
self
19
null
Best local LLM for C++
1
[removed]
2025-06-05T11:40:01
https://www.reddit.com/r/LocalLLaMA/comments/1l3x05o/best_locall_llm_for_c/
ayx03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3x05o
false
null
t3_1l3x05o
/r/LocalLLaMA/comments/1l3x05o/best_locall_llm_for_c/
false
false
self
1
null
BAIDU joined huggingface
201
2025-06-05T11:52:51
https://huggingface.co/baidu
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1l3x8fr
false
null
t3_1l3x8fr
/r/LocalLLaMA/comments/1l3x8fr/baidu_joined_huggingface/
false
false
default
201
[image preview metadata omitted]
Looking for Advice: Best LLM/Embedding Models for Precise Document Retrieval (Product Standards)
3
Hi everyone, I’m working on a chatbot for my company to help colleagues quickly find answers in a set of about 60 very similar marketing standards. The documents are all formatted quite similarly, and the main challenge is that when users ask specific questions, the retrieval often pulls the *wrong* standard—or sometimes answers from related but incorrect documents. I’ve tried building a simple RAG pipeline using nomic-embed-text for embeddings and Llama 3.1 or Gemma3:4b as the LLM (all running locally via Streamlit so everyone in the company network can use it). I’ve also experimented with adding a reranker, but it only helps to a certain extent. I’m not an expert in LLMs or information retrieval (just learning as I go!), so I’m looking for advice from people with more experience: * What models or techniques would you recommend for **improving the accuracy of retrieval**, especially when the documents are very similar in structure and content? * Are there specific embedding models or LLMs that perform better for legal/standards texts and can handle fine-grained distinctions between similar documents? * Is there a different approach I should consider (metadata, custom chunking, etc.)? Any advice or pointers (even things you think are obvious!) would be hugely appreciated. Thanks a lot in advance for your help!
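To make the metadata suggestion above concrete, here is a minimal sketch (everything in it is invented for illustration: the standard IDs, the chunk layout, and the matching rule; `embed` stands in for whatever embedding call is already in the pipeline, e.g. nomic-embed-text):

```python
import numpy as np

# Illustrative only: each chunk is tagged with the standard it came from.
chunks = [
    {"standard": "MKT-007", "text": "Logos must keep 10mm of clear space..."},
    {"standard": "MKT-012", "text": "Print ads use the secondary palette..."},
]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(question: str, embed, top_k: int = 5):
    # Step 1: cheap metadata filter. If the question names a standard,
    # restrict the search to that document before any vector math.
    pool = [c for c in chunks if c["standard"].lower() in question.lower()]
    if not pool:
        pool = chunks  # fall back to searching everything

    # Step 2: ordinary embedding similarity within the filtered pool.
    q = embed(question)
    return sorted(pool, key=lambda c: -cosine(q, embed(c["text"])))[:top_k]
```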
2025-06-05T12:29:06
https://www.reddit.com/r/LocalLLaMA/comments/1l3xxpw/looking_for_advice_best_llmembedding_models_for/
Hooches
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3xxpw
false
null
t3_1l3xxpw
/r/LocalLLaMA/comments/1l3xxpw/looking_for_advice_best_llmembedding_models_for/
false
false
self
3
null
Non-reasoning Qwen3-235B worse than Maverick? Is this your experience too?
3
[Intelligence Index: Qwen3-235B no-think beaten by Maverick?](https://preview.redd.it/8e3jqpw4t35f1.png?width=4092&format=png&auto=webp&s=d577dadbcfa9968158c76ae2e2c387bc4ec5dc0e) Is this what you guys are seeing? [Wtf](https://preview.redd.it/c55h532zt35f1.png?width=4092&format=png&auto=webp&s=f87c9ae0b0f143791621f7520c9b01fe9750349e) [Aider Polyglot has very different results... I don't know what to trust now, man](https://preview.redd.it/y4k0rnl0u35f1.png?width=1960&format=png&auto=webp&s=1219a0a1aa94946f75f3934836c8b6332852af9f) Please share your results when using Qwen3 models for coding.
2025-06-05T12:46:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3yamg/nonreasoning_qwen3235b_worse_than_maverick_is/
True_Requirement_891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3yamg
false
null
t3_1l3yamg
/r/LocalLLaMA/comments/1l3yamg/nonreasoning_qwen3235b_worse_than_maverick_is/
false
false
https://b.thumbs.redditm…CsbQy2hsx5nk.jpg
3
null
Qwen3-32b /nothink or qwen3-14b /think?
18
What has been your experience and what are the pro/cons?
2025-06-05T12:58:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3yjeb/qwen332b_nothink_or_qwen314b_think/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3yjeb
false
null
t3_1l3yjeb
/r/LocalLLaMA/comments/1l3yjeb/qwen332b_nothink_or_qwen314b_think/
false
false
self
18
null
4090 boards with 48GB RAM - will there ever be an upgrade service?
6
I keep seeing these cards being sold in China, but I haven't seen anything about being able to upgrade an existing card. Are these Chinese cards just fitted with higher-capacity RAM chips and a different BIOS, or are there PCB-level differences? Does anyone think there's a chance a service will be offered to upgrade these cards?
2025-06-05T13:00:07
https://www.reddit.com/r/LocalLLaMA/comments/1l3ykjn/4090_boards_with_48gb_ram_will_there_ever_be_an/
thisisnotdave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3ykjn
false
null
t3_1l3ykjn
/r/LocalLLaMA/comments/1l3ykjn/4090_boards_with_48gb_ram_will_there_ever_be_an/
false
false
self
6
null
Approach for developing / designing UI
1
[removed]
2025-06-05T13:05:09
https://www.reddit.com/r/LocalLLaMA/comments/1l3yos6/approach_for_developing_designing_ui/
Suspicious_Dress_350
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3yos6
false
null
t3_1l3yos6
/r/LocalLLaMA/comments/1l3yos6/approach_for_developing_designing_ui/
false
false
self
1
null
Best world knowledge model that can run on your phone
39
I basically want Internet-level knowledge when my phone is not connected to the internet (camping etc.). I've heard good things about Gemma 2 2b for creative writing. But is it still the best model for things like world knowledge? Questions like:

- How to identify different clam species
- How to clean clams that you caught
- Easy clam recipes while camping

(Can you tell I'm planning to go clamming while camping?)

Or others like:

- When is low tide typically in June in X location
- Good restaurants near X campsite
- Is it okay to put food inside my car overnight when camping in a place with bears?

Etc.
2025-06-05T13:22:38
https://www.reddit.com/r/LocalLLaMA/comments/1l3z2m3/best_world_knowledge_model_that_can_run_on_your/
clavidk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3z2m3
false
null
t3_1l3z2m3
/r/LocalLLaMA/comments/1l3z2m3/best_world_knowledge_model_that_can_run_on_your/
false
false
self
39
null
Help me please :)
1
[removed]
2025-06-05T13:44:53
https://www.reddit.com/r/LocalLLaMA/comments/1l3zkky/help_me_please/
MackPheson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3zkky
false
null
t3_1l3zkky
/r/LocalLLaMA/comments/1l3zkky/help_me_please/
false
false
self
1
null
Help with audio visualization
1
[removed]
2025-06-05T13:46:28
https://www.reddit.com/r/LocalLLaMA/comments/1l3zlxa/help_with_audio_visualization/
MackPheson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3zlxa
false
null
t3_1l3zlxa
/r/LocalLLaMA/comments/1l3zlxa/help_with_audio_visualization/
false
false
self
1
null
Does newest LM Studio not have Playground tab anymore on Windows
1
[removed]
2025-06-05T14:02:59
https://i.redd.it/hc458irz745f1.jpeg
bilderbergman
i.redd.it
1970-01-01T00:00:00
0
{}
1l3zzn1
false
null
t3_1l3zzn1
/r/LocalLLaMA/comments/1l3zzn1/does_newest_lm_studio_not_have_playground_tab/
false
false
default
1
[image preview metadata omitted]
Building my first AI project (IDE + LLM). How can I protect the idea and deploy it as a total beginner? 🇨🇦
1
[removed]
2025-06-05T14:04:55
https://www.reddit.com/r/LocalLLaMA/comments/1l401dw/building_my_first_ai_project_ide_llm_how_can_i/
Business-Opinion7579
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l401dw
false
null
t3_1l401dw
/r/LocalLLaMA/comments/1l401dw/building_my_first_ai_project_ide_llm_how_can_i/
false
false
self
1
null
Best LLM for a RTX 5090 + 64 GB RAM
1
[removed]
2025-06-05T14:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1l405nq/best_llm_for_a_rtx_5090_64_gb_ram/
tomxposed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l405nq
false
null
t3_1l405nq
/r/LocalLLaMA/comments/1l405nq/best_llm_for_a_rtx_5090_64_gb_ram/
false
false
self
1
null
I wrote a little script to automate commit messages
20
I wrote a little script to automate commit messages. This might be pretty lame, but this is the first time I've actually done any scripting with LLMs to do some task for me. This is just for a personal project git repo, so the stakes are as low as can be for the accuracy of these commit messages. I feel like this is a big upgrade over the quality of my usual messages for a project like this. I found that the outputs for Qwen3 8B Q4_K_M were much better than Gemma3 4B Q4_K_M, possibly to nobody's surprise. I hope this might be of use to someone out there!

```bash
#!/bin/bash

NO_CONFIRM=false
if [[ "$1" == "-y" ]]; then
    NO_CONFIRM=true
fi

# If nothing is staged, offer to stage everything (or do it silently with -y).
diff_output=$(git diff --staged)
echo
if [ -z "${diff_output}" ]; then
    if $NO_CONFIRM; then
        git add *
    else
        read -p "No files staged. Add all and proceed? [y/n] " -n 1 -r
        if [[ $REPLY =~ ^[Yy]$ ]]; then
            git add *
        else
            exit 1
        fi
    fi
fi

diff_output=$(git diff --staged)

prompt="\no-think [INSTRUCTIONS] Write a git commit message for this diff output in the form of a bulleted list, describing the changes to each individual file. Do not include ANY formatting e.g. bold text (**). [DIFF]: $diff_output"

# Generate the message, then strip the <think> tags and blank lines.
response=$(echo "$prompt" | ollama.exe run qwen3)
message=$(echo "$response" | sed -e '/<think>/d' -e '/<\/think>/d' -e "/^$/d")

git status
echo "Commit message:"
echo "$message"
echo

# Commit and push, with a confirmation prompt unless -y was given.
if $NO_CONFIRM; then
    echo "$message" | git commit -qF -
    git push
else
    read -p "Proceed with commit? [y/n] " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        echo "$message" | git commit -qF -
        git push
    else
        git reset HEAD -- .
    fi
fi
```
2025-06-05T14:12:44
https://i.redd.it/shflqezx845f1.png
aiueka
i.redd.it
1970-01-01T00:00:00
0
{}
1l40835
false
null
t3_1l40835
/r/LocalLLaMA/comments/1l40835/i_wrote_a_little_script_to_automate_commit/
false
false
default
20
[image preview metadata omitted]
Hybrid setup for reasoning
9
I want to make a chat assistant for myself that would use Qwen3 8B for the reasoning tokens, stop when it reaches the end-of-thought token, then feed that to Qwen3 30B for the rest. The idea being that I don't mind reading while the text is being generated, but I don't like waiting for it to load. I know there is no free lunch and performance will be reduced. Has anybody tried this? Is it a bad idea?
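For what it's worth, the plumbing is simple if both models sit behind OpenAI-compatible servers. A minimal sketch (the ports are assumptions, and it assumes Qwen3's reasoning is delimited by <think>...</think> so the small model can be stopped at the closing tag):

```python
import requests

SMALL = "http://localhost:8001/v1/completions"  # Qwen3 8B (assumed port)
LARGE = "http://localhost:8002/v1/completions"  # Qwen3 30B (assumed port)

def hybrid_answer(prompt: str) -> str:
    # Stage 1: the small model generates the reasoning, stopping at the
    # end-of-thought tag so it never writes the visible answer.
    r1 = requests.post(SMALL, json={
        "prompt": prompt + "\n<think>\n",
        "stop": ["</think>"],
        "max_tokens": 2048,
    }).json()
    thoughts = r1["choices"][0]["text"]

    # Stage 2: the larger model gets prompt + finished reasoning and
    # only produces the final answer.
    r2 = requests.post(LARGE, json={
        "prompt": prompt + "\n<think>\n" + thoughts + "\n</think>\n",
        "max_tokens": 1024,
    }).json()
    return r2["choices"][0]["text"]

print(hybrid_answer("How many prime numbers are there below 30?"))
```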
2025-06-05T14:22:32
https://www.reddit.com/r/LocalLLaMA/comments/1l40gij/hybrid_setup_for_reasoning/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l40gij
false
null
t3_1l40gij
/r/LocalLLaMA/comments/1l40gij/hybrid_setup_for_reasoning/
false
false
self
9
null
What's the cheapest setup for running full Deepseek R1
110
Looking at how DeepSeek is performing, I'm thinking of setting it up locally. What's the cheapest way to set it up locally so it will have reasonable performance (10-15 t/s?)? I was thinking about 2x EPYC with DDR4-3200, because prices seem reasonable right now for 1TB of RAM, but I'm not sure about the performance. What do you think?
2025-06-05T14:25:05
https://www.reddit.com/r/LocalLLaMA/comments/1l40ip8/whats_the_cheapest_setup_for_running_full/
Wooden_Yam1924
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l40ip8
false
null
t3_1l40ip8
/r/LocalLLaMA/comments/1l40ip8/whats_the_cheapest_setup_for_running_full/
false
false
self
110
null
What are the biggest pain points when evaluating AI agents ?
1
[removed]
2025-06-05T14:58:59
https://www.reddit.com/r/LocalLLaMA/comments/1l41cc8/what_are_the_biggest_pain_points_when_evaluating/
NoAdministration4196
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41cc8
false
null
t3_1l41cc8
/r/LocalLLaMA/comments/1l41cc8/what_are_the_biggest_pain_points_when_evaluating/
false
false
self
1
null
Programming using LLMs is the damnedest thing…
1
[removed]
2025-06-05T14:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1l41cx7/programming_using_llms_is_the_damnedest_thing/
ETBiggs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41cx7
false
null
t3_1l41cx7
/r/LocalLLaMA/comments/1l41cx7/programming_using_llms_is_the_damnedest_thing/
false
false
self
1
null
AI agent evaluation painpoints for developers
1
[removed]
2025-06-05T15:01:37
https://www.reddit.com/r/LocalLLaMA/comments/1l41eyp/ai_agent_evaluation_painpoints_for_developers/
NoAdministration4196
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41eyp
false
null
t3_1l41eyp
/r/LocalLLaMA/comments/1l41eyp/ai_agent_evaluation_painpoints_for_developers/
false
false
self
1
null
DeepSeek’s new R1-0528-Qwen3-8B is the most intelligent 8B parameter model yet, but not by much: Alibaba’s own Qwen3 8B is just one point behind
114
https://preview.redd.it/… your thoughts?
2025-06-05T15:12:32
https://www.reddit.com/r/LocalLLaMA/comments/1l41p1x/deepseeks_new_r10528qwen38b_is_the_most/
ApprehensiveAd3629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41p1x
false
null
t3_1l41p1x
/r/LocalLLaMA/comments/1l41p1x/deepseeks_new_r10528qwen38b_is_the_most/
false
false
https://b.thumbs.redditm…lxbGrth9XgQk.jpg
114
[image preview metadata omitted]
Looking for UI that can store and reference characters easily
3
I am a relative neophyte to locally run LLMs. I've been using them for storytelling, but obviously they get confused once they get close to the context limit. I've just started playing around with SillyTavern via oobabooga, which seems like a popular option, but are there any other UIs that are relatively easy to set up and can store and reference multiple characters when their names or identifiers are used?
2025-06-05T16:00:20
https://www.reddit.com/r/LocalLLaMA/comments/1l42woy/looking_for_ui_that_can_store_and_reference/
Haddock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l42woy
false
null
t3_1l42woy
/r/LocalLLaMA/comments/1l42woy/looking_for_ui_that_can_store_and_reference/
false
false
self
3
null
Sarvam AI (Indian startup) is likely pulling off massive "download farming" on HF
1
[removed]
2025-06-05T16:19:45
https://www.reddit.com/r/LocalLLaMA/comments/1l43emc/sarvam_ai_indian_startup_is_likely_pulling_of/
Ortho-BenzoPhenone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l43emc
false
null
t3_1l43emc
/r/LocalLLaMA/comments/1l43emc/sarvam_ai_indian_startup_is_likely_pulling_of/
false
false
https://b.thumbs.redditm…08ryWxkodWzM.jpg
1
null
New LLM trained to reason on chemistry from language: first step towards scientific agents
1
[removed]
2025-06-05T16:22:42
https://x.com/andrewwhite01/status/1930652479039099072
clefourrier
x.com
1970-01-01T00:00:00
0
{}
1l43hb1
false
null
t3_1l43hb1
/r/LocalLLaMA/comments/1l43hb1/new_llm_trained_to_reason_on_chemistry_from/
false
false
default
1
null
New LLM trained to reason on chemistry from language: first step towards scientific agents
51
Some interesting tricks in the paper make it good at a specific scientific domain. It has cool applications like retrosynthesis (how do I get to this molecule?) and reaction prediction (what do I get from A + B?), and everything is open source!
2025-06-05T16:24:29
https://www.nature.com/articles/d41586-025-01753-1
clefourrier
nature.com
1970-01-01T00:00:00
0
{}
1l43ivu
false
null
t3_1l43ivu
/r/LocalLLaMA/comments/1l43ivu/new_llm_trained_to_reason_on_chemistry_from/
false
false
default
51
{'enabled': False, 'images': [{'id': 'h8pBBOTpdLNMV6niaV1bR_1yNoR-3Ky7Xs63nebLUdw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/yIrTUZRJVfTjMLKREn3vBHXx0YcQC6rt4cf6CzSytrI.jpg?width=108&crop=smart&auto=webp&s=16b3e1b2125dfb444423e69d3215eab0aa80c41e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/yIrTUZRJVfTjMLKREn3vBHXx0YcQC6rt4cf6CzSytrI.jpg?width=216&crop=smart&auto=webp&s=5deaa480416462e5e86ac1397db79bc27273bcbf', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/yIrTUZRJVfTjMLKREn3vBHXx0YcQC6rt4cf6CzSytrI.jpg?width=320&crop=smart&auto=webp&s=2259b32bc8a9956a6fd91bb6426c0687e9a8c33c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/yIrTUZRJVfTjMLKREn3vBHXx0YcQC6rt4cf6CzSytrI.jpg?width=640&crop=smart&auto=webp&s=f877daf31e434fc6283e2be76580fd135a8593dc', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/yIrTUZRJVfTjMLKREn3vBHXx0YcQC6rt4cf6CzSytrI.jpg?width=960&crop=smart&auto=webp&s=0d62420dcdd28379358619760728aaee1d934546', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yIrTUZRJVfTjMLKREn3vBHXx0YcQC6rt4cf6CzSytrI.jpg?auto=webp&s=b5c2ac2d38ed194e8ca7df0e75a8100b1efb070b', 'width': 1066}, 'variants': {}}]}
How can I connect to a local LLM from my iPhone?
10
I've got LM Studio running on my PC and I'm wondering if anyone knows a way to connect to it from iPhone? I've looked around and tried several apps but haven't found one that lets you specify the API URL.
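For anyone else trying this: LM Studio's local server speaks the OpenAI API, by default on port 1234, so any iOS client that lets you set a custom base URL should work once the server is set to listen on the network rather than just localhost. A quick sanity check from another machine on the LAN (the IP address is a placeholder):

```python
import requests

# Placeholder LAN IP for the PC running LM Studio; port 1234 is the default.
BASE = "http://192.168.1.50:1234/v1"

resp = requests.post(f"{BASE}/chat/completions", json={
    "model": "local-model",  # LM Studio answers with whatever model is loaded
    "messages": [{"role": "user", "content": "Hello from across the LAN!"}],
})
print(resp.json()["choices"][0]["message"]["content"])
```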
2025-06-05T16:48:58
https://www.reddit.com/r/LocalLLaMA/comments/1l4450t/how_can_i_connect_to_a_local_llm_from_my_iphone/
NonYa_exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4450t
false
null
t3_1l4450t
/r/LocalLLaMA/comments/1l4450t/how_can_i_connect_to_a_local_llm_from_my_iphone/
false
false
self
10
null
Mac Air M2 users or lower. What’s the optimal model/tool to run to get started with LocalLLaMa
1
[removed]
2025-06-05T16:55:14
https://www.reddit.com/r/LocalLLaMA/comments/1l44agk/mac_air_m2_users_or_lower_whats_the_optimal/
picturpoet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l44agk
false
null
t3_1l44agk
/r/LocalLLaMA/comments/1l44agk/mac_air_m2_users_or_lower_whats_the_optimal/
false
false
self
1
null
Sparse Transformers: Run LLMs 2x faster with 30% less memory
496
We have built fused operator kernels for structured contextual sparsity, based on the amazing works LLM in a Flash (Apple) and Deja Vu (Zichang et al.). We avoid loading and computing activations with feed-forward layer weights whose outputs will eventually be zeroed out.

The result? We are seeing **5x faster MLP** layer performance in transformers with **50% less memory** consumption, by avoiding the sleeping nodes in every token prediction. For Llama 3.2, feed-forward layers accounted for 30% of total weights and forward-pass computation, resulting in a 1.6-1.8x increase in throughput.

Sparse LLaMA 3.2 3B vs. LLaMA 3.2 3B (on the Hugging Face implementation):

- Time to First Token (TTFT): 1.51x faster (1.209s → 0.803s)
- Output Generation Speed: 1.79x faster (0.7 → 1.2 tokens/sec)
- Total Throughput: 1.78x faster (0.7 → 1.3 tokens/sec)
- Memory Usage: 26.4% reduction (6.125GB → 4.15GB)

Please find the operator kernels with differential weight caching open-sourced at github/sparse_transformers.

PS: We will be actively adding kernels for int8, CUDA, and sparse attention.
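To illustrate what contextual sparsity buys (a toy NumPy sketch of the idea, nothing like the repo's fused kernels; the real systems use a cheap predictor for the active set, whereas this cheats and computes it exactly):

```python
import numpy as np

d_model, d_ff = 1024, 4096
W_up = np.random.randn(d_model, d_ff).astype(np.float32)
W_down = np.random.randn(d_ff, d_model).astype(np.float32)
x = np.random.randn(d_model).astype(np.float32)

# Which hidden units survive the ReLU? Real systems predict this mask
# cheaply (e.g. a low-rank probe); here we compute it exactly to cheat.
active = (x @ W_up) > 0

# Dense MLP: touches all d_ff hidden units.
dense = np.maximum(x @ W_up, 0) @ W_down

# Sparse MLP: loads and computes only the active slice of each weight.
h = np.maximum(x @ W_up[:, active], 0)
sparse = h @ W_down[active, :]

print(np.allclose(dense, sparse, rtol=1e-3))  # True: same output, ~half the work
```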
2025-06-05T17:07:31
https://github.com/NimbleEdge/sparse_transformers
Economy-Mud-6626
github.com
1970-01-01T00:00:00
0
{}
1l44lw8
false
null
t3_1l44lw8
/r/LocalLLaMA/comments/1l44lw8/sparse_transformers_run_2x_faster_llm_with_30/
false
false
https://b.thumbs.redditm…86P1hfUD2HpY.jpg
496
[image preview metadata omitted]
how are BERT models used in anomaly detection?
1
[removed]
2025-06-05T17:39:42
https://www.reddit.com/r/LocalLLaMA/comments/1l45g68/how_are_bert_models_used_in_anomaly_detection/
sybau6969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l45g68
false
null
t3_1l45g68
/r/LocalLLaMA/comments/1l45g68/how_are_bert_models_used_in_anomaly_detection/
false
false
self
1
null
What's your local LLM agent set-up for coding? Looking for suggestions and workflows.
1
[removed]
2025-06-05T17:49:05
https://www.reddit.com/r/LocalLLaMA/comments/1l45oyh/whats_your_local_llm_agent_setup_for_coding/
accountforHW
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l45oyh
false
null
t3_1l45oyh
/r/LocalLLaMA/comments/1l45oyh/whats_your_local_llm_agent_setup_for_coding/
false
false
self
1
null
What's the best model for playing a role right now , that will fit on 8gbvram?
2
I'm not looking for anything that tends to talk naughty on purpose, but unrestricted is probably best anyway. I just want to be able to tell it: you are character X, your backstory is Y, and then feed it the conversation history to this point and have it reliably take on its role. I have other safeguards in place to make sure it conforms, but I want the best at being creative with its given role. I'm basically going to have two or more talk to each other, but instead of one-shot, I want each of them to only come up with the dialog or actions for the character they are told they are.
2025-06-05T17:49:13
https://www.reddit.com/r/LocalLLaMA/comments/1l45p2d/whats_the_best_model_for_playing_a_role_right_now/
opUserZero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l45p2d
false
null
t3_1l45p2d
/r/LocalLLaMA/comments/1l45p2d/whats_the_best_model_for_playing_a_role_right_now/
false
false
self
2
null
smollm is crazy
0
2025-06-05T18:14:57
https://v.redd.it/l1u09vctg55f1
3d_printing_kid
v.redd.it
1970-01-01T00:00:00
0
{}
1l46d96
false
[reddit video player metadata omitted]
t3_1l46d96
/r/LocalLLaMA/comments/1l46d96/smollm_is_crazy/
false
false
https://external-preview…435060dce81459ae
0
[image preview metadata omitted]
M🐢st Efficient RAG Framework for Offline Local Rag?
1
[removed]
2025-06-05T18:36:33
https://www.reddit.com/r/LocalLLaMA/comments/1l46wsw/mst_efficient_rag_framework_for_offline_local_rag/
taper_fade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l46wsw
false
null
t3_1l46wsw
/r/LocalLLaMA/comments/1l46wsw/mst_efficient_rag_framework_for_offline_local_rag/
false
false
self
1
null
M🐢st Efficient RAG Framework for Offline Local Rag?
1
[removed]
2025-06-05T18:37:53
https://www.reddit.com/r/LocalLLaMA/comments/1l46xww/mst_efficient_rag_framework_for_offline_local_rag/
taper_fade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l46xww
false
null
t3_1l46xww
/r/LocalLLaMA/comments/1l46xww/mst_efficient_rag_framework_for_offline_local_rag/
false
false
self
1
null
1000(!!!)tps. Deepinfra went wild on Maverick throughput.
3
2025-06-05T18:54:07
https://i.redd.it/52bm64zxn55f1.jpeg
temirulan
i.redd.it
1970-01-01T00:00:00
0
{}
1l47clk
false
null
t3_1l47clk
/r/LocalLLaMA/comments/1l47clk/1000tps_deepinfra_went_wild_on_maverick_throughput/
false
false
default
3
[image preview metadata omitted]
Is Qwen the new face of local LLMs?
75
The Qwen team has been killing it. Every new model is a heavy hitter, and every new model becomes SOTA for its category. I've been seeing way more fine-tunes of Qwen models than Llama lately. LocalQwen coming soon lol?
2025-06-05T18:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1l47dav/is_qwen_the_new_face_of_local_llms/
Due-Employee4744
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l47dav
false
null
t3_1l47dav
/r/LocalLLaMA/comments/1l47dav/is_qwen_the_new_face_of_local_llms/
false
false
self
75
null
With 8gb vram: qwen3 8b q6 or 32b iq1?
4
Both end up being about the same size and fit just barely in VRAM, provided the KV cache is offloaded. I tried looking for performance comparisons of models at equal memory footprint but couldn't find any. Any advice is much appreciated.
2025-06-05T18:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1l47fv0/with_8gb_vram_qwen3_8b_q6_or_32b_iq1/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l47fv0
false
null
t3_1l47fv0
/r/LocalLLaMA/comments/1l47fv0/with_8gb_vram_qwen3_8b_q6_or_32b_iq1/
false
false
self
4
null
Fine tune result problem
1
[removed]
2025-06-05T19:03:24
https://www.reddit.com/r/LocalLLaMA/comments/1l47lks/fine_tune_result_problem/
ithe1975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l47lks
false
null
t3_1l47lks
/r/LocalLLaMA/comments/1l47lks/fine_tune_result_problem/
false
false
self
1
null
1000(!!!) tps on Maverick by Deepinfra
1
[removed]
2025-06-05T19:14:04
https://i.redd.it/pmzccp0ir55f1.jpeg
temirulan
i.redd.it
1970-01-01T00:00:00
0
{}
1l47vbf
false
null
t3_1l47vbf
/r/LocalLLaMA/comments/1l47vbf/1000_tps_on_maverick_by_deepinfra/
false
false
default
1
[image preview metadata omitted]
Is it dumb to build a server with 7x 5060 Ti?
13
I'm considering putting together a system with 7x 5060 Ti to get the most cost-effective VRAM. This will have to be an open frame with riser cables and an Epyc server motherboard with 7 PCIe slots. The idea was to have capacity for medium-size models that exceed 24GB but fit in ~100GB VRAM. I think I can put this machine together for between $10k and $15k. For simplicity I was going to go with Windows and Ollama. Inference speed is not critical, but crawling along at CPU speeds is not going to be viable. I don't really know what I'm doing. Is this dumb? Go ahead and roast my plan, as long as you can propose something better.
2025-06-05T19:33:32
https://www.reddit.com/r/LocalLLaMA/comments/1l48cnk/is_it_dumb_to_build_a_server_with_7x_5060_ti/
vector76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l48cnk
false
null
t3_1l48cnk
/r/LocalLLaMA/comments/1l48cnk/is_it_dumb_to_build_a_server_with_7x_5060_ti/
false
false
self
13
null
Model defaults Benchmark - latest version of {technology}.
0
API endpoints, opinionated frameworks, available SDK methods: from an agentic coding/vibe coding perspective, heavily fine-tuned models stubbornly enforce outdated solutions. Is there any project/benchmark that lets users subscribe to model updates? For example:

- Anthropic's models not knowing what MCP is
- Gemini 2.5 Pro enforcing 1.5 Pro and the outdated Gemini API
- Models with outdated defaults tending to generate too much boilerplate or use breaking libraries

For most of the boilerplate I'd like AI to write for me, I'd rather use a -5 IQ model that uses the desired tech stack than a +10 IQ model that will try to force outdated solutions on me. Simple QA and asking for the latest versions of libraries usually helps, but maybe there is something that solves this problem better? The lmsys webdev arena skewed models towards generating childish gradients. Lately, labs have focused on reasoning benchmarks promising AGI, while what we really need is those obvious and time-consuming parts. Starting from the most popular: latest Linux kernel, latest language versions, Kubernetes/container tech, frameworks (Next.js/Django/Symfony/RoR), web servers, reverse proxies, databases, up to the latest model versions. Is there any benchmark that checks this? With an option to pay to get notified when new models knowing a particular set of technologies appear?
2025-06-05T19:58:47
https://www.reddit.com/r/LocalLLaMA/comments/1l48yz9/model_defaults_benchmark_latest_version_of/
secopsml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l48yz9
false
null
t3_1l48yz9
/r/LocalLLaMA/comments/1l48yz9/model_defaults_benchmark_latest_version_of/
false
false
self
0
null
What is the best way to sell an RTX 6000 Pro Blackwell (new), and what is the average going price?
0
2025-06-05T20:09:35
https://i.redd.it/f9is7bfe165f1.jpeg
traderjay_toronto
i.redd.it
1970-01-01T00:00:00
0
{}
1l498jv
false
null
t3_1l498jv
/r/LocalLLaMA/comments/1l498jv/what_is_the_best_way_to_sell_a_rtx_6000_pro/
false
false
default
0
{'enabled': True, 'images': [{'id': 'f9is7bfe165f1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=108&crop=smart&auto=webp&s=e49e7f1e682f1e8f2af40c9a5a1cf15c4c9df896', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=216&crop=smart&auto=webp&s=726b9376717407b30d74f36e3d6df496ae99a635', 'width': 216}, {'height': 158, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=320&crop=smart&auto=webp&s=69b872d96615e5707c0034284e29b03f2b00e690', 'width': 320}, {'height': 317, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=640&crop=smart&auto=webp&s=459da3b6ba1fee7a98a572f7149ef320ec43e3c7', 'width': 640}, {'height': 476, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=960&crop=smart&auto=webp&s=f21f0b37b10fa766f23007a663139033d208bf57', 'width': 960}, {'height': 535, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=1080&crop=smart&auto=webp&s=abd2670200c97e696f4d79acab78ff8afe20235e', 'width': 1080}], 'source': {'height': 564, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?auto=webp&s=b75e49bbc0eb2bd06b0e0b245c3bd26e9964d952', 'width': 1137}, 'variants': {}}]}
Mac Studio Ultra vs RTX Pro on thread ripper
1
[removed]
2025-06-05T20:24:39
https://www.reddit.com/r/LocalLLaMA/comments/1l49m2k/mac_studio_ultra_vs_rtx_pro_on_thread_ripper/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l49m2k
false
null
t3_1l49m2k
/r/LocalLLaMA/comments/1l49m2k/mac_studio_ultra_vs_rtx_pro_on_thread_ripper/
false
false
self
1
null
Embeddings vs Reasoning vs Thinking Models?
1
Please explain to me in plain English the difference between these types of models from a training perspective. Also, what use cases are best solved by each type?
2025-06-05T20:35:18
https://www.reddit.com/r/LocalLLaMA/comments/1l49viu/embeddings_vs_reasoning_vs_thinking_models/
cloudcreator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l49viu
false
null
t3_1l49viu
/r/LocalLLaMA/comments/1l49viu/embeddings_vs_reasoning_vs_thinking_models/
false
false
self
1
null
[R] Model-Preserving Adaptive Rounding
1
[removed]
2025-06-05T20:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1l4a3d2/r_modelpreserving_adaptive_rounding/
tsengalb99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4a3d2
false
null
t3_1l4a3d2
/r/LocalLLaMA/comments/1l4a3d2/r_modelpreserving_adaptive_rounding/
false
false
self
1
null
Looking for Advice- Starting point running Local LLM/Training
1
[removed]
2025-06-05T21:08:19
https://www.reddit.com/r/LocalLLaMA/comments/1l4aopo/looking_for_advice_starting_point_running_local/
Ok-Cup-608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4aopo
false
null
t3_1l4aopo
/r/LocalLLaMA/comments/1l4aopo/looking_for_advice_starting_point_running_local/
false
false
self
1
null
Looking for Advice- Starting point GPU
1
[removed]
2025-06-05T21:10:08
https://www.reddit.com/r/LocalLLaMA/comments/1l4aqbm/looking_for_advice_starting_point_gpu/
Ok-Cup-608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4aqbm
false
null
t3_1l4aqbm
/r/LocalLLaMA/comments/1l4aqbm/looking_for_advice_starting_point_gpu/
false
false
self
1
null
Qwen3-32B is absolutely awesome
1
[removed]
2025-06-05T21:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1l4arx4/qwen332b_is_absolutely_awesome/
gtresselt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4arx4
false
null
t3_1l4arx4
/r/LocalLLaMA/comments/1l4arx4/qwen332b_is_absolutely_awesome/
false
false
self
1
null
Much lower performance for Mistral-Small 24B on RTX 3090 than from the deepinfra API
1
Hi friends, I was using the deepinfra API and found that [mistralai/Mistral-Small-24B-Instruct-2501](https://deepinfra.com/mistralai/Mistral-Small-24B-Instruct-2501?version=010d42b0ae15e140bf9c5e02ca88273b9c257a89) is a very useful model. But when I deployed the Q4-quantized version on my RTX 3090, it does not work as well. I suspect the performance degradation is because of the quantization, since deepinfra is serving the original version, but I still want to confirm. If so, this is very disappointing, because the only reason I purchased the GPU was that I thought I could run this level of local AI to do many fun things. It turns out that those quantized 32B models cannot handle any serious tasks (like reading long articles and extracting useful information)...
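One way to check is to run the same long-article prompt through two quant levels locally and compare the answers. Below is a minimal sketch with llama-cpp-python; the file names are placeholders, and I picked Q6_K rather than Q8_0 since a Q8 of a 24B model would not fully fit in 24GB of VRAM:

```python
# Compare two quant levels of the same model on an identical prompt.
# If the higher-precision quant is clearly better, quantization loss
# is the likely cause of the degradation.
from llama_cpp import Llama

PROMPT = "Read the following article and extract the key facts: ..."

for path in ["Mistral-Small-24B-Q4_K_M.gguf", "Mistral-Small-24B-Q6_K.gguf"]:  # placeholder paths
    llm = Llama(model_path=path, n_ctx=8192, n_gpu_layers=-1, verbose=False)
    out = llm(PROMPT, max_tokens=512, temperature=0.0)
    print(path, "->", out["choices"][0]["text"][:200])
    del llm  # free VRAM before loading the next quant
```

If Q6_K closes most of the gap to the API, the fix is a better quant, not a different model.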
2025-06-05T21:42:16
https://www.reddit.com/r/LocalLLaMA/comments/1l4biki/much_lower_performance_for_mistralsmall_24b_on/
rumboll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4biki
false
null
t3_1l4biki
/r/LocalLLaMA/comments/1l4biki/much_lower_performance_for_mistralsmall_24b_on/
false
false
self
1
{'enabled': False, 'images': [{'id': 'amgfYGwa2WrQh6GXm5VGkqwQoIMx_3FzVvXwxN_upLs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/A5gAnz2ZdeDVSXFGTXKEP95JRpka9aH-VUOZQCnvxRk.jpg?width=108&crop=smart&auto=webp&s=3318ee60bc67fe35f858ef342ae3ae7487f5b278', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/A5gAnz2ZdeDVSXFGTXKEP95JRpka9aH-VUOZQCnvxRk.jpg?width=216&crop=smart&auto=webp&s=a6cb90b95a9254175f5524b906f4d0b5b60f5aad', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/A5gAnz2ZdeDVSXFGTXKEP95JRpka9aH-VUOZQCnvxRk.jpg?width=320&crop=smart&auto=webp&s=b54029881de946e57c8e8fa895f7268b63241c49', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/A5gAnz2ZdeDVSXFGTXKEP95JRpka9aH-VUOZQCnvxRk.jpg?width=640&crop=smart&auto=webp&s=143cc5ebddee3c2916c786c4f1b313eb2394e884', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/A5gAnz2ZdeDVSXFGTXKEP95JRpka9aH-VUOZQCnvxRk.jpg?auto=webp&s=cf7473297b286c301b34de822c7b761008fb7b5d', 'width': 768}, 'variants': {}}]}
How fast can I run models?
0
I'm running image processing with Gemma 3 27B and getting structured outputs as the response, but my present pipeline is awfully slow (I use Hugging Face for the most part, plus lm-format-enforcer): it processes a batch of 32 images in 5-10 minutes, with a response of at most 256 tokens per image. This is running on 4x A100 40GB chips, which seems awfully slow and suboptimal. Can people share some code and benchmark times for image processing? Should I shift to SGLang? I cannot use the latest version of vLLM on my university's compute cluster.
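For reference, the usual fix for this kind of throughput problem is an inference server with continuous batching rather than plain Hugging Face generate. A rough sketch of the vLLM pattern is below, text-only for brevity; Gemma 3 support requires a fairly recent vLLM, and the exact multimodal input format varies by version, so treat the details as assumptions:

```python
# Batched generation with vLLM: all 32 requests are scheduled together
# under continuous batching, keeping the GPUs saturated instead of
# processing images serially.
from vllm import LLM, SamplingParams

llm = LLM(model="google/gemma-3-27b-it", tensor_parallel_size=4)  # 4x A100 40GB
params = SamplingParams(max_tokens=256, temperature=0.0)

prompts = [f"Describe image {i} as JSON." for i in range(32)]  # placeholder prompts
outputs = llm.generate(prompts, params)
for out in outputs:
    print(out.outputs[0].text)
```

SGLang follows essentially the same serve-and-batch design, so either should beat a hand-rolled Hugging Face loop by a wide margin.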
2025-06-05T21:53:00
https://www.reddit.com/r/LocalLLaMA/comments/1l4brna/how_fast_can_i_run_models/
feelin-lonely-1254
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4brna
false
null
t3_1l4brna
/r/LocalLLaMA/comments/1l4brna/how_fast_can_i_run_models/
false
false
self
0
null
iOS app to talk (voice) to self-hosted LLMs
1
[App Store link](https://apps.apple.com/app/apple-store/id6737482921?pt=127100219&ct=r-locallama&mt=8)
2025-06-05T22:05:46
https://v.redd.it/gwaw7821m65f1
lostmsu
v.redd.it
1970-01-01T00:00:00
0
{}
1l4c2hv
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gwaw7821m65f1/DASHPlaylist.mpd?a=1751753163%2CZjVjOGY1NWEzY2I3NDA2NDE2Zjg2YzZiZTg4M2I0MzY1ZTI0MGM3OGI5YjcwOWE2N2NiMWM2NjVkYmRhM2ZjMg%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/gwaw7821m65f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/gwaw7821m65f1/HLSPlaylist.m3u8?a=1751753163%2CZDgxMTY4OWU2Y2JiZGMxMGYwOGYyYjdlMWUyOWNlMWJiYzI2MzA5NzM4MTUzMDZmNDUyY2E4ZGVhNzQ4MzdjMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gwaw7821m65f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1l4c2hv
/r/LocalLLaMA/comments/1l4c2hv/ios_app_to_talk_voice_to_selfhosted_llms/
false
false
https://external-preview…9bb2a1f4d391008b
1
{'enabled': False, 'images': [{'id': 'c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=108&crop=smart&format=pjpg&auto=webp&s=07c6f539b1eccc3d360f252b8c4ec3d101f9c258', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=216&crop=smart&format=pjpg&auto=webp&s=ffd7cd8dedaef331a8c0c045e2776f822454fd8e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=320&crop=smart&format=pjpg&auto=webp&s=963c5195f2b1da4244833610fe6c4653f4ffa340', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=640&crop=smart&format=pjpg&auto=webp&s=f0ae5ebd9a2a5c6671c6b3c1b0ef6326b3b618df', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=960&crop=smart&format=pjpg&auto=webp&s=2c4f78a6c35c6ac90aed82781d534e0bc148836c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4e41db0f3ed23f0bba3083987977eeac7b5d0dea', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?format=pjpg&auto=webp&s=4e7fe7e33c207c12177149c91992ea440f3d7355', 'width': 1280}, 'variants': {}}]}
iOS app to talk (voice) to self-hosted LLMs
2
2025-06-05T22:06:50
https://v.redd.it/j5f97gebm65f1
lostmsu
v.redd.it
1970-01-01T00:00:00
0
{}
1l4c3ds
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/j5f97gebm65f1/DASHPlaylist.mpd?a=1751753223%2CMTQzNmFjN2Y4MThkOTAzOTg0ZjcxMmM3OGQ5OWU1ZDFjZDRkNDYxYjg5ZTYwZWQ5OWJmMjY5NGY0ZjJkMjFiNQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/j5f97gebm65f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/j5f97gebm65f1/HLSPlaylist.m3u8?a=1751753223%2CYWQwZGYxMWRhMTA1MTBmYWVlM2I5MWJhZDA2YWY1MTgzYzAwMTAxMTA1MDQyZjBkMTA3MTFjNmRmYjI4YmFiOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j5f97gebm65f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1l4c3ds
/r/LocalLLaMA/comments/1l4c3ds/ios_app_to_talk_voice_to_selfhosted_llms/
false
false
https://external-preview…b06c4aa03f008575
2
{'enabled': False, 'images': [{'id': 'bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=108&crop=smart&format=pjpg&auto=webp&s=0e7c96637ab67e6bc63fffcf4374884539f75ee0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=216&crop=smart&format=pjpg&auto=webp&s=ff4afbba02a27972f7c053150456be2147062c3f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=320&crop=smart&format=pjpg&auto=webp&s=40a91dfb8b3b6ca80e2ca8b1c290f9c2d75a9c45', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=640&crop=smart&format=pjpg&auto=webp&s=41cc0ad6d5247dc274215853904ffc9c081f01aa', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=960&crop=smart&format=pjpg&auto=webp&s=0c24651c5a7b54b292c317953695558df33b633f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=1080&crop=smart&format=pjpg&auto=webp&s=de87b76abd71ecf7c231b264bcd1e767e20643a8', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?format=pjpg&auto=webp&s=699d2adc069e2e87d21bb5c09937d8386f6d330c', 'width': 1280}, 'variants': {}}]}
Step-by-step GraphRAG tutorial for multi-hop QA - from the RAG_Techniques repo (16K+ stars)
60
Many people asked for this! Now I have a new step-by-step tutorial on **GraphRAG** in my **RAG\_Techniques** repo on GitHub (16K+ stars), one of the world’s leading RAG resources packed with hands-on tutorials for different techniques. **Why do we need this?** Regular RAG cannot answer hard questions like: *“How did the protagonist defeat the villain’s assistant?”* (Harry Potter and Quirrell), because it cannot connect information across multiple steps. **How does it work?** It combines vector search with graph reasoning. It uses only vector databases - no need for separate graph databases. It finds entities and relationships, expands connections using matrix operations, and uses AI to pick the right answers. **What you will learn** * Turn text into entities, relationships, and passages for vector storage * Build two types of search (entity search and relationship search) * Use adjacency matrices to find connections between data points * Use AI prompting to choose the best relationships * Handle complex questions that need multiple logical steps * Compare results: Graph RAG vs simple RAG with real examples **Full notebook available here:** [GraphRAG with vector search and multi-step reasoning](https://github.com/NirDiamant/RAG_TECHNIQUES/blob/main/all_rag_techniques/graphrag_with_milvus_vectordb.ipynb)
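To make the matrix step concrete, here is a minimal sketch of the multi-hop idea, assuming entities and relationships have already been extracted; the toy graph below is illustrative, not taken from the tutorial:

```python
# Multi-hop expansion over an entity adjacency matrix.
# A[i, j] = 1 means entity i is directly related to entity j;
# powers of A reveal entities reachable in multiple hops.
import numpy as np

entities = ["Harry", "Quirrell", "Voldemort", "Hogwarts"]
A = np.array([
    [0, 1, 0, 1],   # Harry -> Quirrell, Hogwarts
    [0, 0, 1, 1],   # Quirrell -> Voldemort, Hogwarts
    [0, 0, 0, 0],
    [0, 0, 0, 0],
], dtype=int)

two_hop = (A @ A) > 0          # who is reachable in exactly two hops
i, j = entities.index("Harry"), entities.index("Voldemort")
print(two_hop[i, j])           # True: Harry -> Quirrell -> Voldemort
```

The tutorial's expansion step is this idea at scale, with the LLM then ranking which of the expanded relationships actually answer the question.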
2025-06-05T22:08:08
https://www.reddit.com/r/LocalLLaMA/comments/1l4c4hh/stepbystep_graphrag_tutorial_for_multihop_qa_from/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4c4hh
false
null
t3_1l4c4hh
/r/LocalLLaMA/comments/1l4c4hh/stepbystep_graphrag_tutorial_for_multihop_qa_from/
false
false
self
60
{'enabled': False, 'images': [{'id': '5R8NiYOchlJmm8fWG-mcHa7WZPElNUSt07Y6VYeJE6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?width=108&crop=smart&auto=webp&s=732a922b7387388ae884f9b9fab8442f071bea63', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?width=216&crop=smart&auto=webp&s=623f10ffde882378b1df82039dcdb5ec2b54bf8f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?width=320&crop=smart&auto=webp&s=f4b499d4df06be83cc3fbe8b9a722f838fce1285', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?width=640&crop=smart&auto=webp&s=2d29109c5a6531fa495b91078b544b2eef487b50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?width=960&crop=smart&auto=webp&s=6084a02b8c4985757229f5680edc45deeca200fd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?width=1080&crop=smart&auto=webp&s=283301ad7679c37513bf67912c60d88b9bb9a33c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?auto=webp&s=138cc5970316e08ddda1a633f2b47032519b249e', 'width': 1200}, 'variants': {}}]}
Do LLMs have opinions?
0
Or do they simply mirror our inputs, adhering to instructions in system prompts while mimicking the data from training/fine-tuning? For example, people say that LLMs have been shown to hold liberal views, but isn't that just because the dominant part of the training data expresses such views?
2025-06-05T22:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1l4cc2e/do_llms_have_opinions/
WeAllFuckingFucked
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4cc2e
false
null
t3_1l4cc2e
/r/LocalLLaMA/comments/1l4cc2e/do_llms_have_opinions/
false
false
self
0
null
Open sourcing SERAX a file format built specifically for AI data generation
1
[removed]
2025-06-05T22:29:34
https://www.reddit.com/r/LocalLLaMA/comments/1l4clui/open_sourcing_serax_a_file_format_built/
VantigeAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4clui
false
null
t3_1l4clui
/r/LocalLLaMA/comments/1l4clui/open_sourcing_serax_a_file_format_built/
false
false
self
1
null
Open sourcing SERAX a file format built specifically for AI data generation
1
[removed]
2025-06-05T22:44:49
https://www.reddit.com/r/LocalLLaMA/comments/1l4cxy6/open_sourcing_serax_a_file_format_built/
VantigeAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4cxy6
false
null
t3_1l4cxy6
/r/LocalLLaMA/comments/1l4cxy6/open_sourcing_serax_a_file_format_built/
false
false
self
1
null
New Quantization Paper: Model-Preserving Adaptive Rounding
1
[removed]
2025-06-05T22:50:45
https://www.reddit.com/r/LocalLLaMA/comments/1l4d2qe/new_quantization_paper_modelpreserving_adaptive/
tsengalb99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4d2qe
false
null
t3_1l4d2qe
/r/LocalLLaMA/comments/1l4d2qe/new_quantization_paper_modelpreserving_adaptive/
false
false
self
1
null
Did avian.io go under?
0
Cannot get a response from support, and all API requests have been failing for weeks.
2025-06-05T22:57:54
https://www.reddit.com/r/LocalLLaMA/comments/1l4d8dn/did_avianio_go_under/
punkpeye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4d8dn
false
null
t3_1l4d8dn
/r/LocalLLaMA/comments/1l4d8dn/did_avianio_go_under/
false
false
self
0
null
A little gpu poor man needing some help
12
Hello my dear friends of open-source LLMs. I've unfortunately encountered a situation I can't find any solution to. I want to use tensor parallelism with EXL2, as I have two RTX 3060s. But EXL2 quantization only uses one GPU by design, which results in OOM errors for me. If somebody could convert QwenLong (https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B) into EXL2 at around 4-4.5 bpw, I'd be absolutely thrilled.
2025-06-05T22:57:58
https://www.reddit.com/r/LocalLLaMA/comments/1l4d8fc/a_little_gpu_poor_man_needing_some_help/
Flashy_Management962
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4d8fc
false
null
t3_1l4d8fc
/r/LocalLLaMA/comments/1l4d8fc/a_little_gpu_poor_man_needing_some_help/
false
false
self
12
{'enabled': False, 'images': [{'id': 'jP7Lx5njiL0YGj9UteZAtC6ujAbqS9hzjcauwjE7bRY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?width=108&crop=smart&auto=webp&s=da5cdf62b7adb5dfd525dd4e7ce5816b62d18d96', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?width=216&crop=smart&auto=webp&s=8a9f03079a710dff1b73949ed860efa8fa3eea4d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?width=320&crop=smart&auto=webp&s=587a4f1905cc9f9855ab27efa7232a127cf05184', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?width=640&crop=smart&auto=webp&s=9f4152dceb2123eba108beb3d2eb34236ce69da1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?width=960&crop=smart&auto=webp&s=277df9a364feb1445582568d8069d9679e517f0c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?width=1080&crop=smart&auto=webp&s=31841d7a94efb48945d2c209a2c2063e22c2fa28', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?auto=webp&s=fb4b42baebd4105b0587fab3e4925f6ec9b2cecf', 'width': 1200}, 'variants': {}}]}
Align text with audio
0
Hi, I have audio generated using OpenAI's TTS API, and I have a raw transcript. Is there a practical way to generate SRT or ASS captions with timestamps without processing the audio file? I am currently using the Whisper library to generate captions, but it takes 16 seconds to process the audio file.
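Since the timestamps have to come from the audio itself, the realistic win is a faster decode rather than skipping the audio. A minimal SRT-writing sketch with faster-whisper, which is typically several times faster than the reference Whisper implementation (model size, device, and file names here are assumptions):

```python
# Generate SRT captions with faster-whisper (CTranslate2 backend).
from faster_whisper import WhisperModel

def fmt(t: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:00:12,345."""
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02}:{int(m):02}:{int(s):02},{int((s % 1) * 1000):03}"

model = WhisperModel("small", device="cuda", compute_type="float16")
segments, _ = model.transcribe("tts_output.mp3")  # placeholder file name

with open("captions.srt", "w") as f:
    for n, seg in enumerate(segments, 1):
        f.write(f"{n}\n{fmt(seg.start)} --> {fmt(seg.end)}\n{seg.text.strip()}\n\n")
```

If you already trust your transcript, a forced-alignment tool (e.g., WhisperX or aeneas) can align it to the audio instead of re-transcribing, but it still has to read the audio.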
2025-06-06T00:02:43
https://www.reddit.com/r/LocalLLaMA/comments/1l4ekah/align_text_with_audio/
Terrible_Dimension66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4ekah
false
null
t3_1l4ekah
/r/LocalLLaMA/comments/1l4ekah/align_text_with_audio/
false
false
self
0
null
OpenThinker3 7B released
1
[https://huggingface.co/open-thoughts/OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B) [https://huggingface.co/bartowski/open-thoughts\_OpenThinker3-7B-GGUF](https://huggingface.co/bartowski/open-thoughts_OpenThinker3-7B-GGUF)
2025-06-06T00:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1l4f1f6/openthinker3_7b_released/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4f1f6
false
null
t3_1l4f1f6
/r/LocalLLaMA/comments/1l4f1f6/openthinker3_7b_released/
false
false
self
1
{'enabled': False, 'images': [{'id': 'aa7mY1LSqAx_HZNaXVUa4ki0ZQltBVxg310whTh9EG0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=108&crop=smart&auto=webp&s=4ec479e9c6bcd24dad79b5f1a3efc6ba88a44783', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=216&crop=smart&auto=webp&s=d985bbd741496854a37f573182d5fc3d928d4151', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=320&crop=smart&auto=webp&s=1510580e18faee06c1d14255a0993a90068d91cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=640&crop=smart&auto=webp&s=1ab990150292f2bdad4b94e6edd98b1d4d23036e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=960&crop=smart&auto=webp&s=e608805653212118a52186d6efc06eafec887356', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=1080&crop=smart&auto=webp&s=537dad8cfe997564ca3ec93ae3136288bba73ad4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?auto=webp&s=aa212abfdf8286c696c52351531692c732c3caad', 'width': 1200}, 'variants': {}}]}
OpenThinker3 released
216
[https://huggingface.co/open-thoughts/OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B) [https://huggingface.co/bartowski/open-thoughts\_OpenThinker3-7B-GGUF](https://huggingface.co/bartowski/open-thoughts_OpenThinker3-7B-GGUF)
2025-06-06T00:26:49
https://www.reddit.com/r/LocalLLaMA/comments/1l4f1yp/openthinker3_released/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4f1yp
false
null
t3_1l4f1yp
/r/LocalLLaMA/comments/1l4f1yp/openthinker3_released/
false
false
self
216
{'enabled': False, 'images': [{'id': 'aa7mY1LSqAx_HZNaXVUa4ki0ZQltBVxg310whTh9EG0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=108&crop=smart&auto=webp&s=4ec479e9c6bcd24dad79b5f1a3efc6ba88a44783', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=216&crop=smart&auto=webp&s=d985bbd741496854a37f573182d5fc3d928d4151', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=320&crop=smart&auto=webp&s=1510580e18faee06c1d14255a0993a90068d91cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=640&crop=smart&auto=webp&s=1ab990150292f2bdad4b94e6edd98b1d4d23036e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=960&crop=smart&auto=webp&s=e608805653212118a52186d6efc06eafec887356', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?width=1080&crop=smart&auto=webp&s=537dad8cfe997564ca3ec93ae3136288bba73ad4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YqXUfGELS3s8NgpI8N8mcy9tyckDXxgiRMqaMOwQmJ4.jpg?auto=webp&s=aa212abfdf8286c696c52351531692c732c3caad', 'width': 1200}, 'variants': {}}]}
What happened to WizardLM-2 8x22b?
74
I was mildly intrigued when I saw /u/SomeOddCodeGuy [mention that](https://old.reddit.com/r/LocalLLaMA/comments/1cvw3s5/my_personal_guide_for_developing_software_with_ai/): > I prefer local AI models for various reasons, and [the quality of some like WizardLM-2 8x22b are on par with ChatGPT 4](https://prollm.toqan.ai/leaderboard), but use what you have available and feel most comfortable with. There's a Microsoft HF page that is now [empty](https://huggingface.co/collections/microsoft/wizardlm-661d403f71e6c8257dbd598a), with a history showing that a model once existed but appears to have been deleted. This is an old model now, so not really looking to fire it up and use it, but does anyone know what happened to get it taken down?
2025-06-06T00:58:45
https://www.reddit.com/r/LocalLLaMA/comments/1l4fo3x/what_happened_to_wizardlm2_8x22b/
RobotRobotWhatDoUSee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4fo3x
false
null
t3_1l4fo3x
/r/LocalLLaMA/comments/1l4fo3x/what_happened_to_wizardlm2_8x22b/
false
false
self
74
null
Turn-based two-model critique over multiple rounds to refine an answer - any examples or FOSS projects?
1
I feel like I've heard of someone making a pipeline where, say, "code prime fib in Python" is the prompt: it is served by model 1, model 1's answer then feeds to model 2 for critique, and this back-and-forth goes on for N turns to hopefully come back with a better answer than just one model answering. It's similar to what thinking models do, but broken down into explicit steps. Is this worth testing for local hosting, potentially for offline coding with AI? Good idea to test, or has it already been tested?
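A minimal sketch of the loop against two OpenAI-compatible local endpoints; the URLs and model names are placeholders (Ollama, llama.cpp server, and vLLM all expose this API):

```python
# Turn-based critique: model 1 drafts, model 2 critiques, model 1 revises.
from openai import OpenAI

answerer = OpenAI(base_url="http://localhost:11434/v1", api_key="none")  # placeholder
critic   = OpenAI(base_url="http://localhost:11435/v1", api_key="none")  # placeholder

def ask(client, model, prompt):
    r = client.chat.completions.create(model=model, messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

task = "Write a Python function that returns the first n prime Fibonacci numbers."
answer = ask(answerer, "model1", task)
for _ in range(3):  # N turns
    critique = ask(critic, "model2", f"Critique this solution to: {task}\n\n{answer}")
    answer = ask(answerer, "model1",
                 f"Task: {task}\n\nYour draft:\n{answer}\n\nCritique:\n{critique}\n\nRevise accordingly.")
print(answer)
```

Whether the loop beats a single stronger model is an empirical question; logging each turn makes it easy to see where the critic actually helps.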
2025-06-06T01:31:21
https://www.reddit.com/r/LocalLLaMA/comments/1l4gb6s/turn_based_two_model_critique_for_rounds_to/
HilLiedTroopsDied
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4gb6s
false
null
t3_1l4gb6s
/r/LocalLLaMA/comments/1l4gb6s/turn_based_two_model_critique_for_rounds_to/
false
false
self
1
null
Smallest LLM that can help with text rearrangement
1
I've been using a translation model. I need the smallest LLM that can just rearrange the output text according to the target language's needs.
2025-06-06T02:07:20
https://www.reddit.com/r/LocalLLaMA/comments/1l4gzzw/smallest_llm_that_can_help_in_text_rearrangement/
Away_Expression_3713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4gzzw
false
null
t3_1l4gzzw
/r/LocalLLaMA/comments/1l4gzzw/smallest_llm_that_can_help_in_text_rearrangement/
false
false
self
1
null
Is DDR5/PCIe 5.0 necessary for an RTX Pro 6000 workstation?
0
For a PC that uses an RTX Pro 6000 as its GPU, do you think DDR5 RAM and PCIe 5.0 are necessary to fully utilize the GPU? What about SSD speed and RAID? And since the Pro 6000 doesn't support NVLink, is it reasonable to put two Pro 6000s on the motherboard and let them communicate over PCIe? DDR4 and PCIe 4.0 components can be cheaper; what do you think?
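For the PCIe part of the question, a quick back-of-the-envelope comparison of theoretical x16 bandwidth per generation (standard per-lane rates; real-world throughput will be lower):

```python
# Theoretical PCIe x16 bandwidth per direction, and time to move a 96 GB
# model's weights across the link.
rates = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}  # GB/s per lane
for gen, per_lane in rates.items():
    bw = per_lane * 16
    print(f"{gen} x16: ~{bw:.0f} GB/s -> 96 GB transfer in ~{96 / bw:.1f} s")
```

For inference, weights are loaded once, so PCIe generation mostly affects load time and any cross-GPU traffic during tensor-parallel runs.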
2025-06-06T02:25:13
https://www.reddit.com/r/LocalLLaMA/comments/1l4hccw/is_ddr5pcie5_necessary_for_a_rtx_pro_6000/
SpecialistPear755
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4hccw
false
null
t3_1l4hccw
/r/LocalLLaMA/comments/1l4hccw/is_ddr5pcie5_necessary_for_a_rtx_pro_6000/
false
false
self
0
null
Llama 3.3:70B on HP Z2 Mini G1a works, but…
1
[removed]
2025-06-06T02:54:15
https://i.redd.it/93yvq9cl185f1.jpeg
walkerb1972
i.redd.it
1970-01-01T00:00:00
0
{}
1l4hw13
false
null
t3_1l4hw13
/r/LocalLLaMA/comments/1l4hw13/llama_3370b_on_hp_z2_mini_g1a_works_but/
false
false
default
1
{'enabled': True, 'images': [{'id': '93yvq9cl185f1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=108&crop=smart&auto=webp&s=60b3c86cbde14f1f5332f66616652850c1c1fb01', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=216&crop=smart&auto=webp&s=a268469930d25fe384dcbf69d88aea85d6452889', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=320&crop=smart&auto=webp&s=e9e7aa8ecd6ece1bc62092a717b4d67ce9f5a7f0', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=640&crop=smart&auto=webp&s=e136ea7ed94698d6644567da3b63e0688c15e623', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=960&crop=smart&auto=webp&s=e83f48da0306f2cce4f8116bb2194dc47b8a2dde', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?width=1080&crop=smart&auto=webp&s=5cd86bc778885d151cbff11fde3b6932ac47b275', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/93yvq9cl185f1.jpeg?auto=webp&s=c71264be715a3b44332286c0c7e0b544697b250d', 'width': 4032}, 'variants': {}}]}
How to share an open-source project on this sub without getting filtered?
0
[removed]
2025-06-06T03:09:15
https://www.reddit.com/r/LocalLLaMA/comments/1l4i5xn/how_to_share_an_open_source_project_to_this_sub/
VantigeAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4i5xn
false
null
t3_1l4i5xn
/r/LocalLLaMA/comments/1l4i5xn/how_to_share_an_open_source_project_to_this_sub/
false
false
self
0
null
Best general-purpose LLM for an 8GB 3060?
3
Hey everyone, I’m running a local LLM setup on a home server with a 3060 (8GB VRAM), using Ollama and OpenWebUI. I'm just after some advice on what the best general-purpose model would be for this kind of hardware. I mainly use it for general chat, coding help, and a bit of local data processing. Priorities are good performance, low VRAM use, and relatively strong output quality without massive context windows or plugins. I’ve looked at a few like Gemma, Mistral, and DeepSeek, but I'm not sure which format or quant level gives the best balance on this GPU. Anyone got suggestions for a model + quant combo that works well on a 3060? Cheers!
2025-06-06T03:14:52
https://www.reddit.com/r/LocalLLaMA/comments/1l4i9st/best_general_purpose_llm_for_an_8gb_3060/
DisgustingBlackChimp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4i9st
false
null
t3_1l4i9st
/r/LocalLLaMA/comments/1l4i9st/best_general_purpose_llm_for_an_8gb_3060/
false
false
self
3
null
MiniCPM4: Ultra-Efficient LLMs on End Devices
65
Randomly saw this -- no models yet.
2025-06-06T03:40:56
https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b
adefa
huggingface.co
1970-01-01T00:00:00
0
{}
1l4irk9
false
null
t3_1l4irk9
/r/LocalLLaMA/comments/1l4irk9/minicpm4_ultraefficient_llms_on_end_devices/
false
false
default
65
{'enabled': False, 'images': [{'id': 'urN7B2TlaIBWXNq4r0fZnMUhUh2UrMvsfjcXAUJ2oTc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?width=108&crop=smart&auto=webp&s=0b6dcb95f5889fe301fc6ee42ce2d2a1ba53781e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?width=216&crop=smart&auto=webp&s=4f75a687dba9c41d1941a847b8648b77bee26295', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?width=320&crop=smart&auto=webp&s=5abfeeea208ec6b067391b582d5884fe1c61172f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?width=640&crop=smart&auto=webp&s=a5b96b807c9e088b046df65be2f64ac04b84e550', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?width=960&crop=smart&auto=webp&s=97f2bd5abc29d8ebe6a2b8fecf6ff58aecb73104', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?width=1080&crop=smart&auto=webp&s=e66ec6a3a865f8cd2ddce4cdbdb07271dc68e7b0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QvrTYBQgDHPkgT3IRbnKHNb-1zHcP8AdJT7CXsvRCqg.jpg?auto=webp&s=90698c3fdd98260e38284ddb004ed9da396c94a6', 'width': 1200}, 'variants': {}}]}
Anyone encountered this problem where F5-TTS gives a file with no sound?
4
2025-06-06T03:53:52
https://i.redd.it/0jhn08f6c85f1.png
SnooDrawings7547
i.redd.it
1970-01-01T00:00:00
0
{}
1l4izz4
false
null
t3_1l4izz4
/r/LocalLLaMA/comments/1l4izz4/anyone_encountered_this_problem_where_f5_tts/
false
false
default
4
{'enabled': True, 'images': [{'id': '0jhn08f6c85f1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=108&crop=smart&auto=webp&s=27af8367c4f71e952d866534963581a7684833e3', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=216&crop=smart&auto=webp&s=193178baac6ed607799499cf65acb8112e6a68cc', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=320&crop=smart&auto=webp&s=9860b64083c092e9a946737c4d2114dee1251b0b', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=640&crop=smart&auto=webp&s=d0650204c45ec7086215ff04ef383a4a26550f18', 'width': 640}, {'height': 406, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=960&crop=smart&auto=webp&s=5277ef740b35bfaa136ca4f69d80fd95de8c33a2', 'width': 960}, {'height': 457, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?width=1080&crop=smart&auto=webp&s=45f3d3770b6224db94deb5214c14a17259c01838', 'width': 1080}], 'source': {'height': 639, 'url': 'https://preview.redd.it/0jhn08f6c85f1.png?auto=webp&s=a23ae45a4504f94d8adbc73a58a63dcb1696c65d', 'width': 1509}, 'variants': {}}]}
Thinking about switching from cloud-based AI to something more local
1
[removed]
2025-06-06T03:57:32
[deleted]
1970-01-01T00:00:00
0
{}
1l4j2dy
false
null
t3_1l4j2dy
/r/LocalLLaMA/comments/1l4j2dy/thinking_about_switching_from_cloud_based_ai_to/
false
false
default
1
null
Private LLM For Company
1
[removed]
2025-06-06T04:34:36
https://www.reddit.com/r/LocalLLaMA/comments/1l4jqtr/private_llm_for_company/
Acataleptic23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4jqtr
false
null
t3_1l4jqtr
/r/LocalLLaMA/comments/1l4jqtr/private_llm_for_company/
false
false
self
1
null
Do we need a new programming language optimized for AI to write code?
1
[removed]
2025-06-06T04:37:36
https://www.reddit.com/r/LocalLLaMA/comments/1l4jsnb/do_we_need_a_new_programming_language_optimized/
ggeezz12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4jsnb
false
null
t3_1l4jsnb
/r/LocalLLaMA/comments/1l4jsnb/do_we_need_a_new_programming_language_optimized/
false
false
self
1
null
Model-Preserving Adaptive Rounding
1
[removed]
2025-06-06T05:12:41
https://www.reddit.com/r/LocalLLaMA/comments/1l4ke2z/modelpreserving_adaptive_rounding/
tsengalb99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4ke2z
false
null
t3_1l4ke2z
/r/LocalLLaMA/comments/1l4ke2z/modelpreserving_adaptive_rounding/
false
false
self
1
null