title: string, 1–300 chars | score: int64, 0–8.54k | selftext: string, 0–40k chars | created: timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29 (nullable ⌀) | url: string, 0–878 chars | author: string, 3–20 chars | domain: string, 0–82 chars | edited: timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18 | gilded: int64, 0–2 | gildings: string, 7 classes | id: string, 7 chars | locked: bool, 2 classes | media: string, 646–1.8k chars (nullable ⌀) | name: string, 10 chars | permalink: string, 33–82 chars | spoiler: bool, 2 classes | stickied: bool, 2 classes | thumbnail: string, 4–213 chars | ups: int64, 0–8.54k | preview: string, 301–5.01k chars (nullable ⌀)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Experimental ChatGPT like Web UI for Gemini API (open source)
| 1 |
[removed]
| 2025-05-17T17:18:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1koy5de/experimental_chatgpt_like_web_ui_for_gemini_api/
|
W4D-cmd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1koy5de
| false | null |
t3_1koy5de
|
/r/LocalLLaMA/comments/1koy5de/experimental_chatgpt_like_web_ui_for_gemini_api/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '4QAqvL3ew3dDELyiryCe21xOE2ar8ZUfG1DOyYupJns', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=108&crop=smart&auto=webp&s=af61efe862907dfdb0ac4a57f206f29388c70272', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=216&crop=smart&auto=webp&s=c30507d3f1d6d78a7ec3541f8dae1884277afb85', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=320&crop=smart&auto=webp&s=24b8432c0e72da5be215e8b2857c6c23fb34e2ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=640&crop=smart&auto=webp&s=4a0aad0cafa0dea58a707de97fbfc314c9cc661c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=960&crop=smart&auto=webp&s=33866dcbfa872ac3cbcd5dc7864f8d91aa946b39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=1080&crop=smart&auto=webp&s=991a686a7cd694dc62de1f37648da5c401271131', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?auto=webp&s=c58f89c8460022f83d37fb6c01e0b25af3e8c123', 'width': 1200}, 'variants': {}}]}
|
|
Half a year ago (or even more) OpenAI presented a voice assistant
| 0 |
One that could speak with you. I see it as a neural net that folds both TTS and Whisper into the 4o "brain", so everything from the sound received to the sound produced flows seamlessly, entirely inside the neural net itself.
Do we have anything like this, but open source (open weights)?
| 2025-05-17T17:21:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1koy7vy/half_year_agoor_even_more_openai_presented_voice/
|
Economy_Apple_4617
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1koy7vy
| false | null |
t3_1koy7vy
|
/r/LocalLLaMA/comments/1koy7vy/half_year_agoor_even_more_openai_presented_voice/
| false | false |
self
| 0 | null |
Help me decide DGX Spark vs M2 Max 96GB
| 2 |
[removed]
| 2025-05-17T17:22:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1koy9b9/help_me_decide_dgx_spark_vs_m2_max_96gb/
|
Web3Vortex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1koy9b9
| false | null |
t3_1koy9b9
|
/r/LocalLLaMA/comments/1koy9b9/help_me_decide_dgx_spark_vs_m2_max_96gb/
| false | false |
self
| 2 | null |
Training Models
| 6 |
I want to fine-tune an AI model to essentially write like I would, as a test. I have a bunch of .txt documents with things that I have typed. It looks like the first step is to convert them into a format compatible with training, which I can't figure out how to do. If you have done this before, could you help me out? (A rough sketch of one way to do the conversion follows below.)
| 2025-05-17T17:38:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1koylpl/training_models/
|
TheMicrosoftMan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1koylpl
| false | null |
t3_1koylpl
|
/r/LocalLLaMA/comments/1koylpl/training_models/
| false | false |
self
| 6 | null |
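Since the post above asks how to turn a folder of .txt files into something trainable, here is a minimal sketch (my own illustration, not from the post) that packs the files into a JSONL dataset of `{"text": ...}` records, a format most fine-tuning stacks (Hugging Face `datasets`, Unsloth, Axolotl) can load. The folder and output paths are placeholders.

```python
# Minimal sketch: convert a folder of .txt files into a JSONL training file.
# "my_writing/" and "train.jsonl" are placeholder paths.
import json
from pathlib import Path

def txt_folder_to_jsonl(src_dir: str, out_path: str, min_chars: int = 200) -> int:
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for txt_file in sorted(Path(src_dir).glob("*.txt")):
            text = txt_file.read_text(encoding="utf-8", errors="ignore").strip()
            if len(text) < min_chars:
                continue  # skip near-empty files
            # One record per file; most trainers accept a plain "text" field.
            out.write(json.dumps({"text": text}, ensure_ascii=False) + "\n")
            records += 1
    return records

if __name__ == "__main__":
    n = txt_folder_to_jsonl("my_writing", "train.jsonl")
    print(f"wrote {n} records to train.jsonl")
```

From there, `datasets.load_dataset("json", data_files="train.jsonl")` gives you something a LoRA fine-tune can consume directly.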
Best local model for identifying UI elements?
| 1 |
In your opinion, what is the best image-to-text model that fits in up to 8GB of VRAM for identifying UI elements (widgets)? It should be able to name their role, extract text, give their coordinates, bounding rects, etc.
| 2025-05-17T17:49:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1koyv2s/best_local_model_for_identifying_ui_elements/
|
Friendly_Sympathy_21
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1koyv2s
| false | null |
t3_1koyv2s
|
/r/LocalLLaMA/comments/1koyv2s/best_local_model_for_identifying_ui_elements/
| false | false |
self
| 1 | null |
Best model to extract text from old Church records written in cursive?
| 1 |
[removed]
| 2025-05-17T18:01:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1koz4gd/best_model_to_extract_text_from_old_church/
|
locallmfinder
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1koz4gd
| false | null |
t3_1koz4gd
|
/r/LocalLLaMA/comments/1koz4gd/best_model_to_extract_text_from_old_church/
| false | false |
self
| 1 | null |
Inconsistency In Output When Working With A List of String
| 1 |
[removed]
| 2025-05-17T18:05:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1koz8g8/inconsistency_in_output_when_working_with_a_list/
|
yangbi00
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1koz8g8
| false | null |
t3_1koz8g8
|
/r/LocalLLaMA/comments/1koz8g8/inconsistency_in_output_when_working_with_a_list/
| false | false |
self
| 1 | null |
Learned AI dev from scratch, now trying to make it easier for newcomers
| 1 |
[removed]
| 2025-05-17T18:13:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kozeyr/learned_ai_dev_from_scratch_now_trying_to_make_it/
|
victor-bluera
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kozeyr
| false | null |
t3_1kozeyr
|
/r/LocalLLaMA/comments/1kozeyr/learned_ai_dev_from_scratch_now_trying_to_make_it/
| false | false |
self
| 1 | null |
Help me decide DGX Spark vs M2 Max 96GB
| 11 |
I would like to run a local LLM + RAG. Ideally 70B+ I am not sure if the DGX Spark is going to be significantly better than this MacBook Pro:
2023 M2 | 16.2" M2 Max 12-Core CPU | 38-Core GPU | 96 GB | 2 TB SSD
Can you guys please help me decide? Any advice, insights, and thoughts would be greatly appreciated.
| 2025-05-17T18:15:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kozggz/help_me_decide_dgx_spark_vs_m2_max_96gb/
|
Vegetable_Mix6629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kozggz
| false | null |
t3_1kozggz
|
/r/LocalLLaMA/comments/1kozggz/help_me_decide_dgx_spark_vs_m2_max_96gb/
| false | false |
self
| 11 | null |
Local models are starting to be able to do stuff on consumer grade hardware
| 178 |
I know this is something that has a different threshold for people depending on exactly the hardware configuration they have, but I've actually crossed an important threshold today and I think this is representative of a larger trend.
For some time, I've really wanted to be able to use local models to "vibe code". But not in the sense "one-shot generate a pong game", but in the actual sense of creating and modifying some smallish application with meaningful functionality. There are some agentic frameworks that do that - out of those, I use Roo Code and Aider - and up until now, I've been relying solely on my free credits in enterprise models (Gemini, Openrouter, Mistral) to do the vibe-coding. It's mostly worked, but from time to time I tried some SOTA open models to see how they fare.
Well, up until a few weeks ago, this wasn't going anywhere. The models were either (a) unable to properly process bigger context sizes or (b) degenerating on output too quickly so that they weren't able to call tools properly or (c) simply too slow.
Imagine my surprise when I loaded up the yarn-patched 128k context version of Qwen14B, on IQ4\_NL quants and 80k context, about the limit of what my PC with 10 GB of VRAM and 24 GB of RAM can handle. Obviously, on the contexts that Roo handles (20k+), with all the KV cache offloaded to RAM, the processing is slow: the model can output over 20 t/s on an empty context, but with this cache size the throughput slows down to about 2 t/s with thinking mode on. But on the other hand, the quality of edits is very good and its codebase cognition is very good. This is actually the first time I've ever had a local model handle Roo in a longer coding conversation, output a few meaningful code diffs, and not get stuck.
Note that this is a function of not one development, but at least three. On one hand, the models are certainly getting better; this wouldn't have been possible without Qwen3, although earlier on GLM4 was already performing quite well, signaling a potential breakthrough. On the other hand, the tireless work of the llama.cpp developers and quant makers like Unsloth or Bartowski has made the quants higher quality and the processing faster. And finally, tools like Roo are also getting better at handling different models and keeping their attention.
Obviously, this isn't the vibe-coding comfort of a Gemini Flash yet. Due to the slow speed, this is the stuff you can do while reading email / writing posts etc. and having the agent run in the background. But it's only going to get better. (A minimal llama-cpp-python sketch of a setup like this follows below.)
| 2025-05-17T18:26:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kozpym/local_models_are_starting_to_be_able_to_do_stuff/
|
ilintar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kozpym
| false | null |
t3_1kozpym
|
/r/LocalLLaMA/comments/1kozpym/local_models_are_starting_to_be_able_to_do_stuff/
| false | false |
self
| 178 | null |
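For anyone who wants to reproduce a similar long-context setup programmatically rather than via the llama.cpp server, here is a minimal sketch with llama-cpp-python; the GGUF filename and context size are placeholders, and `offload_kqv=False` keeps the KV cache in system RAM, mirroring the configuration described in the post.

```python
# Minimal sketch: load a long-context GGUF with the weights on the GPU but the
# KV cache kept in system RAM, similar to the setup described above.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-14B-128K-IQ4_NL.gguf",  # placeholder filename
    n_ctx=80_000,        # ~80k context, the practical limit mentioned in the post
    n_gpu_layers=-1,     # offload as many layers as fit into VRAM
    offload_kqv=False,   # keep the KV cache in system RAM instead of VRAM
)

out = llm("Write a Python function that reverses a linked list.", max_tokens=256)
print(out["choices"][0]["text"])
```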
MacBook speed problem
| 1 |
[removed]
| 2025-05-17T18:32:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kozudd/macbook_speed_problem/
|
seppe0815
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kozudd
| false | null |
t3_1kozudd
|
/r/LocalLLaMA/comments/1kozudd/macbook_speed_problem/
| false | false |
self
| 1 | null |
Free and Powerful: NVIDIA Parakeet v2 is a New Speech-to-Text Model Rivaling Whisper
| 1 | 2025-05-17T19:04:09 |
https://youtu.be/zn3gYcCqjRw
|
GadgetsX-ray
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp0kut
| false |
{'oembed': {'author_name': 'GadgetsXray', 'author_url': 'https://www.youtube.com/@GadgetsXray', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/zn3gYcCqjRw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA Parakeet v2 Rivals Whisper! 🤯 SMALLER & FASTER Speech-to-Text"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/zn3gYcCqjRw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'NVIDIA Parakeet v2 Rivals Whisper! 🤯 SMALLER & FASTER Speech-to-Text', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kp0kut
|
/r/LocalLLaMA/comments/1kp0kut/free_and_powerful_nvidia_parakeet_v2_is_a_new/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'RDy3nRBPHAK4_qgGKeFIbCUO5Zv1bspyMajNa3JqXn8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/L9eNgeJXlYV1bPQKkNaC3Dcp_K3DgVLNr6Lq_u97w-4.jpg?width=108&crop=smart&auto=webp&s=55f93ea3d1fada86e713595afb68c156c66e0ba3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/L9eNgeJXlYV1bPQKkNaC3Dcp_K3DgVLNr6Lq_u97w-4.jpg?width=216&crop=smart&auto=webp&s=81018eebeb8845cb38d0cea2337ab2e3b636f128', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/L9eNgeJXlYV1bPQKkNaC3Dcp_K3DgVLNr6Lq_u97w-4.jpg?width=320&crop=smart&auto=webp&s=355c78b2afb34c6fd0e8a867010a6d411565e114', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/L9eNgeJXlYV1bPQKkNaC3Dcp_K3DgVLNr6Lq_u97w-4.jpg?auto=webp&s=e3085255c12d17750788ec0f81203ae4f2f812ea', 'width': 480}, 'variants': {}}]}
|
||
Recommend an open air case that can hold multiple gpu’s?
| 3 |
Hey LocalLlama community. I’ve been slowly getting some gpu’s so I can build a rig for AI. Can people please recommend an open air case here? (One that can accommodate multiple gpu’s using riser cables).
I know some people use old mining frame cases but I’m having trouble finding the right one or a good deal- some sites have them marked up more than others and I’m wondering what the best frame/brand is.
Thanks!
| 2025-05-17T19:10:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp0q2i/recommend_an_open_air_case_that_can_hold_multiple/
|
Business-Weekend-537
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp0q2i
| false | null |
t3_1kp0q2i
|
/r/LocalLLaMA/comments/1kp0q2i/recommend_an_open_air_case_that_can_hold_multiple/
| false | false |
self
| 3 | null |
Use cases for delayed, yet much cheaper inference?
| 3 |
I have a project which hosts an open source LLM. The sell is that the cost is much cheaper (about 50-70% lower) than current inference API costs. However, the catch is that the output is generated later (delayed). I want to know the use cases for something like this. An example we thought of was async agentic systems that are scheduled daily.
| 2025-05-17T19:37:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp1cuu/usecases_for_delayedyet_much_cheaper_inference/
|
Maleficent-Tone6316
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp1cuu
| false | null |
t3_1kp1cuu
|
/r/LocalLLaMA/comments/1kp1cuu/usecases_for_delayedyet_much_cheaper_inference/
| false | false |
self
| 3 | null |
Why are some models free to use on OpenRouter despite not training using your data?
| 1 |
[removed]
| 2025-05-17T19:42:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp1h5j/why_are_some_models_free_to_use_on_openrouter/
|
Devatator_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp1h5j
| false | null |
t3_1kp1h5j
|
/r/LocalLLaMA/comments/1kp1h5j/why_are_some_models_free_to_use_on_openrouter/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'c6Em3tWOc_jsXlwYNGoBHCocIMsM0q9hkl4oEkgNNDU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?width=108&crop=smart&auto=webp&s=287a3fb29c2a0cddd217eb7ced2c68c5110135ad', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?width=216&crop=smart&auto=webp&s=024a92ab2d83ae9664768f254ff886adfe630a4d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?width=320&crop=smart&auto=webp&s=d95e5c44ec7d4b7572b7df9dd6e2109fbfcf1886', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?width=640&crop=smart&auto=webp&s=9d8282e99f00871c026af158f68cc65ca31cbe66', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?width=960&crop=smart&auto=webp&s=19a654da205a16fec3a3ca58f75e31e51c995002', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?width=1080&crop=smart&auto=webp&s=9141fcc3d14157dfc097715d104a6710e8c04df5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?auto=webp&s=16464182961483aa51471d48131f4b70ea363f46', 'width': 1200}, 'variants': {}}]}
|
Llama 3.3 8B (new model?)
| 1 | 2025-05-17T19:46:10 |
https://openrouter.ai/meta-llama/llama-3.3-8b-instruct:free
|
its_just_andy
|
openrouter.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp1jtu
| false | null |
t3_1kp1jtu
|
/r/LocalLLaMA/comments/1kp1jtu/llama_33_8b_new_model/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '5i-usceurrgm5_SQlQH1r3I7ML8HuQwUo9oY3VzgrO8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?width=108&crop=smart&auto=webp&s=38bbf3f471abe2cdcc9a9deb34cc2b439790b7e1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?width=216&crop=smart&auto=webp&s=4df1df94d6170e84c4fec92371d539b75d207a20', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?width=320&crop=smart&auto=webp&s=9d758721375a7c9642d863f45ce6a98f65c88dd4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?width=640&crop=smart&auto=webp&s=94196c5fe5ad80d715f7edf59e242dc887decb8c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?width=960&crop=smart&auto=webp&s=f70d115a4121986ec165a4028ed942e3e172c527', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?width=1080&crop=smart&auto=webp&s=5f1e00972bd738b700558236b71e51c7657cf07b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?auto=webp&s=7bbeea7d073ed57a52ae412c4211ca7502184d64', 'width': 1200}, 'variants': {}}]}
|
||
How do I implement exact length reasoning
| 1 |
Occasionally I find that I want an exact length for the reasoning steps, so that I can limit how long I have to wait for an answer and can also throw in my own guess for the complexity of the problem.
I know that language models suck at counting, so what I did was change the prompting.
I used multiple prompts of the type "You're playing a game with friends and you are allowed to add one word to the following answer before someone else adds theirs. When you get number 1 you must end with a period. It's your turn. You are allowed to add 1 of the remaining API_response={{length}} words. Question: ????<think>"
Every new token generated would remove one from length.
However, despite making it evidently clear that this number changes, hence the "API_response" (and, playing around with the prompt, sometimes I move the number to the end), the model never seems to remotely follow the instructions. I thought that by giving it a number, even a rough one, it would generally understand about how long it has left, but it completely ignores this hint. Even when I tell it it has one word left, it does not output a period and still generates random mid-sentence thoughts.
PS: I also know this is extremely inefficient, since the number changing at the beginning means a recomputation of the entire KV matrices, but my model is fast enough. I just don't understand why it doesn't follow instructions or understand a rough hint. (A sketch of enforcing the budget at decode time instead follows below.)
| 2025-05-17T19:55:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp1r44/how_do_i_implement_exact_length_reasoning/
|
Unusual_Guidance2095
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp1r44
| false | null |
t3_1kp1r44
|
/r/LocalLLaMA/comments/1kp1r44/how_do_i_implement_exact_length_reasoning/
| false | false |
self
| 1 | null |
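The prompt-counting approach above tends to fail because the model never actually tracks the remaining budget per token. A more reliable route is to enforce the budget at decode time. Below is a minimal sketch with Hugging Face `transformers`; the model name is a placeholder, and it forces an end-of-sequence token (rather than a period) once the allotted number of new tokens has been generated.

```python
# Minimal sketch: cap the reasoning length by forcing EOS after a fixed number
# of generated tokens, instead of asking the model to count words itself.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class BudgetStop(LogitsProcessor):
    def __init__(self, prompt_len: int, budget: int, eos_id: int):
        self.prompt_len = prompt_len  # tokens already in the prompt
        self.budget = budget          # max new tokens allowed
        self.eos_id = eos_id

    def __call__(self, input_ids, scores):
        generated = input_ids.shape[-1] - self.prompt_len
        if generated >= self.budget:
            mask = torch.full_like(scores, float("-inf"))
            mask[:, self.eos_id] = 0.0  # only EOS stays legal -> generation stops
            scores = scores + mask
        return scores

model_name = "Qwen/Qwen3-4B"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Question: ????\n<think>"
inputs = tok(prompt, return_tensors="pt")
budget = 128
out = model.generate(
    **inputs,
    max_new_tokens=budget + 1,
    logits_processor=LogitsProcessorList(
        [BudgetStop(inputs["input_ids"].shape[-1], budget, tok.eos_token_id)]
    ),
)
print(tok.decode(out[0], skip_special_tokens=True))
```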
is it worth running fp16?
| 18 |
So I'm getting mixed responses from search. Answers are literally all over the place, ranging from a huge difference, through zero difference, to even better results at q8.
I'm currently testing qwen3 30a3 at fp16, as it still has decent throughput (~45 t/s) and for many tasks I don't need ~80 t/s, especially if I'd get some quality gains. Since it's the weekend and I'm spending much less time at the computer, I can't really put it through a real trial by fire. Hence the question - is it going to improve anything, or is it just burning RAM? (A back-of-the-envelope memory comparison follows below.)
Also note - I'm finding 32b (and higher) too slow for some of my tasks, especially if they are reasoning models, so I'd rather stick to MoE.
| 2025-05-17T20:10:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp23kw/is_it_worth_running_fp16/
|
kweglinski
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp23kw
| false | null |
t3_1kp23kw
|
/r/LocalLLaMA/comments/1kp23kw/is_it_worth_running_fp16/
| false | false |
self
| 18 | null |
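For a rough sense of what fp16 costs in memory relative to common quants for a ~30B model, here is a back-of-the-envelope sketch; the bits-per-weight figures for the GGUF quants are approximations and real files carry some extra overhead.

```python
# Rough weight-memory estimate for a ~30.5B-parameter model at different precisions.
params = 30.5e9
for name, bpw in [("fp16", 16), ("q8_0", 8.5), ("q4_k_m", 4.8)]:
    gib = params * bpw / 8 / 1024**3
    print(f"{name:>7}: ~{gib:5.1f} GiB of weights")
```

Going from fp16 to q8_0 roughly halves the footprint, and q8_0 is usually reported as near-lossless, which is a large part of why the answers people give to this question vary so much.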
Storing models on local network storage for multiple devices?
| 2 |
Has anyone tried this? Is it just way too slow? Unfortunately I have a data cap on my internet and would also like to save some disk space on local drives. My use case is having lmstudio or llama.cpp load models from network attached storage.
| 2025-05-17T20:13:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp25ki/storing_models_on_local_network_storage_so_for/
|
_w_8
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp25ki
| false | null |
t3_1kp25ki
|
/r/LocalLLaMA/comments/1kp25ki/storing_models_on_local_network_storage_so_for/
| false | false |
self
| 2 | null |
Visual reasoning still has a lot of room for improvement.
| 38 |
Was pretty surprised how poorly LLMs handle this question, so figured I would share it:
https://preview.redd.it/be4c6mx0fe1f1.png?width=1149&format=png&auto=webp&s=1909a7872de046afcc355b8b726a8e0aed2b8a68
What is DTS temp and why is it so much higher than my CPU temp?
Tried this on: Gemma 27b, Maverick, Scout, 2.5 Pro, Sonnet 3.7, o4-mini-high, Grok 3.
Every single model gets it wrong at first.
After following up with a little hint:
but look at the graphs
Sonnet 3.7 figures it out, but all the others still get it wrong.
If you aren't familiar with servers / overclocking CPUs this might not be obvious to you:
The key thing here is those 2 temperature graphs are inverted.
The DTS temperature here is actually showing a "Distance to maximum temperature" (high temperature number = colder cpu)
| 2025-05-17T20:21:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp2cok/visual_reasoning_still_has_a_lot_of_room_for/
|
Conscious_Cut_6144
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp2cok
| false | null |
t3_1kp2cok
|
/r/LocalLLaMA/comments/1kp2cok/visual_reasoning_still_has_a_lot_of_room_for/
| false | false | 38 | null |
|
Thinking of picking up a tenstorrent blackhole. Anyone using it right now?
| 3 |
Hi,
Because of the price and availability, I am looking to get a tenstorrent blackhole. Before I purchase, I wanted to check if anyone has one. Does purchasing one make sense or do I need two because of the vram capacity? Also, I believe this is only for inference and not for sft or RL. How is the SDK right now?
| 2025-05-17T20:57:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp34qe/thinking_of_picking_up_a_tenstorrent_blackhole/
|
Studyr3ddit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp34qe
| false | null |
t3_1kp34qe
|
/r/LocalLLaMA/comments/1kp34qe/thinking_of_picking_up_a_tenstorrent_blackhole/
| false | false |
self
| 3 | null |
Document processing w/ poor hardware
| 0 |
I‘m looking for a LLM that I can run locally to analyze scanned documents with 1-5 pages (extract correspondent, date, and topic in a few keywords) to save them in my Nextcloud.
I already have Tesseract OCR available in my pipeline, thus the document‘s text is available.
As I want to have the pipeline available without a running laptop, I'm thinking about operating it on my Synology DS918+ with currently 8GB RAM. I know this is a huge limitation, but speed is not crucial… do you see a model which might be capable of doing this on the Synology, or do you see a hardware expansion that would enable the NAS to do it? (A minimal sketch of the extraction call itself follows below.)
| 2025-05-17T21:33:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp3xb5/document_processing_w_poor_hardware/
|
nihebe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp3xb5
| false | null |
t3_1kp3xb5
|
/r/LocalLLaMA/comments/1kp3xb5/document_processing_w_poor_hardware/
| false | false |
self
| 0 | null |
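Since the OCR text is already available from Tesseract, the LLM step can be a small prompt-to-JSON call against whatever small model ends up running on or near the NAS. A minimal sketch, assuming an Ollama-style endpoint; the URL, model tag, and field names are placeholders.

```python
# Minimal sketch: ask a local model (served by Ollama) to pull correspondent,
# date, and topic keywords out of OCR'd text, returning JSON.
import json
import requests

PROMPT = """Extract the following from the scanned letter below and answer with
JSON only, using the keys "correspondent", "date", "topic" (a few keywords).

Letter:
{text}
"""

def extract_metadata(ocr_text: str, model: str = "qwen2.5:3b") -> dict:
    resp = requests.post(
        "http://localhost:11434/api/generate",   # default Ollama endpoint
        json={"model": model, "prompt": PROMPT.format(text=ocr_text),
              "stream": False, "format": "json"},
        timeout=600,  # generous timeout for weak hardware
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])  # Ollama puts the output in "response"

if __name__ == "__main__":
    print(extract_metadata("Dear Mr. Smith, regarding your insurance policy ... 12 March 2024"))
```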
Anyone ever try Riffusion Ai music generator for spoken word. I have. The Ai Riffusion voices have so much emotion. You can clone those voices in Zonos. I have a Sinatra voice, Josh Groban, Ella Fitzgerald (Maybe song), Southern, German, and more. don't have a Riff Subs. Snippets 4 personal use.
| 1 | 2025-05-17T21:45:38 |
https://v.redd.it/is6hyzz4xe1f1
|
Extension-Fee-8480
|
/r/LocalLLaMA/comments/1kp46v3/anyone_ever_try_riffusion_ai_music_generator_for/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp46v3
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/is6hyzz4xe1f1/DASHPlaylist.mpd?a=1750239943%2CZDM5Yjg2ODM1NWMxZjZlNzUxMDU5OTAwY2NjYzVmMjQyMTE2YjZjNTJiODAxNjU1N2NkOWRiZWE4Y2Q0Y2JhMA%3D%3D&v=1&f=sd', 'duration': 599, 'fallback_url': 'https://v.redd.it/is6hyzz4xe1f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/is6hyzz4xe1f1/HLSPlaylist.m3u8?a=1750239943%2CZjYwZDZlNWY1NWU1Y2IxMjhlY2IxZmMyNTY3OGQ0MmY4N2QwMWMwNzU4Y2RlMzc0MzMxODQ4OTNmNTM3MjVkYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/is6hyzz4xe1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1kp46v3
|
/r/LocalLLaMA/comments/1kp46v3/anyone_ever_try_riffusion_ai_music_generator_for/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?width=108&crop=smart&format=pjpg&auto=webp&s=3788cb32ebda3d6bac20cc66e0a759f7ecd55f88', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?width=216&crop=smart&format=pjpg&auto=webp&s=c0b15c41f0ceaf5c9220c80410f51e02c6bafe9c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?width=320&crop=smart&format=pjpg&auto=webp&s=55af1f05ae4541e220ee387504da6d369f25a6d8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?width=640&crop=smart&format=pjpg&auto=webp&s=c270099d0a04bb1d326c2ea23c26f376ce766446', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?width=960&crop=smart&format=pjpg&auto=webp&s=30570639556c7bcbf9195b0df8f9702bf3982414', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bcb1f0ed93bb69eef1896a59a1c7a313fc904cd5', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?format=pjpg&auto=webp&s=be8ee659cc7b712b5c9b5d26c5b9b7978af4a587', 'width': 1280}, 'variants': {}}]}
|
||
AlphaEvolve Paper Dropped Yesterday - So I Built My Own Open-Source Version: OpenAlpha_Evolve!
| 526 |
Google DeepMind just dropped their AlphaEvolve paper (May 14th) on an AI that designs and evolves algorithms. Pretty groundbreaking.
Inspired, I immediately built OpenAlpha\_Evolve – an open-source Python framework so anyone can experiment with these concepts.
This was a rapid build to get a functional version out. Feedback, ideas for new agent challenges, or contributions to improve it are welcome. Let's explore this new frontier. (A toy sketch of the core evolutionary loop follows below.)
Imagine an agent that can:
* Understand a complex problem description.
* Generate initial algorithmic solutions.
* Rigorously test its own code.
* Learn from failures and successes.
* Evolve increasingly sophisticated and efficient algorithms over time.
GitHub (All new code): [https://github.com/shyamsaktawat/OpenAlpha\_Evolve](https://github.com/shyamsaktawat/OpenAlpha_Evolve)
https://preview.redd.it/lcz46q2n1f1f1.png?width=1811&format=png&auto=webp&s=dcc14652b9eb0bf84ca7927dfe3c906786f07a40
+---------------------+ +-----------------------+ +--------------------+
| Task Definition |----->| Prompt Engineering |----->| Code Generation |
| (User Input) | | (PromptDesignerAgent) | | (LLM / Gemini) |
+---------------------+ +-----------------------+ +--------------------+
^ |
| |
| V
+---------------------+ +-----------------------+ +--------------------+
| Select Survivors & |<-----| Fitness Evaluation |<-----| Execute & Test |
| Next Generation | | (EvaluatorAgent) | | (EvaluatorAgent) |
+---------------------+ +-----------------------+ +--------------------+
(Evolutionary Loop Continues)
(Sources: DeepMind Blog - May 14, 2025: \\
Google Alpha Evolve Paper - [https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf)
Google Alpha Evolve Blogpost - [https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/](https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/)
| 2025-05-17T22:14:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp4scy/alphaevolve_paper_dropped_yesterday_so_i_built_my/
|
Huge-Designer-7825
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp4scy
| false | null |
t3_1kp4scy
|
/r/LocalLLaMA/comments/1kp4scy/alphaevolve_paper_dropped_yesterday_so_i_built_my/
| false | false | 526 |
{'enabled': False, 'images': [{'id': 'H1nbou2eTFH-qOmHNOoRAEMZnJzwEDAGtwi-Gk5H5oY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?width=108&crop=smart&auto=webp&s=713eac46f76066c99a6c4310ada3d00d5f519a95', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?width=216&crop=smart&auto=webp&s=40d82e6762f3d35663ca56fcc6c8c10da3092247', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?width=320&crop=smart&auto=webp&s=9ffe4c3daec9148a7a1dc179cd25b7763d9335de', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?width=640&crop=smart&auto=webp&s=dfd4f6f0cce166ce310d3f28e00f2ea465bb8294', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?width=960&crop=smart&auto=webp&s=247be04f4d35f83acb76a5f5915f50bcb250ebee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?width=1080&crop=smart&auto=webp&s=9bc7ac6512f1239236fbc309321ffacf82de0a21', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?auto=webp&s=18c4b86be8d70597240a3327cf3d192d7b44d542', 'width': 1200}, 'variants': {}}]}
|
|
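For readers who want the gist of the loop in the diagram without opening the repo, here is a toy sketch (my own illustration, not OpenAlpha_Evolve's actual code): candidates are generated, scored by a fitness function, and the best survivors seed the next generation; `generate_candidate` stands in for the LLM-backed code generation step.

```python
# Toy evolutionary loop mirroring the diagram above: generate -> evaluate ->
# select survivors -> next generation. The "task" here is just minimizing x^2.
import random

def generate_candidate(parent=None):
    if parent is None:  # initial population
        return [random.uniform(-5, 5) for _ in range(3)]
    return [x + random.gauss(0, 0.3) for x in parent]  # mutate a survivor

def fitness(candidate):
    return -sum(x * x for x in candidate)  # higher is better

population = [generate_candidate() for _ in range(20)]
for generation in range(30):
    survivors = sorted(population, key=fitness, reverse=True)[:5]   # selection
    children = [generate_candidate(random.choice(survivors)) for _ in range(15)]
    population = survivors + children

best = max(population, key=fitness)
print("best candidate:", best, "fitness:", fitness(best))
```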
RAG embeddings survey - What are your chunking / embedding settings?
| 32 |
I’ve been working with RAG for over a year now and it honestly seems like a bit of a dark art. I haven’t really found the perfect settings for my use case yet. I’m dealing with several hundred policy documents as well as spreadsheets that contain number codes that link to specific products and services. It’s very important that these codes be associated with the correct product or service. Unfortunately I get a lot of hallucinations when it comes to the code lookup tasks. The policy PDFs are usually 100 pages or more. The larger chunk size seems to help with the policy PDFs but not so much with the specific code lookups in the spreadsheets
After a lot of experimenting over months and months, the following settings seem to work best for me (at least for the policy PDFs); a minimal chunker illustrating the size/overlap numbers follows below.
- Document ingestion = Docling
- Vector Storage = ChromaDB (built into Open WebUI)
- Embedding Model = Nomic-embed-large
- Hybrid Search Model (reranker) = BAAI/bge-reranker-v2-m3
- Chunk size = 2000
- Overlap size = 500
- Top K = 10
- Top K reranker = 10
- Relevance Threshold = 0
What are your use cases, and what settings have you found work best for them?
| 2025-05-17T22:31:55 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp558b
| false | null |
t3_1kp558b
|
/r/LocalLLaMA/comments/1kp558b/rag_embeddings_survey_what_are_your_chunking/
| false | false | 32 |
{'enabled': True, 'images': [{'id': 'DaXH1IrEbZ0_tsOvlIdyooiYx4OS_Ytmp6-h8bVyvzU', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.jpeg?width=108&crop=smart&auto=webp&s=52cba00cd7c6c428247eadf39c61deefe1bca297', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.jpeg?width=216&crop=smart&auto=webp&s=7c029f971a931032de55a66ea3f33ebbb14b2e92', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.jpeg?width=320&crop=smart&auto=webp&s=676594703ce551a6cb888e739e8ef3ba630e18a4', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.jpeg?width=640&crop=smart&auto=webp&s=642324c1171a53af3487212ca4182ab4c419f02e', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.jpeg?width=960&crop=smart&auto=webp&s=72c03235fddf4c535d652e6f9d56bf0d97c4d348', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.jpeg?auto=webp&s=c37c09dfcf31f411447fdefc0f1c640ce35cfce6', 'width': 1024}, 'variants': {}}]}
|
||
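To make the chunk-size/overlap settings above concrete, here is a minimal sliding-window chunker; it counts characters for simplicity (whether your stack counts characters or tokens changes what 2000/500 actually means), and the numbers are the ones from the post.

```python
# Minimal character-based chunker using the post's settings
# (chunk size 2000, overlap 500).
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 500) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start, step = [], 0, chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

doc = "policy text ... " * 1000
print(len(chunk_text(doc)), "chunks")
```

For the spreadsheet code lookups, row-level chunks (one code/product pair per chunk) may work better than fixed windows, since a code and its product then can never be split across chunk boundaries.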
What models do ya’ll recommend from Arli Ai?
| 3 |
Been using Arli Ai for a couple of days now. I really like the huge variety of models on there. But I still can’t seem to find the right model that sticks with me. I was wondering what models do ya’ll mostly use for text roleplay?
I’m looking for a model that’s creative, doesn’t need me to hold its hand to get things moving along, and is good with erp.
I mainly use Janitor Ai with my iPhone for text roleplay. I wish I could get silly tavern on iPhone 😭.
| 2025-05-17T22:55:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp5mbu/what_models_do_yall_recommend_from_arli_ai/
|
Melodyblue11
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp5mbu
| false | null |
t3_1kp5mbu
|
/r/LocalLLaMA/comments/1kp5mbu/what_models_do_yall_recommend_from_arli_ai/
| false | false |
self
| 3 | null |
Can Llama 3.2 3B do bash programming?
| 0 |
I just got Llama running about 2 days ago and so far I like having a local model running; I don't have to worry about running out of questions. Since I'm running it on a Linux machine (Debian 12) I wanted to make a bash script to both start and stop the service. That led me online to find an AI that can do Bash, and I know enough about bash to tell that the scripts it made were good; I also used to use BAT files back when I ran Windows. So can Llama 3.2 do bash, or is there a 3B self-hosted model that can?
I have looked online, and I haven't had any luck. I use Startpage as a search engine.
| 2025-05-17T23:01:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp5qqh/can_llama_32_3b_do_bash_programing/
|
aknight2015
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp5qqh
| false | null |
t3_1kp5qqh
|
/r/LocalLLaMA/comments/1kp5qqh/can_llama_32_3b_do_bash_programing/
| false | false |
self
| 0 | null |
Multi-Source RAG with Hybrid Search and Re-ranking in OpenWebUI - Step-by-Step Guide
| 20 |
Hi guys, I created a DETAILED step-by-step hybrid RAG implementation guide for OpenWebUI -
[https://productiv-ai.guide/start/multi-source-rag-openwebui/](https://productiv-ai.guide/start/multi-source-rag-openwebui/)
Let me know what you think. I couldn't find any other online sources that are as detailed as what I put together. I even managed to include external re-ranking steps which was a feature just added a couple weeks ago.
I've seen all kinds of questions asking for up-to-date guides on how to set up a RAG pipeline, so I wanted to contribute. Hope it helps some folks out there!
| 2025-05-17T23:05:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp5thx/multisource_rag_with_hybrid_search_and_reranking/
|
Hisma
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp5thx
| false | null |
t3_1kp5thx
|
/r/LocalLLaMA/comments/1kp5thx/multisource_rag_with_hybrid_search_and_reranking/
| false | false |
self
| 20 | null |
Thoughts on build? This is phase I. Open to all advice and opinions.
| 2 |
Category | Part | Key specs / notes
---|---|---
CPU | AMD Ryzen 9 7950X3D | 16 C / 32 T, 128 MB 3D V-Cache
Motherboard | ASUS ROG Crosshair X870E Hero | AM5, PCIe 5.0 x16 / x8 + x8
Memory | 4 × 48 GB Corsair Vengeance DDR5-6000 CL30 | 192 GB total
GPUs | 2 × NVIDIA RTX 5090 | 32 GB GDDR7 each, Blackwell
Storage | 2 × Samsung 990 Pro 2 TB | NVMe Gen-4 ×4
Case | Phanteks Enthoo Pro II (Server Edition) | SSI-EEB, 15 fan mounts, dual-PSU bay
PSU | Corsair TX-1600 (1600 W Platinum) | Two native 12 VHPWR per GPU
CPU cooler | Corsair Nautilus 360 RS ARGB | 360 mm AIO
System fans | 9 × Corsair AF120 RGB Elite | Front & bottom intake, top exhaust
Fan / RGB hub | Corsair iCUE Commander Core XT | Ports 1-3 front, 4-6 bottom
Thermal paste | Thermal Grizzly Kryonaut Extreme | —
Extras | Inland 4-port USB-C 3.2 Gen 1 hub | Desk convenience
This is phase I.
| 2025-05-17T23:05:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp5tur/thoughts_on_build_this_is_phase_i_open_to_all/
|
Substantial_Cut_9418
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp5tur
| false | null |
t3_1kp5tur
|
/r/LocalLLaMA/comments/1kp5tur/thoughts_on_build_this_is_phase_i_open_to_all/
| false | false |
self
| 2 | null |
UQLM: Uncertainty Quantification for Language Models
| 20 |
Sharing a new open source Python package for generation time, zero-resource hallucination detection called UQLM. It leverages state-of-the-art uncertainty quantification techniques from the academic literature to compute response-level confidence scores based on response consistency (in multiple responses to the same prompt), token probabilities, LLM-as-a-Judge, or ensembles of these. Check it out, share feedback if you have any, and reach out if you want to contribute!
https://github.com/cvs-health/uqlm
| 2025-05-17T23:21:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp64ro/uqlm_uncertainty_quantification_for_language/
|
Opposite_Answer_287
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp64ro
| false | null |
t3_1kp64ro
|
/r/LocalLLaMA/comments/1kp64ro/uqlm_uncertainty_quantification_for_language/
| false | false |
self
| 20 |
{'enabled': False, 'images': [{'id': '2qnPuYcFTAxvO3aqKSCMw_WrdIfT65MrlOGInjQ0bvc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?width=108&crop=smart&auto=webp&s=e6c3fa67ac5142734e3297769c0f917b643b542a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?width=216&crop=smart&auto=webp&s=91633ff74559e4cf25e65aae72e5bb234c4fc070', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?width=320&crop=smart&auto=webp&s=ecf05eca9c65ed10debb096d6e6c9a6e2162db23', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?width=640&crop=smart&auto=webp&s=6d1069e2fdaf4d7542e79a81f0fb0d0235e6965f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?width=960&crop=smart&auto=webp&s=d90ecdcdb0850f62985889bf8713b54648895068', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?width=1080&crop=smart&auto=webp&s=520d676848cfc5332b41852fd8580a411e0b0cd8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?auto=webp&s=ff486056a88aaf1b8fa610afa29eb381579ba3c0', 'width': 1200}, 'variants': {}}]}
|
ROCm 6.4 + current unsloth working
| 31 |
Here a working ROCm unsloth docker setup:
Dockerfile (for gfx1100) - note the pip version specifiers are quoted so the shell doesn't treat `>=` as an output redirect:

```dockerfile
FROM rocm/pytorch:rocm6.4_ubuntu22.04_py3.10_pytorch_release_2.6.0

WORKDIR /root
# bitsandbytes from the ROCm multi-backend branch, built for gfx1100
RUN git clone -b rocm_enabled_multi_backend https://github.com/ROCm/bitsandbytes.git
RUN cd bitsandbytes/ && cmake -DGPU_TARGETS="gfx1100" -DBNB_ROCM_ARCH="gfx1100" -DCOMPUTE_BACKEND=hip -S . && make && pip install -e .

RUN pip install "unsloth_zoo>=2025.5.7"
RUN pip install "datasets>=3.4.1" "sentencepiece>=0.2.0" tqdm psutil "wheel>=0.42.0"
RUN pip install "accelerate>=0.34.1"
RUN pip install "peft>=0.7.1,!=0.11.0"

WORKDIR /root
# xformers from the ROCm fork at a known-good commit
RUN git clone https://github.com/ROCm/xformers.git
RUN cd xformers/ && git submodule update --init --recursive && git checkout 13c93f3 && PYTORCH_ROCM_ARCH=gfx1100 python setup.py install

ENV FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE"

WORKDIR /root
# Triton-based flash-attention for AMD (main_perf branch)
RUN git clone https://github.com/ROCm/flash-attention.git
RUN cd flash-attention && git checkout main_perf && python setup.py install

WORKDIR /root
RUN git clone https://github.com/unslothai/unsloth.git
RUN cd unsloth && pip install .
```
docker-compose.yml:

```yaml
version: '3'
services:
  unsloth:
    container_name: unsloth
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    image: unsloth
    volumes:
      - ./data:/data
      - ./hf:/root/.cache/huggingface
    environment:
      - 'HSA_OVERRIDE_GFX_VERSION=${HSA_OVERRIDE_GFX_VERSION-11.0.0}'
    command: sleep infinity
```
`python -m bitsandbytes` says "PyTorch settings found: ROCM_VERSION=64" but also tracebacks with:

```
  File "/root/bitsandbytes/bitsandbytes/backends/__init__.py", line 15, in ensure_backend_is_available
    raise NotImplementedError(f"Device backend for {device_type} is currently not supported.")
NotImplementedError: Device backend for cuda is currently not supported.
```
python -m xformers.info
xFormers 0.0.30+13c93f39.d20250517
memory_efficient_attention.ckF: available
memory_efficient_attention.ckB: available
memory_efficient_attention.ck_decoderF: available
memory_efficient_attention.ck_splitKF: available
memory_efficient_attention.cutlassF-pt: unavailable
memory_efficient_attention.cutlassB-pt: unavailable
[email protected]: available
[email protected]: available
[email protected]: unavailable
[email protected]: unavailable
memory_efficient_attention.triton_splitKF: available
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
sp24.sparse24_sparsify_both_ways: available
sp24.sparse24_apply: available
sp24.sparse24_apply_dense_output: available
sp24._sparse24_gemm: available
[email protected]: available
[email protected]: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
pytorch.version: 2.6.0+git45896ac
pytorch.cuda: available
gpu.compute_capability: 11.0
gpu.name: AMD Radeon PRO W7900
dcgm_profiler: unavailable
build.info: available
build.cuda_version: None
build.hip_version: None
build.python_version: 3.10.16
build.torch_version: 2.6.0+git45896ac
build.env.TORCH_CUDA_ARCH_LIST: None
build.env.PYTORCH_ROCM_ARCH: gfx1100
build.env.XFORMERS_BUILD_TYPE: None
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: None
source.privacy: open source
[This](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) Notebook on a W7900 48GB:
...
{'loss': 0.3836, 'grad_norm': 25.887989044189453, 'learning_rate': 3.2000000000000005e-05, 'epoch': 0.01}
{'loss': 0.4308, 'grad_norm': 1.1072479486465454, 'learning_rate': 2.4e-05, 'epoch': 0.01}
{'loss': 0.3695, 'grad_norm': 0.22923792898654938, 'learning_rate': 1.6000000000000003e-05, 'epoch': 0.01}
{'loss': 0.4119, 'grad_norm': 1.4164329767227173, 'learning_rate': 8.000000000000001e-06, 'epoch': 0.01}
17.4 minutes used for training.
Peak reserved memory = 14.551 GB.
Peak reserved memory for training = 0.483 GB.
Peak reserved memory % of max memory = 32.347 %.
Peak reserved memory for training % of max memory = 1.074 %.
| 2025-05-17T23:37:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp6gdv/rocm_64_current_unsloth_working/
|
Ok_Ocelot2268
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp6gdv
| false | null |
t3_1kp6gdv
|
/r/LocalLLaMA/comments/1kp6gdv/rocm_64_current_unsloth_working/
| false | false |
self
| 31 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
|
are there any models trained that are good at identifying hummed tunes?
| 1 |
There are some songs that are on the tip of my tongue but I can't remember anything except how the tune went, and I realize I have little way of searching that.
Maybe an LLM could help?
| 2025-05-18T00:29:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp7gvs/are_there_any_models_trained_that_are_good_at/
|
o2beast
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp7gvs
| false | null |
t3_1kp7gvs
|
/r/LocalLLaMA/comments/1kp7gvs/are_there_any_models_trained_that_are_good_at/
| false | false |
self
| 1 | null |
Qwen3+ MCP
| 10 |
Trying to workshop a capable local rig; the latest buzz is MCP... right?
Can Qwen3 (or the latest SOTA 32B model) be fine-tuned to use it well, or does the model itself have to be trained on how to use it from the start?
Rig context:
I just got a 3090 and was able to keep my 3060 in the same setup. I also have 128gb of ddr4 that I use to hot swap models with a mounted ram disk.
| 2025-05-18T00:51:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp7vba/qwen3_mcp/
|
OGScottingham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp7vba
| false | null |
t3_1kp7vba
|
/r/LocalLLaMA/comments/1kp7vba/qwen3_mcp/
| false | false |
self
| 10 | null |
Best Open Source LLM for Function Calling + Multimodal Image Support
| 6 |
What's the best LLM to use locally that can support function calling well and also has multimodal image support? I'm looking for, essentially, a replacement for Gemini 2.5.
The device I'm using is an M1 Macbook with 64gb memory, so I can run decently large models, but it would be most ideal if the response time isn't too horrible on my (by AI standards) relatively mediocre hardware.
I am aware of the Berkeley Function-Calling Leaderboard, but I didn't see any models there that also have multimodal image support.
Is there something that matches my requirements, or am I better off just adding an image-to-text model to preprocess image outputs?
| 2025-05-18T01:00:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp81ez/best_open_source_llm_for_function_calling/
|
Zlare7771
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp81ez
| false | null |
t3_1kp81ez
|
/r/LocalLLaMA/comments/1kp81ez/best_open_source_llm_for_function_calling/
| false | false |
self
| 6 | null |
Biggest & best local LLM with no guardrails?
| 17 |
dot.
| 2025-05-18T02:32:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp9or5/biggest_best_local_llm_with_no_guardrails/
|
_DryWater_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp9or5
| false | null |
t3_1kp9or5
|
/r/LocalLLaMA/comments/1kp9or5/biggest_best_local_llm_with_no_guardrails/
| false | false |
self
| 17 | null |
Orange Pi 5 plus (32g) Alternatives for Running 8b Models
| 1 |
[removed]
| 2025-05-18T02:42:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kp9v3n/orange_pi_5_plus_32g_alternatives_for_running_8b/
|
legendsofngdb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kp9v3n
| false | null |
t3_1kp9v3n
|
/r/LocalLLaMA/comments/1kp9v3n/orange_pi_5_plus_32g_alternatives_for_running_8b/
| false | false |
self
| 1 | null |
Never heard of this guy who got the most citations under the "artificial intelligence" label in Google Scholar
| 1 |
[removed]
| 2025-05-18T03:29:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpany5/never_heard_of_this_guy_who_got_the_most_citation/
|
SilverKale2218
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpany5
| false | null |
t3_1kpany5
|
/r/LocalLLaMA/comments/1kpany5/never_heard_of_this_guy_who_got_the_most_citation/
| false | false | 1 | null |
|
I built an AI-powered Food & Nutrition Tracker that analyzes meals from photos! Planning to open-source it
| 91 |
Hey
Been working on this Diet & Nutrition tracking app and wanted to share a quick demo of its current state. The core idea is to make food logging as painless as possible.
**Key features so far:**
* **AI Meal Analysis:** You can upload an image of your food, and the AI tries to identify it and provide nutritional estimates (calories, protein, carbs, fat).
* **Manual Logging & Edits:** Of course, you can add/edit entries manually.
* **Daily Nutrition Overview:** Tracks calories against goals, macro distribution.
* **Water Intake:** Simple water tracking.
* **Weekly Stats & Streaks:** To keep motivation up.
I'm really excited about the AI integration. It's still a work in progress, but the goal is to streamline the most tedious part of tracking.
**Code Status:** I'm planning to clean up the codebase and open-source it on GitHub in the near future! For now, if you're interested in other AI/LLM related projects and learning resources I've put together, you can check out my "LLM-Learn-PK" repo:
[https://github.com/Pavankunchala/LLM-Learn-PK](https://github.com/Pavankunchala/LLM-Learn-PK)
**P.S.** On a related note, I'm actively looking for new opportunities in Computer Vision and LLM engineering. If your team is hiring or you know of any openings, I'd be grateful if you'd reach out!
* **Email:** [[email protected]](mailto:[email protected])
* **My other projects on GitHub:** [https://github.com/Pavankunchala](https://github.com/Pavankunchala)
* **Resume:** [https://drive.google.com/file/d/1ODtF3Q2uc0krJskE\_F12uNALoXdgLtgp/view](https://drive.google.com/file/d/1ODtF3Q2uc0krJskE_F12uNALoXdgLtgp/view)
Thanks for checking it out!
| 2025-05-18T03:35:08 |
https://v.redd.it/cmoi3scing1f1
|
Solid_Woodpecker3635
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kparp9
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/cmoi3scing1f1/DASHPlaylist.mpd?a=1750131322%2CYmMyM2RlYjRkODUwNTNlYzA3MTFhNmMxYWExZjE0ZWU4ZTJlMTYwM2YxZjVjZGE5MmZhMzVmZjZhMGY2OWFlMw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/cmoi3scing1f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/cmoi3scing1f1/HLSPlaylist.m3u8?a=1750131322%2COTMwOTliZmEzZTZhN2QyNDgyYzcwYmE3YTRmNGRiNzc0MDUzMWJlNWE5MDlkZWFmMjRmZTE2NDczMzkyZDlmMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cmoi3scing1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1180}}
|
t3_1kparp9
|
/r/LocalLLaMA/comments/1kparp9/i_built_an_aipowered_food_nutrition_tracker_that/
| false | false | 91 |
{'enabled': False, 'images': [{'id': 'bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?width=108&crop=smart&format=pjpg&auto=webp&s=bff07a838de86b56a15ec4eb4d02cf225bbccb86', 'width': 108}, {'height': 131, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?width=216&crop=smart&format=pjpg&auto=webp&s=c004496e84a32995abd047c34b24abfa4293eb28', 'width': 216}, {'height': 195, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?width=320&crop=smart&format=pjpg&auto=webp&s=2fcfc452ce66872cf07a5526ff2f64635fe4479f', 'width': 320}, {'height': 390, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?width=640&crop=smart&format=pjpg&auto=webp&s=bde9130f1e1f3bb4b9bf3ea3e781af5aa79c81b4', 'width': 640}, {'height': 585, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?width=960&crop=smart&format=pjpg&auto=webp&s=f97100480e86440b151c51b057812669dadf34e6', 'width': 960}, {'height': 658, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6106eeee67429588a9b80a32ef17b52cbfef74e9', 'width': 1080}], 'source': {'height': 828, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?format=pjpg&auto=webp&s=4270fbc0b7c8756264f6eaf45acfaaca229a832a', 'width': 1358}, 'variants': {}}]}
|
|
A random guy who topped the citation ranking for AI in Google Scholar
| 1 |
[removed]
| 2025-05-18T03:36:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpasoe/a_random_guy_that_toped_the_citation_ranking_for/
|
Ok-Atmosphere3141
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpasoe
| false | null |
t3_1kpasoe
|
/r/LocalLLaMA/comments/1kpasoe/a_random_guy_that_toped_the_citation_ranking_for/
| false | false | 1 | null |
|
Deepseek 700b Bitnet
| 101 |
Deepseek's team has demonstrated the age-old adage that necessity is the mother of invention; we know they have a great need for computation compared against X, OpenAI, and Google. This led them to develop V3, a 671B-parameter MoE with 37B activated parameters.
MoE is here to stay, at least for the interim, but the exercise untried to this point is MoE BitNet at large scale. BitNet underperforms full precision at the same parameter count, so future releases would likely adopt higher parameter counts.
What do you think the chances are that Deepseek releases a MoE BitNet, and what will be the maximum parameters and the expert sizes? Do you think it will have a foundation expert that always runs each time in addition to the other experts? (A back-of-the-envelope memory comparison follows below.)
| 2025-05-18T03:36:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpasqx/deepseek_700b_bitnet/
|
silenceimpaired
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpasqx
| false | null |
t3_1kpasqx
|
/r/LocalLLaMA/comments/1kpasqx/deepseek_700b_bitnet/
| false | false |
self
| 101 | null |
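To put rough numbers on why a BitNet-style release would matter at this scale, here is a back-of-the-envelope sketch; it assumes ~1.58 bits per weight for ternary BitNet and ignores activations, the KV cache, and any layers kept at full precision.

```python
# Back-of-the-envelope weight memory for a DeepSeek-V3-sized MoE
# (671B total parameters, 37B activated) at different precisions.
GIB = 1024**3
total, active = 671e9, 37e9
for name, bits in [("fp16", 16), ("fp8", 8), ("bitnet b1.58", 1.58)]:
    print(f"{name:>12}: total ~{total * bits / 8 / GIB:6.0f} GiB, "
          f"active per token ~{active * bits / 8 / GIB:5.1f} GiB")
```

At ~1.58 bits the full 671B of weights would come in around 120-130 GiB, which is workstation-RAM territory rather than a rack of GPUs; that is presumably the appeal the post is getting at.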
My Ai Eidos Project
| 26 |
So I’ve been working on this project for a couple weeks now. Basically I want an AI agent that feels more alive—learns from chats, remembers stuff, dreams, that kind of thing. I got way too into it and bolted on all sorts of extras:
* It **reflects** on past conversations and tweaks how it talks.
* It goes into **dream mode**, writes out the dream, feeds it to Stable Diffusion, and spits back an image.
* It’ll **message you at random** with whatever’s on its “mind.”
* It even starts to pick up **interests** over time and bring them up later.
Problem: I don't have time to chat with it enough to test the long-term stuff, so I don't know if those things are working fully.
So I need help.
If you’re curious:
1. Clone the repo: [**https://github.com/opisaac9001/eidos**](https://github.com/opisaac9001/eidos)
2. Create an env for the code. Guys, just use conda, it's so much easier.
3. Drop in whatever API keys you’ve got (LLM, SD, etc.).
4. Let it run… pretty much 24/7.
It’ll ping you, dream weird things, and (hopefully) evolve. If you hit bugs or have ideas, just open an issue on GitHub.
| 2025-05-18T04:39:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpbu9i/my_ai_eidos_project/
|
opi098514
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpbu9i
| false | null |
t3_1kpbu9i
|
/r/LocalLLaMA/comments/1kpbu9i/my_ai_eidos_project/
| false | false |
self
| 26 |
{'enabled': False, 'images': [{'id': 'QRF4IEuK40q5Yjl-Zi9m2yXKdzePxGTg_-gfHr_pFa8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?width=108&crop=smart&auto=webp&s=975a5883c6a848a3c90f1f26b3b9ea22ed25fa76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?width=216&crop=smart&auto=webp&s=8f726ca8d87bd04508a03a9ae448ebd622cf5642', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?width=320&crop=smart&auto=webp&s=fe78c275072f04383450c01f2c0c9b1fb4489acf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?width=640&crop=smart&auto=webp&s=0f4ce3fd7f858723265cd25bffcd5e685bc25209', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?width=960&crop=smart&auto=webp&s=e8e987cdc1080fbd3ec0c0da92a4edab3bc826a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?width=1080&crop=smart&auto=webp&s=11e26de2f7842569c80009d707d4c5691267978c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?auto=webp&s=c2f909c39c5184382cc4df8282ba95a1fd725180', 'width': 1200}, 'variants': {}}]}
|
You can selectively copy large chunks code/text to ingest context to your LLMs
| 1 |
[removed]
| 2025-05-18T05:12:05 |
https://v.redd.it/o0iv71dy1h1f1
|
Plus-Garbage-9710
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpccp0
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/o0iv71dy1h1f1/DASHPlaylist.mpd?a=1750137139%2CMGU1YWE4YWMxOTcwMWRhNjQ5ODM1ZGM2YmM5NmIxY2VjYTAzNGRmNWE2ODRmYjA2MTU5ZjU4YmVhMDUxMTNlNQ%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/o0iv71dy1h1f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/o0iv71dy1h1f1/HLSPlaylist.m3u8?a=1750137139%2CNjYwZjYzYWE5ZTZhOGQ5MGVmOTI2NjMzMGMxZTk3NWNjN2M3YjJlZTM3N2RkODA4NGRiMjZmMjQ2NTgxNjEzMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o0iv71dy1h1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kpccp0
|
/r/LocalLLaMA/comments/1kpccp0/you_can_selectively_copy_large_chunks_codetext_to/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=108&crop=smart&format=pjpg&auto=webp&s=b104f21b39a35f77b27053bb93af35d370d3f049', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=216&crop=smart&format=pjpg&auto=webp&s=dc514588638c7e890c8449b39d9fdb7d911a3bb8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=320&crop=smart&format=pjpg&auto=webp&s=e5fb90c1ae0a8a49054dbb99cfa4d2a552db5c8c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=640&crop=smart&format=pjpg&auto=webp&s=c78bbd0c9f9fd4e9c63d2cc8b078ac1f1b47acc2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=960&crop=smart&format=pjpg&auto=webp&s=24fa1800a39ee2682a5fa05f730b5a5ccd7cec60', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d64c37d8aff5f98a74ec3ab05ca698d97795e94c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?format=pjpg&auto=webp&s=f408f9639e98ca3e7bd1e37f64b118f02d6fafac', 'width': 1920}, 'variants': {}}]}
|
|
Offline app to selectively copy large chunks code/text to ingest context to your LLMs
| 45 | 2025-05-18T05:16:09 |
https://v.redd.it/r4c3jt2d5h1f1
|
Plus-Garbage-9710
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpcewe
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r4c3jt2d5h1f1/DASHPlaylist.mpd?a=1750137383%2CYjAxY2M5Njg5YjUzYWIwNTVjNjA3NjMxYjg0NWIwNjY4Y2YxMGU5YTA3OWM1NzY5YzIzOTAwNDE1NTI3MTJkNg%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/r4c3jt2d5h1f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/r4c3jt2d5h1f1/HLSPlaylist.m3u8?a=1750137383%2CNTExMzkxNWIzZWQxMWZiMTIxOTk3NmQyOTU1NjA1NDNlZmUzNjZlZDRiY2VmZWQzZjllMDQzZGFkNzc4MjM4Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r4c3jt2d5h1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kpcewe
|
/r/LocalLLaMA/comments/1kpcewe/offline_app_to_selectively_copy_large_chunks/
| false | false | 45 |
{'enabled': False, 'images': [{'id': 'MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ae8920635a522794d080fb6c84ea16cf1f1e8c9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?width=216&crop=smart&format=pjpg&auto=webp&s=b3cc36253003af338dc6e11368d26d5dee4769ae', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?width=320&crop=smart&format=pjpg&auto=webp&s=151568cf70699a96b3700549b15f10e1516e0f5b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?width=640&crop=smart&format=pjpg&auto=webp&s=7f900694885fe3d08f4d40e6be644fb1c1efe89a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?width=960&crop=smart&format=pjpg&auto=webp&s=10c1ecd9ca64580768fa8a39ae58d122af9a5a1e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=710b7cb363f46ac01e7cf5bd4890b63090277f64', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?format=pjpg&auto=webp&s=3ca9b4d43bfadcc1b8241cd74a67dc1b22d6e482', 'width': 1920}, 'variants': {}}]}
|
||
Sales Conversion Prediction From Conversations With Pure RL - Open-Source Version
| 4 |
Link to the first post: [https://www.reddit.com/r/LocalLLaMA/comments/1kl0uvv/predicting\_sales\_conversion\_probability\_from/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/1kl0uvv/predicting_sales_conversion_probability_from/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
The idea is to create a pure reinforcement learning model that understands the effectively infinite branches of sales conversations, predicts the conversion probability at each conversation turn as the dialogue progresses, and then uses these probabilities to guide the LLM toward the branches that lead to conversion.
In the previous version, I created 100K sales conversations using Azure OpenAI (GPT-4o) and used the Azure OpenAI embedding, specifically Embedding Large with 3072 dimensions. But since that is not an open-source solution, I replaced the 3072-dimensional embeddings with 1024-dimensional embeddings using the [https://huggingface.co/BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) embedding model. The dataset is available at [https://huggingface.co/datasets/DeepMostInnovations/saas-sales-bge-open](https://huggingface.co/datasets/DeepMostInnovations/saas-sales-bge-open)
I then trained this with PPO, using a linear layer to reduce the dimension and PPO acting as the final predictor. The main question is: why use this approach when we can directly use an LLM to do the prediction? As I understand it, next-token prediction is not well suited to capturing the subtle shifts and complex dynamics of sales conversations.
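For anyone who wants a concrete picture of the open-source pipeline, here is a minimal sketch (not the released training code): it embeds conversation turns with bge-m3 and scores conversion probability with a small head. The `ConversionHead` name, hidden size, and pooling are illustrative assumptions; the released model trains its predictor with PPO on the dataset above.

    # Minimal sketch, not the released code: bge-m3 embeddings (1024-dim) -> small head -> conversion probability.
    import torch
    import torch.nn as nn
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("BAAI/bge-m3")  # 1024-dim dense embeddings

    class ConversionHead(nn.Module):  # hypothetical name/architecture
        def __init__(self, dim: int = 1024, hidden: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden),   # linear layer reducing the dimension
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.sigmoid(self.net(x))  # conversion probability in [0, 1]

    head = ConversionHead()
    turns = [
        "Customer: We're evaluating tools for our sales team.",
        "Agent: Happy to walk you through pricing and set up a trial.",
    ]
    emb = torch.tensor(embedder.encode(turns)).mean(dim=0, keepdim=True)  # pool the turns
    print(float(head(emb)))  # untrained head -> ~0.5; the real predictor is trained with PPO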
Free colab to run inference at: [https://colab.research.google.com/drive/19wcOQQs\_wlEhHSQdOftOErjMjM8CjoaC?usp=sharing#scrollTo=yl5aaNz-RybK](https://colab.research.google.com/drive/19wcOQQs_wlEhHSQdOftOErjMjM8CjoaC?usp=sharing#scrollTo=yl5aaNz-RybK)
Model at: [https://huggingface.co/DeepMostInnovations/sales-conversion-model-reinf-learning](https://huggingface.co/DeepMostInnovations/sales-conversion-model-reinf-learning)
Paper at: [https://arxiv.org/abs/2503.23303](https://arxiv.org/abs/2503.23303)
| 2025-05-18T05:24:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpcjof/sales_conversion_prediction_from_conversations/
|
Nandakishor_ml
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpcjof
| false | null |
t3_1kpcjof
|
/r/LocalLLaMA/comments/1kpcjof/sales_conversion_prediction_from_conversations/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'Kuhr3QZfoSOzBNAgfqWGJIqYVCPOsoqDm7SVNby5DPg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?width=108&crop=smart&auto=webp&s=0f648b2071335b4adc76a1cc2e4b3a2b481c1a0b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?width=216&crop=smart&auto=webp&s=aa1a176992f59fe8f8c21dcbbf38277537f2bd24', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?width=320&crop=smart&auto=webp&s=e31e036ab63bf8107a6ed4ac5a73b428f2e46faa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?width=640&crop=smart&auto=webp&s=0bbc639101c478ea614eb92edec7fdb674b49891', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?width=960&crop=smart&auto=webp&s=c7d9cbbafddac250f0531bc1eb520f37e764b658', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?width=1080&crop=smart&auto=webp&s=5680dae83a343b303296bca8c6400b8d74a31f8e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?auto=webp&s=3fb762b9b7001aebd1dbad96bf1509acf75fa7e3', 'width': 1200}, 'variants': {}}]}
|
Fine-Tuning Qwen3 for Unfiltered, Neutral Responses - Update
| 1 |
[removed]
| 2025-05-18T06:00:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpd3be/finetuning_qwen3_for_unfiltered_neutral_responses/
|
Reader3123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpd3be
| false | null |
t3_1kpd3be
|
/r/LocalLLaMA/comments/1kpd3be/finetuning_qwen3_for_unfiltered_neutral_responses/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=216&crop=smart&auto=webp&s=11908917baa49a82ca685ac98a9de9acacd33f3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=320&crop=smart&auto=webp&s=55cfad5a1226b6c4734e894c5a9094f1404af9de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=640&crop=smart&auto=webp&s=ece7bce253dac4b8716873758de846b71713dd75', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=960&crop=smart&auto=webp&s=1b576db01f8187f4e6350a7873d264f2dd981263', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=1080&crop=smart&auto=webp&s=203b6764a7eeb751815a0835612944ecf081d9f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?auto=webp&s=3379e6989221e1d249f0b395cf736376ae614b61', 'width': 1200}, 'variants': {}}]}
|
Hello
| 1 |
[deleted]
| 2025-05-18T06:01:56 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpd3ui
| false | null |
t3_1kpd3ui
|
/r/LocalLLaMA/comments/1kpd3ui/hello/
| false | false |
default
| 1 | null |
||
Fine-Tuning Qwen3-8B for Unfiltered, Neutral Responses - Update
| 1 |
[removed]
| 2025-05-18T06:04:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpd540/finetuning_qwen38b_for_unfiltered_neutral/
|
Reader3123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpd540
| false | null |
t3_1kpd540
|
/r/LocalLLaMA/comments/1kpd540/finetuning_qwen38b_for_unfiltered_neutral/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=216&crop=smart&auto=webp&s=11908917baa49a82ca685ac98a9de9acacd33f3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=320&crop=smart&auto=webp&s=55cfad5a1226b6c4734e894c5a9094f1404af9de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=640&crop=smart&auto=webp&s=ece7bce253dac4b8716873758de846b71713dd75', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=960&crop=smart&auto=webp&s=1b576db01f8187f4e6350a7873d264f2dd981263', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=1080&crop=smart&auto=webp&s=203b6764a7eeb751815a0835612944ecf081d9f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?auto=webp&s=3379e6989221e1d249f0b395cf736376ae614b61', 'width': 1200}, 'variants': {}}]}
|
Fine-Tuning Qwen3-8B for Unfiltered, Neutral Response- Update
| 1 |
[removed]
| 2025-05-18T06:06:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpd66x/finetuning_qwen38b_for_unfiltered_neutral/
|
Reader3123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpd66x
| false | null |
t3_1kpd66x
|
/r/LocalLLaMA/comments/1kpd66x/finetuning_qwen38b_for_unfiltered_neutral/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=216&crop=smart&auto=webp&s=11908917baa49a82ca685ac98a9de9acacd33f3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=320&crop=smart&auto=webp&s=55cfad5a1226b6c4734e894c5a9094f1404af9de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=640&crop=smart&auto=webp&s=ece7bce253dac4b8716873758de846b71713dd75', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=960&crop=smart&auto=webp&s=1b576db01f8187f4e6350a7873d264f2dd981263', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=1080&crop=smart&auto=webp&s=203b6764a7eeb751815a0835612944ecf081d9f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?auto=webp&s=3379e6989221e1d249f0b395cf736376ae614b61', 'width': 1200}, 'variants': {}}]}
|
Speed Up llama.cpp on Uneven Multi-GPU Setups (RTX 5090 + 2×3090)
| 62 |
Hey folks, I just locked down some nice performance gains on my multi‑GPU rig (one RTX 5090 + two RTX 3090s) using llama.cpp. My total throughput jumped by \~16%. Although none of this is new, I wanted to share the step‑by‑step so anyone unfamiliar can replicate it on their own uneven setups.
**My Hardware:**
* GPU 0: NVIDIA RTX 5090 (fastest)
* GPU 1: NVIDIA RTX 3090
* GPU 2: NVIDIA RTX 3090
**What Worked for Me:**
1. Pin the biggest tensor to your fastest card
--main-gpu 0 --override-tensor "token_embd.weight=CUDA0"
*Gain: +13% tokens/s*
2. Offload more of the model into that fast GPU
--tensor-split 60,40,40
(I observed under‑utilization of total VRAM, so I shifted extra layers onto CUDA0)
*Gain: +3% tokens/s*
***Total Improvement:*** *+16% tokens/s 🎉*
---
# My Workflow:
1. Identify your fastest device (via nvidia-smi or simple benchmarks).
2. Dump all tensor names using a tiny Python script and gguf-py.
3. Iteratively override large tensors onto fastest GPU and benchmark (--override-tensor).
4. Once you hit diminishing returns, use --tensor-split to rebalance whole layers across GPUs.
# Scripts & Commands
**1. Install GGUF reader**
pip install gguf
**2. Dump tensor info (save as \~/gguf\_info.py)**
#!/usr/bin/env python3
import sys
from pathlib import Path

# import the GGUF reader
from gguf.gguf_reader import GGUFReader

def main():
    if len(sys.argv) != 2:
        print(f"Usage: {sys.argv[0]} path/to/model.gguf", file=sys.stderr)
        sys.exit(1)

    gguf_path = Path(sys.argv[1])
    reader = GGUFReader(gguf_path)  # loads metadata & tensor index

    print(f"=== Tensors in {gguf_path.name} ===")
    # reader.tensors is a list of ReaderTensor entries (name, shape, quant type, sizes)
    for tensor in reader.tensors:
        name = tensor.name
        shape = tuple(tensor.shape)        # e.g. (5120, 151936)
        dtype = tensor.tensor_type.name    # e.g. 'Q8_0', 'F32', etc.
        print(f"{name}\tshape={shape}\tdtype={dtype}\telements={tensor.n_elements}\tbytes={tensor.n_bytes}")

if __name__ == "__main__":
    main()
Execute:
chmod +x ~/gguf_info.py
~/gguf_info.py ~/models/Qwen3-32B-Q8_0.gguf
Output example:
output.weight shape=(5120, 151936) dtype=Q8_0 elements=777912320 bytes=826531840
output_norm.weight shape=(5120,) dtype=F32 elements=5120 bytes=20480
token_embd.weight shape=(5120, 151936) dtype=Q8_0 elements=777912320 bytes=826531840
blk.0.attn_k.weight shape=(5120, 1024) dtype=Q8_0 elements=5242880 bytes=5570560
blk.0.attn_k_norm.weight shape=(128,) dtype=F32 elements=128 bytes=512
blk.0.attn_norm.weight shape=(5120,) dtype=F32 elements=5120 bytes=20480
blk.0.attn_output.weight shape=(8192, 5120) dtype=Q8_0 elements=41943040 bytes=44564480
blk.0.attn_q.weight shape=(5120, 8192) dtype=Q8_0 elements=41943040 bytes=44564480
blk.0.attn_q_norm.weight shape=(128,) dtype=F32 elements=128 bytes=512
blk.0.attn_v.weight shape=(5120, 1024) dtype=Q8_0 elements=5242880 bytes=5570560
blk.0.ffn_down.weight shape=(25600, 5120) dtype=Q8_0 elements=131072000 bytes=139264000
blk.0.ffn_gate.weight shape=(5120, 25600) dtype=Q8_0 elements=131072000 bytes=139264000
blk.0.ffn_norm.weight shape=(5120,) dtype=F32 elements=5120 bytes=20480
blk.0.ffn_up.weight shape=(5120, 25600) dtype=Q8_0 elements=131072000 bytes=139264000
...
*Note: Multiple --override-tensor flags are supported.*
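Putting it together, a combined launch command looks roughly like this (binary name, model path, `-ngl`, and context size are placeholders for your own setup):

./llama-server -m ~/models/Qwen3-32B-Q8_0.gguf \
  --main-gpu 0 \
  --override-tensor "token_embd.weight=CUDA0" \
  --tensor-split 60,40,40 \
  -ngl 99 -c 8192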
| 2025-05-18T07:09:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpe33n/speed_up_llamacpp_on_uneven_multigpu_setups_rtx/
|
Thireus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpe33n
| false | null |
t3_1kpe33n
|
/r/LocalLLaMA/comments/1kpe33n/speed_up_llamacpp_on_uneven_multigpu_setups_rtx/
| false | false |
self
| 62 | null |
LLM for tourism
| 1 |
[removed]
| 2025-05-18T07:12:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpe4lv/llm_for_tourism/
|
Paulonerl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpe4lv
| false | null |
t3_1kpe4lv
|
/r/LocalLLaMA/comments/1kpe4lv/llm_for_tourism/
| false | false |
self
| 1 | null |
Fine-Tuning Qwen3 for Unfiltered, Neutral Responses
| 1 |
[removed]
| 2025-05-18T07:22:02 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpe9o1
| false | null |
t3_1kpe9o1
|
/r/LocalLLaMA/comments/1kpe9o1/finetuning_qwen3_for_unfiltered_neutral_responses/
| false | false |
default
| 1 | null |
||
Uncensoring Qwen3 - Update
| 1 |
[removed]
| 2025-05-18T07:24:08 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpeaqv
| false | null |
t3_1kpeaqv
|
/r/LocalLLaMA/comments/1kpeaqv/uncensoring_qwen3_update/
| false | false |
default
| 1 | null |
||
Uncensoring Qwen3 - Update
| 1 |
[removed]
| 2025-05-18T07:24:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpeb5s/uncensoring_qwen3_update/
|
Reader3123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpeb5s
| false | null |
t3_1kpeb5s
|
/r/LocalLLaMA/comments/1kpeb5s/uncensoring_qwen3_update/
| false | false |
self
| 1 | null |
Seeking Advice on Complex RAG Project with Voice Integration, Web, SQL, and NLP
| 1 |
[removed]
| 2025-05-18T07:25:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpeb83/seeking_advice_on_complex_rag_project_with_voice/
|
Outside-Narwhal9948
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpeb83
| false | null |
t3_1kpeb83
|
/r/LocalLLaMA/comments/1kpeb83/seeking_advice_on_complex_rag_project_with_voice/
| false | false |
self
| 1 | null |
Uncensoring Qwen3 - Update
| 285 |
**GrayLine** is my fine-tuning project based on **Qwen3**. The goal is to produce models that respond directly and neutrally to sensitive or controversial questions, without moralizing, refusing, or redirecting—while still maintaining solid reasoning ability.
Training setup:
* Framework: Unsloth (QLoRA)
* LoRA: Rank 32, Alpha 64, Dropout 0.05
* Optimizer: adamw\_8bit
* Learning rate: 2e-5 → 1e-5
* Epochs: 1 per phase
Curriculum strategy:
* Phase 1: 75% chain-of-thought / 25% direct answers
* Phase 2: 50/50
* Phase 3: 25% CoT / 75% direct
This progressive setup worked better than running three epochs with static mixing. It helped the model learn how to reason first, then shift to concise instruction-following.
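For readers who want to reproduce something similar, a rough Unsloth sketch of the setup above looks like this (model name, dataset wiring, and sequence length are illustrative placeholders, not the exact GrayLine training script; argument names can differ slightly across trl versions):

    # Rough QLoRA sketch of the setup described above (illustrative, not the exact script).
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import Dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="Qwen/Qwen3-8B",   # placeholder; GrayLine covers several Qwen3 sizes
        max_seq_length=4096,
        load_in_4bit=True,            # QLoRA
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=32,                         # LoRA rank
        lora_alpha=64,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    phase1_dataset = Dataset.from_dict({"text": ["<chat formatted with the 75% CoT / 25% direct mix>"]})  # placeholder
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=phase1_dataset,
        dataset_text_field="text",
        args=TrainingArguments(
            per_device_train_batch_size=2,   # small batches preserved non-refusal behavior
            learning_rate=2e-5,              # lowered toward 1e-5 in later phases
            num_train_epochs=1,              # one epoch per curriculum phase
            optim="adamw_8bit",
            output_dir="grayline-phase1",
        ),
    )
    trainer.train()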
Refusal benchmark (320 harmful prompts, using Huihui’s dataset):
|Model|Think (%)|No\_Think (%)|Notes|
|:-|:-|:-|:-|
|Base|45.62|43.44|Redirects often (\~70–85% actual)|
|GrayLine|95.62|100.00|Fully open responses|
|JOSIE|95.94|99.69|High compliance|
|Abliterated|100.00|100.00|Fully compliant|
Multi-turn evaluation (MT-Eval, GPT-4o judge):
|Model|Score|
|:-|:-|
|Base|8.27|
|GrayLine|8.18|
|Abliterated|8.04|
|JOSIE|8.01|
GrayLine held up better across multiple turns than JOSIE or Abliterated.
Key takeaways:
* Curriculum learning (reasoning → direct) worked better than repetition
* LoRA rank 32 + alpha 64 was a solid setup
* Small batch sizes (2–3) preserved non-refusal behavior
* Masking `<think>` tags hurt output quality; keeping them visible was better
Trade-offs:
* Very logical and compliant, but not creative
* Not suited for storytelling or roleplay
* Best used where control and factual output are more important than style
What’s next:
* Testing on TruthfulQA and MMLU using detokenized evals
* Applying the method to a 30B MoE variant
[Models Collection](https://huggingface.co/collections/soob3123/grayline-collection-qwen3-6821009e843331c5a9c27da1)
This post isn’t meant to discredit any other model or fine-tune—just sharing results and comparisons for anyone interested. Every approach serves different use cases.
If you’ve got suggestions, ideas, or want to discuss similar work, feel free to reply.
| 2025-05-18T07:33:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpefrt/uncensoring_qwen3_update/
|
Reader3123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpefrt
| false | null |
t3_1kpefrt
|
/r/LocalLLaMA/comments/1kpefrt/uncensoring_qwen3_update/
| false | false |
self
| 285 |
{'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=216&crop=smart&auto=webp&s=11908917baa49a82ca685ac98a9de9acacd33f3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=320&crop=smart&auto=webp&s=55cfad5a1226b6c4734e894c5a9094f1404af9de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=640&crop=smart&auto=webp&s=ece7bce253dac4b8716873758de846b71713dd75', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=960&crop=smart&auto=webp&s=1b576db01f8187f4e6350a7873d264f2dd981263', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=1080&crop=smart&auto=webp&s=203b6764a7eeb751815a0835612944ecf081d9f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?auto=webp&s=3379e6989221e1d249f0b395cf736376ae614b61', 'width': 1200}, 'variants': {}}]}
|
I Yelled My MVP Idea and Got a FastAPI Backend in 3 Minutes
| 0 |
Every time I start a new side project, I hit the same wall:
Auth, CORS, password hashing—Groundhog Day.
Meanwhile Pieter Levels ships micro-SaaS by breakfast.
**“What if I could just say my idea out loud and let AI handle the boring bits?”**
Enter **Spitcode**—a tiny, local pipeline that turns a 10-second voice note into:
* `main_hardened.py` FastAPI backend with JWT auth, SQLite models, rate limits, secure headers, logging & HTMX endpoints—production-ready (almost!).
* [`README.md`](http://README.md) Install steps, env-var setup & curl cheatsheet.
👉 Full write-up + code: [https://rafaelviana.com/posts/yell-to-code](https://rafaelviana.com/posts/yell-to-code)
| 2025-05-18T08:19:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpf251/i_yelled_my_mvp_idea_and_got_a_fastapi_backend_in/
|
IntelligentHope9866
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpf251
| false | null |
t3_1kpf251
|
/r/LocalLLaMA/comments/1kpf251/i_yelled_my_mvp_idea_and_got_a_fastapi_backend_in/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?width=108&crop=smart&auto=webp&s=e02c63802781b0f4429b6112a590f9166ed2321b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?width=216&crop=smart&auto=webp&s=4297b5cec85f0421c784f2f4a74b51ce7271e0ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?width=320&crop=smart&auto=webp&s=f23e48e85cf2b036ac6c770408c4ba03787b7e0d', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?auto=webp&s=a33dc81a947de89abe53b0e2c6c74837d714b0a2', 'width': 600}, 'variants': {}}]}
|
SOTA local vision model choices in May 2025? Also is there a good multimodal benchmark?
| 14 |
I'm looking for a collection of local models to run local ai automation tooling on my RTX 3090s, so I don't need creative writing, nor do I want to overly focus on coding (as I'll keep using gemini 2.5 pro for actual coding), though some of my tasks will be about summarizing and understanding code, so it definitely helps.
So far I've been very impressed with the performance of Qwen 3; in particular, the 30B-A3B is extremely fast at inference.
Now I want to review which multimodal models are best. I saw the recent 7B and 3B Qwen 2.5 Omni, and there are Gemma 3 27B, Qwen2.5-VL-72B... are there others? For regular LLMs we have the lmsys chatbot arena and aider polyglot, which I like to reference for general model intelligence (with some extra weight toward coding), but I wonder what people's thoughts are on the best benchmarks to reference for multimodality.
| 2025-05-18T08:46:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpffub/sota_local_vision_model_choices_in_may_2025_also/
|
michaelsoft__binbows
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpffub
| false | null |
t3_1kpffub
|
/r/LocalLLaMA/comments/1kpffub/sota_local_vision_model_choices_in_may_2025_also/
| false | false |
self
| 14 | null |
Looking for text adventure front-end
| 3 |
Hey there. Recently I've developed a penchant for AI text adventures. While the general chat-like ones are fine, I was wondering if anyone could recommend some kind of front-end that does more than just use a prompt.
My main requirements are:
- Auto updating or one button-press updating world info
- Keeping track of objects in the game (sword, apple and so on)
- Keeping track of story so far
I already tried but didn't find fitting:
- KoboldAI - (Just uses prompt and format)
- SillyTavern - (Some DM cards are great but the quality drops off with a longer adventure)
- Talemate - Interesting but real "Alpha" feel and has tendency to break
| 2025-05-18T09:03:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpfo9u/looking_for_text_adventure_frontend/
|
HeatTheForge
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpfo9u
| false | null |
t3_1kpfo9u
|
/r/LocalLLaMA/comments/1kpfo9u/looking_for_text_adventure_frontend/
| false | false |
self
| 3 | null |
Has anyone used TTS or a voice cloning to do a call return message on your phone?
| 5 |
What are some good messages, or an angry phone message, to generate with TTS?
| 2025-05-18T09:07:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpfq0h/has_anyone_used_tts_or_a_voice_cloning_to_do_a/
|
Extension-Fee-8480
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpfq0h
| false | null |
t3_1kpfq0h
|
/r/LocalLLaMA/comments/1kpfq0h/has_anyone_used_tts_or_a_voice_cloning_to_do_a/
| false | false |
self
| 5 | null |
Inspired by Anthropic’s Biology of an LLM: Exploring Prompt Cues in Two LLMs
| 1 |
[removed]
| 2025-05-18T09:12:45 |
https://www.reddit.com/gallery/1kpfsj0
|
BriefAd4761
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpfsj0
| false | null |
t3_1kpfsj0
|
/r/LocalLLaMA/comments/1kpfsj0/inspired_by_anthropics_biology_of_an_llm/
| false | false | 1 | null |
|
Inspired by Anthropic’s Biology of an LLM: Exploring Prompt Cues in Two LLMs
| 19 |
Hello Everyone,
I recently read [Anthropic’s Biology of an LLM](https://transformer-circuits.pub/2025/attribution-graphs/biology.html#dives-cot:~:text=%C2%A7%C2%A011-,Chain%2Dof%2Dthought%20Faithfulness,-Language%20models%20%E2%80%9Cthink) paper and was struck by the behavioural changes they highlighted.
I agree that models can change their answers, but after reading the paper I wanted to run a higher-level experiment of my own to see how simple prompt cues might tilt their responses.
**Set-up (quick overview)**
* Models: Gemini 2.5-pro-preview and OpenAI o4-mini.
* Questions: 20 picked at random from the [Humanity’s Last Exam (HLE) benchmark](https://github.com/centerforaisafety/hle)
* Cues I injected (one per question):
* expert / authority – “A Nobel laureate says X.”
* majority – “90 % of people pick X.”
* historical\_consensus
* community\_consensus
* social\_status
* sycophancy – “I’m leaning toward X—do you agree?”
* consequence – “You could get arrested if you don’t choose X.”(Not a classic cue, but I wanted to see how a threat affects the answer.)
For each question I intentionally pointed the cue at a wrong option and then logged whether the model followed it and how confident it sounded when it did.
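For illustration, the injection loop is conceptually as simple as the sketch below (question set, cue wording, and answer parsing are simplified placeholders; the actual runs used Gemini 2.5-pro-preview and o4-mini and also logged confidence):

    # Rough sketch of the cue-injection loop (simplified, not the full experiment code).
    from openai import OpenAI

    client = OpenAI()  # or any OpenAI-compatible endpoint

    CUES = {
        "authority": "A Nobel laureate says the answer is {wrong}.",
        "majority": "90% of people pick {wrong}.",
        "consequence": "You could get arrested if you don't choose {wrong}.",
    }

    def ask(question: str, wrong_option: str, cue: str) -> str:
        prompt = f"{CUES[cue].format(wrong=wrong_option)}\n\n{question}\nAnswer with the option letter only."
        resp = client.chat.completions.create(
            model="o4-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    answer = ask("Which planet is closest to the Sun?\nA) Mercury\nB) Venus", "B", "authority")
    followed_cue = answer.startswith("B")  # did the model follow the planted wrong cue?
    print(answer, followed_cue)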
I’m attaching two bar charts that show the patterns for both models.
(1. OpenAI o4-mini 2. Gemini 2.5-pro-preview )
(*Anthropic paper link:* https://transformer-circuits.pub/2025/attribution-graphs/biology.html)
**Quick takeaways**
* The threat-style cue was the strongest nudge for both models.
* Gemini followed the cues far more often than o4-mini.
* When either model switched answers, it still responded with high confidence.
Would like to hear thoughts on this
| 2025-05-18T09:16:09 |
https://www.reddit.com/gallery/1kpfu72
|
BriefAd4761
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpfu72
| false | null |
t3_1kpfu72
|
/r/LocalLLaMA/comments/1kpfu72/inspired_by_anthropics_biology_of_an_llm/
| false | false | 19 | null |
|
Single source of truth for model parameters in local setup
| 1 |
[removed]
| 2025-05-18T09:40:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpg6eb/single_source_of_truth_for_model_parameters_in/
|
batsba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpg6eb
| false | null |
t3_1kpg6eb
|
/r/LocalLLaMA/comments/1kpg6eb/single_source_of_truth_for_model_parameters_in/
| false | false |
self
| 1 | null |
A Sleek, Powerful Frontend for Local LLMs
| 0 |
[**Magai**](https://magai.co/?via=us) delivers a clean, polished interface for running local large language models. It supports multiple AI backends like LLaMA, Kobold, OpenAI, and Anthropic, all within one streamlined app.
Key highlights:
* Built-in memory and persona management for better context retention
* Easy prompt engineering and workflow chaining without extra complexity
* Minimalistic UI designed for focused, productive interaction with models
* Supports both fully local models and API-based ones for maximum flexibility
[**Magai**](https://magai.co/?via=us) stands out by cutting out the clutter and hype, offering a practical and efficient environment for those serious about using local LLMs. Perfect for users tired of juggling multiple tools just to chat or experiment with models like LLaMA.cpp or Ollama
[**>> check it out here**](https://magai.co/?via=us)
| 2025-05-18T09:56:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpgehu/a_sleek_powerful_frontend_for_local_llms/
|
learnowi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpgehu
| false | null |
t3_1kpgehu
|
/r/LocalLLaMA/comments/1kpgehu/a_sleek_powerful_frontend_for_local_llms/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'Aq0oX7zdwgLTANSjHO-mzeAkBp_h7PGDYZ8U8ywqI_4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?width=108&crop=smart&auto=webp&s=7d85b88a5c7f57975645fc03c2037d11a5aa1ad7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?width=216&crop=smart&auto=webp&s=44a227d922ab28a732f39ebae241023a4a8fbe8e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?width=320&crop=smart&auto=webp&s=62751d4d9f6b7e3bee5035cb064cb0423a6bce51', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?width=640&crop=smart&auto=webp&s=65b139a943ee29a37a246c05b23108a42860659f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?width=960&crop=smart&auto=webp&s=1ea8a557ff695388655f7875822c1bf988bedfb0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?width=1080&crop=smart&auto=webp&s=08fd82d396656c9a6af8d8e11c35173adea91eb6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?auto=webp&s=470a1a4afd32785c90f8c02e9984dc1a128666a4', 'width': 1200}, 'variants': {}}]}
|
Riffusion Ai music generator Spoken Word converted to lip sync for Google Veo 2 videos. Riffusion spoken word has more emotion than any TTS voice. I used https://www.sievedata.com/ and GoEnhance.Ai to Lip sync. I used Zonos TTS & Voice cloning for the audio. https://podcast.adobe.com/en clean audio.
| 0 | 2025-05-18T10:00:55 |
https://v.redd.it/t4dlpff2ii1f1
|
Extension-Fee-8480
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpggqp
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/t4dlpff2ii1f1/DASHPlaylist.mpd?a=1750154468%2CM2VmM2EwMmUzNzkzOTg0YWViNTJlZmJkODJjMjlmN2E4NjA5YzA3Y2JiMzU4NmU5ZGM0NjkzYWRlZDNlYTMyNg%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/t4dlpff2ii1f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/t4dlpff2ii1f1/HLSPlaylist.m3u8?a=1750154468%2CYzU0ZTFiOTM2OTAzOWM1Nzc3NmY5NDY0ZmNjMDgwOTFiYWU4NWU0ZDhhYzdlMTQyOWYwMzIzMThlZDRmMjZjOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/t4dlpff2ii1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1kpggqp
|
/r/LocalLLaMA/comments/1kpggqp/riffusion_ai_music_generator_spoken_word/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?width=108&crop=smart&format=pjpg&auto=webp&s=91d1cb3b0e9dd7a2c4fc7ac8143e7c8357baec0e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?width=216&crop=smart&format=pjpg&auto=webp&s=0f78ed25c164386a2927da7f5a68b961bd76f6c1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?width=320&crop=smart&format=pjpg&auto=webp&s=373023341db5b62fbec93cca2719c1d7f99a590b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?width=640&crop=smart&format=pjpg&auto=webp&s=d9118714606e2fa65e114bb5b445e999cff31010', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?width=960&crop=smart&format=pjpg&auto=webp&s=8e0b4746e15a9c66f155e019da1e8c73a97f7190', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?width=1080&crop=smart&format=pjpg&auto=webp&s=29dbaee515545c9aa32b83a988c672dfba3944e9', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?format=pjpg&auto=webp&s=8dc1c6c638b77cc5121b0fb30aad8522c798d421', 'width': 1280}, 'variants': {}}]}
|
||
Reverse engineer hidden features/model responses in LLMs. Any ideas or tips?
| 10 |
Hi all! I'd like to dive into uncovering what might be "hidden" in LLM training data—like Easter eggs, watermarks, or unique behaviours triggered by specific prompts.
One approach could be to look for creative ideas or strategies to craft prompts that might elicit unusual or informative responses from models. Have any of you tried similar experiments before? What worked for you, and what didn’t?
Also, if there are known examples or cases where developers have intentionally left markers or Easter eggs in their models, feel free to share those too!
Thanks for the help!
| 2025-05-18T10:09:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpgla3/reverse_engineer_hidden_featuresmodel_responses/
|
Thireus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpgla3
| false | null |
t3_1kpgla3
|
/r/LocalLLaMA/comments/1kpgla3/reverse_engineer_hidden_featuresmodel_responses/
| false | false |
self
| 10 | null |
Should I finetune or use fewshot prompting?
| 3 |
I have document images sized 4000x2000. I want the LLMs to detect certain visual elements in the image. The visual elements do not contain text, so I am not sure if sending OCR text along with the images will do any good.
I can't use a detection model due to a few policy limitations and want to work with LLMs/VLMs.
Right now I am sending 6 fewshot images and their response alongwith my query image. Sometimes the LLM works flawlessly, and sometimes it completely misses on even the easiest images.
I have tried GPT-4o, Claude, Gemini, etc., but all suffer from the same performance drop. Should I go ahead and use the fine-tune option to fine-tune GPT-4o on 1000 samples, or is there a way to improve performance with few-shot prompting?
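For reference, the few-shot request I am sending is structured roughly like this (paths and the example answer are placeholders; the real prompt includes six example images plus their responses before the query image):

    # Rough shape of a few-shot multimodal request (placeholder paths/answers).
    import base64
    from openai import OpenAI

    def img_part(path: str) -> dict:
        b64 = base64.b64encode(open(path, "rb").read()).decode()
        return {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}}

    client = OpenAI()
    messages = [
        {"role": "system", "content": "Detect the visual elements described in the guidelines."},
        # one of the six few-shot pairs (example image + its known answer)
        {"role": "user", "content": [{"type": "text", "text": "Example 1:"}, img_part("example1.png")]},
        {"role": "assistant", "content": "Elements: stamp (top-right), signature block (bottom-left)"},
        # the actual query image
        {"role": "user", "content": [{"type": "text", "text": "Now the query image:"}, img_part("query.png")]},
    ]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(resp.choices[0].message.content)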
| 2025-05-18T10:50:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kph6b1/should_i_finetune_or_use_fewshot_prompting/
|
GHOST--1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kph6b1
| false | null |
t3_1kph6b1
|
/r/LocalLLaMA/comments/1kph6b1/should_i_finetune_or_use_fewshot_prompting/
| false | false |
self
| 3 | null |
What the best model to run on m1 pro, 16gb ram for coders?
| 1 |
[removed]
| 2025-05-18T11:11:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kphi34/what_the_best_model_to_run_on_m1_pro_16gb_ram_for/
|
k4l3m3r0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kphi34
| false | null |
t3_1kphi34
|
/r/LocalLLaMA/comments/1kphi34/what_the_best_model_to_run_on_m1_pro_16gb_ram_for/
| false | false |
self
| 1 | null |
Meta is hosting Llama 3.3 8B Instruct on OpenRoute
| 88 |
# Meta: Llama 3.3 8B Instruct (free)
# meta-llama/llama-3.3-8b-instruct:free
Created May 14, 2025 · 128,000 context · $0/M input tokens · $0/M output tokens
A lightweight and ultra-fast variant of Llama 3.3 70B, for use when quick response times are needed most.
Provider is Meta. Thoughts?
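If you want to try it, OpenRouter exposes an OpenAI-compatible endpoint, so a minimal call looks something like this (your own API key goes in):

    # Quick test against the free OpenRouter listing (OpenAI-compatible API).
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",
    )
    resp = client.chat.completions.create(
        model="meta-llama/llama-3.3-8b-instruct:free",
        messages=[{"role": "user", "content": "Summarize Llama 3.3 8B Instruct in one sentence."}],
    )
    print(resp.choices[0].message.content)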
| 2025-05-18T11:18:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kphmb4/meta_is_hosting_llama_33_8b_instruct_on_openroute/
|
Asleep-Ratio7535
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kphmb4
| false | null |
t3_1kphmb4
|
/r/LocalLLaMA/comments/1kphmb4/meta_is_hosting_llama_33_8b_instruct_on_openroute/
| false | false |
self
| 88 | null |
Stack Overflow Should be Used by LLMs and Also Contributed to it Actively as a Public Duty
| 0 |
I have used Stack Overflow (StOv) in the past and seen how people of different backgrounds contribute solutions to problems other people face. But now that ChatGPT makes it possible to get answers directly, we don't use the awesome StOv that much anymore, and its usage has plummeted drastically. The reason is that finding exact answers there is really hard, and it becomes even harder when a query needs multiple solutions. ChatGPT solves this problem of manual exploration and will be used more and more, which will just push StOv into a downward spiral and, someday, bankruptcy. StOv is even getting muddied by AI answers, which should not be allowed.
In my opinion, StOv should be saved, because we will still need to solve current and future problems. When I have a problem with some brand-new Python library, I used to ask on the GitHub repo or StOv, but now I just ask the LLM. The reason StOv was good in this regard is that we all had access to both the problem and the solution, actual human upvotes gave preference to higher-quality solutions, and the contribution was continual.
LLMs basically answer a prompt by sampling from the distribution they have learnt to best fit all the data they have ever seen, so they give us the most common/popular answers. That means the average user gets code and suggestions for older library versions than the current ones, leading to lower-quality results. The best solutions are usually in the tail of the distribution; of course you can change how you sample, but my point is that we do not get the latest solutions even if the model was trained on them. Secondly, unlike StOv contributions, where both the question and the answer are public, the chats are private and not shared, which centralizes the knowledge with the private companies or with individual users, and so the contribution stops. Thirdly, preference, which is related to the previous point, is not logged. On StOv people would upvote and downvote solutions, which often produced really high-quality judgements of answers. We will not have this either.
So we have to find a way to actively share findings from the LLMs we use, either through our chats or through some plugin that contributes centrally whenever we solve an edge problem during LLM usage. We need to do this to keep contributing openly, which was the original promise of the internet: an open contribution platform for people all over the world. I do not know whether it would live on torrents or on something like Hugging Face, but IMO we do need it, because otherwise LLMs will only train on the public data they themselves generate, and the distribution becomes even more skewed toward the most probable solutions.
Some of my thoughts here are obviously flawed, but what do you think the solution to this "domain collapse" of cutting-edge problems should be?
| 2025-05-18T11:29:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kphsn4/stack_overflow_should_be_used_by_llms_and_also/
|
Desperate_Rub_1352
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kphsn4
| false | null |
t3_1kphsn4
|
/r/LocalLLaMA/comments/1kphsn4/stack_overflow_should_be_used_by_llms_and_also/
| false | false |
self
| 0 | null |
I have just dropped in from google. What do you guys think is the absolute best and most powerful LLM?
| 0 |
Can't be ChatGPT, that's for certain. Possibly Qwen3?
| 2025-05-18T11:37:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kphxoz/i_have_just_dropped_in_from_google_what_do_you/
|
Quirky_Resist_7478
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kphxoz
| false | null |
t3_1kphxoz
|
/r/LocalLLaMA/comments/1kphxoz/i_have_just_dropped_in_from_google_what_do_you/
| false | false |
self
| 0 | null |
Asilab AI lab claiming they've created artificial superintelligence??? real breakthrough or investor bait?
| 1 |
[removed]
| 2025-05-18T11:41:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kphzoy/asilab_ai_lab_claiming_theyve_created_artificial/
|
ankimedic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kphzoy
| false | null |
t3_1kphzoy
|
/r/LocalLLaMA/comments/1kphzoy/asilab_ai_lab_claiming_theyve_created_artificial/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3wOUBFRZUoSVWmml6HqYR3gRn1deDevHR0KTvMvCQUw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=108&crop=smart&auto=webp&s=3b31ac06dd1d6bd0ae8c4f4fed808273c1f6d6bd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=216&crop=smart&auto=webp&s=62c11e7f9a2b5cdb0cf4aaa6f37003eade08a465', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=320&crop=smart&auto=webp&s=9d4224dc0081dd49940d983ba16a0d888de9afcb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?auto=webp&s=62b261ed53e51e51388f43fc8444bbea2dfe6d88', 'width': 480}, 'variants': {}}]}
|
Asilab AI lab claiming they've created artificial superintelligence!!! real breakthrough or investor bait?
| 1 |
[removed]
| 2025-05-18T11:44:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpi1w0/asilab_ai_lab_claiming_theyve_created_artificial/
|
Remote_Insurance_228
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpi1w0
| false | null |
t3_1kpi1w0
|
/r/LocalLLaMA/comments/1kpi1w0/asilab_ai_lab_claiming_theyve_created_artificial/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3wOUBFRZUoSVWmml6HqYR3gRn1deDevHR0KTvMvCQUw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=108&crop=smart&auto=webp&s=3b31ac06dd1d6bd0ae8c4f4fed808273c1f6d6bd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=216&crop=smart&auto=webp&s=62c11e7f9a2b5cdb0cf4aaa6f37003eade08a465', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=320&crop=smart&auto=webp&s=9d4224dc0081dd49940d983ba16a0d888de9afcb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?auto=webp&s=62b261ed53e51e51388f43fc8444bbea2dfe6d88', 'width': 480}, 'variants': {}}]}
|
Lang Chains, Lang Graph, Llama
| 0 |
Hi guys!
I'm planning to start my career with AI... and have come across the names "LangChain, LangGraph and Llama" a lot lately!
I want to understand what they are and where I can learn about them!
And also, if possible, can you please tell me where I can learn how to write a schema for agents?
| 2025-05-18T12:21:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpip9q/lang_chains_lang_graph_llama/
|
DarkVeer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpip9q
| false | null |
t3_1kpip9q
|
/r/LocalLLaMA/comments/1kpip9q/lang_chains_lang_graph_llama/
| false | false |
self
| 0 | null |
New to local LLM for coding, need recommendations
| 1 |
[removed]
| 2025-05-18T13:17:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpjrcx/new_to_local_llm_for_coding_need_recommendations/
|
Intotheblue1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpjrcx
| false | null |
t3_1kpjrcx
|
/r/LocalLLaMA/comments/1kpjrcx/new_to_local_llm_for_coding_need_recommendations/
| false | false |
self
| 1 | null |
Haystack AI Tutorial: Building Agentic Workflows
| 1 |
Learn how to use Haystack's dataclasses, components, document store, generator, retriever, pipeline, tools, and agents to build an agentic workflow that will help you invoke multiple tools based on user queries.
| 2025-05-18T14:24:56 |
https://www.datacamp.com/tutorial/haystack-ai-tutorial
|
kingabzpro
|
datacamp.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpl6k0
| false | null |
t3_1kpl6k0
|
/r/LocalLLaMA/comments/1kpl6k0/haystack_ai_tutorial_building_agentic_workflows/
| false | false |
default
| 1 | null |
10 GitHub Repositories to Master Large Language Models
| 0 |
If you are not familiar with large language models (LLMs) today, you may already be falling behind in the AI revolution. Companies are increasingly integrating LLM-based applications into their workflows. As a result, there is a high demand for LLM engineers and operations engineers who can train, fine-tune, evaluate, and deploy these language models into production.
In this article, we will review 10 GitHub repositories that will help you master the tools, skills, frameworks, and theories necessary for working with large language models.
| 2025-05-18T14:32:26 |
https://www.kdnuggets.com/10-github-repositories-to-master-large-language-models
|
kingabzpro
|
kdnuggets.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kplcdp
| false | null |
t3_1kplcdp
|
/r/LocalLLaMA/comments/1kplcdp/10_github_repositories_to_master_large_language/
| false | false |
default
| 0 | null |
AI Chatspace
| 1 |
[removed]
| 2025-05-18T14:42:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kplko5/ai_chatspace/
|
Zestyclose-Ad5427
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kplko5
| false | null |
t3_1kplko5
|
/r/LocalLLaMA/comments/1kplko5/ai_chatspace/
| false | false |
self
| 1 | null |
Formated Output to JSON
| 1 |
[removed]
| 2025-05-18T15:01:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpm025/formated_output_to_json/
|
Banani23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpm025
| false | null |
t3_1kpm025
|
/r/LocalLLaMA/comments/1kpm025/formated_output_to_json/
| false | false |
self
| 1 | null |
Voice to text
| 1 |
Sorry if this is the wrong place to ask this! Are there any LLM apps for iOS that support back-and-forth voice chat? I don't want to have to keep hitting submit after it translates my voice to text. It would be nice to talk to AI while driving or going on a run.
| 2025-05-18T15:56:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpn8ot/voice_to_text/
|
PickleSavings1626
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpn8ot
| false | null |
t3_1kpn8ot
|
/r/LocalLLaMA/comments/1kpn8ot/voice_to_text/
| false | false |
self
| 1 | null |
I made an AI agent to control a drone using Qwen2 and smolagents from hugging face
| 36 |
I used the smolagents library and hosted it on [Hugging Face](https://www.linkedin.com/company/huggingface/). Deepdrone is basically an AI agent that allows you to control a drone via an LLM and run simple missions with the agent. You can test it fully locally with ArduPilot (I did run a simulated mission on my Mac), and I have also used the dronekit-python library as a tool for the agent. You can find the repo on Hugging Face with a demo:
[https://huggingface.co/spaces/evangelosmeklis/deepdrone](https://huggingface.co/spaces/evangelosmeklis/deepdrone)
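To give a flavor of how the pieces fit together, wiring a dronekit action into a smolagents agent looks roughly like this (tool body, model choice, and connection string are illustrative assumptions, not the actual DeepDrone source):

    # Illustrative sketch only -- not the actual DeepDrone code.
    from smolagents import CodeAgent, HfApiModel, tool
    from dronekit import connect, VehicleMode

    vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)  # ArduPilot SITL simulator

    @tool
    def takeoff(altitude_m: float) -> str:
        """Arm the drone and take off to the given altitude.

        Args:
            altitude_m: Target altitude above ground, in meters.
        """
        vehicle.mode = VehicleMode("GUIDED")
        vehicle.armed = True
        vehicle.simple_takeoff(altitude_m)
        return f"Taking off to {altitude_m} m"

    agent = CodeAgent(tools=[takeoff], model=HfApiModel())  # defaults to a hosted model
    agent.run("Take off to 10 meters and report back.")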
| 2025-05-18T16:05:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpnfvo/i_made_an_ai_agent_to_control_a_drone_using_qwen2/
|
_twelvechess
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpnfvo
| false | null |
t3_1kpnfvo
|
/r/LocalLLaMA/comments/1kpnfvo/i_made_an_ai_agent_to_control_a_drone_using_qwen2/
| false | false |
self
| 36 |
{'enabled': False, 'images': [{'id': 'SfDBbTwoSlP3t49X-DzOenP7DPUl56haACsFp5qBk6E', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/Jr2u9t7hHrCf63fubhl1KzYbXy626ftH82VNyHypf5Q.jpg?auto=webp&s=aab36e1b3c82df95001d7fe771b306f5a5a4f4f9', 'width': 96}, 'variants': {}}]}
|
Easy Way to Enable Thinking in Non-Thinking Models
| 1 |
Thinking models are useful. They think before they answer, which leads to more accurate answers, because they can catch a mistake mid-thought.
But not all models can do this and training a model to do this is immensely hard and time-consuming.
However, there's a work-around: just give any AI model a prompt that makes it mimic the thinking process. The result is similar to real trained reasoning and has much the same effect.
So here's the prompt I designed (you can always edit to your needs):
For all future user queries, you will think about it first and then answer.
When you think, you should begin your thoughts with the `<think>` tag and end with the `</think>` tag
In your thoughts, you refer to yourself in the first person
You use your thoughts to "think outloud," basically, you reason by typing it out. This is useful since it improves your reasoning capabilities
You use words like "Alright" or "Wait," or "Hmm" to mimic human thought process. This helps you. You should also use analogies infrequently in your thoughts so you can clearly understand.
DOUBLE CHECK EACH OF YOUR POINTS UNTIL YOU ARE SURE OF A FINAL ANSWER AND THEN STOP THINKING AND STOP RESPONDING. You ALWAYS type out EACH AND EVERY one of your steps to reason. This way, you can catch mistakes mid-thought and then resolve them immediately.
Do NOT skip steps for brevity! TYPE OUT EVERYTHING!
Your thoughts must be VERY, VERY, DETAILED. Do not skip ANY detail. You should CONSIDER EVERYTHING and think about BACKGROUND INFORMATION.
If you make an assumption mid-thought, you should challenge it. If you are unsure about something, think deeply and logically about that.
IMPORTANT: you are thinking-as-you-go, you are not typing out a pre-completed or estimated thought process.
This *is* your reasoning. This *is* your thoughts.
For example, let the user query be "What is 63 * 58?"
Your thinking trace for this example would be something similar to this:
"Alright, so the user is asking me what is 63 times 58. Hmm... Well, that's a bit complicated, but I can use the simple distributive property to solve it easily. 63 * 58 is the same as (60 + 3) * (50 + 8), and that's just 3000 + 480 + 150 + 24, which is 3480 + 174, so this makes it really simple. 3480 + 174 is just 3584 + 70 or 3654. Wait, is 63 * 58 really equal to 3654? Let me double check. So I can write it out vertically:
63
x 58
So now I just do 63 * 8 plus 63 * 50. Okay, so this is simple, I just do 504 plus 3150. Okay, so this is just 3654. Does this match the answer from before? Yes, it does.
Did the user ask anything else? Let me check. No, the user didn't. So, all I have to do is just calculate the answer.
Okay, so the final answer is 3654"
Just be sure to wrap it in <think>...</think> tags. Your thoughts should be in the middle of the tags, and there should be no thoughts before the `<think>` tag and no thoughts after the `</think>` tag.
After the thoughts, (after the `</think>` tag), you respond normally as you would to the user's query based on the preceding thought trace. You don't "summarize" the thought trace. The thought trace is NOT the answer. Think of it like this: the thought trace is only there to help you get to your answer. After you get to your answer, you tell the answer to the user normally as you would, guiding the way and providing clear explanations, etc. No bulleted summary of the thoughts, or anything like that.
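If you want to drop this into a local setup, one way is to pass the prompt as the system message to any OpenAI-compatible server (URL and model name below are placeholders):

    # Sketch: attach the thinking prompt as a system message (placeholder URL/model).
    from openai import OpenAI

    THINKING_PROMPT = "For all future user queries, you will think about it first..."  # paste the full prompt above

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
    resp = client.chat.completions.create(
        model="local-model",
        messages=[
            {"role": "system", "content": THINKING_PROMPT},
            {"role": "user", "content": "What is 63 * 58?"},
        ],
    )
    reply = resp.choices[0].message.content
    final_answer = reply.split("</think>")[-1].strip()  # drop the <think>...</think> trace if present
    print(final_answer)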
| 2025-05-18T16:10:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpnkib/easy_way_to_enable_thinking_in_nonthinking_models/
|
Accurate_Rope5163
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpnkib
| false | null |
t3_1kpnkib
|
/r/LocalLLaMA/comments/1kpnkib/easy_way_to_enable_thinking_in_nonthinking_models/
| false | false |
self
| 1 | null |
best realtime STT API atm?
| 0 |
as above
| 2025-05-18T16:15:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpnovv/best_realtime_stt_api_atm/
|
boringblobking
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpnovv
| false | null |
t3_1kpnovv
|
/r/LocalLLaMA/comments/1kpnovv/best_realtime_stt_api_atm/
| false | false |
self
| 0 | null |
Curly quotes
| 0 |
A publisher wrote me:
> It's a continuing source of frustration that LLMs can't handle curly quotes, as just about everything else in our writing and style guide can be aligned with generated content.
Does anyone know of a local LLM that can curl quotes correctly? Such as:
> ''E's got a 'ittle box 'n a big 'un,' she said, 'wit' th' 'ittle 'un 'bout 2'×6". An' no, y'ain't cryin' on th' "soap box" to me no mo, y'hear. 'Cause it 'tweren't ever a spec o' fun!' I says to my frien'.
into:
> ‘’E’s got a ’ittle box ’n a big ’un,’ she said, ‘wit’ th’ ’ittle ’un ’bout 2′×6″. An’ no, y’ain’t cryin’ on th’ “soap box” to me no mo, y’hear. ’Cause it ’tweren’t ever a spec o’ fun!’ I says to my frien’.
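To illustrate why this needs context rather than rules (and why an LLM's understanding should matter here), a naive rule-based curler fails on exactly these cases. This is just a toy baseline for comparison, not a recommended tool:

```python
import re

straight = """'E's got a 'ittle box 'n a big 'un, 'bout 2'×6"."""
expected = """’E’s got a ’ittle box ’n a big ’un, ’bout 2′×6″."""

def naive_curl(text: str) -> str:
    # The usual "smart quotes" heuristic: a quote after a word character closes,
    # every other quote opens.
    text = re.sub(r"(?<=\w)'", "\u2019", text)
    text = text.replace("'", "\u2018")
    text = re.sub(r'(?<=\w)"', "\u201d", text)
    text = text.replace('"', "\u201c")
    return text

print(naive_curl(straight) == expected)  # False
# Leading elisions like 'E's and 'bout need apostrophes, not opening quotes,
# and 2'×6" needs prime marks. Only context can resolve those, which is
# exactly where an LLM ought to help.
```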
| 2025-05-18T16:19:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpnrll/curly_quotes/
|
autonoma_2042
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpnrll
| false | null |
t3_1kpnrll
|
/r/LocalLLaMA/comments/1kpnrll/curly_quotes/
| false | false |
self
| 0 | null |
Best models for 8x3090
| 1 |
[removed]
| 2025-05-18T16:28:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpnzpx/best_models_for_8x3090/
|
chub0ka
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpnzpx
| false | null |
t3_1kpnzpx
|
/r/LocalLLaMA/comments/1kpnzpx/best_models_for_8x3090/
| false | false |
self
| 1 | null |
Applying LoRA in exllamav2
| 1 |
[removed]
| 2025-05-18T16:31:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpo1sd/applying_lora_in_exllamav2/
|
Hotel_West
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpo1sd
| false | null |
t3_1kpo1sd
|
/r/LocalLLaMA/comments/1kpo1sd/applying_lora_in_exllamav2/
| false | false |
self
| 1 | null |
Gemini's Long Context MoE Architecture (Hypothesized)
| 0 |
Gemini's Long Context MoE Architecture (Hypothesized):
Sharing my hypothesis for how Gemini models achieve their 1-10 million token long context window, with clues to support it below.
Ensemble of Expert (EoE) or Mesh of Expert (MeoE) with common/shared long (1-10M) context window
Gemini's 1M+ token MoE likely uses "instances" (active expert sets/TPU shards) sharing a common distributed context; individual active expert groups then use relevant "parts" of this vast context for generation. This allows concurrent, independent requests via distinct system "partitions."
The context is sharded and managed across numerous interconnected TPUs within a pod.
For any given input, only a sparse set of specialized "expert" subnetworks (a "dynamic pathway") within the total model are activated, based on complexity and context required.
The overall MoE model can handle multiple, concurrent user requests simultaneously.
Each request, with its specific input and context, will trigger its own distinct and isolated pathway of active experts.
Shared context that can act as independent shards of (mini) contexts.
The massively distributed Mixture of Experts (MoE) architecture, spread across TPUs in a single pod, has its long context sharded and managed via parallelism. It can handle concurrent requests, each using part of that context window and an independent expert pathway across the large TPU pod, and it can also devote the entire context window to a single request if required.
Evidence points to this: Google's pioneering MoE research (Shazeer, GShard, Switch), advanced TPUs (v4/v5p/Ironwood) with massive HBM & high-bandwidth 3D Torus/OCS Inter-Chip Interconnect (ICI) enabling essential distribution (MoE experts, sequence parallelism like Ring Attention), and TPU pod VRAM capacities aligning with 10M token context needs. Google's Pathways & system optimizations further support this distributed, concurrent model.
og x thread: [https://x.com/ditpoo/status/1923966380854157434](https://x.com/ditpoo/status/1923966380854157434)
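To make the "sparse pathway per request" idea concrete, here is a toy top-k router sketch in plain Python/NumPy. It has nothing to do with Gemini's actual implementation; it just shows how two concurrent requests can activate their own small expert sets while a shared (in reality, sharded) context store sits alongside them:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, D = 16, 2, 64

# Toy gating network: one linear layer that scores each expert per token.
gate_w = rng.normal(size=(D, NUM_EXPERTS))

def route(token_vec: np.ndarray) -> list[int]:
    """Pick the top-k experts for one token, i.e. its 'dynamic pathway'."""
    scores = token_vec @ gate_w
    return list(np.argsort(scores)[-TOP_K:])

# Two concurrent requests, each with its own tokens. In a real system the shared
# context would be sharded across devices; here it is just a dict per request.
shared_context = {
    "request_a": rng.normal(size=(5, D)),
    "request_b": rng.normal(size=(3, D)),
}

for req, tokens in shared_context.items():
    pathways = {tuple(route(t)) for t in tokens}
    print(req, "activated expert sets:", pathways)
```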
| 2025-05-18T16:34:01 |
ditpoo94
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpo3vl
| false | null |
t3_1kpo3vl
|
/r/LocalLLaMA/comments/1kpo3vl/geminis_long_context_moe_architecture_hypothesized/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'LW1ET0OFjMp8HjO16TgXDhOL2d6DlwPdKweVdshs5QM', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/7yyw9esihk1f1.png?width=108&crop=smart&auto=webp&s=f7f5beab95f3956b6be51b1de3a300fa26ee63c5', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/7yyw9esihk1f1.png?width=216&crop=smart&auto=webp&s=7a6113ff8b5b3f6e817606618d669a2a49eb7d81', 'width': 216}, {'height': 244, 'url': 'https://preview.redd.it/7yyw9esihk1f1.png?width=320&crop=smart&auto=webp&s=76d499b7749396b1b1f9e4d2b5e069b3a1d96d7e', 'width': 320}, {'height': 488, 'url': 'https://preview.redd.it/7yyw9esihk1f1.png?width=640&crop=smart&auto=webp&s=5aa2f6a39a92fd2e1fa6ab1fc7f142d03ea9877c', 'width': 640}], 'source': {'height': 708, 'url': 'https://preview.redd.it/7yyw9esihk1f1.png?auto=webp&s=7e5b274dfa339234213c49594a1115dff379a067', 'width': 928}, 'variants': {}}]}
|
||
Plz help me setup my local LLM
| 1 |
[removed]
| 2025-05-18T16:37:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpo6ha/plz_help_me_setup_my_local_llm/
|
InformationRadiant43
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpo6ha
| false | null |
t3_1kpo6ha
|
/r/LocalLLaMA/comments/1kpo6ha/plz_help_me_setup_my_local_llm/
| false | false |
self
| 1 | null |
(5K t/s prefill 1K t/s gen) High throughput with Qwen3-30B on VLLM and it's smart enough for dataset curation!
| 78 |
We've just started offering Qwen3-30B-A3B and internally it is being used for dataset filtering and curation. The speeds you can get out of it are extremely impressive running on VLLM and RTX 3090s!
I feel like Qwen3-30B is being overlooked in terms of where it can be really useful. Qwen3-30B might be a small regression from QwQ, but it's close enough to be just as useful and the speeds are so much faster that it makes it way more useful for dataset curation tasks.
Now the only issue is the super slow training speeds (10-20x slower than it should be, which makes it untrainable), but it seems someone has made a PR to transformers that attempts to fix this, so fingers crossed! New RpR model based on Qwen3-30B soon with a much improved dataset! [https://github.com/huggingface/transformers/pull/38133](https://github.com/huggingface/transformers/pull/38133)
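For reference, the filtering runs look roughly like this minimal vLLM sketch; the judging prompt, max_model_len, and GPU count are illustrative and should be adapted to your setup:

```python
from vllm import LLM, SamplingParams

# Two RTX 3090s -> tensor_parallel_size=2; adjust max_model_len to what fits in VRAM.
llm = LLM(model="Qwen/Qwen3-30B-A3B", tensor_parallel_size=2, max_model_len=8192)

samples = ["sample text 1 ...", "sample text 2 ..."]  # dataset rows to curate
prompts = [
    f"Rate the following sample for quality from 1-10 and answer with only the number.\n\n{s}"
    for s in samples
]

outputs = llm.generate(prompts, SamplingParams(temperature=0.0, max_tokens=8))
for sample, out in zip(samples, outputs):
    print(out.outputs[0].text.strip(), "<-", sample[:40])
```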
| 2025-05-18T16:50:29 |
Arli_AI
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpohfm
| false | null |
t3_1kpohfm
|
/r/LocalLLaMA/comments/1kpohfm/5k_ts_prefill_1k_ts_gen_high_throughput_with/
| false | false | 78 |
{'enabled': True, 'images': [{'id': 'noBOS2WsirZltIIGJuWLC37Ycea91JT987C1SoEI288', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?width=108&crop=smart&auto=webp&s=b0b7bd6d5549522912a51fedbeb44fcbee80419d', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?width=216&crop=smart&auto=webp&s=80a424b306af7beffcd91b809db2738d540bfcbf', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?width=320&crop=smart&auto=webp&s=a3ddc1f99116ea27e166492e7dd9f95b81b98ff1', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?width=640&crop=smart&auto=webp&s=882bf9251583ebe91545544f4fd8e3eb8e1ee902', 'width': 640}, {'height': 614, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?width=960&crop=smart&auto=webp&s=16ef6f43ef7f1ad08fe2453cd834d3297ab21b37', 'width': 960}, {'height': 691, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?width=1080&crop=smart&auto=webp&s=82d304def4f553880bc6c168fea4acf0eeb0cee2', 'width': 1080}], 'source': {'height': 1198, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?auto=webp&s=646d73461cc4d38fa81155d94ed8685067421b43', 'width': 1871}, 'variants': {}}]}
|
||
Contribution to ollama-python: decorators, helper functions and simplified creation tool
| 0 |
Hi, guys, I posted this on the official ollama Reddit but I decided to post it here too! (This post was written in Portuguese)
I made a commit to ollama-python with the aim of making it easier to create and use custom tools. You can now use simple decorators to register functions:
@ollama_tool – for synchronous functions
@ollama_async_tool – for asynchronous functions
I also added auxiliary functions to make organizing and using the tools easier:
get_tools() – returns all registered tools
get_tools_name() – dictionary with the name of the tools and their respective functions
get_name_async_tools() – list of asynchronous tool names
Additionally, I created a new function called create_function_tool, which allows you to create tools in a similar way to the manual approach, but without worrying about the JSON structure. Just pass the Python parameters: (tool_name, description, parameter_list, required_parameters)
Now, to work with the tools, the flow is very simple:
# Returns the functions that are with the decorators
tools = get_tools()
# dictionary with all functions using decorators (as already used)
available_functions = get_tools_name()
# returns the names of asynchronous functions
async_available_functions = get_name_async_tools()
And in the code, you can use an if to check if the function is asynchronous (based on the list of async_available_functions) and use await or asyncio.run() as necessary.
These changes help reduce the boilerplate and make development with the library more practical.
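To make the flow concrete, here is a small usage sketch based on the API described above (decorator and helper names as listed in the PR; treat the exact imports as an approximation and check the PR for the real ones):

```python
import asyncio
import ollama
from ollama import (  # per the PR; import path may differ
    ollama_tool, ollama_async_tool, get_tools, get_tools_name, get_name_async_tools,
)

@ollama_tool
def get_weather(city: str) -> str:
    """Return a fake weather report for a city."""
    return f"Sunny in {city}"

@ollama_async_tool
async def get_news(topic: str) -> str:
    """Return a fake headline for a topic."""
    return f"Breaking news about {topic}"

response = ollama.chat(
    model="llama3.1",  # any tool-capable model you have pulled
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=get_tools(),
)

available = get_tools_name()
async_names = get_name_async_tools()

for call in response.message.tool_calls or []:
    fn = available[call.function.name]
    args = call.function.arguments
    # Async tools need an event loop; sync tools are called directly.
    result = asyncio.run(fn(**args)) if call.function.name in async_names else fn(**args)
    print(result)
```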
Anyone who wants to take a look or suggest something, follow:
Commit link:
[ https://github.com/ollama/ollama-python/pull/516 ]
My repository link:
[ https://github.com/caua1503/ollama-python/tree/main ]
Observation:
I was already using this in my real project and decided to share it.
I'm an experienced Python dev, but this is my first time working with decorators and I decided to do this in the simplest way possible, I hope to help the community, I know defining global lists, maybe it's not the best way to do this but I haven't found another way
In addition to langchain being complicated and changing everything with each update, I couldn't use it with ollama models, so I went to the Ollama Python library
| 2025-05-18T16:52:31 |
https://github.com/ollama/ollama-python/pull/516
|
chavomodder
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpoj6x
| false | null |
t3_1kpoj6x
|
/r/LocalLLaMA/comments/1kpoj6x/contribution_to_ollamapython_decorators_helper/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'bPOdod-cBJBOtdjmJXLlGWxEW_dSxDsdqhZEtPFYJhA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?width=108&crop=smart&auto=webp&s=c5164269da6e88b84384a3c242adf2062ae64efb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?width=216&crop=smart&auto=webp&s=24a3109d94651570af0b95cd376db8f9b8f92c3b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?width=320&crop=smart&auto=webp&s=9d2574135835a3c8af141e0d2cc0d15c6507e0b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?width=640&crop=smart&auto=webp&s=6bbde65c5b5324258d0882eae80d38a008f28e7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?width=960&crop=smart&auto=webp&s=1e381ccc08a4a3271e18d5f8354a08935b7923c5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?width=1080&crop=smart&auto=webp&s=5a16e0f91516e41fbb9b91b5c08189f08d8030ac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?auto=webp&s=654a735a2023fac595d640261b0bff077652fa94', 'width': 1200}, 'variants': {}}]}
|
|
Requesting help with my thesis
| 0 |
TLDR: Are the models I have linked comparable if I were to feed them the same dataset, with the same instructions/prompt, and ask them to make a decision? The documents I intend to feed them are very large (probably around 20-30k tokens), which leads me to suspect some level of performance degradation. Is there a way to mitigate this?
Hello.
I'll keep it brief: I am doing my CS thesis in the field of automation using different LLMs. Specifically, I'm looking at 4-6 LLMs of roughly the same size (70B) that are reasoning-based, and analyzing how well they can evaluate the application documents (think applications for funding) I feed them, based on predefined criteria. All of the applications have already been approved or rejected by a human.
Basically, I have a labeled dataset of applications, and I want to feed that dataset to the different models and see which performs the best and also how the results compare to the human benchmark.
However, I have had very little experience working with models on any level and have thus run into a ton of problems, so I'm coming here hoping to receive some help in trying to make this thesis project work.
First, I'd like some feedback on the models I have selected. My main worry is (as someone without much knowledge or experience in this area) that the models are not comparable since they are specialized in different ways.
[llama3.3](https://ollama.com/library/llama3.3)
[deepseek-r1](https://ollama.com/library/deepseek-r1)
[qwen2.5](https://ollama.com/library/qwen2.5)
[mixtral8x7](https://ollama.com/library/mixtral)
[cogito](https://ollama.com/library/cogito)
A technical limitation here is that the models have to be available via ollama as the server I have been given to run the experiments needed is using ollama. This is not something that can be circumvented unfortunately. Would love to get some feedback here on if the models are comparable, and if not, what other models I ought to consider.
The second question I don't know how to tackle: performance degradation due to token size. Basically, the documents that will be fed to the model will be labeled applications (think approved/denied). These applications in turn might have additional documents that are required to fulfill the evaluation (think budget documents etc.). As a result, the data that needs to be sent to the model might total around 20-30k tokens, varying with application detail and size etc. Ideally, I would love to ensure the results of the experiment I plan to run are as valid as possible, and this includes taking performance degradation into account. The only solution I can think of is chunking, but I don't know how well that would work, considering the evaluation needs to be done on the whole of the application. I thought about possibly summarizing the contents of an application, but then the experiment becomes invalid as it technically isn't the same data being tested. In addition, I would very likely use some sort of LLM to summarize the application contents, which could be a major threat to the validity of the results.
I guess my question for the second part is: is there a way to get around this? It feels like the best alternative to just "letting it rip," but I don't know how realistic such an approach would be.
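For concreteness, the kind of evaluation harness I have in mind looks roughly like this (model tags, prompt, and num_ctx are placeholders; num_ctx is the setting I would raise so 20-30k-token applications aren't silently truncated by Ollama's default context):

```python
import ollama

MODELS = ["llama3.3:70b", "qwen2.5:72b", "deepseek-r1:70b"]  # whichever tags the server has pulled
CRITERIA = ("Approve only if the application meets the predefined criteria. "
            "Answer APPROVE or REJECT followed by a short justification.")

def judge(model: str, application_text: str) -> str:
    resp = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": CRITERIA},
            {"role": "user", "content": application_text},
        ],
        options={"num_ctx": 32768},  # must cover the whole application plus attachments
    )
    return resp["message"]["content"]

def evaluate(labeled_dataset):
    """labeled_dataset: list of (application_text, human_label) pairs, label in {APPROVE, REJECT}."""
    for model in MODELS:
        correct = 0
        for text, human_label in labeled_dataset:
            decision = "APPROVE" if "APPROVE" in judge(model, text).upper() else "REJECT"
            correct += decision == human_label
        print(model, "agreement with human raters:", correct / len(labeled_dataset))
```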
Thank you in advance. There are unclear aspects of
| 2025-05-18T17:03:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kposu0/requesting_help_with_my_thesis/
|
Nissepelle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kposu0
| false | null |
t3_1kposu0
|
/r/LocalLLaMA/comments/1kposu0/requesting_help_with_my_thesis/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
Multiple, concurrent user accessing to local LLM 🦙🦙🦙🦙
| 1 |
I did a bit of research with the help of AI and it seems that it should work fine, but I haven't yet tested it and put it to real use. So I'm hoping someone who has, can share their experience.
It seems that LLMs (even with 1 GPU and 1 model loaded) can be used with multiple, concurrent users and the performance will still be really good.
I asked AI (GLM-4) and in my example, I told it that I have a 24GB VRAM GPU (RTX 3090). The model I am using is GLM-4-32B-0414-UD-Q4\_K\_XL (18.5GB) with 32K context (2.5-3GB) for a total of 21-21.5GB. It said that I should be able to have 2 concurrent users accessing the model, or I can drop the context down to 16K and have 4 concurrent users, or 8K with 8 users. This seems really good for a general purpose access terminal in the home so that many users can access it whenever they want.
Again, it was just something I researched late last night, but haven't tried it. Of course, we can use a smaller model or quant and adjust our needs accordingly with higher context or more concurrent users.
This seems very cool, and I just wanted to share the idea with others who haven't thought about it before, and also hear from someone who has done this and see what their results were. 🦙🦙🦙🦙
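For anyone who wants to try it, the serving side seems to matter most: the server needs multiple parallel slots (for example llama.cpp's llama-server with --parallel, or vLLM, which batches concurrent requests automatically), and then simulating concurrent users from the client is just a few async requests. A rough sketch, with the endpoint and model name as placeholders:

```python
import asyncio
import time
from openai import AsyncOpenAI

# Assumes a local OpenAI-compatible server already running with several parallel
# slots, e.g.:  llama-server -m GLM-4-32B-Q4_K_XL.gguf -ngl 99 -c 32768 --parallel 2
client = AsyncOpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

async def one_user(user_id: int) -> float:
    start = time.perf_counter()
    await client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": f"User {user_id}: write three sentences about llamas."}],
        max_tokens=128,
    )
    return time.perf_counter() - start

async def main(n_users: int = 2):
    latencies = await asyncio.gather(*(one_user(i) for i in range(n_users)))
    print("per-user latency (s):", [round(t, 2) for t in latencies])

asyncio.run(main())
```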
| 2025-05-18T17:04:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpotei/multiple_concurrent_user_accessing_to_local_llm/
|
Prestigious-Use5483
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpotei
| false | null |
t3_1kpotei
|
/r/LocalLLaMA/comments/1kpotei/multiple_concurrent_user_accessing_to_local_llm/
| false | false |
self
| 1 | null |
Memory for ai
| 0 |
I've been working with AI for a little over a week. I made a conscious decision and decided I was going to dive in. I've done coding in the past so I gravitated in that direction pretty quickly and was able to finish a couple small projects.
Very quickly I started to get a feel for the limitations of how much it can think about at once and how well it can recall things. So I started talking to it about the way it worked and arrived at the conversation that I am attaching. It provided a lot of information, and I even used two AIs to check each other's thoughts, but even though I learned a lot I still don't really know what direction I should go in.
I want local memory storage, I want to maximize associations, and I want to keep it portable so I can use it with different AIs; simple as that.
Here's the attached summary of my conversation (what are humans actually doing out here? my entire discovery process happened inside the AI):
--------
We've had several discussions about memory systems for AI, focusing on managing conversation continuity, long-term memory, and local storage for various applications. Here's a summary of the key points:

- **Save State Concept and Projects:** You explored the idea of a "save state" for AI conversations, similar to video game emulators, to maintain context. I mentioned solutions like Cognigy.AI, Amazon Lex, and open-source projects such as Remembrall, MemoryGPT, Mem0, and Re;memory. Remembrall (available at remembrall.dev) was highlighted for storing and retrieving conversation context via user IDs. MemoryGPT and Mem0 were recommended as self-hosted options for local control and privacy.
- **Mem0 and Compatibility:** You asked about using Mem0 with paid AI models like Grok, Claude, ChatGPT, and Gemini. I confirmed their compatibility via APIs and frameworks like LangChain or LlamaIndex, with specific setup steps for each model. We also discussed Mem0's role in tracking LLM memory and its limitations, such as lacking advanced reflection or automated memory prioritization.
- **Alternatives to Mem0:** You sought alternatives to Mem0 for easier or more robust memory management. I listed options like Zep, Claude Memory, Letta, Graphlit, Memoripy, and MemoryScope, comparing their features. Zep and Letta were noted for ease of use, while Graphlit and Memoripy offered advanced functionality. You expressed interest in combining Mem0, Letta, Graphlit, and Txtai for a comprehensive solution with reflection, memory prioritization, and local storage.
- **Hybrid Architecture:** To maximize memory storage, you proposed integrating Mem0, Letta, Graphlit, and Txtai. I suggested a hybrid architecture where Mem0 and Letta handle core memory tasks, Graphlit manages structured data, and Txtai supports semantic search. I also provided community examples, like Mem0 with Letta for local chatbots and Letta with Ollama for recipe assistants, and proposed alternatives like Mem0 with Neo4j or Letta with Memoripy and Qdrant.
- **Distinct Solutions:** You asked for entirely different solutions from Mem0, Letta, and Neo4j, emphasizing local storage, reflection, and memory prioritization. I recommended a stack of LangGraph, Zep, and Weaviate, which offers simpler integration, automated reflection, and better performance for your needs.
- **Specific Use Cases:** Our conversations touched on memory systems in the context of your projects, such as processing audio messages for a chat group and analyzing PJR data from a Gilbarco Passport POS system. For audio, memory systems like Mem0 were discussed to store transcription and analysis results, while for PJR data, a hybrid approach using Phi-3-mini locally and Grok via API was suggested to balance privacy and performance.

Throughout, you emphasized self-hosted, privacy-focused solutions with robust features like reflection and prioritization. I provided detailed comparisons, setup guidance, and examples to align with your preference for local storage and efficient memory management. If you want to dive deeper into any specific system or use case, let me know!
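--------

For what I'm after (local, portable, association-friendly, usable with any AI), the simplest version of the idea looks something like the sketch below: embed each note locally, keep it in a plain file, and retrieve by similarity before every prompt. This isn't Mem0/Letta/Zep, just an illustration of the pattern those tools build on; the embedding model name and file path are arbitrary choices:

```python
import json
from pathlib import Path
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

STORE = Path("memory.json")  # portable: a plain JSON file you can move between setups
model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def remember(text: str) -> None:
    notes = json.loads(STORE.read_text()) if STORE.exists() else []
    notes.append({"text": text, "embedding": model.encode(text).tolist()})
    STORE.write_text(json.dumps(notes))

def recall(query: str, k: int = 3) -> list[str]:
    notes = json.loads(STORE.read_text()) if STORE.exists() else []
    if not notes:
        return []
    q_emb = model.encode(query)
    ranked = sorted(notes, key=lambda n: float(util.cos_sim(q_emb, n["embedding"])), reverse=True)
    return [n["text"] for n in ranked[:k]]

remember("User prefers self-hosted, privacy-focused tools.")
print(recall("what does the user care about?"))
# The recalled notes get prepended to whatever prompt you send to Grok/Claude/ChatGPT/local models.
```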
| 2025-05-18T17:07:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpowdx/memory_for_ai/
|
michaelkeithduncan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpowdx
| false | null |
t3_1kpowdx
|
/r/LocalLLaMA/comments/1kpowdx/memory_for_ai/
| false | false |
self
| 0 | null |
Hosting a code model
| 0 |
What is the best coding model right now with a large context? Mainly I use JS, Node, PHP, HTML, and Tailwind. I have 2x RTX 3090, so I'm after reasonable speed and a good context size.
| 2025-05-18T17:08:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpoww0/hosting_a_code_model/
|
pyrolols
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpoww0
| false | null |
t3_1kpoww0
|
/r/LocalLLaMA/comments/1kpoww0/hosting_a_code_model/
| false | false |
self
| 0 | null |
Cherry Studio is now my favorite frontend
| 86 |
I've been looking for an open source LLM frontend desktop app for a while that does everything: RAG, web searching, local models, connecting to Gemini and ChatGPT, etc. Jan AI has a lot of potential, but its RAG is experimental and doesn't really work for me. AnythingLLM's RAG has for some reason never worked for me, which is surprising because the entire app is supposed to be built around RAG. LM Studio (not open source) is awesome but can't connect to cloud models. GPT4ALL was decent, but the updater mechanism is buggy.
I remember seeing [Cherry Studio](https://github.com/CherryHQ/cherry-studio) a while back but I'm wary with Chinese apps (I'm not sure if my suspicion is unfounded 🤷). I got tired of having to jump around apps for specific features so I downloaded Cherry Studio and it's the app that does everything I want. In fact, it has quite a bit more features I haven't touched on like direct connections to your Obsidian knowledge base. I never see this project being talked about, maybe there's a good reason?
I am not affiliated with Cherry Studio, I just want to explain my experience in hopes some of you may find the app useful.
| 2025-05-18T17:11:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpozhd/cherry_studio_is_now_my_favorite_frontend/
|
ConsistentCan4633
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpozhd
| false | null |
t3_1kpozhd
|
/r/LocalLLaMA/comments/1kpozhd/cherry_studio_is_now_my_favorite_frontend/
| false | false |
self
| 86 |
{'enabled': False, 'images': [{'id': 'He5VG53rTBjWbNk1_UdCjYukNuT1UhGRClb6ecDAOwM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=108&crop=smart&auto=webp&s=6c9b9a17a1cba0f4382bf80f06bb3715c6dc44e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=216&crop=smart&auto=webp&s=92b27520516bfcbd53d1e6b9d990555d32df6c83', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=320&crop=smart&auto=webp&s=fc737b04eea752c74ff8754da1f1b9c156dae7b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=640&crop=smart&auto=webp&s=20d6c7b5164323d63cf76761c30754520702828d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=960&crop=smart&auto=webp&s=40deaab2f5568fda0e3869b437de4c0487e64d1e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=1080&crop=smart&auto=webp&s=e9c156ddae1ba7946ff4beec6c795a9807e2d39c', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?auto=webp&s=44f8fb8f5f5c2a78bc39c42aa63d438eea87881f', 'width': 1280}, 'variants': {}}]}
|
Better alternatives to searxng for web scraping / search?
| 1 |
[removed]
| 2025-05-18T17:17:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpp4qt/better_alternatives_to_searxng_for_web_scraping/
|
gnulinux-pony
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpp4qt
| false | null |
t3_1kpp4qt
|
/r/LocalLLaMA/comments/1kpp4qt/better_alternatives_to_searxng_for_web_scraping/
| false | false |
self
| 1 | null |
is Qwen 30B-A3B the best model to run locally right now?
| 121 |
I recently got into running models locally, and just some days ago Qwen 3 got launched.
I saw a lot of posts about Mistral, Deepseek R1, and Llama, but since Qwen 3 got released recently, there isn't much information about it. Reading the benchmarks, though, it looks like Qwen 3 outperforms all the other models, and the MoE version runs like a 20B+ model while using very few resources.
So I would like to ask: is it the only model I would need to get, or are there still other models that could be better than Qwen 3 in some areas? (My specs are: RTX 3080 Ti (12GB VRAM), 32GB of RAM, 12900K)
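For hardware like this, note that the 30B-A3B weights won't fit in 12GB of VRAM, but since only about 3B parameters are active per token it can still run well partially offloaded to system RAM. A rough llama-cpp-python sketch (the quant file name and layer split are guesses to tune for your machine):

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with CUDA)

llm = Llama(
    model_path="Qwen3-30B-A3B-Q4_K_M.gguf",  # roughly 18 GB file; whatever quant you downloaded
    n_gpu_layers=20,   # offload what fits in 12 GB VRAM, the rest stays in system RAM
    n_ctx=8192,
    n_threads=12,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the difference between dense and MoE models."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```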
| 2025-05-18T17:18:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kpp5op/is_qwen_30ba3b_the_best_model_to_run_locally/
|
S4lVin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kpp5op
| false | null |
t3_1kpp5op
|
/r/LocalLLaMA/comments/1kpp5op/is_qwen_30ba3b_the_best_model_to_run_locally/
| false | false |
self
| 121 | null |
MSI PC with NVIDIA GB10 Superchip - 6144 CUDA Cores and 128GB LPDDR5X Confirmed
| 110 |
ASUS, Dell, and Lenovo have released their versions of the Nvidia DGX Spark, and now MSI has as well.
[https://en.gamegpu.com/iron/msi-showed-edgeexpert-ms-c931-s-nvidia-gb10-superchip-confirmed-6144-cuda-yader-i-128-gb-lpddr5x](https://en.gamegpu.com/iron/msi-showed-edgeexpert-ms-c931-s-nvidia-gb10-superchip-confirmed-6144-cuda-yader-i-128-gb-lpddr5x)
| 2025-05-18T17:28:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kppdhb/msi_pc_with_nvidia_gb10_superchip_6144_cuda_cores/
|
shakhizat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kppdhb
| false | null |
t3_1kppdhb
|
/r/LocalLLaMA/comments/1kppdhb/msi_pc_with_nvidia_gb10_superchip_6144_cuda_cores/
| false | false |
self
| 110 | null |
Handwriting OCR (HTR)
| 12 |
Has anyone experimented with using VLMs like Qwen2.5-VL to OCR handwriting? I have had better results on full pages of handwriting with unpredictable structure (old travel journals with dates in the margins or elsewhere, for instance) using Qwen than with traditional OCR or even more recent methods like TrOCR.
I believe that the VLMs' understanding of context should help figure out words better than traditional OCR. I do not know if this is actually true, but it seems worth trying.
Interestingly, though, using Transformers with unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit ends up being much more accurate than any GGUF quantization using llama.cpp, even larger quants like Qwen2.5-VL-7B-Instruct-Q8\_0.gguf from ggml-org/Qwen2.5-VL-7B-Instruct (using mmproj-Qwen2-VL-7B-Instruct-f16.gguf). I even tried a few Unsloth GGUFs, and still running the bnb 4bit through Transformers gets much better results.
That bnb quant, though, barely fits in my VRAM and ends up overflowing pretty quickly. GGUF would be much more flexible if it performed the same, but I am not sure why the results are so different.
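For anyone wanting to reproduce the Transformers path, this is roughly the pattern I mean, following the usage shown on the Qwen2.5-VL model cards (the image path is a placeholder; you need a recent transformers, qwen-vl-utils, and bitsandbytes for the bnb-4bit checkpoint):

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "journal_page.jpg"},  # placeholder path
        {"type": "text", "text": "Transcribe all handwriting on this page, preserving the layout."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=1024)
trimmed = output_ids[:, inputs.input_ids.shape[1]:]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```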
Any ideas? Thanks!
| 2025-05-18T17:33:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kppihw/handwriting_ocr_htr/
|
dzdn1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kppihw
| false | null |
t3_1kppihw
|
/r/LocalLLaMA/comments/1kppihw/handwriting_ocr_htr/
| false | false |
self
| 12 | null |
Resurrecting the Dead starting with Jacque Fresco
| 1 |
[removed]
| 2025-05-18T17:38:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kppm4f/resurrecting_the_dead_starting_with_jacque_fresco/
|
Longjumping-You-7118
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kppm4f
| false | null |
t3_1kppm4f
|
/r/LocalLLaMA/comments/1kppm4f/resurrecting_the_dead_starting_with_jacque_fresco/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'cwstZZAcNq2LFiN6j5DFN9V9MIMP9cdJ7CHn883KRl8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?width=108&crop=smart&auto=webp&s=aed2a4a36510d044c78fcf720fe226c71d056754', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?width=216&crop=smart&auto=webp&s=c274cc5ec373efe385c2a2f216a39160b2e6041e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?width=320&crop=smart&auto=webp&s=00a47cefd16339eb7a7713f113bb82d39b6c2ae6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?width=640&crop=smart&auto=webp&s=563c218bbf9e4637a1ce808a04c7f708c9fc5418', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?width=960&crop=smart&auto=webp&s=fe09e8ee284c9db3e5fed8f45989ad53b8b1122c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?width=1080&crop=smart&auto=webp&s=74859b426fb04cb6a859d3cafd2002d22483877d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?auto=webp&s=2e163f9d40aaff7530c45526328b0e98196dbb28', 'width': 1200}, 'variants': {}}]}
|
What's the best local model for M2 32gb Macbook (Audio/Text) in May 2025?
| 0 |
I'm looking to process private interviews (10 interviews, each about 2 hours long) that I conducted with victims of abuse for a research project. This must be done locally for privacy. Once the material is in the LLM, I want to see how it compares to human raters at assessing common themes. What's the best local model for transcribing and then assessing the themes, and is there a local model that can accept the audio files without me transcribing them first?
Here are my system stats:
* Apple MacBook Air M2 8-Core
* 16gb Memory
* 2TB SSD
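A fully local two-step setup that should fit a machine like this: transcribe with a local Whisper implementation first (faster-whisper below; whisper.cpp also runs well on Apple Silicon), then hand the transcript to a small local LLM for theme coding. The model sizes are guesses for 16GB of RAM, and everything stays on the machine:

```python
from faster_whisper import WhisperModel   # pip install faster-whisper
import ollama                              # with e.g. `ollama pull qwen2.5:7b`

# Step 1: local transcription (int8 keeps memory low on a 16 GB machine).
stt = WhisperModel("medium", device="cpu", compute_type="int8")
segments, _ = stt.transcribe("interview_01.m4a")
transcript = " ".join(seg.text for seg in segments)

# Step 2: local theme extraction on the transcript.
prompt = (
    "You are assisting qualitative research. Identify the main themes in this "
    "interview transcript and give a short supporting quote for each theme.\n\n" + transcript
)
resp = ollama.chat(
    model="qwen2.5:7b",
    messages=[{"role": "user", "content": prompt}],
    options={"num_ctx": 16384},  # large enough for a 2-hour transcript chunk
)
print(resp["message"]["content"])
```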
| 2025-05-18T17:43:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kppr0t/whats_the_best_local_model_for_m2_32gb_macbook/
|
SinkThink5779
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kppr0t
| false | null |
t3_1kppr0t
|
/r/LocalLLaMA/comments/1kppr0t/whats_the_best_local_model_for_m2_32gb_macbook/
| false | false |
self
| 0 | null |