title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Yet Another Quantization Algorithm (YAQA) | 1 | [removed] | 2025-06-06T05:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l4kfd4/yet_another_quantization_algorithm_yaqa/ | tsengalb99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4kfd4 | false | null | t3_1l4kfd4 | /r/LocalLLaMA/comments/1l4kfd4/yet_another_quantization_algorithm_yaqa/ | false | false | self | 1 | null |
Should I choose llama-swap over my own solution? | 5 | I built something similar to llama-swap a while ago: a config file with server settings for a number of different models I use. It automatically restarts llama-server instances when I request another model. It's not a transparent proxy like llama-swap, though; my apps talk to the currently running llama-server instance directly, through a custom abstraction layer that essentially acts as a proxy for llama-server.
I want to add some new capabilities, most importantly rules like "keep the current model running unless there isn't enough VRAM left for the new model". I don't see anything like that in their config example, so I assume I'd have to somehow make it work with their "group" concept? Seems a bit rigid for my taste.
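For context, a minimal sketch of how such a rule could look in a naive Python manager like mine; the nvidia-smi query is real, the model-size numbers are placeholders:

```python
# Decide whether the currently running llama-server must be stopped before
# launching another model, based on free VRAM across all GPUs.
import subprocess

def free_vram_mib() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"], text=True)
    return sum(int(line) for line in out.splitlines())

def must_swap(requested_model_mib: int) -> bool:
    # Keep the current server alive if the new model fits beside it.
    return free_vram_mib() < requested_model_mib

if must_swap(requested_model_mib=9000):
    print("stop current llama-server, then start the new one")
else:
    print("start the new llama-server alongside the current one")
```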
Are there things I don't see here? What other benefits would make me reconsider? Does their Go-based implementation provide noticeable advantages over my naive Python-based process management? | 2025-06-06T06:06:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l4l938/should_i_choose_llamaswap_over_my_own_solution/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4l938 | false | null | t3_1l4l938 | /r/LocalLLaMA/comments/1l4l938/should_i_choose_llamaswap_over_my_own_solution/ | false | false | self | 5 | null |
Is there a video, article, or book where a lot of real-world datasets are used to train an industry-level LLM, with all the code? | 9 | Is there a video, article, or book where a lot of real-world datasets are used to train an industry-level LLM, with all the code? Everything I can find is toy models trained on toy datasets, which I've played with tons of times already. I know the GPT-3 and Llama papers give some information about what datasets were used, but I wanna see insights from an expert on how he trains with the data in real time to prevent all sorts of failure modes, to make the model have good diverse outputs, to make it have a lot of stable knowledge, to make it do many different tasks when prompted, to not overfit, etc.
I guess "Build a Large Language Model (From Scratch)" by Sebastian Raschka is the closest to this ideal that exists, even if it's not exactly what I want. He has chapters on Pretraining on Unlabeled Data, Finetuning for Text Classification, Finetuning to Follow Instructions.
https://youtu.be/Zar2TJv-sE0
In that video he has simple datasets, like just pretraining with one book. I wanna see a full training pipeline with mixed, diverse-quality datasets that are cleaned, balanced, blended, and/or maybe ordered for curriculum learning. And I wanna see methods for stabilizing training, preventing catastrophic forgetting and mode collapse, etc., in a better model. And making the model behave like an assistant, make summaries that make sense, etc.
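For the blending part specifically, here's a minimal sketch of what the mixture step usually looks like with Hugging Face `datasets`; the file names, the "text" field, and the weights are made-up placeholders, not from any real run:

```python
from datasets import load_dataset, interleave_datasets

# Three hypothetical cleaned corpora, one JSONL file each with a "text" field.
web   = load_dataset("json", data_files="web.jsonl",   split="train", streaming=True)
code  = load_dataset("json", data_files="code.jsonl",  split="train", streaming=True)
books = load_dataset("json", data_files="books.jsonl", split="train", streaming=True)

# Blend with fixed sampling weights; for curriculum learning you would
# rebuild this mixture with different weights at different training stages.
mixture = interleave_datasets([web, code, books],
                              probabilities=[0.7, 0.2, 0.1], seed=42)

for i, ex in enumerate(mixture):
    print(ex["text"][:80])
    if i == 2:
        break
```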
At least there's this RedPajama open reproduction of the LLaMA training dataset. <https://www.together.ai/blog/redpajama-data-v2>
Now I wanna see someone train a model using this dataset or a similar one. I suspect that, when it comes to bigger frontier models, it takes more than just running this training pipeline for as long as you want.
I just found this GitHub repo that sets it up for a single training run.
<https://github.com/techconative/llm-finetune/blob/main/tutorials/pretrain_redpajama.md> <https://github.com/techconative/llm-finetune/blob/main/pretrain/redpajama.py>
There's this video on it too but they don't show training in detail.
https://www.youtube.com/live/_HFxuQUg51k?si=aOzrC85OkE68MeNa
There's also SlimPajama.
Then there's also The Pile dataset, which is also a very diverse dataset. <https://arxiv.org/abs/2101.00027>
which is used in a single training run here. <https://github.com/FareedKhan-dev/train-llm-from-scratch>
And more insights into creating or extending these datasets than just what's in their papers could also be nice.
I wanna see the full complexity of training a full, better model in all its glory, with as many implementation details as possible. It's so hard to find such resources.
Do you know any resource(s) closer to this ideal? | 2025-06-06T06:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l4lwtq/is_there_an_video_or_article_or_book_where_a_lot/ | Happysedits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4lwtq | false | null | t3_1l4lwtq | /r/LocalLLaMA/comments/1l4lwtq/is_there_an_video_or_article_or_book_where_a_lot/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'O4NIi1E5_R1byN18JxxjgC67yqog8scgm_H-yjjZSEk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/yJZOrktSf3706K8SsYmAWo-4E6FP9nY4XcHL5TzPeIQ.jpg?width=108&crop=smart&auto=webp&s=98571eaf9500f9d207e1f1733a6f8e28d8a82563', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/yJZOrktSf3706K8SsYmAWo-4E6FP9nY4XcHL5TzPeIQ.jpg?width=216&crop=smart&auto=webp&s=c057a21ff6d5cb638f8af47b2ef51628aee2972f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/yJZOrktSf3706K8SsYmAWo-4E6FP9nY4XcHL5TzPeIQ.jpg?width=320&crop=smart&auto=webp&s=6134ac57269c7cd0bbccff50c4a7dcb066630520', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/yJZOrktSf3706K8SsYmAWo-4E6FP9nY4XcHL5TzPeIQ.jpg?auto=webp&s=18218bd22d22494954ef274f23e3954cdeb01134', 'width': 480}, 'variants': {}}]} |
China's Xiaohongshu (Rednote) released its dots.llm open-source AI model | 416 | https://huggingface.co/spaces/rednote-hilab/dots-demo
| 2025-06-06T07:28:45 | https://github.com/rednote-hilab/dots.llm1 | Fun-Doctor6855 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l4mgry | false | null | t3_1l4mgry | /r/LocalLLaMA/comments/1l4mgry/chinas_xiaohongshurednote_released_its_dotsllm/ | false | false | default | 416 | {'enabled': False, 'images': [{'id': 'dzT3Ipp6otwumdGdMqcZYoOXNptMhbuF91P9vr8s_p4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?width=108&crop=smart&auto=webp&s=11afaf19de477de4f4e17e2663686bb7e0fe691f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?width=216&crop=smart&auto=webp&s=2bdb9f2ba8a25b95da3e6d46500c09472e4984be', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?width=320&crop=smart&auto=webp&s=d48ee0834edcbdea0c6ff8abd3c334d9463ea274', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?width=640&crop=smart&auto=webp&s=eeb78f1f04433cf2e3b17e9288228b43ca05ce64', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?width=960&crop=smart&auto=webp&s=d30fa4eedc0f0c7d0a8d9f9fbbd4b11748042c1e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?width=1080&crop=smart&auto=webp&s=8a6552d496a1e732079109d875791e3dca8e362e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/O4a5ycdopqyUZaJQmD9jwW8EpcdIe_Y4TADgsKDlB-k.jpg?auto=webp&s=5a4823a3b7d634ed6062efac5a8d4e2273e5ad07', 'width': 1200}, 'variants': {}}]} |
China's Rednote Open-source dots.llm performance & cost | 140 | https://github.com/rednote-hilab/dots.llm1/blob/main/dots1_tech_report.pdf | 2025-06-06T07:51:36 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4ms71 | false | null | t3_1l4ms71 | /r/LocalLLaMA/comments/1l4ms71/chinas_rednote_opensource_dotsllm_performance_cost/ | false | false | default | 140 | {'enabled': True, 'images': [{'id': '4kbcizani95f1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=108&crop=smart&auto=webp&s=acc9f15f0fd89b5fdb8dab151a534593927cd1c5', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=216&crop=smart&auto=webp&s=8e49d574df2224d78a09addba51b51b0c5cc7497', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=320&crop=smart&auto=webp&s=d889c9403c5edc609c91cd8265f56926ff69ca6c', 'width': 320}, {'height': 350, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=640&crop=smart&auto=webp&s=0cfcb8ad68d5f56fd762315c1af14b5f339c1ff6', 'width': 640}, {'height': 526, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=960&crop=smart&auto=webp&s=0916a87c1979e3975b0588e1f39b1b54cf8df59d', 'width': 960}, {'height': 592, 'url': 'https://preview.redd.it/4kbcizani95f1.png?width=1080&crop=smart&auto=webp&s=3c97f5d15124caeaca90278708904d99f4c45cfe', 'width': 1080}], 'source': {'height': 886, 'url': 'https://preview.redd.it/4kbcizani95f1.png?auto=webp&s=6bbf857c47b9a925751e4846de9fbd8edd3615ad', 'width': 1616}, 'variants': {}}]} |
llama3:70b (4-bit Quantized) from Ollama is not paying attention at the initial part of the system prompt. | 1 | [removed] | 2025-06-06T08:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l4mx2b/llama370b_4bit_quantized_from_ollama_is_not/ | Evening-Power-3302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4mx2b | false | null | t3_1l4mx2b | /r/LocalLLaMA/comments/1l4mx2b/llama370b_4bit_quantized_from_ollama_is_not/ | false | false | self | 1 | null |
Help- in need of advice choosing GPU 5060ti vs 5070 or AMD | 1 | [removed] | 2025-06-06T08:04:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l4myno/help_in_need_of_advice_choosing_gpu_5060ti_vs/ | Ok-Cup-608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4myno | false | null | t3_1l4myno | /r/LocalLLaMA/comments/1l4myno/help_in_need_of_advice_choosing_gpu_5060ti_vs/ | false | false | self | 1 | null |
Can a model be so radically altered that its origin can no longer be recognized? YES! | 94 | **Phi-lthy4** ([https://huggingface.co/SicariusSicariiStuff/Phi-lthy4](https://huggingface.co/SicariusSicariiStuff/Phi-lthy4)) has been consistently described as **exceptionally unique** by all who have tested it, **almost devoid of SLOP**, and it is now widely regarded as the **most unique roleplay model available**. It underwent an intensive continued pretraining (CPT) phase, extensive supervised fine-tuning (SFT) on high-quality organic datasets, and leveraged advanced techniques including model merging, parameter pruning, and upscaling. The model was trained using **AXOLOTL** :)
Interestingly, this distinctiveness was validated in a recent paper: [*Gradient-Based Model Fingerprinting for LLM Similarity Detection and Family Classification*](https://arxiv.org/html/2506.01631v1). Among a wide array of models tested, this one stood out as **unclassifiable** by traditional architecture-based fingerprinting—highlighting the extent of its architectural deviation. This was the result of **deep structural modification**: not just fine-tuning, but full-layer re-architecture, aggressive parameter pruning, and fusion with unrelated models. | 2025-06-06T08:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l4mzbr/can_a_model_be_so_radically_altered_that_its/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4mzbr | false | null | t3_1l4mzbr | /r/LocalLLaMA/comments/1l4mzbr/can_a_model_be_so_radically_altered_that_its/ | false | false | self | 94 | {'enabled': False, 'images': [{'id': '6I6zlj5qBDMKY-S0diPhFY3TXNcscxPYnAU5SHXHueE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?width=108&crop=smart&auto=webp&s=4015542c2bd16193822270b458eb7ae2a72bf53e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?width=216&crop=smart&auto=webp&s=7449037bd4f4c9b1fe1614e0935d91c8a0bcb2d8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?width=320&crop=smart&auto=webp&s=56853e7115ba80dbcb763cee536daccad8876cb0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?width=640&crop=smart&auto=webp&s=cfa66b1029df1e62f1dd43cdcabf838b6a534ced', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?width=960&crop=smart&auto=webp&s=f4530006f3b5c8f5639a2321e8ad28d693e86025', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?width=1080&crop=smart&auto=webp&s=a5d380a8333d13ca446762278de0b5e452db572d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qifdM-0N9Odnjr9ZvoV983wjPcBE0iH_utXQ86v0StQ.jpg?auto=webp&s=c1a00c2f67d25717739f96b230e8418a742a350c', 'width': 1200}, 'variants': {}}]} |
Graphic card 5060 ti vs 5070 for Llama | 1 | [removed] | 2025-06-06T08:10:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l4n1v4/graphic_card_5060_ti_vs_5070_for_llama/ | Ok-Cup-608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4n1v4 | false | null | t3_1l4n1v4 | /r/LocalLLaMA/comments/1l4n1v4/graphic_card_5060_ti_vs_5070_for_llama/ | false | false | self | 1 | null |
🚀 Chat with Local LLMs via Chrome – New Privacy-First Extension (Ollama Client) | 1 | [removed] | 2025-06-06T08:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l4n9em/chat_with_local_llms_via_chrome_new_privacyfirst/ | Some_Storage_9977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4n9em | false | null | t3_1l4n9em | /r/LocalLLaMA/comments/1l4n9em/chat_with_local_llms_via_chrome_new_privacyfirst/ | false | false | self | 1 | null |
Locally hosted DeepSeek-R1 server in LM Studio | 1 | [removed] | 2025-06-06T08:28:29 | https://v.redd.it/6sqi1pk3p95f1 | walkerb1972 | /r/LocalLLaMA/comments/1l4nb7d/locally_hosted_deepseekr1_server_in_lm_studio/ | 1970-01-01T00:00:00 | 0 | {} | 1l4nb7d | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6sqi1pk3p95f1/DASHPlaylist.mpd?a=1751920116%2CMjc0ZjBiMjA4MDVlZjFiODU2NDJhYjQwYjNkNDViNGJhMDdmOWFlNDMxNDNhMWMzNzBlMTkxZDlhOGZhZDVhOQ%3D%3D&v=1&f=sd', 'duration': 65, 'fallback_url': 'https://v.redd.it/6sqi1pk3p95f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6sqi1pk3p95f1/HLSPlaylist.m3u8?a=1751920116%2CZTZlY2NkMDhiYTlkYmE5ZWQ2ZGFlZDg1OGE5NjVmM2E2NGQxODI4NDI1NmQ2M2NmZjJkYmZiMzI0OTkwZGIwOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6sqi1pk3p95f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l4nb7d | /r/LocalLLaMA/comments/1l4nb7d/locally_hosted_deepseekr1_server_in_lm_studio/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?width=108&crop=smart&format=pjpg&auto=webp&s=f3068854d2c65661d36bb8ece4a9783f92fa4e59', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?width=216&crop=smart&format=pjpg&auto=webp&s=22e772dad38586579cea3c9e17cff1ae85c7a421', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?width=320&crop=smart&format=pjpg&auto=webp&s=5f6c15637cab4105e6118166af36cde88957556a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?width=640&crop=smart&format=pjpg&auto=webp&s=c94cbd99a43c25e769089ff2bf9c5ac1b34bf573', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?width=960&crop=smart&format=pjpg&auto=webp&s=bcbf0f9c569707109afbd521c5a4c66a4405bda4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?width=1080&crop=smart&format=pjpg&auto=webp&s=48937cc3d7671746587538cd0ef2c5240e30b832', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cGkzeW1hbzJwOTVmMfO3rqe82TjJQzUmwxOHGyZi52Sarhgd2f7bWh8fY2ok.png?format=pjpg&auto=webp&s=f2a639c599825c84d6831fb02b5e46573a414113', 'width': 1920}, 'variants': {}}]} |
Is updating prompts frequently even worth it? | 1 | [removed] | 2025-06-06T08:38:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l4ngbb/is_updating_prompts_frequently_even_worth_it/ | Useful_Artichoke_292 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4ngbb | false | null | t3_1l4ngbb | /r/LocalLLaMA/comments/1l4ngbb/is_updating_prompts_frequently_even_worth_it/ | false | false | self | 1 | null |
Tokasaurus: An LLM Inference Engine for High-Throughput Workloads | 28 | 2025-06-06T08:40:10 | https://scalingintelligence.stanford.edu/blogs/tokasaurus/ | AppearanceHeavy6724 | scalingintelligence.stanford.edu | 1970-01-01T00:00:00 | 0 | {} | 1l4ngz5 | false | null | t3_1l4ngz5 | /r/LocalLLaMA/comments/1l4ngz5/tokasaurus_an_llm_inference_engine_for/ | false | false | default | 28 | {'enabled': False, 'images': [{'id': '98Ssw8LPycFLwlm_uHfDB4EVaoCaGEd0Q0M_tFW-Cko', 'resolutions': [{'height': 105, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?width=108&crop=smart&auto=webp&s=16ef86d80eacfcc1615675e738758041e968a400', 'width': 108}, {'height': 211, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?width=216&crop=smart&auto=webp&s=2310bfbad8d3d3d87120c3cfb8b92e9ceeef74bc', 'width': 216}, {'height': 313, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?width=320&crop=smart&auto=webp&s=9039d8f804d52619036f486b89bbe1d0a42e7dd7', 'width': 320}, {'height': 626, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?width=640&crop=smart&auto=webp&s=e47b3c25f6e5641c7f6cb8d1f335603b210516d8', 'width': 640}, {'height': 939, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?width=960&crop=smart&auto=webp&s=945cf438b047cb451c92eec70ff5c6d9bd570d1e', 'width': 960}, {'height': 1057, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?width=1080&crop=smart&auto=webp&s=721f8692000136a875bd054225be2d238cd001d6', 'width': 1080}], 'source': {'height': 1388, 'url': 'https://external-preview.redd.it/UMHldeViAkaftNoXr0yZV1xJLJ_mUiopvNlMx-OrluA.jpg?auto=webp&s=e3a399144b1a56769b514a32a3654f46904eeabb', 'width': 1418}, 'variants': {}}]} |
MiniCPM4: 7x the decoding speed of Qwen3-8B | 156 | MiniCPM 4 is an extremely efficient edge-side large model, optimized across four dimensions: model architecture, learning algorithms, training data, and inference systems.
* 🏗️ **Efficient Model Architecture:**
* InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention architecture in which, when processing 128K-long text, each token only needs to compute relevance against less than 5% of the other tokens, significantly reducing the computational overhead of long texts (a toy top-k sketch follows after this list)
* 🧠 **Efficient Learning Algorithms:**
* Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for performance of downstream tasks, enabling more precise model training configuration search
* BitCPM -- Ultimate Ternary Quantization: Compresses model parameter bit-width to 3 values, achieving 90% extreme model bit-width reduction
* Efficient Training Engineering Optimization: Adopts FP8 low-precision computing technology combined with Multi-token Prediction training strategy
* 📚 **High-Quality Training Data:**
* UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset [UltraFineWeb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb)
* UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale high-quality supervised fine-tuning datasets covering multiple dimensions including knowledge-intensive data, reasoning-intensive data, instruction-following data, long text understanding data, and tool calling data
* ⚡ **Efficient Inference and Deployment System:**
* CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding.
* ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities
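To make the InfLLM v2 bullet above concrete, here is a toy top-k sparse-attention sketch in PyTorch. It only shows the core "attend to the most relevant ~5% of keys" idea, not InfLLM v2's actual trainable, block-level mechanism:

```python
import torch

def topk_sparse_attention(q, k, v, keep_ratio=0.05):
    # Each query attends only to its top ~5% highest-scoring keys.
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5      # (T, T)
    n_keep = max(1, int(scores.shape[-1] * keep_ratio))
    top = scores.topk(n_keep, dim=-1)
    sparse = torch.full_like(scores, float("-inf"))
    sparse.scatter_(-1, top.indices, top.values)               # keep only top-k
    return torch.softmax(sparse, dim=-1) @ v

T, d = 1024, 64
q, k, v = torch.randn(3, T, d).unbind(0)
out = topk_sparse_attention(q, k, v)
print(out.shape)   # torch.Size([1024, 64])
```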
[https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md](https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md) | 2025-06-06T08:45:36 | Lynncc6 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4njon | false | null | t3_1l4njon | /r/LocalLLaMA/comments/1l4njon/minicpm4_7x_decoding_speed_than_qwen38b/ | false | false | default | 156 | {'enabled': True, 'images': [{'id': 'j4mqq99tr95f1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?width=108&crop=smart&auto=webp&s=5eb8fee188d1c0ac243261b540f880b0725c4dc0', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?width=216&crop=smart&auto=webp&s=725cf011b9253f766df4698ad3c538702c930f38', 'width': 216}, {'height': 138, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?width=320&crop=smart&auto=webp&s=17854ff118ef1a8cc02b0870d1b483583eb22fcc', 'width': 320}, {'height': 276, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?width=640&crop=smart&auto=webp&s=3176005523900a855f124250586125520eda5fa5', 'width': 640}, {'height': 414, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?width=960&crop=smart&auto=webp&s=70454b3e400128d14e691701be9184d38ad732ce', 'width': 960}], 'source': {'height': 439, 'url': 'https://preview.redd.it/j4mqq99tr95f1.png?auto=webp&s=ae686accc2ef04c8a30d4d1d82fc8b5117ef66df', 'width': 1017}, 'variants': {}}]} |
Which agent-like terminal do you guys use? Something like Warp but free. | 5 | I want something which can browse around a source code repository and answer questions about it. Warp is pretty good but doesn’t let you use your own llm keys.
Open WebUI’s function calling doesn’t seem to be able to execute more than one function per turn, so it’s not good for planning steps.
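For reference, this is the kind of multi-step tool loop I'm after, sketched against an OpenAI-compatible local endpoint (model name, URL, and the single shell tool are placeholders):

```python
# Minimal agent loop: keep calling tools until the model stops asking.
import json, subprocess
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
tools = [{"type": "function", "function": {
    "name": "run_shell",
    "description": "Run a shell command in the repo and return its output",
    "parameters": {"type": "object",
                   "properties": {"cmd": {"type": "string"}},
                   "required": ["cmd"]}}}]

messages = [{"role": "user", "content": "How many Python files are in this repo?"}]
while True:
    msg = client.chat.completions.create(
        model="local-model", messages=messages, tools=tools).choices[0].message
    messages.append(msg)
    if not msg.tool_calls:                 # no more planned steps: final answer
        print(msg.content)
        break
    for call in msg.tool_calls:            # execute every call in this turn
        cmd = json.loads(call.function.arguments)["cmd"]
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
        messages.append({"role": "tool", "tool_call_id": call.id, "content": out})
```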
| 2025-06-06T08:55:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l4nobz/which_agentlike_terminal_do_you_guys_use/ | grey-seagull | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4nobz | false | null | t3_1l4nobz | /r/LocalLLaMA/comments/1l4nobz/which_agentlike_terminal_do_you_guys_use/ | false | false | self | 5 | null |
Is it possible to run non-reasoning deepseek-r1-0528? | 30 | I know, stupid question, but couldn't find an answer to it! | 2025-06-06T08:57:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l4npcl/it_is_possble_to_run_nonreasoning_deepseekr10528/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4npcl | false | null | t3_1l4npcl | /r/LocalLLaMA/comments/1l4npcl/it_is_possble_to_run_nonreasoning_deepseekr10528/ | false | false | self | 30 | null |
8x RTX 3090 setup with p2p patch | 1 | A month ago I [complained](https://www.reddit.com/r/LocalLLaMA/comments/1kds51e/inference_needs_nontrivial_amount_of_pcie/) that connecting 8 RTX 3090 with PCIe 3.0 x4 links is bad idea. I have upgraded my rig with PCIe 4.0 x8 links (4x theoretical bandwidth improvement) and listed numbers in this reddit [post](https://www.reddit.com/r/LocalLLaMA/comments/1l3i78l/update_inference_needs_nontrivial_amount_of_pcie/). Now it's time to try the p2p patch from geohot.
Last time I tried it on a Threadripper motherboard it didn't work, so I wasn't sure it would work on this EPYC platform either.
TLDR: It works and at 8x tensor parallel it improves prefill by 70%, generation by 7%.
We can confidently conclude that the common belief that inference does not require cross-GPU bandwidth is entirely false. Maybe it does not need much bandwidth in simple pipeline-parallel cases, but for tensor parallel you should throw as much bandwidth at it as possible.
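A rough back-of-envelope shows the decode-vs-prefill asymmetry too. The dimensions below are my assumptions for a Mistral-Large-class model, not measured values:

```python
# Approximate per-GPU all-reduce traffic per generated token under TP=8.
# Hidden size / layer count are assumed for a Mistral-Large-class model.
hidden, layers, tp, bytes_per_elem = 12288, 88, 8, 2      # fp16 activations

per_layer = 2 * hidden * bytes_per_elem      # two all-reduces per layer (attn + MLP)
ring_factor = 2 * (tp - 1) / tp              # ring all-reduce traffic multiplier
per_token_mb = layers * per_layer * ring_factor / 1e6
print(f"~{per_token_mb:.1f} MB per GPU per token")        # ~7.6 MB
# At ~43 tok/s decode that is only ~0.3 GB/s, so decoding is mostly
# latency-bound, while prefill pushes thousands of tokens through the same
# all-reduces at once and therefore gains much more from extra bandwidth.
```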
Below are the details of my setup and tests I ran:
Hardware: EPYC 7302, H12SSL, 8 RTX 3090 in slots 1, 3, 5, 6, used cpayne PCIe 4.0 2x x8 adapters with redrivers for all GPUs, 70cm SlimSAS cables from aliexpress. In BIOS: IOMMU off, above 4 decoding on, large BAR support on.
Topology (not sure why PHB is set differently for GPU0-3 vs GPU4-7; both these groups should be connected to the same PCIe root complex):
```
nvidia-smi topo --matrix
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PHB NODE NODE NODE NODE NODE NODE 0-31 0 N/A
GPU1 PHB X NODE NODE NODE NODE NODE NODE 0-31 0 N/A
GPU2 NODE NODE X PHB NODE NODE NODE NODE 0-31 0 N/A
GPU3 NODE NODE PHB X NODE NODE NODE NODE 0-31 0 N/A
GPU4 NODE NODE NODE NODE X PHB PHB PHB 0-31 0 N/A
GPU5 NODE NODE NODE NODE PHB X PHB PHB 0-31 0 N/A
GPU6 NODE NODE NODE NODE PHB PHB X PHB 0-31 0 N/A
GPU7 NODE NODE NODE NODE PHB PHB PHB X 0-31 0 N/A
```
nccl bandwidth before:
```
./build/all_reduce_perf -b 8 -e 128M -f 2 -g 8
# nThread 1 nGpus 8 minBytes 8 maxBytes 134217728 step: 2(factor) warmup iters: 5 iters: 20 agg iters: 1 validation: 1 graph: 0
#
# <...>
# out-of-place in-place
# size count type redop root time algbw busbw #wrong time algbw busbw #wrong
# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
8 2 float sum -1 47.84 0.00 0.00 0 48.33 0.00 0.00 0
16 4 float sum -1 47.92 0.00 0.00 0 48.47 0.00 0.00 0
32 8 float sum -1 48.12 0.00 0.00 0 48.30 0.00 0.00 0
64 16 float sum -1 48.47 0.00 0.00 0 48.99 0.00 0.00 0
128 32 float sum -1 49.44 0.00 0.00 0 48.85 0.00 0.00 0
256 64 float sum -1 49.49 0.01 0.01 0 49.29 0.01 0.01 0
512 128 float sum -1 49.31 0.01 0.02 0 49.07 0.01 0.02 0
1024 256 float sum -1 49.42 0.02 0.04 0 49.12 0.02 0.04 0
2048 512 float sum -1 49.34 0.04 0.07 0 49.30 0.04 0.07 0
4096 1024 float sum -1 49.39 0.08 0.15 0 48.94 0.08 0.15 0
8192 2048 float sum -1 49.35 0.17 0.29 0 49.83 0.16 0.29 0
16384 4096 float sum -1 48.55 0.34 0.59 0 49.49 0.33 0.58 0
32768 8192 float sum -1 61.25 0.54 0.94 0 62.47 0.52 0.92 0
65536 16384 float sum -1 107.4 0.61 1.07 0 110.6 0.59 1.04 0
131072 32768 float sum -1 190.6 0.69 1.20 0 145.2 0.90 1.58 0
262144 65536 float sum -1 325.4 0.81 1.41 0 317.3 0.83 1.45 0
524288 131072 float sum -1 409.5 1.28 2.24 0 411.9 1.27 2.23 0
1048576 262144 float sum -1 566.8 1.85 3.24 0 559.2 1.88 3.28 0
2097152 524288 float sum -1 948.5 2.21 3.87 0 944.1 2.22 3.89 0
4194304 1048576 float sum -1 1723.5 2.43 4.26 0 1720.3 2.44 4.27 0
8388608 2097152 float sum -1 3264.7 2.57 4.50 0 3272.1 2.56 4.49 0
16777216 4194304 float sum -1 6460.7 2.60 4.54 0 6463.4 2.60 4.54 0
33554432 8388608 float sum -1 12944 2.59 4.54 0 12942 2.59 4.54 0
67108864 16777216 float sum -1 26328 2.55 4.46 0 26263 2.56 4.47 0
134217728 33554432 float sum -1 54895 2.44 4.28 0 54913 2.44 4.28 0
# Out of bounds values : 0 OK
# Avg bus bandwidth : 1.67661
```
nccl bandwidth after (note NCCL_P2P_LEVEL=SYS, nccl needs to be told to use p2p):
```
NCCL_P2P_LEVEL=SYS ./build/all_reduce_perf -b 8 -e 128M -f 2 -g 8
# nThread 1 nGpus 8 minBytes 8 maxBytes 134217728 step: 2(factor) warmup iters: 5 iters: 20 agg iters: 1 validation: 1 graph: 0
#
# <...>
# out-of-place in-place
# size count type redop root time algbw busbw #wrong time algbw busbw #wrong
# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
8 2 float sum -1 46.65 0.00 0.00 0 47.19 0.00 0.00 0
16 4 float sum -1 49.03 0.00 0.00 0 47.99 0.00 0.00 0
32 8 float sum -1 48.54 0.00 0.00 0 48.86 0.00 0.00 0
64 16 float sum -1 48.56 0.00 0.00 0 48.74 0.00 0.00 0
128 32 float sum -1 49.11 0.00 0.00 0 49.21 0.00 0.00 0
256 64 float sum -1 48.97 0.01 0.01 0 49.31 0.01 0.01 0
512 128 float sum -1 49.28 0.01 0.02 0 49.37 0.01 0.02 0
1024 256 float sum -1 49.34 0.02 0.04 0 49.08 0.02 0.04 0
2048 512 float sum -1 49.48 0.04 0.07 0 49.35 0.04 0.07 0
4096 1024 float sum -1 49.26 0.08 0.15 0 49.23 0.08 0.15 0
8192 2048 float sum -1 48.83 0.17 0.29 0 49.10 0.17 0.29 0
16384 4096 float sum -1 49.72 0.33 0.58 0 49.76 0.33 0.58 0
32768 8192 float sum -1 49.54 0.66 1.16 0 49.67 0.66 1.15 0
65536 16384 float sum -1 51.46 1.27 2.23 0 50.83 1.29 2.26 0
131072 32768 float sum -1 76.73 1.71 2.99 0 75.11 1.75 3.05 0
262144 65536 float sum -1 137.9 1.90 3.33 0 143.1 1.83 3.21 0
524288 131072 float sum -1 206.9 2.53 4.43 0 208.2 2.52 4.41 0
1048576 262144 float sum -1 298.8 3.51 6.14 0 295.3 3.55 6.21 0
2097152 524288 float sum -1 559.6 3.75 6.56 0 548.5 3.82 6.69 0
4194304 1048576 float sum -1 1034.8 4.05 7.09 0 1018.5 4.12 7.21 0
8388608 2097152 float sum -1 1979.9 4.24 7.41 0 1954.7 4.29 7.51 0
16777216 4194304 float sum -1 3963.0 4.23 7.41 0 3849.8 4.36 7.63 0
33554432 8388608 float sum -1 7812.3 4.30 7.52 0 7784.7 4.31 7.54 0
67108864 16777216 float sum -1 15653 4.29 7.50 0 15643 4.29 7.51 0
134217728 33554432 float sum -1 31367 4.28 7.49 0 31154 4.31 7.54 0
# Out of bounds values : 0 OK
# Avg bus bandwidth : 2.9099
```
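To double-check that the patched driver actually exposes peer access, a quick PyTorch probe:

```python
# Print an n x n matrix of peer-access capability; all 1s (off-diagonal)
# means p2p is active for every GPU pair.
import torch

n = torch.cuda.device_count()
for i in range(n):
    print(i, [1 if i == j else int(torch.cuda.can_device_access_peer(i, j))
              for j in range(n)])
```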
Using the TechxGenus/Mistral-Large-Instruct-2411-AWQ model with tensor parallel size 8 on sglang:
before: prefill 340t/s, generate 40t/s (~0 context)
after: prefill 585t/s (+70%), generate 43t/s (+7%) (~0 context)
Didn't try at larger context but at 80k I previously saw 250t/s prefill and 33t/s generate. I expect the same throughput improvement as at ~0 context.
As for p2p patch, I used https://github.com/p12tic/open-gpu-kernel-modules/tree/570.133.20-p2p. Run ./script.sh and it will build Debian package with kernel drivers. Now I see that the upstream repo has new releases as well, so it may make sense to try these too: https://github.com/tinygrad/open-gpu-kernel-modules/releases | 2025-06-06T09:45:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l4oeaj/8x_rtx_3090_setup_with_p2p_patch/ | pmur12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4oeaj | false | null | t3_1l4oeaj | /r/LocalLLaMA/comments/1l4oeaj/8x_rtx_3090_setup_with_p2p_patch/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 't3ClEqRkV9jbsN-syDR1DFXv_9CIlY0q9kTBwpkVCcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?width=108&crop=smart&auto=webp&s=a5aff8a53c5726cf0d423f4f26f91dcd0497bd03', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?width=216&crop=smart&auto=webp&s=8e3a40c75ce8ed4a182890ce65c682a7179bb39a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?width=320&crop=smart&auto=webp&s=ba52ff552f74d7ce549bea542c55a75d08ab0bbe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?width=640&crop=smart&auto=webp&s=ce82dcc05ff93ded6a5b20e6a7e4416414fbe60b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?width=960&crop=smart&auto=webp&s=4b231911b608c586e0ef3fd2fe303c231a46778b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?width=1080&crop=smart&auto=webp&s=85d3760f16190514eee1ee4a5da52ec26ce50be9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JqIHMHzLbMobSzqcGALj958qjd6nBi4YvVXWF7IjXKw.jpg?auto=webp&s=34231c6bf4007533cab3b6ca2f573d4159161fe6', 'width': 1200}, 'variants': {}}]} |
Help me find voice cloning FOSS with UI | 4 | I’m searching for simple-to-set-up software to run voice cloning and generation locally. A plus would be if it works with Slovak. Is there a viable option? | 2025-06-06T10:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l4p2po/help_me_find_voice_cloning_foss_with_ui/ | KekecVN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4p2po | false | null | t3_1l4p2po | /r/LocalLLaMA/comments/1l4p2po/help_me_find_voice_cloning_foss_with_ui/ | false | false | self | 4 | null |
China's Rednote Open-source dots.llm Benchmarks | 101 | https://www.xiaohongshu.com/user/profile/683ffe42000000001d021a4c | 2025-06-06T10:32:40 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4p45i | false | null | t3_1l4p45i | /r/LocalLLaMA/comments/1l4p45i/chinas_rednote_opensource_dotsllm_benchmarks/ | false | false | default | 101 | {'enabled': True, 'images': [{'id': 'cambn0sdba5f1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=108&crop=smart&auto=webp&s=28c7cbda4a038c3451ba05bde2a899c95a5af3b6', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=216&crop=smart&auto=webp&s=ffa74a7e3f4b36f16af7720bc4edca03635010f5', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=320&crop=smart&auto=webp&s=d818800abdc4d0f3e0a211086b77390909847aa8', 'width': 320}, {'height': 544, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=640&crop=smart&auto=webp&s=0e2950c2aa2742a53b381ab5520309309c0fa362', 'width': 640}, {'height': 816, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=960&crop=smart&auto=webp&s=104cb2195def9103e4612413f23de277d636392e', 'width': 960}, {'height': 918, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?width=1080&crop=smart&auto=webp&s=d986d296a8d8eca3eff2ebbb84cabe0b783dd16b', 'width': 1080}], 'source': {'height': 1142, 'url': 'https://preview.redd.it/cambn0sdba5f1.jpeg?auto=webp&s=af5aa6bd8c1e40eaf84a75b2125b1e359f2e5c11', 'width': 1343}, 'variants': {}}]} |
Struggling to use autocomplete with continue and openwebui | 1 | [removed] | 2025-06-06T10:37:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l4p6xh/struggling_to_use_autocomplete_with_continue_and/ | Reasonable-Archer538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4p6xh | false | null | t3_1l4p6xh | /r/LocalLLaMA/comments/1l4p6xh/struggling_to_use_autocomplete_with_continue_and/ | false | false | self | 1 | null |
A prototype for personal finance resolution. | 26 | Hi! Kuvera v0.1.0 is now live!
A series of personal-finance advisor models that try to resolve queries by understanding the person’s psychological state and relevant context.
These are still prototypes that have much room for improvement.
What’s included in this release:
- Akhil-Theerthala/Kuvera-8B-v0.1.0: Qwen3-8B, meticulously fine-tuned on approximately 20,000 personal-finance inquiries.
- Akhil-Theerthala/Kuvera-14B-v0.1.0: LoRA on DeepSeek-R1-Distill-Qwen-14B, honed through training on about 10,000 chain-of-thought queries.
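If you want to try it quickly, a minimal transformers sketch (prompt and sampling settings are just illustrative, not a recommended setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Akhil-Theerthala/Kuvera-8B-v0.1.0"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content":
             "I'm 26, earn $4k/month, and carry $8k of credit card debt. "
             "Where do I start?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```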
For those interested, the models and datasets are accessible for free (links in the comments). If you are curious about the upcoming version's roadmap, let’s connect—there are many more developments I plan to make, and would definitely appreciate any help. | 2025-06-06T10:50:11 | https://huggingface.co/Akhil-Theerthala/Kuvera-8B-v0.1.0 | The-Silvervein | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l4pdyc | false | null | t3_1l4pdyc | /r/LocalLLaMA/comments/1l4pdyc/a_prototype_for_personal_finance_resolution/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'VvCnIRYBwofWbrtnA_usVoDlhKNPR-K5DJKzWDpRIEc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?width=108&crop=smart&auto=webp&s=2af36e650459905b5a2d525b7820bb8c4443fdd8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?width=216&crop=smart&auto=webp&s=d1c9da639b624bdd747a0775dab8638d27f90e50', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?width=320&crop=smart&auto=webp&s=2c5eb7a9b9124947e4f06502f374fcc7004cfb7c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?width=640&crop=smart&auto=webp&s=f2c62f29a0800bdf1afdaf952cee07267fb9e32e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?width=960&crop=smart&auto=webp&s=24d2aa6bf6caeeea85dacce959fedf96e4c46a1e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?width=1080&crop=smart&auto=webp&s=eefcaf84f2b75abdad10dc1022ce88837849987d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eRmOKrP_DKamzTn5vzBRb4P-HTRykpHQCacHiMsZM7c.jpg?auto=webp&s=3e2f4b66d2475cebb191181252a07b3fe1312b3d', 'width': 1200}, 'variants': {}}]} |
Real-time conversation with a character on your local machine | 215 | And also the voice split function
Sorry for my English =) | 2025-06-06T11:12:33 | https://v.redd.it/vzlhsb24ia5f1 | ResolveAmbitious9572 | /r/LocalLLaMA/comments/1l4prlo/realtime_conversation_with_a_character_on_your/ | 1970-01-01T00:00:00 | 0 | {} | 1l4prlo | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vzlhsb24ia5f1/DASHPlaylist.mpd?a=1751929963%2CNzkxY2YwNDJjMjJmY2EyNWNjOWVhMWUxYTljOTUyOTM2ZmEyN2I5OTc1OGIyNzhlYTM0ZTQyNDVlZjdjYzIxYg%3D%3D&v=1&f=sd', 'duration': 129, 'fallback_url': 'https://v.redd.it/vzlhsb24ia5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vzlhsb24ia5f1/HLSPlaylist.m3u8?a=1751929963%2CZTVjNDZlMDc3ZmIyZDViMTM5MjBiMDdmYjkwYWRjYWFkNzZmMmI4ZDUzMDA3ZmYxNzk5OWNkYjdmM2UxZjA2ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vzlhsb24ia5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l4prlo | /r/LocalLLaMA/comments/1l4prlo/realtime_conversation_with_a_character_on_your/ | false | false | 215 | {'enabled': False, 'images': [{'id': 'bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?width=108&crop=smart&format=pjpg&auto=webp&s=5faea925e036d05e05363c966d94794d21b036bd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?width=216&crop=smart&format=pjpg&auto=webp&s=4e8a956671cc3ee79d138f494ea0b40ad9ceea12', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?width=320&crop=smart&format=pjpg&auto=webp&s=4f7ab2bd87b6c13c92ef3ce19f19b05547603ce4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?width=640&crop=smart&format=pjpg&auto=webp&s=d32b419116ec44599a590c2dd86eea0501807ea9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?width=960&crop=smart&format=pjpg&auto=webp&s=82879e1d5ed82a2c4b4d33bb5a3a6abf4e669845', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?width=1080&crop=smart&format=pjpg&auto=webp&s=281e359dbe93ff674ffc186984841b3d1fff01de', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bmNldHdvMjRpYTVmMdEmQdBg5R_hfbeuAJLNQo4_VPyV37-iPjxO1DAp2E-K.png?format=pjpg&auto=webp&s=5227565e437838fe6a5db174e22c340a90214304', 'width': 1920}, 'variants': {}}]} |
new Bielik models have been released | 62 | [https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct)
[https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct-GGUF](https://huggingface.co/speakleash/Bielik-11B-v2.6-Instruct-GGUF)
[https://huggingface.co/speakleash/Bielik-11B-v2.5-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.5-Instruct)
[https://huggingface.co/speakleash/Bielik-11B-v2.5-Instruct-GGUF](https://huggingface.co/speakleash/Bielik-11B-v2.5-Instruct-GGUF)
[https://huggingface.co/speakleash/Bielik-4.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-4.5B-v3.0-Instruct)
[https://huggingface.co/speakleash/Bielik-4.5B-v3.0-Instruct-GGUF](https://huggingface.co/speakleash/Bielik-4.5B-v3.0-Instruct-GGUF)
Bielik-11B-v2.6-Instruct is a generative text model featuring 11 billion parameters. It is an instruct fine-tuned version of the Bielik-11B-v2. The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment at the HPC center ACK Cyfronet AGH.
You might be wondering why you'd need a Polish language model - well, it's always nice to have someone to talk to in Polish!!!
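If you want to try the GGUF build quickly, a sketch with llama-cpp-python; the quant filename pattern is a guess, so check the repo's file list:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="speakleash/Bielik-11B-v2.6-Instruct-GGUF",
    filename="*Q4_K_M*",   # glob pattern; adjust to an actual file in the repo
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Opowiedz krótko o Krakowie."}])
print(out["choices"][0]["message"]["content"])
```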
| 2025-06-06T11:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l4pzrm/new_bielik_models_have_been_released/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4pzrm | false | null | t3_1l4pzrm | /r/LocalLLaMA/comments/1l4pzrm/new_bielik_models_have_been_released/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'vs1h9ByfOzYzTv0FTBFd26pK_oG6nMykFWLMM5aAVbs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?width=108&crop=smart&auto=webp&s=c35b02de6c4af8d885eef1d87e34a177cf285446', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?width=216&crop=smart&auto=webp&s=576e00763a75123c899e6f3467bb3fe42079d67d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?width=320&crop=smart&auto=webp&s=92b1f7839f75f71768d751d7e78871579e26e506', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?width=640&crop=smart&auto=webp&s=442cc492154585fcb7507002fd665a71d20e9171', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?width=960&crop=smart&auto=webp&s=9b4ef3abb11ba89c08c62acc15dc85e115c353aa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?width=1080&crop=smart&auto=webp&s=b1c6ed9c7108c5f4243672d5767ef486f9cc892e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ViCxjnc8WTqAkDTHwgD5qQCVB1k-fsD-lA7k1mfFgc8.jpg?auto=webp&s=61c76f4284627b3f9600a15f2809af76c467c1ca', 'width': 1200}, 'variants': {}}]} |
Today, I've mostly been conversing with Reddit from my mobile... | 1 | 2025-06-06T11:28:30 | https://v.redd.it/t5n4dkobla5f1 | AffectionateHoney992 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4q1jc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t5n4dkobla5f1/DASHPlaylist.mpd?a=1751801326%2CODllOGNlOTJjZGMyZTcwYzY0NWYwYjI3NGNjMWU3ZDgzYmQwZDQxNDg4OTIyMWZlNzk4OTY1OThiNDdjYThiMw%3D%3D&v=1&f=sd', 'duration': 68, 'fallback_url': 'https://v.redd.it/t5n4dkobla5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/t5n4dkobla5f1/HLSPlaylist.m3u8?a=1751801326%2CMDJjZmYyOWQ4NjNmMjhlNzkwY2UyZDZhZDQ4ZDUwZGZlZWNhMzFjNjk5Y2Q2OWU5MjM3NjhiNTJlYWVlZjk0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/t5n4dkobla5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 860}} | t3_1l4q1jc | /r/LocalLLaMA/comments/1l4q1jc/today_ive_mostly_been_conversing_with_reddit_from/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?width=108&crop=smart&format=pjpg&auto=webp&s=a8c56cdc5e7f10aae5930cae7315e6dbe725fbb9', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?width=216&crop=smart&format=pjpg&auto=webp&s=de8604684ef879b7686165996113df3c14b2a388', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?width=320&crop=smart&format=pjpg&auto=webp&s=ac9761969ca63b984a2001b8c1cd02c50ba4f433', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?width=640&crop=smart&format=pjpg&auto=webp&s=67cdd431af4aa5458c8323ad18274635f975a107', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?width=960&crop=smart&format=pjpg&auto=webp&s=ffe657956acfef031abb151025fa9e806cc0c23b', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e2486df12885facd7f6234907632b27a6d62cc91', 'width': 1080}], 'source': {'height': 2856, 'url': 'https://external-preview.redd.it/ZHRiZWhrb2JsYTVmMcaBTDTv4FsyuQF638hG_AFz0_dZjajyIHT3OBMUKgC6.png?format=pjpg&auto=webp&s=f074b86392925491b897a1a07e09d8ec4f17da67', 'width': 1280}, 'variants': {}}]} |
Cannot even run the smallest model on system RAM? | 0 | I am a bit confused. I am trying to run small LLMs on my Unraid server within the Ollama docker, using just the CPU and 16GB of system RAM.
Got Ollama up and running, but even when pulling the smallest models like Qwen3 0.6B with Q4_K_M quantization, Ollama tells me I need way more RAM than I have left to spare. Why is that? Shouldn't this model run on any potato? Does this have to do with context overhead?
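If it's the context overhead you suspect, that is the usual culprit: Ollama allocates the KV cache for the whole context window on top of the weights. Capping num_ctx per request shows whether that's the problem:

```python
# Same generation, but with the context window capped; if this loads fine,
# the earlier failure was KV-cache / context overhead, not the weights.
import requests

r = requests.post("http://localhost:11434/api/generate", json={
    "model": "qwen3:0.6b",         # tag as pulled in Ollama; adjust if different
    "prompt": "Say hi",
    "stream": False,
    "options": {"num_ctx": 2048},  # the default context can be far larger
})
print(r.json()["response"])
```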
Sorry if this is a stupid question, I am trying to learn more about this and cannot find the solution anywhere else. | 2025-06-06T11:29:31 | FloJak2004 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4q25p | false | null | t3_1l4q25p | /r/LocalLLaMA/comments/1l4q25p/cannot_even_run_the_smallest_model_on_system_ram/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'jxaainwcka5f1', 'resolutions': [{'height': 8, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=108&crop=smart&auto=webp&s=5952beb830596c4475777528bce76a5915d89885', 'width': 108}, {'height': 16, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=216&crop=smart&auto=webp&s=5412c44210e10a4d7321daac942574afe97d9fe7', 'width': 216}, {'height': 25, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=320&crop=smart&auto=webp&s=3df2a7bfc895c1907623c6f225a6a62a2c9b8f07', 'width': 320}, {'height': 50, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=640&crop=smart&auto=webp&s=ed3fbb25762c941fe69d1192db81b4d2139f4b17', 'width': 640}, {'height': 75, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=960&crop=smart&auto=webp&s=073c997cc7b27c49847671991b1ca67cd7892bfd', 'width': 960}, {'height': 84, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?width=1080&crop=smart&auto=webp&s=3120220dca021c75ebf91958f2c65859b94709e7', 'width': 1080}], 'source': {'height': 114, 'url': 'https://preview.redd.it/jxaainwcka5f1.png?auto=webp&s=0c6d8e406d17affa234ad35428a7740dfbeecc3b', 'width': 1450}, 'variants': {}}]} |
I built an app that turns your photos into smart packing lists — all on your iPhone, 100% private, no APIs, no data collection! | 283 | Fullpack uses Apple’s **VisionKit** to identify items directly from your photos and helps you organize them into **packing lists** for any occasion.
Whether you're prepping for a “Workday,” “Beach Holiday,” or “Hiking Weekend,” you can easily create a plan and Fullpack will remind you what to pack before you head out.
✅ Everything runs *entirely* on your device
🚫 No cloud processing
🕵️♂️ No data collection
🔐 Your photos and personal data stay private
This is my **first solo app** — I designed, built, and launched it entirely on my own. It’s been an amazing journey bringing an idea to life from scratch.
🧳 **Try Fullpack for free on the App Store:**
[https://apps.apple.com/us/app/fullpack/id6745692929](https://apps.apple.com/us/app/fullpack/id6745692929)
I’m also really excited about the future of **on-device AI**. With open-source LLMs getting smaller and more efficient, there’s so much potential for building powerful tools that respect user privacy — right on our phones and laptops.
Would love to hear your thoughts, feedback, or suggestions! | 2025-06-06T11:38:47 | w-zhong | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4q7xf | false | null | t3_1l4q7xf | /r/LocalLLaMA/comments/1l4q7xf/i_built_an_app_that_turns_your_photos_into_smart/ | false | false | default | 283 | {'enabled': True, 'images': [{'id': '9b1s8amsla5f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=108&crop=smart&auto=webp&s=dd5d1053a10125600d16baa908d60a3850eee9cc', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=216&crop=smart&auto=webp&s=7cbb24d4e5fcf4b893bfdf06824defe2579660a3', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=320&crop=smart&auto=webp&s=4dac17db935b1bd61365d75e1f70c0b0f5dd18a2', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=640&crop=smart&auto=webp&s=60bc03925e8a38bf96e597c38e363db50f9958d4', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=960&crop=smart&auto=webp&s=9647388b1b458d55585c66bd61247e089c2df5c4', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?width=1080&crop=smart&auto=webp&s=66d0253015192f116244b6bcc212d87a1bf3dc15', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/9b1s8amsla5f1.jpeg?auto=webp&s=d175bf3d7717cd83158628753e41d3adf4383e42', 'width': 3024}, 'variants': {}}]} |
Mega LLM Resource of 43 lectures | Popular Youtube Playlist | 1 | [removed] | 2025-06-06T11:48:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qeib/mega_llm_resource_of_43_lectures_popular_youtube/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qeib | false | {'oembed': {'description': 'In this playlist, we will learn about the entire process of building a Large Language Model (LLM) from scratch. Nothing will be assumed. Everything will be s...', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fplaylist%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FXpr8D6LeAtw%2Fhqdefault.jpg%3Fsqp%3D-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE%3D%26rs%3DAOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w%26days_since_epoch%3D20245&type=text%2Fhtml&schema=youtube" width="600" height="450" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'http://youtube.com', 'thumbnail_height': 270, 'thumbnail_url': 'https://i.ytimg.com/vi/Xpr8D6LeAtw/hqdefault.jpg?sqp=-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w&days_since_epoch=20245', 'thumbnail_width': 480, 'title': 'Building LLMs from scratch', 'type': 'video', 'version': '1.0', 'width': 600}, 'type': 'youtube.com'} | t3_1l4qeib | /r/LocalLLaMA/comments/1l4qeib/mega_llm_resource_of_43_lectures_popular_youtube/ | false | false | 1 | null |
Build LLM from Scratch | Mega Playlist of 43 videos | 47 | Just like with machine learning, you will be a serious LLM engineer only if you truly understand how the nuts and bolts of a Large Language Model (LLM) work.
Very few people understand how an LLM exactly works. Even fewer can build an entire LLM from scratch.
Wouldn't it be great for you to build your own LLM from scratch?
Here is an awesome playlist series on YouTube: Build your own LLM from scratch.
Playlist link: [https://www.youtube.com/playlist?list=PLPTV0NXA\_ZSgsLAr8YCgCwhPIJNNtexWu](https://www.youtube.com/playlist?list=PLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu)
It has become very popular on Youtube.
Everything is written on a whiteboard. From scratch.
43 lectures are released.
This lecture series is inspired by Sebastian Raschka's book "Build LLMs from scratch".
Hope you learn a lot :)
P.S: Attached GIF shows a small snippet of the notes accompanying this playlist | 2025-06-06T11:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qf6k/build_llm_from_scratch_mega_playlist_of_43_videos/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qf6k | false | null | t3_1l4qf6k | /r/LocalLLaMA/comments/1l4qf6k/build_llm_from_scratch_mega_playlist_of_43_videos/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': '5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=108&crop=smart&auto=webp&s=9b6bc043bdccaad2019c8bbbae3441b99aaf894f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=216&crop=smart&auto=webp&s=b374e2f14de6652bd2c0e9f3a0d4656baf9bbc15', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=320&crop=smart&auto=webp&s=6a459b1295ced9b8325a2f950cc985a2d4fd69df', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?auto=webp&s=a5ece470c3825c54146e1f008b6a0d6189e0231a', 'width': 480}, 'variants': {}}]} |
Best model for coding on 8GB VRAM | 1 | [removed] | 2025-06-06T11:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qfx0/best_model_for_coding_on_8gb_vram/ | PressLaunchMike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qfx0 | false | null | t3_1l4qfx0 | /r/LocalLLaMA/comments/1l4qfx0/best_model_for_coding_on_8gb_vram/ | false | false | self | 1 | null |
Local AI on different PCs? | 1 | [removed] | 2025-06-06T12:07:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l4qqyb/local_ai_on_different_pcs/ | MoneyMultiplier888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4qqyb | false | null | t3_1l4qqyb | /r/LocalLLaMA/comments/1l4qqyb/local_ai_on_different_pcs/ | false | false | self | 1 | null |
Ailoy: A super-easy python / javascript agent builder | 19 | We’ve released **Ailoy**, a library that makes building agents incredibly easy.
We believe it's the easiest way to embed agents in your code.
It's available for both Python and JavaScript.
Homepage: [https://brekkylab.github.io/ailoy/](https://brekkylab.github.io/ailoy/)
Github: [https://github.com/brekkylab/ailoy](https://github.com/brekkylab/ailoy) | 2025-06-06T12:35:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l4rain/ailoy_a_supereasy_python_javasript_agent_builder/ | ArmCompetitive4605 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4rain | false | null | t3_1l4rain | /r/LocalLLaMA/comments/1l4rain/ailoy_a_supereasy_python_javasript_agent_builder/ | false | false | self | 19 | null |
Semantic routing and caching doesn't work - task specific LLMs (TLMs) ftw! | 9 | If you are building caching techniques for LLMs or developing a router to handle certain queries by select LLMs/agents - know that semantic caching and routing is a broken approach. Here is why.
* Follow-ups or Elliptical Queries: Same issue as embeddings — "And Boston?" doesn't carry meaning on its own. Clustering will likely put it in a generic or wrong cluster unless context is encoded.
* Semantic Drift and Negation: Clustering can’t capture logical distinctions like negation, sarcasm, or intent reversal. “I don’t want a refund” may fall in the same cluster as “I want a refund.”
* Unseen or Low-Frequency Queries: Sparse or emerging intents won’t form tight clusters. Outliers may get dropped or grouped incorrectly, leading to intent “blind spots.”
* Over-clustering / Under-clustering: Setting the right number of clusters is non-trivial. Fine-grained intents often end up merged unless you do manual tuning or post-labeling.
* Short Utterances: Queries like “cancel,” “report,” “yes” often land in huge ambiguous clusters. Clustering lacks precision for atomic expressions.
What can you do instead? You are far better off using an LLM and instructing it to predict the scenario for you (e.g., given a user query, decide whether it overlaps with a recent list of queries, as sketched below) or building a very small, highly capable TLM (task-specific LLM).
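Here is a minimal sketch of that idea, assuming an OpenAI-compatible local endpoint (e.g., llama-server); the base URL and model name are placeholders, not part of my project:
```python
from openai import OpenAI

# Any OpenAI-compatible local server works here (assumed endpoint/model).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def matches_recent(query: str, recent: list[str]) -> bool:
    """Ask a small LLM whether the new query repeats a recent one."""
    prompt = (
        "Recent queries:\n" + "\n".join(f"- {q}" for q in recent) +
        f"\n\nNew query: {query}\n"
        "Does the new query ask for the same thing as any recent query, "
        "accounting for follow-ups and negation? Answer YES or NO."
    )
    resp = client.chat.completions.create(
        model="local-model",  # placeholder name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# Handles the elliptical follow-up case that embeddings miss:
print(matches_recent("And Boston?", ["What is the weather in NYC?"]))
```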
For agent routing and handoff I've built [a guide](https://docs.archgw.com/guides/agent_routing.html) on how to use it via my open-source [project](https://github.com/katanemo/archgw) I have on GH. If you want to learn about my approach, drop me a comment. | 2025-06-06T12:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l4rnsc/semantic_routing_and_caching_doesnt_work_task/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4rnsc | false | null | t3_1l4rnsc | /r/LocalLLaMA/comments/1l4rnsc/semantic_routing_and_caching_doesnt_work_task/ | false | false | self | 9 | null
Current best model for technical documentation text generation for RAG / fine tuning? | 5 | I want to create a model which supports us in writing technical documentation. We already have a lot of text from older documentation and want to use it as a RAG / fine-tuning source. The inference GPU memory size will be at least 80GB.
Which model would you recommend for this task currently? | 2025-06-06T13:01:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l4rtov/current_best_model_for_technical_documentation/ | OkAstronaut4911 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4rtov | false | null | t3_1l4rtov | /r/LocalLLaMA/comments/1l4rtov/current_best_model_for_technical_documentation/ | false | false | self | 5 | null |
Which LLM is good for NSFW Text to Image Prompts? | 1 | [removed] | 2025-06-06T13:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l4sljd/which_llm_is_good_for_nsfw_text_to_image_prompts/ | Cheap_Musician_5382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4sljd | false | null | t3_1l4sljd | /r/LocalLLaMA/comments/1l4sljd/which_llm_is_good_for_nsfw_text_to_image_prompts/ | false | false | nsfw | 1 | null |
Bad for device to let MBP M4 64GB process all night? (e.g. damage?) | 1 | [removed] | 2025-06-06T13:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l4sm2r/bad_for_device_to_let_mbp_m4_64gb_process_all/ | Electronic_Voice_306 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4sm2r | false | null | t3_1l4sm2r | /r/LocalLLaMA/comments/1l4sm2r/bad_for_device_to_let_mbp_m4_64gb_process_all/ | false | false | self | 1 | null |
Deciding on hardware requirements | 1 | [removed] | 2025-06-06T14:20:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l4tmw8/deciding_on_hardware_requirements/ | Beautiful_Wait_8964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4tmw8 | false | null | t3_1l4tmw8 | /r/LocalLLaMA/comments/1l4tmw8/deciding_on_hardware_requirements/ | false | false | self | 1 | null |
Have Large Language Models(LLMs) Finally Mastered Geolocation? | 18 | > An ambiguous city street, a freshly mown field, and a parked armoured vehicle were among the example photos we chose to challenge Large Language Models (LLMs) from OpenAI, Google, Anthropic, Mistral and xAI to geolocate.
> Back in July 2023, Bellingcat analysed the geolocation performance of OpenAI and Google’s models. Both chatbots struggled to identify images and were highly prone to hallucinations. However, since then, such models have rapidly evolved.
> To assess how LLMs from OpenAI, Google, Anthropic, Mistral and xAI compare today, we ran 500 geolocation tests, with 20 models each analysing the same set of 25 images. | 2025-06-06T14:24:54 | https://www.bellingcat.com/resources/how-tos/2025/06/06/have-llms-finally-mastered-geolocation/ | True-Combination7059 | bellingcat.com | 1970-01-01T00:00:00 | 0 | {} | 1l4tqgt | false | null | t3_1l4tqgt | /r/LocalLLaMA/comments/1l4tqgt/have_large_language_modelsllms_finally_mastered/ | false | false | default | 18 | null |
New model - Qwen3 Embedding + Reranker | 1 | [removed] | 2025-06-06T14:58:41 | https://www.reddit.com/gallery/1l4ujo7 | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4ujo7 | false | null | t3_1l4ujo7 | /r/LocalLLaMA/comments/1l4ujo7/new_model_qwen3_embedding_reranker/ | false | false | 1 | null |
|
New model - Qwen3 Embedding + Reranker | 18 | OP: [https://www.reddit.com/r/Qwen\_AI/comments/1l4qvhe/new\_model\_qwen3\_embedding\_reranker/](https://www.reddit.com/r/Qwen_AI/comments/1l4qvhe/new_model_qwen3_embedding_reranker/)
Qwen Team has launched a new set of AI models, **Qwen3 Embedding** and **Qwen3 Reranker**, designed for text embedding, search, and reranking.
# How It Works
**Embedding models** *convert text into vectors for search.* **Reranking models** *take a question and a document and score how well they match.* The models are trained in multiple stages using AI-generated training data to improve performance.
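A minimal sketch of the embedding side with sentence-transformers (assumes sentence-transformers >= 3.x; the 0.6B checkpoint id below is my reading of the Hugging Face release, so double-check it):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")  # assumed model id
query = "how do I run an LLM on my own machine?"
docs = ["llama.cpp runs GGUF models locally", "Today's weather forecast"]

q_emb = model.encode([query])          # shape: (1, dim)
d_emb = model.encode(docs)             # shape: (2, dim)
print(model.similarity(q_emb, d_emb))  # cosine scores; higher = better match
```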
# What’s Special
Qwen3 Embedding achieves top performance in search and ranking tasks across many languages. The largest model, 8B, ranks number one on the MTEB multilingual leaderboard. It works well with both natural language and code. The developers aim to support **text & images** in the future.
# Model Sizes Available
Models are available in **0.6B / 4B / 8B** versions and support **multilingual** and **code-related** tasks. Developers can customize instructions and embedding sizes.
# Opensource
The models are available on GitHub, Hugging Face, and ModelScope under the Apache 2.0 license.
Qwen Blog for more details: [https://qwenlm.github.io/blog/qwen3-embedding/](https://qwenlm.github.io/blog/qwen3-embedding/) | 2025-06-06T14:58:59 | https://www.reddit.com/gallery/1l4ujxg | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4ujxg | false | null | t3_1l4ujxg | /r/LocalLLaMA/comments/1l4ujxg/new_model_qwen3_embedding_reranker/ | false | false | default | 18 | null |
New model - Qwen3 Embedding + Reranker | 1 | [removed] | 2025-06-06T15:01:32 | https://www.reddit.com/gallery/1l4umgm | koc_Z3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l4umgm | false | null | t3_1l4umgm | /r/LocalLLaMA/comments/1l4umgm/new_model_qwen3_embedding_reranker/ | false | false | 1 | null |
|
I thought Qwen3 was putting out some questionable content into my code... | 33 | Oh. **SOLVED.** See why, I think, at the end.
Okay, so I was trying `aider`. I had only tried it a bit here and there, but I just switched to using `Qwen_Qwen3-14B-Q6_K_L.gguf`. And I see this in my aider output:
```text
## Signoff: insurgent (razzin' frazzin' motherfu... stupid directx...)
```
Now, please bear in mind, this is a script that plots timestamps, like `ls | plottimes`, and, aside from plotting time data as a `heatmap`, it has no special war or battle terminology, nor profane language in it. I am not familiar enough with this thing to know where or how that was generated, since it SEEMS to be from a trial run aider did of the code:
https://preview.redd.it/zamjz1bdsb5f1.jpg?width=719&format=pjpg&auto=webp&s=5ca874f91bdd6fe7fc20f4eb797e5ddc22500dec
But, that seems to be the code running -- not LLM output directly.
Odd!
...scrolling back to see what's up there:
https://preview.redd.it/2bvvmzutrb5f1.jpg?width=951&format=pjpg&auto=webp&s=eaaf87299878570375267830ca720633b4191686
Oh. Those are random BSD 'fortune' outputs! Aider is apparently using full login shell to execute the trial runs of the code. I guess it's time to disable fortune in login. :)
| 2025-06-06T15:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l4vdnd/i_thought_qwen3_was_putting_out_some_questionable/ | jaggzh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4vdnd | false | null | t3_1l4vdnd | /r/LocalLLaMA/comments/1l4vdnd/i_thought_qwen3_was_putting_out_some_questionable/ | false | false | 33 | null |
|
Is this the largest "No synthetic data" open weight LLM? (142B) | 356 | From the GitHub page of https://huggingface.co/rednote-hilab/dots.llm1.base | 2025-06-06T15:47:49 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l4vrj4 | false | null | t3_1l4vrj4 | /r/LocalLLaMA/comments/1l4vrj4/is_this_the_largest_no_synthetic_data_open_weight/ | false | false | default | 356 | {'enabled': True, 'images': [{'id': 'sgokl11mvb5f1', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=108&crop=smart&auto=webp&s=70e091dae77e690684915ac00545acf713fb2f16', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=216&crop=smart&auto=webp&s=e537c8eb2aa4f08cc91e0579f556f4924be465df', 'width': 216}, {'height': 391, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=320&crop=smart&auto=webp&s=e64afb266d3a55ea9621a59c098e2e9397eddfc8', 'width': 320}, {'height': 782, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=640&crop=smart&auto=webp&s=f69e4eb2a1788d099e9a9c6d7f3d448ed97cf251', 'width': 640}, {'height': 1174, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=960&crop=smart&auto=webp&s=f391918a34efa4fa8c127fe5a259325a8a74c72c', 'width': 960}, {'height': 1321, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?width=1080&crop=smart&auto=webp&s=e1c266abeefd23e153f42b3458f298273b30639e', 'width': 1080}], 'source': {'height': 1321, 'url': 'https://preview.redd.it/sgokl11mvb5f1.png?auto=webp&s=970ab23e8deb5042b1ba57fd4ca7ec326d63a176', 'width': 1080}, 'variants': {}}]} |
|
ether0 - Mistral 24B with RL on several molecular design tasks in chemistry | 34 | A Reasoning Model for Chemistry
open weights: [https://huggingface.co/futurehouse/ether0](https://huggingface.co/futurehouse/ether0)
ether0 is a 24B language model trained to reason in English and output molecular structures as SMILES. It is derived from Mistral-Small-24B-Instruct-2501 via fine-tuning and reinforcement learning. Ask questions in English, but they may also include molecules specified as SMILES. The SMILES do not need to be canonical and may contain stereochemistry information. ether0 has limited support for IUPAC names.
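Not from the model card, just a plain transformers sketch of how you might query it, assuming the checkpoint ships a chat template (at 24B you will likely need quantization or multiple GPUs locally):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "futurehouse/ether0"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user",
             "content": "Propose a molecule similar to aspirin as SMILES."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens (the reasoning + SMILES answer).
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```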
source: [https://x.com/SGRodriques/status/1930656794348785763](https://x.com/SGRodriques/status/1930656794348785763) | 2025-06-06T15:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l4vx7i/ether0_mistral_24b_with_rl_on_several_molecular/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4vx7i | false | null | t3_1l4vx7i | /r/LocalLLaMA/comments/1l4vx7i/ether0_mistral_24b_with_rl_on_several_molecular/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'iCMq5l8PV8l3uvNWWBrpeQgtO0VcTQXa9BqIXRBGPmk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?width=108&crop=smart&auto=webp&s=59569a79b8743eaba966d7f2912b7d37ab60b644', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?width=216&crop=smart&auto=webp&s=66a17260916b11aa551c123d3b739aeef4603895', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?width=320&crop=smart&auto=webp&s=ef3b608b746bd4e9b004f64356226c74655e0e7c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?width=640&crop=smart&auto=webp&s=3432856fb9280d0692b86c66e3ea28abe94d7bd2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?width=960&crop=smart&auto=webp&s=5931fb90809e5421f546832b2ac8a4ba5fcfd77c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?width=1080&crop=smart&auto=webp&s=c655e16613db0b8a15968a02bfbfd4bf949053c4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HJg_in9BrTrMWG9hxp35AuIjQiW6FYC9tZD_C0eCYE8.jpg?auto=webp&s=6af0992fcf0556528cbc8324e7fd2c317764df8d', 'width': 1200}, 'variants': {}}]} |
Better quantization: Yet Another Quantization Algorithm | 141 | We're introducing Yet Another Quantization Algorithm, a new quantization algorithm that better preserves the original model's outputs after quantization. YAQA reduces the KL by >30% over QTIP and achieves an even lower KL than Google's QAT model on Gemma 3.
See the paper [https://arxiv.org/pdf/2505.22988](https://arxiv.org/pdf/2505.22988) and code [https://github.com/Cornell-RelaxML/yaqa](https://github.com/Cornell-RelaxML/yaqa) for more details. We also have some prequantized Llama 3.1 70B Instruct models at [https://huggingface.co/collections/relaxml/yaqa-6837d4c8896eb9ceb7cb899e](https://huggingface.co/collections/relaxml/yaqa-6837d4c8896eb9ceb7cb899e) | 2025-06-06T16:12:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l4wd2w/better_quantization_yet_another_quantization/ | tsengalb99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4wd2w | false | null | t3_1l4wd2w | /r/LocalLLaMA/comments/1l4wd2w/better_quantization_yet_another_quantization/ | false | false | self | 141 | null |
Hugging Face Just Dropped it's MCP Server | 229 | 2025-06-06T16:12:58 | https://hf.co/mcp | eternviking | hf.co | 1970-01-01T00:00:00 | 0 | {} | 1l4wdwh | false | null | t3_1l4wdwh | /r/LocalLLaMA/comments/1l4wdwh/hugging_face_just_dropped_its_mcp_server/ | false | false | default | 229 | null |
|
I forked google’s Fullstack LangGraph Quickstart to work with ollama + searxng | 1 | [removed] | 2025-06-06T16:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l4x2i8/i_forked_googles_fullstack_langgraph_quickstart/ | Filo0104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4x2i8 | false | null | t3_1l4x2i8 | /r/LocalLLaMA/comments/1l4x2i8/i_forked_googles_fullstack_langgraph_quickstart/ | false | false | self | 1 | null |
what's the case against flash attention? | 65 | I accidentally stumbled upon the -fa (flash attention) flag in llama.cpp's llama-server. I cannot speak to the speedup in performance as I haven't properly tested it, but the memory optimization is huge: an 8B F16 GGUF model with 100k context fits comfortably in a 32GB VRAM GPU with some 2-3 GB to spare.
A very brief search revealed that flash attention theoretically computes the same mathematical function, and in practice benchmarks show no change in the model's output quality.
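A rough back-of-the-envelope on why the memory savings show up; the buffer shape below is an assumption about how a non-FA kernel materializes attention scores, not a statement of llama.cpp internals:
```python
# Naive attention materializes a score matrix per layer; flash attention
# computes the same result in small tiles without ever storing it whole.
n_ctx, n_ubatch, n_heads = 100_000, 512, 32  # assumed shapes
bytes_f32 = 4

naive_scores = n_heads * n_ubatch * n_ctx * bytes_f32
print(f"score buffer without FA: {naive_scores / 2**30:.1f} GiB")  # ~6.1 GiB

tile = 256  # assumed tile size
fa_scores = n_heads * n_ubatch * tile * bytes_f32
print(f"score buffer with FA:    {fa_scores / 2**20:.1f} MiB")     # ~16 MiB
```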
So my question is, is flash attention really just a free lunch? What's the catch? Why is it not enabled by default? | 2025-06-06T16:59:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l4xiwg/whats_the_case_against_flash_attention/ | Responsible-Crew1801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4xiwg | false | null | t3_1l4xiwg | /r/LocalLLaMA/comments/1l4xiwg/whats_the_case_against_flash_attention/ | false | false | self | 65 | null
Quick Question on Limitations of Mac M1 for LLMS | 1 | [removed] | 2025-06-06T17:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l4xspl/quick_question_on_limitations_of_mac_m1_for_llms/ | chrismryan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4xspl | false | null | t3_1l4xspl | /r/LocalLLaMA/comments/1l4xspl/quick_question_on_limitations_of_mac_m1_for_llms/ | false | false | self | 1 | null |
Offline verbal chat bot with modular tool calling! | 17 | This is an update from my original [post](https://www.reddit.com/r/LocalLLaMA/comments/1l2vrg2/fully_offline_verbal_chat_bot/) where I demoed my fully offline verbal chat bot. I've made a couple of updates and should be releasing it on GitHub soon.
- Clipboard insertion: allows you to insert your clipboard into the prompt with just a key press
- Modular tool calling: allows the model to use tools that can be dragged and dropped into a folder
To clarify how tool calling works: behind the scenes, the program parses the JSON headers of all files in the tools folder at startup and then passes them along with the user's message. This means you can simply drag and drop a tool, restart the app, and use it.
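A rough sketch of what that startup scan could look like; the `# TOOL:` header convention and key names here are my assumptions, not the project's actual format:
```python
import json
from pathlib import Path

def load_tool_headers(tools_dir: str = "tools") -> list[dict]:
    """Collect the JSON header from each tool file in the folder."""
    headers = []
    for path in sorted(Path(tools_dir).glob("*.py")):
        lines = path.read_text(encoding="utf-8").splitlines()
        # Assumed convention: first line is `# TOOL: {"name": ..., ...}`
        if lines and lines[0].startswith("# TOOL:"):
            try:
                headers.append(json.loads(lines[0].removeprefix("# TOOL:")))
            except json.JSONDecodeError:
                continue  # skip files with a malformed header
    return headers

# Collected once at startup, then sent along with every user message.
print(load_tool_headers())
```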
Please leave suggestions and ask any questions you might have! | 2025-06-06T17:44:15 | https://v.redd.it/onqpjk30fc5f1 | NonYa_exe | /r/LocalLLaMA/comments/1l4yncl/offline_verbal_chat_bot_with_modular_tool_calling/ | 1970-01-01T00:00:00 | 0 | {} | 1l4yncl | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/onqpjk30fc5f1/DASHPlaylist.mpd?a=1751953459%2CNWQzMDE3YjQ2YmZlY2I0NjZjNTg4ZmU4ZmJlYzFhZDI3NTllOTNkMzdmM2M5YWNiZjY2MzIwM2JlMmVjNWFjYQ%3D%3D&v=1&f=sd', 'duration': 250, 'fallback_url': 'https://v.redd.it/onqpjk30fc5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/onqpjk30fc5f1/HLSPlaylist.m3u8?a=1751953459%2CNGFkYzNmN2M1ZDYwYjlmYjhjYTdmMWM5MGNhNzJiNDA5NzZkZjY4MmQ1OWI5OWU4NTRiNzcyNGU2MWNmNmMwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/onqpjk30fc5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l4yncl | /r/LocalLLaMA/comments/1l4yncl/offline_verbal_chat_bot_with_modular_tool_calling/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?width=108&crop=smart&format=pjpg&auto=webp&s=f2dd49444defd2d20e432cf90df7df06202cc3b9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?width=216&crop=smart&format=pjpg&auto=webp&s=a9ebd9a178623c0f3c21b09cece2f0781d72ba94', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?width=320&crop=smart&format=pjpg&auto=webp&s=7f41fd614ad32f88c3b27f45773e01cbf4848614', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?width=640&crop=smart&format=pjpg&auto=webp&s=26013fdc85a1cb7bed4e3b9c19c2df791712dcbf', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?width=960&crop=smart&format=pjpg&auto=webp&s=35f131db37f3e0d9dae723699c691c888d7b6361', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fe79c59d09c405c708e9d00a2be82d497c207f55', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aXAxOXFpMzBmYzVmMf4vZSu7SIjEMc78UdmUdVYtZoDmH2fqjic2HovHvoAi.png?format=pjpg&auto=webp&s=c94db3c1a8fede633bf21c949b1e1260846e0238', 'width': 1920}, 'variants': {}}]} |
|
Seeking similar model with longer context length than Darkest-Muse-v1? | 1 | [removed] | 2025-06-06T18:17:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zgzy/seeking_similar_model_with_longer_context_length/ | julimoooli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zgzy | false | null | t3_1l4zgzy | /r/LocalLLaMA/comments/1l4zgzy/seeking_similar_model_with_longer_context_length/ | false | false | self | 1 | null |
Opinion needed | Local/Remote AI chat webapp (incomplete / under development) | 1 | [removed] | 2025-06-06T18:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zi0p/opinion_needed_localremote_ai_chat_webapp/ | Neural-Systems | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zi0p | false | null | t3_1l4zi0p | /r/LocalLLaMA/comments/1l4zi0p/opinion_needed_localremote_ai_chat_webapp/ | false | false | self | 1 | null |
Seeking similar model with longer context length than Darkest-Muse-v1? | 1 | [removed] | 2025-06-06T18:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zje4/seeking_similar_model_with_longer_context_length/ | julimoooli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zje4 | false | null | t3_1l4zje4 | /r/LocalLLaMA/comments/1l4zje4/seeking_similar_model_with_longer_context_length/ | false | false | self | 1 | null |
Best online playground for running inference on Llama models? | 1 | [removed] | 2025-06-06T18:28:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l4zqwi/best_online_playground_for_running_inference_on/ | LastOfStendhal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l4zqwi | false | null | t3_1l4zqwi | /r/LocalLLaMA/comments/1l4zqwi/best_online_playground_for_running_inference_on/ | false | false | self | 1 | null |
Is there a local alternative to google code diffusion? | 6 | LLMs write code, and I have some installed locally, and they are working fine
Google has DeepMind Diffusion, and I tested it today with just a few requests to build a few web samples, and that thing is the shit!!!
No LLMs, local or remote, can compete with that shit
The question: is there an open-source alternative, something similar that runs locally? | 2025-06-06T18:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l504fg/is_there_a_local_alternative_to_google_code/ | Careful-State-854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l504fg | false | null | t3_1l504fg | /r/LocalLLaMA/comments/1l504fg/is_there_a_local_alternative_to_google_code/ | false | false | self | 6 | null
Help Choosing the Best LLM Inference Stack for Local Deployment (8x RTX 6000 Blackwell) | 1 | [removed] | 2025-06-06T19:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l50kzq/help_choosing_the_best_llm_inference_stack_for/ | Fresh_Month_2594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l50kzq | false | null | t3_1l50kzq | /r/LocalLLaMA/comments/1l50kzq/help_choosing_the_best_llm_inference_stack_for/ | false | false | self | 1 | null |
Can you help me find that story writing LLM tool that was introduced by other reddit user in this subreddit? | 1 | [removed] | 2025-06-06T19:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l50snd/can_you_help_me_find_that_story_writing_llm_tool/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l50snd | false | null | t3_1l50snd | /r/LocalLLaMA/comments/1l50snd/can_you_help_me_find_that_story_writing_llm_tool/ | false | false | self | 1 | null |
LegoGPT training params | 1 | [removed] | 2025-06-06T19:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l50z5l/legogpt_training_params/ | EchoOdd5367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l50z5l | false | null | t3_1l50z5l | /r/LocalLLaMA/comments/1l50z5l/legogpt_training_params/ | false | false | self | 1 | null |
Need selfhosted AI to generate better bash scripts and ansible playbooks | 1 | Hi. I am new to AI Models.
I need a self-hosted AI that I can give access to a directory with my scripts, playbooks, etc., so it can check the projects' code and tell me where I could make it better or more concise, and where it's wrong or a comment's grammar is bad.
If possible, it should also help me generate readme.md files. Ideally it would support multiple AIs, both self-hosted and online ones like ChatGPT, DeepSeek, Llama, etc., so I can either keep my files on the local system for privacy or give the online models access to them when I need to.
Would prefer to run in docker container using compose but won't mind just installing into host os either.
I have 16 thread amd cpu, 32gb ddr5 ram, 4060 rtx 8gb gpu, legion slim 5 gen 9 laptop.
Thank you. Sorry for my bad English. | 2025-06-06T19:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l51c1o/need_selfhosted_ai_to_generate_better_bash/ | human_with_humanity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51c1o | false | null | t3_1l51c1o | /r/LocalLLaMA/comments/1l51c1o/need_selfhosted_ai_to_generate_better_bash/ | false | false | self | 1 | null |
NER: extract position | 1 | Hi,
I wonder if it is possible to extract the position of a named entity with a local LLM.
For instance, suppose I have a recipe with foods; I want to extract all the foods along with the position of each word in the original text.
If I prompt the LLM with something like "extract the foods with their positions", it will fail many times.
For instance, if a food is misspelled, the LLM returns a corrected word with the wrong position | 2025-06-06T19:45:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l51lhp/ner_extract_position/ | TargetDangerous2216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51lhp | false | null | t3_1l51lhp | /r/LocalLLaMA/comments/1l51lhp/ner_extract_position/ | false | false | self | 1 | null
What is the best value card I could buy for decent performance? | 3 | I have a 1080 (ancient) card that I use now with 7b-ish models and I'm thinking of an update mainly to use larger models. My use case is running an embedding model alongside a normal one and I don't mind switching the "normal" models depending on the case (coding vs chatbot). I was looking for a comparator for different cards and their performance but couldn't find one that gives os/gpu/tps and eventually median price. So I wonder about the new 9060/9070 from AMD, the 16g Intel ones. Is it worth getting a gpu vs the 395 max/128g or nvidia's golden box thing? | 2025-06-06T19:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l51p85/what_is_the_best_value_card_i_could_buy_for/ | equinoxel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51p85 | false | null | t3_1l51p85 | /r/LocalLLaMA/comments/1l51p85/what_is_the_best_value_card_i_could_buy_for/ | false | false | self | 3 | null |
3b and 7b Serving with new Hardware | 2 | I don't want this to be a promotional post even though it kind of is. We are looking for people who want to host 3b/8b models of the llama, gemma, and mistral model families. We are working towards expanding to qwen and eventually larger model sizes: [https://www.positron.ai/snap-serve](https://www.positron.ai/snap-serve)
We are running an experiment to test our hardware out at $30 a month for 3b and $60 a month for 8b size models. If you have a fine tune that you want running and can help test our hardware, the first 5 people will get a free month at the 3b model size and half off the 8b model size. We are looking for folks to try and test out the system on this new hardware outside Nvidia.
This isn't tiny LORA adapters running on crowded public serverless endpoints - we run your entire custom model in a dedicated instance for an incredible price with token per second rates double that of comparable NVIDIA options.
Would love for some people to try it. **I know the parameter and model family** selection is not ideal, but it's just the start as we continue to build it all out. | 2025-06-06T19:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l51vxy/3b_and_7b_serving_with_new_hardware/ | No-Fig-8614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l51vxy | false | null | t3_1l51vxy | /r/LocalLLaMA/comments/1l51vxy/3b_and_7b_serving_with_new_hardware/ | false | false | self | 2 | null
Help with Proxmox + Debian + Docker /w Nvidia 5060TI | 2 | Hi! I'm at my wits' end here. I've been trying for the past few days with varying levels of success and failure. I have Proxmox running with a Debian VM running Docker containers. I'm trying to use a 5060 Ti in passthrough mode to the Debian VM
I have the cpu set to host and passed through the 5060TI using PCI.
I'm super confused; I've tried following multiple guides but get various errors. The farthest I've gotten is running the official Nvidia installer for 575. However, nvidia-smi in the Debian VM says "no devices found", even though I do have a device at /dev/nvidia0.
My questions are:
What (if any) drivers do I need to install in the proxmox host?
What drivers do I need in the guest VM (Debian)?
Anything special I need to do to get it to work in docker containers (ollama)?
Thanks so much! | 2025-06-06T20:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l5277f/help_with_proxmox_debian_docker_w_nvidia_5060ti/ | EarEquivalent3929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5277f | false | null | t3_1l5277f | /r/LocalLLaMA/comments/1l5277f/help_with_proxmox_debian_docker_w_nvidia_5060ti/ | false | false | self | 2 | null |
AI server help, dual K80s, LocalAGI | 0 | Hey everyone,
I’m trying to get LocalAGI set up on my local server to act as a backend replacement for Ollama, mainly because I want search tools, memory, and agent capabilities that Ollama doesn’t currently offer. I’ve been having a tough time getting everything running reliably, and I could use some help or guidance from people more experienced with this setup.
My main issue is that my server uses two K80s. They're old, but I got them very, very cheap and didn't want to upgrade without dipping my toes in. This is my first time working with AI in general, so I want to get some experience before I spend a ton of money on new GPUs. K80s only support up to CUDA 11.4, and while LocalAGI should support that, it still won't use the GPUs. Since each card is technically two GPUs on one board, I plan to use each 12GB section for a different thing; not ideal, but 12GB is more than enough for testing it out. I can get Ollama to run on CPU, but it also doesn't support K80s, and while I did find a repo, ollama37, for K80s specifically, it is buggy all around. I also want to note that even in CPU-only mode LocalAGI still doesn't work; I get a variety of errors, mainly backend failures or a warning about the legacy GPUs.
I am guessing it's something silly, but I have been working on it for the last few days with no luck following the online documentation. I am also open to alternatives to LocalAGI; my main goals are an Ollama replacement that can do memory and, ideally, internet search.
**Server**: Dell PowerEdge R730
* **CPUs**: 2× Xeon E5-2695 v4 (36 threads total)
* **RAM**: 160GB DDR4 ECC
* **GPUs**: 2× NVIDIA K80s (4 total GPUs – 12GB VRAM each)
* **OS**: Ubuntu with GUI
* **Storage**: 2TB SSD | 2025-06-06T20:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l52n0b/ai_server_help_duel_k80s_localagi/ | JcorpTech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l52n0b | false | null | t3_1l52n0b | /r/LocalLLaMA/comments/1l52n0b/ai_server_help_duel_k80s_localagi/ | false | false | self | 0 | null |
CrewAI with Ollama and MCP | 0 | Anybody spin this up with Ollama successfully? I tried using the example and spun up an MCP server with tools.
I can see the tools and “use” them, but I cannot for the life of me get the output from it. | 2025-06-06T20:30:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l52nov/crewai_with_ollama_and_mcp/ | SpareIntroduction721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l52nov | false | null | t3_1l52nov | /r/LocalLLaMA/comments/1l52nov/crewai_with_ollama_and_mcp/ | false | false | self | 0 | null |
Is there appetite for hosting 3b/8b size models at an affordable rate? | 0 | I don't want this to be a promotional post even though it kind of is. We are looking for people who want to host 3b/8b models of the llama, gemma, and mistral model families. We are working towards expanding to qwen and eventually larger model sizes; we are using new hardware that hasn't really been publicized, unlike Groq, SambaNova, Cerebras, or even specialized cloud services like TPUs
We are running an experiment and would love to know if anyone is interested in hosting 3b/8b size models. Would there be interest in this? I'd love to know if people would find value in a service like this.
I am not here to sell this; I just want to know if people would be interested, or whether it's not worth it until larger parameter sizes, since a lot of folks can self-host models this size. But it could make sense if you run multiple finetunes of this size.
This isn't tiny LORA adapters running on crowded public serverless endpoints - we run your entire custom model in a dedicated instance for an incredible price with token per second rates better than NVIDIA options.
Would love for some people to try it. **I know the parameter and model family** selection is not ideal, but it's just the start as we continue to build it all out.
The hardware is still in trial, so we are aiming to match what a 3b/8b class model would get on equivalent hardware; obviously Blackwell and A100/H100-class hardware will be much faster, but we are targeting 3090/4090-class performance with these models.
Our new service is called: [https://www.positron.ai/snap-serve](https://www.positron.ai/snap-serve) | 2025-06-06T20:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l52z9k/is_there_appetite_for_hosting_3b8b_size_models_at/ | No-Fig-8614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l52z9k | false | null | t3_1l52z9k | /r/LocalLLaMA/comments/1l52z9k/is_there_appetite_for_hosting_3b8b_size_models_at/ | false | false | self | 0 | null |
Terrible Hindi translation, missing text, paused timeline in Whisper? | 0 | I have been trying very hard for hours.
I am facing this issue with all Whisper models, from tiny to large.
Also, I set the language to Hindi, and if I don't set anything I get an English translation of it, which is surprisingly good, while I just want correct Hindi text over it.
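For what it's worth, a minimal openai-whisper call that pins both the language and the task, so the model transcribes instead of translating (the file name is a placeholder):
```python
import whisper

model = whisper.load_model("large-v3")
# task="transcribe" keeps the output in the source language (Hindi);
# task="translate" is what produces the English text instead.
result = model.transcribe("audio.mp3", language="hi", task="transcribe")
print(result["text"])
```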
| 2025-06-06T20:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l5345h/terrible_hindi_translation_missing_texts_paused/ | jadhavsaurabh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5345h | false | null | t3_1l5345h | /r/LocalLLaMA/comments/1l5345h/terrible_hindi_translation_missing_texts_paused/ | false | false | self | 0 | null
Training Arguments | 1 | [removed] | 2025-06-06T20:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l534kh/training_arguments/ | EchoOdd5367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l534kh | false | null | t3_1l534kh | /r/LocalLLaMA/comments/1l534kh/training_arguments/ | false | false | self | 1 | null |
Same document retrieved multiple times in results – why? | 1 | [removed] | 2025-06-06T20:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l53asa/same_document_retrieved_multiple_times_in_results/ | OldBlackberry9158 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l53asa | false | null | t3_1l53asa | /r/LocalLLaMA/comments/1l53asa/same_document_retrieved_multiple_times_in_results/ | false | false | self | 1 | null |
Git for Idiots (Broken down to Four Commands) | 22 | Before AI takes over, people will still have to deal with git.
Since I noticed that a lot of my colleagues want to work with AI but have no idea how Git works, I have implemented a basic Git for Idiots, which breaks Git down to basic version control and online backup functionality for solo projects, with four commands.
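The repo names its own four commands, which aren't listed here; purely as illustration, a wrapper like this typically hides git plumbing behind simple verbs such as the hypothetical save/sync below:
```python
import subprocess

def git(*args: str) -> None:
    """Run a git command and fail loudly if it errors."""
    subprocess.run(["git", *args], check=True)

def save(message: str) -> None:
    """Stage everything and commit: replaces add + commit."""
    git("add", "-A")
    git("commit", "-m", message)

def sync() -> None:
    """Back up to the remote: replaces push (assumes 'origin' is set)."""
    git("push", "-u", "origin", "HEAD")

save("vibe-coded a new feature")
sync()
```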
It really makes stuff incredibly simple for Vibe Coding. Give it a try, if you want:
[https://github.com/AlexSchardin/Git-For-Idiots-solo](https://github.com/AlexSchardin/Git-For-Idiots-solo) | 2025-06-06T21:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l53ych/git_for_idiots_broken_down_to_four_commands/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l53ych | false | null | t3_1l53ych | /r/LocalLLaMA/comments/1l53ych/git_for_idiots_broken_down_to_four_commands/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'BdPlmM6UBlIvv_9b8BtloLVbPtkWemBeAm8iOCCLElw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?width=108&crop=smart&auto=webp&s=86428f98ae948af850ee82e65e5ccbd41b779cbe', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?width=216&crop=smart&auto=webp&s=85d6a377815a9baaefd5b80ff1c4a0ecde31a7c6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?width=320&crop=smart&auto=webp&s=02d010f29a6f196f1dcdf557192b3409610f3ac2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?width=640&crop=smart&auto=webp&s=5f6d6407601aea586952b67f26dbfbcf85736d3a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?width=960&crop=smart&auto=webp&s=89b945f867f28c1b8e0e8277471fc3f91c9d0208', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?width=1080&crop=smart&auto=webp&s=d695a5236e590c8ad4fa0d4b114a87c7c825eb1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_w4iQi8hmqjoa2i4yir8Njz05rJVqGjaSPtQW3d3ARE.jpg?auto=webp&s=840922d75114cdc0183a58260a69526e49fd76f2', 'width': 1200}, 'variants': {}}]} |
so anyway.. i ported Bagel to run with 8GB... not that you should but... | 1 | 2025-06-06T21:37:35 | https://www.reddit.com/r/CrossosAI/comments/1l54321/behold_core_bagel/? | loscrossos | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l547t2 | false | null | t3_1l547t2 | /r/LocalLLaMA/comments/1l547t2/so_anyway_i_ported_bagel_to_run_with_8gb_not_that/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'a-4P51wt8yglyYw9FNkMIrrE_-Z-7_PRPfqbH82yLV0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?width=108&crop=smart&auto=webp&s=98ef100eaab0c4d07019f1e5092ca7ae0d227325', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?width=216&crop=smart&auto=webp&s=048c9413e98c9de1dfeb51922ce4db4d210b40b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?width=320&crop=smart&auto=webp&s=b9fbb3f6801c9845feed504b8c2c6d7d851e4ca6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?width=640&crop=smart&auto=webp&s=9ad478b2938f03552d2a2715c1c1b99c5f901333', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?width=960&crop=smart&auto=webp&s=397f6c59b72b8afd07a3631cffa8b255dc6b379e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?width=1080&crop=smart&auto=webp&s=81aa33a45f02945c78ad6b1bfd3d2c758bd3c5d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HeIcia73z3LdeB3kx9Kglr2UFvrrZyKfrF5iRF-YY3o.jpg?auto=webp&s=6fe96804f3883adaf25152913aac033937d34e29', 'width': 1200}, 'variants': {}}]} |
|
So cool! Imagine if it was local. Any similar localLLM projects out there? | 0 | https://youtu.be/FpSJX59L7N4?si=SYCl8STqFxZnwg7a | 2025-06-06T21:59:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l54pw7/so_cool_imagine_if_it_was_local_any_similar/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l54pw7 | false | null | t3_1l54pw7 | /r/LocalLLaMA/comments/1l54pw7/so_cool_imagine_if_it_was_local_any_similar/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'navJ1b03qwSRM5044KR_KP9_62j9mUy-O-_xXeB6PLE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/q03BPLLZr7F5_GokBlcPkizRDQ2BMXNYnhJBNj_JYQE.jpg?width=108&crop=smart&auto=webp&s=78432354ebc2207fd86ed0f8bc4cccd96d966390', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/q03BPLLZr7F5_GokBlcPkizRDQ2BMXNYnhJBNj_JYQE.jpg?width=216&crop=smart&auto=webp&s=63aa18e9421ad4c09df49603e2f186932d52d57a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/q03BPLLZr7F5_GokBlcPkizRDQ2BMXNYnhJBNj_JYQE.jpg?width=320&crop=smart&auto=webp&s=5cd7894461860dfb50d52486450d4a493bcfec17', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/q03BPLLZr7F5_GokBlcPkizRDQ2BMXNYnhJBNj_JYQE.jpg?auto=webp&s=179173935b18d0a935749072d488cf0784e1d9f2', 'width': 480}, 'variants': {}}]} |
Local Vision LLM finetuning | 1 | [removed] | 2025-06-06T22:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l54w6r/local_vision_llm_finetuning/ | Cool-Instruction-435 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l54w6r | false | null | t3_1l54w6r | /r/LocalLLaMA/comments/1l54w6r/local_vision_llm_finetuning/ | false | false | self | 1 | null |
Guys real question where llama 4 behemoth and thinking ?? | 238 | 2025-06-06T22:21:01 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l557lg | false | null | t3_1l557lg | /r/LocalLLaMA/comments/1l557lg/guys_real_question_where_llama_4_behemoth_and/ | false | false | 238 | {'enabled': True, 'images': [{'id': 'I2Nh12fwA5O6Csj6ndw-N8Tw7fW5rQnRhZ3bsU04g2k', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?width=108&crop=smart&auto=webp&s=d6106a435db9f4caf57819ef012afd4b1367adb8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?width=216&crop=smart&auto=webp&s=423caed607f8d21e4336fae32e9efb5d619235f8', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?width=320&crop=smart&auto=webp&s=0fd54e190b2c3948c8604b98a9acfd58df9f897d', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?width=640&crop=smart&auto=webp&s=f1857551fffb10a9f04b925ff8e3efbe712cef73', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?width=960&crop=smart&auto=webp&s=d49a00290361dea88d509d69ad384dc93fd252bc', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?width=1080&crop=smart&auto=webp&s=a50e2907033a6937525f7c46118f426a6e0689a1', 'width': 1080}], 'source': {'height': 608, 'url': 'https://preview.redd.it/xl7vf5frtd5f1.png?auto=webp&s=978f60183e5fe5d2d531dcfcb1c61672ea43f5f6', 'width': 1080}, 'variants': {}}]} |
|||
Recommended AI model to run on my laptop without overheating? | 1 | [removed] | 2025-06-06T22:41:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l55nxx/recommended_ai_model_to_run_on_my_laptop_without/ | Jealous_Matter_1282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l55nxx | false | null | t3_1l55nxx | /r/LocalLLaMA/comments/1l55nxx/recommended_ai_model_to_run_on_my_laptop_without/ | false | false | self | 1 | null |
9060xt 16gb vs B580 lmstudio | 1 | [removed] | 2025-06-06T23:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l56b1w/9060xt_16gb_vs_b580_lmstudio/ | Buildthehomelab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l56b1w | false | null | t3_1l56b1w | /r/LocalLLaMA/comments/1l56b1w/9060xt_16gb_vs_b580_lmstudio/ | true | false | spoiler | 1 | null |
Pocketflow is now a workflow generator called Osly!! All you need to do is describe your idea | 2 | [removed] | 2025-06-06T23:47:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l5735k/pocketflow_is_now_a_workflow_generator_called/ | Weak_Birthday2735 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5735k | false | null | t3_1l5735k | /r/LocalLLaMA/comments/1l5735k/pocketflow_is_now_a_workflow_generator_called/ | false | false | self | 2 | null |
Pocketflow is now a workflow generator called Osly!! All you need to do is describe your idea | 0 | We built a tool that automates repetitive tasks super easily! Pocketflow was cool but you needed to be technical for that. We re-imagined a way for non-technical creators to build workflows without an IDE.
How our tool, Osly works:
1. Describe any task in plain English.
2. Our AI builds, tests, and perfects a robust workflow.
3. You get a workflow with an interactive frontend that's ready to use or to share.
This has helped us and a handful of our customers save hours on manual work!! We've automated various tasks, from sales outreach to monitoring deal flow on social media!!
Try it out, especially while it is free!! | 2025-06-06T23:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l579ap/pocketflow_is_now_a_workflow_generator_called/ | Weak_Birthday2735 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l579ap | false | null | t3_1l579ap | /r/LocalLLaMA/comments/1l579ap/pocketflow_is_now_a_workflow_generator_called/ | false | false | self | 0 | null |
Built a one click local AI installer and fully functional app named Feni…🔎has enterprise security features baked in. Got it downloaded and installed in a VM! Spent a half a month on the project. What do you think? | 1 | 2025-06-07T00:14:23 | https://v.redd.it/gjqr1d95de5f1 | Outrageous_Beat_3630 | /r/LocalLLaMA/comments/1l57ms1/built_a_one_click_local_ai_installer_and_fully/ | 1970-01-01T00:00:00 | 0 | {} | 1l57ms1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gjqr1d95de5f1/DASHPlaylist.mpd?a=1751976867%2CNzUxMTEwODE2YjAwZDgwZjY1YmU2YjRlMzA5ZWY3MjE0YzUxOGNkZmIwMGVhN2Y0MzlmNWE2ZGVlNzkzNWFkYg%3D%3D&v=1&f=sd', 'duration': 176, 'fallback_url': 'https://v.redd.it/gjqr1d95de5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/gjqr1d95de5f1/HLSPlaylist.m3u8?a=1751976867%2CMWNlYjg3ZGU0NmE2YmE4NDRjNTRlYjNkZDBhYTY0YTg4MTIzOGY2NjZlNjE0NDQ5ZjkyMmRkOWEyZTk1YjM2Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gjqr1d95de5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1l57ms1 | /r/LocalLLaMA/comments/1l57ms1/built_a_one_click_local_ai_installer_and_fully/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?width=108&crop=smart&format=pjpg&auto=webp&s=46df3ba98235d67a68d6d5e8ffb339bd79eb3f0d', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?width=216&crop=smart&format=pjpg&auto=webp&s=43bfed4d585b8ba0afd6ee1673ad554ad7054b4d', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?width=320&crop=smart&format=pjpg&auto=webp&s=a9290f6030569f441c6707c124fcf8d77fad9f72', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?width=640&crop=smart&format=pjpg&auto=webp&s=5995a8134d388cd8b97d14f63bb1fa18f11d83d6', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?width=960&crop=smart&format=pjpg&auto=webp&s=547b86f090345b5cfea60c2b663bde82c36ffb58', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?width=1080&crop=smart&format=pjpg&auto=webp&s=91a5e98d2aa893b8caf3c558739e88dba3028de7', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/ZmdpYTBqMjVkZTVmMS0H6rkVf6GAxjuDNK1kvyp-uXSIu9iwUTTsYWpzBDHO.png?format=pjpg&auto=webp&s=53df3b9ec00ff6c9ffd822efb88c3f36fb8f49a1', 'width': 1080}, 'variants': {}}]} |
||
I built a platform that generates overviews of codebases and creates a map of the codebase dependencies | 1 | [removed] | 2025-06-07T00:14:50 | https://v.redd.it/mwjkmq1wde5f1 | ComfortableArm121 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l57n42 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mwjkmq1wde5f1/DASHPlaylist.mpd?a=1751847303%2CZDllZmYxOTc4YjRjYWI4YTUwMmY5ZTJhNTllYzRhODBiMGY3YTY3OGVjNTgzYzBmOWM3Y2Q3YjEzNDU0MTM4Ng%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/mwjkmq1wde5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/mwjkmq1wde5f1/HLSPlaylist.m3u8?a=1751847303%2CNWM0ZDlhNDcyYzcxOTczOTg4YTU3MzIyMjE1ZDM4ZjA0YmE4M2NjYmE4NGE1M2VjZjAzNDhiOTA2YzI5ZGI4NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mwjkmq1wde5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1592}} | t3_1l57n42 | /r/LocalLLaMA/comments/1l57n42/i_built_a_platform_that_generates_overviews_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=108&crop=smart&format=pjpg&auto=webp&s=1a95ff36191f31f206c1a34e954e82bc99e05218', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=216&crop=smart&format=pjpg&auto=webp&s=733cc2c91ec5b177e81d4affed6002d647903428', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=320&crop=smart&format=pjpg&auto=webp&s=79c25927ff796b9bc2b9dba4cfe289d4b2afe5f5', 'width': 320}, {'height': 434, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=640&crop=smart&format=pjpg&auto=webp&s=fff9ad425d2546a64844a163926b79d62409f171', 'width': 640}, {'height': 651, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=960&crop=smart&format=pjpg&auto=webp&s=f6ebaf238b241e2a7c9c3fb549e27e72d15117b7', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e70ffd388e763ee7f02040342328628cdabeed4e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eTF5NGt1MXdkZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?format=pjpg&auto=webp&s=9dc46ddb8d0988e56d5ae873a275e5add4abccf0', 'width': 1592}, 'variants': {}}]} |
|
I built a platform that generates overviews of codebases and creates a map of the codebase dependencies | 18 | 2025-06-07T00:16:33 | https://v.redd.it/dtd99xtbee5f1 | ComfortableArm121 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l57of0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dtd99xtbee5f1/DASHPlaylist.mpd?a=1751847405%2CZDdjMDMwNzA4ZjgyZGU0YWQyOWIyMzRiMGNkNzZlY2U3YWY1M2RlNjZlNDIzMTIzNjc5ZTM2YTdlMjI2MjYyNw%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/dtd99xtbee5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/dtd99xtbee5f1/HLSPlaylist.m3u8?a=1751847405%2CYmVkYjJkYThlMDc1OTNkODhiMWE2MjUyMjY4MDZjZDE0Y2YxMDZjOWM2YjZmNGQ3ZDcxNWIxZDFkNjUyZjdlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dtd99xtbee5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1592}} | t3_1l57of0 | /r/LocalLLaMA/comments/1l57of0/i_built_a_platform_that_generates_overviews_of/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=108&crop=smart&format=pjpg&auto=webp&s=0458269863ea8abf72822eef3c34b48413ba9e6a', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=216&crop=smart&format=pjpg&auto=webp&s=1a7c6c34012caaf2f0d9a1109160d855c53997fa', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=320&crop=smart&format=pjpg&auto=webp&s=21074b08b9fe192ee767d5451b055d3ce785ec6b', 'width': 320}, {'height': 434, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=640&crop=smart&format=pjpg&auto=webp&s=e78b72898a6f11b7f818571209feac8f958f962b', 'width': 640}, {'height': 651, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=960&crop=smart&format=pjpg&auto=webp&s=eed4662db6f57bc39d813feb1d505ccdfb944cc6', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4ec41406147755a98c01a33ce980cbecd3129a84', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ajV3dDR4dGJlZTVmMfiX-ZPFXvivw9U83pY8eGEanyuVX5PV_GlEnhBt2ZQ_.png?format=pjpg&auto=webp&s=66bd1b5a4fb29790e5fb9fbd9ffae9500ffdd9bd', 'width': 1592}, 'variants': {}}]} |
||
I need help with something in llama.cpp | 1 | [removed] | 2025-06-07T00:25:47 | https://www.reddit.com/gallery/1l57v1f | Puzzled-Yoghurt564 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l57v1f | false | null | t3_1l57v1f | /r/LocalLLaMA/comments/1l57v1f/i_need_help_something_llamacpp/ | false | false
|
—Built an app and 1 click installer for local AI— with enterprise security features—-running in a VM here | 1 | 2025-06-07T00:28:52 | https://v.redd.it/57s4eew7ge5f1 | Outrageous_Beat_3630 | /r/LocalLLaMA/comments/1l57x9g/built_an_app_and_1_click_installer_for_local_ai/ | 1970-01-01T00:00:00 | 0 | {} | 1l57x9g | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/57s4eew7ge5f1/DASHPlaylist.mpd?a=1751977737%2CMTY2ZjcwMjViNWQ1ODQ3ZDc4MzIzYjA2NTBjZjg2MzUxZWM5YzU2NTFkYzY4N2U1NmQ5YzRjMWYzNDU1YjUzYg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/57s4eew7ge5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/57s4eew7ge5f1/HLSPlaylist.m3u8?a=1751977737%2CZjdlZTJhYmJjYThiNmVmZDZhOTZlZDZjN2E0Mzc3OGQyOTMxMjIxYzI5MDAxMjA2MmMyMDY2ZmM2ZTcwMzg2Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/57s4eew7ge5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1l57x9g | /r/LocalLLaMA/comments/1l57x9g/built_an_app_and_1_click_installer_for_local_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?width=108&crop=smart&format=pjpg&auto=webp&s=a49f2a0a5f5d44d429d4432d486ad597d9c30a4a', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?width=216&crop=smart&format=pjpg&auto=webp&s=6733704956e14bb17f1154319a95d5cccb8e8051', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?width=320&crop=smart&format=pjpg&auto=webp&s=dfbb4fc81bb23ab8d3b7bd10786ea292c756578f', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?width=640&crop=smart&format=pjpg&auto=webp&s=2b5ff3654b6ab01a9a08782caca89fe0429b011e', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?width=960&crop=smart&format=pjpg&auto=webp&s=8ba713df38aaa84f75adcd0d59e1d69a3b57785e', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?width=1080&crop=smart&format=pjpg&auto=webp&s=211da02edb80fa1d86f5f4f26de18458c60eb0e8', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/OHRmbnRlMzdnZTVmMWLvs4GTuyQxq5d6WTM1z-AYTTQ6EBP4UCg9wmLpG2Dr.png?format=pjpg&auto=webp&s=4af461ffb63f765ac732c6f3dcf942428b470566', 'width': 1080}, 'variants': {}}]} |
||
is Whisper v3 Large Turbo still top dog for English transcriptions? | 6 | I have a couple hundred hours of audio to transcribe. Is this still the best model for accuracy, or are there better options now? | 2025-06-07T00:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l58dva/is_whisper_v3_large_turbo_still_top_dog_for/ | milkygirl21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l58dva | false | null | t3_1l58dva | /r/LocalLLaMA/comments/1l58dva/is_whisper_v3_large_turbo_still_top_dog_for/ | false | false | self | 6 | null
Local AI with one click installer- I built the app and installer — honest thoughts 💭?? | 1 | —input Sanitization & Validation ✅
—Comprehensive logging & audit trail✅
— Secure HTTP Client ✅
—File security & Validation ✅
Built the app and bundled a direct Ollama download: no troubleshooting, just click and download and you have the app. A no-fuss Ollama install plus llama 3.2:B, all in a convenient little package :)
| 2025-06-07T00:54:06 | https://v.redd.it/9k0usbnvke5f1 | Outrageous_Beat_3630 | /r/LocalLLaMA/comments/1l58ejs/local_ai_with_one_click_installer_i_built_the_app/ | 1970-01-01T00:00:00 | 0 | {} | 1l58ejs | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9k0usbnvke5f1/DASHPlaylist.mpd?a=1751979250%2CZDk3MzVlZWY4ODU1NDEzMTBkMjdlNzBiNWEyZGYyZDJlOTRiYmQ2NjIxNWUzZjQ2MzhhNGEwOTZjMjhmM2Q0Mg%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/9k0usbnvke5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/9k0usbnvke5f1/HLSPlaylist.m3u8?a=1751979250%2CZDQ4MmRiMDE5YTM1MGNiODY0OTc4YTBlNjkzYTRlZjc2ZDAzYTBmM2UzMWJhYjI3M2YxNWU2MjkwNTRkMGE0Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9k0usbnvke5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1l58ejs | /r/LocalLLaMA/comments/1l58ejs/local_ai_with_one_click_installer_i_built_the_app/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao.png?width=108&crop=smart&format=pjpg&auto=webp&s=524ac0e01e22e7115f29a46f0ddc43f66f5141dd', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao.png?width=216&crop=smart&format=pjpg&auto=webp&s=ff4fc35b7913c5439c7bff7d45d20760c7b1892f', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao.png?width=320&crop=smart&format=pjpg&auto=webp&s=4bc816c11389d8cad96f8fc5a1d02edd7bb739b0', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao.png?width=640&crop=smart&format=pjpg&auto=webp&s=5edba3ffdd38373b50c1893cb4cec5cb995ebd29', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao.png?width=960&crop=smart&format=pjpg&auto=webp&s=c7a1ab13ad9671d030fceb15194263290ee877ca', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao.png?width=1080&crop=smart&format=pjpg&auto=webp&s=472c54d4789f5c2ba9e1624b4eac0749af5d8f6d', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/eGJicWVrMHZrZTVmMVJqHgIiMuV_Pam5XgQifzwZx7gYS1DhiXmDlBC6Nvao.png?format=pjpg&auto=webp&s=8d96f594dc8248f6eaaecee987258c82fb693552', 'width': 1080}, 'variants': {}}]} |
|
Do weights hide "hyperbolic trees"? A quick coffee-rant and an ask for open science (long) | 51 | Every morning I grab a cup of coffee and read all the papers I can for at least 3 hours.
You guys probably read the latest Meta paper that says we can "store" almost 4 bits per param as some sort of "constant" in LLMs.
**What if I told you that there are similar papers in neurobiology?** Similar constants have been found in biological neurons: some neuro papers show that CA1 synapses pack around 4.7 bits per synapse. It could be a coincidence, and the comparison is slightly apples-to-oranges, but none of this looks random.
And the best part is that since we have access to the open weights, we can test many of the available hypotheses. There's no need to go full crank territory when we can do open, collaborative science.
After looking at the Meta paper, for some reason I tried to match the constant to something that would make sense to me. The constant is around 3.6 with some flexibility, which approaches (2−ϕ) \* 10. So we can more or less define the "memory capacity function" of an LLM as f(p) ≈ (2−ϕ) ⋅ 10 ⋅ p, where p is the parameter count and the 10 is pure curve-fitting.
The ~3.6 bits is probably the *Shannon/Kolmogorov* information the model can store **about a dataset**, not raw mantissa bits. It could also be architecture- or precision-dependent, so I don't know.
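To make that concrete, here's a minimal sketch of the capacity estimate (the factor of 10 is, again, pure curve-fitting, nothing deeper):

```python
# Minimal sketch of the "operational" capacity estimate. The (2 - phi) * 10
# factor is curve-fitting against Meta's ~3.6 bits/param, not a derivation.
PHI = (1 + 5 ** 0.5) / 2            # golden ratio, ~1.618
BITS_PER_PARAM = (2 - PHI) * 10     # ~3.82, in the ballpark of the reported ~3.6

def capacity_bits(n_params: float) -> float:
    """f(p) ~ (2 - phi) * 10 * p: recoverable dataset information, in bits."""
    return BITS_PER_PARAM * n_params

for n in (125e6, 1.3e9, 8e9):
    print(f"{n / 1e9:5.3f}B params -> ~{capacity_bits(n) / 8 / 1e9:.2f} GB of memorized info")
```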
This is probably all wrong and just a coincidence, but take it as an "operational" starting point of sorts. (2−ϕ) is not a random thing: it's the fraction evolution falls on in phyllotaxis when generating the rotational "spawn points" of leaves to maximize coverage (the golden angle, ≈137.5°).
What if the nature of the learning process makes LLMs converge on these "constants" (as in magic numbers from CS) to maximize their goals? I'm not claiming a literal golden angle shows up, rather some patterned periodicity that makes sense in a high-dimensional weight space.
Correct me if I'm wrong here, but what if this is there to optimize some other geometry? Not every parameter vector is nailed to a perfect unit sphere, but the activation vectors that matter for attention get RMS- or ℓ₂-normalised, so they live on a thin hyperspherical shell.
I don't know what the 10 is here, but this could be distributing memorization across every new param/leaf on a hypersphere: each new head / embedding direction wants to overlap as little as possible with the ones already there.
afaik this could all be pure numerology, but the angle is kind of there
Now, I found someone (link below) who seems to have found evidence of hyperbolic distributions in the weights. Again, hyperbolic structures have already been found in biological brains. While these are not the same thing, maybe the way information reaches both creates some sort of emergent encoding structure.
This hyperbolic tail is not, by itself, proof of curvature, but we can test for it (e.g., a hyperbolic-SVD curvature fit or a Gromov δ-hyperbolicity estimate).
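For the curvature side, here's a sketch of a cheap Gromov δ-hyperbolicity estimate via the four-point condition (the random matrix is a stand-in; swap in real weight rows to actually test):

```python
# Sketch: estimate Gromov delta-hyperbolicity of weight-row geometry via the
# four-point condition on random quadruples. Small delta relative to the
# diameter hints at tree-like (negatively curved) structure.
import numpy as np

def gromov_delta(D: np.ndarray, n_samples: int = 20000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    n, delta = D.shape[0], 0.0
    for _ in range(n_samples):
        i, j, k, l = rng.choice(n, size=4, replace=False)
        sums = sorted((D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]))
        delta = max(delta, (sums[2] - sums[1]) / 2)   # four-point condition
    return delta

W = np.random.randn(512, 256)                          # stand-in weight matrix
X = W / np.linalg.norm(W, axis=1, keepdims=True)
D = np.arccos(np.clip(X @ X.T, -1.0, 1.0))             # angular distances on the shell
print(f"delta = {gromov_delta(D):.3f}, diameter = {D.max():.3f}")
```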
Holistically speaking, since we train on data that is basically a projection of our world models, training should (kind of) create a sort of "reverse-engineered" holographic representation of that world model; via inference we then acquire a string of symbols that represents a slice of it.
Then it seems as if bio/bit networks converge on "sphere-rim coverage + hyperbolic interior" because that maximizes memory and routing efficiency under sparse wiring budgets.
\---
**If this holds true (to some extent), then this is useful data for optimizing both our training runs and our quantization methods**.
\+ If we identify where the "trunks" vs. the "twigs" are, we can keep the trunks in 8 bits and prune the twigs to 4 bits (or less). (Compare k\_eff-based pruning to magnitude pruning; if there's no win, k\_eff is useless. See the sketch after this list.)
\+ If "golden-angle packing" is real, many twigs could be near-duplicates.
\+ If a given "tree" stops growing, we could freeze it.
\+ Since "memory capacity" scales linearly with param count, and if every new weight vector lands on a hypersphere with minimal overlap (think 137° leaf spiral in 4 D), linear scaling drops out naturally. As far as i read, the models in the Meta paper were small.
\+ Plateau at \~3.6 bpp is *independent* of dataset size (once big enough). A sphere has only so much surface area; after that, you can’t pack new “directions” without stepping on toes -> switch to interior tree-branches = generalization.
\+ If curvature really is < 0: negative curvature says the matrix behaves like a tree embedded in hyperbolic space, so a Lorentz low-rank factor (U, V, R) might shave parameters versus plain UVᵀ.
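For the first bullet, here is a strawman comparison sketch. I'm assuming "k\_eff" means per-row energy inside the matrix's effective-rank ("trunk") subspace; that operationalization is my guess, not an established definition:

```python
# Strawman for the trunk/twig bullet. "k_eff" here is one possible reading:
# per-row energy in the dominant singular directions. Compare which rows each
# criterion keeps; if k_eff never beats magnitude on a downstream metric,
# the idea dies, as it should.
import numpy as np

W = np.random.randn(1024, 1024)                 # stand-in; use real weights
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k_eff = int(S.sum() ** 2 / (S ** 2).sum())      # effective rank (participation ratio)

trunk_score = ((U[:, :k_eff] * S[:k_eff]) ** 2).sum(axis=1)  # row energy in trunk subspace
mag_score = (W ** 2).sum(axis=1)                             # plain magnitude

def keep_mask(score: np.ndarray, keep_frac: float = 0.5) -> np.ndarray:
    return score >= np.quantile(score, 1 - keep_frac)

overlap = (keep_mask(trunk_score) & keep_mask(mag_score)).mean()
print(f"k_eff = {k_eff}, kept-set overlap between criteria = {overlap:.2%}")
```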
\---
I'm usually an obscurantist, but these hypotheses are too easy to test to keep private, and they could help all of us in these commons. If by any chance this pseudo-coffee-rant helps you get some research ideas, that is more than enough for me.
Maybe to start with, someone should dump key/query vectors and histogram the pairwise angles, looking for golden-angle structure.
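A minimal sketch of that probe, using GPT-2's first attention block as a stand-in (the c\_attn layout and module path are GPT-2-specific; adapt them to whatever checkpoint you actually dump):

```python
# Sketch: histogram pairwise angles between query-projection directions and
# check for a peak near the golden angle (~137.5 deg). Expect ~90 deg for
# generic high-dim vectors; any structured deviation is the interesting part.
import numpy as np
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
W = model.transformer.h[0].attn.c_attn.weight.detach().numpy()  # (768, 2304) = [in, q|k|v]
Q = W[:, :768].T                                                # 768 query directions
Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)

cos = np.clip(Q @ Q.T, -1.0, 1.0)
angles = np.degrees(np.arccos(cos[np.triu_indices_from(cos, k=1)]))

hist, edges = np.histogram(angles, bins=180, range=(0, 180))
print(f"modal pairwise angle: {edges[hist.argmax()]:.1f} deg (golden angle ~137.5)")
```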
If anyone has the means, please rerun Meta's capacity probe to see if the 3.6 bpp plateau holds.
**All of this is falsifiable, so go ahead and kill it with data**
Thanks for reading my rant, have a nice day/night/whatever
Links:
[How much do language models memorize?](https://arxiv.org/pdf/2505.24832)
[Nanoconnectomic upper bound on the variability of synaptic plasticity | eLife](https://elifesciences.org/articles/10778)
[Hyperbolic Space - ueaj - Obsidian Publish](https://publish.obsidian.md/ueaj/Machine+Learning/The+Spaces/Hyperbolic+Space) | 2025-06-07T01:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l59hwo/do_weights_hide_hyperbolic_trees_a_quick/ | OmarBessa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l59hwo | false | null | t3_1l59hwo | /r/LocalLLaMA/comments/1l59hwo/do_weights_hide_hyperbolic_trees_a_quick/ | false | false | self | 51 | null |
Noob needs help with AnythingLLM Docker - HTTPS Support | 1 | Hi Everyone,
I am new to the LLM world and have been learning a ton. I am doing a pet project for work: building an AI bot into an internal site we have, using AnythingLLM. The issue I have is that I can't embed the HTTP version of the bot into the HTTPS site.
I created my Docker container with this command, which works fine:
`export STORAGE_LOCATION="/Users/pa/Documents/anythingLLM" && \`
`mkdir -p $STORAGE_LOCATION && \`
`touch "$STORAGE_LOCATION/.env" && \`
`docker run -d -p 3001:3001 \`
`--cap-add SYS_ADMIN \`
`-v ${STORAGE_LOCATION}:/app/server/storage \`
`-v ${STORAGE_LOCATION}/.env:/app/server/.env \`
`-e STORAGE_DIR="/app/server/storage" \`
`mintplexlabs/anythingllm`
My struggle is trying to implement HTTPS. I was looking at this: [https://github.com/Mintplex-Labs/anything-llm/issues/523](https://github.com/Mintplex-Labs/anything-llm/issues/523) and makes it seem its possible but feel like I am making no progress. I have not used docker before today and have not found any guides or video to help me get over this last hurdle. Can anyone help point me in the right direction? | 2025-06-07T02:00:08 | https://www.reddit.com/r/LocalLLaMA/comments/1l59nam/noob_needs_help_with_anythingllm_docker_https/ | eld101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l59nam | false | null | t3_1l59nam | /r/LocalLLaMA/comments/1l59nam/noob_needs_help_with_anythingllm_docker_https/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j1aAKt9jiytQXUdeo6dgQiUmE5pkCMtsIjhN1kx8mqs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/30W-b9PKOrhc6BuJ2DUKy_Eu6TFBXeKIV1tEa6VjdXo.jpg?width=108&crop=smart&auto=webp&s=db615c087a1c1665890e9151aea77348999c2847', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/30W-b9PKOrhc6BuJ2DUKy_Eu6TFBXeKIV1tEa6VjdXo.jpg?width=216&crop=smart&auto=webp&s=f55fe5b25c180dae6f15cebb04d6b09957a14e18', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/30W-b9PKOrhc6BuJ2DUKy_Eu6TFBXeKIV1tEa6VjdXo.jpg?width=320&crop=smart&auto=webp&s=af5168c950a4003115e155595faf31937602aae8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/30W-b9PKOrhc6BuJ2DUKy_Eu6TFBXeKIV1tEa6VjdXo.jpg?width=640&crop=smart&auto=webp&s=02e85c8b48ef2200c0b682d0dfc7b3144391a1f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/30W-b9PKOrhc6BuJ2DUKy_Eu6TFBXeKIV1tEa6VjdXo.jpg?width=960&crop=smart&auto=webp&s=7d21d68d70b692f127e666d54908d2e2dfc94ea8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/30W-b9PKOrhc6BuJ2DUKy_Eu6TFBXeKIV1tEa6VjdXo.jpg?width=1080&crop=smart&auto=webp&s=6cd3b8fa8928eb6a641ac3bfa043c22a28b1d135', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/30W-b9PKOrhc6BuJ2DUKy_Eu6TFBXeKIV1tEa6VjdXo.jpg?auto=webp&s=614a196a5198772af7ceaa690942307a03b786a7', 'width': 1200}, 'variants': {}}]} |
Permanent Reasoning XML tags with Group Relative Policy Optimisation using LLaMa | 1 | With models like QwQ, <think> XML tags are generated without explicitly asking for them. I checked the Modelfile, but it seems the system prompt does not explicitly ask for them either, so the reasoning-trace generation must come from the training process.
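For reference, this is roughly my setup — a minimal sketch with TRL's GRPOTrainer, where the model name, regex, and toy dataset are placeholders:

```python
# Sketch of rewarding <think>...</think> emission with TRL's GRPOTrainer.
# This is not a full recipe; a real run would add a task reward as well.
import re
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

THINK_RE = re.compile(r"<think>.+?</think>", re.DOTALL)

def format_reward(completions, **kwargs):
    # +1 if the completion contains a non-empty reasoning block, else 0.
    return [1.0 if THINK_RE.search(c) else 0.0 for c in completions]

train_dataset = Dataset.from_dict(
    {"prompt": ["What is 17 * 24? Reason step by step."] * 64}  # toy prompts
)

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",     # placeholder checkpoint
    reward_funcs=[format_reward],                  # add a correctness reward alongside
    args=GRPOConfig(output_dir="grpo-think", num_generations=8),
    train_dataset=train_dataset,
)
trainer.train()
```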
However, after training LLaMA with the GRPO trainer like this, that does not seem to happen; the tags never show up unprompted. Should I first run GRPO on a larger dataset and then train on my own dataset, or do supervised finetuning beforehand? | 2025-06-07T02:16:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l59ykr/permanent_reasoning_xml_tags_with_group_relative/ | baklava-balaclava | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l59ykr | false | null | t3_1l59ykr | /r/LocalLLaMA/comments/1l59ykr/permanent_reasoning_xml_tags_with_group_relative/ | false | false | self | 1 | null
Reverse Engineering Cursor's LLM Client | 36 | 2025-06-07T02:42:57 | https://www.tensorzero.com/blog/reverse-engineering-cursors-llm-client/ | bianconi | tensorzero.com | 1970-01-01T00:00:00 | 0 | {} | 1l5agd6 | false | null | t3_1l5agd6 | /r/LocalLLaMA/comments/1l5agd6/reverse_engineering_cursors_llm_client/ | false | false | default | 36 | {'enabled': False, 'images': [{'id': 'K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WN-d8kNFp_bMOWc0Jxo_v1OTc4Y2FNIweL3BqVs8-To.jpg?width=108&crop=smart&auto=webp&s=9d871ed7e9b1584c7d16a889f3370beda0be6cfd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WN-d8kNFp_bMOWc0Jxo_v1OTc4Y2FNIweL3BqVs8-To.jpg?width=216&crop=smart&auto=webp&s=006bf5b7234a4165183f82c4d12a0df3265de899', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/WN-d8kNFp_bMOWc0Jxo_v1OTc4Y2FNIweL3BqVs8-To.jpg?width=320&crop=smart&auto=webp&s=607b39df77b4ae55a3dab1e4a38f16d4240c1e4f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/WN-d8kNFp_bMOWc0Jxo_v1OTc4Y2FNIweL3BqVs8-To.jpg?width=640&crop=smart&auto=webp&s=ef10133c368fd67735eb95c094af8ac553c79236', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/WN-d8kNFp_bMOWc0Jxo_v1OTc4Y2FNIweL3BqVs8-To.jpg?width=960&crop=smart&auto=webp&s=7555961d888adfc15b01e4eb67e57ba1cf87ecb6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/WN-d8kNFp_bMOWc0Jxo_v1OTc4Y2FNIweL3BqVs8-To.jpg?width=1080&crop=smart&auto=webp&s=d83dbdcb53b9bcea82c02d8715d2f8be00c93d68', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/WN-d8kNFp_bMOWc0Jxo_v1OTc4Y2FNIweL3BqVs8-To.jpg?auto=webp&s=927ab4dde27ca3687f2c84a1ac6fb262aeed464f', 'width': 1200}, 'variants': {}}]} |
|
2X EPYC 9005 series engineering-sample CPUs for local AI inference? | 7 | Is it a good idea to use engineering-sample CPUs instead of retail ones for running llama.cpp?
Will it actually work? | 2025-06-07T03:26:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l5b9ab/2x_epyc_9005_series_engineering_cpus_for_local_ai/ | sub_RedditTor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5b9ab | false | null | t3_1l5b9ab | /r/LocalLLaMA/comments/1l5b9ab/2x_epyc_9005_series_engineering_cpus_for_local_ai/ | false | false | self | 7 | null
deep research with local LLM and local documents | 1 | [removed] | 2025-06-07T03:38:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l5bgtn/deep_research_with_local_llm_and_local_documents/ | tomkod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5bgtn | false | null | t3_1l5bgtn | /r/LocalLLaMA/comments/1l5bgtn/deep_research_with_local_llm_and_local_documents/ | false | false | self | 1 | null |
how does one post here? | 1 | [removed] | 2025-06-07T03:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l5bhu3/how_does_one_post_here/ | tomkod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5bhu3 | false | null | t3_1l5bhu3 | /r/LocalLLaMA/comments/1l5bhu3/how_does_one_post_here/ | false | false | self | 1 | null |
KoboldCpp 1.93's Smart AutoGenerate Images (fully local, just kcpp alone) | 137 | 2025-06-07T04:10:46 | https://v.redd.it/ksaugoxxif5f1 | HadesThrowaway | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l5c0tf | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ksaugoxxif5f1/DASHPlaylist.mpd?a=1751861462%2CNzc1NDYyNDc1OTUzNzRhYzc5MjQ5NDMzMjhlM2YzNjBjMzdhNDVjNTI2MDk4NmUyNzI4MjI5ODg3NGVmOGJhYw%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/ksaugoxxif5f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/ksaugoxxif5f1/HLSPlaylist.m3u8?a=1751861462%2CMWMyOGYzOTk5NWExOWE3ZTcwY2QyOGRiZmQ1OGUzYWIxNTZmOTk1ZGQxYzU0Y2MzNWQ1NTFmMDNiMzFjNDA4Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ksaugoxxif5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1l5c0tf | /r/LocalLLaMA/comments/1l5c0tf/koboldcpp_193s_smart_autogenerate_images_fully/ | false | false | 137 | {'enabled': False, 'images': [{'id': 'NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw.png?width=108&crop=smart&format=pjpg&auto=webp&s=ad8cb5355196b67f380d4471a8b7911853ab7e1f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw.png?width=216&crop=smart&format=pjpg&auto=webp&s=1a40ef4c81eeda61235296b3284571cb907bab5c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw.png?width=320&crop=smart&format=pjpg&auto=webp&s=cba6928f42ff71fa4490b15fdfec00ea5263ec05', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw.png?width=640&crop=smart&format=pjpg&auto=webp&s=61f481dcff8d381a10dfac07bcae7c1adcd8cf4f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw.png?width=960&crop=smart&format=pjpg&auto=webp&s=01ed05d2a733c36bb6223831b7de6a7fd6b3b337', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw.png?width=1080&crop=smart&format=pjpg&auto=webp&s=65563f83333736414be0c056e76251548f38e52b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NHRvemxvdDVrZjVmMSFfoyYGAySgYuvKAT1gTMSKLTBd_9hKeny4y4byCyRw.png?format=pjpg&auto=webp&s=eb0d7f696ab25801b735ac5d5bed7be3b889dcd0', 'width': 1280}, 'variants': {}}]} |
||
CoexistAI: All you need for local Deep Research | 1 | [removed] | 2025-06-07T04:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l5c2o0/coexistai_all_you_need_for_local_deep_research/ | Longjumping_Essay498 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5c2o0 | false | null | t3_1l5c2o0 | /r/LocalLLaMA/comments/1l5c2o0/coexistai_all_you_need_for_local_deep_research/ | false | false | self | 1 | null |
CoexistAI: your own research assistant | 1 | [removed] | 2025-06-07T04:30:20 | https://github.com/SPThole/CoexistAI | Optimalutopic | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l5ccdt | false | null | t3_1l5ccdt | /r/LocalLLaMA/comments/1l5ccdt/coexistai_your_own_research_assistant/ | false | false | default | 1 | null |
CoexistAI: your own research assistant | 1 | [removed] | 2025-06-07T04:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l5cepf/coexistai_your_own_research_assistant/ | Optimalutopic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5cepf | false | null | t3_1l5cepf | /r/LocalLLaMA/comments/1l5cepf/coexistai_your_own_research_assistant/ | false | false | self | 1 | null |
Windows Gaming laptop vs Apple M4 | 3 | My old laptop struggles under local LLMs; it can only run 1B to 3B models, and even those very slowly.
I will need to upgrade the hardware.
I am working on building AI agents; I mostly do backend Python work.
I would appreciate your suggestions: Windows gaming laptop or Apple M-series?
| 2025-06-07T05:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l5cua7/windows_gaming_laptop_vs_apple_m4/ | bull_bear25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l5cua7 | false | null | t3_1l5cua7 | /r/LocalLLaMA/comments/1l5cua7/windows_gaming_laptop_vs_apple_m4/ | false | false | self | 3 | null |
gemini-2.5-pro-preview-06-05 performance on IDP Leaderboard | 63 | There is a slight improvement in table extraction and long-document understanding, and a slight drop in OCR accuracy, which is a little surprising since Gemini models are usually very good at OCR; overall it is still the best model.
Although I have noticed, it stopped giving answer midway whenever I try to extract information from W2 tax forms, might be because of privacy reason. This is much more prominent with gemini models (both 06-05 and 03-25) than OpenAI or Claude. Anyone faced this issue? I am thinking of creating a test set for this. | 2025-06-07T05:03:24 | SouvikMandal | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l5cw3w | false | null | t3_1l5cw3w | /r/LocalLLaMA/comments/1l5cw3w/gemini25propreview0605_performance_on_idp/ | false | false | default | 63 | {'enabled': True, 'images': [{'id': 'fcugwamdsf5f1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/fcugwamdsf5f1.png?width=108&crop=smart&auto=webp&s=75755f85de25d8a16cdd7bafa23ff6302c73550f', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/fcugwamdsf5f1.png?width=216&crop=smart&auto=webp&s=5c9a769b69892d9c026e2212c4f9a5eb8ddea79d', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/fcugwamdsf5f1.png?width=320&crop=smart&auto=webp&s=0f6181e71ba18bec0f4d5702411a11f80fb5ea79', 'width': 320}, {'height': 292, 'url': 'https://preview.redd.it/fcugwamdsf5f1.png?width=640&crop=smart&auto=webp&s=f205b4c99a129e98c3373c875e810dfedcfd3505', 'width': 640}, {'height': 438, 'url': 'https://preview.redd.it/fcugwamdsf5f1.png?width=960&crop=smart&auto=webp&s=f4ad96e655eebb795a30d0b0466c1cb9c1b9a4c2', 'width': 960}, {'height': 492, 'url': 'https://preview.redd.it/fcugwamdsf5f1.png?width=1080&crop=smart&auto=webp&s=36a863d93530cffbd0dbdee8a812d839a372b380', 'width': 1080}], 'source': {'height': 1002, 'url': 'https://preview.redd.it/fcugwamdsf5f1.png?auto=webp&s=e67b1505c4b8aa6ede216fe6c0adbc2c18b311b2', 'width': 2196}, 'variants': {}}]} |
|
14B Hybrid Reasoning UI Model for websites and components | 0 | [removed] | 2025-06-07T05:04:04 | https://www.reddit.com/gallery/1l5cwh6 | United-Rush4073 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l5cwh6 | false | null | t3_1l5cwh6 | /r/LocalLLaMA/comments/1l5cwh6/14b_hybrid_reasoning_ui_model_for_websites_and/ | false | false | 0 | null |