title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HOW TO STOP LLM GENERATION? | 1 | [removed] | 2025-01-14T10:22:56 | https://www.reddit.com/r/LocalLLaMA/comments/1i133j4/how_to_stop_llm_generation/ | MBHQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i133j4 | false | null | t3_1i133j4 | /r/LocalLLaMA/comments/1i133j4/how_to_stop_llm_generation/ | false | false | self | 1 | null |
Which LLM for my use case? | 1 | [removed] | 2025-01-14T10:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i135zq/which_llm_for_my_use_case/ | SeparateSteak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i135zq | false | null | t3_1i135zq | /r/LocalLLaMA/comments/1i135zq/which_llm_for_my_use_case/ | false | false | self | 1 | null |
what model should I choose for my use case? | 1 | [removed] | 2025-01-14T10:33:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i138ms/what_model_should_i_choose_for_my_use_case/ | HansEliSebastianFors | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i138ms | false | null | t3_1i138ms | /r/LocalLLaMA/comments/1i138ms/what_model_should_i_choose_for_my_use_case/ | false | false | self | 1 | null |
Deploy Qwen2-VL-72B Instruct as Sagemaker Endpoint | 1 | [removed] | 2025-01-14T10:33:23 | https://www.reddit.com/r/LocalLLaMA/comments/1i138nh/deploy_qwen2vl72b_instruct_as_sagemaker_endpoint/ | Worldly-Dimension282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i138nh | false | null | t3_1i138nh | /r/LocalLLaMA/comments/1i138nh/deploy_qwen2vl72b_instruct_as_sagemaker_endpoint/ | false | false | self | 1 | null |
How good are open models with XML? | 1 | [removed] | 2025-01-14T10:39:52 | https://www.reddit.com/r/LocalLLaMA/comments/1i13btd/how_good_are_open_models_with_xml/ | Traditional-Gap-3313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i13btd | false | null | t3_1i13btd | /r/LocalLLaMA/comments/1i13btd/how_good_are_open_models_with_xml/ | false | false | self | 1 | null |
How will you prepare for the future of LLMs/AI? | 2 | I’m not talking about bigger and better ones.
Time and again, software is new, free, amazing, complex. And eventually, it gets monetized. Massively monetized.
I’m trying to avoid hyperbole, and rely on reality. Think the early internet, early social media, email, the start bar on windows, games. Everything has ads now and combs your usage for data they can use. This isn’t a post or discussions about that or the ethics of it. It’s about how you plan to use LLMs.
Are you building your own framework or models? Do you plan to just keep going (maybe the future has less open source, maybe it doesn’t) without change? Are you looking forward to the monetization of LLMs? (No judgement, a lot of things have and do get better when a company can make money). Is there anything you expect to see in the future of LLMs? Personally I’m shocked I can use copilot/chatgpt to search the internet and not have ads injected into it.
So rather than discuss the possible future of LLMs, what’s YOUR plan for YOUR future with LLMs and AI (if any)? | 2025-01-14T10:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i13d0m/how_will_you_prepare_for_the_future_of_llmsai/ | NotTheTitanic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i13d0m | false | null | t3_1i13d0m | /r/LocalLLaMA/comments/1i13d0m/how_will_you_prepare_for_the_future_of_llmsai/ | false | false | self | 2 | null |
Alternatives to Llama Guard? | 1 | [removed] | 2025-01-14T10:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i13e2i/alternatives_to_llama_guard/ | ivvnwong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i13e2i | false | null | t3_1i13e2i | /r/LocalLLaMA/comments/1i13e2i/alternatives_to_llama_guard/ | false | false | self | 1 | null |
Deployed Midnight-Miqu-103B and it's slow, very slow | 1 | [removed] | 2025-01-14T11:04:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i13omo/deployed_midnightmiqu103b_and_its_slow_very_slow/ | No-News908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i13omo | false | null | t3_1i13omo | /r/LocalLLaMA/comments/1i13omo/deployed_midnightmiqu103b_and_its_slow_very_slow/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?width=108&crop=smart&auto=webp&s=e02c63802781b0f4429b6112a590f9166ed2321b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?width=216&crop=smart&auto=webp&s=4297b5cec85f0421c784f2f4a74b51ce7271e0ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?width=320&crop=smart&auto=webp&s=f23e48e85cf2b036ac6c770408c4ba03787b7e0d', 'width': 320}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?auto=webp&s=a33dc81a947de89abe53b0e2c6c74837d714b0a2', 'width': 600}, 'variants': {}}]} |
Today I start my own business - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:28:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i1415l/today_i_start_my_own_business_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1415l | false | null | t3_1i1415l | /r/LocalLLaMA/comments/1i1415l/today_i_start_my_own_business_and_its_thanks_to/ | false | false | self | 1 | null |
What % of these do you think will be here by 2026? | 127 | 2025-01-14T11:31:28 | omnisvosscio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i142iy | false | null | t3_1i142iy | /r/LocalLLaMA/comments/1i142iy/what_of_these_do_you_think_will_be_here_by_2026/ | false | false | 127 | {'enabled': True, 'images': [{'id': 'Xmrn95Z59IlxMNyNmN9HcqQpDJgN2DYMnnwm5QJq6mg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bk6yk62f3yce1.png?width=108&crop=smart&auto=webp&s=9770b2d5485aa3e8fc8f45bf42973c1734fc43e3', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bk6yk62f3yce1.png?width=216&crop=smart&auto=webp&s=711c4b9328eeafe486d9b0ecc6af658b81acc9f0', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/bk6yk62f3yce1.png?width=320&crop=smart&auto=webp&s=04d9b4f8221488598e24eb4a04ee9456c7e9164d', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/bk6yk62f3yce1.png?width=640&crop=smart&auto=webp&s=e98a42c3a350d3d500729e61e160e27f838a59bd', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/bk6yk62f3yce1.png?width=960&crop=smart&auto=webp&s=ca186924e302d51b0614c3b53e38b02ca4c64d5a', 'width': 960}, {'height': 606, 'url': 'https://preview.redd.it/bk6yk62f3yce1.png?width=1080&crop=smart&auto=webp&s=671e91c5591b4cd7dc20dd7b963a1313ae3a4a10', 'width': 1080}], 'source': {'height': 1088, 'url': 'https://preview.redd.it/bk6yk62f3yce1.png?auto=webp&s=874245fe9a82abd0f90140a752e69839ebe2da50', 'width': 1936}, 'variants': {}}]} |
|||
LLaMa only learns prompts not answers from finetuning | 1 | [removed] | 2025-01-14T11:32:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i1436c/llama_only_learns_prompts_not_answers_from/ | Tuuby | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1436c | false | null | t3_1i1436c | /r/LocalLLaMA/comments/1i1436c/llama_only_learns_prompts_not_answers_from/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'IAHtwi0T0oy-f581U-2zcLbf0dCoH9PB4FqlwyjxC5I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xdwOkM0s57rpnOf0RtJaHHfbn3JGXdopWC3iO_PjmXw.jpg?width=108&crop=smart&auto=webp&s=0acbff9f9a7040aff0bfdf2cc10f505fa80e98b1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xdwOkM0s57rpnOf0RtJaHHfbn3JGXdopWC3iO_PjmXw.jpg?width=216&crop=smart&auto=webp&s=ec2b9d9bed933b34e14edcfaa3dc5eaad5cb3e0c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xdwOkM0s57rpnOf0RtJaHHfbn3JGXdopWC3iO_PjmXw.jpg?width=320&crop=smart&auto=webp&s=f78a17ca2dbf5553f7ca6c37da77d69e0c13458a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xdwOkM0s57rpnOf0RtJaHHfbn3JGXdopWC3iO_PjmXw.jpg?width=640&crop=smart&auto=webp&s=77abda059b167349be79d54a128d1970080bdff4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xdwOkM0s57rpnOf0RtJaHHfbn3JGXdopWC3iO_PjmXw.jpg?width=960&crop=smart&auto=webp&s=349911e651f08428389c496968a7708df3aebd19', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xdwOkM0s57rpnOf0RtJaHHfbn3JGXdopWC3iO_PjmXw.jpg?width=1080&crop=smart&auto=webp&s=f64fa4da73a6433dea79936b7ab423d50e1be079', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/xdwOkM0s57rpnOf0RtJaHHfbn3JGXdopWC3iO_PjmXw.jpg?auto=webp&s=d03a34f4c358e7d59b42d4cec920e23551762873', 'width': 1200}, 'variants': {}}]} |
|
Today I start my own business - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/1i1438s/today_i_start_my_own_business_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1438s | false | null | t3_1i1438s | /r/LocalLLaMA/comments/1i1438s/today_i_start_my_own_business_and_its_thanks_to/ | false | false | self | 1 | null |
Today I start my own business - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:34:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i14499/today_i_start_my_own_business_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i14499 | false | null | t3_1i14499 | /r/LocalLLaMA/comments/1i14499/today_i_start_my_own_business_and_its_thanks_to/ | false | false | self | 1 | null |
Today I start my own business - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:35:43 | https://www.reddit.com/r/LocalLLaMA/comments/1i144n6/today_i_start_my_own_business_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i144n6 | false | null | t3_1i144n6 | /r/LocalLLaMA/comments/1i144n6/today_i_start_my_own_business_and_its_thanks_to/ | false | false | self | 1 | null |
Today I start my own business - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i1453d/today_i_start_my_own_business_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1453d | false | null | t3_1i1453d | /r/LocalLLaMA/comments/1i1453d/today_i_start_my_own_business_and_its_thanks_to/ | false | false | self | 1 | null |
Today I start my own business - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i145kd/today_i_start_my_own_business_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i145kd | false | null | t3_1i145kd | /r/LocalLLaMA/comments/1i145kd/today_i_start_my_own_business_and_its_thanks_to/ | false | false | self | 1 | null |
Today I start my own business - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:38:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i145yw/today_i_start_my_own_business_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i145yw | false | null | t3_1i145yw | /r/LocalLLaMA/comments/1i145yw/today_i_start_my_own_business_and_its_thanks_to/ | false | false | self | 1 | null |
Wrong time to buy a GPU? | 1 | I'm looking to buy a 2nd system dedicated to running local LLMs and have found a 3090 system with 64GB for just under £1000 (top of my budget unfortunately, and I know 24GB is still very limited, but up until now I've been surviving running LLMs on pure CPU with 32GB of RAM).
Given the 5000 series is coming out at the end of the month and DIGITS due around May, should I be holding off for a couple of months, or is there something else I should be considering?
Thanks in advance! | 2025-01-14T11:39:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i146ou/wrong_time_to_buy_a_gpu/ | Inevitable-Solid-936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i146ou | false | null | t3_1i146ou | /r/LocalLLaMA/comments/1i146ou/wrong_time_to_buy_a_gpu/ | false | false | self | 1 | null |
Today I start my own company - and it's thanks to open-source LLMs | 1 | [removed] | 2025-01-14T11:40:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i1474j/today_i_start_my_own_company_and_its_thanks_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1474j | false | null | t3_1i1474j | /r/LocalLLaMA/comments/1i1474j/today_i_start_my_own_company_and_its_thanks_to/ | false | false | self | 1 | null |
Today I start my very own org 100% devoted to open-source - and it's all thanks to LLMs | 200 | **P.S.** Big thank you to every single one of you here!! My background is in biology - not software dev. This huge milestone in my life could never have happened if it wasn't for LLMs, the fantastic open source ecosystem around them, and of course all the awesome folks here in r /LocalLlama!
Also this post was originally a lot longer but I keep getting autofiltered lol - will put the rest in comments 😄 | 2025-01-14T11:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i148es/today_i_start_my_very_own_org_100_devoted_to/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i148es | false | null | t3_1i148es | /r/LocalLLaMA/comments/1i148es/today_i_start_my_very_own_org_100_devoted_to/ | false | false | self | 200 | null |
Open-source, local RAG to index files from a shared SMB folder? | 2 | I hope this post isn’t considered low-effort. I’m looking for an open-source RAG solution that:
- Runs fully on a local machine
- Can index a network drive or shared folder, ingesting all documents within it and its subfolders. Ideally it should update itself when new documents are added.
- Provides a user interface for prompting
Is there anything out there I am missing? I went through many articles but I could not identify anything matching all the criteria. | 2025-01-14T13:06:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i15mz7/opensource_local_rag_to_index_files_from_a_shared/ | drplan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i15mz7 | false | null | t3_1i15mz7 | /r/LocalLLaMA/comments/1i15mz7/opensource_local_rag_to_index_files_from_a_shared/ | false | false | self | 2 | null |
gptme v0.26.0 released (terminal agent): now with local TTS support thanks to Kokoro! | 12 | 2025-01-14T13:12:51 | https://github.com/ErikBjare/gptme/releases/tag/v0.26.0 | ErikBjare | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i15r15 | false | null | t3_1i15r15 | /r/LocalLLaMA/comments/1i15r15/gptme_v0260_released_terminal_agent_now_with/ | false | false | 12 | {'enabled': False, 'images': [{'id': 's14wfGUN6vhoZFkAQlMDEFQYnZFQV3_L7C2BCF3a4eY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nVrrnWPE9PNaZxi0hikKCRe5FivIKQCQYSuViCRtN4E.jpg?width=108&crop=smart&auto=webp&s=6ece37e4cf62cb9ed59a0a8790f6c06e72d0f39b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nVrrnWPE9PNaZxi0hikKCRe5FivIKQCQYSuViCRtN4E.jpg?width=216&crop=smart&auto=webp&s=c07b334d2677007120b7b522086c58901129a354', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nVrrnWPE9PNaZxi0hikKCRe5FivIKQCQYSuViCRtN4E.jpg?width=320&crop=smart&auto=webp&s=2a18f6ebbe4a0053b2482d57fc2d4a4726970074', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nVrrnWPE9PNaZxi0hikKCRe5FivIKQCQYSuViCRtN4E.jpg?width=640&crop=smart&auto=webp&s=7f71b1d99417f0a767821b8c5b465f03022407f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nVrrnWPE9PNaZxi0hikKCRe5FivIKQCQYSuViCRtN4E.jpg?width=960&crop=smart&auto=webp&s=2cd8f406f8248bd7afee2287573348b7c3e69f91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nVrrnWPE9PNaZxi0hikKCRe5FivIKQCQYSuViCRtN4E.jpg?width=1080&crop=smart&auto=webp&s=0ff16a592db210ca4b9e3c21e9a9206362a94c54', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nVrrnWPE9PNaZxi0hikKCRe5FivIKQCQYSuViCRtN4E.jpg?auto=webp&s=8b98a68da206d52c05019d9dd13e8a180f4ea1a2', 'width': 1200}, 'variants': {}}]} |
||
NVidia APUs for notebooks also just around the corner (May 2025 release!) | 2 | 2025-01-14T13:20:42 | https://youtu.be/D7rR69tMAxs?si=CVkW_ZvqFGwVZjbQ&t=370 | Zyj | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1i15wd7 | false | {'oembed': {'author_name': "Moore's Law Is Dead", 'author_url': 'https://www.youtube.com/@MooresLawIsDead', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/D7rR69tMAxs?start=370&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="RTX 5060 Picture & Nvidia APU Leak: A Family of AMD Strix Halo KILLERS!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/D7rR69tMAxs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'RTX 5060 Picture & Nvidia APU Leak: A Family of AMD Strix Halo KILLERS!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i15wd7 | /r/LocalLLaMA/comments/1i15wd7/nvidia_apus_for_notebooks_also_just_around_the/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'oToubsN2eZkeLr5iDDkYQbfYrJgxus9NBONsgpzaXiA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wC4DxBzn0bkUQbBwl3DM46XFZ5rIkcH8ELtQY-BnjB4.jpg?width=108&crop=smart&auto=webp&s=50faa225b68c2290f3179b60c3ed47c030b1ca16', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wC4DxBzn0bkUQbBwl3DM46XFZ5rIkcH8ELtQY-BnjB4.jpg?width=216&crop=smart&auto=webp&s=3e0fe18a39764470556d50eba18e5ca63c063303', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wC4DxBzn0bkUQbBwl3DM46XFZ5rIkcH8ELtQY-BnjB4.jpg?width=320&crop=smart&auto=webp&s=15075443ade86cd753617579ea2aaf16bd20f7b6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wC4DxBzn0bkUQbBwl3DM46XFZ5rIkcH8ELtQY-BnjB4.jpg?auto=webp&s=a5f62f87f7b8393e694fe4254f7ae6b09bac494a', 'width': 480}, 'variants': {}}]} |
||
AI Workflow using WeatherStack API | 1 | [removed] | 2025-01-14T13:20:56 | https://www.reddit.com/r/LocalLLaMA/comments/1i15wil/ai_workflow_using_weatherstack_api/ | 0xhbam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i15wil | false | null | t3_1i15wil | /r/LocalLLaMA/comments/1i15wil/ai_workflow_using_weatherstack_api/ | false | false | self | 1 | null |
AI Workflow using WeatherStack API
| 1 | [removed] | 2025-01-14T13:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i15y67/ai_workflow_using_weatherstack_api/ | 0xhbam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i15y67 | false | null | t3_1i15y67 | /r/LocalLLaMA/comments/1i15y67/ai_workflow_using_weatherstack_api/ | false | false | self | 1 | null |
Deepseek iOS App Released | 1 | [removed] | 2025-01-14T13:31:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i163ic/deepseek_ios_app_released/ | Formal-Narwhal-1610 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i163ic | false | null | t3_1i163ic | /r/LocalLLaMA/comments/1i163ic/deepseek_ios_app_released/ | false | false | self | 1 | null |
Apparently all AI fail this simple question. | 0 | 2025-01-14T13:47:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i16eq4/apparently_all_ai_fail_this_simple_question/ | Robert__Sinclair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i16eq4 | false | null | t3_1i16eq4 | /r/LocalLLaMA/comments/1i16eq4/apparently_all_ai_fail_this_simple_question/ | false | false | 0 | null |
||
Best RAG local 2025 for outlook and text/pdf files | 1 | [removed] | 2025-01-14T13:52:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i16i7q/best_rag_local_2025_for_outlook_and_textpdf_files/ | ialocalllm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i16i7q | false | null | t3_1i16i7q | /r/LocalLLaMA/comments/1i16i7q/best_rag_local_2025_for_outlook_and_textpdf_files/ | false | false | self | 1 | null |
do people/systems use this prompting/system technique? 'Beyond the Prompt: Creating Richer Contexts' | 1 | [removed] | 2025-01-14T13:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i16khy/do_peoplesystems_use_this_promptingsystem/ | inteblio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i16khy | false | null | t3_1i16khy | /r/LocalLLaMA/comments/1i16khy/do_peoplesystems_use_this_promptingsystem/ | false | false | self | 1 | null |
openbmb/MiniCPM-o-2_6 · Hugging Face | 37 | The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for realtime speech conversation and multimodal live streaming. | 2025-01-14T13:57:25 | https://huggingface.co/openbmb/MiniCPM-o-2_6 | Durian881 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1i16lvy | false | null | t3_1i16lvy | /r/LocalLLaMA/comments/1i16lvy/openbmbminicpmo2_6_hugging_face/ | false | false | 37 | {'enabled': False, 'images': [{'id': '47zIFZcMoq4eLfMpPpx9UsJi5Oq45jaPMLy4-KhnPPk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=108&crop=smart&auto=webp&s=182864ff8445baab94c3baf94f87c914c070fdb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=216&crop=smart&auto=webp&s=167d61400fbd50a227ebcf27a757addebb5b38c3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=320&crop=smart&auto=webp&s=4a713995bcc7da68979a173d6d51f91a0c0d1dd1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=640&crop=smart&auto=webp&s=bf06c624cc0dcbf599f5edea7b4be7e420f634b7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=960&crop=smart&auto=webp&s=c8a17316ff5f86130a715ab4928aa91486aaa2a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=1080&crop=smart&auto=webp&s=f098b938d40335f539be2c35054d1e8aaceec2b1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?auto=webp&s=34c637b0cd1f5a0766fb85d10608f3234ae6d28c', 'width': 1200}, 'variants': {}}]} |
|
BEST RAG local LLM for emails , pdf , text | 1 | [removed] | 2025-01-14T14:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i16pdt/best_rag_local_llm_for_emails_pdf_text/ | Proof-Exercise2695 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i16pdt | false | null | t3_1i16pdt | /r/LocalLLaMA/comments/1i16pdt/best_rag_local_llm_for_emails_pdf_text/ | false | false | self | 1 | null |
for better results, should the LLM break down the prompt as-it-reads it? Just as we humans think about what is being said, as it is input. | 1 | [removed] | 2025-01-14T14:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i16pj5/for_better_results_should_the_llm_break_down_the/ | inteblio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i16pj5 | false | null | t3_1i16pj5 | /r/LocalLLaMA/comments/1i16pj5/for_better_results_should_the_llm_break_down_the/ | false | false | self | 1 | null |
Llama 3 8b or Mistral Nemo 12b for 12gb Vram? | 10 | I have a Ryzen 5 5500 and an RTX 3060 12GB. I'm new to LLM stuff, but I want to start learning to train one. Which one should I use? I found online that both are fantastic, but Llama might be too much with 12GB? | 2025-01-14T14:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i16s5q/llama_3_8b_or_mistral_nemo_12b_for_12gb_vram/ | NaviGray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i16s5q | false | null | t3_1i16s5q | /r/LocalLLaMA/comments/1i16s5q/llama_3_8b_or_mistral_nemo_12b_for_12gb_vram/ | false | false | self | 10 | null |
Best RAG Local LLM that read emails / pdf | 1 | [removed] | 2025-01-14T14:05:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i16sg1/best_rag_local_llm_that_read_emails_pdf/ | Proof-Exercise2695 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i16sg1 | false | null | t3_1i16sg1 | /r/LocalLLaMA/comments/1i16sg1/best_rag_local_llm_that_read_emails_pdf/ | false | false | self | 1 | null |
Introduction in fine tuning | 2 | What resources can I use to learn about fine-tuning LLMS? | 2025-01-14T14:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i17b53/introduction_in_fine_tuning/ | Apart_Expert_5551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i17b53 | false | null | t3_1i17b53 | /r/LocalLLaMA/comments/1i17b53/introduction_in_fine_tuning/ | false | false | self | 2 | null |
Best fine-tuning libs | 1 | [removed] | 2025-01-14T14:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i17euv/best_finetuning_libs/ | Tiberius_Gladiator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i17euv | false | null | t3_1i17euv | /r/LocalLLaMA/comments/1i17euv/best_finetuning_libs/ | false | false | self | 1 | null |
OASIS: Open social media stimulator that uses up to 1 million agents. | 551 | 2025-01-14T14:43:05 | omnisvosscio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i17k5e | false | null | t3_1i17k5e | /r/LocalLLaMA/comments/1i17k5e/oasis_open_social_media_stimulator_that_uses_up/ | false | false | default | 551 | {'enabled': True, 'images': [{'id': 'rgfjjzbf1zce1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/rgfjjzbf1zce1.png?width=108&crop=smart&auto=webp&s=8f719ac7a8568d3c067efaa8a1f64603f6433090', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/rgfjjzbf1zce1.png?width=216&crop=smart&auto=webp&s=b4cb46d3d17dae0da0828df1156954338ef0e35e', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/rgfjjzbf1zce1.png?width=320&crop=smart&auto=webp&s=e6d3062b574e44f0b320b6864b6da0852880f525', 'width': 320}, {'height': 293, 'url': 'https://preview.redd.it/rgfjjzbf1zce1.png?width=640&crop=smart&auto=webp&s=ad7c4c95213e6848f1fad91fc11eacb2cb18e3b8', 'width': 640}, {'height': 440, 'url': 'https://preview.redd.it/rgfjjzbf1zce1.png?width=960&crop=smart&auto=webp&s=ec81f0d1cc1db08a607b06fce4c72402c01d3179', 'width': 960}, {'height': 495, 'url': 'https://preview.redd.it/rgfjjzbf1zce1.png?width=1080&crop=smart&auto=webp&s=a7f7298352563d45ad0081b2acbbcff65ee49830', 'width': 1080}], 'source': {'height': 3839, 'url': 'https://preview.redd.it/rgfjjzbf1zce1.png?auto=webp&s=fbef069fa35bf759f31fdb4a3d5210d0acfb4e22', 'width': 8368}, 'variants': {}}]} |
||
Deepseek V3 on AMD Epyc for coding? | 1 | [removed] | 2025-01-14T14:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1i17oc9/deepseek_v3_on_amd_epyc_for_coding/ | NewBrilliant6795 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i17oc9 | false | null | t3_1i17oc9 | /r/LocalLLaMA/comments/1i17oc9/deepseek_v3_on_amd_epyc_for_coding/ | false | false | self | 1 | null |
Best Model to summarize YouTube Transcripts | 1 | [removed] | 2025-01-14T15:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i182sh/best_model_to_summarize_youtube_transcripts/ | flamefibers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i182sh | false | null | t3_1i182sh | /r/LocalLLaMA/comments/1i182sh/best_model_to_summarize_youtube_transcripts/ | false | false | self | 1 | null |
Any good guide on fine tuning a new race behavior on a LLM, for roleplaying? | 0 | Hello,
I'm running Koboldcpp with a nvidia GPU with 16 GB of vram.
I want to fine tune an existing gguf model, in a way that:
- add the characteristics and behavior of a new humanoid race, in a way that my character and NPCs of that race behave and talk according to it;
- put all that is known of that race into a fictitious book or classified document that can eventually be reached by my character and/or NPCs;
- by visiting certain places, I can meet NPCs that talk about rumors of people commenting on the existence of a book detailing a mythological race.
- the full "book" contents are stored inside the LLM and can be reached and learned by NPCs and the player.
Am I asking too much? :D
Can someone point me to where to find info on how to format the book contents, the example dialogue lines from human NPCs when interacting with individuals of this race, and example dialogue lines from individuals of this race?
Also, I'm a newbie and have never fine-tuned an LLM, so I need instructions on how to do it on Windows (but I know how to use, and could install, any Linux distro in a VM).
Also, if anyone knows of a way of playing multiplayer (people connecting to my KoboldCpp or similar app remotely), I'll be glad to know the details.
Thanks in advance | 2025-01-14T15:07:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i1831x/any_good_guide_on_fine_tuning_a_new_race_behavior/ | GoodSamaritan333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1831x | false | null | t3_1i1831x | /r/LocalLLaMA/comments/1i1831x/any_good_guide_on_fine_tuning_a_new_race_behavior/ | false | false | self | 0 | null |
Tool Calling vs. Deterministic Chains | 1 | [removed] | 2025-01-14T15:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i18jxx/tool_calling_vs_deterministic_chains/ | Thybrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i18jxx | false | null | t3_1i18jxx | /r/LocalLLaMA/comments/1i18jxx/tool_calling_vs_deterministic_chains/ | false | false | self | 1 | null |
Qwen team hasn't released a 72b coder model in a long time | 1 | [removed] | 2025-01-14T15:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i18oj2/qwen_team_hasnt_released_a_72b_coder_model_in_a/ | AdventurousSwim1312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i18oj2 | false | null | t3_1i18oj2 | /r/LocalLLaMA/comments/1i18oj2/qwen_team_hasnt_released_a_72b_coder_model_in_a/ | false | false | self | 1 | null |
DDR6 RAM and a reasonable GPU should be able to run 70b models with good speed | 84 | Right now, low-VRAM GPUs are the bottleneck in running bigger models, but DDR6 RAM should somewhat fix this issue. The RAM can supplement GPUs to run LLMs at a pretty good speed.
Running bigger models on CPU alone is not ideal; a reasonable-speed GPU will still be needed to calculate the context. Let's use an RTX 4080 as an example, but a slower one is fine as well.
A 70B Q4_K_M model is ~40 GB
8192 context is around 3.55 GB
An RTX 4080 can hold around 12 GB of the model + 3.55 GB of context, leaving 0.45 GB for system memory.
RTX 4080 memory bandwidth is 716.8 GB/s x 0.7 for efficiency = ~502 GB/s
For DDR6 RAM, it's hard to say for sure, but it should be around twice the speed of DDR5 and support quad channel, so it should be close to 360 GB/s x 0.7 = 252 GB/s
(0.3×502) + (0.7×252) = 327 GB/s
So the model should run at around 8.2 tokens/s
It should be a pretty reasonable speed for the average user. Even a slower GPU should be fine as well.
If I made a mistake in the calculation, feel free to let me know. | 2025-01-14T15:51:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i192xf/ddr6_ram_and_a_reasonable_gpu_should_be_able_to/ | itsnottme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i192xf | false | null | t3_1i192xf | /r/LocalLLaMA/comments/1i192xf/ddr6_ram_and_a_reasonable_gpu_should_be_able_to/ | false | false | self | 84 | null |
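The estimate above can be re-run as a short Python sketch. The bandwidth figures, the 0.7 efficiency factor, and the 30/70 GPU/RAM split are taken straight from the post; the DDR6 numbers remain speculative, as the post itself notes.

```python
# Rough tokens/s estimate for split GPU + system-RAM inference,
# following the assumptions in the post above (DDR6 figures are speculative).

model_size_gb = 40.0                      # 70B Q4_K_M weights
gpu_share = 12.0 / model_size_gb          # ~12 GB of weights fit on the RTX 4080
ram_share = 1.0 - gpu_share               # remaining ~28 GB streamed from system RAM

gpu_bw = 716.8 * 0.7                      # RTX 4080 bandwidth (GB/s) at ~70% efficiency
ram_bw = 360.0 * 0.7                      # assumed quad-channel DDR6 bandwidth, same factor

# Effective bandwidth is the share-weighted average of the two memory pools.
effective_bw = gpu_share * gpu_bw + ram_share * ram_bw

# Each generated token has to read (roughly) the whole model once.
tokens_per_s = effective_bw / model_size_gb
print(f"~{effective_bw:.0f} GB/s effective, ~{tokens_per_s:.1f} tokens/s")  # ~327 GB/s, ~8.2 tokens/s
```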
Agentic setup beats vanilla LLM usage by a huge margin | 1 | [removed] | 2025-01-14T15:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i1963w/agentic_setup_beats_vanilla_llm_usage_by_a_huge/ | Kitchen-Bear-2733 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1963w | false | null | t3_1i1963w | /r/LocalLLaMA/comments/1i1963w/agentic_setup_beats_vanilla_llm_usage_by_a_huge/ | false | false | 1 | null |
|
Agents setups beat vanilla LLMs by a huge margin on several benchmarks | 1 | [removed] | 2025-01-14T16:02:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i19bvl/agents_setups_beat_vanilla_llms_by_a_huge_margin/ | Kitchen-Bear-2733 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i19bvl | false | null | t3_1i19bvl | /r/LocalLLaMA/comments/1i19bvl/agents_setups_beat_vanilla_llms_by_a_huge_margin/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'sF7hCUDndWWbmpDZiHanjKBW4Sl8e86xFd0Z97QZ7E4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N-H5PzqY4rt9elbIRf9c3Uzl0KVn52kJAZxkzPWy-qY.jpg?width=108&crop=smart&auto=webp&s=44c4222304d6deade9209e8e950d2bc002bb7345', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N-H5PzqY4rt9elbIRf9c3Uzl0KVn52kJAZxkzPWy-qY.jpg?width=216&crop=smart&auto=webp&s=e1137b8c603f722aaeb68fbefd15b3ace692e54c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N-H5PzqY4rt9elbIRf9c3Uzl0KVn52kJAZxkzPWy-qY.jpg?width=320&crop=smart&auto=webp&s=9ead6b02cb9625dce15a92907ab9ef8777e6be84', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N-H5PzqY4rt9elbIRf9c3Uzl0KVn52kJAZxkzPWy-qY.jpg?width=640&crop=smart&auto=webp&s=39a8be2677281f385ebfc0d70b91f18ce12ecf19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N-H5PzqY4rt9elbIRf9c3Uzl0KVn52kJAZxkzPWy-qY.jpg?width=960&crop=smart&auto=webp&s=9e8a3a10f113293f9b252dee641660329f717c39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N-H5PzqY4rt9elbIRf9c3Uzl0KVn52kJAZxkzPWy-qY.jpg?width=1080&crop=smart&auto=webp&s=7e48edb90bf1952cb86676328342754ca1f27193', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N-H5PzqY4rt9elbIRf9c3Uzl0KVn52kJAZxkzPWy-qY.jpg?auto=webp&s=a1e1dba728196fb8782d18ecdc434bc167da7f39', 'width': 1200}, 'variants': {}}]} |
|
Agentic setups beat vanilla LLMs by a huge margin 📈 | 180 | Hello folks 👋🏻 I'm Merve and I work on Hugging Face's new agents library, smolagents.
We recently observed that many people are sceptical of agentic systems, so we benchmarked our CodeAgents (agents that write their actions/tool calls in Python blobs) against vanilla LLM calls.
Plot twist: agentic setups easily bring 40-percentage-point improvements compared to vanilla LLMs. This crazy score increase makes sense; let's take this SimpleQA question:
"Which Dutch player scored an open-play goal in the 2022 Netherlands vs Argentina game in the men’s FIFA World Cup?"
If I had to answer that myself, I certainly would do better with access to a web search tool than with my vanilla knowledge. (argument put forward by Andrew Ng in a great talk at Sequoia)
Here each benchmark is a subsample of ~50 questions from the original benchmarks. Find the whole benchmark here: [https://github.com/huggingface/smolagents/blob/main/examples/benchmark.ipynb](https://github.com/huggingface/smolagents/blob/main/examples/benchmark.ipynb)
https://preview.redd.it/7p6lbz7fgzce1.png?width=1467&format=png&auto=webp&s=30d91e22b32e572e8824b08b4d95a52aeb82c5d5
| 2025-01-14T16:05:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i19e8u/agentic_setups_beat_vanilla_llms_by_a_huge_margin/ | unofficialmerve | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i19e8u | false | null | t3_1i19e8u | /r/LocalLLaMA/comments/1i19e8u/agentic_setups_beat_vanilla_llms_by_a_huge_margin/ | false | false | 180 | null |
|
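A minimal reproduction of the kind of setup benchmarked above can be sketched with smolagents itself; the class names (CodeAgent, DuckDuckGoSearchTool, HfApiModel) are taken from the smolagents documentation, and the question is the SimpleQA example quoted in the post.

```python
# Sketch of a CodeAgent with a web-search tool, per the smolagents docs.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # gives the model access to web search
    model=HfApiModel(),              # hosted model via the Hugging Face API (may need an HF token)
)

answer = agent.run(
    "Which Dutch player scored an open-play goal in the 2022 "
    "Netherlands vs Argentina game in the men's FIFA World Cup?"
)
print(answer)
```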
Llama.cpp server locks up randomly serving Llama-3.2-3B-Instruct-Q8_0.gguf | 6 | Has anyone come across something like this? It looks like the context window is getting "clogged up", as it were, but I'm unsure how to make the request fail when that happens, as opposed to just locking up and rendering the server useless.
This is how this server is started in Docker:
`llama1:`
`image: llama-cpp-docker`
`container_name: llama1`
`restart: unless-stopped`
`environment:`
`- GGML_CUDA_NO_PINNED=1`
`- LLAMA_CTX_SIZE=8192`
`- LLAMA_MODEL=/models/Llama-3.2-3B-Instruct-Q8_0.gguf`
`- LLAMA_N_GPU_LAYERS=99`
`- LLAMA_BATCH_SIZE=512`
`- LLAMA_UBATCH_SIZE=1024`
`- LLAMA_THREADS=3`
`- LLAMA_LOG_FILE=llama`
Below is what the log of the failed request looks like. Any nudge in the right direction will be greatly appreciated!
`srv update_slots: all slots are idle`
`slot launch_slot_: id 0 | task 1649 | processing task`
`slot update_slots: id 0 | task 1649 | new prompt, n_ctx_slot = 8192, n_keep = 0, n_prompt_tokens = 3866`
`slot update_slots: id 0 | task 1649 | kv cache rm [0, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 512, n_tokens = 512, progress = 0.132437`
`slot update_slots: id 0 | task 1649 | kv cache rm [512, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 1024, n_tokens = 512, progress = 0.264873`
`slot update_slots: id 0 | task 1649 | kv cache rm [1024, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 1536, n_tokens = 512, progress = 0.397310`
`slot update_slots: id 0 | task 1649 | kv cache rm [1536, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 2048, n_tokens = 512, progress = 0.529747`
`slot update_slots: id 0 | task 1649 | kv cache rm [2048, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 2560, n_tokens = 512, progress = 0.662183`
`slot update_slots: id 0 | task 1649 | kv cache rm [2560, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 3072, n_tokens = 512, progress = 0.794620`
`slot update_slots: id 0 | task 1649 | kv cache rm [3072, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 3584, n_tokens = 512, progress = 0.927056`
`slot update_slots: id 0 | task 1649 | kv cache rm [3584, end)`
`slot update_slots: id 0 | task 1649 | prompt processing progress, n_past = 3866, n_tokens = 282, progress = 1.000000`
`slot update_slots: id 0 | task 1649 | prompt done, n_past = 3866, n_tokens = 282`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
`slot update_slots: id 0 | task 1649 | slot context shift, n_keep = 1, n_left = 8190, n_discard = 4095`
| 2025-01-14T16:07:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i19fu5/llamacpp_server_locks_up_randomly_serving/ | lurkalotter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i19fu5 | false | null | t3_1i19fu5 | /r/LocalLLaMA/comments/1i19fu5/llamacpp_server_locks_up_randomly_serving/ | false | false | self | 6 | null |
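One knob worth noting for this kind of setup: llama-server's /completion endpoint accepts an n_predict field that hard-caps the number of generated tokens per request, so a single call cannot keep generating through repeated context shifts. A minimal sketch follows; the port, prompt, and limits are placeholders and are not taken from the Docker config above.

```python
# Hypothetical bounded request against a llama-server instance.
import requests

resp = requests.post(
    "http://localhost:8080/completion",   # llama-server's default port; adjust to the Docker mapping
    json={
        "prompt": "Summarize the following text: ...",
        "n_predict": 512,                 # stop after at most 512 generated tokens
        "temperature": 0.7,
    },
    timeout=300,                          # fail the HTTP call instead of waiting indefinitely
)
resp.raise_for_status()
print(resp.json()["content"])
```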
Ai2 Discord Community! | 1 | [removed] | 2025-01-14T16:10:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i19hxi/ai2_discord_community/ | DefiantHost6488 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i19hxi | false | null | t3_1i19hxi | /r/LocalLLaMA/comments/1i19hxi/ai2_discord_community/ | false | false | self | 1 | null |
Finetuning Llama for a step by step synthesis | 1 | [removed] | 2025-01-14T16:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1i19po4/finetuning_llama_for_a_step_by_step_synthesis/ | No-Judge3265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i19po4 | false | null | t3_1i19po4 | /r/LocalLLaMA/comments/1i19po4/finetuning_llama_for_a_step_by_step_synthesis/ | false | false | self | 1 | null |
Deepseek v3 Experiences | 24 | Hi All,
I would like to probe the community to find out your experiences with running Deepseek v3 locally. I have been building a local inference machine and managed to get enough RAM to be able to run the Q4_K_M.
Build:
Xeon w7-3455
Asus W790 Sage
432gb DDR5 @ 4800 ( 4x32, 3x96, 16 )
3 x RTX 3090
llama command:
./build/bin/llama-server --model ~/llm/models/unsloth_DeepSeek-V3-GGUF_f_Q4_K_M/DeepSeek-V3-Q4_K_M/DeepSeek-V3-Q4_K_M-00001-of-00009.gguf --cache-type-k q5_0 --threads 22 --host 0.0.0.0 --no-context-shift --port 9999 --ctx-size 8240 --gpu-layers 6
Results with small context ("What is deepseek?", about 7 tokens):
prompt eval time = 1317.45 ms / 7 tokens ( 188.21 ms per token, 5.31 tokens per second)
eval time = 81081.39 ms / 269 tokens ( 301.42 ms per token, 3.32 tokens per second)
total time = 82398.83 ms / 276 tokens
Results with large context: ( Shopify theme file + prompt )
prompt eval time = 368904.48 ms / 3099 tokens ( 119.04 ms per token, 8.40 tokens per second)
eval time = 372849.73 ms / 779 tokens ( 478.63 ms per token, 2.09 tokens per second)
total time = 741754.21 ms / 3878 tokens
It doesn't seem like running this model locally makes any sense until the ktransformers team can integrate it. What do you guys think? Is there something I am missing to get the performance higher? | 2025-01-14T16:30:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i19ysx/deepseek_v3_experiences/ | easyrider99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i19ysx | false | null | t3_1i19ysx | /r/LocalLLaMA/comments/1i19ysx/deepseek_v3_experiences/ | false | false | self | 24 | null |
Beating cuBLAS in Single-Precision General Matrix Multiplication | 1 | 2025-01-14T16:31:33 | https://salykova.github.io/sgemm-gpu | salykova | salykova.github.io | 1970-01-01T00:00:00 | 0 | {} | 1i19zw1 | false | null | t3_1i19zw1 | /r/LocalLLaMA/comments/1i19zw1/beating_cublas_in_singleprecision_general_matrix/ | false | false | default | 1 | null |
|
MiniMax-Text-01 - A powerful new MoE language model with 456B total parameters (45.9 billion activated) | 294 | [https://huggingface.co/MiniMaxAI/MiniMax-Text-01](https://huggingface.co/MiniMaxAI/MiniMax-Text-01)
https://preview.redd.it/8os84sl2mzce1.png?width=3320&format=png&auto=webp&s=b4f6f93b8a0965d65139ba727de29c55880f1b91
**Description:** MiniMax-Text-01 is a powerful language model with 456 billion total parameters, of which 45.9 billion are activated per token. To better unlock the long context capabilities of the model, MiniMax-Text-01 adopts a hybrid architecture that combines Lightning Attention, Softmax Attention and Mixture-of-Experts (MoE). Leveraging advanced parallel strategies and innovative compute-communication overlap methods—such as Linear Attention Sequence Parallelism Plus (LASP+), varlen ring attention, Expert Tensor Parallel (ETP), etc., MiniMax-Text-01's training context length is extended to 1 million tokens, and it can handle a context of up to 4 million tokens during the inference. On various academic benchmarks, MiniMax-Text-01 also demonstrates the performance of a top-tier model.
**Model Architecture:**
* Total Parameters: 456B
* Activated Parameters per Token: 45.9B
* Number of Layers: 80
* Hybrid Attention: a softmax attention layer is positioned after every 7 lightning attention layers.
* Number of attention heads: 64
* Attention head dimension: 128
* Mixture of Experts:
* Number of experts: 32
* Expert hidden dimension: 9216
* Top-2 routing strategy
* Positional Encoding: Rotary Position Embedding (RoPE) applied to half of the attention head dimension with a base frequency of 10,000,000
* Hidden Size: 6144
* Vocab Size: 200,064
**Blog post:** [https://www.minimaxi.com/en/news/minimax-01-series-2](https://www.minimaxi.com/en/news/minimax-01-series-2)
**HuggingFace:** [https://huggingface.co/MiniMaxAI/MiniMax-Text-01](https://huggingface.co/MiniMaxAI/MiniMax-Text-01)
**Try online:** [https://www.hailuo.ai/](https://www.hailuo.ai/)
**Github:** [https://github.com/MiniMax-AI/MiniMax-01](https://github.com/MiniMax-AI/MiniMax-01)
**Homepage:** [https://www.minimaxi.com/en](https://www.minimaxi.com/en)
**PDF paper:** [https://filecdn.minimax.chat/\_Arxiv\_MiniMax\_01\_Report.pdf](https://filecdn.minimax.chat/_Arxiv_MiniMax_01_Report.pdf)
Note: I am not affiliated
GGUF quants might take a while because the architecture is new (MiniMaxText01ForCausalLM) | 2025-01-14T16:41:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i1a88y/minimaxtext01_a_powerful_new_moe_language_model/ | Many_SuchCases | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1a88y | false | null | t3_1i1a88y | /r/LocalLLaMA/comments/1i1a88y/minimaxtext01_a_powerful_new_moe_language_model/ | false | false | 294 | {'enabled': False, 'images': [{'id': 't-JH8IngcHivm1YVPoa7hh4mpZsdS9DbW7wYMvhxr-w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=108&crop=smart&auto=webp&s=4e357908a6066334b13339e17cc3095d7b4423a2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=216&crop=smart&auto=webp&s=2e4bb466e39c0d1903bf3066a3d0dea689925709', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=320&crop=smart&auto=webp&s=99aba628436f65b36c0505f0486e41298b1a9462', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=640&crop=smart&auto=webp&s=ceab60c72e05525604b9367fa7915922146839a5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=960&crop=smart&auto=webp&s=280815ef68e57515faad9d1dc62361728eb48c64', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=1080&crop=smart&auto=webp&s=244b695811b8b50aac245615b143ce76ecbb76af', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?auto=webp&s=447d8333eccd1f5454e014f3a01bcf504de0e10d', 'width': 1200}, 'variants': {}}]} |
|
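To make the hybrid layout concrete, here is a small illustrative sketch of the layer pattern described above: one softmax-attention block after every 7 lightning-attention blocks, with a 32-expert top-2 MoE in every layer. This only restates the published hyperparameters; it is not the actual implementation.

```python
# Illustrative layer layout for MiniMax-Text-01, per the model card figures above.
NUM_LAYERS = 80
NUM_EXPERTS = 32
TOP_K = 2

def attention_kind(layer_idx: int) -> str:
    """Every 8th block (indices 7, 15, 23, ...) uses softmax attention; the rest use lightning attention."""
    return "softmax" if (layer_idx + 1) % 8 == 0 else "lightning"

layout = [attention_kind(i) for i in range(NUM_LAYERS)]
print(layout.count("softmax"), "softmax blocks,", layout.count("lightning"), "lightning blocks")
# -> 10 softmax blocks, 70 lightning blocks
```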
Coding model recommendations | 1 | Hey guys,
What are the latest models that run decently on an RTX 3090 24GB? I'm looking for help writing code locally.
Also, do you guys think that adding an RTX 3060 12GB would be helpful? Or should I just get an RTX 4060 16GB? | 2025-01-14T16:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i1aiu0/coding_model_recommendations/ | gomezer1180 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1aiu0 | false | null | t3_1i1aiu0 | /r/LocalLLaMA/comments/1i1aiu0/coding_model_recommendations/ | false | false | self | 1 | null |
What is your efficient go-to model for TTS? | 29 | What do I want?
* CPU inference
* Multilanguage. Not just the top 7 languages.
* Voice cloning. I prefer voice cloning over fine-tuning for most cases.
I checked recent posts about TTS models and the leaderboard. Tried 3 of them:
[Piper](https://github.com/rhasspy/piper)
* This is the fastest model in my experience. It even works instantly on my crappy server.
* Multilanguage.
* It doesn't have voice cloning but fine-tuning is not hard.
* One thing I don't like: it is not maintained anymore. I wish they could update the PyTorch version to 2.0, so I could easily fine-tune on rented GPU servers (48GB+ GPU). Currently, I couldn't even fine-tune on an RTX 4090.
[F5TTS](https://github.com/SWivid/F5-TTS/)
* Multilanguage and voice cloning.
* Inference speed is bad compared to Piper.
[XTTS (coqui-ai-fork)](https://github.com/idiap/coqui-ai-TTS)
* Multilanguage.
* Doesn't have voice cloning.
* Inference speed is bad compared to Piper.
[Kokoro-TTS](https://huggingface.co/hexgrad/Kokoro-82M)
* It is #1 on the leaderboard, I didn't even try because [language support](https://huggingface.co/hexgrad/Kokoro-82M/discussions/30) is not enough for me. | 2025-01-14T17:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i1ax9u/what_is_your_efficient_goto_model_for_tts/ | requizm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1ax9u | false | null | t3_1i1ax9u | /r/LocalLLaMA/comments/1i1ax9u/what_is_your_efficient_goto_model_for_tts/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'LSqjtG_hYdY37QjQtHoNAacsE-RFICCFSgItgI8Yk5g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j2VhQi1-6c-tbqI9MwSL1j4npZ8l1egmSWrjC4wxrRQ.jpg?width=108&crop=smart&auto=webp&s=0f739ca3a03fb0096192d68ea924c7452189cd3b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/j2VhQi1-6c-tbqI9MwSL1j4npZ8l1egmSWrjC4wxrRQ.jpg?width=216&crop=smart&auto=webp&s=5ad27695d00501ca634f4be0edaae3011318fde9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/j2VhQi1-6c-tbqI9MwSL1j4npZ8l1egmSWrjC4wxrRQ.jpg?width=320&crop=smart&auto=webp&s=49902e1f4f569259332ba0cdb270b6bb142525cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/j2VhQi1-6c-tbqI9MwSL1j4npZ8l1egmSWrjC4wxrRQ.jpg?width=640&crop=smart&auto=webp&s=65919d4a3a407fb4c788ff0c9786fe8624a81651', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/j2VhQi1-6c-tbqI9MwSL1j4npZ8l1egmSWrjC4wxrRQ.jpg?width=960&crop=smart&auto=webp&s=cba5331523843a240769c9bad04a9fec2af5ed6b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/j2VhQi1-6c-tbqI9MwSL1j4npZ8l1egmSWrjC4wxrRQ.jpg?width=1080&crop=smart&auto=webp&s=bd8aceca8523a356f577b41aeba5fd469a6760ab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j2VhQi1-6c-tbqI9MwSL1j4npZ8l1egmSWrjC4wxrRQ.jpg?auto=webp&s=dde24be0176cebad5e0759c8ccff1fc09b1517b9', 'width': 1200}, 'variants': {}}]} |
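For reference, Piper's basic synthesis step can be scripted as below; the flags follow the Piper README, and the voice file name is just an example, not a recommendation.

```python
# Sketch: call the Piper CLI from Python (flags per the Piper README; voice file is an example).
import subprocess

text = "Welcome to the world of speech synthesis!"
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "welcome.wav"],
    input=text.encode("utf-8"),   # Piper reads the text to synthesize from stdin
    check=True,
)
```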
An LLM serving framework that can fast run o1-like SmallThinker on smartphones | 34 | Today, we're excited to announce the release of PowerServe, a highly optimized serving framework specifically designed for smartphones.
[GitHub](https://github.com/powerserve-project/PowerServe)
[Running on Qualcomm 8 Gen4](https://reddit.com/link/1i1b0bo/video/uf85o248szce1/player)
Key Features:
* **One-click** deployment
* NPU **speculative** inference support
* Achieves **40** tokens/s running o1-like reasoning model Smallthinker on mobile devices
* Supports **Android** and HarmonyOS Next smartphones
* Supports Qwen2, Llama3 series and SmallThinker-3B-Preview
In the future, we will integrate more acceleration methods, including PowerInfer, PowerInfer-2, and more speculative inference algorithms. | 2025-01-14T17:13:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i1b0bo/an_llm_serving_framework_that_can_fast_run_o1like/ | Zealousideal_Bad_52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1b0bo | false | null | t3_1i1b0bo | /r/LocalLLaMA/comments/1i1b0bo/an_llm_serving_framework_that_can_fast_run_o1like/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'etl9O16fkF9bEHzKVVPBi2R9yfIXGNbFWgCB9rO8uA8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qjrUvoYZJnvSkWJhxCodjQr_xKXfBbf0Wm7xUIvDP4Q.jpg?width=108&crop=smart&auto=webp&s=e30db5e8c9bec080b3205699f7c2f9661da6bc30', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qjrUvoYZJnvSkWJhxCodjQr_xKXfBbf0Wm7xUIvDP4Q.jpg?width=216&crop=smart&auto=webp&s=26ea8a8ae421bea3d2090837a7aa65aba0dc4dcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qjrUvoYZJnvSkWJhxCodjQr_xKXfBbf0Wm7xUIvDP4Q.jpg?width=320&crop=smart&auto=webp&s=31f3b083d7483e80f79a48f9e15aeaf2654f87a3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qjrUvoYZJnvSkWJhxCodjQr_xKXfBbf0Wm7xUIvDP4Q.jpg?width=640&crop=smart&auto=webp&s=48af74da7e0e05784fc236d508cf81bf9193d9e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qjrUvoYZJnvSkWJhxCodjQr_xKXfBbf0Wm7xUIvDP4Q.jpg?width=960&crop=smart&auto=webp&s=7226ed94c4d0db9264a7bfde058a31125fe06eb9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qjrUvoYZJnvSkWJhxCodjQr_xKXfBbf0Wm7xUIvDP4Q.jpg?width=1080&crop=smart&auto=webp&s=0767a2a58efa5d13dd6af7e7d8bced17281e5bea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qjrUvoYZJnvSkWJhxCodjQr_xKXfBbf0Wm7xUIvDP4Q.jpg?auto=webp&s=51fbee4c49660cf3604ac5e62d6f4f767adf9fef', 'width': 1200}, 'variants': {}}]} |
|
Windows install not working | 0 | I’ve installed from anythingllm dotcom and it installs the file structure but not the executable. The desktop icon just pops up “missing shortcut” and there is no anythingllm.exe in the folder.
I installed the Windows/ARM version because I have an AMD processor and an AMD gpu.
Any ideas what might be wrong? | 2025-01-14T17:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i1b1nz/windows_install_not_working/ | 321headbang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1b1nz | false | null | t3_1i1b1nz | /r/LocalLLaMA/comments/1i1b1nz/windows_install_not_working/ | false | false | self | 0 | null |
Transformer^2: Self-adaptive LLMs | 114 | 2025-01-14T17:16:19 | https://arxiv.org/abs/2501.06252 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1i1b2xq | false | null | t3_1i1b2xq | /r/LocalLLaMA/comments/1i1b2xq/transformer2_selfadaptive_llms/ | false | false | default | 114 | null |
|
New Thematic Generalization Benchmark: measures how effectively LLMs infer a specific "theme" from a small set of examples and anti-examples | 27 | 2025-01-14T17:30:37 | https://github.com/lechmazur/generalization | zero0_one1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i1bf5j | false | null | t3_1i1bf5j | /r/LocalLLaMA/comments/1i1bf5j/new_thematic_generalization_benchmark_measures/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'ghOo109L6NYvOtTGmgudsgekC_9SBAPGyD2Z9JfYobo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wF_HKR0BhWrREPFLC9pRJkrUmcKOwJnL6VjPCFunPqU.jpg?width=108&crop=smart&auto=webp&s=a55586872100583a6555fba999f14be3c3923f2d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wF_HKR0BhWrREPFLC9pRJkrUmcKOwJnL6VjPCFunPqU.jpg?width=216&crop=smart&auto=webp&s=fa7b14ba44dd124f0b4eb1dcbc98ca4f3125b393', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wF_HKR0BhWrREPFLC9pRJkrUmcKOwJnL6VjPCFunPqU.jpg?width=320&crop=smart&auto=webp&s=6c9a037968ee93b8a9b6eb6428fa5783b9b7c351', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wF_HKR0BhWrREPFLC9pRJkrUmcKOwJnL6VjPCFunPqU.jpg?width=640&crop=smart&auto=webp&s=713f1dd254baaea660314069551e52bcb224426f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wF_HKR0BhWrREPFLC9pRJkrUmcKOwJnL6VjPCFunPqU.jpg?width=960&crop=smart&auto=webp&s=0dba5148db70502773fc399ab891312bc7eec43d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wF_HKR0BhWrREPFLC9pRJkrUmcKOwJnL6VjPCFunPqU.jpg?width=1080&crop=smart&auto=webp&s=83297e670aea4f15b9a074317521b58a4d029274', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wF_HKR0BhWrREPFLC9pRJkrUmcKOwJnL6VjPCFunPqU.jpg?auto=webp&s=33db7b1de97a86a2182126bd8f5ec3488d13e308', 'width': 1200}, 'variants': {}}]} |
||
Difference between proprietary models and self-hosted ones? | 0 | Let me preface this by saying I am no expert in the field, just a curious reader with a compsci background.
I am wondering just how large the gap is between the best proprietary models (OpenAI's ChatGPT, Claude Sonnet, Gemini) and the best self-hosted models (for general-purpose questions and answers). I often read that the best self-hosted models aren't that far behind. However, I fail to understand how that works: the largest self-hosted models are around 400B parameters, with most being more around the 70B mark.
From my understanding, the proprietary models have over 1T parameters, and I don't see how a 70B model can provide an equivalently good experience, even if some benchmarks suggest that. I understand that sheer size isn't everything, of course, but it still makes me wonder.
Maybe someone can provide some insights here? | 2025-01-14T17:32:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i1bh1x/difference_between_proprietary_models_and/ | 4bjmc881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1bh1x | false | null | t3_1i1bh1x | /r/LocalLLaMA/comments/1i1bh1x/difference_between_proprietary_models_and/ | false | false | self | 0 | null |
Running a 2B LLM on an iphone with swift-mlx | 15 | Hey all 👋!
A bit of self-promotion in this post but hopefully that's fine :) I work at Kyutai and we released yesterday a new multilingual 2B LLM aimed at on-device inference, Helium 2B. Just wanted to share a video with the model running locally on an iPhone 16 Pro at ~28 tok/s (seems to reach ~35 tok/s when plugged in) 🚀 All that uses mlx-swift with q4 quantization - not many optimizations at this stage, so we're just relying on mlx to do all the hard work for us!
It's just a proof of concept at this stage as you cannot even enter a prompt and we don't have an instruct variant of the model anyway. We're certainly looking forward to some feedback on the model itself, we plan on supporting more languages in the near future as well as releasing the whole training pipeline. And we also plan to release more models that run on device too! | 2025-01-14T17:34:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i1bi3b/running_a_2b_llm_on_an_iphone_with_swiftmlx/ | l-m-z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1bi3b | false | null | t3_1i1bi3b | /r/LocalLLaMA/comments/1i1bi3b/running_a_2b_llm_on_an_iphone_with_swiftmlx/ | false | false | self | 15 | null |
Finetuning Llama3 8B on mediawiki data | 1 | [removed] | 2025-01-14T17:41:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i1bo8f/finetuning_llama3_8b_on_mediawiki_data/ | coderman4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1bo8f | false | null | t3_1i1bo8f | /r/LocalLLaMA/comments/1i1bo8f/finetuning_llama3_8b_on_mediawiki_data/ | false | false | self | 1 | null |
run Codestral 25.01 in a few lines of code in an app
| 0 | Codestral 25.01
new coding model #1 on LMSYS is now available in ai-gradio
pip install --upgrade "ai-gradio[mistral]"
import gradio as gr
import ai_gradio
demo = gr.load(
"mistral:codestral-latest",
src=ai_gradio.registry,
coder=True
)
demo.launch() | 2025-01-14T17:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i1bq6x/run_codestral_2501_in_a_few_lines_of_code_in_a_app/ | Illustrious_Row_9971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1bq6x | false | null | t3_1i1bq6x | /r/LocalLLaMA/comments/1i1bq6x/run_codestral_2501_in_a_few_lines_of_code_in_a_app/ | false | false | self | 0 | null |
Google Chrome AI – Making Chrome Better with AI | 1 | 2025-01-14T17:46:35 | https://www.youtube.com/watch?v=F49O9Vnh5PE&ab_channel=SukeeshV | Sukeesh | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i1bslp | false | {'oembed': {'author_name': 'Sukeesh V', 'author_url': 'https://www.youtube.com/@sukeeshv', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/F49O9Vnh5PE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Google Chrome AI - Making Chrome better with AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/F49O9Vnh5PE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Google Chrome AI - Making Chrome better with AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i1bslp | /r/LocalLLaMA/comments/1i1bslp/google_chrome_ai_making_chrome_better_with_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': '1mSkpVkpQRiyUh9YWLpIXxTuBjWdpS9a5TF-fhiLjsg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rm2DsK6asQg64Un7am1Xc3V8Ey9VQe58I6bT_KEsEWs.jpg?width=108&crop=smart&auto=webp&s=5bb31a5867763b8312c4d1f8338a05164127f127', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rm2DsK6asQg64Un7am1Xc3V8Ey9VQe58I6bT_KEsEWs.jpg?width=216&crop=smart&auto=webp&s=37c2feb8a2c4efa12726a02a187562ef225cd507', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rm2DsK6asQg64Un7am1Xc3V8Ey9VQe58I6bT_KEsEWs.jpg?width=320&crop=smart&auto=webp&s=c30b7cf0cd5a81f7ca4ae7c52f9432b76d0cfc3c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rm2DsK6asQg64Un7am1Xc3V8Ey9VQe58I6bT_KEsEWs.jpg?auto=webp&s=001b3237fec5de6b60ca10bd9ee4b93dd951a789', 'width': 480}, 'variants': {}}]} |
||
SmolGhidorah - An attempt at a Pseudo-MoE | 9 | I just finished a small pseudo-MoE utilizing Qwen 2.5 models from 1.5B to 3B. I'm hoping to get this running faster; currently, model loading and unloading takes too long. I say finished, but I still have a lot to improve!
My ideal outcome is a simple assistant I can use on my Orange PI 5+ and perhaps a Pi 5 16GB. I've wanted a small 3x3B MoE because 3B models run so well on edge devices, so I took matters into my own hands (to the best of my abilities).
I'll eventually finetune each model, and maybe the embedding model, to optimize routing a bit. I just need to wait to buy some more compute on Colab, unless I can find a better way to route queries that isn't too complex. I'm open to suggestions; I tried Mergoo, but it isn't maintained.
I also plan on using quantized models, particularly ONNX models since they'll run on my NPU.
[Here is the link](https://github.com/Smol-Kaiju/SmolGhidorah/blob/main/smolGhidorah_Psuedo_MoE_KeywordRouter.ipynb).
And here is a quick rundown:
**Models:**
Embeddings Model:
all-MiniLM-L6-v2- Handles embeddings for informed routing decisions.
***General Model:***
`Qwen/Qwen2.5-3B-Instruct` \- Handles general queries.
***Math Reasoning Model:***
`cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_1.5B_only_right` \- Specialized for mathematical reasoning tasks.
***Reasoning Model:***
`prithivMLmods/QwQ-LCoT-3B-Instruct` \- Specialized for general reasoning tasks (Plan on training a 1.5B version of this one).
**Query Routing Mechanism:**
***Keyword-Based Routing:*** First checks if the query contains keywords related to reasoning (e.g., "think", "explain", "why", etc.). If it does, it proceeds to embedding-based routing to select the most appropriate reasoning model.
**Embedding-Based Routing:** Uses precomputed average embeddings of example queries for each reasoning model. It calculates the similarity between the query embedding and the average embeddings of the reasoning models to determine which model to use. | 2025-01-14T17:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i1by1x/smolghidorah_an_attempt_at_a_psuedomoe/ | OrangeESP32x99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1by1x | false | null | t3_1i1by1x | /r/LocalLLaMA/comments/1i1by1x/smolghidorah_an_attempt_at_a_psuedomoe/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'tr2Y9HTurj59B5rsw1UzqHZfDA1jI6qZL9lPw6L-yf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gLG1DJ5-b3jfdnnMuLYafy1oYjXam6U68KqjXux6LVs.jpg?width=108&crop=smart&auto=webp&s=e6b26fe1709d839a3de6c87f79e6ad1167b2265c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gLG1DJ5-b3jfdnnMuLYafy1oYjXam6U68KqjXux6LVs.jpg?width=216&crop=smart&auto=webp&s=67aac337c96526e45c8a1c0f18e1e54cb2fc38fe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gLG1DJ5-b3jfdnnMuLYafy1oYjXam6U68KqjXux6LVs.jpg?width=320&crop=smart&auto=webp&s=70a9f080f0fbd9404347e71cdee55ac46b1b6e06', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gLG1DJ5-b3jfdnnMuLYafy1oYjXam6U68KqjXux6LVs.jpg?width=640&crop=smart&auto=webp&s=b67cfbf5359ebd012e7f44dc720eb8dc0da6c8a9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gLG1DJ5-b3jfdnnMuLYafy1oYjXam6U68KqjXux6LVs.jpg?width=960&crop=smart&auto=webp&s=36cf4c393f567f6cfa504bf2fe9c3b3e15e142f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gLG1DJ5-b3jfdnnMuLYafy1oYjXam6U68KqjXux6LVs.jpg?width=1080&crop=smart&auto=webp&s=681225d30d60f7e66add83db8fe733b09873aeb4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gLG1DJ5-b3jfdnnMuLYafy1oYjXam6U68KqjXux6LVs.jpg?auto=webp&s=33247857e8a5d3318e5e134abefde83b6659a7d7', 'width': 1200}, 'variants': {}}]} |
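To make the embedding-based routing concrete, here is a stripped-down sketch of the idea. The example queries and route names are placeholders and the real notebook differs in details; this is just the precompute-centroids-then-argmax pattern described above.

```python
# Minimal sketch of embedding-based routing with all-MiniLM-L6-v2.
# Example queries and route names are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Precompute an average embedding per specialist from a few example queries.
routes = {
    "math": ["Solve 3x + 5 = 20", "What is the derivative of x^2?"],
    "reasoning": ["Explain why the sky is blue", "Think through this riddle step by step"],
}
route_centroids = {
    name: embedder.encode(examples).mean(axis=0) for name, examples in routes.items()
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_route(query: str) -> str:
    q = embedder.encode(query)
    scores = {name: cosine(q, c) for name, c in route_centroids.items()}
    return max(scores, key=scores.get)

print(pick_route("Explain why ice floats on water"))  # expected: "reasoning"
```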
Mix voices in Kokoro-82M TTS model at any ratio | 1 | 2025-01-14T17:53:10 | https://v.redd.it/nb3uuf8ezzce1 | ozgrozer | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i1by33 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nb3uuf8ezzce1/DASHPlaylist.mpd?a=1739469207%2CMjUwNjlhZjUyOGNkZGNhNjdmYzJjMzA2NWMzMjllNGExMzMyN2FlOTRlMThlYWQ5MTkzYWRiNjVmNTMyMjUwZg%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/nb3uuf8ezzce1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/nb3uuf8ezzce1/HLSPlaylist.m3u8?a=1739469207%2CNjg0ZWFkYTdjZWQ5NDVhMmI2NjZjMDhlODZlNzZkOTY4YmQ5NGVlMTFiZjQ4ZDhlZjZlNWVmNWQ3YjQ1YzYyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nb3uuf8ezzce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1600}} | t3_1i1by33 | /r/LocalLLaMA/comments/1i1by33/mix_voices_in_kokoro82m_tts_model_at_any_ratio/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT.png?width=108&crop=smart&format=pjpg&auto=webp&s=b49f89ee5e72459c0ea25cb2f8f650b685200e2f', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT.png?width=216&crop=smart&format=pjpg&auto=webp&s=184feb7481063f1e5dd30c0a87c4e676cda6507e', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT.png?width=320&crop=smart&format=pjpg&auto=webp&s=899c4f6dd1a2729d0a4eff02d310622f72863057', 'width': 320}, {'height': 432, 'url': 'https://external-preview.redd.it/eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT.png?width=640&crop=smart&format=pjpg&auto=webp&s=be1d816d72f617ad60b5b15935b275a1dbe17ad7', 'width': 640}, {'height': 648, 'url': 'https://external-preview.redd.it/eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT.png?width=960&crop=smart&format=pjpg&auto=webp&s=51e4375a1e86f40711ccae36af4226f848693557', 'width': 960}, {'height': 729, 'url': 'https://external-preview.redd.it/eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7a8ecf2f417e7237ca9646b76c03ec5da5a98343', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eHc4ZXpmOGV6emNlMbH28UpYC3sdlJRJrRz_oFfrDzdHNk9iakYJ5CIeTFlT.png?format=pjpg&auto=webp&s=c19b4dd24dd9a0e3414335b4efb5696452c416ef', 'width': 1600}, 'variants': {}}]} |
||
Meet Parlant: A New Open-Source Framework for Reliable AI Agents (With Parlant, you can not only spin up and serve an LLM agent in minutes—with a full-fledged & responsive conversation management API—but, more importantly, you can continuously guide and improve its decision making and general behavi | 1 | 2025-01-14T18:17:43 | https://pxl.to/kgqelf6 | ai-lover | pxl.to | 1970-01-01T00:00:00 | 0 | {} | 1i1cixy | false | null | t3_1i1cixy | /r/LocalLLaMA/comments/1i1cixy/meet_parlant_a_new_opensource_framework_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MOrB5UIRL78mRDUo8KzcoISUa74xbVf8qnFVrIl8CWk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EZeaJJJIL9T-agj6Kp633lJJK1tpUGd-_LFJxObn3NE.jpg?width=108&crop=smart&auto=webp&s=042ff47a214a13623bb2e8973ca3bbae8e5a1aa3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EZeaJJJIL9T-agj6Kp633lJJK1tpUGd-_LFJxObn3NE.jpg?width=216&crop=smart&auto=webp&s=d26092b085c14fb76e3d8fbaaf71f1880e20228f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EZeaJJJIL9T-agj6Kp633lJJK1tpUGd-_LFJxObn3NE.jpg?width=320&crop=smart&auto=webp&s=5adb6aa134d3197162abc225dbc1736cb329cb27', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EZeaJJJIL9T-agj6Kp633lJJK1tpUGd-_LFJxObn3NE.jpg?width=640&crop=smart&auto=webp&s=a0a855df82d4cc74bf9d75be84dfbf6ae83e2b8e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EZeaJJJIL9T-agj6Kp633lJJK1tpUGd-_LFJxObn3NE.jpg?width=960&crop=smart&auto=webp&s=5d95e2b62ff4d3c9883a04cf7aa6f971aebc202c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EZeaJJJIL9T-agj6Kp633lJJK1tpUGd-_LFJxObn3NE.jpg?width=1080&crop=smart&auto=webp&s=e1bcfbc70817ce5ac2c97cf387d1f94c8494a840', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/EZeaJJJIL9T-agj6Kp633lJJK1tpUGd-_LFJxObn3NE.jpg?auto=webp&s=bf3c784841361f7d27be8602455586afc01f0b61', 'width': 2000}, 'variants': {}}]} |
||
Need help with RAG | 1 | Hey everyone,
I’ve been lurking here for a while and love experimenting with some local LLMs. (This is turning into an expensive hobby lol) Now, I’m trying to dive into programming an LLM with RAG for my job. I’m not a software developer or engineer, just a hobbyist, but I’m looking for helpful resources on RAG.
Most of what I find is either too advanced or too basic to actually work with. Any suggestions for beginner-friendly but practical resources?
Thanks! | 2025-01-14T18:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i1cnsu/need_help_with_rag/ | LostMyOtherAcct69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1cnsu | false | null | t3_1i1cnsu | /r/LocalLLaMA/comments/1i1cnsu/need_help_with_rag/ | false | false | self | 1 | null |
Has anyone tried fine tuning on sales data to predict the success of products for e-commerce? | 0 | I am thinking about an idea: creating an LLM-based proxy for my customers' product preferences, probably based on previous sales, with a view to using this fine-tuned LLM to assess potential products for their sales potential.
I am not expecting a fine-tuned LLM to be perfect at predicting sales, but I would like to at least get some signal in terms of would my customers prefer Product A or Product B in the same category.
I have so far tried doing this without fine-tuning and results for top sellers seem to be consistently good (i.e. an LLM can predict that it is a better product), but once you go beyond the big hitters the performance drops quite a bit.
Has anyone tried that? Any tips on preparing the data and choosing an objective? | 2025-01-14T18:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i1coof/has_anyone_tried_fine_tuning_on_sales_data_to/ | Time-Winter-4319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1coof | false | null | t3_1i1coof | /r/LocalLLaMA/comments/1i1coof/has_anyone_tried_fine_tuning_on_sales_data_to/ | false | false | self | 0 | null
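For what it's worth, the framing I have in mind is pairwise preference data within a category, built from historical sales. The sketch below is only illustrative, with made-up products and field names, and I haven't validated that this objective actually transfers to new products.

```python
# Illustrative sketch: turn historical sales into pairwise preference
# examples for supervised fine-tuning. Products and fields are made up.
import itertools
import json

products = [
    {"name": "Ceramic pour-over set", "category": "kitchen", "units_sold": 1200},
    {"name": "Novelty egg timer", "category": "kitchen", "units_sold": 90},
]

rows = []
for a, b in itertools.combinations(products, 2):
    if a["category"] != b["category"]:
        continue  # only compare products within the same category
    winner = a if a["units_sold"] > b["units_sold"] else b
    rows.append({
        "instruction": f"Which product would our customers prefer: {a['name']} or {b['name']}?",
        "output": winner["name"],
    })

with open("pairwise_prefs.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```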
CVE management for OSS tools | 2 | How is everyone managing security vulnerabilities from the hundreds of components used in tools such as Ollama, vLLM, n8n, Langflow, etc.? Do you go to a secure repository where the AI software has been scanned and the vulnerabilities addressed? If you are following a process that addresses vulnerabilities, can you share it? Thanks | 2025-01-14T18:39:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i1d1lh/cve_management_for_oss_tools/ | No-Leopard7644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1d1lh | false | null | t3_1i1d1lh | /r/LocalLLaMA/comments/1i1d1lh/cve_management_for_oss_tools/ | false | false | self | 2 | null
AI Search Assistant with Local model and Knowledge Base Support | 27 | Hi all, just want to share with you an open source search assistant with local model and knowledge base support called LeetTools ([https://github.com/leettools-dev/leettools](https://github.com/leettools-dev/leettools)). You can run highly customizable AI search workflows (like Perplexity or Google Deep Research) locally on your command line with a fully automated document pipeline. The search results and generated outputs are saved to local knowledge bases, to which you can add your own data and query everything together.
Here is an example of an article about “How does Ollama work”, generated with the digest flow that is similar to Google deep research:
[https://github.com/leettools-dev/leettools/blob/main/docs/examples/ollama.md](https://github.com/leettools-dev/leettools/blob/main/docs/examples/ollama.md)
The digest flow works as follows:
https://i.redd.it/n8ar4jaca0de1.gif
With a DuckDB backend and configurable LLM settings, LeetTools can run with minimal resource requirements on the command line and can be easily integrated with other applications needing AI search and knowledge base support. You can use any LLM service by switching a simple configuration: we have examples for both Ollama and the new DeepSeek V3 API.
The tool is totally free with Apache license. Feedbacks and suggestions would be highly appreciated. Thanks and enjoy! | 2025-01-14T18:54:16 | https://www.reddit.com/r/LocalLLaMA/comments/1i1de3o/ai_search_assistant_with_local_model_and/ | LeetTools | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1de3o | false | null | t3_1i1de3o | /r/LocalLLaMA/comments/1i1de3o/ai_search_assistant_with_local_model_and/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'dAZNdMo09J2CTMPuCUrOIP5mW8jOZOw7NU1HiOFjrV4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X7lWSwknYi-M9GhyRLKkteIF2TostzEhXH1nEzNr3rE.jpg?width=108&crop=smart&auto=webp&s=a195d74d3220f6edf75dd1cb31ed82c1faf13bd3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X7lWSwknYi-M9GhyRLKkteIF2TostzEhXH1nEzNr3rE.jpg?width=216&crop=smart&auto=webp&s=da0cf80b72103e24205ace8f5c746cee179bd0e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X7lWSwknYi-M9GhyRLKkteIF2TostzEhXH1nEzNr3rE.jpg?width=320&crop=smart&auto=webp&s=4c331d8f127fb715c26b778fd19c7c274d3fe316', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X7lWSwknYi-M9GhyRLKkteIF2TostzEhXH1nEzNr3rE.jpg?width=640&crop=smart&auto=webp&s=e1235ae89fa654008dcc53cafea8e75a3a216cb4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X7lWSwknYi-M9GhyRLKkteIF2TostzEhXH1nEzNr3rE.jpg?width=960&crop=smart&auto=webp&s=efc630a6e08452e6c3914bdc02ddf0681f211f6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X7lWSwknYi-M9GhyRLKkteIF2TostzEhXH1nEzNr3rE.jpg?width=1080&crop=smart&auto=webp&s=00cdebcf0fc40e3adddbbc2c685d1abaa62b8285', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X7lWSwknYi-M9GhyRLKkteIF2TostzEhXH1nEzNr3rE.jpg?auto=webp&s=c3ae395e9a57c53adc2cbb9d06a509facb9fa592', 'width': 1200}, 'variants': {}}]} |
|
Run LLMs on your own device with Kolosal AI | 1 | [removed] | 2025-01-14T19:09:13 | https://v.redd.it/jferdnx6d0de1 | SmilingGen | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i1dr9g | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jferdnx6d0de1/DASHPlaylist.mpd?a=1739473768%2CZWRlMDMzYWE5MmExMTgwNWE4MGMzYWEwZjYwZGY1NTZjYmFjMWRhYjA1MzBhMDM5ODI3NGIyZjUxODAwZmZhOA%3D%3D&v=1&f=sd', 'duration': 131, 'fallback_url': 'https://v.redd.it/jferdnx6d0de1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jferdnx6d0de1/HLSPlaylist.m3u8?a=1739473768%2CMzkzODM4M2QwNTRhYTcxNzA4ZWU1Mzc1MmM3MmY2OGEyNzE4NzlhNDhlNTExNGVhMThjMGJkNmFiNGM4ZGIxYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jferdnx6d0de1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1794}} | t3_1i1dr9g | /r/LocalLLaMA/comments/1i1dr9g/run_llms_on_your_own_device_with_kolosal_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o.png?width=108&crop=smart&format=pjpg&auto=webp&s=055dc76925ec0ea8468ae37992d3e7eeb27df66f', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o.png?width=216&crop=smart&format=pjpg&auto=webp&s=c61f29f12aee00c9d8a605598a9b93d679ad724f', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o.png?width=320&crop=smart&format=pjpg&auto=webp&s=7d373fb50c8e6562d7d452ef1cc02cbedc536552', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o.png?width=640&crop=smart&format=pjpg&auto=webp&s=a72131326cbef644f42bbe58073f21aebc9ebde6', 'width': 640}, {'height': 578, 'url': 'https://external-preview.redd.it/bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o.png?width=960&crop=smart&format=pjpg&auto=webp&s=67476e7301442d538307588b44737ad09fa11135', 'width': 960}, {'height': 650, 'url': 'https://external-preview.redd.it/bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d169c969ca28625ffccac9172ca92e5e1738ae0d', 'width': 1080}], 'source': {'height': 1156, 'url': 'https://external-preview.redd.it/bzl3dXRpdDZkMGRlMYQzy3K6DtwQL8pRwVv96sr74uxKccv63eIljYSLkD0o.png?format=pjpg&auto=webp&s=c4c4c5ea06b72826dbe9763615eb1f51075ae66b', 'width': 1920}, 'variants': {}}]} |
|
Best ways/practices for implementing citations for RAG? | 2 | Hello, startup founder here. When using AI tools powered by RAG systems, I very often see very clean ways to show the user the various "citations" (chunks) from the source documents that were used to generate the output. I am looking to implement this feature on a knowledge base comprised of multiple docs (sometimes complex PDFs). Is there any library for this? Anything out of the box?
I am considering integrating a doc viewer in my web app, and ideally I'd like to highlight the relevant citation snippets - but I am still doing discovery on the design/architecture.
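To clarify what I mean by citations: the pattern I keep seeing is that every retrieved chunk carries source metadata (document id, page, character offsets) so the front end can scroll to and highlight the exact span. A rough, framework-agnostic sketch of that shape; the field names are just placeholders, not from any particular library:

```python
# Rough sketch of chunks that carry enough metadata for UI highlighting.
# Field names are placeholders, not tied to any specific RAG framework.
from dataclasses import dataclass, asdict

@dataclass
class Citation:
    doc_id: str      # which source document the chunk came from
    page: int        # page number within the PDF
    start_char: int  # character offsets within the extracted page text
    end_char: int
    text: str        # the chunk itself, for tooltips or a side panel

def answer_with_citations(question: str, retrieved: list[Citation]) -> dict:
    # Generation is out of scope here; the point is returning the chunks
    # alongside the answer so the viewer can highlight them.
    answer = "..."  # call your LLM with the retrieved chunks as context
    return {"answer": answer, "citations": [asdict(c) for c in retrieved]}
```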
Was wondering if anyone here had to tackle a similar problem. If so, feel free to share your insights!
P.S. - if anyone is interested, we help companies win more government tenders - using AI :).
https://justskim.ai | 2025-01-14T19:36:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i1eenf/best_wayspractices_for_implementing_citations_for/ | bibbi9999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1eenf | false | null | t3_1i1eenf | /r/LocalLLaMA/comments/1i1eenf/best_wayspractices_for_implementing_citations_for/ | false | false | self | 2 | null |
Radiator broke, so I asked QWQ to give all real solutions to the polynomial equation 4x^4 + 5x^3 + x^2 - x + 15 = 0 | 1 | 2025-01-14T19:39:01 | Quantitation | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i1eh0s | false | null | t3_1i1eh0s | /r/LocalLLaMA/comments/1i1eh0s/radiator_broke_so_i_asked_qwq_to_give_all_real/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'h0UFSSfDVoazDtKb-mm0pnCxDBb5LwsvOjXpmuVi64U', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/nty2uwkai0de1.png?width=108&crop=smart&auto=webp&s=083c0d41c94029777fb63836f1d55303640f1fe8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/nty2uwkai0de1.png?width=216&crop=smart&auto=webp&s=ac2a03c408b51649ecda6b13c2d8ba75a04f994c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/nty2uwkai0de1.png?width=320&crop=smart&auto=webp&s=102a5263dcf6cdaf4f1bc229bb77bcf7f887ef1c', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/nty2uwkai0de1.png?width=640&crop=smart&auto=webp&s=350b69fcadf2522cf5e8b826393adec3938ca478', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/nty2uwkai0de1.png?width=960&crop=smart&auto=webp&s=6215d227584acfa15394d9c63e5b90d1c3b49782', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/nty2uwkai0de1.png?width=1080&crop=smart&auto=webp&s=b7580b5318c10f0a2884c8fb127979dc084fae43', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://preview.redd.it/nty2uwkai0de1.png?auto=webp&s=0b4a754cd613fd749d872874e35f6b38ea154c0c', 'width': 3840}, 'variants': {}}]} |
|||
2025 and the future of Local AI | 64 | 2024 was an amazing year for Local AI. We had great free models: Llama 3.x, Qwen2.5, Deepseek v3 and much more.
However, we also see some counter-trends: Mistral, which previously released models under very liberal licenses, has started moving towards research licenses, and we see some AI shops closing down.
I wonder if we are getting close to peak 'free' AI, as competition heats up and competitors drop out, leaving the remaining players forced to monetize.
We still have Llama, Qwen and DeepSeek providing open models - but even here, there are questions on whether we can really deploy these easily (especially the monstrous 405B Llama and DS v3).
Let's also think about economics. Imagine a world where OpenAI does make a leap ahead. They release an AI which they sell to corporations for $1,000 a month subject to a limited duty cycle. Let's say this is powerful enough and priced right to wipe out 30% of office jobs. What will this do to society and the economy? What happens when this 30% ticks upwards to 50%, 70%?
Currently, we have software companies like Google which have huge scale, servicing the world with a relatively small team. What if most companies are like this? A core team of execs with the work done mainly through AI systems. What happens when this comes to manual jobs through AI robots?
What would the average person do? How can such an economy function? | 2025-01-14T19:59:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i1eyl5/2025_and_the_future_of_local_ai/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1eyl5 | false | null | t3_1i1eyl5 | /r/LocalLLaMA/comments/1i1eyl5/2025_and_the_future_of_local_ai/ | false | false | self | 64 | null |
Getting started with LocalLLaMA | 1 | [removed] | 2025-01-14T20:04:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i1f2zt/getting_started_with_localllama/ | Acceptable-Cheek5099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1f2zt | false | null | t3_1i1f2zt | /r/LocalLLaMA/comments/1i1f2zt/getting_started_with_localllama/ | false | false | self | 1 | null |
MiniMax releases new SOTA-class 456b MoE model with 4m context! | 1 | 2025-01-14T20:07:24 | https://huggingface.co/MiniMaxAI/MiniMax-Text-01 | Billy462 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1i1f5x2 | false | null | t3_1i1f5x2 | /r/LocalLLaMA/comments/1i1f5x2/minimax_releases_new_sotaclass_456b_moe_model/ | false | false | 1 | {'enabled': False, 'images': [{'id': 't-JH8IngcHivm1YVPoa7hh4mpZsdS9DbW7wYMvhxr-w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=108&crop=smart&auto=webp&s=4e357908a6066334b13339e17cc3095d7b4423a2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=216&crop=smart&auto=webp&s=2e4bb466e39c0d1903bf3066a3d0dea689925709', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=320&crop=smart&auto=webp&s=99aba628436f65b36c0505f0486e41298b1a9462', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=640&crop=smart&auto=webp&s=ceab60c72e05525604b9367fa7915922146839a5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=960&crop=smart&auto=webp&s=280815ef68e57515faad9d1dc62361728eb48c64', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=1080&crop=smart&auto=webp&s=244b695811b8b50aac245615b143ce76ecbb76af', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?auto=webp&s=447d8333eccd1f5454e014f3a01bcf504de0e10d', 'width': 1200}, 'variants': {}}]} |
||
Getting started with Local LLaMA – any tips for a beginner? | 1 | [removed] | 2025-01-14T20:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i1f5y1/getting_started_with_local_llama_any_tips_for_a/ | Acceptable-Cheek5099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1f5y1 | false | null | t3_1i1f5y1 | /r/LocalLLaMA/comments/1i1f5y1/getting_started_with_local_llama_any_tips_for_a/ | false | false | self | 1 | null |
Using vision [reasoning] models locally on iPhone. | 1 | [removed] | 2025-01-14T20:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i1fefw/using_vision_reasoning_models_locally_on_iphone/ | Puzzleheaded-Fly4322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1fefw | false | null | t3_1i1fefw | /r/LocalLLaMA/comments/1i1fefw/using_vision_reasoning_models_locally_on_iphone/ | false | false | self | 1 | null |
I accidentally built an open alternative to Google AI Studio | 933 | Yesterday, I had a mini heart attack when I discovered Google AI Studio, a product that looked (at first glance) just like the tool I've been building for 5 months. However, I dove in and was super relieved once I got into the details. There were a bunch of differences, which I've detailed below.
I thought I'd share what I have, in case anyone has been using G AI Studio, and might want to check out my [rapid prototyping tool on Github, called Kiln](https://github.com/Kiln-AI/Kiln). There are some similarities, but there are also some big differences when it comes to privacy, collaboration, model support, fine-tuning, and ML techniques. I built Kiln because I've been building AI products for ~10 years (most recently at Apple, and my own startup & MSFT before that), and I wanted to build easy to use, privacy-focused, open source AI tooling.
Differences:
* Model Support: Kiln allows any LLM (including Gemini/Gemma) through a ton of hosts: Ollama, OpenRouter, OpenAI, etc. Google supports only Gemini & Gemma via Google Cloud.
* Fine Tuning: Google lets you fine tune only Gemini, with at most 500 samples. Kiln has no limits on data size, 9 models you can tune in a few clicks (no code), and support for tuning any open model via Unsloth.
* Data Privacy: Kiln can't access your data (it runs locally, data stays local); Google stores everything. Kiln can run/train local models (Ollama/Unsloth/LiteLLM); Google always uses their cloud.
* Collaboration: Google is single user, while Kiln allows unlimited users/collaboration.
* ML Techniques: Google has standard prompting. Kiln has standard prompts, chain-of-thought/reasoning, and auto-prompts (using your dataset for multi-shot).
* Dataset management: Google has a table with max 500 rows. Kiln has powerful dataset management for teams with Git sync, tags, unlimited rows, human ratings, and more.
* Python Library: Google is UI only. Kiln has a python library for extending it for when you need more than the UI can offer.
* Open Source: Google’s is completely proprietary and private source. Kiln’s library is MIT open source; the UI isn’t MIT, but it is 100% source-available, on Github, and free.
* Similarities: Both handle structured data well, both have a prompt library, both have similar “Run” UX, both had user friendly UIs.
If anyone wants to check Kiln out, [here's the GitHub repository](https://github.com/Kiln-AI/Kiln) and [docs are here](https://docs.getkiln.ai). Getting started is super easy - it's a one-click install to get setup and running.
I’m very interested in any feedback or feature requests (model requests, integrations with other tools, etc.) I'm currently working on comprehensive evals, so feedback on what you'd like to see in that area would be super helpful. My hope is to make something as easy to use as G AI Studio, as powerful as Vertex AI, all while open and private.
Thanks in advance! I’m happy to answer any questions.
Side note: I’m usually pretty good at competitive research before starting a project. I had looked up Google's "AI Studio" before I started. However, I found and looked at "Vertex AI Studio", which is a completely different type of product. How one company can have 2 products with almost identical names is beyond me... | 2025-01-14T20:18:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i1ffid/i_accidentally_built_an_open_alternative_to/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1ffid | false | null | t3_1i1ffid | /r/LocalLLaMA/comments/1i1ffid/i_accidentally_built_an_open_alternative_to/ | false | false | self | 933 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=216&crop=smart&auto=webp&s=0b774d9f72bf345e9e39402886649223ad60e4d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=320&crop=smart&auto=webp&s=6c769aa8ce8a2839b46e12de1fd8743d4171f08d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=640&crop=smart&auto=webp&s=c9f49d760efe4ddd92a3a07a57705e5073b56eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=960&crop=smart&auto=webp&s=8666fab577a806da6551b1f2e0ec70f217f6f2fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=1080&crop=smart&auto=webp&s=b3de3b28dfba5fc1615aa5f1c855312805eda01b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?auto=webp&s=6728f96b3a663740abd86d6d7aff692490474d84', 'width': 1280}, 'variants': {}}]} |
MCP and local LLMs | 1 | Has anyone been able to integrate and utilize MCPs with their local LLMs? If so, what's your workflow? | 2025-01-14T21:27:56 | https://www.reddit.com/r/LocalLLaMA/comments/1i1gyhl/mcp_and_local_llms/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1gyhl | false | null | t3_1i1gyhl | /r/LocalLLaMA/comments/1i1gyhl/mcp_and_local_llms/ | false | false | self | 1 | null |
Running DeepSeek V3 at 8 tokens/s on home server. RAM only no GPU. | 1 | 2025-01-14T21:45:50 | Big_Specific9749 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i1hdg1 | false | null | t3_1i1hdg1 | /r/LocalLLaMA/comments/1i1hdg1/running_deepseek_v3_at_8_tokenss_on_home_server/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'p6-KyWFUkO34C6eowpjZG3kT6rlrCwbbI0awwUEwrlo', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/x6ofmw6241de1.png?width=108&crop=smart&auto=webp&s=34f42d2c0bcab7cd1d5b87e1807e0fdb5dac5ca0', 'width': 108}, {'height': 64, 'url': 'https://preview.redd.it/x6ofmw6241de1.png?width=216&crop=smart&auto=webp&s=5f29641866f6e89a40b7f3141e8eb9195fa92612', 'width': 216}, {'height': 95, 'url': 'https://preview.redd.it/x6ofmw6241de1.png?width=320&crop=smart&auto=webp&s=3a045bde9a325c87b3f72fc62571ae17dc0f1204', 'width': 320}, {'height': 190, 'url': 'https://preview.redd.it/x6ofmw6241de1.png?width=640&crop=smart&auto=webp&s=12948fcbbc112402770feb649ca64b7179ac3faf', 'width': 640}, {'height': 285, 'url': 'https://preview.redd.it/x6ofmw6241de1.png?width=960&crop=smart&auto=webp&s=4ffdb55b47b5fce0762bc2c7f1922153e5b5c308', 'width': 960}, {'height': 320, 'url': 'https://preview.redd.it/x6ofmw6241de1.png?width=1080&crop=smart&auto=webp&s=2fa70dc3659d2e9763f8fc2e45ca8f894b8b7387', 'width': 1080}], 'source': {'height': 617, 'url': 'https://preview.redd.it/x6ofmw6241de1.png?auto=webp&s=7a743b655be5e3e124d0bd70a548ee0a6360b5df', 'width': 2076}, 'variants': {}}]} |
|||
What do you use your local LLM on your phone to do? | 7 | Those of you who have set up a local LLM on your phone: What do you use it for? Have you found any interesting things you can do with it? | 2025-01-14T21:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i1hmd7/what_do_you_use_your_local_llm_on_your_phone_to_do/ | t0f0b0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1hmd7 | false | null | t3_1i1hmd7 | /r/LocalLLaMA/comments/1i1hmd7/what_do_you_use_your_local_llm_on_your_phone_to_do/ | false | false | self | 7 | null |
Question about embedding RAG knowledge into smaller model | 1 | I am trying to make a small model more knowledgeable in a narrow area (for example, mummies of Argentina, so it can act as a QnA bot on a museum website), and I don't want retrieved context to take up the limited context window. Is it possible to have a larger model use RAG to answer a ton of questions from many different people, then take the questions and answers, minus the context, and fine-tune the smaller model on them?
Small: 1.5 billion or so.
If not that small, what size is needed for this to work, assuming it does work above a certain size? | 2025-01-14T22:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i1hvld/question_about_embedding_rag_knowledge_into/ | Ok-Cicada-5207 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1hvld | false | null | t3_1i1hvld | /r/LocalLLaMA/comments/1i1hvld/question_about_embedding_rag_knowledge_into/ | false | false | self | 1 | null
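To make the idea concrete, here is a minimal sketch of the pipeline I have in mind; the retrieval function, model name, and output format are placeholders, not a tested recipe.

```python
# Sketch: use a larger model plus RAG to answer questions, then save the
# question/answer pairs (without the retrieved context) as fine-tuning data
# for the small model. retrieve() and the model name are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # or any OpenAI-compatible endpoint, e.g. a local server

def retrieve(question: str) -> str:
    # Placeholder: replace with your vector-store lookup.
    return "Relevant excerpts about the Llullaillaco mummies..."

def answer_with_rag(question: str) -> str:
    context = retrieve(question)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder large model
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

questions = ["Where were the Llullaillaco mummies found?"]
with open("distill_data.jsonl", "w") as f:
    for q in questions:
        f.write(json.dumps({"instruction": q, "output": answer_with_rag(q)}) + "\n")
```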
Train AI locally for studying purpose (Small amount of Data in German language) | 1 | [removed] | 2025-01-14T22:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i1i2il/train_ai_locally_for_studying_purpose_small/ | Ok_Phase_8827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1i2il | false | null | t3_1i1i2il | /r/LocalLLaMA/comments/1i1i2il/train_ai_locally_for_studying_purpose_small/ | false | false | self | 1 | null |
Train AI locally for studying purpose (Small amount of Data in German language) | 1 | [removed] | 2025-01-14T22:19:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i1i5gh/train_ai_locally_for_studying_purpose_small/ | Ok_Phase_8827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1i5gh | false | null | t3_1i1i5gh | /r/LocalLLaMA/comments/1i1i5gh/train_ai_locally_for_studying_purpose_small/ | false | false | self | 1 | null |
My First Small AI Project for my company | 14 | Hi everyone!
I just wrapped up my first little project at the company I work for:
a simple RAG chatbot able to help my colleagues in the assistance department, based on internal reports on common issues, manuals, standard procedures, and website pages for general knowledge about the company / product links.
I built it using LangChain for vector DB search and Flutter for the UI, locally hosted on an RPi.
I had fun trying to squeeze as much performance as possible out of old office hardware. I experimented with small and quantized models (mostly from bartowski [thanks for those!]). Unfortunately, and as expected, not even a Llama 3.2 1B Q4 could hit decent speeds (> 1 token/s).
So, while waiting for GPUs, I'm testing Mistral, Groq (really fast inference!!) and a few other providers through their APIs.
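Since most of these providers expose OpenAI-compatible endpoints, switching between them has mostly been a matter of changing the base URL and model name. A small sketch of how I try them out; the base URLs and model names below should be double-checked against each provider's docs:

```python
# Sketch: swap providers by changing base_url/model on an OpenAI-compatible
# client. Verify base URLs and model names against each provider's docs.
import os
from openai import OpenAI

providers = {
    "groq": ("https://api.groq.com/openai/v1", os.environ.get("GROQ_API_KEY"), "llama-3.1-8b-instant"),
    "mistral": ("https://api.mistral.ai/v1", os.environ.get("MISTRAL_API_KEY"), "mistral-small-latest"),
}

def ask(provider: str, prompt: str) -> str:
    base_url, api_key, model = providers[provider]
    client = OpenAI(base_url=base_url, api_key=api_key)
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

print(ask("groq", "Summarize this support ticket in one sentence: ..."))
```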
AI development has become a real hobby for me, even though my background is in a different type of engineering. I spend my "free" time at work (simple but time-consuming tasks) on model testing, trying to learn how neural networks work, or following hands-on videos like Google Colab tutorials.
I know I won't become a researcher publishing papers or a top developer in the field, but I’d love to get better.
What would you recommend I focus on or study to improve as an AI developer?
Thanks in advance for any advice! | 2025-01-14T22:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i1ibte/my_first_small_ai_project_for_my_company/ | cri10095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1ibte | false | null | t3_1i1ibte | /r/LocalLLaMA/comments/1i1ibte/my_first_small_ai_project_for_my_company/ | false | false | self | 14 | null |
Guys, has anybody used the Kokoro TTS 82M model? | 0 | Is this model the SLM of the TTS domain? I haven't used it; share your reviews if possible. People are saying the output quality is SOTA. Is it hype? | 2025-01-14T22:38:04 | https://www.reddit.com/r/LocalLLaMA/comments/1i1ilgk/guys_anybody_used_kokor_tts_82m_model/ | Feisty-Pineapple7879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1ilgk | false | null | t3_1i1ilgk | /r/LocalLLaMA/comments/1i1ilgk/guys_anybody_used_kokor_tts_82m_model/ | false | false | self | 0 | null
Difference between Qwen2.5 and Qwen2.5-Coder for NON coding tasks? | 11 | This might be a silly question, but are the Qwen2.5 models identical for non coding tasks? When it comes to things like writing, note taking, chat... if the context/output is not coding related, would there be a material difference expected?
Or is it best to just use Qwen2.5-coder (in this case, 14B parameters) no matter what? | 2025-01-14T23:23:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i1jlop/difference_between_qwen25_and_qwen25coder_for_non/ | StatFlow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1jlop | false | null | t3_1i1jlop | /r/LocalLLaMA/comments/1i1jlop/difference_between_qwen25_and_qwen25coder_for_non/ | false | false | self | 11 | null |
Fine tuning Gemma with LoRA in Google Colab (4 minutes) | 1 | 2025-01-14T23:24:52 | https://www.youtube.com/watch?v=87aG24KWvM8 | Competitive_Travel16 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i1jmed | false | {'oembed': {'author_name': 'Google Cloud Tech', 'author_url': 'https://www.youtube.com/@googlecloudtech', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/87aG24KWvM8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Fine tuning Gemma with LoRA in Google Colab"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/87aG24KWvM8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Fine tuning Gemma with LoRA in Google Colab', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i1jmed | /r/LocalLLaMA/comments/1i1jmed/fine_tuning_gemma_with_lora_in_google_colab_4/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'i2CmdVxTmTyDYf7Fp3cPwyl2E4DmS2KHDScV5lEmyAQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/4Lnx1wmm40meSm_t-pFzB4V2qprpOrO9CVjtMqgPOcE.jpg?width=108&crop=smart&auto=webp&s=00321a5d7f97e38a09e6662a24eab0bd80667632', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/4Lnx1wmm40meSm_t-pFzB4V2qprpOrO9CVjtMqgPOcE.jpg?width=216&crop=smart&auto=webp&s=a509e1edcd62d0ea297bd839d2ba0c25ad20733c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/4Lnx1wmm40meSm_t-pFzB4V2qprpOrO9CVjtMqgPOcE.jpg?width=320&crop=smart&auto=webp&s=0cf3621e43fe9c84f7ce3333b442bd0d99e51b02', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/4Lnx1wmm40meSm_t-pFzB4V2qprpOrO9CVjtMqgPOcE.jpg?auto=webp&s=0d9dea02cc53b7949a56801e727cec8f7f2dafd9', 'width': 480}, 'variants': {}}]} |
||
I built a fast "agentic" insurance app with FastAPIs using small function calling LLMs | 25 | I recently came across this post on small function-calling LLMs https://www.reddit.com/r/LocalLLaMA/comments/1hr9ll1/i_built_a_small_function_calling_llm_that_packs_a/ and decided to give the project a whirl. My use case was to build an agentic workflow for insurance claims (being able to process them, show updates, add documents, etc)
Here is what I liked: I was able to build an agentic solution with just APIs (for the most part) - and it was fast as advertised. The Arch-Function LLMs did generalize well, and I wrote mostly business logic. The thing that I found interesting was its prompt_target feature, which helped me build task routing and extract keywords/information from a user query so that I could improve task accuracy and trigger downstream agents when/if needed.
Here is what I did not like: There seems to be a close integration with Gradio at the moment. The gateway enriches conversational state with metadata, which seems to improve function calling performance, but I suspect they might improve that over time. Also, descriptions of prompt_targets/function calling need to be simple and terse. There is some work to make sure the parameters and descriptions aren't too obtuse. I think OpenAI offers similar guidance, but it needs simple and concise descriptions of downstream tasks and parameters.
https://github.com/katanemo/archgw | 2025-01-14T23:29:23 | Terrible_Attention83 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i1jpvi | false | null | t3_1i1jpvi | /r/LocalLLaMA/comments/1i1jpvi/i_built_a_fast_agentic_insurance_app_with/ | false | false | 25 | {'enabled': True, 'images': [{'id': 'PhOp5FYx5ZHwLbeTMbpZ-C3caeTJ-KbR4TgiKXjcNy4', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/gf7clczln1de1.jpeg?width=108&crop=smart&auto=webp&s=e15acc00aa57ba2c61436943586afd6b426d656a', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/gf7clczln1de1.jpeg?width=216&crop=smart&auto=webp&s=f1fd9f5412b162f4e18e207630a3ae47e4ae04e2', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/gf7clczln1de1.jpeg?width=320&crop=smart&auto=webp&s=f96b9f838d09495c36397ad488aee359372bbbb8', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/gf7clczln1de1.jpeg?width=640&crop=smart&auto=webp&s=f1d16508d11da2cd20e0adecb6812fc481fae7d0', 'width': 640}, {'height': 577, 'url': 'https://preview.redd.it/gf7clczln1de1.jpeg?width=960&crop=smart&auto=webp&s=1e75a3b31870ea87a2f746be7bde4219cee3fcfb', 'width': 960}, {'height': 649, 'url': 'https://preview.redd.it/gf7clczln1de1.jpeg?width=1080&crop=smart&auto=webp&s=a60be4623cc3205be7144b487002ad30083c748e', 'width': 1080}], 'source': {'height': 880, 'url': 'https://preview.redd.it/gf7clczln1de1.jpeg?auto=webp&s=2fb61138eea03eca0e225fbc1534374249eb80b6', 'width': 1464}, 'variants': {}}]} |
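For a sense of what "simple and terse" looks like in practice, here is the kind of function definition that worked for me, written in the generic OpenAI-style tools format rather than archgw's own prompt_target syntax; the claim-status function and its parameters are from my demo, not from the gateway:

```python
# Generic OpenAI-style tool schema with deliberately short descriptions.
# Not archgw's prompt_target syntax; the function is from my claims demo.
get_claim_status = {
    "type": "function",
    "function": {
        "name": "get_claim_status",
        "description": "Get the current status of an insurance claim.",
        "parameters": {
            "type": "object",
            "properties": {
                "claim_id": {
                    "type": "string",
                    "description": "Claim identifier, e.g. CLM-1234.",
                },
            },
            "required": ["claim_id"],
        },
    },
}
```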
||
VSCode extension for autocomplete? | 1 | I would like to put my 4090 to use with something like Qwen Coder when working on code for my own projects, so I have been trying to find an extension that is compatible with Ollama - since it runs nice and neat on startup, ready to serve installed models. However, I tried a few extensions (Cody, CodeGPT, ...) and couldn't find one that worked with Ollama without requiring me to make an account.
The feature I need most is autocomplete: highlight a comment (or write in chat) and drop the result into my document. Optionally, refactoring, documenting or rewriting as needed. But the autocomplete would help a lot, since I need to make some basic ReactJS/TailwindCSS/shadcn UI components every once in a while.
What are the extensions you use? Got some to recommend?
Thank you! | 2025-01-14T23:48:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i1k4e3/vscode_extension_for_autocomplete/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1k4e3 | false | null | t3_1i1k4e3 | /r/LocalLLaMA/comments/1i1k4e3/vscode_extension_for_autocomplete/ | false | false | self | 1 | null |
Audiblez: Generate audiobooks from e-books with Kokoro-82M | 140 | 2025-01-14T23:54:42 | https://claudio.uk/posts/epub-to-audiobook.html | inkompatible | claudio.uk | 1970-01-01T00:00:00 | 0 | {} | 1i1k8yq | false | null | t3_1i1k8yq | /r/LocalLLaMA/comments/1i1k8yq/audiblez_generate_audiobooks_from_ebooks_with/ | false | false | default | 140 | null |
|
Towards System 2 Reasoning in LLMs: Learning How To Think | 3 | 2025-01-15T00:12:10 | https://www.synthlabs.ai/research/meta-chain-of-thought | Recoil42 | synthlabs.ai | 1970-01-01T00:00:00 | 0 | {} | 1i1kmlr | false | null | t3_1i1kmlr | /r/LocalLLaMA/comments/1i1kmlr/towards_system_2_reasoning_in_llms_learning_how/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'xNHjDE8oERcmyun8p-6RczhQsOhrRJ9NYRuU_ByPKcM', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/cFyfBjbbV76vIqznwMhRaZLDact1P0xp9HSA6aq2nh4.jpg?width=108&crop=smart&auto=webp&s=d9fb837c98c4a05510f879074ca96584d41bb3a4', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/cFyfBjbbV76vIqznwMhRaZLDact1P0xp9HSA6aq2nh4.jpg?width=216&crop=smart&auto=webp&s=152d655e2b9c99ce4a0f69bebfe4b324e5e2a4ad', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/cFyfBjbbV76vIqznwMhRaZLDact1P0xp9HSA6aq2nh4.jpg?width=320&crop=smart&auto=webp&s=c6aa3e19cb55eade10e9c26e3ddd9ef1c500b1ad', 'width': 320}, {'height': 398, 'url': 'https://external-preview.redd.it/cFyfBjbbV76vIqznwMhRaZLDact1P0xp9HSA6aq2nh4.jpg?width=640&crop=smart&auto=webp&s=582c72404c2ba6813d818650fbc039f486c56eed', 'width': 640}, {'height': 597, 'url': 'https://external-preview.redd.it/cFyfBjbbV76vIqznwMhRaZLDact1P0xp9HSA6aq2nh4.jpg?width=960&crop=smart&auto=webp&s=be6f2d1d2f6655959818e71ba3877fc511326c96', 'width': 960}, {'height': 672, 'url': 'https://external-preview.redd.it/cFyfBjbbV76vIqznwMhRaZLDact1P0xp9HSA6aq2nh4.jpg?width=1080&crop=smart&auto=webp&s=e5e7d347b6133d05d18479dbfa3045450858e26a', 'width': 1080}], 'source': {'height': 896, 'url': 'https://external-preview.redd.it/cFyfBjbbV76vIqznwMhRaZLDact1P0xp9HSA6aq2nh4.jpg?auto=webp&s=e122a5068aae99e6dbb1c3233e93506999fe4d6d', 'width': 1440}, 'variants': {}}]} |
||
2025 will be the year of small omni models? | 14 | I believe 2025 will be the year of small omni models.
What we already have:
* [Megrez-3B-Omni](https://huggingface.co/Infinigence/Megrez-3B-Omni) (released at the end of 2024)
* [MiniCPM-o](https://huggingface.co/openbmb/MiniCPM-o-2_6) built on top of SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B.
What's your opinion? | 2025-01-15T00:13:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i1knlj/2025_will_be_the_year_of_small_omni_models/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1knlj | false | null | t3_1i1knlj | /r/LocalLLaMA/comments/1i1knlj/2025_will_be_the_year_of_small_omni_models/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'ZICu4T7HlYsnePx2qqGNBGc05r1gZMS0AyF22mx1U_M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JgYCgYWlnijiF0mLj_YQ8-GPI2BxHuwQa6OWI8GfuN8.jpg?width=108&crop=smart&auto=webp&s=fe26ddab6857d183c6a9426a9cc0d9c6cc342d5e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JgYCgYWlnijiF0mLj_YQ8-GPI2BxHuwQa6OWI8GfuN8.jpg?width=216&crop=smart&auto=webp&s=5d8171a71f37b390265eb0a8391893347196828d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JgYCgYWlnijiF0mLj_YQ8-GPI2BxHuwQa6OWI8GfuN8.jpg?width=320&crop=smart&auto=webp&s=48d0c02c85a834642f64651494e6ed0580ec4ac5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JgYCgYWlnijiF0mLj_YQ8-GPI2BxHuwQa6OWI8GfuN8.jpg?width=640&crop=smart&auto=webp&s=387b80ed0053ae787df849162cbe3126041202ff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JgYCgYWlnijiF0mLj_YQ8-GPI2BxHuwQa6OWI8GfuN8.jpg?width=960&crop=smart&auto=webp&s=f7f31f7a1c762ecc8e480bda75755714674928c7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JgYCgYWlnijiF0mLj_YQ8-GPI2BxHuwQa6OWI8GfuN8.jpg?width=1080&crop=smart&auto=webp&s=0a848ffdb78a003157f45c584313d712a08340d5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JgYCgYWlnijiF0mLj_YQ8-GPI2BxHuwQa6OWI8GfuN8.jpg?auto=webp&s=a9ca2e831a1d2801f95ad885080a29ed2064e90c', 'width': 1200}, 'variants': {}}]} |
Dataset creation info? | 4 | Hi folks,
I've been a longtime user of local LLMs; however, I am now interested in fine-tuning with a toolset like Unsloth, assuming that is still the best option for this.
My big question with all this, though: are there good pipelines/tools for dataset creation that you would suggest to a newcomer?
Let's say as an example that I have access to a mediawiki, both the website running on a server as well as an xml dump if that's easier.
Is there any way to take the dump (or crawl the pages) and construct something that Unsloth can use to add knowledge to an LLM like Llama 3.1?
Thanks. | 2025-01-15T00:18:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i1kr8h/dataset_creation_info/ | coderman4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1kr8h | false | null | t3_1i1kr8h | /r/LocalLLaMA/comments/1i1kr8h/dataset_creation_info/ | false | false | self | 4 | null |
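To sketch the rough shape I'm imagining (and please correct me if this is the wrong approach): stream the XML dump, strip the wikitext, and emit instruction-style JSONL that a fine-tuning toolkit can load. The alpaca-style fields and the prompt wording below are guesses on my part, not a format required by any particular trainer, and the wikitext cleanup is deliberately naive.

```python
# Sketch: turn a MediaWiki XML dump into instruction-style JSONL.
# The alpaca-style fields and prompt are assumptions, and the wikitext
# cleanup here is intentionally minimal.
import json
import re
import xml.etree.ElementTree as ET

def pages(dump_path: str):
    # Stream pages without loading the whole dump; {*} ignores namespaces.
    for _, elem in ET.iterparse(dump_path, events=("end",)):
        if elem.tag.endswith("page"):
            title = elem.find(".//{*}title")
            text = elem.find(".//{*}text")
            if title is not None and text is not None and text.text:
                yield title.text, text.text
            elem.clear()

def clean(wikitext: str) -> str:
    text = re.sub(r"\{\{.*?\}\}", "", wikitext, flags=re.S)        # templates
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)  # links
    return re.sub(r"<[^>]+>", "", text).strip()                    # html tags

with open("wiki_dataset.jsonl", "w") as f:
    for title, raw in pages("dump.xml"):
        body = clean(raw)
        if len(body) < 200:
            continue  # skip stubs and redirects
        f.write(json.dumps({
            "instruction": f"Explain what the wiki says about {title}.",
            "input": "",
            "output": body[:4000],
        }) + "\n")
```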
Sharing my unorthodox home setup, and how I use local LLMs | 108 | So for the past year and a half+ I've been tinkering with, planning out and updating my home setup, and figured that with 2025 here, I'd join in on sharing where it's at. It's an expensive little home lab, though nothing nearly as fancy or cool as what other folks have.
***tl;dr****- I have 2 "assistants" (1 large and 1 small, with each assistant made up of between 4-7 models working together), and a development machine/assistant. The dev box simulates the smaller assistant for dev purposes. Each assistant has offline wiki access, vision capability, and I use them for all my hobby work/random stuff.*
# The Hardware
The hardware is a mix of stuff I already had, or stuff I bought for LLM tinkering. I'm a software dev and tinkering with stuff is one of my main hobbies, so I threw a fair bit of money at it.
* Refurb M2 Ultra Mac Studio w/1 TB internal drive + USB C 2TB drive
* Refurb M2 Max Macbook Pro 96GB
* Refurb M2 Mac Mini base model
* Windows 10 Desktop w/ RTX 4090
Total Hardware Pricing: ~$5,500 for studio refurbished + ~$3000 for Macbook Pro refurbished + ~$500 Mac Mini refurbished (*already owned*) + ~$2000 Windows desktop (*already owned*) == **$10,500 in total hardware**
# The Software
* I do most of my inference using KoboldCPP
* I do vision inference through Ollama and my dev box uses Ollama
* I run all inference through WilmerAI, which handles all the workflows and domain routing. This lets me use as many models as I want to power the assistants, and also setup workflows for coding windows, use the offline wiki api, etc.
* For zero-shots, simple dev questions and other quick hits, I use Open WebUI as my front end. Otherwise I use SillyTavern for more involved programming tasks and for my assistants.
* All of the gaming quality of life features in ST double over very nicely for assistant work and programming lol
# The Setup
The Mac Mini acts as one of three WilmerAI "cores"; the mini is the Wilmer home core, and also acts as the web server for all of my instances of ST and Open WebUI. There are 6 instances of Wilmer on this machine, each with its own purpose. The Macbook Pro is the Wilmer portable core (3 instances of Wilmer), and the Windows Desktop is the Wilmer dev core (2 instances of Wilmer).
All of the models for the Wilmer home core are on the Mac Studio, and I hope to eventually add another box to expand the home core.
Each core acts independently from the others, meaning doing things like removing the macbook from the network won't hurt the home core. Each core has its own text models, offline wiki api, and vision model.
I have 2 "assistants" set up, with the intention to later add a third. Each assistant is essentially built to be an advanced "rubber duck" (*as in the rubber duck programming method where you talk through a problem to an inanimate object and it helps you solve this problem*). Each assistant is built entirely to talk through problems with me, of any kind, and help me solve them by challenging me, answering my questions, or using a specific set of instructions on how to think through issues in unique ways. Each assistant is built to be different, and thus solve things differently.
Each assistant is made up of multiple LLMs. Some examples would be:
* A responder model, which does the talking
* A RAG model, which I use for pulling data from the offline wikipedia api for factual questions
* A reasoning model, for thinking through a response before the responder answers
* A coding model, for handling code issues and math issues.
The two assistants are:
1. **RolandAI** - powered by the home core. All of Roland's models generally run on the Mac Studio, and it is by far the more powerful of the two. It's got conversation memories going back to early 2024, and I primarily use it. At this point I have to prune the memories regularly lol. I'm saving the pruned memories for when I get a secondary memory system into Wilmer that I can backload them into.
2. **SomeOddCodeBot** - powered by the portable core. All these models run on the Macbook. This is my "second opinion" bot, and also my portable bot for when I'm on the road. Its setup is specifically different from Roland's, beyond just being smaller, so that they will "think" differently about problems.
Each assistant's persona and problem solving instructions exist only within the workflows of Wilmer, meaning that front ends like SillyTavern have no information in a character card for it, Open WebUI has no prompt for it, etc. Roland, as an entity, is a specific series of workflow nodes that are designed to act, speak and process problems/prompts in a very specific way.
I generally have a total of about 8 front end SillyTavern/Open WebUI windows open.
* Four ST windows. Two are for the two assistants individually, and one is a group chat that has both, in case I want the two assistants to process a longer/more complex concept together. This replaced my old "development group".
* I have a fourth ST window for my home core "Coding" Wilmer instance, which is a workflow just for coding questions (for example, one iteration of this used QwQ + Qwen2.5 32b coder, whose response quality landed somewhere between ChatGPT 4o and o1. Tis slow though).
* After that, I have 4 Open WebUI windows for coding workflows, reasoning workflows, and encyclopedic questions using the offline wiki API.
# How I Use Them
Roland is obviously the more powerful of the two assistants; I have 180GB of VRAM, give or take, to build out its model structure with. SomeOddCodeBot has about 76GB of VRAM, but has a similar structure, just using smaller models.
I use these assistants for any personal projects that I have; I can't use them for anything work related, but I do a *lot* of personal dev and tinkering. Whenever I have an idea, whenever I'm checking something, etc., I usually bounce the ideas off of one or both assistants. If I'm trying to think through a problem, I might do the same.
Another example is code reviews: I often pass in the before/after code to both bots, and ask for a general analysis of what's what. I'm reviewing it myself as well, but the bots help me find little things I might have missed, and generally make me feel better that I didn't miss anything.
The code reviews will often be for my own work, as well as for anyone committing to my personal projects.
For the dev core, I use Ollama as the main inference backend because I can do a neat trick with Wilmer on it. As long as each individual model fits in 20GB of VRAM, I can use as many models as I want in the workflow. Ollama API calls let you pass the model name in, and Ollama unloads the current model and loads the new one instead, so I can have each Wilmer node just pass in a different model name. This lets me simulate the 76GB portable core with only 20GB, since I only use smaller models on the portable core anyway, so I have a dev assistant to break and mess with while I'm updating Wilmer code.
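Here's a minimal sketch of that swap trick, assuming a stock Ollama on the dev box (default port 11434) and that the example models have already been pulled; the specific models are placeholders, not necessarily what my workflows actually run:

```python
# Minimal sketch of the Ollama model-swap trick. Assumes a default local
# Ollama (port 11434) with these example models already pulled; the models
# themselves are placeholders, not my actual workflow lineup.
import requests

OLLAMA_CHAT = "http://localhost:11434/api/chat"

def ask(model: str, prompt: str) -> str:
    resp = requests.post(OLLAMA_CHAT, json={
        "model": model,                      # a different model per call
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }, timeout=600)
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# Three sequential "nodes", each on a different small model. Only one fits
# in ~20GB of VRAM at a time, which is fine: they run one after another,
# and Ollama swaps models between calls.
plan   = ask("qwen2.5:14b",       "Outline an approach to refactor this module: ...")
code   = ask("qwen2.5-coder:14b", "Write Python implementing this plan:\n" + plan)
review = ask("llama3.1:8b",       "Review this code for obvious bugs:\n" + code)
```

The trade-off is the model load time on every swap, but for a dev assistant that exists to be broken and messed with, that's an acceptable cost.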
# 2025 Plans
* I plan to convert the dev core into a coding agent box and build a Wilmer agent jobs system; think of it like an agent wrapping an agent lol. I want something like Aider running as the worker agent, controlled by a wrapping agent that calls a Roland Wilmer instance to manage the coder. i.e., Roland is in charge of the agent doing the coding.
* I've been using Roland to code review me, help me come up with architectures for things, etc. for a while. The goal of that is to tune the workflows so that I can eventually just put Roland in charge of a coding agent running on the Windows box. Write down what I want, get back a higher quality version than if I just left the normal agent to its own devices; something QAed by a workflow thinking in the specific way that I want it to think. If that works well, I'd try to expand that out to have N agents running off of runpod boxes for larger dev work.
* All of this is just a really high level plan atm, but I became more interested in it after finding out about that $1m competition =D What was a "that's a neat idea" became an "I really want to try this". So this whole plan may fail miserably, but I do have some hope based on how I'm already using Wilmer today.
* I want to add Home Assistant integration in and start making home automation workflows in Wilmer. Once I've got some going, I'll add a new Wilmer core to the house, as well as a third assistant, to manage it.
* I've got my eye on an NVIDIA Digits... might get one to expand Roland a bit.
Anyhow, that's pretty much it. It's an odd setup, but I thought some of you might get a kick out of it. | 2025-01-15T00:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i1kz1c/sharing_my_unorthodox_home_setup_and_how_i_use/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i1kz1c | false | null | t3_1i1kz1c | /r/LocalLLaMA/comments/1i1kz1c/sharing_my_unorthodox_home_setup_and_how_i_use/ | false | false | self | 108 | null |