title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Local API service backend for Pydantic AI/Local agents | 2 | Hey there guys. Could you give me some quick help?
I'm looking to deploy a local LLM for some experimentation with building agentic frameworks.
However, I don't know the best practices.
It might be as simple as googling it.
What I'd like to achieve is the following:
LangChain / Pydantic AI.
Whenever I load an agent, I want that agent to be served from, let's say, localhost:36819 or 192.168.0.150:36819, where I can select which model I load in. Let's say it can be Llama3.1-8B-Instruct or Mistral-8B or anything.
Basically, I'm looking for a backend solution for testing custom models myself.
Why local?
Because (1) I want to experiment with uncensored stuff,
and (2) I have a few projects for which the cost calculation works out in my favor versus an API for processing several hundred gigs of data. And because I want to experiment with different models.
HW is a 3090 | 2025-01-17T18:04:16 | https://www.reddit.com/r/LocalLLaMA/comments/1i3mjpp/local_api_service_backend_for_pydantic_ailocal/ | randoomkiller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3mjpp | false | null | t3_1i3mjpp | /r/LocalLLaMA/comments/1i3mjpp/local_api_service_backend_for_pydantic_ailocal/ | false | false | self | 2 | null |
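A minimal sketch of what that setup can look like, assuming an OpenAI-compatible server (llama-server, vLLM, or Ollama, for example) is already listening on the host/port from the post; the host, port, and model names are placeholders for whatever you actually serve:

```python
# Point any OpenAI-compatible client at a local server
# (llama-server, vLLM, and Ollama all expose a /v1 endpoint).
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.0.150:36819/v1",  # your local server
    api_key="not-needed-for-local",            # most local servers ignore this
)

# Model selection is just the "model" field; the server maps it to
# whatever weights it has loaded (or can swap, depending on the backend).
resp = client.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[{"role": "user", "content": "Say hello from the local backend."}],
)
print(resp.choices[0].message.content)
```

Pydantic AI and LangChain both let you point their OpenAI model wrappers at a custom `base_url`, so the same local endpoint can serve your agents; switching models is then just a matter of changing the `model` string or restarting the server with different weights.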
[Rant] So I came up with an idea to enhance RP experience a few months ago | 0 | Imagine having an ST addon, that makes your characters develop personality and long-term memory.
Get a couple of specialized BERTAs measuring the "emotional intensity" of whats going on, mix in some NLP to figure out what concept these emotions are connected to, and a processing module that will bake memories to associate emotions with these concepts. Put these "engrams" into the time-graph long-term memory module. Brew a constantly fine-tuned "attention layer", that would affect how much attention character is paying to certain concepts and emotions, and determine how vivid the memory is and how long its retained. Add gates and triggers to recall the memories when the situation is similar in some aspect. Slap a natural language encoder on top of it, that would convert it all into text to put it back into the context. Plan out a "personality matrix", that is basically a self-evolving char card, built on top of it. Become frustrated with concept extraction being very uncooperative and set it all aside for a few weeks to learn how to fine-tune a small model just for that.
...find out that apparently google has made an entire base model that is trained to do it all, with pretty much the exact same architecture, but better. Become sad and dis-inspired. Bask in misery. | 2025-01-17T18:06:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i3mlhn/rant_so_i_came_up_with_an_idea_to_enhance_rp/ | Xandrmoro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3mlhn | false | null | t3_1i3mlhn | /r/LocalLLaMA/comments/1i3mlhn/rant_so_i_came_up_with_an_idea_to_enhance_rp/ | false | false | self | 0 | null |
LCLV: Real-time video analysis with Moondream 2B & OLLama (open source, local). Anyone want a set up guide? | 178 | 2025-01-17T18:21:33 | https://v.redd.it/c3kcfymfilde1 | ParsaKhaz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i3mybo | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/c3kcfymfilde1/DASHPlaylist.mpd?a=1739730108%2CYmFhZGY1MTEzZTA5OGZlNTg0MDkxMTYzY2NkYzlkOWZiNGY4NDNhZjU1N2ZlOTgyNTU0OGIxZmY0NjZjZjAwMw%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/c3kcfymfilde1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/c3kcfymfilde1/HLSPlaylist.m3u8?a=1739730108%2COTU0NzNmNGM3MDU3YTUyNDcyMDM5NGI4NjhmMjk2ZTQ5MWEyNjRiNmU5OTYzOTNmZjQwOWZiY2FkMmY0ZGQ5OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c3kcfymfilde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 674}} | t3_1i3mybo | /r/LocalLLaMA/comments/1i3mybo/lclv_realtime_video_analysis_with_moondream_2b/ | false | false | 178 | {'enabled': False, 'images': [{'id': 'MXZ5aHh4bWZpbGRlMSTqk2DOPEdgmnDyQ8guvDBrE8AyiWMeqDE4BRKGe_SG', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/MXZ5aHh4bWZpbGRlMSTqk2DOPEdgmnDyQ8guvDBrE8AyiWMeqDE4BRKGe_SG.png?width=108&crop=smart&format=pjpg&auto=webp&s=efdd54df8da37e98f640d48df3d255a1190f01bc', 'width': 108}, {'height': 153, 'url': 'https://external-preview.redd.it/MXZ5aHh4bWZpbGRlMSTqk2DOPEdgmnDyQ8guvDBrE8AyiWMeqDE4BRKGe_SG.png?width=216&crop=smart&format=pjpg&auto=webp&s=2b3599f34432d41ac89d14f5347eb90ef921d44f', 'width': 216}, {'height': 227, 'url': 'https://external-preview.redd.it/MXZ5aHh4bWZpbGRlMSTqk2DOPEdgmnDyQ8guvDBrE8AyiWMeqDE4BRKGe_SG.png?width=320&crop=smart&format=pjpg&auto=webp&s=742d09d61bf2bfa0040ccccba4dcfb11be5b3072', 'width': 320}, {'height': 455, 'url': 'https://external-preview.redd.it/MXZ5aHh4bWZpbGRlMSTqk2DOPEdgmnDyQ8guvDBrE8AyiWMeqDE4BRKGe_SG.png?width=640&crop=smart&format=pjpg&auto=webp&s=cdf67cd1c0e19ae4d1d2f1d6faaf833168dc2660', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/MXZ5aHh4bWZpbGRlMSTqk2DOPEdgmnDyQ8guvDBrE8AyiWMeqDE4BRKGe_SG.png?format=pjpg&auto=webp&s=3010d43505e0daf4c5e46faf9e2f3c3d75ffd337', 'width': 674}, 'variants': {}}]} |
||
RAM usage of context versus parameters | 2 | How should I think about the relationship between the amount of VRAM needed for large context windows versus large model parameter counts? If context window sizes started to shift towards the billions, would we just start to load up our models with knowledge in the context instead of in pre-training? How should we think about the tradeoff between knowledge gained in training and knowledge gained in context?
| 2025-01-17T18:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i3n37b/ram_usage_of_context_versus_parameters/ | Mysterious-Rent7233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3n37b | false | null | t3_1i3n37b | /r/LocalLLaMA/comments/1i3n37b/ram_usage_of_context_versus_parameters/ | false | false | self | 2 | null |
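A rough way to frame the tradeoff: model weights are a fixed memory cost, while the KV cache grows linearly with context length. The back-of-envelope sketch below uses the published Llama-3.1-8B shapes (32 layers, 8 KV heads, head dim 128); treat the numbers as illustrative:

```python
# Back-of-envelope KV-cache sizing: "knowledge in weights" vs "knowledge in context".
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for keys and values, one set per layer, fp16 by default
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Llama-3.1-8B: 32 layers, 8 KV heads (GQA), head_dim 128
per_token = kv_cache_bytes(32, 8, 128, 1)        # ~128 KiB per token
full_128k = kv_cache_bytes(32, 8, 128, 131_072)  # ~16 GiB at 128k context
print(per_token / 1024, "KiB per token")
print(full_128k / 1024**3, "GiB at 128k tokens")
```

An 8B model's fp16 weights are also roughly 16 GB, so the KV cache "costs" as much as the whole model once you hit 128k tokens; at ~128 KiB per token, billion-token contexts are nowhere near feasible, which is why factual knowledge still mostly lives in the weights.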
LLM that can clone images to react | 0 | Claude is great at feeding it an image of a component I want to recreate, and it creates React code for it; however, I usually have to have quite a long chat to get it to look the same. Is there another model, maybe one I can run locally, that can give me the same type of image support? Unfortunately I hit the limit on Claude Pro for the day in less than 20 minutes doing this task. | 2025-01-17T18:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/1i3n549/llm_that_can_clone_images_to_react/ | PositiveEnergyMatter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3n549 | false | null | t3_1i3n549 | /r/LocalLLaMA/comments/1i3n549/llm_that_can_clone_images_to_react/ | false | false | self | 0 | null |
How good is o1-pro? | 1 | Is it actually much better than the other models? I am particularly interested in math and coding? | 2025-01-17T18:32:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i3n73x/how_good_is_o1pro/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3n73x | false | null | t3_1i3n73x | /r/LocalLLaMA/comments/1i3n73x/how_good_is_o1pro/ | false | false | self | 1 | null |
Any "mainstream" apps with genuinely useful local AI features? | 25 | Curious if any of you actually regularly use features in apps with local AI processing?
When I say "mainstream app", I mean more like PyCharm from JetBrains (i.e. making lots of money, large teams behind them, etc.) than an open-source/indie dev app.
And I'm more talking about a feature in an app (which does a bunch of things other than that AI feature), as opposed to an app that's entirely about using AI locally, like Ollama, LMStudio, etc.
I'm also not talking about OS features, e.g. auto-complete on iPhones. More interested in app that you've downloaded.
Currently, the only thing I can think of in my day-to-day is [code completion in PyCharm](https://blog.jetbrains.com/ai/2024/11/jetbrains-ai-assistant-2024-3/), but even that is now some kind of hybrid local/cloud thing. | 2025-01-17T18:36:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i3nbb7/any_mainstream_apps_with_genuinely_useful_local/ | intofuture | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3nbb7 | false | null | t3_1i3nbb7 | /r/LocalLLaMA/comments/1i3nbb7/any_mainstream_apps_with_genuinely_useful_local/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '3JqvgZAHz6zYg73NYleBK7n1T0xUSHT8IIiNUFO7Pmk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/el0GVBAqHSVqNo8z6gOpmkjSxRjNMmSvGwo8wEGBObI.jpg?width=108&crop=smart&auto=webp&s=639a91a8130e8521328c3236103e77546f250307', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/el0GVBAqHSVqNo8z6gOpmkjSxRjNMmSvGwo8wEGBObI.jpg?width=216&crop=smart&auto=webp&s=fce60cd352251657b82ee5090afa130d700d1286', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/el0GVBAqHSVqNo8z6gOpmkjSxRjNMmSvGwo8wEGBObI.jpg?width=320&crop=smart&auto=webp&s=d9cc565b05c2d06ae9be4dcbf8bd58d4404cbc43', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/el0GVBAqHSVqNo8z6gOpmkjSxRjNMmSvGwo8wEGBObI.jpg?width=640&crop=smart&auto=webp&s=1610d8d821582e8db5feba238aa18bea4e54b5ae', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/el0GVBAqHSVqNo8z6gOpmkjSxRjNMmSvGwo8wEGBObI.jpg?width=960&crop=smart&auto=webp&s=1f39b3f61a1400305450875f90557f86c17ca396', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/el0GVBAqHSVqNo8z6gOpmkjSxRjNMmSvGwo8wEGBObI.jpg?width=1080&crop=smart&auto=webp&s=50c35de65e77aedc50ce8ca781afc8d5dcfdd87a', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/el0GVBAqHSVqNo8z6gOpmkjSxRjNMmSvGwo8wEGBObI.jpg?auto=webp&s=1811c8dce50f4099a154d3b97234bfb2f30be05d', 'width': 1280}, 'variants': {}}]} |
Realtime speaker diarization | 203 | 2025-01-17T18:57:16 | https://youtube.com/watch?v=-zpyi1KHOUk&si=qzksOIhsLjo9J8Zp | Lonligrin | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i3nsbx | false | {'oembed': {'author_name': 'Linguflex', 'author_url': 'https://www.youtube.com/@Linguflex', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-zpyi1KHOUk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Cracking Realtime Speaker Diarization: Progress & Challenges (3)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-zpyi1KHOUk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Cracking Realtime Speaker Diarization: Progress & Challenges (3)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i3nsbx | /r/LocalLLaMA/comments/1i3nsbx/realtime_speaker_diarization/ | false | false | 203 | {'enabled': False, 'images': [{'id': 'ciWqcgtY-vE6y8BRJXB4yHqxRFbZjierz2SM5upazPA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/JO_yTxc06ktYf5LFR-Rn-h9sKgRJ8XcsPo1m_3iqmLE.jpg?width=108&crop=smart&auto=webp&s=2232ed19e5d5ef4b03dc3b3c0eb50953a6b2a164', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/JO_yTxc06ktYf5LFR-Rn-h9sKgRJ8XcsPo1m_3iqmLE.jpg?width=216&crop=smart&auto=webp&s=84317b40b08559925ebb3b69847439bbf94f1f48', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/JO_yTxc06ktYf5LFR-Rn-h9sKgRJ8XcsPo1m_3iqmLE.jpg?width=320&crop=smart&auto=webp&s=8a91a5195fd0960c0d708c2f400bd55c115bba5a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/JO_yTxc06ktYf5LFR-Rn-h9sKgRJ8XcsPo1m_3iqmLE.jpg?auto=webp&s=ea9b9dadb7b2fd9462bb4363cba8ce636267b1b2', 'width': 480}, 'variants': {}}]} |
||
PC upgrade for LLM (CPU based) - RAM only? Is AM4 enough? | 1 | [removed] | 2025-01-17T19:05:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i3nzdq/pc_upgrade_for_llm_cpu_based_ram_only_is_am4/ | Repsol_Honda_PL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3nzdq | false | null | t3_1i3nzdq | /r/LocalLLaMA/comments/1i3nzdq/pc_upgrade_for_llm_cpu_based_ram_only_is_am4/ | false | false | self | 1 | null |
I am open sourcing a smart text editor that runs completely in-browser using WebLLM + LLAMA (requires Chrome + WebGPU) | 275 | 2025-01-17T19:14:56 | https://v.redd.it/n3fmqwcsslde1 | yyjhao | /r/LocalLLaMA/comments/1i3o7a8/i_am_open_sourcing_a_smart_text_editor_that_runs/ | 1970-01-01T00:00:00 | 0 | {} | 1i3o7a8 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n3fmqwcsslde1/DASHPlaylist.mpd?a=1739862902%2CNDMwM2I1NDljNjMyMmM1Njk1MWRiY2YyMzkzNjliNzMxNDdiMDY2OGFjYmU2OTZjNThkNmMyNWNkNDRiMWEzNQ%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/n3fmqwcsslde1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/n3fmqwcsslde1/HLSPlaylist.m3u8?a=1739862902%2CZDNhYmMzMmI5MjFhZTU2MzUyM2Q0M2VlMWMzOTU3MDczMjQ0MDMyN2E3NzgzMDY5ZGFlZWFlYThiNDYzYjBmNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n3fmqwcsslde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}} | t3_1i3o7a8 | /r/LocalLLaMA/comments/1i3o7a8/i_am_open_sourcing_a_smart_text_editor_that_runs/ | false | false | 275 | {'enabled': False, 'images': [{'id': 'MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW.png?width=108&crop=smart&format=pjpg&auto=webp&s=a619e8cd8c21417e4abf1163c52effbf8e2c74dc', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW.png?width=216&crop=smart&format=pjpg&auto=webp&s=304579c9c79d523df674b15c83e36b58a050cba6', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW.png?width=320&crop=smart&format=pjpg&auto=webp&s=1c95218c934250d1f4eabda6208d952b5157c423', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW.png?width=640&crop=smart&format=pjpg&auto=webp&s=7f2cc6a700e434d78272e091e0727ebeb6de7a5b', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW.png?width=960&crop=smart&format=pjpg&auto=webp&s=96db97fb77fe3a5e7fa091af25703997d5d867d6', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2ebabe92f477cd0c6099a967a551a12f9f9f0c5b', 'width': 1080}], 'source': {'height': 1698, 'url': 'https://external-preview.redd.it/MGt0ZzN4Y3NzbGRlMeSgvI1GdDqWZSs569grdhgwadhN-F5M6UL9TiNWoaqW.png?format=pjpg&auto=webp&s=2657a6bda3f43715ef1876ce320cbe98a3b0d90c', 'width': 2460}, 'variants': {}}]} |
||
DeepSeek-R1 (Preview) Benchmarked on LiveCodeBench | 228 | 2025-01-17T20:06:47 | https://imgur.com/a/WdpIkiy | Charuru | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1i3pexj | false | null | t3_1i3pexj | /r/LocalLLaMA/comments/1i3pexj/deepseekr1_preview_benchmarked_on_livecodebench/ | false | false | 228 | {'enabled': False, 'images': [{'id': 'C4xXLuRegvnAklLIw9QQEkDNsswqKWdY9JU_JNil_K4', 'resolutions': [{'height': 124, 'url': 'https://external-preview.redd.it/RiXxcULN7VvmAA8zRKm9Hg6sMZIuDEZ9SdZM3h7z4e0.jpg?width=108&crop=smart&auto=webp&s=8320e90572d28cd0ad11de203a4fda67a6073b14', 'width': 108}, {'height': 248, 'url': 'https://external-preview.redd.it/RiXxcULN7VvmAA8zRKm9Hg6sMZIuDEZ9SdZM3h7z4e0.jpg?width=216&crop=smart&auto=webp&s=0934a94455baf74a4041c3a87861041bee0d022c', 'width': 216}, {'height': 367, 'url': 'https://external-preview.redd.it/RiXxcULN7VvmAA8zRKm9Hg6sMZIuDEZ9SdZM3h7z4e0.jpg?width=320&crop=smart&auto=webp&s=dc417edeeb34f86e220f8c5b43d16c01836af5af', 'width': 320}, {'height': 735, 'url': 'https://external-preview.redd.it/RiXxcULN7VvmAA8zRKm9Hg6sMZIuDEZ9SdZM3h7z4e0.jpg?width=640&crop=smart&auto=webp&s=1c43191d847a8866681673c575cc88d8e702dd05', 'width': 640}, {'height': 1102, 'url': 'https://external-preview.redd.it/RiXxcULN7VvmAA8zRKm9Hg6sMZIuDEZ9SdZM3h7z4e0.jpg?width=960&crop=smart&auto=webp&s=a1231e8ee8fe1b35e63b4a6e27884ff1499231a4', 'width': 960}, {'height': 1240, 'url': 'https://external-preview.redd.it/RiXxcULN7VvmAA8zRKm9Hg6sMZIuDEZ9SdZM3h7z4e0.jpg?width=1080&crop=smart&auto=webp&s=1d014db9124e323991f7a7b25278907b32a753a3', 'width': 1080}], 'source': {'height': 2422, 'url': 'https://external-preview.redd.it/RiXxcULN7VvmAA8zRKm9Hg6sMZIuDEZ9SdZM3h7z4e0.jpg?auto=webp&s=b750c2eecadaacecfcca45c75f2864f54b49eafd', 'width': 2108}, 'variants': {}}]} |
||
PhoenixOS: Fast OS-level support for GPU checkpoint and restore | 13 | 2025-01-17T20:12:31 | https://github.com/SJTU-IPADS/PhoenixOS | Aaaaaaaaaeeeee | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i3pjoy | false | null | t3_1i3pjoy | /r/LocalLLaMA/comments/1i3pjoy/phoenixos_fast_oslevel_support_for_gpu_checkpoint/ | false | false | 13 | {'enabled': False, 'images': [{'id': '284YqitxEW9jUQoTVM4n0uix5x7odzPHcBP6zB0ln1c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uvSmu97PNFkQJQgeOoBqELIYPXOgsfaW4WlXfeQivVc.jpg?width=108&crop=smart&auto=webp&s=ec31eaf3371d2bce9f8626e80fd3b792da365e45', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uvSmu97PNFkQJQgeOoBqELIYPXOgsfaW4WlXfeQivVc.jpg?width=216&crop=smart&auto=webp&s=c199bb49436d20b7d66841b5450b54a02d7dd2c6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/uvSmu97PNFkQJQgeOoBqELIYPXOgsfaW4WlXfeQivVc.jpg?width=320&crop=smart&auto=webp&s=220e18587f2d52b49337e7ad58527e24b8734cb7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/uvSmu97PNFkQJQgeOoBqELIYPXOgsfaW4WlXfeQivVc.jpg?width=640&crop=smart&auto=webp&s=6e3cd9f27a7967f7c7bc44096ccf283a68f7e8b9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/uvSmu97PNFkQJQgeOoBqELIYPXOgsfaW4WlXfeQivVc.jpg?width=960&crop=smart&auto=webp&s=5ecbe85552ef061709f8b603f0ac81a3bfd011fd', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/uvSmu97PNFkQJQgeOoBqELIYPXOgsfaW4WlXfeQivVc.jpg?auto=webp&s=1d723db17762b8dfb138c2264f494b5cd64b7c8d', 'width': 1024}, 'variants': {}}]} |
||
Use LM Studio to create AI work assistant | 1 | [removed] | 2025-01-17T20:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i3pndi/use_lm_studio_to_create_ai_work_assistant/ | Broad_Judgment_523 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3pndi | false | null | t3_1i3pndi | /r/LocalLLaMA/comments/1i3pndi/use_lm_studio_to_create_ai_work_assistant/ | false | false | self | 1 | null |
70b 3.3 | 1 | [removed] | 2025-01-17T20:19:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i3ppec | false | null | t3_1i3ppec | /r/LocalLLaMA/comments/1i3ppec/70b_33/ | false | false | default | 1 | null |
||
Where is everyone? Hello? | 0 | A while ago it was punch after punch: a model could keep the leaderboard for just 2 days before getting dethroned. It was so unstable it felt like an all-vs-all boxing match. Like here is Mistral hitting, but here comes Yi, then Cohere, then Llama and Gemini, all in the same month.
Now everything is still. Google has had their model in an experimental phase for a month now, Mistral hasn't really pushed any boundaries lately, Anthropic launched a 0.01% better model, and OpenAI launched unusably expensive models.
Like bro, give us the show | 2025-01-17T20:20:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i3pq5b/where_is_everyone_hello/ | EnnioEvo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3pq5b | false | null | t3_1i3pq5b | /r/LocalLLaMA/comments/1i3pq5b/where_is_everyone_hello/ | false | false | self | 0 | null |
Forwarding Email to LLM | 2 | So yesterday I stumbled upon the following post on X:
https://preview.redd.it/fif7s4sv0mde1.png?width=1428&format=png&auto=webp&s=a2a40aaeab14216c714993ff7f15b6acaf644ccf
I would very much like to learn how to build this. So I dug a little deeper:
https://preview.redd.it/o5gcnca41mde1.png?width=986&format=png&auto=webp&s=709e90238bb9253ea90c0ddd831c743a9b15e90e
Got it. Also:
https://preview.redd.it/0o1tvr081mde1.png?width=1618&format=png&auto=webp&s=0a93b205b92bf181415e147ae16c1061dd35e6f9
So I should have basically everything I need to do this. At least a general path forward. Problem is, I'm stumped. I've signed up for Upstash (and within it, QStash is free to use) and have been poking around in the documentation (https://upstash.com/docs/workflow/getstarted), but I'm not making much progress.
What is the HTTP endpoint in this context? What URL would you use?
If any of the superior talent here could expand on this or point me in the right direction, it will be much appreciated.
| 2025-01-17T20:21:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i3pr5v/forwarding_email_to_llm/ | AlphaTechBro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3pr5v | false | null | t3_1i3pr5v | /r/LocalLLaMA/comments/1i3pr5v/forwarding_email_to_llm/ | false | false | 2 | null |
|
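The "HTTP endpoint" in that setup is just a publicly reachable URL that QStash (or any email-forwarding hook) can POST the message to; your service then hands the text to an LLM. A hedged sketch of what that receiver could look like, where the route path, field names, and model are placeholders rather than QStash's actual payload schema:

```python
# Hypothetical inbound-email webhook: the queue POSTs the forwarded email here,
# and this service asks an LLM (local or hosted) what to do with it.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
llm = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # any OpenAI-compatible server

class InboundEmail(BaseModel):
    subject: str = ""
    body: str = ""

@app.post("/inbound-email")  # expose this URL publicly and hand it to QStash
async def inbound_email(mail: InboundEmail):
    resp = llm.chat.completions.create(
        model="whatever-you-serve",
        messages=[{
            "role": "user",
            "content": f"Summarize this email and draft a reply:\n\n{mail.subject}\n\n{mail.body}",
        }],
    )
    return {"reply_draft": resp.choices[0].message.content}
```

Run it with `uvicorn app:app` behind a tunnel or reverse proxy so the queue can reach it; the exact JSON QStash delivers depends on how you forward the email, so map its fields into the model above accordingly.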
Beating cuBLAS in SGEMM from Scratch | 70 | A while ago, I shared my article here about optimizing matrix multiplication on CPUs, achieving performance that outpaced NumPy - [Beating NumPy's matrix multiplication in 150 lines of C code](https://www.reddit.com/r/LocalLLaMA/comments/1dt3rqc/beating_numpys_matrix_multiplication_in_150_lines/)
I received positive feedback from you, and today I'm excited to share my second blog post. This one focuses on an SGEMM implementation that outperforms cuBLAS with its (modified?) CUTLASS kernel across a wide range of matrix sizes. The blog delves into benchmarking code on CUDA devices and explains the algorithm's design along with optimization techniques. These include inlined PTX, asynchronous memory copies, double-buffering, avoiding shared memory bank conflicts, and efficient coalesced storage using shared memory. The code is super easy to tweak, so you can customize it for your projects with kernel fusion or just drop it into your libraries as-is. If you have any questions, feel free to comment or send me a direct message - I'd love to hear your feedback and answer any questions you may have! Below, I've included performance comparisons against cuBLAS and Simon Boehm’s highly cited work, which is now integrated into llamafile aka tinyBLAS.
Blog post: [https://salykova.github.io/sgemm-gpu](https://salykova.github.io/sgemm-gpu)
Code: [https://github.com/salykova/sgemm.cu](https://github.com/salykova/sgemm.cu)
https://preview.redd.it/lygsu82g5mde1.jpg?width=650&format=pjpg&auto=webp&s=0dc4c066b2c9e378e60986495d21a437fbc174c3
| 2025-01-17T20:26:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i3pup0/beating_cublas_in_sgemm_from_scratch/ | salykova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3pup0 | false | null | t3_1i3pup0 | /r/LocalLLaMA/comments/1i3pup0/beating_cublas_in_sgemm_from_scratch/ | false | false | 70 | null |
|
Current SoTA for local speech to text + diarization? | 11 | What's the current SoTA for local speech to text + diarization? Is it still whisper + pyannote? I feel like it's been 1yr+ without any significant jumps in performance/efficiency.
Wondering if anyone else has found a step change since? | 2025-01-17T20:29:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i3px18/current_sota_for_local_speech_to_text_diarization/ | dat09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3px18 | false | null | t3_1i3px18 | /r/LocalLLaMA/comments/1i3px18/current_sota_for_local_speech_to_text_diarization/ | false | false | self | 11 | null |
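For reference, the whisper + pyannote baseline the post mentions usually looks like the sketch below: transcribe, diarize, then merge by timestamp overlap. Model names follow their Hugging Face cards, and pyannote needs an HF access token:

```python
# Baseline pipeline: Whisper for ASR, pyannote for speaker diarization,
# merged by picking the speaker turn that overlaps each ASR segment the most.
import whisper
from pyannote.audio import Pipeline

audio_path = "meeting.wav"

asr = whisper.load_model("large-v3")
transcript = asr.transcribe(audio_path)

diarizer = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="hf_..."  # your HF token
)
diarization = diarizer(audio_path)
turns = [(t.start, t.end, spk) for t, _, spk in diarization.itertracks(yield_label=True)]

def speaker_for(start, end):
    # Largest positive overlap wins; no overlap -> unknown
    overlaps = [(min(end, e) - max(start, s), spk) for s, e, spk in turns]
    best = max(overlaps, default=(0, "unknown"))
    return best[1] if best[0] > 0 else "unknown"

for seg in transcript["segments"]:
    print(f'[{speaker_for(seg["start"], seg["end"])}] {seg["text"].strip()}')
```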
AI Research | 25 | Do we still need AI research, or is ASI just a matter of scaling? I'm 17 years old and I want to become an AI researcher. I want to know your opinion/get advice | 2025-01-17T20:52:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i3qfgy/ai_research/ | ASI-Enjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3qfgy | false | null | t3_1i3qfgy | /r/LocalLLaMA/comments/1i3qfgy/ai_research/ | false | false | self | 25 | null |
Is It Possible to Run 400B+ Parameter Models on Machines with 128GB of RAM? | 1 | [removed] | 2025-01-17T21:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i3qmtb/is_it_possible_to_run_400b_parameter_models_on/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3qmtb | false | null | t3_1i3qmtb | /r/LocalLLaMA/comments/1i3qmtb/is_it_possible_to_run_400b_parameter_models_on/ | false | false | self | 1 | null |
5090 OpenCL & Vulkan leaks | 42 | Ack, not crushing 4090.
[https://videocardz.com/newz/nvidia-geforce-rtx-5090-appears-in-first-geekbench-opencl-vulkan-leaks](https://videocardz.com/newz/nvidia-geforce-rtx-5090-appears-in-first-geekbench-opencl-vulkan-leaks)
| 2025-01-17T21:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i3qzom/5090_opencl_vulkan_leaks/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3qzom | false | null | t3_1i3qzom | /r/LocalLLaMA/comments/1i3qzom/5090_opencl_vulkan_leaks/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'Yaj2TJOPmQ2ZO_e4jD9ZRqsn1bjxc0EUphpk-E5066Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/x47r2KfVuFtS6kBZC-8F6HPgmAYuMKExbduvUPak08I.jpg?width=108&crop=smart&auto=webp&s=7735cd45d678fbb1effe75982df5b2fc05f4c144', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/x47r2KfVuFtS6kBZC-8F6HPgmAYuMKExbduvUPak08I.jpg?width=216&crop=smart&auto=webp&s=c3668293800edd53e4dc7941b49fefaf3511a540', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/x47r2KfVuFtS6kBZC-8F6HPgmAYuMKExbduvUPak08I.jpg?width=320&crop=smart&auto=webp&s=d0a5abc3a6ffc90aac91598d88262d9214d801d8', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/x47r2KfVuFtS6kBZC-8F6HPgmAYuMKExbduvUPak08I.jpg?width=640&crop=smart&auto=webp&s=4b233970a6d31828710a9732abe7df34d2d747ef', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/x47r2KfVuFtS6kBZC-8F6HPgmAYuMKExbduvUPak08I.jpg?width=960&crop=smart&auto=webp&s=ab64bfba4c17898720fb467df6e395304717fd59', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/x47r2KfVuFtS6kBZC-8F6HPgmAYuMKExbduvUPak08I.jpg?width=1080&crop=smart&auto=webp&s=d19475112102a03b4a353529b20cfcef4662089b', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/x47r2KfVuFtS6kBZC-8F6HPgmAYuMKExbduvUPak08I.jpg?auto=webp&s=cbe51940ae8930e27f9144cb9e78fb0cb08c6233', 'width': 2500}, 'variants': {}}]} |
Function calling in llama.cpp? | 10 | How are you using function calling in llama.cpp? I tried a few things, but it doesn't really seem to work 😕
| 2025-01-17T21:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i3r6iu/function_calling_in_llamacpp/ | Few_Acanthisitta_858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3r6iu | false | null | t3_1i3r6iu | /r/LocalLLaMA/comments/1i3r6iu/function_calling_in_llamacpp/ | false | false | self | 10 | null |
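For what it's worth, llama.cpp's `llama-server` exposes an OpenAI-compatible `/v1/chat/completions`, so the request side looks like the sketch below; whether you get back a structured `tool_calls` object or just plain text depends on the build and the model's chat template (newer builds started with `--jinja` do the parsing for you, older ones may leave you parsing JSON out of the reply yourself):

```python
# OpenAI-style "tools" request against a local llama-server instance.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="whatever-is-loaded",  # placeholder; the server uses the loaded GGUF
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # structured path (server parsed the call)
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:               # fallback: the model answered (or emitted the call) as plain text
    print(msg.content)
```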
Very different sizes between gemma base and it | 1 | [removed] | 2025-01-17T21:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i3rgk6/very_different_sizes_between_gemma_base_and_it/ | Familiar-Medium-6271 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3rgk6 | false | null | t3_1i3rgk6 | /r/LocalLLaMA/comments/1i3rgk6/very_different_sizes_between_gemma_base_and_it/ | false | false | 1 | null |
|
The “apple” test - Why aren’t newer reasoning models doing better on this basic benchmark? (and yes, I know token prediction mechanics play a role) | 24 | Most of you are probably familiar with the infamous LLM “apple test” benchmark.
If you’re not, here it is: you give an LLM the following seemingly simple instruction prompt:
- Write 10 sentences that end in the word “apple”.
Sadly, most open source models (and even a lot of frontier models) fail miserably at this task. I’ve read that it has a lot to do with the way token prediction works, but some models can actually pass this test easily.
Models that I’ve tested that pass or fail on this test:
LLMs that PASS the apple test:
- Llama 3.3:70b (Q4KM)
- Athene-V2 (Q4KM)
- Nemotron (Q4KM)
- Qwen 2.5:72b (Q4KM)
LLMs that FAIL the apple test (most are newer models):
- Phi-4 14b (FP16)
- InternLM3 (FP16)
- Falcon 3 10b (FP16)
- Granite 3 Dense (FP16)
- QwQ 32b (Q_8)
- GLM-4 8b (FP16)
- Command-R (Q4KM)
- MiniCPM 8b v2.6 (FP16)
- Mistral Small 22b (Q4KM)
- Nemotron Mini 4b (FP16)
- Qwen 2.5 7b (FP16)
- WizardLM2 7b (FP16)
FAILED but with an honorable mention:
- Olmo2 14b (FP16) - this model is lightning fast and got 8 of 10 consistently correct and was able to fix its mistake after a second shot at it (most models won’t do better with more chances).
This task seems to be challenging for models under 70b to complete. Even the newer reasoning models with higher test time compute capabilities don’t seem to do well at all.
- Why haven’t newer models gotten better at this task over time?
- Is the underlying mechanism of token prediction still preventing success?
- Are the models that this works with just cheating by training to pass the specific benchmark?
Has anyone found an open source model under 70b that can pass the apple test consistently? | 2025-01-17T21:49:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i3rpsh/the_apple_test_why_arent_newer_reasoning_models/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3rpsh | false | null | t3_1i3rpsh | /r/LocalLLaMA/comments/1i3rpsh/the_apple_test_why_arent_newer_reasoning_models/ | false | false | self | 24 | null |
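If anyone wants to reproduce the test locally, a small harness like the one below works against any OpenAI-compatible endpoint (Ollama, llama-server, vLLM, ...); the base URL and model name are placeholders:

```python
# Quick "apple test" harness: ask for 10 sentences ending in "apple"
# and count how many actually do.
import re
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")
PROMPT = 'Write 10 sentences that end in the word "apple".'

resp = client.chat.completions.create(
    model="llama3.3:70b",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
)
text = resp.choices[0].message.content

# Split on sentence punctuation or line breaks, then check the tail of each piece.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+|\n+", text) if s.strip()]
passes = sum(1 for s in sentences
             if re.search(r"\bapple\b[.!?\"']*$", s, flags=re.IGNORECASE))
print(f"{passes} sentences end in 'apple' out of {len(sentences)} found")
```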
Codestral v2 vs Deepseek v3 | 1 | [removed] | 2025-01-17T21:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/1i3rx2i/codestral_v2_vs_deepseek_v3/ | Razah786 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3rx2i | false | null | t3_1i3rx2i | /r/LocalLLaMA/comments/1i3rx2i/codestral_v2_vs_deepseek_v3/ | false | false | self | 1 | null |
Friday LLM madness - Cow feathers | 1 | [removed] | 2025-01-17T22:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i3scuz/friday_llm_madness_cow_feathers/ | darknetone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3scuz | false | null | t3_1i3scuz | /r/LocalLLaMA/comments/1i3scuz/friday_llm_madness_cow_feathers/ | false | false | 1 | null |
|
Evaluating vector indexes in MariaDB and pgvector: part 1 | 3 | 2025-01-17T22:28:40 | https://smalldatum.blogspot.com/2025/01/evaluating-vector-indexes-in-mariadb.html | AccomplishedRoyal9 | smalldatum.blogspot.com | 1970-01-01T00:00:00 | 0 | {} | 1i3slw9 | false | null | t3_1i3slw9 | /r/LocalLLaMA/comments/1i3slw9/evaluating_vector_indexes_in_mariadb_and_pgvector/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'wPwf5jbQ2KfD0T1xx_sQNqBwKZhp3_hQH1R4OTiW0ns', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/C_Q0JYuIEl0onPPIh41AHXtkukkbDzIqLlfN15PdSP4.jpg?width=108&crop=smart&auto=webp&s=973044c76e1bea2444de8472a73a41701cf421d6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/C_Q0JYuIEl0onPPIh41AHXtkukkbDzIqLlfN15PdSP4.jpg?width=216&crop=smart&auto=webp&s=9add5feefefb4b61e08ec9bbeea3da61fedbca94', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/C_Q0JYuIEl0onPPIh41AHXtkukkbDzIqLlfN15PdSP4.jpg?width=320&crop=smart&auto=webp&s=74b20e2487deb411c9fb3bde30cca27a8a5481fb', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/C_Q0JYuIEl0onPPIh41AHXtkukkbDzIqLlfN15PdSP4.jpg?width=640&crop=smart&auto=webp&s=58e89cf1d4d0c6009948c5ea1f4514d45ca8f9ee', 'width': 640}, {'height': 515, 'url': 'https://external-preview.redd.it/C_Q0JYuIEl0onPPIh41AHXtkukkbDzIqLlfN15PdSP4.jpg?width=960&crop=smart&auto=webp&s=eedace8c7b5ad67b14e54f0ca60128eb8ec9eb0f', 'width': 960}, {'height': 580, 'url': 'https://external-preview.redd.it/C_Q0JYuIEl0onPPIh41AHXtkukkbDzIqLlfN15PdSP4.jpg?width=1080&crop=smart&auto=webp&s=e5f183e17a936201c5375d2206757975aa04fe61', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/C_Q0JYuIEl0onPPIh41AHXtkukkbDzIqLlfN15PdSP4.jpg?auto=webp&s=6097ebc73c4f4f730bbf530a6b6fb7b6c079253e', 'width': 1173}, 'variants': {}}]} |
||
[ Removed by Reddit ] | 1 | [ Removed by Reddit on account of violating the [content policy](/help/contentpolicy). ] | 2025-01-17T22:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1i3sse9/removed_by_reddit/ | wapsss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3sse9 | false | null | t3_1i3sse9 | /r/LocalLLaMA/comments/1i3sse9/removed_by_reddit/ | false | false | self | 1 | null |
Hyperbrowser.ai - Web Infra for AI Agents + LocalLLaMA special | 1 | [removed] | 2025-01-17T22:50:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i3t2ty/hyperbrowserai_web_infra_for_ai_agents_localllama/ | strongoffense | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3t2ty | false | null | t3_1i3t2ty | /r/LocalLLaMA/comments/1i3t2ty/hyperbrowserai_web_infra_for_ai_agents_localllama/ | false | false | self | 1 | null |
EU laws on finetuning an evil model | 1 | [removed] | 2025-01-18T00:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i3upez/eu_laws_on_finetuning_an_evil_model/ | CyberVikingr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3upez | false | null | t3_1i3upez | /r/LocalLLaMA/comments/1i3upez/eu_laws_on_finetuning_an_evil_model/ | false | false | self | 1 | null |
Best Noob-Friendly Offline Story Generator? | 3 | I am trying to find the best offline program to use for generating stories. I have no coding knowledge, so much of the language used on sites like Cohere's Command R+ is completely alien to me. Is there a good plug-and-play program which is either exclusively or at least mainly focused on story generation?
Ideally it would have long-term memory for better consistency with longer stories, but I am fine working with shorter-memory ones too if they give good results. The story generations would probably be between 1000-2000 words per use to keep tighter control on the narrative path. It should also have a simple way of adding new documents for it to learn from. | 2025-01-18T00:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i3vpgu/best_noobfriendly_offline_story_generator/ | Mr_Chr15topher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3vpgu | false | null | t3_1i3vpgu | /r/LocalLLaMA/comments/1i3vpgu/best_noobfriendly_offline_story_generator/ | false | false | self | 3 | null |
4x AMD Instinct Mi60 AI Server + Llama 3.1 Tulu 8B + vLLM | 0 | 2025-01-18T01:19:56 | https://v.redd.it/0i5u0emzlnde1 | Any_Praline_8178 | /r/LocalLLaMA/comments/1i3w6tm/4x_amd_instinct_mi60_ai_server_llama_31_tulu_8b/ | 1970-01-01T00:00:00 | 0 | {} | 1i3w6tm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0i5u0emzlnde1/DASHPlaylist.mpd?a=1739884802%2CZGEzNjYxYTFhOTdkZmQ2MmY4YTk0YWU3OThmMDA4MDQ5YjRmMzIxYjliNjE5ZjY4ZTA3MjYwMDAzNDViMGE1ZA%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/0i5u0emzlnde1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1904, 'hls_url': 'https://v.redd.it/0i5u0emzlnde1/HLSPlaylist.m3u8?a=1739884802%2CODljZTY0N2EwMzI5ZGE3ODVjOWQ0MDA0YzBkZDc4ZmUyZmVkNGQ4NDA4MTliNzk5N2U2MmRlMDhiZWYzNDg0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0i5u0emzlnde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1i3w6tm | /r/LocalLLaMA/comments/1i3w6tm/4x_amd_instinct_mi60_ai_server_llama_31_tulu_8b/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9', 'resolutions': [{'height': 190, 'url': 'https://external-preview.redd.it/dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9.png?width=108&crop=smart&format=pjpg&auto=webp&s=6eb2d3dfbd89cc3930a0889d27f0661f3c056048', 'width': 108}, {'height': 380, 'url': 'https://external-preview.redd.it/dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9.png?width=216&crop=smart&format=pjpg&auto=webp&s=6d1e2af685a6b224500551ba8af0ddee957f462a', 'width': 216}, {'height': 563, 'url': 'https://external-preview.redd.it/dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9.png?width=320&crop=smart&format=pjpg&auto=webp&s=b7bba35425d6f61c0b2f58c090ebcf94cec1f6a6', 'width': 320}, {'height': 1127, 'url': 'https://external-preview.redd.it/dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9.png?width=640&crop=smart&format=pjpg&auto=webp&s=620a9caedaa30556e5a19d0d406f1ff6c6cfb935', 'width': 640}, {'height': 1691, 'url': 'https://external-preview.redd.it/dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9.png?width=960&crop=smart&format=pjpg&auto=webp&s=0a5f171ac90317ed7516d3f64568af21ece0e050', 'width': 960}, {'height': 1903, 'url': 'https://external-preview.redd.it/dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=46866aeeeec2a4997a290160150cf66b787a40ae', 'width': 1080}], 'source': {'height': 3796, 'url': 'https://external-preview.redd.it/dTJxaTBlbXpsbmRlMdk1yEyr65UascgB9rzrCAZZL_Y-jSAzHs03VU6zOje9.png?format=pjpg&auto=webp&s=082bb2f8dcf8b4b6f3f5aa063c2c2f2f0d503372', 'width': 2154}, 'variants': {}}]} |
||
[2403.09919] Recurrent Drafter for Fast Speculative Decoding in Large Language Models | 31 | 2025-01-18T01:20:34 | https://arxiv.org/abs/2403.09919 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1i3w7ao | false | null | t3_1i3w7ao | /r/LocalLLaMA/comments/1i3w7ao/240309919_recurrent_drafter_for_fast_speculative/ | false | false | default | 31 | null |
|
Haven’t related to a meme this hard in a minute | 1 | 2025-01-18T01:27:04 | Bjornhub1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i3wbz0 | false | null | t3_1i3wbz0 | /r/LocalLLaMA/comments/1i3wbz0/havent_related_to_a_meme_this_hard_in_a_minute/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'itjeogGraVkRV7jmjFm2gAiRk93-kzAGtQg_ixf-w38', 'resolutions': [{'height': 184, 'url': 'https://preview.redd.it/uojh1r9cnnde1.jpeg?width=108&crop=smart&auto=webp&s=19be43a7699e100d1dae8c9a32d7b8affc288331', 'width': 108}, {'height': 369, 'url': 'https://preview.redd.it/uojh1r9cnnde1.jpeg?width=216&crop=smart&auto=webp&s=0075f93d0f4afb031b28022b2ab64f9f26c38027', 'width': 216}, {'height': 548, 'url': 'https://preview.redd.it/uojh1r9cnnde1.jpeg?width=320&crop=smart&auto=webp&s=3e7904cddb14a75f45db424efbe7d0e49b7de69b', 'width': 320}, {'height': 1096, 'url': 'https://preview.redd.it/uojh1r9cnnde1.jpeg?width=640&crop=smart&auto=webp&s=1c58f90a0ea092ead0ddefba57d82c634be547e2', 'width': 640}], 'source': {'height': 1456, 'url': 'https://preview.redd.it/uojh1r9cnnde1.jpeg?auto=webp&s=90f2286e18c532e020c06b4683720e9d34187ea7', 'width': 850}, 'variants': {}}]} |
|||
3.3 70b | 1 | [removed] | 2025-01-18T02:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/1i3x3ts/33_70b/ | Murcielago-980 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3x3ts | false | null | t3_1i3x3ts | /r/LocalLLaMA/comments/1i3x3ts/33_70b/ | false | false | self | 1 | null |
Ephemeral Tool Generation > Function Calling | 2 | Predefined functions are for hard-working people. Ephemeral Tool Generation is for lazy folks like me. Why not just have the entire thing on autopilot? Tools are now cattle, and are created and destroyed on demand. This is the way.
This was generated with no predefined tools, using the following prompt:
Create a figure with two side-by-side subplots comparing Q1 and Q2 data for four teams in a department, where the y-axis represents the average project completion rate, and the x-axis is the team name. Each bar in the subplot should also display the exact value on top.
https://preview.redd.it/o2rqhca0unde1.png?width=2426&format=png&auto=webp&s=8d1d3d3e6ea4dc4be8a912478e85636c016ac134
I have also tested with prompts such as:
Print the user details for Github user <username>
Or:
Return the top 10 list of completion models in HuggingFace sorted by likes.
It has nailed it every time. Stay tuned. Manifold will be out soon.
| 2025-01-18T02:11:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i3x6ro/ephemeral_tool_generation_function_calling/ | LocoMod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3x6ro | false | null | t3_1i3x6ro | /r/LocalLLaMA/comments/1i3x6ro/ephemeral_tool_generation_function_calling/ | false | false | 2 | null |
|
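For readers wondering what "ephemeral tools" means mechanically, here is a hedged illustration of the general idea (not the author's Manifold implementation): ask the model to write a throwaway function, exec it into a scratch namespace, call it once, and discard it. Endpoint and model name are placeholders, and the exec step obviously needs sandboxing in anything real:

```python
# Illustration of ephemeral tool generation: the "tool" exists only for this call.
import re
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

task = "Return the 10 largest files under the current directory as (path, bytes) tuples."
prompt = (
    "Write a single self-contained Python function named run() that does the "
    f"following and returns the result:\n{task}\n"
    "Reply with only a Python code block."
)

reply = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Strip a markdown code fence if the model added one (chr(96) is a backtick).
fence = chr(96) * 3
source = reply.strip()
if source.startswith(fence):
    source = source.strip(chr(96)).split("\n", 1)[1]

namespace = {}            # the tool lives only in this dict
exec(source, namespace)   # only do this inside a sandbox you trust
print(namespace["run"]())
# namespace goes out of scope -> the "tool" is destroyed
```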
50k budget in GPUs, best Open Source Model I can run? | 0 | Per the title, I am working with a company who's willing to spend 50k on gpus (not including CPU, RAM, etc) to inference an open source model. What is the best setup/model for this budget? | 2025-01-18T02:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i3xj22/50k_budget_in_gpus_best_open_source_model_i_can/ | NathanA2CsAlt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3xj22 | false | null | t3_1i3xj22 | /r/LocalLLaMA/comments/1i3xj22/50k_budget_in_gpus_best_open_source_model_i_can/ | false | false | self | 0 | null |
Which AI Coding Assistant Do You Prefer: Qwen, Codestral, or Copilot? | 1 | [removed] | 2025-01-18T02:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1i3xobm/which_ai_coding_assistant_do_you_prefer_qwen/ | ImportantOwl2939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3xobm | false | null | t3_1i3xobm | /r/LocalLLaMA/comments/1i3xobm/which_ai_coding_assistant_do_you_prefer_qwen/ | false | false | self | 1 | null |
What's the cheapest way to run Llama 3.x 8B class models with realtime-like (chatgpt speed) tokens per second? | 39 | fireworks.ai? spin up on runpod? build a home server? | 2025-01-18T02:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i3xoyd/whats_the_cheapest_way_to_run_llama_3x_8b_class/ | synexo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3xoyd | false | null | t3_1i3xoyd | /r/LocalLLaMA/comments/1i3xoyd/whats_the_cheapest_way_to_run_llama_3x_8b_class/ | false | false | self | 39 | null |
Which AI Coding Assistant Do You Prefer: Qwen, Codestral, or Copilot? | 1 | [removed] | 2025-01-18T02:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i3y2d5/which_ai_coding_assistant_do_you_prefer_qwen/ | ImportantOwl2939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3y2d5 | false | null | t3_1i3y2d5 | /r/LocalLLaMA/comments/1i3y2d5/which_ai_coding_assistant_do_you_prefer_qwen/ | false | false | self | 1 | null |
What llm do you use for your local agents. | 1 | I'm using RunPod cloud hosting. My preference is using AutoGen. A while ago I remember trying to get local models to use AutoGen and it would just be a shit show. Wouldn't really work. The only thing that worked consistently was GPT-4. What local model have you found to be better than others when it comes to local agents? 94GB-140GB of VRAM is what I have available | 2025-01-18T03:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/1i3yh5o/what_llm_do_you_use_for_your_local_agents/ | rhaastt-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3yh5o | false | null | t3_1i3yh5o | /r/LocalLLaMA/comments/1i3yh5o/what_llm_do_you_use_for_your_local_agents/ | false | false | self | 1 | null |
How do you guys use Open Source models in your workplace? I wish to start using them at my workplace. | 1 | I am the only AI guy in our workplace. I have built some fine GenAI applications for the company, but using OpenAI's API.
We got some credits for Scaleway, so we are free to play around with a GPU for a month.
Btw, their best GPU is good enough to run a non-quantised version of Qwen-32B.
Now here is where my doubt arises: Scaleway also provides API access to some open source models (most of them being costlier than the GPU itself). This made me question whether there is a drawback to locally downloading the models from Hugging Face.
This is why I am asking you guys how you all leverage open source models in your company. I am very eager to explore the real AI part of my job, some guidance will be appreciated. Thanks | 2025-01-18T03:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i3yw8d/how_do_you_guys_use_open_source_models_in_your/ | Existing-Pay7076 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3yw8d | false | null | t3_1i3yw8d | /r/LocalLLaMA/comments/1i3yw8d/how_do_you_guys_use_open_source_models_in_your/ | false | false | self | 1 | null |
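One common pattern, sketched below, is to pull the weights from Hugging Face once and serve them on your own GPU instead of paying per token for a hosted API; this uses vLLM's offline API with the Qwen model mentioned in the post as a placeholder (it needs a large GPU, or `tensor_parallel_size` across several):

```python
# Offline inference with weights downloaded/cached from the Hugging Face Hub.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-32B-Instruct")  # add tensor_parallel_size=N for multi-GPU
params = SamplingParams(temperature=0.2, max_tokens=256)

outputs = llm.generate(
    ["Summarize the main drawback of sending internal company data to a hosted LLM API."],
    params,
)
print(outputs[0].outputs[0].text)
```

Recent vLLM versions also ship a `vllm serve <model>` command that exposes the same weights behind an OpenAI-compatible endpoint, so your existing OpenAI-based applications can be pointed at it with just a `base_url` change.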
I don't think AI will kill programming, but it will change it in a few big ways. | 0 | I think it will kill websites and frontends. I think companies will start having their own internal tools that their agents can use, but somebody still has to code those. And I think those will be about a billion times more fun to code than another stuffy react app. I can see an app store for tools that you can embed.
Think of all the stupid things you have had to code that needed just enough interface to be easier to use than the command line, but not quite a full app or page.
The real winner we have here is natural language processing that honestly doesn't suck any more, and that is achievable with even some of the simpler models. | 2025-01-18T03:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i3yzoc/i_dont_think_ai_will_kill_programming_but_it_will/ | malformed-packet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3yzoc | false | null | t3_1i3yzoc | /r/LocalLLaMA/comments/1i3yzoc/i_dont_think_ai_will_kill_programming_but_it_will/ | false | false | self | 0 | null |
Grokking at the Edge of Numerical Stability | 25 | https://arxiv.org/abs/2501.04697
Grokking, the sudden generalization that occurs after prolonged overfitting, is a surprising phenomenon challenging our understanding of deep learning. Although significant progress has been made in understanding grokking, the reasons behind the delayed generalization and its dependence on regularization remain unclear. In this work, we argue that without regularization, grokking tasks push models to the edge of numerical stability, introducing floating point errors in the Softmax function, which we refer to as Softmax Collapse (SC). We demonstrate that SC prevents grokking and that mitigating SC enables grokking without regularization. Investigating the root cause of SC, we find that beyond the point of overfitting, the gradients strongly align with what we call the naïve loss minimization (NLM) direction. This component of the gradient does not alter the model's predictions but decreases the loss by scaling the logits, typically by scaling the weights along their current direction. We show that this scaling of the logits explains the delay in generalization characteristic of grokking and eventually leads to SC, halting further learning. To validate our hypotheses, we introduce two key contributions that address the challenges in grokking tasks: StableMax, a new activation function that prevents SC and enables grokking without regularization, and ⊥Grad, a training algorithm that promotes quick generalization in grokking tasks by preventing NLM altogether. These contributions provide new insights into grokking, elucidating its delayed generalization, reliance on regularization, and the effectiveness of existing grokking-inducing methods. Code for this paper is available at this https URL. | 2025-01-18T04:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i3z6cb/grokking_at_the_edge_of_numerical_stability/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3z6cb | false | null | t3_1i3z6cb | /r/LocalLLaMA/comments/1i3z6cb/grokking_at_the_edge_of_numerical_stability/ | false | false | self | 25 | null |
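Not the paper's StableMax, but a small numeric illustration of the failure mode the abstract calls Softmax Collapse: once the logits are scaled up enough, float32 softmax saturates to an exact one-hot, so the cross-entropy gradient for the already-correct class becomes exactly zero and learning stalls.

```python
# Softmax saturation in float32: at large logit scales the output is exactly
# one-hot, so d(cross-entropy)/d(true-class logit) = p - 1 underflows to 0.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

logits = np.array([4.0, 1.0, -2.0], dtype=np.float32)
for scale in [1, 5, 10, 30]:
    p = softmax(scale * logits)
    grad_true_class = p[0] - 1.0
    print(scale, p, grad_true_class)
# At large scales p becomes [1., 0., 0.] exactly in float32 and the gradient is 0.0
```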
LLMs for getting data from a bunch of documents | 1 | [removed] | 2025-01-18T04:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i3za4e/llms_for_getting_data_from_a_bunch_of_documents/ | Unhappy-Fig-2208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3za4e | false | null | t3_1i3za4e | /r/LocalLLaMA/comments/1i3za4e/llms_for_getting_data_from_a_bunch_of_documents/ | false | false | self | 1 | null |
Here’s my Hardware: Help me Build a Frankenstein Hybrid AI Setup for LLMs, Big Data, and Mobile App Testing | 1 | [removed] | 2025-01-18T04:14:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i3zelv/heres_my_hardware_help_me_build_a_frankenstein/ | cloudcircuitry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3zelv | false | null | t3_1i3zelv | /r/LocalLLaMA/comments/1i3zelv/heres_my_hardware_help_me_build_a_frankenstein/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KNAUv4v8qS3PNd9J3AsUwtVWWrHJR2u7XYB9xNAIt3c', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=108&crop=smart&auto=webp&s=1baf133bb163d748fc24979ab0b3bd4c712feec5', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=216&crop=smart&auto=webp&s=848e3580231ce0a5a220dec4e726914d5fe9677b', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=320&crop=smart&auto=webp&s=204ffeec0ea8adb53eb1faad138ee1d56106f717', 'width': 320}, {'height': 367, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=640&crop=smart&auto=webp&s=47d89b09e36cd6d4a00e7efcdb650d72292ba6ce', 'width': 640}, {'height': 551, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=960&crop=smart&auto=webp&s=ab5864bcb2eeb2d2ee8798d016d03a6eb515f4c0', 'width': 960}, {'height': 620, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=1080&crop=smart&auto=webp&s=2677627d24421fef9004df283b70dfdd4abd6cf5', 'width': 1080}], 'source': {'height': 958, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?auto=webp&s=8ab26763d9cc09dff48a56b1719ed3040c54375f', 'width': 1668}, 'variants': {}}]} |
The best embedding model so far iamgroot42/rover_nexus | 0 | No need for a reranker, just use it; it's also at the top of the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
I tested it in OpenWebUI and it's the best I've ever tested, and it's fast AF.
[https://huggingface.co/iamgroot42/rover\_nexus](https://huggingface.co/iamgroot42/rover_nexus) | 2025-01-18T04:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i3ztzu/the_best_embedding_model_so_far_iamgroot42rover/ | AlgorithmicKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i3ztzu | false | null | t3_1i3ztzu | /r/LocalLLaMA/comments/1i3ztzu/the_best_embedding_model_so_far_iamgroot42rover/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'XyO6BICbW4Hg8xmbvc3hN3cENx4gTiYAHoZDX0xzla0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=108&crop=smart&auto=webp&s=96645ff2d3c13c9de5b8e543d793398e8378a5ce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=216&crop=smart&auto=webp&s=5fe7dd25ac52b49026818459348b727a60f76c95', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=320&crop=smart&auto=webp&s=46bd623b4140579283466426f35db45a2716afdf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=640&crop=smart&auto=webp&s=b024b9ba08b61cf952b69cb7507fca3e1ebfa39e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=960&crop=smart&auto=webp&s=4a6c9716fe66802e32392d34d5d6cafa747a6c2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=1080&crop=smart&auto=webp&s=52ce36e354673a164cd33267e3c92737187dd009', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?auto=webp&s=e855b63ad31d3cd9e7b74c56d92057a31258081f', 'width': 1200}, 'variants': {}}]} |
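A quick way to try an embedding model like this yourself, assuming the checkpoint ships a standard sentence-transformers config (as most MTEB leaderboard entries do):

```python
# Embed a few documents and rank them against a query by cosine similarity.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("iamgroot42/rover_nexus")
docs = ["KoboldCpp adds TTS support", "How to fine-tune Whisper", "Best GPU for local LLMs"]
query = "speech synthesis in a local llama.cpp frontend"

doc_emb = model.encode(docs, normalize_embeddings=True)
q_emb = model.encode(query, normalize_embeddings=True)
scores = doc_emb @ q_emb  # cosine similarity, since embeddings are normalized
print(sorted(zip(scores, docs), reverse=True))
```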
Whisper turbo fine tuning guidance | 7 | I am looking to try fine tuning whisper large v3 turbo on runpod. I have a 3090 which I could use locally, but why not play with a cloud gpu so I can use my gpu for other stuff. Does anyone have any guides I can follow to help with the fine tuning process? I asked ChatGPT and it almost seems too easy. I already have my audio files in .wav format and their correctly transcribed text files.
Thanks for any help or advice! | 2025-01-18T04:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i401lt/whisper_turbo_fine_tuning_guidance/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i401lt | false | null | t3_1i401lt | /r/LocalLLaMA/comments/1i401lt/whisper_turbo_fine_tuning_guidance/ | false | false | self | 7 | null |
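A condensed version of the standard Hugging Face Whisper fine-tuning recipe is sketched below, assuming the .wav files and transcripts are laid out as an "audiofolder" dataset (audio files plus a `metadata.csv` with a `transcription` column); the model name, column name, and hyperparameters are placeholders to adapt, and it is worth cross-checking against the full HF fine-tuning guide before a real run:

```python
# Condensed Whisper fine-tuning sketch with transformers + datasets.
from transformers import (WhisperProcessor, WhisperForConditionalGeneration,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)
from datasets import load_dataset, Audio

model_id = "openai/whisper-large-v3-turbo"
processor = WhisperProcessor.from_pretrained(model_id, language="en", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(model_id)

ds = load_dataset("audiofolder", data_dir="data/")          # your wavs + metadata.csv
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=16_000).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds["train"].column_names)

def collate(features):
    # Pad log-mel features and labels separately; mask label padding with -100.
    batch = processor.feature_extractor.pad(
        [{"input_features": f["input_features"]} for f in features], return_tensors="pt")
    labels = processor.tokenizer.pad(
        [{"input_ids": f["labels"]} for f in features], return_tensors="pt")
    batch["labels"] = labels["input_ids"].masked_fill(labels["attention_mask"].ne(1), -100)
    return batch

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments("whisper-turbo-finetuned",
                                  per_device_train_batch_size=8,
                                  learning_rate=1e-5, max_steps=2000, fp16=True),
    train_dataset=ds["train"],
    data_collator=collate,
)
trainer.train()
```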
What do I need to use to lip sync with audio just a few seconds / segment of a video? | 8 | For a project, I'm looking to record an actor and swap just a few words from the video, with their voice customized to the user's preference. For example: in the video, the actor says: I know David. If you're wondering how he makes great videos, check out this page.
Here I want to configure it this way: I know $name. If you're wondering how $genderpronoun makes great videos, check out this page.
So, on an input box of my website, if they enter their name as Steve and select the gender as Male, it needs to lip-sync the audio and video to that name and pronoun and provide an updated, lip-synced video with the same voice.
Any ideas on how to make this happen? I've looked into HeyGen, Wave2Lip and others, but they're mostly for making new videos from scratch with completely new scripts or training them. I'm looking for it to generate within a few seconds to a minute by sticking to the original video and script but only changing 2 words. Any local implementation or free or paid APIs would be much helpful. | 2025-01-18T05:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i40bx0/what_do_i_need_to_use_to_lip_sync_with_audio_just/ | thescientificindian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i40bx0 | false | null | t3_1i40bx0 | /r/LocalLLaMA/comments/1i40bx0/what_do_i_need_to_use_to_lip_sync_with_audio_just/ | false | false | self | 8 | null |
Take a risk on a 4090 for stupid cheap or stick with getting a 3090 for double the price? | 0 | Stupid idea for a 4090?
I have a lead on a 4090 that has a burned connector.
Even with travel time, gas, new PSU, and parts to repair the unit, I still think I’d come out ahead to drive all that way and repair the connector.
If you had a lead on a 4090 that was a “stupid good price” …. Would you make the 700 mile trip and try to repair it?
It’s effectively 1/4 the price of what I can find online new or used.
Tell me I’m not crazy to think of doing this?
Alternatively, for double the price, I can potentially get a 3090 24gb card that’s functioning without issues.
Would it be worth the risk? I’m thinking it would be worth the risk of getting the 4090 and still trying to repair it.
Also asked seller for shipping if possible as I would rather not drive that lol | 2025-01-18T05:21:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i40i6i/take_a_risk_on_a_4090_for_stupid_cheap_or_stick/ | bigDottee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i40i6i | false | null | t3_1i40i6i | /r/LocalLLaMA/comments/1i40i6i/take_a_risk_on_a_4090_for_stupid_cheap_or_stick/ | false | false | self | 0 | null |
Non-code fine-tuned completion models? | 2 | Are there any good fine-tuned non-code completion models these days? Like base Llama 3.1 fine-tuned on high-quality/creative completions?
I think chatifying models has been disastrous for their creativity
I just want 3.5 instruct back | 2025-01-18T06:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i41dt7/noncode_finetuned_completion_models/ | PetersOdyssey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i41dt7 | false | null | t3_1i41dt7 | /r/LocalLLaMA/comments/1i41dt7/noncode_finetuned_completion_models/ | false | false | self | 2 | null |
Deciding Next GPU purchase (Replace 3070) | 0 | I have socked away some money over the years and finally saved enough. There are 3 options that stand out to me:
RTX 6000 ADA - 300W | 48GB VRAM | 18,176 cuda cores | 568 Tensor cores
RTX 5090 - 575W | 32 GB VRAM | 21,760 cuda cores | 680 tensor cores
RTX 4090 - 450W | 24 GB VRAM | 16384 cuda cores | 512 tensor cores
Any opinions on the best option?
[View Poll](https://www.reddit.com/poll/1i41pxk) | 2025-01-18T06:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i41pxk/deciding_next_gpu_purchase_replace_3070/ | DIY-Tech-HA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i41pxk | false | null | t3_1i41pxk | /r/LocalLLaMA/comments/1i41pxk/deciding_next_gpu_purchase_replace_3070/ | false | false | self | 0 | null |
huggingface alternatives | 1 | Out of curiosity, what are some good huggingface alternatives? | 2025-01-18T07:05:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i421qo/huggingface_alernatives/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i421qo | false | null | t3_1i421qo | /r/LocalLLaMA/comments/1i421qo/huggingface_alernatives/ | false | false | self | 1 | null
r/LocalLLaMa if China had powerful 100K H100 clusters too | 1 | 2025-01-18T07:35:07 | random-tomato | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i42gmf | false | null | t3_1i42gmf | /r/LocalLLaMA/comments/1i42gmf/rlocalllama_if_china_had_powerful_100k_h100/ | false | false | 1 | {'enabled': True, 'images': [{'id': '0_gHMD9WmEH6wv7kuWFsIWu8kazSwH8vsx-v5fNC2fM', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/2piy5v3vgpde1.png?width=108&crop=smart&auto=webp&s=6ee45baa2319b55d5d0bfab4c0de3ed30ad53916', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/2piy5v3vgpde1.png?width=216&crop=smart&auto=webp&s=3d3ecf5ac6b479ccbac4395917bf4dc7d1bdb716', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/2piy5v3vgpde1.png?width=320&crop=smart&auto=webp&s=df2cfcd9dae3d02e725405cb16389b6d79ca6567', 'width': 320}], 'source': {'height': 331, 'url': 'https://preview.redd.it/2piy5v3vgpde1.png?auto=webp&s=b69a523076b575f4a5820fb3a25e290afb51532b', 'width': 522}, 'variants': {}}]} |
Self hosted avatar generation? | 5 | Is there a model/platform/framework for generating personal avatars (i.e., avatar replica from images/videos, own voice, etc)? | 2025-01-18T08:00:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i42sxd/self_hosted_avatar_generation/ | x0rchid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i42sxd | false | null | t3_1i42sxd | /r/LocalLLaMA/comments/1i42sxd/self_hosted_avatar_generation/ | false | false | self | 5 | null |
KoboldCpp 1.82 - Now supports OuteTTS v0.2+0.3 with speaker voice synthesis and XTTS/OpenAI speech API, TAESD for Flux & SD3, multilingual whisper (plus RAG and WebSearch from v1.81) | 189 | Hey it's me Concedo, here again playing how-many-more-API-endpoints-can-koboldcpp-serve.
Today's release brings long awaited TTS support, which works on all versions of OuteTTS GGUFs including the newly released **v0.3 500M and 1B** models. It also provides XTTS and OpenAI Speech compatible APIs, so it can work as a direct TTS drop-in for existing frontends that use those features.
There are also some pretty cool improvements, as well as many other features, so do check out the release notes if you haven't yet. Last release, we also added WebSearch and a simple browser based RAG, so check that out if you missed it.
https://github.com/LostRuins/koboldcpp/releases | 2025-01-18T08:27:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i435so/koboldcpp_182_now_supports_outetts_v0203_with/ | HadesThrowaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i435so | false | null | t3_1i435so | /r/LocalLLaMA/comments/1i435so/koboldcpp_182_now_supports_outetts_v0203_with/ | false | false | self | 189 | {'enabled': False, 'images': [{'id': 'MRbTnR3ra04-qmbDmyLuIbAb_K6gstBjXs7ycXYHe2A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fLN5BRTx-XsTs4qgou3U8C3oONsdF7ZDHBd6EmEyqFI.jpg?width=108&crop=smart&auto=webp&s=aee89760b1c308649053f6f086b959f3e2ab37b6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fLN5BRTx-XsTs4qgou3U8C3oONsdF7ZDHBd6EmEyqFI.jpg?width=216&crop=smart&auto=webp&s=53dd0ff142047ca4ec68939deedd187fa0565244', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fLN5BRTx-XsTs4qgou3U8C3oONsdF7ZDHBd6EmEyqFI.jpg?width=320&crop=smart&auto=webp&s=c2e3c56de1cbb06882dbe3e6116340f44225c74c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fLN5BRTx-XsTs4qgou3U8C3oONsdF7ZDHBd6EmEyqFI.jpg?width=640&crop=smart&auto=webp&s=2cd043c661e552b0c220618cbc8dc50db3639249', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fLN5BRTx-XsTs4qgou3U8C3oONsdF7ZDHBd6EmEyqFI.jpg?width=960&crop=smart&auto=webp&s=ace3977d10d1aa44d8a4eaeeb538859a73e1a5f1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fLN5BRTx-XsTs4qgou3U8C3oONsdF7ZDHBd6EmEyqFI.jpg?width=1080&crop=smart&auto=webp&s=a6208f9ce7259069b6966d7e01e65a3a4ed5016b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fLN5BRTx-XsTs4qgou3U8C3oONsdF7ZDHBd6EmEyqFI.jpg?auto=webp&s=98e58ec1b65bb871e43b8ec43ce10a6373007f92', 'width': 1200}, 'variants': {}}]} |
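For readers who want to poke at the new speech endpoint from code, here is a hedged sketch of calling the OpenAI-compatible Speech API the release notes mention. The port (5001 is KoboldCpp's usual default), the exact route, and the voice/model strings are assumptions to adapt to your own setup:

```python
import requests

# Assumed OpenAI-Speech-style request against a local KoboldCpp instance loaded
# with an OuteTTS GGUF; adjust port, voice and model names to your configuration.
resp = requests.post(
    "http://localhost:5001/v1/audio/speech",
    json={"model": "outetts", "input": "KoboldCpp can talk now.", "voice": "kobo"},
    timeout=300,
)
resp.raise_for_status()
with open("speech.wav", "wb") as f:
    f.write(resp.content)  # server returns raw audio bytes
```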
[Hyperfitting] This paper may remove the need for samplers altogether at minimal cost! | 1 | 2025-01-18T08:54:54 | https://x.com/mgostIH/status/1880320930855153969 | mgostIH | x.com | 1970-01-01T00:00:00 | 0 | {} | 1i43iza | false | null | t3_1i43iza | /r/LocalLLaMA/comments/1i43iza/hyperfitting_this_paper_may_remove_the_need_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'nDW_DucUt4dYtLNUhghJ3nIz1nyArKeboCTeE1mQu-c', 'resolutions': [{'height': 25, 'url': 'https://external-preview.redd.it/xzf5nEJC23X4pitQSMtJRjBP20artJSag2yl3c9Ctvs.jpg?width=108&crop=smart&auto=webp&s=ae8f07e0ffd4594cda8e080716b851996be88ace', 'width': 108}, {'height': 50, 'url': 'https://external-preview.redd.it/xzf5nEJC23X4pitQSMtJRjBP20artJSag2yl3c9Ctvs.jpg?width=216&crop=smart&auto=webp&s=de2cdcb2ef263aa6948aa73411e2a338fef26fc1', 'width': 216}, {'height': 74, 'url': 'https://external-preview.redd.it/xzf5nEJC23X4pitQSMtJRjBP20artJSag2yl3c9Ctvs.jpg?width=320&crop=smart&auto=webp&s=1acbc6324e4443694aee9ae39aae278805134191', 'width': 320}, {'height': 148, 'url': 'https://external-preview.redd.it/xzf5nEJC23X4pitQSMtJRjBP20artJSag2yl3c9Ctvs.jpg?width=640&crop=smart&auto=webp&s=cb0228965f492821c5c3da2feaacdc525b27f0af', 'width': 640}, {'height': 222, 'url': 'https://external-preview.redd.it/xzf5nEJC23X4pitQSMtJRjBP20artJSag2yl3c9Ctvs.jpg?width=960&crop=smart&auto=webp&s=18ca98a412ac10d6b7b7ab1bdf9a215555d0b1ec', 'width': 960}], 'source': {'height': 226, 'url': 'https://external-preview.redd.it/xzf5nEJC23X4pitQSMtJRjBP20artJSag2yl3c9Ctvs.jpg?auto=webp&s=bb848d0401e6bb2fb6fbb6edcf967e4c330de768', 'width': 973}, 'variants': {}}]} |
Object and shape detection | 2 | Hi, are there models that can be trained to detect shapes or objects I drew?
Do you have any resources to help with that? | 2025-01-18T09:14:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i43sdo/object_and_shape_detection/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i43sdo | false | null | t3_1i43sdo | /r/LocalLLaMA/comments/1i43sdo/object_and_shape_detection/ | false | false | self | 2 | null
r/LocalLLaMa if China had 100,000 H100 GPU clusters too... | 1 | [removed] | 2025-01-18T09:21:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i43vv9 | false | null | t3_1i43vv9 | /r/LocalLLaMA/comments/1i43vv9/rlocalllama_if_china_had_100000_h100_gpu_clusters/ | false | false | default | 1 | null |
r/LocalLLaMa if Qwen/Deepseek had 100K H100 GPU clusters too... | 1 | [removed] | 2025-01-18T09:22:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i43wif | false | null | t3_1i43wif | /r/LocalLLaMA/comments/1i43wif/rlocalllama_if_qwendeepseek_had_100k_h100_gpu/ | false | false | default | 1 | null |
Civilization if Qwen/Deepseek had 100K H100 GPU clusters... | 1 | 2025-01-18T09:24:39 | random-tomato | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i43xft | false | null | t3_1i43xft | /r/LocalLLaMA/comments/1i43xft/civilization_if_qwendeepseek_had_100k_h100_gpu/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-m8G_4toZ4yNsGAzjieYbaZ-pUXKTMgDiYuS2VWyIS4', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/hrcg0jpg0qde1.jpeg?width=108&crop=smart&auto=webp&s=0912d62b6d7a69a128c8917ec08bffb1d4cc37f4', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/hrcg0jpg0qde1.jpeg?width=216&crop=smart&auto=webp&s=7a28254acd7853c171c65febe7aba81eb68a3bb3', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/hrcg0jpg0qde1.jpeg?width=320&crop=smart&auto=webp&s=23215b0d1704119e890913f1f8bc333268dea3ef', 'width': 320}], 'source': {'height': 331, 'url': 'https://preview.redd.it/hrcg0jpg0qde1.jpeg?auto=webp&s=18c199055622ddbd54951fdbf7acc712419c9111', 'width': 522}, 'variants': {}}]} |
Any audio models that accurately transcribe audio while preserving profanity? | 1 | [removed] | 2025-01-18T10:02:30 | https://www.reddit.com/r/LocalLLaMA/comments/1i44fvg/any_audio_models_that_accurately_transcribe_audio/ | xseson23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i44fvg | false | null | t3_1i44fvg | /r/LocalLLaMA/comments/1i44fvg/any_audio_models_that_accurately_transcribe_audio/ | false | false | self | 1 | null |
Any audio models that accurately transcribe audio while preserving profanity? | 1 | [removed] | 2025-01-18T10:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i44izs/any_audio_models_that_accurately_transcribe_audio/ | xseson23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i44izs | false | null | t3_1i44izs | /r/LocalLLaMA/comments/1i44izs/any_audio_models_that_accurately_transcribe_audio/ | false | false | self | 1 | null |
Remember this guy? Bait again? | 1 | 2025-01-18T10:19:31 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i44nyx | false | null | t3_1i44nyx | /r/LocalLLaMA/comments/1i44nyx/remember_this_guy_bait_again/ | false | false | 1 | {'enabled': True, 'images': [{'id': '_DM2-jC0Jd-Z-fzEdWbGCnITz0dHkbWcNYKZzV1cZ8k', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/kf1qur8baqde1.jpeg?width=108&crop=smart&auto=webp&s=c70fdb27fd2e3963cc798c72ba63353b27e0392f', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/kf1qur8baqde1.jpeg?width=216&crop=smart&auto=webp&s=bb148576d84d4ae2781c02babe6cc612f38e9e10', 'width': 216}, {'height': 249, 'url': 'https://preview.redd.it/kf1qur8baqde1.jpeg?width=320&crop=smart&auto=webp&s=3799333faaffda03ca65c56a64454dc71c021c35', 'width': 320}, {'height': 498, 'url': 'https://preview.redd.it/kf1qur8baqde1.jpeg?width=640&crop=smart&auto=webp&s=420aa79cbd94ec5ac9eda14928e151af427574df', 'width': 640}, {'height': 747, 'url': 'https://preview.redd.it/kf1qur8baqde1.jpeg?width=960&crop=smart&auto=webp&s=30c4b1b50b322821aaa9e46e46f20cdc9b5050d2', 'width': 960}, {'height': 841, 'url': 'https://preview.redd.it/kf1qur8baqde1.jpeg?width=1080&crop=smart&auto=webp&s=9f5bac91d314eb57c834c3ac979db3f13ec504eb', 'width': 1080}], 'source': {'height': 841, 'url': 'https://preview.redd.it/kf1qur8baqde1.jpeg?auto=webp&s=a01c8ea69c39d12e632dbfeaa2ab5c1104194002', 'width': 1080}, 'variants': {}}]} |
Remember this guy? | 0 | 2025-01-18T10:20:13 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i44obb | false | null | t3_1i44obb | /r/LocalLLaMA/comments/1i44obb/remember_this_guy/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'wSg0FZTU2xGxMAhqMWsAXaxvr6SfCfon96GftRw4W7I', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/pdc3umigaqde1.jpeg?width=108&crop=smart&auto=webp&s=1be91186cd0dda391d02c097f2cdf64cf815062e', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/pdc3umigaqde1.jpeg?width=216&crop=smart&auto=webp&s=9e9687d1ad42efac92ffce76741849c0880e6d56', 'width': 216}, {'height': 249, 'url': 'https://preview.redd.it/pdc3umigaqde1.jpeg?width=320&crop=smart&auto=webp&s=5f959dda36973e8f0ccbbad4a17712f29c072614', 'width': 320}, {'height': 498, 'url': 'https://preview.redd.it/pdc3umigaqde1.jpeg?width=640&crop=smart&auto=webp&s=e07aa543055de5c3def7c184bab83b3c9782272f', 'width': 640}, {'height': 747, 'url': 'https://preview.redd.it/pdc3umigaqde1.jpeg?width=960&crop=smart&auto=webp&s=24c02074436c0ca270f1696d7829d80759cadd51', 'width': 960}, {'height': 841, 'url': 'https://preview.redd.it/pdc3umigaqde1.jpeg?width=1080&crop=smart&auto=webp&s=dda9d555cd6028b207fb3dcaa952b906bd363811', 'width': 1080}], 'source': {'height': 841, 'url': 'https://preview.redd.it/pdc3umigaqde1.jpeg?auto=webp&s=7e58a3dca89acffb80057d5ce07c49a1f39844bb', 'width': 1080}, 'variants': {}}]} |
Intel should release a 24GB version of the Arc B580 | 364 | The B580 is already showing impressive performance for LLM inference, matching the RTX 3060 in Vulkan benchmarks (\~36 tokens/sec on Qwen2 7B) while being more power efficient and $50 cheaper. But VRAM is the real bottleneck for running larger models locally.
With Intel's strong XMX matrix performance and the existing clamshell memory design validated in shipping docs, a 24GB variant is technically feasible. This would enable running 13B models quantized to 8-bit (most 13B models need \~14GB), existing models with larger context, etc.
It would have way better price/performance than RTX 4060 Ti 16GB, native Vulkan support without CUDA lock-in and more performance potential if OpenVINO is further optimized.
The regular B580's stellar price/performance ratio shows Intel can be aggressive on pricing. A \~$329 24GB variant would hit a sweet spot for local LLM enthusiasts building inference rigs.
This is Intel's chance to build mind- and marketshare among AI developers and enthusiasts who are tired of CUDA lock-in. They can grow a community around OpenVINO and their AI tooling. Every developer who builds with Intel's stack today helps their ecosystem forward. The MLPerf results show they have the performance - now they just need to get the hardware into developers' hands.
* Dec 16 '24: [Shipping document suggests that a 24 GB version of Intel's Arc B580 graphics card could be heading to market, though not for gaming](https://www.pcgamer.com/hardware/graphics-cards/shipping-document-suggests-that-a-24-gb-version-of-intels-arc-b580-graphics-card-could-be-heading-to-market-though-not-for-gaming/)
https://preview.redd.it/xaydqqjygqde1.png?width=691&format=png&auto=webp&s=0d57bc47d8936ed555b725e7733a88541d20f6d8 | 2025-01-18T10:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/1i457gp/intel_should_release_a_24gb_version_of_the_arc/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i457gp | false | null | t3_1i457gp | /r/LocalLLaMA/comments/1i457gp/intel_should_release_a_24gb_version_of_the_arc/ | false | false | 364 | {'enabled': False, 'images': [{'id': 'g6y-c4adGL6FSRlo1jTaocCuemapsOYG52lQjxy2dUU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=108&crop=smart&auto=webp&s=2be7d740cb31a436d4570ca2851b6938abd36aca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=216&crop=smart&auto=webp&s=161613bd94790b7ead6d485ff41fc73c06b1ebfb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=320&crop=smart&auto=webp&s=c1558845b46d2b418d1e6d87a8ba36651d78cbe4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=640&crop=smart&auto=webp&s=ddd3f42144ca0c2a05d54cf349b57f74c2e13f0f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=960&crop=smart&auto=webp&s=66e62a40cfeb8a4310ea33538fe5083186238b10', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=1080&crop=smart&auto=webp&s=90370de6f4f7e8a4c3ac91f80bfd6dd03b9cf044', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?auto=webp&s=431f02035325737f3bcd69627b33adf57a7c2c75', 'width': 1200}, 'variants': {}}]} |
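As a back-of-the-envelope check on the "13B at 8-bit needs ~14GB" figure above, weight memory is roughly parameters times bytes per parameter, with KV cache and runtime overhead adding a couple of GB on top. A rough sketch of that arithmetic:

```python
# Rough weight-only VRAM estimate; KV cache and runtime overhead come on top.
def weight_vram_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"13B @ {bits}-bit ≈ {weight_vram_gb(13, bits):.1f} GB")
# 16-bit ≈ 24.2 GB, 8-bit ≈ 12.1 GB, 4-bit ≈ 6.1 GB -- which is why a 24 GB card
# fits 13B at 8-bit comfortably with room left for context.
```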
|
Best LLMs for logical reasoning and maths given my laptop specs | 0 | AMD Ryzen 7 PRO 5850U with Radeon Graphics
1 socket x 8 cores x 2 threads = 16 logical CPUs with avx, avx2
28G RAM
AMD Cezanne - 256M VRAM
350G SSD available
I'm looking for LLM recommendations for logical reasoning and maths tasks. The maths tasks are not complicated, but the model should be able to do comparative math. The primary purpose would be to process and compare financial information.
Between the HF transformers lib and ollama, what do people recommend? I'm also planning on using an interactive chat UI, and LangChain to process some documents. | 2025-01-18T11:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i4592b/best_llms_for_logical_reasoning_and_maths_given/ | learner1118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4592b | false | null | t3_1i4592b | /r/LocalLLaMA/comments/1i4592b/best_llms_for_logical_reasoning_and_maths_given/ | false | false | self | 0 | null
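For a CPU-only laptop like this, the usual answer is a small quantized instruct model served by Ollama and queried over its local REST API, with LangChain or a chat UI layered on top. A minimal sketch (the model tag is just an example choice):

```python
import requests

# Ask a locally served model a simple comparative-math question via Ollama's
# /api/chat endpoint (default port 11434); the model tag is an example.
payload = {
    "model": "qwen2.5:7b-instruct",
    "messages": [{
        "role": "user",
        "content": "Revenue A went from 1.2M to 1.5M, revenue B from 2.0M to 2.3M. Which grew faster in percentage terms?",
    }],
    "stream": False,
}
r = requests.post("http://localhost:11434/api/chat", json=payload, timeout=600)
r.raise_for_status()
print(r.json()["message"]["content"])
```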
What hardware to run coding models? | 1 | Hi guys, I'm looking to run a local LLM to assist me with coding. I believe the Qwen models are good. I'm looking for accuracy and quality at reasonable speeds. Which models give the best results?
In terms of hardware, I've read that Apple silicon can do a decent job. Will these models run reasonably well on a Mac Mini, or should I get something like a 3090 instead? | 2025-01-18T11:27:43 | https://www.reddit.com/r/LocalLLaMA/comments/1i45mi7/what_hardware_to_run_coding_models/ | Blues520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i45mi7 | false | null | t3_1i45mi7 | /r/LocalLLaMA/comments/1i45mi7/what_hardware_to_run_coding_models/ | false | false | self | 1 | null
NEED IDEAS TO FIX | Properly get the data from a Image with Confidence scores for each row. | 1 | [removed] | 2025-01-18T12:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i46abb/need_ideas_to_fix_properly_get_the_data_from_a/ | Fuzzy-Consequence718 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i46abb | false | null | t3_1i46abb | /r/LocalLLaMA/comments/1i46abb/need_ideas_to_fix_properly_get_the_data_from_a/ | false | false | self | 1 | null |
Qualcomm AI hub | 22 | https://github.com/quic/ai-hub-models?tab=readme-ov-file
I check every few months to see how things are going with the Snapdragon NPU, but I never find anything useful, until now
Maybe there are others out there who want to tinker a bit with Android and the NPU.
There are also examples for Image Gen, LLMs, and Whisper.
| 2025-01-18T12:25:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i46hrp/qualcomm_ai_hub/ | Big-Ad1693 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i46hrp | false | null | t3_1i46hrp | /r/LocalLLaMA/comments/1i46hrp/qualcomm_ai_hub/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'yB703qk8boJiIhZN36-_qRGOrgddDDVCzzyFPL1CVOs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KOsy0ExvvvPuDzeaHKI_XgMQVsq2ywt6gzusjROKhwc.jpg?width=108&crop=smart&auto=webp&s=2c1b3058a7578056a2af62f8915e62e0a23c5ab2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KOsy0ExvvvPuDzeaHKI_XgMQVsq2ywt6gzusjROKhwc.jpg?width=216&crop=smart&auto=webp&s=387deaf0c3ed19c99a64ad15e3b583669c01b367', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KOsy0ExvvvPuDzeaHKI_XgMQVsq2ywt6gzusjROKhwc.jpg?width=320&crop=smart&auto=webp&s=8f5cfce4289259fd2b8bd733de5151b56e9b5670', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KOsy0ExvvvPuDzeaHKI_XgMQVsq2ywt6gzusjROKhwc.jpg?width=640&crop=smart&auto=webp&s=fdc8a28d3b3b866ed3b6ed935fe7ea344f6e74f8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KOsy0ExvvvPuDzeaHKI_XgMQVsq2ywt6gzusjROKhwc.jpg?width=960&crop=smart&auto=webp&s=64373aaef15c363967ecec310235a43a5d20bc6b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KOsy0ExvvvPuDzeaHKI_XgMQVsq2ywt6gzusjROKhwc.jpg?width=1080&crop=smart&auto=webp&s=099a13e35abe30b0e3ca63925e9a86a67ae9633a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KOsy0ExvvvPuDzeaHKI_XgMQVsq2ywt6gzusjROKhwc.jpg?auto=webp&s=e536ba05d2fa6bef21d41406e4b02597ce495d82', 'width': 1200}, 'variants': {}}]} |
Why can't LLMs be re-trained on the go with the conversation for infinite memory? | 69 | I'm just trying to understand the technical limitations and whether this is something that's been considered.
I think the context window should only exist for instructions, while maintaining an infinite memory. This could really put LLMs in the realm of writing a complete book series and effectively changing the world as we know it. | 2025-01-18T12:56:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i46zfr/why_cant_llms_be_retrained_on_the_go_with_the/ | freecodeio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i46zfr | false | null | t3_1i46zfr | /r/LocalLLaMA/comments/1i46zfr/why_cant_llms_be_retrained_on_the_go_with_the/ | false | false | self | 69 | null
How to finetune on "negative responses" ? | 1 | [removed] | 2025-01-18T13:12:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i4792c/how_to_finetune_on_negative_responses/ | Waggerra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4792c | false | null | t3_1i4792c | /r/LocalLLaMA/comments/1i4792c/how_to_finetune_on_negative_responses/ | false | false | self | 1 | null |
Interact with Facebook using a local LLM | 0 | Sorry in advance if this is a noob question. Is it possible (meaning... are there any open resources) to interact over Facebook using a local LLM in a semi-automated way?
I have trained a local model able to reply to most questions/interactions the way I would. I'd like to deploy it to interact with specific posts.
Is that feasible? | 2025-01-18T13:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i47wcy/interact_with_facebook_using_a_local_llm/ | Green-Ad-3964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i47wcy | false | null | t3_1i47wcy | /r/LocalLLaMA/comments/1i47wcy/interact_with_facebook_using_a_local_llm/ | false | false | self | 0 | null |
Bad eval rate on home Server. | 1 | [removed] | 2025-01-18T13:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i481hd/bad_eval_rate_on_home_server/ | Roalkege | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i481hd | false | null | t3_1i481hd | /r/LocalLLaMA/comments/1i481hd/bad_eval_rate_on_home_server/ | false | false | self | 1 | null |
Has anyone tried anything besides native Python to build Agents? | 26 | I know, it's a very common question to ask around here. I am working on a project and have been using plain Python to build my agentic workflow. But as it is expanding, I am facing some issues keeping up with it. I am planning to use a framework, and Pydantic AI is on my radar. I am also interested in the Bee Agent Framework, but it's written predominantly in TypeScript. If you have any other suggestions, please let me know. | 2025-01-18T14:14:04 | https://www.reddit.com/r/LocalLLaMA/comments/1i48dmj/has_anyone_tried_anything_besides_native_python/ | QaeiouX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i48dmj | false | null | t3_1i48dmj | /r/LocalLLaMA/comments/1i48dmj/has_anyone_tried_anything_besides_native_python/ | false | false | self | 26 | null
Can I cancel my ChatGPT Plus subscription in favour of DeepSeek V3 API ? | 0 | I am impressed by DeepSeek, and it makes me feel I am spending a lot on ChatGPT Plus. I am from India and $23 is very high for us.
I am thinking about cancelling ChatGPT Plus and using the DeepSeek chat and, sometimes, the API.
My only concern is, I am using it mostly for coding, and I ask for code help in Android, Flutter, JS and Python.
So, how well does DeepSeek do at code generation for the above stack?
Also, how do O1 and the upcoming O3 perform in coding and other human language tasks (translation etc) when compared with DeepSeek V3? | 2025-01-18T14:28:07 | https://www.reddit.com/r/LocalLLaMA/comments/1i48no3/can_i_cancel_my_chatgpt_plus_subscription_in/ | MatrixEternal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i48no3 | false | null | t3_1i48no3 | /r/LocalLLaMA/comments/1i48no3/can_i_cancel_my_chatgpt_plus_subscription_in/ | false | false | self | 0 | null
Offering my OpenRouter account (+credits) at a discount | 1 | [removed] | 2025-01-18T14:51:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i49514/offering_my_openrouter_account_credits_at_a/ | Mission_Bear7823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i49514 | false | null | t3_1i49514 | /r/LocalLLaMA/comments/1i49514/offering_my_openrouter_account_credits_at_a/ | false | false | self | 1 | null |
Openrouter account for experimentation | 0 | First, I'd like to apologize if my post isn't the typical post here, though I believe it could be useful to part of the audience, and there is nothing misleading about it.
Basically, I have an OpenRouter account I don't need anymore, with credits in it, so I'm offering it at a discount (half the price). It's great for trying different models, learning, and experimentation, but it would also work well in production (it has great rate limits, i.e. 3K+ per min, and you get 300M tokens in DeepSeek V3).
If anyone finds it useful, let me know and I'll fill you in on the details. | 2025-01-18T14:56:38 | https://www.reddit.com/r/LocalLLaMA/comments/1i498h5/openrouter_account_for_experimentation/ | Mission_Bear7823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i498h5 | false | null | t3_1i498h5 | /r/LocalLLaMA/comments/1i498h5/openrouter_account_for_experimentation/ | false | false | self | 0 | null
4 Months free of Gemini Advanced + 2TB Storage on Google Drive. UK Only* :)) | 0 | 2025-01-18T15:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i49it7/4_months_free_of_gemini_advanced_2tb_storage_on/ | a1kron_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i49it7 | false | null | t3_1i49it7 | /r/LocalLLaMA/comments/1i49it7/4_months_free_of_gemini_advanced_2tb_storage_on/ | false | false | 0 | null |
LLama 98 running on a Pentium II 400mHz :D | 1 | [removed] | 2025-01-18T15:28:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i49vzv/llama_98_running_on_a_pentium_ii_400mhz_d/ | mat_rinaldi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i49vzv | false | null | t3_1i49vzv | /r/LocalLLaMA/comments/1i49vzv/llama_98_running_on_a_pentium_ii_400mhz_d/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'KKeFcJkZfjr-ksuzmJ409u5hYyJXEQFk3xHZ3rikALQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=108&crop=smart&auto=webp&s=510cf40e9b6cf40b58efccf063c1c9da74117384', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=216&crop=smart&auto=webp&s=3e6bd5caac678296a4e305593724f91769e737c6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=320&crop=smart&auto=webp&s=ce87b289ea788cb1382bad98c2fecd50374de69a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=640&crop=smart&auto=webp&s=50fbc439702368cb6375fd525ca3253357d1a101', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=960&crop=smart&auto=webp&s=3f85de7bc5203419cc815a77484af1d0d0cf6d07', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=1080&crop=smart&auto=webp&s=7d659aa185da3a4466194aacc4759637e6f480ff', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?auto=webp&s=dfdb6bfae4e058af88d5bcbbdd9c5da202e1f283', 'width': 2400}, 'variants': {}}]} |
|
Nuggt: Retrieve Information from the internet to be used as context for LLM (Open Source) | 27 | [Nuggt Demo GIF](https://i.redd.it/n6awgafpurde1.gif)
Hi r/LocalLLaMA
We all understand that the quality of LLM output depends heavily on the context and prompt provided. For example, asking an LLM to generate a good blog article on a given topic (let's say *X*) might result in a generic answer that may or may not meet your expectations. However, if you provide guidelines on how to write a good article and supply the LLM with additional relevant information about the topic, you significantly increase the chances of receiving a response that aligns with your needs.
With this in mind, I wanted to create a workspace that makes it easy to build and manage context for use with LLMs. I imagine there are many of us who might use LLMs in workflows similar to the following:
**Task**: Let’s say you want to write an elevator pitch for your startup.
**Step 1**: Research how to write a good elevator pitch, then save the key points as context.
**Step 2**: Look up examples of effective elevator pitches and add these examples to your context.
**Step 3**: Pass this curated context to the LLM and ask it to craft an elevator pitch for your startup. Importantly, you expect transparency—ensuring the LLM uses your provided context as intended and shows how it informed the output.
If you find workflows like this appealing, I think you’ll enjoy this tool. Here are its key features:
1. It integrates **Tavily** and **Firecrawl** to gather information on any topic from the internet.
2. You can highlight any important points, right-click, and save them as context.
3. You can pass this context to the LLM, which will use it to assist with your task. In its responses, the LLM will cite the relevant parts of the context so you can verify how your input was used and even trace it back to the original sources.
My hypothesis is that many of us would benefit from building strong context to complete our tasks. Of course, I could be wrong—perhaps this is just one of my idiosyncrasies, putting so much effort into creating detailed context! Who knows? The only way to find out is to post it here and see what the community thinks.
I’d love to hear your feedback!
Here is the github repo: [https://github.com/shoibloya/nuggt-research](https://github.com/shoibloya/nuggt-research) | 2025-01-18T15:36:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i4a2by/nuggt_retrieve_information_from_the_internet_to/ | Loya_3005 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4a2by | false | null | t3_1i4a2by | /r/LocalLLaMA/comments/1i4a2by/nuggt_retrieve_information_from_the_internet_to/ | false | false | 27 | null |
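For readers who want the gist of the "cite your saved context" idea without the UI, a rough sketch of the pattern follows. This is not Nuggt's actual API; the endpoint and model name are placeholders for any OpenAI-compatible backend:

```python
from openai import OpenAI

# Number the snippets you highlighted, then ask the model to cite them by number
# so each sentence of the output can be traced back to a saved source.
snippets = [
    "A good elevator pitch runs 30-60 seconds.",
    "Lead with the problem, then the solution, then traction.",
]
context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # any compatible server
reply = client.chat.completions.create(
    model="placeholder-model",
    messages=[{
        "role": "user",
        "content": f"Context:\n{context}\n\nWrite an elevator pitch for my startup. Cite context items like [1] wherever you use them.",
    }],
)
print(reply.choices[0].message.content)
```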
|
LLama 98 running on a Pentium II 400mHz :D | 1 | [removed] | 2025-01-18T15:38:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i4a3wk/llama_98_running_on_a_pentium_ii_400mhz_d/ | matmanalog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4a3wk | false | null | t3_1i4a3wk | /r/LocalLLaMA/comments/1i4a3wk/llama_98_running_on_a_pentium_ii_400mhz_d/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'KKeFcJkZfjr-ksuzmJ409u5hYyJXEQFk3xHZ3rikALQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=108&crop=smart&auto=webp&s=510cf40e9b6cf40b58efccf063c1c9da74117384', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=216&crop=smart&auto=webp&s=3e6bd5caac678296a4e305593724f91769e737c6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=320&crop=smart&auto=webp&s=ce87b289ea788cb1382bad98c2fecd50374de69a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=640&crop=smart&auto=webp&s=50fbc439702368cb6375fd525ca3253357d1a101', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=960&crop=smart&auto=webp&s=3f85de7bc5203419cc815a77484af1d0d0cf6d07', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?width=1080&crop=smart&auto=webp&s=7d659aa185da3a4466194aacc4759637e6f480ff', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/UJ6PeagdF_dVLD9YZSlfurjZomVxORnj6p4cYva_MrM.jpg?auto=webp&s=dfdb6bfae4e058af88d5bcbbdd9c5da202e1f283', 'width': 2400}, 'variants': {}}]} |
|
Have you truly replaced paid models (ChatGPT, Claude, etc.) with self-hosted Ollama or Hugging Face models? | 287 | I've been experimenting with locally hosted setups, but I keep finding myself coming back to ChatGPT for the ease and performance. For those of you who've managed to fully switch, do you still use services like ChatGPT occasionally? Do you use both?
Also, what kind of GPU setup is really needed to get that kind of seamless experience? My 16GB VRAM feels pretty inadequate in comparison to what these paid models offer. Would love to hear your thoughts and setups...
| 2025-01-18T16:14:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i4awir/have_you_truly_replaced_paid_modelschatgpt_claude/ | Economy-Fact-8362 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4awir | false | null | t3_1i4awir | /r/LocalLLaMA/comments/1i4awir/have_you_truly_replaced_paid_modelschatgpt_claude/ | false | false | self | 287 | null |
Need help to decide macbook model!! I am going to use it for fine tuning llm models and running them locally. | 1 | [removed] | 2025-01-18T16:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i4b6a8/need_help_to_decide_macbook_model_i_am_going_to/ | imtusharraj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4b6a8 | false | null | t3_1i4b6a8 | /r/LocalLLaMA/comments/1i4b6a8/need_help_to_decide_macbook_model_i_am_going_to/ | false | false | self | 1 | null |
Need help to decide macbook model!! I am going to use it for fine tuning llm models and running them locally. | 1 | [removed] | 2025-01-18T16:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i4bbck/need_help_to_decide_macbook_model_i_am_going_to/ | imtusharraj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4bbck | false | null | t3_1i4bbck | /r/LocalLLaMA/comments/1i4bbck/need_help_to_decide_macbook_model_i_am_going_to/ | false | false | self | 1 | null |
Consumer vs Server platform for dual 3090 setup | 1 | [removed] | 2025-01-18T16:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i4bbh8/consumer_vs_server_platform_for_dual_3090_setup/ | hanzo_h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4bbh8 | false | null | t3_1i4bbh8 | /r/LocalLLaMA/comments/1i4bbh8/consumer_vs_server_platform_for_dual_3090_setup/ | false | false | self | 1 | null |
-Nevoria- LLama 3.3 70b | 42 | Hey everyone!
TLDR: This is a merge focused on combining storytelling capabilities with detailed scene descriptions, taking a balanced approach to preserve intelligence and usability while reducing positive bias. Currently ranked as the highest 70B on the UGI benchmark!
What went into this?
I took EVA-LLAMA 3.33 for its killer storytelling abilities and mixed it with EURYALE v2.3's detailed scene descriptions. Added Anubis v1 to enhance the prose details, and threw in some Negative\_LLAMA to keep it from being too sunshine-and-rainbows. All this sitting on a Nemotron-lorablated base.
Subtracting the lorablated base during merging causes a "weight twisting" effect. If you've played with my previous Astoria models, you'll recognize this approach - it creates some really interesting balance in how the model responds.
As usual my goal is to keep the model Intelligent with a knack for storytelling and RP.
Benchmark Results:
\- UGI Score: 56.75 (Currently #1 for 70B models and equal or better than 123b models!)
\- Open LLM Average: 43.92% (less meaningful now that people train on the benchmark questions, but still a useful signal)
\- Solid scores across the board, especially in IFEval (69.63%) and BBH (56.60%)
Already got some quantized versions available:
Recommended template: LLam@ception by @.konnect
Check it out: [https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70B](https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70B)
Would love to hear your thoughts and experiences with it! Your feedback helps make the next one even better.
Happy prompting! 🚀 | 2025-01-18T16:39:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i4bfpo/nevoria_llama_33_70b/ | mentallyburnt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4bfpo | false | null | t3_1i4bfpo | /r/LocalLLaMA/comments/1i4bfpo/nevoria_llama_33_70b/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'astY7nhZWVdI0nvkEU44bykO9YuXyaxA2wm0pZGyY8w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3ZirrobdQ2PXX1mOSCz9Ra8wUG2HUg6sAMEtYN7fZuU.jpg?width=108&crop=smart&auto=webp&s=4189bdc73016a87f8aeadb4b377195a86312bf12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3ZirrobdQ2PXX1mOSCz9Ra8wUG2HUg6sAMEtYN7fZuU.jpg?width=216&crop=smart&auto=webp&s=c3c097f6bcc8336f6a6d37a9e9bd54e7891079b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3ZirrobdQ2PXX1mOSCz9Ra8wUG2HUg6sAMEtYN7fZuU.jpg?width=320&crop=smart&auto=webp&s=c52869e0d1d86c3db185461f4dcb1623fbc8d331', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3ZirrobdQ2PXX1mOSCz9Ra8wUG2HUg6sAMEtYN7fZuU.jpg?width=640&crop=smart&auto=webp&s=74e35a0f0d8c84dd7210f48cc1937425ef75a96d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3ZirrobdQ2PXX1mOSCz9Ra8wUG2HUg6sAMEtYN7fZuU.jpg?width=960&crop=smart&auto=webp&s=8dc6075ee59918061d9d46e7f1ae55cc5434d2f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3ZirrobdQ2PXX1mOSCz9Ra8wUG2HUg6sAMEtYN7fZuU.jpg?width=1080&crop=smart&auto=webp&s=86cbe6927cda5d90be9d0114dc8659ea1e487dcf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3ZirrobdQ2PXX1mOSCz9Ra8wUG2HUg6sAMEtYN7fZuU.jpg?auto=webp&s=b3bdae9a727dd34ce8d3d258ac44b0841fd84dbf', 'width': 1200}, 'variants': {}}]} |
Dedicated deployments are too expensive | 1 | [removed] | 2025-01-18T16:42:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i4bih3/dedicated_deployments_are_too_expensive/ | bigboyparpa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4bih3 | false | null | t3_1i4bih3 | /r/LocalLLaMA/comments/1i4bih3/dedicated_deployments_are_too_expensive/ | false | false | self | 1 | null |
Llama 3.2 1B Instruct – What Are the Best Use Cases for Small LLMs? | 102 | 2025-01-18T17:23:32 | ThetaCursed | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i4cfpz | false | null | t3_1i4cfpz | /r/LocalLLaMA/comments/1i4cfpz/llama_32_1b_instruct_what_are_the_best_use_cases/ | false | false | 102 | {'enabled': True, 'images': [{'id': '3Ovmc_t1ndlkNNWdm5GVT7ZhzUS8GecXhogzGb4DzLU', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/tr0h9qvkdsde1.png?width=108&crop=smart&auto=webp&s=1a8cd795a08382f58ee205640c650401fd5628af', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/tr0h9qvkdsde1.png?width=216&crop=smart&auto=webp&s=fcba9b4531a9ffc117960d3b16bf2a8671dfdc5e', 'width': 216}, {'height': 126, 'url': 'https://preview.redd.it/tr0h9qvkdsde1.png?width=320&crop=smart&auto=webp&s=237cce46c19ab8ed30310b8c79fcf688f233dcf2', 'width': 320}], 'source': {'height': 184, 'url': 'https://preview.redd.it/tr0h9qvkdsde1.png?auto=webp&s=86bb034646e0094c805f9334011e50714ab318db', 'width': 465}, 'variants': {}}]} |
|||
Why can't i find material on how to fine-tune a local llama? | 0 | I tried and tried, but every webpage i got just told me on "what" should i do, not "how", and youtube was **even worse**, they are always using .ipynb and google colab, running the stuff on the cloud. I have my goddamn llama, why'd i run the fine-tuning on the cloud, let alone export the result? Theres gotta be something i'm missing, either that or the documentation is scarse. Which imo it is because i hardly can find stuff like the documentation for the llama api, which i did [like this](https://github.com/MatthewLacerda2/Jarvis/blob/main/main.py). It was a bit difficult to find the fields i wanted to use, was a bit of trial-and-error | 2025-01-18T17:36:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i4cqm2/why_cant_i_find_material_on_how_to_finetune_a/ | Blender-Fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4cqm2 | false | null | t3_1i4cqm2 | /r/LocalLLaMA/comments/1i4cqm2/why_cant_i_find_material_on_how_to_finetune_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'u51BlV0Uk6FhiQJ0jRwa_qLu9MGLq3TottClNOmpAhg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=108&crop=smart&auto=webp&s=ede581637dcafdb3321f9ae45278a65102e9c242', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=216&crop=smart&auto=webp&s=fcb4a69bae1ef79ae135be0bd55ec9acdb11bbe0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=320&crop=smart&auto=webp&s=b3bfeb5a6aaa54465fc08f005282536bec803a95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=640&crop=smart&auto=webp&s=bea37c6251d1ea9f60deb05faf71b16b370baf3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=960&crop=smart&auto=webp&s=e87c08662db1316edfd0dd57005dceb0a0353b77', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=1080&crop=smart&auto=webp&s=3725f0effe3b9d7d270f49468d6320d56601b5af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?auto=webp&s=f479c5860afa64863ace123b7eb96175f7f2acf0', 'width': 1200}, 'variants': {}}]} |
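Since this complaint comes up a lot, here is a compressed sketch of what a fully local LoRA fine-tune looks like with transformers + peft, with no Colab involved. The model name, target modules and hyperparameters are placeholders rather than recommendations:

```python
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-3.2-1B"  # placeholder; any local causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Train small LoRA adapters instead of all weights so this fits on one local GPU.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

texts = ["### Question: ...\n### Answer: ..."]  # replace with your own examples
ds = Dataset.from_dict({"text": texts}).map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # adapters land on local disk, ready to load or merge
```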
What is the best model of in context learning? | 2 | Fine-tuning is expensive, is it possible to have a model with great ability of in context learning and large context window to avoid some kind of simple fine-tuning? | 2025-01-18T17:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i4cwro/what_is_the_best_model_of_in_context_learning/ | henryclw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4cwro | false | null | t3_1i4cwro | /r/LocalLLaMA/comments/1i4cwro/what_is_the_best_model_of_in_context_learning/ | false | false | self | 2 | null |
LLMs in Production book in print - seems like it has a little for everyone running LLMs locally or self hosting elsewhere. Finetuning, picking models, etc. | 2 | 2025-01-18T17:47:25 | https://www.manning.com/books/llms-in-production | jobe_br | manning.com | 1970-01-01T00:00:00 | 0 | {} | 1i4czbi | false | null | t3_1i4czbi | /r/LocalLLaMA/comments/1i4czbi/llms_in_production_book_in_print_seems_like_it/ | false | false | 2 | {'enabled': False, 'images': [{'id': '_tHqB7z7_nUfPl0ZumZOC83fi4Tr-Tl-J9zuPnba-So', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/88Tkp4phdjLSYRKL6QPbB1O0IMz-RVm8rf2O0JEluis.jpg?width=108&crop=smart&auto=webp&s=a453581fe559bac00480f740d7a902c9c979facc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/88Tkp4phdjLSYRKL6QPbB1O0IMz-RVm8rf2O0JEluis.jpg?width=216&crop=smart&auto=webp&s=fbc625350f771e79630c9a7ce234e3ce7c9950d4', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/88Tkp4phdjLSYRKL6QPbB1O0IMz-RVm8rf2O0JEluis.jpg?width=320&crop=smart&auto=webp&s=ba35ef6e0303bd2af2c61ea089c160b4995ae5e2', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/88Tkp4phdjLSYRKL6QPbB1O0IMz-RVm8rf2O0JEluis.jpg?width=640&crop=smart&auto=webp&s=3cbcd8c2d744060b617c36edb568205b930694f1', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/88Tkp4phdjLSYRKL6QPbB1O0IMz-RVm8rf2O0JEluis.jpg?width=960&crop=smart&auto=webp&s=f4f6543cab9adcd0283818d2d91d1c7b43cd046e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/88Tkp4phdjLSYRKL6QPbB1O0IMz-RVm8rf2O0JEluis.jpg?width=1080&crop=smart&auto=webp&s=85ae18bfb0dcc606fb1a2e11c2c734ead35b6dd7', 'width': 1080}], 'source': {'height': 641, 'url': 'https://external-preview.redd.it/88Tkp4phdjLSYRKL6QPbB1O0IMz-RVm8rf2O0JEluis.jpg?auto=webp&s=d7d1f30f35142ecd1ce4b4ec201730b17b3529bd', 'width': 1140}, 'variants': {}}]} |
||
Success!: Tesla p40+1080GTX_Cooler in a Dell T420 :) | 9 | First, the money shot:
https://preview.redd.it/gg6j4ea9isde1.png?width=1347&format=png&auto=webp&s=84d77db8a83b5481621d22deeedb3636a6efa674
And yes, I'm aware my PERCs are a bit close, I'm brainstorming on that. So the approach I took was following [advice from FullStackSensei](https://www.reddit.com/r/LocalLLaMA/comments/1hozg2h/comment/m4di1mw/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) I acquired a used [GTX1080 dell reference card ](https://www.youtube.com/watch?v=AM--NTHFBlI&lc=Ugzrysx0dPl-yrbO1yp4AaABAg.ADCmLC_52VxADCn-nMSdT-)with issues. As the only things I needed were the fan and cooler, I wasn't too worried about it being for parts. It took some minor modifications, to include using a dremel and an oscillating cutter:
https://preview.redd.it/8g0vc0h5jsde1.png?width=467&format=png&auto=webp&s=02dc9789435917b0cf7fde5b109aec0157ba5569
but as shown here, the temps are completely manageable, and the fan is barely blowing:
https://preview.redd.it/4e7mwcdjjsde1.png?width=941&format=png&auto=webp&s=a3d397688349dcfd222da0ae2722529e4d9be958
Parts you'll need:
Links omitted to make sure I'm following guidelines.
* GPU fan adapter cable (look for "PWM GPU fan adapter cable")
* Thermal pads of varying sizes
* PWM Fan Controller (I used the Coolerguys 12v PWM thermostat model)
Hope this helps anyone having troubles like I was with all the 3d printed fan shrouds and their concern for noise. | 2025-01-18T17:57:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i4d7f3/success_tesla_p401080gtx_cooler_in_a_dell_t420/ | s0n1cm0nk3y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4d7f3 | false | null | t3_1i4d7f3 | /r/LocalLLaMA/comments/1i4d7f3/success_tesla_p401080gtx_cooler_in_a_dell_t420/ | false | false | 9 | null |
|
Guide: Easiest way to run any vLLM model on AWS with autoscaling (scale down to 0) | 7 | A lot of our customers have been finding our guide for vLLM deployment on their own private cloud super helpful. vLLM is super helpful and straightforward and provides the highest token throughput when compared against frameworks like LoRAX, TGI etc.
Please let me know your thoughts on whether the guide is helpful and has a positive contribution to your understanding of model deployments in general.
Find the guide here:- [https://tensorfuse.io/docs/guides/llama\_guide](https://tensorfuse.io/docs/guides/llama_guide) | 2025-01-18T18:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i4dbg3/guide_easiest_way_to_run_any_vllm_model_on_aws/ | tempNull | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4dbg3 | false | null | t3_1i4dbg3 | /r/LocalLLaMA/comments/1i4dbg3/guide_easiest_way_to_run_any_vllm_model_on_aws/ | false | false | self | 7 | null |
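Not from the linked guide itself, but for context: the core of any such setup is vLLM's OpenAI-compatible server, which the autoscaling layer then wraps. A minimal local sketch, where the model name and flags are examples:

```python
# Shell side (example model; newer vLLM versions also accept `vllm serve ...`):
#   pip install vllm
#   python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.1-8B-Instruct
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM's default port
out = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(out.choices[0].message.content)
```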
The Best Animation Creator (Not Video Generator)? | 2 | Hello guys! Do you know any good AI animation creators? I mean, to work like this:
I’m drawing like starting frame, ending frame, and a few in between, and similar to interpolation (but plain interpolation won’t work here because no video is ready) it will create enough frames to make from make few drawings an animated sequence?
Open-source only! Thank you! | 2025-01-18T18:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i4dswh/the_best_animation_creator_not_video_generator/ | yukiarimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4dswh | false | null | t3_1i4dswh | /r/LocalLLaMA/comments/1i4dswh/the_best_animation_creator_not_video_generator/ | false | false | self | 2 | null |
Interesting article on how DeepSeek has improved the architecture in DeepSeek V2 and V3. | 145 | [epoch.ai](http://epoch.ai) has published an interesting article: [https://epoch.ai/gradient-updates/how-has-deepseek-improved-the-transformer-architecture](https://epoch.ai/gradient-updates/how-has-deepseek-improved-the-transformer-architecture)
It talks about MLA, MoE innovations and Multi-Token Prediction. | 2025-01-18T19:00:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i4em80/interesting_article_on_how_deepseek_has_improved/ | jpydych | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4em80 | false | null | t3_1i4em80 | /r/LocalLLaMA/comments/1i4em80/interesting_article_on_how_deepseek_has_improved/ | false | false | self | 145 | {'enabled': False, 'images': [{'id': 'bts7DpfGPCbbKfQQSMu2gSDhqmY0jAUjxrw2HdanA-E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IrPcROKgognLkNXzppShXaL5kKyx94V5lLQbr3R9g04.jpg?width=108&crop=smart&auto=webp&s=bf9b3a3e6d9545e22bea6513d0e0dfcf73759b45', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/IrPcROKgognLkNXzppShXaL5kKyx94V5lLQbr3R9g04.jpg?width=216&crop=smart&auto=webp&s=c47af1a9cb3e32ac42f5983ba6bb0cde21e87941', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/IrPcROKgognLkNXzppShXaL5kKyx94V5lLQbr3R9g04.jpg?width=320&crop=smart&auto=webp&s=507445a4608e8577099aab6bf490ce078f4347e4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/IrPcROKgognLkNXzppShXaL5kKyx94V5lLQbr3R9g04.jpg?width=640&crop=smart&auto=webp&s=8afe542e481833042f2c387df317e3ea02feac20', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/IrPcROKgognLkNXzppShXaL5kKyx94V5lLQbr3R9g04.jpg?width=960&crop=smart&auto=webp&s=386d31c0ac891f3edb47791ee19c6a0a6409a8f3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/IrPcROKgognLkNXzppShXaL5kKyx94V5lLQbr3R9g04.jpg?width=1080&crop=smart&auto=webp&s=54a71402faaf1e9075ab68bcc4bbf611048be334', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/IrPcROKgognLkNXzppShXaL5kKyx94V5lLQbr3R9g04.jpg?auto=webp&s=a0e3c4ac391e05b3145ef669dfcdcf876ff3e1c7', 'width': 1200}, 'variants': {}}]} |
Tutorial: Fine tuning models on your Mac with MLX - by an ex-Ollama developer | 1 | 2025-01-18T19:06:46 | https://www.youtube.com/watch?v=BCfCdTp-fdM | AngryBirdenator | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i4err2 | false | {'oembed': {'author_name': 'Matt Williams', 'author_url': 'https://www.youtube.com/@technovangelist', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/BCfCdTp-fdM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Is MLX the best Fine Tuning Framework?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/BCfCdTp-fdM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Is MLX the best Fine Tuning Framework?', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i4err2 | /r/LocalLLaMA/comments/1i4err2/tutorial_fine_tuning_models_on_your_mac_with_mlx/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'R36JWd44wfsYL58DdVZ-VzpJkDtyBLaty-1_iJP9m-U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?width=108&crop=smart&auto=webp&s=43ed23166e600da48d2ce24f595a29d5ca375bda', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?width=216&crop=smart&auto=webp&s=9db37e029966c57828c7115848105fdc59fc4918', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?width=320&crop=smart&auto=webp&s=861d36942f14b036e51a90401942ebfe4764ca48', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?auto=webp&s=184fec39bc4c8269db5fbda11322a28801259b8a', 'width': 480}, 'variants': {}}]} |
Recreate krea.ai realtime with flux / Stream diffusion | 4 | What's the best realtime text2image or text&image2image local experience I can get in January 2025?
[https://github.com/cumulo-autumn/StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion) ? something for flux with loras?
I just used the most wow tech since notebooklm, perplexity and chatgpt - i'm not affiliated, just did retrofuturism styled free trial and completely forgot about the real world. [https://www.krea.ai/apps/image/realtime](https://www.krea.ai/apps/image/realtime)
[https://x.com/krea\_ai](https://x.com/krea_ai)
with a remote instance like an H100 / A100 / A6000?
or local 4090 / 3070 / 2060? | 2025-01-18T19:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i4ffp0/recreate_kreaai_realtime_with_flux_stream/ | secopsml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4ffp0 | false | null | t3_1i4ffp0 | /r/LocalLLaMA/comments/1i4ffp0/recreate_kreaai_realtime_with_flux_stream/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '-ogWnjCE32e3oA3LrcVjC8gZusSP4og4xNPuABbGb5Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uuf0wfN6H65TS27Bq3UB2QX_vufZt83_bYP_VVQReL4.jpg?width=108&crop=smart&auto=webp&s=3807773d4a28be110e50f8c6db5955472d6cffdc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uuf0wfN6H65TS27Bq3UB2QX_vufZt83_bYP_VVQReL4.jpg?width=216&crop=smart&auto=webp&s=27020dbc12575b5d50a48eb10f09b08af3ab958f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uuf0wfN6H65TS27Bq3UB2QX_vufZt83_bYP_VVQReL4.jpg?width=320&crop=smart&auto=webp&s=9b9cbf180df663ee6386138874f1415751794f6f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uuf0wfN6H65TS27Bq3UB2QX_vufZt83_bYP_VVQReL4.jpg?width=640&crop=smart&auto=webp&s=d9faf295a4225f3daaf44c94b68d3192786961bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uuf0wfN6H65TS27Bq3UB2QX_vufZt83_bYP_VVQReL4.jpg?width=960&crop=smart&auto=webp&s=2f95e836a3bd9e70a7ba952c2f15259a467cab5e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uuf0wfN6H65TS27Bq3UB2QX_vufZt83_bYP_VVQReL4.jpg?width=1080&crop=smart&auto=webp&s=d980cb00c46f480d856fcd8736253b430cac1a6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uuf0wfN6H65TS27Bq3UB2QX_vufZt83_bYP_VVQReL4.jpg?auto=webp&s=b745afa979ce64ee81504cc986ff0fa0261b0f57', 'width': 1200}, 'variants': {}}]} |
4080 16gb and my old 3070 8gb | 52 | Decided to throw my old 3070 in and an old set of ddr4 to see what happens. Now up to 24 gb of vram and 64 gb of dram with a 12700kf. I was worried about my 750 watt psu but it’s pulling under 400 watts at load and I’ll set some limits just in case. Got 22 tok/sec on gwen 2.5 32b q4_0. I’ll try a 70b later. | 2025-01-18T19:45:36 | https://www.reddit.com/gallery/1i4fmvy | Glooves | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i4fmvy | false | null | t3_1i4fmvy | /r/LocalLLaMA/comments/1i4fmvy/4080_16gb_and_my_old_3070_8gb/ | false | false | 52 | null |
|
Nup with 16 TOPS | 1 | [removed] | 2025-01-18T19:49:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i4fpph/nup_with_16_tops/ | Old-Objective4230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4fpph | false | null | t3_1i4fpph | /r/LocalLLaMA/comments/1i4fpph/nup_with_16_tops/ | false | false | self | 1 | null |