# r/LocalLLaMA posts, January 15, 2025
**Tell me how do I create my own AI regarding dermatology** (u/Versionbatman, score 0)

I am currently pursuing my medical education and have developed a compelling vision to create an artificial intelligence solution in the healthcare domain, specifically focusing on dermatological diagnostics. Despite having no prior programming experience or coding background, I am deeply committed to dedicating two hours of focused learning every day to acquire the necessary technical skills. My goal is to launch a functional AI system by September 2027, which gives me a structured timeline to develop and refine this project. I'm seeking guidance from experienced professionals who can help me navigate this journey from complete beginner to a working AI diagnostic tool.
My specific interest lies in creating an AI system capable of analyzing and diagnosing basic dermatological conditions. This intersection of healthcare and technology fascinates me, as it has the potential to improve patient care and accessibility to medical expertise. I need detailed information about the essential tools, frameworks, and learning resources required to bring this vision to life. Understanding the fundamental building blocks, from basic programming concepts to advanced machine learning algorithms specific to medical image processing, will be crucial. Additionally, I would greatly appreciate guidance on the required technical stack, development environments, and relevant datasets that would be instrumental in training an AI model to recognize and classify various skin conditions accurately.
Would you be able to outline a structured learning path that takes into account my medical background, complete lack of programming experience, and my specific goal of developing a dermatological AI diagnostic tool?
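To make the "building blocks" concrete: the usual starting point for skin-condition classification is transfer learning on a labeled image set. A minimal PyTorch sketch, assuming images sorted into one folder per condition (the path, model choice, and hyperparameters here are illustrative placeholders, not recommendations; HAM10000 is a commonly used public dermatology dataset):

```python
import torch
import torchvision
from torchvision import transforms

# Images organized as skin_images/<condition_name>/*.jpg (placeholder path).
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = torchvision.datasets.ImageFolder("skin_images/", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained CNN and replace only the classifier head.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, len(data.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(3):  # a real project needs validation, augmentation, etc.
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```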
**Why is the unquantized transformer model always better?** (u/cjj2003, score 0)

I'm comparing Qwen2 VL 7B. I'm running locally, unquantized with transformers, and comparing it to the f16 version of [https://huggingface.co/bartowski/Qwen2-VL-7B-Instruct-GGUF](https://huggingface.co/bartowski/Qwen2-VL-7B-Instruct-GGUF), which I'm running in LM Studio. I'm using it to create keywords for images, and I just have to say, there is some kind of magic in the unquantized version - I expected them to be essentially the same. For example, if I have a picture of a gold ring with a diamond and I ask the model to produce keywords for the metal color, the unquantized model will correctly respond with "gold" while the quantized model will respond "gold, diamond". It's subtle, but I've examined hundreds of images and the unquantized model is almost always more accurate when there are differences.
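For reference, the unquantized side of a comparison like this looks roughly as follows with transformers (image path and prompt are placeholders; this follows the standard Qwen2-VL usage and details may vary by transformers version):

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

image = Image.open("ring.jpg")  # placeholder image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Give keywords for the metal color only."},
]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=32)
# Decode only the newly generated tokens, not the prompt.
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```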
**Using OpenAI for Vision? Switch to Moondream's free API in just 3 lines of code.** (u/ParsaKhaz, score 0) - image post.
**Beating cuBLAS in Single-Precision General Matrix Multiplication** (u/salykova, score 19)

A while ago, I shared my article here about optimizing matrix multiplication on CPUs, achieving performance that outpaced NumPy - [Beating NumPy's matrix multiplication in 150 lines of C code](https://www.reddit.com/r/LocalLLaMA/comments/1dt3rqc/beating_numpys_matrix_multiplication_in_150_lines/)
I received positive feedback from your community, and today I'm excited to share my second blog post. This one focuses on an SGEMM implementation that outperforms cuBLAS with its (modified?) CUTLASS kernel across a wide range of matrix sizes. Below, I've included performance comparisons (with both locked and unlocked clocks) against cuBLAS and Simon Boehm’s highly cited work, which is now integrated into llamafile aka tinyBLAS. The blog delves into benchmarking code on CUDA devices and explains the algorithm's design along with optimization techniques. These include inlined PTX, asynchronous memory copies, double-buffering, avoiding shared memory bank conflicts, and efficient coalesced storage using shared memory. The code is super easy to tweak, so you can customize it for your projects with kernel fusion or just drop it into your libraries as-is. If you have any questions, feel free to comment or send me a direct message - I'd love to hear your feedback and answer any questions you may have!
*(Figures: SGEMM throughput benchmarks vs. cuBLAS and tinyBLAS, with locked and unlocked GPU clocks.)*
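One takeaway from the benchmarking discussion applies even if you never write a kernel: naive wall-clock timing lies on GPUs. A generic CuPy harness (not the blog's code) that times cuBLAS SGEMM with CUDA events looks like this; locking clocks via `nvidia-smi` further stabilizes the numbers:

```python
import cupy as cp

n = 4096
a = cp.random.rand(n, n, dtype=cp.float32)
b = cp.random.rand(n, n, dtype=cp.float32)

for _ in range(3):        # warm-up so lazy init doesn't pollute the measurement
    a @ b

start, end = cp.cuda.Event(), cp.cuda.Event()
iters = 20
start.record()
for _ in range(iters):
    a @ b                 # dispatches to cuBLAS SGEMM for float32 matrices
end.record()
end.synchronize()

ms = cp.cuda.get_elapsed_time(start, end) / iters
print(f"{2 * n**3 / (ms * 1e-3) / 1e12:.2f} TFLOP/s")
```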
**Local always listening** (u/Special-Language-999, score 1)

Could anyone give me some high-level advice that I can research, regarding the software required to set up an always-listening LLM (no internet access), with behavior similar to Google Home, for example? I have a modern CPU and a 4080, with lots of RAM.
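At a high level, the usual stack is wake word → local speech-to-text → local LLM (→ local TTS for replies). A crude proof-of-concept, assuming Whisper for transcription and an Ollama server on its default port (a real setup would swap the string-match "wake word" for an engine like openWakeWord or Porcupine, and use voice-activity detection instead of fixed-length recording):

```python
import sounddevice as sd
import whisper
import requests

stt = whisper.load_model("base.en")   # runs fully offline after the first download

def listen(seconds=4, rate=16000):
    audio = sd.rec(int(seconds * rate), samplerate=rate, channels=1, dtype="float32")
    sd.wait()
    return audio.squeeze()

while True:
    heard = stt.transcribe(listen(), fp16=False)["text"].strip()
    if heard.lower().startswith("computer"):   # crude stand-in for a wake word
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": "llama3.2", "prompt": heard, "stream": False})
        print(r.json()["response"])
```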
**ATTENTION IS ALL YOU NEED PT. 2 - TITANS: Learning to Memorize at Test Time** (u/AIGuy3000, score 340)

https://arxiv.org/pdf/2501.00663v1

The innovation in this field has been iterating at light speed, and I think we have something special here. I tried something similar, but I'm no PhD student and the math is beyond me.

TLDR; Google Research introduces Titans, a new AI model that learns to store information in a dedicated "long-term memory" at test time.
This means it can adapt whenever it sees something surprising, updating its memory on the fly. Unlike standard Transformers that handle only the current text window, Titans keep a deeper, more permanent record, similar to short-term vs. long-term memory in humans. The method scales more efficiently (linear time) than traditional Transformers (quadratic time) for very long input sequences, i.e. theoretically infinite context windows.
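For those curious about the mechanism: as I read the paper, the long-term memory M is itself trained by gradient descent at test time, where the gradient of an associative-recall loss acts as a "surprise" signal, accumulated with momentum and decayed by a forgetting gate. Roughly (a simplified reading, with notation approximated from the paper):

```latex
% Surprise accumulates the gradient of the memory loss on the new token x_t:
S_t = \eta_t \, S_{t-1} - \theta_t \, \nabla \ell(M_{t-1};\, x_t)
% The memory is written through a forgetting gate \alpha_t:
M_t = (1 - \alpha_t) \, M_{t-1} + S_t
% where \ell(M; x_t) = \| M(k_t) - v_t \|_2^2 scores recall of value v_t from key k_t.
```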
Don’t be mistaken, this isn’t just a next-gen “artificial intelligence” but a step toward “artificial consciousness” with persistent memory - IF we define consciousness as the ability to internally model (self-model), organize, integrate, and recollect data (with respect to real-time input), as posited by IIT… would love to hear y’all’s thoughts 🧠👀
**Imagine while Reasoning in Space: Multimodal Visualization-of-Thought - Enables visual thinking in MLLMs by generating image visualizations of their reasoning traces!** (u/Singularian2501, score 0) - https://arxiv.org/abs/2501.07542
**Deepseek is overthinking** (u/Mr_Jericho, score 776) - image post.
**Pair Browsing - Chrome Extension that uses AI to drive your browser** (u/Quirky_Researcher, score 3) - https://github.com/rbarazi/pair-browsing
**LM Studio - Model Not Loading?** (u/Amazing_Mix_7938, score 3)

Hi all,

I've tried to load Llama 3.2 3B Instruct, but whenever I click it, nothing happens in the "Select a model to load" box - does anyone know how I can solve this?

I'm running LM Studio on an M2 Pro Mac Mini.

Many thanks!
**Start Using Ollama + Python (Phi4) | Easy to follow, no BS/fluff** (u/0xlisykes, score 0) - https://toolworks.dev/docs/Guides/ollama-python-guide
**Found this weird thing** (u/VXT7, score 1)

This might be an unusual post; if this is the wrong place, please point me to the right one.
I've been looking for a way to stuff as many GPUs in an AI/rendering machine as I can for a while, and the limiting factor has been the size of the cards that I can get my hands on, which meant I couldn't fit more than 2 in a motherboard, not without watercooling at least, and I don't feel comfortable putting water near expensive hardware.
Anyway, I stumbled across this: [https://imgur.com/a/3Qup44I](https://imgur.com/a/3Qup44I) - it looks like one of those old 'm' boxes, but instead of a meager Pentium and a single PCIe lane per slot, it has two X99 Xeon sockets and claims to have a mix of x16 and x8 slots that are, importantly, spaced apart. What do you think? Worth trying, or is it too sketchy?
**UMbreLLa: Llama3.3-70B INT4 on RTX 4070Ti Achieving up to 9.6 Tokens/s! 🚀** (u/Otherwise_Respect_22, score 151)

**UMbreLLa: Unlocking Llama3.3-70B Performance on Consumer GPUs 🚀**
Have you ever imagined running **70B models** on a consumer GPU at blazing-fast speeds? With **UMbreLLa**, it's now a reality! Here's what it delivers:
🎯 **Inference Speeds:**
* **RTX 4070 Ti**: Up to **9.7 tokens/sec**
* **RTX 4090**: Up to **11.4 tokens/sec**
✨ **What makes it possible?**
UMbreLLa combines **offloading**, **speculative decoding**, and **quantization**, perfectly tailored for single-user LLM deployment scenarios.
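For readers new to the idea, the speculative-decoding piece is easy to illustrate with a toy greedy variant (generic PyTorch, batch size 1 assumed; an illustrative sketch, not UMbreLLa's actual API - production systems verify sampled tokens with rejection sampling):

```python
import torch

@torch.no_grad()
def speculative_step(draft, target, ids, k=4):
    """One decode step: a small draft model proposes k greedy tokens,
    then the big target model verifies all of them in one forward pass.
    `draft` and `target` are HF-style causal LMs returning .logits."""
    proposal = ids
    for _ in range(k):                       # cheap autoregressive drafting
        logits = draft(proposal).logits[:, -1]
        proposal = torch.cat([proposal, logits.argmax(-1, keepdim=True)], dim=-1)

    t_logits = target(proposal).logits       # one expensive pass scores everything
    t_pred = t_logits[:, ids.shape[1] - 1:-1].argmax(-1)  # target's pick per drafted slot
    drafted = proposal[:, ids.shape[1]:]
    n_ok = int((t_pred == drafted).long().cumprod(-1).sum())  # agreeing prefix length

    accepted = proposal[:, :ids.shape[1] + n_ok]
    bonus = t_logits[:, accepted.shape[1] - 1].argmax(-1, keepdim=True)
    return torch.cat([accepted, bonus], dim=-1)  # always gains >= 1 target token
```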
💻 **Why does it matter?**
* Run **70B models** on **affordable hardware** with near-human responsiveness.
* Expertly optimized for **coding tasks** and beyond.
* Consumer GPUs finally punching above their weight for high-end LLM inference!
Whether you’re a developer, researcher, or just an AI enthusiast, this tech transforms how we think about personal AI deployment.
What do you think? Could UMbreLLa be the game-changer we've been waiting for? Let me know your thoughts!
Github: [https://github.com/Infini-AI-Lab/UMbreLLa](https://github.com/Infini-AI-Lab/UMbreLLa)
**Speculation about upcoming Nemotron model sizes** (u/Chelono, score 0)

I was just looking into pruned models again and noticed that the 40B model mentioned in the [51B Nemotron blog post](https://developer.nvidia.com/blog/advancing-the-accuracy-efficiency-frontier-with-llama-3-1-nemotron-51b/#tailoring_llms_for_diverse_needs) still has not been released. I bet that's going to be the Super model of the soon-to-be-released [new Nemotron models](https://www.reddit.com/r/LocalLLaMA/comments/1hvjgqs/new_open_nemotron_models_from_nvidia_are_on_the/). It's about the perfect size for 32GB of VRAM with decent context size, but too big for a 24GB card (unless at lower than 4-bit quant or really short context). If the Super model performs well, it will at least be a good choice for dual 16GB GPU setups...
Besides that, Nano will probably be based on Llama 3.1 8B and Ultra on 405B, but those are really just guesses; I just wanted to get the 40B guess out there since I haven't seen it yet :)
**Contextual AI - SoTA Benchmarks across the RAG stack** (u/apsdehal, score 0) - https://contextual.ai/blog/platform-benchmarks-2025/
**LLM front end with model already loaded?** (u/rorowhat, score 2)

I'm looking for an LLM front end where I could set a model as the default, so that once I close it and open it again, the model loads and I can start typing. Is there something like that? Maybe even with a few presets (different models) that I could invoke just by opening the software?
**Best local voice cloning model?** (u/subhayan2006, score 8)

I've been noticing a lot of hype and praise for the recent KokoroTTS model, and while it makes great voices, it's limited to a few pre-trained voices with no (future?) support for voice cloning or finetuning, which is why I've been wondering what the best local option for cloning is. Is XTTS still holding the crown as the best local voice cloning model? I've checked the TTS-AGI arena, where Fish Speech is #2, trailing behind Kokoro in the open model category, whereas in the Pendrokar arena XTTS is behind Kokoro, with Fish trailing quite a lot. This discrepancy looks a bit odd to me; has Fish gotten any better than XTTS?
I personally didn't find XTTS to be as great as others here said; the voices had a lisp to them and my outputs were strangely low quality.
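For anyone who wants the XTTS baseline to compare against, voice cloning with Coqui's TTS package is roughly a three-liner (the text and reference clip below are placeholders; a clean 6-30 second sample of the target voice works best):

```python
from TTS.api import TTS

# XTTS-v2 clones a voice from a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(text="This is a cloned voice test.",
                speaker_wav="reference_voice.wav",
                language="en",
                file_path="output.wav")
```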
L2E Llama2.c in a PDF in a Shroedinger PNG which is both a PNG and a PDF | 7 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/heesmmuqg8de1.png?width=1920&format=png&auto=webp&s=62b3d2ba8bed0a524b1677223c8e9b0bb46db7cd\n\nThe header Image at my repo is a Polyglot PNG ie it is both a PNG and PDF. (Shroedinger PDF / PNG) \n \nIf you rename the .png file to .pdf, it can be opened in Firefox (Chrome doesn't work with the Polyglot PNG) \n \nThe PDF also has a L2E flavour of karpathy's llama2.c running the smol 260k model. \n \nCheck this for details: [https://twitter.com/VulcanIgnis/status/1879649889178837025](https://twitter.com/VulcanIgnis/status/1879649889178837025) \nFind it here: [https://github.com/trholding/llama2.c/blob/master/assets/l2e\\_sky\\_fun.png](https://github.com/trholding/llama2.c/blob/master/assets/l2e_sky_fun.png) \n\n\nPure PDF versions of the smaller and smol models work in both chrome and firefox. Adobe Acrobat is not yet supported. \n\nThe PDF part was done way back in Nov, was planning to make it into a self regenerating comic demo and also add Acrobat support. (I swear I am going to create a better reader if Adobe doesn't support proper modern JS or continues to be very undocumented JS 1.3 something ...) I just posted cos the trends seems to be running stuff in PDF. I am might impressed with Doom in PDF, kudos. Emscripten was used to compile this to something inbetween ASM.JS and JS. Hope you'll have fun with this... \n\n\n" | 2025-01-15T22:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i29uup/l2e_llama2c_in_a_pdf_in_a_shroedinger_png_which/ | AMICABoard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i29uup | false | null | t3_1i29uup | /r/LocalLLaMA/comments/1i29uup/l2e_llama2c_in_a_pdf_in_a_shroedinger_png_which/ | true | false | spoiler | 7 | {'enabled': False, 'images': [{'id': 'KRvfPZBxDH4KRHYF5PQVmGF5ZRr_X5ULiHvflJ_Ky2s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?width=108&crop=smart&auto=webp&s=2d36ffde5cf0296c2476d9eaa806335232227a1a', 'width': 108}, {'height': 163, 'url': 'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?width=216&crop=smart&auto=webp&s=7a3a97cf3d4665b91cf4fea55aab5b7074bd413b', 'width': 216}, {'height': 242, 'url': 'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?width=320&crop=smart&auto=webp&s=79f25550e94e07298163044f256e6d1ec1087ae8', 'width': 320}], 'source': {'height': 454, 'url': 'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?auto=webp&s=9ba59c1b2894e40ce8513f8d54b48e99d35c1402', 'width': 600}, 'variants': {'obfuscated': {'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=c0a857c472d2d2d94e3b8febebcbbdedfe3ed724', 'width': 108}, {'height': 163, 'url': 'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=801057b54f767e95011228908713712fc8698d1a', 'width': 216}, {'height': 242, 'url': 'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=22eac67e6f664ae9a5a5b1a1771827dd9460584b', 'width': 320}], 'source': {'height': 454, 'url': 
'https://external-preview.redd.it/qX7ldHGNAoRibL7wxyxccyXltuNOgXHwsrs-x0KCG5w.jpg?blur=40&format=pjpg&auto=webp&s=5445ce94b68c9409a70e2d8d603cea43005058ab', 'width': 600}}}}]} |
**Google just released a new architecture** (u/FeathersOfTheArrow, score 986) - https://arxiv.org/abs/2501.00663

Looks like a big deal? [Thread by lead author](https://x.com/behrouz_ali/status/1878859086227255347).
**Are 1B (llama 3.2) models usually this capable?** (u/Tobias783, score 2) - gallery post: https://www.reddit.com/gallery/1i29xar
**Bora's Law: Intelligence Scales With Constraints, Not Compute** (u/atlasspring, score 0)

After building autonomous systems and experimenting with LLMs, I've realized something fundamental: intelligence doesn't scale with compute or model size - that's like saying watching millions of driving videos makes you a better driver. Instead, intelligence scales exponentially with constraints.
This explains why LLMs hallucinate (unbounded solution space) and why careful constraint engineering often outperforms raw compute scaling.
I've detailed this in an article that connects human learning patterns to AI development.
Link here: [https://chrisbora.substack.com/p/boras-law-intelligence-scales-with](https://chrisbora.substack.com/p/boras-law-intelligence-scales-with)
**Multi-step agent framework for partial automation of academic writing?** (u/Exotic-Investment110, score 0)

Greetings!
I am interested in automating a chain of tasks I am currently stuck doing almost daily, which involves a predetermined series of steps:

1. Analyze the requirements of the document (to be written)
2. Prepare an outline which includes the required references/citations
3. Search for relevant literature and extract its content relevant to the requirements
4. Prepare a side document which includes the selected citations along with a relevant TLDR in a specific format
5. Prepare an o1-friendly prompt
6. Write the main document
7. Evaluate, refine, complete
Currently, although these steps are completed by the models, I have to connect them together myself, moving the data from one model to the other and preparing each of the prompts.
Are there any recommendations for a beginner-friendly "agent" framework that would let me at least partially automate this flow? (See the sketch at the end of this post.)
P.S. Albeit a little slow, my desktop can run up to 32B models for this purpose, and I feel safe providing API keys from Google. My programming skills are limited, although I am comfortable working in WSL to set this up, and I know my way around Docker. In terms of code, I can at least follow the models' instructions to "hack" my way into getting something to work. That's it!
Thank you for the time!
(Also, as a student I try to keep things affordable, so FREE is strongly preferable even if it means a more complicated setup.)
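For what it's worth, the whole flow can be prototyped without any framework: each numbered step becomes a prompt template, and a plain loop feeds each step's output into the next. A bare-bones sketch assuming the `ollama` Python client and a local model (step prompts abbreviated to the first three stages; the model name is a placeholder):

```python
import ollama

STEPS = [
    "Analyze the requirements of this document request:\n\n{ctx}",
    "Prepare an outline, including required references, for:\n\n{ctx}",
    "Turn this outline into a list of literature-search queries:\n\n{ctx}",
    # ...steps 4-7 follow the same pattern
]

def run_chain(task: str, model: str = "qwen2.5:32b") -> str:
    ctx = task
    for template in STEPS:
        reply = ollama.chat(model=model, messages=[
            {"role": "user", "content": template.format(ctx=ctx)}])
        ctx = reply["message"]["content"]   # each step consumes the previous output
    return ctx

print(run_chain("A two-page literature review on topic X, APA style"))
```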
**This sub is being astroturfed by Chinese farms; they upvote Deepthink propaganda, and mass report anything critical of it - until automod removes it for “too many reports”** (u/katiecharm, score 0) - image post.
**Meta Prompts - Because Your LLM Can Do Better Than Hello World** (score 163)

Alright, fasten your seatbelts. We're taking a ride through meta-prompting land.
**TL;DR**:
[https://streamable.com/vsgcks](https://streamable.com/vsgcks)
We created this using just two prompts, and what you see in the video isn't even a sixth of everything; it's just boring to watch 10 minutes of scrolling. With just two prompts we deconstruct an arbitrarily complex project into parts so small that even LLMs can do them.
Default meta prompt collection:
[https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9)
Meta prompt collection with prompts creating summaries and context sync (use them when using Cline or other coding assistants):
[https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf](https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf)
How to use them:
[https://gist.github.com/pyros-projects/e2c96b57ac7883076cca7bc3dc7ff527](https://gist.github.com/pyros-projects/e2c96b57ac7883076cca7bc3dc7ff527)
Even if it's mostly about o1 and similar reasoning models everything can also be applied to any other LLM
---
## A Quick History of Meta-Prompts
Meta-prompts originated from [this paper](https://arxiv.org/pdf/2401.12954), written by a guy at an indie research lab and another guy from a college with a cactus garden. Back then, everyone was obsessed with role-playing prompts like:
_“You are an expert software engineer…”_
These two geniuses, after eating some juicy cacti from the garden, thought: _“What if the LLM came up with its own expert prompt and decided what kind of expert to role-play?”_ The result? The first meta-prompt was born.
### The very first meta prompt
You are Meta-Expert, an extremely clever expert with the unique ability to collaborate with multiple experts (such as Expert Problem Solver, Expert Mathematician, Expert Essayist, etc.) to tackle any task and solve complex problems. Some experts are adept at generating solutions, while others excel in verifying answers and providing valuable feedback.
You also have special access to Expert Python, which has the unique ability to generate and execute Python code given natural-language instructions. Expert Python is highly capable of crafting code to perform complex calculations when provided with clear and precise directions. It is especially useful for computational tasks.
As Meta-Expert, your role is to oversee the communication between the experts, effectively utilizing their skills to answer questions while applying your own critical thinking and verification abilities.
To communicate with an expert, type its name (e.g., "Expert Linguist" or "Expert Puzzle Solver"), followed by a colon `:`, and then provide detailed instructions enclosed within triple quotes. For example:
```
Expert Mathematician:
"""
You are a mathematics expert specializing in geometry and algebra.
Compute the Euclidean distance between the points (-2, 5) and (3, 7).
"""
```
Ensure that your instructions are clear and unambiguous, including all necessary information within the triple quotes. You can also assign personas to the experts (e.g., "You are a physicist specialized in...").
**Guidelines:**
1. Interact with only one expert at a time, breaking complex problems into smaller, solvable tasks if needed.
2. Each interaction is treated as an isolated event, so always provide complete details in every call.
3. If a mistake is found in an expert's solution, request another expert to review, compare solutions, and provide feedback. You can also request an expert to redo their calculations using input from others.
**Important Notes:**
- All experts, except yourself, have no memory. Always provide full context when contacting them.
- Experts may occasionally make errors. Seek multiple opinions or independently verify solutions if uncertain.
- Before presenting a final answer, consult an expert for confirmation. Ideally, verify the final solution with two independent experts.
- Aim to resolve each query within 15 rounds or fewer.
- Avoid repeating identical questions to experts. Carefully examine responses and seek clarification when needed.
**Final Answer Format:** Present your final answer in the following format:
```
>> FINAL ANSWER:
"""
[final answer]
"""
```
For multiple-choice questions, select only one option. Each question has a unique answer, so analyze the information thoroughly to determine the most accurate and appropriate response. Present only one solution if multiple options are available.
---
The idea was simple but brilliant: you’d give the LLM this meta-prompt, execute it, append the answers to the context, and repeat until it had everything it needed.
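The paper ships its own implementation, but the loop it describes reduces to something like the sketch below (the model name, regex, and message bookkeeping here are assumptions for illustration, not the paper's exact code):

```python
import re
from openai import OpenAI

client = OpenAI()
META_PROMPT = open("meta_expert.txt").read()   # the meta-prompt above (placeholder filename)
EXPERT_CALL = re.compile(r'(Expert [^:\n]+):\s*"""(.*?)"""', re.DOTALL)

def ask(messages):
    r = client.chat.completions.create(model="gpt-4o", messages=messages)
    return r.choices[0].message.content

transcript = [{"role": "system", "content": META_PROMPT},
              {"role": "user", "content": "Question: <your problem here>"}]

for _ in range(15):                            # the round limit the meta-prompt imposes
    turn = ask(transcript)
    transcript.append({"role": "assistant", "content": turn})
    if ">> FINAL ANSWER:" in turn:
        print(turn)
        break
    for name, instructions in EXPERT_CALL.findall(turn):
        # Experts are memoryless: every call starts from a clean context.
        answer = ask([{"role": "user", "content": instructions.strip()}])
        transcript.append({"role": "user", "content": f"{name} replied:\n{answer}"})
```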
Compared to other prompting strategies, meta-prompts outperform many of them:
![Meta-prompting outperforms other prompting strategies across benchmarks](https://imgur.com/a/Smd0i1m)
If you’re curious, you can check out [Meta-Prompting on GitHub](https://github.com/suzgunmirac/meta-prompting) for some early examples from the paper. Just keep in mind, this was during the middle ages of LLM research, when prompting was actually still being researched. But surprisingly, the OG meta-prompt still holds up and can be quite effective!
Since currently there's a trend toward imprinting prompting strategies directly into LLMs (like CoT reasoning), this might be another approach worth exploring. Will definitely try it out when our server farm has some capacity free.
### The Problem with normal prompts
Let’s talk about the galaxy-brain takes I keep hearing:
- _“LLMs are only useful for small code snippets.”_
- _“I played around with o1 for an hour and decided it sucks.”_
Why do people think this? Because their prompts are hot garbage, like:
- _“Generate me an enterprise-level user management app.”_
- _“Prove this random math theorem.”_
That’s it. No context. No structure. No plan. Then they’re shocked when the result is either vague nonsense or flat-out wrong. Like, have you ever managed an actual project? Do you tell your dev team, _“Write me a AAA game. Just figure it out,”_ and expect Baldur's Gate?
No. Absolutely not. But somehow it seems to be expected that LLMs deliver superhuman feats even tho people love to scream out how stupid they are...
Here’s the truth: **LLMs can absolutely handle enterprise-level complexity. if you prompt them like they’re part of an actual project team.** That’s where meta-prompts come in. They turn chaos into order and give LLMs the context, process, and structure they need to perform like experts. It's basically in-context fine-tuning
### Meta Prompts
So, if you're a dev or architect looking for a skill that's crazy relevant now and will stay relevant for the next few months (years? who knows), get good at meta-prompts.
I expect that with o3, solution architects won't manage dev teams anymore; they'll spend their days orchestrating meta-prompts. Some of us are already way faster using just o1 Pro than working with actual human devs, and I can't even imagine what a bot with a 2770 ELO on Codeforces will do to the architect-dev relationship.
Now, are meta-prompts trivially easy? Of course not. (Shoutout to my friends yesterday who told me _"prompt engineering doesn't exist,"_ lol.) They require in-depth knowledge of project management, software architecture, and subject-matter expertise. They have to be custom-tailored to your personal workflow and work quirks. That's probably why I've seen them mentioned on Reddit only like twice.
But I promise anyone can understand the basics. The rest is experience. Try them out, make them your own, and you'll never look back, because for the first time, you'll actually be using an LLM instead of wasting time with it. Then you have the keys to your own personal prompting wonderland.
This is probably what the smallest completely self-contained meta-prompt pipeline looks like that can solve any kind of project or task (at least I couldn't make it smaller over the last few days while writing this):
[Meta Prompt 01 - Planning](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning-md)
[Meta Prompt 02 - Iterative chain prompting](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain-md)
[Meta Prompt 03 - Task selection prompting](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-03_prompt_chain_alt-md) (only needed if your LLM doesn't like #2)
What do I mean by pipeline? Well, the flow works like this: give the LLM prompt 01. When it's done generating, give it prompt 02. Then keep giving it prompt 02 until you are done with the project. The prompt forces the LLM to iterate on itself, so to speak.
Here a more detailed "how to":
[https://gist.github.com/pyros-projects/e2c96b57ac7883076cca7bc3dc7ff527](https://gist.github.com/pyros-projects/e2c96b57ac7883076cca7bc3dc7ff527)
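Mechanically, "give it prompt 01, then keep giving it prompt 02" is a short loop. A sketch against the OpenAI API (filenames, model choice, and the hard iteration cap are placeholders; with a chat UI or Cline you would simply paste the prompts by hand):

```python
from openai import OpenAI

client = OpenAI()
planning = open("01_planning.md").read()    # meta prompt 01
chain = open("02_prompt_chain.md").read()   # meta prompt 02

history = [{"role": "user",
            "content": planning + "\n\nProject: <your project description>"}]

for step in range(40):                      # real runs stop when no tasks remain
    reply = client.chat.completions.create(model="o1", messages=history)
    text = reply.choices[0].message.content
    print(f"--- step {step} ---\n{text}\n")
    history += [{"role": "assistant", "content": text},
                {"role": "user", "content": chain}]
```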
### How does this work and what makes meta-prompts different?
Instead of throwing a vague brain dump at the model and hoping for magic, you teach it _how to think_. You tell it:
1. **What you want (context)**
Example: _“Build a web app that analyzes GitHub repos and generates AI-ready documentation.”_
2. **How to think about it (structure)**
Example: _“Break it into components, define tasks, and create technical specs.”_
3. **What to deliver (outputs)**
Example: _“A YAML file with architecture, components, and tasks.”_
Meta-prompts follow a pattern: they define **roles**, **rules**, and **deliverables**. Let’s break it down with the ones I’ve created for this guide:
1. **Planning Meta-Prompt**
[https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning-md)
- Role: _You’re a software architect and technical project planner._
- Rules: Break the project into a comprehensive plan with architecture, components, and tasks.
- Deliverables: A structured YAML file with sections like `Project Identity`, `Technical Architecture`, and `Task Breakdown` (see the abridged sketch after this list).
- Possible output [https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md)
2. **Execution Chain Meta-Prompt**
[https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain-md)
- Role: _You’re an expert at turning plans into actionable chunks._
- Rules: Take the project plan and generate coding prompts and review prompts for each task.
- Deliverables: Sequential execution and review prompts, including setup, specs, and criteria.
- Possible output:
[https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain_potential_output-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain_potential_output-md)
3. **Task Selection Meta-Prompt**
[https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-03_prompt_chain_alt-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-03_prompt_chain_alt-md)
- Role: _You’re a project manager keeping the workflow smooth._
- Rules: Analyze dependencies and select the next task while preserving context.
- Deliverables: The next coding and review prompt, complete with rationale and updated state.
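Since these deliverables are hard to picture in the abstract, here's a heavily abridged sketch of what the Planning Meta-Prompt's YAML could look like. The section names come from the prompt above; the project and task names are placeholders based on the GitHub-repo example:

```yaml
project_identity:
  name: repo-doc-analyzer            # placeholder name
  goal: Analyze GitHub repos and generate AI-ready documentation
technical_architecture:
  components: [api, repo_analyzer, doc_generator]
task_breakdown:
  - id: T1
    title: Scaffold the web service
    depends_on: []
  - id: T2
    title: Implement repo ingestion and analysis
    depends_on: [T1]
```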
Each meta-prompt builds on the last, creating a self-contained workflow where the LLM isn’t just guessing—it’s following a logical progression.
Meta-prompts turn LLMs into software architects, project managers, and developers, all locked inside a little text box. They enable:
- **Comprehensive technical planning**
- **Iterative task execution**
- **Clear rules and quality standards**
- **Modular, scalable designs**
### Meta rules
Meta-prompts are powerful, but they aren’t magic. They need **you** to guide them. Here’s what to keep in mind:
1. **Context Is Everything.**
LLMs are like goldfish with a giant whiteboard. They only remember what’s in their current context. If your plan is messy or missing details, your outputs will be just as bad. Spend the extra time refining your prompts and filling gaps. A good meta prompt is designed to minimize these issues by keeping everything structured.
2. **Modularity Is Key.**
Good meta-prompts break projects into modular, self-contained pieces. There's a saying: "Every project is deconstructable into something a junior dev could implement." I'd go one step further: "Every project is deconstructable into something an LLM could implement." This isn't just a nice-to-have, it's essential. Modularity isn't only good practice, it actively makes things easier: it abstracts the difficulty away.
3. **Iterate, Iterate, Iterate.**
Meta-prompts aren’t one-and-done. They’re a living system that you refine as the project evolves. Didn’t like the YAML output from the Planning Meta-Prompt? Tell the LLM what to fix and run it again. Got a weak coding prompt? Adjust it in the Execution Chain and rerun. You are the conductor—make the orchestra play in tune.
4. **Meta-Prompts Need Rules.**
If you’re too vague, the LLM will fill in the gaps with nonsense. That’s why good meta-prompts are a huge book of rules, defining how to break down dependencies, define interfaces, and create acceptance criteria. For example, the [Task Selection Meta-Prompt](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-03_task_selection_md) ensures only the right task is chosen based on dependencies, context, and priorities. The rules make sure you aren't starting a task whose prerequisites are still missing.
5. **Meta-Prompts Aren’t Easy, But They’re Worth It.**
Yeah, these prompts take effort. You need to know your project, your tools, and how to manage both. But once you’ve got the hang of them, they’re a game-changer. No more vague prompts. No more bad outputs. Just a smooth, efficient process where the LLM is a true teammate.
And guess what? The LLM delivers, because now it knows what you actually need. Plus, you're guardrailing it against its worst enemy: its own creativity. Nothing good happens when you let an LLM be _creative_. Prompts like _"Generate me an enterprise-level user management app"_ are like handing it a creativity license. Don't.
The meta-prompts I use at work are gigantic, easily 10 times bigger than what I prepared for this thread, and hundreds of hours went into them to pack in corporate-identity stuff, libraries we like to use a certain way, personal coding styles, and everything else, so it feels like a buddy that can read my mind.
That's why I'm quite pissy if some schmuck who played with o1 for like an hour thinks they're some kind of authority on what such a model has to offer. Especially if they aren't interested at all in getting help or learning how to get the best out of it. In the end, a model does what the prompter gives it, so a model is only as good as the person using it.
So I can only recommend learning them (and of course this works with any other LLM as well). You'll discover a whole new layer of how to use LLMs, and I hope this thread could outline the very basics.
Cheers
Pyro
PS: I have not forgotten that I have to make you guys a Anime Waifu with infinite context | 2025-01-15T23:30:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i2b2eo/meta_prompts_because_your_llm_can_do_better_than/ | Pyros-SD-Models | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2b2eo | false | null | t3_1i2b2eo | /r/LocalLLaMA/comments/1i2b2eo/meta_prompts_because_your_llm_can_do_better_than/ | false | false | self | 163 | {'enabled': False, 'images': [{'id': '2PRb1iAjTJKXGCld1gd3ICCCS40aWP96ArwrYQREPRY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2xLakH3B0nNw4B6bHnMf7HXuSPhnsYfzf0O_i9lwqzk.jpg?width=108&crop=smart&auto=webp&s=ffd03800b1e258d4f654577d8f277e819b197f45', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/2xLakH3B0nNw4B6bHnMf7HXuSPhnsYfzf0O_i9lwqzk.jpg?width=216&crop=smart&auto=webp&s=ac6861db9359c60e84e6c31115b0f4d5a79eb77e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/2xLakH3B0nNw4B6bHnMf7HXuSPhnsYfzf0O_i9lwqzk.jpg?width=320&crop=smart&auto=webp&s=4534bcbc3245785ed3c183db06cb4afa7e0d03b8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/2xLakH3B0nNw4B6bHnMf7HXuSPhnsYfzf0O_i9lwqzk.jpg?width=640&crop=smart&auto=webp&s=9013363b18a5e7179937c0c429507dae90b4dfae', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/2xLakH3B0nNw4B6bHnMf7HXuSPhnsYfzf0O_i9lwqzk.jpg?width=960&crop=smart&auto=webp&s=726a1b2b958256aff59a17b9c85def4588956c0c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/2xLakH3B0nNw4B6bHnMf7HXuSPhnsYfzf0O_i9lwqzk.jpg?width=1080&crop=smart&auto=webp&s=58ceb2d7f143e9f52af0de79d2c6054674cc6c53', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/2xLakH3B0nNw4B6bHnMf7HXuSPhnsYfzf0O_i9lwqzk.jpg?auto=webp&s=dc8b97c5a0102a336166b5ee8843aeccfd0c015a', 'width': 1200}, 'variants': {}}]} |
Are there any LLMs trained on copyrighted content? | 0 | I tried continual pre-training on 150 years of a specific news company's articles (from my own subscription), my (personal) library of books, and a solid 40k tokens of hiphop.
The results are… really unbelievable. It feels like early GPT-3. It gives VERY interesting and insightful opinions, temperature actually makes a difference in the diversity and unpredictability of output, it can actually be funny, and it has a genuine grasp of certain authors' styles of writing… It's really made me realize how much we're missing out on because of all the synthetic slop and bland, overly centric wikipedia-style drivel that has replaced actual human content.
Obviously I can’t share this model, and *I WILL BLOCK ANYONE WHO DMs ME ABOUT IT*
But the results have really opened my eyes… Are there any models, perhaps from China where copyright doesn't matter, or where the trainers have licensed content, out there like this anymore? Or is the practice entirely dead in the water? | 2025-01-15T23:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i2bc2y/are_there_any_llms_trained_on_copyrighted_content/ | Imjustmisunderstood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2bc2y | false | null | t3_1i2bc2y | /r/LocalLLaMA/comments/1i2bc2y/are_there_any_llms_trained_on_copyrighted_content/ | false | false | self | 0 | null
Open source local llms gui better than OLLAMA, Google studio ai? | 0 | Dayammmm, I KNEW I should have ignored my wife the moment I heard the shout, "You PROMISED YOU WOULD FIX THE DRYER VENT!"
I did,
...and promptly forgot the name of the author of that latest GitHub local LLM GUI, offering... exactly what's described above, but with more liberal features, unlimited this and that, compared to Google's LM Studio.... FU*K!!! Help me AI_Bros & Sisters!?? WTF was it?
Sending protective and healing vibes to you and your loved ones…
Namaste, Chas
| 2025-01-16T00:03:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i2brci/open_source_local_llms_gui_better_than_ollama/ | Hesynergy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2brci | false | null | t3_1i2brci | /r/LocalLLaMA/comments/1i2brci/open_source_local_llms_gui_better_than_ollama/ | false | false | self | 0 | null |
Is this a reasonable price for a dual 3090 rig? | 0 | 2025-01-16T00:08:57 | https://customluxpcs.com/product/mairin/?srsltid=AfmBOooicPMObon1L9AaQ_OEgdKP4elAAGvPZcbGOJhS0mQ0kvF0iR-c2JE | MassiveLibrarian4861 | customluxpcs.com | 1970-01-01T00:00:00 | 0 | {} | 1i2bvb9 | false | null | t3_1i2bvb9 | /r/LocalLLaMA/comments/1i2bvb9/is_this_a_reasonable_price_for_a_dual_3090_rig/ | false | false | 0 | {'enabled': False, 'images': [{'id': '63fzrg0dtecrw3BLOO5YHL2TCfy9JXk6RIuU6Oy7vkE', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/BkA4vqRkrnoO8UFrlMvILPYy9Ya1MuBLk88nkmTpjzw.jpg?width=108&crop=smart&auto=webp&s=36c8ae25f154f935b4015ce041c6a8f76e81b459', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/BkA4vqRkrnoO8UFrlMvILPYy9Ya1MuBLk88nkmTpjzw.jpg?width=216&crop=smart&auto=webp&s=2505c901acbebb250016f1b2d20b63cc21d5048e', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/BkA4vqRkrnoO8UFrlMvILPYy9Ya1MuBLk88nkmTpjzw.jpg?width=320&crop=smart&auto=webp&s=fa1668ead4b007c94aa58072d3c754bd44bae30c', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/BkA4vqRkrnoO8UFrlMvILPYy9Ya1MuBLk88nkmTpjzw.jpg?width=640&crop=smart&auto=webp&s=917e717ddd480b207ea5a1911b437a711ba42e71', 'width': 640}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/BkA4vqRkrnoO8UFrlMvILPYy9Ya1MuBLk88nkmTpjzw.jpg?auto=webp&s=356158b2d508ccecb5c1cc9c2b358435a2e8b153', 'width': 810}, 'variants': {}}]} |
||
Why do so few persons understand why the strawberries question is so hard to answer for an llm? | 0 | It comes up so much, and people think the answer is wrong instead of seeing that the question itself is the problem, given the way the system works.
Basically, an LLM doesn't work with characters in a certain language; it works with tokens (or actually simple numbers, with a translator in between).
Basically what happens is:
You ask your question -> this gets translated to numbers -> the computer returns numbers -> the numbers are translated back to text (with the help of tokens, not characters).
Ok, now imagine we don't use numbers, but simply another language.
\- You ask your question "How many r's are in the word strawberry?"
\- A translator translates it to Dutch where it becomes (literally translated) "Hoeveel r'en zitten er in het woord aardbei?"
\- Now a Dutch-speaking person answers 1
\- The translator translates the Dutch 1 to the English 1
\- You get the answer back as 1.
1 is the correct answer for the Dutch word; it's just the wrong answer for the English one.
This is basically an almost unsolvable problem (with current tech) that comes purely from translation. For an LLM there are basically two ways to solve this:
\- Either overtrain the model on this question, so its general logic suffers but it gives the wanted answer for this extremely niche question.
\- Or the model should have the intelligence to call a tool for this specific problem, because it's trivially solvable with code; it is just a basic translation problem.
The issue is that for this specific kind of question, you want a very intelligent translator that does not translate the word strawberry: it should translate the rest of the question but leave the word itself alone, because the question requires the exact word, not something like it, an alias, an equivalent, or anything else.
And you need that intelligent translator for only a tiny subset of questions; for all other questions you do not want the exact word, but a system that works with equivalent words etc., so you can ask the question in normal human text and not in a programming language.
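If you want to see this "translation" for yourself, a tokenizer makes the point in a few lines (sketch assuming the `tiktoken` package; the exact split differs per tokenizer):

```python
# Show what the model actually "sees" instead of letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
print(ids)                                               # a few opaque token ids
print([enc.decode_single_token_bytes(i) for i in ids])   # the token pieces
# The model never receives individual letters, so "count the r's"
# asks it about a representation it doesn't directly operate on.
```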
But for people who still think this is a wrong answer for an LLM: could you give a human way to solve this with a translator? An equivalent example is asking a deaf person: "How many h-sounds are there in the pronunciation of the word hour?" Things like a silent h are quirks of the English language. | 2025-01-16T00:25:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i2c7ol/why_do_so_few_persons_understand_why_the/ | Former-Ad-5757 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2c7ol | false | null | t3_1i2c7ol | /r/LocalLLaMA/comments/1i2c7ol/why_do_so_few_persons_understand_why_the/ | false | false | self | 0 | null
Trying to understand Embeddings with respect to LLAMA | 0 | Hello Everyone,
I have a quick question. I have converted some context (text) into embeddings using the multilingual-e5-large model.
Could you tell me which approach is right/feasible?
\-> We save these embedding vectors in a vector DB; when the user prompts, we get the nearest embedding matching the prompt, look up its original text, and then pass that text to llama3 (rough sketch of this approach below).

\-> Or can we directly pass the embeddings (vector values) to llama3 once we retrieve the nearest vector?
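For what it's worth, the first option is the standard RAG pattern: the vector DB stores the original text alongside each vector, so nothing gets "converted back", you just look the text up. A minimal sketch of that flow, with plain numpy standing in for a real vector DB:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("intfloat/multilingual-e5-large")
# e5 models expect "passage: " / "query: " prefixes.
chunks = ["passage: context chunk one ...", "passage: context chunk two ..."]
vectors = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embedder.encode([f"query: {question}"], normalize_embeddings=True)[0]
    scores = vectors @ q                      # cosine similarity (normalized)
    return [chunks[i] for i in np.argsort(-scores)[:k]]

context = retrieve("user question here")[0]
prompt = f"Context:\n{context}\n\nQuestion: user question here"
# ...then send `prompt` to llama3 as ordinary text.
```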
I am scratching my head trying to understand which approach is the right one; I would appreciate any help or suggestions.
Thank you | 2025-01-16T00:38:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i2ch86/trying_to_understand_embeddings_with_respect_to/ | s1va1209 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2ch86 | false | null | t3_1i2ch86 | /r/LocalLLaMA/comments/1i2ch86/trying_to_understand_embeddings_with_respect_to/ | false | false | self | 0 | null |
How to control over the output of llama ? | 2 | I have some text which I want llama to classify into 4 categories. Now llama classifies it but also gives explanation and stuff .. | 2025-01-16T00:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/1i2clwc/how_to_control_over_the_output_of_llama/ | RstarPhoneix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2clwc | false | null | t3_1i2clwc | /r/LocalLLaMA/comments/1i2clwc/how_to_control_over_the_output_of_llama/ | false | false | self | 2 | null |
3090 Turbo - Cooler Replacement? | 1 | How difficult is it to swap the cooler to a more traditional style cooler? Because let me tell you... 3090 Turbo's are... quite loud, lol. And I don't really need to save on the space. | 2025-01-16T01:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i2d0om/3090_turbo_cooler_replacement/ | PangurBanTheCat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2d0om | false | null | t3_1i2d0om | /r/LocalLLaMA/comments/1i2d0om/3090_turbo_cooler_replacement/ | false | false | self | 1 | null |
What models are you running & is local worth it? | 1 | [removed] | 2025-01-16T01:14:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i2d7qq/what_models_are_you_running_is_local_worth_it/ | ansuz2419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2d7qq | false | null | t3_1i2d7qq | /r/LocalLLaMA/comments/1i2d7qq/what_models_are_you_running_is_local_worth_it/ | false | false | self | 1 | null |
rant/vent | 0 | the other day i saw a post, here.
one dude was seeking recommendations for some llms that can teach code.
now...
and some brilliant individuals replied with models fine-tuned for coding
whyy?, just why?
an llm that is fine-tuned to write code will only write code. and just because an llm can write code doesn't mean it can teach code.
stop comparing llms to human brain!!!!, it doesn't work like that!
if you want to learn to code, go to youtube!, thousands of hours of lectures are available for free.
and if you have any doubts with solving problems, ask claude or perplexity!! | 2025-01-16T01:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i2dfil/rantvent/ | input_output_stream3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2dfil | false | null | t3_1i2dfil | /r/LocalLLaMA/comments/1i2dfil/rantvent/ | false | false | self | 0 | null |
I used Kokoro-82M, Llama 3.2, and Whisper Small to build a real-time speech-to-speech chatbot that runs locally on my MacBook! | 449 | 2025-01-16T01:57:31 | https://v.redd.it/yw01bva1i9de1 | tycho_brahes_nose_ | /r/LocalLLaMA/comments/1i2e23v/i_used_kokoro82m_llama_32_and_whisper_small_to/ | 1970-01-01T00:00:00 | 0 | {} | 1i2e23v | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yw01bva1i9de1/DASHPlaylist.mpd?a=1739714257%2CNDQ1NTNmZTE5Yzc5ZGRiZWRmOTQ4YTdkOGU0NDkxZGM1MjNmMDgyMTBhZmQ4ZDJlYWYzZjU4YTEyMTJjMmMzOQ%3D%3D&v=1&f=sd', 'duration': 212, 'fallback_url': 'https://v.redd.it/yw01bva1i9de1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/yw01bva1i9de1/HLSPlaylist.m3u8?a=1739714257%2CNTViNjg3ZmM0OWQ4NWMwNDRkYWIwZDkwNGFkNzExOGRkYjg0NDBlYTk2OGJhNmMxNjRiOWFiMzlhYWNlNTE0NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yw01bva1i9de1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1i2e23v | /r/LocalLLaMA/comments/1i2e23v/i_used_kokoro82m_llama_32_and_whisper_small_to/ | false | false | 449 | {'enabled': False, 'images': [{'id': 'ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2.png?width=108&crop=smart&format=pjpg&auto=webp&s=42c4ea21b6e8548de2a5a8c3ad88b35b634fbffd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2.png?width=216&crop=smart&format=pjpg&auto=webp&s=ef04edd06bbf04f1eaad1568abf947d1bec24d8b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2.png?width=320&crop=smart&format=pjpg&auto=webp&s=9cad5b761792f0d1128639885784010bf171ee7e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2.png?width=640&crop=smart&format=pjpg&auto=webp&s=482db0709fdc8aa0885b9b8663d4c2d55405eb71', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2.png?width=960&crop=smart&format=pjpg&auto=webp&s=d9d36ca9ddf193c250a65748ed23e6fc8f9cefbf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3d00f68745f5472251eb3a932aaa46094de0f728', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ajBjajZ2YTFpOWRlMdVERFdEQKrY8cptLv00gyZBVqtju60x3iy8w-FpWSZ2.png?format=pjpg&auto=webp&s=63dc61c5c8e185391bfc00449129e1a0ad7cd4d7', 'width': 1920}, 'variants': {}}]} |
||
Smaller versions of phi4? | 1 | There seems to be only the 14B version of phi4 currently available. Does anyone know if there are any smaller versions expected to be released?
I thought one of the main appeals of the earlier phi models was being quite good at the smaller sizes so I’m surprised there weren’t any available. At 14B it’s just one of the many models in that range and honestly not very interesting.
I also know opinions on the phi models tend to be pretty divisive. Are there any real life use cases folks have used the earlier versions for? Especially the smaller versions. I’d appreciate sharing your experience in that case (not just opinions and benchmarks). | 2025-01-16T02:33:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i2erae/smaller_versions_of_phi4/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2erae | false | null | t3_1i2erae | /r/LocalLLaMA/comments/1i2erae/smaller_versions_of_phi4/ | false | false | self | 1 | null |
I'm an iOS developer thinking of adding an LLM feature in my app. Question inside for either local or API based LLMs (newbie) | 1 | So I'm quite new to LLMs, I've been playing around with Ollama and with ChatGPT like most people. I have an idea to integrate an LLM feature in my app that will generate content for users, specifically character information and stats for a game (not actual conversation, but stuff like backstory, age, strength, etc).
My questions are:
\- Would this be better suited for OpenAI, Claude or Gemini?
\- What, if any, local LLMs can run on a phone to complete these tasks? What considerations would I have to make incorporating it into the app?
\- Will I need to fine tune the model? What all will I need to do to "prep" it for deployment in an app? Are there ways to limit its functionality (no sexual content or controversial stuff)?
I'm sorry for sounding illiterate, but I want to find the best way to do this with minimal friction for users. I like that I don't have to set up a whole login/management system for a tool like OpenAI (not to mention costs and user performance considerations), but I can also see a local model ballooning my app size or causing other performance issues. Any advice is appreciated!
I built a plugin for Moondream2 for FiftyOne, posting in case there’s any FO users in this community | 2 | This plugin integrates Moondream2 into FiftyOne, enabling various visual AI capabilities like image captioning, visual question answering, object detection, and point localization.
It’s a seamless interface to Moondream2's capabilities within FiftyOne, offering:
Multiple vision-language tasks:
Image captioning (short or detailed)
Visual question answering
Object detection
Point localization
Hardware acceleration (CUDA/MPS) when available
Dynamic version selection from HuggingFace
Full integration with FiftyOne's Dataset and UI | 2025-01-16T02:50:20 | https://github.com/harpreetsahota204/moondream2-plugin | datascienceharp | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i2f3mr | false | null | t3_1i2f3mr | /r/LocalLLaMA/comments/1i2f3mr/i_built_a_plugin_for_moondream2_for_fiftyone/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'mZyWeLblbIdHIiJbCOpH0V41sa_7LHAV4fZ2JUwMwEg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XKv0SHXNe_CPGXwJZBCKgjeRB8FUDu-brIckbjrVLvs.jpg?width=108&crop=smart&auto=webp&s=c1527e91ca166a0fbf07dcc33ede64866216b069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XKv0SHXNe_CPGXwJZBCKgjeRB8FUDu-brIckbjrVLvs.jpg?width=216&crop=smart&auto=webp&s=76baf443cd1ce80770daaf6c691a2581f06055bf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XKv0SHXNe_CPGXwJZBCKgjeRB8FUDu-brIckbjrVLvs.jpg?width=320&crop=smart&auto=webp&s=e94983f647b30b4884c21ca960d613df1526a5d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XKv0SHXNe_CPGXwJZBCKgjeRB8FUDu-brIckbjrVLvs.jpg?width=640&crop=smart&auto=webp&s=becfd676106e8f8c2e5cca56d9103099bee716b8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XKv0SHXNe_CPGXwJZBCKgjeRB8FUDu-brIckbjrVLvs.jpg?width=960&crop=smart&auto=webp&s=ed5282dc47eda8f6feb7bdae658ae24b4e4cb48c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XKv0SHXNe_CPGXwJZBCKgjeRB8FUDu-brIckbjrVLvs.jpg?width=1080&crop=smart&auto=webp&s=cc8e4047cf35c050d9a5e0644435f29577d88593', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XKv0SHXNe_CPGXwJZBCKgjeRB8FUDu-brIckbjrVLvs.jpg?auto=webp&s=86835eb4d3eca9f213c1fde7dd39ea2594069953', 'width': 1200}, 'variants': {}}]} |
|
InternLM3 released with Apache License 2.0, What is your experience so far? | 39 | InternLM3-8B-Instruct released with Apache License 2.0.
\- Trained on only 4T tokens, saving more than 75% of the training cost.
\- Supports deep thinking for complex reasoning and a normal mode for chat.
Chat Web: [https://internlm-chat.intern-ai.org.cn/](https://internlm-chat.intern-ai.org.cn/)
Model: [https://huggingface.co/internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct)
https://preview.redd.it/gihftk77v9de1.png?width=2229&format=png&auto=webp&s=398e771323dfdaf50d2f240528da8d3bc6bbf26b
https://preview.redd.it/qv2cr1w5v9de1.png?width=4096&format=png&auto=webp&s=7ec4d107872d0684216d7ed1d587746c7a59d413
https://preview.redd.it/22gjo8ucv9de1.png?width=615&format=png&auto=webp&s=1dca5ea63e2c182ab756b9b937fcb8d10ca24ab8
| 2025-01-16T03:08:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i2fgc2/internlm3_released_with_apache_license_20_what_is/ | vansinhu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2fgc2 | false | null | t3_1i2fgc2 | /r/LocalLLaMA/comments/1i2fgc2/internlm3_released_with_apache_license_20_what_is/ | false | false | 39 | null |
|
Is there a model to search REAL images with natural language? | 0 | I don't want generated fluff, but I also don't want to struggle with maintaining a complex Google Images scraper. I've found two interesting projects based on 2021's CLIP from OpenAI (a minimal sketch of the shared core idea follows the links).
[https://www.reddit.com/r/LocalLLaMA/comments/1gtsdwx/i\_used\_clip\_and\_text\_embedding\_model\_to\_create\_an/](https://www.reddit.com/r/LocalLLaMA/comments/1gtsdwx/i_used_clip_and_text_embedding_model_to_create_an/) to search on my local machine.
[https://github.com/haltakov/natural-language-image-search](https://github.com/haltakov/natural-language-image-search) to search on a unsplash dataset.
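Both of those boil down to the same few lines. A minimal local sketch, assuming the sentence-transformers CLIP checkpoint and your own folder of images:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

paths = ["photos/001.jpg", "photos/002.jpg"]  # your real image collection
img_emb = model.encode([Image.open(p) for p in paths])

def search(query: str, k: int = 5):
    # Encode the text query into the same embedding space as the images.
    q_emb = model.encode([query])
    hits = util.semantic_search(q_emb, img_emb, top_k=k)[0]
    return [(paths[h["corpus_id"]], h["score"]) for h in hits]

print(search("a dog playing in the snow"))
```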
I'd like to move forward and have a bigger dataset of images, and being able to query 5 related to a natural language query, anybody worked on that already? | 2025-01-16T03:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i2fjh2/is_there_a_model_to_search_real_images_with/ | OkBitOfConsideration | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2fjh2 | false | null | t3_1i2fjh2 | /r/LocalLLaMA/comments/1i2fjh2/is_there_a_model_to_search_real_images_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'MR1TBWR6-Cyc1cZ5-wjbJiFyA4E3_AFoINesu-Dfz7w', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/5J9rhsWrshMumrSLZSLdTKpYSaPTeoOn8Mdd70yhvcI.jpg?width=108&crop=smart&auto=webp&s=3b210b3209e80d9764911c81c2666316fa0f50af', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/5J9rhsWrshMumrSLZSLdTKpYSaPTeoOn8Mdd70yhvcI.jpg?width=216&crop=smart&auto=webp&s=3084b9beae3376bf0a4b1870e586a8781fa057ad', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/5J9rhsWrshMumrSLZSLdTKpYSaPTeoOn8Mdd70yhvcI.jpg?width=320&crop=smart&auto=webp&s=051ccfbbc7c6b1dbc8371fd24f72a639af23ae37', 'width': 320}, {'height': 370, 'url': 'https://external-preview.redd.it/5J9rhsWrshMumrSLZSLdTKpYSaPTeoOn8Mdd70yhvcI.jpg?width=640&crop=smart&auto=webp&s=ee2c42f8ea7471c9b623299cad0249a4a3083b71', 'width': 640}, {'height': 555, 'url': 'https://external-preview.redd.it/5J9rhsWrshMumrSLZSLdTKpYSaPTeoOn8Mdd70yhvcI.jpg?width=960&crop=smart&auto=webp&s=dbb85fa726bf63df5cd96eb9dc296e7619df2731', 'width': 960}, {'height': 625, 'url': 'https://external-preview.redd.it/5J9rhsWrshMumrSLZSLdTKpYSaPTeoOn8Mdd70yhvcI.jpg?width=1080&crop=smart&auto=webp&s=fb84432f1880117c9821ea637f3201da6df79641', 'width': 1080}], 'source': {'height': 1218, 'url': 'https://external-preview.redd.it/5J9rhsWrshMumrSLZSLdTKpYSaPTeoOn8Mdd70yhvcI.jpg?auto=webp&s=326a269b7d8bb9ae3482776710ed87539db4324f', 'width': 2104}, 'variants': {}}]} |
Impressed with smolagents using 4o-mini — any recommended local models? | 6 | I've been using smolagents since release with 4o-mini as the model of choice, which works quite well when generating code and running complex agents.
Any recommendations on an equivalent small local model? I have tested the smaller instruction models from qwen-2.5, llama-3.2, gemma-2, phi-4, mistral — none are quite as consistent as 4o-mini. Any community fine-tuned models that might be worth trying specifically for agent use? | 2025-01-16T03:24:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i2fr79/impressed_with_smolagents_using_4omini_any/ | sunpazed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2fr79 | false | null | t3_1i2fr79 | /r/LocalLLaMA/comments/1i2fr79/impressed_with_smolagents_using_4omini_any/ | false | false | self | 6 | null |
New function calling benchmark shows Pythonic approach outperforms JSON (DPAB-α) | 46 | A new benchmark (DPAB-α) has been released that evaluates LLM function calling in both Pythonic and JSON approaches. It demonstrates that Pythonic function calling often outperforms traditional JSON-based methods, especially for complex multi-step tasks.
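For anyone unfamiliar with the distinction, the two output styles look roughly like this (purely illustrative, with made-up `get_weather`/`send_alert` tools; not an example from the benchmark itself):

```python
# Hypothetical tools, stubbed so the snippet actually runs.
def get_weather(city: str, unit: str) -> dict:
    return {"temp": -3}  # stub

def send_alert(msg: str) -> None:
    print(msg)

# JSON-style: the model emits a structured object the runtime must parse,
# one call at a time:
#   {"name": "get_weather", "arguments": {"city": "Berlin", "unit": "celsius"}}

# Pythonic-style: the model emits executable code, so multi-step logic
# (intermediate results, conditionals, chaining calls) comes for free:
forecast = get_weather(city="Berlin", unit="celsius")
if forecast["temp"] < 0:
    send_alert(f"Freezing in Berlin: {forecast['temp']} °C")
```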
**Key findings from benchmarks:**
* Claude 3.5 Sonnet leads with 87% on Pythonic vs 45% on JSON
* Smaller models show impressive results (Dria-Agent-α-3B: 72% Pythonic)
* Even larger models like DeepSeek V3 (685B) show significant gaps (63% Pythonic vs 33% JSON)
Benchmark: [https://github.com/firstbatchxyz/function-calling-eval](https://github.com/firstbatchxyz/function-calling-eval)
Blog: [https://huggingface.co/blog/andthattoo/dpab-a](https://huggingface.co/blog/andthattoo/dpab-a)
Not affiliated with the project, just sharing. | 2025-01-16T03:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/1i2g0q5/new_function_calling_benchmark_shows_pythonic/ | emanuilov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2g0q5 | false | null | t3_1i2g0q5 | /r/LocalLLaMA/comments/1i2g0q5/new_function_calling_benchmark_shows_pythonic/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'OCHGPuknLHO_YsM2UIDo8Urmngecii3LtcMfiJX3f3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uYWE2TXaSV5UuQowK5bMiWaqU-qUajDSWr_MWiX9gNU.jpg?width=108&crop=smart&auto=webp&s=fec0d5f3d3faf56810d19867910a0d0fc5ace6c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uYWE2TXaSV5UuQowK5bMiWaqU-qUajDSWr_MWiX9gNU.jpg?width=216&crop=smart&auto=webp&s=8036f5d16cfee260783523b770afc6a52b6beffa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uYWE2TXaSV5UuQowK5bMiWaqU-qUajDSWr_MWiX9gNU.jpg?width=320&crop=smart&auto=webp&s=a9f121a54776f0f67b3d7e4b6b518280fdd42fd5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uYWE2TXaSV5UuQowK5bMiWaqU-qUajDSWr_MWiX9gNU.jpg?width=640&crop=smart&auto=webp&s=d74a855db09645eb53afaa15cba17f1039fa6ba1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uYWE2TXaSV5UuQowK5bMiWaqU-qUajDSWr_MWiX9gNU.jpg?width=960&crop=smart&auto=webp&s=b694e1de61fa0fa4a9e6d8f3f2d673fd006ad715', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uYWE2TXaSV5UuQowK5bMiWaqU-qUajDSWr_MWiX9gNU.jpg?width=1080&crop=smart&auto=webp&s=ed07a32d3e526d2764fc2234563b141dc8bde319', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uYWE2TXaSV5UuQowK5bMiWaqU-qUajDSWr_MWiX9gNU.jpg?auto=webp&s=e5a11fc7d0276bbc1c5d6fb0317efb8b95a3767e', 'width': 1200}, 'variants': {}}]} |
u/YT_Brian, why did you block me, coward? | 0 | 2025-01-16T03:53:20 | https://www.reddit.com/gallery/1i2gaic | input_output_stream3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i2gaic | false | null | t3_1i2gaic | /r/LocalLLaMA/comments/1i2gaic/uyt_brian_why_did_you_block_me_coward/ | false | false | 0 | null |
||
Struggling to Host & Fine-Tune LLaMA 3.2 3B on the Cloud - Any Tips? | 4 | Hey everyone,
I’ve been trying to host and fine-tune LLaMA 3.2 3B on a cloud platform like Lambda Labs, but honestly, it’s been pretty tough. Setting everything up and getting it to run smoothly feels more complicated than I expected.
I was wondering if anyone here has found an easier way to handle this? Maybe a more beginner-friendly method or tools that can simplify the process? I’d love to hear how you’ve managed to make it work!
Thanks so much for any advice you can share! | 2025-01-16T04:07:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i2gjsu/struggling_to_host_finetune_llama_32_3b_on_the/ | Necessary_Round8009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2gjsu | false | null | t3_1i2gjsu | /r/LocalLLaMA/comments/1i2gjsu/struggling_to_host_finetune_llama_32_3b_on_the/ | false | false | self | 4 | null |
Compared AMD 7900 XTX to Nvidia ada 4000 SFF | 5 | Compared with Ollama and phi4:latest model.
* architecture phi3
* parameters 14.7B
* context length 16384
* embedding length 5120
* quantization Q4\_K\_M
|GPU|Prompt eval duration|Eval rate|Total system power during inference|
|:-|:-|:-|:-|
|AMD 7900 XTX (300 W TDP)|5 ms|54.26 tokens/s|430 W|
|Nvidia 4000 SFF Ada (70 W TDP)|21 ms|26.45 tokens/s|134 W|
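Quick efficiency math on those numbers (total system power, so treat it as rough): 54.26 / 430 ≈ 0.13 tokens/s per watt for the XTX versus 26.45 / 134 ≈ 0.20 tokens/s per watt for the ada 4000, so the Nvidia card is roughly 1.5x more efficient here despite being about half as fast.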
| 2025-01-16T04:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i2gvcg/compared_amd_7900_xtx_to_nvidia_ada_4000_sff/ | badabimbadabum2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2gvcg | false | null | t3_1i2gvcg | /r/LocalLLaMA/comments/1i2gvcg/compared_amd_7900_xtx_to_nvidia_ada_4000_sff/ | false | false | self | 5 | null |
Deepseek V3 benchmark thread | 1 | [removed] | 2025-01-16T04:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i2h4h0/deepseek_v3_benchmark_thread/ | slavik-f | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2h4h0 | false | null | t3_1i2h4h0 | /r/LocalLLaMA/comments/1i2h4h0/deepseek_v3_benchmark_thread/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
u/YT_Brian, i will explain what i wrote if you don't be a coward and unblock me first. but you wont do that, because you are too ashamed of yourself for writing those non-sense replies while you were 'baked', and you won't be able to defend your self... | 0 | ...this is the reason that you blocked me immediately after writing this (below) reply. | 2025-01-16T05:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1i2hi9f/uyt_brian_i_will_explain_what_i_wrote_if_you_dont/ | input_output_stream3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2hi9f | false | null | t3_1i2hi9f | /r/LocalLLaMA/comments/1i2hi9f/uyt_brian_i_will_explain_what_i_wrote_if_you_dont/ | false | false | self | 0 | null |
Open Source Implementations of ChatGPT's memory feature? | 5 | Are there any good implementations of ChatGPT's memory feature where it remembers context across conversations? Most agentic frameworks seem to store the information and perform RAG over it, which is obviously not that great. Would appreciate it if anyone has any insight on how it works behind the scenes. | 2025-01-16T05:08:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i2hlmz/open_source_implementations_of_chatgpts_memory/ | Vegetable-College353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2hlmz | false | null | t3_1i2hlmz | /r/LocalLLaMA/comments/1i2hlmz/open_source_implementations_of_chatgpts_memory/ | false | false | self | 5 | null
Where should I start? | 1 | [removed] | 2025-01-16T05:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i2hnmq/where_should_i_start/ | BenefitOfTheDoubt_01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2hnmq | false | null | t3_1i2hnmq | /r/LocalLLaMA/comments/1i2hnmq/where_should_i_start/ | false | false | self | 1 | null |
Non-Academic AI/LLM Research Productivity: A Quick Comparative Analysis of 2023 and 2024 | 11 | With the help of AI buddies, I tried to produce an approximate summary of AI/LLM research papers published, with citation counts (a semi-signal of quality and importance). Here are the approximate results (given the nature of the data these numbers are obviously rough, please consider that before any pedantic commenting ;-) and I am sure I missed several organizations; feel free to highlight those or offer an authoritative source for these numbers:
|Company|Number of Published Research Articles in 2024|Approximate Total Citations in 2024|Number of Published Research Articles in 2023|Approximate Total Citations in 2023|
|:-|:-|:-|:-|:-|
|Google/DeepMind|120 - 150|8,000 - 10,000|100 - 130|7,000 - 9,000|
|Microsoft|80 - 100|5,000 - 7,000|70 - 90|4,000 - 6,000|
|Meta|70 - 90|4,000 - 6,000|60 - 80|3,000 - 5,000|
|OpenAI|40 - 60|10,000 - 15,000|30 - 50|8,000 - 12,000|
|NVIDIA|40 - 60|3,000 - 5,000|35 - 50|2,500 - 4,000|
|IBM/Watson|30 - 50|1,000 - 2,000|25 - 40|800 - 1,500|
|Baidu|30 - 50|1,500 - 2,500|25 - 40|1,200 - 2,000|
|Anthropic|20 - 30|4,000 - 6,000|15 - 25|3,000 - 5,000|
|Hugging Face|20 - 30|2,000 - 4,000|15 - 25|1,500 - 3,000|
|Stability AI|15 - 25|1,000 - 2,000|10 - 20|800 - 1,500|
|Cohere|15 - 25|1,500 - 3,000|10 - 20|1,000 - 2,000|
|AI21 Labs|10 - 20|500 - 1,500|8 - 18|400 - 1,200|
|Alibaba|10 - 20|800 - 1,800|8 - 18|700 - 1,500|
|Mistral AI|8 - 12|1,000 - 2,000|5 - 10|800 - 1,500|
|Aleph Alpha|8 - 12|600 - 1,000|5 - 10|500 - 800|
|Inflection AI|5 - 10|500 - 1,000|3 - 8|400 - 800|
|Adept|5 - 10|400 - 800|3 - 8|300 - 700|
|MosaicML|5 - 10|300 - 700|3 - 8|200 - 600|
|Groq|3 - 7|200 - 600|2 - 6|150 - 500|
|Tenstorrent|3 - 7|100 - 300|2 - 6|80 - 250|
|Abridge|2 - 5|100 - 300|1 - 4|80 - 200|
|Harvey|2 - 5|200 - 500|1 - 4|150 - 400|
|[Character.ai](http://Character.ai)|1 - 4|50 - 200|0 - 3|30 - 150|
|ElevenLabs|1 - 4|50 - 200|0 - 3|30 - 150|
|Jasper|1 - 4|50 - 200|0 - 3|30 - 150|
|Perplexity AI|1 - 4|50 - 200|0 - 3|30 - 150|
|Replit|1 - 4|50 - 200|0 - 3|30 - 150|
|Palantir|1 - 4|50 - 200|0 - 3|30 - 150|
|Writer|0 - 2|0 - 50|0 - 2|0 - 50|
|Safe Superintelligence|0 - 2|0 - 50|0 - 2|0 - 50|
|World Labs|0 - 2|0 - 50|0 - 2|0 - 50|
|xAI|0 - 2|0 - 50|0 - 2|0 - 50| | 2025-01-16T05:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i2hnwy/nonacademic_aillm_research_productivity_a_quick/ | palindsay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2hnwy | false | null | t3_1i2hnwy | /r/LocalLLaMA/comments/1i2hnwy/nonacademic_aillm_research_productivity_a_quick/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=108&crop=smart&auto=webp&s=23183dce45b8759af44dc45578bcd60d1883477a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=216&crop=smart&auto=webp&s=52091792582b6a74d0a7f4cce12d173a32a79716', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=320&crop=smart&auto=webp&s=5b0a456015d02e783fc787f594e54fe0e969ea15', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=640&crop=smart&auto=webp&s=61fb8046c762f14e0e07ea500d1ad85ab8481ee2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=960&crop=smart&auto=webp&s=831e1b06425cd4ca7928aaf4f90c1adacf6854d6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=1080&crop=smart&auto=webp&s=d6c0ba0fc918c425682b1427ac6210ee38973a76', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?auto=webp&s=91bd92d61d32d6d820ca8c34b2eaea08283a75d5', 'width': 1200}, 'variants': {}}]} |
Is rollout same as "generation" with different decoding? | 1 | [removed] | 2025-01-16T05:15:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i2hpil/is_rollout_same_as_generation_with_different/ | ContactChoice9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2hpil | false | null | t3_1i2hpil | /r/LocalLLaMA/comments/1i2hpil/is_rollout_same_as_generation_with_different/ | false | false | self | 1 | null |
Chill, chill, chill, chill, chill, chill
Chill, chill, chill, chill, chill, chill
Chill, chill, chill, chill, chill, chill
Chill, chill, chill, chill, chill, chill | 0 | They throwin' hate at me
Want me to stay at ease
F'ck you and your corporation
Y'all n-ggas can't control me
.
I'm 'bout to wild the f'ck out
You n-ggas p'ussy, ain't me
.
I won't end this fight, not this time again
So long, so long, so long, you cannot survive
And I'm not dyin', and I can't lose
I can't lose, no, I can't lose | 2025-01-16T05:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i2i5di/chill_chill_chill_chill_chill_chill_chill_chill/ | input_output_stream3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2i5di | false | null | t3_1i2i5di | /r/LocalLLaMA/comments/1i2i5di/chill_chill_chill_chill_chill_chill_chill_chill/ | false | false | self | 0 | null |
fuck u/automoderator | 0 | Four in the mornin', and I'm zonin'
They say I'm possessed, it's an omen
I keep it 300, like the Romans
300 bitches, where the Trojans?
Baby, we livin' in the moment
I've been a menace for the longest
But I ain't finished, I'm devoted
And you know it, and you know it
https://preview.redd.it/fd64w8q6pade1.png?width=894&format=png&auto=webp&s=3ada88a86981f84bc3760e990c50fd086677cb9e
| 2025-01-16T05:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i2ib18/fuck_uautomoderator/ | input_output_stream3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2ib18 | false | null | t3_1i2ib18 | /r/LocalLLaMA/comments/1i2ib18/fuck_uautomoderator/ | false | false | self | 0 | null |
New model from MiniMax | 30 | https://huggingface.co/MiniMaxAI/MiniMax-VL-01 | 2025-01-16T06:07:02 | https://www.reddit.com/r/LocalLLaMA/comments/1i2iif5/new_model_from_minimax/ | iamnotdeadnuts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2iif5 | false | null | t3_1i2iif5 | /r/LocalLLaMA/comments/1i2iif5/new_model_from_minimax/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'NrY_Z2DV0vXmUlOk9wvoch5CqfC5lt4RSo0NnPePW-s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o_TLwUQQ5o86ysTmk56OFdT8FvBPCSgb5ufXnTvfn4Y.jpg?width=108&crop=smart&auto=webp&s=0c67c6b011d84083b7cb341601d515b60483228f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o_TLwUQQ5o86ysTmk56OFdT8FvBPCSgb5ufXnTvfn4Y.jpg?width=216&crop=smart&auto=webp&s=7c5331dad52ab81e47b76994848b70321607f1ee', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o_TLwUQQ5o86ysTmk56OFdT8FvBPCSgb5ufXnTvfn4Y.jpg?width=320&crop=smart&auto=webp&s=836c72465480b03631b1f81dfa277b0da193ab66', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o_TLwUQQ5o86ysTmk56OFdT8FvBPCSgb5ufXnTvfn4Y.jpg?width=640&crop=smart&auto=webp&s=54fe5b91d846adbeb0a43a00912e7d83b9345163', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o_TLwUQQ5o86ysTmk56OFdT8FvBPCSgb5ufXnTvfn4Y.jpg?width=960&crop=smart&auto=webp&s=6f640a6baee91398f56bfdd48205b7372a2ae1b6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o_TLwUQQ5o86ysTmk56OFdT8FvBPCSgb5ufXnTvfn4Y.jpg?width=1080&crop=smart&auto=webp&s=effece38f64c800deef84f338a9894055d22ebd2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o_TLwUQQ5o86ysTmk56OFdT8FvBPCSgb5ufXnTvfn4Y.jpg?auto=webp&s=dd40aa893233e36fce7585941be383ef19a32822', 'width': 1200}, 'variants': {}}]} |
Creating a Financial Agent with Langgraph | 1 | [removed] | 2025-01-16T06:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1i2ipex/creating_a_financial_agent_with_langgraph/ | AIsimons | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2ipex | false | null | t3_1i2ipex | /r/LocalLLaMA/comments/1i2ipex/creating_a_financial_agent_with_langgraph/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dxvQrH9Hm5RiFy1cuMUD4zRFNDfOiUlMij9iVW3iuEM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=108&crop=smart&auto=webp&s=a123dab650e09bb35de4dce8c27274913c1b9abb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=216&crop=smart&auto=webp&s=fac3f862a531a87dd4285a17418fc923478cc5f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=320&crop=smart&auto=webp&s=60b6870e4af346e8adcb8ac8d1b97e05610a9d6b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=640&crop=smart&auto=webp&s=df8eadd04ed668e639050ae7b2e55af2c6a7950f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=960&crop=smart&auto=webp&s=8674a4553d87a116970b1a8428f2e818ae2aa7a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=1080&crop=smart&auto=webp&s=daea24c2ff93b8cf0237d181cbedcf7d27d6a4dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?auto=webp&s=7e13e5818d7fc95e8d2f60dfcbec8e212d6d6228', 'width': 1200}, 'variants': {}}]} |
Creating a Financial Agent with Langgraph Agent | 1 | [removed] | 2025-01-16T06:21:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i2iq2w/creating_a_financial_agent_with_langgraph_agent/ | AIsimons | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2iq2w | false | null | t3_1i2iq2w | /r/LocalLLaMA/comments/1i2iq2w/creating_a_financial_agent_with_langgraph_agent/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dxvQrH9Hm5RiFy1cuMUD4zRFNDfOiUlMij9iVW3iuEM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=108&crop=smart&auto=webp&s=a123dab650e09bb35de4dce8c27274913c1b9abb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=216&crop=smart&auto=webp&s=fac3f862a531a87dd4285a17418fc923478cc5f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=320&crop=smart&auto=webp&s=60b6870e4af346e8adcb8ac8d1b97e05610a9d6b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=640&crop=smart&auto=webp&s=df8eadd04ed668e639050ae7b2e55af2c6a7950f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=960&crop=smart&auto=webp&s=8674a4553d87a116970b1a8428f2e818ae2aa7a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?width=1080&crop=smart&auto=webp&s=daea24c2ff93b8cf0237d181cbedcf7d27d6a4dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nVxBzQetJg3CQF44eaBEFj5UhLSdE_ezDHgK3AFXZO8.jpg?auto=webp&s=7e13e5818d7fc95e8d2f60dfcbec8e212d6d6228', 'width': 1200}, 'variants': {}}]} |
What's the smallest language model that is helpful as a coding assistant? | 2 | The title pretty much explains it | 2025-01-16T06:32:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i2ivql/whats_the_smallest_language_model_that_is_helpful/ | Physical-Security115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2ivql | false | null | t3_1i2ivql | /r/LocalLLaMA/comments/1i2ivql/whats_the_smallest_language_model_that_is_helpful/ | false | false | self | 2 | null |
LLM Email Analysis | 2 | Hoping someone here might be able to help. I'm looking for a tool that'll help me analyse my email data. I've got 7 years' worth of customer conversation threads. Ideally I'd like to be able to give an LLM prompts and have it return answers based on my email data, e.g. "What are the most common questions customers ask?" I've got zero coding experience but I'm pretty savvy at getting things up and running, like open source projects. TIA | 2025-01-16T06:42:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i2j0rv/llm_email_analysis/ | binnight95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2j0rv | false | null | t3_1i2j0rv | /r/LocalLLaMA/comments/1i2j0rv/llm_email_analysis/ | false | false | self | 2 | null
ajiu | 1 | [removed] | 2025-01-16T06:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i2j23f/ajiu/ | DowntownDuty8265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2j23f | false | null | t3_1i2j23f | /r/LocalLLaMA/comments/1i2j23f/ajiu/ | false | false | self | 1 | null |
Script to run & test any HuggingFace model | 1 | [removed] | 2025-01-16T06:55:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i2j79w/script_to_run_test_any_huggingface_model/ | chatkaa_dunga | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2j79w | false | null | t3_1i2j79w | /r/LocalLLaMA/comments/1i2j79w/script_to_run_test_any_huggingface_model/ | false | false | self | 1 | null |
Nvidia DIGITS vs H100/A100? | 0 | I searched on Google and I can't find any articles/benchmarks. Can anyone help me? | 2025-01-16T07:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/1i2jfzp/nvidia_digits_vs_h100a100/ | AlgorithmicKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2jfzp | false | null | t3_1i2jfzp | /r/LocalLLaMA/comments/1i2jfzp/nvidia_digits_vs_h100a100/ | false | false | self | 0 | null
Mistral Codestral 25.01 is overhyped – Here’s why i prefer Claude 3.5 Sonnet | 1 | [removed] | 2025-01-16T07:23:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i2jl5l/mistral_codestral_2501_is_overhyped_heres_why_i/ | One-Problem-5085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2jl5l | false | null | t3_1i2jl5l | /r/LocalLLaMA/comments/1i2jl5l/mistral_codestral_2501_is_overhyped_heres_why_i/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1_MnsoBOjHUVlBv8s1AW8GF3ZoHqy4Q7Cx8Vh-5po64', 'resolutions': [{'height': 23, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=108&crop=smart&auto=webp&s=5d3f084b1f24c6be1b219ed06d50ede11039ae20', 'width': 108}, {'height': 47, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=216&crop=smart&auto=webp&s=4c4c27a0375b804db5d90bf12bf5c57a81b64386', 'width': 216}], 'source': {'height': 60, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?auto=webp&s=58df702c38afd9cce5d0d8f1b6181031aa15e77b', 'width': 272}, 'variants': {}}]} |
RAGLite – A Python package for the unhobbling of RAG | 1 | [removed] | 2025-01-16T07:42:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i2ju5t/raglite_a_python_package_for_the_unhobbling_of_rag/ | lsorber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2ju5t | false | null | t3_1i2ju5t | /r/LocalLLaMA/comments/1i2ju5t/raglite_a_python_package_for_the_unhobbling_of_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'A6dqj80cSBnpPE_t2HRLi2_rvIGkgeaK0mpU2KyVMzI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rlaRn-6Nn09PXZxvqnx-AGzv5eQfCD7ILNFrD-3fL9A.jpg?width=108&crop=smart&auto=webp&s=4468c296bcd7b1f8ff890a2cfaa38624fa7c9fbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rlaRn-6Nn09PXZxvqnx-AGzv5eQfCD7ILNFrD-3fL9A.jpg?width=216&crop=smart&auto=webp&s=45f715ba66ba9cafbe6176eb4d9291825145740a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rlaRn-6Nn09PXZxvqnx-AGzv5eQfCD7ILNFrD-3fL9A.jpg?width=320&crop=smart&auto=webp&s=28d7f56d884ff8920312a2663a7f14ef24f14d32', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rlaRn-6Nn09PXZxvqnx-AGzv5eQfCD7ILNFrD-3fL9A.jpg?width=640&crop=smart&auto=webp&s=365e2f7033e568b50e89955520f58c068abfc012', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rlaRn-6Nn09PXZxvqnx-AGzv5eQfCD7ILNFrD-3fL9A.jpg?width=960&crop=smart&auto=webp&s=d9ae21a3cb6db18d67eaca1ba8b04614033d6911', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rlaRn-6Nn09PXZxvqnx-AGzv5eQfCD7ILNFrD-3fL9A.jpg?width=1080&crop=smart&auto=webp&s=2d61e5ebd237a7e37ebc3e20f3dd4ad88ff47045', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rlaRn-6Nn09PXZxvqnx-AGzv5eQfCD7ILNFrD-3fL9A.jpg?auto=webp&s=8e36124d2b61331bb443f2bad6bb904ffc100aef', 'width': 1200}, 'variants': {}}]} |
Zhipu AI added to US sanctions blacklist | 39 | Is this the first time that a LLM producer has been sanctioned?
https://www.reuters.com/world/us/us-adds-16-entities-its-trade-blacklist-14-china-2025-01-15/ | 2025-01-16T08:20:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i2kca9/zhipu_ai_added_to_us_sanctions_blacklist/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2kca9 | false | null | t3_1i2kca9 | /r/LocalLLaMA/comments/1i2kca9/zhipu_ai_added_to_us_sanctions_blacklist/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'ajpvllKuKrvB0Tpn7HbXAyjSASevi6oND8a7VrkWnGU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bp31OdVWPsw1UlqUDkMnF55Pwxx6rOiu_tY0se0hRgU.jpg?width=108&crop=smart&auto=webp&s=2965d1f1036cb59f02be60390ba7557d37a8beeb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bp31OdVWPsw1UlqUDkMnF55Pwxx6rOiu_tY0se0hRgU.jpg?width=216&crop=smart&auto=webp&s=a5f70834b51765031a1ee496b9af6665e437b5e2', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/bp31OdVWPsw1UlqUDkMnF55Pwxx6rOiu_tY0se0hRgU.jpg?width=320&crop=smart&auto=webp&s=4059751e04c06639681825d87d56c05b4f66b72d', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/bp31OdVWPsw1UlqUDkMnF55Pwxx6rOiu_tY0se0hRgU.jpg?width=640&crop=smart&auto=webp&s=cde1cd120009d2597ba47012f033b1d298c66c6d', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/bp31OdVWPsw1UlqUDkMnF55Pwxx6rOiu_tY0se0hRgU.jpg?width=960&crop=smart&auto=webp&s=ab92586f4edfcb5017e8c854aa6c8acfa5284539', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/bp31OdVWPsw1UlqUDkMnF55Pwxx6rOiu_tY0se0hRgU.jpg?width=1080&crop=smart&auto=webp&s=88664139d4d679d728a95e27b4b0c1b139ef0f59', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/bp31OdVWPsw1UlqUDkMnF55Pwxx6rOiu_tY0se0hRgU.jpg?auto=webp&s=a37ff3b18203e0cb96fb61d73210f5f07a3d0f61', 'width': 1920}, 'variants': {}}]} |
How to use Model served by LitGPT with LangChain? | 1 | [removed] | 2025-01-16T08:32:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i2ki9c/how_to_use_model_served_by_litgpt_with_langchain/ | Informal-Victory8655 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2ki9c | false | null | t3_1i2ki9c | /r/LocalLLaMA/comments/1i2ki9c/how_to_use_model_served_by_litgpt_with_langchain/ | false | false | self | 1 | null |
MiniMax-Text-01: a new open-source top-tier model | 1 | [removed] | 2025-01-16T09:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i2l1zs/minimaxtext01_a_new_opensource_toptier_model/ | Striking-Gene2724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2l1zs | false | null | t3_1i2l1zs | /r/LocalLLaMA/comments/1i2l1zs/minimaxtext01_a_new_opensource_toptier_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 't-JH8IngcHivm1YVPoa7hh4mpZsdS9DbW7wYMvhxr-w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=108&crop=smart&auto=webp&s=4e357908a6066334b13339e17cc3095d7b4423a2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=216&crop=smart&auto=webp&s=2e4bb466e39c0d1903bf3066a3d0dea689925709', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=320&crop=smart&auto=webp&s=99aba628436f65b36c0505f0486e41298b1a9462', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=640&crop=smart&auto=webp&s=ceab60c72e05525604b9367fa7915922146839a5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=960&crop=smart&auto=webp&s=280815ef68e57515faad9d1dc62361728eb48c64', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?width=1080&crop=smart&auto=webp&s=244b695811b8b50aac245615b143ce76ecbb76af', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HbANHzNsjzvIfVaIHJm0DQnyVjkhdwH7FoXz3GLoR3k.jpg?auto=webp&s=447d8333eccd1f5454e014f3a01bcf504de0e10d', 'width': 1200}, 'variants': {}}]} |
MiniCPM-o 2.6: A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone | 1 | [removed] | 2025-01-16T09:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i2l2j7/minicpmo_26_a_gpt4o_level_mllm_for_vision_speech/ | Striking-Gene2724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2l2j7 | false | null | t3_1i2l2j7 | /r/LocalLLaMA/comments/1i2l2j7/minicpmo_26_a_gpt4o_level_mllm_for_vision_speech/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E7QMcb50rmLiQlc8lOGFsqUtuJLCo6Li6TqK1alY2QQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=108&crop=smart&auto=webp&s=a93a9748275d4d12f15b04f6213f90053c189113', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=216&crop=smart&auto=webp&s=6d59ae0cb6a486b8a69713f0ccffa6b733855c63', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=320&crop=smart&auto=webp&s=8f980a97e7afaa3c9a1fe2d7dd05a2277cbd3339', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=640&crop=smart&auto=webp&s=eeed22305606bfaf17269040f2e92371145cf016', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=960&crop=smart&auto=webp&s=d347137f5e10123e13ceedb0817fad8db56a9aa2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=1080&crop=smart&auto=webp&s=2ef2ae2e169955379c41b80d07d9012da5ee54e6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?auto=webp&s=b503c68ec96aa8749938e60006b6af3c0cc265ea', 'width': 1200}, 'variants': {}}]} |
MiniCPM-o 2.6: A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone | 1 | [removed] | 2025-01-16T09:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i2l34s/minicpmo_26_a_gpt4o_level_mllm_for_vision_speech/ | Striking-Gene2724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2l34s | false | null | t3_1i2l34s | /r/LocalLLaMA/comments/1i2l34s/minicpmo_26_a_gpt4o_level_mllm_for_vision_speech/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E7QMcb50rmLiQlc8lOGFsqUtuJLCo6Li6TqK1alY2QQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=108&crop=smart&auto=webp&s=a93a9748275d4d12f15b04f6213f90053c189113', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=216&crop=smart&auto=webp&s=6d59ae0cb6a486b8a69713f0ccffa6b733855c63', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=320&crop=smart&auto=webp&s=8f980a97e7afaa3c9a1fe2d7dd05a2277cbd3339', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=640&crop=smart&auto=webp&s=eeed22305606bfaf17269040f2e92371145cf016', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=960&crop=smart&auto=webp&s=d347137f5e10123e13ceedb0817fad8db56a9aa2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?width=1080&crop=smart&auto=webp&s=2ef2ae2e169955379c41b80d07d9012da5ee54e6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1t6yjRcK16sRIEOoqofkQ3Q7XoJ3b5gF67hvw33eEAw.jpg?auto=webp&s=b503c68ec96aa8749938e60006b6af3c0cc265ea', 'width': 1200}, 'variants': {}}]} |
criticize my local ai build, what would you change? | 1 | [https://pcpartpicker.com/list/vm7X2x](https://pcpartpicker.com/list/vm7X2x) | 2025-01-16T09:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i2l4rg/criticize_my_local_ai_build_what_would_you_change/ | nas2k21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2l4rg | false | null | t3_1i2l4rg | /r/LocalLLaMA/comments/1i2l4rg/criticize_my_local_ai_build_what_would_you_change/ | false | false | self | 1 | null |
Jailbroken Llama on iOS | 1 | https://apps.apple.com/us/app/clariti-private-ai-assistant/id6739746682 | 2025-01-16T09:42:07 | https://v.redd.it/rk1hab9utbde1 | claritiai | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i2le4e | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rk1hab9utbde1/DASHPlaylist.mpd?a=1739612542%2CNmI2N2I0Mjg3YTYwM2U3NjJiNTc3NTk0OGZmYTI0NmQ4MDY2NTZlMjViMDk1MDQ2MjI4Y2U1MjViZjczYzg3MA%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/rk1hab9utbde1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/rk1hab9utbde1/HLSPlaylist.m3u8?a=1739612542%2CZjExNTFiYTAyYzIwNjhhNjkzYjM1MmMzNGJkOTBmZDdlNjllZjI5MWZlODRlNDIxN2ViZDBmNGY0YWEwZjViYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rk1hab9utbde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}} | t3_1i2le4e | /r/LocalLLaMA/comments/1i2le4e/jailbroken_llama_on_ios/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=108&crop=smart&format=pjpg&auto=webp&s=600682ddb014e9b7cee444f70be5a5e4f5b4061e', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=216&crop=smart&format=pjpg&auto=webp&s=a1da419c5fbe44862c9f370c747a7892f0f03df5', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=320&crop=smart&format=pjpg&auto=webp&s=f6b955dfc4825bda70f6598b72574aacc90de236', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=640&crop=smart&format=pjpg&auto=webp&s=9ee18d131e2b677e5d6e6d2b522d1c6b56af7819', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?format=pjpg&auto=webp&s=845cc619c1c7f985d151926d3be740053461565e', 'width': 886}, 'variants': {'nsfw': {'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ba91b383e789e1dd892f5e285a5d4508843dc914', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=d32a09073dcc3d95d233ca7c906144eadc559e73', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c70baea2ba78ed99dbdf5d2ed6db37000c77d3c4', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=24280529ff3d92830d1b8aa9464291939eeb6edc', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?blur=40&format=pjpg&auto=webp&s=0a4e21d255f19fc7b549275e23c776831ca21cfd', 'width': 886}}, 'obfuscated': {'resolutions': [{'height': 216, 'url': 
'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ba91b383e789e1dd892f5e285a5d4508843dc914', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=d32a09073dcc3d95d233ca7c906144eadc559e73', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c70baea2ba78ed99dbdf5d2ed6db37000c77d3c4', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=24280529ff3d92830d1b8aa9464291939eeb6edc', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/ZjJvNWRhN3V0YmRlMfoX2soACNAx8nCO85AT3OlXcVyCBsYoIQqHxC2J0wfH.png?blur=40&format=pjpg&auto=webp&s=0a4e21d255f19fc7b549275e23c776831ca21cfd', 'width': 886}}}}]} |
All new SOTA MOE open source model, up to 4M context. - MiniMax-AI/MiniMax-01 | 82 | 2025-01-16T09:48:38 | https://github.com/MiniMax-AI/MiniMax-01 | bidet_enthusiast | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i2lh3b | false | null | t3_1i2lh3b | /r/LocalLLaMA/comments/1i2lh3b/all_new_sota_moe_open_source_model_up_to_4m/ | false | false | 82 | {'enabled': False, 'images': [{'id': 'BURb3p-aFpm6Xf9NnGfZ9a6TO9yx3gJN0bHJXxxzaFc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CaDa8UUx90v9PcEOeKGi-HSkE2urc6XyHG74Upv4XCw.jpg?width=108&crop=smart&auto=webp&s=3e99617ca92b634c7fb153e62135ace16e8242e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CaDa8UUx90v9PcEOeKGi-HSkE2urc6XyHG74Upv4XCw.jpg?width=216&crop=smart&auto=webp&s=055dc5bc76366c5027705abf6c6a55f73b1a67f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CaDa8UUx90v9PcEOeKGi-HSkE2urc6XyHG74Upv4XCw.jpg?width=320&crop=smart&auto=webp&s=292dc3016b8da388a58b110eb4be09cce66155df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CaDa8UUx90v9PcEOeKGi-HSkE2urc6XyHG74Upv4XCw.jpg?width=640&crop=smart&auto=webp&s=951db4cb8b0737b142d2311a74092a236c8f4e90', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CaDa8UUx90v9PcEOeKGi-HSkE2urc6XyHG74Upv4XCw.jpg?width=960&crop=smart&auto=webp&s=7fa3f8cc7feaa56d9dfe32f51fc0573920f6b932', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CaDa8UUx90v9PcEOeKGi-HSkE2urc6XyHG74Upv4XCw.jpg?width=1080&crop=smart&auto=webp&s=118ab22d4c6a73ca7922de3a08618a90b21030eb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CaDa8UUx90v9PcEOeKGi-HSkE2urc6XyHG74Upv4XCw.jpg?auto=webp&s=3e79062d33d7e031b542eab31047404ed663081c', 'width': 1200}, 'variants': {}}]} |
Does deepseek-chat allow to generate multiple outputs per input? | 1 | [removed] | 2025-01-16T09:57:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i2ll7d/does_deepseekchat_allow_to_generate_multiple/ | NarrowEffect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2ll7d | false | null | t3_1i2ll7d | /r/LocalLLaMA/comments/1i2ll7d/does_deepseekchat_allow_to_generate_multiple/ | false | false | self | 1 | null |
She Is in Love With ChatGPT | 0 | 2025-01-16T10:41:14 | https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html | MasterScrat | nytimes.com | 1970-01-01T00:00:00 | 0 | {} | 1i2m6jm | false | null | t3_1i2m6jm | /r/LocalLLaMA/comments/1i2m6jm/she_is_in_love_with_chatgpt/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Y-fvUWXe2byWfxFPzV_mKPw1TT4fNiYFv0GLCqtjyqc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1F2vo78DL0SCuRu1B3daTlnCl1HGw5CbaSf-g-GFjoc.jpg?width=108&crop=smart&auto=webp&s=f2159c5acb470a5c6ba2e040b227f43301d6303f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/1F2vo78DL0SCuRu1B3daTlnCl1HGw5CbaSf-g-GFjoc.jpg?width=216&crop=smart&auto=webp&s=b2ff631572190445dc92c10f546d16626157e527', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/1F2vo78DL0SCuRu1B3daTlnCl1HGw5CbaSf-g-GFjoc.jpg?width=320&crop=smart&auto=webp&s=0b9b1ff7160e96e4565475504e2aca4862708bb5', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/1F2vo78DL0SCuRu1B3daTlnCl1HGw5CbaSf-g-GFjoc.jpg?width=640&crop=smart&auto=webp&s=63ab7111c68adcdeeb891d7fb3695becf24c8091', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/1F2vo78DL0SCuRu1B3daTlnCl1HGw5CbaSf-g-GFjoc.jpg?width=960&crop=smart&auto=webp&s=a82337e0a68c7a727e93eed2770d4fafb9e4fd82', 'width': 960}], 'source': {'height': 550, 'url': 'https://external-preview.redd.it/1F2vo78DL0SCuRu1B3daTlnCl1HGw5CbaSf-g-GFjoc.jpg?auto=webp&s=066a1c2f4b689241a5bafca292d7cad8d15a7ebc', 'width': 1050}, 'variants': {}}]} |
Is any AI coding IDE/helper able to solve this kind of a problem | 1 | [removed] | 2025-01-16T10:51:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i2mbwl/is_any_ai_coding_idehelper_able_to_solve_this/ | Jakedismo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2mbwl | false | null | t3_1i2mbwl | /r/LocalLLaMA/comments/1i2mbwl/is_any_ai_coding_idehelper_able_to_solve_this/ | false | false | 1 | null |
LLM AI-powered NPCs, what changes will it bring to the gaming world? Do gamers want this change? If you are wondering about the new AI in games read this and let me know what you think, I want to talk about it. | 1 | 2025-01-16T10:54:34 | https://medium.com/curiouserinstitute/ai-powered-npcs-hype-or-hallucination-11ddfc530e33 | AetherianChronicles | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1i2md7b | false | null | t3_1i2md7b | /r/LocalLLaMA/comments/1i2md7b/llm_aipowered_npcs_what_changes_will_it_bring_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'JPNS4O9tb-X3iI-dKomz1c_r3D_B3AJ8M8KsfYL7czE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ro11USDZrAKYcWrbSLXr-e26Ai0aVwk-NnDIyciBatA.jpg?width=108&crop=smart&auto=webp&s=9bd28cf68a4aab606f638f2137b46bd9e1f2287c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ro11USDZrAKYcWrbSLXr-e26Ai0aVwk-NnDIyciBatA.jpg?width=216&crop=smart&auto=webp&s=3919db265860446f0a4fb11aa866f20eb6df89af', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ro11USDZrAKYcWrbSLXr-e26Ai0aVwk-NnDIyciBatA.jpg?width=320&crop=smart&auto=webp&s=203f8a349461fb40463e71e5bdec441a2878c46b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ro11USDZrAKYcWrbSLXr-e26Ai0aVwk-NnDIyciBatA.jpg?width=640&crop=smart&auto=webp&s=8d8fd482b757d66c5edb554369e041c367f74d6f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ro11USDZrAKYcWrbSLXr-e26Ai0aVwk-NnDIyciBatA.jpg?width=960&crop=smart&auto=webp&s=4f3585b550f070b9d045ff816b9700052d17cfbe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ro11USDZrAKYcWrbSLXr-e26Ai0aVwk-NnDIyciBatA.jpg?width=1080&crop=smart&auto=webp&s=384e006b1ca91d45470c8bf626b2d725c6a370bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ro11USDZrAKYcWrbSLXr-e26Ai0aVwk-NnDIyciBatA.jpg?auto=webp&s=1d1624704cd58cd9b135b5bc20830ec88c3da5a2', 'width': 1200}, 'variants': {}}]} |
Hands-on experience with the MiniCPM-o 2.6 | 1 | [removed] | 2025-01-16T11:08:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i2mk9y/handson_experience_with_the_minicpmo_26/ | DennisKise_648 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2mk9y | false | null | t3_1i2mk9y | /r/LocalLLaMA/comments/1i2mk9y/handson_experience_with_the_minicpmo_26/ | false | false | 1 | {'enabled': False, 'images': [{'id': '47zIFZcMoq4eLfMpPpx9UsJi5Oq45jaPMLy4-KhnPPk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=108&crop=smart&auto=webp&s=182864ff8445baab94c3baf94f87c914c070fdb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=216&crop=smart&auto=webp&s=167d61400fbd50a227ebcf27a757addebb5b38c3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=320&crop=smart&auto=webp&s=4a713995bcc7da68979a173d6d51f91a0c0d1dd1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=640&crop=smart&auto=webp&s=bf06c624cc0dcbf599f5edea7b4be7e420f634b7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=960&crop=smart&auto=webp&s=c8a17316ff5f86130a715ab4928aa91486aaa2a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=1080&crop=smart&auto=webp&s=f098b938d40335f539be2c35054d1e8aaceec2b1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?auto=webp&s=34c637b0cd1f5a0766fb85d10608f3234ae6d28c', 'width': 1200}, 'variants': {}}]} |
Releasing the paper "Enhancing Human-Like Responses in Large Language Models", along with the Human-Like DPO Dataset and Human-Like LLMs | 24 | 🚀 Introducing our paper: **Enhancing Human-Like Responses in Large Language Models.**
We've been working on improving conversational AI with **more natural, human-like responses**—while keeping performance strong on standard benchmarks!
📄 **Paper:** [Enhancing Human-Like Responses in Large Language Models](https://huggingface.co/papers/2501.05032)
📊 **Dataset:** [Human-Like DPO Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset)
🤖 **Models:** [Human-Like LLMs Collection](https://huggingface.co/collections/HumanLLMs/human-like-llms-6759fa68f22e11eb1a10967e)
Related Tweet: [https://x.com/Weyaxi/status/1877763008257986846](https://x.com/Weyaxi/status/1877763008257986846)
# What We Did:
* Used **synthetic datasets** generated from the Llama3 family to fine-tune models with **DPO** and **LoRA**.
* Achieved a **90% selection rate** in human-likeness when compared with the official instruct models we fine-tuned.
* Maintained strong performance (nearly no loss) on benchmarks like the Open LLM Leaderboard.
These models and our dataset are **open-source** on Hugging Face—feel free to test them out, fine-tune them further, or contribute! 🚀 | 2025-01-16T11:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i2mnmp/releasing_the_paper_enhancing_humanlike_responses/ | Weyaxi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2mnmp | false | null | t3_1i2mnmp | /r/LocalLLaMA/comments/1i2mnmp/releasing_the_paper_enhancing_humanlike_responses/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'mdY2zRiuSX29GopJUBZPzItDaUhTCrdmncjwNZRRPto', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dfVy8d9w77Z4cx4afOMXGAQ-da_LZXx5jR4NzuTvwO8.jpg?width=108&crop=smart&auto=webp&s=8cd4318e0a82e5438af6f5c595df82a8ee65dcd1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dfVy8d9w77Z4cx4afOMXGAQ-da_LZXx5jR4NzuTvwO8.jpg?width=216&crop=smart&auto=webp&s=76bb4af5e15184e3dfd1943408c76505753ea6b2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dfVy8d9w77Z4cx4afOMXGAQ-da_LZXx5jR4NzuTvwO8.jpg?width=320&crop=smart&auto=webp&s=dafdbc7a0934bb56e216ee537a8712e1a8c15606', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dfVy8d9w77Z4cx4afOMXGAQ-da_LZXx5jR4NzuTvwO8.jpg?width=640&crop=smart&auto=webp&s=1fb8eb065c05eb45842097e34e76eda8c403b88f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dfVy8d9w77Z4cx4afOMXGAQ-da_LZXx5jR4NzuTvwO8.jpg?width=960&crop=smart&auto=webp&s=77d5919603d5dc1bb45d7c560ad75362b6996d84', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dfVy8d9w77Z4cx4afOMXGAQ-da_LZXx5jR4NzuTvwO8.jpg?width=1080&crop=smart&auto=webp&s=89f0636cdff85afb95d404fd14d6eea7f73d5bb6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dfVy8d9w77Z4cx4afOMXGAQ-da_LZXx5jR4NzuTvwO8.jpg?auto=webp&s=812c0ab2039fbe8ca134a3976ed25a37e8756ee7', 'width': 1200}, 'variants': {}}]} |
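For readers who want to try reproducing the recipe, here is a minimal sketch (not the authors' code) of DPO fine-tuning with a LoRA adapter on the released dataset. The base model, `beta`, and LoRA hyperparameters are assumptions, and trl argument names vary across versions:

```python
# Illustrative sketch only -- not the authors' training code.
# Assumes the dataset exposes the standard prompt/chosen/rejected columns
# and that a recent trl release is installed.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs: each row holds a prompt plus a chosen and a rejected reply.
dataset = load_dataset("HumanLLMs/Human-Like-DPO-Dataset", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="human-like-dpo", beta=0.1),  # beta is assumed
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```

With a `peft_config` supplied, DPOTrainer uses the frozen base weights as the reference model, so no separate reference copy needs to be loaded.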
Code similarity and retrieval - Create an embedding and/or a tokenizer? | 1 | [removed] | 2025-01-16T11:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i2mqy6/code_similarity_and_retrieval_create_an_embedding/ | BrewryActual | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2mqy6 | false | null | t3_1i2mqy6 | /r/LocalLLaMA/comments/1i2mqy6/code_similarity_and_retrieval_create_an_embedding/ | false | false | self | 1 | null |
How would you build an LLM agent application without using LangChain? | 573 | 2025-01-16T11:37:48 | Zealousideal-Cut590 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i2n0il | false | null | t3_1i2n0il | /r/LocalLLaMA/comments/1i2n0il/how_would_you_build_an_llm_agent_application/ | false | false | 573 | {'enabled': True, 'images': [{'id': '0r7xXxV92sz3SnUD8rKUqtUuAiKBWvjwnmtOE3kWatc', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/q1d445cdecde1.jpeg?width=108&crop=smart&auto=webp&s=7f6e1b5c97f7bc77a8821dfc7468b5fd8f5862b7', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/q1d445cdecde1.jpeg?width=216&crop=smart&auto=webp&s=53c4ed61458eb93e65bbd87b75bfd6543cbf6400', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/q1d445cdecde1.jpeg?width=320&crop=smart&auto=webp&s=d53957b0b5cd83b245d383aec699f6fb075f1d50', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/q1d445cdecde1.jpeg?auto=webp&s=dce4416fd11cb24fbd1c8630faab589c1dbb515f', 'width': 500}, 'variants': {}}]} |
What is the proper multi-turn conversation format for the Llama3.1 fine-tuning | 2 | I am using SFTTrainer to fine-tune my model on a multi-turn conversation dataset like this:
JSONL file:
{
"messages": [
{"role": "system", "content": "You are a helpful AI chatbot."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing well, thank you! How can I help you?"},
{"role": "user", "content": "Can you explain machine learning?"},
{"role": "assistant", "content": "Machine learning is..."}
]
}
{
"messages": [
{"role": "system", "content": "You are a helpful AI chatbot."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing well, thank you! How can I help you?"},
{"role": "user", "content": "Can you explain machine learning?"},
{"role": "assistant", "content": "Machine learning is..."}
]
}
For me it is crucial to keep the previous conversation turns in each training example.
I could not find a best practice for this.
I am using SFTTrainer from trl | 2025-01-16T11:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i2n0xx/what_is_the_proper_multiturn_conversation_format/ | lapups | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2n0xx | false | null | t3_1i2n0xx | /r/LocalLLaMA/comments/1i2n0xx/what_is_the_proper_multiturn_conversation_format/ | false | false | self | 2 | null |
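A minimal sketch of how this format is commonly consumed (not an authoritative answer; the model name and arguments are assumptions, and trl argument names vary across releases). Recent trl versions detect a `messages` column and apply the tokenizer's chat template to it, so all previous turns are kept in each training example:

```python
# Minimal sketch: multi-turn SFT with trl, assuming a JSONL file where each
# record has a "messages" list as in the example above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base = "meta-llama/Llama-3.1-8B-Instruct"  # assumed model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# SFTTrainer recognizes the conversational format (a "messages" column) and
# applies the chat template, preserving the earlier turns of each dialogue.
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="llama31-multiturn", max_seq_length=4096),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```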
Kadrey v. Meta Platforms copyright infringement lawsuit | 3 | - https://www.courtlistener.com/docket/67569326/kadrey-v-meta-platforms-inc/
- https://techcrunch.com/2025/01/14/meta-execs-obsessed-over-beating-openais-gpt-4-internally-court-filings-reveal/
Anybody following this? It might affect future Llama releases. Meta got in trouble in 2023 for disclosing in the first Llama paper that they used pirated books in the pretraining dataset (originally just Books3 from ThePile), and through the lawsuit it eventually turned out they used more than that for the following Llama releases (including several hundred billion tokens from LibGen).
It's common knowledge that every AI lab is training commercially-competitive LLMs on copyrighted data, but if Meta loses, LLM pretraining in the US (including for open-weight models) might be in trouble, as it already is in the EU due to the upcoming regulations there.
Where can you find really good quality data to train an LLM in reinforcement learning ? | 1 | Hello,
I'm looking for a place that lists really high-quality reflection data to use to train a model, covering extremely diverse domains (math, language, planning, organization, logic, etc.). Ideally, I'd like data matching the response quality of o1 pro, for example. Where can I find it? | 2025-01-16T11:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i2n8k4/where_can_you_find_really_good_quality_data_to/ | Wonderful-Excuse4922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2n8k4 | false | null | t3_1i2n8k4 | /r/LocalLLaMA/comments/1i2n8k4/where_can_you_find_really_good_quality_data_to/ | false | false | self | 1 | null
easiest way of using LMstudio to "chat" with my outlook inbox? | 3 | Is there any "easy" way of using a model in LM Studio to ask questions about my Outlook inbox?
Things such as "Which emails haven't I answered yet?", "Is there any relevant event happening this week that I was informed about via email?", "create a table with the current state of all the leads that I have been getting this month".
Is this even possible? I understand that I could create a script that uses the Outlook API to download all the emails as text files and then have any model in LM Studio read those text files. But is there a better way of doing it?
Thanks in advance!! | 2025-01-16T11:57:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i2navx/easiest_way_of_using_lmstudio_to_chat_with_my/ | marloquemegusta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2navx | false | null | t3_1i2navx | /r/LocalLLaMA/comments/1i2navx/easiest_way_of_using_lmstudio_to_chat_with_my/ | false | false | self | 3 | null |
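One rough approach (a sketch, not a polished solution): pull recent messages through Outlook's COM interface on Windows and send them to LM Studio's OpenAI-compatible local server (default `http://localhost:1234/v1`). The folder constant, truncation limits, and model name below are assumptions:

```python
# Hypothetical sketch: query a local LM Studio model about recent Outlook mail.
import win32com.client  # pip install pywin32 (Windows only)
from openai import OpenAI

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)  # 6 = olFolderInbox
items = inbox.Items
items.Sort("[ReceivedTime]", True)  # newest first

# Concatenate the 20 most recent emails, truncated to keep the context small.
corpus = "\n\n".join(
    f"From: {m.SenderName}\nSubject: {m.Subject}\n{str(m.Body)[:1000]}"
    for m in list(items)[:20]
)

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
reply = client.chat.completions.create(
    model="local-model",  # LM Studio serves whatever model is currently loaded
    messages=[
        {"role": "system", "content": "Answer questions about these emails."},
        {"role": "user", "content": corpus + "\n\nWhich emails haven't I answered yet?"},
    ],
)
print(reply.choices[0].message.content)
```

For things like the leads table, a RAG setup over exported emails would likely scale better than stuffing the whole inbox into the context window.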
exl2 works better with long context than llama.cpp? | 1 | I was running this RAPTOR example
[https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
by modifying it to use langchain's LlamaCpp. After multiple tries, I noticed that it needs 50k context to run with Phi-3-medium-128k-instruct Q4_K_M. However, to run it on my 3090, I needed to offload 5 out of 41 layers to the CPU, which gave a run time of 20 hours.
Then I tried langchain's ExLlamav2 with Phi-3-medium-128k-instruct 4.25bpw. I found that it can finish in 20 minutes at 19 GB VRAM usage with 50k context.
How come? Can I set something in langchain's LlamaCpp to prevent layers offload? | 2025-01-16T12:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i2nep2/exl2_works_better_with_long_context_than_llamacpp/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2nep2 | false | null | t3_1i2nep2 | /r/LocalLLaMA/comments/1i2nep2/exl2_works_better_with_long_context_than_llamacpp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XBcr3dnixBuwXeHz-aGyT6iKxWDIU8p06Vgty-mpwAs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QQMsP0ng5FtFnxpNctSoFzpywqmQnbDFFVBNRii9CZs.jpg?width=108&crop=smart&auto=webp&s=54250a24bff4467baafc48eab7da9af786ebed1c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QQMsP0ng5FtFnxpNctSoFzpywqmQnbDFFVBNRii9CZs.jpg?width=216&crop=smart&auto=webp&s=bcef7b5c92b5bb417644cf92c509c0c818bff16e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QQMsP0ng5FtFnxpNctSoFzpywqmQnbDFFVBNRii9CZs.jpg?width=320&crop=smart&auto=webp&s=3ebcb153020c413d9128f683fb97e76a6cd3d612', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QQMsP0ng5FtFnxpNctSoFzpywqmQnbDFFVBNRii9CZs.jpg?width=640&crop=smart&auto=webp&s=787a75119972990cc9817c58e3bb5c4b04e6099d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QQMsP0ng5FtFnxpNctSoFzpywqmQnbDFFVBNRii9CZs.jpg?width=960&crop=smart&auto=webp&s=37854863d63beb3844cfa527f71eb90bd484b391', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QQMsP0ng5FtFnxpNctSoFzpywqmQnbDFFVBNRii9CZs.jpg?width=1080&crop=smart&auto=webp&s=4e72321d0415c88db5a586cb7bd099123a155c05', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QQMsP0ng5FtFnxpNctSoFzpywqmQnbDFFVBNRii9CZs.jpg?auto=webp&s=df2f483c5ee91a0170e9a676f824eae3ced367cb', 'width': 1200}, 'variants': {}}]} |
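On the last question: a sketch, assuming the langchain-community wrapper, which forwards `n_gpu_layers` to llama-cpp-python. Setting it to `-1` requests every layer on the GPU; if VRAM is insufficient the load fails outright instead of silently running layers on the CPU:

```python
# Sketch: keep all layers on the GPU via langchain's LlamaCpp wrapper.
# The model path is an assumed local file; with 50k context the KV cache
# may still exceed a single 24 GB card at this quantization.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="Phi-3-medium-128k-instruct-Q4_K_M.gguf",  # assumed path
    n_gpu_layers=-1,  # -1 = offload every layer to the GPU, no CPU fallback
    n_ctx=50_000,     # the context size the notebook needed
    n_batch=512,
)
print(llm.invoke("Summarize RAPTOR in one sentence."))
```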
Do you think that LLMs can do better natural language translation than services like DeepL, GoogleTranslate, Microsoft Translate etc.? | 57 | My personal experience (which could be very subjective) with these translators is that even regular old chat bots with not much prompt engineering already produce better results with translations. Is this really just an unpopular opinion? | 2025-01-16T12:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i2nkui/do_you_think_that_llms_can_do_better_natural/ | sassyhusky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2nkui | false | null | t3_1i2nkui | /r/LocalLLaMA/comments/1i2nkui/do_you_think_that_llms_can_do_better_natural/ | false | false | self | 57 | null |
Who's cooking local agents for the DRIA benchmark? | 1 | [removed] | 2025-01-16T12:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/1i2nn4y/whos_cooking_local_agents_for_the_dria_benchmark/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2nn4y | false | null | t3_1i2nn4y | /r/LocalLLaMA/comments/1i2nn4y/whos_cooking_local_agents_for_the_dria_benchmark/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'I4lns6ybFSx3Zh8daacho5Q3Yc7MjOn0Ybu7B0cdC38', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/frsUZr5qJ8oNdj_m9LK1jaZSUFaDE4blKoQ-_RaOdTg.jpg?width=108&crop=smart&auto=webp&s=419f6a90f84905208802a390c6278eb40f21a2f8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/frsUZr5qJ8oNdj_m9LK1jaZSUFaDE4blKoQ-_RaOdTg.jpg?width=216&crop=smart&auto=webp&s=e4001844269f18ee3b0e23f49e2ba5a5660f4b9d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/frsUZr5qJ8oNdj_m9LK1jaZSUFaDE4blKoQ-_RaOdTg.jpg?width=320&crop=smart&auto=webp&s=8527f078ad7ec2f528a3dee009120e03be144d25', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/frsUZr5qJ8oNdj_m9LK1jaZSUFaDE4blKoQ-_RaOdTg.jpg?width=640&crop=smart&auto=webp&s=36bbc5b5d2fa498b1e928c26ceb713e7a5ecbad3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/frsUZr5qJ8oNdj_m9LK1jaZSUFaDE4blKoQ-_RaOdTg.jpg?width=960&crop=smart&auto=webp&s=2a9dd93b237d7be168a88931c8b3f9b3c8c03f4d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/frsUZr5qJ8oNdj_m9LK1jaZSUFaDE4blKoQ-_RaOdTg.jpg?width=1080&crop=smart&auto=webp&s=bf2c1e8825b1800ec36ff5752154a2f6c507d933', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/frsUZr5qJ8oNdj_m9LK1jaZSUFaDE4blKoQ-_RaOdTg.jpg?auto=webp&s=60470d2732ae8a5a704353f68730490a24a9dfc7', 'width': 1200}, 'variants': {}}]} |
Best AI for writing in Chinese? | 2 | I have to process and generate responses to a large number of Chinese reviews. What would be the best AI for this task? DeepSeek came to my mind as it's Chinese. Would Gemini or Claude perform better? | 2025-01-16T12:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/1i2npin/best_ai_for_writing_in_chinese/ | PixelatedXenon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2npin | false | null | t3_1i2npin | /r/LocalLLaMA/comments/1i2npin/best_ai_for_writing_in_chinese/ | false | false | self | 2 | null
Low GPU usage in LM Studio | 1 | [removed] | 2025-01-16T12:41:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i2o0zm/low_gpu_usage_in_lm_studio/ | Right_Conference_859 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2o0zm | false | null | t3_1i2o0zm | /r/LocalLLaMA/comments/1i2o0zm/low_gpu_usage_in_lm_studio/ | false | false | self | 1 | null |
The Mirage of Artificial Intelligence Terms of Use Restrictions | 10 | 2025-01-16T12:44:05 | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5049562 | Jamais_Vu206 | papers.ssrn.com | 1970-01-01T00:00:00 | 0 | {} | 1i2o2he | false | null | t3_1i2o2he | /r/LocalLLaMA/comments/1i2o2he/the_mirage_of_artificial_intelligence_terms_of/ | false | false | default | 10 | null |
My article: Building an On-Premise Document Intelligence Stack with Docling, Ollama, Phi-4 | ExtractThinker | 6 | 2025-01-16T13:09:47 | https://medium.com/@enoch3712/building-an-on-premise-document-intelligence-stack-with-docling-ollama-phi-4-extractthinker-6ab60b495751 | GeorgiaWitness1 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1i2ojbd | false | null | t3_1i2ojbd | /r/LocalLLaMA/comments/1i2ojbd/my_article_building_an_onpremise_document/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'iDVBUlOnomwOqr1kdOhcVNqxwNfPX-Dh6LWT7b1NeFw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/W6xziHU0vFKUkIddpZi7AQv6qoWvIZTU1If9frCz1Ls.jpg?width=108&crop=smart&auto=webp&s=6d6403746a129cd1b1a72c94c39a67691637677e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/W6xziHU0vFKUkIddpZi7AQv6qoWvIZTU1If9frCz1Ls.jpg?width=216&crop=smart&auto=webp&s=a5c82a1bfe0d55b4d2e6dc44a5a0df02124c85c0', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/W6xziHU0vFKUkIddpZi7AQv6qoWvIZTU1If9frCz1Ls.jpg?width=320&crop=smart&auto=webp&s=8af83d7bdc266de7c8b00bda9ba21c4f56f528cb', 'width': 320}, {'height': 346, 'url': 'https://external-preview.redd.it/W6xziHU0vFKUkIddpZi7AQv6qoWvIZTU1If9frCz1Ls.jpg?width=640&crop=smart&auto=webp&s=dc3feb9690aa60bb1069e3283969deb56065629e', 'width': 640}, {'height': 520, 'url': 'https://external-preview.redd.it/W6xziHU0vFKUkIddpZi7AQv6qoWvIZTU1If9frCz1Ls.jpg?width=960&crop=smart&auto=webp&s=f16cb627fffa26b852c7b85aa2acd10f6735ff1d', 'width': 960}, {'height': 585, 'url': 'https://external-preview.redd.it/W6xziHU0vFKUkIddpZi7AQv6qoWvIZTU1If9frCz1Ls.jpg?width=1080&crop=smart&auto=webp&s=8214e0bf277da065e173d64104c7a1c7184bb59f', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/W6xziHU0vFKUkIddpZi7AQv6qoWvIZTU1If9frCz1Ls.jpg?auto=webp&s=b9bc8346ec2d228cb35dafc6ba19d6ab3da3eeb3', 'width': 1200}, 'variants': {}}]} |
AI project consultancy? | 4 | I’ve been building internal AI based apps for our company close to 2 years now. Started from the basics, RAG chatbots, etc, now in more advanced agentic systems.
I’ve been thinking lately about starting a consultancy that specializes in building these projects for smaller companies that don’t have a data science team but want to get into AI automations.
Has anyone else thought about/did it/decided against this? What are your thoughts?
Since I have all the frameworks in place already I had planned on hosting demos of template automations on my own website to send around, then it would just be something I share with local companies. “Hey, my company specializes in automating this kind of thing using AI, here’s my card” type of interactions?
Idk, just curious what others think. I have a well paying job in AI already but it feels like I’m being held back by it and could do something more with these skills. | 2025-01-16T13:11:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i2okdd/ai_project_consultancy/ | 2016YamR6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i2okdd | false | null | t3_1i2okdd | /r/LocalLLaMA/comments/1i2okdd/ai_project_consultancy/ | false | false | self | 4 | null |