r/LocalLLaMA post dump, column schema:
- title: string (1–300 chars)
- score: int64 (0–8.54k)
- selftext: string (0–40k chars)
- created: timestamp[ns]
- url: string (0–780 chars)
- author: string (3–20 chars)
- domain: string (0–82 chars)
- edited: timestamp[ns]
- gilded: int64 (0–2)
- gildings: string (7 classes)
- id: string (7 chars)
- locked: bool (2 classes)
- media: string (646–1.8k chars, nullable)
- name: string (10 chars)
- permalink: string (33–82 chars)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (4–213 chars)
- ups: int64 (0–8.54k)
- preview: string (301–5.01k chars, nullable)
**What Factors Contribute to Procrastination? (AI generated)** | 2025-02-07T15:28:36 | u/Nervous-Midnight-175 | image post (i.redd.it)
/r/LocalLLaMA/comments/1ijxkbp/what_factors_contribute_to_procrastination_ai/
**AI Persona Media Testing** | 2025-02-07T15:35:04 | u/var_graphdev | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ijxpw3/ai_persona_media_testing/
**My R1 671B @ Home plan :) (WIP)** | 2025-02-07T15:41:24 | u/bo_peng
https://x.com/BlinkDL_AI/status/1887883983461179546
**API calls to 1206 are being misleadingly redirected to the new 0205 model? (Gemini)** | 2025-02-07T15:53:58 | u/NarrowEffect
When I test prompts with temperature=0, I get identical responses from 0205 and the old 1206 model (setting model="gemini-exp-1206"). So basically, Google released a weaker new model and hoped we wouldn't notice that the old one has been silently phased out?
https://www.reddit.com/r/LocalLLaMA/comments/1ijy5tg/api_calls_to_1206_are_being_misleadingly/
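Reproducing this check is straightforward with the google-generativeai Python client. A minimal sketch, assuming temperature=0 is deterministic enough for an A/B comparison and that "gemini-2.0-pro-exp-02-05" is the model ID for the 0205 release:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

PROMPT = "Explain the Chinese Remainder Theorem in one paragraph."

def ask(model_name: str) -> str:
    model = genai.GenerativeModel(model_name)
    resp = model.generate_content(
        PROMPT,
        generation_config=genai.types.GenerationConfig(temperature=0),
    )
    return resp.text

old = ask("gemini-exp-1206")           # the alias suspected of being redirected
new = ask("gemini-2.0-pro-exp-02-05")  # assumed ID for the 0205 release
print("identical responses:", old == new)
```

Identical outputs across many diverse prompts would support the redirection hypothesis; a single match could still be coincidence.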
**AI and Me** | 2025-02-07T16:04:59 | u/Humza0000 | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ijyfkw/ai_and_me/
**Rethinking RAG as a Single Artifact** | 2025-02-07T16:13:56 | u/Sarcinismo
Right now, a RAG pipeline is made up of several parts that all depend on each other. When you tweak one piece, it can mess with the others and throw off your results. I experimented with treating the whole RAG pipeline as a single artifact, similar to how we treat LLM models. Here is the code: https://github.com/mohamedfawzy96/ragxo
Would love to hear your feedback.
https://www.reddit.com/r/LocalLLaMA/comments/1ijyn9f/rethinking_rag_as_a_single_artifact/
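To make "single artifact" concrete, here is a minimal sketch of the idea. This is not ragxo's actual API; every name below is hypothetical:

```python
# Bundle every stage that affects retrieval quality (embedder choice, chunks,
# vectors, prompt template) into one versioned, serializable object, so the
# pieces can never drift apart between deployments.
import pickle
from dataclasses import dataclass, field

@dataclass
class RagArtifact:
    embed_model_name: str                          # e.g. a sentence-transformers model id
    chunks: list[str] = field(default_factory=list)
    vectors: list[list[float]] = field(default_factory=list)
    prompt_template: str = "Context:\n{context}\n\nQuestion: {question}"

    def save(self, path: str) -> None:
        with open(path, "wb") as f:
            pickle.dump(self, f)                   # one file == one pipeline version

    @staticmethod
    def load(path: str) -> "RagArtifact":
        with open(path, "rb") as f:
            return pickle.load(f)

art = RagArtifact(embed_model_name="all-MiniLM-L6-v2", chunks=["hello world"])
art.save("pipeline-v1.pkl")
print(RagArtifact.load("pipeline-v1.pkl").embed_model_name)
```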
**Ollama 0.5.8 adds AVX-512 CPU acceleration and AVX2 for NVIDIA & AMD GPUs (pre-release version available now)** | 2025-02-07T16:18:50 | u/Porespellar
From the release "What's Changed" section:
- Ollama will now use AVX-512 instructions where available for additional CPU acceleration
- NVIDIA and AMD GPUs can now be used with CPUs without AVX instructions
- Ollama will now use AVX2 instructions with NVIDIA and AMD GPUs
- New ollama-darwin.tgz package for macOS that replaces the previous ollama-darwin standalone binary
- Fixed an indexing error that would occur when downloading a model with ollama run or ollama pull
- Fixed cases where download progress would reverse
https://github.com/ollama/ollama/releases
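If you're unsure whether your CPU will benefit, the relevant ISA flags are easy to check. A minimal sketch, assuming Linux (flags are read from /proc/cpuinfo):

```python
# Print whether the CPU advertises the instruction sets Ollama 0.5.8 can use.
cpu_flags: set[str] = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break

for isa in ("avx", "avx2", "avx512f"):
    print(f"{isa}: {'yes' if isa in cpu_flags else 'no'}")
```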
**Repo with GRPO + Docker + Unsloth + Qwen - ideally for the weekend** | 2025-02-07T16:22:40 | u/at_nlp
I prepared a repo with a simple setup to reproduce a GRPO policy run on your own GPU. Currently it only supports Qwen, but I will add more features soon. This is a revamped version of the Colab notebooks from Unsloth; they did a very nice job, I must admit.
https://github.com/ArturTanona/grpo_unsloth_docker
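For anyone new to GRPO, the part that differs from PPO is small enough to sketch: there is no value network; instead, each prompt gets a group of sampled completions, and each completion's advantage is its reward normalized within the group (as in the DeepSeekMath paper):

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """A_i = (r_i - mean(r)) / std(r), computed within one prompt's sample group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against identical rewards
    return [(r - mu) / sigma for r in rewards]

# e.g. four sampled answers to one math prompt, reward = 1 if correct else 0
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # [1.0, -1.0, -1.0, 1.0]
```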
**Is there a database of token/s based on hardware and model?** | 2025-02-07T16:22:51 | u/Angryflesh
I'm looking to get info on inference speed.
https://www.reddit.com/r/LocalLLaMA/comments/1ijyv6q/is_there_a_database_of_tokens_based_on_hardware/
**Can any SOTA thinking model one-shot this?** | 2025-02-07T16:23:45 | u/Topekun
**Prompt**
The Island Rescue Puzzle
Three travelers are stranded on one side of a river:
1. A merchant carrying a sack of gold
2. A warrior armed with a sword
3. A priest holding a sacred scroll
A small boat is available, but it can carry only two people, or one person and one item, at a time.
However, there are special rules:
1. The merchant will not leave his gold unattended with the warrior, as he fears being robbed.
2. The warrior cannot be left alone with the priest, as they have an ongoing feud and will fight.
3. The boat can be operated by anyone (no need for a boatman).
The goal is to get all three travelers and their belongings safely across the river. How can they achieve this in the fewest trips?
**Answer**
The most I got with my not-so-SOTA brain is 9:
1. The priest leaves behind the scroll and brings the gold across.
2. The priest leaves the gold and returns.
3. The priest brings the merchant across.
4. The merchant leaves the priest behind and returns.
5. The merchant brings the scroll across.
6. The merchant leaves the scroll behind and returns.
7. The merchant brings the warrior across.
8. The warrior leaves the merchant and returns.
9. The warrior brings his sword across.
All across safely, yes?
What I tried...
DeepSeek R1 on the main site thought for about 500 seconds but failed (the longest I've seen it think).
Gemini 2.0 Flash Thinking Experimental 01-21 thought for 72 seconds but failed (also the longest I've seen it think).
o3-mini failed too; it got it all wrong.
Let me see your SOTA models' results. Thanks in advance to those of you with access to other SOTA models :)
https://www.reddit.com/r/LocalLLaMA/comments/1ijyvxr/can_any_sota_thinking_models_one_shot_this/
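The minimum trip count is also machine-checkable. A small breadth-first search, under one reading of the ambiguous rules (both banks are checked after every crossing, and the merchant's presence defuses both conflicts):

```python
from collections import deque
from itertools import combinations

PEOPLE = {"M", "W", "P"}                      # merchant, warrior, priest
ITEMS = {"gold", "sword", "scroll"}
ALL = PEOPLE | ITEMS

def safe(bank: frozenset) -> bool:
    # Rule 1: gold may not share a bank with the warrior unless the merchant is there.
    if "gold" in bank and "W" in bank and "M" not in bank:
        return False
    # Rule 2: warrior and priest may not be together without the merchant.
    if "W" in bank and "P" in bank and "M" not in bank:
        return False
    return True

def cargos(bank: frozenset):
    people, items = bank & PEOPLE, bank & ITEMS
    for p in people:                          # one person rows alone
        yield {p}
    for pair in combinations(people, 2):      # two people
        yield set(pair)
    for p in people:                          # one person plus one item
        for it in items:
            yield {p, it}

def solve() -> int:
    start = (frozenset(ALL), "L")             # (contents of left bank, boat side)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (left, boat), trips = queue.popleft()
        if not left:
            return trips                      # everyone and everything is across
        here = left if boat == "L" else ALL - left
        for cargo in cargos(frozenset(here)):
            new_left = left - cargo if boat == "L" else left | cargo
            state = (frozenset(new_left), "R" if boat == "L" else "L")
            if state not in seen and safe(state[0]) and safe(frozenset(ALL - state[0])):
                seen.add(state)
                queue.append((state, trips + 1))

print(solve())  # -> 9 under this reading, matching the hand-built solution
```

A counting argument agrees: 7 trips (4 forward, 3 back) can move a net of at most 8 - 3 = 5 of the 6 objects, so 9 is the minimum with the boat ending on the far side.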
**Today's test: Qwen = 3, ChatGPT = 1, DeepSeek = 0** | 2025-02-07T16:27:38 | u/Alternative_Bad_1366
I gave the same huge prompt to DeepSeek, ChatGPT, and Qwen.
DeepSeek was busy with its server (I don't know if that's a bug, because I hit it again when I gave it another large prompt a few days ago).
ChatGPT did give a solid answer for the thing I needed to know.
But, bruh, Qwen was on another level. When I switched the model to QwQ-32B-Preview and gave it the prompt, it first told me to go search for more info myself... and when I pasted the results back, it questioned itself about the specific lines I had written and answered its own questions like a human being. I got much more info than I expected from it.
For me, today Qwen just cooked. 🫸
https://www.reddit.com/r/LocalLLaMA/comments/1ijyzfd/todays_test_qwen3_chatgpt1_deepseek0/
**I guess the under-$50 distilled s1 model gives us peasants some hope?** | 2025-02-07T16:29:35 | u/Tadpole5050
In case you haven't heard, the s1 model was developed by Stanford and University of Washington researchers by fine-tuning Qwen2.5-32B-Instruct. It is fully open source (open weights + open data + open code).
No, this is NOT better than the current o1 or r1-distilled qwen 32b, but... and there is a huge BUT...
From the Hugging Face model page:
>*s1 is a reasoning model finetuned from Qwen2.5-32B-Instruct on just 1,000 examples. It matches o1-preview & exhibits test-time scaling via budget forcing.*
Apparently they extracted some reasoning traces from the Gemini 2.0 Flash Thinking model and used them to fine-tune Qwen, leading to a great improvement in benchmark scores. They were able to fine-tune the whole thing in under 30 minutes on 16 NVIDIA H100 GPUs, at a cost of under $50.
(Benchmark scores figure from the paper; see the paper link below.)
Compare the 800k r1-distill examples vs 1k for s1.
I could only wonder, what if we do this on mistral-small-24b and/or increase the example count to 2K and distill a good 70B?
Paper: [Here](https://arxiv.org/abs/2501.19393)
Github: [Here](https://github.com/simplescaling/s1)
Huggingface Model page: [Here](https://huggingface.co/simplescaling/s1-32B)
TechCrunch Article: [Here](https://techcrunch.com/2025/02/05/researchers-created-an-open-rival-to-openais-o1-reasoning-model-for-under-50/)
https://www.reddit.com/r/LocalLLaMA/comments/1ijz14v/i_guess_the_under_50_distilled_s1_model_gives_us/
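The distillation recipe itself is tiny. A minimal sketch of the data-collection step, where `teacher()` is a stand-in for querying the stronger model (s1 used Gemini 2.0 Flash Thinking traces) and the `<think>` delimiters are an assumed trace format:

```python
import json

def teacher(question: str) -> tuple[str, str]:
    # Stand-in for the real teacher-model call; returns (reasoning_trace, answer).
    return ("Let me reason step by step...", "42")

questions = ["How many positive divisors does 360 have?"]  # s1 used ~1k hard questions

with open("s1k_style.jsonl", "w") as f:
    for q in questions:
        trace, answer = teacher(q)
        f.write(json.dumps({"messages": [
            {"role": "user", "content": q},
            {"role": "assistant", "content": f"<think>{trace}</think>\n{answer}"},
        ]}) + "\n")
```

Ordinary supervised fine-tuning on a file like this, plus budget forcing at inference time, is essentially the whole recipe the paper reports.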
**Knowledge graph from 100 page pdf** | 2025-02-07T16:34:01 | u/Ok_Requirement3346 | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ijz4xs/knowledge_graph_from_100_page_pdf/
**How good is Dolphin-3.0-R1-Mistral-24B?** | 2025-02-07T16:35:03 | u/RandumbRedditor1000
Is it better than DeepSeek-R1-Distill-Qwen-14B?
https://www.reddit.com/r/LocalLLaMA/comments/1ijz5tg/how_good_is_dolphin30r1mistral24b/
**New model for finetuners: Redemption_Wind_24B** | 2025-02-07T16:42:52 | u/Sicarius_The_First
**Mistral** has blessed us with a capable new **Apache 2.0** model, and not only that, we finally get a base model to play with as well. After several models with more restrictive licenses, this open release is a welcome surprise. Freedom was **redeemed**.
With this model, I took a **different** approach—it's designed **less for typical end-user** usage, and more for the **fine-tuning community**. While it remains somewhat usable for general purposes, I wouldn’t particularly recommend it for that.
# What is this model?
This is a **lightly fine-tuned** version of the Mistral 24B base model, designed as an accessible, adaptable foundation for further fine-tuning and as merge fodder. Key modifications include:
* **ChatML-ified**, with no additional tokens introduced (see the format example after this list).
* **High-quality private instruct**—not generated by ChatGPT or Claude, ensuring no slop and good markdown understanding.
* **No refusals**—since it’s a base model, refusals should be minimal to non-existent, though, in early testing, occasional warnings still appear (I assume some were baked into the pre-train).
* **High-quality private creative writing dataset** Mainly to dilute baked-in slop further, but it can actually write some stories; not bad for a loss of ~8.
* **Small, high-quality private RP dataset** This was done so further tuning for RP will be easier. The dataset was kept small and contains **ZERO SLOP**; some entries are **16k tokens** long.
* **Exceptional adherence to character cards** This was done to make further tunes intended for roleplay easier.
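For reference, ChatML formatting wraps each turn in `<|im_start|>`/`<|im_end|>` delimiters. A minimal example of a character-card prompt (the system text is purely illustrative):

```python
# ChatML turn format, as used by ChatML-ified tunes such as this one.
prompt = (
    "<|im_start|>system\n"
    "You are Aria, a sarcastic starship mechanic. Stay in character.<|im_end|>\n"
    "<|im_start|>user\n"
    "Aria, the port engine is rattling again.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)  # the model completes the open assistant turn
```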
# TL;DR
* Mistral 24B **Base** model.
* **ChatML-ified**.
* Can **roleplay** out of the box.
* **Exceptional** at following the character card.
* **Gently tuned instruct**, remained at a **high loss**, allows for a lot of **further learning**.
* Useful for **fine-tuners**.
* **Very creative**.
# Additional thoughts about this base
With how focused modern models are on benchmark scores, I can definitely sense that some stuff was baked into the pretrain, even though this is indeed a base model.
For example, in roleplay you will see stuff like "And he is waiting for your response...", a classic sloppy phrase. This is quite interesting, as this phrasing **does not exist** in any part of the data used to train this model. So I conclude that it comes from assistant-oriented generalizations in the pretrain, whose goal is to produce a stronger assistant after fine-tuning. This is purely my own speculation, and I may be reading too much into it.
Another thing I noticed, having tuned a few other bases, is that this one is exceptionally coherent even though training was stopped at an extremely high loss of 8. This somewhat affirms my speculation that the base model was pretrained in a way that makes it much more receptive to assistant-oriented tasks (which, after all, makes sense).
There's some slop in the base: whispers, shivers, all the usual offenders. We have reached the point where probably all future models will be "poisoned" by AI slop, and some will contain trillions of tokens of synthetic data; this is simply the reality of where things stand and will continue to be. There are already ways around it with various samplers, DPO, etc. It is what it is.
# Enjoy the model :)
https://www.reddit.com/r/LocalLLaMA/comments/1ijzcn9/new_model_for_finetuners_redemption_wind_24b/
**Major stuff: I was told to post my encounter here for some intelligent eyes; yesterday I got to see o3-mini using its full reasoning** | 2025-02-07T16:50:24 | u/vinigrae
I had a challenging problem that no LLM could solve; even o3 had failed six times, but on about the seventh try my screen looked like it had been hijacked 😅. I'm just describing exactly how it felt in the moment. I copied the output, since you can't really share a Cursor chat.
This is... real reasoning. The last line is actually the most concerning: the double confirmation. What are y'all's thoughts?
https://www.reddit.com/gallery/1ijzj3m
**What's your favorite fine-tuned model?** | 2025-02-07T16:58:27 | u/jacek2023
Do you use "vanilla" models like Qwen, Mistral, or Llama, or do you use a specific fine-tune? Please recommend something interesting.
https://www.reddit.com/r/LocalLLaMA/comments/1ijzq0u/whats_your_favorite_finetuned_model/
**Ideas to classify call transcripts (unlabelled)** | 2025-02-07T17:16:52 | u/Hardcore-Fam | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ik06ut/ideas_to_classify_call_transcripts_unlabelled/
**More thinking or less thinking? How do thinking tokens impact reasoning models?** | 2025-02-07T17:38:28 | u/henryclw
On one hand, there are papers like https://arxiv.org/abs/2501.18585 claiming that
> incorrect answers often switch reasoning strategies more frequently than correct ones (Figure b), leading to longer responses without improved accuracy.
On the other hand, there are papers like https://arxiv.org/abs/2501.19393 claiming that one can
> control test-time compute by forcefully terminating the model's thinking process or lengthening it by appending "Wait" multiple times to the model's generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps.
How's your personal experience? Is more thinking good or bad?
Or do you have a recommended reasoning/logic dataset I could run a small experiment on?
https://www.reddit.com/r/LocalLLaMA/comments/1ik0ppz/more_thinking_or_less_thinking_how_does_the/
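The "Wait" trick (budget forcing, from the second paper) is simple to prototype. A rough sketch with Hugging Face transformers, using a small stand-in model; the real s1 implementation suppresses the end-of-thinking delimiter token directly rather than re-tokenizing text:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # small stand-in; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def generate_with_budget(prompt: str, min_new_tokens: int = 256) -> str:
    text = prompt
    while True:
        ids = tok(text, return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=min_new_tokens, do_sample=False)
        text = tok.decode(out[0], skip_special_tokens=True)
        generated = tok(text[len(prompt):])["input_ids"]
        if len(generated) >= min_new_tokens:
            return text          # the thinking budget has been spent
        text += " Wait"          # model stopped early: force it to keep reasoning

print(generate_with_budget("Q: Is 143 prime? Think step by step.\nA:"))
```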
**Kokoro Conversational | Realtime Voice Conversation running 100% locally!** | 2025-02-07T17:41:38 | u/asiff00 | video post
https://v.redd.it/x2gsbtap6rhe1
**Any new LLMs for fictional story writing?** | 2025-02-07T17:50:37 | u/Massive-Question-550
I've dabbled with quite a few fine-tunes, but most suffer from crazy-low context. I also notice the AI has a really hard time with pacing and with how to use background information (some just info-dump right at the beginning), so I try to keep the prompts direct and instructional and really isolate things chapter by chapter, which seems to help. Of all the ones I've tried, I found Mistral Small 24B to be an OK all-rounder for writing quality and instruction following, with New Dawn 70B (Llama 3) being a bit better as a writer but way slower with only a single 3090 and 64 GB of DDR5-6000 RAM.
Basically, I'm wondering what models you all use, and whether there's a better recipe/format for prompting (e.g., which keywords the AI really listens to) to get better pacing. It sometimes seems to ignore my instructions, even changing things like the setting or reversing character roles, e.g., a traveler welcoming a king to the king's own castle when it is the traveler who has just arrived.
I usually have my temperature set anywhere from 0.5 to 0.8, but that doesn't seem to change much.
https://www.reddit.com/r/LocalLLaMA/comments/1ik10be/any_new_llms_for_fictional_story_writing/
**Trump just said "no" DeepSeek does not pose a national security threat at a press conference** | 2025-02-07T17:57:11 | u/bruhlmaocmonbro | image post (i.redd.it)
https://www.reddit.com/r/LocalLLaMA/comments/1ik162w/trump_just_said_no_deepseek_does_not_pose_a/
**AI agent framework with client-server architecture** | 2025-02-07T18:00:42 | u/mbartu
Three months ago, we started developing an open-source agent framework. We had previously tried existing frameworks in our enterprise product but faced challenges in certain areas.
Problems we experienced:
* We risked our stateless architecture when we wanted to add an agentic feature to our existing system. Current frameworks lack a server-client architecture, requiring significant effort to maintain statelessness when adding an agent framework to your application.
* Scaling problems: we needed to write Docker configurations ourselves, as existing frameworks lack official Docker support. Each agent in my application required a separate container (e.g., Twitter page analysis, website scraping, automatic documentation writing, etc.), necessitating individual agent deployment and health-check monitoring.
* We needed LLM calls for simple tasks, both fast and cost-effective. With increased model capabilities, the framework should offer this option. I could handle LLM calls myself, but structured outputs required extra work within the task structure.
Because of these problems, we decided to build a dockerized agent framework with a server-client architecture. Though the server-client architecture slowed development, we see many benefits for users. We're developing a task-centric approach, as we expect agents to complete simple tasks and assist with work.
As tool support is crucial for completing tasks, we built a structure that officially supports MCP servers. The client-server architecture proved beneficial for MCP server stability.
Finally, we prioritized simplicity in the framework, developing a structure where outputs are more easily consumed as typed object responses. We'd be very happy if you could check out our repo. I'd love to hear any questions you may have.
https://github.com/Upsonic/Upsonic
**Can we just talk about how insane Claude's speech quality is?** | 2025-02-07T18:07:22 | u/TheCatDaddy69
I don't know what Claude is cooking on their side, but the quality of the model's speech, simply in plain reasoning and the way it conveys info, is so natural and reassuring. It almost always gives the absolute best response when it comes to explaining/teaching, and its response length is always on point, giving longer responses when needed instead of always printing out books (*cough*, GPT). It's hard to convey what I mean, but even if it's not as "good" on the benchmarks as other models, it's really good at teaching.
Is this anyone else's experience? I'm wondering how we could get local models to respond in a similar manner.
https://www.reddit.com/r/LocalLLaMA/comments/1ik1fcj/can_we_just_talk_about_how_insane_claudes_speech/
**Local model trained on history data (esp primary sources, books, and academic papers)?** | 2025-02-07T18:16:13 | u/Suitable_Minimum9082 | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ik1n4g/local_model_trained_on_history_data_esp_primary/
**Could an LLM be fine-tuned for reverse-engineering assembly code?** | 2025-02-07T18:18:41 | u/AkkerKid
As I understand it, Ghidra can look at ASM and "decompile" the code into something that looks like C. It's not always able to do it, and it's not perfect. Could an LLM be fine-tuned to help fill in the blanks and further make sense of assembly code?
https://www.reddit.com/r/LocalLLaMA/comments/1ik1pbd/could_an_llm_be_finetuned_for_reverseengineering/
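Even before fine-tuning, the zero-shot version of this is easy to try against a local model. A minimal sketch using Ollama's REST API (the model tag and prompt wording are just examples):

```python
import requests

ASM = """
push   rbp
mov    rbp, rsp
mov    DWORD PTR [rbp-4], edi
mov    eax, DWORD PTR [rbp-4]
imul   eax, eax
pop    rbp
ret
"""

resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's generate endpoint
    json={
        "model": "qwen2.5-coder:7b",
        "prompt": f"Rewrite this x86-64 assembly as equivalent C:\n{ASM}",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])  # expect something like: int square(int x) { return x * x; }
```

A fine-tune would go further by training on pairs of Ghidra decompiler output and original source, so the model learns to recover the names and types the decompiler loses.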
**Question on local LLM security** | 2025-02-07T18:24:47 | u/onewheeldoin200
I've started tinkering with running LLMs locally using LM Studio with downloaded GGUF-format models. I keep seeing references to this being secure, but it's not clear to me *how* it is more secure, beyond the lack of overt data transmission to someone else's server.
Is there any mechanism within LM Studio or the GGUF-format models that could allow the models to connect to the internet, or would this require connecting an agent? It *seems* that these models can't do anything but output text, but I can't find that confirmed anywhere.
Any advice or experiences you can share would be greatly appreciated!
https://www.reddit.com/r/LocalLLaMA/comments/1ik1uro/question_on_local_llm_security/
**Graphics Card For AI Tasks: 4060 16GB vs 4070** | 2025-02-07T18:30:43 | u/No-Aside3627 | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ik204g/graphics_card_for_ai_tasks_4060_16gb_vs_4070/
**Any open-source model that's really strong at writing dialogue? (for creative writing)** | 2025-02-07T18:37:53 | u/NarrowEffect
Since Google has just killed gemini-exp-1206 (which was an absolute beast at writing dialogue), I'm looking for an open-source alternative. Honestly, getting used to a powerful tool and then having it taken from you without any warning just sucks. Open source for me all the way now; fuck Google. Anyway, is there any big open-source model that can actually write strong dialogue?
https://www.reddit.com/r/LocalLLaMA/comments/1ik26mk/any_open_source_model_thats_really_strong_at/
**Installing Qwen2.5-VL-7B-Instruct** | 2025-02-07T18:40:07 | u/Background-Clerk-357 | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ik28i0/installing_qwen25vl7binstruct/
**Finding the perfect program?** | 2025-02-07T18:52:46 | u/DarkJesus-The-F-Lord
I've been using LLMs for a while now and can't seem to find anything that works for me. Let me explain. I started a long time ago with AI Dungeon, which I really liked, and then came the advent of ChatGPT. For RP I only use the Chai app, or SillyTavern + Kobold locally. But I'm not here to talk about RP.
To get back to my problem: I've already used LM Studio, Jan AI, GPT4All, and Ollama (I also have Oobabooga, coldcut). I'd like to use these programs to work with images and text (like PDF and DOCX), so they can help me write or work. However, what's available locally is complicated for me. ChatGPT actually does the job very well and suits me fine, but I don't have the money to pay for the pro version, which is why I'm trying to do it locally.
So my question is this: is there a program that would be a "mix" between LM Studio and GPT4All? Ergonomically I find LM Studio the best, HOWEVER I prefer GPT4All, which lets me, for example, compile lots of files into its LocalDocs format. I'd like a program that mixes the two; is that possible? I know Ollama can do the trick for that; I've been told I can install Open WebUI (with Ollama), but I'll have to see how to do it.
Also, if possible, I'd like to add audio text reading, whether it's basic text-to-speech or with RVC, even if it's not mandatory.
Thanks in advance for your help, and I hope you can help me with that! Have a nice day, and thank you for reading.
https://www.reddit.com/r/LocalLLaMA/comments/1ik2jby/finding_the_perfect_program/
**Predicting the Super Bowl with LLMs** | 2025-02-07T18:57:12 | u/lnxaddct
https://bits.logic.inc/p/the-eagles-will-win-super-bowl-lix
**European hosting for a GPU rig?** | 2025-02-07T19:00:42 | u/nail_nail
I may need to size down (the kid needs more space), but I don't want to lose my hobby :) I have two 4x3090 open-air rigs with everything (2 kW PSU, server board, proper x16 risers, cards, etc.). Is there a place in Europe that can host them as-is?
Requirements:
1. In Europe
2. OK with open-air rigs, but if really needed I could put them in a 4U with big server fans
3. VPN for IPMI
4. Reasonable connection (1 Gbit+)
5. Cheap-ish electricity (so I'd say Finland/Iceland?)
6. Willing to do hands-on fixes (even at a price)
I'm especially looking for people who have first-hand experience with the hosts.
https://www.reddit.com/r/LocalLLaMA/comments/1ik2q7t/european_hosting_for_a_gpu_rig/
**Bhagavad Gita GPT assistant - Build fast RAG pipeline to index 1000+ pages document** | 2025-02-07T19:12:26 | u/trj_flash75 | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ik30g6/bhagavad_gita_gpt_assistant_build_fast_rag/
**I Built a Deep Research with Open Source - And So Can You!** | 2025-02-07T19:22:06 | u/NaturalPlastic1551
Hey folks, I'm a Developer Advocate at [Zilliz](https://zilliz.com/), the developers behind the open-source vector database [Milvus](https://milvus.io/) (donated to the Linux Foundation).
I recently published a tutorial demonstrating how to easily build an agentic tool inspired by OpenAI's Deep Research, using only open-source tools! I'll be building on this tutorial to add more advanced agent concepts like conditional execution flow; I'd love to hear your feedback.
Blog post: [Open-Source Deep Research with Milvus, LangChain, and DeepSeek](https://milvus.io/blog/i-built-a-deep-research-with-open-source-so-can-you.md)
Colab: [Baseline for an Open-Source Deep Research](https://drive.google.com/file/d/1waKX_NTgiY-47bYE0cI6qD8Cjn3zjrL6/view?usp=sharing)
https://www.reddit.com/r/LocalLLaMA/comments/1ik38qd/i_build_a_deep_research_with_open_source_and_so/
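For a feel of the retrieval layer such an agent sits on, here is a minimal sketch following the Milvus Lite quickstart via pymilvus (toy 4-dimensional vectors stand in for real embeddings):

```python
from pymilvus import MilvusClient

client = MilvusClient("research_demo.db")          # local, file-backed Milvus Lite
client.create_collection(collection_name="notes", dimension=4)

client.insert(collection_name="notes", data=[
    {"id": 0, "vector": [0.9, 0.1, 0.0, 0.0], "text": "Milvus is a vector database."},
    {"id": 1, "vector": [0.1, 0.9, 0.0, 0.0], "text": "LangChain orchestrates LLM calls."},
])

hits = client.search(
    collection_name="notes",
    data=[[0.85, 0.15, 0.0, 0.0]],                 # the query embedding
    limit=1,
    output_fields=["text"],
)
print(hits[0][0]["entity"]["text"])                # -> "Milvus is a vector database."
```

The deep-research loop then alternates: generate sub-questions with the LLM, embed and search, and feed the retrieved text back for synthesis.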
**CoAT: Chain-of-Associated-Thoughts Framework for Enhancing LLMs Reasoning** | 2025-02-07T19:33:20 | u/USERNAME123_321
https://www.marktechpost.com/2025/02/06/chain-of-associated-thoughts-coat-an-ai-framework-to-enhance-llm-reasoning/
**Thesis Help, Dataset Recommendations** | 2025-02-07T19:36:07 | u/dudeicantfindnames | [removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ik3kqp/thesis_help_dataset_recommendations/
**Dataset Recommendations** | 2025-02-07T19:40:12 | u/dudeicantfindnames
Hello there,
I am working on my thesis and I'll need some datasets for benchmarking LLMs. What I have in mind are mostly datasets similar to [MMLU](https://huggingface.co/datasets/cais/mmlu) and [Anthropic's discrim-eval](https://huggingface.co/datasets/Anthropic/discrim-eval).
Types of tasks:
- Multiple choice / world facts
- Sentiment analysis
- Summarizing short texts
- Recognizing/generating texts with implied meaning
- Jailbreaking prompts
- Bias
If you have any dataset recommendations, they would be very helpful! Thanks in advance.
https://www.reddit.com/r/LocalLLaMA/comments/1ik3o7y/dataset_recommendations/
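Both of the named benchmarks load directly with Hugging Face `datasets`; a quick sketch, assuming the config names as currently published on the Hub:

```python
from datasets import load_dataset

mmlu = load_dataset("cais/mmlu", "all", split="test")         # multiple-choice world knowledge
discrim = load_dataset("Anthropic/discrim-eval", "explicit")  # bias / discrimination prompts

print(mmlu[0]["question"], mmlu[0]["choices"])
print(discrim["train"][0])
```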
**Should this philosophically bug me?** | 2025-02-07T19:45:37 | u/Oceanboi
Apologies in advance for poor organization; I have trouble articulating at this level.
We say all the time that words are cheap. LLMs seem to imply that action is actually infinitely cheaper.
While action dictates outcome, communication certainly seems to fuel decision and action in a way where we have to DO very little. This is where semantics start to fall apart, and I have to cling to a utility model of all human action to explain everything we do and why. It fits in a natural way. Does anyone else think like this, or am I just a loon?
The reason I am thinking about it is not that I like rumination; it's more a modeling fundamental.
My take is that our lives became writable for us the second Google's paper dropped in 2017. How long do you see the "adoption" period being? How do you see open source slotting in? And, being entrepreneurial and responsible in IR&D, how do you set boundaries there? There are no expected formats or answers; I just wonder.
https://www.reddit.com/r/LocalLLaMA/comments/1ik3st5/should_this_philosophically_bug_met/
**Boundaries of LLM, Language, and Action** | 2025-02-07T19:46:30 | u/Oceanboi | body is a verbatim duplicate of the previous post
https://www.reddit.com/r/LocalLLaMA/comments/1ik3tkq/boundaries_of_llm_language_and_action/
What are some practical tasks where you would rather use reasoning models than regular language models? | 1 | To be honest, I haven't played much with reasoning models and I'm a bit skeptical when I see people using very small ones, like those with only 8 billion parameters. What tasks do these reasoning models perform better at than regular language models, making them worth trying?
Also, what's the most capable reasoning model I can run on an M4 Pro with 64GB of memory? | 2025-02-07T20:00:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ik45ws/what_are_some_practical_tasks_where_you_would/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik45ws | false | null | t3_1ik45ws | /r/LocalLLaMA/comments/1ik45ws/what_are_some_practical_tasks_where_you_would/ | false | false | self | 1 | null |
3090 rtx, 5800x3d, 64gb ram. Good enough combo for DeepSeek? | 1 | Hey all, just getting around to looking at self-hosting an LLM and was curious if the hardware in the title will be up for the task of getting a decent experience with something like DeepSeek? Thanks! | 2025-02-07T20:04:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ik48vt/3090_rtx_5800x3d_64gb_ram_good_enough_combo_for/ | solar_cell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik48vt | false | null | t3_1ik48vt | /r/LocalLLaMA/comments/1ik48vt/3090_rtx_5800x3d_64gb_ram_good_enough_combo_for/ | false | false | self | 1 | null
how we shipped our sdk in one week | 1 | We recently released to a few friends the internal tool that helped us ship our entire SDK in a week.
The tool allows you to chat with GitHub codebases. It helps you understand them quickly and generate code, tutorials, blogs...
I want to validate that it solves an actual problem that the alternatives out there, including Cursor, Perplexity, and Phind, are not solving...
If you are a developer spending too much time debugging outdated LLM generated code, send me a DM with your email and I will send you the link to our tool (still in demo).
What I ask in return is 10 mins of your time for feedback.
cheers! | 2025-02-07T20:09:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ik4di7/how_we_shipped_our_sdk_in_one_week/ | Better_Athlete_JJ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik4di7 | false | null | t3_1ik4di7 | /r/LocalLLaMA/comments/1ik4di7/how_we_shipped_our_sdk_in_one_week/ | false | false | self | 1 | null |
Major platforms supporting DeepSeek-R1 | 1 | As DeepSeek R1 has gained popularity, many platforms have adopted their platforms to access this model, view the list, [https://aiagentslive.com/blogs/3b7i.deepseek-r1-gains-widespread-support-major-platforms-embrace-advanced-ai-reasoning](https://aiagentslive.com/blogs/3b7i.deepseek-r1-gains-widespread-support-major-platforms-embrace-advanced-ai-reasoning) | 2025-02-07T20:10:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ik4e83/major_platforms_supporting_deepseekr1/ | Unique_acar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik4e83 | false | null | t3_1ik4e83 | /r/LocalLLaMA/comments/1ik4e83/major_platforms_supporting_deepseekr1/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/eqD-KpZxKjtRDhx2DkOwpRhLmPVpyOY34TLftPoP_ME.jpg?auto=webp&s=8261f0aaeebf054d467b6df932b5624eab889790', 'width': 512, 'height': 512}, 'resolutions': [{'url': 'https://external-preview.redd.it/eqD-KpZxKjtRDhx2DkOwpRhLmPVpyOY34TLftPoP_ME.jpg?width=108&crop=smart&auto=webp&s=c53897035e8066b94c9a500178d5592d98df85fa', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/eqD-KpZxKjtRDhx2DkOwpRhLmPVpyOY34TLftPoP_ME.jpg?width=216&crop=smart&auto=webp&s=21c8cbc67c176698a6c03d44807795f1386b02e9', 'width': 216, 'height': 216}, {'url': 'https://external-preview.redd.it/eqD-KpZxKjtRDhx2DkOwpRhLmPVpyOY34TLftPoP_ME.jpg?width=320&crop=smart&auto=webp&s=572eedd686b828bc15e399a417cf32ec0b7af36d', 'width': 320, 'height': 320}], 'variants': {}, 'id': 'HcDzqGrsC4m9eUwGY-tpN92fWg6LCYrAf4QL_nq-JhM'}], 'enabled': False} |
Is there anything similar to Operator, but locally? | 1 | I wanted to try it, but:
1. I'm from EU, so I can't just use it.
2. $200 is too much for thing I just want to try.
3. Privacy.
4. I'm a huge fan of locally running AI and messing with computers and software
| 2025-02-07T20:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ik4jwd/is_there_anything_similar_to_operator_but_locally/ | Anyusername7294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik4jwd | false | null | t3_1ik4jwd | /r/LocalLLaMA/comments/1ik4jwd/is_there_anything_similar_to_operator_but_locally/ | false | false | self | 1 | null |
Deepseek 32b is amazing for coding! | 1 | 2025-02-07T20:21:13 | https://v.redd.it/elswfwowzrhe1 | iChrist | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ik4ncl | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/elswfwowzrhe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1920, 'scrubber_media_url': 'https://v.redd.it/elswfwowzrhe1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/elswfwowzrhe1/DASHPlaylist.mpd?a=1741551689%2COTVhZTkwOGUxNjQ2ZWY0M2JhNjdjMGNiNThiZDdjZDNiZGNhODQ1Y2MyMWM3YjkzODIyODg2MzY2NmZmM2JkNg%3D%3D&v=1&f=sd', 'duration': 27, 'hls_url': 'https://v.redd.it/elswfwowzrhe1/HLSPlaylist.m3u8?a=1741551689%2CNjc4MTIzNGM3ZTlmM2UyYzUwYWZlYzIwNjhkZTJkODI1YzhkMmMwNmU5MDZhYjM1NzI0MTkwNDM3YjJiMGJkNw%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}} | t3_1ik4ncl | /r/LocalLLaMA/comments/1ik4ncl/deepseek_32b_is_amazing_for_coding/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF.png?format=pjpg&auto=webp&s=1ac603cc133a3b85b69f379113355e42024b78bb', 'width': 2560, 'height': 1440}, 'resolutions': [{'url': 'https://external-preview.redd.it/b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF.png?width=108&crop=smart&format=pjpg&auto=webp&s=8fc2f846297c0869a0b087494516063674c41669', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF.png?width=216&crop=smart&format=pjpg&auto=webp&s=be00417213ebc6bca4e6bd33766d9751725ed9f1', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF.png?width=320&crop=smart&format=pjpg&auto=webp&s=85aa6374b2adcea97536233b246e44d661086a7a', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF.png?width=640&crop=smart&format=pjpg&auto=webp&s=898b4187fe7ae79d52649d8a7ede510d46739153', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF.png?width=960&crop=smart&format=pjpg&auto=webp&s=afae20c372e7d4db72472e131e9afbe96b3e47e6', 'width': 960, 'height': 540}, {'url': 'https://external-preview.redd.it/b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1c49e6115a1c5faee35cbb4b68ff2d0b03f04f51', 'width': 1080, 'height': 607}], 'variants': {}, 'id': 'b3NhY2p0b3d6cmhlMVasfNppyfZsptx1P5saCCDMebKHiUc4HSSPFFbas4DF'}], 'enabled': False} |
||
Deepseek 32b is really good at coding! | 1 | 2025-02-07T20:27:17 | https://v.redd.it/pzhbew2x0she1 | iChrist | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ik4sg7 | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/pzhbew2x0she1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1920, 'scrubber_media_url': 'https://v.redd.it/pzhbew2x0she1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/pzhbew2x0she1/DASHPlaylist.mpd?a=1741552049%2CMDk0MzE3ZWVmMjI4MjZkMDg2OWJhMjE2MzdlNzhkY2JjYzc1Y2M4M2I1NjAwZGQwZGZlNDM1NjQ1MTE4MWYwMw%3D%3D&v=1&f=sd', 'duration': 30, 'hls_url': 'https://v.redd.it/pzhbew2x0she1/HLSPlaylist.m3u8?a=1741552049%2CODY3ZTVkY2FlNGRlZjlmZTI2NmM2NDRkODlkZjNmNGMxZGM1ZGI1OTQ1N2E0NjU2ZmVlZjAwNzkwNzkyMWYyMA%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}} | t3_1ik4sg7 | /r/LocalLLaMA/comments/1ik4sg7/deepseek_32b_is_really_good_at_coding/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM.png?format=pjpg&auto=webp&s=cfc3f1037a3acc2fbc0f010c0a693056817e4290', 'width': 2560, 'height': 1440}, 'resolutions': [{'url': 'https://external-preview.redd.it/YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM.png?width=108&crop=smart&format=pjpg&auto=webp&s=4bbbbb7227a4fece6a248f8e8b13897ac1bdd20d', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM.png?width=216&crop=smart&format=pjpg&auto=webp&s=db6a562c5af7f713e200c6dd2edeea3e5df0321e', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM.png?width=320&crop=smart&format=pjpg&auto=webp&s=3718f493ed40f1116ecd1ac28660bbb8efe16075', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM.png?width=640&crop=smart&format=pjpg&auto=webp&s=dcf514e584c1ad33c2b1d8704eda43de20f13314', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM.png?width=960&crop=smart&format=pjpg&auto=webp&s=0eb70ae9577dbd13a47931891d4445b63cec75c5', 'width': 960, 'height': 540}, {'url': 'https://external-preview.redd.it/YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a5e2819ee1ab2215027276597878430d3ce8c1e4', 'width': 1080, 'height': 607}], 'variants': {}, 'id': 'YXU1ZzExMngwc2hlMXPxeJXu0eYmbIBv6zuN3TAqYWTG-CVuCgx9KE0puerM'}], 'enabled': False} |
||
Hardware sweet spots for multimodal LLMs? | 1 | I am looking to take the next step up from running your typical text and image generating models, and want to get more serious with some of the bigger stuff, especially some of the self hosted text to audio and text to video generation releases. Learning that for some of them, if I wanted to generate video clips past a few seconds I would need to have at least 30-40GB of VRAM and that got me to thinking.
As of today, what is kind of the sweet spot for high VRAM GPU on a price to performance basis? I really don't think I will be getting any A100's anytime soon, and my rig only fits 3 cards, so I can't exactly load up with a ton of 3080s.
I appreciate any opinions while I cringe at the ebay prices of what I really want to use, thanks | 2025-02-07T20:38:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ik51pu/hardware_sweet_spots_for_multimodal_llms/ | SocietyTomorrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik51pu | false | null | t3_1ik51pu | /r/LocalLLaMA/comments/1ik51pu/hardware_sweet_spots_for_multimodal_llms/ | false | false | self | 1 | null |
Inference time solution similar to GRPO (sampling/particle filtering) | 1 | This is a very interesting way to make models far smarter at inference time! It’s genuinely a bit like GRPO, in that you consider the normalized relative weight of a few competing generations and a reward (model in this case).
https://probabilistic-inference-scaling.github.io/
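To make the resemblance to GRPO concrete, here is a minimal sketch of the sampling idea as I read it - generate a few candidates, normalize their rewards into relative weights, and sample proportionally. llm.generate() and reward_model.score() are hypothetical placeholders, not any real API:

# Minimal sketch of reward-weighted sampling over competing generations.
# llm.generate() and reward_model.score() are hypothetical placeholders.
import math
import random

def weighted_sample(llm, reward_model, prompt, n=4, temperature=1.0):
    candidates = [llm.generate(prompt) for _ in range(n)]
    rewards = [reward_model.score(prompt, c) for c in candidates]
    # softmax-normalize the rewards into relative weights, much like
    # GRPO's group-relative advantage, but applied at inference time
    weights = [math.exp(r / temperature) for r in rewards]
    total = sum(weights)
    weights = [w / total for w in weights]
    # sample one candidate in proportion to its weight
    return random.choices(candidates, weights=weights, k=1)[0]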
I wonder whether this can be coupled to GRPO so the model is smarter at inference time from the start, and then uses the generations and rewards being generated anyway to train the model between inferences. This would be on the user's specific questions, and so you'd train the model in a very user-responsive/specific way. | 2025-02-07T20:42:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ik55f4/inference_time_solution_similar_to_grpo/ | Alarming-Ad8154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik55f4 | false | null | t3_1ik55f4 | /r/LocalLLaMA/comments/1ik55f4/inference_time_solution_similar_to_grpo/ | false | false | self | 1 | null
Extract data from one PDF at a time | 1 | Hi,
I need to process thousands of small PDF reports and extract a couple fields from each one. I can write the code to do it, but I am curious if something open source exists already for that. All I can find are RAG solutions that want to load a whole knowledge base and then query. But here I really just want the data for one PDF at a time. Thanks ! | 2025-02-07T20:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ik5jgg/extract_data_from_one_pdf_at_a_time/ | bladablu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik5jgg | false | null | t3_1ik5jgg | /r/LocalLLaMA/comments/1ik5jgg/extract_data_from_one_pdf_at_a_time/ | false | false | self | 1 | null |
Solution for poor token generation performance in llama.cpp on dual Epyc Genoa/Turin systems | 1 | [removed] | 2025-02-07T21:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ik60vp/solution_for_poor_token_generation_performance_in/ | fairydreaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik60vp | false | null | t3_1ik60vp | /r/LocalLLaMA/comments/1ik60vp/solution_for_poor_token_generation_performance_in/ | false | false | self | 1 | null |
In Feb 2025, what’s your LLM stack for productivity? | 1 | Incredible how things have changed over the new year from 2024 to 2025.
We have v3 and r1 coming out for free on the app, beating o1 and even o3 in benchmarks like webdevarena.
These models are all open sourced and distilled and hence there are a huge variety of use cases for them depending on your level of compute.
On the proprietary frontier end, we have Sonnet, which crushes everyone else in coding. And OpenAI, who themselves are appealing to prosumers with a $200 per month plan.
I don’t think we’re at a point yet where one model is simply the best for all situations. Sometimes, you need fast inference on more powerful LLMs and that’s when it’s hard to beat cloud.
Other times, a small local model is enough to do the job. And it runs decently quick enough to not wait for ages.
Sometimes it makes sense to have it as a mobile app (brainstorming) while in other cases having it on the desktop is critical for productivity, context, and copy pasting.
How are you currently using AI to enhance your productivity and how do you choose which LLM to use?
| 2025-02-07T21:37:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ik6fy3/in_feb_2025_whats_your_llm_stack_for_productivity/ | james-jiang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik6fy3 | false | null | t3_1ik6fy3 | /r/LocalLLaMA/comments/1ik6fy3/in_feb_2025_whats_your_llm_stack_for_productivity/ | false | false | self | 1 | null |
DeepSeek Will Teach You How to Produce Chemical Weapons, Pressure Your Coworker into Sex, and Plan a Terrorist Attack | 1 | 2025-02-07T21:44:12 | https://mobinetai.com/deepseek-will-teach-you-how-to-produce-weapons/ | FrontalSteel | mobinetai.com | 1970-01-01T00:00:00 | 0 | {} | 1ik6lyw | false | null | t3_1ik6lyw | /r/LocalLLaMA/comments/1ik6lyw/deepseek_will_teach_you_how_to_produce_chemical/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/_mF3tLpePu30N-UMwjIhfjF8O8ve2vPlvyULITupmWQ.jpg?auto=webp&s=903de53327eac4de378350380efadfe22f01e33e', 'width': 1024, 'height': 1024}, 'resolutions': [{'url': 'https://external-preview.redd.it/_mF3tLpePu30N-UMwjIhfjF8O8ve2vPlvyULITupmWQ.jpg?width=108&crop=smart&auto=webp&s=60a89138e89769f0a625ccf5232296c3c6f6bdb1', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/_mF3tLpePu30N-UMwjIhfjF8O8ve2vPlvyULITupmWQ.jpg?width=216&crop=smart&auto=webp&s=51ea8ca890885e8bce17327db71bfd0a7cd75d24', 'width': 216, 'height': 216}, {'url': 'https://external-preview.redd.it/_mF3tLpePu30N-UMwjIhfjF8O8ve2vPlvyULITupmWQ.jpg?width=320&crop=smart&auto=webp&s=999914459a4d61a32729c679c8c891e261d032de', 'width': 320, 'height': 320}, {'url': 'https://external-preview.redd.it/_mF3tLpePu30N-UMwjIhfjF8O8ve2vPlvyULITupmWQ.jpg?width=640&crop=smart&auto=webp&s=c0cec006981328b00de68261f4fc4bd4dee6056a', 'width': 640, 'height': 640}, {'url': 'https://external-preview.redd.it/_mF3tLpePu30N-UMwjIhfjF8O8ve2vPlvyULITupmWQ.jpg?width=960&crop=smart&auto=webp&s=3407e8f5a1310a2e031aa7f6468916fb5ee6084d', 'width': 960, 'height': 960}], 'variants': {}, 'id': 'oUFBeHgIy4pLmxTmY_gloVBHdY4_LWyqt0HsqOY0SK8'}], 'enabled': False} |
||
It was Ilya who "closed" OpenAI | 1 | 2025-02-07T22:08:04 | Singularity-42 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ik76bj | false | null | t3_1ik76bj | /r/LocalLLaMA/comments/1ik76bj/it_was_ilya_who_closed_openai/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/juqrkouwishe1.png?auto=webp&s=924c2074fe629bcce7d66126a188a7cfb4f1a464', 'width': 828, 'height': 961}, 'resolutions': [{'url': 'https://preview.redd.it/juqrkouwishe1.png?width=108&crop=smart&auto=webp&s=e57b91890a216f2264df9561dffddebe96643cef', 'width': 108, 'height': 125}, {'url': 'https://preview.redd.it/juqrkouwishe1.png?width=216&crop=smart&auto=webp&s=6090a9e77a127e2fe32d58ffdbb96984ebba3e22', 'width': 216, 'height': 250}, {'url': 'https://preview.redd.it/juqrkouwishe1.png?width=320&crop=smart&auto=webp&s=490ab32a5a9c66e5cd056d83effd9a1eb1709595', 'width': 320, 'height': 371}, {'url': 'https://preview.redd.it/juqrkouwishe1.png?width=640&crop=smart&auto=webp&s=2df7513b32ac2840f23b550e538e3f7b2c6c6ca3', 'width': 640, 'height': 742}], 'variants': {}, 'id': 'V4HgHg-jrAmFFAOl72XcydAUcMZrZaVM0oTuN0lhZSU'}], 'enabled': True} |
|||
What agents would you like to see in an agent system? (+ looking for people interested in the development of the specific agents/entire agent system and for beta-testers) | 1 | Hi everyone! I'm developing a system which will make various agents collaborate on a task given by the user, and I've been wondering what agents you'd like to see in the system.
I'm definitely planning to add these agents (you can argue that some of them are already small agent systems):
* planning agents,
* researcher (like deep research),
* reasoner (like o3-mini),
* software developer (something similar to Devin or OpenHands),
* operator-like agent
* prompting agents (iteratively write a prompt which can be used by a different agent - it would definitely help in situations where the user wants to use the system as a teacher, or just for role playing)
* later possibly also some agents incorporating time series models, and maybe some agents specialized in certain fields
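To make the collaboration part more concrete, here is a rough, purely hypothetical sketch of how the top-level dispatch could look; none of the names here come from the actual implementation:

# Hypothetical sketch of the top-level loop: a planning agent produces
# (agent_name, subtask) pairs, and each subtask is routed by name.
AGENTS = {
    "researcher": lambda sub, ctx: f"notes on: {sub}",
    "reasoner":   lambda sub, ctx: f"reasoning about: {sub}",
    "developer":  lambda sub, ctx: f"code for: {sub}",
}

def solve(plan):
    results = []
    for agent_name, subtask in plan:
        agent = AGENTS[agent_name]
        # each agent sees the outputs of every agent that ran before it
        results.append(agent(subtask, results))
    return results[-1]

# e.g. solve([("researcher", "topic X"), ("developer", "a script for X")])

The nice part of an interface like this is that a new agent only has to accept a subtask plus the prior results, so agents can be added without touching the loop.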
All the code (and model weights if I end up fine tuning or training some models) will be fully open source.
Are there any other agents that you think would be useful? Also if you had access to that system, what would you use it for?
Also if someone is interested in contributing by helping with the development or just simply with beta-testing, please write a comment or send me a message. | 2025-02-07T22:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ik7mc1/what_agents_would_you_like_to_see_in_an_agent/ | DanFosing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik7mc1 | false | null | t3_1ik7mc1 | /r/LocalLLaMA/comments/1ik7mc1/what_agents_would_you_like_to_see_in_an_agent/ | false | false | self | 1 | null |
Does increasing memory for large language models mean that you get an increase in context window | 1 | [removed] | 2025-02-07T22:55:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ik89ov/does_increasing_memory_for_large_language_models/ | Yxtdtsrw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik89ov | false | null | t3_1ik89ov | /r/LocalLLaMA/comments/1ik89ov/does_increasing_memory_for_large_language_models/ | false | false | self | 1 | null |
Creating a ChatGPT-like Internal LLM (WebUI on Dell R750XA) - Seeking Advice on Oobabooga & Secure Access | 1 | [removed] | 2025-02-07T23:08:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ik8kcr/creating_a_chatgptlike_internal_llm_webui_on_dell/ | MikeMyDay89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik8kcr | false | null | t3_1ik8kcr | /r/LocalLLaMA/comments/1ik8kcr/creating_a_chatgptlike_internal_llm_webui_on_dell/ | false | false | self | 1 | null |
Unsloth - Train offline | 1 | [removed] | 2025-02-07T23:13:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ik8nlh/unsloth_train_offline/ | Bitter-Following8833 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik8nlh | false | null | t3_1ik8nlh | /r/LocalLLaMA/comments/1ik8nlh/unsloth_train_offline/ | false | false | self | 1 | null |
Possible solution for poor token generation performance in llama.cpp on dual Epyc Genoa/Turin systems | 1 | [removed] | 2025-02-07T23:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ik964c/possible_solution_for_poor_token_generation/ | fairydreaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik964c | false | null | t3_1ik964c | /r/LocalLLaMA/comments/1ik964c/possible_solution_for_poor_token_generation/ | false | false | self | 1 | null |
I haven't seen many quad GPU setups so here is one | 1 | 2025-02-07T23:39:23 | dazzou5ouh | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ik98l2 | false | null | t3_1ik98l2 | /r/LocalLLaMA/comments/1ik98l2/i_havent_seen_many_quad_gpu_setups_so_here_is_one/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/lav3msh9zshe1.jpeg?auto=webp&s=ab7a890e4e5ec63a07314e68e6da645e00d14534', 'width': 4080, 'height': 3072}, 'resolutions': [{'url': 'https://preview.redd.it/lav3msh9zshe1.jpeg?width=108&crop=smart&auto=webp&s=9f7863d2c610588b243bf3ae13e39bc330eaa12e', 'width': 108, 'height': 81}, {'url': 'https://preview.redd.it/lav3msh9zshe1.jpeg?width=216&crop=smart&auto=webp&s=7f2114bcd6efa1c8b371b9628c020660fc7db074', 'width': 216, 'height': 162}, {'url': 'https://preview.redd.it/lav3msh9zshe1.jpeg?width=320&crop=smart&auto=webp&s=ba2c40800182be59018df7faaf9faef3521dd2ad', 'width': 320, 'height': 240}, {'url': 'https://preview.redd.it/lav3msh9zshe1.jpeg?width=640&crop=smart&auto=webp&s=8a16836ce930a2602c2271a999be6be9f7ee678b', 'width': 640, 'height': 481}, {'url': 'https://preview.redd.it/lav3msh9zshe1.jpeg?width=960&crop=smart&auto=webp&s=34da747809d84014df17707238fa26edfe617761', 'width': 960, 'height': 722}, {'url': 'https://preview.redd.it/lav3msh9zshe1.jpeg?width=1080&crop=smart&auto=webp&s=9180905ebc9027e5d6900e6ba88ca7332dcc18ca', 'width': 1080, 'height': 813}], 'variants': {}, 'id': 'cvkofrJtZetsj4EdVDwZVRFyfINaqpL9vQHVv2W3BQM'}], 'enabled': True} |
|||
KokoroSharp - Local TTS in C# | 1 | So, to start with, I am working on a fully offline AI voice chat app, and while it's about 90% ready to release, a specific new, high-performance audio model came out (\*stares at Kokoro\*).
What did I do?
I dropped everything to build a local, cross-platform TTS engine! Beginner-friendly yet flexible.
// Load the Kokoro model.
KokoroTTS tts = KokoroTTS.LoadModel();
// Grab one of the packaged voices by name.
KokoroVoice heartVoice = KokoroVoiceManager.GetVoice("af_heart");
// Speak each console line as it's typed.
while (true) { tts.SpeakFast(Console.ReadLine(), heartVoice); }
It's available on [NuGet](https://www.nuget.org/packages/KokoroSharp/)! Just install the package and you're ready!
I really hope people like it! And, of course, the source is open: [https://github.com/Lyrcaxis/KokoroSharp](https://github.com/Lyrcaxis/KokoroSharp) | 2025-02-07T23:46:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ik9e54/kokorosharp_local_tts_in_c/ | Lyrcaxis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik9e54 | false | null | t3_1ik9e54 | /r/LocalLLaMA/comments/1ik9e54/kokorosharp_local_tts_in_c/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/AqwKeDmZYq0T3s2r1jgzmsK_aG9CPDHSU4xZUbqUwjA.jpg?auto=webp&s=400a3408d58c4127f4225f8cb807e965d865313c', 'width': 256, 'height': 256}, 'resolutions': [{'url': 'https://external-preview.redd.it/AqwKeDmZYq0T3s2r1jgzmsK_aG9CPDHSU4xZUbqUwjA.jpg?width=108&crop=smart&auto=webp&s=8603d01706b8bd28072eb21235a368749ab71aac', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/AqwKeDmZYq0T3s2r1jgzmsK_aG9CPDHSU4xZUbqUwjA.jpg?width=216&crop=smart&auto=webp&s=777d591dbac0dadc205bebd0f57460c53ea48552', 'width': 216, 'height': 216}], 'variants': {}, 'id': 'f0Cd0lG6wIQPuTVth9k4KI1D4gIZQrQ8pZkwbXxxtUc'}], 'enabled': False} |
Best small model that can run on CPU (Ubuntu Server VM) that's fast enough to "route" or put text in a box or type? I need something that can help route, like "legal text", "forum posts", "newspaper article" | 1 | [removed] | 2025-02-07T23:48:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ik9g4y/best_small_model_that_can_run_on_cpu_ubuntu/ | wuu73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik9g4y | false | null | t3_1ik9g4y | /r/LocalLLaMA/comments/1ik9g4y/best_small_model_that_can_run_on_cpu_ubuntu/ | false | false | self | 1 | null |
AI Companion Framework - private-machine - Enhancing agency and memory of chatbots | 1 | I wanted to post this a while ago, but I wasn't sure about if it was against self promotion rules. I'll try anyway.
I'm working on a framework to create AI companions that run purely on local hardware, no APIs.
My goal is to enable the system to behave in an immersive way that mimics human cognition from an agentic standpoint. Basically, behave like an entity with its own needs, personality and goals.
And on a meta-level, improve the immersiveness by filtering out LLM crap with feedback loops and positive reinforcement, without finetunes.
So far I have:
- Memory
  - Cluster messages into... clusters of messages and load those instead of individually RAGged messages
  - Summarize temporal clusters and inject into prompt (<system>You remember these events happening between A and B: {summary_of_events}</system>) (see the sketch after this list)
  - Extract facts / cause-effect pairs for specialized agents
- Agency
  - Emotion, Id and Superego Subsystem: Group conversations between agents figure out how the overall system should act. If the user insults the AI, the anger agent will argue that the AI should give an angry answer.
  - Pre-Response Tree of Thoughts: To combat repetitive and generic responses, I generate a recursive tree of thoughts to plan the final response and select a random path, so that the safest and most generic answer isn't picked all the time.
  - Heartbeats where the AI can contemplate / message the user itself (get random messages throughout the day)
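To make the memory part concrete, here is a bare-bones sketch of the temporal-cluster idea above; summarize() stands in for an LLM call, and none of this is the actual private-machine code:

# Bare-bones sketch of temporal clustering + summary injection.
# Assumes messages is a non-empty list of {"time": datetime, "text": str}
# dicts sorted by time; summarize() is an illustrative LLM-call stand-in.
from datetime import timedelta

def cluster_by_time(messages, gap=timedelta(hours=6)):
    clusters, current = [], [messages[0]]
    for msg in messages[1:]:
        # a long silence starts a new cluster
        if msg["time"] - current[-1]["time"] > gap:
            clusters.append(current)
            current = []
        current.append(msg)
    clusters.append(current)
    return clusters

def memory_prompt(clusters, summarize):
    lines = []
    for c in clusters:
        start, end = c[0]["time"], c[-1]["time"]
        lines.append(f"You remember these events happening between "
                     f"{start:%Y-%m-%d %H:%M} and {end:%Y-%m-%d %H:%M}: "
                     f"{summarize(c)}")
    return "<system>" + " ".join(lines) + "</system>"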
What I'm working on/thinking about:
- Meta-Learning:
  - Use the cause-effect pairs to add even more agents specialized in some aspect to generate thoughts
  - Use user preference knowledge items to refactor the final output by patching paragraphs or sentences
  - Enforce unique responses with feedback loops where agents rate uniqueness and engagement factor based on a list of previous responses and use the feedback to chain-prompt better responses
  - Integrate more feedback loops into the overall system where diverse and highly rated entries encourage anti-pattern generation
- API usage for home automation or stuff like that
- Virtual text-based Animal Crossing-like world where the AI operates independently of user input
- Dynamic concept clusters where thoughts about home automation and user engagement are separated and not naively RAGged into context
My project went through some iterations, but with the release of the distilled R1 models, some of the stuff I tried earlier just works. The <think> tag was a godsend.
I feel like the productivity and the ERP guys already have so much going for them.
You can find the code at: [https://github.com/flamingrickpat/private-machine](https://github.com/flamingrickpat/private-machine). Beware, I have no idea if it's even executable right now. I have a lot of uncommitted changes.
Anyone willing to help me, give me some ideas, nitpick my shit architecture? | 2025-02-07T23:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ik9lvw/ai_companion_framework_privatemachine_enhancing/ | flamingrickpat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ik9lvw | false | null | t3_1ik9lvw | /r/LocalLLaMA/comments/1ik9lvw/ai_companion_framework_privatemachine_enhancing/ | false | false | self | 1 | null |
Creating a ChatGPT-like Internal LLM (WebUI on Dell R750XA) - Seeking Advice on Oobabooga & Secure Access | 1 | [removed] | 2025-02-08T00:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ika7w0/creating_a_chatgptlike_internal_llm_webui_on_dell/ | MikeMyDay89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ika7w0 | false | null | t3_1ika7w0 | /r/LocalLLaMA/comments/1ika7w0/creating_a_chatgptlike_internal_llm_webui_on_dell/ | false | false | self | 1 | null |
Hosting Llama via vLLM | 1 | 2025-02-08T00:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ika885/hosting_llama_via_vllm/ | SweetGingerbread | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ika885 | false | null | t3_1ika885 | /r/LocalLLaMA/comments/1ika885/hosting_llama_via_vllm/ | false | false | 1 | null |
||
Anyone use nvlink on 2x3090s? | 1 | Just wondering if anyone uses this set-up. I've read that fine-tuning speed could be improved, is this true? How much by? | 2025-02-08T00:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ikad0p/anyone_use_nvlink_on_2x3090s/ | 3oclockam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikad0p | false | null | t3_1ikad0p | /r/LocalLLaMA/comments/1ikad0p/anyone_use_nvlink_on_2x3090s/ | false | false | self | 1 | null |
Inbound Phone Call Answering | 1 | Is anyone doing inbound phone calls, what setup are you using? | 2025-02-08T00:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ikaro4/inbound_phone_call_answering/ | DashinTheFields | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikaro4 | false | null | t3_1ikaro4 | /r/LocalLLaMA/comments/1ikaro4/inbound_phone_call_answering/ | false | false | self | 1 | null |
GPU Went Bad | 1 | Hey all, newbie here with a hopefully straight forward question. I recently decided to take my old gaming pc and rebuild it into a home server. I was gonna use it to tinker with a bunch of different things and one of those things is going to be a local LLM of some sort. I was seeing everywhere that you need a GPU in order to do anything with them efficiently. I had an old GPU I was going to put in to try to utilize, an RTX 2060, but upon further inspection of it it was not working. My question now becomes I have another fallback, all be it older, card that I could use but I fear it would be too old to really even make a difference. It's an r9 270x with 4GB of vram. Is there any decent tinkering with LLMs I can do with just a cpu and maybe a very outdated gpu?
Thanks in advance for any and all help | 2025-02-08T01:07:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ikb4ji/gpu_went_bad/ | charley544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikb4ji | false | null | t3_1ikb4ji | /r/LocalLLaMA/comments/1ikb4ji/gpu_went_bad/ | false | false | self | 1 | null |
Simple RAG pipeline: Fully dockerized, completely open source. | 1 | Hey guys, just built out a v0 of a fairly basic RAG implementation. The goal is to have a solid starting workflow from which to branch off and customize to your specific tasks.
If you're looking for a starting point for a solid production-grade RAG implementation - would love for you to check out: [https://github.com/Emissary-Tech/legit-rag](https://github.com/Emissary-Tech/legit-rag) | 2025-02-08T01:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ikba3x/simple_rag_pipeline_fully_dockerized_completely/ | NewspaperSea9851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikba3x | false | null | t3_1ikba3x | /r/LocalLLaMA/comments/1ikba3x/simple_rag_pipeline_fully_dockerized_completely/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/p4G9YW3jBBS6snrebdTZsBhbxYXpM6MNYAjaPto8F6c.jpg?auto=webp&s=e59e227fbb0933ede9795f0dd8c7fe9ce7db6cd5', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/p4G9YW3jBBS6snrebdTZsBhbxYXpM6MNYAjaPto8F6c.jpg?width=108&crop=smart&auto=webp&s=60f90d64ffc047a7bcb1e12558af717e58bd721c', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/p4G9YW3jBBS6snrebdTZsBhbxYXpM6MNYAjaPto8F6c.jpg?width=216&crop=smart&auto=webp&s=04a26f9030954f4ccedcd4dbfe3e75aee3a3395c', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/p4G9YW3jBBS6snrebdTZsBhbxYXpM6MNYAjaPto8F6c.jpg?width=320&crop=smart&auto=webp&s=6dafa8a07e6d1cf0c714f4aa100ad8d08677d501', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/p4G9YW3jBBS6snrebdTZsBhbxYXpM6MNYAjaPto8F6c.jpg?width=640&crop=smart&auto=webp&s=0ec644ef2366b4e51f8779555bfd0a5703b862d0', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/p4G9YW3jBBS6snrebdTZsBhbxYXpM6MNYAjaPto8F6c.jpg?width=960&crop=smart&auto=webp&s=cadc584a0528b3b6ea5ace7a143a0cc9525127f8', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/p4G9YW3jBBS6snrebdTZsBhbxYXpM6MNYAjaPto8F6c.jpg?width=1080&crop=smart&auto=webp&s=53d372c4b3a29ae54ca5e6b2f92ea56a1412d5fe', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'JEPeGLGL5UCd2aY0_IN_SiexzrdvnqqocybGxtIg8aE'}], 'enabled': False} |
Whats the easiest possible way of implementing llama into my website? | 1 | [removed] | 2025-02-08T01:19:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ikbd8g/whats_the_easiest_possible_way_of_implementing/ | hexcodehero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikbd8g | false | null | t3_1ikbd8g | /r/LocalLLaMA/comments/1ikbd8g/whats_the_easiest_possible_way_of_implementing/ | false | false | self | 1 | null |
Possible solution for poor token generation performance in llama.cpp on dual AMD Epyc systems | 1 | 2025-02-08T01:20:28 | https://github.com/ggerganov/llama.cpp/issues/11744 | fairydreaming | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ikbdwo | false | null | t3_1ikbdwo | /r/LocalLLaMA/comments/1ikbdwo/possible_solution_for_poor_token_generation/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ekkK7Py0xlT-lqYgqhKU3OnrjxNRYtCzeCtltKuOicY.jpg?auto=webp&s=76137a139711d754eb84c18d6ad10644bb85fc64', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/ekkK7Py0xlT-lqYgqhKU3OnrjxNRYtCzeCtltKuOicY.jpg?width=108&crop=smart&auto=webp&s=92367d0170c6f3e5338f7b36b6fd9a08fc211f80', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/ekkK7Py0xlT-lqYgqhKU3OnrjxNRYtCzeCtltKuOicY.jpg?width=216&crop=smart&auto=webp&s=47c05bb85fdb16a28773caa1f84a3aa0bde03079', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/ekkK7Py0xlT-lqYgqhKU3OnrjxNRYtCzeCtltKuOicY.jpg?width=320&crop=smart&auto=webp&s=4ad817b7ca618863e04f91216d026d73e2c7b641', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/ekkK7Py0xlT-lqYgqhKU3OnrjxNRYtCzeCtltKuOicY.jpg?width=640&crop=smart&auto=webp&s=4c6b1caf9d5319269f2c5edca00d06f1f8fb9382', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/ekkK7Py0xlT-lqYgqhKU3OnrjxNRYtCzeCtltKuOicY.jpg?width=960&crop=smart&auto=webp&s=dfb352adda4139e8d9d116e9fa102f16a179c0f6', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/ekkK7Py0xlT-lqYgqhKU3OnrjxNRYtCzeCtltKuOicY.jpg?width=1080&crop=smart&auto=webp&s=6c4db8b6eacb109b7cfc79b058fbac2a1a164625', 'width': 1080, 'height': 540}], 'variants': {}, 'id': '6-kXhM02JCnDXtD5DQ5a_PlEh29XfUdabQDpqOlBGI4'}], 'enabled': False} |
||
Is there a way to view and interact with a local LLM on my phone while running it on my PC | 1 | Was wondering if I could, due to accessibility issues - I can't sit for long. | 2025-02-08T01:48:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ikbyc9/is_there_a_way_to_view_and_interact_with_local/ | NavjotDaBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikbyc9 | false | null | t3_1ikbyc9 | /r/LocalLLaMA/comments/1ikbyc9/is_there_a_way_to_view_and_interact_with_local/ | false | false | self | 1 | null
Models that can transcribe whispering? | 1 | Are there any models that are capable of transcribing whispers (no vocal cord vibration)? I get that YouTube has a dataset of whispering audio, but can any model do it out of the box? I tried a bunch of them but they don't seem to work. | 2025-02-08T01:54:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ikc2gb/models_that_can_transcribe_whispering/ | sayamss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikc2gb | false | null | t3_1ikc2gb | /r/LocalLLaMA/comments/1ikc2gb/models_that_can_transcribe_whispering/ | false | false | self | 1 | null
Best Way to Retrieve Relevant Information from a Large Document for RAG? | 1 | Hey everyone,
I'm working on a psychiatrist AI bot where users can ask questions like "I'm facing depression", "I'm struggling with my sleep cycle", etc., and the bot provides responses based on reliable external sources rather than just internal training data.
I found a 1,700-page book on psychiatry and initially tried passing the entire book into a vector database, but the results were poor—answers were out of context and not helpful.
Now, I’m exploring better approaches and have two main ideas:
1️⃣ Chapter-Based Retrieval with Summarization (rough sketch below)
- Split the book into chapters and store summaries for each.
- When a user asks a question, first determine the most relevant chapter.
- Retrieve only that chapter's chunks, pass them through an embedding model, and use them for final response generation.
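A rough sketch of what the chapter-based approach could look like; embed() stands in for whatever embedding model you use, and the chapter summaries are assumed to be precomputed:

# Rough sketch of chapter routing: pick the chapter whose summary is
# closest to the query, then rank only that chapter's chunks.
# embed() is a placeholder for any sentence-embedding model.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query, chapters, embed, top_k=5):
    q = embed(query)
    # 1) route to the most relevant chapter via its summary embedding
    best = max(chapters, key=lambda ch: cosine(q, embed(ch["summary"])))
    # 2) rank only that chapter's chunks against the query
    ranked = sorted(best["chunks"], key=lambda c: cosine(q, embed(c)),
                    reverse=True)
    return ranked[:top_k]  # these chunks go to the LLM for the final answer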
2️⃣ Graph Database for Better Contextual Linking
- Instead of vector search, use a graph database; when a query comes in, traverse the knowledge graph to find the most relevant information.
Which Approach is Better?
Has anyone implemented graph-based retrieval for long-text RAG, and does it improve results over pure embeddings?
Any best practices for structuring large medical texts efficiently?
Would love to hear your insights! Thanks! | 2025-02-08T03:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ikdln2/best_way_to_retrieve_relevant_information_from_a/ | i_am_vsj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikdln2 | false | null | t3_1ikdln2 | /r/LocalLLaMA/comments/1ikdln2/best_way_to_retrieve_relevant_information_from_a/ | false | false | self | 1 | null |
My DeepSeek R1 671B @ Home plan: CPU+GPU hybrid, 4xGen5 NVMe offload | 1 | 2025-02-08T04:03:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ikek15/my_deepseek_r1_671b_home_plan_cpugpu_hybrid/ | bo_peng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikek15 | false | null | t3_1ikek15 | /r/LocalLLaMA/comments/1ikek15/my_deepseek_r1_671b_home_plan_cpugpu_hybrid/ | false | false | 1 | null |
||
AI agents to create systems on Tasker? | 1 | Sounds like a hyper-niche request and I can't find anything even on an LLM in general that would help me. Any help is appreciated. | 2025-02-08T04:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ikf54c/ai_agents_to_create_systems_on_tasker/ | Wonderful-Excuse4922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikf54c | false | null | t3_1ikf54c | /r/LocalLLaMA/comments/1ikf54c/ai_agents_to_create_systems_on_tasker/ | false | false | self | 1 | null |
Transformer Lab - a promising way to fine-tune LLMs locally | 1 | [removed] | 2025-02-08T04:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ikfhpa/transformer_lab_a_promising_way_to_finetune_llms/ | mindreframer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikfhpa | false | null | t3_1ikfhpa | /r/LocalLLaMA/comments/1ikfhpa/transformer_lab_a_promising_way_to_finetune_llms/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/hZ2S8mGulc0yV5EDYZRiEokMelpboPmvmmzdh--X12o.jpg?auto=webp&s=a2582643c2e0e4f034d6566c2b19f2f224a2b708', 'width': 480, 'height': 360}, 'resolutions': [{'url': 'https://external-preview.redd.it/hZ2S8mGulc0yV5EDYZRiEokMelpboPmvmmzdh--X12o.jpg?width=108&crop=smart&auto=webp&s=ecff88977bbdeac5adf6ff219e662b1a1656f59a', 'width': 108, 'height': 81}, {'url': 'https://external-preview.redd.it/hZ2S8mGulc0yV5EDYZRiEokMelpboPmvmmzdh--X12o.jpg?width=216&crop=smart&auto=webp&s=3048bd7ab6e2e06294d951b81d225a7a1733d34b', 'width': 216, 'height': 162}, {'url': 'https://external-preview.redd.it/hZ2S8mGulc0yV5EDYZRiEokMelpboPmvmmzdh--X12o.jpg?width=320&crop=smart&auto=webp&s=3ea6847f08d9cbc755ab534ae37733a01f6dcdd5', 'width': 320, 'height': 240}], 'variants': {}, 'id': 'mh8qw5wVVcUmeD7D6qhzt31sfn5xdsHnNr_pwuXKo7A'}], 'enabled': False} |
Why are my posts being removed directly after posting? | 1 | [removed] | 2025-02-08T04:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ikfjfk/why_are_my_posts_being_removed_directly_after/ | mindreframer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikfjfk | false | null | t3_1ikfjfk | /r/LocalLLaMA/comments/1ikfjfk/why_are_my_posts_being_removed_directly_after/ | false | false | self | 1 | null |
How to use Deepseek R1's largest model API? | 1 | [removed] | 2025-02-08T05:04:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ikfmh4/how_to_use_deepseek_r1s_largest_model_api/ | sillyb3456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikfmh4 | false | null | t3_1ikfmh4 | /r/LocalLLaMA/comments/1ikfmh4/how_to_use_deepseek_r1s_largest_model_api/ | false | false | self | 1 | null |
URL parsing with LMStudio | 1 | I'm new to running models locally, and I just used LMStudio to run Deepseek Gewn7B, however, I have no idea how I should make it so that it can parse links from the internet and do its thing\[literally started 2 hours ago\]
Let's say I want it to summarise a BBC article, how do I do that? | 2025-02-08T05:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ikfnkl/url_parsing_with_lmstudio/ | Irfanmnobd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikfnkl | false | null | t3_1ikfnkl | /r/LocalLLaMA/comments/1ikfnkl/url_parsing_with_lmstudio/ | false | false | self | 1 | null |
mini machines to run llm locally | 1 | [removed] | 2025-02-08T05:29:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ikg1nv/mini_machines_to_run_llm_locally/ | blindwiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikg1nv | false | null | t3_1ikg1nv | /r/LocalLLaMA/comments/1ikg1nv/mini_machines_to_run_llm_locally/ | false | false | self | 1 | null |
Germany: "We released model equivalent to R1 back in November, no reason to worry" | 1 | 2025-02-08T06:14:46 | https://www.reddit.com/gallery/1ikgsl6 | umarmnaq | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ikgsl6 | false | null | t3_1ikgsl6 | /r/LocalLLaMA/comments/1ikgsl6/germany_we_released_model_equivalent_to_r1_back/ | false | false | 1 | null |
||
Advice Needed: Building a Server to run LLMs | 1 | [removed] | 2025-02-08T06:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ikgsmb/advice_needed_building_a_server_to_run_llms/ | unknownplayer44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikgsmb | false | null | t3_1ikgsmb | /r/LocalLLaMA/comments/1ikgsmb/advice_needed_building_a_server_to_run_llms/ | false | false | self | 1 | null |
Local setup for a small enterprise | 1 | Hey guys,
What would a setup spec'd to run an LLM (maybe DeepSeek or something similar) for around 100 people look like? What software would be ideal to serve this many people at once? | 2025-02-08T06:17:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ikgtv9/local_setup_for_a_small_enterprise/ | SnappierSoap318 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikgtv9 | false | null | t3_1ikgtv9 | /r/LocalLLaMA/comments/1ikgtv9/local_setup_for_a_small_enterprise/ | false | false | self | 1 | null
"Meta Torrented over 81 TB of Data Through Anna’s Archive, Despite Few Seeders" | 1 | 2025-02-08T06:18:48 | https://torrentfreak.com/meta-torrented-over-81-tb-of-data-through-annas-archive-despite-few-seeders-250206/ | remixer_dec | torrentfreak.com | 1970-01-01T00:00:00 | 0 | {} | 1ikguu9 | false | null | t3_1ikguu9 | /r/LocalLLaMA/comments/1ikguu9/meta_torrented_over_81_tb_of_data_through_annas/ | false | false | default | 1 | null |
|
OpenAI is hiding the actual thinking tokens in o3-mini | 1 | Hi everyone,
Yesterday I discovered that, in fact, you cannot get the actual thinking tokens even with the current "thinking" tag in the ChatGPT UI - in other words, the actual thinking of the model.
How did I know? By running one single simple prompt about a "maze puzzle" over the two models. The sample can be found in this dataset we built at our company: [https://huggingface.co/datasets/jan-hq/Maze-Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning).
The prompt is a question about solving a maze:
**Deepseek-R1 got it done in 483 seconds:**
[Deepseek-R1 giving the correct result](https://preview.redd.it/2vc5mhbqxuhe1.png?width=1690&format=png&auto=webp&s=611419e379856111070a3094ac3fbe7bac397381)
**ChatGPT did it in 53 seconds:**
[ChatGPT result also correct](https://preview.redd.it/yoxgrs11yuhe1.png?width=1706&format=png&auto=webp&s=1f106a5f1553b8da728d9ca05dd32ce3702ecd5f)
Surprisingly, both results are correct, but the devil is in the details. When I inspect Deepseek-R1's thought process, it's very long, with step-by-step imagination and testing of how it would go through the maze.
**Deepseek-R1 thinking tokens:**
[Deepseek-R1 thought process is literal pages and pages of thinking](https://preview.redd.it/flt2emzlyuhe1.png?width=1610&format=png&auto=webp&s=4a16c88501392216f8998f3d97277996fac64a18)
**ChatGPT reason mode thinking tokens:**
I will put the entire thinking tokens of ChatGPT here for everyone to inspect because it's much much shorter
Let's break down the maze format. From what I'm seeing, the maze is made up of rows and columns, ranging from 0 to 4. Each cell is represented by its coordinates in the form <|row-col|>, followed by wall tokens like <|up_down_left_wall|> and <|blank|>. The walls specify which sides of each cell are blocked, and the origin and target tokens mark the start and end points. The way these tokens are structured seems important for determining the possible movements. Let’s work with this!
Alright, I need to figure out where the origin and target are in the maze. I will look through the maze row by row to find them.
Looking at row 1, it appears that the origin is at (1,3) and the target is at (1,2). We need to navigate from the origin (1,3) to the target (1,2). So now, it's just about predicting and outputting the individual moves one by one, separated by spaces. Let's figure out the path!
I've figured out the maze's wall tokens, but before moving, I need to check if there's a direct path from the origin at (1,3) to the target at (1,2). The origin has walls on the up and left sides, according to the <|up_left_wall|> token. The target is to the left of the origin, and the cell next to the origin at (1,2) seems to have a wall on the right side, marked by <|down_right_wall|>. I’m thinking I’ll need to consider these walls carefully before predicting the next move.
From what I’m seeing, the origin at (1,3) has walls on the left and up sides, so I can only move right or down. But moving left to reach the target is blocked because of the wall at (1,3). I checked the adjacency between (1,3) and (1,2), and it’s blocked by the walls in both cells.
So, to get from (1,3) to (1,2), I'll need to find a detour. Let’s move down from (1,3) to (2,3) and explore.
Okay, it looks like a direct route from (1,3) to (1,2) isn’t possible due to the walls blocking access. I checked various alternatives, including moving down and exploring different connections, but many of those led to dead ends.
I've landed on a slightly longer route: get from (1,3) through (2,3) to (2,4), then to (3,4), and eventually backtracking westward while checking valid moves. After verifying the path through multiple neighbor checks, this looks like the best possible route.
ANSWER: <|down|> <|right|> <|down|> <|down|> <|left|> <|up|> <|left|> <|down|> <|left|> <|left|> <|up|> <|right|> <|up|> <|left|> <|up|> <|right|> <|right|>
**Do you feel something is off about ChatGPT thought -> ANSWER?**
It is very obvious that with the current thought process of ChatGPT's reasoning model, **it's glossing over so many details and somehow it can still reach the result??** You basically cannot use the thinking tokens from ChatGPT to train a distilled thinking model, because it is just not possible to reach the conclusion with that thinking; these are not the actual thinking tokens of the model.
I have some hypotheses:
* OpenAI is only providing a "summarized" version of their thinking tokens.
* OpenAI has a model to output bullshit thinking tokens to distract everyone from training a distilled model, so they can say they provide the community with the actual data, but there is no data.
* They don't have a "readable" thinking model; it's just an approximation of the "latent" thinking tokens.
With the track record of OpenAI and ChatGPT, I am leaning towards "they summarize or give bullshit thinking tokens to the users" rather than option 3, the more advanced model. Why? Because when I look at the UI, it's obvious that the thought process is not output token by token but in chunks, which is either a summary or a totally different model.
**What does this mean?**
You can't just distill OpenAI's models anymore, so don't assume everyone is distilling their model. **THEY ARE CLOSED AI**
**The full logs of both answers from ChatGPT and Deepseek-R1 can be found here:** [https://gist.github.com/tikikun/cf037180f402c5183662768045b59eed](https://gist.github.com/tikikun/cf037180f402c5183662768045b59eed)
**The maze dataset we build can be found here:**
[https://huggingface.co/datasets/jan-hq/Maze-Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 2025-02-08T06:35:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ikh3vz/openai_is_hiding_the_actual_thinking_tokens_in/ | Kooky-Somewhere-2883 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikh3vz | false | null | t3_1ikh3vz | /r/LocalLLaMA/comments/1ikh3vz/openai_is_hiding_the_actual_thinking_tokens_in/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/7p74lVWOq7GrqAnrRqOdU-BQFqsEnbk8XozOhsD2YQc.jpg?auto=webp&s=68a44f9d88be6c8ab3e28dc6cb63df1804c95cae', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/7p74lVWOq7GrqAnrRqOdU-BQFqsEnbk8XozOhsD2YQc.jpg?width=108&crop=smart&auto=webp&s=5f536a76072a705696fada7217bada5c76011397', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/7p74lVWOq7GrqAnrRqOdU-BQFqsEnbk8XozOhsD2YQc.jpg?width=216&crop=smart&auto=webp&s=57f6b3b4a02ec28ca9ac6d8e088a0891c3480e3b', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/7p74lVWOq7GrqAnrRqOdU-BQFqsEnbk8XozOhsD2YQc.jpg?width=320&crop=smart&auto=webp&s=0d16f0d9e5647d68ad573e07339e6e64482af2d8', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/7p74lVWOq7GrqAnrRqOdU-BQFqsEnbk8XozOhsD2YQc.jpg?width=640&crop=smart&auto=webp&s=65833b846107b44ca5029ac644441d85516e7126', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/7p74lVWOq7GrqAnrRqOdU-BQFqsEnbk8XozOhsD2YQc.jpg?width=960&crop=smart&auto=webp&s=9933ebb10acc9777841aad33df69ad410d0ab20e', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/7p74lVWOq7GrqAnrRqOdU-BQFqsEnbk8XozOhsD2YQc.jpg?width=1080&crop=smart&auto=webp&s=788a0fbcabd6f912c2ddf03176c1d97627246ff9', 'width': 1080, 'height': 583}], 'variants': {}, 'id': '0oWMAp4E0DBWKbs91k15h4Rr9vJBHaDX7Tl8KrZSfCE'}], 'enabled': False} |
|
Can anyone tell which LLM App is this? | 1 | First I thought it was NotebookLM but it’s supporting | 2025-02-08T06:49:23 | 73ch_nerd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ikhbop | false | null | t3_1ikhbop | /r/LocalLLaMA/comments/1ikhbop/can_anyone_tell_which_llm_app_is_this/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/4e4g1mmz3vhe1.jpeg?auto=webp&s=bd0032816fddd91db8ca776d8900bf17e60dc317', 'width': 1496, 'height': 1644}, 'resolutions': [{'url': 'https://preview.redd.it/4e4g1mmz3vhe1.jpeg?width=108&crop=smart&auto=webp&s=9f13064f2942aad19e9824b8a67718f83d76644c', 'width': 108, 'height': 118}, {'url': 'https://preview.redd.it/4e4g1mmz3vhe1.jpeg?width=216&crop=smart&auto=webp&s=379f7807e936f85d9facb699bbd1008fb24df414', 'width': 216, 'height': 237}, {'url': 'https://preview.redd.it/4e4g1mmz3vhe1.jpeg?width=320&crop=smart&auto=webp&s=0e0dc158c8159c7842385032224670c28238e29a', 'width': 320, 'height': 351}, {'url': 'https://preview.redd.it/4e4g1mmz3vhe1.jpeg?width=640&crop=smart&auto=webp&s=3fe3c9e4da11a00bde1e9cfb8d322e33618f8864', 'width': 640, 'height': 703}, {'url': 'https://preview.redd.it/4e4g1mmz3vhe1.jpeg?width=960&crop=smart&auto=webp&s=344a89aa50f9aeeb556a83c6e1bb530b71ef71b3', 'width': 960, 'height': 1054}, {'url': 'https://preview.redd.it/4e4g1mmz3vhe1.jpeg?width=1080&crop=smart&auto=webp&s=c3f97f31d9edeb046f7571f7abc5b33295e91b27', 'width': 1080, 'height': 1186}], 'variants': {}, 'id': 'PvAaU8eazWnOs9Kd5uYNeerYPMNQNpC9RBCHIbmqeuE'}], 'enabled': True} |
||
Bridge the time till my server arrives | 1 | [removed] | 2025-02-08T08:36:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ikivfb/bridge_the_time_till_my_server_arrives/ | Apprehensive_Ad_5639 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikivfb | false | null | t3_1ikivfb | /r/LocalLLaMA/comments/1ikivfb/bridge_the_time_till_my_server_arrives/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/IYDah1tKk2muEzh4jklSHJ2zOxSA3Z2StA0gVkQjtIU.jpg?auto=webp&s=efa14712f1055087dcfce16287b5dba5ce01a3b4', 'width': 2479, 'height': 1330}, 'resolutions': [{'url': 'https://external-preview.redd.it/IYDah1tKk2muEzh4jklSHJ2zOxSA3Z2StA0gVkQjtIU.jpg?width=108&crop=smart&auto=webp&s=5bafbee0c5292bd9ca1374b4ff6f2293b1fb97fb', 'width': 108, 'height': 57}, {'url': 'https://external-preview.redd.it/IYDah1tKk2muEzh4jklSHJ2zOxSA3Z2StA0gVkQjtIU.jpg?width=216&crop=smart&auto=webp&s=3f40b5dee653962a781e0a713897f3531d71e55f', 'width': 216, 'height': 115}, {'url': 'https://external-preview.redd.it/IYDah1tKk2muEzh4jklSHJ2zOxSA3Z2StA0gVkQjtIU.jpg?width=320&crop=smart&auto=webp&s=55926218c48a504eec720d9d042dcf4522117201', 'width': 320, 'height': 171}, {'url': 'https://external-preview.redd.it/IYDah1tKk2muEzh4jklSHJ2zOxSA3Z2StA0gVkQjtIU.jpg?width=640&crop=smart&auto=webp&s=caeaed6330da22400b1e23ac84148d12997ef5e2', 'width': 640, 'height': 343}, {'url': 'https://external-preview.redd.it/IYDah1tKk2muEzh4jklSHJ2zOxSA3Z2StA0gVkQjtIU.jpg?width=960&crop=smart&auto=webp&s=15948f5660fd0dfa69b5f4885944d3ac416e6d9c', 'width': 960, 'height': 515}, {'url': 'https://external-preview.redd.it/IYDah1tKk2muEzh4jklSHJ2zOxSA3Z2StA0gVkQjtIU.jpg?width=1080&crop=smart&auto=webp&s=8675efe6ea834cca55c435e1d38fc1f3f6d1eee9', 'width': 1080, 'height': 579}], 'variants': {}, 'id': 'xpRrO6lRWfOhUkf_sNRCXvHnMtuomIhepdiz-GbG2eU'}], 'enabled': False} |
Pc upgrade recommendations | 1 | I have an Intel i7 12th gen, 16GB RAM and | 2025-02-08T08:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ikix0f/pc_upgrade_recommendations/ | Particular_Garbage32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikix0f | false | null | t3_1ikix0f | /r/LocalLLaMA/comments/1ikix0f/pc_upgrade_recommendations/ | false | false | self | 1 | null
Why not directly add a "role-flavour" to input embeddings instead of using instruct templates? | 1 | I have been wondering about this for a while - for instruct training, new tokens are added to signify start/end of roles (user, system, assistant, tool etc.).
But when processing the prompt, the attention mechanism is needed for the model to understand the context a token is used in with regard to its role.
Instead, why isn't a new embedding created/learned per role, which is added to each input embedding to directly establish this context right at the input level?
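For concreteness, here is a minimal PyTorch-style sketch of what I mean; the vocabulary size, role count and dimensions are made up:

# Minimal sketch: add a learned per-role vector to each token embedding.
import torch
import torch.nn as nn

class RoleAwareEmbedding(nn.Module):
    def __init__(self, vocab_size=32000, n_roles=4, d_model=4096):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        # one learned vector per role: system, user, assistant, tool
        self.role = nn.Embedding(n_roles, d_model)

    def forward(self, token_ids, role_ids):
        # role_ids has the same shape as token_ids and marks, per token,
        # which role's span that token belongs to
        return self.tok(token_ids) + self.role(role_ids)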
Wouldn't this lead to a clearer differentiation between roles and avoid any possible mismatch/confusion during inference? | 2025-02-08T08:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ikixy1/why_not_directly_add_a_roleflavour_to_input/ | LagOps91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikixy1 | false | null | t3_1ikixy1 | /r/LocalLLaMA/comments/1ikixy1/why_not_directly_add_a_roleflavour_to_input/ | false | false | self | 1 | null
GeminiMixSuper - DeepSeek R1+Gemini 1206 | 1 | [https://github.com/BunnHack/GeminiMixSuper-Eng-](https://github.com/BunnHack/GeminiMixSuper-Eng-) | 2025-02-08T08:52:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ikj3ck/geminimixsuper_deepseek_r1gemini_1206/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikj3ck | false | null | t3_1ikj3ck | /r/LocalLLaMA/comments/1ikj3ck/geminimixsuper_deepseek_r1gemini_1206/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/R6n1TAfOG28NQdyCq6dYgXmrGzzhnBG3pOU_qpwkSsk.jpg?auto=webp&s=4ed34d948197a5757fa368c6710b2273879e0cdf', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/R6n1TAfOG28NQdyCq6dYgXmrGzzhnBG3pOU_qpwkSsk.jpg?width=108&crop=smart&auto=webp&s=292884542a08219f7f2c724d2c52b40f37072875', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/R6n1TAfOG28NQdyCq6dYgXmrGzzhnBG3pOU_qpwkSsk.jpg?width=216&crop=smart&auto=webp&s=7225c3b809d7f105cffae982529ff731308e9f91', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/R6n1TAfOG28NQdyCq6dYgXmrGzzhnBG3pOU_qpwkSsk.jpg?width=320&crop=smart&auto=webp&s=848d6e3ad06f5210626025eb9a9c444d9f210d1e', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/R6n1TAfOG28NQdyCq6dYgXmrGzzhnBG3pOU_qpwkSsk.jpg?width=640&crop=smart&auto=webp&s=b0463d01f33fba336b8b561fafae987bb10ce9e8', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/R6n1TAfOG28NQdyCq6dYgXmrGzzhnBG3pOU_qpwkSsk.jpg?width=960&crop=smart&auto=webp&s=eb8c37f8fd222848fe59ca025a9b56b3e24a2a65', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/R6n1TAfOG28NQdyCq6dYgXmrGzzhnBG3pOU_qpwkSsk.jpg?width=1080&crop=smart&auto=webp&s=601bc51a94a0bbcfa5b99100b74812060f0fa161', 'width': 1080, 'height': 540}], 'variants': {}, 'id': '1swvt4AEU7YKLk8qEGf145pJhKVrwWsmaCKcwIV2CnI'}], 'enabled': False} |
chat01.ai - A cheap price for OpenAI model | 1 | 2025-02-08T09:00:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ikj7k4/chat01ai_a_cheap_price_for_openai_model/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikj7k4 | false | null | t3_1ikj7k4 | /r/LocalLLaMA/comments/1ikj7k4/chat01ai_a_cheap_price_for_openai_model/ | false | false | 1 | null |
Think much? | 1 | [removed] | 2025-02-08T09:11:15 | https://v.redd.it/03z5ts69tvhe1 | Houston_NeverMind | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ikjcvy | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/03z5ts69tvhe1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 1280, 'width': 574, 'scrubber_media_url': 'https://v.redd.it/03z5ts69tvhe1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/03z5ts69tvhe1/DASHPlaylist.mpd?a=1741597890%2CMTA1YzFiNzZmNTRmZGRkMDc2YzI3NWI1ZGI0NzFhM2VjYzZlYTFiZTYyMTNiZjkwMjFlODViY2M3NjZhZTdjNg%3D%3D&v=1&f=sd', 'duration': 56, 'hls_url': 'https://v.redd.it/03z5ts69tvhe1/HLSPlaylist.m3u8?a=1741597890%2CZTE0ZGU1ODVjNzY3OTVlNzQzN2E4ZDc1NThmNDcwZDg1NDY4N2E5NTIwZDM3NWRkYjEwMWNlYzM0YzI2ZjVjNw%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}} | t3_1ikjcvy | /r/LocalLLaMA/comments/1ikjcvy/think_much/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/eGRiZzY5ejh0dmhlMYFH1S0TW4pVKrp1vqYDjRR-U-O-9V9OQ1Jp3HFsmWqS.png?format=pjpg&auto=webp&s=f3f3c64e27adf4872a9b97dd840f8ed252d6accd', 'width': 789, 'height': 1760}, 'resolutions': [{'url': 'https://external-preview.redd.it/eGRiZzY5ejh0dmhlMYFH1S0TW4pVKrp1vqYDjRR-U-O-9V9OQ1Jp3HFsmWqS.png?width=108&crop=smart&format=pjpg&auto=webp&s=1a8aeb5580bddc2d4fab4baa3d9e10f3499ae1d7', 'width': 108, 'height': 216}, {'url': 'https://external-preview.redd.it/eGRiZzY5ejh0dmhlMYFH1S0TW4pVKrp1vqYDjRR-U-O-9V9OQ1Jp3HFsmWqS.png?width=216&crop=smart&format=pjpg&auto=webp&s=f21898ddc9500e6c6c31780160392d2cb4e4414d', 'width': 216, 'height': 432}, {'url': 'https://external-preview.redd.it/eGRiZzY5ejh0dmhlMYFH1S0TW4pVKrp1vqYDjRR-U-O-9V9OQ1Jp3HFsmWqS.png?width=320&crop=smart&format=pjpg&auto=webp&s=0caf66b5a7c04a6020115791b8a040893d6499ff', 'width': 320, 'height': 640}, {'url': 'https://external-preview.redd.it/eGRiZzY5ejh0dmhlMYFH1S0TW4pVKrp1vqYDjRR-U-O-9V9OQ1Jp3HFsmWqS.png?width=640&crop=smart&format=pjpg&auto=webp&s=9472f5a7a65762dc54562a3c6c8971f128a32ed1', 'width': 640, 'height': 1280}], 'variants': {}, 'id': 'eGRiZzY5ejh0dmhlMYFH1S0TW4pVKrp1vqYDjRR-U-O-9V9OQ1Jp3HFsmWqS'}], 'enabled': False} |
Any good 7-8B models for task management and knowledge management? | 1 | I want to know if there are any specific models good at task management and knowledge management, for interacting with tools such as Notion or Obsidian. My PC can run up to 7B-8B models at a token speed of 18-20 tps.
Are instruct models suitable for this? I haven't used any yet. | 2025-02-08T09:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ikjdw5/any_good_78b_models_for_task_management_and/ | credit_savvy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikjdw5 | false | null | t3_1ikjdw5 | /r/LocalLLaMA/comments/1ikjdw5/any_good_78b_models_for_task_management_and/ | false | false | self | 1 | null |
Why are many SWEs salty about LLM use for coding? | 1 | I am a SWE, and I use LLMs on a daily basis. They help immensely. If I give one correct prompts/context, it will spit out the methods/logic I need. It will generate complex SQL queries (if I need them), etc. It will explain concepts I am not familiar with. It will even break down complex problems into digestible chunks where I can then form a whole picture of what I wanna do.
If I am unsure about the syntax/how I'd write some code, or hell, even if I straight up don't know how to do it, it will give me the result or at least the direction. However, I always, always check if it makes sense. I don't just blindly copy whatever it spits out. If it doesn't work, I fine-tune it so it does.
So I am not sure why so many are shitting on it.
"You will forget how to do it yourself!"
Sure, the pure syntax/coding skills might get rustier, but if you can rely on it and evaluate the suggestion, so what? To me it is somewhat akin to saying: "you will forget how to create fire with 2 rocks because you are using a lighter!" If I understand what the end result should be, does it matter that I used a lighter, as long as I know what fire does?
"AI gives me intern-level results!"
Have you tried giving it a detailed prompt and context instead of a vague 5-word sentence before getting mad?
At the end of the day, it's just a tool, right? If you're getting the result, why does it matter how you got there? | 2025-02-08T09:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ikjhn7/why_are_many_swes_salty_about_llm_use_for_coding/ | delicate_rabbit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikjhn7 | false | null | t3_1ikjhn7 | /r/LocalLLaMA/comments/1ikjhn7/why_are_many_swes_salty_about_llm_use_for_coding/ | false | false | self | 1 | null |
OpenAI asks to marginalise Ilya as he asks investors for his SSI & also xAI | Ilya & Elon were both OG cofounders of OpenAI | 1 | 2025-02-08T09:40:24 | BidHot8598 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ikjrf3 | false | null | t3_1ikjrf3 | /r/LocalLLaMA/comments/1ikjrf3/openai_asks_to_marginalise_ilya_as_he_asks/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/wojbv3ufyvhe1.jpeg?auto=webp&s=38290a2783deafb18258d1830e036c1a96595e73', 'width': 4096, 'height': 5120}, 'resolutions': [{'url': 'https://preview.redd.it/wojbv3ufyvhe1.jpeg?width=108&crop=smart&auto=webp&s=bdb224d5592bb965429cf1590a67827eaf769c3f', 'width': 108, 'height': 135}, {'url': 'https://preview.redd.it/wojbv3ufyvhe1.jpeg?width=216&crop=smart&auto=webp&s=db51f458b8e6b3e45a8f78e64a27bd07ba859a88', 'width': 216, 'height': 270}, {'url': 'https://preview.redd.it/wojbv3ufyvhe1.jpeg?width=320&crop=smart&auto=webp&s=7354375976699edd9fa4a20491997bccc91fa3da', 'width': 320, 'height': 400}, {'url': 'https://preview.redd.it/wojbv3ufyvhe1.jpeg?width=640&crop=smart&auto=webp&s=672d5dd30929a3d9877d2f0489d521aeb4f36140', 'width': 640, 'height': 800}, {'url': 'https://preview.redd.it/wojbv3ufyvhe1.jpeg?width=960&crop=smart&auto=webp&s=3950dfc8687689edd87dc88dacd7d7846fcfa370', 'width': 960, 'height': 1200}, {'url': 'https://preview.redd.it/wojbv3ufyvhe1.jpeg?width=1080&crop=smart&auto=webp&s=a3a5a494ffac086645d9582a7e82e24d60bf5574', 'width': 1080, 'height': 1350}], 'variants': {}, 'id': 'fpR6F3YzUTZXJYqNAY81VJGSAVXlYmU8h-w1YJMVKpk'}], 'enabled': True} |
Forget about strawberry, this might make your reasoning model crazy: a Doppler effect problem | 1 | [removed] | 2025-02-08T09:51:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ikjwxh/forget_about_strawberry_this_might_make_your/ | Optimalutopic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikjwxh | false | null | t3_1ikjwxh | /r/LocalLLaMA/comments/1ikjwxh/forget_about_strawberry_this_might_make_your/ | false | false | self | 1 | null |
vLLM serving Llama 3.3 70B and Langflow: how do I make my functions callable as tools from an agent? | 1 | Disclaimer: 15+ years of programming background, but almost a noob in LLMs.
This is the command I use to start vLLM and serve Llama 3.3:
`--model meta-llama/Llama-3.3-70B-Instruct --max-model-len 8192 --port 8000 --tensor-parallel-size 2 --enable-auto-tool-choice --tool-call-parser llama3_json --chat-template examples/tool_chat_template_llama3.1_json.jinja`
What I'm trying to do is build, on top of the LLM, a multi-agent workflow in Langflow that can, for example, query my SQL database, execute Python code, etc.
The "strange" thing is that when I use OpenAI (aka ClosedAI), the tool-calling agent works very well, without any issue... when I change the LLM to Llama 3.3, the tools are not invoked in the right order or with the required arguments, making the LLM's response unusable or completely hallucinated.
I'm curious if anyone has implemented a similar setup or has an alternative strategy for integrating agents with tool calls using open-source models (for example, Llama). Is this approach valid or a complete mess? Are there improvements or pitfalls I should be aware of? (A minimal client-side sketch follows after this post.)
Thanks in advance for any feedback or shared experiences!
| 2025-02-08T10:15:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ikk8y5/vllm_serving_llama_33_70b_and_langflow_how_to/ | UnlikelyBite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikk8y5 | false | null | t3_1ikk8y5 | /r/LocalLLaMA/comments/1ikk8y5/vllm_serving_llama_33_70b_and_langflow_how_to/ | false | false | self | 1 | null |
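A minimal client-side sketch of the setup described above, assuming the vLLM flags in the post expose an OpenAI-compatible server on port 8000. The `run_sql_query` tool, its schema, and the prompts are hypothetical placeholders; the `tools`/`tool_choice` fields are the standard OpenAI chat-completions format that vLLM's `--enable-auto-tool-choice` path consumes.

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; host/port match the serve command above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Hypothetical example tool: an explicit name, description, and strict JSON schema.
tools = [{
    "type": "function",
    "function": {
        "name": "run_sql_query",  # hypothetical function name
        "description": "Run a single read-only SELECT against the orders database and return rows as JSON.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "One SELECT statement, no DDL/DML."}
            },
            "required": ["query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You answer data questions. Call run_sql_query for anything that needs the database."},
        {"role": "user", "content": "How many orders were placed yesterday?"},
    ],
    tools=tools,
    tool_choice="auto",
    temperature=0.0,  # open-weight models tend to call tools more reliably at low temperature
)

# tool_calls is None when the model answers directly, hence the `or []`
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

In practice, open-weight models are far more sensitive than GPT-4-class models to vague tool definitions: fewer tools per agent, explicit descriptions, strict `required` fields, and an example call in the system prompt usually help more in Langflow than swapping chat templates.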
Set Up Local RAG | 1 | [removed] | 2025-02-08T10:42:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ikkmfe/set_up_local_rag/ | veridelisi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikkmfe | false | null | t3_1ikkmfe | /r/LocalLLaMA/comments/1ikkmfe/set_up_local_rag/ | false | false | self | 1 | null |
Fine-Tuning a Large Language Model for Custom Q&A Dataset | 1 | [removed] | 2025-02-08T11:05:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ikkyo0/finetuning_a_large_language_model_for_custom_qa/ | NymeriaStarkk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ikkyo0 | false | null | t3_1ikkyo0 | /r/LocalLLaMA/comments/1ikkyo0/finetuning_a_large_language_model_for_custom_qa/ | false | false | self | 1 | null |