title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Help with this AI Problem | 0 | Hi Everyone,
I'm running into this issue and wanted to get the community's suggestions.
I have a UI application that takes in JSON to perform actions. I'm trying to use an LLM to generate that JSON from a user prompt.
Sample Use Case:

>Query to LLM:
>Create an activity named "Running", start time is 21-01-2025, end time is 22-01-2025, description is "training for marathon".

Expected LLM Response (a "Create Activity" command):

    {
      "commandId": "create-activity",
      "arguments": ["Running", "21-01-2025", "22-01-2025", "Training for marathon"]
    }
I achieved this (kind of) using a typical RAG setup with Llama; it works around 80% of the time, but sometimes Llama spits out something other than JSON.
I'm thinking of improving it with approaches like LoRA or RLHF, idk.. would love your thoughts on how to improve it.
I'm unsure straightforward LoRA will work, since the UI app will have thousands of possible commands; building a training dataset that covers thousands of commands will probably be a nightmare.
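One thing that's been suggested to me instead of fine-tuning is grammar-constrained decoding, so the model physically can't emit anything but JSON. A rough, untested sketch (this assumes llama-cpp-python's `LlamaGrammar` and its bundled `JSON_GBNF` grammar; the model path is a placeholder):

```python
# Untested sketch: constrain sampling so only valid JSON can be generated.
# LlamaGrammar / JSON_GBNF usage is an assumption based on llama-cpp-python's docs.
from llama_cpp import Llama
from llama_cpp.llama_grammar import LlamaGrammar, JSON_GBNF

llm = Llama(model_path="llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)  # placeholder path
grammar = LlamaGrammar.from_string(JSON_GBNF)  # GBNF grammar that accepts only JSON

prompt = (
    'Translate the request into a JSON command with "commandId" and "arguments".\n'
    'Request: Create an activity named "Running", start 21-01-2025, '
    'end 22-01-2025, description "training for marathon".\nJSON: '
)
out = llm(prompt, grammar=grammar, max_tokens=256, temperature=0.0)
# Tokens that would break the grammar are masked during sampling,
# so the output should always be parseable JSON.
print(out["choices"][0]["text"])
```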
kindly provide support.
Thanks! | 2025-01-12T17:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hzrtxr/help_with_this_ai_problem/ | Mindless-Umpire-9395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzrtxr | false | null | t3_1hzrtxr | /r/LocalLLaMA/comments/1hzrtxr/help_with_this_ai_problem/ | false | false | self | 0 | null |
Forget AI assistants, Are there local AI waifus to increase my productivity? | 0 | As the title suggests, lots of chads out there are looking to fine-tune their own AI assistant. But I really just want an AI waifu who can help me make plans, handle trivial tasks like responding to messages/conversations, and generally increase my productivity.
What models do you guys suggest? I assume it'll need a huge context length to fit enough data about me? Also hoping there's a way to make the AI periodically check in and give me updates. I have 8GB of VRAM and 40 GB of RAM to spare for this LLM. | 2025-01-12T17:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hzscsh/forget_ai_assistants_are_there_local_ai_waifus_to/ | hummingbird1346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzscsh | false | null | t3_1hzscsh | /r/LocalLLaMA/comments/1hzscsh/forget_ai_assistants_are_there_local_ai_waifus_to/ | false | false | self | 0 | null |
Volo: An easy and local way to RAG with Wikipedia! | 45 | One of the biggest problems with AI models is their tendency to hallucinate. This project aims to fix that by giving them access to an offline copy of Wikipedia (about 57 GB)
It uses a copy of Wikipedia created by Kiwix as the offline database and Qwen2.5:3B as the LLM.
Install instructions are on the Github: [https://github.com/AdyTech99/volo/](https://github.com/AdyTech99/volo/)
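For anyone curious what the core loop looks like, here is a rough illustrative sketch of offline-Wikipedia RAG, not Volo's actual code: `search_wikipedia` is a hypothetical stand-in for the Kiwix-based retrieval (see the repo for the real implementation), while the Ollama `/api/generate` call is the standard local API:

```python
# Illustrative RAG loop (not Volo's actual code).
# search_wikipedia() is a hypothetical placeholder for Kiwix-based retrieval.
import requests

def search_wikipedia(query: str, k: int = 3) -> list[str]:
    """Hypothetical: return the top-k relevant passages from the offline dump."""
    raise NotImplementedError

def answer(query: str) -> str:
    passages = search_wikipedia(query)
    prompt = (
        "Answer using ONLY the context below. If the answer isn't there, say so.\n\n"
        + "\n---\n".join(passages)
        + f"\n\nQuestion: {query}\nAnswer:"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",  # local Ollama server
        json={"model": "qwen2.5:3b", "prompt": prompt, "stream": False},
        timeout=120,
    )
    return r.json()["response"]
```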
[Example of Volo](https://preview.redd.it/ye31knzzslce1.png?width=3015&format=png&auto=webp&s=f15216c70371d13352667b4ddce95e3b57e1ffc5)
| 2025-01-12T18:13:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hzsvkz/volo_an_easy_and_local_way_to_rag_with_wikipedia/ | procraftermc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzsvkz | false | null | t3_1hzsvkz | /r/LocalLLaMA/comments/1hzsvkz/volo_an_easy_and_local_way_to_rag_with_wikipedia/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'Aork88sPIs_ZhY0L7CkmiOd0FPRhLhzjv1IfPuiOcp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fpG2F-y2fJG3IOgvl1H5IK_WS4ZurjXL_2_4lQMcpvY.jpg?width=108&crop=smart&auto=webp&s=87bb948c1e7b6bdf89fac3e4559875652ad00166', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fpG2F-y2fJG3IOgvl1H5IK_WS4ZurjXL_2_4lQMcpvY.jpg?width=216&crop=smart&auto=webp&s=9fe48e414f25010b1362df99bdc445f2321d3024', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fpG2F-y2fJG3IOgvl1H5IK_WS4ZurjXL_2_4lQMcpvY.jpg?width=320&crop=smart&auto=webp&s=9768c7b8cc662e76e955d075552f7ccdd699b9a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fpG2F-y2fJG3IOgvl1H5IK_WS4ZurjXL_2_4lQMcpvY.jpg?width=640&crop=smart&auto=webp&s=4b4fece65aa394b3776194c084c3d99fcc5693c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fpG2F-y2fJG3IOgvl1H5IK_WS4ZurjXL_2_4lQMcpvY.jpg?width=960&crop=smart&auto=webp&s=81a47e4511f207a811f342efe103d38a3ad5a4ae', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fpG2F-y2fJG3IOgvl1H5IK_WS4ZurjXL_2_4lQMcpvY.jpg?width=1080&crop=smart&auto=webp&s=a9164c4e4bcac1f2b2029fda511311ad4b54c996', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fpG2F-y2fJG3IOgvl1H5IK_WS4ZurjXL_2_4lQMcpvY.jpg?auto=webp&s=a67eacf71e7e9c2bd686e60b3a0ade0e23ac4038', 'width': 1200}, 'variants': {}}]} |
|
What are the current best low spec LLMs | 14 | Hello.
I'm looking for either advice or a benchmark covering the best low-spec LLMs. I define low-spec as any LLM that can run locally on a mobile device or a low-spec laptop (integrated GPU + 8/12 GB RAM).
As for tasks, mainly text transformation or questions about the text. No translation needed, the input and output would be in English. | 2025-01-12T18:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hzsvr5/what_are_the_current_best_low_spec_llms/ | tuxPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzsvr5 | false | null | t3_1hzsvr5 | /r/LocalLLaMA/comments/1hzsvr5/what_are_the_current_best_low_spec_llms/ | false | false | self | 14 | null |
Could I just add a 50xx card to my setup? Should I? | 0 | So currently rocking 2x P40 on a weird mobo with plenty of room to spare. I’m not normally one to buy new graphics cards but the product announcements are not as expensive as expected, right? I’m like ‘yeah it’s nice to have this vram on these P40s but they are kinda slow and I would still like a little more to run qwen 2 audio or whatever’.
What are the new cards like? Are they any good? Discussion tag really just if anyone had opinions… | 2025-01-12T18:18:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hzt0a6/could_i_just_add_a_50xx_card_to_my_setup_should_i/ | Soggy_Wallaby_8130 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzt0a6 | false | null | t3_1hzt0a6 | /r/LocalLLaMA/comments/1hzt0a6/could_i_just_add_a_50xx_card_to_my_setup_should_i/ | false | false | self | 0 | null |
Need suggestions building a customized chatbot for my website. | 1 | [removed] | 2025-01-12T18:55:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hztuso/need_suggestions_building_a_customized_chatbot/ | TheBoss-G | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hztuso | false | null | t3_1hztuso | /r/LocalLLaMA/comments/1hztuso/need_suggestions_building_a_customized_chatbot/ | false | false | self | 1 | null |
Unpopular Opinion: This subreddit has one of the most toxic communities I've ever seen on Reddit. | 1 | [removed] | 2025-01-12T19:00:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hztz2t/unpopular_opinion_this_subreddit_has_one_of_the/ | hummingbird1346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hztz2t | false | null | t3_1hztz2t | /r/LocalLLaMA/comments/1hztz2t/unpopular_opinion_this_subreddit_has_one_of_the/ | false | false | self | 1 | null |
This subreddit has one of the most toxic communities I've ever seen on Reddit. | 1 | 2025-01-12T19:02:03 | https://www.reddit.com/gallery/1hzu0zb | hummingbird1346 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hzu0zb | false | null | t3_1hzu0zb | /r/LocalLLaMA/comments/1hzu0zb/this_subreddit_has_one_of_the_most_toxic/ | false | false | 1 | null |
||
A reflection on the culture of this sub, about newcomers and developers. | 1 | 2025-01-12T19:05:34 | https://www.reddit.com/gallery/1hzu3x5 | hummingbird1346 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hzu3x5 | false | null | t3_1hzu3x5 | /r/LocalLLaMA/comments/1hzu3x5/a_reflection_on_the_culture_of_this_sub_about/ | false | false | 1 | null |
||
Are you using different model families in your LLM apps/agents for better task performance? | 5 | Anecdotally, I have seen Claude Sonnet 3.5 perform better on structured outputs than GPT-4o, but conversely I've seen OpenAI model families perform better on other tasks (like creative writing). This experience is amplified for open-source models.
So the broader community question is: are you using multiple models from different model families in your apps? If so what’s your use case and what models are you using?
| 2025-01-12T19:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hzuouz/are_you_using_different_model_families_in_your/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzuouz | false | null | t3_1hzuouz | /r/LocalLLaMA/comments/1hzuouz/are_you_using_different_model_families_in_your/ | false | false | self | 5 | null |
Kokoro #1 on TTS leaderboard | 303 | After a short time and a few sabotage attempts, Kokoro is now #1 on the TTS Arena Leaderboard:
https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena
I hadn't done any comparative tests to see whether it was better than XTTSv2 (which I was using previously), but the smaller model size and licensing were enough for me to switch after using it for just a few minutes.
I'd like to see work do produce a F16 and Int8 version (currently, I'm running the full F32 version). But this is a very nice model in terms of size performance when you just need simple TTS rendering of text. | 2025-01-12T19:38:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hzuw4z/kokoro_1_on_tts_leaderboard/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzuw4z | false | null | t3_1hzuw4z | /r/LocalLLaMA/comments/1hzuw4z/kokoro_1_on_tts_leaderboard/ | false | false | self | 303 | {'enabled': False, 'images': [{'id': 'BHDN4LcfFxCyfBd9tXPJkBjDrmj5BVbmicFa3RjFPdg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z6V_bl4daXSKhvqv6GYmBClSwlmr7dFLWiPdbGoSx1k.jpg?width=108&crop=smart&auto=webp&s=8a5d68888204a0a59bb580cbc0bf4dda1e98abbe', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/z6V_bl4daXSKhvqv6GYmBClSwlmr7dFLWiPdbGoSx1k.jpg?width=216&crop=smart&auto=webp&s=8b03aa9c436332dcb4056517f034879aa8af3c46', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/z6V_bl4daXSKhvqv6GYmBClSwlmr7dFLWiPdbGoSx1k.jpg?width=320&crop=smart&auto=webp&s=e21941e9a855d4edfdb852020f0e853c91e58b47', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/z6V_bl4daXSKhvqv6GYmBClSwlmr7dFLWiPdbGoSx1k.jpg?width=640&crop=smart&auto=webp&s=a35df95149de12db22b9e09964c202faa448704d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/z6V_bl4daXSKhvqv6GYmBClSwlmr7dFLWiPdbGoSx1k.jpg?width=960&crop=smart&auto=webp&s=9378b6262a3df8f4a40385a428ae04b29f01e739', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/z6V_bl4daXSKhvqv6GYmBClSwlmr7dFLWiPdbGoSx1k.jpg?width=1080&crop=smart&auto=webp&s=16f07da33944dbfeafd7db479eeb4b64d3da5ee7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/z6V_bl4daXSKhvqv6GYmBClSwlmr7dFLWiPdbGoSx1k.jpg?auto=webp&s=1bf82c36ab05f584affb96bed7ae227bc0aee81b', 'width': 1200}, 'variants': {}}]} |
I made a Webui Alternative for Vision Language Models like LLaMA 3.2 11b | 11 | Hey, I made this because the oobabooga text-generation-webui didn't have the capability to use the "multimodal" part of these kinds of models (the image sending). It also has characters, as you would have them in other webuis. It's made using the transformers package.
Tell me what you think about [this webui](https://github.com/ricardo2001l/visual-text-generation-webui); also, if you want to contribute by making a pull request, I'd be glad. So give it a try: [https://github.com/ricardo2001l/visual-text-generation-webui](https://github.com/ricardo2001l/visual-text-generation-webui).
[how the webui looks](https://preview.redd.it/9s8wsw3d9mce1.png?width=1902&format=png&auto=webp&s=38a388f5479320ed5a2866def64cc7010dfc3637)
| 2025-01-12T19:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hzv1v5/i_made_a_webui_alternative_for_vision_language/ | Any-Shopping2394 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzv1v5 | false | null | t3_1hzv1v5 | /r/LocalLLaMA/comments/1hzv1v5/i_made_a_webui_alternative_for_vision_language/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'FWucPct-Qr8Dm_wqjRmWhJ3d6PqaA1o32PkiqfdPdaU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/reoQFTzGjChcWrotB_NPBBfZ1W2yeo6AUYjnYDwGotU.jpg?width=108&crop=smart&auto=webp&s=baf08923bd632691aae18b97ee93c729840e717f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/reoQFTzGjChcWrotB_NPBBfZ1W2yeo6AUYjnYDwGotU.jpg?width=216&crop=smart&auto=webp&s=4399184f8bf2b5da312cd6c33d5cc2ddd294d54a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/reoQFTzGjChcWrotB_NPBBfZ1W2yeo6AUYjnYDwGotU.jpg?width=320&crop=smart&auto=webp&s=3635336d6cf144f1f25458368fd56da32d929b40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/reoQFTzGjChcWrotB_NPBBfZ1W2yeo6AUYjnYDwGotU.jpg?width=640&crop=smart&auto=webp&s=418e01a5c68f28f0ed313d915bf1a6ff4371fec6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/reoQFTzGjChcWrotB_NPBBfZ1W2yeo6AUYjnYDwGotU.jpg?width=960&crop=smart&auto=webp&s=bc659cb196917675565bea0fe2343468e6ab0e6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/reoQFTzGjChcWrotB_NPBBfZ1W2yeo6AUYjnYDwGotU.jpg?width=1080&crop=smart&auto=webp&s=85edbde24c6427b727d5b7750e6abf35f5d46dad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/reoQFTzGjChcWrotB_NPBBfZ1W2yeo6AUYjnYDwGotU.jpg?auto=webp&s=5aa820c9e4ef8eb4b961c625b09634a116bab35c', 'width': 1200}, 'variants': {}}]} |
|
API providers that allow grammar-guided sampling? | 6 | I would like to try out DeepSeek v3 with grammar-guided decoding; this is supported by vLLM, but I haven't found API providers that expose this feature. Are you aware of any? For reference, the self-hosted vLLM usage I mean is sketched below. | 2025-01-12T20:11:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hzvp3n/api_providers_that_allow_grammarguided_sampling/ | nielsrolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzvp3n | false | null | t3_1hzvp3n | /r/LocalLLaMA/comments/1hzvp3n/api_providers_that_allow_grammarguided_sampling/ | false | false | self | 6 | null |
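The sketch referenced above: guided decoding against a self-hosted vLLM OpenAI-compatible server. The `guided_json` extra-body field is an assumption based on vLLM's structured-output docs; the model name and URL are placeholders:

```python
# Sketch: JSON-schema-guided decoding via vLLM's OpenAI-compatible server.
# Assumes a local `vllm serve ...` instance; `guided_json` in extra_body
# follows vLLM's structured-output docs and is not an official OpenAI field.
from openai import OpenAI

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}, "confidence": {"type": "number"}},
    "required": ["answer"],
}

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",  # placeholder model id
    messages=[{"role": "user", "content": "Is water wet? Reply as JSON."}],
    extra_body={"guided_json": schema},  # vLLM-specific extension
)
print(resp.choices[0].message.content)
```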
I let a Computer Organize my Life for a Week, and My Mom Does Not Approve | 1 | [removed] | 2025-01-12T20:33:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hzw6z1/i_let_a_computer_organize_my_life_for_a_week_and/ | Collecthor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzw6z1 | false | null | t3_1hzw6z1 | /r/LocalLLaMA/comments/1hzw6z1/i_let_a_computer_organize_my_life_for_a_week_and/ | false | false | self | 1 | null |
Radiologist copilot | 1 | Sorry I'm new to this.
I'm a radiologist. Is there any advantage to training a local model on all of the Radiology pdfs + textbooks + [Radiopaedia.org](http://Radiopaedia.org) articles that I have access to and use as a sort of copilot while reading studies?
I've used ChatGPT in a similar way and it's just not good. Its differentials are so broad as to be completely useless. If a layperson were to read the stuff it spits out, they'd be blown away, but for radiologists it's absolute, unadulterated slop, and sometimes completely wrong too. The only thing it's good at is when I'm having word-finding difficulties and I describe something to it that it then names.
1. could this be better than chatgpt at helping me expand my differential diagnosis when I get a rare/weird/interesting case?
2. can this run on a 5090, or do I have to string together GPUs / use a souped-up Mac?
3. can I use it to write me multiple choice questions based on a topic of my choosing to keep my medical knowledge sharp? do you know of any non-local ways to do this effectively? | 2025-01-12T20:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hzwf8o/radiologist_copilot/ | commodores12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzwf8o | false | null | t3_1hzwf8o | /r/LocalLLaMA/comments/1hzwf8o/radiologist_copilot/ | false | false | self | 1 | null |
Llama 3.2 11b instruct vision on vllm aws g5.xlarge ec2 | 1 | [removed] | 2025-01-12T20:44:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hzwg44/llama_32_11b_instruct_vision_on_vllm_aws_g5xlarge/ | neozzeric | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzwg44 | false | null | t3_1hzwg44 | /r/LocalLLaMA/comments/1hzwg44/llama_32_11b_instruct_vision_on_vllm_aws_g5xlarge/ | false | false | self | 1 | null |
Current best local models for companionship? for random small talk for lonely people | 38 | Asking for a friend. | 2025-01-12T21:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hzwun4/current_best_local_models_for_companionship_for/ | MasterScrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzwun4 | false | null | t3_1hzwun4 | /r/LocalLLaMA/comments/1hzwun4/current_best_local_models_for_companionship_for/ | false | false | self | 38 | null |
Free Kokoro TTS API | 1 | [removed] | 2025-01-12T21:09:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hzx1cg/free_kokoro_tts_api/ | iamMess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzx1cg | false | null | t3_1hzx1cg | /r/LocalLLaMA/comments/1hzx1cg/free_kokoro_tts_api/ | false | false | self | 1 | null |
Will businesses ultimately prefer task-specific LLM agents or general-purpose models? | 1 | [removed] | 2025-01-12T21:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hzxr6a/will_businesses_ultimately_prefer_taskspecific/ | Reasonable_Jump_996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzxr6a | false | null | t3_1hzxr6a | /r/LocalLLaMA/comments/1hzxr6a/will_businesses_ultimately_prefer_taskspecific/ | false | false | self | 1 | null |
Phi4 14b vs all of my real-world use cases: A Short Review | 1 | [removed] | 2025-01-12T21:52:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hzy1gr/phi4_14b_vs_all_of_my_realworld_use_cases_a_short/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzy1gr | false | null | t3_1hzy1gr | /r/LocalLLaMA/comments/1hzy1gr/phi4_14b_vs_all_of_my_realworld_use_cases_a_short/ | false | false | self | 1 | null |
What’s next for AI-based automation in 2025? | 1 | [removed] | 2025-01-12T21:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hzy3gu/whats_next_for_aibased_automation_in_2025/ | Frosty_Programmer672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzy3gu | false | null | t3_1hzy3gu | /r/LocalLLaMA/comments/1hzy3gu/whats_next_for_aibased_automation_in_2025/ | false | false | self | 1 | null |
Mountain lion encounter video—very dark | 1 | [removed] | 2025-01-12T21:59:19 | https://www.reddit.com/gallery/1hzy79y | Jealous-Lychee6243 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hzy79y | false | null | t3_1hzy79y | /r/LocalLLaMA/comments/1hzy79y/mountain_lion_encounter_videovery_dark/ | false | false | 1 | null |
|
Search-o1: Agentic Search-Enhanced Large Reasoning Models - Renmin University of China | 50 | 2025-01-12T22:13:56 | https://search-o1.github.io/ | Singularian2501 | search-o1.github.io | 1970-01-01T00:00:00 | 0 | {} | 1hzyjsj | false | null | t3_1hzyjsj | /r/LocalLLaMA/comments/1hzyjsj/searcho1_agentic_searchenhanced_large_reasoning/ | false | false | default | 50 | null |
|
I forbade a model from using its own token predictions to choose the next word – QwQ 32b is adorably freaking out sometimes | 31 |
I set up a small experiment with QwQ-32B-Preview, a model known for its ability to reason and follow instructions. The idea was simple: it had to predict its next word without being allowed to rely on its own token predictions as an LLM.
The model started in confusion but soon shifted into self-analysis, hypothesis testing, and even philosophical contemplation. It was like watching it wrestle with its own constraints, occasionally freaking out in the most adorable ways.
Here is a link to the experiment:
https://shir-man.com/amibroken/
| 2025-01-12T22:31:24 | https://v.redd.it/0sijc7ke3nce1 | Shir_man | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hzyy6h | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/0sijc7ke3nce1/DASHPlaylist.mpd?a=1739313101%2CMWQ4ODAxNmJkYWNiNjdmZTUyZGMxMzNhNGExMjAwMjZlZGRiMjQ3ZTg1Nzk4YTBlNjA2ZTQzNmFmYTcyMWU5OA%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/0sijc7ke3nce1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1036, 'hls_url': 'https://v.redd.it/0sijc7ke3nce1/HLSPlaylist.m3u8?a=1739313101%2CNmExNjY5M2ExODJhOWM5MTRjNGU0ZmIyODFhNzgyMTI3ZDYyM2IwNjY4Y2VkYTcyYzg1ZWYzMjQxZmIzZmI5Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0sijc7ke3nce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1hzyy6h | /r/LocalLLaMA/comments/1hzyy6h/i_forbade_a_model_from_using_its_own_token/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'NWtucDc0aGUzbmNlMQJW0EyliUT-_L3Ce_WpeK6JjB__h_GT9gcIW1SOMMne', 'resolutions': [{'height': 155, 'url': 'https://external-preview.redd.it/NWtucDc0aGUzbmNlMQJW0EyliUT-_L3Ce_WpeK6JjB__h_GT9gcIW1SOMMne.png?width=108&crop=smart&format=pjpg&auto=webp&s=bfe74cfba3138642251953a376810ec5f8bb0daf', 'width': 108}, {'height': 311, 'url': 'https://external-preview.redd.it/NWtucDc0aGUzbmNlMQJW0EyliUT-_L3Ce_WpeK6JjB__h_GT9gcIW1SOMMne.png?width=216&crop=smart&format=pjpg&auto=webp&s=9973e189f95f58904f2711a186b1e54cd6483f13', 'width': 216}, {'height': 460, 'url': 'https://external-preview.redd.it/NWtucDc0aGUzbmNlMQJW0EyliUT-_L3Ce_WpeK6JjB__h_GT9gcIW1SOMMne.png?width=320&crop=smart&format=pjpg&auto=webp&s=4989117e57e188960e4a209cdcf42d1b95b7441f', 'width': 320}, {'height': 921, 'url': 'https://external-preview.redd.it/NWtucDc0aGUzbmNlMQJW0EyliUT-_L3Ce_WpeK6JjB__h_GT9gcIW1SOMMne.png?width=640&crop=smart&format=pjpg&auto=webp&s=d9a540463ab9962fcda8bb7f98ae4123534ccae0', 'width': 640}], 'source': {'height': 1276, 'url': 'https://external-preview.redd.it/NWtucDc0aGUzbmNlMQJW0EyliUT-_L3Ce_WpeK6JjB__h_GT9gcIW1SOMMne.png?format=pjpg&auto=webp&s=1da23739f8dcfecf790f404d5961f492d9771777', 'width': 886}, 'variants': {}}]} |
|
SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution | 14 | 2025-01-12T22:38:39 | https://github.com/InternLM/SWE-Fixer | Singularian2501 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hzz44r | false | null | t3_1hzz44r | /r/LocalLLaMA/comments/1hzz44r/swefixer_training_opensource_llms_for_effective/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'dfb60ulVoxlgEBDfP_2HjIUHbzjCNT1PfMrK1vc3VWs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7IU0NPKIyIxtqFcMeMTaEMBvLbn4HjBzM2rLWrfl5cg.jpg?width=108&crop=smart&auto=webp&s=6152f0c7f05863db0acee06f63c369c4c61dad4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7IU0NPKIyIxtqFcMeMTaEMBvLbn4HjBzM2rLWrfl5cg.jpg?width=216&crop=smart&auto=webp&s=0bf8caacbdcbcc744cff4650ea7f67128006620d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7IU0NPKIyIxtqFcMeMTaEMBvLbn4HjBzM2rLWrfl5cg.jpg?width=320&crop=smart&auto=webp&s=7a8eef5bcc6586ba08ad2f3fa58b7e49b25670a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7IU0NPKIyIxtqFcMeMTaEMBvLbn4HjBzM2rLWrfl5cg.jpg?width=640&crop=smart&auto=webp&s=a7ff734462a81c0a8142d7d1baafa4f4fe72ac1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7IU0NPKIyIxtqFcMeMTaEMBvLbn4HjBzM2rLWrfl5cg.jpg?width=960&crop=smart&auto=webp&s=88e9ae0e9eed77c0d7d77bf5c0729be7f81a92c2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7IU0NPKIyIxtqFcMeMTaEMBvLbn4HjBzM2rLWrfl5cg.jpg?width=1080&crop=smart&auto=webp&s=fecc0adb90ca96408d8d07606174c0973f97191c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7IU0NPKIyIxtqFcMeMTaEMBvLbn4HjBzM2rLWrfl5cg.jpg?auto=webp&s=a7882b753ae269fd0daa3cc2c1500a4ad3335692', 'width': 1200}, 'variants': {}}]} |
||
Talk to your data and automate it in the way you want! Would love to know what do you guys think? | 1 | 2025-01-12T22:49:27 | https://www.youtube.com/watch?v=FXs2Pu5rYTA | Sea-Assignment6371 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hzzcud | false | {'oembed': {'author_name': 'amin khorrami', 'author_url': 'https://www.youtube.com/@aminkhorrami1459', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/FXs2Pu5rYTA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="WaveQuery Demo"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FXs2Pu5rYTA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'WaveQuery Demo', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'} | t3_1hzzcud | /r/LocalLLaMA/comments/1hzzcud/talk_to_your_data_and_automate_it_in_the_way_you/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rzYnUz2P54Vo8cT3834O8WlAbDHA5f3CZLb6yo4bAvM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/PRY5b1m8B6gKZ90I42jcPpfNQs9xZuIRIBIXp50sSOk.jpg?width=108&crop=smart&auto=webp&s=d5fe3b8e7fa3f9640bc9eea2fa9212e3a50cb100', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/PRY5b1m8B6gKZ90I42jcPpfNQs9xZuIRIBIXp50sSOk.jpg?width=216&crop=smart&auto=webp&s=833aec2d966a11134ef8222d17748c9e6af87b3f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/PRY5b1m8B6gKZ90I42jcPpfNQs9xZuIRIBIXp50sSOk.jpg?width=320&crop=smart&auto=webp&s=c3cc7a7b54077fbb7cc2ed0b2f9fa623b0cdbb7c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/PRY5b1m8B6gKZ90I42jcPpfNQs9xZuIRIBIXp50sSOk.jpg?auto=webp&s=7cd290e2ebdef05976c607821ba1d15b303caca1', 'width': 480}, 'variants': {}}]} |
||
Server build question | 1 | [removed] | 2025-01-12T23:17:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hzzz9n/server_build_question/ | Weird_Bird1792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hzzz9n | false | null | t3_1hzzz9n | /r/LocalLLaMA/comments/1hzzz9n/server_build_question/ | false | false | self | 1 | null |
What’s likely for Llama4? | 31 | So with all the breakthroughs and changing opinions since Llama 3 dropped back in July, I’ve been wondering—what’s Meta got cooking next?
Not trying to make this a low-effort post, I’m honestly curious. Anyone heard any rumors or have any thoughts on where they might take the Llama series from here?
Would love to hear what y’all think! | 2025-01-12T23:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i004dd/whats_likely_for_llama4/ | SocialDinamo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i004dd | false | null | t3_1i004dd | /r/LocalLLaMA/comments/1i004dd/whats_likely_for_llama4/ | false | false | self | 31 | null |
Simple task (raw data to json) cant be accomplished by any local model? | 1 | [removed] | 2025-01-12T23:51:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i00p0p/simple_task_raw_data_to_json_cant_be_accomplished/ | mantafloppy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i00p0p | false | null | t3_1i00p0p | /r/LocalLLaMA/comments/1i00p0p/simple_task_raw_data_to_json_cant_be_accomplished/ | false | false | self | 1 | null |
Anyone worked with distributed inference on Llama.cpp? | 10 | I have it sort of working with:
build-rpc-cuda/bin/rpc-server -p 7000 (on the first gpu rig)
build-rpc-cuda/bin/rpc-server -p 7001 (on the second gpu rig)
build-rpc/bin/llama-cli -m ../model.gguf -p "Hello, my name is" --repeat-penalty 1.0 -n 64 --rpc 127.0.0.1:7000,127.0.0.1:7001 -ngl 99
This does distributed inference across the 2 machines, but I'm having to reload the entire model for each query.
I skimmed through the llama-cli -h and didn't see a way to make it keep the model loaded, or listen for connections instead of directly doing inference inside the command line.
Also skimmed through llama-server, which would allow keeping the model loaded and hosting an API, but it doesn't appear to support RPC servers.
I assume I am missing something, right?
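For what it's worth, recent llama.cpp builds appear to accept the same --rpc flag on llama-server, which would keep the model loaded and expose an API (unverified sketch; binary paths and hosts match the commands above):

build-rpc/bin/llama-server -m ../model.gguf --rpc 127.0.0.1:7000,127.0.0.1:7001 -ngl 99 --port 8080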
[https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md)
[https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc](https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc) | 2025-01-13T00:17:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i0182p/anyone_worked_with_distributed_inference_on/ | Conscious_Cut_6144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0182p | false | null | t3_1i0182p | /r/LocalLLaMA/comments/1i0182p/anyone_worked_with_distributed_inference_on/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
PSA: You can use Ollama to generate your git commit messages locally | 13 | Using git commit hooks you can ask any model from Ollama to generate a git commit message for you:
```
#!/usr/bin/env sh
# .git/hooks/prepare-commit-msg
# Make this file executable: chmod +x .git/hooks/prepare-commit-msg
echo "Running prepare-commit-msg hook"
COMMIT_MSG_FILE="$1"
# Get the staged diff
DIFF=$(git diff --cached)
# Generate a summary with ollama CLI and phi4 model
SUMMARY=$(
ollama run phi4 <<EOF
Generate a raw text commit message for the following diff.
Keep commit message concise and to the point.
Make the first line the title (100 characters max) and the rest the body:
$DIFF
EOF
)
if [ -f "$COMMIT_MSG_FILE" ]; then
# Save the AI generated summary to the commit message file
echo "$SUMMARY" >"$COMMIT_MSG_FILE"
# Append existing message if it exists
if [ -n "$EXISTING_MSG" ]; then
echo "" >>"$COMMIT_MSG_FILE"
echo "$EXISTING_MSG" >>"$COMMIT_MSG_FILE"
fi
fi
```
You can also use tools like [yek](https://github.com/mohsen1/yek) to put the entire repo plus the changes in the prompt to give the model more context for better messages
You can also cap the maximum time this should take with `--keep-alive` | 2025-01-13T00:32:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i01j4k/psa_you_can_use_ollama_to_generate_your_git/ | mehyay76 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i01j4k | false | null | t3_1i01j4k | /r/LocalLLaMA/comments/1i01j4k/psa_you_can_use_ollama_to_generate_your_git/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'PgM-nNj0HPlce3YHM8pg3JeQanR45Gb9zt332LHCFsE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7MDBSrVddoP5ujIj5zrleKNT2AAdHiq8O5bOJsVUzJE.jpg?width=108&crop=smart&auto=webp&s=60778c56bd0cd47c8460bfb202a879dc2e45fc8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7MDBSrVddoP5ujIj5zrleKNT2AAdHiq8O5bOJsVUzJE.jpg?width=216&crop=smart&auto=webp&s=7482b699360cb0cb266cb0c9c92d34a3a964773e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7MDBSrVddoP5ujIj5zrleKNT2AAdHiq8O5bOJsVUzJE.jpg?width=320&crop=smart&auto=webp&s=62a438ec45edbc6689329459e2a771eb672cb80e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7MDBSrVddoP5ujIj5zrleKNT2AAdHiq8O5bOJsVUzJE.jpg?width=640&crop=smart&auto=webp&s=556be6217de9184503771b4325243fec38c69498', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7MDBSrVddoP5ujIj5zrleKNT2AAdHiq8O5bOJsVUzJE.jpg?width=960&crop=smart&auto=webp&s=c6b835c666044baaa0c9b8412d91a45183851106', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7MDBSrVddoP5ujIj5zrleKNT2AAdHiq8O5bOJsVUzJE.jpg?width=1080&crop=smart&auto=webp&s=3b11ffcd437b4bc7583ca0036f178942b79d18e8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7MDBSrVddoP5ujIj5zrleKNT2AAdHiq8O5bOJsVUzJE.jpg?auto=webp&s=ed73ee4b8e537eecd22782000053140928d9ad2a', 'width': 1200}, 'variants': {}}]} |
Llama goes off the rails if you ask it for 5 odd numbers that don’t have the letter E in them | 534 | 2025-01-13T00:33:44 | Applemoi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i01k4s | false | null | t3_1i01k4s | /r/LocalLLaMA/comments/1i01k4s/llama_goes_off_the_rails_if_you_ask_it_for_5_odd/ | false | false | 534 | {'enabled': True, 'images': [{'id': 'HEPV1eW8ZzA7mOLSEVQfQCfYBKVxBfmtKiMR8__w8kk', 'resolutions': [{'height': 215, 'url': 'https://preview.redd.it/w5j543q9pnce1.jpeg?width=108&crop=smart&auto=webp&s=88265fc1cef51b776a68ed0ae3da1c5fbf54fbc7', 'width': 108}, {'height': 431, 'url': 'https://preview.redd.it/w5j543q9pnce1.jpeg?width=216&crop=smart&auto=webp&s=e837fe68ee5e2bf00b31731b6e7ebe5906def79c', 'width': 216}, {'height': 638, 'url': 'https://preview.redd.it/w5j543q9pnce1.jpeg?width=320&crop=smart&auto=webp&s=f2b88a3b7c9a6d2efb13bfb226463f04fd5a05fb', 'width': 320}, {'height': 1277, 'url': 'https://preview.redd.it/w5j543q9pnce1.jpeg?width=640&crop=smart&auto=webp&s=61adf904110b30f4cba98ecbd9c36a7462cf005f', 'width': 640}, {'height': 1916, 'url': 'https://preview.redd.it/w5j543q9pnce1.jpeg?width=960&crop=smart&auto=webp&s=2fdaa59fd881d72ba25b298a76fbe78409631431', 'width': 960}, {'height': 2156, 'url': 'https://preview.redd.it/w5j543q9pnce1.jpeg?width=1080&crop=smart&auto=webp&s=c9122381f99b7c76048e310571a4be8ee89f16b6', 'width': 1080}], 'source': {'height': 2408, 'url': 'https://preview.redd.it/w5j543q9pnce1.jpeg?auto=webp&s=5312d6ae2a22ec3d61fac7dd3ab97fe284927bce', 'width': 1206}, 'variants': {}}]} |
|||
Nvidia RTX Titan ADA Prototype | 0 | 2025-01-13T00:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i01qng/nvidia_rtx_titan_ada_prototype/ | FluxRBLX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i01qng | false | null | t3_1i01qng | /r/LocalLLaMA/comments/1i01qng/nvidia_rtx_titan_ada_prototype/ | false | false | 0 | null |
||
Looking for recent LLMs trained on ebooks or niche concepts | 1 | [removed] | 2025-01-13T00:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i01rqh/looking_for_recent_llms_trained_on_ebooks_or/ | oshikuru08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i01rqh | false | null | t3_1i01rqh | /r/LocalLLaMA/comments/1i01rqh/looking_for_recent_llms_trained_on_ebooks_or/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'C2ELu3KYxkT7Mil3B9pnh4_K1HJqoo7-6MPNJvLEnTI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PA2xCOboVpEZZl1UHjbblafX1LL1aTw5ZLol6mwrRSs.jpg?width=108&crop=smart&auto=webp&s=6fba25d1a52bbddb5945ebc38c958ffabfabe84e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PA2xCOboVpEZZl1UHjbblafX1LL1aTw5ZLol6mwrRSs.jpg?width=216&crop=smart&auto=webp&s=afe8757a0597de5fafb92875430c405655b3b633', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PA2xCOboVpEZZl1UHjbblafX1LL1aTw5ZLol6mwrRSs.jpg?width=320&crop=smart&auto=webp&s=bf4a1c5972d3212c057c78880934a06ddc95cd42', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PA2xCOboVpEZZl1UHjbblafX1LL1aTw5ZLol6mwrRSs.jpg?width=640&crop=smart&auto=webp&s=5996806bde8a1a368450aaf06599f6672b7a9f52', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PA2xCOboVpEZZl1UHjbblafX1LL1aTw5ZLol6mwrRSs.jpg?width=960&crop=smart&auto=webp&s=1c4f85f11bf53d2ff295f807153fa351d5567f49', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PA2xCOboVpEZZl1UHjbblafX1LL1aTw5ZLol6mwrRSs.jpg?width=1080&crop=smart&auto=webp&s=2b6879e0e1b18755753ad9bf1cd3f00922203a10', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PA2xCOboVpEZZl1UHjbblafX1LL1aTw5ZLol6mwrRSs.jpg?auto=webp&s=2a64a821b6566dd57d973f8df6b4f45938c0d3dd', 'width': 1200}, 'variants': {}}]} |
Speaches v0.6.0 - Kokoro-82M and PiperTTS API endpoints | 109 | Hey everyone!
Project: [https://github.com/speaches-ai/speaches](https://github.com/speaches-ai/speaches)
Checkout the documentation to get started: [https://speaches-ai.github.io/speaches/](https://speaches-ai.github.io/speaches/)
I just released Speaches v0.6.0 (previously named `faster-whisper-server`). The main feature added in this release is support for Piper and Kokoro Text-to-Speech models. Below is a full feature list:
* GPU and CPU support.
* [Deployable via Docker Compose / Docker](https://speaches-ai.github.io/speaches/installation/)
* [Highly configurable](https://speaches-ai.github.io/speaches/configuration/)
* OpenAI API compatible. All tools and SDKs that work with OpenAI's API should work with `speaches`.
* Streaming support (transcription is sent via SSE as the audio is transcribed. You don't need to wait for the audio to fully be transcribed before receiving it).
* LocalAgreement2 ([paper](https://aclanthology.org/2023.ijcnlp-demo.3.pdf) | [original implementation](https://github.com/ufal/whisper_streaming)) algorithm is used for live transcription.
* Live transcription support (audio is sent over a WebSocket and transcribed as it's generated).
* Dynamic model loading/offloading. In the request, specify which model you want to use. It will be loaded automatically and unloaded after a period of inactivity.
* Text-to-Speech via `kokoro` (ranked #1 in the [TTS Arena](https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena)) and `piper` models (usage sketch after this list).
* [Coming soon](https://github.com/speaches-ai/speaches/issues/231): Audio generation (chat completions endpoint)
* Generate a spoken audio summary of a body of text (text in, audio out)
* Perform sentiment analysis on a recording (audio in, text out)
* Async speech to speech interactions with a model (audio in, audio out)
* [Coming soon](https://github.com/speaches-ai/speaches/issues/115): Realtime API
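A hedged example of calling the TTS endpoint with the OpenAI Python SDK (the port, model id, and voice id below are assumptions; check the configuration docs for the real identifiers):

```python
# Sketch: OpenAI-compatible text-to-speech request against a local speaches server.
# The base_url port, model id, and voice id are assumptions -- see the docs.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="does-not-matter")
with client.audio.speech.with_streaming_response.create(
    model="hexgrad/Kokoro-82M",   # assumed model identifier
    voice="af",                   # assumed Kokoro voice identifier
    input="Hello from speaches!",
) as response:
    response.stream_to_file("hello.mp3")  # save the synthesized audio
```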
TTS functionality demo
https://reddit.com/link/1i02hpf/video/xfqgsah1xnce1/player
NOTE: The published hugging face space is currently broken, but the GradioUI should work when you spin it up locally using Docker | 2025-01-13T01:20:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i02hpf/speaches_v060_kokoro82m_and_pipertts_api_endpoints/ | fedirz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i02hpf | false | null | t3_1i02hpf | /r/LocalLLaMA/comments/1i02hpf/speaches_v060_kokoro82m_and_pipertts_api_endpoints/ | false | false | 109 | {'enabled': False, 'images': [{'id': 'KKcLQbIIo_a4cgzMo61sAEUe8ZwqYENV6r420HZlqi4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sAX_ZIl7JN3or34mCkc-WHLID4UPjOUoH_urgQbSt28.jpg?width=108&crop=smart&auto=webp&s=76294deec4bfc08f564ce458e7fc33276cd5158b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sAX_ZIl7JN3or34mCkc-WHLID4UPjOUoH_urgQbSt28.jpg?width=216&crop=smart&auto=webp&s=1e0e3fa371d682d4c84983667797abff8c6774d7', 'width': 216}, {'height': 321, 'url': 'https://external-preview.redd.it/sAX_ZIl7JN3or34mCkc-WHLID4UPjOUoH_urgQbSt28.jpg?width=320&crop=smart&auto=webp&s=c79efabee776f66482bae6ca13d9849805dcd0c0', 'width': 320}, {'height': 642, 'url': 'https://external-preview.redd.it/sAX_ZIl7JN3or34mCkc-WHLID4UPjOUoH_urgQbSt28.jpg?width=640&crop=smart&auto=webp&s=5015e5c1c0ec50d741510dd22d90c2586f24be16', 'width': 640}], 'source': {'height': 962, 'url': 'https://external-preview.redd.it/sAX_ZIl7JN3or34mCkc-WHLID4UPjOUoH_urgQbSt28.jpg?auto=webp&s=f9cbee7fc8483ead0a3e3660d2d61f3586f4754e', 'width': 958}, 'variants': {}}]} |
|
llama.cpp vs mistral.rs for in-app local inference | 1 | [removed] | 2025-01-13T01:41:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i02wcw/llamacpp_vs_mistralrs_for_inapp_local_inference/ | feznyng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i02wcw | false | null | t3_1i02wcw | /r/LocalLLaMA/comments/1i02wcw/llamacpp_vs_mistralrs_for_inapp_local_inference/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Gz5SFttAMmcoyHDFt5QIu0TBCHJUXRYdQgJIeXXhiFA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tGetFWaau3ayXcWxS-r9PMw9n-vXsKiHPbz3fRLkXsE.jpg?width=108&crop=smart&auto=webp&s=1c385b58091e153ac85d7776a9bc102b60496901', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tGetFWaau3ayXcWxS-r9PMw9n-vXsKiHPbz3fRLkXsE.jpg?width=216&crop=smart&auto=webp&s=fec6cd2dabb56ce52276e0778dd865dafe139dcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tGetFWaau3ayXcWxS-r9PMw9n-vXsKiHPbz3fRLkXsE.jpg?width=320&crop=smart&auto=webp&s=1931086bf4b9d955372b7d5c851cb3f07e9628e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tGetFWaau3ayXcWxS-r9PMw9n-vXsKiHPbz3fRLkXsE.jpg?width=640&crop=smart&auto=webp&s=01a880647fe698433dc6316a8c85bc284a8140fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tGetFWaau3ayXcWxS-r9PMw9n-vXsKiHPbz3fRLkXsE.jpg?width=960&crop=smart&auto=webp&s=4c3e085ea84fde6b62e91daba3aeec3a6d10a6ad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tGetFWaau3ayXcWxS-r9PMw9n-vXsKiHPbz3fRLkXsE.jpg?width=1080&crop=smart&auto=webp&s=2131b72326a32f217ae828f5c2188871ab965c05', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/tGetFWaau3ayXcWxS-r9PMw9n-vXsKiHPbz3fRLkXsE.jpg?auto=webp&s=5895082c9467422e3a8b9acfb3ae9ebe6e590c23', 'width': 4096}, 'variants': {}}]} |
Help Me Build a Frankenstein Hybrid AI Setup for LLMs and Big Data | 1 | [removed] | 2025-01-13T02:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i03ekj/help_me_build_a_frankenstein_hybrid_ai_setup_for/ | cloudcircuitry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i03ekj | false | null | t3_1i03ekj | /r/LocalLLaMA/comments/1i03ekj/help_me_build_a_frankenstein_hybrid_ai_setup_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KNAUv4v8qS3PNd9J3AsUwtVWWrHJR2u7XYB9xNAIt3c', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=108&crop=smart&auto=webp&s=1baf133bb163d748fc24979ab0b3bd4c712feec5', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=216&crop=smart&auto=webp&s=848e3580231ce0a5a220dec4e726914d5fe9677b', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=320&crop=smart&auto=webp&s=204ffeec0ea8adb53eb1faad138ee1d56106f717', 'width': 320}, {'height': 367, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=640&crop=smart&auto=webp&s=47d89b09e36cd6d4a00e7efcdb650d72292ba6ce', 'width': 640}, {'height': 551, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=960&crop=smart&auto=webp&s=ab5864bcb2eeb2d2ee8798d016d03a6eb515f4c0', 'width': 960}, {'height': 620, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=1080&crop=smart&auto=webp&s=2677627d24421fef9004df283b70dfdd4abd6cf5', 'width': 1080}], 'source': {'height': 958, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?auto=webp&s=8ab26763d9cc09dff48a56b1719ed3040c54375f', 'width': 1668}, 'variants': {}}]} |
Help Me Build a Frankenstein Hybrid AI Setup for LLMs, Big Data, and Mobile App Testing | 1 | [removed] | 2025-01-13T02:18:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i03m1t/help_me_build_a_frankenstein_hybrid_ai_setup_for/ | thepoet24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i03m1t | false | null | t3_1i03m1t | /r/LocalLLaMA/comments/1i03m1t/help_me_build_a_frankenstein_hybrid_ai_setup_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KNAUv4v8qS3PNd9J3AsUwtVWWrHJR2u7XYB9xNAIt3c', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=108&crop=smart&auto=webp&s=1baf133bb163d748fc24979ab0b3bd4c712feec5', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=216&crop=smart&auto=webp&s=848e3580231ce0a5a220dec4e726914d5fe9677b', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=320&crop=smart&auto=webp&s=204ffeec0ea8adb53eb1faad138ee1d56106f717', 'width': 320}, {'height': 367, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=640&crop=smart&auto=webp&s=47d89b09e36cd6d4a00e7efcdb650d72292ba6ce', 'width': 640}, {'height': 551, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=960&crop=smart&auto=webp&s=ab5864bcb2eeb2d2ee8798d016d03a6eb515f4c0', 'width': 960}, {'height': 620, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?width=1080&crop=smart&auto=webp&s=2677627d24421fef9004df283b70dfdd4abd6cf5', 'width': 1080}], 'source': {'height': 958, 'url': 'https://external-preview.redd.it/xEdAj40uKvDnAcBc0jKt2IIp7nZETAModhx3kDFXDYg.jpg?auto=webp&s=8ab26763d9cc09dff48a56b1719ed3040c54375f', 'width': 1668}, 'variants': {}}]} |
How to train Local LLM for ISO 9001, 17025 Laboratory Competence and Quality? | 1 | [removed] | 2025-01-13T02:38:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i03zgt/how_to_train_local_llm_for_iso_9001_17025/ | powerflower_khi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i03zgt | false | null | t3_1i03zgt | /r/LocalLLaMA/comments/1i03zgt/how_to_train_local_llm_for_iso_9001_17025/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fFoTiXH1uUm-8BqDv370PoOxe9HA8xGyKPIYh4dMGdc', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/xoo1S4zJtJVPBigS8fpqpBZiFOdETKyxe8AQsOlwC_Y.jpg?width=108&crop=smart&auto=webp&s=af8656747ce080e5a1c6970f9d9d243aaff8717b', 'width': 108}, {'height': 84, 'url': 'https://external-preview.redd.it/xoo1S4zJtJVPBigS8fpqpBZiFOdETKyxe8AQsOlwC_Y.jpg?width=216&crop=smart&auto=webp&s=3fd6d4c2d573b05dba079bdc724e30275e2b0e34', 'width': 216}, {'height': 125, 'url': 'https://external-preview.redd.it/xoo1S4zJtJVPBigS8fpqpBZiFOdETKyxe8AQsOlwC_Y.jpg?width=320&crop=smart&auto=webp&s=e4a8a1b0ec84aaa73414be7eda4d65b3e47e2e58', 'width': 320}, {'height': 251, 'url': 'https://external-preview.redd.it/xoo1S4zJtJVPBigS8fpqpBZiFOdETKyxe8AQsOlwC_Y.jpg?width=640&crop=smart&auto=webp&s=56a900046223dc3f587b6a4755f36ede7ef661ed', 'width': 640}, {'height': 377, 'url': 'https://external-preview.redd.it/xoo1S4zJtJVPBigS8fpqpBZiFOdETKyxe8AQsOlwC_Y.jpg?width=960&crop=smart&auto=webp&s=be349ceaf386e0f3dfe4d0d969edb87325f02d94', 'width': 960}, {'height': 424, 'url': 'https://external-preview.redd.it/xoo1S4zJtJVPBigS8fpqpBZiFOdETKyxe8AQsOlwC_Y.jpg?width=1080&crop=smart&auto=webp&s=2390122ec6ac3c8a14f8eac5d75ea987c0af8ea3', 'width': 1080}], 'source': {'height': 472, 'url': 'https://external-preview.redd.it/xoo1S4zJtJVPBigS8fpqpBZiFOdETKyxe8AQsOlwC_Y.jpg?auto=webp&s=27ed5a6c6ef452b433a23206d8b87b6fb8428a4d', 'width': 1200}, 'variants': {}}]} |
How many ‘r’ in strawberry? smallthinker:latest, Endless loop of output but sometimes it stops. | 1 | [removed] | 2025-01-13T02:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i0436e/how_many_r_in_strawberry_smallthinkerlatest/ | johncarpen1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0436e | false | null | t3_1i0436e | /r/LocalLLaMA/comments/1i0436e/how_many_r_in_strawberry_smallthinkerlatest/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'LtKtt6txZ-QrVhG54gL73uTj3IfQTG0w8wIIdqF-0s0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=108&crop=smart&auto=webp&s=f1a83fdddaddde93eaef101228f9b29048246dc3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=216&crop=smart&auto=webp&s=d4835cc298fc9315c676ba0083c28e3ecc4b3ea3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=320&crop=smart&auto=webp&s=e378760f26672dec8cfb3fdd0b39150134ad2f06', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=640&crop=smart&auto=webp&s=d7b0f3059faf83cc71a5c5ea692748e0630dc6bd', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=960&crop=smart&auto=webp&s=86b6adf32262d8f8d6f075ed19cf11eb88ff2392', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=1080&crop=smart&auto=webp&s=eab7e1d537052c5bc4bd3be68fcf701fffd965db', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?auto=webp&s=42118dbbd4702474126fe588763f8adf0b8be142', 'width': 1200}, 'variants': {}}]} |
|
PS5 for inference | 93 | For ~$350 for the whole system, is there anything better? This thing packs 3060-tier TFLOPS and 16GB of unified GDDR6 with ~450GB/s of bandwidth, on a 350W PSU. Not to mention that it already sits in so many people's living rooms, and I'm not using any LLMs while gaming anyway, so the PS5 could actually be dual-purpose.
Currently looking into how I could run LLMs on the PS5; if anyone has any leads, let me know.
I wasn't aware that systems with unified RAM using GDDR actually existed, let alone that AMD did it 5 years ago, and so they could release their own DIGITS based on Strix Halo but with GDDR VRAM instead of DDR... | 2025-01-13T03:07:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i04iqo/ps5_for_inference/ | Chemical_Mode2736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i04iqo | false | null | t3_1i04iqo | /r/LocalLLaMA/comments/1i04iqo/ps5_for_inference/ | false | false | self | 93 | null |
Updated Vector Companion to include multi-agent chatting, increasing the number of agents to 4 in Chat Mode and 1 in Analysis Mode. | 18 | 2025-01-13T03:16:26 | https://v.redd.it/4824oh98ioce1 | swagonflyyyy | /r/LocalLLaMA/comments/1i04oze/updated_vector_companion_to_include_multiagent/ | 1970-01-01T00:00:00 | 0 | {} | 1i04oze | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/4824oh98ioce1/DASHPlaylist.mpd?a=1739459790%2CM2EyNjJmMGQxNzJhMDY3YWZjNzAyZThjZTAxYzI3MWRlZmNmZDk3NjljOGE1MjUxNGIxZGZiNzE1ZjhhOGIxZg%3D%3D&v=1&f=sd', 'duration': 420, 'fallback_url': 'https://v.redd.it/4824oh98ioce1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/4824oh98ioce1/HLSPlaylist.m3u8?a=1739459790%2CYWQ2OTExNzgzODA5NDQ4ZGE4OGM0ODk0MzA2YTA1NzA3ZTEyZjU1YmJiNjJkMTYxNjNjYTNlMmU1NTRhOTg1OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4824oh98ioce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1i04oze | /r/LocalLLaMA/comments/1i04oze/updated_vector_companion_to_include_multiagent/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR.png?width=108&crop=smart&format=pjpg&auto=webp&s=945a514b5d9415e14c04ebedc036d37ec923baa4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR.png?width=216&crop=smart&format=pjpg&auto=webp&s=9a05204342d8757bb0076dea9b247dbe9c0526e1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR.png?width=320&crop=smart&format=pjpg&auto=webp&s=7ba7bd467155ecfa85fb9bf5878f4e54ac909ada', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR.png?width=640&crop=smart&format=pjpg&auto=webp&s=7a33c6886028a66a67507e1aaa3a895ab2bbb183', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR.png?width=960&crop=smart&format=pjpg&auto=webp&s=2f0287525cd0cbab50fd34491c46efb8ce87ff44', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=08c90c1c1c0975ebbcd41257d97e1292a4b31a7d', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eXZzenFnOThpb2NlMdotc4n4PuHiprd-_0UOPMgBjCQcLOw7jb0_FYlyDNdR.png?format=pjpg&auto=webp&s=20a7583f5f40b1757c6e1def80559841697fc037', 'width': 1280}, 'variants': {}}]} |
What is the cheapest way to run Deepseek on a US Hosted company? | 25 | I am a bit concerned about the privacy policies, especially considering PII data. I love how DeepSeek's pricing is laid out on their website, but has anyone tried loading their model with a service provider to see what cost structure works? If so, I would like to hear more. Thank you!
Phi-4 vs. Llama3.3 benchmarked | 1 | [removed] | 2025-01-13T03:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i05a9s/phi4_vs_llama33_benchmarked/ | AIForOver50Plus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i05a9s | false | null | t3_1i05a9s | /r/LocalLLaMA/comments/1i05a9s/phi4_vs_llama33_benchmarked/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bl9m7XxCmTqqybxmTAPqaLPMAywmwgwPpliLCwHu3UM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?width=108&crop=smart&auto=webp&s=a9e263ba08159ca2e42c646a81a1225396071057', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?width=216&crop=smart&auto=webp&s=0f73ac3353b56a393f006a555b03961f18487aa4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?width=320&crop=smart&auto=webp&s=dafbd2042e85554a35678f75c91d4b01c519ffed', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?auto=webp&s=959be4e65f28e9d29158e00536072308051b95a5', 'width': 480}, 'variants': {}}]} |
Janus goes off the rails if you say hello after asking it to generate an image | 7 | 2025-01-13T04:17:19 | WordyBug | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i05sl8 | false | null | t3_1i05sl8 | /r/LocalLLaMA/comments/1i05sl8/janus_goes_off_the_rails_if_you_say_hello_after/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'wM0MlRqJHHHAV7oBptL1fv46S-qrGgY8pZZYbfa0hC4', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/u2znn5m2toce1.png?width=108&crop=smart&auto=webp&s=de3fbbe239fce3716c17954d8514b0e3ab37d357', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/u2znn5m2toce1.png?width=216&crop=smart&auto=webp&s=4f087b3f0bcd0b3b64f525021f6c5d99270c5ad2', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/u2znn5m2toce1.png?width=320&crop=smart&auto=webp&s=c9fc7be2770aa1b8b938326cccf9184b02c9f45e', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/u2znn5m2toce1.png?width=640&crop=smart&auto=webp&s=650f2adc5982d20ab8b754318ef2a625a0c38f10', 'width': 640}], 'source': {'height': 1694, 'url': 'https://preview.redd.it/u2znn5m2toce1.png?auto=webp&s=6c32df5d408dbd54de7c4a006321a926d10df40a', 'width': 822}, 'variants': {}}]} |
HW requirements for fine tuning Llama3.3 | 0 | I am thinking of purchasing a server with a 16-core AMD CPU, two Nvidia RTX A6000 Ada GPU cards, and 128GB of system RAM. Will this be sufficient? If not, what more will I need? | 2025-01-13T04:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/1i05xmd/hw_requirements_for_fine_tuning_llama33/ | Ok_Ostrich_8845 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i05xmd | false | null | t3_1i05xmd | /r/LocalLLaMA/comments/1i05xmd/hw_requirements_for_fine_tuning_llama33/ | false | false | self | 0 | null
CharacterAI like ASR model | 0 | For some reason I feel like CharacterAI has the best ASR model out there.
As it is:
- Multilanguage
- Extremely fast (ASR -> TTS end to end takes 2 seconds)
What do you guys think they use under the hood? Or is it just Whisper V3 Turbo running on many 4090 instances? (And for free?)
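(No inside knowledge here, but for comparison, an open Whisper pipeline takes only a few lines with faster-whisper; the model size and file name below are just examples.)

```python
# Hypothetical comparison point: Whisper large-v3 served via faster-whisper.
# Model size and audio path are placeholders, not anything CharacterAI confirmed.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# transcribe() returns a lazy generator of segments plus detected-language info
segments, info = model.transcribe("utterance.wav", beam_size=1)
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```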
| 2025-01-13T04:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i067pg/characterai_like_asr_model/ | FerLuisxd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i067pg | false | null | t3_1i067pg | /r/LocalLLaMA/comments/1i067pg/characterai_like_asr_model/ | false | false | self | 0 | null |
Training AI models might not need enormous data centres | 0 | 2025-01-13T05:00:05 | https://www.economist.com/science-and-technology/2025/01/08/training-ai-models-might-not-need-enormous-data-centres | mattraj | economist.com | 1970-01-01T00:00:00 | 0 | {} | 1i06ix6 | false | null | t3_1i06ix6 | /r/LocalLLaMA/comments/1i06ix6/training_ai_models_might_not_need_enormous_data/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'DLwZx7z-HNVjQQA6brDoJv21eux7OvUN7IcZS8ZhwPs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/12MgXTaAu9DkSAoTjKzcZlEx_SqlGW1lPXnHhmnsH34.jpg?width=108&crop=smart&auto=webp&s=1b099a75e0c62e0dc24e2fff58277011a31e49a3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/12MgXTaAu9DkSAoTjKzcZlEx_SqlGW1lPXnHhmnsH34.jpg?width=216&crop=smart&auto=webp&s=f4b0d61b22430c7e83f18ca57c642902460c8d2d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/12MgXTaAu9DkSAoTjKzcZlEx_SqlGW1lPXnHhmnsH34.jpg?width=320&crop=smart&auto=webp&s=23a81dfc716a34c5b2a6b1d4a69c978c2c72a782', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/12MgXTaAu9DkSAoTjKzcZlEx_SqlGW1lPXnHhmnsH34.jpg?width=640&crop=smart&auto=webp&s=4137009b10c4262e3b8f22b40b600cac4a6586cc', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/12MgXTaAu9DkSAoTjKzcZlEx_SqlGW1lPXnHhmnsH34.jpg?width=960&crop=smart&auto=webp&s=14c3397e23409395b19d3f21e75ef1cddb25c74a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/12MgXTaAu9DkSAoTjKzcZlEx_SqlGW1lPXnHhmnsH34.jpg?width=1080&crop=smart&auto=webp&s=6d168e136c954b870bffa1cb95a1f779f53596b1', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/12MgXTaAu9DkSAoTjKzcZlEx_SqlGW1lPXnHhmnsH34.jpg?auto=webp&s=0b0a715904d4f4301a0672bbf8e00d5001c3a17f', 'width': 1280}, 'variants': {}}]} |
How is Kokoro TTS so good with so few parameters? | 135 | As I understand it, Kokoro TTS is StyleTTS 2 with some modifications to the model architecture, trained mainly on outputs from OpenAI and ElevenLabs. But the results are much more impressive than StyleTTS with only 82M params.
Is it that training on a sufficiently good mix of synthetic data gives you superior results?
Or is there something hidden in the architecture changes that unlocked this new potential?
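For anyone who wants to poke at it, here is a minimal sketch. It assumes the community `kokoro` pip package and the `af_bella` voice; both are assumptions to verify against the model card, since the packaging has changed over time.

```python
# Minimal sketch -- assumes the `kokoro` pip package and the `af_bella` voice id.
import soundfile as sf
from kokoro import KPipeline

pipeline = KPipeline(lang_code="a")  # "a" = American English
generator = pipeline("An 82M-parameter model punching well above its weight.", voice="af_bella")
for i, (graphemes, phonemes, audio) in enumerate(generator):
    sf.write(f"kokoro_{i}.wav", audio, 24000)  # Kokoro outputs 24 kHz audio
```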
https://huggingface.co/hexgrad/Kokoro-82M | 2025-01-13T05:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i06mew/how_is_kokoro_tts_so_good_with_so_few_parameters/ | JealousAmoeba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i06mew | false | null | t3_1i06mew | /r/LocalLLaMA/comments/1i06mew/how_is_kokoro_tts_so_good_with_so_few_parameters/ | false | false | self | 135 | {'enabled': False, 'images': [{'id': 'TL8xIUiXgJg5YjryMYhj7JiBtqOghnN47_mvdxSWYzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=108&crop=smart&auto=webp&s=c44a83d5fab77c813216e5454c6fba07bfb55e15', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=216&crop=smart&auto=webp&s=bb8032866f6a8609550af1ac69ccea6df3761f92', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=320&crop=smart&auto=webp&s=7f990de0136d4482b7b3bcd05bda7d1723859680', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=640&crop=smart&auto=webp&s=b75663383244e2aa5f5fcf0207756c5dc28fb51b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=960&crop=smart&auto=webp&s=7f200c8a1257ecccf20195dc5abffaaeeb16f10a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=1080&crop=smart&auto=webp&s=9a5faaa15c9e5fde7b616979aadc6a151dfa87b0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?auto=webp&s=c3c1958b6cc380e316d46b3fe9508529724694d5', 'width': 1200}, 'variants': {}}]} |
Babel Benchmark: Can You Score Higher Than LLaMA 3.2? | 4 | One thing that’s been on my mind is how **AI benchmarks** tend to focus on where LLMs fall short compared to humans. They test models on tasks outside of their "natural habitat," like reasoning about the physical world. But what if we flipped that narrative? What if we tested **where LLMs are already superhuman**?
That’s how **Babel Benchmark** came to be.
[Babel Bench](https://preview.redd.it/tfesddmlzoce1.png?width=1363&format=png&auto=webp&s=9d5c96f451bca0926cb9bb5796bebabcb5ed6eda)
It’s a simple test (sketched in code right after this list):
1. **Generate a random English sentence.**
2. **Translate each word into a different language using native scripts.**
3. **Ask someone to decode the original sentence.**
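In code, the procedure looks roughly like this. This is a hypothetical sketch, not the repo's actual implementation; `translate()` and `decode()` are placeholders for whatever MT model and LLM you plug in.

```python
import random

# Hypothetical sketch of the Babel procedure; `translate(word, lang)` and
# `decode(text)` are placeholders, not part of the actual repo.
LANGS = ["ja", "ar", "hi", "ru", "el", "ko", "th", "he"]

def babelize(sentence: str, translate) -> str:
    """Render each word of an English sentence in a randomly chosen language."""
    return " ".join(translate(w, random.choice(LANGS)) for w in sentence.split())

def score(decode, sentences: list[str], translate) -> float:
    """Fraction of sentences the model recovers from their mixed-script versions."""
    hits = sum(
        decode(babelize(s, translate)).strip().lower() == s.strip().lower()
        for s in sentences
    )
    return hits / len(sentences)
```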
Turns out, **LLMs crush this task**, while **humans struggle**. (At least, I did! Maybe polyglots will fare better.) It highlights something important: **Text is the LLM’s natural habitat**, and in that domain, they’re already **miles ahead of us**. Sure, LLMs might struggle with interacting in the physical world, but when it comes to **language comprehension at scale**, humans can’t keep up.
This project isn’t about making humans look bad — it’s about **shifting the conversation**. Instead of obsessing over where LLMs aren’t at human level, maybe it’s time to acknowledge **where they’re already beyond human capabilities**.
The challenge is out there: **Can you score higher than LLaMA 3.2?**
Try it out, test your own models, and share your scores!
[https://github.com/latent-variable/Babel\_Benchmark](https://github.com/latent-variable/Babel_Benchmark)
[Babel Benchmark scores](https://preview.redd.it/tszgfv5fzoce1.png?width=2400&format=png&auto=webp&s=8796b5aa2903c652f48eb4260e7a6ae29d422fd9)
A lot of benchmarks today feel like they’re designed to **trip LLMs up** — testing things they aren’t naturally good at (like reasoning about physical-world tasks). I’m not saying that’s a bad thing. But **language is where LLMs thrive**, and I think it’s worth highlighting their unique strengths.
Would love to see how **polyglots score** on this and how different models compare! Let me know what you think. | 2025-01-13T05:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i06nnx/babel_benchmark_can_you_score_higher_than_llama_32/ | onil_gova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i06nnx | false | null | t3_1i06nnx | /r/LocalLLaMA/comments/1i06nnx/babel_benchmark_can_you_score_higher_than_llama_32/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'o7-5syL7AfaxmpAMKGkVCJKoxLZ382z2FLYF3l8W8RQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/juE6rcRbKQoraUdoZFE4uJ_wsXgym817B-s-6KviRKg.jpg?width=108&crop=smart&auto=webp&s=c2835562726fae2f3eedee0abe08fc14de3ee495', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/juE6rcRbKQoraUdoZFE4uJ_wsXgym817B-s-6KviRKg.jpg?width=216&crop=smart&auto=webp&s=e95ab031f1c76473019c76197240f1ed280d1a06', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/juE6rcRbKQoraUdoZFE4uJ_wsXgym817B-s-6KviRKg.jpg?width=320&crop=smart&auto=webp&s=a081cfc5bf47deb7d1a5095ec2bf198163cc02dc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/juE6rcRbKQoraUdoZFE4uJ_wsXgym817B-s-6KviRKg.jpg?width=640&crop=smart&auto=webp&s=c01b73622ebf7c952ba674765901f85db5438eb7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/juE6rcRbKQoraUdoZFE4uJ_wsXgym817B-s-6KviRKg.jpg?width=960&crop=smart&auto=webp&s=bf6a503173a32228fa644b7c4038c09d52e78735', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/juE6rcRbKQoraUdoZFE4uJ_wsXgym817B-s-6KviRKg.jpg?width=1080&crop=smart&auto=webp&s=1fcad2c359fc9666abe82f1bd9eda678a541bad4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/juE6rcRbKQoraUdoZFE4uJ_wsXgym817B-s-6KviRKg.jpg?auto=webp&s=1de0e236c3a13da6ac5f9870fd26d3610aa4a083', 'width': 1200}, 'variants': {}}]} |
Local Omni or multimodal model recommendations? | 1 | I took a break for about 6 months from being actively involved in development in order to do some things IRL. I remember there was promising work being done on multimodal and omni models.
Hugging Face is a valuable resource, but is literally a popularity contest. So I was wondering if anyone has kept tabs in this space and can recommend models for experimentation.
Thanks! | 2025-01-13T05:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i06qjt/local_omni_or_multimodal_model_recommendations/ | ServeAlone7622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i06qjt | false | null | t3_1i06qjt | /r/LocalLLaMA/comments/1i06qjt/local_omni_or_multimodal_model_recommendations/ | false | false | self | 1 | null |
Can I safely rely on the pretraining loss? | 1 | [removed] | 2025-01-13T05:50:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i07ccc/can_i_safely_rely_on_the_pretraining_loss/ | Capable-Bunch-9357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i07ccc | false | null | t3_1i07ccc | /r/LocalLLaMA/comments/1i07ccc/can_i_safely_rely_on_the_pretraining_loss/ | false | false | self | 1 | null |
when can we expect meta to release the LCM models (the ones discussed in "patches scale better than tokens")? | 2 | basically just the title | 2025-01-13T05:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i07ej5/when_can_we_expect_meta_to_release_the_lcm_models/ | Relevant-Ad9432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i07ej5 | false | null | t3_1i07ej5 | /r/LocalLLaMA/comments/1i07ej5/when_can_we_expect_meta_to_release_the_lcm_models/ | false | false | self | 2 | null
MoE models on Android? | 1 | [removed] | 2025-01-13T05:57:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i07fyt/moe_models_on_android/ | Warm-Economics3749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i07fyt | false | null | t3_1i07fyt | /r/LocalLLaMA/comments/1i07fyt/moe_models_on_android/ | false | false | self | 1 | null |
LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs | 17 | 2025-01-13T06:12:37 | https://arxiv.org/abs/2501.06186 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1i07ogx | false | null | t3_1i07ogx | /r/LocalLLaMA/comments/1i07ogx/llamavo1_rethinking_stepbystep_visual_reasoning/ | false | false | default | 17 | null |
How to Build a Multi-Agent System with Groq, Ollama, and RAG for PDF Querying? | 1 | [removed] | 2025-01-13T06:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i07sgs/how_to_build_a_multiagent_system_with_groq_ollama/ | Blazen_Lazarus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i07sgs | false | null | t3_1i07sgs | /r/LocalLLaMA/comments/1i07sgs/how_to_build_a_multiagent_system_with_groq_ollama/ | false | false | self | 1 | null |
Help! How to Build a Multi-Agent System with Groq, Ollama, and RAG for PDF Querying? | 1 | [removed] | 2025-01-13T06:23:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i07u75/help_how_to_build_a_multiagent_system_with_groq/ | Blazen_Lazarus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i07u75 | false | null | t3_1i07u75 | /r/LocalLLaMA/comments/1i07u75/help_how_to_build_a_multiagent_system_with_groq/ | false | false | self | 1 | null |
How to Build a Multi-Agent System with Groq, Ollama, and RAG for PDF Querying? | 1 | [removed] | 2025-01-13T06:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/1i07zfz/how_to_build_a_multiagent_system_with_groq_ollama/ | Blazen_Lazarus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i07zfz | false | null | t3_1i07zfz | /r/LocalLLaMA/comments/1i07zfz/how_to_build_a_multiagent_system_with_groq_ollama/ | false | false | self | 1 | null |
Incredibly slow any tips? | 1 | [removed] | 2025-01-13T06:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i080hf/incredibly_slow_any_tips/ | Ok_Simple_5722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i080hf | false | null | t3_1i080hf | /r/LocalLLaMA/comments/1i080hf/incredibly_slow_any_tips/ | false | false | self | 1 | null |
Incredibly slow any tips? | 1 | [removed] | 2025-01-13T06:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i081b8/incredibly_slow_any_tips/ | HorseObjective2854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i081b8 | false | null | t3_1i081b8 | /r/LocalLLaMA/comments/1i081b8/incredibly_slow_any_tips/ | false | false | self | 1 | null |
Use of RAG | 1 | [removed] | 2025-01-13T06:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i081ys/use_of_rag/ | Blazen_Lazarus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i081ys | false | null | t3_1i081ys | /r/LocalLLaMA/comments/1i081ys/use_of_rag/ | false | false | self | 1 | null |
Screen becomes a slideshow | 1 | [removed] | 2025-01-13T06:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i08217/screen_becomes_a_slideshow/ | HorseObjective2854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i08217 | false | null | t3_1i08217 | /r/LocalLLaMA/comments/1i08217/screen_becomes_a_slideshow/ | false | false | self | 1 | null |
Why model breaks after SFT? | 1 | [removed] | 2025-01-13T06:46:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i085yd/why_model_breaks_after_sft/ | worthlesspineapple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i085yd | false | null | t3_1i085yd | /r/LocalLLaMA/comments/1i085yd/why_model_breaks_after_sft/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rGOJg3Lt23JqNO5-8wlbkkH_PrTv10IxAcDeUbn7xPM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=108&crop=smart&auto=webp&s=c8f630bbd77d1381441f1d24d2a40c947e86a698', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=216&crop=smart&auto=webp&s=ababbe3ac18dabc00e92e74d1645f47e5c9f43c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=320&crop=smart&auto=webp&s=4bacb441497d8d69bb496ef9ee3d9cc8d59ee94e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=640&crop=smart&auto=webp&s=fc3e5fc94d8c131f31015fa0a7bf2c083e10e50d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=960&crop=smart&auto=webp&s=3a3c80c0a5fd80d2ff70b44dde2985f7316c57e2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=1080&crop=smart&auto=webp&s=fbd0b53940f79ee5175e91b9b7d04407dfa56bde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?auto=webp&s=c66dbf2acc72294a9b644d74879588d5ae2807fe', 'width': 1200}, 'variants': {}}]} |
Where to Begin? | 4 | Hey there I'm gonna be starting out on a 4080 mobile (12gb vram, 32gb ram, 14900hx) while I finish my 7900xtx desktop build and would like to know a few things.
Which version of LLaMA should I start out with on the 4080 mobile? I think it can handle a 13B model. I want to just get a feel for the possibilities and set up a TTS that can view my screen and chat, for starters.
What distro(s) of Linux are ideal and why?
I will be using Windows 11 Home and want a Linux distro to contrast and compare experiences on both. | 2025-01-13T06:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i089lx/where_to_begin/ | susne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i089lx | false | null | t3_1i089lx | /r/LocalLLaMA/comments/1i089lx/where_to_begin/ | false | false | self | 4 | null |
What makes deepseek-coder-2.5 stop replying in the middle of a sentence? | 1 | I absolutely love this model. Mostly because it generates good enough code and runs fast without a GPU on my favourite laptop (in ollama and openwebui). But every now and then, it just stops replying in the middle of its answer. How would I go about diagnosing why it does that and solving it? (Please no "qwen is better, just use that" suggestions.) | 2025-01-13T07:13:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i08jsl/what_makes_deepseekcoder25_stop_teplying_in_the/ | umataro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i08jsl | false | null | t3_1i08jsl | /r/LocalLLaMA/comments/1i08jsl/what_makes_deepseekcoder25_stop_teplying_in_the/ | false | false | self | 1 | null
Difference in CUDA versions can have such a huge impact on the eloquence and creativity of LLM outputs? | 1 | [removed] | 2025-01-13T07:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i08k0j/difference_in_cuda_versions_can_have_such_a_huge/ | Elfrino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i08k0j | false | null | t3_1i08k0j | /r/LocalLLaMA/comments/1i08k0j/difference_in_cuda_versions_can_have_such_a_huge/ | false | false | self | 1 | null |
Difference in CUDA versions having impact on the eloquence and creativity of LLM outputs? | 6 | *Note: I purely use KoboldCPP for my LLM's, it might not effect other programs*
Not sure if anyone else has encountered this but I just wanted to share my experience. I had CUDA 11.8 for quite a while and was getting lovely and creative outputs from my LLMs. The prose was strong, intricate and pleasingly creative.
So a few months ago I switched over to CUDA 12.1 and then forgot about the upgrade.
Ever since then, when using my models I got substandard outputs; the magic, creativity and eloquence were gone, and it felt flat and formulaic with a lot of 'spine shivers' and generic slop.
I was pulling my hair out trying to find what I had done, and then remembered the CUDA version upgrade. After reverting back to 11.8 it's now back to its creative and imaginative self.
Just thought I'd share in case anyone else has noticed a drop in their creative outputs. | 2025-01-13T07:48:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i090i6/difference_in_cuda_versions_having_impact_on_the/ | LeanderGem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i090i6 | false | null | t3_1i090i6 | /r/LocalLLaMA/comments/1i090i6/difference_in_cuda_versions_having_impact_on_the/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'u7Lvo1Puz6wmlPhVbOqDdErS3hR7_SOIeZ647tg2M0c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1uR2ksH_VzEpwuZ8lrLv-vKOX0Hpu929m46XvkFSNq0.jpg?width=108&crop=smart&auto=webp&s=1d15c394417042f738e9b5d65bbcf36eed5c83fa', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/1uR2ksH_VzEpwuZ8lrLv-vKOX0Hpu929m46XvkFSNq0.jpg?width=216&crop=smart&auto=webp&s=fd4d607c45eb4c322f75c9c73edf331eaf617f46', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/1uR2ksH_VzEpwuZ8lrLv-vKOX0Hpu929m46XvkFSNq0.jpg?width=320&crop=smart&auto=webp&s=bcd670d7767a2a5cabf9c9e50d4edec186e018cb', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/1uR2ksH_VzEpwuZ8lrLv-vKOX0Hpu929m46XvkFSNq0.jpg?width=640&crop=smart&auto=webp&s=496bde40249ad1a553bdf097b5aebc00f219182d', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/1uR2ksH_VzEpwuZ8lrLv-vKOX0Hpu929m46XvkFSNq0.jpg?width=960&crop=smart&auto=webp&s=92d09fbe233ac502b8d00e4b0259c34fbb33b898', 'width': 960}], 'source': {'height': 510, 'url': 'https://external-preview.redd.it/1uR2ksH_VzEpwuZ8lrLv-vKOX0Hpu929m46XvkFSNq0.jpg?auto=webp&s=d583e621104a64506f92017b51dca30b6bf56948', 'width': 975}, 'variants': {}}]} |
TTS API - no strings attached | 1 | [removed] | 2025-01-13T08:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/1i098cu/tts_api_no_strings_attached/ | iamMess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i098cu | false | null | t3_1i098cu | /r/LocalLLaMA/comments/1i098cu/tts_api_no_strings_attached/ | false | false | self | 1 | null |
Nvidia RTX 2000 Ada thoughts | 0 | What are people's opinions of the Nvidia RTX 2000 Ada 16GB? It currently seems like the most bang for the buck available within my budget at the vendor I might have to use. The low power consumption is attractive as well for when the system isn't actively using a model. How does it compare to the NVIDIA® GeForce RTX™ 4070, 12 GB GDDR6X? I am trying to wrap my head around all of this. I read that the RTX 2000 Ada is positioned between a [GeForce RTX 4050 Mobile](https://www.tomshardware.com/news/nvidia-rtx-4070-4060-4050-mobile-benchmarks-die-sizes) (2,560 CUDA cores) and a [GeForce RTX 4060](https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4060-review-asus-dual) (3,072 CUDA cores), but those have less VRAM.
I have also read about the RTX 4000 Ada, which is also sold by the vendor. It is similarly priced to the RTX 4090, which I think would be my preference, but the 4090 does not appear to be currently available there.
Initially the AI would be used to help process, search, summarize, cross-reference and analyze hundreds of documents/archives using some sort of to-be-determined RAG system, then move forward using the system to help transcribe and index audio interviews, better process and index the documents we scan, as well as photos of objects.
It would also be used for general/short and long form generative AI, if possible using the library outlined above. | 2025-01-13T08:25:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i09hsm/nvidia_rtc_ada_thoughts/ | vincewit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i09hsm | false | null | t3_1i09hsm | /r/LocalLLaMA/comments/1i09hsm/nvidia_rtc_ada_thoughts/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'SapxSc0sCdZ_05Z35oeXFZJqmxtoERLERGJwBZCb2Ug', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/gO_TYGfzRNBVLBTkmCSH22cFdbZGocTkuCpOLkAKEAs.jpg?width=108&crop=smart&auto=webp&s=1bab9ef1765a5342bffba8ee1bbcc03e988afcee', 'width': 108}, {'height': 150, 'url': 'https://external-preview.redd.it/gO_TYGfzRNBVLBTkmCSH22cFdbZGocTkuCpOLkAKEAs.jpg?width=216&crop=smart&auto=webp&s=b308b203c2e823c2bbdacb0827183164b6267dad', 'width': 216}, {'height': 223, 'url': 'https://external-preview.redd.it/gO_TYGfzRNBVLBTkmCSH22cFdbZGocTkuCpOLkAKEAs.jpg?width=320&crop=smart&auto=webp&s=c2f931debb9ccce6809d533a55b05843290d5bb7', 'width': 320}, {'height': 446, 'url': 'https://external-preview.redd.it/gO_TYGfzRNBVLBTkmCSH22cFdbZGocTkuCpOLkAKEAs.jpg?width=640&crop=smart&auto=webp&s=f09dbadc2e4ddf3762043d98a24729a657a137d7', 'width': 640}, {'height': 669, 'url': 'https://external-preview.redd.it/gO_TYGfzRNBVLBTkmCSH22cFdbZGocTkuCpOLkAKEAs.jpg?width=960&crop=smart&auto=webp&s=4b817ac5d43017b22bbc11daa4fcd5531c158a84', 'width': 960}], 'source': {'height': 697, 'url': 'https://external-preview.redd.it/gO_TYGfzRNBVLBTkmCSH22cFdbZGocTkuCpOLkAKEAs.jpg?auto=webp&s=54fbf8303e822d1f932fda1b2487b40c20a4d1da', 'width': 1000}, 'variants': {}}]} |
Which model will read a pdf to me? | 2 | Which model will read an entire pdf document to me? These are academic papers, and non-AI document readers are really annoying in the way they interpret PDFs. | 2025-01-13T08:27:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i09ilr/which_model_will_read_a_pdf_to_me/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i09ilr | false | null | t3_1i09ilr | /r/LocalLLaMA/comments/1i09ilr/which_model_will_read_a_pdf_to_me/ | false | false | self | 2 | null
In LLM SFT Training, Should I Compute Loss for Every Response or Only the Last Response? | 1 | [removed] | 2025-01-13T08:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i09llp/in_llm_sft_training_should_i_compute_loss_for/ | Horror-Weakness-2305 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i09llp | false | null | t3_1i09llp | /r/LocalLLaMA/comments/1i09llp/in_llm_sft_training_should_i_compute_loss_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pFi7mcCIaz0glZnQ05cPKcrStzliRX7Rd7icCqhWvOE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yC-483BImtVyGrTkk3iN5SProjL6emq3l3Y8K6SK0eg.jpg?width=108&crop=smart&auto=webp&s=aeb9e621ecf40906932afb2e490a89ea27429c4b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yC-483BImtVyGrTkk3iN5SProjL6emq3l3Y8K6SK0eg.jpg?width=216&crop=smart&auto=webp&s=ac1c8bba4ad934b83e719b63817b07ec0a025535', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yC-483BImtVyGrTkk3iN5SProjL6emq3l3Y8K6SK0eg.jpg?width=320&crop=smart&auto=webp&s=e8994094f4ce9a57887a26d67e811e2429226c7f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yC-483BImtVyGrTkk3iN5SProjL6emq3l3Y8K6SK0eg.jpg?width=640&crop=smart&auto=webp&s=4ef3f3859ec3f689b2c4f95484c65e520ab94390', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yC-483BImtVyGrTkk3iN5SProjL6emq3l3Y8K6SK0eg.jpg?width=960&crop=smart&auto=webp&s=579c579367a03cddea9876a727070d1d8df077d4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yC-483BImtVyGrTkk3iN5SProjL6emq3l3Y8K6SK0eg.jpg?width=1080&crop=smart&auto=webp&s=e961d7f7bbfc56cb54537fb6f6099c93e8371ec7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yC-483BImtVyGrTkk3iN5SProjL6emq3l3Y8K6SK0eg.jpg?auto=webp&s=778efb2670c00bfbb3a84196b7b52e390ebcc4b6', 'width': 1200}, 'variants': {}}]} |
AI note taking app that works completely offline | 0 | I use note-taking apps like Granola and value their features. My main concern is keeping my data on my own device.
I wonder if others want a note-taking and summarization app that works offline and stores everything on their device?
Do you think users would pay a small one-time fee for lifetime access to such a private, local solution? | 2025-01-13T08:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i09o1s/ai_note_taking_app_that_works_completely_offline/ | imsinghaniya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i09o1s | false | null | t3_1i09o1s | /r/LocalLLaMA/comments/1i09o1s/ai_note_taking_app_that_works_completely_offline/ | false | false | self | 0 | null |
Any cheaper and better alternative to ElevenLabs? | 5 | We have been using ElevenLabs in our text-to-video product; however, the cost is extremely high.
What would you all suggest as a better alternative? | 2025-01-13T08:46:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i09rdj/any_cheaper_and_better_alternative_to_elevenlabs/ | findinghorses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i09rdj | false | null | t3_1i09rdj | /r/LocalLLaMA/comments/1i09rdj/any_cheaper_and_better_alternative_to_elevenlabs/ | false | false | self | 5 | null |
What are some lesser known modern tricks for RAG / agent-like systems? | 1 | [removed] | 2025-01-13T10:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i0axz5/what_are_some_lesser_known_modern_tricks_for_rag/ | Blue_Dude3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0axz5 | false | null | t3_1i0axz5 | /r/LocalLLaMA/comments/1i0axz5/what_are_some_lesser_known_modern_tricks_for_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0amklnKW-dIN2I87m_sk01nwd2IcpbpKImidPj465M8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Jf3Dk6w6xFG-wf-4hczKVogd1n_62yF8l0hD25mutsg.jpg?width=108&crop=smart&auto=webp&s=0421f1ac743ce5478ff62a6a9ac3f96a5485a7ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Jf3Dk6w6xFG-wf-4hczKVogd1n_62yF8l0hD25mutsg.jpg?width=216&crop=smart&auto=webp&s=4514f9ee9ffa2cdba6f495615c979bf0a91ead2b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Jf3Dk6w6xFG-wf-4hczKVogd1n_62yF8l0hD25mutsg.jpg?width=320&crop=smart&auto=webp&s=0e5d062845fb73604f21f7faf2bab42516d69da1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Jf3Dk6w6xFG-wf-4hczKVogd1n_62yF8l0hD25mutsg.jpg?width=640&crop=smart&auto=webp&s=2ba32cc8d462ab07ccfe7d212364365d60fd0621', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Jf3Dk6w6xFG-wf-4hczKVogd1n_62yF8l0hD25mutsg.jpg?width=960&crop=smart&auto=webp&s=8f850ff4a84a9e2a7e630b584ce2de3603043f50', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Jf3Dk6w6xFG-wf-4hczKVogd1n_62yF8l0hD25mutsg.jpg?width=1080&crop=smart&auto=webp&s=b33b6d0cb073eaa2c2a8baed76d9762e8a0b4fd3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Jf3Dk6w6xFG-wf-4hczKVogd1n_62yF8l0hD25mutsg.jpg?auto=webp&s=d72feaa9c6f689210a82b48569d6a3eff3ddfb24', 'width': 1200}, 'variants': {}}]} |
Hugging Face just released a free course on agents. | 1 | **Free course on agents by Hugging Face**
We just added a chapter to smol course on agents. Naturally, using smolagents! The course covers these topics:
- Code agents that solve problems with code
- Retrieval agents that supply grounded context
- Custom functional agents that do whatever you need!
If you're building agent applications, this course should help.
Course in smol course [https://github.com/huggingface/smol-course/tree/main/8\_agents](https://github.com/huggingface/smol-course/tree/main/8_agents) | 2025-01-13T10:25:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i0b1lm/hugging_face_just_releases_a_free_course_on_agents/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0b1lm | false | null | t3_1i0b1lm | /r/LocalLLaMA/comments/1i0b1lm/hugging_face_just_releases_a_free_course_on_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xlYXTWwgln9vDuEQisTX0izN0DBIAyz-Lr1iLfaQMkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=108&crop=smart&auto=webp&s=7e06b28149282c140cb7c1c7fe64f68c14c568fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=216&crop=smart&auto=webp&s=b78401d265c9e0b53b3a45486a55de0d72439330', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=320&crop=smart&auto=webp&s=09e64baabc6a54793694ca2f42cf425245cd9f66', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=640&crop=smart&auto=webp&s=59eba6589f7f1641851f830b38b5de8e26d01df1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=960&crop=smart&auto=webp&s=013f3ff62e70e17a4e88b18483d8cae48e6be502', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=1080&crop=smart&auto=webp&s=0fbc7909dd561eacfc631c67c39f3cd696528b17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?auto=webp&s=4912ad98a203609a9b6096b7592a5ad3f5db8e2c', 'width': 1200}, 'variants': {}}]} |
Hugging Face released a free course on agents. | 527 | We just added a chapter to smol course on agents. Naturally, using smolagents! The course covers these topics:
- Code agents that solve problems with code
- Retrieval agents that supply grounded context
- Custom functional agents that do whatever you need!
If you're building agent applications, this course should help.
Course in smol course [https://github.com/huggingface/smol-course/tree/main/8\_agents](https://github.com/huggingface/smol-course/tree/main/8_agents) | 2025-01-13T10:26:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i0b289/hugging_face_released_a_free_course_on_agents/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0b289 | false | null | t3_1i0b289 | /r/LocalLLaMA/comments/1i0b289/hugging_face_released_a_free_course_on_agents/ | false | false | self | 527 | {'enabled': False, 'images': [{'id': 'xlYXTWwgln9vDuEQisTX0izN0DBIAyz-Lr1iLfaQMkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=108&crop=smart&auto=webp&s=7e06b28149282c140cb7c1c7fe64f68c14c568fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=216&crop=smart&auto=webp&s=b78401d265c9e0b53b3a45486a55de0d72439330', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=320&crop=smart&auto=webp&s=09e64baabc6a54793694ca2f42cf425245cd9f66', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=640&crop=smart&auto=webp&s=59eba6589f7f1641851f830b38b5de8e26d01df1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=960&crop=smart&auto=webp&s=013f3ff62e70e17a4e88b18483d8cae48e6be502', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?width=1080&crop=smart&auto=webp&s=0fbc7909dd561eacfc631c67c39f3cd696528b17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PAup9Sjkc6TX4-4BKy-plkXvtVBNh8rWU_Jcp2bvzs4.jpg?auto=webp&s=4912ad98a203609a9b6096b7592a5ad3f5db8e2c', 'width': 1200}, 'variants': {}}]} |
What can I do with a good GPU | 5 | A while back me and a cousin wanted to do some AI stuff (translation etc), but we had to put it on hold due to reasons.
At that time, I became very interested in the ability to run models locally. However, I knew I was held back by my computer at the time.
Now I have a decent laptop, a Lenovo with an RTX 4080 12GB.
My goal is to do something useful with local AI while understanding on the low level how it works.
What can I do with this resource? Where do I start? Thanks. | 2025-01-13T10:40:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i0b8y8/what_can_i_do_with_a_good_gpu/ | _Shojaku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0b8y8 | false | null | t3_1i0b8y8 | /r/LocalLLaMA/comments/1i0b8y8/what_can_i_do_with_a_good_gpu/ | false | false | self | 5 | null
This prompts an explanation - MetaAI powered by Llama 3.2 | 0 | I got interested using this, and thought, OK -- what happens if I prompted John Lennon's Imagine? | 2025-01-13T11:02:31 | rodzieman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i0bjp0 | false | null | t3_1i0bjp0 | /r/LocalLLaMA/comments/1i0bjp0/this_prompts_an_explanation_metaai_powered_by/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'gHvvP-It1LhLdl90SdoEZ-NkxjxIun9KRY6jbHl6NM8', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/nb1wll2gtqce1.png?width=108&crop=smart&auto=webp&s=6ecc9dcbe36429e3c9576b2513cab4715135245b', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/nb1wll2gtqce1.png?width=216&crop=smart&auto=webp&s=c16b11679b2a602bf87ae5e4dce98eb1d4cee948', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/nb1wll2gtqce1.png?width=320&crop=smart&auto=webp&s=335f8e238c74b15d2202b8bd2b2b03dcf73571db', 'width': 320}, {'height': 714, 'url': 'https://preview.redd.it/nb1wll2gtqce1.png?width=640&crop=smart&auto=webp&s=edaf9c29c224e035cbee7d6b87955eae5c8cad90', 'width': 640}, {'height': 1071, 'url': 'https://preview.redd.it/nb1wll2gtqce1.png?width=960&crop=smart&auto=webp&s=7c261191c965845303902ae330804550e94fda25', 'width': 960}, {'height': 1205, 'url': 'https://preview.redd.it/nb1wll2gtqce1.png?width=1080&crop=smart&auto=webp&s=b2068317c47e68f4f44b6d28d57db08a164b0530', 'width': 1080}], 'source': {'height': 1205, 'url': 'https://preview.redd.it/nb1wll2gtqce1.png?auto=webp&s=b6bb88b5a0449251d16db14cafa25185612010b5', 'width': 1080}, 'variants': {}}]} |
Best translation model? | 11 | Hi, I'm searching for the current best translation model, with a focus on quality and latency, and the condition that it must be truly open source (allowing commercial usage). Some languages that I'm interested in are English, Japanese and Portuguese. Is there any leaderboard for this?
Some of the options that I'm considering are (an NLLB starter is sketched right after this list):
- NLLB
- mBART
- madlad
- opus models
- LibreTranslate (openNMT behind)
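For a concrete starting point, NLLB runs in a few lines with transformers. This is only a sketch; the target-language token lookup has changed across transformers versions, so treat that call as an assumption to verify.

```python
# Sketch: English -> Portuguese with NLLB via Hugging Face transformers.
# The convert_tokens_to_ids() lookup for the target language is an assumption;
# older transformers versions used tokenizer.lang_code_to_id["por_Latn"] instead.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is lovely today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("por_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```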
Any recommendations? | 2025-01-13T11:07:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i0bm84/best_translation_model/ | xdoso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0bm84 | false | null | t3_1i0bm84 | /r/LocalLLaMA/comments/1i0bm84/best_translation_model/ | false | false | self | 11 | null |
Microserving LLM engines: A multi-level architecture that provides fine-grained APIs for orchestrating llm engines. | 19 | 2025-01-13T11:17:39 | omnisvosscio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i0brfj | false | null | t3_1i0brfj | /r/LocalLLaMA/comments/1i0brfj/microserving_llm_engines_a_multilevel/ | false | false | 19 | {'enabled': True, 'images': [{'id': 'tTZTSKUQjwFnQKrKJSveGVLXTEA9tZYrNpY_oHSq0x8', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/j0jo85w0wqce1.png?width=108&crop=smart&auto=webp&s=6689c6ab0bf2ebe9efce448ad04df35c1251a5b7', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/j0jo85w0wqce1.png?width=216&crop=smart&auto=webp&s=1d3e6c30b20b1307d14a86c97704d5c82e39f007', 'width': 216}, {'height': 163, 'url': 'https://preview.redd.it/j0jo85w0wqce1.png?width=320&crop=smart&auto=webp&s=df1700a7dec77fe711cd6abc654084dcfd6a48e6', 'width': 320}, {'height': 326, 'url': 'https://preview.redd.it/j0jo85w0wqce1.png?width=640&crop=smart&auto=webp&s=1755e5f09f080ac61690b8f2b4ae3c3ff0d12a24', 'width': 640}, {'height': 489, 'url': 'https://preview.redd.it/j0jo85w0wqce1.png?width=960&crop=smart&auto=webp&s=faabdf19a34d7a66d89f02a2726e47f632b00f1a', 'width': 960}, {'height': 551, 'url': 'https://preview.redd.it/j0jo85w0wqce1.png?width=1080&crop=smart&auto=webp&s=98dce6a6038d75585a12a8558779925fb5918ad8', 'width': 1080}], 'source': {'height': 815, 'url': 'https://preview.redd.it/j0jo85w0wqce1.png?auto=webp&s=ff51f9fad4a716184b95bdbcbecb2360805b5bd8', 'width': 1597}, 'variants': {}}]} |
Is this where all LLMs are going? | 277 | 2025-01-13T11:19:47 | omnisvosscio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i0bsha | false | null | t3_1i0bsha | /r/LocalLLaMA/comments/1i0bsha/is_this_where_all_llms_are_going/ | false | false | 277 | {'enabled': True, 'images': [{'id': 'XeyQD_8Yr4Sj27Mvoui5nBT5H25hQeG0VW2ofQRvZ7w', 'resolutions': [{'height': 151, 'url': 'https://preview.redd.it/l1h02xo8wqce1.png?width=108&crop=smart&auto=webp&s=54d8d1a88f317fddc933b9c3d9a2346270215341', 'width': 108}, {'height': 302, 'url': 'https://preview.redd.it/l1h02xo8wqce1.png?width=216&crop=smart&auto=webp&s=c982b921def1062cee22b4863ce529bfb6a4c26b', 'width': 216}, {'height': 447, 'url': 'https://preview.redd.it/l1h02xo8wqce1.png?width=320&crop=smart&auto=webp&s=ac1b6e45207b44a6e10f84ecde13bdfaa366dcba', 'width': 320}, {'height': 894, 'url': 'https://preview.redd.it/l1h02xo8wqce1.png?width=640&crop=smart&auto=webp&s=03d40d0e8695392ff6f2dbe6e68c5d8afd724e12', 'width': 640}, {'height': 1342, 'url': 'https://preview.redd.it/l1h02xo8wqce1.png?width=960&crop=smart&auto=webp&s=7dc89325cded7bcc3793a9959063d91698362e50', 'width': 960}], 'source': {'height': 1348, 'url': 'https://preview.redd.it/l1h02xo8wqce1.png?auto=webp&s=a03f20802c0a9fb539a45c3c1f99a17d832fd3db', 'width': 964}, 'variants': {}}]} |
Knowledgeable Engineering LLMs? | 3 | Hi guys 😊 I am looking for an LLM to support my research in chip design offline. But even GPT-4o struggles with this. I want a good LLM that has research-level understanding of VLSI, etc.
Must have been fed a lot of research papers. Any ideas, guys? | 2025-01-13T11:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i0byc3/knowledgable_engineering_llms/ | bigchunguspromax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0byc3 | false | null | t3_1i0byc3 | /r/LocalLLaMA/comments/1i0byc3/knowledgable_engineering_llms/ | false | false | self | 3 | null
I broke a LLM | 1 | [removed] | 2025-01-13T11:40:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i0c31j/i_broke_a_llm/ | SnooCats223 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0c31j | false | null | t3_1i0c31j | /r/LocalLLaMA/comments/1i0c31j/i_broke_a_llm/ | false | false | nsfw | 1 | null |
I made an AI teddy bear that can talk and feel | 1 | [removed] | 2025-01-13T11:51:52 | https://v.redd.it/dd2c9j482rce1 | MRBBLQ | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i0c98c | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dd2c9j482rce1/DASHPlaylist.mpd?a=1739361131%2CODY4Mzc5MGEzY2ExMWYyYzdiMDg2MzgyMGI5YTNjMzUyOTMxNTdhN2Y5OGY4MjJlN2U4NzZiMTBhNTRkY2UxYg%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/dd2c9j482rce1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/dd2c9j482rce1/HLSPlaylist.m3u8?a=1739361131%2CNjJkY2E5YzVmYTFkZTgxN2ZlMTIxZDAzMzQzMjI3NDVmNTNiMDdlM2Q4NGMwYmMyMWQyNzk0MWNlOTJmNGI3OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dd2c9j482rce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1i0c98c | /r/LocalLLaMA/comments/1i0c98c/i_made_an_ai_teddy_bear_that_can_talk_and_feel/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK.png?width=108&crop=smart&format=pjpg&auto=webp&s=e6694bdb05abceaea021325eda9803383ee9d567', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK.png?width=216&crop=smart&format=pjpg&auto=webp&s=c05725e09093ee1e8b3eaa24c0849921e8cf1958', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK.png?width=320&crop=smart&format=pjpg&auto=webp&s=26295f6ee3d65391b3837c143d82e42b64e55128', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK.png?width=640&crop=smart&format=pjpg&auto=webp&s=8830323f36019ddcd0c59a1dfd95e7bc5ef1e0db', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK.png?width=960&crop=smart&format=pjpg&auto=webp&s=b1470ddb2d31c0afdc53306bf1b54ebca19c8c5d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1d4a0c582aa8f7678fd5aee15d8418bb3fa5aa87', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dnVqc2ZzMDgycmNlMV3_mi8sE6OgpTtUzQzwlDCuav4GleZXrKXT2Ya_M8dK.png?format=pjpg&auto=webp&s=75367da7abbc1f99ce072692c27ab13d87dfe10a', 'width': 1920}, 'variants': {}}]} |
The UK finally joins the race | 0 | Seems that every other country has had a plan and a narrative, apart from my little old island(s) - the UK. However, it appears things are finally changing...
Will be interesting to see if this is just rhetoric from a new government or a real, incentivized, ongoing focus. | 2025-01-13T11:55:24 | https://www.bbc.co.uk/news/live/crm7zwp18n9t | BreakIt-Boris | bbc.co.uk | 1970-01-01T00:00:00 | 0 | {} | 1i0cb2j | false | null | t3_1i0cb2j | /r/LocalLLaMA/comments/1i0cb2j/the_uk_finally_joins_the_race/ | false | false | 0 | {'enabled': False, 'images': [{'id': '9nRvhsfDu2g7YRymjnc_RguWeoAajpEr1QhSxyQJ4cs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WRgUq7Ln46lFclFVwOG-SFxBea6UDBd0ibPayIbyANY.jpg?width=108&crop=smart&auto=webp&s=69f8ddbd55283347bbfa5b0e2ba1407f6a1f1f1d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WRgUq7Ln46lFclFVwOG-SFxBea6UDBd0ibPayIbyANY.jpg?width=216&crop=smart&auto=webp&s=280654204484d6611339d0673242b17cc9c35eca', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/WRgUq7Ln46lFclFVwOG-SFxBea6UDBd0ibPayIbyANY.jpg?width=320&crop=smart&auto=webp&s=9bbc966eeaeb82c8d90a764c6ddb251fa70f2b73', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/WRgUq7Ln46lFclFVwOG-SFxBea6UDBd0ibPayIbyANY.jpg?width=640&crop=smart&auto=webp&s=cc409a8fc64006554b51cadc73979a74d9edeb04', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/WRgUq7Ln46lFclFVwOG-SFxBea6UDBd0ibPayIbyANY.jpg?width=960&crop=smart&auto=webp&s=5409bd0c325f5272997fcf85c58aa1c30fe8f2fc', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/WRgUq7Ln46lFclFVwOG-SFxBea6UDBd0ibPayIbyANY.jpg?auto=webp&s=3c2371b266955e1f2b7926c8fbb130de5b4dc3e9', 'width': 1024}, 'variants': {}}]}
How do I actually use runpod? | 2 | It's embarrassing to ask, but I can't work out how to do what I want from their docs.
I have to do some inference for a few hours. I want to use Python to download a model from Hugging Face and run this using aphrodite:
```python
for i, output in enumerate(outputs):
    prompt = output.prompt
    generated_text = output.outputs[0].text
    results.append({
        "id": dataset[i]["url"],
        "prompt": prompt,
        "generated_text": generated_text,
    })
```
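For context, the loop above assumes a setup roughly like the one below. Aphrodite mirrors vLLM's offline API; the model id and dataset name are placeholders, not the actual ones used.

```python
# Sketch of the surrounding script -- aphrodite-engine exposes a vLLM-style
# offline API. Model id and dataset are placeholders/assumptions.
from aphrodite import LLM, SamplingParams
from datasets import load_dataset

dataset = load_dataset("my-user/my-prompts", split="train")  # hypothetical dataset
prompts = [row["prompt"] for row in dataset]

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3")  # pulled from HF on first run
params = SamplingParams(temperature=0.7, max_tokens=512)

results = []
outputs = llm.generate(prompts, params)
```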
I can create a pod by connecting to their API, but I don't know how to get it to run the script! I'd prefer not to use Docker etc. and just use Python, as it's a one-off.
I am sure I am being dumb, or missing something. Modal worked fine (just more expensive!) | 2025-01-13T12:23:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i0crd8/how_do_i_actually_use_runpod/ | Moreh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0crd8 | false | null | t3_1i0crd8 | /r/LocalLLaMA/comments/1i0crd8/how_do_i_actually_use_runpod/ | false | false | self | 2 | null
Why do most vector databases use a NoSQL format rather than SQL? | 43 | If you've been exploring the world of vector databases, you might have noticed that most of them lean toward a NoSQL format instead of a traditional SQL approach. Why is that?
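For what it's worth, SQL-native vector search does exist; Postgres with the pgvector extension is the usual counterexample. A sketch via psycopg, with the connection string as a placeholder:

```python
# SQL-side counterexample: Postgres + pgvector. Connection string is a placeholder.
import psycopg

with psycopg.connect("dbname=vectors user=postgres") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(3))"
    )
    conn.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")
    # `<->` is pgvector's L2-distance operator; nearest neighbours come first
    rows = conn.execute(
        "SELECT id FROM items ORDER BY embedding <-> '[2,3,4]' LIMIT 5"
    ).fetchall()
    print(rows)
```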
I'm just genuinely curious. Probably scalability? | 2025-01-13T12:38:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i0d0qo/why_do_most_vector_databases_use_a_nosql_format/ | Available_Ad_5360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0d0qo | false | null | t3_1i0d0qo | /r/LocalLLaMA/comments/1i0d0qo/why_do_most_vector_databases_use_a_nosql_format/ | false | false | self | 43 | null |
Where to host a finetune QWEN 2.5 3b model | 1 | [removed] | 2025-01-13T12:41:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i0d2jy/where_to_host_a_finetune_qwen_25_3b_model/ | Ok_Profession_3057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0d2jy | false | null | t3_1i0d2jy | /r/LocalLLaMA/comments/1i0d2jy/where_to_host_a_finetune_qwen_25_3b_model/ | false | false | self | 1 | null |
Perplexity Pro 1 Year for only $25 (usually $240) | 1 | [removed] | 2025-01-13T12:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i0d3vp/perplexity_pro_1_year_for_only_25_usually_240/ | minemateinnovation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0d3vp | false | null | t3_1i0d3vp | /r/LocalLLaMA/comments/1i0d3vp/perplexity_pro_1_year_for_only_25_usually_240/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dfbcEt_GYqW7AdIr3qt7Tq3CAFeENzY-wsJ3KxQHwtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?width=108&crop=smart&auto=webp&s=4aac90dadf9afa333cf9708fb2b5b0647622602b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?width=216&crop=smart&auto=webp&s=395a3ab0b1d26dd8c9c1e639056d4b3ec948077b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?width=320&crop=smart&auto=webp&s=9b4afd89122f1ad1464a1359f71cf66b18f1d2b1', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?auto=webp&s=908fe416bee8da7ddef9a2c5438c7eb9658a4ec3', 'width': 512}, 'variants': {}}]} |
Where to host a finetuned Qwen 2.5 3B model | 4 | I am going to fine-tune a Qwen 2.5 3B model using support chat logs. I am going to do the training on RunPod.
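For context, the client side will look the same regardless of host, since most serving stacks (vLLM, TGI, and most serverless GPU platforms) expose an OpenAI-compatible endpoint. The URL, key, and model id below are placeholders:

```python
# Client-side sketch against any OpenAI-compatible endpoint.
# URL, API key, and model id are placeholders, not a real deployment.
from openai import OpenAI

client = OpenAI(base_url="https://my-endpoint.example.com/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="my-user/qwen2.5-3b-support-ft",  # hypothetical finetune id
    messages=[{"role": "user", "content": "My order hasn't arrived, what do I do?"}],
    temperature=0.3,
)
print(resp.choices[0].message.content)
```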
What is the best option to host the model to serve inference? At first it will only be used during nights and weekends; most of the time it will not have any use, but when it does, it has to reply reasonably fast. | 2025-01-13T12:45:23 | https://www.reddit.com/r/LocalLLaMA/comments/1i0d4pe/where_to_host_a_finetune_qwen_25_3b_model/ | ennriqe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0d4pe | false | null | t3_1i0d4pe | /r/LocalLLaMA/comments/1i0d4pe/where_to_host_a_finetune_qwen_25_3b_model/ | false | false | self | 4 | null
Face Verification With Geolocation | 0 | I am working on a hospital project that requires both facial verification and location validation. Specifically, when a doctor captures their facial image, the system needs to verify both their identity and confirm that they are physically present in an authorized hospital ward. Need suggestions on how to proceed to verify location. | 2025-01-13T12:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i0d67m/face_verification_with_geolocation/ | Adeel_Hasan_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0d67m | false | null | t3_1i0d67m | /r/LocalLLaMA/comments/1i0d67m/face_verification_with_geolocation/ | false | false | self | 0 | null
Are LLMs prone to cognitive biases? | 1 | I read a few articles about the topic, but most are either going over my head or are only surface-level and incomplete.
In your experience, did you notice, or actually check, whether LLMs are prone to typical human thinking errors like anchoring, cognitive dissonance, or the gambler's fallacy? | 2025-01-13T12:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i0d7s0/are_llm_prone_to_cognitive_biases/ | kaisurniwurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0d7s0 | false | null | t3_1i0d7s0 | /r/LocalLLaMA/comments/1i0d7s0/are_llm_prone_to_cognitive_biases/ | false | false | self | 1 | null
Help me build a €4000 AI workstation focused on experimentation! | 1 | [removed] | 2025-01-13T13:03:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i0dgm1/help_me_build_a_4000_ai_workstation_focused_on/ | Legitimate-Ad1082 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0dgm1 | false | null | t3_1i0dgm1 | /r/LocalLLaMA/comments/1i0dgm1/help_me_build_a_4000_ai_workstation_focused_on/ | false | false | self | 1 | null |
Perplexity Pro 1 Year for only $25 (usually $240) | 1 | [removed] | 2025-01-13T13:11:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i0dlts | false | null | t3_1i0dlts | /r/LocalLLaMA/comments/1i0dlts/perplexity_pro_1_year_for_only_25_usually_240/ | false | false | default | 1 | null |
Be gentle please. General questions about current state of LLM's. Is this the right place? | 1 | [removed] | 2025-01-13T13:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i0dq1x/be_gentle_please_general_questions_about_current/ | jkess114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0dq1x | false | null | t3_1i0dq1x | /r/LocalLLaMA/comments/1i0dq1x/be_gentle_please_general_questions_about_current/ | false | false | self | 1 | null |
How to use .with_structured_output with RunnableWithMessageHistory? | 0 | The TypedDict class part sometimes works but It gives RootListener error other times. Is there any other way? | 2025-01-13T13:25:03 | ShippersAreIdiots | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i0duzy | false | null | t3_1i0duzy | /r/LocalLLaMA/comments/1i0duzy/how_to_use_with_structured_output_with/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'okqIu-eVo8QJKy-d_ZQ19rZzF-ED-wuGy6hygLsidoE', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/3xl8r4nvirce1.png?width=108&crop=smart&auto=webp&s=6b431c87df196233c0e43f37ae4a50d78f4b371f', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/3xl8r4nvirce1.png?width=216&crop=smart&auto=webp&s=7d7779fbfd1d194f2fa46e248f7ab6b49fbc11a7', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/3xl8r4nvirce1.png?width=320&crop=smart&auto=webp&s=584b57dbb5d940017c7c198a8fff5c9658b3b473', 'width': 320}, {'height': 295, 'url': 'https://preview.redd.it/3xl8r4nvirce1.png?width=640&crop=smart&auto=webp&s=96ba4b7553f9c716d46e116e4c525632c6356a2e', 'width': 640}, {'height': 443, 'url': 'https://preview.redd.it/3xl8r4nvirce1.png?width=960&crop=smart&auto=webp&s=4e49d6a340ace22580c64b07c8e10e57dfb0fb7a', 'width': 960}, {'height': 498, 'url': 'https://preview.redd.it/3xl8r4nvirce1.png?width=1080&crop=smart&auto=webp&s=531bcabfda762525e88991a04f984184026e5090', 'width': 1080}], 'source': {'height': 1118, 'url': 'https://preview.redd.it/3xl8r4nvirce1.png?auto=webp&s=a0db409129a4e985066b54816866e2314354255a', 'width': 2420}, 'variants': {}}]} |
Increase in context size causes run time to explode? | 0 | I am trying the Phi-3-medium-128k-instruct model, running the same script calling llama-cpp-python but with the context size increased from 34816 to 55296.
It only took 1221 sec when the context was 34816:
`llama_perf_context_print: load time = 9617.72 ms`
`llama_perf_context_print: prompt eval time = 0.00 ms / 12764 tokens ( 0.00 ms per token, inf tokens per second)`
`llama_perf_context_print: eval time = 0.00 ms / 22051 runs ( 0.00 ms per token, inf tokens per second)`
`llama_perf_context_print: total time = 1221633.52 ms / 34815 tokens`
It took 10119 sec when the context was 55296, almost 8x:
`llama_perf_context_print: load time = 12634.97 ms`
`llama_perf_context_print: prompt eval time = 0.00 ms / 10909 tokens ( 0.00 ms per token, inf tokens per second)`
`llama_perf_context_print: eval time = 0.00 ms / 44386 runs ( 0.00 ms per token, inf tokens per second)`
`llama_perf_context_print: total time = 10119058.20 ms / 55295 tokens`
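For reference, the knobs that usually dominate long-context speed in llama-cpp-python look like this. A sketch only; whether your wheel was built with GPU offload and flash-attention is an assumption to verify.

```python
# Settings that usually dominate long-context throughput in llama-cpp-python.
# GPU offload / flash-attn depend on how the wheel was built -- verify locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-medium-128k-instruct-Q4_K_M.gguf",  # placeholder filename
    n_ctx=55296,      # KV cache grows linearly; attention cost grows faster
    n_gpu_layers=-1,  # offload all layers that fit; 0 means CPU-only
    n_batch=512,      # prompt-processing batch size
    flash_attn=True,  # helps a lot at long context if compiled in
)
```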
Is this normal??? If not, how do I fix this? Thanks a lot in advance. | 2025-01-13T13:32:30 | https://www.reddit.com/r/LocalLLaMA/comments/1i0e0bd/increase_in_context_size_causes_run_time_to/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0e0bd | false | null | t3_1i0e0bd | /r/LocalLLaMA/comments/1i0e0bd/increase_in_context_size_causes_run_time_to/ | false | false | self | 0 | null |
What are the Best Open source local models for reasoning, code and data analysis? | 6 | Pretty much the title. I want to work on a use case where I create an agentic system that analyses financial stock data and gives recommendations based on the analysis.
I mean I have done some research; recently there's some buzz about DeepSeek and QwQ, but I want something light, like below 10GB would do.
| 2025-01-13T13:51:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i0edwk/what_are_the_best_open_source_local_models_for/ | devroop_saha844 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0edwk | false | null | t3_1i0edwk | /r/LocalLLaMA/comments/1i0edwk/what_are_the_best_open_source_local_models_for/ | false | false | self | 6 | null |
What can I do with 128Mb Ram? | 1 | [removed] | 2025-01-13T13:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i0ehpq/what_can_i_do_with_128mb_ram/ | FederalTarget5929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i0ehpq | false | null | t3_1i0ehpq | /r/LocalLLaMA/comments/1i0ehpq/what_can_i_do_with_128mb_ram/ | false | false | self | 1 | null |