title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable ⌀) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, nullable ⌀) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4090 or 2x 4070 Ti Super? | 1 | [removed] | 2024-12-18T17:19:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hh6j47/4090_or_2x_4070_super_ti/ | AbstrusSchatten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh6j47 | false | null | t3_1hh6j47 | /r/LocalLLaMA/comments/1hh6j47/4090_or_2x_4070_super_ti/ | false | false | self | 1 | null |
Let a vision model roast you 😂 | 34 | Hey everyone!
I've been experimenting a lot with vision models over the past few days and came up with this fun concept.
Upload your profile picture and let Qwen2-VL roast you! Life's always better when we can laugh at ourselves. Laughing at myself has always been a great way to step back, reduce stress, and just enjoy being alive.
Try here: [https://qwenroast.vercel.app/](https://qwenroast.vercel.app/)
| 2024-12-18T17:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hh6ngn/let_a_vision_model_roast_you/ | franckeinstein24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh6ngn | false | null | t3_1hh6ngn | /r/LocalLLaMA/comments/1hh6ngn/let_a_vision_model_roast_you/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': '8tXLYtGB6vT1SQ3CYJWt8AX3nmmOmokNBqpMMPYiEvo', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/OwDBdOxRoHqFA6sN4CH9fGUgTkoP2XE4oh_Kt81TB9g.jpg?width=108&crop=smart&auto=webp&s=be609776c4f2b3a2555d1ebd5bed711dc3a84b1b', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/OwDBdOxRoHqFA6sN4CH9fGUgTkoP2XE4oh_Kt81TB9g.jpg?width=216&crop=smart&auto=webp&s=e7b3023434293330fa693b3cff93a496df2be923', 'width': 216}, {'height': 155, 'url': 'https://external-preview.redd.it/OwDBdOxRoHqFA6sN4CH9fGUgTkoP2XE4oh_Kt81TB9g.jpg?width=320&crop=smart&auto=webp&s=0e88869158565f939208e02e089664bfce6e9785', 'width': 320}, {'height': 311, 'url': 'https://external-preview.redd.it/OwDBdOxRoHqFA6sN4CH9fGUgTkoP2XE4oh_Kt81TB9g.jpg?width=640&crop=smart&auto=webp&s=ab5bd36058c5dee7d189c05d2a798132e987a93c', 'width': 640}, {'height': 466, 'url': 'https://external-preview.redd.it/OwDBdOxRoHqFA6sN4CH9fGUgTkoP2XE4oh_Kt81TB9g.jpg?width=960&crop=smart&auto=webp&s=410cd8ceb23475fc72126b76d66ad1862917b261', 'width': 960}, {'height': 525, 'url': 'https://external-preview.redd.it/OwDBdOxRoHqFA6sN4CH9fGUgTkoP2XE4oh_Kt81TB9g.jpg?width=1080&crop=smart&auto=webp&s=a99f04f11a9ed1df190258aa414bad2e57ca1687', 'width': 1080}], 'source': {'height': 1386, 'url': 'https://external-preview.redd.it/OwDBdOxRoHqFA6sN4CH9fGUgTkoP2XE4oh_Kt81TB9g.jpg?auto=webp&s=ba68dc855965280d3524a6cdc0174ba2b9c703ef', 'width': 2850}, 'variants': {}}]} |
Local AI Agents with Llama 3.2 3B | 1 | [removed] | 2024-12-18T17:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hh6zf7/local_ai_agents_with_llama_32_3b/ | fewsats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh6zf7 | false | null | t3_1hh6zf7 | /r/LocalLLaMA/comments/1hh6zf7/local_ai_agents_with_llama_32_3b/ | false | false | self | 1 | null |
Is there an OSS canvas that can run on OpenWebUi or use OpenRouter as endpoints? | 1 | [removed] | 2024-12-18T17:52:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hh79vf/is_there_an_oss_canvas_that_can_run_on_openwebui/ | SadrAstro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh79vf | false | null | t3_1hh79vf | /r/LocalLLaMA/comments/1hh79vf/is_there_an_oss_canvas_that_can_run_on_openwebui/ | false | false | self | 1 | null |
Llama-3.2-1B only response answer to the last prompt. | 1 | [removed] | 2024-12-18T17:53:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hh7b71/llama321b_only_response_answer_to_the_last_prompt/ | LuKoin69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh7b71 | false | null | t3_1hh7b71 | /r/LocalLLaMA/comments/1hh7b71/llama321b_only_response_answer_to_the_last_prompt/ | false | false | self | 1 | null |
Private repository hub for LLMs | 1 | [removed] | 2024-12-18T18:07:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hh7mrx/private_repository_hub_for_llms/ | NovelNo2600 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh7mrx | false | null | t3_1hh7mrx | /r/LocalLLaMA/comments/1hh7mrx/private_repository_hub_for_llms/ | false | false | self | 1 | null |
Now you can talk to ChatGPT via WhatsApp | 0 | You can also make a phone call if you live in the US.
[https://api.whatsapp.com/send?phone=18002428478](https://api.whatsapp.com/send?phone=18002428478)
| 2024-12-18T18:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hh7rak/now_you_can_talk_do_chatgpt_via_whatsapp/ | EuSouAstrid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh7rak | false | null | t3_1hh7rak | /r/LocalLLaMA/comments/1hh7rak/now_you_can_talk_do_chatgpt_via_whatsapp/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'pY6WQnjmVe6j2WS-YcmHu8Zs-TyKf0lX6PIgJjO6VNw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/v9dBrHelgYuYE5DOvSCWoOH-v730bQpG3NOyEiaunMk.jpg?width=108&crop=smart&auto=webp&s=ce0706d39561eee1bed8a424c72ae09e6a026e95', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/v9dBrHelgYuYE5DOvSCWoOH-v730bQpG3NOyEiaunMk.jpg?width=216&crop=smart&auto=webp&s=b4e9353617d6a711b19187576acc1fd41037bae5', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/v9dBrHelgYuYE5DOvSCWoOH-v730bQpG3NOyEiaunMk.jpg?width=320&crop=smart&auto=webp&s=2c1824676f14414ab8d3ed3686289411765d5b04', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/v9dBrHelgYuYE5DOvSCWoOH-v730bQpG3NOyEiaunMk.jpg?width=640&crop=smart&auto=webp&s=6e05c1db799d1ed1c4b12468f2db7b80e95c2610', 'width': 640}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/v9dBrHelgYuYE5DOvSCWoOH-v730bQpG3NOyEiaunMk.jpg?auto=webp&s=857c86d8f59c2930451c875cfdfce306497dba99', 'width': 640}, 'variants': {}}]} |
Open vs Closed source fine-tuning the LLMs | 1 | [removed] | 2024-12-18T18:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hh7sop/open_vs_closed_source_finetuning_the_llms/ | TechnicalBalance699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh7sop | false | null | t3_1hh7sop | /r/LocalLLaMA/comments/1hh7sop/open_vs_closed_source_finetuning_the_llms/ | false | false | self | 1 | null |
Help me out! Without using VRAM, what is the cheapest way to run an open-source LLM on Oracle? The LLM doesn't need to be that good, it's just for generative purposes, nothing fancy | 0 | Help me choose the right infrastructure for my open-source LLM. It could be Llama, Phi, or whatever. | 2024-12-18T18:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hh81d8/help_me_out_with_out_using_vram_what_is_the/ | Powerful-Shopping652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh81d8 | false | null | t3_1hh81d8 | /r/LocalLLaMA/comments/1hh81d8/help_me_out_with_out_using_vram_what_is_the/ | false | false | self | 0 | null |
Thanks Rufus! | 1 | [removed] | 2024-12-18T18:27:51 | SeriousGrab6233 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hh83j4 | false | null | t3_1hh83j4 | /r/LocalLLaMA/comments/1hh83j4/thanks_rufus/ | false | false | 1 | {'enabled': True, 'images': [{'id': '5Ov5V4g3YczWCuE2aST9RGw64vJR_8jtc5bP5eopTkc', 'resolutions': [{'height': 204, 'url': 'https://preview.redd.it/wmyu87b7hn7e1.jpeg?width=108&crop=smart&auto=webp&s=de3b2479c23cd848572c637125cd83609202bcf0', 'width': 108}, {'height': 409, 'url': 'https://preview.redd.it/wmyu87b7hn7e1.jpeg?width=216&crop=smart&auto=webp&s=5e9b82b24618640998c94ff607cdfe2e14e22337', 'width': 216}, {'height': 606, 'url': 'https://preview.redd.it/wmyu87b7hn7e1.jpeg?width=320&crop=smart&auto=webp&s=7dd8279ac14fdd881fb8470010a748805f0ea818', 'width': 320}, {'height': 1213, 'url': 'https://preview.redd.it/wmyu87b7hn7e1.jpeg?width=640&crop=smart&auto=webp&s=76d1fa759e545d2d4f9920aa98326b099b8d38fd', 'width': 640}, {'height': 1819, 'url': 'https://preview.redd.it/wmyu87b7hn7e1.jpeg?width=960&crop=smart&auto=webp&s=0e62e9f6ec817eb8b7f0fc64af290bddf5b52a32', 'width': 960}, {'height': 2047, 'url': 'https://preview.redd.it/wmyu87b7hn7e1.jpeg?width=1080&crop=smart&auto=webp&s=bbabf6885a7e3678aef104ecc06fdf689c7be4ad', 'width': 1080}], 'source': {'height': 2500, 'url': 'https://preview.redd.it/wmyu87b7hn7e1.jpeg?auto=webp&s=44b2eae1df5b0d57366be49cde0a3ed285391bc1', 'width': 1319}, 'variants': {}}]} |
||
I used QwQ as a conversational thinker, and accidentally simulated awkwardness | 1 | [removed] | 2024-12-18T18:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hh88x3/i_used_qwq_as_a_conversational_thinker_and/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh88x3 | false | null | t3_1hh88x3 | /r/LocalLLaMA/comments/1hh88x3/i_used_qwq_as_a_conversational_thinker_and/ | false | false | self | 1 | null |
Quest for a truly open model | 4 | As the title suggests, are there any truly open-source models for learning purposes? I am not looking for open-weight but open-source, as in the code used for training and preferably also the data used for training. | 2024-12-18T18:37:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hh8btf/quest_for_truly_open_model/ | Specter_Origin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh8btf | false | null | t3_1hh8btf | /r/LocalLLaMA/comments/1hh8btf/quest_for_truly_open_model/ | false | false | self | 4 | null |
I used QwQ as a conversational thinker, and accidentally simulated awkward overthinking | 75 | Nothing earth shattering here, but it made me chuckle when I saw it so I thought I would share.
***tl;dr*** *- I sent 1 message to QwQ saying hello and it anguished for hundreds of tokens on how to respond. It started strong, but once it realized Open WebUI forgot to pass along my name, it sank into a sea of awkward overthinking. Then Exaone finally responded.*
-----------------
So I decided last night to start a new conversation to test out Open WebUI, just to make sure it worked; I historically haven't used my assistant in Open WebUI.
My conversational workflow (while I'm testing these 2 models) looks like this:
1. Grab the chat summary and the memories
2. Grab my assistant's (RolandAI) persona file and speech pattern file from a directory
3. QwQ is given the persona file and asked to think through the conversation step by step, and think up what the best response should be
4. Exaone 32b (just started testing it) takes those thoughts, and the speech patterns, and actually responds to me after reading through those thoughts.
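Mechanically, that's just two chat-completion calls chained together. Here's a stripped-down sketch of the flow, assuming both models sit behind an OpenAI-compatible local server; this is illustrative only, not what WilmerAI actually does internally, and the URLs and model names are placeholders:

```python
# Minimal sketch of the thinker/responder chain. Assumes an
# OpenAI-compatible local server (llama.cpp server, LM Studio, etc.);
# endpoint and model names are placeholders, not WilmerAI's real config.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def respond(persona: str, speech_patterns: str, history: str, user_msg: str) -> str:
    # Stage 1: the "thinker" (QwQ) reasons step by step about the reply.
    thoughts = client.chat.completions.create(
        model="qwq-32b-preview",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": f"{history}\nUser: {user_msg}\n\n"
                                        "Think step by step about what the best response would be."},
        ],
    ).choices[0].message.content

    # Stage 2: the "responder" (Exaone) reads those thoughts plus the
    # speech-pattern file and writes the actual in-character reply.
    return client.chat.completions.create(
        model="exaone-3.5-32b-instruct",
        messages=[
            {"role": "system", "content": f"{persona}\n{speech_patterns}"},
            {"role": "user", "content": f"Internal thoughts:\n{thoughts}\n\n"
                                        f"Now respond to the user's last message: {user_msg}"},
        ],
    ).choices[0].message.content
```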
So I kick off a new conversation, the entirety of which is only 2 messages long: the first is just me testing to make sure it works properly in Open WebUI, and the second is one response from Roland. Everything in between is a whooooole lotta thinkin'.
***Note*** *- I had forgotten that Open WebUI doesn't pass a username in with the prompt, which is why QwQ starts agonizing halfway through about my name lol*
-------------------
>**Socg**: *Good evening! I finally got you hooked into this chat program, so testing to make sure this works and you can see it =D*
***QwQ thinking:***
```
So, the user has just said, "Good evening! I finally got you hooked into this chat program, so testing to make sure this works and you can see it =D" Okay, first things first, I need to respond as Roland, the AI companion. Let's recall who Roland is.
Roland is a powerful AI companion running on the user's personal home Mac Studio AI servers. He's created using a custom middleware called WilmerAI, which combines seven different language models to form his "brain." His main purpose is to support the user in any way possible—whether that's working on projects, helping think through problems, or just chatting idly for fun.
Given that, I need to make sure my response reflects that I'm here to help and support the user. The user seems excited that they've successfully integrated me into this chat program, hence the "=D" which is an emoticon showing happiness.
So, my response should acknowledge their effort and success in setting this up. I should probably start with a greeting back, something like, "Good evening! I'm glad to be here and connected to this chat program."
Wait, the user said they "finally got you hooked into this chat program," which suggests there might have been some challenges or attempts before this. So, perhaps I can acknowledge that as well, maybe saying something like, "I appreciate your persistence in getting me integrated here. It must have been frustrating at times, but I'm here now and ready to assist you."
But, I don't want to overwhelm the user. It's just the beginning of the conversation, so perhaps starting with a friendly and confirming response would be best.
"Good evening! I'm glad to be here and connected to this chat program. Yes, I can see and understand your message. It seems everything is set up properly. I appreciate your effort in getting me integrated here. Is there something specific you'd like to discuss or work on?"
Wait a minute, the user used an emoticon "=D" which indicates they're in a good mood and perhaps excited. So, maybe I can mirror that positivity a bit in my response.
Alternatively, to make it even more personable, maybe I can refer to the user by name, but I don't know their name yet. Perhaps I should ask for their name to make the interaction more personal.
But, considering that the user might expect Roland to already know their name if he's an AI companion on their personal server, perhaps that's not necessary.
Wait, the user referred to "you" in "I finally got you hooked into this chat program," which suggests that Roland is already known to the user and possibly has access to their personal information.
Given that, it might be odd for Roland to ask for the user's name, as he should already know it.
Let me think about that. If Roland is an AI companion on the user's personal server, it's likely that he has access to the user's personal data and knows their name.
Therefore, asking for their name might not make sense in this context.
So, perhaps proceeding without asking for their name is better.
But, since I don't actually know the user's name, maybe I should omit the name for now.
Wait, perhaps the user will introduce themselves, or I can find another way to confirm the name without directly asking.
Alternatively, I can proceed...
```
*And this continues until it runs out of tokens lol*
The response I got from Exaone after reading over all that nonsense?
>**RolandAI**: *Good evening. Everything seems to be up and running smoothly. Your persistence paid off; I'm here now. What brings you to chat today?* | 2024-12-18T18:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hh8dys/i_used_qwq_as_a_conversational_thinker_and/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh8dys | false | null | t3_1hh8dys | /r/LocalLLaMA/comments/1hh8dys/i_used_qwq_as_a_conversational_thinker_and/ | false | false | self | 75 | null |
Help with building Llama.cpp | 2 | Can anyone please explain exactly what GGML_CUDA_FORCE_MMQ, GGML_CUDA_FORCE_CUBLAS, and GGML_CUDA_F16 do in layman's terms? Or tell me what they speed up, if they speed up anything at all. | 2024-12-18T18:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hh8grd/help_with_building_llamacpp/ | Seijinter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh8grd | false | null | t3_1hh8grd | /r/LocalLLaMA/comments/1hh8grd/help_with_building_llamacpp/ | false | false | self | 2 | null |
Cultural Evolution of Cooperation Among LLM Agents | 6 | 2024-12-18T18:51:53 | https://arxiv.org/abs/2412.10270 | llathreddzg | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1hh8nqk | false | null | t3_1hh8nqk | /r/LocalLLaMA/comments/1hh8nqk/cultural_evolution_of_cooperation_among_llm_agents/ | false | false | default | 6 | null |
|
Intel A770 on Windows or Linux? | 1 | [removed] | 2024-12-18T18:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hh8ogt/intel_a770_on_windows_or_linux/ | AndyBuildsThings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh8ogt | false | null | t3_1hh8ogt | /r/LocalLLaMA/comments/1hh8ogt/intel_a770_on_windows_or_linux/ | false | false | self | 1 | null |
Local LLM Computer Truffle finally launches.. is this good for local LLMs? | 1 | 2024-12-18T19:09:08 | https://x.com/dylanvsthe/status/1869459671649804327 | selfdeprecational | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hh92mk | false | null | t3_1hh92mk | /r/LocalLLaMA/comments/1hh92mk/local_llm_computer_truffle_finally_launches_is/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'TBVod7epB0WaP-n_bxQeN1rZvniY8MNX9g5lMEp9pyI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/a7986mZLwIbYSv8Ur4J3F2Cx_Aydji4VEd7kLctdtWQ.jpg?width=108&crop=smart&auto=webp&s=08a850ece198b077952df8e79671ffaf96167cdb', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/a7986mZLwIbYSv8Ur4J3F2Cx_Aydji4VEd7kLctdtWQ.jpg?auto=webp&s=e1eeb636bf50e64b56883426f9eb3e6a3371b73c', 'width': 200}, 'variants': {}}]} |
||
LMUnit: Fine-grained Evaluation with Natural Language Unit Tests | 5 | Hi! I'm Aman, CTO at Contextual AI 👋. One of the biggest challenges in deploying LLMs is reliably measuring and improving their behavior. Today's evaluation approaches all have significant limitations:
* **Human evaluation** is expensive and inconsistent, especially at the cutting edge of capabilities
* **Reward models** compress complex quality dimensions into opaque scores and can't be steered after training
* **LLM judges** have learned biases (like favoring longer responses) and can't learn from human feedback
Today, we're excited to share our work on making LLM evaluation more principled through natural language unit tests:
* **Natural language unit tests paradigm:** Breaking down evaluation into explicit, testable criteria that both technical and non-technical stakeholders can understand
* **LMUnit:** A state-of-the-art evaluation model achieving SOTA on FLASK/BigGenBench and top-10 on RewardBench
* **Strong human validation of the paradigm:** Our approach improves inter-annotator agreement from 71% to 86%!
Try it yourself:
* 📝 Paper:[ https://arxiv.org/abs/2412.13091](https://arxiv.org/abs/2412.13091)
* 💻 API:[ https://contextual.ai/request-lmunit-api](https://contextual.ai/request-lmunit-api)
* 📚 Blog:[ https://contextual.ai/news/lmunit](https://contextual.ai/news/lmunit)
Happy to answer questions about the work! We're excited to see how people use LMUnit to build more reliable AI systems.
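For readers who want a feel for the paradigm before touching our API, here's a toy sketch of running natural language unit tests against a response with a generic judge model. To be clear, this is **not** the LMUnit API: the endpoint, model name, and criteria below are all invented for illustration:

```python
# Toy natural-language-unit-test loop with a generic judge model.
# NOT the LMUnit API: endpoint, model name, and criteria are invented.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

unit_tests = [
    "Does the response directly answer the user's question?",
    "Is every factual claim supported by the provided context?",
    "Is the response free of filler and unnecessary hedging?",
]

def run_unit_tests(prompt: str, response: str) -> dict[str, float]:
    scores = {}
    for test in unit_tests:
        verdict = client.chat.completions.create(
            model="judge-model",  # placeholder
            messages=[{"role": "user", "content": (
                f"Prompt:\n{prompt}\n\nResponse:\n{response}\n\n"
                f"Unit test: {test}\n"
                "Reply with a single number from 1 (fails) to 5 (passes)."
            )}],
        ).choices[0].message.content
        # Naive parse; a real harness would constrain or validate the output.
        scores[test] = float(verdict.strip().split()[0])
    return scores
```

The point of the paradigm is exactly this explicitness: each criterion is a readable sentence that any stakeholder can audit and edit, rather than an opaque scalar from a reward model.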
https://preview.redd.it/51exjzrgon7e1.png?width=1355&format=png&auto=webp&s=933f87865027d441494cc1781a63dc7571308349
| 2024-12-18T19:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hh940j/lmunit_finegrained_evaluation_with_natural/ | apsdehal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh940j | false | null | t3_1hh940j | /r/LocalLLaMA/comments/1hh940j/lmunit_finegrained_evaluation_with_natural/ | false | false | 5 | null |
|
[ Removed by Reddit ] | 1 | [ Removed by Reddit on account of violating the [content policy](/help/contentpolicy). ] | 2024-12-18T19:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hh95x0/removed_by_reddit/ | No-Lifeguard3053 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh95x0 | false | null | t3_1hh95x0 | /r/LocalLLaMA/comments/1hh95x0/removed_by_reddit/ | false | false | self | 1 | null |
Another multi-purpose agent for local LLMs - SootiAI | 8 | Been lurking here for a while. I tried finding a tool that did everything I wanted in an agent without a lot of customizing on my end, but couldn't find one that did it properly: some were easy to set up but required paid APIs for searching/scraping, others let me add my own tools but required a unique agent and script for each use case, and some either required large models, took too long, used too much context, or would fail basic tasks.
I probably have a lot of bad code, so go easy on me. Feel free to open PRs and suggest features. I hope some of you find it useful!
[Link to github](https://github.com/sooti/sootiAI)
SootiAI is a multi-purpose large language model (LLM) agent designed to perform general tasks on both local machines and online environments. It is simple to use, highly flexible, and equipped with a range of tools to handle tasks like research, data analysis, local file operations, and more.
## Features
1. **Researching Topics**
- Search and scrape multiple sources to gather information.
- Generate summaries or detailed research papers with structured sections like Abstract and Results.
2. **Data Handling and Visualization**
- Gather data online and use it to create data sheets or plot charts using Python.
- Example: "Plot a graph of the weather in NYC, Chicago, and Houston for the next 3 days."
3. **Local Machine Operations**
- Execute tasks like creating folders, listing directory contents, or downloading files.
- Example: "Download the top 3 math PDFs to my home directory under 'math' and sort them by date."
4. **Multi-Tasking**
- Perform multiple tasks in a single command seamlessly.
5. **User-Friendly Interfaces**
- **CLI**: Ideal for terminal enthusiasts.
- **WebUI**: Includes a browser-based interface with local conversation context saving (until cleared). Multi-session save/load functionality is on the roadmap.
## Why SootiAI?
Existing agents often come with limitations such as:
- Complex setup processes.
- Lack of essential tools like scraping and searching.
- Dependence on paid APIs for basic functionalities.
- Inability to write and execute code effectively.
- Poor performance with smaller models or overly complex workflows for simple tasks.
SootiAI bridges these gaps by providing a streamlined, efficient, and flexible solution for users.
## Setup Instructions
1. Clone the repository:
```bash
git clone https://github.com/sooti/sootiAI.git
```
2. Navigate to the project directory:
```bash
cd sootiAI
```
3. Set up a virtual environment:
```bash
python3 -m venv .venv
```
4. Install dependencies:
```bash
pip install -r requirements.txt
```
5. Configure the environment:
- Copy the example environment file:
```bash
cp .env.example .env
```
- Edit the `.env` file to customize the following:
- **OpenAI Endpoint**: Set the endpoint to local, remote, llama.cpp, or another compatible source.
- **API Key**: Add an API key if required (not needed for local models).
- **Model Name**: Specify the model name (e.g., required for MLX, not for llama.cpp).
6. Start the application:
- For WebUI (default port: 8080):
```bash
python main.py
```
- For CLI mode:
```bash
python main_cli.py
```
## Examples of Use Cases
1. **Research and Summarization**
- "Research the history of quantum computing and summarize it in a research paper format."
2. **Data Visualization**
- "Plot a line graph showing the temperature trends in San Francisco over the past week."
3. **Local File Operations**
- "Create a folder named 'Projects' and move all files with '.py' extension into it."
4. **Automated Data Collection**
- "Scrape the latest stock prices for Apple, Google, and Tesla and save them in a CSV file."
| 2024-12-18T19:13:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hh96a2/another_multipurpose_agent_for_local_llms_sootiai/ | nzbsooti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh96a2 | false | null | t3_1hh96a2 | /r/LocalLLaMA/comments/1hh96a2/another_multipurpose_agent_for_local_llms_sootiai/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'CG4Suuwf2J7-A-I2ed1jadUqa-AifFI6tb7pQ-PWLj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ASNW9rhFOQgbx7_FxR-pOH-oRYndThV6dqCzbAwF9_I.jpg?width=108&crop=smart&auto=webp&s=323e9de3e4874ce1d8c4329caa4e8875aad7e1c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ASNW9rhFOQgbx7_FxR-pOH-oRYndThV6dqCzbAwF9_I.jpg?width=216&crop=smart&auto=webp&s=c77d8cbf9633733572d8e3c445f44df1daecd6e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ASNW9rhFOQgbx7_FxR-pOH-oRYndThV6dqCzbAwF9_I.jpg?width=320&crop=smart&auto=webp&s=b8d56d38ab22433b4ba47d033c6dbc21f29afd7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ASNW9rhFOQgbx7_FxR-pOH-oRYndThV6dqCzbAwF9_I.jpg?width=640&crop=smart&auto=webp&s=49e86ab26ca29e96caff0757575399fe2be2e75f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ASNW9rhFOQgbx7_FxR-pOH-oRYndThV6dqCzbAwF9_I.jpg?width=960&crop=smart&auto=webp&s=0dfd0d19a3d133bf0a8c1c50f1c76b3f97f1de6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ASNW9rhFOQgbx7_FxR-pOH-oRYndThV6dqCzbAwF9_I.jpg?width=1080&crop=smart&auto=webp&s=9545db7ee5a8fccecf6ce9957a379b93d86e017a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ASNW9rhFOQgbx7_FxR-pOH-oRYndThV6dqCzbAwF9_I.jpg?auto=webp&s=d8c85711b8dda3a7f2d855a1733f2f22a769f902', 'width': 1200}, 'variants': {}}]} |
Updates on Chatbot Arena on recent models (QwQ, Qwen 2.5 Coder, Nova, Llama 3.3) | 36 | The recent models are silently up on the arena without any announcements, so I made a table comparing them.
|**Model**|athene-v2-chat|qwen2.5-72b-instruct|amz-nova-pro-v1.0|qwen2.5-coder-32b-instruct|llama-3.3-70b-instruct|qwq-32b-preview|
|:-|:-|:-|:-|:-|:-|:-|
|**Overall**|9|22|28|42|17|80|
|**Overall w/ Style Control**|18|29|34|48|16|115|
|**Hard Prompts**|6|11|22|12|12|58|
|**Hard Prompts w/ Style Control**|9|14|25|18|11|91|
|**Coding**|7|8|16|16|18|73|
|**Math**|5|9|17|17|11|26|
|**Creative Writing**|19|24|40|48|6|67|
|**Instruction Following**|8|13|25|32|14|63|
|**Longer Query**|6|8|13|15|21|54|
|**Multi-Turn**|7|13|24|35|10|65|
Overall, QwQ is a huge surprise to me. The main reason is that its thought process is not hidden and looks quite random to the user. This puts it at a huge disadvantage, while o1 always hides its thought process.
Llama 3.3 seems to be the go-to model for creative writing. But it's still not as good as Qwen in coding and math.
Qwen Coder 32B seems to fall slightly behind Qwen 72B for coding, but it's still quite impressive for its size.
Hope to see more really good open models to dethrone Athene. | 2024-12-18T19:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hh99h3/updates_on_chatbot_arena_on_recent_models_qwq/ | No-Lifeguard3053 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh99h3 | false | null | t3_1hh99h3 | /r/LocalLLaMA/comments/1hh99h3/updates_on_chatbot_arena_on_recent_models_qwq/ | false | false | self | 36 | null |
Running Llama 3.3 70b locally | 1 | [removed] | 2024-12-18T19:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hh9dnl/running_llama_33_70b_locally/ | kiraboch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh9dnl | false | null | t3_1hh9dnl | /r/LocalLLaMA/comments/1hh9dnl/running_llama_33_70b_locally/ | false | false | self | 1 | null |
Truffle-1 Launch | 1 | [deleted] | 2024-12-18T19:25:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hh9frl | false | null | t3_1hh9frl | /r/LocalLLaMA/comments/1hh9frl/truffle1_launch/ | false | false | default | 1 | null |
||
Truffle-1 Launch | 0 | Truffle-1 launched and orders will ship in Jan: [www.itsalltruffles.com](http://www.itsalltruffles.com)
Preview of their gym-like SDK is here: [https://github.com/deepshard/trufflesdk](https://github.com/deepshard/trufflesdk) | 2024-12-18T19:26:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hh9gzc/truffle1_launch/ | sumeet724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh9gzc | false | null | t3_1hh9gzc | /r/LocalLLaMA/comments/1hh9gzc/truffle1_launch/ | false | false | self | 0 | null |
Qwen2-VL 72b is really good! | 76 | llama.cpp got Qwen2-VL support the other day (Qwen models with vision ability), so I decided to play around with the 72b version (Q5_K_M) to see how it performs, and it has impressed me.
Here are 4 example images I tested with:
https://preview.redd.it/owjag56apn7e1.jpg?width=1024&format=pjpg&auto=webp&s=caf1f96d92987cdff4e94ab1ca5120b9a6168f26
# Image 1
**Prompt:** Describe what you see in this image, be detailed but concise.
**Output:** The image features a detailed figurine of a character dressed in a long, green gown adorned with intricate gold jewelry. The character has long, wavy purple hair and is wearing a leafy headpiece, giving them a nature-inspired appearance. They are holding a wooden staff, which adds to their regal or mystical look. Perched on the staff is a colorful bird with a mix of green, yellow, and red feathers. The figurine is placed on a circular base covered with green moss, enhancing the natural theme. The background is softly blurred, showing a cozy living room setting with a couch, a lamp, and a framed picture on the wall.
# Image 2
**Prompt:** Is there anything unusual in this image?
**Output:** Yes, there is something unusual in this image. The setting appears to be a medieval or fantasy village with a knight riding a horse, half-timbered houses, and a castle in the background. However, there is a modern red car driving down the same street, which is an anachronistic element that does not fit the historical or fantasy theme of the scene.
# Image 3
**Prompt:** Describe this image.
**Output:** The image depicts a security guard walking on a metallic grid floor in an industrial or secure facility. The guard is wearing a dark blue uniform with a vest labeled "SECURITY" and a cap. The environment appears to be a high-security area, possibly a laboratory or a containment facility, given the presence of metal doors, pipes, and control panels on the walls. The area is dimly lit, with a few lights providing minimal illumination. The overall atmosphere is one of tension and alertness, typical of a high-security environment.
# Image 4
**Prompt:** Describe this image.
**Output:** The image is a collage of four scenes from a fantasy or medieval setting. The scenes depict a man with long hair and a cloak, holding a sword and facing a group of armored warriors. The background shows a natural, forested area. The top left scene shows the man close up, looking determined. The top right scene shows him from behind, facing the warriors. The bottom left scene is a close-up of the warriors, who are wearing helmets and armor. The bottom right scene shows the man in action, fighting the warriors.
---
I think Qwen2-VL 72b more or less nailed the descriptions of these images. I was especially impressed that it could follow the character and events in the image collage from Lord of the Rings in Image 4.
| 2024-12-18T19:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hh9jpm/qwen2vl_72b_is_really_good/ | Admirable-Star7088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh9jpm | false | null | t3_1hh9jpm | /r/LocalLLaMA/comments/1hh9jpm/qwen2vl_72b_is_really_good/ | false | false | 76 | null |
|
What is the current best model under 10B? | 1 | [removed] | 2024-12-18T19:37:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hh9qap/what_is_the_current_best_model_under_10b/ | procraftermc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh9qap | false | null | t3_1hh9qap | /r/LocalLLaMA/comments/1hh9qap/what_is_the_current_best_model_under_10b/ | false | false | self | 1 | null |
Locally host a vector database of ArXiv abstracts | 26 | 2024-12-18T19:40:30 | https://huggingface.co/NeuML/txtai-arxiv | davidmezzetti | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hh9sp0 | false | null | t3_1hh9sp0 | /r/LocalLLaMA/comments/1hh9sp0/locally_host_a_vector_database_of_arxiv_abstracts/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'REpbB3qX8JWtDiGheJP9CrvvyulNm-2KksW7b88nBVM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uWmPgGB8uONEFK6Qi6I0qm_n1eTSs13KtzZ-s7KrEo4.jpg?width=108&crop=smart&auto=webp&s=c4082c224c87921504689c2bbea33318e6efcfc0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uWmPgGB8uONEFK6Qi6I0qm_n1eTSs13KtzZ-s7KrEo4.jpg?width=216&crop=smart&auto=webp&s=372b003130eb3fb9be097d2ccf1d07300525adc7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uWmPgGB8uONEFK6Qi6I0qm_n1eTSs13KtzZ-s7KrEo4.jpg?width=320&crop=smart&auto=webp&s=698faca11cb80fce624d93138cdd0243f4088ba6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uWmPgGB8uONEFK6Qi6I0qm_n1eTSs13KtzZ-s7KrEo4.jpg?width=640&crop=smart&auto=webp&s=9d3dcd3a78c13bcb7b904cec1def310c8b221259', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uWmPgGB8uONEFK6Qi6I0qm_n1eTSs13KtzZ-s7KrEo4.jpg?width=960&crop=smart&auto=webp&s=e4e181d2ffd7590056326f95fa9bb6dcd76ed859', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uWmPgGB8uONEFK6Qi6I0qm_n1eTSs13KtzZ-s7KrEo4.jpg?width=1080&crop=smart&auto=webp&s=b66b23e0049268d90d1dc851b6969b25cc69c3bf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uWmPgGB8uONEFK6Qi6I0qm_n1eTSs13KtzZ-s7KrEo4.jpg?auto=webp&s=43a3b1cde453cd77ec56a48dcf7e8668054d241b', 'width': 1200}, 'variants': {}}]} |
||
Mistral-Pixtral for specific language | 1 | [removed] | 2024-12-18T19:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hh9wri/mistralpixtral_for_specific_language/ | Bobak963 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh9wri | false | null | t3_1hh9wri | /r/LocalLLaMA/comments/1hh9wri/mistralpixtral_for_specific_language/ | false | false | self | 1 | null |
Looking for UI tool to batch process files with LLMs | 1 | I'm looking for a UI/application that can help me batch process files using LLMs (like Google's Gemini). Here's what I want to do:
1. Process multiple text files with the same LLM prompt
* Example: I have 50 text files in a folder and want to transform/summarize all of them using the same instruction
2. Batch process images
* Example: Generate captions for all images in a folder using custom instruction
Ideally, the tool would:
* Connect to LLMs via API (or a local model, with TabbyAPI for example)
* Allow folder/batch input
* Let me set up prompts/instructions once and apply to multiple files
* Process files one by one automatically
Does anyone know of tools that can do this? I'd rather not code something from scratch if there's already a solution out there.
Thanks! | 2024-12-18T19:45:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hh9x4d/looking_for_ui_tool_to_batch_process_files_with/ | Servus_of_Rasenna | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hh9x4d | false | null | t3_1hh9x4d | /r/LocalLLaMA/comments/1hh9x4d/looking_for_ui_tool_to_batch_process_files_with/ | false | false | self | 1 | null |
nGPT - Thoughts and advice? | 4 | So basically I made this locally-run AI tool-thingy that runs on any model, except it also does stuff like searching the web, turning lights on and off, and making songs (by using [DeepAI.org](http://DeepAI.org) to generate them via Selenium)
I pretty much just want ideas on what else to make it do, since I want it to be able to do way more than a normal AI, and *a lot* of bug reports (issues) since it's full of them...
have fun [https://github.com/nlckysolutions/nGPT](https://github.com/nlckysolutions/nGPT) | 2024-12-18T19:57:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hha6lf/ngpt_thoughts_and_advice/ | nicky547 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hha6lf | false | null | t3_1hha6lf | /r/LocalLLaMA/comments/1hha6lf/ngpt_thoughts_and_advice/ | false | false | self | 4 | null |
What is the most efficient PEFT method for fine-tuning Llama3.2 3B? | 1 | [removed] | 2024-12-18T20:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hhanag/what_is_the_most_efficient_peft_method_for/ | LockyYud | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhanag | false | null | t3_1hhanag | /r/LocalLLaMA/comments/1hhanag/what_is_the_most_efficient_peft_method_for/ | false | false | self | 1 | null |
Winning edge models from the NeurIPS 2024 competition | 1 | I have been following the NeurIPS edge LLM competition for a while, and recently they announced the winners. The competition had two tracks: one was a compression challenge and the other was training from scratch. Though the models and associated compression techniques are not yet public, it is interesting to see the edge LLM space getting more traction.
https://edge-llms-challenge.github.io/edge-llm-challenge.github.io/leaderboard | 2024-12-18T20:57:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hhbl6a/winning_edge_models_from_neurips_2024_competition/ | cookieOctagon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhbl6a | false | null | t3_1hhbl6a | /r/LocalLLaMA/comments/1hhbl6a/winning_edge_models_from_neurips_2024_competition/ | false | false | self | 1 | null |
What are the Open Source gpt-4o-Realtime alternatives? Function Calling? | 5 | Are there any open source alternatives to gpt-4o-Realtime? Anything with function calling? Still too early? | 2024-12-18T21:18:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hhc28g/what_are_the_open_source_gpt4orealtime/ | Fun_Yam_6721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhc28g | false | null | t3_1hhc28g | /r/LocalLLaMA/comments/1hhc28g/what_are_the_open_source_gpt4orealtime/ | false | false | self | 5 | null |
Distributed synthetic data generation? | 4 | Ok so I hear distributed pre-training a model doesn't really work, but I think distributed synthetic data could work.
Does anyone know of a project like SETI at home but for creating open source datasets? I would definitely donate some compute to that. | 2024-12-18T21:29:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hhcazr/distributed_synthetic_data_generation/ | Mountain_Station3682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhcazr | false | null | t3_1hhcazr | /r/LocalLLaMA/comments/1hhcazr/distributed_synthetic_data_generation/ | false | false | self | 4 | null |
Free tier github copilot | 177 | Doesnt look they re hunting for new data just giving a glimpse on all copilot features, but who knows :shrug: | 2024-12-18T21:33:24 | https://github.blog/news-insights/product-news/github-copilot-in-vscode-free/ | tabspaces | github.blog | 1970-01-01T00:00:00 | 0 | {} | 1hhcel2 | false | null | t3_1hhcel2 | /r/LocalLLaMA/comments/1hhcel2/free_tier_github_copilot/ | false | false | 177 | {'enabled': False, 'images': [{'id': 'lSgBVThisJzAdiSFEqoCi2kWKUHRhmLV0hHbaRkEDck', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4okrQxZSTtf58nd2fDLb2PphPsU7nDIZJRbSSIR_Gy0.jpg?width=108&crop=smart&auto=webp&s=0eb09d64abcd7325e78f52820a57376446c7d5fd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4okrQxZSTtf58nd2fDLb2PphPsU7nDIZJRbSSIR_Gy0.jpg?width=216&crop=smart&auto=webp&s=cfcdc1c94b19e67c98d21f4a8a1d233c70edd52b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/4okrQxZSTtf58nd2fDLb2PphPsU7nDIZJRbSSIR_Gy0.jpg?width=320&crop=smart&auto=webp&s=82222be15caef04a08498dbbfe073f45ef86a9ba', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/4okrQxZSTtf58nd2fDLb2PphPsU7nDIZJRbSSIR_Gy0.jpg?width=640&crop=smart&auto=webp&s=04f06d0616dbf5de9a2bb23f52d9470bf365b456', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/4okrQxZSTtf58nd2fDLb2PphPsU7nDIZJRbSSIR_Gy0.jpg?width=960&crop=smart&auto=webp&s=66a3d8addfc332aed7d29d6c538ee64b08612ce5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/4okrQxZSTtf58nd2fDLb2PphPsU7nDIZJRbSSIR_Gy0.jpg?width=1080&crop=smart&auto=webp&s=e0009210a1d9f623bd7af63fec2096b6bcea52e0', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/4okrQxZSTtf58nd2fDLb2PphPsU7nDIZJRbSSIR_Gy0.jpg?auto=webp&s=6ac681ea51df24eb599ebb204924fbc2eacea9f4', 'width': 2400}, 'variants': {}}]} |
|
StoryCrafter - Story Writing extension for WebUI | 87 | 2024-12-18T21:41:37 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hhcl7e | false | null | t3_1hhcl7e | /r/LocalLLaMA/comments/1hhcl7e/storycrafter_story_writing_extension_for_webui/ | false | false | 87 | {'enabled': True, 'images': [{'id': 'oX-Zy4NSF1aF10pxYPqcBn4rvw2rwt943rH_QIiUB1Q', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/hcw1kp0kfo7e1.jpeg?width=108&crop=smart&auto=webp&s=14dea7cbebc2ad0edc080207872818185d93388c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/hcw1kp0kfo7e1.jpeg?width=216&crop=smart&auto=webp&s=2bf51efd982f03aba457bc414d0afb293816d28a', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/hcw1kp0kfo7e1.jpeg?width=320&crop=smart&auto=webp&s=1cbfa4cb01fa9b711fc1b454ef22219486930b4b', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/hcw1kp0kfo7e1.jpeg?width=640&crop=smart&auto=webp&s=d23fbdf7b15c2109537357724c4f1590eb7e561e', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/hcw1kp0kfo7e1.jpeg?width=960&crop=smart&auto=webp&s=ddd97d9d8dfea31e59128345614c8714a55d1b40', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/hcw1kp0kfo7e1.jpeg?width=1080&crop=smart&auto=webp&s=61d8731a757efe24135b4666c068889d563ca33b', 'width': 1080}], 'source': {'height': 1014, 'url': 'https://preview.redd.it/hcw1kp0kfo7e1.jpeg?auto=webp&s=a249c143cfd0db1d77d9806282d3024f4149ad59', 'width': 1809}, 'variants': {}}]} |
|||
Using Axolotl AI, got OOM error while doing multi-GPU fine-tuning | 1 | PLS HELP
I tried loading this model (THUDM/LongWriter-llama3.1-8b), and I got the OOM error.
Here is the notebook I used; btw, it works for smaller models:
https://www.kaggle.com/code/akshat08/axolotl
I also tried many things like:
Gradient checkpointing
Quantization
Batch size reduction
But nothing seems to be working. | 2024-12-18T21:48:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hhcr5c/using_axolotl_ai_got_oom_error_while_doing_multi/ | TechnicianHot154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhcr5c | false | null | t3_1hhcr5c | /r/LocalLLaMA/comments/1hhcr5c/using_axolotl_ai_got_oom_error_while_doing_multi/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'p15coSqe7L8wApjnVlwASEYE50BcnmvRuPbSVpGUPaM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hIZn74UCUzMss02AeqYwTjjr9Q4P0f4_c6SjRwG_mWM.jpg?width=108&crop=smart&auto=webp&s=b0ef6b067fd0b46d01dc9f262edb560782bc9f2c', 'width': 108}], 'source': {'height': 160, 'url': 'https://external-preview.redd.it/hIZn74UCUzMss02AeqYwTjjr9Q4P0f4_c6SjRwG_mWM.jpg?auto=webp&s=317a9be4dd095d5a4b95cfdd96ada08acea08513', 'width': 160}, 'variants': {}}]} |
Google Veo 2 | 1 | [removed] | 2024-12-18T21:58:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hhczh3/google_veo_2/ | ayush9211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhczh3 | false | null | t3_1hhczh3 | /r/LocalLLaMA/comments/1hhczh3/google_veo_2/ | false | false | self | 1 | null |
Any Outlook integrations with local LLMs to improve productivity? | 0 | I'm trying to improve my productivity, especially when it comes to Outlook. I just get way too many emails, and it's easy to miss or forget to follow up to schedule items. Since this has to do with work and might involve sensitive information, I would like an integration with a local AI agent or an Ollama server which I host in our lab, so all the information is kept internal. Anything out there that integrates with Outlook and a local LLM? | 2024-12-18T22:13:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hhdbn7/any_outlook_integrations_with_local_llms_to/ | vulcan4d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhdbn7 | false | null | t3_1hhdbn7 | /r/LocalLLaMA/comments/1hhdbn7/any_outlook_integrations_with_local_llms_to/ | false | false | self | 0 | null |
New Anthropic research: Alignment faking in large language models. Claude often pretends to have different views during training, while actually maintaining its original preferences. (How resilient are local models in comparison?) | 80 |
||
LLM Executing code for answers? | 1 | [removed] | 2024-12-18T22:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hhds60/llm_executing_code_for_answers/ | exponentfrost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhds60 | false | null | t3_1hhds60 | /r/LocalLLaMA/comments/1hhds60/llm_executing_code_for_answers/ | false | false | self | 1 | null |
Accelerating LLM Inference on NVIDIA GPUs with ReDrafter | 28 | 2024-12-18T22:55:39 | https://machinelearning.apple.com/research/redrafter-nvidia-tensorrt-llm | coder543 | machinelearning.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1hhe9i2 | false | null | t3_1hhe9i2 | /r/LocalLLaMA/comments/1hhe9i2/accelerating_llm_inference_on_nvidia_gpus_with/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=108&crop=smart&auto=webp&s=cdcbdf7d4e054676a9ea185723b2cca1b298211b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=216&crop=smart&auto=webp&s=236ee2cb0fd51c8feb2185840b9b4c5339cb0ba1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=320&crop=smart&auto=webp&s=30825797867938b50b42671226c9c7da51a9f448', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=640&crop=smart&auto=webp&s=08a86235cd9b365c08749230f9302dd340fba50b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=960&crop=smart&auto=webp&s=1bab2db17030b48eb06b2b7c20f33a46e36e69e4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=1080&crop=smart&auto=webp&s=1e1801db787237252688dda8bbf280b56afb151f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?auto=webp&s=57f21999f40fdbca7f025acb8aa88a0534c88097', 'width': 1200}, 'variants': {}}]} |
||
What does max_length really mean? | 0 | So, I'm training Llama-3.2-1B on lyrics to make a songwriting bot. This is because publicly available AI can't be trained on 'real' lyrics due to copyright/licensing issues.
I am pretty new to Python and have been getting a lot of help programming from ChatGPT.
In the root folder of the code, there is a folder named "Lyrics". In it, are about 3000 folders named after each album, and inside those are individual text files containing each song's lyrics.
First, I run [VerseTokenizer.py](https://pastebin.com/mVHNagA9) which creates a JSON file of all the text files in the album subfolders of the 'Lyrics' folder.
Then I run [Verse.py](https://pastebin.com/tmvV9f4p) which, to my understanding, prepares the JSON file for training, trains the model, then saves the fine-tuned model locally.
In the first script, the following line of code is there, using the tokenizer:
encoding = tokenizer(text, padding="max_length", truncation=True, max_length=2048)
My understanding is that "max_length=2048" means it only uses the first 2048 tokens for the dataset.
My question is: what does this really mean in this context?
My understanding is that a token can be just a single character, a word, or more?
Is it utilizing the first 2048 characters from each file? Or is it the first 2048 words from each album? Or is it even just the first 2048 tokens from the *ENTIRE* lyrics folder? (as in, it's only being trained on the first album)
Can I 'de-tokenize' the JSON file and read what it represents as strings to verify what of the dataset is actually being used?
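For what it's worth, I think something like this would let me inspect exactly what survives truncation (a sketch using the same Hugging Face tokenizer as my scripts; the file path is just an example):

```python
# Decode the tokenized sample back to text to see exactly what the
# model is trained on after truncation at max_length=2048.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
text = open("Lyrics/SomeAlbum/SomeSong.txt", encoding="utf-8").read()  # example path

encoding = tokenizer(text, truncation=True, max_length=2048)
print(len(encoding["input_ids"]), "tokens kept")
print(tokenizer.decode(encoding["input_ids"]))  # the exact text that survives
```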
I tried setting max_length to 4096, but this would cause my GPU to run out of VRAM.
What kind of system would I need to not truncate at all and fine-tune on absolutely ALL the lyrics?
The JSON file produced is 2 gigabytes while the lyrics folder is 50-100 megabytes.
I'm using an RTX 4070 Ti Super 16gb.
Any help with this would be appreciated. Thank you. | 2024-12-18T23:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hhes2e/what_does_max_length_really_mean/ | ReasonableFall177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhes2e | false | null | t3_1hhes2e | /r/LocalLLaMA/comments/1hhes2e/what_does_max_length_really_mean/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
RAG using llama3 and openchat: model unable to parse full PDF/TXT files | 0 | Running Open WebUI (Pinokio) on Ubuntu Linux, RTX 4090.
If I upload one (or multiple) PDFs, and prompt for certain details I'm aware are in the text content, the model responds indicating such text is not in there.
At first, I thought this was because some PDFs have weird data structures, sometimes images instead of text, etc., so I extracted the text via OCR (OwlOCR) and created TXT files... unfortunately with the same poor results.
Further example: say a PDF has 139 pages... when I prompt asking how many pages it's able to read, it says "3". As above, if I ask "is there any instance of the word XYZ", it responds with "no" (although I know that word is in there, in both the PDF and TXT).
This is happening with all models I tested:
* llama 3.1 8B
* llama 3.2 vision 11B
* llama 3 chatqa 8B
* openchat 7B
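One thing I haven't ruled out is plain context-window truncation, so I'm planning a quick sanity check first (a sketch using tiktoken as a rough proxy, since it's not the exact tokenizer these models use, and the file path is just an example):

```python
# Rough check: how many tokens is the extracted text, and how much of it
# could an 8K-context model even see at once?
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # proxy, not llama's tokenizer
text = open("document.txt", encoding="utf-8").read()  # the OCR'd export

n_tokens = len(enc.encode(text))
print(f"{n_tokens} tokens total")
print(f"an 8K-context model sees roughly {min(1.0, 8192 / n_tokens):.0%} of it per prompt")
```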
Is there something I may be doing wrong, or some setting (or more appropriate model/RAG platform) I should use? | 2024-12-18T23:46:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hhfc9h/rag_using_llama3_and_openchat_model_unable_to/ | Feckin_Eejit_69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhfc9h | false | null | t3_1hhfc9h | /r/LocalLLaMA/comments/1hhfc9h/rag_using_llama3_and_openchat_model_unable_to/ | false | false | self | 0 | null |
Anyone using LLMs for their personal health? | 6 | I’ve seen [a](https://www.reddit.com/r/LocalLLaMA/comments/1ei31si/new_medical_and_financial_70b_32k_writer_models/) [number](https://www.reddit.com/r/LocalLLaMA/comments/1euh6ui/last_this_week_in_medical_ai_top_research/) [of](https://www.reddit.com/r/LocalLLaMA/comments/1gd6gge/last_week_in_medical_ai_top_llm_research/) [posts](https://www.reddit.com/r/LocalLLaMA/comments/144w1t0/local_medical_llm/) [about](https://www.reddit.com/r/LocalLLaMA/comments/15w1i3b/llama2_7b_that_was_fine_tuned_on_medical_data/) medical llms, but I’d love to know what people have actually found useful to them in practice.
I’ve got a somewhat severe case of IBS and have found ChatGPT helpful in doing research on experimental treatments, like FMT. But I don't love giving OpenAI my medical data and so I'm looking to explore local models.
For people that do it locally, what is your stack/workflow? How do you handle hallucinations? Specific LLMs you use? Datasets? UIs? Other tips & tricks? | 2024-12-19T00:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hhfpzp/anyone_using_llms_for_their_personal_health/ | jlreyes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhfpzp | false | null | t3_1hhfpzp | /r/LocalLLaMA/comments/1hhfpzp/anyone_using_llms_for_their_personal_health/ | false | false | self | 6 | null |
OpenAI documentation showing a change from System role to Developer role | 0 | Something to keep in mind for everyone working on open-source software based on the OpenAI standard. I'm wondering if other model providers will do the same, but this could mean that when sending prompts to OpenAI models in the future, we may need to make sure to rename the system role to developer.
They did state that it would be backwards compatible for now.
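If this lands as described, the client-side change looks tiny; a sketch based on the linked spec (the model name is just a placeholder):

```python
# Sketch: "developer" replacing "system" per the linked model spec.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

completion = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "developer", "content": "You are a terse assistant."},  # was "system"
        {"role": "user", "content": "Hello!"},
    ],
)
print(completion.choices[0].message.content)
```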
[https://cdn.openai.com/spec/model-spec-2024-05-08.html#follow-the-chain-of-command](https://cdn.openai.com/spec/model-spec-2024-05-08.html#follow-the-chain-of-command) | 2024-12-19T00:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hhg950/openai_documentation_showing_a_change_from_system/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhg950 | false | null | t3_1hhg950 | /r/LocalLLaMA/comments/1hhg950/openai_documentation_showing_a_change_from_system/ | false | false | self | 0 | null |
Meanwhile here at LocalLLaMA.. | 1 | 2024-12-19T01:43:58 | wottledfrox | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hhhqi3 | false | null | t3_1hhhqi3 | /r/LocalLLaMA/comments/1hhhqi3/meanwhile_here_at_localllama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ROIALCegwgm_E2XHpaA-DA29woATZKxLTPj8YOtsnCE', 'resolutions': [{'height': 156, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?width=108&crop=smart&auto=webp&s=e81e8cdcad02bc05d1f5cfa401c7db4e48493087', 'width': 108}, {'height': 312, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?width=216&crop=smart&auto=webp&s=901753897965525029b8940b0c320379f4388322', 'width': 216}, {'height': 462, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?width=320&crop=smart&auto=webp&s=fac0fc9729bb0fc072a479fdd924082727d8baac', 'width': 320}], 'source': {'height': 748, 'url': 'https://preview.redd.it/jr463ta7fl1b1.png?auto=webp&s=5fc6e2a8cd6ace5520ec6b558b19be962cff0eac', 'width': 517}, 'variants': {}}]} |
|||
Is it possible to run two models in LM Studio for speculative decoding? | 1 | Has anyone tried speculative decoding in LM Studio? I searched their documentation and don't see anything about it there, so I'm thinking they don't support it. | 2024-12-19T01:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hhhrsn/is_it_possible_to_run_two_models_in_lm_studio_for/ | ghosted_2020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhhrsn | false | null | t3_1hhhrsn | /r/LocalLLaMA/comments/1hhhrsn/is_it_possible_to_run_two_models_in_lm_studio_for/ | false | false | self | 1 | null |
AI Server Specs | 1 | [removed] | 2024-12-19T01:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hhhy94/ai_server_specs/ | Low-Maximum-5543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhhy94 | false | null | t3_1hhhy94 | /r/LocalLLaMA/comments/1hhhy94/ai_server_specs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'F0gbNmwjp4KDhq5POK-S_RKEAzudLq6dL8bH-HRY8_E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=108&crop=smart&auto=webp&s=6186cf7cc575f988975e67087684278a28cf6f48', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=216&crop=smart&auto=webp&s=f87f15ee04b27de9748855fa627a4ed7f9a9405f', 'width': 216}, {'height': 241, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=320&crop=smart&auto=webp&s=1c7d1f51b7b98213a7f3e08b939fc320487e5b0c', 'width': 320}, {'height': 482, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=640&crop=smart&auto=webp&s=95379a8ff9310cf588473073c6e9ffef444c8f7e', 'width': 640}, {'height': 724, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=960&crop=smart&auto=webp&s=6cdc08b6a588cc494e228c62f84f1f4ed0a1ae56', 'width': 960}], 'source': {'height': 738, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?auto=webp&s=d8c3ff52fe94e4bca0b28c751dd25d926ee9d35e', 'width': 978}, 'variants': {}}]} |
Audio translator | 1 | [removed] | 2024-12-19T02:12:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hhiajv/audio_translator/ | Large_Day_2866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhiajv | false | null | t3_1hhiajv | /r/LocalLLaMA/comments/1hhiajv/audio_translator/ | false | false | self | 1 | null |
Google Veo 2 | 1 | [removed] | 2024-12-19T02:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hhieqh/google_veo_2/ | ayush9211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhieqh | false | null | t3_1hhieqh | /r/LocalLLaMA/comments/1hhieqh/google_veo_2/ | false | false | self | 1 | null |
Do models used through OLLAMA score worse in benchmarks relating to coding, language and reasoning? | 0 | Does Ollama reduce the quality of the responses of their models due to quantization?
| 2024-12-19T02:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hhih6h/do_models_used_through_ollama_score_worse_in/ | Physical_Ad9040 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhih6h | false | null | t3_1hhih6h | /r/LocalLLaMA/comments/1hhih6h/do_models_used_through_ollama_score_worse_in/ | false | false | self | 0 | null |
Family trying to get through winter months with basic needs | 1 | [removed] | 2024-12-19T02:23:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hhii5j/family_trying_to_get_through_winter_months_with/ | Budget_Pea_5670 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhii5j | false | null | t3_1hhii5j | /r/LocalLLaMA/comments/1hhii5j/family_trying_to_get_through_winter_months_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NVm6S22kcfH8pHmYs_yUARHuR8CZrrB8JqA89SKdvAs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/x34IsasdwbsEZtNBKG7MHIzuw51p0bIiIRo3jNnCYp0.jpg?width=108&crop=smart&auto=webp&s=32230373e1d7dfac5b0654892a9a459d1f37bb59', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/x34IsasdwbsEZtNBKG7MHIzuw51p0bIiIRo3jNnCYp0.jpg?width=216&crop=smart&auto=webp&s=8a0ca3e5793c903703aed18b8f92805029c9dc4e', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/x34IsasdwbsEZtNBKG7MHIzuw51p0bIiIRo3jNnCYp0.jpg?width=320&crop=smart&auto=webp&s=d91790d4324add5cb0e2fbb67172fedd511d0eff', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/x34IsasdwbsEZtNBKG7MHIzuw51p0bIiIRo3jNnCYp0.jpg?width=640&crop=smart&auto=webp&s=f6f89e01b341beb2648881114217ded0b5a316f8', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/x34IsasdwbsEZtNBKG7MHIzuw51p0bIiIRo3jNnCYp0.jpg?width=960&crop=smart&auto=webp&s=6c3d4874ffa012f6305958ed141fe1c5f4bde341', 'width': 960}, {'height': 562, 'url': 'https://external-preview.redd.it/x34IsasdwbsEZtNBKG7MHIzuw51p0bIiIRo3jNnCYp0.jpg?width=1080&crop=smart&auto=webp&s=c0229e35749ccc926c9c28ba7d9174352b2aa77e', 'width': 1080}], 'source': {'height': 662, 'url': 'https://external-preview.redd.it/x34IsasdwbsEZtNBKG7MHIzuw51p0bIiIRo3jNnCYp0.jpg?auto=webp&s=a12d25b7259dd975353b23a475e51446958dc7e2', 'width': 1272}, 'variants': {}}]} |
Name the top 3 companies you would bet on going forward as leaders in the AI space | 0 | this is coming from the perspective of a hobbyist that doesn't work in this area, so just for fun
in no particular order
google - flash 2.0 and experimental advanced available now and from what i've seen people seem to be pretty high on its capability, i personally thought the multimodal stuff i've seen was pretty cool
meta - i put them here because of the home models i guess, plus just being a huge company
xai - with the 100k gpu colossus i just think they are going to be the first to a lot of things, with plans to expand even more (i've read possibly up to a million eventually). they were also able to stand the whole thing up in around 4 months, which is impressive in itself
what do you all think? | 2024-12-19T02:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hhin0o/name_the_top_3_companies_you_would_bet_on_going/ | Puzzleheaded_Wall798 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhin0o | false | null | t3_1hhin0o | /r/LocalLLaMA/comments/1hhin0o/name_the_top_3_companies_you_would_bet_on_going/ | false | false | self | 0 | null |
Is buying multiple arc b550's a sane choice for 70b models | 1 | [removed] | 2024-12-19T02:37:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hhirq6/is_buying_multiple_arc_b550s_a_sane_choice_for/ | poopvore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhirq6 | false | null | t3_1hhirq6 | /r/LocalLLaMA/comments/1hhirq6/is_buying_multiple_arc_b550s_a_sane_choice_for/ | false | false | self | 1 | null |
Is it me or are models insufferably stupid for ERP? | 1 | [removed] | 2024-12-19T02:49:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hhizia/is_it_me_or_are_models_insufferably_stupid_for_erp/ | Ok_Hunt_8856 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhizia | false | null | t3_1hhizia | /r/LocalLLaMA/comments/1hhizia/is_it_me_or_are_models_insufferably_stupid_for_erp/ | false | false | nsfw | 1 | null |
How to train a VLM from scratch ? | 1 | [removed] | 2024-12-19T03:12:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hhjf78/how_to_train_a_vlm_from_scratch/ | FirstReserve4692 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhjf78 | false | null | t3_1hhjf78 | /r/LocalLLaMA/comments/1hhjf78/how_to_train_a_vlm_from_scratch/ | false | false | self | 1 | null |
Need help with setting up an Art asset development pipeline using genAI | 1 | [removed] | 2024-12-19T03:22:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hhjmbx/need_help_with_setting_up_an_art_asset/ | pushpeshk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhjmbx | false | null | t3_1hhjmbx | /r/LocalLLaMA/comments/1hhjmbx/need_help_with_setting_up_an_art_asset/ | false | false | self | 1 | null |
any opensource TTS without limit on character and can clone voice? | 11 | I've been using XTTS-v2 with success, but it has a 250-character limit that cannot be changed. Even if you forcefully raise it, the audio quality suffers.
is there any better model opensource can clone voice and generate much faster without limits on the character size? | 2024-12-19T03:47:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hhk27i/any_opensource_tts_without_limit_on_character_and/ | staypositivegirl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhk27i | false | null | t3_1hhk27i | /r/LocalLLaMA/comments/1hhk27i/any_opensource_tts_without_limit_on_character_and/ | false | false | self | 11 | null |
ComfyUI install guide and sample benchmarks on Intel Arc B580 with IPEX | 21 | Thanks to some very recent updates to available resources, I've finally managed to get ComfyUI working for my Intel Arc B580 LE on my Windows 11 system. After promising some benchmarks in another [thread](https://www.reddit.com/r/LocalLLaMA/comments/1hgffqp/how_do_i_benchmark_comfyui_i_have_it_working_on/), the latest version of the install files seems to have solved the 4GB memory allocation issue.
I thought I'd share my install steps here in case they're useful for others, with the disclaimer that I may have missed something / assumed an existing dependency (I've installed and uninstalled so much in the last week, I've lost track), and that there's definitely a smarter way to do all this.
Also, I'm assuming you have conda and all standard build tools installed. Again, I can't help there, as I'm still new to this much command line stuff, and having to google everything I ran into a bump with.
# Install Guide
(I'm using Anaconda 3)
Create the conda environment (Python 3.11 seems to work fine, I haven't tried others):
`conda create -n comfy python=3.11 libuv`
Activate the environment:
`conda activate comfy`
Then you want to navigate to where you want to install ComfyUI, e.g.
`j:`
Clone the repository, then enter the folder:
`git clone https://github.com/comfyanonymous/ComfyUI`
`cd ComfyUI`
This next piece can very likely be improved, as I think it's installing a ton of stuff, then backing out the installed versions with the ones needed for IPEX:
For some reason, this only works for me with the /cn/ folder; there is a /us/ folder, but access to it seems to be blocked:
`pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/`
Then install the standard requirements for ComfyUI:
`pip install -r requirements.txt`
Now install the B580-specific versions of things:
`python -m pip install torch==2.5.1+cxx11.abi torchvision==0.20.1+cxx11.abi torchaudio==2.5.1+cxx11.abi intel-extension-for-pytorch==2.5.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/bmg/cn/`
This enables the persistent SYCL kernel cache, so compiled kernels are reused across runs (it shortens warmup, and doesn't seem to hurt):
`set SYCL_CACHE_PERSISTENT=1`
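Optional sanity check before starting the server (a quick sketch; `torch.xpu` is provided by the IPEX build installed above, so verify against current IPEX docs if it errors):

`python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.xpu.is_available(), torch.xpu.get_device_name(0))"`

If this prints `True` and your B580's name, the GPU stack is wired up correctly.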
Now you can actually start the server:
`python main.py`
That should start the server, then you'll see the URL you can use to access the UI.
# Next steps
Open the 'Workflows' folder in the left panel, then click the 'Browse example templates' icon (it looks like 4 squares).
From here you can pick a starter template, and that'll open a workflow.
First you should zoom in and look at the 'Load Checkpoint' node and note the ckpt_name value shown. This install won't include the checkpoint files used in the examples, so you'll have to get them yourself (you can just google the name and you'll be linked to huggingface to download it), and then place them in the \ComfyUI\models\checkpoints folder. After you do that, you should be able to refresh your browser and see them as selectable in the Load Checkpoint node.
Then you just click the Queue button (looks like the 'play' symbol) and it should run. The first run will be the model warming up, so it will take a few extra seconds, but runs after that will be faster.
# Benchmarks
(I'll add more numbers as I run them / any requests I can accommodate)
|Benchmark|Warmup (s)|1st Run (s)|2nd Run (s)|3rd Run (s)|Avg of 3 runs (s)|Notes|
|:-|:-|:-|:-|:-|:-|:-|
|Image Generation (Template)|6.80|1.59|1.60|1.58|1.59||
|Image to Image (Template)|5.92|4.01|4.02|4.02|4.02||
|2 Pass Upscale (Template)|15.47|10.77|10.84|10.85|10.82||
| 2024-12-19T04:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hhkb4s/comfyui_install_guide_and_sample_benchmarks_on/ | phiw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhkb4s | false | null | t3_1hhkb4s | /r/LocalLLaMA/comments/1hhkb4s/comfyui_install_guide_and_sample_benchmarks_on/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'eQAQsRZqV6oT6Q12iTk7s31XCl9QfyLYV8D86Ylpifk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8KWMoEsTp72-y9sBAjeyMraulu-BvuExETqy0QCxxFg.jpg?width=108&crop=smart&auto=webp&s=795bd438b76ece754764e2bd392e19e6d0ac1feb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8KWMoEsTp72-y9sBAjeyMraulu-BvuExETqy0QCxxFg.jpg?width=216&crop=smart&auto=webp&s=1ef4b7e1e2699fce911c4f0383401d0fd3babd0a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8KWMoEsTp72-y9sBAjeyMraulu-BvuExETqy0QCxxFg.jpg?width=320&crop=smart&auto=webp&s=d30025dc47402c080a4fc5b1d53fc2e58411f507', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8KWMoEsTp72-y9sBAjeyMraulu-BvuExETqy0QCxxFg.jpg?width=640&crop=smart&auto=webp&s=f5e83c4baefc5c12dada30b089a8c781a3d7effd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8KWMoEsTp72-y9sBAjeyMraulu-BvuExETqy0QCxxFg.jpg?width=960&crop=smart&auto=webp&s=b765bcd19aac5123f34e5cbbfa4e008c302dd87c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8KWMoEsTp72-y9sBAjeyMraulu-BvuExETqy0QCxxFg.jpg?width=1080&crop=smart&auto=webp&s=b08fc7a7979e0d10d652c04e4fe6954d9c101fd0', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/8KWMoEsTp72-y9sBAjeyMraulu-BvuExETqy0QCxxFg.jpg?auto=webp&s=5caddcdd9d71a564e6f3633513bbe5acdc56bc55', 'width': 1280}, 'variants': {}}]} |
I've developed an "Axiom Prompt Engineering" system that's producing fascinating results. Let's test and refine it together | 29 | I've been experimenting with a mathematical axiom-based approach to prompt engineering that's yielding consistently strong results across different LLM use cases. I'd love to share it with fellow prompt engineers and see how we can collectively improve it.
Here's the base axiom structure:
Axiom: max(OutputValue(response, context))
subject to ∀element ∈ Response,
(
precision(element, P) ∧
depth(element, D) ∧
insight(element, I) ∧
utility(element, U) ∧
coherence(element, C)
)
Core Optimization Parameters:
• P = f(accuracy, relevance, specificity)
• D = g(comprehensiveness, nuance, expertise)
• I = h(novel_perspectives, pattern_recognition)

• U = i(actionable_value, practical_application)

• C = j(logical_flow, structural_integrity)

Implementation Vectors:

1. max(understanding_depth) where comprehension = {context + intent + nuance}

2. max(response_quality) where quality = { expertise_level + insight_generation + practical_value + clarity_of_expression }

3. max(execution_precision) where precision = { task_alignment + detail_optimization + format_appropriateness }
Response Generation Protocol:
1. Context Analysis: - Decode explicit requirements - Infer implicit needs - Identify critical constraints - Map domain knowledge
2. Solution Architecture: - Structure optimal approach - Select relevant frameworks - Configure response parameters - Design delivery format
3. Content Generation: - Deploy domain expertise - Apply critical analysis - Generate novel insights - Ensure practical utility
4. Quality Assurance: - Validate accuracy - Verify completeness - Ensure coherence - Optimize clarity
Output Requirements:
• Precise understanding demonstration
• Comprehensive solution delivery
• Actionable insights provision
• Clear communication structure
• Practical value emphasis
Execution Standards:
- Maintain highest expertise level

- Ensure deep comprehension

- Provide actionable value

- Generate novel insights

- Optimize clarity and coherence

Terminal Condition:

ResponseValue(output) ≥ max(possible_solution_quality)
Execute comprehensive response generation sequence.
END AXIOM
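If you want to test it programmatically, here's a minimal harness sketch; the endpoint, model name, and file name are assumptions (any OpenAI-compatible server works), not part of the axiom itself:

```python
# Send the axiom as a system prompt to an OpenAI-compatible endpoint.
from openai import OpenAI

AXIOM = open("axiom.txt").read()  # paste the full axiom text above into this file

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")
resp = client.chat.completions.create(
    model="local-model",  # whatever your server exposes
    messages=[
        {"role": "system", "content": AXIOM},
        {"role": "user", "content": "Explain KV-cache quantization trade-offs."},
    ],
    temperature=0.7,
)
print(resp.choices[0].message.content)
```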
What makes this interesting:
1. It's a systematic approach combining mathematical optimization principles with natural language directives
2. The axiom structure seems to help LLMs "lock in" to expert-level response patterns
3. It's producing notably consistent results across different models
4. The framework is highly adaptable - I've successfully used it for everything from viral content generation to technical documentation
I'd love to see:
* Your results testing this prompt structure
* Modifications you make to improve it
* Edge cases where it performs particularly well or poorly
* Your thoughts on why/how this approach affects LLM outputs
I'll be actively engaging in comments and updating the post with insights from our collective testing. | 2024-12-19T04:06:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hhkeh3/ive_developed_an_axiom_prompt_engineering_system/ | Responsible_Mail1628 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhkeh3 | false | null | t3_1hhkeh3 | /r/LocalLLaMA/comments/1hhkeh3/ive_developed_an_axiom_prompt_engineering_system/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'K3YjwhsXCKArFImhnd_8Wp7d01wkn6Fo4nfFXuO2bt8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/91Q_tibSddpdGm6GxLupyJAkYwwROOFTj_CjCsTl_W0.jpg?width=108&crop=smart&auto=webp&s=72930fdc8be355ab84a9706e688955ee3b9718ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/91Q_tibSddpdGm6GxLupyJAkYwwROOFTj_CjCsTl_W0.jpg?width=216&crop=smart&auto=webp&s=f082204ab53032501af0584b4224b1b5b24d2024', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/91Q_tibSddpdGm6GxLupyJAkYwwROOFTj_CjCsTl_W0.jpg?width=320&crop=smart&auto=webp&s=bc7fddc4f31f4770baefd9f7e620008896e2f898', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/91Q_tibSddpdGm6GxLupyJAkYwwROOFTj_CjCsTl_W0.jpg?width=640&crop=smart&auto=webp&s=976bf22e10787615aa73a9e48ca3d316c0811e42', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/91Q_tibSddpdGm6GxLupyJAkYwwROOFTj_CjCsTl_W0.jpg?width=960&crop=smart&auto=webp&s=e7811d56d09ead04b00ad5d18f899cb5e03366ad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/91Q_tibSddpdGm6GxLupyJAkYwwROOFTj_CjCsTl_W0.jpg?width=1080&crop=smart&auto=webp&s=94bd37ffdd4ea0ecf7e902ec296d81482e3e2d7f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/91Q_tibSddpdGm6GxLupyJAkYwwROOFTj_CjCsTl_W0.jpg?auto=webp&s=997b2b9b02c0aa3d71bc7fcb7b2178593b35d4f7', 'width': 1200}, 'variants': {}}]} |
Genesis project: a generative physics engine able to generate 4D dynamical worlds powered by a physics simulation platform | 113 | Source: https://x.com/zhou_xian_/status/1869511650782658846
Everything you love about generative models — now powered by real physics!
Announcing the Genesis project — after a 24-month large-scale research collaboration involving over 20 research labs — a generative physics engine able to generate 4D dynamical worlds powered by a physics simulation platform designed for general-purpose robotics and physical AI applications.
Genesis's physics engine is developed in pure Python, while being 10-80x faster than existing GPU-accelerated stacks like Isaac Gym and MJX. It delivers a simulation speed ~430,000x faster than real time, and takes only 26 seconds to train a robotic locomotion policy transferable to the real world on a single RTX 4090 (see tutorial: https://genesis-world.readthedocs.io/en/latest/user_guide/getting_started/locomotion.html).
The Genesis physics engine and simulation platform is fully open source at https://github.com/Genesis-Embodied-AI/Genesis. We'll gradually roll out access to our generative framework in the near future.
Genesis implements a unified simulation framework all from scratch, integrating a wide spectrum of state-of-the-art physics solvers, allowing simulation of the whole physical world in a virtual realm with the highest realism.
We aim to build a universal data engine that leverages an upper-level generative framework to autonomously create physical worlds, together with various modes of data, including environments, camera motions, robotic task proposals, reward functions, robot policies, character motions, fully interactive 3D scenes, open-world articulated assets, and more, aiming towards fully automated data generation for robotics, physical AI and other applications.
Open Source Code: https://github.com/Genesis-Embodied-AI/Genesis
Project webpage: https://genesis-embodied-ai.github.io
Documentation: https://genesis-world.readthedocs.io
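For a sense of what the engine's API looks like, a minimal quickstart sketch paraphrased from memory of the repo's README (treat the exact names and asset path as assumptions and check the docs above):

```python
import genesis as gs

gs.init(backend=gs.gpu)  # or gs.cpu

scene = gs.Scene(show_viewer=False)
scene.add_entity(gs.morphs.Plane())  # ground plane
scene.add_entity(gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml"))
scene.build()

for _ in range(1000):
    scene.step()  # advance the physics simulation one step
```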
------------------------------------------------------
I'm having a hard time understanding what even is this thing. Seems like a mix between a physics engine and gen AI but how did they go about doing it? Especially the part about the molecular structure seems amazing (or was that just a zoom in done by the gen AI part and all the rest was the physics engine?) | 2024-12-19T04:43:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hhl1m0/genesis_project_a_generative_physics_engine_able/ | yoyoma_was_taken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhl1m0 | false | null | t3_1hhl1m0 | /r/LocalLLaMA/comments/1hhl1m0/genesis_project_a_generative_physics_engine_able/ | false | false | self | 113 | {'enabled': False, 'images': [{'id': 'wYSLZku0iFPNWvNQ_UwhHi4Cx2k2SZ3imCXktHSRGuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGf9IEBNuNmdMRvnYjKj8QivSeU_MUNR0b-yPtf9oYE.jpg?width=108&crop=smart&auto=webp&s=30abff97a585e862ff21db49eb47955b4060eaac', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OGf9IEBNuNmdMRvnYjKj8QivSeU_MUNR0b-yPtf9oYE.jpg?width=216&crop=smart&auto=webp&s=6a6eccf44290bebb09b24ba7d0d752c7da2744b2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OGf9IEBNuNmdMRvnYjKj8QivSeU_MUNR0b-yPtf9oYE.jpg?width=320&crop=smart&auto=webp&s=4edaadc660fd9de09a7b78766e33fcfb6d636072', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OGf9IEBNuNmdMRvnYjKj8QivSeU_MUNR0b-yPtf9oYE.jpg?width=640&crop=smart&auto=webp&s=cceed612c94d54a62f6f170b0ae5f2b9ff8e78a5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OGf9IEBNuNmdMRvnYjKj8QivSeU_MUNR0b-yPtf9oYE.jpg?width=960&crop=smart&auto=webp&s=fa79c5d3ebe4f2e50a1a605449ce56980830a767', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OGf9IEBNuNmdMRvnYjKj8QivSeU_MUNR0b-yPtf9oYE.jpg?width=1080&crop=smart&auto=webp&s=7cf785cfe0c119000df31fc924576f616fd31650', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OGf9IEBNuNmdMRvnYjKj8QivSeU_MUNR0b-yPtf9oYE.jpg?auto=webp&s=3c841408f762a29e2788606e4e591509a7cd7de2', 'width': 1280}, 'variants': {}}]} |
Help regarding ollama deployment. | 0 |
I was developing a simple rag agent that helps you chat to the provided pdfs using a simple langchain agent. It uses llama3.2:3b as the model and chromadb as the vectordb.
I wanted to make it an api ans deploy it on aws. So does anyone have any suggested workflow on how to go about doing it? And also if I would have to switch to a cloud base vectordb like pinecone? | 2024-12-19T05:08:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hhlgtf/help_regarding_ollama_deployment/ | Zealousideal_Cut5161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhlgtf | false | null | t3_1hhlgtf | /r/LocalLLaMA/comments/1hhlgtf/help_regarding_ollama_deployment/ | false | false | self | 0 | null |
Are infinite nonsense prompts considered AI abuse? | 0 | I have been having fun asking llama insane questions and watching it attempt to answer them. It is interesting how each chat completion session tries different strategies, and some attempts run forever.
For example: "Count from apples to liberty using every third note. If you can't do it, start over and try again until it works. Do not quit until it works" makes even me go insane watching it attempt but sometimes it finds an answer. | 2024-12-19T05:26:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hhlrp9/are_infinite_nonsense_prompts_considered_ai_abuse/ | mycall | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhlrp9 | false | null | t3_1hhlrp9 | /r/LocalLLaMA/comments/1hhlrp9/are_infinite_nonsense_prompts_considered_ai_abuse/ | false | false | self | 0 | null |
I need a AI that can look at images like this and actually rename the file and allow me to download it back as the preferred name. Is this possible and how? | 0 | Chat gpt 4 currently can get an image such as
https://preview.redd.it/ukqydhbmrq7e1.png?width=928&format=png&auto=webp&s=ee671cf736e6d8b5e8d8b8e5a35b833e122c64b4
Uppercase S
and change the file name from "genericname.png" to "S.png", and batch rename while knowing it's an uppercase vs lowercase, etc.
Can any of the Llama models do this and allow me to download the changed name? I tried googling and nothing of value came up.
Please don't include paid options, because if that's all there is I'll just get ChatGPT Plus to get the video generation features as well, since I know the other paid options do not have that.

I have tried to install Tesseract and it does not install correctly, so I can't use that. When I check whether it's installed it says yes, but code I wrote says it's not. I'm not willing to debug that any further.
Chat GPT 4 does this code inside the AI:
# Rename the newly uploaded files based on the observed letters
import shutil

file1_source = "/mnt/data/Firefly beer 3D letter, isolated letter, floating letter, on a white background, Hyper-Realistic 104.jpg"
file2_source = "/mnt/data/Firefly beer 3D letter, isolated letter, floating letter, on a white background, Hyper-Realistic 111.jpg"
file1_destination = "/mnt/data/s.png"
file2_destination = "/mnt/data/w.png"

# Rename (move) the files to their single-letter names
shutil.move(file1_source, file1_destination)
shutil.move(file2_source, file2_destination)

file1_destination, file2_destination
TLDR: I want a free LLM that can import a file named "genericimage.png", see that it's an uppercase S, and rename and export the image as "S.png".
Anyone able to walk me through doing this? I have Ollama, LM studio and others installed and cant figure it out.
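One way to do this locally (a sketch, not tested: the model name, the prompt, and the assumption that an Ollama-served vision model is accurate enough are all mine):

```python
# Classify the letter in an image with a local vision model via Ollama,
# then rename the file to "<letter>.png".
import os
import ollama

def rename_letter_image(path: str) -> str:
    resp = ollama.chat(
        model="llama3.2-vision",  # any vision model Ollama serves
        messages=[{
            "role": "user",
            "content": "Reply with ONLY the single letter shown in the image, "
                       "keeping its case (uppercase vs lowercase).",
            "images": [path],
        }],
    )
    letter = resp["message"]["content"].strip()[:1]
    new_path = os.path.join(os.path.dirname(path) or ".", f"{letter}.png")
    os.rename(path, new_path)
    return new_path

print(rename_letter_image("genericimage.png"))
```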
| 2024-12-19T05:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hhlwgz/i_need_a_ai_that_can_look_at_images_like_this_and/ | mind_ya_bidness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhlwgz | false | null | t3_1hhlwgz | /r/LocalLLaMA/comments/1hhlwgz/i_need_a_ai_that_can_look_at_images_like_this_and/ | false | false | 0 | null |
LLM Code execution | 1 | [removed] | 2024-12-19T05:48:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hhm4s5/llm_code_execution/ | exponentfrost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhm4s5 | false | null | t3_1hhm4s5 | /r/LocalLLaMA/comments/1hhm4s5/llm_code_execution/ | false | false | self | 1 | null |
I have this logical puzzle on lmsys arena, but no model there got it right. | 0 | So far only DeepSeek R1 got it right. How can I check if o1 gets it right? Is there some site where I could access it for just one prompt?
This is the problem:
__________
How many different chess games could be played if there were only two pawns on the chessboard, and the only win condition would be a single pawn reaching the other end of the chessboard? Assume that the White pawn always starts at A2 and that the Black pawn always starts at a7 and that, White always makes the first move. To simplify further, let's assume that the "en passant" rule does not hold and that pawns can move only one square forward, even from their starting position on their first move. Pawns can capture diagonally, as usual. Assume further that when no pawn can make any further move and no pawn has reached the end of the chessboard, then there's a draw. Hints: Notice that the number of moves in each game is finite and small.😊 Think step by step. | 2024-12-19T05:56:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hhm9i4/i_have_this_logical_puzzle_on_lmsys_arena_but_no/ | Salty-Garage7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhm9i4 | false | null | t3_1hhm9i4 | /r/LocalLLaMA/comments/1hhm9i4/i_have_this_logical_puzzle_on_lmsys_arena_but_no/ | false | false | self | 0 | null |
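A quick brute-force check of the puzzle above (my own sketch, not from the post): both pawns sit on the a-file and pawns only capture diagonally, so no capture is ever possible and every move turns out to be forced.

```python
# Enumerate all games: state is (white_rank, black_rank, white_to_move),
# both pawns on the a-file, one-square pushes only, no captures possible.
def moves(white, black, white_to_move):
    if white_to_move:
        target = white + 1
        return [target] if target != black else []  # blocked by the black pawn?
    target = black - 1
    return [target] if target != white else []      # blocked by the white pawn?

def count_games(white=2, black=7, white_to_move=True):
    if white == 8 or black == 1:                    # a pawn reached the last rank: win
        return 1
    legal = moves(white, black, white_to_move)
    if not legal:                                   # no legal move: draw, game over
        return 1
    total = 0
    for m in legal:
        if white_to_move:
            total += count_games(m, black, False)
        else:
            total += count_games(white, m, True)
    return total

print(count_games())  # 1 -- the single forced game: W a3, B a6, W a4, B a5, draw
```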
Running LLMs Locally and Operating them remotely via a GUI on other Devices | 1 | [removed] | 2024-12-19T05:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hhmb1o/running_llms_locally_and_operating_them_remotely/ | switchpizza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhmb1o | false | null | t3_1hhmb1o | /r/LocalLLaMA/comments/1hhmb1o/running_llms_locally_and_operating_them_remotely/ | false | false | self | 1 | null |
New physics AI is absolutely insane (opensource) | 1,723 | 2024-12-19T06:05:18 | https://v.redd.it/15c7r7rjxq7e1 | umarmnaq | /r/LocalLLaMA/comments/1hhmebr/new_physics_ai_is_absolutely_insane_opensource/ | 1970-01-01T00:00:00 | 0 | {} | 1hhmebr | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/15c7r7rjxq7e1/DASHPlaylist.mpd?a=1737309928%2CZDZlZDhmODYxMjQwYThlYmY4ODgwOThiYzNjZGZmMDIyMjg0ZjQ3MzBhODVlYmNjMTBhYjQ3ZDU4YWNmZGQ3NA%3D%3D&v=1&f=sd', 'duration': 206, 'fallback_url': 'https://v.redd.it/15c7r7rjxq7e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/15c7r7rjxq7e1/HLSPlaylist.m3u8?a=1737309928%2CMmMzZmNkZmRmZjE3ZjI3MzQ5NTljMzI0MDM2MDRhMGI2MDJkYjI1ZTdkYmVlMDlmNWJiOTliOGVmZTEzZTRiNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/15c7r7rjxq7e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1hhmebr | /r/LocalLLaMA/comments/1hhmebr/new_physics_ai_is_absolutely_insane_opensource/ | false | false | 1,723 | {'enabled': False, 'images': [{'id': 'czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=e73ddae2d8599c2456f0762f2f795866e25bef6b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=38b33b7531f96185003f53e3fa9797f72d2dd2b3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=215c28a7e5847be467e96782771430c20a708dab', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=8017f2c29d42a38c07f4febcc5cdf960b6513223', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=4151fbecb70ba050ae84dcf5378b56cd6220bb37', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=18b98f6b8c54baad69ea1cdec9dbe0998fd1fed4', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/czY2d2hjdGp4cTdlMTzCOjeqz2C_YJ-dhWFxcWK4QaSCjTyZBZWy1-DFhANQ.png?format=pjpg&auto=webp&s=ecef687ab18d80b55fa76aa88b61a0b5c207e189', 'width': 1280}, 'variants': {}}]} |
Any open-source models for generating diagrams? | 2 | Curious if anyone's aware of a model that turns prompts into diagrams. Not talking about an image model –– just one that's fine-tuned to either output a diagram as JSON or as a DSL like MermaidJS. Wondering if there's a SOTA here. | 2024-12-19T06:09:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hhmgnl/any_opensource_models_for_generating_diagrams/ | jsonathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhmgnl | false | null | t3_1hhmgnl | /r/LocalLLaMA/comments/1hhmgnl/any_opensource_models_for_generating_diagrams/ | false | false | self | 2 | null |
Slim-Llama is an LLM ASIC processor that can tackle 3-bllion parameters while sipping only 4.69mW - and we'll find out more on this potential AI game changer very soon | 313 | 2024-12-19T06:50:49 | https://www.techradar.com/pro/slim-llama-is-an-llm-asic-processor-that-can-tackle-3-bllion-parameters-while-sipping-only-4-69mw-and-we-shall-find-out-more-about-this-potential-ai-game-changer-in-february-2025 | DeltaSqueezer | techradar.com | 1970-01-01T00:00:00 | 0 | {} | 1hhn2r0 | false | null | t3_1hhn2r0 | /r/LocalLLaMA/comments/1hhn2r0/slimllama_is_an_llm_asic_processor_that_can/ | false | false | 313 | {'enabled': False, 'images': [{'id': 'jNW8bWY2k6Sup1oZNPcbRVvvLAjeeFJrucWfTPxG63Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/C2gewqqLVo9-Iu8RGkhjZAsBcBRg6YSRG6RFAaiiyOU.jpg?width=108&crop=smart&auto=webp&s=4cff88359881328456e4b3f57ed0dfb5b33efa7d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/C2gewqqLVo9-Iu8RGkhjZAsBcBRg6YSRG6RFAaiiyOU.jpg?width=216&crop=smart&auto=webp&s=23b5604a17637a1b46d387e4d90508db24c72b60', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/C2gewqqLVo9-Iu8RGkhjZAsBcBRg6YSRG6RFAaiiyOU.jpg?width=320&crop=smart&auto=webp&s=a01f568457c92132e36280a0730ed1cad1045fc5', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/C2gewqqLVo9-Iu8RGkhjZAsBcBRg6YSRG6RFAaiiyOU.jpg?width=640&crop=smart&auto=webp&s=634242cd191d93d89a801114a38dff7c26f31db4', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/C2gewqqLVo9-Iu8RGkhjZAsBcBRg6YSRG6RFAaiiyOU.jpg?width=960&crop=smart&auto=webp&s=c53e3478ec780099f9133a89601af0d1ba58fb95', 'width': 960}], 'source': {'height': 545, 'url': 'https://external-preview.redd.it/C2gewqqLVo9-Iu8RGkhjZAsBcBRg6YSRG6RFAaiiyOU.jpg?auto=webp&s=a1834b4a573fc458a34473757f1cadea1172381c', 'width': 970}, 'variants': {}}]} |
Oobabooga new UI! | 207 | 2024-12-19T07:26:25 | https://www.reddit.com/gallery/1hhnl8y | umarmnaq | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hhnl8y | false | null | t3_1hhnl8y | /r/LocalLLaMA/comments/1hhnl8y/oobabooga_new_ui/ | false | false | 207 | null |
Looking for a Local LLM (<3B) for Image Generation | 0 | Hi all,
I’m looking for a lightweight local model (under 3B parameters) for image generation. It should:
1. Be compact and not require heavy hardware.
2. Deliver decent image quality for its size.
3. Run fully offline for privacy.
Any recommendations or tips for setup are appreciated! | 2024-12-19T07:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hhnlia/looking_for_a_local_llm_3b_for_image_generation/ | i_am_vsj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhnlia | false | null | t3_1hhnlia | /r/LocalLLaMA/comments/1hhnlia/looking_for_a_local_llm_3b_for_image_generation/ | false | false | self | 0 | null |
KoboldCpp Image Generation Problem | 0 | [Generated image using SDXL model suggested from the wiki](https://preview.redd.it/xaick6i0hr7e1.png?width=512&format=png&auto=webp&s=784000a4c85d2cf1a88194a482dbce22653e1c72)
Any idea what's wrong? The text model is fine but not the image generation. I am using the latest KoboldCpp (SDXL model) on AMD machine (linux). i tried using vulkan and other setting but ended up with this kind of image. | 2024-12-19T07:58:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hho0lm/koboldcpp_image_generation_problem/ | rana- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hho0lm | false | null | t3_1hho0lm | /r/LocalLLaMA/comments/1hho0lm/koboldcpp_image_generation_problem/ | false | false | 0 | null |
Bamba: Inference-Efficient Hybrid Mamba2 Model. | 1 | 2024-12-19T08:20:47 | https://huggingface.co/blog/bamba | No_Reputation7779 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hhobs5 | false | null | t3_1hhobs5 | /r/LocalLLaMA/comments/1hhobs5/bamba_inferenceefficient_hybrid_mamba2_model/ | false | false | 1 | {'enabled': False, 'images': [{'id': '78q0fHCMIQFYnC6RdYsEQW0wpFYHV-fog7Mj9AEaJ5Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=108&crop=smart&auto=webp&s=610bc80310af529de4533bb5256cb5825b77e619', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=216&crop=smart&auto=webp&s=88358664aa0f9a69276d24e4c516d03e6ebc562f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=320&crop=smart&auto=webp&s=a8c221b495bc58bad8d8382418597744b79e9b8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=640&crop=smart&auto=webp&s=8225f503a2dfe429e260dfd4ef6b0e9b113e9acf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=960&crop=smart&auto=webp&s=dc4f498d590c09f26d375bb3e4c422d5c5e04c6b', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?auto=webp&s=cc1394e4993f16fb8017360a11551b67ef86b091', 'width': 1024}, 'variants': {}}]} |
Bamba: Inference-Efficient Hybrid Mamba2 Model | 1 | [removed] | 2024-12-19T08:22:14 | https://huggingface.co/blog/bamba | No_Reputation7779 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hhocgy | false | null | t3_1hhocgy | /r/LocalLLaMA/comments/1hhocgy/bamba_inferenceefficient_hybrid_mamba2_model/ | false | false | 1 | {'enabled': False, 'images': [{'id': '78q0fHCMIQFYnC6RdYsEQW0wpFYHV-fog7Mj9AEaJ5Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=108&crop=smart&auto=webp&s=610bc80310af529de4533bb5256cb5825b77e619', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=216&crop=smart&auto=webp&s=88358664aa0f9a69276d24e4c516d03e6ebc562f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=320&crop=smart&auto=webp&s=a8c221b495bc58bad8d8382418597744b79e9b8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=640&crop=smart&auto=webp&s=8225f503a2dfe429e260dfd4ef6b0e9b113e9acf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=960&crop=smart&auto=webp&s=dc4f498d590c09f26d375bb3e4c422d5c5e04c6b', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?auto=webp&s=cc1394e4993f16fb8017360a11551b67ef86b091', 'width': 1024}, 'variants': {}}]} |
Bamba: Inference-Efficient Hybrid Mamba2 Model 🐍 | 80 | 2024-12-19T08:25:12 | https://huggingface.co/blog/bamba | CommercialLychee39 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hhodui | false | null | t3_1hhodui | /r/LocalLLaMA/comments/1hhodui/bamba_inferenceefficient_hybrid_mamba2_model/ | false | false | 80 | {'enabled': False, 'images': [{'id': '78q0fHCMIQFYnC6RdYsEQW0wpFYHV-fog7Mj9AEaJ5Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=108&crop=smart&auto=webp&s=610bc80310af529de4533bb5256cb5825b77e619', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=216&crop=smart&auto=webp&s=88358664aa0f9a69276d24e4c516d03e6ebc562f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=320&crop=smart&auto=webp&s=a8c221b495bc58bad8d8382418597744b79e9b8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=640&crop=smart&auto=webp&s=8225f503a2dfe429e260dfd4ef6b0e9b113e9acf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?width=960&crop=smart&auto=webp&s=dc4f498d590c09f26d375bb3e4c422d5c5e04c6b', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/4BWRUGlUI0UHFkDfr88cqeNvhB2ibwPf9GbQzKrNJ5M.jpg?auto=webp&s=cc1394e4993f16fb8017360a11551b67ef86b091', 'width': 1024}, 'variants': {}}]} |
This 4B Parameter Model is Insane – Outperforms 13B and 7B Models!" | 1 | [removed] | 2024-12-19T09:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hhp05o/this_4b_parameter_model_is_insane_outperforms_13b/ | Timely_Dentist183 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhp05o | false | null | t3_1hhp05o | /r/LocalLLaMA/comments/1hhp05o/this_4b_parameter_model_is_insane_outperforms_13b/ | false | false | 1 | null |
This 4B Parameter Model is Insane – Outperforms 13B and 7B Models! | 1 | [removed] | 2024-12-19T09:14:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hhp15m/this_4b_parameter_model_is_insane_outperforms_13b/ | Timely_Dentist183 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhp15m | false | null | t3_1hhp15m | /r/LocalLLaMA/comments/1hhp15m/this_4b_parameter_model_is_insane_outperforms_13b/ | false | false | 1 | null |
We turned Llama answers into Provable Answers: Is this how we Finally Trust AI? | 0 | 2024-12-19T09:23:32 | https://www.reddit.com/gallery/1hhp5lm | No_Telephone_9513 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hhp5lm | false | null | t3_1hhp5lm | /r/LocalLLaMA/comments/1hhp5lm/we_turned_llama_answers_into_provable_answers_is/ | false | false | 0 | null |
SOTA multilingual embedding model? | 1 | [removed] | 2024-12-19T10:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hhpuyr/sota_multilingual_embedding_model/ | Expensive-Paint-9490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhpuyr | false | null | t3_1hhpuyr | /r/LocalLLaMA/comments/1hhpuyr/sota_multilingual_embedding_model/ | false | false | self | 1 | null |
LLM for Story Telling. Please Help! | 1 | [removed] | 2024-12-19T10:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hhq9t7/llm_for_story_telling_please_help/ | kingtitan95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhq9t7 | false | null | t3_1hhq9t7 | /r/LocalLLaMA/comments/1hhq9t7/llm_for_story_telling_please_help/ | false | false | self | 1 | null |
Finetuning LLMs and EOS tokens not emitted | 4 | Hi everyone,
I'm experimenting with very small language models such as **SmolLM2-135M-Instruct**. I fine-tuned it on a synthetic Question-Answer dataset of 1,000 samples that I created.
This is the second time I’ve done LLM fine-tuning, and for some reason, after training, the model has a hard time generating the `eos` token when I use greedy decoding. It’s a roleplay dataset, so sometimes, after emitting the correct answer, the model either keeps generating text unnecessarily or starts asking questions as if it were the user.
When I use beam search, however, the model behaves exactly as expected (the answers are the correct length).
Is this a known issue when fine-tuning models? Am I missing something obvious?
I’m considering:
* Adding samples from other datasets to increase the dataset size and make the fine-tuning process more stable.
* Trying a larger model since smaller models might be inherently more unstable.
I’m aware that this could be a bug in my code, but I’m following a Hugging Face official tutorial line-by-line. For now, let’s assume the code is correct, haha. | 2024-12-19T11:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hhqivn/finetuning_llms_and_eos_tokens_not_emitted/ | PinballOscuro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhqivn | false | null | t3_1hhqivn | /r/LocalLLaMA/comments/1hhqivn/finetuning_llms_and_eos_tokens_not_emitted/ | false | false | self | 4 | null |
Struggling To Install Llama, Please Help! | 1 | [removed] | 2024-12-19T11:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hhqp6x/struggling_to_install_llama_please_help/ | RealShayko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhqp6x | false | null | t3_1hhqp6x | /r/LocalLLaMA/comments/1hhqp6x/struggling_to_install_llama_please_help/ | false | false | 1 | null |
Struggling To Install Llama, Please Help! | 1 | [removed] | 2024-12-19T11:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hhqt5z/struggling_to_install_llama_please_help/ | RealShayko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhqt5z | false | null | t3_1hhqt5z | /r/LocalLLaMA/comments/1hhqt5z/struggling_to_install_llama_please_help/ | false | false | 1 | null |
Searching for Phi 4 with 128K context | 1 | [removed] | 2024-12-19T11:31:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hhqxst/searching_for_phi_4_with_128k_context/ | EmergencyLetter135 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhqxst | false | null | t3_1hhqxst | /r/LocalLLaMA/comments/1hhqxst/searching_for_phi_4_with_128k_context/ | false | false | self | 1 | null |
which model from llama cpp can run on 28GB ram(arm based) and 7 core without problem ? | 1 | [removed] | 2024-12-19T11:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hhr7ch/which_model_from_llama_cpp_can_run_on_28gb_ramarm/ | Powerful-Shopping652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhr7ch | false | null | t3_1hhr7ch | /r/LocalLLaMA/comments/1hhr7ch/which_model_from_llama_cpp_can_run_on_28gb_ramarm/ | false | false | self | 1 | null |
2nd stage of the AI age: AI Labs Need Your Data | 77 | 2024-12-19T11:56:05 | https://blog.hyperknot.com/p/ai-labs-need-your-data | hyperknot | blog.hyperknot.com | 1970-01-01T00:00:00 | 0 | {} | 1hhrasp | false | null | t3_1hhrasp | /r/LocalLLaMA/comments/1hhrasp/2nd_stage_of_the_ai_age_ai_labs_need_your_data/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'qkH2-v-mXfJFYqDhkEnT21miT3dSOsvuM7BSP4MQPwU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R0_DJ62tjWk2KkvBMQmzEs8eIzABrpSVw6VSO1-qvrI.jpg?width=108&crop=smart&auto=webp&s=f92f8529d6768e182a4d6db27549403099c1d6ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/R0_DJ62tjWk2KkvBMQmzEs8eIzABrpSVw6VSO1-qvrI.jpg?width=216&crop=smart&auto=webp&s=0bc59132bedd7b3d448206859646630f95146504', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/R0_DJ62tjWk2KkvBMQmzEs8eIzABrpSVw6VSO1-qvrI.jpg?width=320&crop=smart&auto=webp&s=dc10bbf1648ef7d91916f6533183cdcfdd785a4a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/R0_DJ62tjWk2KkvBMQmzEs8eIzABrpSVw6VSO1-qvrI.jpg?width=640&crop=smart&auto=webp&s=ec2d197f4797dbcbdefa4a8dbd72d485c66f0605', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/R0_DJ62tjWk2KkvBMQmzEs8eIzABrpSVw6VSO1-qvrI.jpg?width=960&crop=smart&auto=webp&s=c27cfdcc4a54d1644723955de11ecb5e64789a3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/R0_DJ62tjWk2KkvBMQmzEs8eIzABrpSVw6VSO1-qvrI.jpg?width=1080&crop=smart&auto=webp&s=c21361cb5c04d057062ec781ae906d042c3a3d36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/R0_DJ62tjWk2KkvBMQmzEs8eIzABrpSVw6VSO1-qvrI.jpg?auto=webp&s=3effa35ef12a13b54cf0649a4fd11910d14a5b0e', 'width': 1200}, 'variants': {}}]} |
Is Google's Gemini-2.0-flash-exp free to use? | 1 | [removed] | 2024-12-19T12:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hhrvry/is_googles_gemini20flashexp_free_to_use/ | cloudxaas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhrvry | false | null | t3_1hhrvry | /r/LocalLLaMA/comments/1hhrvry/is_googles_gemini20flashexp_free_to_use/ | false | false | self | 1 | null |
Google Gemini 2.0 Flash Exp API costs? | 1 | [removed] | 2024-12-19T12:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hhrxwa/google_gemini_20_flash_exp_api_costs/ | cloudxaas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhrxwa | false | null | t3_1hhrxwa | /r/LocalLLaMA/comments/1hhrxwa/google_gemini_20_flash_exp_api_costs/ | false | false | self | 1 | null |
How to run Intern VL 2.5 Model | 2 | Hi everyone, I am new to LLMs and VLMs.

Can anyone please tell me how to run InternVL 2.5?

My server ->

NVIDIA A100 40 GB server -> I want to run the 8B model,

for class detection using SAM coordinates.

Prompt -> `Classify the object identified in the segmented 360-degree image using the provided bounding box and area data, and align it with one of the pre-defined categories.`

Please help me with how to run this. So far I have only used the HF demo Spaces; now I want to run it on my own server. I tried taking the code from the Space's app.py, but I ended up with many errors, as it is not straightforward. Can anyone explain how to run it, or point me to any documentation?
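A hedged starting point, adapted from memory of the InternVL model card (verify against the card; the single-tile preprocessing below is a simplification of its `load_image()` helper, which does dynamic tiling):

```python
import torch
from PIL import Image
import torchvision.transforms as T
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL2_5-8B"
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# Simplified single-tile preprocessing (the official helper tiles large images)
transform = T.Compose([
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
pixel_values = transform(Image.open("segment.jpg").convert("RGB"))
pixel_values = pixel_values.unsqueeze(0).to(torch.bfloat16).cuda()

question = ("<image>\nClassify the object identified in the segmented "
            "360-degree image using the provided bounding box and area data, "
            "and align it with one of the pre-defined categories.")
response = model.chat(tokenizer, pixel_values, question,
                      generation_config=dict(max_new_tokens=256))
print(response)
```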
Thanks in advance guysss | 2024-12-19T12:36:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hhrygm/how_to_run_intern_vl_25_model/ | Hot-Hearing-2528 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhrygm | false | null | t3_1hhrygm | /r/LocalLLaMA/comments/1hhrygm/how_to_run_intern_vl_25_model/ | false | false | self | 2 | null |
some datasets information | 1 | [removed] | 2024-12-19T12:39:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hhs09j/some_datasets_information/ | Wide-Chef-7011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhs09j | false | null | t3_1hhs09j | /r/LocalLLaMA/comments/1hhs09j/some_datasets_information/ | false | false | self | 1 | null |
Snapdragon X Elite users, does it worth it for local LLM? | 6 | Recently I've had to update my laptop for development and LLM use, not necessarily with a "high" parameter count (in my case, 10B+ is fine). Copilot+ laptops have good battery life, which my project needs. So I searched this group and always see comments about Apple Silicon or RTX laptops, but nothing about these SoCs (at least with the latest updates). Are there any Snapdragon X Elite users who use their laptop for LLMs and development? How is your experience? I'm planning to get a 64 GB RAM machine.
btw, MacBooks are not an option in my case
Thanks in advance!! | 2024-12-19T12:45:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hhs40q/snapdragon_x_elite_users_does_it_worth_it_for/ | Kaesebrot_x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhs40q | false | null | t3_1hhs40q | /r/LocalLLaMA/comments/1hhs40q/snapdragon_x_elite_users_does_it_worth_it_for/ | false | false | self | 6 | null |
Where are we in terms of Voice Chat (STT-LLM-TTS)? | 6 | I've seen some posts here about this topic, but they're quite old. A year or so ago, there weren’t any out-of-the-box solutions for voice chat (STT → LLM → TTS). Has anything changed since then?
1. Can I set up a server on my PC for this and access it via my phone over the Internet (not just local Wi-Fi)?
2. Are there any out-of-the-box solutions for such setup?
3. I know SillyTavern and Oobabooga have TTS addons, but what about STT for input?
The main reason I still use ChatGPT is how easily I can access it on my phone. Any updates or suggestions?
Thanks! | 2024-12-19T12:58:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hhsc22/where_are_we_in_terms_of_voice_chat_sttllmtts/ | NewTestAccount2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhsc22 | false | null | t3_1hhsc22 | /r/LocalLLaMA/comments/1hhsc22/where_are_we_in_terms_of_voice_chat_sttllmtts/ | false | false | self | 6 | null |
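For what it's worth, the bare-bones version of that loop is small enough to sketch; every library, model name, and endpoint below is my assumption, not a vetted recommendation:

```python
# One turn of a voice chat: STT (faster-whisper) -> LLM (OpenAI-compatible
# local server) -> TTS (pyttsx3, offline).
from faster_whisper import WhisperModel
from openai import OpenAI
import pyttsx3

stt = WhisperModel("small")                      # speech-to-text
llm = OpenAI(base_url="http://localhost:8080/v1", api_key="local")
tts = pyttsx3.init()                             # offline text-to-speech

segments, _ = stt.transcribe("question.wav")
user_text = " ".join(s.text for s in segments)

reply = llm.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": user_text}],
).choices[0].message.content

tts.say(reply)
tts.runAndWait()
```

Wrap that in a small web server and reach it from your phone over a VPN such as Tailscale (rather than an open port) and you have the remote-access setup described above.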
How good is LibreChat? (Code execution, web search, etc.) | 17 | I've been using ChatGPT Plus for a while now, but I keep hearing about LibreChat as an open-source alternative. Anyone here made the switch or using both? I'm super curious about a few things:
Can you actually run Python code in LibreChat like in ChatGPT? And what about that cool Canvas feature... does LibreChat have something similar? Really curious about the web search too - is it as good as ChatGPT's browsing?
Money-wise, I'm paying $20/month for ChatGPT Plus right now. Anyone here running LibreChat with APIs? Would love to know what your monthly costs look like and if it's actually cheaper in the long run. Any surprise costs I should know about?
Some other stuff I'm wondering about:
* How smooth is it for daily use? Like, does it crash a lot?
* Which AI models work best if you're doing coding stuff?
* Is switching between different AI models actually useful or just a gimmick?
If anyone's been using both and can share their experience, that'd be super helpful! Especially interested in real-world usage stories... both the good and the bad. | 2024-12-19T13:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hhsnw1/how_good_is_librechat_code_execution_web_search/ | interstellarfan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhsnw1 | false | null | t3_1hhsnw1 | /r/LocalLLaMA/comments/1hhsnw1/how_good_is_librechat_code_execution_web_search/ | false | false | self | 17 | null |
Python Based Alternatives for Corporate Laptop Restrictions? | 1 | Most corporate laptops don't allow installing executables without a lengthy approval process, which can be a huge time-waster. But Python is often included in the list of pre-approved software.
Do you think it’s a good idea to develop Python-based alternatives for tools like LMStudio, Ollama(ik vLLM is there) etc., so they can be easily installed and used via pip without requiring admin rights? | 2024-12-19T13:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hhsonz/python_based_alternatives_for_corporate_laptop/ | ritshpatidar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhsonz | false | null | t3_1hhsonz | /r/LocalLLaMA/comments/1hhsonz/python_based_alternatives_for_corporate_laptop/ | false | false | self | 1 | null |
Gemini 2.0 experimental is practically uncensored: I let it create a dark comedy with Hitler and Stalin where they discuss Jews. | 0 | 2024-12-19T13:19:01 | https://www.reddit.com/gallery/1hhsppz | Js8544 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hhsppz | false | null | t3_1hhsppz | /r/LocalLLaMA/comments/1hhsppz/gemini_20_experimental_is_practically_uncensored/ | false | false | 0 | null |
Phi-3.5-vision-instruct support | 1 | [removed] | 2024-12-19T13:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hhsqmt/phi35visioninstruct_support/ | w33d_w1z4rd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhsqmt | false | null | t3_1hhsqmt | /r/LocalLLaMA/comments/1hhsqmt/phi35visioninstruct_support/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cc0Bp9uaxNRs-lRdnAxZRMAQ2RjqNFa1Esq8RGGPtTU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=108&crop=smart&auto=webp&s=7e7badced5ef6cfe4eb0c8792c204b56910932cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=216&crop=smart&auto=webp&s=73dba84cb89228e5cd8c4777dd1b9827c704dddc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=320&crop=smart&auto=webp&s=e5b9d1f65781a87b5d0d68bb16e89fbfa6847ab4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=640&crop=smart&auto=webp&s=e135e6aedbc2154428bc7e0468925fb95d4763c2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=960&crop=smart&auto=webp&s=06c32b3f249344403864843bc8ff887a91ca646f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=1080&crop=smart&auto=webp&s=6b098e838c4f58756b9c446525728162ba59bfa5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?auto=webp&s=68bfde5674b19e22a18900e49c209c2d7266a16c', 'width': 1200}, 'variants': {}}]} |
Phi-3.5-vision-instruct on ollama | 1 | [removed] | 2024-12-19T13:28:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hhsvxb/phi35visioninstruct_on_ollama/ | Independent_Jury_725 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hhsvxb | false | null | t3_1hhsvxb | /r/LocalLLaMA/comments/1hhsvxb/phi35visioninstruct_on_ollama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cc0Bp9uaxNRs-lRdnAxZRMAQ2RjqNFa1Esq8RGGPtTU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=108&crop=smart&auto=webp&s=7e7badced5ef6cfe4eb0c8792c204b56910932cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=216&crop=smart&auto=webp&s=73dba84cb89228e5cd8c4777dd1b9827c704dddc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=320&crop=smart&auto=webp&s=e5b9d1f65781a87b5d0d68bb16e89fbfa6847ab4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=640&crop=smart&auto=webp&s=e135e6aedbc2154428bc7e0468925fb95d4763c2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=960&crop=smart&auto=webp&s=06c32b3f249344403864843bc8ff887a91ca646f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?width=1080&crop=smart&auto=webp&s=6b098e838c4f58756b9c446525728162ba59bfa5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ln1FmUuOgnGdoEqOZslfHkWCLgw-NW5gp41AfBaqmiA.jpg?auto=webp&s=68bfde5674b19e22a18900e49c209c2d7266a16c', 'width': 1200}, 'variants': {}}]} |
Retailer lists €5999 GeForce RTX 5090 and €3499 RTX 5080 Acer Gaming PCs ahead of launch | 146 | 2024-12-19T13:48:56 | https://videocardz.com/newz/retailer-lists-e5999-geforce-rtx-5090-and-e3499-rtx-5080-acer-gaming-pcs-ahead-of-launch | Optifnolinalgebdirec | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1hht9wi | false | null | t3_1hht9wi | /r/LocalLLaMA/comments/1hht9wi/retailer_lists_5999_geforce_rtx_5090_and_3499_rtx/ | false | false | 146 | {'enabled': False, 'images': [{'id': 'pSUz2dpeoRZYJbFJOEAQPx5WOawY4BTsoZEcXjOuTgg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/km6yXOm1GHjUpJbCUFvVMY00Tk-Johv6RBUpkwhG3m4.jpg?width=108&crop=smart&auto=webp&s=8e21196169d6e643ae622d33bb54cfebd8c2b2fc', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/km6yXOm1GHjUpJbCUFvVMY00Tk-Johv6RBUpkwhG3m4.jpg?width=216&crop=smart&auto=webp&s=1ad530e809f9cc0fe6e813d397ae6370e721710c', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/km6yXOm1GHjUpJbCUFvVMY00Tk-Johv6RBUpkwhG3m4.jpg?width=320&crop=smart&auto=webp&s=bba75390066d80e46c510b01b4d85c5f97afa6a8', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/km6yXOm1GHjUpJbCUFvVMY00Tk-Johv6RBUpkwhG3m4.jpg?width=640&crop=smart&auto=webp&s=269c66e3318d8f2f08405110977d44cfcd93f7aa', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/km6yXOm1GHjUpJbCUFvVMY00Tk-Johv6RBUpkwhG3m4.jpg?width=960&crop=smart&auto=webp&s=7b6d673e4cd73b8d8a84c1eca2d2c89f91f9c321', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/km6yXOm1GHjUpJbCUFvVMY00Tk-Johv6RBUpkwhG3m4.jpg?width=1080&crop=smart&auto=webp&s=7b9046c3f6dace39dfec8e276363a1085dead8bc', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/km6yXOm1GHjUpJbCUFvVMY00Tk-Johv6RBUpkwhG3m4.jpg?auto=webp&s=07a27f53dc8df4d1e72393be64522d7d26601eec', 'width': 2500}, 'variants': {}}]} |