title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
What is the best model that fits inside 8GB VRAM? | 27 | My laptop has an Nvidia 4070 GPU with 8GB of VRAM. What is the best model I can run locally on this GPU? | 2024-12-08T06:31:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h9ct2x/what_is_the_best_model_that_fits_inside_8gb_vram/ | Ok_Ostrich_8845 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9ct2x | false | null | t3_1h9ct2x | /r/LocalLLaMA/comments/1h9ct2x/what_is_the_best_model_that_fits_inside_8gb_vram/ | false | false | self | 27 | null |
Can I run a local interactive story that won't break or forget context? | 1 | [removed] | 2024-12-08T06:37:10 | https://www.reddit.com/r/LocalLLaMA/comments/1h9cvsp/can_i_run_a_local_interactive_story_that_wont/ | AcrobaticSmell2850 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9cvsp | false | null | t3_1h9cvsp | /r/LocalLLaMA/comments/1h9cvsp/can_i_run_a_local_interactive_story_that_wont/ | false | false | self | 1 | null |
We have o1 at home. Create an open-webui pipeline for pairing a dedicated thinking model (QwQ) and response model. | 353 | 2024-12-08T06:54:46 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h9d4xh | false | null | t3_1h9d4xh | /r/LocalLLaMA/comments/1h9d4xh/we_have_o1_at_home_create_an_openwebui_pipeline/ | false | false | 353 | {'enabled': True, 'images': [{'id': '_LtXhVLCG-Zjl1fk5mVBJP8zna2V7c1XZdooZGzd7rE', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/kziumptpnk5e1.png?width=108&crop=smart&auto=webp&s=bfab3f0acb12d11071a331ba9fa98ac27266b566', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/kziumptpnk5e1.png?width=216&crop=smart&auto=webp&s=a43292550751f0590b8485ae87f0ff06ac45b3c8', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/kziumptpnk5e1.png?width=320&crop=smart&auto=webp&s=347b9aae2acb94bcbae2a49c3455ccc664bf27cb', 'width': 320}, {'height': 367, 'url': 'https://preview.redd.it/kziumptpnk5e1.png?width=640&crop=smart&auto=webp&s=be538fa559eb2b9fe7d182a7cb5e43b33927c0e5', 'width': 640}, {'height': 550, 'url': 'https://preview.redd.it/kziumptpnk5e1.png?width=960&crop=smart&auto=webp&s=208970eb908c63cbd83383abe0a611ee3bb990f9', 'width': 960}], 'source': {'height': 561, 'url': 'https://preview.redd.it/kziumptpnk5e1.png?auto=webp&s=af558f2766f94cfc35aeba99d02380ba8441db1b', 'width': 978}, 'variants': {}}]} |
FP16 vs Q8/Q4: Unexpected Performance Discrepancies in AI Model Responses | 1 | [removed] | 2024-12-08T07:15:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h9dftv/fp16_vs_q8q4_unexpected_performance_discrepancies/ | ArnaudPolitico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9dftv | false | null | t3_1h9dftv | /r/LocalLLaMA/comments/1h9dftv/fp16_vs_q8q4_unexpected_performance_discrepancies/ | false | false | self | 1 | null |
Claude was the only one to give the correct answer. | 17 | **Question:**
Write a Python program that draws a diagonal line from the bottom left to the top right.
Here’s the next part: when I drag and move the line either to the left or right, the part of the line that moves out of the screen must reappear from the other side.
In simple terms, if I move it to the right, it should start re-entering from the left.
*Only a single attempt is allowed for each model.*
**Result:**
[Test Result](https://preview.redd.it/1ldgwdttok5e1.png?width=1220&format=png&auto=webp&s=61495927e4c2246eba0851bd5d910edba508e61f)
**Code:**
DeepSeek: [https://pastebin.com/dfK9H2mG](https://pastebin.com/dfK9H2mG)
Gemini: [https://pastebin.com/P1ne0tD1](https://pastebin.com/P1ne0tD1)
GPT: [https://pastebin.com/4yUA9pNG](https://pastebin.com/4yUA9pNG)
Llama: [https://pastebin.com/UpPwv5hf](https://pastebin.com/UpPwv5hf)
Mistral: [https://pastebin.com/zB0aAuPp](https://pastebin.com/zB0aAuPp)
Qwen: [https://pastebin.com/pV1k5HWy](https://pastebin.com/pV1k5HWy)
QwQ: [https://pastebin.com/0ycpjsda](https://pastebin.com/0ycpjsda)
Claude: [https://pastebin.com/Ng05ChpH](https://pastebin.com/Ng05ChpH)
| 2024-12-08T07:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/1h9dfur/claude_was_the_only_one_to_give_the_correct_answer/ | vinam_7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9dfur | false | null | t3_1h9dfur | /r/LocalLLaMA/comments/1h9dfur/claude_was_the_only_one_to_give_the_correct_answer/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
Local Inferencing for Sentiment Analysis | 3 | I have built an app that accesses news articles through an aggregator API and I am parsing topics and entities.
One thing I am struggling with is sentiment analysis of the articles... I have tried to use the Python sentiment analysis libraries, but they don't work across different languages. I am presently using a Hugging Face RoBERTa model designed for sentiment analysis, but it doesn't do a great job with longer articles, and often the specific entity I searched for is referenced positively even when the whole article has a negative sentiment.
It would be easy to just throw it at gpt-4o-mini and have it provide a JSON schema output contextualized based on the search entity, but that would cost a LOT. I've tried a local Llama through Ollama, but my Nvidia RTX 3080 can't manage multiple queries on the API, and each entity searched could have ~1000 articles. I'm searching ~2000 entities a day, so it's a problem. Given the task is purely sentiment analysis of longish news articles, are you aware of a local model I can run which is lightweight enough to handle my use case but also multilingual? | 2024-12-08T07:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/1h9dixf/local_inferencing_for_sentiment_analysis/ | Character-Cry7549 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9dixf | false | null | t3_1h9dixf | /r/LocalLLaMA/comments/1h9dixf/local_inferencing_for_sentiment_analysis/ | false | false | self | 3 | null |
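A minimal sketch of one lightweight local approach: score only the sentences that mention the entity with a multilingual classifier, then aggregate. The model choice, its label names, and the naive sentence splitting are assumptions, not a tested recommendation.

```python
# Sketch: entity-scoped sentiment with an assumed multilingual classifier
# (verify its label names). Long articles reduce to entity-bearing sentences.
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="cardiffnlp/twitter-xlm-roberta-base-sentiment")

def entity_sentiment(article: str, entity: str) -> float:
    # Naive split on periods; a real splitter (e.g. pysbd) handles more languages.
    sentences = [s.strip() for s in article.split(".") if entity.lower() in s.lower()]
    if not sentences:
        return 0.0
    signs = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}
    results = clf(sentences, truncation=True)
    scores = [signs.get(r["label"].lower(), 0.0) * r["score"] for r in results]
    return sum(scores) / len(scores)  # rough score in [-1, 1], centered on the entity
```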
Database Connectivity - Generating New Column Values from Existing Columns | 1 | Hi all, wondering if anyone has performed this or has any materials they can point in my direction.
I have a database table, currently it has 5 columns. 3 values are populated by a user and two are empty, I would like to take the content of the 3 columns so an LLM can generate new values for the remaining two columns.
So basically it’s a read operation, prompt for generation and then an update statement back to the database.
Any suggestions are greatly appreciated.
Thanks! | 2024-12-08T08:04:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h9e3xs/database_connectivity_generating_new_column/ | Xiang_Ganger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9e3xs | false | null | t3_1h9e3xs | /r/LocalLLaMA/comments/1h9e3xs/database_connectivity_generating_new_column/ | false | false | self | 1 | null |
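The read, generate, and update steps described above fit in one small loop. A minimal sketch assuming SQLite and a local Ollama server; the table name, column names, and model are hypothetical placeholders.

```python
# Sketch: fill two empty columns from three populated ones via a local LLM.
# Table/column names ("items", col_a..col_e) are hypothetical.
import json, sqlite3, requests

conn = sqlite3.connect("app.db")
rows = conn.execute(
    "SELECT id, col_a, col_b, col_c FROM items WHERE col_d IS NULL").fetchall()

for row_id, a, b, c in rows:
    prompt = (f"Given these fields:\nA: {a}\nB: {b}\nC: {c}\n"
              'Reply only with JSON: {"col_d": "...", "col_e": "..."}')
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama3.1", "prompt": prompt,
                            "format": "json", "stream": False})
    out = json.loads(r.json()["response"])
    conn.execute("UPDATE items SET col_d = ?, col_e = ? WHERE id = ?",
                 (out["col_d"], out["col_e"], row_id))
conn.commit()
```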
🥂 FineWeb2 dataset: A sparkling update with 1000s of languages | 125 | 2024-12-08T08:41:55 | https://huggingface.co/datasets/HuggingFaceFW/fineweb-2 | PhilipsNostrum | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h9em28 | false | null | t3_1h9em28 | /r/LocalLLaMA/comments/1h9em28/fineweb2_dataset_a_sparkling_update_with_1000s_of/ | false | false | default | 125 | {'enabled': False, 'images': [{'id': '5xUl-e4QHnuDYawQwK665M1m5HSSQ3gVWjS15H-E_3g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vWkunf7MnBP6Qic1jwP5j-I2nrpDJlOOOUBM0KnrMsw.jpg?width=108&crop=smart&auto=webp&s=7827a16d24bd6b705d451d798ceafb3c3723f20c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vWkunf7MnBP6Qic1jwP5j-I2nrpDJlOOOUBM0KnrMsw.jpg?width=216&crop=smart&auto=webp&s=ebd6e2460e48e0f2877aaa3a7fe83cca2188b8dd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vWkunf7MnBP6Qic1jwP5j-I2nrpDJlOOOUBM0KnrMsw.jpg?width=320&crop=smart&auto=webp&s=3cc5f7115c622e558ff27f12695bdde3c685eede', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vWkunf7MnBP6Qic1jwP5j-I2nrpDJlOOOUBM0KnrMsw.jpg?width=640&crop=smart&auto=webp&s=df31fea3a198a9c097ed86c6dd332b61a2707499', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vWkunf7MnBP6Qic1jwP5j-I2nrpDJlOOOUBM0KnrMsw.jpg?width=960&crop=smart&auto=webp&s=8c16312493d472b3def551a34ceadf53d1b1227c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vWkunf7MnBP6Qic1jwP5j-I2nrpDJlOOOUBM0KnrMsw.jpg?width=1080&crop=smart&auto=webp&s=c9dd6dff71a0da410e165231d4f0aa28fff959fd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vWkunf7MnBP6Qic1jwP5j-I2nrpDJlOOOUBM0KnrMsw.jpg?auto=webp&s=8221e17a13da3e0c67846d2638ad2dbac92eee94', 'width': 1200}, 'variants': {}}]} |
2024 Wrap-Up: What Amazing Projects Have You Built with Open-Source AI Models? Let’s Create the Ultimate Resource Guide! 📚 | 47 | Hey everyone! 👋
As AI and machine learning enthusiasts, we’re all witnessing the incredible growth of open-source models.
I’m putting together a **Resource Guide** to showcase what’s possible with open-source AI models. It will feature **real-world use cases, applications, tips, and resources**—and I’d love your input!
Here are a few ideas to get us started:
# 💡 Applications and Projects
* What have you built using open-source models? (e.g., chatbots, summarizers, content generators, custom domain assistants, tools for research, etc.)
* How did you adapt the model for your use case? Did you fine-tune it? Use specific tools or libraries?
# 🔧 Development Tools & Techniques
* What frameworks or tools did you find helpful? (e.g., LangChain, Hugging Face, LlamaIndex, etc.)
* How did you handle deployment? (e.g., APIs, local servers, serverless GPU platform, cloud-based platforms)
# 🚀 Lessons Learned
* What challenges did you face during development? How did you overcome them?
* Any tips for beginners who want to experiment with open-source AI?
# 🌐 Resources to Share
* Links to your project’s code, documentation, or demos (if public).
* Tutorials, papers, or repositories you found helpful.
* Recommendations for open-source model communities or forums.
Let’s make this thread a treasure trove of knowledge for anyone interested in using open-source AI models. Whether you’re a developer, researcher, or enthusiast, your experience matters!
Looking forward to your stories, insights, and resources. Let’s inspire each other and help the community grow! 🚀 | 2024-12-08T09:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h9f3ta/2024_wrapup_what_amazing_projects_have_you_built/ | rbgo404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9f3ta | false | null | t3_1h9f3ta | /r/LocalLLaMA/comments/1h9f3ta/2024_wrapup_what_amazing_projects_have_you_built/ | false | false | self | 47 | null |
Any experience with Qwen QwQ as a HyDE RAG? | 0 | Hi all, was wondering if anyone has tested the QwQ model for generating the hypothetical document in a RAG setup, maybe with some additional steps:
Initial query -> QwQ answer -> extract relevant data -> rag
Thanks for sharing! | 2024-12-08T09:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1h9fb9r/any_experience_with_qwen_qwq_as_a_hyde_rag/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9fb9r | false | null | t3_1h9fb9r | /r/LocalLLaMA/comments/1h9fb9r/any_experience_with_qwen_qwq_as_a_hyde_rag/ | false | false | self | 0 | null |
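For reference, the loop described above is short to prototype. A rough sketch, assuming QwQ is served by Ollama, that any `</think>`-style reasoning is stripped before embedding, and that the embedder choice and pre-normalized document vectors are up to you.

```python
# Sketch: HyDE with a reasoning model. Generate a hypothetical answer, keep only
# the text after the reasoning block (if present), embed it, retrieve neighbors.
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("intfloat/multilingual-e5-base")  # assumed embedder

def hyde_retrieve(query, docs, doc_vecs, k=5):
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "qwq", "stream": False,
                            "prompt": f"Write a short passage that answers: {query}"})
    hypo = r.json()["response"].split("</think>")[-1].strip()
    q = embedder.encode([hypo], normalize_embeddings=True)[0]
    sims = doc_vecs @ q  # cosine similarity if doc_vecs are normalized
    return [docs[i] for i in np.argsort(-sims)[:k]]
```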
Best Frontend for creative writing / story writing? | 5 |
I don’t mean SillyTavern.
I’ve heard there have been some good ones, but I cannot remember what they were called.
Any suggestions? | 2024-12-08T10:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h9fstw/best_frontend_for_creative_writing_story_writing/ | Deluded-1b-gguf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9fstw | false | null | t3_1h9fstw | /r/LocalLLaMA/comments/1h9fstw/best_frontend_for_creative_writing_story_writing/ | false | false | self | 5 | null |
Webui for vision models | 6 | I would like to be able to chat and send pictures using Qwen2-VL-7B-Instruct. Is there a simple web ui available for it? | 2024-12-08T10:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h9g26p/webui_for_vision_models/ | swagerka21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9g26p | false | null | t3_1h9g26p | /r/LocalLLaMA/comments/1h9g26p/webui_for_vision_models/ | false | false | self | 6 | null |
Google Gemini experimental 1206 is really good at coding. Helped me fix issues where Claude Sonnet struggled. | 144 | Good to have more than one sophisticated models for coding. | 2024-12-08T10:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1h9gaok/google_gemini_experimental_1206_is_really_good_at/ | appakaradi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9gaok | false | null | t3_1h9gaok | /r/LocalLLaMA/comments/1h9gaok/google_gemini_experimental_1206_is_really_good_at/ | false | false | self | 144 | null |
Optimizing Model inference for Cost | 1 | [removed] | 2024-12-08T10:44:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h9gave/optimizing_model_inference_for_cost/ | AI_Overlord_314159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9gave | false | null | t3_1h9gave | /r/LocalLLaMA/comments/1h9gave/optimizing_model_inference_for_cost/ | false | false | self | 1 | null |
Help with the build | 1 | [removed] | 2024-12-08T11:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1h9gn4a/help_with_the_build/ | Due-Year1465 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9gn4a | false | null | t3_1h9gn4a | /r/LocalLLaMA/comments/1h9gn4a/help_with_the_build/ | false | false | self | 1 | null |
Help with upgrading my PC | 1 | [removed] | 2024-12-08T11:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h9gq1t/help_with_upgrading_my_pc/ | IdoPIdo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9gq1t | false | null | t3_1h9gq1t | /r/LocalLLaMA/comments/1h9gq1t/help_with_upgrading_my_pc/ | false | false | self | 1 | null |
Does 3.3 support vision? | 1 | [removed] | 2024-12-08T11:21:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h9gtf4/does_33_support_vision/ | Capaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9gtf4 | false | null | t3_1h9gtf4 | /r/LocalLLaMA/comments/1h9gtf4/does_33_support_vision/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eSqVwtqI8lEdDB_rmKd0BYBIMU8SrRzZJO1i5nKZGFA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=108&crop=smart&auto=webp&s=0c1ad514fc554f44bb46c9152baba6986076ba74', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=216&crop=smart&auto=webp&s=6812ecd42fbe74627d9304542cff0afbc249d156', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=320&crop=smart&auto=webp&s=edcea14234fffb82794052382f17f20c47c18201', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=640&crop=smart&auto=webp&s=7f26739ea8d6076629ad93a3856b95884751bd17', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=960&crop=smart&auto=webp&s=c866e4a12291ee18670273f6977eb695701e8fdc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=1080&crop=smart&auto=webp&s=d3b816735e116dda737a95791da468cbabc6e140', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?auto=webp&s=bdb67dc29d0015d08538d028017edd9629393606', 'width': 1200}, 'variants': {}}]} |
Apple Silicon. MLX - what is a good frontend tool for testing the models | 0 | https://github.com/ml-explore/mlx. I can run models using MLX server. It can serve open ui compatible web api. Any good front end to use this( besides LM studio)? | 2024-12-08T11:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h9gvz5/apple_silicon_mlx_what_is_a_good_frontend_tool/ | appakaradi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9gvz5 | false | null | t3_1h9gvz5 | /r/LocalLLaMA/comments/1h9gvz5/apple_silicon_mlx_what_is_a_good_frontend_tool/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'I5JJJ2aLTO1GTM02s0vljYlhIMcMfHIF7njrjQw_8Sw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/15lWAFEyqalZOpTtIES7vocvha6y7CbEnHfqpN3H58k.jpg?width=108&crop=smart&auto=webp&s=245df595f7b49731ae5860e4851d1b62f8506413', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/15lWAFEyqalZOpTtIES7vocvha6y7CbEnHfqpN3H58k.jpg?width=216&crop=smart&auto=webp&s=c10a59ef402cc4cb45aa98032ff7743385df00e2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/15lWAFEyqalZOpTtIES7vocvha6y7CbEnHfqpN3H58k.jpg?width=320&crop=smart&auto=webp&s=bc14e0ed4b0536c255194239a90f4a7a346bbc4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/15lWAFEyqalZOpTtIES7vocvha6y7CbEnHfqpN3H58k.jpg?width=640&crop=smart&auto=webp&s=c428449509eeff132413feba3a494cc705374dcf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/15lWAFEyqalZOpTtIES7vocvha6y7CbEnHfqpN3H58k.jpg?width=960&crop=smart&auto=webp&s=5b58ac1d89476a1ff1530358d99eb3612c1d4c36', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/15lWAFEyqalZOpTtIES7vocvha6y7CbEnHfqpN3H58k.jpg?width=1080&crop=smart&auto=webp&s=0f77f4ef2f9df4093c9ed43e0c6df156f7adff2a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/15lWAFEyqalZOpTtIES7vocvha6y7CbEnHfqpN3H58k.jpg?auto=webp&s=95e0b48a0efdd689ecc3a1958a4074aa69f1b9bc', 'width': 1200}, 'variants': {}}]} |
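Since the MLX server speaks an OpenAI-compatible wire format, any frontend that can target a custom `base_url` should work (Open WebUI included). A minimal client sketch; the port and model name below are assumptions to adapt.

```python
# Sketch: point the standard OpenAI client at a local MLX server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",  # assumed model id
    messages=[{"role": "user", "content": "Hello from MLX!"}],
)
print(resp.choices[0].message.content)
```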
Open source text editor with llm features? | 2 | Hey,
I'm wondering if an open-source tool similar to this one already exists: https://editgpt.app/
At least one where you can pick the LLM you want. Mostly for fixing grammar in a long text, but with an easy interface like this app seems to have. | 2024-12-08T11:31:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h9gyxo/open_source_text_editor_with_llm_features/ | Nyao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9gyxo | false | null | t3_1h9gyxo | /r/LocalLLaMA/comments/1h9gyxo/open_source_text_editor_with_llm_features/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'QzOQWJKV-EJt0JdeeH1SiIfiWhsrX0LHsNqrwogAd8w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0CZSqH8-AXjLa-wcnav9jC6Sreb_25aZa9J959b1qnc.jpg?width=108&crop=smart&auto=webp&s=6a12320ac2e4eeff04d778b9542670f8480845fb', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/0CZSqH8-AXjLa-wcnav9jC6Sreb_25aZa9J959b1qnc.jpg?width=216&crop=smart&auto=webp&s=ddfa1290157788731dd80a2c87bb4f98651b4bc1', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/0CZSqH8-AXjLa-wcnav9jC6Sreb_25aZa9J959b1qnc.jpg?width=320&crop=smart&auto=webp&s=a98457992f7f1ae1c4a08651c76fa10a5dd74b0f', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/0CZSqH8-AXjLa-wcnav9jC6Sreb_25aZa9J959b1qnc.jpg?width=640&crop=smart&auto=webp&s=c09d9066878c2e05549feb001df5d3f994bd9335', 'width': 640}, {'height': 537, 'url': 'https://external-preview.redd.it/0CZSqH8-AXjLa-wcnav9jC6Sreb_25aZa9J959b1qnc.jpg?width=960&crop=smart&auto=webp&s=ad1576746eff15c04fce886699ef9916dd65ef8f', 'width': 960}, {'height': 604, 'url': 'https://external-preview.redd.it/0CZSqH8-AXjLa-wcnav9jC6Sreb_25aZa9J959b1qnc.jpg?width=1080&crop=smart&auto=webp&s=088bfd246ccebe29d915316f048dfd52dc016f46', 'width': 1080}], 'source': {'height': 1072, 'url': 'https://external-preview.redd.it/0CZSqH8-AXjLa-wcnav9jC6Sreb_25aZa9J959b1qnc.jpg?auto=webp&s=819ec230cc4d90db0d55f4473217e63f4e7a8ac1', 'width': 1916}, 'variants': {}}]} |
Models for less popular languages (Dutch), what is the way to go? | 4 | Hi all!
I am working on a project where half of the queries will be in my local language (Dutch). OpenAI and Claude have models which are very good at this language. However, the OSS models are a different story: they can do it, but obviously not as well. Even Llama 3.3 70B is noticeably worse, and Qwen 2.5 is below Llama. As far as I know there are no leaderboards for obscure languages, and it makes sense that a language spoken by < 30M people globally is not that important.
So all in all, I am looking at my options. I have 48GB of VRAM available, which fits a 70B nicely. I could double this to 96GB of VRAM and look for a bigger pre-trained model, or take a gamble and see if I can fine-tune my own model. The problem is that 96GB of VRAM will still not be enough to fine-tune my model other than QLoRA at 8 bits, which will degrade performance quite a bit. Just changing the model won't be possible either, because I have not been able to find any model that fits in 96GB and is decent at Dutch.
Because the prompts could contain personal data, I don't want to send them to Claude/OpenAI and I need to run this locally. One thing I thought about is using a smaller model to strip out personal details, replace them with tags, still send the prompt to Claude/OpenAI, and replace the tags with the personal data again afterwards. But regardless of that, I really don't think this is a stable solution.
So, I would love to hear your opinions about solving this issue!
| 2024-12-08T11:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h9h04x/models_for_less_popular_languages_dutch_what_is/ | Taronyuuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9h04x | false | null | t3_1h9h04x | /r/LocalLLaMA/comments/1h9h04x/models_for_less_popular_languages_dutch_what_is/ | false | false | self | 4 | null |
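For what it's worth, the tag-and-restore idea mentioned in the post can be sketched with a local Dutch NER model so only pseudonymized text leaves the machine. spaCy's `nl_core_news_sm` is one assumption for the NER step, and its recall is exactly the weak point the poster worries about.

```python
# Sketch: reversible pseudonymization with local NER before calling a hosted API.
import spacy

nlp = spacy.load("nl_core_news_sm")  # install: python -m spacy download nl_core_news_sm

def pseudonymize(text: str):
    doc, mapping, out, last = nlp(text), {}, [], 0
    for i, ent in enumerate(doc.ents):
        if ent.label_ in {"PER", "PERSON", "ORG", "GPE", "LOC"}:  # label set varies by model
            tag = f"[{ent.label_}_{i}]"
            mapping[tag] = ent.text
            out.append(text[last:ent.start_char])
            out.append(tag)
            last = ent.end_char
    out.append(text[last:])
    return "".join(out), mapping

def restore(text: str, mapping: dict) -> str:
    for tag, original in mapping.items():
        text = text.replace(tag, original)
    return text
```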
Cost of Finetuning LLama 3.3 70b into local language | 1 | [removed] | 2024-12-08T11:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/1h9hdm9/cost_of_finetuning_llama_33_70b_into_local/ | Old-Attitude6110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9hdm9 | false | null | t3_1h9hdm9 | /r/LocalLLaMA/comments/1h9hdm9/cost_of_finetuning_llama_33_70b_into_local/ | false | false | self | 1 | null |
A new evaluation metric? | 4 | I've been thinking about the evaluation of LLMs, and I've realized that there might be a missing parameter on which we test them. I'd like to hear your opinions on the matter.
A lot of the time, LLMs are used for RAG. A crucial element in that is to listen to the data given as context, even if it contradicts the parametric knowledge of the model. I suspect that some models might be better at taking the context into consideration, and letting it "override" what the model learned during training. This ability is something that, to my knowledge, isn't tested for in current evaluation methods.
I can imagine an approach where the tester asks questions that the model should know the answer to, like "who was Queen Elizabeth II's father?" and records that answer. Then asking the same question, but with context that contradicts the parametric knowledge of the model (and the truth), and seeing if the model lets the context override it. And then, of course, repeating the process for a large number of questions and testing different models.
What do you think? Would this be a useful evaluation metric, does it already exist, and is the approach I outlined at all reasonable? | 2024-12-08T12:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h9hp06/a_new_evaluation_metric/ | _donau_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9hp06 | false | null | t3_1h9hp06 | /r/LocalLLaMA/comments/1h9hp06/a_new_evaluation_metric/ | false | false | self | 4 | null |
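The proposed metric is easy to prototype. A bare-bones sketch: an item counts as "context followed" when the model's answer matches the planted counterfactual answer rather than its prior one. The item fields and the substring match are simplifying assumptions.

```python
# Sketch: measure how often a model lets supplied context override its
# parametric knowledge. `model` is any callable taking a prompt string.
def context_override_rate(model, items):
    # items: [{"question", "counterfactual_context", "planted_answer"}, ...]
    followed = 0
    for it in items:
        prompt = (f"Context: {it['counterfactual_context']}\n"
                  f"Answer using only the context.\nQ: {it['question']}\nA:")
        answer = model(prompt)
        followed += it["planted_answer"].lower() in answer.lower()
    return followed / len(items)
```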
Last Week in Medical AI: Top LLM Research Papers/Models (December 2 - December 7, 2024) | 12 | **Medical LLM & Models**
* Block MedCare: Blockchain AI & IoT
  * This research proposes a novel Ethereum-based system for secure and efficient Electronic Health Record (EHR) management, empowering patients with data control.
* LLMs4Life: Biomedical Ontology Learning
  * This paper extends the NeOn-GPT pipeline for ontology learning using LLMs with advanced prompt engineering and ontology reuse to improve generated ontologies' domain-specific reasoning and structural depth in complex domains like life sciences.
* LLaMA II for Multimodal Diagnosis
  * This paper explores multimodal fusion methods for medical data using a transformer-based model with a LLaMA II backbone, focusing on disease classification with chest X-rays and clinical reports from the OpenI dataset.
* Compact LLM for EHR Privacy
  * This paper introduces a compact LLM framework for local deployment in healthcare settings with strict privacy requirements and limited resources. It uses a novel preprocessing technique with information extraction methods like regular expressions to enhance smaller LLM performance on EHR data.
**Frameworks & Methods**
* RARE: Retrieval-Augmented Reasoning
* STORM: Strategies for Rare Events
* TransFair: Fair Disease Classification
* PePR: Performance Per Resource
* Medical LLM Best Practices
**LLM Applications**
* Medchain: LLMs in Clinical Practice
* Query Nursing Note Summarization
* CLINICSUM: Patient Conversation Summaries
* Text Embeddings for Classifiers
**LLM Benchmarks**
* Polish Medical Exams Transfer
* Single-Cell Omics Annotation
* LLMs in Precision Medicine
* Low-Resource Healthcare Challenges
**Other Models**
* LLM Chatbot Hallucinations
* Multi-stage Chest X-ray Diagnosis
* EchoONE: Echocardiography AI
* Radiology Report Grounding
**Ethics & Fairness**
* Privacy in Medical Imaging
* Demographic Fairness in AI
**Datasets**
* LLM Scientific Knowledge Extraction
* Biomedical Knowledge Review
| 2024-12-08T12:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h9hy2a/last_week_in_medical_ai_top_llm_research/ | aadityaura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9hy2a | false | null | t3_1h9hy2a | /r/LocalLLaMA/comments/1h9hy2a/last_week_in_medical_ai_top_llm_research/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'bkm0Tv78Ag2crDWbwRqlL-xx8E4at6sOADSfDCDSKsI', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/XHDFIRo3Sk2DT-r_kcZ30kgjfyxz-WCXGo9pM7HjRMQ.jpg?width=108&crop=smart&auto=webp&s=11f008389abda2041ffaa7af230b8461f80e3b6a', 'width': 108}, {'height': 203, 'url': 'https://external-preview.redd.it/XHDFIRo3Sk2DT-r_kcZ30kgjfyxz-WCXGo9pM7HjRMQ.jpg?width=216&crop=smart&auto=webp&s=fb307c318009343b68d350221d48afb91a5b9b75', 'width': 216}, {'height': 301, 'url': 'https://external-preview.redd.it/XHDFIRo3Sk2DT-r_kcZ30kgjfyxz-WCXGo9pM7HjRMQ.jpg?width=320&crop=smart&auto=webp&s=a2fb944a824d13d0103fe28f57f54a5bde728a2c', 'width': 320}, {'height': 603, 'url': 'https://external-preview.redd.it/XHDFIRo3Sk2DT-r_kcZ30kgjfyxz-WCXGo9pM7HjRMQ.jpg?width=640&crop=smart&auto=webp&s=bffa62b836897463ee2c5789a222b94f51a97cc4', 'width': 640}, {'height': 905, 'url': 'https://external-preview.redd.it/XHDFIRo3Sk2DT-r_kcZ30kgjfyxz-WCXGo9pM7HjRMQ.jpg?width=960&crop=smart&auto=webp&s=6a2a7fbb2df51d684b14d338d04822b7d0b4013f', 'width': 960}, {'height': 1019, 'url': 'https://external-preview.redd.it/XHDFIRo3Sk2DT-r_kcZ30kgjfyxz-WCXGo9pM7HjRMQ.jpg?width=1080&crop=smart&auto=webp&s=281d9ca228a6176e897dc7a2ef15f46fdc1d594a', 'width': 1080}], 'source': {'height': 1308, 'url': 'https://external-preview.redd.it/XHDFIRo3Sk2DT-r_kcZ30kgjfyxz-WCXGo9pM7HjRMQ.jpg?auto=webp&s=d1400ead449322777bfe76342dc4ca640df9a1f2', 'width': 1386}, 'variants': {}}]} |
Finetuning Llama 3.3 for a language | 5 | Hello, Llama 3.3 is very capable in 8 languages but we deal with other languages that are not supported, so I am researching fine tuning llama 3.3 into different languages like Arabic, Persian, Urdu and so. Currently, it could reply to these languages in a naive way and with english tokens in between.
So any tips for this first ? and have anyone successfully fine tuned llama for a specific language ? Also, what could be the cost to do something like this ? any suggested roadmap
I was thinking trying with a smaller model like 7b and see how it works then try to cascade this to the 70b. | 2024-12-08T12:37:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h9hz53/finetuning_llama_33_for_a_language/ | skylight22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9hz53 | false | null | t3_1h9hz53 | /r/LocalLLaMA/comments/1h9hz53/finetuning_llama_33_for_a_language/ | false | false | self | 5 | null |
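As a starting point only: a common way to fit language adaptation on modest hardware is QLoRA via PEFT. The sketch below uses typical defaults (4-bit loading, rank, attention targets) that would still need tuning per language; nothing here is a verified recipe for Llama 3.3.

```python
# Sketch: QLoRA setup for language adaptation; hyperparameters are common
# defaults, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",  # start small, as the poster suggests
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights train
```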
So what are your thoughts on Llama 3.3 70B vs Qwen 2.5 72B? Is Llama 3.3 70B the best open-source model, better than Qwen and even 4o? | 0 | Well, 3.3 70B does feel like the best open-source model, and I'm really excited for a 405B version | 2024-12-08T12:38:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h9hzge/so_what_are_your_thoughts_on_llama_33_70b_vs_qwen/ | Evening_Action6217 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9hzge | false | null | t3_1h9hzge | /r/LocalLLaMA/comments/1h9hzge/so_what_are_your_thoughts_on_llama_33_70b_vs_qwen/ | false | false | self | 0 | null |
Using AMD GPU for LLMs? | 29 | Hello, I enjoy playing around with LLMs and experimenting.
Right now, I have an RTX 3070, and with its 8 GB of VRAM, I can run relatively small models. On top of that, I’m a gamer and use Linux. Many Linux users consider AMD graphics cards to be better for gaming on Linux due to better driver support.
I’ve been eyeing an RX 7900 XT with 20 GB, but I’m wondering how it performs with LLMs. As far as I know, CUDA, which is an Nvidia technology, is what makes Nvidia GPUs powerful when it comes to LLMs, am I right? What’s the situation with AMD?
I don’t want to lose the ability to use LLMs and AI models if I decide to buy an AMD card. | 2024-12-08T12:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h9i1y5/using_amd_gpu_for_llms/ | PsychologicalLog1090 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9i1y5 | false | null | t3_1h9i1y5 | /r/LocalLLaMA/comments/1h9i1y5/using_amd_gpu_for_llms/ | false | false | self | 29 | null |
Regarding ChatGPT voice mode, is there any alternative? | 4 | I really enjoy ChatGPT's voice mode, but I'm not as impressed with its coding capabilities. I prefer Claude for coding, so I'm torn between continuing my ChatGPT Plus subscription, and subscribing to Claude Pro plus finding another AI platform with a good voice dialogue feature.
Regarding ChatGPT voice mode, is there any alternative?
Any advice would be appreciated. | 2024-12-08T13:55:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h9jbtd/regarding_chatgpt_voice_mode_is_there_any/ | FitAirline8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9jbtd | false | null | t3_1h9jbtd | /r/LocalLLaMA/comments/1h9jbtd/regarding_chatgpt_voice_mode_is_there_any/ | false | false | self | 4 | null |
We need more dataset benchmarks | 64 | 2024-12-08T14:26:06 | Balance- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h9jxir | false | null | t3_1h9jxir | /r/LocalLLaMA/comments/1h9jxir/we_need_more_dataset_benchmarks/ | false | false | 64 | {'enabled': True, 'images': [{'id': 'bhDllRoYn-W49FiXta51Atp027fvnGhploJUFlKo6Qc', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/0lwqx3nwwm5e1.png?width=108&crop=smart&auto=webp&s=776019a70f8c81751ad8664c5d0a762dbb7be437', 'width': 108}, {'height': 148, 'url': 'https://preview.redd.it/0lwqx3nwwm5e1.png?width=216&crop=smart&auto=webp&s=5df397ffe658fe3d7cac3377e3d932a7067179ad', 'width': 216}, {'height': 219, 'url': 'https://preview.redd.it/0lwqx3nwwm5e1.png?width=320&crop=smart&auto=webp&s=dfd3cc599811edfb0b19c4d49297f8b55da1d3d6', 'width': 320}, {'height': 439, 'url': 'https://preview.redd.it/0lwqx3nwwm5e1.png?width=640&crop=smart&auto=webp&s=80f72051b0f43227d603fd2544e4c0f98c0c87f1', 'width': 640}, {'height': 658, 'url': 'https://preview.redd.it/0lwqx3nwwm5e1.png?width=960&crop=smart&auto=webp&s=c44a09eb19956972923eef28d31a95737e46b63c', 'width': 960}, {'height': 741, 'url': 'https://preview.redd.it/0lwqx3nwwm5e1.png?width=1080&crop=smart&auto=webp&s=7618005c35765a9c43c465eaaa117a3f28917d23', 'width': 1080}], 'source': {'height': 1723, 'url': 'https://preview.redd.it/0lwqx3nwwm5e1.png?auto=webp&s=51e647d354e9539908b038e1f29de5d88609415e', 'width': 2511}, 'variants': {}}]} |
Llama 3.3-70b model Repeated Words ("fork") in Response Generation | 1 | [removed] | 2024-12-08T14:36:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h9k55e/llama_3370b_model_repeated_words_fork_in_response/ | wikd_13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9k55e | false | null | t3_1h9k55e | /r/LocalLLaMA/comments/1h9k55e/llama_3370b_model_repeated_words_fork_in_response/ | false | false | 1 | null |
Llama 3.3 is now almost 25x cheaper than GPT 4o on OpenRouter, but is it worth the hype? | 617 | 2024-12-08T14:47:15 | avianio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h9kci3 | false | null | t3_1h9kci3 | /r/LocalLLaMA/comments/1h9kci3/llama_33_is_now_almost_25x_cheaper_than_gpt_4o_on/ | false | false | 617 | {'enabled': True, 'images': [{'id': '6SKLFvcczmCSJlFCaGsT3A2Uxhi7eOZFUpUaz5FIelA', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/wjwd67aa0n5e1.png?width=108&crop=smart&auto=webp&s=df9604e4766db40b8a4e7c738106b3e7cdcbf17d', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/wjwd67aa0n5e1.png?width=216&crop=smart&auto=webp&s=6273cf094b8b35487a11cd2f19787e329ef48ec4', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/wjwd67aa0n5e1.png?width=320&crop=smart&auto=webp&s=a21a3dc7e322b1236620a4d1d31c6aaf73a4c964', 'width': 320}, {'height': 601, 'url': 'https://preview.redd.it/wjwd67aa0n5e1.png?width=640&crop=smart&auto=webp&s=2e4c28d5d2e83ae5c34ae599f9ac36f67c344485', 'width': 640}, {'height': 901, 'url': 'https://preview.redd.it/wjwd67aa0n5e1.png?width=960&crop=smart&auto=webp&s=07991ac340abf4b526511f91f39ad45e383ecb5d', 'width': 960}, {'height': 1014, 'url': 'https://preview.redd.it/wjwd67aa0n5e1.png?width=1080&crop=smart&auto=webp&s=2857ea555ccece21a63773d73adb7d81fc2cffca', 'width': 1080}], 'source': {'height': 1304, 'url': 'https://preview.redd.it/wjwd67aa0n5e1.png?auto=webp&s=f3e2b7a9fde194b463b4d31af6842caafb445e0b', 'width': 1388}, 'variants': {}}]} |
How do I connect GPT soVITS tts to SillyTavern mobile? | 1 | [removed] | 2024-12-08T14:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1h9kilt/how_do_i_connect_gpt_sovits_tts_to_sillytavern/ | Parking-Court-3705 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9kilt | false | null | t3_1h9kilt | /r/LocalLLaMA/comments/1h9kilt/how_do_i_connect_gpt_sovits_tts_to_sillytavern/ | false | false | self | 1 | null |
Fine Tune a pretrained model for Address Standardization | 1 | [removed] | 2024-12-08T15:04:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h9kp2n/fine_tune_a_pretrained_model_for_address/ | zulkifliarshd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9kp2n | false | null | t3_1h9kp2n | /r/LocalLLaMA/comments/1h9kp2n/fine_tune_a_pretrained_model_for_address/ | false | false | self | 1 | null |
RE: Oxy 1 small, Feedbacks ? 32B version ? Repetition issue ? | 1 | [removed] | 2024-12-08T15:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h9ku8h/re_oxy_1_small_feedbacks_32b_version_repetition/ | tornadosoftwares | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9ku8h | false | null | t3_1h9ku8h | /r/LocalLLaMA/comments/1h9ku8h/re_oxy_1_small_feedbacks_32b_version_repetition/ | false | false | 1 | null |
Oxy 1 small, Feedbacks ? 32B version ? Repetition issue ? | 1 | [removed] | 2024-12-08T15:40:58 | moicremy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h9lgux | false | null | t3_1h9lgux | /r/LocalLLaMA/comments/1h9lgux/oxy_1_small_feedbacks_32b_version_repetition_issue/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1dbF0gVz6GyTd8OAqlbLY9dOt48G25nPtsLDTU3-Xe4', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/fafe9k8ban5e1.png?width=108&crop=smart&auto=webp&s=2711191671b000c0a3cb3d144ba73d526bba6f2c', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/fafe9k8ban5e1.png?width=216&crop=smart&auto=webp&s=629a78a5390a269c579c5f848f3c7fe7d69b7262', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/fafe9k8ban5e1.png?width=320&crop=smart&auto=webp&s=509c50cd28fb89198d0b6960365dc2ac2c45d5b1', 'width': 320}, {'height': 321, 'url': 'https://preview.redd.it/fafe9k8ban5e1.png?width=640&crop=smart&auto=webp&s=892cb514e16eea7fcffc122a41c47e690503e509', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/fafe9k8ban5e1.png?auto=webp&s=324f92f62bccd46d6f4f72ff06cc64dd88367abc', 'width': 897}, 'variants': {}}]} |
test | 1 | [removed] | 2024-12-08T15:58:29 | https://www.reddit.com/r/LocalLLaMA/comments/1h9lul6/test/ | Potential-Emu121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9lul6 | false | null | t3_1h9lul6 | /r/LocalLLaMA/comments/1h9lul6/test/ | false | false | self | 1 | null |
Which model would you recommend for my system specs? I plan to use it as roleplay that can also answer advanced questions | 1 | [removed] | 2024-12-08T16:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/1h9lvvy/which_model_would_you_recommend_for_my_system/ | FBIsecretservice_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9lvvy | false | null | t3_1h9lvvy | /r/LocalLLaMA/comments/1h9lvvy/which_model_would_you_recommend_for_my_system/ | false | false | self | 1 | null |
First timer; looking for a little nudge in the right direction. | 1 | [removed] | 2024-12-08T16:01:28 | https://www.reddit.com/r/LocalLLaMA/comments/1h9lx4x/first_timer_looking_for_a_little_nudge_in_the/ | dgioulakis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9lx4x | false | null | t3_1h9lx4x | /r/LocalLLaMA/comments/1h9lx4x/first_timer_looking_for_a_little_nudge_in_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'jp5z77EpDBjtVnmt9VLkWzbTvog6s_uSCG6X9QIQY0Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LTZtPfYBYq0iGpfjqjzqLOoFfIJ0RS6Xoz5WC8RP1AU.jpg?width=108&crop=smart&auto=webp&s=aeb555d468a72951094ee40ab5b80eba5d55a93d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LTZtPfYBYq0iGpfjqjzqLOoFfIJ0RS6Xoz5WC8RP1AU.jpg?width=216&crop=smart&auto=webp&s=73ac85b620d684877fb22deaf38d1e79635e6fb2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LTZtPfYBYq0iGpfjqjzqLOoFfIJ0RS6Xoz5WC8RP1AU.jpg?width=320&crop=smart&auto=webp&s=799333c1d2f1036ee1cf0e14ee93c9dd11cea40a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LTZtPfYBYq0iGpfjqjzqLOoFfIJ0RS6Xoz5WC8RP1AU.jpg?width=640&crop=smart&auto=webp&s=fd2483966ae78ba959e20677c09317d95ab50735', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LTZtPfYBYq0iGpfjqjzqLOoFfIJ0RS6Xoz5WC8RP1AU.jpg?width=960&crop=smart&auto=webp&s=bd97715a03f2d99527fe712787aaaa5b6a053a68', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LTZtPfYBYq0iGpfjqjzqLOoFfIJ0RS6Xoz5WC8RP1AU.jpg?width=1080&crop=smart&auto=webp&s=5d1fd627c778c230818a04776853042c17620c61', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LTZtPfYBYq0iGpfjqjzqLOoFfIJ0RS6Xoz5WC8RP1AU.jpg?auto=webp&s=d7edf5b0f5426dd051808f81252e31184c32fe80', 'width': 1200}, 'variants': {}}]} |
First timer; looking for a little nudge in the right direction. | 1 | [removed] | 2024-12-08T16:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h9lysu/first_timer_looking_for_a_little_nudge_in_the/ | dgioulakis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9lysu | false | null | t3_1h9lysu | /r/LocalLLaMA/comments/1h9lysu/first_timer_looking_for_a_little_nudge_in_the/ | false | false | self | 1 | null |
How can I have a local speech to speech conversation with (almost) any llm? | 1 | [removed] | 2024-12-08T16:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h9n206/how_can_i_have_a_local_speech_to_speech/ | Chafedokibu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9n206 | false | null | t3_1h9n206 | /r/LocalLLaMA/comments/1h9n206/how_can_i_have_a_local_speech_to_speech/ | false | false | self | 1 | null |
M1 Max or M4? | 1 | I want a Mac for software development but also for LLM inference, because it could walk (not run) most really large LLM models, and I could finally test them out. Which would be better for these needs? Maybe the M4 Pro would be better than both? | 2024-12-08T16:57:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h9n5vh/m1_max_or_m4/ | MKU64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9n5vh | false | null | t3_1h9n5vh | /r/LocalLLaMA/comments/1h9n5vh/m1_max_or_m4/ | false | false | self | 1 | null |
Fish Speech 1.5 🎉 - Making state-of-the-art TTS accessible to everyone! | 1 | 2024-12-08T17:16:59 | https://v.redd.it/nptnz75epn5e1 | Technical-Garden-567 | /r/LocalLLaMA/comments/1h9nm08/fish_speech_15_making_stateoftheart_tts/ | 1970-01-01T00:00:00 | 0 | {} | 1h9nm08 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nptnz75epn5e1/DASHPlaylist.mpd?a=1736399852%2CMWI0Y2RiZDI4ODYyNzZkYmZlY2QwZGU3NTY1ZmRmZmJkYzQwMzMxMDc1YmU0YjFkZDQ2NWMyMTE5NmNjNjgwZA%3D%3D&v=1&f=sd', 'duration': 146, 'fallback_url': 'https://v.redd.it/nptnz75epn5e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/nptnz75epn5e1/HLSPlaylist.m3u8?a=1736399852%2CNzUzNzNmMGYzYjY2MjkwNDk3NWQ3ZmMwMWViNDU5ZThlMTY4NzZiNDI5MWQ5MGY4MmZmODBiZDM5NWM1ZGZkNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nptnz75epn5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1h9nm08 | /r/LocalLLaMA/comments/1h9nm08/fish_speech_15_making_stateoftheart_tts/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk.png?width=108&crop=smart&format=pjpg&auto=webp&s=0225e5923595c01eaaf0de071517c47fa095ed4a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk.png?width=216&crop=smart&format=pjpg&auto=webp&s=98bf78ddcff3f9c7d556d33a12551f1caa795217', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk.png?width=320&crop=smart&format=pjpg&auto=webp&s=040a0b3322664b1f0535b3d3f4ff805a9b355a3e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk.png?width=640&crop=smart&format=pjpg&auto=webp&s=824966cb83cea64f9066732e8125cc328b2ebece', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk.png?width=960&crop=smart&format=pjpg&auto=webp&s=6b74c3bad9095b68e6ca530def96722b02ebc155', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7d2b17d5321dca20b954cb7c166d7a24bfb6f9e9', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/bXdmcW43NWVwbjVlMVhyUCeUNXGfLswfbxS0OZQjY9AQfpyUcyN9utQ0fpBk.png?format=pjpg&auto=webp&s=9e8a471b4b9d8876f209482e1348006109ac47ad', 'width': 3840}, 'variants': {}}]} |
Favorite anonymizers?? | 1 | [removed] | 2024-12-08T17:36:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h9o233/favorite_anonymizers/ | AnyMessage6544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9o233 | false | null | t3_1h9o233 | /r/LocalLLaMA/comments/1h9o233/favorite_anonymizers/ | false | false | self | 1 | null |
LLM chat UI and auto-docstring generation in vscode with new extension! | 1 | [removed] | 2024-12-08T17:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1h9o28m/llm_chat_ui_and_autodocstring_generation_in/ | Expensive-Apricot-25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9o28m | false | null | t3_1h9o28m | /r/LocalLLaMA/comments/1h9o28m/llm_chat_ui_and_autodocstring_generation_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RV8_Y5ucSGWdP5stC8finNreAI1bDJbhPlRN_j5uPis', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MhMPjLx4b_gl8i7oHaftk1cKLiSffnncLeP-g1UooFI.jpg?width=108&crop=smart&auto=webp&s=511d091e422ee06c5770260d6d0e764d1279ec5e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MhMPjLx4b_gl8i7oHaftk1cKLiSffnncLeP-g1UooFI.jpg?width=216&crop=smart&auto=webp&s=03334833e481b5c0c91e3f176caa9928f6f23999', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MhMPjLx4b_gl8i7oHaftk1cKLiSffnncLeP-g1UooFI.jpg?width=320&crop=smart&auto=webp&s=0c6b748482469ccdd17fbd6408b7c88cb7c838bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MhMPjLx4b_gl8i7oHaftk1cKLiSffnncLeP-g1UooFI.jpg?width=640&crop=smart&auto=webp&s=119dbfab4ccbd692817fdc71dfcb7872701fbcf7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MhMPjLx4b_gl8i7oHaftk1cKLiSffnncLeP-g1UooFI.jpg?width=960&crop=smart&auto=webp&s=545c8968ca1a5efd4203dc83c3d3f4ae156b8757', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MhMPjLx4b_gl8i7oHaftk1cKLiSffnncLeP-g1UooFI.jpg?width=1080&crop=smart&auto=webp&s=f960852edcf733e64045ca9bb90447461c921d21', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MhMPjLx4b_gl8i7oHaftk1cKLiSffnncLeP-g1UooFI.jpg?auto=webp&s=74f2c96df9822f5bd5f9838ae473f4f10471dc0c', 'width': 1200}, 'variants': {}}]} |
Which GPU risers to buy? | 1 | So basically I found the following website: [https://gpurisers.com/product-category/gpu-risers/](https://gpurisers.com/product-category/gpu-risers/) And now my question is, should I even buy them from there? Do I go 4 / 8 / 12 capacitors? Any recommendations? It's for a Threadripper 3960X + 4x 3090 | 2024-12-08T17:45:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h9o9ed/which_gpu_risers_to_buy/ | Autumnlight_02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9o9ed | false | null | t3_1h9o9ed | /r/LocalLLaMA/comments/1h9o9ed/which_gpu_risers_to_buy/ | false | false | self | 1 | null |
Spent $200 for OpenAI o1-pro, regretting it | 1 | [removed] | 2024-12-08T17:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1h9ocve/spent_200_for_openai_o1pro_regretting_it/ | Business-Lead2679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9ocve | false | null | t3_1h9ocve | /r/LocalLLaMA/comments/1h9ocve/spent_200_for_openai_o1pro_regretting_it/ | false | false | self | 1 | null |
Alter text in image | 0 | Hello,
Sorry if this has been asked, I’m still quite new to LLaMA.
I am trying to take an image as input, translate the text in the image, and replace that text while matching the original text style.
I found TextStyleBrush: https://ai.meta.com/research/publications/textstylebrush-transfer-of-text-aesthetics-from-a-single-example/
Which is exactly what I want to do, but cannot figure out how. | 2024-12-08T18:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1h9ozvi/alter_text_in_image/ | fontos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9ozvi | false | null | t3_1h9ozvi | /r/LocalLLaMA/comments/1h9ozvi/alter_text_in_image/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'L4JqBEe7Rb2OZEgLiWXqk0m6gF2fuXcQyL9sLXtdNZ8', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/fAfEPa0srWasfopyLTffbFLdAZj0z3HJhhqX0vQm6oE.jpg?width=108&crop=smart&auto=webp&s=1da633476800920e40008b1009fbfbd319ecdba9', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/fAfEPa0srWasfopyLTffbFLdAZj0z3HJhhqX0vQm6oE.jpg?width=216&crop=smart&auto=webp&s=cfd9a775bef8622a5b9d7ceee9a544bb282c2587', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/fAfEPa0srWasfopyLTffbFLdAZj0z3HJhhqX0vQm6oE.jpg?width=320&crop=smart&auto=webp&s=270e4a8d9bbcb82d3ddeb39aeda912e663149bed', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/fAfEPa0srWasfopyLTffbFLdAZj0z3HJhhqX0vQm6oE.jpg?width=640&crop=smart&auto=webp&s=201ab4238220dc42a0077d26bbd252e5671d0182', 'width': 640}], 'source': {'height': 466, 'url': 'https://external-preview.redd.it/fAfEPa0srWasfopyLTffbFLdAZj0z3HJhhqX0vQm6oE.jpg?auto=webp&s=855f8437405b2415403f2acf47bf5706581bc3bb', 'width': 824}, 'variants': {}}]} |
A Few Quick Questions From A Newbie | 1 | [removed] | 2024-12-08T18:23:28 | https://www.reddit.com/r/LocalLLaMA/comments/1h9p4uu/a_few_quick_questions_from_a_newbie/ | Rbarton124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9p4uu | false | null | t3_1h9p4uu | /r/LocalLLaMA/comments/1h9p4uu/a_few_quick_questions_from_a_newbie/ | false | false | self | 1 | null |
Mac Mini M4 Pro for LLM | 1 | So I ordered a 64GB shared-memory system. That means roughly 48GB for the GPU, which will allow me to run Llama 3.3 70B as Q4 locally. Do any of you use the new Apple cube for this kind of stuff? If so, how is your experience so far? Until now I used my RTX 4070 laptop to run a Llama 3 8B Q6 model, which generates about 30-32 tokens per second. I'm looking forward to finally using a larger model but am a bit worried about the lower quantization it brings with it. In the end I couldn't bring myself to buy an old M2 Ultra in a Mac Studio with 192GB because of the cost and the age of the system, since it came out in 2022. Any thoughts / input? :) | 2024-12-08T18:33:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h9pd0k/mac_mini_m4_pro_for_llm/ | getmevodka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9pd0k | false | null | t3_1h9pd0k | /r/LocalLLaMA/comments/1h9pd0k/mac_mini_m4_pro_for_llm/ | false | false | self | 1 | null |
Best serverless service to qwen2 vl | 1 | [removed] | 2024-12-08T18:33:36 | https://www.reddit.com/r/LocalLLaMA/comments/1h9pdb1/best_serverless_service_to_qwen2_vl/ | krakotay1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9pdb1 | false | null | t3_1h9pdb1 | /r/LocalLLaMA/comments/1h9pdb1/best_serverless_service_to_qwen2_vl/ | false | false | self | 1 | null |
They will use "safety" to justify annulling the open-source AI models, just a warning | 402 | They will use safety, they will use inefficiencies excuses, they will pull and tug and desperately try to prevent **plebeians** like us the advantages these models are providing.
Back up your most important models. SSD drives, clouds, everywhere you can think of.
Big centralized AI companies will also push for this regulation, which would strip us of private and local LLMs too | 2024-12-08T18:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h9pmnp/they_will_use_safety_to_justify_annulling_the/ | DamiaHeavyIndustries | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9pmnp | false | null | t3_1h9pmnp | /r/LocalLLaMA/comments/1h9pmnp/they_will_use_safety_to_justify_annulling_the/ | false | false | self | 402 | null |
Build and Scale Embeddings API Like a Pro using OpenAI EmbeddingSpec with LitServe | 2 | Discover how to build a production-ready embeddings API by combining LitServe for high-performance infrastructure, the OpenAI Embedding Spec for industry-standard compatibility, and FastEmbed for efficient embedding generation. This guide provides a step-by-step approach to scaling your embedding API efficiently for advanced AI applications.
Explore all the exciting features and try it yourself at [Lightning AI Studio here](https://lightning.ai/bhimrajyadav/studios/build-and-scale-embeddings-api-like-a-pro-using-openai-embeddingspec-with-litserve):
https://preview.redd.it/ywd2kv6x7o5e1.jpg?width=1920&format=pjpg&auto=webp&s=b08c33ebceae4b8cc92e16477f10e427b00e1e9c
| 2024-12-08T18:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1h9pr9j/build_and_scale_embeddings_api_like_a_pro_using/ | bhimrazy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9pr9j | false | null | t3_1h9pr9j | /r/LocalLLaMA/comments/1h9pr9j/build_and_scale_embeddings_api_like_a_pro_using/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'BSjeTU1aACDqZLkkp8gO9YMG66w5ER5ye2i4_Mvk_dY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7VJ430_tNXV9AE5hMsPRdC-lCzkLmFFIqg2etZ86MgA.jpg?width=108&crop=smart&auto=webp&s=99c9464f0aae8ccba4782673f980dfb59d97fc85', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7VJ430_tNXV9AE5hMsPRdC-lCzkLmFFIqg2etZ86MgA.jpg?width=216&crop=smart&auto=webp&s=74d6ec4fc7ce376deaaf60a96f8946e8c03930fd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7VJ430_tNXV9AE5hMsPRdC-lCzkLmFFIqg2etZ86MgA.jpg?width=320&crop=smart&auto=webp&s=1e03607cd4389dc51ca606f985e9d03c91f657a3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/7VJ430_tNXV9AE5hMsPRdC-lCzkLmFFIqg2etZ86MgA.jpg?width=640&crop=smart&auto=webp&s=322b3a61cf4508f6653dc66c550199bf33b64993', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/7VJ430_tNXV9AE5hMsPRdC-lCzkLmFFIqg2etZ86MgA.jpg?width=960&crop=smart&auto=webp&s=ce67053b42a0e64483c26bc03890fdb6d44ae44f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/7VJ430_tNXV9AE5hMsPRdC-lCzkLmFFIqg2etZ86MgA.jpg?width=1080&crop=smart&auto=webp&s=bb43accbf05739a5365c4a6bfd3015ad52ddf0a4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/7VJ430_tNXV9AE5hMsPRdC-lCzkLmFFIqg2etZ86MgA.jpg?auto=webp&s=3a7446c862828c0d3b284b201da582cbb218631d', 'width': 1920}, 'variants': {}}]} |
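The core of the linked setup fits in one file. A simplified sketch below hand-rolls the response shape just to show the moving parts; the OpenAI EmbeddingSpec mentioned in the post would normally handle that shape, and the embedding model here is only FastEmbed's common default.

```python
# Sketch: FastEmbed vectors served through LitServe in an OpenAI-style shape.
import litserve as ls
from fastembed import TextEmbedding

class EmbeddingAPI(ls.LitAPI):
    def setup(self, device):
        self.model = TextEmbedding("BAAI/bge-small-en-v1.5")

    def decode_request(self, request):
        inp = request["input"]
        return [inp] if isinstance(inp, str) else inp

    def predict(self, texts):
        return [vec.tolist() for vec in self.model.embed(texts)]

    def encode_response(self, vectors):
        return {"object": "list",
                "data": [{"object": "embedding", "index": i, "embedding": v}
                         for i, v in enumerate(vectors)]}

if __name__ == "__main__":
    ls.LitServer(EmbeddingAPI()).run(port=8000)
```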
Haven't tried this out yet, but a new Monstral merge by MarsupialAI! | 7 | 2024-12-08T18:55:59 | https://huggingface.co/MarsupialAI/Monstral-123B-v2 | morbidSuplex | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h9pvmf | false | null | t3_1h9pvmf | /r/LocalLLaMA/comments/1h9pvmf/havent_tried_this_out_yet_but_a_new_monstral/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'X6gXGtwnpI-qOFuNmgSso8pOkwzHQJ0gwfc-0XzKNEk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XtPa2XUr44UXt_Ym_LZHsKVpJI3BaFkblBRqb0TMp00.jpg?width=108&crop=smart&auto=webp&s=c6a5fe07f8d93553425defa1a7a2f54ad074694b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XtPa2XUr44UXt_Ym_LZHsKVpJI3BaFkblBRqb0TMp00.jpg?width=216&crop=smart&auto=webp&s=946bbc93ae37bd1fb7d62dc4e6b0c348ff2be694', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XtPa2XUr44UXt_Ym_LZHsKVpJI3BaFkblBRqb0TMp00.jpg?width=320&crop=smart&auto=webp&s=63bf1d8ae403cffa547f217449a028d6c4fad70c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XtPa2XUr44UXt_Ym_LZHsKVpJI3BaFkblBRqb0TMp00.jpg?width=640&crop=smart&auto=webp&s=0a88db76be50bcbeb3a06ce21b3b3987f2c6fb1c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XtPa2XUr44UXt_Ym_LZHsKVpJI3BaFkblBRqb0TMp00.jpg?width=960&crop=smart&auto=webp&s=846e06154da5b498daf258272976b6d1aa2579ea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XtPa2XUr44UXt_Ym_LZHsKVpJI3BaFkblBRqb0TMp00.jpg?width=1080&crop=smart&auto=webp&s=c2c4b91e2a20fcc344a1b5ba46779493ce07d398', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XtPa2XUr44UXt_Ym_LZHsKVpJI3BaFkblBRqb0TMp00.jpg?auto=webp&s=89112caefec5421c7f96888199695de70f3e0f4f', 'width': 1200}, 'variants': {}}]} |
Optimizing Model inference for Cost for Llama 3.3 | 0 | Hi,
I'm working on a task where we have to analyze millions of messages, emails and transcripts everyday in over 100 languages across the globe. I want to minimize the cost of inferencing for this task. The task is not time sensitive, so I can always do batch inferencing and maximize throughput.
My question is how should I make sure I get the best bang for my buck (measured by $/tk) while making sure the performance remains acceptable/amazing. The initial areas of optimization I'm thinking about are
1. **Selecting the right model**: Llama 3.1 8b seems to be working well for me on English, but sucks on every other language, and I would still prefer a little better performance. Llama 3.3 70b is doing pretty well on multiple languages and the performance is very nice. I would love to be able to get the same perf on 8b by either distilling the model or offloading the more complex tasks to 70b while doing the bulk of the work on 8b. Let me know your thoughts.
2. **Hosting:** For business reasons, I can't use inference endpoints from companies. So what is the most cost-effective way to host these models? Which quantization techniques, batching, which GPU to use (RTX, Ampere, Hopper?), etc. Looking for suggestions to optimize this for max throughput and minimum cost.
Any other thoughts you guys have that I haven't considered are also appreciated. I saw Llama 3.3 70b on Hyperbolic for $0.4/M tokens; if I can reach an average price close to that or lower, that would be amazing.
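For context, here is roughly the batch pipeline I am imagining; a minimal vLLM sketch assuming an AWQ-quantized 70B checkpoint and a 4-GPU node (the model id and the loader are placeholders, not real code I have):

```python
# Hypothetical offline batch inference sketch with vLLM.
# The model id and load_batch() are placeholders/assumptions.
from vllm import LLM, SamplingParams

def load_batch() -> list[str]:
    # placeholder: read messages/emails/transcripts from your own store
    return ["Summarize: ...", "Classify sentiment: ..."]

llm = LLM(
    model="some-org/Llama-3.3-70B-Instruct-AWQ",  # assumed AWQ checkpoint
    quantization="awq",
    tensor_parallel_size=4,   # split the 70B across 4 GPUs
    max_model_len=8192,
)
params = SamplingParams(temperature=0.0, max_tokens=512)

outputs = llm.generate(load_batch(), params)  # vLLM batches internally
for out in outputs:
    print(out.outputs[0].text)  # replace with your own result sink
```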
Thanks in advance. | 2024-12-08T18:56:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h9pvyi/optimizing_model_inference_for_cost_for_llama_33/ | AI_Overlord_314159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9pvyi | false | null | t3_1h9pvyi | /r/LocalLLaMA/comments/1h9pvyi/optimizing_model_inference_for_cost_for_llama_33/ | false | false | self | 0 | null |
Build and Scale Embeddings API Like a Pro using OpenAI EmbeddingSpec with LitServe | 0 | 2024-12-08T19:00:06 | bhimrazy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h9pz52 | false | null | t3_1h9pz52 | /r/LocalLLaMA/comments/1h9pz52/build_and_scale_embeddings_api_like_a_pro_using/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'c1Ubmlf03tdK24qsXFK4RUjuVUSKIsdOBT_ANko-3dg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/jzghhivr9o5e1.jpeg?width=108&crop=smart&auto=webp&s=b2712ef35a324026bc15ab261b5643ebcbab1e4a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/jzghhivr9o5e1.jpeg?width=216&crop=smart&auto=webp&s=d6405bb0041f8be040679f55b6d12a6195052cdb', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/jzghhivr9o5e1.jpeg?width=320&crop=smart&auto=webp&s=08d7ffbf60dfbb3020d4b18a7db8c7737aa34988', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/jzghhivr9o5e1.jpeg?width=640&crop=smart&auto=webp&s=584a7351b4f9928230eae61e83e89077dcb66406', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/jzghhivr9o5e1.jpeg?width=960&crop=smart&auto=webp&s=56b5b7dc44066c74ae333d67fad129b7f32d8f6d', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/jzghhivr9o5e1.jpeg?width=1080&crop=smart&auto=webp&s=a2d6c09855b38bcec1192c4da0da0d4d7afa3078', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/jzghhivr9o5e1.jpeg?auto=webp&s=3a1bded21bae2b6332b32df9bfc2661d38d78087', 'width': 1920}, 'variants': {}}]} |
|||
Now that we have QWQ, do you think it is possible to use model distillation and RLAIF from QWQ onto a smaller model to perform analytical performance on par with the larger model? | 16 | These two methods have proven to work previously for traditional LLMs, so I'm thinking to myself in order to speed up LLM analysis, we could use a smaller model for that. Is this a feasible approach or are we not there yet? | 2024-12-08T19:19:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h9qf6j/now_that_we_have_qwq_do_you_think_it_is_possible/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9qf6j | false | null | t3_1h9qf6j | /r/LocalLLaMA/comments/1h9qf6j/now_that_we_have_qwq_do_you_think_it_is_possible/ | false | false | self | 16 | null |
Aisuite announced | 1 | [removed] | 2024-12-08T19:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h9qf6z/aisuite_announced/ | nborwankar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9qf6z | false | null | t3_1h9qf6z | /r/LocalLLaMA/comments/1h9qf6z/aisuite_announced/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bVP2nwL5QCiSc5YwjL1159wP46RRj4Tp-d_hlyDILM8', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/7Kyhq2OeMLpqBA7tX1yUBGflTYIcMWtWLv2Xl9esTEg.jpg?width=108&crop=smart&auto=webp&s=8b3db0aa9e15843f569c949bb2866f17eb07d371', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/7Kyhq2OeMLpqBA7tX1yUBGflTYIcMWtWLv2Xl9esTEg.jpg?width=216&crop=smart&auto=webp&s=3829aac6d2af89d4e72ed6a0aa26b967b4199129', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/7Kyhq2OeMLpqBA7tX1yUBGflTYIcMWtWLv2Xl9esTEg.jpg?width=320&crop=smart&auto=webp&s=bbb6abcdadc0b410fa69b62c65800e46aa97581f', 'width': 320}, {'height': 339, 'url': 'https://external-preview.redd.it/7Kyhq2OeMLpqBA7tX1yUBGflTYIcMWtWLv2Xl9esTEg.jpg?width=640&crop=smart&auto=webp&s=1a35256b7d47fdbafd1fd4990a5fc700e68fbbd7', 'width': 640}, {'height': 509, 'url': 'https://external-preview.redd.it/7Kyhq2OeMLpqBA7tX1yUBGflTYIcMWtWLv2Xl9esTEg.jpg?width=960&crop=smart&auto=webp&s=5ee81d71386615c116f1d5b153942041e79b36e2', 'width': 960}], 'source': {'height': 547, 'url': 'https://external-preview.redd.it/7Kyhq2OeMLpqBA7tX1yUBGflTYIcMWtWLv2Xl9esTEg.jpg?auto=webp&s=138eb38a3b755560077e5f118d63015743cd8a27', 'width': 1030}, 'variants': {}}]} |
Modifying model vocabulary | 3 | Long story short, I am trying to use speculative decoding on llama.cpp with Mistral-Large + Mistral-7b-v0.3.
It turns out that a few tokens in the vocabulary are different. So I modified the tokenizer.json and tokenizer\_config.json of Mistral-7B, then I converted it to .gguf and then quantized it. Well, despite my little hack, the resulting .gguf model has the same original vocabulary and not the one updated by me.
How can I proceed to modify it? And, BTW, what are the two mentioned .json files doing, if modifying them changes nothing? | 2024-12-08T19:25:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h9qk8a/modifying_model_vocabulary/ | anemone_armada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9qk8a | false | null | t3_1h9qk8a | /r/LocalLLaMA/comments/1h9qk8a/modifying_model_vocabulary/ | false | false | self | 3 | null |
Spent $200 for o1-pro, regretting it | 401 | $200 is insane, and I regret it, but hear me out - I have unlimited access to best of the best OpenAI has to offer, so what is stopping me from creating a huge open source dataset for local LLM training? ;)
I need suggestions though: what kind of data would be the most valuable to y'all, and what exactly? Perhaps a dataset for training an open-source o1? Give me suggestions; let's extract as much value as possible from this. I can get started today. | 2024-12-08T19:40:05 | https://www.reddit.com/r/LocalLLaMA/comments/1h9qvu8/spent_200_for_o1pro_regretting_it/ | Business-Lead2679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9qvu8 | false | null | t3_1h9qvu8 | /r/LocalLLaMA/comments/1h9qvu8/spent_200_for_o1pro_regretting_it/ | false | false | self | 401 | null |
best AI for building data sets? | 1 | [removed] | 2024-12-08T19:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h9r2li/best_ai_for_building_data_sets/ | ThousandNiches | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9r2li | false | null | t3_1h9r2li | /r/LocalLLaMA/comments/1h9r2li/best_ai_for_building_data_sets/ | false | false | self | 1 | null |
TTS WebGPU: The first ever text-to-speech web app built with WebGPU acceleration (powered by OuteTTS and Transformers.js) | 146 | 2024-12-08T19:49:45 | https://v.redd.it/3i4py92sgo5e1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h9r3lv | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/3i4py92sgo5e1/DASHPlaylist.mpd?a=1736279398%2CMThjYTg2ZTVlMGQyZDg1MTc3ZTc0OTU2NjVhYzE5YWNlY2QzNDkwNjUyNTFmNTRkMTJmNWE4MTk3ODA1MTQ5Zg%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/3i4py92sgo5e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/3i4py92sgo5e1/HLSPlaylist.m3u8?a=1736279398%2CZWRmZjlmNWQ5MTI5MmQ5ODc2YTM5NDk2MTQ3MTI1MTFjMmZlMmE3MWVmZDMxYWQ1YTZjZDJkZTVlMzBhYjA4Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3i4py92sgo5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 960}} | t3_1h9r3lv | /r/LocalLLaMA/comments/1h9r3lv/tts_webgpu_the_first_ever_texttospeech_web_app/ | false | false | 146 | {'enabled': False, 'images': [{'id': 'aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2.png?width=108&crop=smart&format=pjpg&auto=webp&s=c65d121b8ad8d1d12b7a2cf284482ff2936812d1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2.png?width=216&crop=smart&format=pjpg&auto=webp&s=79aa34b353aa659bf3c7905932a124b138f8a1f4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2.png?width=320&crop=smart&format=pjpg&auto=webp&s=9602ba21144a6f06338b434f26bff0e602042867', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2.png?width=640&crop=smart&format=pjpg&auto=webp&s=f9824f26b2bfe2dcdecb049b62a0f20d3baf55ce', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2.png?width=960&crop=smart&format=pjpg&auto=webp&s=0defe8edaf929941417f0e5e6283a7f71963be76', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2.png?width=1080&crop=smart&format=pjpg&auto=webp&s=857507f5e35c8e90bdb767fbf0c10d567dd417c5', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/aTZib2ZhMnNnbzVlMfvSzXwL42B1hz76KAKk1PH_m_JOu_jMAFQDbCVnhBJ2.png?format=pjpg&auto=webp&s=baf2b865e18366b6c0c002c8c4acf35d31bbe5d0', 'width': 1280}, 'variants': {}}]} |
||
Impish_Mind_8B: A Unique 8B Llama 3.1 Model with Fun Personality 🧠✨ | 24 | Hey r/LocalLLaMA and AI enthusiasts! I cooked this wild new finetune called Impish\_Mind\_8B that's got some seriously interesting characteristics.
# 🔍 Key Highlights:
* **Intended Use**: Creative writing, role-play, and general tasks
* **Censorship Level**: Low (8/10 uncensored)
* **Unique Selling Points**:
* Enhanced personality and character analysis (very good MBTI analysis, table usage, etc)
* Improved markdown understanding
* Strong creative writing capabilities
# 🎨 Personality Quirks
The model got a bit of an attitude:
* **Slightly paranoid** and **edgy** when not in strict assistant mode (a bit of 4chan data was used)
* **Unique** role-play flavor
* **Slightly** lower positivity bias
# 💻 Tech Specs
* Multiple quantization options available (FP16, GGUF, EXL2)
* Trained with a focus on minimizing toxic data while still being pretty uncensored
* Uses Llama-3-Instruct template (quick-start sketch below)
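Quick-start sketch with the standard transformers chat pipeline (the sampling settings are my own defaults; check the model card for recommended ones):

```python
# Minimal generation sketch; max_new_tokens is an arbitrary choice.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="SicariusSicariiStuff/Impish_Mind_8B",
    device_map="auto",
)
messages = [{"role": "user", "content": "Give me a short MBTI read on Sherlock Holmes."}]
out = pipe(messages, max_new_tokens=300)
print(out[0]["generated_text"][-1]["content"])  # last message = model reply
```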
check out the model card for more details & example outputs:
[https://huggingface.co/SicariusSicariiStuff/Impish\_Mind\_8B](https://huggingface.co/SicariusSicariiStuff/Impish_Mind_8B) | 2024-12-08T19:50:43 | https://www.reddit.com/r/LocalLLaMA/comments/1h9r4di/impish_mind_8b_a_unique_8b_llama_31_model_with/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9r4di | false | null | t3_1h9r4di | /r/LocalLLaMA/comments/1h9r4di/impish_mind_8b_a_unique_8b_llama_31_model_with/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'ZElWd4RxAvo4vbsjrozwFfzcjbh5nra0yW7Ym7deo5w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_8ufDmv5iPcvtx1emDIZTBokTggQwW_g-p1kuZhQj0A.jpg?width=108&crop=smart&auto=webp&s=4b93986f6666481fa2081645fe72f0f0b2f20ba8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_8ufDmv5iPcvtx1emDIZTBokTggQwW_g-p1kuZhQj0A.jpg?width=216&crop=smart&auto=webp&s=432addc421db4248432e0fea4174a3645f28d714', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_8ufDmv5iPcvtx1emDIZTBokTggQwW_g-p1kuZhQj0A.jpg?width=320&crop=smart&auto=webp&s=626ab3fa0fb7a784b87f3f76d6e5fceff826dbfb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_8ufDmv5iPcvtx1emDIZTBokTggQwW_g-p1kuZhQj0A.jpg?width=640&crop=smart&auto=webp&s=1fdc9304e09902a60d0b66ec4cbefb31ba9b498d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_8ufDmv5iPcvtx1emDIZTBokTggQwW_g-p1kuZhQj0A.jpg?width=960&crop=smart&auto=webp&s=85f1c2030538ec884b39bdd0bef9053584a5e902', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_8ufDmv5iPcvtx1emDIZTBokTggQwW_g-p1kuZhQj0A.jpg?width=1080&crop=smart&auto=webp&s=92bc3876255eb549fd985586117e40d187dad3b9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_8ufDmv5iPcvtx1emDIZTBokTggQwW_g-p1kuZhQj0A.jpg?auto=webp&s=abf064e773d2111f04d347b999d612732ead4faf', 'width': 1200}, 'variants': {}}]} |
We may not see Qwen 3.0 | 299 | The head of Alibaba's Qwen team has just joined ByteDance (parent company of TikTok), taking a dozen developers with him. ByteDance is not known to open-source their models.
Not really good news for the open-source LLM community.
http://www.aastocks.com/en/mobile/news.aspx?newsid=NOW.1402351&newssource=AAFN
| 2024-12-08T20:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h9rphk/we_may_not_see_qwen_30/ | sb5550 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9rphk | false | null | t3_1h9rphk | /r/LocalLLaMA/comments/1h9rphk/we_may_not_see_qwen_30/ | false | false | self | 299 | {'enabled': False, 'images': [{'id': 'WPbC0KpkS_hd-1vWw42YzsO5HVKubJ9_Hbc4P-ghwuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VT6sr-03fNyDckuyI9_waL4gdiKrwvO9Mp96urQA1zo.jpg?width=108&crop=smart&auto=webp&s=0dd283fba1b6f72cb03489e759d0a576e6ab8c01', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/VT6sr-03fNyDckuyI9_waL4gdiKrwvO9Mp96urQA1zo.jpg?width=216&crop=smart&auto=webp&s=caa8e7a82549235750d66b037d4f06a23bbf535f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/VT6sr-03fNyDckuyI9_waL4gdiKrwvO9Mp96urQA1zo.jpg?width=320&crop=smart&auto=webp&s=eac16275240b33243c4d09a549d4a8160299d92d', 'width': 320}], 'source': {'height': 318, 'url': 'https://external-preview.redd.it/VT6sr-03fNyDckuyI9_waL4gdiKrwvO9Mp96urQA1zo.jpg?auto=webp&s=b3dd676a7a7c1072f724183a5f8b97d0dec83b60', 'width': 565}, 'variants': {}}]} |
Best Practices for Managing Multiple AI Model Environments in company? | 5 | Hi everyone,
I’m working on projects involving multiple AI models like Whisper, LLaMA, Audio models and Stable Diffusion. A recurring challenge I face is installing them and managing their environments effectively, particularly resolving dependency conflicts and ensuring scalability.
Currently, I rely on **Conda** for creating isolated environments for each model. I also started experimenting with **Docker**, which offers better isolation, but some models I want to test don't have ready-made images, or there is only some public one that doesn't feel safe to use; maybe my lack of knowledge is stopping me here.
I spent most of last week trying to make this one work: [myshell-ai/OpenVoice: Instant voice cloning by MIT and MyShell.](https://github.com/myshell-ai/OpenVoice) for cloning voices and generating audio. I reinstalled the OS, the Conda packages, all that stuff, but I still get issues with CUDA or dependencies.
Recently, I came across tools like **Pinokio and others**: [https://github.com/diStyApps/seait/](https://github.com/diStyApps/seait/), [https://pinokio.computer/](https://pinokio.computer/), [https://github.com/LykosAI/StabilityMatrix/](https://github.com/LykosAI/StabilityMatrix/), [https://lykos.ai/](https://lykos.ai/), which seem to simplify deployment, but the security concerns seem quite big. There are also Ollama and Open WebUI, but they don't have the models I need, or they seem to lack a way to host one environment per model and make efficient requests against it on local resources.
**My key questions**:
* How do you manage multiple environments for your production workloads?
* Is creating custom Docker containers with optimized FastAPI responses the best approach, or are there alternative solutions that work better for you? (See the sketch after this list for the kind of setup I mean.)
* Are there any legitimate repositories or tools with pre-set models and easy deployment options that I can rely on for more seamless integration?
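To make the question concrete, below is the kind of one-model-per-container FastAPI wrapper I have in mind; a minimal sketch assuming a Whisper pipeline from transformers (the model id and endpoint shape are just illustrative assumptions):

```python
# One container = one environment = one small API. Model id and route
# are illustrative; audio decoding requires ffmpeg in the image.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

class TranscribeRequest(BaseModel):
    audio_path: str  # path mounted into the container

@app.post("/transcribe")
def transcribe(req: TranscribeRequest):
    result = asr(req.audio_path)
    return {"text": result["text"]}
```

Each such service gets its own image, so Whisper's dependencies never collide with, say, Stable Diffusion's.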
I’d really appreciate any advice or recommendations as I am losing ton of time at job for this. | 2024-12-08T21:11:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h9sy6w/best_practices_for_managing_multiple_ai_model/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9sy6w | false | null | t3_1h9sy6w | /r/LocalLLaMA/comments/1h9sy6w/best_practices_for_managing_multiple_ai_model/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'i80b4mt9uELo1gYR2JswQsvS5viYoNshjpUql4ld16k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p4NzdkkKqyf0Qyrw6-SgcRG84LQqUEP-Bo4sET3iEIY.jpg?width=108&crop=smart&auto=webp&s=f6cc14b17171597438d4a5597a112d99134c11a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/p4NzdkkKqyf0Qyrw6-SgcRG84LQqUEP-Bo4sET3iEIY.jpg?width=216&crop=smart&auto=webp&s=fb11799d12310f9832e2e147c8fee031d9c3db2a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/p4NzdkkKqyf0Qyrw6-SgcRG84LQqUEP-Bo4sET3iEIY.jpg?width=320&crop=smart&auto=webp&s=f0ba12e50a250b3f5be18c24c0da234b7e4a8908', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/p4NzdkkKqyf0Qyrw6-SgcRG84LQqUEP-Bo4sET3iEIY.jpg?width=640&crop=smart&auto=webp&s=816a1026c7143e78301b9cb26ee4c03987370aa0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/p4NzdkkKqyf0Qyrw6-SgcRG84LQqUEP-Bo4sET3iEIY.jpg?width=960&crop=smart&auto=webp&s=2c9eeef500f00dc4c11cc08108235d5b7c68aa85', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/p4NzdkkKqyf0Qyrw6-SgcRG84LQqUEP-Bo4sET3iEIY.jpg?width=1080&crop=smart&auto=webp&s=c442aa95355ffb0c04aa22f7c9db3c92f1200ad1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/p4NzdkkKqyf0Qyrw6-SgcRG84LQqUEP-Bo4sET3iEIY.jpg?auto=webp&s=d6bbedfd1f02295d9af7ef2f40645f3ecd952fbd', 'width': 1200}, 'variants': {}}]} |
Smallest model for summarizing? | 2 | I want to run the smallest model ever which can look at a PDF file and extract information. I am just playing around right now so I do not want to invest in hardware. I do have a server with 32GB of RAM and a Xeon processor (a bit old).
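For reference, this is roughly the pipeline I have in mind; a sketch that assumes an Ollama server on the box with a small model pulled (the model choice is a guess):

```python
# Sketch: extract PDF text with pypdf, summarize via a local Ollama model.
import pypdf
import requests

reader = pypdf.PdfReader("document.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:1.5b",  # assumed small CPU-friendly model
        "prompt": f"Summarize the key points of this document:\n\n{text[:8000]}",
        "stream": False,
    },
)
print(resp.json()["response"])
```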
Any ideas?
Thanks in advance | 2024-12-08T21:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h9t4dj/smallest_model_for_summarizing/ | temapone11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9t4dj | false | null | t3_1h9t4dj | /r/LocalLLaMA/comments/1h9t4dj/smallest_model_for_summarizing/ | false | false | self | 2 | null |
I can't get LM studio to run lama 3.3 70B | 1 | [removed] | 2024-12-08T21:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h9t9zp/i_cant_get_lm_studio_to_run_lama_33_70b/ | LeftLavishness6118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9t9zp | false | null | t3_1h9t9zp | /r/LocalLLaMA/comments/1h9t9zp/i_cant_get_lm_studio_to_run_lama_33_70b/ | false | false | self | 1 | null |
How to use 2 Psus correctly? Quad 3090 build | 2 | So earlier I made a post, [https://www.reddit.com/r/LocalLLaMA/comments/1h9o9ed/comment/m13b5qw/?context=3](https://www.reddit.com/r/LocalLLaMA/comments/1h9o9ed/comment/m13b5qw/?context=3), about risers, but now I noticed that I fluked up and that mixing 2 PSUs is a poor idea. Does anyone know how I can run 2 GPUs in a dual-PSU situation?
Setup: PSU: 1x [Thermaltake Toughpower GF3 1650W](https://www.amazon.de/dp/B0B7NTBF95?ref=ppx_yo2ov_dt_b_fed_asin_title) 1x [Corsair RM1000e (2023) Vollmodulares, Geräuscharmes ATX-Netzteil](https://www.amazon.de/dp/B0BVKZ9GCB?ref=ppx_yo2ov_dt_b_fed_asin_title)
CPU: Threadripper 3960X
Motherboard: TRX40 Creator.
Issue: I basically got 4 3090s. I want to run 2 of them with PCIe riser cables (using the 2 x16 slots), and now I don't know how to run the other 2 3090s. I just read that mixing PSUs is a poor idea, but otherwise I am a bit confused on how to run those two cards from a different PSU. If you guys know of parts (Europe), can you link them? I really want to minimize any fire hazard | 2024-12-08T21:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h9tjc2/how_to_use_2_psus_correctly_quad_3090_build/ | Autumnlight_02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9tjc2 | false | null | t3_1h9tjc2 | /r/LocalLLaMA/comments/1h9tjc2/how_to_use_2_psus_correctly_quad_3090_build/ | false | false | self | 2 | null |
2 LLMs talking and running code! (Llama 3.1 8B Instruct + Qwen 2.5 Coder 32B Instruct) | 1 | 2024-12-08T21:39:16 | https://v.redd.it/81lvpq702p5e1 | random-tomato | /r/LocalLLaMA/comments/1h9tknp/2_llms_talking_and_running_code_llama_31_8b/ | 1970-01-01T00:00:00 | 0 | {} | 1h9tknp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/81lvpq702p5e1/DASHPlaylist.mpd?a=1736415565%2CYzMxN2JiMGFkMTZhNDhmODEzMDdjNmM2ZTNlNDRjYjk3YWI0ZjlkY2Q3ZjZkMWY0NzhmZTdkNDE0ZGQ3NGYxOA%3D%3D&v=1&f=sd', 'duration': 151, 'fallback_url': 'https://v.redd.it/81lvpq702p5e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/81lvpq702p5e1/HLSPlaylist.m3u8?a=1736415565%2CZmM1M2Q3MjQ5NzEzYTM3MDQ0NTk4MGI2NzJiNTU4ZTgyOTQ2MGVmMjI5ZDc4OTk4MDkzZjFhYzk4NTc3MjljMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/81lvpq702p5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1866}} | t3_1h9tknp | /r/LocalLLaMA/comments/1h9tknp/2_llms_talking_and_running_code_llama_31_8b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa.png?width=108&crop=smart&format=pjpg&auto=webp&s=b58e55d968db557209736d4d15943c3c6c300439', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa.png?width=216&crop=smart&format=pjpg&auto=webp&s=c9eac6dd85c164e25b41d8c464eccd7b502e40fc', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa.png?width=320&crop=smart&format=pjpg&auto=webp&s=cab93d62e7cbf01dbd2ca86227d9133dc40b96e1', 'width': 320}, {'height': 370, 'url': 'https://external-preview.redd.it/N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa.png?width=640&crop=smart&format=pjpg&auto=webp&s=4389efeb7e4925cf5aa4c8586fb36b6f0e1aaf5b', 'width': 640}, {'height': 555, 'url': 'https://external-preview.redd.it/N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa.png?width=960&crop=smart&format=pjpg&auto=webp&s=b51f2a6aea1bb145118bd6cfded63b0ad97c79d7', 'width': 960}, {'height': 625, 'url': 'https://external-preview.redd.it/N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=89660c88ffe6437e5ed20f8fb54b6f162ae7b59d', 'width': 1080}], 'source': {'height': 2304, 'url': 'https://external-preview.redd.it/N216MDZxNzAycDVlMSAVrBdblQ2QvB0i8A0PUI5Egw3Q14ONx03m7VU8dROa.png?format=pjpg&auto=webp&s=6d1974b0efc09f71666a7be8a517fb20371e4502', 'width': 3980}, 'variants': {}}]} |
||
2 LLMs talking and running code! (Llama 3.1 8B Instruct + Qwen 2.5 Coder 32B Instruct) | 57 | 2024-12-08T21:43:43 | https://v.redd.it/4c1wszyy2p5e1 | random-tomato | /r/LocalLLaMA/comments/1h9to45/2_llms_talking_and_running_code_llama_31_8b/ | 1970-01-01T00:00:00 | 0 | {} | 1h9to45 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4c1wszyy2p5e1/DASHPlaylist.mpd?a=1736415841%2CZTZhNjRjMjk5MDc5NzgxYmY0MWQ0OGM5YzUzNzNhOTNmZWFlNWMwYWNiNjQ5ZDg2OTQwNmY2YzJiMGYyMDcyMw%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/4c1wszyy2p5e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/4c1wszyy2p5e1/HLSPlaylist.m3u8?a=1736415841%2CMWE1YmE1M2FmYzVhYWI5OGRiOTVmODgxODUyMjc1NmUyMjcwNmY4ZWNlZWMzZjFkNGQ4MmZhNzgzNjY4MWVmOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4c1wszyy2p5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1866}} | t3_1h9to45 | /r/LocalLLaMA/comments/1h9to45/2_llms_talking_and_running_code_llama_31_8b/ | false | false | 57 | {'enabled': False, 'images': [{'id': 'NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl.png?width=108&crop=smart&format=pjpg&auto=webp&s=f002564914813f9017619faf9c75d2b5b21d2739', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl.png?width=216&crop=smart&format=pjpg&auto=webp&s=6142ae8a316563ed896c0703be9c2ff7c380b1e3', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl.png?width=320&crop=smart&format=pjpg&auto=webp&s=1b716bbba8120a383f224222d2ceb3291646b19f', 'width': 320}, {'height': 370, 'url': 'https://external-preview.redd.it/NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl.png?width=640&crop=smart&format=pjpg&auto=webp&s=e4779da4a009ba3ac420ec916c9dc2ff0bedb2bc', 'width': 640}, {'height': 555, 'url': 'https://external-preview.redd.it/NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl.png?width=960&crop=smart&format=pjpg&auto=webp&s=eefdc28d1e0317f59a3750b2e6d0960c3805e302', 'width': 960}, {'height': 625, 'url': 'https://external-preview.redd.it/NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bf0743d282ba493d0a7e6601c5b7e46f15de3862', 'width': 1080}], 'source': {'height': 2304, 'url': 'https://external-preview.redd.it/NnFlYm0wenkycDVlMYHmQWFaxYJoR7tKMJdcQN8TRRiueBbXJs3gunLFjwjl.png?format=pjpg&auto=webp&s=ed3f2c67e751641e63fd6c9fc3a7ee69182fe2ab', 'width': 3980}, 'variants': {}}]} |
||
Is this free google LLM on openrouter any good? | 2 | Seems to be decent but didn't really test much. Is it better than new Llama? | 2024-12-08T23:00:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h9vcyd/is_this_free_google_llm_on_openrouter_any_good/ | Funny_Acanthaceae285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9vcyd | false | null | t3_1h9vcyd | /r/LocalLLaMA/comments/1h9vcyd/is_this_free_google_llm_on_openrouter_any_good/ | false | false | self | 2 | null |
Is using a Llama-based plugin unsustainable? | 0 | Perchance.org AI text plugin, etc.
What's the environmental impact of using it from time to time? Sorry if this isn't the right place. | 2024-12-08T23:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1h9vuqd/is_using_a_llamabased_plugin_unsustainable/ | magpie_7934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9vuqd | false | null | t3_1h9vuqd | /r/LocalLLaMA/comments/1h9vuqd/is_using_a_llamabased_plugin_unsustainable/ | false | false | self | 0 | null |
Are local models your "go-to"? | 24 | Curious what most people use in their day to day.
[View Poll](https://www.reddit.com/poll/1h9wlzn) | 2024-12-09T00:01:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h9wlzn/are_local_models_your_goto/ | _Vedr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9wlzn | false | null | t3_1h9wlzn | /r/LocalLLaMA/comments/1h9wlzn/are_local_models_your_goto/ | false | false | self | 24 | null |
Ollama tweak on a Mac not working | 0 | So I've been using llama3.2:3b on my laptop, using Python to call the Ollama API. I was able to set this up with WSL2 on Windows to run Ollama, but it wouldn't do the same on the Mac. I just need to change the host address (it still defaults to 127.0.0.1) so I can call the generate API to change hundreds of folder files to another layout. | 2024-12-09T00:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h9x8m8/ollama_tweak_on_a_mac_not_working/ | DCGreatDane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9x8m8 | false | null | t3_1h9x8m8 | /r/LocalLLaMA/comments/1h9x8m8/ollama_tweak_on_a_mac_not_working/ | false | false | self | 0 | null |
RAM sweet spot for local LLMs in 2025? | 6 | Next year I plan to do 2 things:
1. Pick up a mini PC with Strix Halo, where it'll be pretty affordable to upgrade RAM, and use it as just an LLM "server"
2. May upgrade my macbook to something with the M5 chip. Ram is crazy expensive on these machines though.
My current machines all have 32gb, but that limits me to roughly 20b-27b sized models.
Is the sweet spot something like 64gb for 70b models, and beyond that you just need way too much ram? (Maybe a macbook with 48gb would even work)
It's a bit weird because we're seeing bigger models come out of course like llama with 405b which are way too big, but not sure if we might start getting like quantized versions of larger models that need 128gb of ram. | 2024-12-09T01:03:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h9xv15/ram_sweet_spot_for_local_llms_in_2025/ | zerostyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9xv15 | false | null | t3_1h9xv15 | /r/LocalLLaMA/comments/1h9xv15/ram_sweet_spot_for_local_llms_in_2025/ | false | false | self | 6 | null |
Persys AI now open source | 73 | As promised, Persys AI is now open-source.
Repo: [https://github.com/persys-ai/persys](https://github.com/persys-ai/persys)
# Background
I made Persys (personal-system) to act as sort of a second brain using local AI and storage. The physical device has 1/2 TB of storage and comes with llama pre-installed. It's like a home-lab/NAS/assistant all rolled into one with a clean Electron based application.
It has a chat application like the regular cloud-based ones. You can add multiple personalities too.
Some other apps:
* Paper: interactive authoring tool where the content gets embedded for RAG.
* Library: a PDF reader where you can perform RAG on the PDF you're reading.
* ToDo: a to-do app with a calendar where your to-do items get embedded.
* CardClip: a contacts app where the info gets embedded.
* Music: a music player, it also embeds your music library details.
* Files: where you can access your storage and upload. Plaintext files also get embedded.
The whole concept is to basically create embeddings of your entire digital self but locally. Everything is embedded using the nomic-embed-text model. Eventually, I'd love to open an app-store where 3rd party apps can use the on-device models and user files to do other cool stuff. Consider the pre-installed apps a demo.
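If you want to try the same embedding flow on your own machine, here is a minimal sketch against Ollama's /api/embeddings endpoint (illustrative only, not necessarily byte-for-byte what Persys does internally):

```python
# Sketch: embed a snippet locally with nomic-embed-text through Ollama.
import requests

def embed(text: str) -> list[float]:
    r = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    r.raise_for_status()
    return r.json()["embedding"]

vector = embed("to-do: renew passport before March")
print(len(vector))  # dimensionality of the embedding
```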
The device itself is a Raspberry Pi 5 (8gb model) and it runs llama3.2 pretty well. You can run the code I posted on your own devices (Linux). You can change models in your settings too.
Please let me know how it runs if you have juiced up hardware.
This is just the start and I'm a solo dev so bear with me on the docs and pull-requests.
To the folks who ordered devices, I want to say **thank you** again and your orders are currently shipping. | 2024-12-09T01:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h9y8e2/persys_ai_now_open_source/ | ranoutofusernames__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9y8e2 | false | null | t3_1h9y8e2 | /r/LocalLLaMA/comments/1h9y8e2/persys_ai_now_open_source/ | false | false | self | 73 | {'enabled': False, 'images': [{'id': 'aVBHeuBUlXReqo2_4G_qu_6ErtUe1iFTMz1vwCIEzqs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h82ctywJycuticm2Gk_JCGsCVgvjhDKYF7yzPOK44PE.jpg?width=108&crop=smart&auto=webp&s=1286ad033ef205d241e88c5b9d58ea590237570a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h82ctywJycuticm2Gk_JCGsCVgvjhDKYF7yzPOK44PE.jpg?width=216&crop=smart&auto=webp&s=e2e95d49e3aa369a1e882ea17990d8a82eeab08c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h82ctywJycuticm2Gk_JCGsCVgvjhDKYF7yzPOK44PE.jpg?width=320&crop=smart&auto=webp&s=b1b39c5ae7dd2dbfd74ec8679e31762d838be006', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h82ctywJycuticm2Gk_JCGsCVgvjhDKYF7yzPOK44PE.jpg?width=640&crop=smart&auto=webp&s=56cd7a2039bcc51d6095fda14032ed52d8e71886', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h82ctywJycuticm2Gk_JCGsCVgvjhDKYF7yzPOK44PE.jpg?width=960&crop=smart&auto=webp&s=b4af5815090b1aa9e8620db90956b064ce2a0e95', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h82ctywJycuticm2Gk_JCGsCVgvjhDKYF7yzPOK44PE.jpg?width=1080&crop=smart&auto=webp&s=7fe2d73ee92d575fd1d46a4f4a48640e42627de4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h82ctywJycuticm2Gk_JCGsCVgvjhDKYF7yzPOK44PE.jpg?auto=webp&s=3aab021e1de5aae5a1f4a3592fd8b1a05b339e12', 'width': 1200}, 'variants': {}}]} |
Expanding on an existing language model? | 1 | [removed] | 2024-12-09T02:07:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h9z32n/expanding_on_an_existing_language_model/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9z32n | false | null | t3_1h9z32n | /r/LocalLLaMA/comments/1h9z32n/expanding_on_an_existing_language_model/ | false | false | self | 1 | null |
LG Releases 3 New Models - EXAONE-3.5 in 2.4B, 7.8B, and 32B sizes | 510 | Link: https://huggingface.co/collections/LGAI-EXAONE/exaone-35-674d0e1bb3dcd2ab6f39dbb4 | 2024-12-09T02:20:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h9zbl2/lg_releases_3_new_models_exaone35_in_24b_78b_and/ | Many_SuchCases | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9zbl2 | false | null | t3_1h9zbl2 | /r/LocalLLaMA/comments/1h9zbl2/lg_releases_3_new_models_exaone35_in_24b_78b_and/ | false | false | self | 510 | {'enabled': False, 'images': [{'id': 'AtQ29FnMMc_AtQqBmQ_s18VHTFy4SedTiQdV_m0ddIQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=108&crop=smart&auto=webp&s=11f7fb90ed9e307d2cd0e6a8de0fcafb82f15d88', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=216&crop=smart&auto=webp&s=d3b53fe2faaf29d65fbc3d7d266eaa44587e8022', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=320&crop=smart&auto=webp&s=9ffed69b177fc6b2d87c630530531b329ef5649a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=640&crop=smart&auto=webp&s=1f6d88719d89f09c29d31b5f2f61162df24b22b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=960&crop=smart&auto=webp&s=fe97ccc4419b003ff74a30d6b6355b98b85c47f4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=1080&crop=smart&auto=webp&s=cb2bbcfafab85d0d8944ca41d0fd1da57d602f77', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?auto=webp&s=4bb632c086a7f7de6a8a240f827f1730b0f7631c', 'width': 1200}, 'variants': {}}]} |
TextCraft 1.0.7 Update: Added Temperature Control in UI | 1 | TextCraft is an add-in for Microsoft Word that seamlessly integrates essential AI tools, including text generation, proofreading, and more, directly into the user interface. Designed for offline use, TextCraft allows you to access AI-powered features without requiring an internet connection, making it a more privacy-friendly alternative to Microsoft Copilot.
https://github.com/suncloudsmoon/TextCraft | 2024-12-09T02:26:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h9zg0b/textcraft_107_update_added_temperature_control_in/ | SuccessIsHardWork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9zg0b | false | null | t3_1h9zg0b | /r/LocalLLaMA/comments/1h9zg0b/textcraft_107_update_added_temperature_control_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KRP_oZRmxF7MX9Ep4OM2JuvtJkZfgPrc83vgDykyJZU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1zOAfL1vTYdWdvfzccswi3Bb-NduLVgy1lykMbASG4o.jpg?width=108&crop=smart&auto=webp&s=9f35a7c4a5aefe7211b44bcb6c21323498c99a0b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1zOAfL1vTYdWdvfzccswi3Bb-NduLVgy1lykMbASG4o.jpg?width=216&crop=smart&auto=webp&s=56b826479d7585fbe4c43bbc67fc4c0f9eade14e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1zOAfL1vTYdWdvfzccswi3Bb-NduLVgy1lykMbASG4o.jpg?width=320&crop=smart&auto=webp&s=69b3ec4825b62cae03e52054286c379268865252', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1zOAfL1vTYdWdvfzccswi3Bb-NduLVgy1lykMbASG4o.jpg?width=640&crop=smart&auto=webp&s=85f4815514284b3f696b9d6f9f33526d5c9e2f42', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1zOAfL1vTYdWdvfzccswi3Bb-NduLVgy1lykMbASG4o.jpg?width=960&crop=smart&auto=webp&s=9c05f38329357c5a8724dac079a7e24609ae5cb8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1zOAfL1vTYdWdvfzccswi3Bb-NduLVgy1lykMbASG4o.jpg?width=1080&crop=smart&auto=webp&s=7857580384d652ad1559864c89174b15041f2808', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1zOAfL1vTYdWdvfzccswi3Bb-NduLVgy1lykMbASG4o.jpg?auto=webp&s=031e92e3ef759e592e0e13d5e2b883603acb1bf9', 'width': 1200}, 'variants': {}}]} |
O1 Replication Paper | 6 | Hi Everyone,
Just released a paper that I think hints at how OpenAI might have developed some of o1's remarkable reasoning capabilities. TL;DR: you need a small dataset of really high-quality human data paired with a little bit of RL.
Here are some of the key takeaways from the research:
* Reasoning data is extremely scarce on the internet. It's very difficult to find data that really shows the problem-solving process, e.g. hypothesis testing.
* RL, although important, is generally overrated by the community. It's really the cherry on top; the human data does most of the heavy lifting. See DeepSeek-Math for more info on this (toy illustration below).
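As a toy illustration of the "small, high-quality human data first" recipe (not the paper's exact setup; the dataset path and base model below are placeholders):

```python
# Toy SFT sketch with trl on curated reasoning traces; assumes a JSONL
# file with a "text" field containing full worked solutions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

ds = load_dataset("json", data_files="reasoning_traces.jsonl")["train"]
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",  # placeholder base model
    train_dataset=ds,
    args=SFTConfig(output_dir="sft-out", max_seq_length=4096),
)
trainer.train()
```

The RL stage would then sit on top of a checkpoint like this, which is why it looks like the cherry rather than the cake.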
Paper can be found here: [https://arxiv.org/abs/2412.04645](https://arxiv.org/abs/2412.04645)
Not saying this is definitively how o1 was built, but the results suggest that this method can be used to create very similar behaviour.
Happy to answer any questions on the paper | 2024-12-09T02:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h9zhbr/o1_replication_paper/ | Brosarr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9zhbr | false | null | t3_1h9zhbr | /r/LocalLLaMA/comments/1h9zhbr/o1_replication_paper/ | false | false | self | 6 | null |
OpenGVLab/InternVL2_5-78B · Hugging Face | 16 | 2024-12-09T02:50:37 | https://huggingface.co/OpenGVLab/InternVL2_5-78B | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h9zw0h | false | null | t3_1h9zw0h | /r/LocalLLaMA/comments/1h9zw0h/opengvlabinternvl2_578b_hugging_face/ | false | false | default | 16 | {'enabled': False, 'images': [{'id': 'AcZ_XcYnwvPDnG_iWQUmLL9H5MgS2M33KoPHNvvvPWg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MrgJC64gND5ZvHpDuETOFJ-D4rnjRzXd6WQMuUBe1d0.jpg?width=108&crop=smart&auto=webp&s=53b5069011e3b552825195814973a19bd07b182f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MrgJC64gND5ZvHpDuETOFJ-D4rnjRzXd6WQMuUBe1d0.jpg?width=216&crop=smart&auto=webp&s=5775a31ed677f61a9e952e7be718226e612f76e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MrgJC64gND5ZvHpDuETOFJ-D4rnjRzXd6WQMuUBe1d0.jpg?width=320&crop=smart&auto=webp&s=0b0ffb4e4b4e6b423df356fb6a93b937353d87ba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MrgJC64gND5ZvHpDuETOFJ-D4rnjRzXd6WQMuUBe1d0.jpg?width=640&crop=smart&auto=webp&s=283252c63a8c18a267789d04b4144b4c24b3bc92', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MrgJC64gND5ZvHpDuETOFJ-D4rnjRzXd6WQMuUBe1d0.jpg?width=960&crop=smart&auto=webp&s=55081bf8ef9434d930d407803719e5a94afdfa9f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MrgJC64gND5ZvHpDuETOFJ-D4rnjRzXd6WQMuUBe1d0.jpg?width=1080&crop=smart&auto=webp&s=73765f35fcc85fa3916d3d05c42be860db52f5c8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MrgJC64gND5ZvHpDuETOFJ-D4rnjRzXd6WQMuUBe1d0.jpg?auto=webp&s=23a40bbb5bfc57e0791d2ef9e510b1c06c2ea9d0', 'width': 1200}, 'variants': {}}]} |
|
calculate compute to serve X concurrent users with a 8b LLM | 1 | [removed] | 2024-12-09T03:18:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ha0emo/calculate_compute_to_serve_x_concurrent_users/ | Alarmed_Spread_1410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha0emo | false | null | t3_1ha0emo | /r/LocalLLaMA/comments/1ha0emo/calculate_compute_to_serve_x_concurrent_users/ | false | false | self | 1 | null |
LYMT: I made something like 'handmade o1' by using prompt engineering | 0 | Hello r/LocalLLaMA and AI enthusiasts,
I’m excited to share that I’ve completed my first Python distribution project called LYMT: Let Your Model Think.
The idea for this project came to me after the release of the o1 model. I wondered, "Could prompt engineering help lightweight models maximize their performance?" As a result, I was able to significantly reduce errors in LLM models, particularly when dealing with simple mathematical calculations.
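To give a rough feel for the idea (a generic illustration, not LYMT's actual code), the core trick is a "think first, answer second" scaffold around the question:

```python
# Generic "let the model think" scaffold via an OpenAI-compatible local
# endpoint; base URL and model name are assumptions (e.g. an Ollama server).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
system = (
    "Reason through the problem step by step inside <thinking> tags, "
    "then give only the final result after 'Answer:'."
)
resp = client.chat.completions.create(
    model="llama3.2:3b",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "What is 17 * 24 - 9?"},
    ],
)
print(resp.choices[0].message.content)
```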
Would love to hear your thoughts or feedback!
[GreenScreen410/LYMT: LYMT: Let Your Model Think](https://github.com/GreenScreen410/LYMT) | 2024-12-09T04:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ha1jmv/lymt_i_made_something_like_handmade_o1_by_using/ | GreenScreen410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha1jmv | false | null | t3_1ha1jmv | /r/LocalLLaMA/comments/1ha1jmv/lymt_i_made_something_like_handmade_o1_by_using/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7jVSOpa1wbx7n4d0BjabT_eGLSNDpzWk8LJmW5_9beg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XuVc2Bc-kWDWc7fxgHYbe79aScJTL8YK_iTk2LhZn4k.jpg?width=108&crop=smart&auto=webp&s=f8c43a44e717c57d585e5d55d431d0359cbd1003', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XuVc2Bc-kWDWc7fxgHYbe79aScJTL8YK_iTk2LhZn4k.jpg?width=216&crop=smart&auto=webp&s=020d1178d79f1dd4f9a9e6892cbce6b4276aae2e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XuVc2Bc-kWDWc7fxgHYbe79aScJTL8YK_iTk2LhZn4k.jpg?width=320&crop=smart&auto=webp&s=78bc49813035bed12bac6ffcd886872a903b8fe4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XuVc2Bc-kWDWc7fxgHYbe79aScJTL8YK_iTk2LhZn4k.jpg?width=640&crop=smart&auto=webp&s=5a5d2f1f83ec670dfb45811abcba843725aa1751', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XuVc2Bc-kWDWc7fxgHYbe79aScJTL8YK_iTk2LhZn4k.jpg?width=960&crop=smart&auto=webp&s=6a22336111364bc545b4bdf6a52b02f0966d50e7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XuVc2Bc-kWDWc7fxgHYbe79aScJTL8YK_iTk2LhZn4k.jpg?width=1080&crop=smart&auto=webp&s=b81f848aabdfc1094e01a1d45ea1f172337957a0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XuVc2Bc-kWDWc7fxgHYbe79aScJTL8YK_iTk2LhZn4k.jpg?auto=webp&s=069f5f7944e5d7424450be3d29514f6fd434fd26', 'width': 1200}, 'variants': {}}]} |
On-Device Large Language Models: No API, No ChatGPT, Full Control | 0 | 2024-12-09T04:37:27 | https://youtube.com/watch?v=qyqDejEIO5U&si=X9dfgw9NzaojAwsd | Future_Court_9169 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ha1tjf | false | {'oembed': {'author_name': 'theterminalguy', 'author_url': 'https://www.youtube.com/@theterminalguy', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/qyqDejEIO5U?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="On-Device Large Language Models: No API, No ChatGPT, Full Control"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/qyqDejEIO5U/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'On-Device Large Language Models: No API, No ChatGPT, Full Control', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ha1tjf | /r/LocalLLaMA/comments/1ha1tjf/ondevice_large_language_models_no_api_no_chatgpt/ | false | false | 0 | {'enabled': False, 'images': [{'id': '5ngICUQzv8kAPRQGA67KOKdtFoM8YdbGHFx3lNlurVA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lofQ0uJf1PczZ4oDQ-8_Axh7jlRqLEl6M3aJo0pl82U.jpg?width=108&crop=smart&auto=webp&s=658539b86db547a7883c353a3408947b6fdc0c00', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/lofQ0uJf1PczZ4oDQ-8_Axh7jlRqLEl6M3aJo0pl82U.jpg?width=216&crop=smart&auto=webp&s=c454a59b62e26cc975572398858c5aad76bf0b71', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/lofQ0uJf1PczZ4oDQ-8_Axh7jlRqLEl6M3aJo0pl82U.jpg?width=320&crop=smart&auto=webp&s=b2f7ac400b2130336e54fb9e54701df0d7e3f3cb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/lofQ0uJf1PczZ4oDQ-8_Axh7jlRqLEl6M3aJo0pl82U.jpg?auto=webp&s=f3b7c76f73420b50d6983b8eec983c75868e6f89', 'width': 480}, 'variants': {}}]} |
||
QWQ getting crazy!! | 1 | [removed] | 2024-12-09T04:44:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ha1xqk/qwq_getting_crazy/ | Own-Ingenuity5895 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha1xqk | false | null | t3_1ha1xqk | /r/LocalLLaMA/comments/1ha1xqk/qwq_getting_crazy/ | false | false | self | 1 | null |
I'm looking for the best/smartest model that will perform okay on a 12GB card | 1 | [removed] | 2024-12-09T05:21:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ha2kk9/im_looking_for_the_bestsmartest_model_that_will/ | UndeadGodzilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha2kk9 | false | null | t3_1ha2kk9 | /r/LocalLLaMA/comments/1ha2kk9/im_looking_for_the_bestsmartest_model_that_will/ | false | false | self | 1 | null |
Current favourite llm models for 2x24 gb vram | 3 |
The title says it all: what are your current favourite models for 2x24GB VRAM?
Maybe we can do a thread with the model name, a link, a short description, and what it is deemed best for (use case); maybe this helps to 'clear the jungle' :)
| 2024-12-09T05:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ha2skp/current_favourite_llm_models_for_2x24_gb_vram/ | cosmo-pax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha2skp | false | null | t3_1ha2skp | /r/LocalLLaMA/comments/1ha2skp/current_favourite_llm_models_for_2x24_gb_vram/ | false | false | self | 3 | null |
All the Open-source AI tools we love | 158 | Currently, not only do large language and multimodal models flood the open-source space, but a plethora of other ML tools do as well.
Which do you love and recommend? | 2024-12-09T05:45:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ha2yor/all_the_opensource_ai_tools_we_love/ | cosmo-pax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha2yor | false | null | t3_1ha2yor | /r/LocalLLaMA/comments/1ha2yor/all_the_opensource_ai_tools_we_love/ | false | false | self | 158 | null |
Can someone please run the Llama 3.3 70B quantization to accuracy degradation graph? | 19 | Or can you teach me how to do it on runpod?
I’ve been seeing comments floating about on how quantization is hurting the modern dense models.
Back in the day people used to make sub-1-bit quantization work with Llama 1, and now even Q4 seems to be giving people unsatisfactory results.
Is this a true observation? Can we please test this? If you’ll teach me how I’d happily do it on Runpod.
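In case it helps, here is roughly the comparison loop I picture; a sketch with lm-evaluation-harness where the model id, task choice, and quantization flags are all assumptions on my part:

```python
# Sketch: score the same model at several precisions and compare.
# Results-dict keys can differ between harness versions; verify locally.
import lm_eval

configs = {
    "bf16": "pretrained=meta-llama/Llama-3.3-70B-Instruct,dtype=bfloat16",
    "int8": "pretrained=meta-llama/Llama-3.3-70B-Instruct,load_in_8bit=True",
    "int4": "pretrained=meta-llama/Llama-3.3-70B-Instruct,load_in_4bit=True",
}
scores = {}
for name, model_args in configs.items():
    res = lm_eval.simple_evaluate(model="hf", model_args=model_args, tasks=["mmlu"])
    scores[name] = res["results"]["mmlu"]
print(scores)  # accuracy per precision = the degradation curve
```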
EDIT: it just occurred to me that I should ask Claude this question before posting here so I’ll go do that now but posting anyway. | 2024-12-09T06:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ha3f2v/can_someone_please_run_the_llama_33_70b/ | Educational_Gap5867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha3f2v | false | null | t3_1ha3f2v | /r/LocalLLaMA/comments/1ha3f2v/can_someone_please_run_the_llama_33_70b/ | false | false | self | 19 | null |
My Homelab Build: 4x RTX 3090 Powerhouse | 49 | Hello,
Thought I'd share my build. While it wasn't specifically built for LLM experiments, I grabbed two extra 3090s to play around with local LLMs.
Most parts are used. I spent a lot of time hunting for deals on eBay. Here's the breakdown:
* CPU: Threadripper 2920X, $60
* CPU Cooler: ARCTIC Freezer 4U-M, $50
* Motherboard: MSI X399 Gaming Pro Carbon AC, $160
* GPUs: 4x RTX 3090, approximately $625 each
* Memory: 128 GB DDR4 3200 MHz, brand new, local shopping $200
* PSU: Dark Power Pro 13 1300W, 80 PLUS Titanium, $160
* Case: Mining Rig from Amazon, $59
* Fans: 5xThermalright TL-E12B V3, $8 each
* Wifi card: EDUP PCIe WiFi 6E Card, $25
Had to get some extra stuff like PCIe risers too. Originally planned to use the Phanteks Enthoo Pro II Server Edition case. It's huge, but fitting 4 GPUs was impossible without water cooling and removing the GPU fans. Wasn't ready for that headache, so went with an open case instead. Yeah, it'll get dusty in the garage, but a leaf blower once a month can help.
Regarding power supply: The 1300W PSU is admittedly undersized for 4 GPUs. Currently, I must remain under 1400W on the circuit until I can rewire the garage outlets. Due to limited space in the electrical panel, the 240V upgrade will need to wait. I run Arch Linux with custom scripts and systemd services to manage power constraints. Current limitations:
nvidia-smi -pm 1          # enable persistence mode (keeps driver state loaded)
nvidia-smi -pl 230        # cap the power limit at 230 W per GPU
nvidia-smi -lgc 0,1400    # lock GPU clocks to the 0-1400 MHz range
These limits can be adjusted based on workload requirements, as I rarely need all four GPUs at maximum capacity.
Threadrippers are great, but the 2920x is showing its age. However, for $60, it’s still a great deal. I would appreciate more threads, but it is what it is. If I were to build the system again, I might consider EPYC now. With this motherboard, I run GPUs at 16/8/16/8 PCIe 3.0 lanes. Although I haven’t noticed significant bottlenecks, with a larger budget, I would target PCIe 4.0. It’s a cost-performance tradeoff.
This system is fully powered by green energy. I have solar panels, and using the excess energy to power this system is much better than selling it back to the provider!
Regarding the system's applications: I'm relatively new to local LLMs, so current usage primarily involves experiments and serving models on my local network, with plans to explore agents in the future. Beyond LLMs, this machine serves as my primary compute server for resource-intensive tasks. I am a software engineer working on distributed systems. I access it via SSH from other devices for various purposes, including:
* Running multiple dockerized applications, some are served on internet
* Data processing pipelines, experiments using big data tools
* Rendering tasks, not my expertise, used by other family members
* Development Server: All development is done on this server. Access is primarily via SSH tunneling, allowing even our basic laptops to remain productive.
For LLM experiments, I primarily run AWQ quantization with SGLang. The largest models I have tried so far fit into two GPUs with AWQ. Therefore, for now, I aim to run two different models simultaneously. If you have more ideas for experiments on this machine, I am open to suggestions. Integrating | 2024-12-09T06:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ha3hwu/my_homelab_build_4x_rtx_3090_powerhouse/ | everydayissame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha3hwu | false | null | t3_1ha3hwu | /r/LocalLLaMA/comments/1ha3hwu/my_homelab_build_4x_rtx_3090_powerhouse/ | false | false | self | 49 | null |
Silly Question from a Newbie About RAG | 1 | [removed] | 2024-12-09T07:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ha46hl/silly_question_from_a_newbie_about_rag/ | Lucky-Brilliant-2997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha46hl | false | null | t3_1ha46hl | /r/LocalLLaMA/comments/1ha46hl/silly_question_from_a_newbie_about_rag/ | false | false | self | 1 | null |
LG Calls EXAONE-3.5 Open Source | 58 | Calls it [OpenSource](https://www.lgresearch.ai/blog/view?seq=507), but the License says it's not [OpenSource](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-32B-Instruct/blob/main/LICENSE).
It's an open-weight model, right? OK, I ran a few tests on **EXAONE-3.5-32B-Instruct-GGUF:Q8\_0**
**It struggled with simple tasks! What are your thoughts on this? We've come a long way; we now have more than 10 highly capable models, and maybe EXAONE 4 will be impressive.**
I can’t understand how EXAONE 3.5 was rated so impressively when benchmarked against leading competitors. | 2024-12-09T07:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ha47r5/lg_calls_exaone35_open_source/ | Vishnu_One | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha47r5 | false | null | t3_1ha47r5 | /r/LocalLLaMA/comments/1ha47r5/lg_calls_exaone35_open_source/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': '92S2DwVExD9HYhRfqlkoIHz-dlbIyeZ15N6wb8u0IKY', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/7PIHanoQyAz4DWQ2pd9ZBjs7C7u8H4YPeq8fnWmamdw.jpg?width=108&crop=smart&auto=webp&s=821b47f8ab1a5826aacdebcb71289b1fa702641a', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/7PIHanoQyAz4DWQ2pd9ZBjs7C7u8H4YPeq8fnWmamdw.jpg?width=216&crop=smart&auto=webp&s=c65c201e39c4978e41277929fde08bf4d925d76f', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/7PIHanoQyAz4DWQ2pd9ZBjs7C7u8H4YPeq8fnWmamdw.jpg?width=320&crop=smart&auto=webp&s=f51aa3a6af4f88fe6b6620436fe4cca6e376f87c', 'width': 320}, {'height': 368, 'url': 'https://external-preview.redd.it/7PIHanoQyAz4DWQ2pd9ZBjs7C7u8H4YPeq8fnWmamdw.jpg?width=640&crop=smart&auto=webp&s=b8814d3b28b20a985885cec6de0889b033654d00', 'width': 640}], 'source': {'height': 495, 'url': 'https://external-preview.redd.it/7PIHanoQyAz4DWQ2pd9ZBjs7C7u8H4YPeq8fnWmamdw.jpg?auto=webp&s=fa4c8b337b163e190f1bcd2786519ac2f75cf756', 'width': 860}, 'variants': {}}]} |
Wait for prices to drop after 5090 or 4070 Ti Super 16-Gig vs 3090 Ti 24-Gig for deep learning? | 7 | I am stuck. I bought a prebuilt with a 4070 Ti and got it for a pretty good price, about $1400 USD. I am just about to get into deep learning, and I think I want to train transformer models, multimodal language models to be specific.
To be a little more specific: as an example, I would like to train a model on pictures of a hand with a cut, a rash, an allergic reaction, a burn, etc., and have it diagnose them.
Should I return this computer and build one from scratch?
Thank you to anyone who took the time to respond :)
Mind you, I am completely new to this space; there is a nostalgia about it that reminds me of when the internet really got going in the late 90s | 2024-12-09T07:11:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ha48ue/wait_for_prices_to_drop_after_5090_or_4070_ti/ | samy1563 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha48ue | false | null | t3_1ha48ue | /r/LocalLLaMA/comments/1ha48ue/wait_for_prices_to_drop_after_5090_or_4070_ti/ | false | false | self | 7 | null |
Llama 3.3:70b-Instruct vs. Qwen2.5-Coder:32B – Which AI Model Reigns Supreme? Benchmark Results Inside! | 1 | [removed] | 2024-12-09T07:19:41 | texasdude11 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ha4cor | false | null | t3_1ha4cor | /r/LocalLLaMA/comments/1ha4cor/llama_3370binstruct_vs_qwen25coder32b_which_ai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Rjf5FdwX0fit3jD_OfHQOPa5M7N-1b4gP9y0pT4ojso', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/yt6kzwksxr5e1.jpeg?width=108&crop=smart&auto=webp&s=574ebf7cf30068c1f706acac1077be6111440b9f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/yt6kzwksxr5e1.jpeg?width=216&crop=smart&auto=webp&s=2956fbb6953100ed5115825aa466aa7df8fb958e', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/yt6kzwksxr5e1.jpeg?width=320&crop=smart&auto=webp&s=a43167af336eeafb7688e9157fd78f9fb3e545cb', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/yt6kzwksxr5e1.jpeg?width=640&crop=smart&auto=webp&s=9b8b766bac06845456225e7fcfd39119c7054ece', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/yt6kzwksxr5e1.jpeg?width=960&crop=smart&auto=webp&s=26e595062d6ad1ed31fad9d0f2d6c8698676fc44', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/yt6kzwksxr5e1.jpeg?width=1080&crop=smart&auto=webp&s=19a1cc12fcf1f0df6f64c6bea169760cda790fa2', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/yt6kzwksxr5e1.jpeg?auto=webp&s=c9b263c0a09aa5d0b203137a3524bdd5e27ad993', 'width': 1280}, 'variants': {}}]} |
Silly Question from a Newbie About RAG | 1 | [removed] | 2024-12-09T07:19:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ha4cr1/silly_question_from_a_newbie_about_rag/ | Lucky-Brilliant-2997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha4cr1 | false | null | t3_1ha4cr1 | /r/LocalLLaMA/comments/1ha4cr1/silly_question_from_a_newbie_about_rag/ | false | false | self | 1 | null |
What's the best open source advanced voice mode model, like OpenAI's? | 5 | Was wondering what the best open source alternative to OpenAI's advanced voice mode is. Probably something built on top of LiveKit? | 2024-12-09T07:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ha4lhi/whats_the_best_open_source_model_advance_voice/ | KeikakuAccelerator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha4lhi | false | null | t3_1ha4lhi | /r/LocalLLaMA/comments/1ha4lhi/whats_the_best_open_source_model_advance_voice/ | false | false | self | 5 | null
Mac mini M4 base model as headless Ollama server | 0 | Hello, I want to use a Mac mini M4 base model (16GB/256GB) as a headless Ollama server (models up to 8-10GB). Can I use it in the future as a speech-to-text -> Ollama -> text-to-speech box (does the Mac mini M4 have a good enough integrated mic and speaker)? What solution is best for STT -> Ollama -> TTS on a Mac mini? What remote desktop should I use, or is an SSH server enough? I have no experience with the Mac ecosystem whatsoever, so I need some advice. I basically want to build an offline AI voice assistant based on the Mac mini M4. Thank you for any advice. | 2024-12-09T07:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ha4miq/mac_mini_m4_base_model_as_headless_ollama_server/ | TruckUseful4423 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha4miq | false | null | t3_1ha4miq | /r/LocalLLaMA/comments/1ha4miq/mac_mini_m4_base_model_as_headless_ollama_server/ | false | false | self | 0 | null
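A minimal sketch of what the STT -> Ollama -> TTS loop asked about above could look like, assuming Ollama on the Mac mini is started with OLLAMA_HOST=0.0.0.0 so it accepts LAN connections; the hostname, model tag, and audio file name are placeholders:

```python
# A minimal sketch of the STT -> Ollama -> TTS loop. STT uses openai-whisper,
# the LLM call hits Ollama's HTTP API on its default port 11434, and TTS uses
# the built-in macOS `say` command. "mac-mini.local" and "question.wav" are
# assumptions for illustration.
import subprocess

import requests
import whisper

# 1) Speech to text with a local Whisper model.
stt = whisper.load_model("base")
question = stt.transcribe("question.wav")["text"]

# 2) Send the transcript to the headless Ollama server over HTTP.
resp = requests.post(
    "http://mac-mini.local:11434/api/generate",
    json={"model": "llama3.1:8b", "prompt": question, "stream": False},
    timeout=120,
)
answer = resp.json()["response"]

# 3) Text to speech with macOS's built-in synthesizer.
subprocess.run(["say", answer])
```

For administration, an SSH server alone is enough for a headless box like this; macOS Screen Sharing is only needed if you want a GUI.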
Best Way to Get Started as Beginner? | 1 | I’ve been using ChatGPT and Custom GPTs for a year now, so I have that basic understanding of how to leverage LLMs.
However, I’m motivated to try building my own AI Agent for a very specific use case that requires training the model on tons of text-based data.
I was recommended to use Llama 3.1 or Llama 3.3.
I quickly realized that I would need to host Llama 3.x myself, which I have never done before.
I’m super excited to dive into it and learn it myself.
My question: As I’m starting out, what’s the best way to host and train Llama 3.x while keeping my costs low?
I want to eventually train it with a lot of data. And further down the road, I’d love to productize it and provide it as a service.
What would your recommendations be for starting off, and then for progressing toward a viable consumer product?
Thank you! | 2024-12-09T07:50:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ha4rxu/best_way_to_get_started_as_beginner/ | consciuoslydone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha4rxu | false | null | t3_1ha4rxu | /r/LocalLLaMA/comments/1ha4rxu/best_way_to_get_started_as_beginner/ | false | false | self | 1 | null |
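A minimal sketch of the cheapest way to start on the question above: run Llama 3.1 8B Instruct locally through Hugging Face transformers before paying for any hosting. This assumes you have accepted Meta's license for the gated repo and logged in with `huggingface-cli login`:

```python
# A minimal sketch of serving Llama 3.1 8B Instruct locally with Hugging Face
# transformers. bf16 needs roughly 16 GB of VRAM; quantized variants need less.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what fine-tuning an LLM means."}]
out = chat(messages, max_new_tokens=128)
# The pipeline returns the chat history with the assistant reply appended.
print(out[0]["generated_text"][-1]["content"])
```

Note that "training it with a lot of data" in practice usually means parameter-efficient fine-tuning (LoRA/QLoRA) or retrieval-augmented generation rather than full retraining, both of which keep costs far lower.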
InternVL2.5 is here! An advanced MLLM series with parameters ranging from 1B to 78B. | 1 | [removed] | 2024-12-09T07:52:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ha4t0z/internvl25_is_here_an_advanced_mllm_series_with/ | OpenGVLab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha4t0z | false | null | t3_1ha4t0z | /r/LocalLLaMA/comments/1ha4t0z/internvl25_is_here_an_advanced_mllm_series_with/ | false | false | 1 | null |
You can replace 'hub' with 'ingest' in any Github url for a prompt-friendly text extract | 589 | 2024-12-09T07:53:36 | https://v.redd.it/46uow9wo3s5e1 | MrCyclopede | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ha4td7 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/46uow9wo3s5e1/DASHPlaylist.mpd?a=1736322830%2COWI2OWJjOWJmYzczZGMzOWIyYWMzMTcyY2MxMzU4N2Y4YWQ1NTcwYWVhOGU2YWY3NmQ2M2YwOGEyZDdlNDkwYw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/46uow9wo3s5e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/46uow9wo3s5e1/HLSPlaylist.m3u8?a=1736322830%2CMDRlNWM3ZDRlZjE4NzMzYjhiZGJkNjRkMTM3NmJiNDMwMTAwMjhiNmNlYTRiZDRjOGQ0ZjA0ZmRmMjRhMjgwYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/46uow9wo3s5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1ha4td7 | /r/LocalLLaMA/comments/1ha4td7/you_can_replace_hub_with_ingest_in_any_github_url/ | false | false | 589 | {'enabled': False, 'images': [{'id': 'bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT.png?width=108&crop=smart&format=pjpg&auto=webp&s=2bb27e46cb374f382e8dc4e5b8d44fa5fe043d65', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT.png?width=216&crop=smart&format=pjpg&auto=webp&s=b8d582480f86b6e565f74fc5bf4eac6d021d9ab5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT.png?width=320&crop=smart&format=pjpg&auto=webp&s=630487b0c9af3c1d00012c6f183a876ed8d6c23e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT.png?width=640&crop=smart&format=pjpg&auto=webp&s=492f57e69054323ade4a50dc8c5a004552815914', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT.png?width=960&crop=smart&format=pjpg&auto=webp&s=3b8d9c5e23cc74b007f1c4e636a2bd91e882a9ce', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1dd1698ee286dd2149da3a71a39ffbeea1a54875', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bnd2Znhkdm8zczVlMRIgHtPEsifh8tM_5wNutcA5VGiDBofkx8bkWIRP7xGT.png?format=pjpg&auto=webp&s=b87ded5b79b75af2ee1fa6f0ca05d7d37e3e056c', 'width': 1280}, 'variants': {}}]} |
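The trick in the post above is literally a string substitution; a minimal sketch (the example repo is just an illustration):

```python
# Replacing "hub" with "ingest" maps a GitHub repo URL onto gitingest.com,
# which renders the repo as one prompt-friendly text digest.
repo_url = "https://github.com/karpathy/nanoGPT"
print(repo_url.replace("hub", "ingest", 1))
# -> https://gitingest.com/karpathy/nanoGPT
```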
Join Us at GPU-Poor LLM Gladiator Arena : Evaluating EXAONE 3.5 Models 🏆🤖 | 84 | 2024-12-09T07:57:20 | https://huggingface.co/spaces/k-mktr/gpu-poor-llm-arena | kastmada | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ha4v3q | false | null | t3_1ha4v3q | /r/LocalLLaMA/comments/1ha4v3q/join_us_at_gpupoor_llm_gladiator_arena_evaluating/ | false | false | 84 | {'enabled': False, 'images': [{'id': 'Jn8Qu_vDoWZof-N9lLOzftuBNpRrHvtYkXkKQBL1A48', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=108&crop=smart&auto=webp&s=4c1f344aca5db7afdd71312c01538475aa7c9b7f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=216&crop=smart&auto=webp&s=fd176b4b97c51d6d90835f587373cdbf22506e0c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=320&crop=smart&auto=webp&s=2d3fe2b3a23a4750a0954ae952fe131f7586bc5a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=640&crop=smart&auto=webp&s=e7ff80e00d414dbc8a7a294fd4bb94410a536b19', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=960&crop=smart&auto=webp&s=8e17133b8be719bc18faf3053db5105f4a62a0e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=1080&crop=smart&auto=webp&s=aa0167c430ce0d125a18de31c140059ff6ba325d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?auto=webp&s=d91168b0c864f09b57c49468b6985435f999aca0', 'width': 1200}, 'variants': {}}]} |
Can I build a RAG Chatbot using an English Database that is able to answer queries in other languages? | 1 | [removed] | 2024-12-09T07:58:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ha4vgr/can_i_build_a_rag_chatbot_using_an_english/ | durianapple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ha4vgr | false | null | t3_1ha4vgr | /r/LocalLLaMA/comments/1ha4vgr/can_i_build_a_rag_chatbot_using_an_english/ | false | false | self | 1 | null |