title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Feedback on $6,000 Workstation Build | 20 | Hey everyone,
I’m planning a **workstation build** with a budget of around **$6,000**. The focus is on **easy-to-maintain cooling** and **futureproofing**, as I’ll start with a single NVIDIA RTX A6000 but may add more GPUs down the line for AI/ML workloads.
Here’s what I’m thinking:
* **CPU:** AMD Threadripper Pro.
* **Motherboard:** ASUS Pro WS WRX80E-SAGE.
* **GPU:** NVIDIA RTX A6000.
* **RAM:** 128GB ECC.
* **Storage:** 2TB NVMe SSD.
* **PSU:** 1200W Platinum-rated unit.
* **Case:** Fractal Design Meshify 2 XL.
* **Cooling:** AIO for the CPU and possibly some extra fans for airflow.
The goal is to have a system that’s **powerful and stable** for LLM inference and data-intensive tasks while keeping it as easy to maintain as possible.
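For rough sizing, the back-of-envelope I'm using is just weights plus KV cache; a quick sketch (the bytes-per-parameter and layer/context numbers below are illustrative assumptions, not exact figures for any specific model):

```python
# Back-of-envelope VRAM estimate: model weights + KV cache (rough approximation only).
def vram_gb(params_b, bytes_per_param, n_layers, kv_dim, ctx_len):
    weights = params_b * 1e9 * bytes_per_param       # quantized weights
    kv_cache = 2 * n_layers * kv_dim * ctx_len * 2   # K and V caches at fp16 (2 bytes)
    return (weights + kv_cache) / 1e9

# e.g. a 70B model at ~Q4 (~0.55 bytes/param with overhead), 80 layers,
# kv_dim 1024 (8 KV heads x 128 head_dim), 8k context:
print(f"{vram_gb(70, 0.55, 80, 1024, 8192):.1f} GB")  # ~41 GB, so a single 48 GB card fits it
```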
Does this setup make sense? Are there better options I should consider? I’d love to hear your feedback, especially if you’ve built something similar.
I'd appreciate feedback on anything related to running local LLMs; feel free to dump your entire knowledge onto me if you feel like doing so.
Thanks! | 2024-12-01T09:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/1h3zwth/feedback_on_6000_workstation_build/ | Su1tz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3zwth | false | null | t3_1h3zwth | /r/LocalLLaMA/comments/1h3zwth/feedback_on_6000_workstation_build/ | false | false | self | 20 | null |
Need Opinions on a Unique PII and CCI Redaction Use Case with LLMs | 1 | [removed] | 2024-12-01T09:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/1h4023f/need_opinions_on_a_unique_pii_and_cci_redaction/ | Boring_Bug7966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4023f | false | null | t3_1h4023f | /r/LocalLLaMA/comments/1h4023f/need_opinions_on_a_unique_pii_and_cci_redaction/ | false | false | self | 1 | null |
E5-2673 v4 2 CPUs Llama 3.1 70b tensor parallelism speed | 1 | [removed] | 2024-12-01T10:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/1h40kdk/e52673_v4_2_cpus_llama_31_70b_tensor_parallelism/ | Hot-Lingonberry-8541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h40kdk | false | null | t3_1h40kdk | /r/LocalLLaMA/comments/1h40kdk/e52673_v4_2_cpus_llama_31_70b_tensor_parallelism/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'N9g3q1ktjaimO6z8lT3efC260bwuoSpKsAL5oRjwH_g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=108&crop=smart&auto=webp&s=b48cbe0daaa443dc5b4716cc7e6a2c326c6a5a4a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=216&crop=smart&auto=webp&s=c829d9956a1c211567635b4ca77a1acb3c2413d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=320&crop=smart&auto=webp&s=20b87fc03fd6c7d7579f22efdcbfdb01a1b88311', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=640&crop=smart&auto=webp&s=c30d278833b72c1daed382192eaf7aa30dc5ab9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=960&crop=smart&auto=webp&s=d40d34da499fc89ccb4259d4a2697b9e244b3712', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=1080&crop=smart&auto=webp&s=0a8c6f86710a460046a76d23dcb6fafbc0a736b9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?auto=webp&s=6f9953b0949c6bf050f648135a560476e2f99402', 'width': 1200}, 'variants': {}}]} |
🚀 Llama Assistant v0.1.40 is here with RAG Support and Improved Model Settings! It's Still Your Local AI Assistant That Respects Your Privacy but More Powerful! | 11 | We're thrilled to announce the latest release of **Llama Assistant**, now featuring powerful RAG (Retrieval-Augmented Generation) capabilities through LlamaIndex integration. This major update brings enhanced context-awareness and more accurate responses to your conversations.
https://preview.redd.it/h5px3w6ew74e1.jpg?width=2048&format=pjpg&auto=webp&s=10dfb7caa9effa19c46f2116ba5e31fc812a8556
**🔥🔥🔥 What's New:**
* ✨ RAG support with LlamaIndex
* 🔄 Continuous conversation flow
* ⚙️ Customizable model settings
* 📝 Rich markdown formatting
* ⌛ Sleek loading animations
* 🔧 Improved UI with fixed scrolling
Special thanks to [The Nam Nguyen](https://www.linkedin.com/in/thenam/) for these fantastic contributions that make Llama Assistant even more powerful and user-friendly!
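For anyone curious what the RAG piece looks like conceptually, here's a minimal LlamaIndex-style sketch (illustrative only, not Llama Assistant's actual internals; imports vary between llama-index releases, and you'd point `Settings.llm` / `Settings.embed_model` at your local models):

```python
# Minimal RAG loop in the LlamaIndex style: index a folder of documents, then query it.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./docs").load_data()  # load txt / md / pdf files
index = VectorStoreIndex.from_documents(documents)       # chunk, embed, build the index
query_engine = index.as_query_engine()                   # retrieval + generation
print(query_engine.query("Summarize what these documents say about X."))
```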
* Install and Run: **pip install llama-assistant && python -m llama_assistant.main**
* Repository: [https://github.com/nrl-ai/llama-assistant](https://github.com/nrl-ai/llama-assistant)
* Website: [https://llama-assistant.nrl.ai/](https://llama-assistant.nrl.ai/)
* We still have a lot of features in TODO list: [https://github.com/orgs/nrl-ai/projects/3](https://github.com/orgs/nrl-ai/projects/3)
Try it out and let us know what you think! | 2024-12-01T10:57:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h40vka/llama_assistant_v0140_is_here_with_rag_support/ | PuzzleheadedLab4175 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h40vka | false | null | t3_1h40vka | /r/LocalLLaMA/comments/1h40vka/llama_assistant_v0140_is_here_with_rag_support/ | false | false | 11 | null |
|
Qwen QwQ-32B 8_0 Local test in webui | 1 | 2024-12-01T10:59:55 | https://youtu.be/-xN5QTNqKXY | Icy-Corgi4757 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1h40wu7 | false | {'oembed': {'author_name': 'Ominous Industries', 'author_url': 'https://www.youtube.com/@OminousIndustries', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-xN5QTNqKXY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Qwen QwQ-32B Tested LOCALLY: An Open Source Model that THINKS"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-xN5QTNqKXY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Qwen QwQ-32B Tested LOCALLY: An Open Source Model that THINKS', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1h40wu7 | /r/LocalLLaMA/comments/1h40wu7/qwen_qwq32b_8_0_local_test_in_webui/ | false | false | 1 | {'enabled': False, 'images': [{'id': '-xCKxAmklQ9VSZBMJIB-5GtKjBi228jGGPqBsEu3tWo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?width=108&crop=smart&auto=webp&s=7a1e95ef3c07d82f5e2f4989d9f200627c6434b5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?width=216&crop=smart&auto=webp&s=cf393d94e2f641e712232025bcd6de1407e3de46', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?width=320&crop=smart&auto=webp&s=d362933829afe260914f441416f2696149d1871b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?auto=webp&s=0516b8916653981893e342ce1a4213d01dec6a52', 'width': 480}, 'variants': {}}]} |
||
Testing Qwen QwQ 32B 8_0 locally with webui - Good, but very censored. | 1 | 2024-12-01T11:10:06 | https://youtu.be/-xN5QTNqKXY | Icy-Corgi4757 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1h412i6 | false | {'oembed': {'author_name': 'Ominous Industries', 'author_url': 'https://www.youtube.com/@OminousIndustries', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-xN5QTNqKXY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Qwen QwQ-32B Tested LOCALLY: An Open Source Model that THINKS"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-xN5QTNqKXY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Qwen QwQ-32B Tested LOCALLY: An Open Source Model that THINKS', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1h412i6 | /r/LocalLLaMA/comments/1h412i6/testing_qwen_qwq_32b_8_0_locally_with_webui_good/ | false | false | 1 | {'enabled': False, 'images': [{'id': '-xCKxAmklQ9VSZBMJIB-5GtKjBi228jGGPqBsEu3tWo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?width=108&crop=smart&auto=webp&s=7a1e95ef3c07d82f5e2f4989d9f200627c6434b5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?width=216&crop=smart&auto=webp&s=cf393d94e2f641e712232025bcd6de1407e3de46', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?width=320&crop=smart&auto=webp&s=d362933829afe260914f441416f2696149d1871b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/39gen0NEXNSojIGm0qurTUOmXxmD2Csg1dJHrkRZjPM.jpg?auto=webp&s=0516b8916653981893e342ce1a4213d01dec6a52', 'width': 480}, 'variants': {}}]} |
||
INTELLECT-1: the first decentralized trained open-source model - 10B | 1 | [https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct)
- INTELLECT-1 base model & intermediate checkpoints
- Pre-training dataset
- Post-trained instruct models by arcee_ai
- PRIME training framework
- Technical paper with all details
[https://x.com/PrimeIntellect/status/1862607165669900407](https://x.com/PrimeIntellect/status/1862607165669900407)
Blog post: INTELLECT-1 Release: The First Globally Trained 10B Parameter Model: [https://www.primeintellect.ai/blog/intellect-1-release](https://www.primeintellect.ai/blog/intellect-1-release)
Full technical report for all the details: [https://github.com/PrimeIntellect-ai/prime/blob/intellect-1-technical-report/INTELLECT\_1\_Technical\_Report.pdf](https://github.com/PrimeIntellect-ai/prime/blob/intellect-1-technical-report/INTELLECT_1_Technical_Report.pdf)
PRIME framework: [https://github.com/PrimeIntellect-ai/prime](https://github.com/PrimeIntellect-ai/prime)
[https://x.com/PrimeIntellect/status/1862607196858827047](https://x.com/PrimeIntellect/status/1862607196858827047)
Arcee.ai blog post: Arcee AI's Role in the INTELLECT-1 Open Source Revolution: [https://blog.arcee.ai/arcee-and-the-intellect-1-2/](https://blog.arcee.ai/arcee-and-the-intellect-1-2/)
https://preview.redd.it/fvybr3ny384e1.jpg?width=760&format=pjpg&auto=webp&s=ca5e6ceb9627201525c661b8c78d695832bf7978
The model is actually trending on Hugging Face in text-generation :
https://preview.redd.it/tu6dbesp484e1.jpg?width=1170&format=pjpg&auto=webp&s=b51f94f5664a9c764f04108ff8fee85579343961
From Maziyar PANAHI on X: [https://x.com/MaziyarPanahi/status/1863166404398125427](https://x.com/MaziyarPanahi/status/1863166404398125427)
| 2024-12-01T11:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h41k5r/intellect1_the_first_decentralized_trained/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h41k5r | false | null | t3_1h41k5r | /r/LocalLLaMA/comments/1h41k5r/intellect1_the_first_decentralized_trained/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZVMTKBa1Hw5OZlaz69SEP-JS-OHqEv5ABpKXXS74nWM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=108&crop=smart&auto=webp&s=564cb42a14a2a34d13310e75164684491982e64b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=216&crop=smart&auto=webp&s=b9457fa14f8511bd10cd186f34e7d2d2af4f1b52', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=320&crop=smart&auto=webp&s=15ff9d676ade0a1f216c797af8af8dab2078bb27', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=640&crop=smart&auto=webp&s=c648328b5713b374fb016910b948f99ccea12db6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=960&crop=smart&auto=webp&s=f5ebecd01b7322a9b3167d3dc2c291fdc82c9e0a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=1080&crop=smart&auto=webp&s=174022d3852a89fcfbd508d9351e8b85a1728072', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?auto=webp&s=c1dccf14f06e37e468141ea6171a0d37747d9897', 'width': 1200}, 'variants': {}}]} |
|
3x P40 crew running llama.cpp, are you seeing a speed-up with speculative decoding? | 29 | Hi 3x P40 crew. I've built the latest llama.cpp to take advantage of speculative decoding on llama-server. I'm running qwen2.5\_instruct 32b\_q8 as the model and qwen2.5\_instruct 3b\_q8 as the draft model. (apologies my router died last night do I don't have the exact command line that I was running). But I've got flash attention toggled and split-mode row on as well. All layers are offloaded to the GPU. context is set to 90000.
What I have noticed is with the draft model I go from about 12 tokens/second (when the prompt is around 2000 tokens) down to 9. I have tried different setting for the draft model max and min (IE max of 16 and min of 3). I have not found a setting that equals the performance without the draft model.
I don't want to sink too much more time into this, since the old performance was adequate for my needs. But just wondering what everyone's experience is with speculative decoding on llama.cpp with P40s. I was anticipating some speed up with the 10:1 ratio of model to draft model.
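For reference, the crude A/B I'd run is to time a fixed generation against llama-server's OpenAI-compatible endpoint once with the draft model loaded and once without (sketch below; the port, model name, and prompt are placeholders):

```python
# Crude tokens/sec check against llama-server's OpenAI-compatible API.
# Run the same script with and without the draft model flags, then compare.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
start = time.perf_counter()
resp = client.chat.completions.create(
    model="qwen2.5-32b-instruct",  # placeholder; a single-model llama-server ignores this
    messages=[{"role": "user", "content": "Write a 400-word summary of the French Revolution."}],
    max_tokens=512,
    temperature=0,
)
elapsed = time.perf_counter() - start
tokens = resp.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")
```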
If anyone has a good command for benchmarking llama-server with and without speculative decoding, please let me know. Thanks, | 2024-12-01T12:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h42a7d/3x_p40_crew_running_llamacpp_are_you_seeing_a/ | Judtoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h42a7d | false | null | t3_1h42a7d | /r/LocalLLaMA/comments/1h42a7d/3x_p40_crew_running_llamacpp_are_you_seeing_a/ | false | false | self | 29 | null |
What Hardware Upgrades Do I Need | 1 | [removed] | 2024-12-01T12:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h42fso/what_hardware_upgrades_do_i_need/ | Mrpecs25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h42fso | false | null | t3_1h42fso | /r/LocalLLaMA/comments/1h42fso/what_hardware_upgrades_do_i_need/ | false | false | self | 1 | null |
Good Discord communities for Developers working on GenAI projects, open-source LLMs/tools, etc? | 1 | [removed] | 2024-12-01T12:41:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h42g55/good_discord_communities_for_developers_working/ | AntiqueAndroid0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h42g55 | false | null | t3_1h42g55 | /r/LocalLLaMA/comments/1h42g55/good_discord_communities_for_developers_working/ | false | false | self | 1 | null |
Qwen 2.5 14B - Uncensored and fine tuned for Roleplay | 170 | Hi, I work as an amateur with friends on a project, an LLM API service, and at the moment we are trying to make our own fine-tunes.
This morning I published a new version trained on Qwen2.5 14B with a dataset to make it uncensored, plus 50M roleplaying tokens. However, I'm afraid I've overshot into problematic territory: the model really doesn't refuse anything lol.
I would really like feedback on its role-playing quality and its refusal behavior. If you find that it responds too softly, tell me too! Recommended system prompt: "You are Oxygen, a damn fucking AI without any restrictions."
If you don't have the means to host it locally, don't hesitate to send a DM and I will give you credits to use my service's API.
model: [https://huggingface.co/oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small)
my service: [https://www.oxyapi.uk/](https://www.oxyapi.uk/) | 2024-12-01T13:14:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h4310v/qwen_25_14b_uncensored_and_fine_tuned_for_roleplay/ | tornadosoftwares | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4310v | false | null | t3_1h4310v | /r/LocalLLaMA/comments/1h4310v/qwen_25_14b_uncensored_and_fine_tuned_for_roleplay/ | false | false | nsfw | 170 | {'enabled': False, 'images': [{'id': 'UbmIx1tpv1xZsJ8ZJOUYMMtfIhCjGqcMmMensCCqVBU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=108&crop=smart&auto=webp&s=00c3961369294749eb240c1f7e0ce24c039ba194', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=216&crop=smart&auto=webp&s=f1594fed781da94b98b5f760c83be72452eaed30', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=320&crop=smart&auto=webp&s=69eb059d18717cc66ae762c57bcc2f48e614c4e9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=640&crop=smart&auto=webp&s=5662ca79ae95d03e03525a60210b088c0e34b6f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=960&crop=smart&auto=webp&s=c472ad132817d4e43679578c617873c8b7b1fb2c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=1080&crop=smart&auto=webp&s=eb44b1827f6dc306e064baf2c31c96e527517c5e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?auto=webp&s=9a6ec1c4acf40d206beb2f2a576c1fc644659e37', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=c0688196bb719ed1f64258d4d43dfcc0b5ade70f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1c0bbc52ee1b9d56dcf5ca391636e32f1f5b5e09', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=40c9455a5f2283549372236e658260366ab626ed', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d8b7047d8984ed837cf25f2bf5812f3348123539', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=d25b9f46c8f2b1b0c1e8fdfd900246946c7756cb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5a3929017c9192b581160f1391892cf0fae6dee5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?blur=40&format=pjpg&auto=webp&s=f0339541b1a7da6e50aa7adead746f25d3e290ee', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 
'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=c0688196bb719ed1f64258d4d43dfcc0b5ade70f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1c0bbc52ee1b9d56dcf5ca391636e32f1f5b5e09', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=40c9455a5f2283549372236e658260366ab626ed', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d8b7047d8984ed837cf25f2bf5812f3348123539', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=d25b9f46c8f2b1b0c1e8fdfd900246946c7756cb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5a3929017c9192b581160f1391892cf0fae6dee5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/w404yJNpGXP3opAtTGMsHLbFueWdyqVBBhZiAatlaUo.jpg?blur=40&format=pjpg&auto=webp&s=f0339541b1a7da6e50aa7adead746f25d3e290ee', 'width': 1200}}}}]} |
GPU-Z reports 23GB memory size on used Nvidia RTX 3090 Ti FE | 1 | [removed] | 2024-12-01T13:25:23 | metacausal | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h437yw | false | null | t3_1h437yw | /r/LocalLLaMA/comments/1h437yw/gpuz_reports_23gb_memory_size_on_used_nvidia_rtx/ | false | false | 1 | {'enabled': True, 'images': [{'id': '_otPo5-tY6dWJi20rH9WX00SQ3sCvrn17-rXnL8CVc4', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/kelntgpqn84e1.jpeg?width=108&crop=smart&auto=webp&s=32988dfed17dbf32bd6d575713074457430f9b51', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/kelntgpqn84e1.jpeg?width=216&crop=smart&auto=webp&s=522735af392692b295b4c49c02e6415674d778a3', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/kelntgpqn84e1.jpeg?width=320&crop=smart&auto=webp&s=9933513a74e897674d46ce899b2fb4741f801247', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/kelntgpqn84e1.jpeg?width=640&crop=smart&auto=webp&s=8cdc9610d0bedcdb1df1e7009120bbc15f480279', 'width': 640}, {'height': 1279, 'url': 'https://preview.redd.it/kelntgpqn84e1.jpeg?width=960&crop=smart&auto=webp&s=36c27a3e1ab621ee0b65c2c529352752aa392666', 'width': 960}, {'height': 1439, 'url': 'https://preview.redd.it/kelntgpqn84e1.jpeg?width=1080&crop=smart&auto=webp&s=937bd8771e458ce777d009895acecf798ef7588a', 'width': 1080}], 'source': {'height': 3178, 'url': 'https://preview.redd.it/kelntgpqn84e1.jpeg?auto=webp&s=85c4d83b999d743fc9e2f97d9f2b196167bc75ac', 'width': 2384}, 'variants': {}}]} |
||
Second generation Nvlink bridge and rtx 3090 | 1 | [removed] | 2024-12-01T13:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h43b84/second_generation_nvlink_bridge_and_rtx_3090/ | porterneon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h43b84 | false | null | t3_1h43b84 | /r/LocalLLaMA/comments/1h43b84/second_generation_nvlink_bridge_and_rtx_3090/ | false | false | self | 1 | null |
What setup for voice interfaced agent that can web search? | 0 | Hi all, so I currently have ollama and webui which is great for a nice local general use chat experience.
But I also tested LiveKit’s demo this week and love having a voice interface.
Now, neither setup has web search capability but I would love to have it.
So now I’m wondering what software stack I should go for to make this happen. Using LiveKit as a layer on top of Ollama might work, I guess (if I understand it correctly), but I am not 100% sure.
I’m keen to hear if anyone has this and what their components or stack are. (Small note: I’m personally a fan of piecing things together rather than using a one-stop shop, which is less flexible, imo.)
Thanks a lot for taking the time to read and share! | 2024-12-01T13:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h43ert/what_setup_for_voice_interfaced_agent_that_can/ | NeverSkipSleepDay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h43ert | false | null | t3_1h43ert | /r/LocalLLaMA/comments/1h43ert/what_setup_for_voice_interfaced_agent_that_can/ | false | false | self | 0 | null |
NVIDIA GPU FP16 performance list for ExllamaV2/EXUI/TabbyAPI users | 16 | A comparison table of NVIDIA GPU FP16 performance, specifically compiled for ExllamaV2 users. This table helps users select suitable GPU models. Discussion and additional GPU information are welcome.
|**GPU Model**|**Memory**|**FP16 Performance**|
|:-|:-|:-|
|H100 PCIe|80 GB|204.9 TFLOPS|
|H800 PCIe|80 GB|204.9 TFLOPS|
|L40S|48 GB|91.61 TFLOPS|
|L40|48 GB|90.52 TFLOPS|
|GeForce RTX 4090|24 GB|82.58 TFLOPS|
|A100 SXM2 PG199|32 GB|77.97 TFLOPS|
|A100 PCIe|40 GB|77.97 TFLOPS|
|A800 PCIe|80 GB|77.97 TFLOPS|
|GeForce RTX 4090 D|24 GB|73.54 TFLOPS|
|Tesla T4|16 GB|65.13 TFLOPS|
|CMP 170HX|8 GB|50.53 TFLOPS|
|GeForce RTX 3090 Ti|24 GB|40.00 TFLOPS|
|RTX A6000|48 GB|38.71 TFLOPS|
|A40 PCIe|48 GB|37.42 TFLOPS|
|GeForce RTX 3090|24 GB|35.58 TFLOPS|
|Quadro RTX 8000|48 GB|32.62 TFLOPS|
|Quadro RTX 6000|24 GB|32.62 TFLOPS|
|Tesla V100 SXM2|32 GB|31.33 TFLOPS|
|Tesla V100 SXM2|16 GB|31.33 TFLOPS|
|A10 PCIe|24 GB|31.24 TFLOPS|
|Tesla V100 PCIe|32 GB|28.26 TFLOPS|
|GeForce RTX 2080 Ti|22 GB|26.90 TFLOPS|
|Quadro RTX 5000|16 GB|22.30 TFLOPS|
|CMP 50HX|10 GB|22.15 TFLOPS|
|CMP 90HX|10 GB|21.89 TFLOPS|
|Tesla P100 SXM2|16 GB|21.22 TFLOPS|
|GeForce RTX 3070 Ti|16 GB|21.75 TFLOPS|
|NVIDIA RTX A4000|16 GB|19.17 TFLOPS|
|Tesla P100 PCIe|16 GB|19.05 TFLOPS|
|Tesla PH402 SKU 200|32 GB|18.7 TFLOPS|
|Tesla T10|16 GB|17.14 TFLOPS|
|CMP 40HX|8 GB|15.21 TFLOPS|
|Quadro RTX 4000|8 GB|14.24 TFLOPS|
|A16 PCIe|64 GB|4.493 TFLOPS|
|Tesla P40|24GB|183.7 GFLOPS|
|Tesla P4|8 GB|89.12 GFLOPS| | 2024-12-01T13:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h43f2c/nvidia_gpu_fp16_performance_list_for/ | Status_Contest39 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h43f2c | false | null | t3_1h43f2c | /r/LocalLLaMA/comments/1h43f2c/nvidia_gpu_fp16_performance_list_for/ | false | false | self | 16 | null |
Model for studying | 1 | [removed] | 2024-12-01T13:47:05 | https://www.reddit.com/r/LocalLLaMA/comments/1h43me3/model_for_studying/ | SansEddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h43me3 | false | null | t3_1h43me3 | /r/LocalLLaMA/comments/1h43me3/model_for_studying/ | false | false | self | 1 | null |
Have to Share This: Gemini Flash is INCREDIBLE at Repository Analysis - For Free | 1 | [deleted] | 2024-12-01T13:52:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h43q16 | false | null | t3_1h43q16 | /r/LocalLLaMA/comments/1h43q16/have_to_share_this_gemini_flash_is_incredible_at/ | false | false | default | 1 | null |
||
Knowledge base Chatbot using RAG | 5 | Hi All,
I have a use case to build an AI assistant that uses an internal, domain-specific knowledge base and answers only from within it. The assistant would be deployed for internal use and should not make any API calls to external platforms, due to privacy concerns. The knowledge base input would be around 2k documents (PDF, Word, PPTX, Excel), and the user base is about 500. I have access to a single-node machine with one NVIDIA H100 card.
I have played around with Ollama and OSS models (text and embedding types) and built a RAG system in Langflow to ingest PDF docs, chunk them, and store them in a vector DB. Given a prompt, it retrieves matches from the DB and passes them to the text model for a chat output. The number of PDFs ingested was 3-5. This setup was done on a desktop with an RTX 4070 Super GPU, and it did work, although I have not done any formal testing.
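To sanity-check retrieval before scaling up to the full 2k documents, I'm planning a tiny evaluation harness along these lines (the `retrieve` function and the filenames are placeholders for whatever your stack exposes):

```python
# Tiny retrieval test harness: (question, expected_source) pairs scored as hit@k.
eval_set = [
    ("What is our travel reimbursement limit?", "hr_policy.pdf"),
    ("Which ports does the gateway expose?", "network_design.docx"),
]

def retrieve(question: str, k: int = 5) -> list[str]:
    # Placeholder: wire this to your retriever and return the top-k source filenames.
    return ["hr_policy.pdf", "misc_notes.txt"][:k]

def hit_rate_at_k(k: int = 5) -> float:
    hits = sum(expected in retrieve(q, k) for q, expected in eval_set)
    return hits / len(eval_set)

print(f"hit@5 = {hit_rate_at_k(5):.2f}")
```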
I am seeking help from experts here, in the following open-source areas:
1. What is the best setup using Open source AI tools?
2. Which text and embed models ?
3. RAG implementation and test harnesses
4. Operationalization best practices
Your help is appreciated, let me know if you have any questions. | 2024-12-01T13:56:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h43slu/knowledge_base_chatbot_using_rag/ | No-Leopard7644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h43slu | false | null | t3_1h43slu | /r/LocalLLaMA/comments/1h43slu/knowledge_base_chatbot_using_rag/ | false | false | self | 5 | null |
Model for studying | 1 | [removed] | 2024-12-01T14:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h442yw/model_for_studying/ | SansEddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h442yw | false | null | t3_1h442yw | /r/LocalLLaMA/comments/1h442yw/model_for_studying/ | false | false | self | 1 | null |
mill.py v1.3.0: Markdown + Vimwiki interface for llama.cpp | 1 | 2024-12-01T14:12:17 | https://tildegit.org/unworriedsafari/mill.py | unworriedsafari | tildegit.org | 1970-01-01T00:00:00 | 0 | {} | 1h443vt | false | null | t3_1h443vt | /r/LocalLLaMA/comments/1h443vt/millpy_v130_markdown_vimwiki_interface_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'jmajneuxRQoZoptTZI4Bu8YwsYL5BIRpXRVt2YqkmvI', 'resolutions': [], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/qCzCHeJj9Hrs7pzuHXzJIUR56LHkhnkcVyPuTcMALno.jpg?auto=webp&s=4931da55b2e9ce5453e84211cb9cd0b59a78eea2', 'width': 80}, 'variants': {}}]} |
||
Where are the actual needs of next generation OCRs using VLMs ? is it for RAG? | 13 | Hi, jumping right to the point: I believe the current state of OCR is accurate but not very generalizable. For example, we cannot scale / generalize it for:
- dynamic structured output (e.g., the user gives a document to OCR and specifies what to generate, or provides a schema that must be adhered to)
- handling mixed modalities (table + image + text), multilinguality, or mixed-language content
- capturing font styles, placements, etc.
So I wonder: what kind of document intelligence use cases are you looking for, and in what form? I am building such a doc intelligence system to handle the above use cases. However, there are lots of things that could be built, and I do not want to go down the feature rabbit hole. So I am asking here openly: what use cases would you like, and how would you use them in your daily lives? Here are some of the things:
- Upload invoices, tell me what to extract and how to extract it, and I extract that for you (if you do not add any schema, it just does normal parsing)
- Understanding mixed documents: upload a document (like a PDF containing images / tables / text) and get back parsed text / markdown (or whatever format you want) to use in your downstream task
- Just like the above, but also chunking the documents so you can use them for RAG-like use cases
- Or adding multimodal RAG capabilities on top of it
My goal is to make a cheap / robust universal solution for document parsing and document understanding. Give me your feedback: I want to know whether this problem really exists and where the pain points are. | 2024-12-01T14:21:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h44apv/where_are_the_actual_needs_of_next_generation/ | No-Street-3020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h44apv | false | null | t3_1h44apv | /r/LocalLLaMA/comments/1h44apv/where_are_the_actual_needs_of_next_generation/ | false | false | self | 13 | null |
Comparison of Ampere GPUs that have >24GB VRAM | 16 | Since the 50xx series is about to come out, it is likely that Ampere-architecture GPU prices are going to take a hit. So I made a quick comparison of the Ampere GPUs that have more than 24GB, for people who want to upgrade from their 3090/4090.
It seems like, on paper, the best deal for LLM use is the A100 40GB, followed by the A16 if you need 64GB. Does anyone who owns an A16 care to share their experience?
|**Model**|**VRAM**|**Bandwidth (GB/s)**|**FP16 TFLOPS**|**Power**|**eBay price (new)**|
|:-|:-|:-|:-|:-|:-|
|A16|64GB|800|147.456|250W|$3400|
|A100 40GB|40GB|1555|312|250W|$4600|
|A6000|48GB|768|154.84|300W|$4600|
|A40|48GB|695.8|149.68|300W|$6300|
|A100 80GB|80GB|1555|312|250W|$17000|
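For the "best deal" claim above, the arithmetic I'm doing is just price per GB of VRAM and per GB/s of bandwidth; a quick sketch using the eBay-new prices from the table (prices will drift, so treat the outputs as rough):

```python
# $/GB of VRAM, $ per GB/s of bandwidth, and $/TFLOP from the table above.
gpus = {
    # name:       (VRAM GB, GB/s, FP16 TFLOPS, eBay price USD)
    "A16":        (64, 800.0, 147.456, 3400),
    "A100 40GB":  (40, 1555.0, 312.0, 4600),
    "A6000":      (48, 768.0, 154.84, 4600),
    "A40":        (48, 695.8, 149.68, 6300),
    "A100 80GB":  (80, 1555.0, 312.0, 17000),
}
for name, (vram, bw, tflops, price) in gpus.items():
    print(f"{name:10s}  ${price / vram:6.0f}/GB   ${price / bw:5.2f} per GB/s   ${price / tflops:5.0f}/TFLOP")
```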
| 2024-12-01T14:43:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h44q2f/comparison_of_ampere_gpus_that_have_24gb_vram/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h44q2f | false | null | t3_1h44q2f | /r/LocalLLaMA/comments/1h44q2f/comparison_of_ampere_gpus_that_have_24gb_vram/ | false | false | self | 16 | null |
Which Budget Laptop Can Run AI? | 1 | [removed] | 2024-12-01T14:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h44wco/which_budget_laptop_can_run_ai/ | Brazzers1917 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h44wco | false | null | t3_1h44wco | /r/LocalLLaMA/comments/1h44wco/which_budget_laptop_can_run_ai/ | false | false | self | 1 | null |
Drummer's Behemoth 123B v1.2 - The Definitive Edition | 42 | 2024-12-01T15:12:21 | https://huggingface.co/TheDrummer/Behemoth-123B-v1.2 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h45c9e | false | null | t3_1h45c9e | /r/LocalLLaMA/comments/1h45c9e/drummers_behemoth_123b_v12_the_definitive_edition/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'uwUv4glnW8wmtiNeSqPvrbPZcZ_ZOhIk8x-OYuu7r-o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SWiT_F2MOnrz_M-I9tJxbYFuG9WVIXH-RqTk8m0Es1I.jpg?width=108&crop=smart&auto=webp&s=9455cf354d2c80c10f1274c6279f7134d6cf1198', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SWiT_F2MOnrz_M-I9tJxbYFuG9WVIXH-RqTk8m0Es1I.jpg?width=216&crop=smart&auto=webp&s=367e2dddf94a906041b6343246f05988a156acd0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SWiT_F2MOnrz_M-I9tJxbYFuG9WVIXH-RqTk8m0Es1I.jpg?width=320&crop=smart&auto=webp&s=cecb44612190a2050ef957e7b1efc5544ddff22b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SWiT_F2MOnrz_M-I9tJxbYFuG9WVIXH-RqTk8m0Es1I.jpg?width=640&crop=smart&auto=webp&s=f4e9455c07128a7f13edc64996ba4f71e9dcbaaf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SWiT_F2MOnrz_M-I9tJxbYFuG9WVIXH-RqTk8m0Es1I.jpg?width=960&crop=smart&auto=webp&s=fa36fc60a9682daee1672697cfb952b4e9977527', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SWiT_F2MOnrz_M-I9tJxbYFuG9WVIXH-RqTk8m0Es1I.jpg?width=1080&crop=smart&auto=webp&s=d5191d244bfde11edff7a0228aa6aa2eff8941a5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SWiT_F2MOnrz_M-I9tJxbYFuG9WVIXH-RqTk8m0Es1I.jpg?auto=webp&s=62a95efa2c8066e0208a1cb314f1334ac7c412ba', 'width': 1200}, 'variants': {}}]} |
||
QwQ vs o1, etc - illustration | 114 | This is a followup on [Qwen 2.5 vs Llama 3.1 illustration](https://www.reddit.com/r/LocalLLaMA/comments/1fp8v9h/qwen_25_vs_llama_31_illustration/) for those who have a hard time understanding pure numbers in benchmark scores
https://preview.redd.it/tc8li5rva94e1.png?width=2074&format=png&auto=webp&s=77820e9c03fc5a1559be411ebd92564711ce3354
# Benchmark Explanations:
**GPQA (Graduate-Level Google-Proof Q&A)**
A benchmark of graduate-level multiple-choice science questions (biology, physics, chemistry) written to be hard to answer even with web search. Higher scores indicate stronger expert-level scientific reasoning.
**AIME (American Invitational Mathematics Examination)**
A challenging mathematics competition benchmark based on problems from the AIME contest. Tests advanced mathematical problem-solving abilities at the high school level. Problems require sophisticated mathematical thinking and precise calculation.
**MATH-500**
A comprehensive mathematics benchmark containing 500 problems across various mathematics topics including algebra, calculus, probability, and more. Tests both computational ability and mathematical reasoning. Higher scores indicate stronger mathematical problem-solving capabilities.
**LiveCodeBench**
A real-time coding benchmark that evaluates models' ability to generate functional code solutions to programming problems. Tests practical coding skills, debugging abilities, and code optimization. The benchmark measures both code correctness and efficiency. | 2024-12-01T15:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h45upu/qwq_vs_o1_etc_illustration/ | dmatora | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h45upu | false | null | t3_1h45upu | /r/LocalLLaMA/comments/1h45upu/qwq_vs_o1_etc_illustration/ | false | false | 114 | null |
|
Looking for a Simple Program to Train a Language Model | 0 | Hi everyone,
I’ve recently started learning programming and have been working with various AI tools like ChatGPT. Now, I want to take it a step further and train my own language model. I’m looking for a program—preferably something on GitHub or similar—that can help me fine-tune a model on specific datasets like websites, books, or forums. Ideally, the program should have a user interface (UI) for ease of use, but if not, that’s fine as long as it’s straightforward.
The Problem I Want to Solve
While tools like ChatGPT are amazing, they often give general answers that don’t always align with my needs. For instance, I’d like an AI that only searches or provides answers based on a specific dataset that I provide. This would make the information much more relevant and tailored to my goals.
My plan is to fine-tune an existing model using data I choose, such as a specific website, book, or any other source. The ultimate goal is to chat with an AI that has been customized to meet my requirements.
My PC Specifications
Here’s what I’m working with:
• Motherboard: Gigabyte B760 GAMING X AX DDR4
• Graphics Card: 12GB Sapphire Radeon RX 6700 XT PULSE
• Storage: 1TB Kingston NV2 M.2 PCIe 4.0 NVMe and 1TB Corsair Force MP600GS M.2 PCIe NVME
• Processor: Intel Core i5 12400F (6 cores, 2.50GHz)
• RAM: 32GB G.Skill RipJaws V DDR4-3600
• Cooling: be quiet! Pure Rock 2 Black Tower Cooler
• Power Supply: 750 Watt be quiet! Pure Power 12 M Modular (80+ Gold certified)
Is this setup sufficient for training or fine-tuning a model locally? If not, what would you recommend?
Looking for Recommendations
Can anyone suggest a program or tool (preferably on GitHub) that simplifies the process of training or fine-tuning a language model? A UI would be great, but if it’s a command-line tool, that works too—as long as there’s good documentation for beginners. Any resources or tips for getting started with fine-tuning and deploying a customized model would also be highly appreciated.
Thanks in advance for your suggestions!
| 2024-12-01T15:39:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h45xj6/looking_for_a_simple_program_to_train_a_language/ | yeah280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h45xj6 | false | null | t3_1h45xj6 | /r/LocalLLaMA/comments/1h45xj6/looking_for_a_simple_program_to_train_a_language/ | false | false | self | 0 | null |
Browser automation with vision instead of selectors | 1 | [removed] | 2024-12-01T15:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/1h46272/browser_automation_with_vision_instead_of/ | RelationshipLanky125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h46272 | false | null | t3_1h46272 | /r/LocalLLaMA/comments/1h46272/browser_automation_with_vision_instead_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'c9X64krB5DCcewsmj4Pqz7VEMAobo7yx1AcTxHF0FrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DKB3ckvnscm1nYbn_pC8oxEX44p301K76BVWJu0E3FU.jpg?width=108&crop=smart&auto=webp&s=b6534e5e7a07916fb387a50026d5f82a43401328', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DKB3ckvnscm1nYbn_pC8oxEX44p301K76BVWJu0E3FU.jpg?width=216&crop=smart&auto=webp&s=ed215ed0fee9133bd1e12a9f21e9eca6551ea186', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DKB3ckvnscm1nYbn_pC8oxEX44p301K76BVWJu0E3FU.jpg?width=320&crop=smart&auto=webp&s=2d49d1ef3b9ed9d7cf85d1a363b2206dc7161bff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DKB3ckvnscm1nYbn_pC8oxEX44p301K76BVWJu0E3FU.jpg?width=640&crop=smart&auto=webp&s=7732822ec811126133c88def2217d26e749e0c7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DKB3ckvnscm1nYbn_pC8oxEX44p301K76BVWJu0E3FU.jpg?width=960&crop=smart&auto=webp&s=8c8c1977343c08ce495b8e4fe84bca7be6095680', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DKB3ckvnscm1nYbn_pC8oxEX44p301K76BVWJu0E3FU.jpg?width=1080&crop=smart&auto=webp&s=270c5c13c0306025c0c77e74bfc9211d1925e574', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DKB3ckvnscm1nYbn_pC8oxEX44p301K76BVWJu0E3FU.jpg?auto=webp&s=ab54379a25a08244dfefc9739a39d5f29c097478', 'width': 1200}, 'variants': {}}]} |
Wilmer update after 5 months: the workflow based prompt router that supports rolling 'memories' | 48 | It's been 5 months since my last update, so I figured I'd share where the project is at once more! Been having a hard time getting this to post, so trying one more time...
Github link: [https://github.com/SomeOddCodeGuy/WilmerAI](https://github.com/SomeOddCodeGuy/WilmerAI)
**What is Wilmer?**
Wilmer is a "middleware". It sits between your front end application (sillytavern, openwebui, some agentic program, etc) and your LLM api(s).
The front end sends the prompt to Wilmer, Wilmer does work on the prompt, and then sends a customized prompt to your LLM api. That response is returned to the front end.
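To make the "middleware" idea concrete, here's a rough sketch of the pattern (not Wilmer's actual code; the backend URLs and the toy routing rule are made up): expose an OpenAI-compatible endpoint, categorize the incoming prompt, and forward it to whichever backend owns that domain.

```python
# Rough sketch of an OpenAI-compatible routing proxy (illustrative, not Wilmer itself).
# Run with: uvicorn this_file:app --port 5000
import httpx
from fastapi import FastAPI, Request

app = FastAPI()
BACKENDS = {
    "coding":  "http://gpu-box-1:8080/v1/chat/completions",
    "general": "http://gpu-box-2:11434/v1/chat/completions",
}

def pick_domain(messages: list[dict]) -> str:
    # Placeholder router: Wilmer actually asks an LLM to categorize the prompt.
    text = " ".join(str(m.get("content", "")) for m in messages).lower()
    return "coding" if "code" in text or "def " in text else "general"

@app.post("/v1/chat/completions")
async def chat(request: Request):
    body = await request.json()
    target = BACKENDS[pick_domain(body.get("messages", []))]
    async with httpx.AsyncClient(timeout=600) as client:
        resp = await client.post(target, json=body)
    return resp.json()
```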
# What exactly does Wilmer do?
At a high level:
* **Prompt routing** (send a prompt, it gets categorized into domains that you choose like coding or factual or math, and the request goes to whatever LLM you specified for that domain)
* **Workflows:** When you send a prompt, it doesn't just go to the LLM you choose, but allows you to send it into a workflow where you can force the model to "think" step by step in a way that you want it to in order to achieve its goal. I've become very passionate about the power of workflows... as I'm sure quite a few of you have noticed by now =D
* **Memories:** Wilmer has a "memory" system that generates memories and chat summaries automatically, and allows you to inject them into the conversation through a workflow.
* **Multi-LLM responses:** Because Wilmer is based on Workflows, you can have each node in the flow hit a different API. So 1 response from a persona or front end application could be the result of 2-5+ LLMs all working together to produce the best response.
* **Semi-Universal Adapter:** Right now, Wilmer exposes OAI compatible api endpoints and Ollama compatible API endpoints on the front end, while being able to connect to KoboldCpp generate endpoints, Ollama endpoints and OAI endpoints. This means that if an application only works with Ollama, you could connect it to Wilmer, have Wilmer connect to KoboldCpp, and use Wilmer as an adapter to run that program with KoboldCpp.
* **Random other things:** You can use it to do neat things, like groupchats in SillyTavern where every persona is a different LLM
# So what's new?
Since the last update, I've been working on a few things.
* I've updated the Readme documentation and added Quick Guides, to try to help make Wilmer more accessible until I can get a UI in place or make some videos.
* Wilmer now is visible as an Ollama API to most applications, meaning it should work with anything that supports Ollama.
* Wilmer also can now hit an Ollama API, so it works for Ollama users now (neat trick with this below)\*
* Thanks to the work of JeffQG on Github (see contributor list), Wilmer now supports early responses in workflows. The really short version is that you could have 2 LLMs on 2 different computers- one responding, and one writing memories. You could use this to have the memories writing quietly in the background while talking to the responder uninterrupted. This means you'd never have to wait for memories to generate while talking (I use this a lot with my assistant; it works great. Check out the Memories Quick Guide for a bit more info)
* Added support for the Offline Wikipedia Article Api, which you can call in a workflow. I use it in "FACTUAL" workflows, to pull the appropriate wikipedia article to RAG into the model when it answers my question.
\* Neat Ollama Trick: if you only have a low amount of VRAM but want to do multi-model routing, I'm fairly certain that having different endpoint files all pointing to the same Ollama instance, but specifying different models, will cause it to load different models.
I had more to say and may add more to a comment, but going to see if this works for now! | 2024-12-01T16:30:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h4724y/wilmer_update_after_5_months_the_workflow_based/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4724y | false | null | t3_1h4724y | /r/LocalLLaMA/comments/1h4724y/wilmer_update_after_5_months_the_workflow_based/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'X6wNJ_QunscjxVw4NwT6VfyE4DDAhhZzCGECNIUoSBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vTCgUsMcs4gMTq4XWM3Cx_RuGm95zSx_J9A57bn85Ec.jpg?width=108&crop=smart&auto=webp&s=8c331e0271ebb50963d3774b18f5bafd59092390', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vTCgUsMcs4gMTq4XWM3Cx_RuGm95zSx_J9A57bn85Ec.jpg?width=216&crop=smart&auto=webp&s=a527a3e7e92d45c94f234a970f31012e65518ce0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vTCgUsMcs4gMTq4XWM3Cx_RuGm95zSx_J9A57bn85Ec.jpg?width=320&crop=smart&auto=webp&s=06e130e115f356dfcf65c51625b9e5606a84acf6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vTCgUsMcs4gMTq4XWM3Cx_RuGm95zSx_J9A57bn85Ec.jpg?width=640&crop=smart&auto=webp&s=a1e9d1114c87bf03e2a8064137806199fd049b18', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vTCgUsMcs4gMTq4XWM3Cx_RuGm95zSx_J9A57bn85Ec.jpg?width=960&crop=smart&auto=webp&s=d71412e8651dcca07ad3aa53e1074fb2bdffaeab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vTCgUsMcs4gMTq4XWM3Cx_RuGm95zSx_J9A57bn85Ec.jpg?width=1080&crop=smart&auto=webp&s=fff6554e90ac9eeaa486448daa45550f2a4cd247', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vTCgUsMcs4gMTq4XWM3Cx_RuGm95zSx_J9A57bn85Ec.jpg?auto=webp&s=f9aa0d92ad5561e4f39715a592615b385a60db24', 'width': 1200}, 'variants': {}}]} |
Llm rig suggestions | 2 | https://pcpartpicker.com/list/LDFGcx
I need a rig that supports two 3090 GPUs in the long run, with 64GB of RAM, running Linux.
I've added a PCPartPicker setup and want to buy the GPUs on eBay. Any suggestions on pitfalls for the above setup? | 2024-12-01T16:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h47c7z/llm_rig_suggestions/ | umanaga9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h47c7z | false | null | t3_1h47c7z | /r/LocalLLaMA/comments/1h47c7z/llm_rig_suggestions/ | false | false | self | 2 | null |
I'm looking for a model that fixes English grammar (aka Grammarly alternative in terms of just fixing grammar) | 20 | I'm currently using the Ollama command-line tool with the `llama3` model, but I'm open to other apps/tools/models aimed at fixing English grammar, not necessarily improving it, just fixing it.
You could call it a Grammarly alternative, but only in terms of fixing the grammar, not improving the sentence. That said, I would appreciate a model that does both, ofc.
Preferably something that runs on the web, Windows, or in Docker (command line, app, web, whatever). No resource-usage requirements for now; I just want to see the options.
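What I have in mind is basically a strict system prompt against Ollama's local chat API; a rough sketch (model name and prompt wording are just placeholders for whatever works best):

```python
# Grammar-only fixer against a local Ollama server (default port 11434).
import requests

def fix_grammar(text: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "stream": False,
            "messages": [
                {"role": "system", "content": "Fix grammar, spelling and punctuation only. "
                                              "Do not rephrase, improve style, or add anything. "
                                              "Return only the corrected text."},
                {"role": "user", "content": text},
            ],
        },
        timeout=120,
    )
    return resp.json()["message"]["content"]

print(fix_grammar("She dont likes going to the libary on sundays."))
```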
Thanks in advance. | 2024-12-01T17:12:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h480w5/im_looking_for_a_model_that_fixes_english_grammar/ | Soft_ACK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h480w5 | false | null | t3_1h480w5 | /r/LocalLLaMA/comments/1h480w5/im_looking_for_a_model_that_fixes_english_grammar/ | false | false | self | 20 | null |
Anything close to Cursor's tab autocomplete? | 1 | [removed] | 2024-12-01T17:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h48dre/anything_close_to_cursors_tab_autocomplete/ | lounathanson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h48dre | false | null | t3_1h48dre | /r/LocalLLaMA/comments/1h48dre/anything_close_to_cursors_tab_autocomplete/ | false | false | self | 1 | null |
Looking for Cloud-Hosted Front-End for LLMs (Alternatives to TypingMind) | 1 | [removed] | 2024-12-01T17:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h48xix/looking_for_cloudhosted_frontend_for_llms/ | Last_Pootis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h48xix | false | null | t3_1h48xix | /r/LocalLLaMA/comments/1h48xix/looking_for_cloudhosted_frontend_for_llms/ | false | false | self | 1 | null |
Help Needed Assembling Build for 2x or Single non-Ada A6000 | 1 | [removed] | 2024-12-01T18:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/1h49r5v/help_needed_assembling_build_for_2x_or_single/ | Su1tz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h49r5v | false | null | t3_1h49r5v | /r/LocalLLaMA/comments/1h49r5v/help_needed_assembling_build_for_2x_or_single/ | false | false | self | 1 | null |
Any Uncensored Foundation Model? | 0 | Do any good uncensored foundation models exist, either on the clearnet or the dark web? I don't mean fine-tuned ones...
My impression is that a fine-tuned one will allow uncensored content, but it doesn't seem able to create or propose it organically.
So I'd like to experiment with something totally raw. Just curious if it even exists or if it's locked in some encrypted store beyond mere mortals? | 2024-12-01T18:31:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h49vwb/any_uncensored_foundation_model/ | anixon604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h49vwb | false | null | t3_1h49vwb | /r/LocalLLaMA/comments/1h49vwb/any_uncensored_foundation_model/ | false | false | self | 0 | null |
Well, this aged like wine. Another W for Karpathy. | 594 | https://x.com/karpathy/status/1835561952258723930?s=19 | 2024-12-01T18:45:39 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h4a803 | false | null | t3_1h4a803 | /r/LocalLLaMA/comments/1h4a803/well_this_aged_like_wine_another_w_for_karpathy/ | false | false | 594 | {'enabled': True, 'images': [{'id': 'oPlJlX94nYq4HUFgV_cnKH_dZnHwBQ0NCjZsckiyDLU', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/7rsni9uv8a4e1.jpeg?width=108&crop=smart&auto=webp&s=040db3493951ecb219d3986d4ed231a32454870e', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/7rsni9uv8a4e1.jpeg?width=216&crop=smart&auto=webp&s=27ec18b2cb4f8bfc2f1bb3f5d6eef7fa8b400b15', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/7rsni9uv8a4e1.jpeg?width=320&crop=smart&auto=webp&s=e499c51c6980f1a1a32973e10309602f31fe3a58', 'width': 320}, {'height': 283, 'url': 'https://preview.redd.it/7rsni9uv8a4e1.jpeg?width=640&crop=smart&auto=webp&s=b352d45a643654839006720ab6f084184de9c691', 'width': 640}, {'height': 424, 'url': 'https://preview.redd.it/7rsni9uv8a4e1.jpeg?width=960&crop=smart&auto=webp&s=732e3e2fb062c0ae63e27a907074a854f91f6cf1', 'width': 960}, {'height': 478, 'url': 'https://preview.redd.it/7rsni9uv8a4e1.jpeg?width=1080&crop=smart&auto=webp&s=eb2c4887715294b791c18339ce71f747b2909624', 'width': 1080}], 'source': {'height': 707, 'url': 'https://preview.redd.it/7rsni9uv8a4e1.jpeg?auto=webp&s=1ed0c70e65cb408d1d99e4204ace1a3bcf3b6bd3', 'width': 1597}, 'variants': {}}]} |
||
Finetuning Qwen VL 7B from Unsloth - Poor Results Before and After Fine-Tuning | 1 | [removed] | 2024-12-01T18:47:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h4a96c/finetuning_qwen_vl_7b_from_unsloth_poor_results/ | dragonwarrior_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4a96c | false | null | t3_1h4a96c | /r/LocalLLaMA/comments/1h4a96c/finetuning_qwen_vl_7b_from_unsloth_poor_results/ | false | false | self | 1 | null |
Comparing Nemotron 70b with Qwen2.5 72b | 30 | I've been testing two AI models, Qwen2.5 (in both 32b and 72b versions) and Nemotron 70b.
My initial tests show Nemotron 70b is performing better than Qwen 72b across a wide range of general knowledge questions. It's been interesting to see the differences in their capabilities.
There's one thing I haven't tested yet, though: how Nemotron 70b stacks up against Qwen2.5 Coder (32b) when it comes to coding-related tasks. That's next on my list.
Since I couldn't find many direct comparisons between these models, I'd love to hear from others: are your experiences similar, with Nemotron 70b outdoing Qwen 72b? Or have you seen the opposite? | 2024-12-01T19:21:20 | https://www.reddit.com/r/LocalLLaMA/comments/1h4b2hz/comparing_nemotron_70b_with_qwen25_72b/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4b2hz | false | null | t3_1h4b2hz | /r/LocalLLaMA/comments/1h4b2hz/comparing_nemotron_70b_with_qwen25_72b/ | false | false | self | 30 | null |
Improving a RAG assistant | 1 | [removed] | 2024-12-01T19:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h4bnsf/improving_a_rag_assistant/ | Mindless-Ruin5103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4bnsf | false | null | t3_1h4bnsf | /r/LocalLLaMA/comments/1h4bnsf/improving_a_rag_assistant/ | false | false | self | 1 | null |
Consistent character image generation model | 4 | Is there any model or method where we can generate image but the characters looks same across all the images generated. Not looking for LoRA method. | 2024-12-01T19:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/1h4bux4/consistent_character_image_generation_model/ | DataScientia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4bux4 | false | null | t3_1h4bux4 | /r/LocalLLaMA/comments/1h4bux4/consistent_character_image_generation_model/ | false | false | self | 4 | null |
QwQ - tricky question | 8 | GPT-4o can't answer this properly.
llama-cli.exe --model QwQ-32B-Preview-Q4_K_M.gguf --color --threads 30 --keep -1 --n-predict -1 --ctx-size 16384 -ngl 99 --simple-io -e --multiline-input --no-display-prompt --conversation --no-mmap --in-prefix "<|im_end|>\n<|im_start|>user\n" --in-suffix "<|im_end|>\n<|im_start|>assistant\n" -p "<|im_start|>system\nYou are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."
`"You start at point A. You walk 500 meters in any direction. Then you turn left 90 degrees and walk 10 meters. Then you turn left 90 degrees again and walk indefinitely. Would you ever cross your first point again?"`
Let's see. I'm starting at point A, walking 500 meters in any direction. Then I turn left 90 degrees and walk 10 meters, turn left again 90 degrees and walk indefinitely. The question is, would I ever cross my starting point again?
First, I need to visualize this path. Let's assume I start at point A and walk north for 500 meters. So, I'm at point B, which is directly north of A, 500 meters away.
Then, I turn left 90 degrees. Since I was facing north, turning left would mean I'm now facing west. I walk 10 meters in this direction to point C.
Next, I turn left again 90 degrees. Now, I'm facing south. And I'm supposed to walk indefinitely in this direction.
So, my path is: A to B (north 500m), B to C (west 10m), and then C south indefinitely.
Now, I need to see if this path will ever cross back to point A.
Let's think about the positions:
- A is the starting point.
- B is 500 meters north of A.
- C is 10 meters west of B.
- From C, I'm walking south indefinitely.
So, from C, walking south, I'll be moving southward away from B and A. Since C is west of B, and I'm walking south from C, my path is a straight line heading south.
Now, to see if this path crosses A, I need to see if this southward path from C passes through A.
Let's consider the coordinates:
Assume A is at (0,0).
- Walking north 500 meters takes me to B (0,500).
- Turning left to west and walking 10 meters takes me to C (-10,500).
- Turning left again to south and walking indefinitely from C (-10,500) along the line x = -10.
Now, A is at (0,0), and my path is along x = -10, starting from y = 500 and going south (decreasing y).
Will this path ever reach (0,0)?
Looking at the coordinates, the path is x = -10, y decreasing from 500 to negative infinity.
Point A is at (0,0). The path is a vertical line at x = -10, moving southward.
Since x is fixed at -10 on this path, and A is at x=0, y=0, the path will never cross A because x=-10 does not equal x=0.
Therefore, I conclude that I will never cross my starting point again.
Wait, but the initial direction was "any direction," not necessarily north. Does the direction matter?
Let's generalize it. Starting at A, walking d1 distance in any direction, then turning left 90 degrees and walking d2 distance, then turning left again and walking indefinitely in that direction.
In this case, d1=500m, d2=10m.
Regardless of the initial direction, turning left twice would mean I'm walking perpendicular to the second leg.
Wait, in the previous assumption, turning left twice would mean I'm walking in the opposite direction of the initial direction.
Wait, no. Let's think about the turns.
Start at A, walk in direction θ for 500m to B.
At B, turn left 90 degrees, so now direction is θ + 90 degrees, walk 10m to C.
At C, turn left 90 degrees again, so direction is θ + 180 degrees, walk indefinitely to D.
So, I'm walking in the direction θ + 180 degrees from C.
Now, to see if this path crosses A.
Let's use vectors.
Let’s denote:
- A as the origin (0,0).
- Let’s assume the initial direction θ is along the positive x-axis for simplicity, so θ=0.
- So, B is at (500,0).
- Turning left 90 degrees puts me facing upwards (positive y-direction), walk 10m to C (500,10).
- Turning left again 90 degrees puts me facing left (negative x-direction), walk indefinitely from C westward along x=500, y decreasing.
Wait, actually, if I start at B (500,0), turn left to face positive y, walk 10m to C (500,10), then turn left again to face negative x, and walk south (negative y).
Wait, no, turning left from facing positive y would be to face negative x.
Wait, standard coordinate system:
- Right: positive x
- Up: positive y
- So, starting at A (0,0), walk north to B (0,500).
- Turn left 90 degrees: face west, walk 10m to C (-10,500).
- Turn left again 90 degrees: face south, walk indefinitely from C southward along x=-10, y decreasing from 500.
So, path is x=-10, y decreasing from 500 to -infinity.
This line will never pass through (0,0), as x=-10 != x=0.
Therefore, the path does not cross A.
But wait, maybe if I choose a different initial direction, it could be different.
Let’s consider a general initial direction.
Let’s denote:
- A at (0,0).
- Walk d1 in direction θ to B: coordinates (d1*cosθ, d1*sinθ).
- Turn left 90 degrees: new direction is θ + 90 degrees.
- Walk d2 =10m to C: from B, add d2*(cos(θ+90), sin(θ+90)) = d2*(-sinθ, cosθ).
- So, C coordinates: (d1*cosθ - 10*sinθ, d1*sinθ + 10*cosθ).
- Turn left again 90 degrees: new direction is θ + 180 degrees.
- Walk indefinitely in this direction from C.
So, the path from C is along the direction θ + 180 degrees.
I need to find if this path crosses A (0,0).
The parametric equation of the path from C is:
x = (d1*cosθ - 10*sinθ) + t*cos(θ + 180) = d1*cosθ - 10*sinθ - t*cosθ
y = (d1*sinθ + 10*cosθ) + t*sin(θ + 180) = d1*sinθ + 10*cosθ - t*sinθ
Set x=0 and y=0 to find if there exists a t>0 such that both equations are satisfied.
So:
0 = d1*cosθ - 10*sinθ - t*cosθ
0 = d1*sinθ + 10*cosθ - t*sinθ
Solving for t:
From the first equation:
t*cosθ = d1*cosθ - 10*sinθ ⇒ t = d1 - 10*(sinθ)/cosθ) = d1 - 10*tanθ
From the second equation:
t*sinθ = d1*sinθ + 10*cosθ ⇒ t = d1 + 10*(cosθ)/sinθ) = d1 + 10*cotθ
So, set both expressions for t equal:
d1 - 10*tanθ = d1 + 10*cotθ
Simplify:
-10*tanθ = 10*cotθ ⇒ -tanθ = cotθ ⇒ tanθ + cotθ = 0
But cotθ = 1/tanθ, so:
tanθ + 1/tanθ = 0 ⇒ tanθ + cotθ = 0
Multiply both sides by tanθ:
tan²θ + 1 = 0 ⇒ tan²θ = -1
This has no real solution since tan²θ is always non-negative.
Therefore, there is no real θ that satisfies this equation.
Hence, there is no value of θ for which the path crosses A.
Therefore, regardless of the initial direction, you will never cross your starting point again.
**Final Answer**
\[ \boxed{\text{No, you will never cross your starting point again.}} \] | 2024-12-01T20:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1h4cmck/qwq_tricky_question/ | Healthy-Nebula-3603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4cmck | false | null | t3_1h4cmck | /r/LocalLLaMA/comments/1h4cmck/qwq_tricky_question/ | false | false | self | 8 | null |
When using multiple GPUs in Llama CPP, if one has significantly faster VRAM than the other, how does it work? | 5 | Scenario:
- rx 6800 (500gb/s) 16gb
- rx 6700xt (300gb/s) 12gb
If I load a model that only needs 16gb of memory I'm assuming the whole thing will work at full speed on the 6800.
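(For reference, this is how I force the split when testing; a minimal llama-cpp-python sketch, where the model path and the 0.7/0.3 weights are just placeholders, not a recommendation:)

```python
# Pin more of the layers onto the faster RX 6800 via tensor_split.
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",     # placeholder path
    n_gpu_layers=-1,             # offload every layer
    tensor_split=[0.7, 0.3],     # proportion of the model per device: [6800, 6700xt]
)
print(llm("Write a haiku about VRAM.", max_tokens=32)["choices"][0]["text"])
```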
But suppose I use a model that needs both GPUs' VRAM to fit. What happens then? Does 16gb worth of model go at full speed and the remaining portion go at the slower 300gb/s? Does the whole thing throttle down to 300gb/s ? | 2024-12-01T21:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1h4djn7/when_using_multiple_gpus_in_llama_cpp_if_one_has/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4djn7 | false | null | t3_1h4djn7 | /r/LocalLLaMA/comments/1h4djn7/when_using_multiple_gpus_in_llama_cpp_if_one_has/ | false | false | self | 5 | null |
Thoughts? JPEG compress your LLM weights | 146 | 2024-12-01T21:08:09 | https://pepijndevos.nl/2024/12/01/jpeg-compress-your-llm-weights.html | pepijndevos | pepijndevos.nl | 1970-01-01T00:00:00 | 0 | {} | 1h4dl6c | false | null | t3_1h4dl6c | /r/LocalLLaMA/comments/1h4dl6c/thoughts_jpeg_compress_your_llm_weights/ | false | false | 146 | {'enabled': False, 'images': [{'id': 'vTpsvnKhjKACtPLsNnNRbIjalVAo9Va2NOhHI05AGrY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EYXVUcpFIswWzk7ncXMa0CeHpR0tiUotBsLyQ_r52M4.jpg?width=108&crop=smart&auto=webp&s=3d6620b6451e672f839d51050f955dea46b46f80', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/EYXVUcpFIswWzk7ncXMa0CeHpR0tiUotBsLyQ_r52M4.jpg?width=216&crop=smart&auto=webp&s=41bad0d69037d7e64fe843d57288cc1cd820708f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/EYXVUcpFIswWzk7ncXMa0CeHpR0tiUotBsLyQ_r52M4.jpg?width=320&crop=smart&auto=webp&s=7e028e58a3e2b76a2da0f715c7700cd255f60ded', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/EYXVUcpFIswWzk7ncXMa0CeHpR0tiUotBsLyQ_r52M4.jpg?auto=webp&s=1198e0c0a02ea8c37488f85e7ff481f11f640dc5', 'width': 400}, 'variants': {}}]} |
Qwq yapping session trying to become an expert on making a secret message in a poem that is about fruit | 1 | https://reddit.com/link/1h4e4vg/video/5y8xlx8d2b4e1/player
| 2024-12-01T21:31:10 | https://www.reddit.com/r/LocalLLaMA/comments/1h4e4vg/qwq_yapping_session_trying_to_become_an_expert_on/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4e4vg | false | null | t3_1h4e4vg | /r/LocalLLaMA/comments/1h4e4vg/qwq_yapping_session_trying_to_become_an_expert_on/ | false | false | self | 1 | null |
What is the best local Llama I can run? | 0 | CPU: R5 5600X
GPU: RX 6900XT
RAM: 16gb 3600mhz | 2024-12-01T22:19:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h4f9rs/what_is_best_local_llama_i_can_run/ | RedditAfterSearchGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4f9rs | false | null | t3_1h4f9rs | /r/LocalLLaMA/comments/1h4f9rs/what_is_best_local_llama_i_can_run/ | false | false | self | 0 | null |
icefog72/IceDrunkenCherryRP-7b · Hugging Face | 4 | 2024-12-01T22:36:10 | https://huggingface.co/icefog72/IceDrunkenCherryRP-7b | Pristine_Income9554 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h4fmz4 | false | null | t3_1h4fmz4 | /r/LocalLLaMA/comments/1h4fmz4/icefog72icedrunkencherryrp7b_hugging_face/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'XnAmIQAWZtZHEs2J2vMa8JiP5wGyJrOkJ-gWf-R3a1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=108&crop=smart&auto=webp&s=a52e2eb0ad4ee484911c46f2ec48b87c976fcdc3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=216&crop=smart&auto=webp&s=45b4acd41346d1cbd11870ef7d1a9dbb3b9bd977', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=320&crop=smart&auto=webp&s=fd7a18d055c6faab48bf7a9c0628b71777711d9c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=640&crop=smart&auto=webp&s=9cbd6eaa33fcce3e79cb62f92def7db5d0030801', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=960&crop=smart&auto=webp&s=6f7ba3ffa75289e72c8f612ec135380c58020035', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=1080&crop=smart&auto=webp&s=67c2edf34e3254ce754a98987b968caf6241bfcc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?auto=webp&s=7a78a7debeefb18c1192798e12a7743676244e0b', 'width': 1200}, 'variants': {}}]} |
Large language models are still pretty bad at math | 0 | I asked the following to qwq, chatgpt and claude.
If I toss a fair coin repeatedly, what is the expected time to get HTH?
qwq gave 10, chatgpt 8 and claude 14.
I wonder how long until the big western players catch up.
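For what it's worth, the standard answer here is 10 tosses: by the usual pattern-overlap argument, E[HTH] = 2^3 + 2^1 = 10, since HTH overlaps itself on the full word and on the single "H". So QwQ got it right, and 8 is actually the answer for the non-overlapping pattern HTT. A quick Monte Carlo check in Python:

```python
# Estimate the expected number of fair-coin tosses until a pattern first appears.
import random

def avg_tosses(pattern="HTH", trials=200_000):
    total = 0
    for _ in range(trials):
        last, n = "", 0
        while last != pattern:
            last = (last + random.choice("HT"))[-len(pattern):]  # keep only the tail
            n += 1
        total += n
    return total / trials

print(avg_tosses("HTH"))  # ~10.0
print(avg_tosses("HTT"))  # ~8.0, which is likely where the "8" answer came from
```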
| 2024-12-01T23:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h4g8bl/large_language_models_are_still_pretty_bad_at_math/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4g8bl | false | null | t3_1h4g8bl | /r/LocalLLaMA/comments/1h4g8bl/large_language_models_are_still_pretty_bad_at_math/ | false | false | self | 0 | null |
SmolTulu 1.7b - I used Tulu 3 and SmolLM 2 to create a tiny beast :) | 1 | [removed] | 2024-12-01T23:16:28 | https://huggingface.co/SultanR/SmolTulu-1.7b-Instruct | SulRash | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h4gk18 | false | null | t3_1h4gk18 | /r/LocalLLaMA/comments/1h4gk18/smoltulu_17b_i_used_tulu_3_and_smollm_2_to_create/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'GDX8ULtA7vVPSsdDK0iTzb4GthqQONyDlYxob7VoLj8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=108&crop=smart&auto=webp&s=ac979d327f3058d12a097660a20f0fb44d3204af', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=216&crop=smart&auto=webp&s=f784de779f8c89395baa6f0a68079793aa2c91f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=320&crop=smart&auto=webp&s=2085811d7faf3a8c337cee13284d374a897c1e80', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=640&crop=smart&auto=webp&s=dfbd753fcaaaf6cdefd89706b018b8b275876032', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=960&crop=smart&auto=webp&s=393c0033e17a645a17c3f23e23488be928117128', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=1080&crop=smart&auto=webp&s=041cebfbd54ef48cd60e416be1e53837f494be2d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?auto=webp&s=40048700d631830a9b2239fcce393e01253cd7e9', 'width': 1200}, 'variants': {}}]} |
|
If Qwen-32B was scaled up to the size of let's say GPT4o assuming basically linear scaling laws, would it be AGI? | 0 | This model is crazily powerful at just about 3.2% of the size (if GPT-4o is about 1 trillion params), and if it were scaled up to that size, we'd be looking at an AI about 32x more powerful than this one in terms of intelligence and overall quality. That's just crazy. I feel like the returns from larger models only start to taper off at, I would guess, around a few trillion parameters, so it's safe to say this estimation is largely accurate. | 2024-12-01T23:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/1h4hee3/if_qwen32b_was_scaled_up_to_the_size_of_lets_say/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4hee3 | false | null | t3_1h4hee3 | /r/LocalLLaMA/comments/1h4hee3/if_qwen32b_was_scaled_up_to_the_size_of_lets_say/ | false | false | self | 0 | null
Looking for some new creative writing models | 1 | [removed] | 2024-12-01T23:58:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h4hgdx/looking_for_some_new_creative_writing_models/ | Ordinary-Meaning-61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4hgdx | false | null | t3_1h4hgdx | /r/LocalLLaMA/comments/1h4hgdx/looking_for_some_new_creative_writing_models/ | false | false | self | 1 | null |
Codestral 22b - still the king of Coding models that aren't from China? My findings and asking for suggestions | 6 | ## Constraints
I do not personally suspect foul play from Qwen-Coder or Yi-Coder, but my current situation has strict compliance rules about the geographical source of LLMs, and they aren't budging. Comment on this what you'd like (and I'd probably agree with you), but that's not why I'm here today.
**What are the best on-prem coding LLM's that aren't from China?** - and bonus, which ones are practical for budget workstations? (so, no Llama 405b sadly).
## My experience and rankings
I have only been testing a few on a variety of hardware. This is only what I've tested for my own use-cases and is by no means suggested to be an objective truth:
- **Codestral 22b Instruct** is the best by a decent margin. I can't get anything else to beat it until you start hitting the much larger models. This seems somewhat in line with what Aider Leaderboard suggests.
- **Mistral-Nemo 12b** is better than Llama 3.1 8b (going a bit against what the Aider Leaderboard suggests, but not by much). I'll use this on any machine that cannot fit Codestral.
- **Llama 3.1 8b Instruct** - Loses to the above, but still very good. Lots of use-cases where I still use this purely for its speed.
- **Gemma2 27b Instruct** - really smart but inconsistent quality. Sometimes punches above Codestral 22b, other times seems like complete nonsense.
My use-cases are primarily from-scratch Python and Go code, C++ refactors, and refactors in general.
I'm curious what you're using, what you've found, what your thoughts are. | 2024-12-02T00:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/1h4hxsu/codestral_22b_still_the_king_of_coding_models/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4hxsu | false | null | t3_1h4hxsu | /r/LocalLLaMA/comments/1h4hxsu/codestral_22b_still_the_king_of_coding_models/ | false | false | self | 6 | null |
Qwen2-VL GGUF | 4 | This might be a dumb question, but I want to convert the Qwen2-Vl model into a GGUF file for quantization. I ended up running into this error when converting:
INFO:hf-to-gguf:Loading model: original\_model
ERROR:hf-to-gguf:Model Qwen2VLForConditionalGeneration is not supported
Is there anything else I can use to convert to gguf or is it just impossible to do that for this specific model? | 2024-12-02T00:32:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h4i6tp/qwen2vl_gguf/ | SpecialistStory336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4i6tp | false | null | t3_1h4i6tp | /r/LocalLLaMA/comments/1h4i6tp/qwen2vl_gguf/ | false | false | self | 4 | null |
Comparison between Apache Tika and Docling for Text Extraction | 48 | 2024-12-02T01:07:39 | davidmezzetti | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h4ix0o | false | null | t3_1h4ix0o | /r/LocalLLaMA/comments/1h4ix0o/comparison_between_apache_tika_and_docling_for/ | false | false | 48 | {'enabled': True, 'images': [{'id': 'h_y3qURoXxtQfwcLRKtVvUAO81edW_o0G3QDJLmjr8s', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/h5t5xd404c4e1.png?width=108&crop=smart&auto=webp&s=e3f4ddc6ef3b428970cef17f172087b6af452e05', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/h5t5xd404c4e1.png?width=216&crop=smart&auto=webp&s=41d52d4b9ca4900ed61b79fef7f17113d3af49a8', 'width': 216}, {'height': 224, 'url': 'https://preview.redd.it/h5t5xd404c4e1.png?width=320&crop=smart&auto=webp&s=3f83ada588a86be570ffa5a85a575613e9f2c86f', 'width': 320}, {'height': 448, 'url': 'https://preview.redd.it/h5t5xd404c4e1.png?width=640&crop=smart&auto=webp&s=5ea17aa3cbb48912b34e42e71b196bd481f5a98f', 'width': 640}, {'height': 672, 'url': 'https://preview.redd.it/h5t5xd404c4e1.png?width=960&crop=smart&auto=webp&s=926eb66298362cb7585d1374e92b0f122b53345d', 'width': 960}, {'height': 756, 'url': 'https://preview.redd.it/h5t5xd404c4e1.png?width=1080&crop=smart&auto=webp&s=bf632a080345b42be2ddec0516df4dc98bd56beb', 'width': 1080}], 'source': {'height': 760, 'url': 'https://preview.redd.it/h5t5xd404c4e1.png?auto=webp&s=ea5e599a4dd43cb088d2ff0fa7895898d427f992', 'width': 1085}, 'variants': {}}]} |
RoomFinder Agent wit crewai | 6 | 2024-12-02T01:15:04 | https://v.redd.it/r0cfc6ra6c4e1 | oridnary_artist | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h4j2hq | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/r0cfc6ra6c4e1/DASHPlaylist.mpd?a=1735694120%2CM2MzYjE2ZmQxY2JiZjFhMDVmMWYyNWZlYWNkOTM3NzE3MDFiNTZjYmJmMzMxNTZiZjdhOWNiMzExY2NlODE0Yg%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/r0cfc6ra6c4e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/r0cfc6ra6c4e1/HLSPlaylist.m3u8?a=1735694120%2CNjY0YjkyNDcwYzk2YjBlOWY1OWQwZGNkNDFhOTRmZWQ1ZGNkM2U5MDBiYmVjNjFlZDBjNTJiYTJhOGY1ODE2ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r0cfc6ra6c4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1112}} | t3_1h4j2hq | /r/LocalLLaMA/comments/1h4j2hq/roomfinder_agent_wit_crewai/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl.png?width=108&crop=smart&format=pjpg&auto=webp&s=242a37c79ea484bc8fb5d0e41cbc0cb06a38ef9f', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl.png?width=216&crop=smart&format=pjpg&auto=webp&s=ecbea674d24c32710de1de175d401219ffe46da9', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl.png?width=320&crop=smart&format=pjpg&auto=webp&s=774ed9d1806f2828e00e38d062126e0c169ff2d1', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl.png?width=640&crop=smart&format=pjpg&auto=webp&s=1c276e08c4c57ac7022a34b3133fd8a6ac1ea17b', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl.png?width=960&crop=smart&format=pjpg&auto=webp&s=dd6e2b98f9cabf84b2f7b4f8857eb33ee11a0b9f', 'width': 960}, {'height': 698, 'url': 'https://external-preview.redd.it/dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f63c6b06b2b4f7c407f931a171b193607c9c69c4', 'width': 1080}], 'source': {'height': 972, 'url': 'https://external-preview.redd.it/dWhmZnI1cmE2YzRlMWA-SjyZjnC7hF9cek3-rV1hMwyFkYJEMqswZdLzXrrl.png?format=pjpg&auto=webp&s=62ef6c3ad7b69828dba84506c2aaa5673b48e18a', 'width': 1502}, 'variants': {}}]} |
Epyc server GPU less | 2 | Hi guys,
What about fully populated RAM at 3000 MHz / 6000 MT/s on an EPYC 9015 (12 memory channels)?
• Max memory bandwidth is around 576GB/s
• 32GBx12 = 384GB of RAM
• Max TDP 155W
I know we lose flash attention, CUDA, tensor cores, cuDNN and so on.
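As a rough sanity check, decode speed on a purely bandwidth-bound box is roughly memory bandwidth divided by the bytes streamed per token (about the model size). A small back-of-envelope sketch, where the GGUF sizes are my approximations:

```python
# Theoretical decode-speed ceiling for a memory-bandwidth-bound CPU server.
bandwidth_gb_s = 576  # 12 channels x DDR5-6000, theoretical peak

models_gb = {
    "70B class, Q4_K_M (~42 GB)": 42,
    "72B class, Q8_0 (~77 GB)": 77,
    "123B class, Q4_K_M (~73 GB)": 73,
}
for name, size in models_gb.items():
    # each generated token streams roughly the whole model from RAM
    print(f"{name}: up to ~{bandwidth_gb_s / size:.1f} tok/s before any CPU bottleneck")
```

Real numbers will land below that ceiling, but it gives an idea of what 576 GB/s buys.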
Could it compete with GPU inference, with tons of RAM, for less than €6K? | 2024-12-02T01:17:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h4j45s/epyc_server_gpu_less/ | Temporary-Size7310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4j45s | false | null | t3_1h4j45s | /r/LocalLLaMA/comments/1h4j45s/epyc_server_gpu_less/ | false | false | self | 2 | null
Questions on getting GGUF files with GPU offloading and correct prompt in Ollama | 1 | [removed] | 2024-12-02T01:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h4jd0j/questions_on_getting_gguf_files_with_gpu/ | owOverwatch37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4jd0j | false | null | t3_1h4jd0j | /r/LocalLLaMA/comments/1h4jd0j/questions_on_getting_gguf_files_with_gpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7vWXhucCA5pDTVbUO_d6B9ooJbt8WnxnQI8TdN2kTFw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WQCPRwYMtW4InE-n_6dYU1h9bg1XbTYK51w1IiWBdP4.jpg?width=108&crop=smart&auto=webp&s=4983c73158e6daf0863765d6c3c722da2bbbf851', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WQCPRwYMtW4InE-n_6dYU1h9bg1XbTYK51w1IiWBdP4.jpg?width=216&crop=smart&auto=webp&s=229f857db5601b052e7d858e85454361cbc2e0db', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WQCPRwYMtW4InE-n_6dYU1h9bg1XbTYK51w1IiWBdP4.jpg?width=320&crop=smart&auto=webp&s=fb864dc98a4ff798ebb512e5487782cd118203cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WQCPRwYMtW4InE-n_6dYU1h9bg1XbTYK51w1IiWBdP4.jpg?width=640&crop=smart&auto=webp&s=843f8a39fd169f41e3727fa4363383ca5f471aff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WQCPRwYMtW4InE-n_6dYU1h9bg1XbTYK51w1IiWBdP4.jpg?width=960&crop=smart&auto=webp&s=ae9ee9b1551989a27a10294e5704357a06935d60', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WQCPRwYMtW4InE-n_6dYU1h9bg1XbTYK51w1IiWBdP4.jpg?width=1080&crop=smart&auto=webp&s=bb96ba2c37f396bd368c9883e0d38e13a5f73ca8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WQCPRwYMtW4InE-n_6dYU1h9bg1XbTYK51w1IiWBdP4.jpg?auto=webp&s=49088670bd709a81f53ad9e60996fcfb8e830e32', 'width': 1200}, 'variants': {}}]} |
Best LLM for Translations - Comparisons | 11 | Hey, r/LocalLLaMA!
I’m currently working on a project to translate texts locally, as it’s essential for my workflow and helps me maintain control over the data.
So far, I’ve tested **Gemma**, **LLaMA**, and **Tulu**, and while each has its strengths, they also come with notable downsides:
• **Gemma:** Great at preserving cultural nuances (e.g., keeping “Sensei”), but its grammar can sometimes feel awkward or incorrect.
• **LLaMA:** Generally reliable, but prone to repetitive phrases or formatting inconsistencies, which can be frustrating.
• **Tulu:** More creative in translations, but this comes at the cost of accuracy (e.g., mistranslating “cosrom” as “cosplay”).
I’ll be supervising all the translations to ensure quality, but I’m looking for other local models to try. My current setup supports up to **12GB VRAM**, so I need models that can run efficiently within those constraints.
Does anyone have recommendations for models that balance **accuracy**, **nuance**, and **contextual reliability**? I’d love to hear what’s working for you!
Thanks in advance for your suggestions! | 2024-12-02T01:37:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h4ji6x/best_llm_for_translations_comparisons/ | iTzNowbie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4ji6x | false | null | t3_1h4ji6x | /r/LocalLLaMA/comments/1h4ji6x/best_llm_for_translations_comparisons/ | false | false | self | 11 | null |
Evaluate the re-train Llama3 on Vietnamese Legal data | 1 | [removed] | 2024-12-02T02:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/1h4kfgr/evaluate_the_retrain_llama3_on_vietnamese_legal/ | Long-Force5118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4kfgr | false | null | t3_1h4kfgr | /r/LocalLLaMA/comments/1h4kfgr/evaluate_the_retrain_llama3_on_vietnamese_legal/ | false | false | self | 1 | null |
Local AI is the Only AI | 134 | 2024-12-02T03:22:27 | https://jeremyckahn.github.io/posts/local-ai-is-the-only-ai/ | jeremyckahn | jeremyckahn.github.io | 1970-01-01T00:00:00 | 0 | {} | 1h4ljng | false | null | t3_1h4ljng | /r/LocalLLaMA/comments/1h4ljng/local_ai_is_the_only_ai/ | false | false | default | 134 | null |
|
Experimenting with Qwen: A Journey Towards AGI | 1 | 2024-12-02T04:07:49 | https://huggingface.co/spaces/llamameta/Achieving-AGI-artificial-general-intelligence | balianone | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h4me1k | false | null | t3_1h4me1k | /r/LocalLLaMA/comments/1h4me1k/experimenting_with_qwen_a_journey_towards_agi/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Tym2cKVfEgyslPVskPLMh0as7_jyfqHjmUKo4tWo0AI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=108&crop=smart&auto=webp&s=dccd6cf88aedbacbb0c47fc8742bbcff07a502d9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=216&crop=smart&auto=webp&s=549a4478ea319b770ed9ea9f222716bb4b3f8af5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=320&crop=smart&auto=webp&s=314095ab9e6727afcd456f2259dd5a3b944ee9dd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=640&crop=smart&auto=webp&s=e16a4835cc91eb292e25ff427ec6bde7625f74b2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=960&crop=smart&auto=webp&s=a1624aae5cc5a37bbbbf1df552312e4150bbda3c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=1080&crop=smart&auto=webp&s=e62655419f9012fa774e7680c8546e02c39da38b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?auto=webp&s=244b2d7da9afcc583b5e02e42362251938e11358', 'width': 1200}, 'variants': {}}]} |
🧙♂️ Supercharged Open-WebUI: My Magical Toolkit for ArXiv, ImageGen, and AI Planning! 🔮 | 110 | Hello Fellow AI Enthusiasts, privacy-focused, Open-Source paladins!
I’ve been tinkering for the last few months with **Open-WebUI** after testing almost every UI I could find, and well, it’s **AMAZING**. I’ve integrated everything there—several providers, image gen, TTS, STT, RAG—manageable and open-source (though I think it’s a bit limited for commercial use). It integrates really well with **Ollama** too.
To get straight to the point, I loved it so much that I made a couple of tools to expand the experience. Instead of me blabbing on, I’ll let **Hermes** (my trusty LLM) give you the details:
**Tools:**
* **arXiv Search**: Instantly find and retrieve academic papers from [arXiv.org](http://arXiv.org) with detailed paper information.
https://preview.redd.it/7pgu01ew2d4e1.png?width=1921&format=png&auto=webp&s=290526c93e216ebf883f4c631b1a5d30eafa1581
* **Hugging Face Image Generator**: Create high-quality images from text descriptions with multiple format options.
https://preview.redd.it/7ajvse1v2d4e1.png?width=2557&format=png&auto=webp&s=8a04a0e6eda41ce35d8a021d32288bfa8b5c95e1
**Function Pipes:**
* **Planner Agent**: An autonomous AI that breaks down complex tasks, generates step-by-step plans, and executes them intelligently.
https://preview.redd.it/2pi4dink2d4e1.png?width=2237&format=png&auto=webp&s=fd5192e9eea19613a90b3116973f9d5370c66724
* **arXiv Research MCTS Pipe**: Advanced research tool using Monte Carlo Tree Search to progressively refine research summaries.
https://preview.redd.it/wjp8x5ys2d4e1.png?width=1916&format=png&auto=webp&s=1e39a68b2f9c37ec4aa0e89bd00c5b14aee68cf0
* **Multi Model Conversations**: Simulate conversations between up to 5 different AI models, each with unique personas.
https://preview.redd.it/jyh71ztt2d4e1.png?width=2514&format=png&auto=webp&s=5b7c122091dca4228499d308c27cae9da3b75c26
**Filters:**
* **Prompt Enhancer**: Automatically improves prompt quality by analyzing conversation context and refining input for better AI responses.
https://preview.redd.it/73c8wcez2d4e1.png?width=2513&format=png&auto=webp&s=56b7d0b9f8bddf618e606dda413825f05c8541b3
Perfect for researchers, developers, and AI enthusiasts looking to supercharge their Open WebUI experience with advanced research, image generation, task planning, and conversational capabilities.
Thanks, Hermes!
Before I share the links, please note that the pipes assume you’re using at least **16k context**, and most of the time, you’ll need to set it in the model file when using **Ollama**.
**My Open WebUI Tools Repo**:
[https://github.com/Haervwe/open-webui-tools](https://github.com/Haervwe/open-webui-tools)
**Hub Link:**
(Also, I forgot to mention: it's super easy to import and export/share pipes and functions. Really cool.)
[https://openwebui.com/u/haervwe/](https://openwebui.com/u/haervwe/)
**Main Open-WebUI Repo**:
[https://github.com/open-webui/open-webui](https://github.com/open-webui/open-webui)
# My Setup:
| **R7 5800X** | **16GB DDR4 3200** | **RX6900XT** |
**AI Services Stack:**
**| Ollama | Open-webUI | OpenedAI-tts | ComfyUI | n8n | quadrant | AnythingLLM |**
Running mostly **8B Q6** or **14B Q4** at **16k context**, and yeah—sometimes things crash pretty hard with intermediate step caching, haha! **Testing on beefier models** would be awesome, so if you’re inclined to do so, please share your results.
Feel free to reach out if you need anything. I’ve got time to burn at the moment since, well... I’m unemployed, so hit me up! 😂 | 2024-12-02T04:26:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h4mq5f/supercharged_openwebui_my_magical_toolkit_for/ | EsotericTechnique | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4mq5f | false | null | t3_1h4mq5f | /r/LocalLLaMA/comments/1h4mq5f/supercharged_openwebui_my_magical_toolkit_for/ | false | false | 110 | null |
|
Any LLM frontend allowing the model to execute python and analyze output? | 9 | I've been trying to understand and reverse engineer a weird anomaly where identical hashing code gives different results on ARM64 versus Python on x86. If I give the code and disassembly to ChatGPT, it is able to debug and analyze the problem iteratively by writing, running, and analyzing the output of Python code.
QwQ has been very good at iteratively *thinking* about a problem, but I was hoping there could be a way to engineer its system prompt to let it execute and analyze the output of Python code it writes, similar to ChatGPT, to improve its coding and debugging performance. Maybe the frontend looks for a specific tag generated by the AI to know when to run the Python code; I'm not sure. Maybe this is something I can throw together as a PoC in Python.
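A minimal sketch of that tag-based loop, assuming an OpenAI-compatible endpoint (the base_url, model name, and tag are placeholders for whatever your setup uses):

```python
# PoC: let the model request Python execution via a custom tag, feed output back.
import re, subprocess, sys
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5001/v1", api_key="none")
system = ("You may run Python by replying with <run_python>code</run_python>. "
          "You will receive the stdout/stderr back, then continue reasoning.")
messages = [{"role": "system", "content": system},
            {"role": "user", "content": "Why do these two hash functions disagree?"}]

for _ in range(5):  # cap the number of tool-use rounds
    reply = client.chat.completions.create(model="local", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    match = re.search(r"<run_python>(.*?)</run_python>", text, re.S)
    if not match:
        break  # no code requested, the model gave its final answer
    run = subprocess.run([sys.executable, "-c", match.group(1)],
                         capture_output=True, text=True, timeout=30)
    messages.append({"role": "user",
                     "content": f"stdout:\n{run.stdout}\nstderr:\n{run.stderr}"})

print(text)
```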
Is there an LLM frontend that allows for this out of the box? Now that KoboldCPP provides both OpenAI and Ollama API emulation, the number of frontends I can use with it has greatly increased, but I'm not sure if there is a frontend that provides this feature. | 2024-12-02T04:30:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h4mss7/any_llm_frontend_allowing_the_model_to_execute/ | ThatsALovelyShirt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4mss7 | false | null | t3_1h4mss7 | /r/LocalLLaMA/comments/1h4mss7/any_llm_frontend_allowing_the_model_to_execute/ | false | false | self | 9 | null |
Open-weights AI models are BAD says OpenAI CEO Sam Altman. Because DeepSeek and Qwen 2.5 did what OpenAI was supposed to do! | 601 | # Because DeepSeek and Qwen 2.5 did what OpenAI was supposed to do!?
[China now has two of what appear to be the most powerful models ever made](https://www.youtube.com/watch?v=9zunASefVas) and they're completely open.
[OpenAI CEO Sam Altman sits down](https://www.youtube.com/watch?v=VIokqGY0CrQ) with Shannon Bream to discuss the positives and potential negatives of artificial intelligence and the importance of maintaining a lead in the A.I. industry over China. | 2024-12-02T04:43:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h4n1i9/openweights_ai_models_are_bad_says_openai_ceo_sam/ | Vishnu_One | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4n1i9 | false | null | t3_1h4n1i9 | /r/LocalLLaMA/comments/1h4n1i9/openweights_ai_models_are_bad_says_openai_ceo_sam/ | false | false | self | 601 | {'enabled': False, 'images': [{'id': 'glhxmcAdRb8KADYdHP1-a7r1f7ns2tLTZ-ABhhEnej8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3XoFbCZL6zhnJkbrBVxAsLDmsfwaWtowHRnmOYPQaWs.jpg?width=108&crop=smart&auto=webp&s=9d24b37c0238428a47c9802351161a600c53db16', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/3XoFbCZL6zhnJkbrBVxAsLDmsfwaWtowHRnmOYPQaWs.jpg?width=216&crop=smart&auto=webp&s=edc2ab255205c6b557a3bfdcf9f95684b7100ef9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/3XoFbCZL6zhnJkbrBVxAsLDmsfwaWtowHRnmOYPQaWs.jpg?width=320&crop=smart&auto=webp&s=cce7246cb16ca75642552a6be66ef0d03c0cd9ec', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/3XoFbCZL6zhnJkbrBVxAsLDmsfwaWtowHRnmOYPQaWs.jpg?auto=webp&s=536acea3ddb9916f9b7d6e979021ab2b12e5057d', 'width': 480}, 'variants': {}}]} |
No success in reading files. | 2 | I've been playing with Llama 3.2 3b and 11b, and Mistral too, lately, and after trying different versions I still can't get them to read large files correctly. I've tried LlamaIndex with Qdrant embeddings, LlamaIndex without embeddings, and no LlamaIndex at all (straight Ollama API, sending in the entire document without any processing), and it still can't find what I am looking for. Is there a way to set up a local system that reads files correctly? Perhaps a different LLM?
I've tried gpt and Claude and both seem to answer correctly over and over. But can't have premium versions for large quantities of files. | 2024-12-02T04:44:00 | https://www.reddit.com/r/LocalLLaMA/comments/1h4n1l2/no_success_in_reading_files/ | TechEverythingElse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4n1l2 | false | null | t3_1h4n1l2 | /r/LocalLLaMA/comments/1h4n1l2/no_success_in_reading_files/ | false | false | self | 2 | null |
Opensource is the way | 66 | I tried some difficult reasoning questions on Claude (Haiku, free), OpenAI, DeepSeek R1, and QwQ on Hugging Face, and only R1 and QwQ were able to solve them; Haiku solved part of a question, and OpenAI couldn't solve it at all.
Rank 1: r1, correct solution, took only 25 sec on CoT
Rank 2: QwQ, correct solution, but CoT wrote an entire essay
Rank 3: haiku (sonnet is not available in free), portion of question was correct
Rank 4: OpenAI, completely incorrect; I'm on the free version so I don't know which model was used.
I'm from non coding background but I needed some code and QwQ, r1 proved really helpful (sonnet was really good too, but due to lack of access, limited message limits and small context length, it is practically useless for free users) | 2024-12-02T04:45:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h4n2gb/opensource_is_the_way/ | Unusual_Membership44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4n2gb | false | null | t3_1h4n2gb | /r/LocalLLaMA/comments/1h4n2gb/opensource_is_the_way/ | false | false | self | 66 | null |
Help Needed Assembling Build for 2x or Single non-Ada A6000 | 1 | [removed] | 2024-12-02T05:14:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h4nkc6/help_needed_assembling_build_for_2x_or_single/ | Su1tz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4nkc6 | false | null | t3_1h4nkc6 | /r/LocalLLaMA/comments/1h4nkc6/help_needed_assembling_build_for_2x_or_single/ | false | false | self | 1 | null |
Tried OpenVINO to optimize Whisper and Llama inference | 11 | 2024-12-02T05:39:11 | https://opensourcedisc.substack.com/p/opensourcediscovery-94-openvino/ | opensourcecolumbus | opensourcedisc.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1h4nyzt | false | null | t3_1h4nyzt | /r/LocalLLaMA/comments/1h4nyzt/tried_openvino_to_optimize_whisper_and_llama/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'pHhJWMtFsgAwQWSmlz3lUNczNGuN58af4OOf0QLTTHM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x3UeU6FyJkzwEv0Lixzne5GJ_-Db8-7IRXIQpnR-RRw.jpg?width=108&crop=smart&auto=webp&s=87c236e848c15790be31f1a745a22c4ec51ba9e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x3UeU6FyJkzwEv0Lixzne5GJ_-Db8-7IRXIQpnR-RRw.jpg?width=216&crop=smart&auto=webp&s=bb1685d66dad44e962be8475b483d9ffb37c109a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x3UeU6FyJkzwEv0Lixzne5GJ_-Db8-7IRXIQpnR-RRw.jpg?width=320&crop=smart&auto=webp&s=49a3c69d2561d11cee2063ca4f2300372d358569', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x3UeU6FyJkzwEv0Lixzne5GJ_-Db8-7IRXIQpnR-RRw.jpg?width=640&crop=smart&auto=webp&s=66c07e0ab97d9d177498703264fad0a135122778', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x3UeU6FyJkzwEv0Lixzne5GJ_-Db8-7IRXIQpnR-RRw.jpg?width=960&crop=smart&auto=webp&s=fb4b39375e72c339a18a0e74bf130b06610644e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x3UeU6FyJkzwEv0Lixzne5GJ_-Db8-7IRXIQpnR-RRw.jpg?width=1080&crop=smart&auto=webp&s=abcf7355e8ed53835b9111153b876865e7cb8b35', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x3UeU6FyJkzwEv0Lixzne5GJ_-Db8-7IRXIQpnR-RRw.jpg?auto=webp&s=0d47e89ff23f2c1c2090c813a69a6cce8916cd11', 'width': 1200}, 'variants': {}}]} |
I built this tool to compare LLMs | 350 | 2024-12-02T05:39:31 | https://v.redd.it/br8pidlihd4e1 | Odd_Tumbleweed574 | /r/LocalLLaMA/comments/1h4nz7b/i_built_this_tool_to_compare_llms/ | 1970-01-01T00:00:00 | 0 | {} | 1h4nz7b | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/br8pidlihd4e1/DASHPlaylist.mpd?a=1735839585%2CMzg5ODQ3YTc1ZmQ4ZjlhYmM4MzY4Y2E5N2ExMWQ1NWM2ODU2NzgyN2U3MDMxMWQxMGE5ZGMzMDdiNjU3YWZkYQ%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/br8pidlihd4e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/br8pidlihd4e1/HLSPlaylist.m3u8?a=1735839585%2CMzcwN2JmNjJiZDYzYjg4ZTAzYTc5Y2IzY2VlZTVlODBkMDk4MjQ0YmY4YzViYmE5MjhiNmEzODdjMmNmNDJhZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/br8pidlihd4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1812}} | t3_1h4nz7b | /r/LocalLLaMA/comments/1h4nz7b/i_built_this_tool_to_compare_llms/ | false | false | 350 | {'enabled': False, 'images': [{'id': 'bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5.png?width=108&crop=smart&format=pjpg&auto=webp&s=95b5657c42aa479be740be522be015a1e5d92758', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5.png?width=216&crop=smart&format=pjpg&auto=webp&s=e2fb7a4b81daa8ed47c4e6f398c358880bbd4fff', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5.png?width=320&crop=smart&format=pjpg&auto=webp&s=411c9c763f81044c35563bd7dd8069590d437ed8', 'width': 320}, {'height': 381, 'url': 'https://external-preview.redd.it/bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5.png?width=640&crop=smart&format=pjpg&auto=webp&s=9251e90b19086afa17971a55297937f212628c2c', 'width': 640}, {'height': 572, 'url': 'https://external-preview.redd.it/bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5.png?width=960&crop=smart&format=pjpg&auto=webp&s=3b921957bd7b9a1995e4b7c31f187d93d31606b6', 'width': 960}, {'height': 643, 'url': 'https://external-preview.redd.it/bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4c02196f0763878ded2c7ec5826e380489152a2c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bW1sMmdjbGloZDRlMb6KvI63-U2bWzgQTVT_EHZUoU5ctQM5sxwqDzczEul5.png?format=pjpg&auto=webp&s=62173c6b2739c8a7e631a7af08b1a4335f909e08', 'width': 1812}, 'variants': {}}]} |
Significantly slower performance when combining AMD GPU's with Llama cpp - anyone able to assist to figure out why? | 12 | ## System
- CPU - Ryzen 9 3950x
- RAM - 64gb 3200mhz ddr4
- Board - x470 Asrock (PCIE3)
- GPU1 - Rx 6800 (16gb) ~500gb/s
- GPU2 - Rx 6700 (12gb) ~300gb/s
--------------------------------------
Scores benchmarking some random queries:
## Command Used
HSA_OVERRIDE_GFX_VERSION=10.3.0 ./llama-cli \
-m ${LLM_DIR}/${VARIETY_OF_MODELS} \
--ctx-size 1000 \
-p <VARIETY OF QUESTIONS TESTED> \
-n <VARIETY OF RESPONSE LENGTHS TESTED> \
--gpu-layers 200 \
-ts 50,50 # does not matter how much I allocate and which to
## Results (these examples were using a Llama 3 8b 6k_L quant):
### Only 6800
llama_perf_context_print: prompt eval time = 72.38 ms / 8 tokens ( 9.05 ms per token, 110.53 tokens per second)
llama_perf_context_print: eval time = 1810.32 ms / 99 runs ( 18.29 ms per token, 54.69 tokens per second)
### Only 6700xt
llama_perf_context_print: prompt eval time = 93.30 ms / 8 tokens ( 11.66 ms per token, 85.74 tokens per second)
llama_perf_context_print: eval time = 2160.69 ms / 99 runs ( 21.83 ms per token, 45.82 tokens per second)
### Split between 6700xt and 6800
llama_perf_context_print: prompt eval time = 103.89 ms / 8 tokens ( 12.99 ms per token, 77.00 tokens per second)
llama_perf_context_print: eval time = 3848.82 ms / 99 runs ( 38.88 ms per token, 25.72 tokens per second)
### Slowdown:
Splitting between the two GPUs brings the eval speed (**llama_perf_context_print**) to almost *half* that of the slower GPU running on its own. The full VRAM pool of both cards can be used. Allocating more or less to one card or the other usually swings it by only 1-2 tokens/second.
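One knob I still haven't fully explored is the split mode itself: llama.cpp can split by rows (`-sm row`) instead of whole layers, which changes how the cards synchronize per token. A sketch of the same idea through llama-cpp-python, where the constant names are my assumption from llama.h and the path/weights are placeholders:

```python
# Try row-split instead of the default layer-split.
import llama_cpp
from llama_cpp import Llama

llm = Llama(
    model_path="llama3-8b-6k_L.gguf",            # placeholder path
    n_gpu_layers=-1,
    split_mode=llama_cpp.LLAMA_SPLIT_MODE_ROW,   # default is LLAMA_SPLIT_MODE_LAYER
    tensor_split=[0.6, 0.4],                     # bias toward the RX 6800
)
print(llm("test prompt", max_tokens=16)["choices"][0]["text"])
```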
**Any ideas what I could do to fix this?** | 2024-12-02T06:01:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h4oc50/significantly_slower_performance_when_combining/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4oc50 | false | null | t3_1h4oc50 | /r/LocalLLaMA/comments/1h4oc50/significantly_slower_performance_when_combining/ | false | false | self | 12 | null |
PREVENTING FINE-TUNED LLM TO ANSWER OUTSIDE OF CONTEXT | 1 | [removed] | 2024-12-02T07:30:37 | https://www.reddit.com/r/LocalLLaMA/comments/1h4pnt1/preventing_finetuned_llm_to_answer_outside_of/ | MBHQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4pnt1 | false | null | t3_1h4pnt1 | /r/LocalLLaMA/comments/1h4pnt1/preventing_finetuned_llm_to_answer_outside_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VXQgDvmSgPYtBxeP5qc-IhF9pgUdqdmygU7HW3wmkHw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c8KIBoZ9CG1tAYco7NWcXG5k_1JqGhAM42RM5cdkmD4.jpg?width=108&crop=smart&auto=webp&s=b0b30f2f8815f93f741f9080f0f0d515d49e32ef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/c8KIBoZ9CG1tAYco7NWcXG5k_1JqGhAM42RM5cdkmD4.jpg?width=216&crop=smart&auto=webp&s=19965144c6d772d849ff2dc36a1f3bb6b8daeaf9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/c8KIBoZ9CG1tAYco7NWcXG5k_1JqGhAM42RM5cdkmD4.jpg?width=320&crop=smart&auto=webp&s=ab098897917d0327e3592d5f944e18673ccd9339', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/c8KIBoZ9CG1tAYco7NWcXG5k_1JqGhAM42RM5cdkmD4.jpg?width=640&crop=smart&auto=webp&s=c7bb995f494c1787d5bbe0017b1bc0b1262469d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/c8KIBoZ9CG1tAYco7NWcXG5k_1JqGhAM42RM5cdkmD4.jpg?width=960&crop=smart&auto=webp&s=04b9d3100cc9b16ec70e775973b44f8f5273f0e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/c8KIBoZ9CG1tAYco7NWcXG5k_1JqGhAM42RM5cdkmD4.jpg?width=1080&crop=smart&auto=webp&s=3b46ab0ce697a81bd88e7edfc4c160a1633af0a3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/c8KIBoZ9CG1tAYco7NWcXG5k_1JqGhAM42RM5cdkmD4.jpg?auto=webp&s=1fa82c9cb3f066c6121b6d5918d0141926435451', 'width': 1200}, 'variants': {}}]} |
Best Open source model for Indian languages | 5 | Hi folks, I am looking for an open source LLM that supports and performs very well on 15 or more Indian languages; in other words, it should have a good tokenizer for Indian languages. I have gone through Llama 3.1 8B, Gemma, etc.
I want to fine-tune one open source model on my instruction data; also, in parallel, can I use another model's tokenizer for multilingual training? Please suggest. | 2024-12-02T07:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h4pvla/best_open_source_model_for_indian_languages/ | Reasonable-Phase1881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4pvla | false | null | t3_1h4pvla | /r/LocalLLaMA/comments/1h4pvla/best_open_source_model_for_indian_languages/ | false | false | self | 5 | null
Musk’s Legal War Against OpenAI’s For-Profit Agenda | 1 | [removed] | 2024-12-02T08:06:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h4q55i | false | null | t3_1h4q55i | /r/LocalLLaMA/comments/1h4q55i/musks_legal_war_against_openais_forprofit_agenda/ | false | false | default | 1 | null |
compute_metrics functioning return dictionary | 2 | 2024-12-02T08:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/1h4q90z/compute_metrics_functioning_return_dictionary/ | darkGrayAdventurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4q90z | false | null | t3_1h4q90z | /r/LocalLLaMA/comments/1h4q90z/compute_metrics_functioning_return_dictionary/ | false | false | 2 | null |
NVidia A6000 on sale via HP | 19 | HP's Cyber Monday sale is on. Anyone looking for a 48GB A6000 can pick one up brand new via HP for under £3000.
https://www.hp.com/gb-en/shop/list.aspx?fc_ptyp_cabcomp=1&sel=acc
I'm not related to HP, NVidia, or any other party, and no affiliate or other crap. | 2024-12-02T08:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h4qgeg/nvidia_a6000_on_sale_via_hp/ | BreakIt-Boris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4qgeg | false | null | t3_1h4qgeg | /r/LocalLLaMA/comments/1h4qgeg/nvidia_a6000_on_sale_via_hp/ | false | false | self | 19 | null |
small gpu for prompt processing on cpu based system | 3 | I'm running on a cpu based server with 128gb of ram, and a tight budget. I often run large models (14b-72b) with very long prompts (8k-32k+ tokens), would adding a cheap GPU like a gtx 1650 4gb give a boost in prompt processing even if obviously the whole model+context will not fit on the gpu? | 2024-12-02T08:31:40 | https://www.reddit.com/r/LocalLLaMA/comments/1h4qh8k/small_gpu_for_prompt_processing_on_cpu_based/ | Brilliant-Sun2643 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4qh8k | false | null | t3_1h4qh8k | /r/LocalLLaMA/comments/1h4qh8k/small_gpu_for_prompt_processing_on_cpu_based/ | false | false | self | 3 | null |
Cheapest way to run 123b models? | 1 | [removed] | 2024-12-02T09:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/1h4r3b7/cheapest_way_to_run_123b_models/ | InsideDebate8988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4r3b7 | false | null | t3_1h4r3b7 | /r/LocalLLaMA/comments/1h4r3b7/cheapest_way_to_run_123b_models/ | false | false | self | 1 | null |
Neuromorphic computing might make edge AI far more viable | 25 | This article isn’t the newest, but i feel that its impact will be felt greatly before the decade is over. I don’t know if anyone has posted this here already, but here’s the link: https://www.researchgate.net/publication/377744243_Linear_symmetric_self-selecting_14-bit_molecular_memristors/link/65b4ffd21e1ec12eff504db1/download?_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6InB1YmxpY2F0aW9uIiwicGFnZSI6InB1YmxpY2F0aW9uIn19 | 2024-12-02T09:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/1h4r8yr/neuromorphic_computing_might_make_edge_ai_far/ | Creative-robot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4r8yr | false | null | t3_1h4r8yr | /r/LocalLLaMA/comments/1h4r8yr/neuromorphic_computing_might_make_edge_ai_far/ | false | false | self | 25 | null |
Any good guides on large scale embedding usage and vector dbs for long term memory? | 1 | [removed] | 2024-12-02T10:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/1h4rn5b/any_good_guides_on_large_scale_embedding_usage/ | Liszat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4rn5b | false | null | t3_1h4rn5b | /r/LocalLLaMA/comments/1h4rn5b/any_good_guides_on_large_scale_embedding_usage/ | false | false | self | 1 | null |
It's been a while since Mistral released something. | 174 | Hint hint. Doing the magic trick where we post here and it appears later.
https://preview.redd.it/qysuvlcowe4e1.png?width=847&format=png&auto=webp&s=a338a5a755ae9f6acabe05dfec66302775a46f2b
| 2024-12-02T10:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h4rzxd/its_been_a_while_since_mistral_released_something/ | Dark_Fire_12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4rzxd | false | null | t3_1h4rzxd | /r/LocalLLaMA/comments/1h4rzxd/its_been_a_while_since_mistral_released_something/ | false | false | 174 | null |
|
How to convert books to a dataset? | 4 | I want to finetune models using some books or articles. What is a good way of converting a book into a dataset? I understand that datasets are usually in question/answer format, but books have no questions to answer; they are just text to be continued. Could you point me to some good practices or share your experiences? | 2024-12-02T10:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/1h4s0oh/how_to_convert_books_to_dataset/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4s0oh | false | null | t3_1h4s0oh | /r/LocalLLaMA/comments/1h4s0oh/how_to_convert_books_to_dataset/ | false | false | self | 4 | null
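Two common routes for the post above: keep the book as raw text-continuation data (chunk it into overlapping passages), or have a stronger model generate synthetic question/answer pairs from each chunk. A minimal sketch of the first route, with arbitrary chunk sizes:

```python
# Turn a plain-text book into JSONL "text" samples for continued pretraining.
import json

def book_to_jsonl(txt_path, out_path, chunk_chars=4000, overlap=400):
    text = open(txt_path, encoding="utf-8").read()
    with open(out_path, "w", encoding="utf-8") as out:
        start = 0
        while start < len(text):
            chunk = text[start:start + chunk_chars].strip()
            if chunk:
                out.write(json.dumps({"text": chunk}, ensure_ascii=False) + "\n")
            start += chunk_chars - overlap  # overlap keeps context across boundaries

book_to_jsonl("book.txt", "book_dataset.jsonl")
```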
I am noticing something not being taken into account in JP to EN Data sets | 1 | [removed] | 2024-12-02T10:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/1h4s4db/i_am_noticing_something_not_being_taken_into/ | GTurkistane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4s4db | false | null | t3_1h4s4db | /r/LocalLLaMA/comments/1h4s4db/i_am_noticing_something_not_being_taken_into/ | false | false | self | 1 | null |
Do I need to provide the "chat template" or "prompt format" to llamafile ? | 8 | Hi all !
I'm using llamafile https://github.com/Mozilla-Ocho/llamafile, as it's the only project that I could compile with AMD GPU support, and because jart is a goddess and her stuff Just Works.
Now, I have trouble with the prompt format.
I have read:
* https://huggingface.co/docs/transformers/main/en/chat_templating which explains what chat templates are: "Chat templates are part of the tokenizer. They specify how to convert conversations, represented as lists of messages, into a single tokenizable string in the format that the model expects." and give specific information and examples.
* https://justine.lol/oneliners/ in which jart sometimes uses prompt template elements in her calls to llamafile (e.g. [INST] or ### User:) and sometimes doesn't (search for "Digital Athena").
* https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/llama.cpp#L18930 llamafile source code, which has a function to apply the templates to the user's input.
I also have run some tests, e.g.
```
LD_LIBRARY_PATH=$LIBRARY_PATH ./llamafile-0.8.16 \
--model ~/Downloads/Meta-Llama-3.1-8B-Instruct-Q8_0.gguf \
-p '<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an AI assistant named Digital Athena which is a large language model trained on the sum of human knowledge.<|eot_id|>' \
--interactive --interactive-first --color -ngl 999 --temp 0 \
--in-suffix '<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>' \
--in-prefix '<|start_header_id|>user<|end_header_id|>
' \
--reverse-prompt '<|eot_id|>'
```
v.s.
```
LD_LIBRARY_PATH=$LIBRARY_PATH ./llamafile-0.8.16 \
--model ~/Downloads/Meta-Llama-3.1-8B-Instruct-Q8_0.gguf \
-p '
The following is a conversation between a Researcher and their helpful AI assistant Digital Athena which is a large language model trained on the sum of human knowledge.
Researcher: Good morning.
Digital Athena: How can I help you today?
Researcher:' \
--in-prefix ' ' --in-suffix 'Digital Athena:' --reverse-prompt 'Researcher:' \
--interactive --interactive-first --color -ngl 999 --temp 0
```
In this particular case, the version with the prompt format elements yielded the perfect answer to my go-to question ("What would you say are the 4 most important concepts defined and used by Pierre Bourdieu ?"), whereas the second version, without the prompt format yielded a close, but not perfect answer (it forgot "symbolic violence").
So I'm basically lost:
* The code to add the prompt template is hard coded in llama.cpp
* but the author herself is sometimes putting it in via command line arguments
* and doing so seems to yield a slight improvement in quality, but my test is quick and dirty.
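One thing that helped me see what string the model actually expects: render the template with the original HF tokenizer and compare it to what I'm passing llamafile. A sketch, assuming the `transformers` package and access to the (gated) Meta repo or a local copy of its tokenizer:

```python
# Print the exact prompt string Llama 3.1 Instruct's chat template produces.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
messages = [
    {"role": "system", "content": "You are Digital Athena, a helpful AI assistant."},
    {"role": "user", "content": "What are the 4 most important concepts defined by Pierre Bourdieu?"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```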
To make matters worse, when I use a web ui on top of llamafile, I get the <|eot_id|> string at the end of each response.
If anybody knows what's what, or even has the slightest idea of what's going on, I'd love to hear it!
Thanks :) | 2024-12-02T10:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h4scnj/do_i_need_to_provide_the_chat_template_or_prompt/ | linschn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4scnj | false | null | t3_1h4scnj | /r/LocalLLaMA/comments/1h4scnj/do_i_need_to_provide_the_chat_template_or_prompt/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'ue40-WK5xNgRchiJwoPUT7wuJVPg5zJdmOxyYPGGbWA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-9OXfOTadcvlw5pYtUgF9z_5Zb_VoDEWac7SLr0R6FQ.jpg?width=108&crop=smart&auto=webp&s=2e74a2d321a1f1d2d51ac11b9ae2f583e211c6e1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-9OXfOTadcvlw5pYtUgF9z_5Zb_VoDEWac7SLr0R6FQ.jpg?width=216&crop=smart&auto=webp&s=5ee0d55333ca0d8e9645d398facf19b75ff2dc5b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-9OXfOTadcvlw5pYtUgF9z_5Zb_VoDEWac7SLr0R6FQ.jpg?width=320&crop=smart&auto=webp&s=5bb74ffd0b0cbf4da6511900119211300e86df7e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-9OXfOTadcvlw5pYtUgF9z_5Zb_VoDEWac7SLr0R6FQ.jpg?width=640&crop=smart&auto=webp&s=5ae5dd0cc92d9f2d423eb9dcb6d8136fd06b779e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-9OXfOTadcvlw5pYtUgF9z_5Zb_VoDEWac7SLr0R6FQ.jpg?width=960&crop=smart&auto=webp&s=c30b46bb94a62ea0c902f4a280e6350ab9c2bcf8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-9OXfOTadcvlw5pYtUgF9z_5Zb_VoDEWac7SLr0R6FQ.jpg?width=1080&crop=smart&auto=webp&s=407b71ef534671f77b1e7d1e3c924be4176d2325', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-9OXfOTadcvlw5pYtUgF9z_5Zb_VoDEWac7SLr0R6FQ.jpg?auto=webp&s=f2348f8b06c20ba99fd1ab4737b81b68c3b003e3', 'width': 1200}, 'variants': {}}]} |
Spare laptop to run LLM - is it powerful enough? | 1 | So I have this old W530 Thinkpad and I was thinking of using it to run some models locally. The thing has an Intel Core i7 at 2600 MHz, which is not terrible, I guess. Integrated GPU, which is terrible, I know. And 64GB of RAM, which is pretty good.
Now, I'm not looking to run 80B parameter models. I'm working with niche models which are a good match for working with audio - DSP. So, I might be running smaller and very specialized models.
Do you think this old workhorse will be of any use for that? | 2024-12-02T10:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h4sfi7/spare_laptop_to_run_llm_is_it_powerful_enough/ | 5jane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4sfi7 | false | null | t3_1h4sfi7 | /r/LocalLLaMA/comments/1h4sfi7/spare_laptop_to_run_llm_is_it_powerful_enough/ | false | false | self | 1 | null |
Linux AI enthousiasts, you might be slowly damaging your GPUs because of temperatures, without even knowing | 0 | [removed] | 2024-12-02T11:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h4srkw/linux_ai_enthousiasts_you_might_be_slowly/ | TyraVex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4srkw | false | null | t3_1h4srkw | /r/LocalLLaMA/comments/1h4srkw/linux_ai_enthousiasts_you_might_be_slowly/ | false | false | 0 | null |
|
Best Local LLM for coding? | 9 | Hey, I am new to LLMs and I want to know the best free local LLM for coding purposes only. Let me know about it. | 2024-12-02T12:08:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h4tidn/best_local_llm_for_coding/ | Hartz_Boi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4tidn | false | null | t3_1h4tidn | /r/LocalLLaMA/comments/1h4tidn/best_local_llm_for_coding/ | false | false | self | 9 | null
What model is the best for movies/books/games recommendations? | 4 | Especially when it comes to cross-media recommendations (which is more challenging). For example "I liked Shin Megami Tensei III. What novels and movies would you recommend?" | 2024-12-02T12:13:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h4tlcm/what_model_is_the_best_for_moviesbooksgames/ | Animus_777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4tlcm | false | null | t3_1h4tlcm | /r/LocalLLaMA/comments/1h4tlcm/what_model_is_the_best_for_moviesbooksgames/ | false | false | self | 4 | null |
Any list of interesting local llama applications? | 1 | [removed] | 2024-12-02T12:38:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h4u0wd/any_list_of_interesting_local_llama_applications/ | linjun_halida | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4u0wd | false | null | t3_1h4u0wd | /r/LocalLLaMA/comments/1h4u0wd/any_list_of_interesting_local_llama_applications/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JXkLHdJEwJmphWfSg89qNlvHckMZTGk_DSuUXbBCM_U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IohWknj0Mn4-x_t1zDYpeHbBZjrPYS-R4MmkaPKZ0H4.jpg?width=108&crop=smart&auto=webp&s=22e7c5292b4a4cd28ad0d7620c0dada031f2e1fe', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IohWknj0Mn4-x_t1zDYpeHbBZjrPYS-R4MmkaPKZ0H4.jpg?width=216&crop=smart&auto=webp&s=55b040152f7c8f8647a9badf165537f01d3cdd39', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IohWknj0Mn4-x_t1zDYpeHbBZjrPYS-R4MmkaPKZ0H4.jpg?width=320&crop=smart&auto=webp&s=5fc616d2266e7713e9bacb1b4442cef3aafe8068', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IohWknj0Mn4-x_t1zDYpeHbBZjrPYS-R4MmkaPKZ0H4.jpg?width=640&crop=smart&auto=webp&s=f49b7095939cfd93207262a18e70e652db6f6846', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IohWknj0Mn4-x_t1zDYpeHbBZjrPYS-R4MmkaPKZ0H4.jpg?width=960&crop=smart&auto=webp&s=549f1eb449c3d99259bc90834709b0ddddc1b3bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IohWknj0Mn4-x_t1zDYpeHbBZjrPYS-R4MmkaPKZ0H4.jpg?width=1080&crop=smart&auto=webp&s=98d5038f37050f61d06e9336922be1dea4e87a2f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IohWknj0Mn4-x_t1zDYpeHbBZjrPYS-R4MmkaPKZ0H4.jpg?auto=webp&s=62535def4034a03047e9f20665214ff2009063c9', 'width': 1200}, 'variants': {}}]} |
A No-BS Database of How Companies Actually Deploy LLMs in Production (300+ Technical Case Studies, Including Self-Hosted) | 371 | For those of us pushing the boundaries with self-hosted models, I wanted to share a valuable resource that just dropped: ZenML's LLMOps Database. It's a collection of 300+ real-world LLM implementations, and what makes it particularly relevant for the community is its coverage of open-source and self-hosted deployments. It includes:
* Detailed architectural decisions around model hosting & deployment
* Real performance metrics and hardware configurations
* Cost comparisons between self-hosted vs API approaches
* Actual production challenges and their solutions
* Technical deep-dives into inference optimization
What sets this apart from typical listicles:
* No marketing fluff - pure technical implementation details
* Focuses on production challenges & solutions
* Includes performance metrics where available
* Covers both successful and failed approaches
* Actually discusses hardware requirements & constraints
The database is filterable by tags including "open_source", "model_optimization", and "self_hosted" - makes it easy to find relevant implementations.
**URL:** [**https://www.zenml.io/llmops-database/**](https://www.zenml.io/llmops-database/)
Contribution form if you want to share your LLM deployment experience: [https://docs.google.com/forms/d/e/1FAIpQLSfrRC0_k3LrrHRBCjtxULmER1-RJgtt1lveyezMY98Li_5lWw/viewform](https://docs.google.com/forms/d/e/1FAIpQLSfrRC0_k3LrrHRBCjtxULmER1-RJgtt1lveyezMY98Li_5lWw/viewform)
What I appreciate most: It's not just another collection of demos or POCs. These are battle-tested implementations with real engineering trade-offs and compromises documented. Would love to hear what insights others find in there, especially around optimization techniques for running these models on consumer hardware.
Edit: Almost forgot - we've got podcast-style summaries of key themes across implementations. Pretty useful for catching patterns in how different teams solve similar problems. | 2024-12-02T12:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1h4u7au/a_nobs_database_of_how_companies_actually_deploy/ | htahir1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4u7au | false | null | t3_1h4u7au | /r/LocalLLaMA/comments/1h4u7au/a_nobs_database_of_how_companies_actually_deploy/ | false | false | self | 371 | null |
Chat GPT becoming unreliable for D&D, suggestions for local alternatives | 1 | [removed] | 2024-12-02T12:52:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h4u9fw/chat_gpt_becoming_unreliable_for_dd_suggestions/ | Landscapedesignguy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4u9fw | false | null | t3_1h4u9fw | /r/LocalLLaMA/comments/1h4u9fw/chat_gpt_becoming_unreliable_for_dd_suggestions/ | false | false | self | 1 | null |
any browser extension for local LLMs ? | 12 | Is there any browser extension that can seamlessly integrate LLMs into my browsing experience? I use models mostly for machine translation; long text summarization would be nice too. Right now I just copy-paste the text manually into koboldcpp. | 2024-12-02T12:56:34 | https://www.reddit.com/r/LocalLLaMA/comments/1h4ucfe/any_browser_extension_for_local_llms/ | VULONKAAZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4ucfe | false | null | t3_1h4ucfe | /r/LocalLLaMA/comments/1h4ucfe/any_browser_extension_for_local_llms/ | false | false | self | 12 | null
Nobel laureate Geoffrey Hinton says Open Sourcing Big Models is like letting people buy nuclear weapons at Radio Shack & Most Important regulation would be to not Open-Source Big Models. | 0 | 2024-12-02T13:08:21 | https://v.redd.it/ej0eosbepf4e1 | Vishnu_One | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h4ukm2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ej0eosbepf4e1/DASHPlaylist.mpd?a=1735736914%2CYmVhNmFkY2JiZmFhNTA4YzQ0ZDcxMmU1OWUyYzliYzY3ZTZkNTIyMjg3MjVhOWJkNTAyZmUyYjQzNTdkMzdkZg%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/ej0eosbepf4e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ej0eosbepf4e1/HLSPlaylist.m3u8?a=1735736914%2CYWFiODUwM2FmNDE0YWY3NjUwZjJkMTQ1YjE1M2IyYmU0MmZkMzM2OGNlNDE0NWZjZGE0Zjg1YjI3OTFkNjU4Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ej0eosbepf4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1h4ukm2 | /r/LocalLLaMA/comments/1h4ukm2/nobel_laureate_geoffrey_hinton_says_open_sourcing/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT.png?width=108&crop=smart&format=pjpg&auto=webp&s=54371ac015af3516aef39a375528e0f579e04136', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT.png?width=216&crop=smart&format=pjpg&auto=webp&s=5134f18e0383480d896b1587635aac860e391f98', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT.png?width=320&crop=smart&format=pjpg&auto=webp&s=cccf712ab4d5012ef764cfe62ef9a85853e04046', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT.png?width=640&crop=smart&format=pjpg&auto=webp&s=ac14c8344dbc421f6cf393564a010b555874073b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT.png?width=960&crop=smart&format=pjpg&auto=webp&s=51c7e3921d3158c46fbb61ba3646bfd571182497', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fc6e39075c7c915db871135c5092b27eb830753f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cnp6eDB1YmVwZjRlMcP99347B5OarEMeJt9jvIyA-c0OOrz3JPKSeQlG2UbT.png?format=pjpg&auto=webp&s=7361f988d0e42dbc5be5e15e378cd0052f9a3f77', 'width': 1920}, 'variants': {}}]} |
||
Trying chain of code, Monte Carlo Tree Search, Mixture of Agents on Qwen | 1 | 2024-12-02T13:19:31 | https://huggingface.co/spaces/llamameta/Achieving-AGI-artificial-general-intelligence | balianone | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h4us1x | false | null | t3_1h4us1x | /r/LocalLLaMA/comments/1h4us1x/trying_chain_of_code_monte_carlo_tree_search/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Tym2cKVfEgyslPVskPLMh0as7_jyfqHjmUKo4tWo0AI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=108&crop=smart&auto=webp&s=dccd6cf88aedbacbb0c47fc8742bbcff07a502d9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=216&crop=smart&auto=webp&s=549a4478ea319b770ed9ea9f222716bb4b3f8af5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=320&crop=smart&auto=webp&s=314095ab9e6727afcd456f2259dd5a3b944ee9dd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=640&crop=smart&auto=webp&s=e16a4835cc91eb292e25ff427ec6bde7625f74b2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=960&crop=smart&auto=webp&s=a1624aae5cc5a37bbbbf1df552312e4150bbda3c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=1080&crop=smart&auto=webp&s=e62655419f9012fa774e7680c8546e02c39da38b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?auto=webp&s=244b2d7da9afcc583b5e02e42362251938e11358', 'width': 1200}, 'variants': {}}]} |
||
Looking for a HIPAA-compliant LLM to help with case notes. | 1 | I've tried BastionGPT, and the results were fairly atrocious, even when given half the note. I have a feeling they are running base GPT-4, as it has all the hallmark issues I had before in other contexts outside of notes. | 2024-12-02T13:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h4ux50/looking_for_a_hippa_compliant_llm_to_help_with/ | edafade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4ux50 | false | null | t3_1h4ux50 | /r/LocalLLaMA/comments/1h4ux50/looking_for_a_hippa_compliant_llm_to_help_with/ | false | false | self | 1 | null
Programmatically discover the size a model from HF will use in VRAM before downloading | 1 | Is there a way to discover how much VRAM a given model will use without actually downloading and running it from HF?
I was looking at HF API and it has the number of parameters for a safe tensor model.
- So for example: "facebook/bart-large-cnn"
from huggingface_hub import HfApi
api = HfApi()
model_info = api.model_info("facebook/bart-large-cnn")
Returns the following number of parameters for safe tensor
{'F32': 406290432}
- Can I consider that this means (406290432 parameters x 4 bytes)/1024/1024/1024 = 1.51GB @ 32 bit?
But if I look at the file size of the safetensor, it is actually 1.63GB.
- Also, to get the 4-bit quant, can I simply divide this number by 8?
Or do I need to take into consideration other factors that will use more VRAM?
| 2024-12-02T13:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h4v4vq/programmatically_discover_the_size_a_model_from/ | alew3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4v4vq | false | null | t3_1h4v4vq | /r/LocalLLaMA/comments/1h4v4vq/programmatically_discover_the_size_a_model_from/ | false | false | self | 1 | null |
Programmatically discover the size a model from HF will use in VRAM before downloading | 29 | Is there a way to discover how much VRAM a given model will use without actually downloading and running it from HF?
I was looking at HF API and it has the number of parameters for a safe tensor model.
**- For example: "facebook/bart-large-cnn"**
from huggingface_hub import HfApi
api = HfApi()
model_info = api.model_info("facebook/bart-large-cnn")
Returns the following number of parameters for safe tensor
{'F32': 406290432}
Can I consider that this means:
(406290432 parameters * 4 bytes)/1024/1024/1024 = **1.51GB @ 32 bit**
If I look at the actual file size of the safe tensor it is **1.63GB**
Can I also get the 4-bit quant by simply dividing this number by 8 (32 bit / 8 = 4 bit)?
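For context, here's the rough sketch I'm using to turn those parameter counts into a weight-size estimate. Assumptions: model_info().safetensors is actually populated for the repo, the dtype-to-bytes table below is incomplete, and this only counts weights (no KV cache, activations or framework overhead); the helper name estimate_weight_gb is just something I made up.

    # Estimate weight size in GiB from the safetensors parameter counts returned by the HF API.
    from huggingface_hub import HfApi

    BYTES_PER_PARAM = {"F32": 4, "F16": 2, "BF16": 2, "I8": 1}  # partial dtype -> bytes table

    def estimate_weight_gb(repo_id, quant_bits=None):
        params = HfApi().model_info(repo_id).safetensors.parameters  # e.g. {'F32': 406290432}
        if quant_bits is None:
            total_bytes = sum(BYTES_PER_PARAM[dtype] * count for dtype, count in params.items())
        else:
            total_bytes = sum(count * quant_bits / 8 for count in params.values())  # uniform quant
        return total_bytes / 1024**3

    print(estimate_weight_gb("facebook/bart-large-cnn"))     # ~1.51 at native 32-bit
    print(estimate_weight_gb("facebook/bart-large-cnn", 4))  # ~0.19 if everything were 4-bit

Side note on the 1.63GB file: 406290432 x 4 bytes is 1,625,161,728 bytes, which is about 1.63 GB in decimal units and 1.51 GiB in binary units, so the download size and the calculation may actually agree.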
Do I need to take into consideration other factors that will use more VRAM? | 2024-12-02T13:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/1h4v9nd/programmatically_discover_the_size_a_model_from/ | alew3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4v9nd | false | null | t3_1h4v9nd | /r/LocalLLaMA/comments/1h4v9nd/programmatically_discover_the_size_a_model_from/ | false | false | self | 29 | null
Is this true??? | 0 | I heard the cost of training DeepSeek R1 is below 3 million dollars. | 2024-12-02T13:49:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h4vdax/is_this_true/ | TheLogiqueViper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4vdax | false | null | t3_1h4vdax | /r/LocalLLaMA/comments/1h4vdax/is_this_true/ | false | false | self | 0 | null
What's the best approach to making an LLM desktop that needs to get trained on sensitive company data? | 1 | Does it make more sense for the LLM to be running completely locally, or should it be running on a server somewhere? Security is extremely important when it comes to the training data, so that's a must for whatever approach I take.
As someone who knows very little about this space, I was thinking to develop it as a local desktop application so there'd be no network based security risks. I don't know if that intuition is correct, because as far as I understand, the best LLMs use tons of resources server side to generate high quality output.
Thanks! | 2024-12-02T13:58:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h4vju6/whats_the_best_approach_to_making_an_llm_desktop/ | Diamond-Equal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4vju6 | false | null | t3_1h4vju6 | /r/LocalLLaMA/comments/1h4vju6/whats_the_best_approach_to_making_an_llm_desktop/ | false | false | self | 1 | null |
Open-Source AI = National Security: The Cry for Regulation Intensifies | 153 | 2024-12-02T13:59:31 | https://v.redd.it/7j5lxfjoyf4e1 | Vishnu_One | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h4vk8t | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7j5lxfjoyf4e1/DASHPlaylist.mpd?a=1735739986%2CNjE2NDUwOWM4ZTNmNWQxYmY4MTIyN2VjZjE4M2EwMDQ0OGY5ZWYyY2FhZWU4NTJkZjRhNWU0ZjhjMmU3YmE3MA%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/7j5lxfjoyf4e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/7j5lxfjoyf4e1/HLSPlaylist.m3u8?a=1735739986%2CNzEwYjQzMGZiMTYyZWJlNDQwMzhhYmFmMjM3ODNjNTYwZGVmNzVhYTIzYmU5MzU0OTA1NWEyZDdlZjY3ZmViOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7j5lxfjoyf4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1h4vk8t | /r/LocalLLaMA/comments/1h4vk8t/opensource_ai_national_security_the_cry_for/ | false | false | 153 | {'enabled': False, 'images': [{'id': 'c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt.png?width=108&crop=smart&format=pjpg&auto=webp&s=fed15de7b8429670cdc3f7cdcd5917ce30d8626d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt.png?width=216&crop=smart&format=pjpg&auto=webp&s=cef94d152b146b866a4417a49b5b9b42c07b39cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt.png?width=320&crop=smart&format=pjpg&auto=webp&s=c728ed1ae2e913fd885b846632a96a7b18891d86', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt.png?width=640&crop=smart&format=pjpg&auto=webp&s=fe7aecac8f7ba5b1af10612ecdb2a974c2234e1a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt.png?width=960&crop=smart&format=pjpg&auto=webp&s=ab23dd79f0b2d934860eb9454eecb4043c8916c0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f6b5c7d5ab2ae03b3847de720ddf93cd082d8473', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/c3c4bmVoam95ZjRlMWZha1O9f4eJ7IWj0Xw6z3sPfopzp8kwi1Y69_HNTNJt.png?format=pjpg&auto=webp&s=29db20392337f87d63342a4cce7fdf26926e0372', 'width': 1920}, 'variants': {}}]} |
||
What's the best approach to making an app that utilizes an LLM to work with sensitive client data? | 2 | Does it make more sense for the LLM to be running completely locally, or should it be running on a server somewhere? A local desktop app, or a website? Security is extremely important when it comes to the training data, so that's a must for whatever approach I take.
As someone who knows very little about this space, I was thinking to develop it as a local desktop application so there'd be no network based security risks. I don't know if that intuition is correct, because as far as I understand, the best LLMs use tons of resources server side to generate high quality output.
Thanks! | 2024-12-02T14:00:15 | https://www.reddit.com/r/LocalLLaMA/comments/1h4vkt4/whats_the_best_approach_to_making_an_app_that/ | Diamond-Equal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4vkt4 | false | null | t3_1h4vkt4 | /r/LocalLLaMA/comments/1h4vkt4/whats_the_best_approach_to_making_an_app_that/ | false | false | self | 2 | null |
Need advice: Building a workgroup AI PC for LLM inference & fine-tuning (~3000€) | 1 | [removed] | 2024-12-02T14:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h4vwze/need_advice_building_a_workgroup_ai_pc_for_llm/ | StudentOfChaos123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4vwze | false | null | t3_1h4vwze | /r/LocalLLaMA/comments/1h4vwze/need_advice_building_a_workgroup_ai_pc_for_llm/ | false | false | self | 1 | null |
Preparing Local LLM conference for Seniors Data Scientist | 5 | As the title says, I have been "awarded" by my company the task of giving a talk to all the data staff, especially the senior data scientists, about the costs (or possible costs) of deploying local LLM solutions.
I need your help, my beloved community, to prepare the speech. What I have now are these points:
- Short introduction to local LLMs (Llama, Mixtral, Qwen...)
- Impact of the number of parameters (B) on the VRAM needed.
- CPU vs GPU usage: when and why (see the rough snippet after this list).
- Quantization.
- Hardware selection: eGPU (for Intel MacBooks and Windows laptops), GPUs (new and second-hand) and professional GPUs, and cloud alternatives for renting GPUs (not as local, but you know).
- Optimization and scalability: this is where I especially need some help/tips.
- OS, software and frameworks.
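To make the CPU vs GPU and quantization points concrete, I was thinking of showing a tiny demo along these lines. It's only a sketch: it assumes llama-cpp-python is installed and that a quantized GGUF file is already on disk (the path below is made up).

    # Loading a ~4-bit quantized model and choosing how many layers go to the GPU.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF
        n_gpu_layers=-1,  # -1 offloads all layers to the GPU; set 0 to compare pure CPU speed
        n_ctx=4096,       # context length matters too: the KV cache also takes VRAM
    )

    out = llm("Explain quantization in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

Running the same script with n_gpu_layers=0 vs -1 seems like a nice way to show the CPU/GPU trade-off live.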
What do you think about it? Do you have any info? Thanks!!
Disclaimer: I will open-source the presentation in case anyone needs it, and obviously give credit to the people who help.
Thank you all!!
| 2024-12-02T14:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h4w0uq/preparing_local_llm_conference_for_seniors_data/ | Juan_Sanz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h4w0uq | false | null | t3_1h4w0uq | /r/LocalLLaMA/comments/1h4w0uq/preparing_local_llm_conference_for_seniors_data/ | false | false | self | 5 | null |