title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Any models trained to "go hard" | 0 | Really hard concept to define but if you know what I mean then you know. Are there any models that have been trained specifically to spit bars? | 2024-12-31T03:45:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hq5y9l/any_models_trained_to_go_hard/ | Unfair-Ad9415 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq5y9l | false | null | t3_1hq5y9l | /r/LocalLLaMA/comments/1hq5y9l/any_models_trained_to_go_hard/ | false | false | self | 0 | null |
Finetuning a model for generating descriptions of expeditions | 1 | [removed] | 2024-12-31T03:48:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hq607o/finetuning_a_model_for_generating_descriptions_of/ | MariaFitz345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq607o | false | null | t3_1hq607o | /r/LocalLLaMA/comments/1hq607o/finetuning_a_model_for_generating_descriptions_of/ | false | false | self | 1 | null |
Training a model for generating descriptions of expeditions | 1 | [removed] | 2024-12-31T03:49:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hq618c/training_a_model_for_generating_descriptions_of/ | MariaFitz345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq618c | false | null | t3_1hq618c | /r/LocalLLaMA/comments/1hq618c/training_a_model_for_generating_descriptions_of/ | false | false | self | 1 | null |
Has anyone tried running DeepSeek v3 on the Jetson Nano? How does it perform? | 1 | [removed] | 2024-12-31T04:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hq7awr/has_anyone_tried_running_deepseek_v3_on_the/ | representworld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq7awr | false | null | t3_1hq7awr | /r/LocalLLaMA/comments/1hq7awr/has_anyone_tried_running_deepseek_v3_on_the/ | false | false | self | 1 | null |
Any good avatar tech like VASA1 yet? | 1 | [removed] | 2024-12-31T05:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hq7gri/any_good_avatar_tech_like_vasa1_yet/ | Cool_Brick_772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq7gri | false | null | t3_1hq7gri | /r/LocalLLaMA/comments/1hq7gri/any_good_avatar_tech_like_vasa1_yet/ | false | false | self | 1 | null |
Cloud APIs with support for banning words? | 2 | I love the "banned_tokens" parameter in koboldcpp ("An array of string sequences, each entry represents a word or phrase prevented from being generated, either modifying model vocab or by backtracking and regenerating when they appear.")
But does anyone know of a cloud API provider with support for a similar parameter? Some support "logit_bias", but it's not really the same thing. | 2024-12-31T05:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hq85uq/cloud_apis_with_support_for_banning_words/ | Risse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq85uq | false | null | t3_1hq85uq | /r/LocalLLaMA/comments/1hq85uq/cloud_apis_with_support_for_banning_words/ | false | false | self | 2 | null |
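The closest cloud-side workaround is usually `logit_bias` on an OpenAI-compatible endpoint. Below is a minimal sketch of that approach (my own illustration, not koboldcpp code); note that it only suppresses individual token IDs, so multi-token phrases and alternative tokenizations can still slip through, which is exactly why it isn't equivalent to koboldcpp's backtrack-and-regenerate behaviour:

```
# Approximate word banning via logit_bias against an OpenAI-compatible API.
# Assumes the openai and tiktoken packages; swap base_url/api_key for other providers.
import tiktoken
from openai import OpenAI

client = OpenAI()  # or OpenAI(base_url="...", api_key="...") for another provider
enc = tiktoken.get_encoding("cl100k_base")

banned = ["delve", " delve"]  # include leading-space variants, they tokenize differently
bias = {str(tok): -100 for word in banned for tok in enc.encode(word)}

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a short paragraph about research."}],
    logit_bias=bias,  # -100 effectively forbids these token IDs
)
print(resp.choices[0].message.content)
```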
Just received my Jetson Orin Nano (sponsored by NVIDIA), the Nano SuperComputer. What are some cool stuff I should try with it? | 2 | [removed] | 2024-12-31T06:02:39 | mehul_gupta1997 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hq8dd5 | false | null | t3_1hq8dd5 | /r/LocalLLaMA/comments/1hq8dd5/just_received_my_jetson_orin_nano_sponsored_by/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'xWKCLSaUU2_Y6-khiju2t6ZZJbUnHR6iAztXBVPQl5s', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/uvcycip2k4ae1.png?width=108&crop=smart&auto=webp&s=5187b61157d050bd5e21343bee026a5f7a873e8b', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/uvcycip2k4ae1.png?width=216&crop=smart&auto=webp&s=88118ddaaf9bea2e9af499a71abeddb74a494cd5', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/uvcycip2k4ae1.png?width=320&crop=smart&auto=webp&s=5c2308f910964d5986493fd0f90f8cbcde65d417', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/uvcycip2k4ae1.png?width=640&crop=smart&auto=webp&s=21cd4af0afcb18849a43cc63cb0e908bf72c196c', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/uvcycip2k4ae1.png?width=960&crop=smart&auto=webp&s=5545cf8f1e7f38df197411ab577ad055ecd3c665', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/uvcycip2k4ae1.png?width=1080&crop=smart&auto=webp&s=dc8436882cb71a86f66b435b74ebdefd398c7b2a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/uvcycip2k4ae1.png?auto=webp&s=58d06f2b1120f57e7e1e01bda10a0ec3090600be', 'width': 1080}, 'variants': {}}]} |
||
LiveBench-2024-06-24 (Initial version) vs 2024 end of year (6 months, 8 days later). SOTA Global score increase of 14.51. SOTA biggest increase was reasoning: 27.58. Caveat: Reasoning models including o1 are still poor at spatial and compositional reasoning. Livebench should add these two types. | 33 | 2024-12-31T06:11:01 | https://livebench.ai/#/ | Personal-Dot-380 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hq8ich | false | null | t3_1hq8ich | /r/LocalLLaMA/comments/1hq8ich/livebench20240624_initial_version_vs_2024_end_of/ | false | false | 33 | null |
||
Xiaomi recruits key DeepSeek researcher to lead its AI lab. | 128 | Recently some members of the Qwen team joined ByteDance, and now similar moves are happening at DeepSeek. This highlights the intense competition for AI talent within the country.
[https://www.aibase.com/news/14345](https://www.aibase.com/news/14345) | 2024-12-31T06:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hq8mt4/xiaomi_recruits_key_deepseek_researcher_to_lead/ | sb5550 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq8mt4 | false | null | t3_1hq8mt4 | /r/LocalLLaMA/comments/1hq8mt4/xiaomi_recruits_key_deepseek_researcher_to_lead/ | false | false | self | 128 | null |
Suggestions for generative AI projects | 1 | [removed] | 2024-12-31T06:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hq945u/suggestions_for_generative_ai_projects/ | 3harath_k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq945u | false | null | t3_1hq945u | /r/LocalLLaMA/comments/1hq945u/suggestions_for_generative_ai_projects/ | false | false | self | 1 | null |
How to Generate Text Dataset Using LLama 3.1? | 2 | So I am working on my semester mini-project. It’s titled "Indianism Detection in Texts Using Machine Learning" (yeah, I just randomly made it up during idea submissions). Now the problem is, there’s no such dataset for this in the entire world. To counter this, I came up with a pipeline to convert a normal (correct) English phrase into English with Indianisms using my local LLama 3.1 and then save both the correct and converted sentences into a dataset with labels, respectively.
I also created a simple pipeline for it (a kind of constitutional AI) but can’t seem to get any good responses. Could anyone suggest something better? (I’m 6 days away from the project submission deadline.)
I explained the current pipeline in this GitHub repo’s README. Check it out:
[https://github.com/iamDyeus/Synthetica](https://github.com/iamDyeus/Synthetica) | 2024-12-31T06:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hq98fa/how_to_generate_text_dataset_using_llama_31/ | dyeusyt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq98fa | false | null | t3_1hq98fa | /r/LocalLLaMA/comments/1hq98fa/how_to_generate_text_dataset_using_llama_31/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'eTeDRllNhGpeNsUszCvZnZVCnwe42EgNU7sBu4rquqo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-W8T7ecV014oqzZxXQkaXEBq8VbDm8DdcmAzANWFhrI.jpg?width=108&crop=smart&auto=webp&s=b7b16fb7c42407b779e4f0ed97217fafb4361fa6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-W8T7ecV014oqzZxXQkaXEBq8VbDm8DdcmAzANWFhrI.jpg?width=216&crop=smart&auto=webp&s=0ef002ad7429b4e4df0b759eff772416606f6fa9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-W8T7ecV014oqzZxXQkaXEBq8VbDm8DdcmAzANWFhrI.jpg?width=320&crop=smart&auto=webp&s=3dcc7edd2d2f5e830c5244ae190665b15a62a4b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-W8T7ecV014oqzZxXQkaXEBq8VbDm8DdcmAzANWFhrI.jpg?width=640&crop=smart&auto=webp&s=8811e705569c0bc0c994ceb0c4dcba95a1d9a5aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-W8T7ecV014oqzZxXQkaXEBq8VbDm8DdcmAzANWFhrI.jpg?width=960&crop=smart&auto=webp&s=7cab9263662fd5bfebf590abfe29292880156798', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-W8T7ecV014oqzZxXQkaXEBq8VbDm8DdcmAzANWFhrI.jpg?width=1080&crop=smart&auto=webp&s=6d4da23ef65807f553b72ff50c6c1f8c1682aae7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-W8T7ecV014oqzZxXQkaXEBq8VbDm8DdcmAzANWFhrI.jpg?auto=webp&s=83b6410a11e63be8b78e1c2ce1a70fa3643bb25f', 'width': 1200}, 'variants': {}}]} |
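For anyone curious what such a pipeline can look like, here is a minimal sketch (not the repo's actual code) of generating labeled pairs with a local Llama 3.1 served through Ollama's HTTP API; the model name, prompt wording, and file layout are assumptions:

```
# Generate (standard English, Indian-English) sentence pairs with a local Llama 3.1 via Ollama.
import csv
import requests

PROMPT = (
    "Rewrite the following sentence in informal Indian English (with Indianisms), "
    "keeping the meaning. Reply with only the rewritten sentence.\n\nSentence: {s}"
)

def rewrite(sentence: str) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1", "prompt": PROMPT.format(s=sentence), "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"].strip()

correct_sentences = ["Please do it as soon as possible.", "I will be out of the office today."]
with open("dataset.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["text", "label"])  # label 0 = standard English, 1 = with Indianisms
    for s in correct_sentences:
        w.writerow([s, 0])
        w.writerow([rewrite(s), 1])
```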
Data Diversity | 2 | I am looking to build a multilingual dataset for the purpose of fine-tuning the llama model. To ensure that this dataset is truly diverse, I need guidance on how to determine whether it includes all the necessary vocabulary, dialects, and specific linguistic nuances relevant to different languages. As someone who is relatively inexperienced in collecting datasets, I would appreciate any suggestions or best practices that could help me in this. Thank you | 2024-12-31T07:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hq9elz/data_diversity/ | Ai_Peep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq9elz | false | null | t3_1hq9elz | /r/LocalLLaMA/comments/1hq9elz/data_diversity/ | false | false | self | 2 | null |
Molex max wattage | 1 | [removed] | 2024-12-31T07:21:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hq9kaf/molex_max_wattage/ | Linkpharm2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq9kaf | false | null | t3_1hq9kaf | /r/LocalLLaMA/comments/1hq9kaf/molex_max_wattage/ | false | false | self | 1 | null |
Specialised models | 1 | [removed] | 2024-12-31T07:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hq9mtu/specialised_models/ | Breath_Unique | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq9mtu | false | null | t3_1hq9mtu | /r/LocalLLaMA/comments/1hq9mtu/specialised_models/ | false | false | self | 1 | null |
Llamafile | 1 | [removed] | 2024-12-31T07:33:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hq9prs/llamafile/ | Formal-Luck-4604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq9prs | false | null | t3_1hq9prs | /r/LocalLLaMA/comments/1hq9prs/llamafile/ | false | false | self | 1 | null |
Molex max power | 1 | [removed] | 2024-12-31T07:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hq9xu4/molex_max_power/ | Linkpharm2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq9xu4 | false | null | t3_1hq9xu4 | /r/LocalLLaMA/comments/1hq9xu4/molex_max_power/ | false | false | self | 1 | null |
Question: Dell 5820 | 2x Dell OEM 3090s | 0 | I'm able to do PCIe slots 2 & 4 according to the manual, which would give me full PCIe 4.0 x16, yet the cards aren't actually 2 slots wide and I also misread the documentation a little.
Unless I get a riser and keep the lid off of the machine, I'll have to go with using Slot 1, which is an x8, and Slot 4, which is an x16.
---
**My question**:
- How much of a performance impact can be expected for inference when using a combo of the x16 / x8?
- Have any real world numbers?
- Any optimizations you'd do to squeeze any additional performance?
- Higher batch size?
---
Have any good issues, blogs, or posts to read to get more insight? I'll be taking a look at llama.cpp issues tomorrow to see if I can find some information. | 2024-12-31T07:51:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hq9yzz/question_dell_5820_2x_dell_oem_3090s/ | _Boffin_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hq9yzz | false | null | t3_1hq9yzz | /r/LocalLLaMA/comments/1hq9yzz/question_dell_5820_2x_dell_oem_3090s/ | false | false | self | 0 | null |
A Roleplaying AI with "Story Flow Thought Chain" | 32 | Happy New Year's Eve everyone! 🎉 As we're wrapping up 2024, I wanted to share something special I've been working on - a roleplaying model called mirau. Consider this my small contribution to the AI community as we head into 2025!
## What makes it different?
The key innovation is what I call the "Story Flow Thought Chain" - the model maintains two parallel streams of output:
1. An inner monologue (invisible to the character but visible to the user)
2. The actual dialogue response
This creates a continuous first-person narrative that helps maintain character consistency across long conversations.
## Key Features:
- **Dual-Role System**: Users can act both as a "director" giving meta-instructions and as a character in the story
- **Strong Character Consistency**: The continuous inner narrative helps maintain consistent personality traits
- **Transparent Decision Making**: You can see the model's "thoughts" before it responds
- **Extended Context Memory**: Better handling of long conversations through the narrative structure
## Example Interaction:
```
System: I'm Doudou, and today is my first day at the new school. Sitting in the classroom, I can't help but look around...
User: (his voice is very gentle) I had polio
Bot: (Polio? What's that? I've never heard of it)
Polio? (tilting head in confusion)
```
The parentheses show the model's inner thoughts, while the regular text is the actual response.
## Try It Out:
You can try the model yourself at [ModelScope Studio](https://www.modelscope.cn/studios/mouseEliauk/mirau-14b-demo/summary)
The details and documentation are available in the [README](https://www.modelscope.cn/models/mouseEliauk/mirau-RP-14b/file/view/master?fileName=README_en.md&status=1)
I'd love to hear your thoughts and feedback! What do you think about this approach to AI roleplaying? How do you think it compares to other roleplaying models you've used?
Edit: Thanks for all the interest! I'll try to answer questions in the comments. And once again, happy new year to all AI enthusiasts! Looking back at 2024, we've seen incredible progress in AI roleplaying, and I'm excited to see what 2025 will bring to our community! 🎊
P.S. What better way to spend the last day of 2024 than discussing AI with fellow enthusiasts? 😊 | 2024-12-31T08:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hqa8d3/a_roleplaying_ai_with_story_flow_thought_chain/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqa8d3 | false | null | t3_1hqa8d3 | /r/LocalLLaMA/comments/1hqa8d3/a_roleplaying_ai_with_story_flow_thought_chain/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': '0vm3JZeJ3WRNXcwOxUTBYxEXKl5AKg2y9nFyAiXOBh4', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/6BBbL0TQT0pe8S955ILI6a8HkGYNwwW9ratrzmKj8Cs.jpg?width=108&crop=smart&auto=webp&s=fe551eb076be7f79840e937fb1ed7171b697af05', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/6BBbL0TQT0pe8S955ILI6a8HkGYNwwW9ratrzmKj8Cs.jpg?width=216&crop=smart&auto=webp&s=515703b3eeaa86c2b4b21ba8e2196c74993600f7', 'width': 216}, {'height': 156, 'url': 'https://external-preview.redd.it/6BBbL0TQT0pe8S955ILI6a8HkGYNwwW9ratrzmKj8Cs.jpg?width=320&crop=smart&auto=webp&s=117b9f8027fef656369be41782a842977a741154', 'width': 320}, {'height': 312, 'url': 'https://external-preview.redd.it/6BBbL0TQT0pe8S955ILI6a8HkGYNwwW9ratrzmKj8Cs.jpg?width=640&crop=smart&auto=webp&s=363594530912b5895f2fe3a71ade44bf12aefe39', 'width': 640}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/6BBbL0TQT0pe8S955ILI6a8HkGYNwwW9ratrzmKj8Cs.jpg?auto=webp&s=61504ef8d2e8efe600670710674be2d1ef5f9ae2', 'width': 820}, 'variants': {}}]} |
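Since the inner monologue is delimited by parentheses, consumers of the model's output can separate "thoughts" from spoken dialogue with a few lines of code. A small sketch (my own illustration, not code shipped with mirau):

```
# Split a mirau-style turn into inner monologue and spoken dialogue.
import re

def split_turn(text: str):
    thoughts = re.findall(r"\(([^)]*)\)", text)      # everything inside parentheses
    speech = re.sub(r"\([^)]*\)", "", text).strip()  # what the character actually says
    return thoughts, speech

thoughts, speech = split_turn(
    "(Polio? What's that? I've never heard of it)\nPolio? (tilting head in confusion)"
)
print(thoughts)  # ["Polio? What's that? I've never heard of it", "tilting head in confusion"]
print(speech)    # "Polio?"
```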
What's your primary local LLM at the end of 2024? | 358 | Qwen2.5 32B remains my primary local LLM. Even three months after its release, it continues to be the optimal choice for 24GB GPUs.
What's your favourite local LLM at the end of this year? | 2024-12-31T08:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hqak1f/whats_your_primary_local_llm_at_the_end_of_2024/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqak1f | false | null | t3_1hqak1f | /r/LocalLLaMA/comments/1hqak1f/whats_your_primary_local_llm_at_the_end_of_2024/ | false | false | self | 358 | null |
Q: How to estimate token count needed to process an image? | 1 | I use *llava:7b*, *llama3.2-vision:latest* and *llava-llama3:latest* to interpret images.
I would like to estimate the token count before the actual execution.
I know that I can view the following after the execution:
* prompt_eval_count: Input token count
* eval_count: Output token count
Currently I use [OpenAI's](https://community.openai.com/t/how-do-i-calculate-image-tokens-in-gpt4-vision/492318/5?utm_source=chatgpt.com) method to estimate it. For an image of 1024 * 1024 pixels, I expected it to use 765 tokens. Below are the *prompt_eval_count* values for three different models.
**llava:7b**
* text + image: 601
* text: 24
**llama3.2-vision:latest**
* text + image: 32
* text: 30
**llava-llama3:latest**
* text + image: 607
* text: 32
Based on the results, I believe none of them uses **OpenAI**'s method. I also suspect that **llama3.2-vision:latest**'s vision token consumption is not reflected in *prompt_eval_count*.
https://preview.redd.it/rn720e87b5ae1.jpg?width=1024&format=pjpg&auto=webp&s=078b802eefb63e00de2ef3390937f91fe9804565
Other parameters:
SYSTEM_PROMPT = "You are Whales, faithful AI assistant."
QUERY = "What's in the image?" | 2024-12-31T08:35:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hqaklz/q_how_to_estimate_token_count_needed_to_process/ | Vegetable_Carrot_873 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqaklz | false | null | t3_1hqaklz | /r/LocalLLaMA/comments/1hqaklz/q_how_to_estimate_token_count_needed_to_process/ | false | false | 1 | {'enabled': False, 'images': [{'id': 't-wCHRNAQIUmE5Kh0D9Obmu--yWwytnHraHdRp56tPY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IoEeRP0R4h4wMnnZK2Qi6TNz3r_ijtZlxdC5me85Z-w.jpg?width=108&crop=smart&auto=webp&s=bad4ef7b11c73114ffb8468170b84783ff5a2999', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/IoEeRP0R4h4wMnnZK2Qi6TNz3r_ijtZlxdC5me85Z-w.jpg?width=216&crop=smart&auto=webp&s=bab4c1fd95f684896b9d82f5cdcf7842ee3c0080', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/IoEeRP0R4h4wMnnZK2Qi6TNz3r_ijtZlxdC5me85Z-w.jpg?width=320&crop=smart&auto=webp&s=dd4c42abd0d9108dcdfa55365dec0c3c5703c5a9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/IoEeRP0R4h4wMnnZK2Qi6TNz3r_ijtZlxdC5me85Z-w.jpg?width=640&crop=smart&auto=webp&s=1c3a3240244b89fb28ee4713da16722749a4e386', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/IoEeRP0R4h4wMnnZK2Qi6TNz3r_ijtZlxdC5me85Z-w.jpg?width=960&crop=smart&auto=webp&s=398f03cabd6622d750d2f2579455cb552d1e33ce', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/IoEeRP0R4h4wMnnZK2Qi6TNz3r_ijtZlxdC5me85Z-w.jpg?width=1080&crop=smart&auto=webp&s=e363837b3634a03df232653a296891beec964df9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/IoEeRP0R4h4wMnnZK2Qi6TNz3r_ijtZlxdC5me85Z-w.jpg?auto=webp&s=a814473683e8e236c6c4c5ec5d55c1098616886d', 'width': 1200}, 'variants': {}}]} |
|
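For reference, the OpenAI-style estimate mentioned in the post above (the source of the 765-token figure for a 1024 x 1024 image) can be sketched as follows; it is only a point of comparison, since local llava/llama3.2-vision models tokenize images differently:

```
# OpenAI-style "high detail" image token estimate: 85 base tokens plus 170 per 512px tile
# after resizing. Yields 765 for a 1024x1024 image.
import math

def openai_image_tokens(width: int, height: int, detail: str = "high") -> int:
    if detail == "low":
        return 85
    # 1) fit the image within a 2048 x 2048 square
    scale = min(1.0, 2048 / max(width, height))
    w, h = width * scale, height * scale
    # 2) downscale so the shortest side is at most 768px
    if min(w, h) > 768:
        s = 768 / min(w, h)
        w, h = w * s, h * s
    # 3) count 512px tiles
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 170 * tiles + 85

print(openai_image_tokens(1024, 1024))  # 765
```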
Any LLM projects with contextual awareness? | 0 | Every time I consider an “AI assistant”, I immediately encounter (simple to fix) situational awareness problems. I need the model to know that today is the 31st of December 2024, HH:MM:SS! That would be the very least.
On the other hand - in most cases - I need it to know who I am, where I live, what my profession is, what I am working on and so much more.
So question is: are there any companies working on assistants on steroids with actual situational awareness?
It would be feasible to build, but of course having to provide all this info as a (ideally permanently cached) prompt would increase the cost.
| 2024-12-31T08:52:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hqaser/any_llm_projects_with_contextual_awareness/ | EternalOptimister | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqaser | false | null | t3_1hqaser | /r/LocalLLaMA/comments/1hqaser/any_llm_projects_with_contextual_awareness/ | false | false | self | 0 | null |
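The "just put it in the (cached) prompt" approach the post above alludes to can be as simple as the sketch below; all names and the profile contents are purely illustrative, not from any particular project:

```
# Inject current date/time and a small user profile into the system prompt on every request.
from datetime import datetime

PROFILE = {"name": "Alex", "location": "Vienna", "profession": "embedded engineer"}

def system_prompt() -> str:
    now = datetime.now().strftime("%A, %d %B %Y %H:%M:%S")
    facts = "; ".join(f"{k}: {v}" for k, v in PROFILE.items())
    return f"You are a personal assistant. Current local time: {now}. User profile: {facts}."

print(system_prompt())
```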
What are the best local embedding models for german? | 10 | I want to build a local RAG for german documents, but I don't know good embedding models for german. | 2024-12-31T08:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hqauuo/what_are_the_best_local_embedding_models_for/ | Xaron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqauuo | false | null | t3_1hqauuo | /r/LocalLLaMA/comments/1hqauuo/what_are_the_best_local_embedding_models_for/ | false | false | self | 10 | null |
What is your favourite LLM for mobile - 2024 | 1 | [removed] | 2024-12-31T09:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hqb7sm/what_is_your_favourite_llm_for_mobile_2024/ | Loveandfucklife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqb7sm | false | null | t3_1hqb7sm | /r/LocalLLaMA/comments/1hqb7sm/what_is_your_favourite_llm_for_mobile_2024/ | false | false | self | 1 | null |
is codeforces a reliable benchmark | 1 | I keep seeing how o3 excels in codeforces, but isn't most of the questions in its training data? | 2024-12-31T09:44:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hqbgfk/is_codeforces_a_reliable_benchmark/ | Pretty_Afternoon9022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqbgfk | false | null | t3_1hqbgfk | /r/LocalLLaMA/comments/1hqbgfk/is_codeforces_a_reliable_benchmark/ | false | false | self | 1 | null |
Ollama, HomeAssistant, websearch/ realtime info | 11 | I have set up Ollama with Home Assistant (with Whisper and Piper) as a free and local alternative to OpenAI and Google.
It works well in instructing control of my devices and also using data from sensors like my weather station.
However, I'm on a quest to make the assistant more Siri/Google Home like, where I can sometimes ask for realtime or up-to-date info. Some local LLMs have a knowledge cutoff that is 6 months or a year old.
Is there something I can use to feed Ollama some websearch info to pass on to HA?
As a noob in all this, I'm just trying to figure out which tools to use for such a use case.
I was playing with OpenWebUI on the side, thinking of it as an additional chat, but couldn't figure out which search API is fully free.
System is on unraid server, 64gb ram + 16gb gpu | 2024-12-31T09:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hqbkpa/ollama_homeassistant_websearch_realtime_info/ | Micro_FX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqbkpa | false | null | t3_1hqbkpa | /r/LocalLLaMA/comments/1hqbkpa/ollama_homeassistant_websearch_realtime_info/ | false | false | self | 11 | null |
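One common pattern is to fetch a few search results yourself and prepend them to the prompt before it reaches Ollama. A rough sketch using the duckduckgo_search package (no API key needed); the Home Assistant wiring is left out and all names are assumptions:

```
# Feed fresh web search results to a local Ollama model as context.
import requests
from duckduckgo_search import DDGS

def answer_with_search(question: str, model: str = "llama3.1") -> str:
    hits = DDGS().text(question, max_results=5)
    context = "\n".join(f"- {h['title']}: {h['body']}" for h in hits)
    r = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "stream": False,
            "messages": [
                {"role": "system", "content": f"Use this fresh web context when answering:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
        timeout=120,
    )
    return r.json()["message"]["content"]

print(answer_with_search("What is the latest stable Home Assistant release?"))
```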
Why are there not already plenty of 3rd-party providers for DeepSeek V3? | 73 | I mean, literally anyone can download a SOTA model and make money by serving it (the license allows that), so why does no one want to?
I'm eager to pay a premium knowing my prompts are promptly deleted by a company under a jurisdiction I more or less trust.
Where is the use of unsanctioned access to the best AI chips by all the other countries? | 2024-12-31T10:06:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hqbqqq/why_there_is_not_already_like_plenty_3rd_party/ | robertpiosik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqbqqq | false | null | t3_1hqbqqq | /r/LocalLLaMA/comments/1hqbqqq/why_there_is_not_already_like_plenty_3rd_party/ | false | false | self | 73 | null |
New to Llama, questions about privacy | 1 | [removed] | 2024-12-31T10:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hqbz0e/new_to_llama_questions_about_privacy/ | thed0pepope | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqbz0e | false | null | t3_1hqbz0e | /r/LocalLLaMA/comments/1hqbz0e/new_to_llama_questions_about_privacy/ | false | false | self | 1 | null |
Introducing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset for Reasoning Model Post-Training | 1 | [removed] | 2024-12-31T10:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hqbz1b/introducing_longtalkcot_v01_a_very_long/ | transformer_ML | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqbz1b | false | null | t3_1hqbz1b | /r/LocalLLaMA/comments/1hqbz1b/introducing_longtalkcot_v01_a_very_long/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200}, 'variants': {}}]} |
Control all elements while generating your story: | 1 | [removed] | 2024-12-31T10:41:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hqc7ol/control_all_elements_while_generating_your_story/ | Personal-Dot-380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqc7ol | false | null | t3_1hqc7ol | /r/LocalLLaMA/comments/1hqc7ol/control_all_elements_while_generating_your_story/ | false | false | self | 1 | null |
Kurrent Fine-Tuning | 2 | Does anyone have experiences with fine-tuning handwriting in LLaMA vision 3.2? Does it make sense to have single characters or full documents as dataset? Is there anything else to consider?
I want to finetune "Kurrent" (https://en.wikipedia.org/wiki/Kurrent) - an old German hand writing which currently doesn't work. Any suggestions/hints would be helpful. | 2024-12-31T11:15:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hqco96/kurrent_finetuning/ | der_ele | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqco96 | false | null | t3_1hqco96 | /r/LocalLLaMA/comments/1hqco96/kurrent_finetuning/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'uo8PNcSDNfhqWG6BE1hD31fb9oTZq8VMGvpFBqG9SEs', 'resolutions': [{'height': 123, 'url': 'https://external-preview.redd.it/e3tYudng3lvrtfVXQ1_zOxv0b98QNfidN73xkrGSEvk.jpg?width=108&crop=smart&auto=webp&s=32b6cfd1907f8209a40392b8e5a4d300ce7d0e87', 'width': 108}, {'height': 247, 'url': 'https://external-preview.redd.it/e3tYudng3lvrtfVXQ1_zOxv0b98QNfidN73xkrGSEvk.jpg?width=216&crop=smart&auto=webp&s=b63cc86e2e55cb8a5eb0d9c500f6a13e8fad4b1e', 'width': 216}, {'height': 366, 'url': 'https://external-preview.redd.it/e3tYudng3lvrtfVXQ1_zOxv0b98QNfidN73xkrGSEvk.jpg?width=320&crop=smart&auto=webp&s=260eda1e2fca58193583c908cceeea76f9a01171', 'width': 320}, {'height': 732, 'url': 'https://external-preview.redd.it/e3tYudng3lvrtfVXQ1_zOxv0b98QNfidN73xkrGSEvk.jpg?width=640&crop=smart&auto=webp&s=250bb2a4047a2596624749bd492a65a37e3c858d', 'width': 640}, {'height': 1099, 'url': 'https://external-preview.redd.it/e3tYudng3lvrtfVXQ1_zOxv0b98QNfidN73xkrGSEvk.jpg?width=960&crop=smart&auto=webp&s=34b2aa1b0481b645b9ddb9d1e6d605a952f1c00d', 'width': 960}, {'height': 1236, 'url': 'https://external-preview.redd.it/e3tYudng3lvrtfVXQ1_zOxv0b98QNfidN73xkrGSEvk.jpg?width=1080&crop=smart&auto=webp&s=c4a295110ae55715fda11559319c97753e2607ec', 'width': 1080}], 'source': {'height': 1374, 'url': 'https://external-preview.redd.it/e3tYudng3lvrtfVXQ1_zOxv0b98QNfidN73xkrGSEvk.jpg?auto=webp&s=f42ab6117270b5910d4771f1df8d783667bde52d', 'width': 1200}, 'variants': {}}]} |
Prompt to control all elements of your story: | 1 | [removed] | 2024-12-31T11:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hqcpfc/prompt_to_control_all_elements_of_your_story/ | Personal-Dot-380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqcpfc | false | null | t3_1hqcpfc | /r/LocalLLaMA/comments/1hqcpfc/prompt_to_control_all_elements_of_your_story/ | false | false | self | 1 | null |
Speculative Decoding, local setup experience | 3 | What is the currently best (bleeding edge/in-development) inference engine for doing speculative decoding locally? What target and draft models are you using and what kind of speedups are you seeing?
I am used to ollama/llama.cpp, and while there is some support/desire for speculative decoding (https://github.com/ollama/ollama/issues/5800), I've also noticed that (at least) vllm may offer more sophisticated options (https://docs.vllm.ai/en/latest/usage/spec_decode.html)... now I am considering extending my local setup to also operate under vllm, unless I should check out something else first.
Thanks! | 2024-12-31T11:24:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hqcszg/speculative_decoding_local_setup_experience/ | bfroemel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqcszg | false | null | t3_1hqcszg | /r/LocalLLaMA/comments/1hqcszg/speculative_decoding_local_setup_experience/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'uH47gKuOi4NeLOi9IF_OvG2i7X8nBrG01OcJndYCGRw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m1BAU-5nRfbzRhVtsHI8OtudXCCS8nvywve41tec1As.jpg?width=108&crop=smart&auto=webp&s=abf4e868e77cd07d342f439244add7afbd455544', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m1BAU-5nRfbzRhVtsHI8OtudXCCS8nvywve41tec1As.jpg?width=216&crop=smart&auto=webp&s=264533a7ce94cde5bbea0cbff458dd2c49a049f4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m1BAU-5nRfbzRhVtsHI8OtudXCCS8nvywve41tec1As.jpg?width=320&crop=smart&auto=webp&s=5dd1a12af3fb4cdf4bda3740fbd96feb8d3eb615', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m1BAU-5nRfbzRhVtsHI8OtudXCCS8nvywve41tec1As.jpg?width=640&crop=smart&auto=webp&s=98e8582aa34b7b987f0f6ac07c8741524420518c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m1BAU-5nRfbzRhVtsHI8OtudXCCS8nvywve41tec1As.jpg?width=960&crop=smart&auto=webp&s=e2e4e294171f4b8d2d7eae4c1f9879fa6d424218', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m1BAU-5nRfbzRhVtsHI8OtudXCCS8nvywve41tec1As.jpg?width=1080&crop=smart&auto=webp&s=3f20717c7857903ae05333f1d50aa024028191a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m1BAU-5nRfbzRhVtsHI8OtudXCCS8nvywve41tec1As.jpg?auto=webp&s=784ce34c358f06f9c4328000433d07b633c4ed09', 'width': 1200}, 'variants': {}}]} |
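For anyone wanting a starting point, a minimal offline-inference sketch of speculative decoding in vLLM looks roughly like this (parameter names follow the spec-decode docs linked above; verify against your installed version, as the API has been changing, and the model pair is only an example):

```
# Speculative decoding in vLLM with a small draft model from the same family.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-14B-Instruct",               # target model
    speculative_model="Qwen/Qwen2.5-0.5B-Instruct",  # draft model (shares the tokenizer)
    num_speculative_tokens=5,
)
out = llm.generate(
    ["Explain speculative decoding in one paragraph."],
    SamplingParams(temperature=0, max_tokens=200),
)
print(out[0].outputs[0].text)
```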
Experimental AI-Phone Benchmarking Initiative | 44 | Hey everyone,
I’ve started an **experimental AI-phone benchmarking initiative** to help us better understand the performance of AI-phones as we move beyond traditional smartphones. This benchmarking is built around GGUF models and **llama.cpp**, integrated with **PocketPal AI**.
# Join the Beta Program for PocketPal AI 1.6.2 (benchmarking feature) here:
* [Google Play: https://play.google.com/store/apps/details?id=com.pocketpalai](https://play.google.com/store/apps/details?id=com.pocketpalai):
* [TestFlight (iOS): https://testflight.apple.com/join/B3KE74MS](https://testflight.apple.com/join/B3KE74MS)
# After joining:
1. Download the beta version (1.6.2).
2. Download/Choose a model.
3. Navigate to the Benchmark page in the app and run the benchmark with the downloaded model.
4. **Important:** Ensure the settings are "**default**":
* **PP:** 512
* **TG:** 128
5. Submit your results to the leaderboard, woohoo
# Leaderboard:
View the results here: [AI-Phone Leaderboard](https://huggingface.co/spaces/a-ghorbani/ai-phone-leaderboard)
# Ranking System Details
# Scoring Algo:
1. **Weights:**
* Prompt Processing (PP): **40%**
* Token Generation (TG): **60%**
* (PP is weighted less than TG since PP is a one-time cost per prompt)
2. **Quantization Quality Factors:**
* F16/F32: **1.0**
* Q8: **0.8**
* Q6: **0.6**
* (Scales linearly to Q1 at **0.1**)
3. **Performance Score Formula:**
* Base Score: `base_score = (TG_speed * 0.6) + (PP_speed * 0.4)`
* Adjusted Score: `performance_score = base_score * model_size * quant_factor`
* Normalized Score: `normalized_score = (performance_score / max_performance_score) * 100`
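In code, the scoring rules above boil down to something like the following sketch (my own transcription, not the leaderboard's source):

```
# Direct transcription of the scoring rules: weighted speeds, scaled by model size and quant quality.
QUANT_FACTOR = {"F32": 1.0, "F16": 1.0, "Q8": 0.8, "Q6": 0.6, "Q5": 0.5,
                "Q4": 0.4, "Q3": 0.3, "Q2": 0.2, "Q1": 0.1}

def performance_score(pp_speed: float, tg_speed: float, model_size_b: float, quant: str) -> float:
    base_score = tg_speed * 0.6 + pp_speed * 0.4
    return base_score * model_size_b * QUANT_FACTOR[quant]

def normalize(scores: list[float]) -> list[float]:
    top = max(scores)
    return [s / top * 100 for s in scores]

print(performance_score(pp_speed=250.0, tg_speed=20.0, model_size_b=3.8, quant="Q4"))
```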
# Data Aggregation:
We group data by normalized device IDs and platform to ensure consistency:
def normalize_device_id(device_info):
if device_info["systemName"].lower() == "ios":
return f"iOS/{device_info['model']}"
memory_tier = f"{device_info['totalMemory'] // (1024**3)}GB"
return f"{device_info['brand']}/{device_info['model']}/{memory_tier}"
# Feedback Wanted ...
Let me know how useful you may find this benchmarking.
Specifically, I’m looking for feedback on both the **app** and the **benchmarking approach** (rankings, methodology, etc.). The benchmarking app is open-source, hosted on Hugging Face Spaces, and the UI is built using Streamlit. (For those that might not know PocketPal AI, it is also open source: https://github.com/a-ghorbani/pocketpal-ai)
Your input will be invaluable as we refine the process and as a community we can create a standard for benchmarking AI-Phones.
| 2024-12-31T11:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hqctlk/experimental_aiphone_benchmarking_initiative/ | Ill-Still-6859 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqctlk | false | null | t3_1hqctlk | /r/LocalLLaMA/comments/1hqctlk/experimental_aiphone_benchmarking_initiative/ | false | false | 44 | {'enabled': False, 'images': [{'id': 'cndRW6QCJG_he3FSQdAk6GKmH0k830rmqID0DT6ULIg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?width=108&crop=smart&auto=webp&s=ff67678f8728dd5b0425512e5ca6e50cba983d1b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?width=216&crop=smart&auto=webp&s=0a1c0b42ea72559753429f6e3d9b11dd58e2d8c9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?width=320&crop=smart&auto=webp&s=532a838719cf510149e84e0217af79a892fcdec6', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?auto=webp&s=f6b130445e14d2479dc341f75f12e61e5928d40e', 'width': 512}, 'variants': {}}]} |
|
Practical "Local" Config for DeepSeek V3? | 3 | Looking for advice on a practical hardware setup (new or second-hand) to run DeepSeek v3 at home at reasonable speed. What’s the best config (CPU, RAM, quantization) to achieve \~10-20 tokens/sec? | 2024-12-31T12:42:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hqdxoa/practical_local_config_for_deepseek_v3/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqdxoa | false | null | t3_1hqdxoa | /r/LocalLLaMA/comments/1hqdxoa/practical_local_config_for_deepseek_v3/ | false | false | self | 3 | null |
Inference integration directly in Linux shell? | 5 | I am looking for a tool that allows me to chat directly with an AI inside bash, one that is always available without the need to start a server or anything.
I'm thinking about a background service that connects to an API, plus a little bash command that sends text through the background service directly to Kobold, LM Studio, an external resource, etc., like so:
`~$ ai-tool [options] some text you want to send to the service, no json no "" or other marks needed`
`> Response from AI`
It should also be possible to configure it so that it can send a backlog of n lines of the last bash output and the last bash commands to the AI for shell context (of course only to a trusted local API), for instance via an option flag like `-n100`. | 2024-12-31T13:03:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hqe9l4/inference_integration_directly_in_linux_shell/ | dreamyrhodes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqe9l4 | false | null | t3_1hqe9l4 | /r/LocalLLaMA/comments/1hqe9l4/inference_integration_directly_in_linux_shell/ | false | false | self | 5 | null |
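A minimal sketch of such a tool, assuming an OpenAI-compatible server is already running locally (the base URL below is LM Studio's default; koboldcpp and others expose the same interface) and treating piped stdin as the shell-context mechanism; everything here is illustrative:

```
#!/usr/bin/env python3
# ai-tool: send everything after the command name to a local OpenAI-compatible API.
import sys
import requests

BASE_URL = "http://localhost:1234/v1"   # adjust to your local server
prompt = " ".join(sys.argv[1:])
context = "" if sys.stdin.isatty() else sys.stdin.read()

content = f"Shell context:\n{context}\n\n{prompt}" if context else prompt
r = requests.post(
    f"{BASE_URL}/chat/completions",
    json={"model": "local-model", "messages": [{"role": "user", "content": content}]},
    timeout=120,
)
print(r.json()["choices"][0]["message"]["content"])
```

Saved as `ai-tool` somewhere on PATH, usage then looks like `ai-tool how do I list open ports` or `history | tail -n 100 | ai-tool why did the last command fail`.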
3x Arc A770 build | 1 | [removed] | 2024-12-31T13:09:06 | https://www.reddit.com/gallery/1hqed37 | Echo9Zulu- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hqed37 | false | null | t3_1hqed37 | /r/LocalLLaMA/comments/1hqed37/3x_arc_a770_build/ | false | false | 1 | null |
|
AMD Strix Halo or Apple M4? | 1 | [removed] | 2024-12-31T13:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hqeftt/amd_strix_halo_or_apple_m4/ | Physical-Security115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqeftt | false | null | t3_1hqeftt | /r/LocalLLaMA/comments/1hqeftt/amd_strix_halo_or_apple_m4/ | false | false | self | 1 | null |
Experimenting with AI-powered command line for Ubuntu terminal (llama-3.3-70b + Groq under the hood) | 3 | I've been playing around with an AI-powered command line for the Ubuntu terminal, and it's been a fun ride!
I even set up a web-based version using `ttyd` so you can play around with it directly (in your web browser and without registration) [https://clai-container.tail8d8623.ts.net/](https://clai-container.tail8d8623.ts.net/)
Basically, I just ask it to perform different tasks, and the AI figures out the one-liner to do it. For example, I asked it to print "LLaMA" in ASCII
https://preview.redd.it/36koyw6ar6ae1.png?width=688&format=png&auto=webp&s=25b58d4f23f349b6fa12846db21b08f1b82f38ea
https://preview.redd.it/h9g72qpfr6ae1.png?width=1398&format=png&auto=webp&s=ecf16aee6ba55fe928679b37581bdab51b05b2ca
Some fun ideas to try:
* "Run hello world in 5 programming languages"
* "Find the top 10 biggest files"
Feel free to give it a shot and share your experiments! :)
P.S.
This PoC leverages the llama-3.3-70b-versatile model running on Groq’s custom AI silicon (LPUs), delivering an impressive 1200 tokens per second - significantly outpacing existing services that typically achieve <100 tokens per second.
The environment is standard Ubuntu 24.04. The command-line interface is powered by the excellent yai project, while the web terminal utilizes ttyd. | 2024-12-31T13:28:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hqeoz6/experimenting_with_aipowered_command_line_for/ | aospan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqeoz6 | false | null | t3_1hqeoz6 | /r/LocalLLaMA/comments/1hqeoz6/experimenting_with_aipowered_command_line_for/ | false | false | 3 | null |
|
Radeon GPU for local LLM? | 8 | I noticed that a 24gb Radeon 7900 XTX is less than $1000. I could buy 2 for less than a current RTX 4090 24gb.
Are Radeons a good choice for a local LLM server? Can they be used in dual mode for 48gb VRAM? | 2024-12-31T13:39:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hqeviv/radeon_gpu_for_local_llm/ | PetMogwai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqeviv | false | null | t3_1hqeviv | /r/LocalLLaMA/comments/1hqeviv/radeon_gpu_for_local_llm/ | false | false | self | 8 | null |
Deepseek question | 1 | [removed] | 2024-12-31T13:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hqezef/deepseek_question/ | lsb7402 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqezef | false | null | t3_1hqezef | /r/LocalLLaMA/comments/1hqezef/deepseek_question/ | false | false | self | 1 | null |
Any solutions to give LLaMA 3 internet access and save its reponse? Kinda like perplexity but no gui | 3 | Hello. I have about 80k items that I need LLaMA to check online then return certain answers for me to save in my database. Currently, the only api available that offers this is Perplexity with the online LLaMA 3 Sonar models. However, perplexity charges 5$ per 1k requests, which is unnecessarily high. What's the recommended method for me to either locally run, or deploy online, a LLaMA 3 model that can access the internet and return answers for me in a way that I can capture into json and save into my database? I can code in Python and Golang.
Thank you for reading! | 2024-12-31T13:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hqf68q/any_solutions_to_give_llama_3_internet_access_and/ | uouzername | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqf68q | false | null | t3_1hqf68q | /r/LocalLLaMA/comments/1hqf68q/any_solutions_to_give_llama_3_internet_access_and/ | false | false | self | 3 | null |
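A hedged sketch of the overall loop: call an OpenAI-compatible "online" endpoint (Perplexity's API follows this interface; a local model plus a search layer would slot in the same way) and persist the answer in SQLite. The base URL, key, and model name are placeholders to verify with the provider:

```
# Query an online model per item and store the answer locally.
import sqlite3
from openai import OpenAI

client = OpenAI(base_url="https://api.perplexity.ai", api_key="YOUR_KEY")
db = sqlite3.connect("items.db")
db.execute("CREATE TABLE IF NOT EXISTS answers (item TEXT PRIMARY KEY, answer TEXT)")

def check_item(item: str) -> str:
    resp = client.chat.completions.create(
        model="llama-3.1-sonar-large-128k-online",  # example name, check the provider's model list
        messages=[{"role": "user", "content": f"Answer in JSON with keys 'found' and 'summary': {item}"}],
    )
    return resp.choices[0].message.content

for item in ["Item A", "Item B"]:
    db.execute("INSERT OR REPLACE INTO answers VALUES (?, ?)", (item, check_item(item)))
db.commit()
```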
What's the bare minimum gpu needed for reasonable local llm performance? AMD or Intel cards? | 0 | Basically this. I want to run deepseek3 locally but right now I don't have a modern gpu (just laptops). I want to know what I need to reasonably use a llm for chat and coding assistance | 2024-12-31T14:01:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hqf9yn/whats_the_bare_minimum_gpu_needed_for_reasonable/ | gameguy56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqf9yn | false | null | t3_1hqf9yn | /r/LocalLLaMA/comments/1hqf9yn/whats_the_bare_minimum_gpu_needed_for_reasonable/ | false | false | self | 0 | null |
Why is there no sub-10B recent model? | 1 | [removed] | 2024-12-31T14:15:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hqfjen/why_is_there_no_sub10b_recent_model/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqfjen | false | null | t3_1hqfjen | /r/LocalLLaMA/comments/1hqfjen/why_is_there_no_sub10b_recent_model/ | false | false | self | 1 | null |
Why is there no sub-10B recent model? | 1 | [removed] | 2024-12-31T14:16:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hqfjo5/why_is_there_no_sub10b_recent_model/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqfjo5 | false | null | t3_1hqfjo5 | /r/LocalLLaMA/comments/1hqfjo5/why_is_there_no_sub10b_recent_model/ | false | false | self | 1 | null |
Is there no more progress in lightweight models? | 0 | Hi. Am I wrong, or has no new lightweight model (i.e., 10B or fewer parameters) been released recently, despite a good number of new models at higher parameter counts (e.g., 33B, 70B)?
Is there a specific reason, or are these simply not as much of a priority as SotA models? Is there anything new that I missed? | 2024-12-31T14:17:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hqfkox/is_there_no_more_progress_in_lightweight_models/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqfkox | false | null | t3_1hqfkox | /r/LocalLLaMA/comments/1hqfkox/is_there_no_more_progress_in_lightweight_models/ | false | false | self | 0 | null |
open-source AI-powered web scraping project | 22 | This open-source AI-powered web scraping project is amazing, and I even created a video tutorial for it!:
[https://github.com/ScrapeGraphAI/Scrapegraph-ai](https://github.com/ScrapeGraphAI/Scrapegraph-ai)
[https://youtu.be/PEB8z48mAhw](https://youtu.be/PEB8z48mAhw) | 2024-12-31T14:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hqfm4f/opensource_aipowered_web_scraping_project/ | GitDit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqfm4f | false | null | t3_1hqfm4f | /r/LocalLLaMA/comments/1hqfm4f/opensource_aipowered_web_scraping_project/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'NU-Od4tHvixdlURCbBmryiz8o3hWvvjum-VwNzjwqhA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Xtbkpgq81Z0wU00d1_Tw8VZmmMJzl0fSe4gpVSAtJcM.jpg?width=108&crop=smart&auto=webp&s=1d311b5aa9fc739bdd3659c3789a99e4c07f1a08', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Xtbkpgq81Z0wU00d1_Tw8VZmmMJzl0fSe4gpVSAtJcM.jpg?width=216&crop=smart&auto=webp&s=e52d4f77b21cf5ff10b6efd58069c35eb050aca8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Xtbkpgq81Z0wU00d1_Tw8VZmmMJzl0fSe4gpVSAtJcM.jpg?width=320&crop=smart&auto=webp&s=c759f112138ef10333ce77d7cbbfe9626108813d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Xtbkpgq81Z0wU00d1_Tw8VZmmMJzl0fSe4gpVSAtJcM.jpg?width=640&crop=smart&auto=webp&s=b869b2f6ce11dc1f0d706401a8e04f5e05dfcf00', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Xtbkpgq81Z0wU00d1_Tw8VZmmMJzl0fSe4gpVSAtJcM.jpg?width=960&crop=smart&auto=webp&s=768de7cde719d0c88ecf951514b8222f363be016', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Xtbkpgq81Z0wU00d1_Tw8VZmmMJzl0fSe4gpVSAtJcM.jpg?width=1080&crop=smart&auto=webp&s=8b62ca68488ec0ba01aa24ca0ed5e01f95af1d7c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Xtbkpgq81Z0wU00d1_Tw8VZmmMJzl0fSe4gpVSAtJcM.jpg?auto=webp&s=238dc64480ffec74728a942a5656895c713634e7', 'width': 1200}, 'variants': {}}]} |
What was the most satisfying way you used llama's json output for? | 1 | [removed] | 2024-12-31T14:37:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hqfxua/what_was_the_most_satisfying_way_you_used_llamas/ | Anarchosyndikalismus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqfxua | false | null | t3_1hqfxua | /r/LocalLLaMA/comments/1hqfxua/what_was_the_most_satisfying_way_you_used_llamas/ | false | false | self | 1 | null |
AWS is currently deploying a cluster with 400k Trainium2 chips for Anthropic called “Project Rainier”. Amazon’s Trainium2 is not a proven “training” chip, and most of the volumes will be in LLM inference. Amazon’s new $4 billion investment in Anthropic is effectively going into this. | 220 | https://semianalysis.com/2024/12/03/amazons-ai-self-sufficiency-trainium2-architecture-networking/ | 2024-12-31T14:37:59 | https://www.reddit.com/gallery/1hqfyai | Personal-Dot-380 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hqfyai | false | null | t3_1hqfyai | /r/LocalLLaMA/comments/1hqfyai/aws_is_currently_deploying_a_cluster_with_400k/ | false | false | 220 | null |
|
Using LoRA with LlamaSharp | 0 | I need some help with using a LoRA with LlamaSharp. I'm using 0.19 of the LlamaSharp nuget. I've tried the following:
Convert base model and LoRA to gguf and load them into LlamaSharp
Problem: I saw LoraAdapter in the api docs but visual studio complains it can't find the constructor. I can't find any other way to load the LoRA.
Merge the LoRA into the base model
Problem: all the scripts I've found complain that they can't import 'is_npu_available' from 'accelerate.utils' or shard_checkpoint from transformers.modeling_utils
I'm not a Python developer, but I have been a software engineer for 30 years in a bunch of different languages. I suspect there's some Python-ism around pip install -r requirements.txt that I'm not grokking. I don't have conda installed except maybe miniconda from oobabooga's install. When I run pip install for whatever thing I downloaded from github it generally complains about dependency version mismatches.
When I run pip list it says accelerate is at v 0.18 when in the starcoder git clone, which is the only environment I've found that has a built in merger that isn't just some random copy paste from the internet.
I've also tried copy pasting other mergers into a .py file in my main oobabooga install folder.
I'm running on windows. I'd be willing to run this step on wsl or Linux but I haven't tried it yet.
| 2024-12-31T14:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hqfyb8/using_lora_with_llamasharp/ | HypnoDaddy4You | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqfyb8 | false | null | t3_1hqfyb8 | /r/LocalLLaMA/comments/1hqfyb8/using_lora_with_llamasharp/ | false | false | self | 0 | null |
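For the merge route described above, a hedged sketch using the standard peft API (rather than the starcoder repo's merge script) is below; the paths are placeholders, and the final GGUF conversion is done afterwards with llama.cpp's convert script. This runs the same on Windows, WSL, or Linux as long as the Python environment is clean:

```
# Merge a LoRA adapter into its base model, then convert the merged folder to GGUF.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()

merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained("path/to/base-model").save_pretrained("merged-model")
# afterwards, from a llama.cpp checkout:
#   python convert_hf_to_gguf.py merged-model --outfile merged-model.gguf
```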
How to Change the Port in Ollama (Version 0.5.4)? | 1 | [removed] | 2024-12-31T14:54:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hqga2b/how_to_change_the_port_in_ollama_version_054/ | umen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqga2b | false | null | t3_1hqga2b | /r/LocalLLaMA/comments/1hqga2b/how_to_change_the_port_in_ollama_version_054/ | false | false | self | 1 | null |
Regarding AI agents | 1 | [removed] | 2024-12-31T14:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hqgawe/regarding_ai_agents/ | TechnicalBalance699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqgawe | false | null | t3_1hqgawe | /r/LocalLLaMA/comments/1hqgawe/regarding_ai_agents/ | false | false | self | 1 | null |
I Had a Dream for the New Year Last Night... but woke up and I was too groggy to remember it clearly. Any help? | 18 | 2024-12-31T15:01:33 | SeymourBits | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqgf3w | false | null | t3_1hqgf3w | /r/LocalLLaMA/comments/1hqgf3w/i_had_a_dream_for_the_new_year_last_night_but/ | false | false | 18 | {'enabled': True, 'images': [{'id': 'DsjEQpv6Cvt3HFoDGjwN059skiQuEzh8sA6NQ0zWoHs', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/tgosxlwx77ae1.png?width=108&crop=smart&auto=webp&s=302ff0693ea5df407072040851a7c1979b4899d0', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/tgosxlwx77ae1.png?width=216&crop=smart&auto=webp&s=1243a5bffff0f4089a6e553cbf5907142abd932a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/tgosxlwx77ae1.png?width=320&crop=smart&auto=webp&s=60d34cb015c49448d58db1689e88702a423dc576', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/tgosxlwx77ae1.png?width=640&crop=smart&auto=webp&s=4386c1dff53475e4eaf7b1ac704e44c8883b974e', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/tgosxlwx77ae1.png?width=960&crop=smart&auto=webp&s=62442f6021caf5d926c9aa2edaace7eee1ce8908', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/tgosxlwx77ae1.png?auto=webp&s=a0b81377f31d004160c216644cb38a842a0245ad', 'width': 1024}, 'variants': {}}]} |
|||
|Help| What is the best llm for coding/programming that is under 5 billion parameters? | 1 | [removed] | 2024-12-31T15:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hqgmr8/help_what_is_the_best_llm_for_codingprogramming/ | 185BCE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqgmr8 | false | null | t3_1hqgmr8 | /r/LocalLLaMA/comments/1hqgmr8/help_what_is_the_best_llm_for_codingprogramming/ | false | false | self | 1 | null |
Testing Llama for cyber security and it's been amazing | 1 | I been testing Llama for cyber security and it's been working much better than ChatGPT: [The Power of LLaMA 3.1 70B Parameter AI for Cybersecurity Threat Detection: A Game Changer for Linux Server Security – Alexander Mirvis](https://www.alexandermirvis.com/the-power-of-llama-3-1-70b-parameter-ai-for-cybersecurity-threat-detection-a-game-changer-for-server-security/) | 2024-12-31T15:17:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hqgqmp/testing_llama_for_cyber_security_and_its_been/ | LynxGeekNYC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqgqmp | false | null | t3_1hqgqmp | /r/LocalLLaMA/comments/1hqgqmp/testing_llama_for_cyber_security_and_its_been/ | false | false | self | 1 | null |
Can 1,000,000 AI agents simulate social media? (Experimenting with AI Research Project OASIS) | 1 | 2024-12-31T15:25:29 | https://dev.to/omnigeorgio/can-1000000-ai-agents-simulate-social-media-ck6 | omnisvosscio | dev.to | 1970-01-01T00:00:00 | 0 | {} | 1hqgwti | false | null | t3_1hqgwti | /r/LocalLLaMA/comments/1hqgwti/can_1000000_ai_agents_simulate_social_media/ | false | false | 1 | {'enabled': False, 'images': [{'id': '2RG5UizjTX04IecmYGFimcPSltyujN-hfsjBttqT2n8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a5NyTWS902iSIgEM9ylC6RUvrE5rXcpUU-GpnOB7aXg.jpg?width=108&crop=smart&auto=webp&s=51cf3a9f239412d5831f4bc06f5031dd636c38be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/a5NyTWS902iSIgEM9ylC6RUvrE5rXcpUU-GpnOB7aXg.jpg?width=216&crop=smart&auto=webp&s=de1b4cacadf87c28c0032de1ea78d5154a2d76cd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/a5NyTWS902iSIgEM9ylC6RUvrE5rXcpUU-GpnOB7aXg.jpg?width=320&crop=smart&auto=webp&s=6fd22b1d2a51d23ac8fe53d53fdce3a2e24d2dbe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/a5NyTWS902iSIgEM9ylC6RUvrE5rXcpUU-GpnOB7aXg.jpg?width=640&crop=smart&auto=webp&s=2652487f27672088199639930ec2de30c9b86167', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/a5NyTWS902iSIgEM9ylC6RUvrE5rXcpUU-GpnOB7aXg.jpg?width=960&crop=smart&auto=webp&s=2baa5cda70b3430f6377ab417e75b1d3fe70ad20', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/a5NyTWS902iSIgEM9ylC6RUvrE5rXcpUU-GpnOB7aXg.jpg?auto=webp&s=005ae64cbedab0348d71858485775de7433c7274', 'width': 1000}, 'variants': {}}]} |
||
smolagents: new agent library by Hugging Face | 114 | Hello! it's Merve from Hugging Face 🤗
we just launched smolagents, a new simple library to unlock both native and traditional (JSON) tool calling for LLMs
using native tool calling through LLMs is as simple as:
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())
agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
We wrote a blog for you to get started [https://huggingface.co/blog/smolagents](https://huggingface.co/blog/smolagents)
Please try and let us know what you think! | 2024-12-31T15:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hqgz3s/smolagents_new_agent_library_by_hugging_face/ | unofficialmerve | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqgz3s | false | null | t3_1hqgz3s | /r/LocalLLaMA/comments/1hqgz3s/smolagents_new_agent_library_by_hugging_face/ | false | false | self | 114 | {'enabled': False, 'images': [{'id': 'LY2JZN_xbWXk0wz7w3JfYlBb20wmTMXGl-sW84L8wA8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6a5miLwMneRzcfvXYaty7V2vwWl7iMtekEBZ0GUETJQ.jpg?width=108&crop=smart&auto=webp&s=dda760b03aec3b45c9422716b097d39b349b1c49', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6a5miLwMneRzcfvXYaty7V2vwWl7iMtekEBZ0GUETJQ.jpg?width=216&crop=smart&auto=webp&s=a96eb87d37e0d34da008809c58267ad4d225deb0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6a5miLwMneRzcfvXYaty7V2vwWl7iMtekEBZ0GUETJQ.jpg?width=320&crop=smart&auto=webp&s=36c8014b8d2c96928b2f574411a7627438c6dd7e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6a5miLwMneRzcfvXYaty7V2vwWl7iMtekEBZ0GUETJQ.jpg?width=640&crop=smart&auto=webp&s=7b78ba096b2c3bc6a1288a9b88102eac9c450621', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/6a5miLwMneRzcfvXYaty7V2vwWl7iMtekEBZ0GUETJQ.jpg?width=960&crop=smart&auto=webp&s=cd07a2443efc166a8409b12affce89b3bf05af79', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/6a5miLwMneRzcfvXYaty7V2vwWl7iMtekEBZ0GUETJQ.jpg?width=1080&crop=smart&auto=webp&s=b9974434763196eaedddef392241851bcfe52195', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/6a5miLwMneRzcfvXYaty7V2vwWl7iMtekEBZ0GUETJQ.jpg?auto=webp&s=9ef696094edf481695480bfaac80e4c8f0508d59', 'width': 1920}, 'variants': {}}]} |
I Had a Dream for the New Year Last Night... but woke up and I was too groggy to remember it clearly. Any help? | 40 | 2024-12-31T15:38:16 | SeymourBits | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqh6cu | false | null | t3_1hqh6cu | /r/LocalLLaMA/comments/1hqh6cu/i_had_a_dream_for_the_new_year_last_night_but/ | false | false | 40 | {'enabled': True, 'images': [{'id': 'EDwwN_alJDsRvyzqafgWQTI9eDpN-0CZqWtT1mCkziM', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/sscq39ane7ae1.png?width=108&crop=smart&auto=webp&s=5d62c2a1aede7b99bdab2a4c4c77a5d6c0a94f19', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/sscq39ane7ae1.png?width=216&crop=smart&auto=webp&s=56f9b7f868941a1730dcf522c73d8aa09d7e1564', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/sscq39ane7ae1.png?width=320&crop=smart&auto=webp&s=fc0ff2bf661f91cebc6eb1e22c3b3d8ba9586d1d', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/sscq39ane7ae1.png?width=640&crop=smart&auto=webp&s=44f267060e1b0e1e89eef79e8b2006ba912fde66', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/sscq39ane7ae1.png?width=960&crop=smart&auto=webp&s=3ca4c928aa3d0fe6656dbf5e1c5eba847f101be7', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/sscq39ane7ae1.png?auto=webp&s=56d1bb2781b48130511750f04e970ca7d4febec3', 'width': 1024}, 'variants': {}}]} |
|||
How to start? | 0 | I’d like to learn how to operate and run LLMs locally. Where should I start?
I consider myself “technically aware,” as I work closely with machine learning engineers, but I haven’t written much (any) code myself. I’ve been using OpenAI since its early days, and with the advancements in the field, it feels like the ultimate use case is to run these models locally for better privacy and control.
How can I get started on this journey? To be clear, my goal isn’t to create my own LLM from scratch, but to become informed and capable enough to use open-source LLMs effectively: hosting, security, tuning, etc. Really, I think software ultimately shifts toward bespoke and local, so I’d like to engage with the open-source community where ideally this all starts. What resources, tools, or steps would you recommend for someone with my background?
| 2024-12-31T16:00:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hqhmpw/how_to_start/ | iLikeCorn193472 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqhmpw | false | null | t3_1hqhmpw | /r/LocalLLaMA/comments/1hqhmpw/how_to_start/ | false | false | self | 0 | null |
Need early AI integration advice for prototype | 2 | Seeking guidance - I made a local Python application that tracks user behavior; input/data is written into a local DB (SQLite). The goal is to have an early prototype that uses AI to learn a user's baseline behavior and alert on deviations or abnormalities. The problem is, we're limited on both budget (free/open source preferred but not opposed to paying) and AI dev experience. The current main focus is completing a prototype, whether using local AI (like Llama 3) or cloud AI. Really looking for suggestions or recommendations on the easiest and most efficient AI model and how to integrate it for this early prototype. Any guidance or advice is greatly appreciated!!! (If this isn't the place to recommend anything other than Llama, maybe you can provide advice on how/why Llama would or would not work well for this?) | 2024-12-31T16:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hqhp7u/need_early_ai_integration_advice_for_prototype/ | jwillistyle7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqhp7u | false | null | t3_1hqhp7u | /r/LocalLLaMA/comments/1hqhp7u/need_early_ai_integration_advice_for_prototype/ | false | false | self | 2 | null
Trying to wrap my head around the performance differences between a M4 Pro 64gb Mac Mini vs a 2x 3090 TR1 w/ 256gb DDR4 | 1 | [removed] | 2024-12-31T16:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hqhqn3/trying_to_wrap_my_head_around_the_performance/ | salec65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqhqn3 | false | null | t3_1hqhqn3 | /r/LocalLLaMA/comments/1hqhqn3/trying_to_wrap_my_head_around_the_performance/ | false | false | self | 1 | null |
Who’s so kind hearted to make the MLX versions of Phi-4? | 1 | [removed] | 2024-12-31T16:11:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hqhvqr/whos_so_kind_hearted_to_make_the_mlx_versions_of/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqhvqr | false | null | t3_1hqhvqr | /r/LocalLLaMA/comments/1hqhvqr/whos_so_kind_hearted_to_make_the_mlx_versions_of/ | false | false | self | 1 | null |
Interesting ARM Hardware on the horizon - Radxa Orion O6. | 24 | I was testing a Radxa RK3588-based SoC and its GPU and NPU capabilities. Apart from the initial difficulties (including poor support for software developers), the hardware is interesting - it supports up to 32GB RAM. It is capable of inferencing a 14B model like Qwen2.5-14B at a few tokens per second using both the OpenCL GPU backend and the NPU at the same time.
Even more interesting is the newly announced product - the Radxa Orion O6.
The major unknown is the real performance of the Cix P1 SoC (it's not a Rockchip part), but the specs are nice:
* CPU: 4x Cortex®-A720 (big cores), 4x Cortex®-A720 (medium cores), 4x Cortex®-A520 (LITTLE cores), 12MB shared L3 cache
* GPU: Arm Immortalis-G720 MC10, hardware ray-tracing enabled; graphics APIs: Vulkan® 1.3, OpenGL® ES 3.2, OpenCL® 3.0
* Neural Processing Unit (NPU): 28.8 TOPS compute; precision support: INT4/INT8/INT16, FP16/BF16, TF32
* RAM: LPDDR5, 128-bit memory bus, 5500 MT/s transfer speed; configurations: 4GB/8GB/16GB/32GB/64GB, with 100GB/s bandwidth
The 64GB version goes for about $450. Mind the power efficiency of ARM.
**I think it might be an interesting Jetson Orin Nano alternative.** | 2024-12-31T16:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hqi2tn/interesting_arm_hardware_on_the_horizon_radxa/ | imkebe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqi2tn | false | null | t3_1hqi2tn | /r/LocalLLaMA/comments/1hqi2tn/interesting_arm_hardware_on_the_horizon_radxa/ | false | false | self | 24 | null
Try Llama 3.1 8B in Your Browser: AQLM.rs Delivers Al at Your Fingertips | HackerNoon | 1 | 2024-12-31T16:24:45 | https://hackernoon.com/try-llama-31-8b-in-your-browser-aqlmrs-delivers-al-at-your-fingertips | mycall | hackernoon.com | 1970-01-01T00:00:00 | 0 | {} | 1hqi5vp | false | null | t3_1hqi5vp | /r/LocalLLaMA/comments/1hqi5vp/try_llama_31_8b_in_your_browser_aqlmrs_delivers/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'l-ekOMio8td6xRcRq9FNqI99kKCgWiRMx6UPckzh1Ts', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y3092VIHNzKfmlWDRauIYoC9GrgDUj0Sv97VlJYK-9c.jpg?width=108&crop=smart&auto=webp&s=94f9fb5f0615ab8901e1f6953168f8e01a156065', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Y3092VIHNzKfmlWDRauIYoC9GrgDUj0Sv97VlJYK-9c.jpg?width=216&crop=smart&auto=webp&s=efe41519a113d0ead260090ca79130f84a4ce1d4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Y3092VIHNzKfmlWDRauIYoC9GrgDUj0Sv97VlJYK-9c.jpg?width=320&crop=smart&auto=webp&s=76718c1b040d5624a72bb91a84531d9d9f779965', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/Y3092VIHNzKfmlWDRauIYoC9GrgDUj0Sv97VlJYK-9c.jpg?width=640&crop=smart&auto=webp&s=ceb3256723c11e43210a0ef9cc9ce39a01ce49a4', 'width': 640}, {'height': 541, 'url': 'https://external-preview.redd.it/Y3092VIHNzKfmlWDRauIYoC9GrgDUj0Sv97VlJYK-9c.jpg?width=960&crop=smart&auto=webp&s=4be64de5d1431c34e05a0d367f73ede127f37aec', 'width': 960}, {'height': 609, 'url': 'https://external-preview.redd.it/Y3092VIHNzKfmlWDRauIYoC9GrgDUj0Sv97VlJYK-9c.jpg?width=1080&crop=smart&auto=webp&s=6aee1e1d4b7b3b883abe22d4b4eaffd2a1d59d3c', 'width': 1080}], 'source': {'height': 880, 'url': 'https://external-preview.redd.it/Y3092VIHNzKfmlWDRauIYoC9GrgDUj0Sv97VlJYK-9c.jpg?auto=webp&s=250f73abb2c4d27ecce04bb647e3ae07211387fa', 'width': 1560}, 'variants': {}}]} |
||
DeepSeek V3 running on llama.cpp wishes you a Happy New Year! | 284 | 2024-12-31T16:34:25 | https://youtu.be/FzCEoTiqP7I | fairydreaming | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1hqidbs | false | {'oembed': {'author_name': 'Dreaming Fairy', 'author_url': 'https://www.youtube.com/@dreamingfairy8804', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/FzCEoTiqP7I?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek V3 running on Epyc 9374F (Q4_K_M, 384GB of RAM, llama.cpp)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FzCEoTiqP7I/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek V3 running on Epyc 9374F (Q4_K_M, 384GB of RAM, llama.cpp)', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'} | t3_1hqidbs | /r/LocalLLaMA/comments/1hqidbs/deepseek_v3_running_on_llamacpp_wishes_you_a/ | false | false | 284 | {'enabled': False, 'images': [{'id': 'WIpY24oMUmveNN9altD7LNCIHVIoueDwIkdRXw681XE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fpike117Cx6LyQ6awQLDQ4Fu54kXpzvd22IzvWo7oCA.jpg?width=108&crop=smart&auto=webp&s=9c1d2d9256b6a252eeb26980f023022eda8380bc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fpike117Cx6LyQ6awQLDQ4Fu54kXpzvd22IzvWo7oCA.jpg?width=216&crop=smart&auto=webp&s=02dc17156059005216c1331900a45210b49a3de4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fpike117Cx6LyQ6awQLDQ4Fu54kXpzvd22IzvWo7oCA.jpg?width=320&crop=smart&auto=webp&s=95134719d079738b0b50dc1738f95d7cf9d22a88', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fpike117Cx6LyQ6awQLDQ4Fu54kXpzvd22IzvWo7oCA.jpg?auto=webp&s=f0f4573cb268a89708661bf81a91b575c505ec7c', 'width': 480}, 'variants': {}}]} |
||
AutoRound-, AutoGPTQ- and AutoAWQ-format quantized models on HF: just my little contribution | 5 | As 2024 comes to a close, I am proud to share that I have successfully uploaded over 215 quantized SLM/LLM models to my Hugging Face account. These models were entirely quantized using the computational resources of my homelab, achieving approximately 72 TFLOPS of performance, powered solely by "domestic" hardware. You can explore the models here: [https://huggingface.co/fbaldassarri](https://huggingface.co/fbaldassarri)
Completing this initial batch took nearly four months of dedicated effort, relying exclusively on my own resources without any cloud services or external credits. The response has been encouraging, with several thousand downloads from my repositories so far. Looking ahead, I am preparing a new series of quantized models, leveraging diverse opensource architectures and openly sharing the methodologies I adopted behind their preparation.
These repositories will remain freely available on Hugging Face, with the goal of accelerating research and development in open-source, community-driven solutions. My broader aim is to contribute to advancing AI in text generation while promoting a more inclusive and democratic approach.
I am particularly focused on advocating for INT4 quantization as an optimal solution across various use cases. As an advocate for Weight-only-Quantization (WoQ) and SignRound methods, my work emphasizes local, private, and personal inference capabilities of our daily-driver PCs.
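For anyone curious what that looks like in practice, here is a minimal sketch of the kind of AutoRound INT4 pass these repos come from (the model name is illustrative, and the exact arguments depend on the auto-round version installed, so treat the signature as an assumption):

```python
# Minimal AutoRound-style WoQ sketch -- argument names follow the intel/auto-round README,
# but verify against your installed version before relying on them.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen2.5-1.5B-Instruct"  # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit weight-only quantization with SignRound-style tuning
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=False)
autoround.quantize()

# The same tuned weights can then be exported for different runtimes
autoround.save_quantized("./qwen2.5-1.5b-autoround", format="auto_round")
```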
I welcome feedback and collaboration: so, if there are specific open models you’d like to see quantized using formats such as AutoRound, AutoGPTQ, AutoAWQ, or OpenVINO IR for experimentation, feel free to connect with me. I am eager to assist wherever possible.
Lastly, I would like to extend my gratitude to Intel’s AI researchers and software engineers for their contributions to open-source frameworks like OpenVINO, NNCF, IPEX, and IPEX-LLM. These tools have been instrumental in maximizing the potential of my hardware. Special thanks go to the Intel AutoRound team, [Wenhua Cheng](https://www.linkedin.com/in/wenhua-cheng-05460bb0/) et al., for their invaluable feedback and exceptional tools.
\#AI #GenerativeAI #NLP #NLU #NLG #LLM #OpenSource
[https://huggingface.co/fbaldassarri](https://preview.redd.it/5syu57ado7ae1.jpg?width=2276&format=pjpg&auto=webp&s=5987933d7b11cbb9c75cfdf04ec32269ab9e9775)
| 2024-12-31T16:42:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hqijn6/autoround_autogptq_and_autoawqformat_quantized/ | fbaldassarri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqijn6 | false | null | t3_1hqijn6 | /r/LocalLLaMA/comments/1hqijn6/autoround_autogptq_and_autoawqformat_quantized/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'mV7BX9MBfVIeVF9e_tatLqw4Cm03xD_9C1R-1228J1M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kZr9QwYzpzj7Wkw6Hxlac2dvz2duz-U604aNuLcYWAE.jpg?width=108&crop=smart&auto=webp&s=64928d62fa58b7fc5e57967af37bd17a1d3047f6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kZr9QwYzpzj7Wkw6Hxlac2dvz2duz-U604aNuLcYWAE.jpg?width=216&crop=smart&auto=webp&s=9b99bad8edc447073a11ef1f97b26142d196550d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kZr9QwYzpzj7Wkw6Hxlac2dvz2duz-U604aNuLcYWAE.jpg?width=320&crop=smart&auto=webp&s=97fe9a42abc9dba668a03f9fdf5b706b490ce7e5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kZr9QwYzpzj7Wkw6Hxlac2dvz2duz-U604aNuLcYWAE.jpg?width=640&crop=smart&auto=webp&s=2c147cf74f8691c4fdac20cd9c17addd263fb48a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kZr9QwYzpzj7Wkw6Hxlac2dvz2duz-U604aNuLcYWAE.jpg?width=960&crop=smart&auto=webp&s=438711aa3ab2268bb1ac9966c37b928308901bed', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kZr9QwYzpzj7Wkw6Hxlac2dvz2duz-U604aNuLcYWAE.jpg?width=1080&crop=smart&auto=webp&s=cc143dc90e8d97349303fdf7e15b9ae266df06b5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kZr9QwYzpzj7Wkw6Hxlac2dvz2duz-U604aNuLcYWAE.jpg?auto=webp&s=d9dd60c4853d39cbad1cd9ad8bb3b568baa70f38', 'width': 1200}, 'variants': {}}]} |
|
2024 AI LLM Timeline (open weights + API access) | 94 | 2024-12-31T17:29:35 | https://v.redd.it/rlsxdmuxx7ae1 | vaibhavs10 | /r/LocalLLaMA/comments/1hqjk44/2024_ai_llm_timeline_open_weights_api_access/ | 1970-01-01T00:00:00 | 0 | {} | 1hqjk44 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rlsxdmuxx7ae1/DASHPlaylist.mpd?a=1738387781%2CMjA4NzQxYzJiMGVkNzhkOTQxODBlNmJmNjVhZTM5MzNlNWNjNWQzOWYxNTZiNDNmZTUyNGE1YWRkMzg5MGFjMQ%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/rlsxdmuxx7ae1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/rlsxdmuxx7ae1/HLSPlaylist.m3u8?a=1738387781%2CNTZhZTQ3OGNlMzYyMDE3YjdkYTIzNTc4YTE1ZWRkMzNjYjk3ZGZhNjkwYzgwOGMyMDJjZDcxMGMyYmI4ZTE0OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rlsxdmuxx7ae1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1670}} | t3_1hqjk44 | /r/LocalLLaMA/comments/1hqjk44/2024_ai_llm_timeline_open_weights_api_access/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx.png?width=108&crop=smart&format=pjpg&auto=webp&s=56810643630ffa7fbf0f190afc16caf81484415d', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx.png?width=216&crop=smart&format=pjpg&auto=webp&s=00bf5f4871b0f18219dadaace1bbc9fc3ff5c136', 'width': 216}, {'height': 206, 'url': 'https://external-preview.redd.it/N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx.png?width=320&crop=smart&format=pjpg&auto=webp&s=7ca4af4c98756e3771e447d92a654e6d90f87535', 'width': 320}, {'height': 413, 'url': 'https://external-preview.redd.it/N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx.png?width=640&crop=smart&format=pjpg&auto=webp&s=937eea6a7e51bb7f0e7b8c6728556aae0f6369f1', 'width': 640}, {'height': 620, 'url': 'https://external-preview.redd.it/N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx.png?width=960&crop=smart&format=pjpg&auto=webp&s=12ffeb8a23b67f3e845560dd0a28e593ce5939d0', 'width': 960}, {'height': 698, 'url': 'https://external-preview.redd.it/N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3c3e992c984df5c9e4539aa45f0121c621bcddc2', 'width': 1080}], 'source': {'height': 2234, 'url': 'https://external-preview.redd.it/N29uY29wdnh4N2FlMYakIzF1HSalWwoyuWBTb4oHSoq1HXTNLXMDMjkaX3Tx.png?format=pjpg&auto=webp&s=d8851269bc5856b796c1c4a0bf1b896e89f0fbe0', 'width': 3456}, 'variants': {}}]} |
||
What would you like to see in Unsloth for 2025? | 82 | Happy new year everyone! First off, I just wanted to say a huge thank you for fine-tuning with Unsloth. The support we’ve gotten from all of you has been incredible, and it means a lot! :))
It’s still just the 2 of us on the team & we've already got loads of ideas for 2025 but we’d love to hear from you guys! What do YOU want to see in Unsloth next year?
You can suggest anything, something super ambitious, or even something tiny! Maybe Diffusion/Whisper support or Unsloth RAG, or maybe just a simple model support. Whatever it is, we want to know!
We’d also love to know:
* What’s been working well for you and what hasn't been?
* What’s been a missing feature?
* How can we make Unsloth easier to use or understand?
* Would better docs or guides (like on creating datasets) help?
Once again, thank you for being part of this journey with us, and happy tuning!
P.S. I’ll be replying to every comment to make sure every voice is heard. | 2024-12-31T18:09:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hqkeyn/what_would_you_like_to_see_in_unsloth_for_2025/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqkeyn | false | null | t3_1hqkeyn | /r/LocalLLaMA/comments/1hqkeyn/what_would_you_like_to_see_in_unsloth_for_2025/ | false | false | self | 82 | {'enabled': False, 'images': [{'id': 'oUAe34zUCLxMUIpYtOvOz6aYou2CnbtJjhJZ0bwJ6Jg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=108&crop=smart&auto=webp&s=6481fbac644d8a96c2918c63e805d1c62e24cbe5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=216&crop=smart&auto=webp&s=941b00cf4a68a70df266160fe06769bc2a817a41', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=320&crop=smart&auto=webp&s=e794c7cbf042b8d8e6fdd8f8c239e0f5cb398261', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=640&crop=smart&auto=webp&s=57fbf9c89972d5c31e3bd2d3354696be4e8d5b9d', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=960&crop=smart&auto=webp&s=557f9a403410be41c1438b6d2b1a2acd9d507da4', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=1080&crop=smart&auto=webp&s=989ea96f774aa62c199da9564be3b7b646db1494', 'width': 1080}], 'source': {'height': 834, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?auto=webp&s=fb46a23aaa0ed1c5044eaea486ff79352cce2675', 'width': 1584}, 'variants': {}}]} |
|Help| What is the best llm for coding/programming that is under 5 billion parameters? | 1 | [removed] | 2024-12-31T18:15:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hqkjcn/help_what_is_the_best_llm_for_codingprogramming/ | 185BCE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqkjcn | false | null | t3_1hqkjcn | /r/LocalLLaMA/comments/1hqkjcn/help_what_is_the_best_llm_for_codingprogramming/ | false | false | self | 1 | null |
Deepseek and qwen | 1,169 | 2024-12-31T18:18:39 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqklqj | false | null | t3_1hqklqj | /r/LocalLLaMA/comments/1hqklqj/deepseek_and_qwen/ | false | false | 1,169 | {'enabled': True, 'images': [{'id': 'bj2JawhiIeYTITWPf4C_sgOwRh8DZLKTHOBn17j2YIw', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/46be5fpe78ae1.png?width=108&crop=smart&auto=webp&s=be22f435d21f6cc9272ed055d66070fc3e0d4e8a', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/46be5fpe78ae1.png?width=216&crop=smart&auto=webp&s=ea6c8f82ff7122e146eb9c079e3e7691582545fc', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/46be5fpe78ae1.png?width=320&crop=smart&auto=webp&s=dfb8311ab88097f99fb25e1824d56f3c4c104700', 'width': 320}, {'height': 519, 'url': 'https://preview.redd.it/46be5fpe78ae1.png?width=640&crop=smart&auto=webp&s=f58a5556cfa2ade7fccf23983439288ed0f43bbe', 'width': 640}, {'height': 779, 'url': 'https://preview.redd.it/46be5fpe78ae1.png?width=960&crop=smart&auto=webp&s=9e8ac1d7728c025e228310458275313e15bee67c', 'width': 960}, {'height': 877, 'url': 'https://preview.redd.it/46be5fpe78ae1.png?width=1080&crop=smart&auto=webp&s=1f51acc1df1ed167748e543b9de1e258d3b59269', 'width': 1080}], 'source': {'height': 877, 'url': 'https://preview.redd.it/46be5fpe78ae1.png?auto=webp&s=6bc45e5c603bf884dede678aa503d5d40fed2dcd', 'width': 1080}, 'variants': {}}]} |
|||
Alibaba slashes prices on large language models by up to 85% as China AI rivalry heats up | 430 | 2024-12-31T18:34:49 | https://www.cnbc.com/2024/12/31/alibaba-baba-cloud-unit-slashes-prices-on-ai-models-by-up-to-85percent.html | fallingdowndizzyvr | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1hqkxy0 | false | null | t3_1hqkxy0 | /r/LocalLLaMA/comments/1hqkxy0/alibaba_slashes_prices_on_large_language_models/ | false | false | 430 | {'enabled': False, 'images': [{'id': 'hnM3vOU6qYJjEKJOhb_rvpM3dIm3uJ-g6ijpSWCb8ko', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/G2KFdf5nT3BnSoVAXN-trKagIf0brGjo1XZ9YBjXrrU.jpg?width=108&crop=smart&auto=webp&s=476bd29b1e9e14a91e2e8b328f219f0f35a9e8df', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/G2KFdf5nT3BnSoVAXN-trKagIf0brGjo1XZ9YBjXrrU.jpg?width=216&crop=smart&auto=webp&s=8eebeb0353bcc7e692ac27de8504f2d16a211508', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/G2KFdf5nT3BnSoVAXN-trKagIf0brGjo1XZ9YBjXrrU.jpg?width=320&crop=smart&auto=webp&s=6686f4dfdb52d107b9b5cc180c7a287fe46a5c41', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/G2KFdf5nT3BnSoVAXN-trKagIf0brGjo1XZ9YBjXrrU.jpg?width=640&crop=smart&auto=webp&s=5a1a2af1681be8dbd94bb528bc8d1b95e1e2a669', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/G2KFdf5nT3BnSoVAXN-trKagIf0brGjo1XZ9YBjXrrU.jpg?width=960&crop=smart&auto=webp&s=67d715105d72b62c0a5e8c4ddd1207962f7f1f54', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/G2KFdf5nT3BnSoVAXN-trKagIf0brGjo1XZ9YBjXrrU.jpg?width=1080&crop=smart&auto=webp&s=1541791109cc24c0e75a5a187ae660f5d37f4e07', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/G2KFdf5nT3BnSoVAXN-trKagIf0brGjo1XZ9YBjXrrU.jpg?auto=webp&s=b22aabcfa843f7af43f1dcdab36a3c421e8027f4', 'width': 1920}, 'variants': {}}]} |
||
Llama doesn't update with the new doc provided for RAG? | 0 | I am completely new to LLMs and related stuff.
I was following a tutorial on YouTube to get an idea.
I made a RAG app using Streamlit, as shown in the tutorial.
The issue is that I first ran this code with "sample.pdf" and then wanted to try it with the "ck3small" PDF, to check whether the code works with the new doc provided.
It still refers to the old PDF when answering the questions.
I checked other threads where it was said to write something to clear the cache, which I did, and the problem still exists:
I am not exactly sure if this is a Llama, LangChain, or Streamlit issue.
Here's the code.
# app.py
import streamlit as st
import os
import logging
from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_ollama import OllamaEmbeddings
from langchain.prompts import ChatPromptTemplate, PromptTemplate
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.retrievers.multi_query import MultiQueryRetriever
import ollama
# Configure logging
logging.basicConfig(level=logging.INFO)
# Constants
DOC_PATH = "./ck3small.pdf"
MODEL_NAME = "llama3.2"
EMBEDDING_MODEL = "nomic-embed-text"
VECTOR_STORE_NAME = "simple-rag"
PERSIST_DIRECTORY = "./chroma_db"
# Function to ingest PDF documents
def ingest_pdf(doc_path):
"""Load PDF documents."""
if os.path.exists(doc_path):
loader = UnstructuredPDFLoader(file_path=doc_path)
data = loader.load()
logging.info("PDF loaded successfully.")
return data
else:
logging.error(f"PDF file not found at path: {doc_path}")
st.error("PDF file not found.")
return None
# Function to split documents into smaller chunks
def split_documents(documents):
"""Split documents into smaller chunks."""
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)
chunks = text_splitter.split_documents(documents)
logging.info("Documents split into chunks.")
return chunks
# Function to load or create the vector database
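# NOTE: @st.cache_resource plus the persisted ./chroma_db directory means this function
# returns the previously built index on later runs; a new DOC_PATH is only ingested
# if the Streamlit cache and the persist directory are cleared first.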
@st.cache_resource(show_spinner=False)
def load_vector_db(doc_path):
"""Load or create the vector database."""
# Pull the embedding model if not already available
ollama.pull(EMBEDDING_MODEL)
embedding = OllamaEmbeddings(model=EMBEDDING_MODEL)
if os.path.exists(PERSIST_DIRECTORY):
vector_db = Chroma(
embedding_function=embedding,
collection_name=VECTOR_STORE_NAME,
persist_directory=PERSIST_DIRECTORY,
)
logging.info("Loaded existing vector database.")
else:
# Load and process the PDF document
data = ingest_pdf(doc_path)
if data is None:
return None
# Split the documents into chunks
chunks = split_documents(data)
vector_db = Chroma.from_documents(
documents=chunks,
embedding=embedding,
collection_name=VECTOR_STORE_NAME,
persist_directory=PERSIST_DIRECTORY,
)
vector_db.persist()
logging.info("Vector database created and persisted.")
return vector_db
# Function to create a multi-query retriever
def create_retriever(vector_db, llm):
"""Create a multi-query retriever."""
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from
a vector database. By generating multiple perspectives on the user question, your
goal is to help the user overcome some of the limitations of the distance-based
similarity search. Provide these alternative questions separated by newlines.
Original question: {question}""",
)
retriever = MultiQueryRetriever.from_llm(
vector_db.as_retriever(), llm, prompt=QUERY_PROMPT
)
logging.info("Retriever created.")
return retriever
# Function to create the chain with preserved syntax
def create_chain(retriever, llm):
"""Create the chain with preserved syntax."""
# RAG prompt
template = """Answer the question based ONLY on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
logging.info("Chain created with preserved syntax.")
return chain
# Main function
def main():
st.title("Document Assistant")
# User input
user_input = st.text_input("Enter your question:", "")
if user_input:
with st.spinner("Generating response..."):
try:
# Initialize the language model
llm = ChatOllama(model=MODEL_NAME)
# Load the vector database
vector_db = load_vector_db(DOC_PATH)
if vector_db is None:
st.error("Failed to load or create the vector database.")
return
# Create the retriever
retriever = create_retriever(vector_db, llm)
# Create the chain
chain = create_chain(retriever, llm)
# Get the response
response = chain.invoke(input=user_input)
st.markdown("**Assistant:**")
st.write(response)
except Exception as e:
st.error(f"An error occurred: {str(e)}")
else:
st.info("Please enter a question to get started.")
if __name__ == "__main__":
    main()
| 2024-12-31T18:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hql9zr/llama_doesnt_update_with_the_new_doc_provided_for/ | Radiant_Butterfly982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hql9zr | false | null | t3_1hql9zr | /r/LocalLLaMA/comments/1hql9zr/llama_doesnt_update_with_the_new_doc_provided_for/ | false | false | self | 0 | null |
3090 vs 4090 vs other? | 2 | I just sold my 4070 Super (12GB VRAM) and am looking for a replacement with 24GB VRAM.
I'm considering the 3090 and 4090. Which do you think makes more sense for my use cases?
1. Local AI hosting
2. Video editing (DaVinci Resolve)
3. 3D modeling (Blender)
If there's a better alternative (not necessarily cheaper), I'd love to hear your suggestions. | 2024-12-31T18:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hqld1u/3090_vs_4090_vs_other/ | the_forbidden_won | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqld1u | false | null | t3_1hqld1u | /r/LocalLLaMA/comments/1hqld1u/3090_vs_4090_vs_other/ | false | false | self | 2 | null |
Who would be interested in a prompting + fine-tuning LLM Chess Tournament? | 3 | Earlier this week I saw a post where a user set up an online arena for LLMs to play chess. I drafted up this idea shortly after.
Note that this says January, but realistically would be like February or something considering January starts tomorrow. This post is to gauge if anyone would be interested to participate before putting the money and resources into it
------------------------------------
This is a competition where your LLM setup will compete against others in its ability to play (and win) chess.
You can enter by January 7th, and this competition will take up the entire month of January. If this takes off, I'd love to do more throughout 2025 with different concepts.
Rules:
1. No closed-source SOTA models can be used. If you are doing this competition with prompting only, it will be paired with any open model of your choosing. Prompt-only entries will have a max model size of 72B. Any model above 32B parameters will be run as AWQ.
2. For fine-tuned/trained model entries, your model must be in a format supported by VLLM. AWQ, int8, or GGUF are recommended. It will also be limited to 48GB of VRAM usage, so 72B will have to be AWQ.
3. If you're making a prompting entry, your prompt will be made public at the end of the competition. If you're a fine-tuning/training entry, your model itself must be open-weight and the training data must be open-source, and will also be made public at the end. All credits will be given of course.
This is the timeline of the contest:
Week 1 (Jan 1st to Jan 7th): Training Phase
This is where YOU will prompt-engineer or fine-tune and test your submission. You can use the testing tools available. You MUST submit your model and/or prompts by January 7th, 11:59pm Central (US/Chicago).
Week 2 (Jan 8th - Jan 14th): Round 1
This week is when the models are put to the test against three chess bots of varying difficulty. They will be run constantly throughout the week and counted towards a win/loss ratio.
The wins/losses ratio will be what determines where your model sits on the leaderboard. Hot fixes for models will be accepted as long as they are submitted by 11:59 pm January 9th.
Week 3 (Jan 15th - Jan 22nd): Revision Phase
This week is where the top 8 performing models proceed. These 8 contestants will be able to make further revisions to their models or prompts for week 4.
Week 4 (Jan 23rd - Jan 30th): Round 2
This week, the remaining 8 models will be put up against the three chess bots again. No hot fixes or further revisions will be accepted.
The top 4 performing models proceed to the final battle.
The Final Battle (January 31st):
The last day of the competition. Instead of the final 4 models competing against chess bots, they will compete against each other in a bracket-style tournament.
This tournament will be run 10 times. Whoever wins the most will be awarded first place, and second and third place will be awarded to the runners-up respectively.
The prize pool so far is $100 (out of my own pocket)
$50 to 1st place.
$35 to 2nd place
$15 to 3rd place
If anyone would like to contribute to the pool, it would be greatly appreciated! Feel free to DM me.
A crisp virtual high-five will go to fourth place, as well as bragging rights.
In the event of a tie, the models' performance in Rounds 1 and 2 will be taken into account to break the tie.
-----------------
Would you guys be interested?
| 2024-12-31T19:09:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hqloxc/who_would_be_interested_in_a_prompting_finetuning/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqloxc | false | null | t3_1hqloxc | /r/LocalLLaMA/comments/1hqloxc/who_would_be_interested_in_a_prompting_finetuning/ | false | false | self | 3 | null |
Methodologies for "Knowledge Storage" in LLM Parameters | 0 | Searching through the entire Reddit, it seems that most posts talk about platforms, tools, or parameter-efficient ways for "Continual Pre-training". However, I wonder what would be the suggested way to do continual pre-training in terms of data preparation/augmentation?
Let's simplify the question: we have someone's biography. How can this biography be learned by a model (e.g., llama) in its parameters? "Learn" means that the model can answer questions without taking the biography as part of input. Apparently, it is less effective if naively feeding the exact biography into the model. Then, what would be the suggested augmentation solution or other solutions? (I'm aware that ICL is more effective in this scenario; however, I'm curious about the training/pre-training solution.)
Is there any research on this?
I only found one relevant paper so far: [https://arxiv.org/pdf/2309.14316](https://arxiv.org/pdf/2309.14316) | 2024-12-31T19:14:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hqlt25/methodologies_for_knowledge_storage_in_llm/ | wandering-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqlt25 | false | null | t3_1hqlt25 | /r/LocalLLaMA/comments/1hqlt25/methodologies_for_knowledge_storage_in_llm/ | false | false | self | 0 | null |
Revisting llama.cpp speculative decoding w/ Qwen2.5-Coder 32B (AMD vs Nvidia results) | 65 | There have been some recent questions on how the 7900 XTX runs 30B class models, and I was actually curious to revisit some of the llama.cpp speculative decoding tests I had done a while back, so I figured, why not knock out both of those with some end of year testing.
# Methodology
While I'm a big fan of `llama-bench` for basic testing, with speculative decoding this doesn't really work (speed will depend on draft acceptance, which is workload dependent). I've been using [vLLM's benchmark_serving.py](https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_serving.py) for a lot of recent testing, so that's what I used for this test.
I was lazy, so I just found a ShareGPT-formatted coding repo on HF so I wouldn't have to do any reformatting: https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT
I used the latest HEAD checkouts of [hjc4869/llama.cpp](https://github.com/hjc4869/llama.cpp) (b4398) for AMD and [llama.cpp](https://github.com/ggerganov/llama.cpp) (b4400) on Nvidia w/ just standard cmake flags for each backend.
While my previous testing was with a 32B Q8_0 quant, to fit in a 24GB card to allow comparisons, I'm using a Q4_K_M. Context will be limited, but the model launches with `n_ctx_per_seq (4096)` by default, so that's fine for benchmarking.
For speculative decoding, I previously found slightly better results w/ a 1.5B draft model (vs 0.5B) and am using these settings:
```
--draft-max 24 --draft-min 1 --draft-p-min 0.6
```
If you want to run similar testing on your own system with your own workloads (or models) the source code, some sample scripts, (along with some more raw results) are also available here: https://github.com/AUGMXNT/speed-benchmarking/tree/main/llama.cpp-code
# AMD Radeon Pro W7900
For the W7900 (241W max TDP), speculative decoding gives us ~60% higher throughput and 40% lower TPOT, at the cost of 7.5% additional memory usage:
| Metric | W7900 Q4_K_M | W7900 Q4_K_M + 1.5B Q8 | % Difference |
|:--------------------------------|---------------:|-------------------------:|---------------:|
| Memory Usage (GiB) | 20.57 | 22.12 | 7.5 |
| Successful requests | 50 | 50 | 0.0 |
| Benchmark duration (s) | 1085.39 | 678.21 | -37.5 |
| Total input tokens | 5926 | 5926 | 0.0 |
| Total generated tokens | 23110 | 23204 | 0.4 |
| Request throughput (req/s) | 0.05 | 0.07 | 40.0 |
| Output token throughput (tok/s) | 21.29 | 34.21 | 60.7 |
| Total Token throughput (tok/s) | 26.75 | 42.95 | 60.6 |
| Mean TTFT (ms) | 343.50 | 344.16 | 0.2 |
| Median TTFT (ms) | 345.69 | 346.8 | 0.3 |
| P99 TTFT (ms) | 683.43 | 683.85 | 0.1 |
| Mean TPOT (ms) | 46.09 | 28.83 | -37.4 |
| Median TPOT (ms) | 45.97 | 28.70 | -37.6 |
| P99 TPOT (ms) | 47.70 | 42.65 | -10.6 |
| Mean ITL (ms) | 46.22 | 28.48 | -38.4 |
| Median ITL (ms) | 46.00 | 0.04 | -99.9 |
| P99 ITL (ms) | 48.79 | 310.77 | 537.0 |
# Nvidia RTX 3090 (MSI Ventus 3X 24G OC)
On the RTX 3090 (420W max TDP), we are able to get better performance with FA on. We get a similar benefit, with speculative decoding giving us ~55% higher throughput and 35% lower TPOT, at the cost of 9.5% additional memory usage:
| Metric | RTX 3090 Q4_K_M | RTX 3090 Q4_K_M + 1.5B Q8 | % Difference |
|:--------------------------------|------------------:|----------------------------:|---------------:|
| Memory Usage (GiB) | 20.20 | 22.03 | 9.5 |
| Successful requests | 50 | 50 | 0.0 |
| Benchmark duration (s) | 659.45 | 419.7 | -36.4 |
| Total input tokens | 5926 | 5926 | 0.0 |
| Total generated tokens | 23447 | 23123 | -1.4 |
| Request throughput (req/s) | 0.08 | 0.12 | 50.0 |
| Output token throughput (tok/s) | 35.56 | 55.09 | 54.9 |
| Total Token throughput (tok/s) | 44.54 | 69.21 | 55.4 |
| Mean TTFT (ms) | 140.01 | 141.43 | 1.0 |
| Median TTFT (ms) | 97.17 | 97.92 | 0.8 |
| P99 TTFT (ms) | 373.87 | 407.96 | 9.1 |
| Mean TPOT (ms) | 27.85 | 18.23 | -34.5 |
| Median TPOT (ms) | 27.80 | 17.96 | -35.4 |
| P99 TPOT (ms) | 28.73 | 28.14 | -2.1 |
| Mean ITL (ms) | 27.82 | 17.83 | -35.9 |
| Median ITL (ms) | 27.77 | 0.02 | -99.9 |
| P99 ITL (ms) | 29.34 | 160.18 | 445.9 |
# W7900 vs 3090 Comparison
You can see that the 3090 without speculative decoding actually beats out the throughput of the W7900 *with* speculative decoding:
| Metric | W7900 Q4_K_M + 1.5B Q8 | RTX 3090 Q4_K_M + 1.5B Q8 | % Difference |
|:--------------------------------|-------------------------:|----------------------------:|---------------:|
| Memory Usage (GiB) | 22.12 | 22.03 | -0.4 |
| Successful requests | 50 | 50 | 0.0 |
| Benchmark duration (s) | 678.21 | 419.70 | -38.1 |
| Total input tokens | 5926 | 5926 | 0.0 |
| Total generated tokens | 23204 | 23123 | -0.3 |
| Request throughput (req/s) | 0.07 | 0.12 | 71.4 |
| Output token throughput (tok/s) | 34.21 | 55.09 | 61.0 |
| Total Token throughput (tok/s) | 42.95 | 69.21 | 61.1 |
| Mean TTFT (ms) | 344.16 | 141.43 | -58.9 |
| Median TTFT (ms) | 346.8 | 97.92 | -71.8 |
| P99 TTFT (ms) | 683.85 | 407.96 | -40.3 |
| Mean TPOT (ms) | 28.83 | 18.23 | -36.8 |
| Median TPOT (ms) | 28.7 | 17.96 | -37.4 |
| P99 TPOT (ms) | 42.65 | 28.14 | -34.0 |
| Mean ITL (ms) | 28.48 | 17.83 | -37.4 |
| Median ITL (ms) | 0.04 | 0.02 | -50.0 |
| P99 ITL (ms) | 310.77 | 160.18 | -48.5 |
Note: the 7900 XTX has higher TDP and clocks, and in my previous testing usually is ~10% faster than the W7900, but the gap between it and the 3090 would still be sizable, as the RTX 3090 is *significantly* faster than the W7900:
- >60% higher throughput
- >70% lower median TTFT (!)
- ~37% lower TPOT | 2024-12-31T19:16:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hqlug2/revisting_llamacpp_speculative_decoding_w/ | randomfoo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqlug2 | false | null | t3_1hqlug2 | /r/LocalLLaMA/comments/1hqlug2/revisting_llamacpp_speculative_decoding_w/ | false | false | self | 65 | {'enabled': False, 'images': [{'id': 'yxqNak4GUlMo3H2SsrdD_YM0C8iR8_NAhn2zlnMUEuc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vdf1sNc4YlsnIkXt0qV3lI3TpKeSS0gq2nl4ooEK-WE.jpg?width=108&crop=smart&auto=webp&s=2faa340115de580f64530ea855b7aa8fab6e0433', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vdf1sNc4YlsnIkXt0qV3lI3TpKeSS0gq2nl4ooEK-WE.jpg?width=216&crop=smart&auto=webp&s=0ad0e425a481cba1da919cff4e179fd581b943f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vdf1sNc4YlsnIkXt0qV3lI3TpKeSS0gq2nl4ooEK-WE.jpg?width=320&crop=smart&auto=webp&s=a05c0cc9c425d432271a4f06c59aa832c7e80573', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vdf1sNc4YlsnIkXt0qV3lI3TpKeSS0gq2nl4ooEK-WE.jpg?width=640&crop=smart&auto=webp&s=fcfc42efeddee313ae42fd3a0cc18e5cde68b1c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vdf1sNc4YlsnIkXt0qV3lI3TpKeSS0gq2nl4ooEK-WE.jpg?width=960&crop=smart&auto=webp&s=ff40058b19515c732884ecb2718a8c45c6fa0f50', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vdf1sNc4YlsnIkXt0qV3lI3TpKeSS0gq2nl4ooEK-WE.jpg?width=1080&crop=smart&auto=webp&s=07dc236aa8b406fc35e1259860416ca9af2ca3e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vdf1sNc4YlsnIkXt0qV3lI3TpKeSS0gq2nl4ooEK-WE.jpg?auto=webp&s=df41ade27e28ecf0ac20faa396776ff1ece4e7a3', 'width': 1200}, 'variants': {}}]} |
Awesome LLM apps | 81 | https://github.com/Shubhamsaboo/awesome-llm-apps | 2024-12-31T19:19:09 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqlw3j | false | null | t3_1hqlw3j | /r/LocalLLaMA/comments/1hqlw3j/awesome_llm_apps/ | false | false | 81 | {'enabled': True, 'images': [{'id': 'cwDNp3272pt82fshwuttEOI92Zez0EOqhYVJNYwfVCY', 'resolutions': [{'height': 213, 'url': 'https://preview.redd.it/nkzhjw97i8ae1.png?width=108&crop=smart&auto=webp&s=ee8763d2fd70505a0ee74a98aff6e848355e0604', 'width': 108}, {'height': 426, 'url': 'https://preview.redd.it/nkzhjw97i8ae1.png?width=216&crop=smart&auto=webp&s=aaff0d1f2e5c3a3223b015cbfe67920da1606089', 'width': 216}, {'height': 632, 'url': 'https://preview.redd.it/nkzhjw97i8ae1.png?width=320&crop=smart&auto=webp&s=10497f5180c098f334c5165bb000364c31ab0d6a', 'width': 320}, {'height': 1264, 'url': 'https://preview.redd.it/nkzhjw97i8ae1.png?width=640&crop=smart&auto=webp&s=f9704b9e45212f28e148447e2f2643fa6d8a7487', 'width': 640}, {'height': 1896, 'url': 'https://preview.redd.it/nkzhjw97i8ae1.png?width=960&crop=smart&auto=webp&s=2df05b85fbb0d17929136df71286e6881b060a9f', 'width': 960}, {'height': 2134, 'url': 'https://preview.redd.it/nkzhjw97i8ae1.png?width=1080&crop=smart&auto=webp&s=115e8e033cff53c46d8fe427d87e0a80bd292a16', 'width': 1080}], 'source': {'height': 2134, 'url': 'https://preview.redd.it/nkzhjw97i8ae1.png?auto=webp&s=e275d926d23e2933dd697285a26e857d1744de68', 'width': 1080}, 'variants': {}}]} |
||
Fine-Tuning LLaMA 3.2 with my own Dataset | 0 | I’m currently working on fine-tuning the LLaMA 3.2 model using a custom dataset I’ve built. I’ve successfully made a JSON file that contains 792 entries, formatted specifically for LLaMA 3.2. Here’s a small sample from my dataset to demonstrate the structure:
{
"input": "What are the advantages of using a system virtual machine?",
"output": "System virtual machines allow multiple operating systems on one computer, support legacy software without old hardware, and provide server consolidation, although they may have lower performance and require significant effort to implement."
},
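For reference, here is a minimal sketch of how I plan to load these pairs and reshape them into the chat-message form most SFT trainers expect (assuming the file is saved locally as dataset.json; the file name is just a placeholder):

```python
# Minimal sketch: turn {"input", "output"} pairs into chat messages for SFT
# ("dataset.json" is an assumed local path; adjust to wherever the file lives)
from datasets import load_dataset

ds = load_dataset("json", data_files="dataset.json", split="train")

def to_messages(example):
    return {
        "messages": [
            {"role": "user", "content": example["input"]},
            {"role": "assistant", "content": example["output"]},
        ]
    }

ds = ds.map(to_messages, remove_columns=["input", "output"])
print(ds[0])  # sanity-check one converted example
```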
# Goals:
1. Fine-tune the model to improve its understanding of theoretical computer science concepts.
2. Deploy it for answering academic and research questions.
# Questions:
1. Is my dataset format correct for fine-tuning?
2. What steps should I follow to train the model effectively?
3. How do I ensure the model performs well after training?
4. I have added the code which I used below. I will be loading the dataset and base model from Hugging Face. Hopefully this is the correct method.
[https://colab.research.google.com/drive/15OyFkGoCImV9dSsewU1wa2JuKB4-mDE\_?usp=drive\_link](https://colab.research.google.com/drive/15OyFkGoCImV9dSsewU1wa2JuKB4-mDE_?usp=drive_link)
I’m using Google Colab for this and would appreciate any tips or suggestions to make this process smoother. Thanks in advance! | 2024-12-31T19:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hqlwp9/finetuning_llama_32_with_my_own_dataset/ | SnooRevelations5257 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqlwp9 | false | null | t3_1hqlwp9 | /r/LocalLLaMA/comments/1hqlwp9/finetuning_llama_32_with_my_own_dataset/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
Ollama not using GPU, Need Help | 1 | Hey guys, I have a laptop with RTX 2060 Mobile 6gb VRAM, 64GB ram and Ryzen 4800H, I wanted to run Qwen-Coder 2.5 locally using Ollama and use it with bolt. I have tried it with bolt and even in CLI, it always uses CPU, puts 80%+ workload, for large models it makes sense, cuz of the VRAM constraint, but even small models which is like 2-4GB is using CPU. How can I fix this. | 2024-12-31T19:28:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hqm2z6/ollama_not_using_gpu_need_help/ | Specific-Orchid-6978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqm2z6 | false | null | t3_1hqm2z6 | /r/LocalLLaMA/comments/1hqm2z6/ollama_not_using_gpu_need_help/ | false | false | self | 1 | null |
WIDGET | 1 | [removed] | 2024-12-31T19:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hqm766/widget/ | SecretaryOk3714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqm766 | false | null | t3_1hqm766 | /r/LocalLLaMA/comments/1hqm766/widget/ | false | false | self | 1 | null |
Best local chain/agent tools for coding? | 1 | What are the best tools that can run locally and do multi-step coding tasks? Like I give it a task ("write a Python GUI that does x, y, and z") which can then create a requirements document, refine/expand the requirements document, write the code, check that all of the requirements are included, and check for mistakes.
Are there tools that do this? Ideally running llama or phi-3 on a 12Gb vram gpu
Are there any for windows? Are there any that can install necessary python packages by itself? | 2024-12-31T19:46:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hqmfxi/best_local_chainagent_tools_for_coding/ | Cunninghams_right | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqmfxi | false | null | t3_1hqmfxi | /r/LocalLLaMA/comments/1hqmfxi/best_local_chainagent_tools_for_coding/ | false | false | self | 1 | null |
Will Deepseek go public one day? | 18 | Just incredible using lower end Nvidia chips and costing only $5mil to develop.
https://reddit.com/link/1hqmmmt/video/xhpfr28mo8ae1/player
| 2024-12-31T19:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hqmmmt/will_deepseek_go_public_one_day/ | inquisitiveman2002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqmmmt | false | null | t3_1hqmmmt | /r/LocalLLaMA/comments/1hqmmmt/will_deepseek_go_public_one_day/ | false | false | self | 18 | null |
QuantumLLMInstruct: A 500k LLM Instruction-Tuning Dataset with Problem-Solution Pairs for Quantum Computing | 23 | 🚀 **Introducing QuantumLLMInstruct (QLMMI): The Largest Quantum Computing Dataset!**
I'm excited to announce my paper **QuantumLLMInstruct (QLMMI)**, a groundbreaking dataset featuring over **500,000 instruction-following problem-solution pairs** tailored specifically for fine-tuning LLMs for solving quantum computing problems—the most comprehensive dataset of its kind! 🌌
# What makes QLMMI unique?
* Covers **90+ primary seed domains** and hundreds of subdomains autonomously generated by LLMs.
* Designed for LLM **instruction fine-tuning**, tackling complex quantum challenges across topics like:
* Synthetic Hamiltonians
* QASM code generation
* Jordan-Wigner transformations
* Trotter-Suzuki decompositions
* Enhanced with advanced reasoning techniques like **Chain-of-Thought (CoT)** and **ToRA**, ensuring mathematical precision and diversity.
# Open and Collaborative
Built with the **Qwen-2.5-Coder models**, QLMMI is completely **open-source**, with the code and dataset available on HuggingFace.
[https://arxiv.org/pdf/2412.20956](https://arxiv.org/pdf/2412.20956)
[https://huggingface.co/datasets/BoltzmannEntropy/QuantumLLMInstruct](https://huggingface.co/datasets/BoltzmannEntropy/QuantumLLMInstruct)
[https://huggingface.co/spaces/BoltzmannEntropy/QuantumLLMInstruct](https://huggingface.co/spaces/BoltzmannEntropy/QuantumLLMInstruct)
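If you just want to poke at the data, it should load directly with the datasets library; a minimal sketch (split and column names are assumptions, so check the dataset card):

```python
# Quick-look sketch -- split/column names are assumptions, see the dataset card
from datasets import load_dataset

qlmmi = load_dataset("BoltzmannEntropy/QuantumLLMInstruct")
print(qlmmi)                           # show available splits
print(next(iter(qlmmi.values()))[0])   # peek at one problem-solution pair
```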
Best, | 2024-12-31T20:01:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hqmr3g/quantumllminstruct_a_500k_llm_instructiontuning/ | QuanstScientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqmr3g | false | null | t3_1hqmr3g | /r/LocalLLaMA/comments/1hqmr3g/quantumllminstruct_a_500k_llm_instructiontuning/ | false | false | self | 23 | null |
A local LLM that does not anonymize data? | 1 | [removed] | 2024-12-31T20:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hqmwx7/a_local_llm_that_does_not_anonymize_data/ | floydfan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqmwx7 | false | null | t3_1hqmwx7 | /r/LocalLLaMA/comments/1hqmwx7/a_local_llm_that_does_not_anonymize_data/ | false | false | self | 1 | null |
I built a chatGPT but for sensitive data & regulated work 🔒 runs offline! | 0 |
I wanted to share an app I've been working on called Clariti - it's an AI assistant designed specifically for situations where you can't/shouldn't use ChatGPT due to privacy concerns. Our devices are remarkably capable of running AI analysis locally, thanks to MLX and Apple's Neural Engine:
Built with SwiftUI and MLX-Swift to chat with LLMs like Llama 3.2 3B Instruct
Chat with your documents, calendar, health data, and more... 100% Private and runs Offline!
You can check it out here: [\[App Store Link\]](https://apps.apple.com/us/app/clariti-ai-privately/id6739746682) \- **Free Trial !**
\_\_\_\_\_
1. Performance by Device:
\- iPhone 12/13 series: Excellent performance with Llama 3.2B - 1B Instruct models
\- iPhone 14/15 series: Excellent performance with Llama 3.2B-4B Instruct models
\- Modern iPads: Efficiently runs 7B models (8-bit quantized)
\- Apple Silicon Macs: Superior performance with larger models (7B-13B)
2. MLX Framework Benefits:
\- Specifically optimized for Apple Silicon architecture
\- Utilizes Metal for GPU acceleration
\- Memory-efficient through dynamic memory management
\- Fast inference times with minimal latency
\- Privacy-focused as all processing happens on-device
3. Model Capabilities:
\- Text generation and analysis
\- Document understanding
\- Contextual responses
\- Chat functionality
\- All without requiring cloud connectivity
The learning comes from two sources:
1. Pre-trained open-source models optimized and quantized for MLX (see MLX-Community on huggingface)
2. Your own documents through Retrieval Augmented Generation (RAG), which allows the AI to learn from your content without retraining the model
This hybrid approach ensures both privacy and performance while maintaining high-quality AI capabilities on your device, enhanced by your personal knowledge base
https://preview.redd.it/un3kafenw8ae1.png?width=1290&format=png&auto=webp&s=f99a43316335d1d821e18e6c472cb652bc24c86b
| 2024-12-31T20:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hqnihd/i_built_a_chatgpt_but_for_sensitive_data/ | claritiai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqnihd | false | null | t3_1hqnihd | /r/LocalLLaMA/comments/1hqnihd/i_built_a_chatgpt_but_for_sensitive_data/ | false | false | 0 | null |
|
Censorship workaround for DeepSeek, brought to you by the CCP | 1 | 2024-12-31T20:54:19 | https://www.reddit.com/gallery/1hqnss4 | 1234oguz | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hqnss4 | false | null | t3_1hqnss4 | /r/LocalLLaMA/comments/1hqnss4/censorship_workaround_for_deepseek_brought_to_you/ | false | false | 1 | null |
||
Interesting DeepSeek behavior | 476 | 2024-12-31T20:55:57 | https://www.reddit.com/gallery/1hqntx4 | 1234oguz | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hqntx4 | false | null | t3_1hqntx4 | /r/LocalLLaMA/comments/1hqntx4/interesting_deepseek_behavior/ | false | false | 476 | null |
||
DeepSeek == GPT-4? | 0 | 2024-12-31T20:57:31 | https://www.reddit.com/gallery/1hqnv3m | 1234oguz | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hqnv3m | false | null | t3_1hqnv3m | /r/LocalLLaMA/comments/1hqnv3m/deepseek_gpt4/ | false | false | 0 | null |
||
Ollama template vs input text | 1 | I can't seem to figure out how ollama templates work. Does anyone know if the template affects the text you actually input?
Are they completely unrelated?
Are there answers or do we just put words in magic box and get words out? | 2024-12-31T21:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hqocrx/ollama_template_vs_input_text/ | reality_comes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqocrx | false | null | t3_1hqocrx | /r/LocalLLaMA/comments/1hqocrx/ollama_template_vs_input_text/ | false | false | self | 1 | null |
Are there any local alternatives to ChatGPT's wolfram GPT? | 1 | I want something I can run locally that can solve complex natural language equations and step me through the solution without hallucinating. | 2024-12-31T21:23:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hqodwk/are_there_any_local_alternatives_to_chatgpts/ | wunnsen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqodwk | false | null | t3_1hqodwk | /r/LocalLLaMA/comments/1hqodwk/are_there_any_local_alternatives_to_chatgpts/ | false | false | self | 1 | null |
Rtx a6000+3090 | 2 | Hi,
After having dropped the Mac route for running 70b+ llm, I’m considering buying an a6000 (ampere) and matching it with a 3090.
It would allow me to get a little more than 70GB of VRAM and run Llama 3.3 and Command R+.
Is that even possible? Good idea or bad?
Thanks! | 2024-12-31T21:46:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hqot5u/rtx_a60003090/ | HappyFaithlessness70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqot5u | false | null | t3_1hqot5u | /r/LocalLLaMA/comments/1hqot5u/rtx_a60003090/ | false | false | self | 2 | null |
4060 Ti 16GB on a Windows 11 VM | 1 | [removed] | 2024-12-31T21:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hqp244/4060_ti_16gb_on_a_windows_11_vm/ | Emotional_Public_398 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqp244 | false | null | t3_1hqp244 | /r/LocalLLaMA/comments/1hqp244/4060_ti_16gb_on_a_windows_11_vm/ | false | false | nsfw | 1 | null |
Best bang for buck GPU/Tensor processors for training and inference for 5k$ ? | 1 | As the title suggests, what is the best hardware I can spend money on for training and serving a model, for under $5k? It can be a GPU or any other variant that I may not even be aware of. This does not include the cost of other peripherals, the motherboard, or the CPU. | 2024-12-31T22:06:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hqp7ak/best_bang_for_buck_gputensor_processors_for/ | Specter_Origin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqp7ak | false | null | t3_1hqp7ak | /r/LocalLLaMA/comments/1hqp7ak/best_bang_for_buck_gputensor_processors_for/ | false | false | self | 1 | null
About an upgrade difference | 1 | I know that this is a pretty broad question, but how much of a boost should I expect in LLM performance, assuming there will be little to no bottleneck, when upgrading from a GeForce 1050 Ti with 4GB of VRAM to a GeForce RTX 3060 with 12GB of VRAM? I mean in terms of overall speed and the parameter sizes of models that should run. I realize this is modest. The system has 64 GB of RAM and a Win 11-capable i7 processor, though it's not on the MS compatibility list despite meeting the security requirements. Anyway, I digress. Also, any uncensored models you can recommend? I am currently using a Mistral Nemo model. | 2024-12-31T22:33:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hqppbh/about_an_upgrade_difference/ | theshadowraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqppbh | false | null | t3_1hqppbh | /r/LocalLLaMA/comments/1hqppbh/about_an_upgrade_difference/ | false | false | self | 1 | null
For 2025 | 6 | I was just curious about what you all predict or think about the following questions for the upcoming year?
1. What if any major breakthroughs do you predict for open-source LLMs?
2. What trends do you hope occur with the above?
3. What major or minor developer will play the biggest role?
4. Will this be the year that major licensing laws and restrictions cripple open-source models?
5. Is this a “make it or break it” year for open-source models? | 2024-12-31T22:46:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hqpyfm/for_2025/ | theshadowraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqpyfm | false | null | t3_1hqpyfm | /r/LocalLLaMA/comments/1hqpyfm/for_2025/ | false | false | self | 6 | null |
can you tune ANY LLM with unsloth? | 1 | [removed] | 2024-12-31T22:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hqq66z/can_you_tune_any_llm_with_unsloth/ | StandardOne1681 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqq66z | false | null | t3_1hqq66z | /r/LocalLLaMA/comments/1hqq66z/can_you_tune_any_llm_with_unsloth/ | false | false | self | 1 | null |
Quantization questions | 5 | What would you recommend:
K_M vs K_S
Q4 vs Q5 as the cutoff for the lowest quant without a major drop-off; is the latter better with newer models? (Rough size math sketched below.)
If this was asked previously could someone provide a link please? | 2024-12-31T23:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hqq9be/quanitization_questions/ | theshadowraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqq9be | false | null | t3_1hqq9be | /r/LocalLLaMA/comments/1hqq9be/quanitization_questions/ | false | false | self | 5 | null |
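As a rough way to compare the options being asked about, here is a small sketch that estimates file sizes for the common llama.cpp K-quants. The bits-per-weight values are ballpark assumptions rather than exact figures; actual GGUF sizes vary by model architecture.

```python
# Ballpark GGUF file-size estimates for common llama.cpp K-quants.
# The bits-per-weight values below are rough assumptions, not exact constants.

APPROX_BPW = {
    "Q4_K_S": 4.6,
    "Q4_K_M": 4.9,
    "Q5_K_S": 5.5,
    "Q5_K_M": 5.7,
}

def approx_size_gb(params_billion: float, bpw: float) -> float:
    """Weights only: parameter count times bits per weight, converted to GB."""
    return params_billion * bpw / 8

for name, bpw in APPROX_BPW.items():
    sizes = ", ".join(f"{p}B -> ~{approx_size_gb(p, bpw):.1f} GB" for p in (8, 12, 70))
    print(f"{name}: {sizes}")
```

In general the K_M variants spend a few extra bits on the most sensitive tensors, so they run slightly larger than K_S at the same nominal bit width; whether Q4 or Q5 is the right cutoff still depends on the specific model and task.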
Best LLMs of 2024 | 0 | 2024-12-31T23:06:03 | https://www.youtube.com/watch?v=4NUtg4Aj1dI | dulldata | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hqqax6 | false | {'oembed': {'author_name': '1littlecoder', 'author_url': 'https://www.youtube.com/@1littlecoder', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/4NUtg4Aj1dI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="2024 AI Winners!!! 💥Best LLMs of 2024 💥"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/4NUtg4Aj1dI/hqdefault.jpg', 'thumbnail_width': 480, 'title': '2024 AI Winners!!! 💥Best LLMs of 2024 💥', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hqqax6 | /r/LocalLLaMA/comments/1hqqax6/best_llms_of_2024/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YwxxDR9ZWEyZinbEY0UBPZ6v3REFkP6VOEoUh2s0RzA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LKA_bpCikQAt_Cy-h5fDyQYpwzdB7hNjaQMGDvMyNRo.jpg?width=108&crop=smart&auto=webp&s=2d40a8934ac55aa2504cde8ea84aa2f62e631f22', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/LKA_bpCikQAt_Cy-h5fDyQYpwzdB7hNjaQMGDvMyNRo.jpg?width=216&crop=smart&auto=webp&s=2b940c3e6b3b68f2f78da31cbb7a13a1b93e4781', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/LKA_bpCikQAt_Cy-h5fDyQYpwzdB7hNjaQMGDvMyNRo.jpg?width=320&crop=smart&auto=webp&s=b6bb087292ce8e078597818a53c9d66ce4b96fcc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LKA_bpCikQAt_Cy-h5fDyQYpwzdB7hNjaQMGDvMyNRo.jpg?auto=webp&s=9339187b894ceadb5b1e484f9c284ad39207602c', 'width': 480}, 'variants': {}}]} |
Small models without GPU? | 0 | I wanted to experiment with AI agents just to prototype the architecture.
What are some small models that could run on a Windows 10 desktop without the need for a GPU?
| 2024-12-31T23:42:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hqqxwm/small_models_without_gpu/ | tvmaly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hqqxwm | false | null | t3_1hqqxwm | /r/LocalLLaMA/comments/1hqqxwm/small_models_without_gpu/ | false | false | self | 0 | null |
it's just 262GB | 1 | 2024-12-31T23:50:24 | toodle_enthusiast | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqr328 | false | null | t3_1hqr328 | /r/LocalLLaMA/comments/1hqr328/its_just_262gb/ | false | false | 1 | {'enabled': True, 'images': [{'id': '8SW74Bf9Ymm3fB7DvZRlLLHC-2ZaV6E6bUuBEdahBHA', 'resolutions': [{'height': 145, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?width=108&crop=smart&auto=webp&s=35c425b0b30d323288bca82c363f24fb58ff9c77', 'width': 108}, {'height': 290, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?width=216&crop=smart&auto=webp&s=9c520b87cf3f8213fc57b8ef8c790306c613d239', 'width': 216}, {'height': 430, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?width=320&crop=smart&auto=webp&s=2f2c34cdbd915db08659ef2db25da59c0e6058b6', 'width': 320}], 'source': {'height': 672, 'url': 'https://preview.redd.it/3ynal5ru0ltc1.jpeg?auto=webp&s=07bb9e0767d9b1079c65e2148c7dfb88c3ff1744', 'width': 500}, 'variants': {}}]} |
My truest feeling about recent events (Claude: powerful [coding] man | Old version of GPT: middle-class | Deepseek: precious [insert price/performance] women) | 1 | 2025-01-01T00:00:11 | Kuro1103 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hqr97x | false | null | t3_1hqr97x | /r/LocalLLaMA/comments/1hqr97x/my_truest_feeling_about_recent_events_claude/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'TQZ17PVK2uL87BM8uLCKQ34eCW2VGVt5Z_KRlwRXMX4', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=108&crop=smart&auto=webp&s=79f200b4baeafb4d0b816ca47d77cc0440925c2d', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=216&crop=smart&auto=webp&s=a3ede080c7b19b7e409e10dd84d1df8708a49876', 'width': 216}, {'height': 344, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=320&crop=smart&auto=webp&s=f7008aa2e3430979fc3dca50b7e55d60e467d95e', 'width': 320}, {'height': 689, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?width=640&crop=smart&auto=webp&s=900667993f6d2ee8ff2ea13ba306b42fe9273e4e', 'width': 640}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/wp8a85wiv9ae1.png?auto=webp&s=89a1d149c8935e256c08a51aebc5a58714fb1c5c', 'width': 928}, 'variants': {}}]} |