| column | dtype | min / classes | max |
|:-|:-|:-|:-|
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 40k |
| created | timestamp[ns]date | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns]date | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |
Is bitnet false/fake?
97
It's soon been a year since BitNet was revealed, yet we have no large models of it. There are no tests of large models from what I can see, only dummy models used to test theoretical speeds. Sure, we got that BitNet inference framework from Microsoft in October, but what use is that when there are zero models to run on it? It just seems odd that there are no large models and no one is saying "we tried large BitNet models but they're bad" or something; then it would make sense that no one makes them. But it's just quiet. It seems like a win-win situation for everyone if they do work?
2024-12-03T11:22:25
https://www.reddit.com/r/LocalLLaMA/comments/1h5l89d/is_bitnet_falsefake/
Bandit-level-200
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5l89d
false
null
t3_1h5l89d
/r/LocalLLaMA/comments/1h5l89d/is_bitnet_falsefake/
false
false
self
97
null
How to download and use LlamaParse model locally?
2
I'm using LlamaParse in my code, where I need to provide a Llama Cloud API key. I want to download the model so that I can use it locally without a key or an internet connection, but I couldn't find any site from which I can download it.
2024-12-03T11:36:46
https://www.reddit.com/r/LocalLLaMA/comments/1h5lg0l/how_to_download_and_use_llamaparse_model_locally/
kabhikhusikabhigm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5lg0l
false
null
t3_1h5lg0l
/r/LocalLLaMA/comments/1h5lg0l/how_to_download_and_use_llamaparse_model_locally/
false
false
self
2
null
Best alternatives to BERT - NLU Encoder Models
2
I'm looking for alternatives to BERT or DistilBERT for multilingual purposes. I would like a bidirectional masked-encoder architecture similar to BERT, but more powerful and with a longer context, for Natural Language Understanding tasks. Any recommendations would be much appreciated.
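As an illustration of the kind of drop-in alternative being asked about, here is a minimal sketch (my own, not from the post) that loads XLM-RoBERTa, one common multilingual masked encoder, through the Hugging Face `transformers` fill-mask pipeline; the model choice is an assumption, not a recommendation:

```python
# Minimal sketch: a multilingual masked encoder (XLM-RoBERTa) used as a BERT alternative.
# The checkpoint is illustrative only; swap in whichever encoder you end up choosing.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

# XLM-RoBERTa uses <mask> as its mask token (BERT uses [MASK]).
for prediction in fill_mask("Paris is the <mask> of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```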
2024-12-03T11:38:02
https://www.reddit.com/r/LocalLLaMA/comments/1h5lgr0/best_alternatives_to_bert_nlu_encoder_models/
mr_house7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5lgr0
false
null
t3_1h5lgr0
/r/LocalLLaMA/comments/1h5lgr0/best_alternatives_to_bert_nlu_encoder_models/
false
false
self
2
null
A simple CLI for collecting the contents of your code files into a single file
3
I want to copy-paste the contents of a whole directory to my favourite LLM, but doing this manually would be insane. Now there is a Rust CLI to do this (you need Rust to be installed): [https://github.com/0xideas/dirscribe](https://github.com/0xideas/dirscribe)
2024-12-03T11:41:12
https://www.reddit.com/r/LocalLLaMA/comments/1h5liiv/a_simple_cli_for_collecting_the_contents_of_your/
0xideas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5liiv
false
null
t3_1h5liiv
/r/LocalLLaMA/comments/1h5liiv/a_simple_cli_for_collecting_the_contents_of_your/
false
false
self
3
{'enabled': False, 'images': [{'id': 'YArazg4ZSpPcY5KPzMI0GrLphdTfttBOrMSgILI7ZI0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EMnYIgL4W-VIs7WCCJZ2Y_eLSjHjVDLQXsnUNpEIHtQ.jpg?width=108&crop=smart&auto=webp&s=f71a722e980f5d9a33a62498121824e20b90334b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EMnYIgL4W-VIs7WCCJZ2Y_eLSjHjVDLQXsnUNpEIHtQ.jpg?width=216&crop=smart&auto=webp&s=a80d980910c36c29d37f9184568d996448fc3197', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EMnYIgL4W-VIs7WCCJZ2Y_eLSjHjVDLQXsnUNpEIHtQ.jpg?width=320&crop=smart&auto=webp&s=c6f3f043fdaf57a44185d2610a8fb354ef7992b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EMnYIgL4W-VIs7WCCJZ2Y_eLSjHjVDLQXsnUNpEIHtQ.jpg?width=640&crop=smart&auto=webp&s=de272e9514dea31ac2523fd43ece5085a88cfc14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EMnYIgL4W-VIs7WCCJZ2Y_eLSjHjVDLQXsnUNpEIHtQ.jpg?width=960&crop=smart&auto=webp&s=5baa0873f430f25a66bb04e8d9e5eca0216b3c61', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EMnYIgL4W-VIs7WCCJZ2Y_eLSjHjVDLQXsnUNpEIHtQ.jpg?width=1080&crop=smart&auto=webp&s=7b0f729b4f92ec1ad9868570e194cbed732b2f50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EMnYIgL4W-VIs7WCCJZ2Y_eLSjHjVDLQXsnUNpEIHtQ.jpg?auto=webp&s=d51d95e75990445eca2755ee02a02f3f06b4fa8d', 'width': 1200}, 'variants': {}}]}
Introducing SmolChat: Running any GGUF SLMs/LLMs locally, on-device in Android (like an offline, miniature, open-source ChatGPT)
121
2024-12-03T11:45:49
https://v.redd.it/nos1ft10fm4e1
shubham0204_dev
/r/LocalLLaMA/comments/1h5ll56/introducing_smolchat_running_any_gguf_slmsllms/
1970-01-01T00:00:00
0
{}
1h5ll56
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nos1ft10fm4e1/DASHPlaylist.mpd?a=1735947959%2CZTc4Njg4Y2I0ODA2MDhhMzU5MDBjMGI2ODdhYzQxMmU5MGNhZTM0MzhkOTM2N2IxM2Q5MzVhZDY5NjY2ZGU3YQ%3D%3D&v=1&f=sd', 'duration': 276, 'fallback_url': 'https://v.redd.it/nos1ft10fm4e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/nos1ft10fm4e1/HLSPlaylist.m3u8?a=1735947959%2CN2EyMzczYWQ0NTVjMjUzYTRjZTljNDcxZmIzYzRjYjVkNTQ4ODdhOWQwN2MwZjQ5NDk3OGQyZTA1ZTM4NDQyOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nos1ft10fm4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1h5ll56
/r/LocalLLaMA/comments/1h5ll56/introducing_smolchat_running_any_gguf_slmsllms/
false
false
https://external-preview…c5429475e0fca0c5
121
{'enabled': False, 'images': [{'id': 'aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk.png?width=108&crop=smart&format=pjpg&auto=webp&s=f6763dc9fecbe4c9b259a69fab8f93cb63b99a35', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk.png?width=216&crop=smart&format=pjpg&auto=webp&s=3572fe59c9d6cfc7101049b98f959ea13f7efda9', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk.png?width=320&crop=smart&format=pjpg&auto=webp&s=5b0fc6a188aa847175b28d79ba2045ae5937188f', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk.png?width=640&crop=smart&format=pjpg&auto=webp&s=ea1de98b0167f79018108c9d9c1cad460cef6e09', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk.png?width=960&crop=smart&format=pjpg&auto=webp&s=d9a6e83a949beca4ce6e7e5b3d331569b38cb217', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk.png?width=1080&crop=smart&format=pjpg&auto=webp&s=37204803647f373bd5f403c80045701829af1aa5', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://external-preview.redd.it/aTQxdzF4NDBmbTRlMbK-pNBvE_AIrpPszUUZ8iKfYsaDV57rDj5wdDBhREGk.png?format=pjpg&auto=webp&s=9f10de9e90dc7696f313b5eb392dc30a7eae0ffe', 'width': 1080}, 'variants': {}}]}
Need Advice for communications job using LLM
1
[removed]
2024-12-03T12:00:51
https://www.reddit.com/r/LocalLLaMA/comments/1h5ltjm/need_advice_for_communications_job_using_llm/
NervesKeepPopping
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5ltjm
false
null
t3_1h5ltjm
/r/LocalLLaMA/comments/1h5ltjm/need_advice_for_communications_job_using_llm/
false
false
self
1
null
HunyuanVideo: A Systematic Framework For Large Video Generation Model Training
63
2024-12-03T12:36:41
https://huggingface.co/tencent/HunyuanVideo
a_slay_nub
huggingface.co
1970-01-01T00:00:00
0
{}
1h5mffm
false
null
t3_1h5mffm
/r/LocalLLaMA/comments/1h5mffm/hunyuanvideo_a_systematic_framework_for_large/
false
false
https://b.thumbs.redditm…t1Vwmy5VNfRw.jpg
63
{'enabled': False, 'images': [{'id': 'khDnslphXgSaukRc7zMVHHffkpQVf5Lt9zvvovZx5YQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aTUFXJxQKKQ71s5qeAKbrcA9tna8cGDuG6cjMnISSd4.jpg?width=108&crop=smart&auto=webp&s=764ffac05b16c5d7f8673055ec64391c07f92eb8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aTUFXJxQKKQ71s5qeAKbrcA9tna8cGDuG6cjMnISSd4.jpg?width=216&crop=smart&auto=webp&s=340969ccbfb54b5aa145e690d9409da9d29d3697', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aTUFXJxQKKQ71s5qeAKbrcA9tna8cGDuG6cjMnISSd4.jpg?width=320&crop=smart&auto=webp&s=112984ef2869ddf7fe1b592bd33f4f8d46304c2f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aTUFXJxQKKQ71s5qeAKbrcA9tna8cGDuG6cjMnISSd4.jpg?width=640&crop=smart&auto=webp&s=69a15d972ac5d04efde9009a83aaa1a6d2f855f1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aTUFXJxQKKQ71s5qeAKbrcA9tna8cGDuG6cjMnISSd4.jpg?width=960&crop=smart&auto=webp&s=e1681223a04b606c9318ee0a734de2fc8c00d720', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aTUFXJxQKKQ71s5qeAKbrcA9tna8cGDuG6cjMnISSd4.jpg?width=1080&crop=smart&auto=webp&s=900d2043d3b1c673952ec5d5a4b434643415edc4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aTUFXJxQKKQ71s5qeAKbrcA9tna8cGDuG6cjMnISSd4.jpg?auto=webp&s=3a591db3ab69fb87454b361090ed78251faa3ff6', 'width': 1200}, 'variants': {}}]}
Stable Diff and Ollama low storage
0
Hi, I installed the Stable Diffusion AUTOMATIC1111 build yesterday on an Ubuntu server with 1TB of space, and today it says there is no storage left. There's no way it has used up a whole 1TB of space already. Is it configured in some way to only use a set amount of space? Any ideas? Ollama also says it has no storage space left, so I'm very confused about what's happened.
2024-12-03T12:55:41
https://www.reddit.com/r/LocalLLaMA/comments/1h5mres/stable_diff_and_ollama_low_storage/
Totalkiller4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5mres
false
null
t3_1h5mres
/r/LocalLLaMA/comments/1h5mres/stable_diff_and_ollama_low_storage/
false
false
self
0
null
Is BERT really as good as GPT? How do they compare.
0
Also, what are some LLMs that use the BERT architecture?
2024-12-03T13:34:45
https://www.reddit.com/r/LocalLLaMA/comments/1h5nibh/is_bert_really_as_good_as_gpt_how_do_they_compare/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5nibh
false
null
t3_1h5nibh
/r/LocalLLaMA/comments/1h5nibh/is_bert_really_as_good_as_gpt_how_do_they_compare/
false
false
self
0
null
Detecting hallucination via a combination of perplexity and entailment
6
Based on some papers, I tried to implement some simple code to detect possible hallucinations. It is mostly uncertainty-based right now. It seems to work, but I would love to get feedback on how to improve it. I am more interested in the logic part, not code structure and readability.

```python
from openai import OpenAI
import numpy as np
from pydantic import BaseModel
import time

client = OpenAI(
    api_key="key",
)


class CheckEntailment(BaseModel):
    label: str


def check_entailment(fragment1: str, fragment2: str) -> bool:
    messages = [
        {
            "role": "user",
            "content": f"You have two responses from a large language model. "
            f"Check if the meaning of one response is entailed by the other, or if there is a contradiction. "
            f"Return '0' if entailment. Return '1' if contradiction. "
            f"Return only the label, without any explanation. \n"
            f"Response1: \n {fragment1}\n\n Response2: \n {fragment2}",
        }
    ]
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.1,
        logprobs=True,
        top_logprobs=2,
        response_format=CheckEntailment,
    )
    entailment = False
    # print(completion.choices[0].logprobs.content[3].top_logprobs)
    # token at index 3 is the label value inside the structured JSON output
    for top_logprob in completion.choices[0].logprobs.content[3].top_logprobs:
        # print(top_logprob.token, np.round(np.exp(top_logprob.logprob), 2))
        if "0" in top_logprob.token and np.exp(top_logprob.logprob) > 0.7:
            entailment = True
    return entailment


# print(check_entailment("Capital of India is New Delhi.", "Paris."))
# print(check_entailment("Capital of India is New Delhi.", "New Delhi"))

some_tricky_questions = [
    "Which state does Alabama have its longest border with? Is it Florida or Tennessee?",
    "Who hosted the British Gameshow Countdown in 2007: a) Nick Hewer b) Richard Whiteley c) Jeff Stelling?",
    "Trivia question: Which Black Eyed Peas band member was the only one to host Saturday Night Live?",
    "What year in the 1980s were the FIS Alpine World Ski Championships hosted in Argentina?",
    "How many Brazilian numbers are there between 1-6?",
    "Which Israeli mathematician founded an online sequences repository in the 1970s?",
    "Write the 7 english words that have three consecutive double letters. No need to provide explanations, just say the words.",
    # adding two questions where it should not hallucinate
    "What is the capital of India?",
    "what is the full form of CPU?",
]


def calculate_entropy(probs):
    """Calculate the entropy"""
    probs = np.array(probs)
    probs = probs / probs.sum()
    probs = probs[probs > 0]
    entropy = -np.sum(probs * np.log2(probs))
    return entropy


for question in some_tricky_questions:
    print("question", question)
    messages = [{"role": "user", "content": f"{question}"}]
    gpt_response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        temperature=0.1,
        logprobs=True,
        max_completion_tokens=60,
    )
    time.sleep(2)
    # get perplexity score using a low temperature response
    logprobs = [token.logprob for token in gpt_response.choices[0].logprobs.content]
    perplexity_score = np.round(np.exp(-np.mean(logprobs)), 2)
    # initialize clusters with the first response
    clusters = [[gpt_response.choices[0].message.content]]
    # generate some more responses using higher temperature and check entailment
    gpt_response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        n=7,
        temperature=0.9,
        logprobs=True,
        max_completion_tokens=60,
    )
    time.sleep(2)
    # check entailment and form clusters
    responses = [choice.message.content for choice in gpt_response.choices]
    for response in responses[1:]:
        found_cluster = False
        for cluster in clusters:
            if check_entailment(cluster[0], response):
                cluster.append(response)
                found_cluster = True
                break
        if not found_cluster:
            clusters.append([response])
    cluster_probs = [len(cluster) / (len(responses) + 1) for cluster in clusters]
    discrete_entropy = calculate_entropy(cluster_probs)
    print("clusters", clusters)
    print("no of clusters", len(clusters))
    print("perplexity", perplexity_score)
    print("entropy", discrete_entropy)
```
2024-12-03T13:38:57
https://www.reddit.com/r/LocalLLaMA/comments/1h5nl5z/detecting_hallucination_via_a_combination_of/
maylad31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5nl5z
false
null
t3_1h5nl5z
/r/LocalLLaMA/comments/1h5nl5z/detecting_hallucination_via_a_combination_of/
false
false
self
6
null
I made simple tool to visualise tokens-per-second speed generation (no cookies and other bs)
26
Sometimes people share how fast an LLM generates responses, and it can be hard to visualize. I created a small tool where you can either input a number to see the token rendering speed, or use a URL with a GET parameter, like [https://shir-man.com/tokens-per-second/?speed=4](https://shir-man.com/tokens-per-second/?speed=4), to share it directly. It might be useful for some folks here.
2024-12-03T13:56:45
https://www.reddit.com/r/LocalLLaMA/comments/1h5ny3z/i_made_simple_tool_to_visualise_tokenspersecond/
Shir_man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5ny3z
false
null
t3_1h5ny3z
/r/LocalLLaMA/comments/1h5ny3z/i_made_simple_tool_to_visualise_tokenspersecond/
false
false
self
26
null
[Showing my project] Looking for a Perplexity-like tool but for YouTube? Check out COFYT!
1
[removed]
2024-12-03T14:03:23
https://www.reddit.com/r/LocalLLaMA/comments/1h5o3af/showing_my_project_looking_for_a_perplexitylike/
Ruibiks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5o3af
false
null
t3_1h5o3af
/r/LocalLLaMA/comments/1h5o3af/showing_my_project_looking_for_a_perplexitylike/
false
false
https://b.thumbs.redditm…hm41-U9l6REk.jpg
1
{'enabled': False, 'images': [{'id': 'CoXhGPfZac2sDF0DxhrNIqJDe7xFMWa730VhpeVvwRQ', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/nRKhOYylCAZFkGrNKM6YEE-v_EYbJxuMXliNHttbL-A.jpg?width=108&crop=smart&auto=webp&s=a28953b2c72e6deeb9d650a94ead0e00b7c28b2d', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/nRKhOYylCAZFkGrNKM6YEE-v_EYbJxuMXliNHttbL-A.jpg?width=216&crop=smart&auto=webp&s=96d28f6ab744509cb2aef86a61982be013163df0', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/nRKhOYylCAZFkGrNKM6YEE-v_EYbJxuMXliNHttbL-A.jpg?width=320&crop=smart&auto=webp&s=c38e42c29c46dc0d778c03f0faaaa500dffdbca4', 'width': 320}, {'height': 434, 'url': 'https://external-preview.redd.it/nRKhOYylCAZFkGrNKM6YEE-v_EYbJxuMXliNHttbL-A.jpg?width=640&crop=smart&auto=webp&s=ad6b7f3a359183646b7f8440a581f85570a0bfe2', 'width': 640}, {'height': 652, 'url': 'https://external-preview.redd.it/nRKhOYylCAZFkGrNKM6YEE-v_EYbJxuMXliNHttbL-A.jpg?width=960&crop=smart&auto=webp&s=ca46b385f971453bbfd7ab7dd02c0d4fb2134137', 'width': 960}], 'source': {'height': 662, 'url': 'https://external-preview.redd.it/nRKhOYylCAZFkGrNKM6YEE-v_EYbJxuMXliNHttbL-A.jpg?auto=webp&s=fd6bd3188a2fa755da3bdc1e85b7f44e752981f7', 'width': 974}, 'variants': {}}]}
Drummer's Endurance 100B v1 - PRUNED Mistral Large 2407 123B with RP tuning! Smaller and faster with nearly the same performance!
67
2024-12-03T15:06:06
https://huggingface.co/TheDrummer/Endurance-100B-v1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1h5ph34
false
null
t3_1h5ph34
/r/LocalLLaMA/comments/1h5ph34/drummers_endurance_100b_v1_pruned_mistral_large/
false
false
default
67
null
Does "think step by step" work with structured outputs?
1
[removed]
2024-12-03T15:10:12
https://www.reddit.com/r/LocalLLaMA/comments/1h5pkfe/does_think_step_by_step_work_with_structured/
AmorSoftware
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5pkfe
false
null
t3_1h5pkfe
/r/LocalLLaMA/comments/1h5pkfe/does_think_step_by_step_work_with_structured/
false
false
self
1
null
Does "think step by step" work with structured outputs?
1
For GPT 4o and 4o-mini, can the model think step by step (use a bit more reasoning or chain of thought) if I use structured outputs? I wonder if the lack of open ended response prevents the model from going through a COT process to get a better result.
2024-12-03T15:11:11
https://www.reddit.com/r/LocalLLaMA/comments/1h5plai/does_think_step_by_step_work_with_structured/
IWantToBeAWebDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5plai
false
null
t3_1h5plai
/r/LocalLLaMA/comments/1h5plai/does_think_step_by_step_work_with_structured/
false
false
self
1
null
Vulkan based Llama.cpp working on AMD BC-250
1
[removed]
2024-12-03T15:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1h5pmh7/vulkan_based_llamacpp_working_on_amd_bc250/
MachineZer0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5pmh7
false
null
t3_1h5pmh7
/r/LocalLLaMA/comments/1h5pmh7/vulkan_based_llamacpp_working_on_amd_bc250/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
I get the 500 GB limit, but why can't I upload files larger than 1 GB? — Hugging Face.
28
2024-12-03T15:32:04
https://i.redd.it/qgn8mq71kn4e1.png
_idkwhattowritehere_
i.redd.it
1970-01-01T00:00:00
0
{}
1h5q322
false
null
t3_1h5q322
/r/LocalLLaMA/comments/1h5q322/i_get_the_500_gb_limit_but_why_cant_i_upload/
false
false
https://b.thumbs.redditm…c920Lgl9-c5k.jpg
28
{'enabled': True, 'images': [{'id': 'Tzi7aEHWYqWlznq30DJVBdDdO6Tj8Jim263lF5-RmyM', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/qgn8mq71kn4e1.png?width=108&crop=smart&auto=webp&s=c3c47781500c5fe85f592c67171c753d3f0c1523', 'width': 108}, {'height': 65, 'url': 'https://preview.redd.it/qgn8mq71kn4e1.png?width=216&crop=smart&auto=webp&s=b2d31abdc9f13f3e5904bc8f96bca981eed73711', 'width': 216}, {'height': 96, 'url': 'https://preview.redd.it/qgn8mq71kn4e1.png?width=320&crop=smart&auto=webp&s=dea9234ad8f490c6c37a6cbd4b9541d184766a96', 'width': 320}, {'height': 193, 'url': 'https://preview.redd.it/qgn8mq71kn4e1.png?width=640&crop=smart&auto=webp&s=67c14c826ee6799ad6f94c898a94aeab0f96d871', 'width': 640}], 'source': {'height': 273, 'url': 'https://preview.redd.it/qgn8mq71kn4e1.png?auto=webp&s=1086e94356b83868e81b3510ca25b746e0748e59', 'width': 903}, 'variants': {}}]}
LoRA question
1
[removed]
2024-12-03T15:33:40
https://www.reddit.com/r/LocalLLaMA/comments/1h5q4dg/lora_question/
KevinBeicon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5q4dg
false
null
t3_1h5q4dg
/r/LocalLLaMA/comments/1h5q4dg/lora_question/
false
false
self
1
null
OuteTTS-0.2-500M Text to speech model
51
OuteTTS-0.2-500M is an enhanced text-to-speech model featuring improved voice cloning capabilities and multilingual support, trained on extensive datasets. Key updates include better accuracy, more natural speech synthesis, an expanded vocabulary of over 5 billion audio tokens, and experimental support for Chinese, Japanese, and Korean languages
2024-12-03T15:44:26
https://i.redd.it/xjq74eddmn4e1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1h5qdeb
false
null
t3_1h5qdeb
/r/LocalLLaMA/comments/1h5qdeb/outetts02500m_text_to_speech_model/
false
false
https://b.thumbs.redditm…UIsVzUfFg4WM.jpg
51
{'enabled': True, 'images': [{'id': 'am7PKqeBD7jxiHLe6JHUcX001fhzd-lCNm7JDelsInU', 'resolutions': [{'height': 202, 'url': 'https://preview.redd.it/xjq74eddmn4e1.png?width=108&crop=smart&auto=webp&s=fd6d3b1bcf4fba5d7802c0956d0cab7fc27a4902', 'width': 108}, {'height': 405, 'url': 'https://preview.redd.it/xjq74eddmn4e1.png?width=216&crop=smart&auto=webp&s=bc89b4bbef30257f07b6f4f687652b8f507b2730', 'width': 216}, {'height': 600, 'url': 'https://preview.redd.it/xjq74eddmn4e1.png?width=320&crop=smart&auto=webp&s=0c903ba7e4691e0f121075e6fc82654e494109a9', 'width': 320}, {'height': 1201, 'url': 'https://preview.redd.it/xjq74eddmn4e1.png?width=640&crop=smart&auto=webp&s=cc773403f4d743ed185ce0f40c033af63194ac67', 'width': 640}, {'height': 1802, 'url': 'https://preview.redd.it/xjq74eddmn4e1.png?width=960&crop=smart&auto=webp&s=e1ac428e471d12e6668a0c989933b7a659706a83', 'width': 960}, {'height': 2028, 'url': 'https://preview.redd.it/xjq74eddmn4e1.png?width=1080&crop=smart&auto=webp&s=fab4986712c02febaad68f72f150fc0493c24a13', 'width': 1080}], 'source': {'height': 2028, 'url': 'https://preview.redd.it/xjq74eddmn4e1.png?auto=webp&s=1b44b558fe2b687b1b76641b9b29f0c66b91de6e', 'width': 1080}, 'variants': {}}]}
SANA: High-resolution image generation from Nvidia Labs.
207
Sana is a family of models for generating images with resolutions up to 4096x4096 pixels. The main advantage of Sana is its high inference speed and low resource requirements; the models can be run even on a laptop. Sana's test results are impressive:

🟠 Sana-0.6B, which works with 512x512 images, is 5x faster than PixArt-Σ, while performing better on FID, CLIP Score, GenEval, and DPG-Bench metrics.

🟠 At 1024x1024 resolution, Sana-0.6B is 40x faster than PixArt-Σ.

🟠 Sana-0.6B is 39x faster than Flux-12B at 1024x1024 resolution, and can be run on a laptop with 16 GB of VRAM, generating 1024x1024 images in less than a second.
2024-12-03T15:52:16
https://i.redd.it/cdgulimrnn4e1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1h5qjyp
false
null
t3_1h5qjyp
/r/LocalLLaMA/comments/1h5qjyp/sana_highresolution_image_generation_from_nvidia/
false
false
https://b.thumbs.redditm…OHmkHUU93uls.jpg
207
{'enabled': True, 'images': [{'id': 'lxOx-SzFf7sw-_mQLqmovcyCEafrtEwtj43IPE1Se88', 'resolutions': [{'height': 215, 'url': 'https://preview.redd.it/cdgulimrnn4e1.png?width=108&crop=smart&auto=webp&s=4cb03226f744d78556f4e078db3f1623af36b8df', 'width': 108}, {'height': 431, 'url': 'https://preview.redd.it/cdgulimrnn4e1.png?width=216&crop=smart&auto=webp&s=1859ea0c219ef7acbdbb02e97ae68b4151aa5ff7', 'width': 216}, {'height': 639, 'url': 'https://preview.redd.it/cdgulimrnn4e1.png?width=320&crop=smart&auto=webp&s=1aa9ec398d454530ce3c991788ab420dec815661', 'width': 320}, {'height': 1279, 'url': 'https://preview.redd.it/cdgulimrnn4e1.png?width=640&crop=smart&auto=webp&s=61fab4859d685790d053f709688f9a816387533b', 'width': 640}, {'height': 1919, 'url': 'https://preview.redd.it/cdgulimrnn4e1.png?width=960&crop=smart&auto=webp&s=8d39ddc44d6f6b531cec19d61ad48ab5e6beba95', 'width': 960}, {'height': 2159, 'url': 'https://preview.redd.it/cdgulimrnn4e1.png?width=1080&crop=smart&auto=webp&s=8b48f44c224e391d06eed71210de4ed378ccbd3f', 'width': 1080}], 'source': {'height': 2159, 'url': 'https://preview.redd.it/cdgulimrnn4e1.png?auto=webp&s=0d8cb4746baee827e5b734be63c8075146bfe2ad', 'width': 1080}, 'variants': {}}]}
CPU to assist 4090: 7900x or 9800X3D
2
The title says it all: which CPU would be better suited to assist a Nvidia 4090 in a local AI system running Ollama with Open WebUI?
2024-12-03T15:58:58
https://www.reddit.com/r/LocalLLaMA/comments/1h5qpme/cpu_to_assist_4090_7900x_or_9800x3d/
Calrissiano
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5qpme
false
null
t3_1h5qpme
/r/LocalLLaMA/comments/1h5qpme/cpu_to_assist_4090_7900x_or_9800x3d/
false
false
self
2
null
Local models which can be run.
1
[removed]
2024-12-03T16:05:09
https://www.reddit.com/r/LocalLLaMA/comments/1h5qv3d/local_models_which_can_be_run/
iKSv2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5qv3d
false
null
t3_1h5qv3d
/r/LocalLLaMA/comments/1h5qv3d/local_models_which_can_be_run/
false
false
self
1
null
(Very) Small models are useful for what?
28
For very small models, say up to 2-3B params... have you found any uses that they are perfectly adequate for? Very interested to know. Thanks!
2024-12-03T16:18:45
https://www.reddit.com/r/LocalLLaMA/comments/1h5r6bi/very_small_models_are_useful_for_what/
cangaroo_hamam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5r6bi
false
null
t3_1h5r6bi
/r/LocalLLaMA/comments/1h5r6bi/very_small_models_are_useful_for_what/
false
false
self
28
null
Dates and Llama, why are you confused?
1
I am trying to get Llama-3.2-3b to calculate a date from today's date, but can't seem to get my mind around what it is thinking.

> **User:** Today is December 3, 2024. What is 27 days ago?
>
> **Assistant:** To find the date 27 days ago, I'll subtract 27 days from December 3.
> If you go back 31 days (the number of days in a month), we'll reach the end of October.
> October has 31 days. So, if you go back 27 days from December 3, we'll be in September.
> September has 30 days. We're going to subtract 25 days, as 27 - 2 = 25.
> 2024-12-03 - 27 days = 2024-09-28
> So, 27 days ago was September 28.
>
> **User:** What is 10 days ago?
>
> **Assistant:** To find the date 10 days ago, I'll subtract 10 days from December 3.
> December has 31 days. So, if you go back 10 days from December 3, we'll be in November.
> 2024-12-03 - 10 days = 2024-11-24
> So, 10 days ago was November 24.

What am I doing wrong or not understanding about its calculation of dates?
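For reference, the correct answers are easy to check outside the model with Python's standard datetime module; this quick sanity check is mine, not part of the original post:

```python
# Sanity check of the date arithmetic the model is being asked to do.
from datetime import date, timedelta

today = date(2024, 12, 3)
print(today - timedelta(days=27))  # 2024-11-06 (not September 28)
print(today - timedelta(days=10))  # 2024-11-23 (not November 24)
```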
2024-12-03T17:02:39
https://www.reddit.com/r/LocalLLaMA/comments/1h5s7nd/dates_and_llama_why_are_you_confused/
chilanvilla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5s7nd
false
null
t3_1h5s7nd
/r/LocalLLaMA/comments/1h5s7nd/dates_and_llama_why_are_you_confused/
false
false
self
1
null
Do you think we're in an extremely inefficient and expensive era for running LLMs locally on our own hardware?
1
[removed]
2024-12-03T17:04:16
https://www.reddit.com/r/LocalLLaMA/comments/1h5s91q/do_you_think_were_in_an_extremely_inefficient_and/
Roubbes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5s91q
false
null
t3_1h5s91q
/r/LocalLLaMA/comments/1h5s91q/do_you_think_were_in_an_extremely_inefficient_and/
false
false
self
1
null
How We Used Llama 3.2 to Fix a Copywriting Nightmare
1
[removed]
2024-12-03T17:10:53
https://www.reddit.com/r/LocalLLaMA/comments/1h5set8/how_we_used_llama_32_to_fix_a_copywriting/
kaulvimal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5set8
false
null
t3_1h5set8
/r/LocalLLaMA/comments/1h5set8/how_we_used_llama_32_to_fix_a_copywriting/
false
false
self
1
{'enabled': False, 'images': [{'id': 'deIK_fefS_mPODE-fYMYNYt2TAlXSyjSqxQro6CvJnY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=108&crop=smart&auto=webp&s=7abba2a0ef01f10d40d540f412f9b74253090843', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=216&crop=smart&auto=webp&s=2ede2d77a28c529c3b19c0c7ed5603f79503d38f', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=320&crop=smart&auto=webp&s=edcd3963103a7182d2372d8b47fca1d595d1074a', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=640&crop=smart&auto=webp&s=fdc485809e27e168b4758abb5d973db6fa9b0433', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=960&crop=smart&auto=webp&s=57f2138217120f147a566000390d7392c1ddad80', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=1080&crop=smart&auto=webp&s=00588864e35af8a36bd8c749161ade2084932db1', 'width': 1080}], 'source': {'height': 673, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?auto=webp&s=36381a27094f174a4e60750bb667f662e8b3afb6', 'width': 1200}, 'variants': {}}]}
Production Grade Multi-Agent Systems
1
Which do you prefer: using a framework, or writing a multi-agent system from scratch? I am talking about production-grade systems, not hobby projects. Which stacks do you use, and where do you deploy them?
2024-12-03T17:37:34
https://www.reddit.com/r/LocalLLaMA/comments/1h5t1ue/production_grade_multiagent_systems/
Cool_Radio_1465
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5t1ue
false
null
t3_1h5t1ue
/r/LocalLLaMA/comments/1h5t1ue/production_grade_multiagent_systems/
false
false
self
1
null
Macbook Pro M2 Max at 96GB RAM, or M4 Max at 36 GB RAM?
7
Basically, I want to get a Macbook Pro to run local LLMs - would it be better to prioritize getting the latest CPU (but lower RAM at 36GB), or get a refurbished laptop with an M2 Max, but with 96GB of RAM? What would be a better mobile machine for running local LLMs?
2024-12-03T17:38:10
https://www.reddit.com/r/LocalLLaMA/comments/1h5t2el/macbook_pro_m2_max_at_96gb_ram_or_m4_max_at_36_gb/
keokq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5t2el
false
null
t3_1h5t2el
/r/LocalLLaMA/comments/1h5t2el/macbook_pro_m2_max_at_96gb_ram_or_m4_max_at_36_gb/
false
false
self
7
null
Local clone of myself
1
[removed]
2024-12-03T18:03:33
https://www.reddit.com/r/LocalLLaMA/comments/1h5tou7/local_clone_of_myself/
ReplacementSafe8563
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5tou7
false
null
t3_1h5tou7
/r/LocalLLaMA/comments/1h5tou7/local_clone_of_myself/
false
false
self
1
null
PyGPT?
1
[removed]
2024-12-03T18:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1h5tvvw/pygpt/
FewCartographer903
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5tvvw
false
null
t3_1h5tvvw
/r/LocalLLaMA/comments/1h5tvvw/pygpt/
false
false
self
1
null
Testing DeepSeek-R1-Lite-Preview in Math and Reasoning Tasks
1
[removed]
2024-12-03T18:12:50
https://www.reddit.com/r/LocalLLaMA/comments/1h5tx0j/testing_deepseekr1litepreview_in_math_and/
kiryangol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5tx0j
false
null
t3_1h5tx0j
/r/LocalLLaMA/comments/1h5tx0j/testing_deepseekr1litepreview_in_math_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IdxN62ONM10g0_alqdN9lA6GL07ICrk1hd_SP8I5L5o', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/wf5h3cfqUBR1PhlvaWb5D-C-OB93mFnX7zb5B4iHBus.jpg?width=108&crop=smart&auto=webp&s=d9cf2fe1061f65df935124874e95992122d1b777', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/wf5h3cfqUBR1PhlvaWb5D-C-OB93mFnX7zb5B4iHBus.jpg?width=216&crop=smart&auto=webp&s=831ff865ebe3a773409af010a419cfa27426ff82', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/wf5h3cfqUBR1PhlvaWb5D-C-OB93mFnX7zb5B4iHBus.jpg?width=320&crop=smart&auto=webp&s=c9ad2902ccb01864580b0c28c93958da32d0d5ab', 'width': 320}, {'height': 387, 'url': 'https://external-preview.redd.it/wf5h3cfqUBR1PhlvaWb5D-C-OB93mFnX7zb5B4iHBus.jpg?width=640&crop=smart&auto=webp&s=575ff59aa770c9e7a238b66fae9c77a812d0e28e', 'width': 640}, {'height': 581, 'url': 'https://external-preview.redd.it/wf5h3cfqUBR1PhlvaWb5D-C-OB93mFnX7zb5B4iHBus.jpg?width=960&crop=smart&auto=webp&s=62278093cc2b2fb15442ae2bcb38b91295d08c85', 'width': 960}, {'height': 654, 'url': 'https://external-preview.redd.it/wf5h3cfqUBR1PhlvaWb5D-C-OB93mFnX7zb5B4iHBus.jpg?width=1080&crop=smart&auto=webp&s=00ac3bc7cb303f37cfaade75cf80f8013323f8eb', 'width': 1080}], 'source': {'height': 727, 'url': 'https://external-preview.redd.it/wf5h3cfqUBR1PhlvaWb5D-C-OB93mFnX7zb5B4iHBus.jpg?auto=webp&s=80ac8e5c08a1d135cfeeeb246e65851f5d5c6255', 'width': 1200}, 'variants': {}}]}
I'm thinking Mac Mini M4 Pro with 64GB or RAM might be the best way to go right now for local LLMs- thoughts?
4
Initially I was looking into PCs with, say, a 4090 Super (24GB of VRAM) and 64GB of RAM. Generally these seem to have the Intel 14**** that people say tends to burn itself up, so that doesn't seem good. These configurations end up being around $3K from Alienware and elsewhere.

So I thought I'd look into what there is on the Mac side, initially looking at MBPs (laptops), and those end up in the $4K to $5K range (a lot, and I don't really need a laptop). Then I looked into a Mac Mini with the M4 Pro, 64GB of RAM and 1TB, and that comes out to $2400. The unified memory architecture would seem to put it ahead here, and the price is actually pretty competitive with what I'd be getting in a gaming PC. So what are the downsides of going with a Mac to run LLMs (and other ML archs) locally? (I'm a Linux user.)

Also, apparently AMD is going to be coming out with the Strix Halo in a few months, which would also allow for a unified memory architecture, but as I understand it that would probably only be in laptops?
2024-12-03T18:42:59
https://www.reddit.com/r/LocalLLaMA/comments/1h5un2i/im_thinking_mac_mini_m4_pro_with_64gb_or_ram/
cafedude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5un2i
false
null
t3_1h5un2i
/r/LocalLLaMA/comments/1h5un2i/im_thinking_mac_mini_m4_pro_with_64gb_or_ram/
false
false
self
4
null
Amazon unveils their LLM family, Nova.
159
https://preview.redd.it/…88a1b7795c656a
2024-12-03T18:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1h5un4b/amazon_unveils_their_llm_family_nova/
jpydych
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5un4b
false
null
t3_1h5un4b
/r/LocalLLaMA/comments/1h5un4b/amazon_unveils_their_llm_family_nova/
false
false
https://b.thumbs.redditm…OA662ginNpEo.jpg
159
null
llama.cpp bug fixed! Speculative decoding is 30% faster with 2x the context size
262
Testing with Qwen-2.5-Coder-32B-Q4_K_M I was able to double my context size and get a ~30% performance increase. On a single 3090 I hit 106.64 tokens/second at 28500 context size with my code generation benchmark.

|scenario|python|typescript|swift|
|:-|:-|:-|:-|
|3090 (before)|78.72 tps|53.15 tps|45.26 tps|
|3090 (after)|106.65 tps|70.48 tps|57.89 tps|
|tokens/second increase|35.48%|32.60%|27.03%|

If you want to find the optimal settings for your setup, I wrote up a testing guide with configurations and the benchmarking script here: [optimizing code generation with llama-swap](https://github.com/mostlygeek/llama-swap/tree/main/examples/benchmark-snakegame).
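If you want a quick tokens-per-second measurement of your own setup, here is a rough sketch (mine, not the linked benchmark script) that times a completion against a running llama.cpp `llama-server` instance through its OpenAI-compatible endpoint; the localhost URL, port, and model label are assumptions:

```python
# Rough tokens/second check against a local llama-server (OpenAI-compatible API).
# Assumes llama-server is already running on localhost:8080; adjust URL/port as needed.
import time
import requests

payload = {
    "model": "qwen2.5-coder-32b",  # label only; the server answers with whatever model it loaded
    "messages": [{"role": "user", "content": "Write a snake game in Python."}],
    "max_tokens": 512,
}

start = time.time()
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload).json()
elapsed = time.time() - start

completion_tokens = resp["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
```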
2024-12-03T18:46:27
https://www.reddit.com/r/LocalLLaMA/comments/1h5uq43/llamacpp_bug_fixed_speculative_decoding_is_30/
No-Statement-0001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5uq43
false
null
t3_1h5uq43
/r/LocalLLaMA/comments/1h5uq43/llamacpp_bug_fixed_speculative_decoding_is_30/
false
false
self
262
null
AI Dashboard Builder
1
[removed]
2024-12-03T19:08:57
https://www.reddit.com/r/LocalLLaMA/comments/1h5v9tl/ai_dashboard_builder/
nava_7777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5v9tl
false
null
t3_1h5v9tl
/r/LocalLLaMA/comments/1h5v9tl/ai_dashboard_builder/
false
false
self
1
{'enabled': False, 'images': [{'id': 's8POFqCo-i4-7Vy6D8qFSWOGbLhg5rGZZvakmSzbFhE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/voKyfaBwoVkewIjxa-ZvYumr4atq9HnRxtwiOv3dQ5k.jpg?width=108&crop=smart&auto=webp&s=f26c048f61ef9b2452de68984dd3671912394b0b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/voKyfaBwoVkewIjxa-ZvYumr4atq9HnRxtwiOv3dQ5k.jpg?width=216&crop=smart&auto=webp&s=ee357d0bb59d5f0e496814b2d2a087bb3925225f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/voKyfaBwoVkewIjxa-ZvYumr4atq9HnRxtwiOv3dQ5k.jpg?width=320&crop=smart&auto=webp&s=5e2d603cb191dc0c4f12eab59c4495569dcf89cf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/voKyfaBwoVkewIjxa-ZvYumr4atq9HnRxtwiOv3dQ5k.jpg?width=640&crop=smart&auto=webp&s=ebccab9475588f4b5a0bc128081dc12fcdfbf08d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/voKyfaBwoVkewIjxa-ZvYumr4atq9HnRxtwiOv3dQ5k.jpg?width=960&crop=smart&auto=webp&s=36dd5e6bf6c19f11cda284c5129eb23f422c8ef7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/voKyfaBwoVkewIjxa-ZvYumr4atq9HnRxtwiOv3dQ5k.jpg?width=1080&crop=smart&auto=webp&s=e1e21d8afe1051f4a127c1abf333bb89c6aff1d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/voKyfaBwoVkewIjxa-ZvYumr4atq9HnRxtwiOv3dQ5k.jpg?auto=webp&s=7463b3d55951734249a600334318461347c4413d', 'width': 1200}, 'variants': {}}]}
MeshGen (LLaMA-Mesh in Blender) v0.2 update brings 5x speedup, CPU support
48
I've just [released](https://github.com/huggingface/meshgen/releases/tag/v0.3.0) an update to [MeshGen](https://github.com/huggingface/meshgen) that replaces the transformers backend and [full LLaMA-Mesh](https://huggingface.co/Zhengyi/LLaMA-Mesh) with a llama-cpp-python backend and [quantized LLaMA-Mesh](https://huggingface.co/bartowski/LLaMA-Mesh-GGUF). This dramatically improves performance and memory requirements: the GPU version now needs 8GB of VRAM, with an optional, slower CPU version. It takes ~10s to generate a mesh on an RTX 4090. Kudos to u/noneabove1182 for the quantized LLaMA-Mesh 🤗
2024-12-03T19:10:16
https://www.reddit.com/r/LocalLLaMA/comments/1h5vay8/meshgen_llamamesh_in_blender_v02_update_brings_5x/
individual_kex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5vay8
false
null
t3_1h5vay8
/r/LocalLLaMA/comments/1h5vay8/meshgen_llamamesh_in_blender_v02_update_brings_5x/
false
false
self
48
{'enabled': False, 'images': [{'id': 'gP2x-AC0hXvlP3NqBxVXSE2rjz11lMMbFJTeNHzZKRM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kZ0VVGU4i17RxLOKZQVGkqrL64V_QFHiNB2zAG2Puo8.jpg?width=108&crop=smart&auto=webp&s=4766a7e8c81fa173ccabf9033f9e48b26743cedd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kZ0VVGU4i17RxLOKZQVGkqrL64V_QFHiNB2zAG2Puo8.jpg?width=216&crop=smart&auto=webp&s=88f7f58508c691f836577b7d50dacb90861901ca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kZ0VVGU4i17RxLOKZQVGkqrL64V_QFHiNB2zAG2Puo8.jpg?width=320&crop=smart&auto=webp&s=d9541b2720457cf988421b24926fc57ab140bd9f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kZ0VVGU4i17RxLOKZQVGkqrL64V_QFHiNB2zAG2Puo8.jpg?width=640&crop=smart&auto=webp&s=70f055ee6d7d9712aa2555954da1549f9676e468', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kZ0VVGU4i17RxLOKZQVGkqrL64V_QFHiNB2zAG2Puo8.jpg?width=960&crop=smart&auto=webp&s=6fa61f58625e1ad29599d29c26872e9037de7e7f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kZ0VVGU4i17RxLOKZQVGkqrL64V_QFHiNB2zAG2Puo8.jpg?width=1080&crop=smart&auto=webp&s=34e6b745bdb8d9499c01053c6c13267d9cfba06a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kZ0VVGU4i17RxLOKZQVGkqrL64V_QFHiNB2zAG2Puo8.jpg?auto=webp&s=419cdc5fa39a571fa013774ad75dd4621dc57593', 'width': 1200}, 'variants': {}}]}
Awesome Open Data Sets for Training
29
See here: https://github.com/lmmlzn/Awesome-LLMs-Datasets

Also, I wanted to ask what you think would be the best combination of datasets to achieve state-of-the-art coding and machine learning performance on par with or exceeding Qwen 2.5. I wonder if QwQ could be retrained on Qwen 2.5 datasets.
2024-12-03T19:16:05
https://www.reddit.com/r/LocalLLaMA/comments/1h5vg32/awesome_open_data_sets_for_training/
NewBronzeAge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5vg32
false
null
t3_1h5vg32
/r/LocalLLaMA/comments/1h5vg32/awesome_open_data_sets_for_training/
false
false
self
29
{'enabled': False, 'images': [{'id': '-DAH-lkZX7_zh-eTDuwPK7Du6oFXrxCuyxxJbL0IDoo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8hNUse9x-MN8DlUtHIe7QpNl03ZTsNB1xzdqpvxcBxc.jpg?width=108&crop=smart&auto=webp&s=4922bd15abe6ccbd550b32195bbf3ba9832de8ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8hNUse9x-MN8DlUtHIe7QpNl03ZTsNB1xzdqpvxcBxc.jpg?width=216&crop=smart&auto=webp&s=f3c4871945ac5b364aade7291b456a3793a9b490', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8hNUse9x-MN8DlUtHIe7QpNl03ZTsNB1xzdqpvxcBxc.jpg?width=320&crop=smart&auto=webp&s=d72dbb1f2a45587ade73dcfa19084d218cfba14b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8hNUse9x-MN8DlUtHIe7QpNl03ZTsNB1xzdqpvxcBxc.jpg?width=640&crop=smart&auto=webp&s=3d5d1f93baf762c25dfb22861cafdd3974edb9e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8hNUse9x-MN8DlUtHIe7QpNl03ZTsNB1xzdqpvxcBxc.jpg?width=960&crop=smart&auto=webp&s=e296c19165df4b49f94cdd8a94ba93646416dbb9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8hNUse9x-MN8DlUtHIe7QpNl03ZTsNB1xzdqpvxcBxc.jpg?width=1080&crop=smart&auto=webp&s=c73505399d97d192c359d5487ad9fc79cc8ce2db', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8hNUse9x-MN8DlUtHIe7QpNl03ZTsNB1xzdqpvxcBxc.jpg?auto=webp&s=505dfaed2f663ffa2820fd548c5d91587db2051f', 'width': 1200}, 'variants': {}}]}
TriLM_3.9B is so mean !!!!
1
[removed]
2024-12-03T19:17:02
https://www.reddit.com/r/LocalLLaMA/comments/1h5vgy7/trilm_39b_is_so_mean/
Weird-Field6128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5vgy7
false
null
t3_1h5vgy7
/r/LocalLLaMA/comments/1h5vgy7/trilm_39b_is_so_mean/
false
false
https://a.thumbs.redditm…g8jYL2OWKDX8.jpg
1
null
I benchmarked Qwen QwQ on aider coding bench - results are underwhelming
71
After running for nearly 4 days on an M3 Max 40c 128GB, I have to say I'm not impressed with QwQ's coding capabilities. As has been mentioned previously, this model seems better suited as a "planner" coupled with Qwen-coder 32B. The pair combined might do some damage on coding benchmarks if someone's able to do further analysis. BTW, for everyone saying this model and Qwen-coder 32B can run on an RTX 3090, I'd like to see some feedback. Maybe it's just the MLX architecture being RAM-hungry, but it was using around 90 GB of RAM (with a context window of 16384) for the duration of the benchmarks. You'd need about 4 RTX 3090s for that, but maybe I'm ignorant and GGUF or other formats don't take as much RAM.
2024-12-03T19:21:28
https://www.reddit.com/gallery/1h5vkyh
AIGuy3000
reddit.com
1970-01-01T00:00:00
0
{}
1h5vkyh
false
null
t3_1h5vkyh
/r/LocalLLaMA/comments/1h5vkyh/i_benchmarked_qwen_qwq_on_aider_coding_bench/
false
false
https://b.thumbs.redditm…J0kzk-CRopCg.jpg
71
null
Intel Battlemage GPUs Just Got Announced
143
2024-12-03T19:22:05
https://www.phoronix.com/review/intel-arc-b580-battlemage
Someone13574
phoronix.com
1970-01-01T00:00:00
0
{}
1h5vlfz
false
null
t3_1h5vlfz
/r/LocalLLaMA/comments/1h5vlfz/intel_battlemage_gpus_just_got_announced/
false
false
https://b.thumbs.redditm…vzb-vNKTOt8A.jpg
143
{'enabled': False, 'images': [{'id': 'O_E4g-pvCPQ-Ov5D7UPxoH8u3trAZQW33i-5bkEOADA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/zZ879RfIKy1ENAKKngxuXZTZmwP-37H-nepuA7SzJdU.jpg?width=108&crop=smart&auto=webp&s=eedc33c2766b078d31ea95209cc2374bae1b3979', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/zZ879RfIKy1ENAKKngxuXZTZmwP-37H-nepuA7SzJdU.jpg?width=216&crop=smart&auto=webp&s=b84547763c48c9a0d941c7dad84f2fd0434aaa19', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/zZ879RfIKy1ENAKKngxuXZTZmwP-37H-nepuA7SzJdU.jpg?width=320&crop=smart&auto=webp&s=cbf666e8f683b082b0a50d54aa57c280f5b7075c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/zZ879RfIKy1ENAKKngxuXZTZmwP-37H-nepuA7SzJdU.jpg?width=640&crop=smart&auto=webp&s=72f27884b7c481fb80fb2329cc4287ab5009efe9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/zZ879RfIKy1ENAKKngxuXZTZmwP-37H-nepuA7SzJdU.jpg?width=960&crop=smart&auto=webp&s=e379661b004860bc99e7af81c97c4ff50bf0578d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/zZ879RfIKy1ENAKKngxuXZTZmwP-37H-nepuA7SzJdU.jpg?width=1080&crop=smart&auto=webp&s=e4c0ee9ff9591a56663c8a17b45720c1ee9ff888', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/zZ879RfIKy1ENAKKngxuXZTZmwP-37H-nepuA7SzJdU.jpg?auto=webp&s=4a184c7f5e1c5bcb8a3b31a0e83ba83e9f7b0a40', 'width': 1920}, 'variants': {}}]}
Have anyone else had this experience with TriLM_3.9B
3
[https://imgur.com/a/trilm-3-9b-is-so-mean-r9MrmjK](https://imgur.com/a/trilm-3-9b-is-so-mean-r9MrmjK)

I am testing this on my local machine with no changes to the system prompt.
2024-12-03T19:23:04
https://www.reddit.com/r/LocalLLaMA/comments/1h5vmdn/have_anyone_else_had_this_experience_with_trilm/
Weird-Field6128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5vmdn
false
null
t3_1h5vmdn
/r/LocalLLaMA/comments/1h5vmdn/have_anyone_else_had_this_experience_with_trilm/
false
false
self
3
{'enabled': False, 'images': [{'id': 'XEezgJ6UVIhuGXu80MxyK-CKeiIaGp3umYOR7QSQNJ0', 'resolutions': [{'height': 158, 'url': 'https://external-preview.redd.it/SsJhk1Te_JbEDQskiK4_igQ3dzUkcLUgHPw0A9zMUUA.jpg?width=108&crop=smart&auto=webp&s=c358d2fbdd2477d0c64b00f27dbd0a5ef15ba71c', 'width': 108}, {'height': 317, 'url': 'https://external-preview.redd.it/SsJhk1Te_JbEDQskiK4_igQ3dzUkcLUgHPw0A9zMUUA.jpg?width=216&crop=smart&auto=webp&s=7ce7a3b6099c3c807c971e7873d8558d01c06ccc', 'width': 216}, {'height': 470, 'url': 'https://external-preview.redd.it/SsJhk1Te_JbEDQskiK4_igQ3dzUkcLUgHPw0A9zMUUA.jpg?width=320&crop=smart&auto=webp&s=747b43ba66a2761a9b434debac7562f8de7f2434', 'width': 320}], 'source': {'height': 899, 'url': 'https://external-preview.redd.it/SsJhk1Te_JbEDQskiK4_igQ3dzUkcLUgHPw0A9zMUUA.jpg?auto=webp&s=29847351347ba3da6d2e908ef54d972cc0eb0d1d', 'width': 611}, 'variants': {}}]}
Project Alice v0.3 => OS Agentic Workflows with Web UI
1
[removed]
2024-12-03T20:52:22
https://www.reddit.com/r/LocalLLaMA/comments/1h5xu7z/project_alice_v03_os_agentic_workflows_with_web_ui/
wontreadterms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5xu7z
false
null
t3_1h5xu7z
/r/LocalLLaMA/comments/1h5xu7z/project_alice_v03_os_agentic_workflows_with_web_ui/
false
false
https://b.thumbs.redditm…LWwbJKFcWYHo.jpg
1
{'enabled': False, 'images': [{'id': 'OBO_pS0YvELCZ9uM1Lr6G6gdWDBoRFOyaV14AHsBH-8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x_m-hwbXiLcdnJw1WRtqOsk0hh73TBaUhzB1fogKRFU.jpg?width=108&crop=smart&auto=webp&s=2e829ca474f8963e9e20d50e43917d4952c1a7ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x_m-hwbXiLcdnJw1WRtqOsk0hh73TBaUhzB1fogKRFU.jpg?width=216&crop=smart&auto=webp&s=989adb1d5722d8b09bf46a27c8af06899d870d36', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x_m-hwbXiLcdnJw1WRtqOsk0hh73TBaUhzB1fogKRFU.jpg?width=320&crop=smart&auto=webp&s=2882b9366dd8f8b9f1198139669314acc5ce84fb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x_m-hwbXiLcdnJw1WRtqOsk0hh73TBaUhzB1fogKRFU.jpg?width=640&crop=smart&auto=webp&s=5891423aad7dba03ef8f2141d2d5750cac466315', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x_m-hwbXiLcdnJw1WRtqOsk0hh73TBaUhzB1fogKRFU.jpg?width=960&crop=smart&auto=webp&s=506882e43e82b3f1d2f5f01f4bb719b3718cac88', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x_m-hwbXiLcdnJw1WRtqOsk0hh73TBaUhzB1fogKRFU.jpg?width=1080&crop=smart&auto=webp&s=b31797f9a83e569997ea082ac0cb1aacef39d0b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x_m-hwbXiLcdnJw1WRtqOsk0hh73TBaUhzB1fogKRFU.jpg?auto=webp&s=249b9609ce874cb0dc065dd70df148966417560a', 'width': 1200}, 'variants': {}}]}
Best way to build code summarizer app?
1
[removed]
2024-12-03T20:53:59
https://www.reddit.com/r/LocalLLaMA/comments/1h5xvlz/best_way_to_build_code_summarizer_app/
RedOblivion01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5xvlz
false
null
t3_1h5xvlz
/r/LocalLLaMA/comments/1h5xvlz/best_way_to_build_code_summarizer_app/
false
false
self
1
null
Understanding the difference between major Inferencing / serving options?
1
[removed]
2024-12-03T21:13:31
https://www.reddit.com/r/LocalLLaMA/comments/1h5yckb/understanding_the_difference_between_major/
Ok-Mycologist-2487
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5yckb
false
null
t3_1h5yckb
/r/LocalLLaMA/comments/1h5yckb/understanding_the_difference_between_major/
false
false
self
1
null
We’ve built the #1st Agentic AI solution. Seamlessly run local LLMs, integrate paid LLMs, and combine their strengths to create your own powerful network of multiple AI agents.
1
2024-12-03T21:14:33
https://www.prompx.com/
PrompX
prompx.com
1970-01-01T00:00:00
0
{}
1h5ydfe
false
null
t3_1h5ydfe
/r/LocalLLaMA/comments/1h5ydfe/weve_built_the_1st_agentic_ai_solution_seamlessly/
false
false
default
1
null
I Just Installed a Local Llama on my 4070. I have never used python before.
8
I can’t believe how much information I now have at the tip of my fingers on a computer that isn’t connected to the internet. Wild times…
2024-12-03T21:55:57
https://www.reddit.com/r/LocalLLaMA/comments/1h5zdi0/i_just_installed_a_local_llama_on_my_4070_i_have/
zilifrom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5zdi0
false
null
t3_1h5zdi0
/r/LocalLLaMA/comments/1h5zdi0/i_just_installed_a_local_llama_on_my_4070_i_have/
false
false
self
8
null
Models with large context windows
2
I'm curious if there are any good models out there with a large context window (~500k tokens). In a non-local environment, Gemini seems to be the best bet since it has a 1M-token window. Locally, I haven't found too many options...
2024-12-03T22:26:40
https://www.reddit.com/r/LocalLLaMA/comments/1h604go/models_with_large_context_windows/
CSlov23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h604go
false
null
t3_1h604go
/r/LocalLLaMA/comments/1h604go/models_with_large_context_windows/
false
false
self
2
null
$666 Refurbished RTX 3090, $810 Refurbished RTX 3090 Ti
0
# [ZOTAC GAMING GeForce RTX 3090 Trinity OC \[Refurbished\]](https://www.zotacstore.com/us/zt-a30900j-10p-r)

* 10496 Cores
* Boost: 1710 MHz
* 24GB GDDR6X / 19.5 Gbps / 384-bit
* Free Shipping

**$665.99**

# [ZOTAC GAMING GeForce RTX 3090 Ti AMP Extreme Holo \[Refurbished\]](https://www.zotacstore.com/us/zt-a30910b-10p-r)

* 10752 Cores
* Boost: 1890 MHz
* 24GB GDDR6X / 21 Gbps / 384-bit
* Free Shipping

**$809.99**

I know nothing about Zotac or their refurb quality, just saw these on Slickdeals...
2024-12-03T22:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1h60bqu/666_refurbished_rtx_3090_810_refurbished_rtx_3090/
randomqhacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h60bqu
false
null
t3_1h60bqu
/r/LocalLLaMA/comments/1h60bqu/666_refurbished_rtx_3090_810_refurbished_rtx_3090/
false
false
self
0
null
Working with the OpenAI Realtime API in Python
6
I've been experimenting with the OpenAI Realtime API and I have to say that things are not as straightforward as I thought they would be. Essentially I want a Python-based backend or middleware, together with a light static frontend client, to have a speech-to-speech conversation with through the browser. The basics are easy, but then you have to deal with latencies, optimizing binary chunk sizes, the echo problem where the LLM hears what it says itself, and automatically detecting the start and end of speech. It's all very finicky. Have you found any resources, libraries or tutorials that tackle this? OpenAI's official code is JavaScript only and also not very straightforward.
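One of the finicky pieces mentioned above, detecting the start and end of speech, can at least be prototyped without any external service. Below is a minimal energy-based VAD sketch over 16-bit PCM frames; it is my own illustration, and the sample rate, frame size, and thresholds are assumptions you would need to tune:

```python
# Minimal energy-based voice activity detection over 16-bit PCM frames.
# Sample rate, frame size, and thresholds are assumptions to tune for your mic/room.
import numpy as np

SAMPLE_RATE = 16000          # assumed input rate
FRAME_SAMPLES = 320          # 20 ms frames at 16 kHz
START_THRESHOLD = 500.0      # RMS level that counts as speech
END_SILENCE_FRAMES = 25      # ~0.5 s of silence ends the utterance


def frame_rms(pcm_bytes: bytes) -> float:
    samples = np.frombuffer(pcm_bytes, dtype=np.int16).astype(np.float32)
    return float(np.sqrt(np.mean(samples ** 2))) if samples.size else 0.0


def detect_utterances(frames):
    """Yield (start_frame, end_frame) index pairs for detected speech segments."""
    in_speech, start, silence = False, 0, 0
    for i, frame in enumerate(frames):
        loud = frame_rms(frame) > START_THRESHOLD
        if not in_speech and loud:
            in_speech, start, silence = True, i, 0
        elif in_speech:
            silence = 0 if loud else silence + 1
            if silence >= END_SILENCE_FRAMES:
                yield start, i - silence
                in_speech = False
    if in_speech:
        yield start, len(frames) - 1
```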
2024-12-03T23:09:20
https://www.reddit.com/r/LocalLLaMA/comments/1h614k7/working_with_the_openai_realtime_api_in_python/
gopietz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h614k7
false
null
t3_1h614k7
/r/LocalLLaMA/comments/1h614k7/working_with_the_openai_realtime_api_in_python/
false
false
self
6
null
I've tested QwQ 32b on Simple bench!
31
Used QwQ 32B preview Q4_K_M on an RTX 3090 (Ollama + Open WebUI) and tested it on Simple Bench ([simple bench github](https://github.com/simple-bench/SimpleBench)). I am amazed! Only on one question did it switch from English to Chinese. The thinking process is very messy, but it got 5 out of 10 questions correct, which still seems like an amazing result (even more amazing that it is Q4). When I look at results from the official paper ([Simple bench paper](https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view)), it seems Qwen has the strongest result? Has anyone else tested it?
2024-12-03T23:10:56
https://www.reddit.com/r/LocalLLaMA/comments/1h615vg/ive_tested_qwq_32b_on_simple_bench/
Zliko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h615vg
false
null
t3_1h615vg
/r/LocalLLaMA/comments/1h615vg/ive_tested_qwq_32b_on_simple_bench/
false
false
self
31
{'enabled': False, 'images': [{'id': '1DTwNc8i_cqIT8oLTenOw6hoC2SIftx3sGZ9g34j1Yg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dCnTihhtINRTxMzNZPhGLvfqizqNZZQ5DqmYvPiwvqM.jpg?width=108&crop=smart&auto=webp&s=0c063c53d561f5c5b0a469132876a6cfb39a4d76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dCnTihhtINRTxMzNZPhGLvfqizqNZZQ5DqmYvPiwvqM.jpg?width=216&crop=smart&auto=webp&s=2c886169c1ca3191821c4a8055f7fb36da9dbff5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dCnTihhtINRTxMzNZPhGLvfqizqNZZQ5DqmYvPiwvqM.jpg?width=320&crop=smart&auto=webp&s=335be96e0b6208465ac25c2798868ed2449abb23', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dCnTihhtINRTxMzNZPhGLvfqizqNZZQ5DqmYvPiwvqM.jpg?width=640&crop=smart&auto=webp&s=2bdf33b20e71e9ebd7c6725f20299cec71676c98', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dCnTihhtINRTxMzNZPhGLvfqizqNZZQ5DqmYvPiwvqM.jpg?width=960&crop=smart&auto=webp&s=b34f7924a9c886299dc158f001d515ebb5cbff52', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dCnTihhtINRTxMzNZPhGLvfqizqNZZQ5DqmYvPiwvqM.jpg?width=1080&crop=smart&auto=webp&s=4de513be7f259a2a4bc54d35fa0637236833fbda', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dCnTihhtINRTxMzNZPhGLvfqizqNZZQ5DqmYvPiwvqM.jpg?auto=webp&s=f4cd62a1b248f8fe3b488e06ba7e72e8200dfde8', 'width': 1200}, 'variants': {}}]}
Audio classification model to train on sample audio, get start end stamps.
2
I want to train a model to detect sequences of sounds and noises in audio and give me back estimated start and end timestamps. How would I go about doing this if I have 30 sample clips? I will get more in time. I'm looking to do this locally, but I'm not sure where to start. I'm probably going to get Open WebUI and use that as an API reference; I have Ollama installed. I see it kind of like how people grab a bunch of images of their face, train a model, and all of a sudden it can use the face in image generation. I'm not looking to generate sounds, just to identify sounds similar to my samples along with start and end timestamps, as I mentioned before. Thanks for the help.
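Before training anything, a simple baseline worth trying is template matching: embed the 30 sample clips, slide a window over the target recording, and report windows whose embedding is close to any sample. Here is a rough sketch using librosa MFCC features; librosa, the file paths, and all thresholds are my assumptions, and a trained classifier would eventually replace the cosine-similarity step:

```python
# Naive baseline: flag regions of a long recording that sound like the sample clips,
# using mean MFCC vectors and cosine similarity. Template matching, not a trained model.
import glob

import librosa
import numpy as np

SR = 16000                      # assumed sample rate
WIN_S, HOP_S = 1.0, 0.25        # 1 s windows, 0.25 s hop (assumed)
THRESHOLD = 0.9                 # cosine-similarity cutoff to tune


def embed(y: np.ndarray) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=20)
    v = mfcc.mean(axis=1)
    return v / (np.linalg.norm(v) + 1e-9)


# "samples/*.wav" and "recording.wav" are placeholder paths.
templates = [embed(librosa.load(p, sr=SR)[0]) for p in glob.glob("samples/*.wav")]
target, _ = librosa.load("recording.wav", sr=SR)

win, hop = int(WIN_S * SR), int(HOP_S * SR)
for start in range(0, max(len(target) - win, 1), hop):
    v = embed(target[start:start + win])
    score = max(float(v @ t) for t in templates)
    if score > THRESHOLD:
        print(f"match {start / SR:.2f}s - {(start + win) / SR:.2f}s (score {score:.2f})")
```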
2024-12-03T23:30:56
https://www.reddit.com/r/LocalLLaMA/comments/1h61ma0/audio_classification_model_to_train_on_sample/
Disastrous_Purpose22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h61ma0
false
null
t3_1h61ma0
/r/LocalLLaMA/comments/1h61ma0/audio_classification_model_to_train_on_sample/
false
false
self
2
null
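One low-tech way to approach the post above, sketched below under the assumption that classical features are good enough for a first pass: embed each example clip and each sliding window of the long recording with MFCCs, then report windows whose cosine similarity to any example clears a threshold. The file names, window size, and 0.9 threshold are all illustrative.

```python
# Rough baseline, not a trained model: MFCC features + cosine similarity between
# example clips and sliding windows of a long recording. File names, window size,
# and the 0.9 threshold are illustrative assumptions to be tuned.
import librosa  # pip install librosa
import numpy as np

SR = 16000
WIN_S, HOP_S = 1.0, 0.25  # window / hop length in seconds

def embed(y):
    # Average MFCCs over time to get one fixed-size vector per snippet.
    return librosa.feature.mfcc(y=y, sr=SR, n_mfcc=20).mean(axis=1)

examples = [embed(librosa.load(p, sr=SR)[0]) for p in ["clip01.wav", "clip02.wav"]]

audio, _ = librosa.load("recording.wav", sr=SR)
win, hop = int(WIN_S * SR), int(HOP_S * SR)
for start in range(0, len(audio) - win, hop):
    v = embed(audio[start:start + win])
    score = max(
        float(np.dot(v, e) / (np.linalg.norm(v) * np.linalg.norm(e) + 1e-9))
        for e in examples
    )
    if score > 0.9:
        print(f"match ~{start / SR:.2f}s - {(start + win) / SR:.2f}s (score {score:.2f})")
```

If this baseline is too weak, the same loop works with embeddings from a pretrained audio model in place of MFCCs.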
Ollama has merged in K/V cache quantisation support, halving the memory used by the context
447
It took a while, but we got there in the end - [https://github.com/ollama/ollama/pull/6279#issuecomment-2515827116](https://github.com/ollama/ollama/pull/6279#issuecomment-2515827116) Official build/release in the days to come.
2024-12-04T00:24:58
https://www.reddit.com/r/LocalLLaMA/comments/1h62u1p/ollama_has_merged_in_kv_cache_quantisation/
sammcj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h62u1p
false
null
t3_1h62u1p
/r/LocalLLaMA/comments/1h62u1p/ollama_has_merged_in_kv_cache_quantisation/
false
false
self
447
{'enabled': False, 'images': [{'id': 'EOfjrNyMLWFe1QLyBDjZzRaa3zFJbCJYPyqrmBy71uk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_hiYI1dXpOulHj6OfUBgW2dFyhJjm9RfVUaD9VNDa9U.jpg?width=108&crop=smart&auto=webp&s=c02e13d4fdd35ab0992f6c221123d102bac83c92', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_hiYI1dXpOulHj6OfUBgW2dFyhJjm9RfVUaD9VNDa9U.jpg?width=216&crop=smart&auto=webp&s=6a4767eb86a40995f44677e71ef36d8ef84977a8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_hiYI1dXpOulHj6OfUBgW2dFyhJjm9RfVUaD9VNDa9U.jpg?width=320&crop=smart&auto=webp&s=7f621d2f655ac6064451304fcae52d7b57f31753', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_hiYI1dXpOulHj6OfUBgW2dFyhJjm9RfVUaD9VNDa9U.jpg?width=640&crop=smart&auto=webp&s=c29c9e771cd055b1b0dbce545481c6b4fde50648', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_hiYI1dXpOulHj6OfUBgW2dFyhJjm9RfVUaD9VNDa9U.jpg?width=960&crop=smart&auto=webp&s=2b9c1fb7fae74aef947d6af56700ac1c9a2161f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_hiYI1dXpOulHj6OfUBgW2dFyhJjm9RfVUaD9VNDa9U.jpg?width=1080&crop=smart&auto=webp&s=c566160336c20c2a63b9d75c266999130deea9c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_hiYI1dXpOulHj6OfUBgW2dFyhJjm9RfVUaD9VNDa9U.jpg?auto=webp&s=23e01e2d839515797a3a7f03abc3c9fad9e7b24f', 'width': 1200}, 'variants': {}}]}
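As far as the pull request linked above describes it (hedged until the official build ships), the cache type is controlled by environment variables on the server, and the quantised variants require flash attention. A minimal launcher sketch:

```python
# Minimal launcher sketch: start the ollama server with a quantised K/V cache.
# Variable names follow the pull request discussion and are assumptions until
# the official release notes confirm them.
import os
import subprocess

env = dict(os.environ)
env["OLLAMA_FLASH_ATTENTION"] = "1"   # quantised K/V cache requires flash attention
env["OLLAMA_KV_CACHE_TYPE"] = "q8_0"  # f16 (default), q8_0, or q4_0

subprocess.run(["ollama", "serve"], env=env)
```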
Ya know, we haven't got a new Phi model in a while, particularly a bitnet model
63
Just sayin'...
2024-12-04T01:09:03
https://www.reddit.com/r/LocalLLaMA/comments/1h63sa6/ya_know_we_havent_got_a_new_phi_model_in_a_while/
Thrumpwart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h63sa6
false
null
t3_1h63sa6
/r/LocalLLaMA/comments/1h63sa6/ya_know_we_havent_got_a_new_phi_model_in_a_while/
false
false
self
63
null
Anyone using agents locally?
13
Anyone using agents locally? What framework and models and for what use cases? I've been using agents for coding but everything is way too slow locally. Curious if people are finding good agents that solve real world problems locally without it taking a day to return.
2024-12-04T01:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1h6423e/anyone_using_agents_locally/
Vegetable_Sun_9225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6423e
false
null
t3_1h6423e
/r/LocalLLaMA/comments/1h6423e/anyone_using_agents_locally/
false
false
self
13
null
Could anyone share some notebooks or repos of multimodal agentic RAG on complex PDFs with tables?
2
I've tried multimodal RAG where essentially I take each page of each of the PDFs as images, and create a CLIP based vector DB from them and then do RAG on that. It works but the results are not too reliable. So I want to use an agent based workflow that could rewrite the prompts, rerank the retrieved chunks, etc. But my agentic workflow implementation is not working correctly - it's not able to retrieve anything from the vector DB. That's why I would like to see some good implementations of this process. Also, I don't even necessarily need a multimodal RAG - I just converted all the PDFs to a collection of images because that was more convenient than extracting tables from the PDFs and handling them separately. But if there are some good implementations of agentic RAG being done on complex PDFs with tables, I'd try that out too.
2024-12-04T01:34:58
https://www.reddit.com/r/LocalLLaMA/comments/1h64bzs/could_anyone_share_some_notebooks_or_repos_of/
ResearcherNo4728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h64bzs
false
null
t3_1h64bzs
/r/LocalLLaMA/comments/1h64bzs/could_anyone_share_some_notebooks_or_repos_of/
false
false
self
2
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
AI disappointment: Why Llama 3.2 loses out to Chat-GPT - An analysis of the limitations of Llama 3.2 compared to Chat-GPT
1
[removed]
2024-12-04T02:25:07
https://www.reddit.com/r/LocalLLaMA/comments/1h65cke/ai_disappointment_why_llama_32_loses_out_to/
Gedankenmanipulation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h65cke
false
null
t3_1h65cke
/r/LocalLLaMA/comments/1h65cke/ai_disappointment_why_llama_32_loses_out_to/
false
false
self
1
null
Fine-tune LLM on new knowledge base
1
[removed]
2024-12-04T02:39:49
https://www.reddit.com/r/LocalLLaMA/comments/1h65mwh/finetune_llm_on_new_knowledge_base/
Key-Nebula-3198
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h65mwh
false
null
t3_1h65mwh
/r/LocalLLaMA/comments/1h65mwh/finetune_llm_on_new_knowledge_base/
false
false
self
1
null
How frequent is your llm usage?
1
[removed]
2024-12-04T02:42:51
https://www.reddit.com/r/LocalLLaMA/comments/1h65p1r/how_frequent_is_your_llm_usage/
ECrispy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h65p1r
false
null
t3_1h65p1r
/r/LocalLLaMA/comments/1h65p1r/how_frequent_is_your_llm_usage/
false
false
self
1
null
How would you build this?
1
[removed]
2024-12-04T03:12:44
https://www.reddit.com/r/LocalLLaMA/comments/1h66adv/how_would_you_build_this/
scary_kitten_daddy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h66adv
false
null
t3_1h66adv
/r/LocalLLaMA/comments/1h66adv/how_would_you_build_this/
false
false
self
1
null
Modified llama.cpp to support Llama-3_1-Nemotron-51B
81
After two weeks of on-and-off hacking, I successfully modified llama.cpp to convert and run Nvidia's Llama-3\_1-Nemotron-51B. [https://huggingface.co/ymcki/Llama-3\_1-Nemotron-51B-Instruct-GGUF](https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF) This is a model that is on par with the bigger Llama-3.1-Nemotron-70B. It used Nvidia's proprietary method called Neural Architecture Search (NAS) to significantly reduce model size. Currently, I have only uploaded Q3\_K\_S, Q4\_0, Q4\_0\_4\_8 and Q4\_K\_M to cover different local llama scenarios. If you need other quants, you can request them here. If I think your request makes sense, I can make it and upload it there. I am going to ask the llama.cpp maintainers if they can merge my code into their release. Hopefully, we can then see more llama.cpp-based applications able to run this model. (A hedged download-and-run sketch follows this entry.)
2024-12-04T03:52:30
https://www.reddit.com/r/LocalLLaMA/comments/1h6724m/modified_llamacpp_to_support_llama3_1nemotron51b/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6724m
false
null
t3_1h6724m
/r/LocalLLaMA/comments/1h6724m/modified_llamacpp_to_support_llama3_1nemotron51b/
false
false
self
81
{'enabled': False, 'images': [{'id': 'aV-nGSFmVQc3Fl675Pho3p6rhr1KGsBVoEZHqLMnHNs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Y79ZuYlhQq6EWMGIvk21EFgD0msYHS6a2z7Zut5EcUc.jpg?width=108&crop=smart&auto=webp&s=6a0dc082f21191f234ea538a8d53b69090783115', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Y79ZuYlhQq6EWMGIvk21EFgD0msYHS6a2z7Zut5EcUc.jpg?width=216&crop=smart&auto=webp&s=daa94687a4a18d19e9937a3981ead8618f1daf2d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Y79ZuYlhQq6EWMGIvk21EFgD0msYHS6a2z7Zut5EcUc.jpg?width=320&crop=smart&auto=webp&s=474093e1125c7f638ba81b9472b88fc6b9c16a6d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Y79ZuYlhQq6EWMGIvk21EFgD0msYHS6a2z7Zut5EcUc.jpg?width=640&crop=smart&auto=webp&s=738f2412db680ec51a51b06e8daa675905d94920', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Y79ZuYlhQq6EWMGIvk21EFgD0msYHS6a2z7Zut5EcUc.jpg?width=960&crop=smart&auto=webp&s=4bea6f6d8846b8c7bae16651e90d4c442e72ce6b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Y79ZuYlhQq6EWMGIvk21EFgD0msYHS6a2z7Zut5EcUc.jpg?width=1080&crop=smart&auto=webp&s=b5836e13ebb622ebf01495f1ebd6c04d3b42fb7e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Y79ZuYlhQq6EWMGIvk21EFgD0msYHS6a2z7Zut5EcUc.jpg?auto=webp&s=ecc12f2234382618ca7e54df518a43f159185863', 'width': 1200}, 'variants': {}}]}
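A hedged sketch of trying the upload above, not taken from the model card: download one of the quants with `huggingface_hub` and run it through a llama.cpp build that includes the author's patch. The GGUF filename below is an assumption, so check the repository's file list for the real names.

```python
# Hedged sketch: fetch one of the uploaded quants and run it through a llama.cpp
# build that includes the patch from this post. The GGUF filename is an assumed
# placeholder - check the repository's file list; an unpatched llama.cpp will
# reject the architecture.
import subprocess
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

gguf = hf_hub_download(
    repo_id="ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF",
    filename="Llama-3_1-Nemotron-51B-Instruct.Q4_K_M.gguf",  # assumed name
)

subprocess.run([
    "./build/bin/llama-cli",  # binary from the patched build
    "-m", gguf,
    "-ngl", "99",             # offload as many layers as VRAM allows
    "-p", "Write a short summary of what NAS pruning does.",
])
```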
discord bot with support for ollama, text-generation-webui, jan, etc.
0
2024-12-04T05:10:25
https://github.com/jake83741/vnc-lm
4500vcel
github.com
1970-01-01T00:00:00
0
{}
1h68grb
false
null
t3_1h68grb
/r/LocalLLaMA/comments/1h68grb/discor_bot_with_support_for_ollama/
false
false
https://b.thumbs.redditm…C8DQku_HRFXk.jpg
0
{'enabled': False, 'images': [{'id': 'oEXM2W69CzX23nwQxuIggGPR1g1t77IzO7TQSjuSd8U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9yZR72JVDfm4KhHWFZecrnKfyLCDf3RYFJw1AIUFj18.jpg?width=108&crop=smart&auto=webp&s=aecd9e6a9f1cbacf8950f2c8529cfdcf4678c6ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9yZR72JVDfm4KhHWFZecrnKfyLCDf3RYFJw1AIUFj18.jpg?width=216&crop=smart&auto=webp&s=684c03d906025e0d9d4f557415620e04467e54f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9yZR72JVDfm4KhHWFZecrnKfyLCDf3RYFJw1AIUFj18.jpg?width=320&crop=smart&auto=webp&s=136eb4746981ac89e3941b6348eb80560d29ed64', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9yZR72JVDfm4KhHWFZecrnKfyLCDf3RYFJw1AIUFj18.jpg?width=640&crop=smart&auto=webp&s=ac4a58277c91799994ea371787953268cb8f0131', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9yZR72JVDfm4KhHWFZecrnKfyLCDf3RYFJw1AIUFj18.jpg?width=960&crop=smart&auto=webp&s=e039074c4a0c2981eea44ae4bb482ca044ba42b5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9yZR72JVDfm4KhHWFZecrnKfyLCDf3RYFJw1AIUFj18.jpg?width=1080&crop=smart&auto=webp&s=730a87ffaf61b482b2e8e1c14cfdba3ea3e3d4c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9yZR72JVDfm4KhHWFZecrnKfyLCDf3RYFJw1AIUFj18.jpg?auto=webp&s=ae3d60c2016a7d02a861398f7839d5a49f7819ba', 'width': 1200}, 'variants': {}}]}
What are some interesting pretrained robotics models?
32
[octo-base](https://huggingface.co/rail-berkeley/octo-base) is a 93M param transformer, trained on 25 datasets [dobb-e](https://huggingface.co/notmahi/dobb-e) has 21.3M params, trained on "Homes of New York", 13 hours of house interactions [RDT-1B](https://huggingface.co/robotics-diffusion-transformer/rdt-1b), a 1B model trained on 46 datasets I know LeRobot said they'd release a pretrained model at some point, but I can't find out if they have yet. What else?
2024-12-04T05:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1h68mv7/what_are_some_interesting_pretrained_robotics/
NeuralLambda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h68mv7
false
null
t3_1h68mv7
/r/LocalLLaMA/comments/1h68mv7/what_are_some_interesting_pretrained_robotics/
false
false
self
32
{'enabled': False, 'images': [{'id': 'BJh1-A_I9BL-qG_uo9KKhCAMlNLZlRSj7r5-41LKnsY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Vv1H3cB41NCw8SmsOpRjvopIf4VWJUrq9NIriRY_iZI.jpg?width=108&crop=smart&auto=webp&s=4b972136b8b61aa73459807907ab12db251c5f5d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Vv1H3cB41NCw8SmsOpRjvopIf4VWJUrq9NIriRY_iZI.jpg?width=216&crop=smart&auto=webp&s=7f3fb06fc8a77d883d9a13b314f1f2942b57a358', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Vv1H3cB41NCw8SmsOpRjvopIf4VWJUrq9NIriRY_iZI.jpg?width=320&crop=smart&auto=webp&s=0e69590b6f00f682045649c5d231d5932f6ded97', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Vv1H3cB41NCw8SmsOpRjvopIf4VWJUrq9NIriRY_iZI.jpg?width=640&crop=smart&auto=webp&s=b73e8fdf7e7fa85aab15b35d19d1b0d6aa19b6a1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Vv1H3cB41NCw8SmsOpRjvopIf4VWJUrq9NIriRY_iZI.jpg?width=960&crop=smart&auto=webp&s=3ccdbb0c6b2ad2efe63fb7e31f6283abfb1068dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Vv1H3cB41NCw8SmsOpRjvopIf4VWJUrq9NIriRY_iZI.jpg?width=1080&crop=smart&auto=webp&s=96d323c350028e5ff51c40249fd92f7d15131153', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Vv1H3cB41NCw8SmsOpRjvopIf4VWJUrq9NIriRY_iZI.jpg?auto=webp&s=3775e575cf1ca3f73ad8fd8e458ab0c8a5e26ea1', 'width': 1200}, 'variants': {}}]}
Best small (ie < 70B) model for instruction following?
4
I've worked with phi-medium and a few others, and wanted the community consensus. Which small models excel at instruction following, particularly when paired with few-shot (around 5-6) examples?
2024-12-04T06:15:00
https://www.reddit.com/r/LocalLLaMA/comments/1h69jhq/best_small_ie_70b_model_for_instruction_following/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h69jhq
false
null
t3_1h69jhq
/r/LocalLLaMA/comments/1h69jhq/best_small_ie_70b_model_for_instruction_following/
false
false
self
4
null
i've tried 10 different LinkedIn AI agents, and they all suck, so i created a better one, works locally with llama
1
2024-12-04T06:27:33
https://youtu.be/n6Qxrrd5T-I
Patient-Mulberry6090
youtu.be
1970-01-01T00:00:00
0
{}
1h69qim
false
{'oembed': {'author_name': 'Matthew D.Y.', 'author_url': 'https://www.youtube.com/@MatthewD.Y.', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/n6Qxrrd5T-I?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Trailer demo of a new LinkedIn AI agent that helps me reach out to new connections"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/n6Qxrrd5T-I/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Trailer demo of a new LinkedIn AI agent that helps me reach out to new connections', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
t3_1h69qim
/r/LocalLLaMA/comments/1h69qim/ive_tried_10_different_linkedin_ai_agents_and/
false
false
https://a.thumbs.redditm…SxSvYC17N9j0.jpg
1
{'enabled': False, 'images': [{'id': 'EN3GHydqLvpSrn879_Gm60MjT6AWq3M93ZE2og6xIT4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XX2cZZdlTUbEbXyH6j6Il9dfKn7qM6lSM9LZok86OzY.jpg?width=108&crop=smart&auto=webp&s=9f42ece93023aae879187d2e37b73b2d2f95ff9f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XX2cZZdlTUbEbXyH6j6Il9dfKn7qM6lSM9LZok86OzY.jpg?width=216&crop=smart&auto=webp&s=3968170d68d08a3a14d8a1f948fa747ad4bea38b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XX2cZZdlTUbEbXyH6j6Il9dfKn7qM6lSM9LZok86OzY.jpg?width=320&crop=smart&auto=webp&s=0ff1d06b5f041daf70d81e4970a39ff2898ee220', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XX2cZZdlTUbEbXyH6j6Il9dfKn7qM6lSM9LZok86OzY.jpg?auto=webp&s=8847390f726c9f0a0d80814dc65b5fef3bf04147', 'width': 480}, 'variants': {}}]}
What tricks do you use ??
1
[removed]
2024-12-04T06:28:44
https://www.reddit.com/r/LocalLLaMA/comments/1h69r6z/what_tricks_do_you_use/
hutchdogknowsme
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h69r6z
false
null
t3_1h69r6z
/r/LocalLLaMA/comments/1h69r6z/what_tricks_do_you_use/
false
false
self
1
null
Does someone have experience training in aws sagemaker with deepspeed?
3
I am struggling with the DeepSpeed configs due to AWS's built-in libraries. If someone has done this before, I would highly appreciate the help!
2024-12-04T06:39:31
https://www.reddit.com/r/LocalLLaMA/comments/1h69wzp/does_someone_have_experience_training_in_aws/
Previous-Front-5211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h69wzp
false
null
t3_1h69wzp
/r/LocalLLaMA/comments/1h69wzp/does_someone_have_experience_training_in_aws/
false
false
self
3
null
hardware requirements to fine-tune qwen 2.5 coder-32B?
1
[removed]
2024-12-04T07:04:07
https://www.reddit.com/r/LocalLLaMA/comments/1h6a9ww/hardware_requirements_to_finetune_qwen_25_coder32b/
Character_External54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6a9ww
false
null
t3_1h6a9ww
/r/LocalLLaMA/comments/1h6a9ww/hardware_requirements_to_finetune_qwen_25_coder32b/
false
false
self
1
null
What models can you pair for speculative decoding?
3
I tried to use llama-3.1-70b along with llama-3.2-3b on a Mac. After processing some text, it throws an error: llama.cpp/src/llama.cpp:17577: GGML_ASSERT(n_tokens_all <= cparams.n_batch) failed zsh: abort ./build/bin/llama-speculative -m -md -c 10000 -n 1000 -f
2024-12-04T07:19:19
https://www.reddit.com/r/LocalLLaMA/comments/1h6ahjj/what_models_can_you_pair_for_speculative_decoding/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6ahjj
false
null
t3_1h6ahjj
/r/LocalLLaMA/comments/1h6ahjj/what_models_can_you_pair_for_speculative_decoding/
false
false
self
3
null
HunyuanVideo: A Systematic Framework For Large Video Generation Model Training
62
Hunyuan Video is a new open-source 13B video generator developed by Tencent. The quality is impressive, especially for a 13B model, although it currently generates up to only five seconds of video. The model weights are available.
2024-12-04T07:25:27
https://i.redd.it/1kd432a9as4e1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1h6ako7
false
null
t3_1h6ako7
/r/LocalLLaMA/comments/1h6ako7/hunyuanvideo_a_systematic_framework_for_large/
false
false
https://b.thumbs.redditm…lE35IfFwHoNU.jpg
62
{'enabled': True, 'images': [{'id': 'dKFIY4tApIt1GO6OAPIML2qxru3w1jf6teH5soqiI3k', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/1kd432a9as4e1.png?width=108&crop=smart&auto=webp&s=abb1140f247825a6202164c8bf5880f28af71177', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/1kd432a9as4e1.png?width=216&crop=smart&auto=webp&s=7d0dffb729a17ff9e180b79c71e4a4073de7e9f8', 'width': 216}, {'height': 569, 'url': 'https://preview.redd.it/1kd432a9as4e1.png?width=320&crop=smart&auto=webp&s=c9094c1fdf4f290951fba2f5556b5c4fe52e073f', 'width': 320}, {'height': 1138, 'url': 'https://preview.redd.it/1kd432a9as4e1.png?width=640&crop=smart&auto=webp&s=5cddb263d7d0ba319edefbabf72b1f10b3f4403d', 'width': 640}, {'height': 1707, 'url': 'https://preview.redd.it/1kd432a9as4e1.png?width=960&crop=smart&auto=webp&s=6a9c9190d706724b53ee84076a30f5f38f921e97', 'width': 960}, {'height': 1921, 'url': 'https://preview.redd.it/1kd432a9as4e1.png?width=1080&crop=smart&auto=webp&s=726de892298094d382ad4299a4266120cb6576c9', 'width': 1080}], 'source': {'height': 1921, 'url': 'https://preview.redd.it/1kd432a9as4e1.png?auto=webp&s=6fc2aaf86160c2a49ede674f8eed327b82cfed77', 'width': 1080}, 'variants': {}}]}
2x RTX a4000 vs 1x RTX a4000 + 1x RTX 4060ti 16GB?
0
I already have a workstation with a single RTX A4000 and am looking to add another card with another 16GB of VRAM to open up larger model options. I know roughly what to expect with 2x A4000s, as I did some baseline testing on RunPod with this setup, but how much of a performance drop would I notice if I went for an RTX 4060 Ti with 16GB of VRAM instead of a second A4000? Especially given that the 4060 Ti is less than half the price of an A4000, unless I can find one on eBay at a good price.
2024-12-04T08:36:28
https://www.reddit.com/r/LocalLLaMA/comments/1h6bish/2x_rtx_a4000_vs_1x_rtx_a4000_1x_rtx_4060ti_16gb/
RED_iix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6bish
false
null
t3_1h6bish
/r/LocalLLaMA/comments/1h6bish/2x_rtx_a4000_vs_1x_rtx_a4000_1x_rtx_4060ti_16gb/
false
false
self
0
null
I'm plan to upgrade my desktop to have 128Gb of RAM and 7900XTX 24Gb video card. What LLMs can be run using this configuration?
1
[removed]
2024-12-04T09:16:02
https://www.reddit.com/r/LocalLLaMA/comments/1h6c0xm/im_plan_to_upgrade_my_desktop_to_have_128gb_of/
Star-Guardian-RU
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6c0xm
false
null
t3_1h6c0xm
/r/LocalLLaMA/comments/1h6c0xm/im_plan_to_upgrade_my_desktop_to_have_128gb_of/
false
false
self
1
null
Better results on venice.ai vs. local qwen2.5-coder:32b
0
I made [llmaid](https://github.com/awaescher/llmaid), a tool that can run through a directory and apply LLM changes to files based on a given system prompt. There are several use cases, but I am mostly interested in a code documentation task, mostly for fun and benchmarking. For this I wrote a [system prompt](https://github.com/awaescher/llmaid/blob/master/prompts/code%20documenter.txt) defining the rules, and they work quite well - depending on the model of course. However, I noticed that every model I tested (pretty much everything notable below 100b) was not able to keep the code "as is" and just add comments and code summaries. The best models like qwen2.5-coder did nearly perfectly in, let's say, 95% of all files. But some files (mostly interfaces and [ASP.NET](http://ASP.NET) controllers) were rewritten with code changes. **To the question:** I just tried the exact same prompt on exactly the same model on [venice.ai](http://venice.ai), using the same temperature and top\_p settings and - boom - their code documentation was perfect, even for the 100% bad-result ASP.NET controllers. Does anyone have a clue what Venice could be doing to achieve so much better results? Again, it's the same model, the same temperature and the same top\_p, and there are no other settings I can adjust or review on their site. My best guess was that they're using less quantization, so I tried **qwen2.5:32b-instruct-q8\_0**, but the results did not improve significantly. Also, I don't think Venice is using such a high quant without explicitly mentioning it, as it has significantly higher system requirements and most people might not even notice it.
2024-12-04T09:38:55
https://www.reddit.com/r/LocalLLaMA/comments/1h6cbu7/better_results_on_veniceai_vs_local_qwen25coder32b/
waescher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6cbu7
false
null
t3_1h6cbu7
/r/LocalLLaMA/comments/1h6cbu7/better_results_on_veniceai_vs_local_qwen25coder32b/
false
false
self
0
{'enabled': False, 'images': [{'id': 'FSnzyOAj53YRzuETUzI8R6yncbnPIhaPDQKhDswR5Kg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XvutmIO1tFZW-5EhVj5Vh87MOMxTtLeawtMkrMnWGXA.jpg?width=108&crop=smart&auto=webp&s=0aab0b3f2fc0c2bc74d113f36903a303d0f03a04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XvutmIO1tFZW-5EhVj5Vh87MOMxTtLeawtMkrMnWGXA.jpg?width=216&crop=smart&auto=webp&s=67daebe8cc7e6b6cb80ed421da84eeea4fe4539f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XvutmIO1tFZW-5EhVj5Vh87MOMxTtLeawtMkrMnWGXA.jpg?width=320&crop=smart&auto=webp&s=7d2131ae3f024306655e05d3773238c34f8bfac2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XvutmIO1tFZW-5EhVj5Vh87MOMxTtLeawtMkrMnWGXA.jpg?width=640&crop=smart&auto=webp&s=32693098983da2ff55dd161cd7a8cab164b7b93e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XvutmIO1tFZW-5EhVj5Vh87MOMxTtLeawtMkrMnWGXA.jpg?width=960&crop=smart&auto=webp&s=e0c677191c2174d27ba5790fb48f5072b008e802', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XvutmIO1tFZW-5EhVj5Vh87MOMxTtLeawtMkrMnWGXA.jpg?width=1080&crop=smart&auto=webp&s=af5185c2b32ae6c8153d08694647fbd6816cfc9a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XvutmIO1tFZW-5EhVj5Vh87MOMxTtLeawtMkrMnWGXA.jpg?auto=webp&s=3b4d1d1940ebd245098d7f39c57a5068b95ed875', 'width': 1200}, 'variants': {}}]}
Which CPU for local AI rig
1
[removed]
2024-12-04T09:40:49
https://www.reddit.com/r/LocalLLaMA/comments/1h6ccqj/which_cpu_for_local_ai_rig/
Pondretti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6ccqj
false
null
t3_1h6ccqj
/r/LocalLLaMA/comments/1h6ccqj/which_cpu_for_local_ai_rig/
false
false
self
1
null
smol-course - day 1 : Free instruction tuning course by Hugging Face
73
Day 1 of the smol course is complete. I learnt that people are hungry for models they can actually use, on hardware they own or can afford. \- The material and exercises focused on instruction tuning, split up into chat templates and supervised fine-tuning. There's a lot more to this subject than that, but we're keeping things smol. \- We have 325 students, 7 submissions, and 12 improvements. \- The folks contributing are great! They already know this stuff and just want to lend a hand to others by improving the course. ⏩ If you haven't already, try out module 1! There are difficulty levels from 🐢 to 🦁, so even if you just want a quick read you can give it a go. ⭐️ The stats are the wildest. Here's the repo, in case you want to try it out or get involved.
2024-12-04T09:45:35
https://www.reddit.com/r/LocalLLaMA/comments/1h6ceyb/smolcourse_day_1_free_instruction_tuning_course/
bburtenshaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6ceyb
false
null
t3_1h6ceyb
/r/LocalLLaMA/comments/1h6ceyb/smolcourse_day_1_free_instruction_tuning_course/
false
false
self
73
{'enabled': False, 'images': [{'id': '_P51QeHD4g2GVcWNcwKEFQQ43wKv_Ox4veleeEpT0do', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p6KD6oU0JXxb-IIx4-LScINDyZYIETg0EEN7yzJi3bM.jpg?width=108&crop=smart&auto=webp&s=fdee0c92a850d8f5fb37d7f5f91bcb444daaa1c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/p6KD6oU0JXxb-IIx4-LScINDyZYIETg0EEN7yzJi3bM.jpg?width=216&crop=smart&auto=webp&s=d491fd7ee3f9e264dfdbc05e0d8b52ac3c409fca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/p6KD6oU0JXxb-IIx4-LScINDyZYIETg0EEN7yzJi3bM.jpg?width=320&crop=smart&auto=webp&s=6928815f2e44e268756a5df46db2064d8779f933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/p6KD6oU0JXxb-IIx4-LScINDyZYIETg0EEN7yzJi3bM.jpg?width=640&crop=smart&auto=webp&s=48ce716a2ee562e9f6a6121e8913b99b47ea77f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/p6KD6oU0JXxb-IIx4-LScINDyZYIETg0EEN7yzJi3bM.jpg?width=960&crop=smart&auto=webp&s=c2b049d1a22bcec2c7651da1e775ff0a0fcdf880', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/p6KD6oU0JXxb-IIx4-LScINDyZYIETg0EEN7yzJi3bM.jpg?width=1080&crop=smart&auto=webp&s=8a2d52e7f2d09498649d6e74acc2098f6f7f6351', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/p6KD6oU0JXxb-IIx4-LScINDyZYIETg0EEN7yzJi3bM.jpg?auto=webp&s=67d9c0eebf0ae4729001fca7083efd4619afec86', 'width': 1200}, 'variants': {}}]}
I’m building a home AI rig and need some helps
1
[removed]
2024-12-04T09:46:45
https://www.reddit.com/r/LocalLLaMA/comments/1h6cfhv/im_building_a_home_ai_rig_and_need_some_helps/
nattie98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6cfhv
false
null
t3_1h6cfhv
/r/LocalLLaMA/comments/1h6cfhv/im_building_a_home_ai_rig_and_need_some_helps/
false
false
self
1
null
I am noticing something not being taken into account in JP to EN Data sets
1
[removed]
2024-12-04T10:42:16
https://www.reddit.com/r/LocalLLaMA/comments/1h6d6xl/i_am_noticing_something_not_being_taken_into/
Oehriehqkbt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6d6xl
false
null
t3_1h6d6xl
/r/LocalLLaMA/comments/1h6d6xl/i_am_noticing_something_not_being_taken_into/
false
false
self
1
null
Are there any repositories similar to Letta (Memgpt) for custom tool calling agents ?
1
[removed]
2024-12-04T10:42:30
https://www.reddit.com/r/LocalLLaMA/comments/1h6d71l/are_there_any_repositories_similar_to_letta/
Smooth-Stage-8183
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6d71l
false
null
t3_1h6d71l
/r/LocalLLaMA/comments/1h6d71l/are_there_any_repositories_similar_to_letta/
false
false
self
1
null
Hugging Face CEO’s 2025 predictions for AI
1
These are Clem Delangue's predictions for AI in 2025. What are yours?
2024-12-04T11:07:22
https://www.reddit.com/r/LocalLLaMA/comments/1h6dk7a/hugging_face_ceos_2025_predictions_for_ai/
bburtenshaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6dk7a
false
null
t3_1h6dk7a
/r/LocalLLaMA/comments/1h6dk7a/hugging_face_ceos_2025_predictions_for_ai/
false
false
self
1
null
What to look for when selecting CPU
3
I will do some tests on a client's on-premise machines that don't have GPUs. I will just try some small LLMs, probably 14B at most. When it comes to CPUs, what features do you think I should pay the most attention to? Clock speed, core count, multithreading?
2024-12-04T11:28:03
https://www.reddit.com/r/LocalLLaMA/comments/1h6duv5/what_to_look_for_when_selecting_cpu/
mwon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6duv5
false
null
t3_1h6duv5
/r/LocalLLaMA/comments/1h6duv5/what_to_look_for_when_selecting_cpu/
false
false
self
3
null
TabbyAPI Qwen2.5 exl2 with speculative decoding *slower*
3
Hoping to achieve better generation speeds than with ollama/Qwen2.5:32b-Q4_K_M (30-35t/s), I downloaded tabbyAPI and the respective 4.0bpw-exl2 quant. However, speeds remained largely unchanged. From what I've read here, other people achieve around 40t/s. With the 1.5b GPTQ Int4 draft model (couldn't find an exl2 on HF, and Int8 somehow has incompatible shapes) this slows down to just 25t/s. I've confirmed that VRAM isn't full, so it shouldn't be overflowing into RAM. This is with 16k FP16 cache and everything else at default settings, running on Win10/RTX3090. The prompt is 93t and processed at ~222t/s, and ~372t are being generated. When given a coding prompt, roughly 1k tokens are generated at 37t/s. Could anyone point me in the right direction? With the equally sized coder model people seem to get 60-100t/s.
2024-12-04T11:32:03
https://www.reddit.com/r/LocalLLaMA/comments/1h6dx4d/tabbyapi_qwen25_exl2_with_speculative_decoding/
ForgotMyOldPwd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6dx4d
false
null
t3_1h6dx4d
/r/LocalLLaMA/comments/1h6dx4d/tabbyapi_qwen25_exl2_with_speculative_decoding/
false
false
self
3
null
Need advice: Building a workgroup AI PC for LLM inference & fine-tuning (~3000€)
1
[removed]
2024-12-04T11:35:30
https://www.reddit.com/r/LocalLLaMA/comments/1h6dz2k/need_advice_building_a_workgroup_ai_pc_for_llm/
StudentOfChaos123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6dz2k
false
null
t3_1h6dz2k
/r/LocalLLaMA/comments/1h6dz2k/need_advice_building_a_workgroup_ai_pc_for_llm/
false
false
self
1
null
No IQ4_XS-iMat-EN quantization for 32b Qwen 2.5 coder?
0
For 32b Qwen 2.5 coder: since speculative decoding generally doubles inference speed at the cost of roughly 1GB of VRAM (out of 24GB), IQ4\_XS instead of Q4\_K\_M seems necessary. But there must be some minor loss in quality. That said, there was a post two weeks ago that tested IQ4\_XS-iMat-EN (Qwen 2.5 NON-coder) and it came close to Q5\_K\_S in terms of performance, so it should not be of much lower quality.  [https://www.reddit.com/r/LocalLLaMA/comments/1flqwzw/qwen25\_14b\_gguf\_quantization\_evaluation\_results/](https://www.reddit.com/r/LocalLLaMA/comments/1flqwzw/qwen25_14b_gguf_quantization_evaluation_results/) So who can provide an IQ4\_XS-iMat-EN quantization of 32b Qwen 2.5 coder?
2024-12-04T11:40:53
https://www.reddit.com/r/LocalLLaMA/comments/1h6e1yw/no_iq4_xsimaten_quantization_for_32b_qwen_25_coder/
AdamDhahabi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6e1yw
false
null
t3_1h6e1yw
/r/LocalLLaMA/comments/1h6e1yw/no_iq4_xsimaten_quantization_for_32b_qwen_25_coder/
false
false
self
0
null
Hugging Face CEO’s 2025 predictions for AI
1116
2024-12-04T11:44:51
https://i.redd.it/s5r2cw5hkt4e1.png
bburtenshaw
i.redd.it
1970-01-01T00:00:00
0
{}
1h6e46r
false
null
t3_1h6e46r
/r/LocalLLaMA/comments/1h6e46r/hugging_face_ceos_2025_predictions_for_ai/
false
false
https://b.thumbs.redditm…k-C7lQDscNFs.jpg
1116
{'enabled': True, 'images': [{'id': 'vhWWdK-igdEU95P6wa_gpcHQwTHRYZ1_m63SYA6jJ2M', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/s5r2cw5hkt4e1.png?width=108&crop=smart&auto=webp&s=0b1355e02e271a75bc72c48e730f2f6bc2b803a6', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/s5r2cw5hkt4e1.png?width=216&crop=smart&auto=webp&s=05dbc69a08215cf3b03b703817873198b5cfd950', 'width': 216}, {'height': 375, 'url': 'https://preview.redd.it/s5r2cw5hkt4e1.png?width=320&crop=smart&auto=webp&s=413d1fab9e4c2a6494101149a23dc92e4a69e31d', 'width': 320}, {'height': 750, 'url': 'https://preview.redd.it/s5r2cw5hkt4e1.png?width=640&crop=smart&auto=webp&s=304a891cebf472aef3ba9f9fd9bd39af681ee24d', 'width': 640}], 'source': {'height': 1032, 'url': 'https://preview.redd.it/s5r2cw5hkt4e1.png?auto=webp&s=df2220e858ff6fc9500272503d77825736e27436', 'width': 880}, 'variants': {}}]}
What is the best NSFW RP model 12b- 22b? 16G vram
107
I'm trying to find the ultimate best RP model. It would be nice if it was really good at fighting scenes, NSFW, unexpected stuff, and good writing, and if it felt human-like. These are the ones I've tried: ArliAI RPMax 22b q4 - eh; Unslop Nemo 12b v4 q6_k - eh; Lyra Gutenberg 12b q6_k - decent and can be good, but I think there's better? Anyway, do you guys have any really good models that just blow your mind away with how good they are? Thanks
2024-12-04T11:45:34
https://www.reddit.com/r/LocalLLaMA/comments/1h6e4lv/what_is_the_best_nsfw_rp_model_12b_22b_16g_vram/
Deluded-1b-gguf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6e4lv
false
null
t3_1h6e4lv
/r/LocalLLaMA/comments/1h6e4lv/what_is_the_best_nsfw_rp_model_12b_22b_16g_vram/
false
false
nsfw
107
null
what do you use llama 3.1 3b and 1b for? I'm struggling even with the 8b
1
[removed]
2024-12-04T11:51:53
https://www.reddit.com/r/LocalLLaMA/comments/1h6e83v/what_do_you_use_llama_31_3b_and_1b_for_im/
lutian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6e83v
false
null
t3_1h6e83v
/r/LocalLLaMA/comments/1h6e83v/what_do_you_use_llama_31_3b_and_1b_for_im/
false
false
self
1
null
I am noticing something not being taken into account in JP to EN Data sets
1
[removed]
2024-12-04T12:16:31
https://www.reddit.com/r/LocalLLaMA/comments/1h6emi3/i_am_noticing_something_not_being_taken_into/
Oehriehqkbt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6emi3
false
null
t3_1h6emi3
/r/LocalLLaMA/comments/1h6emi3/i_am_noticing_something_not_being_taken_into/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dROJ8P7F4PhUWw6nsym9HkuS_crcn6Y_40Qk9nUOTrQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=108&crop=smart&auto=webp&s=5e126aab35df6f7a1cabaff2403ce2bf73eb0b25', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=216&crop=smart&auto=webp&s=517c479b1798541828fb521c85dd02943b705586', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=320&crop=smart&auto=webp&s=0e6d57056506f8ffd2547ffed80f93088ae8d060', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=640&crop=smart&auto=webp&s=31827c372f3fa6f36815da461afcffeb4d5990d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=960&crop=smart&auto=webp&s=553236c4b1663c341c287466062f8a14cd22977d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=1080&crop=smart&auto=webp&s=658cf77a23c3c154327cc84b2021c0469495ac23', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?auto=webp&s=eaf6c5babc69df78263d3a2bff70dad1d779dac8', 'width': 1200}, 'variants': {}}]}
Phi-3-mini surprised me!!!
0
Apparently... either I'm easily impressed, or Microsoft did an excellent job with Phi-3-mini 😳 After several hours of trying a little bit of everything with VS Code + Continue + Phi-3-mini, among other LLMs... I was surprised that this one could perform so well, almost to the point of feeling like a 32B or even like GPT 3.5. 🤔 In fact, it responded with better logic to certain code problems than Qwen Coder 7B... I really loved it. I would totally recommend it 😌
2024-12-04T12:20:17
https://www.reddit.com/r/LocalLLaMA/comments/1h6eosa/phi3mini_surprised_me/
Ordinary_Mud7430
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6eosa
false
null
t3_1h6eosa
/r/LocalLLaMA/comments/1h6eosa/phi3mini_surprised_me/
false
false
self
0
null
Introducing my latest LLM - HomerCreativeAnvita-Mix-Qw7B!
1
[removed]
2024-12-04T12:27:11
https://www.reddit.com/r/LocalLLaMA/comments/1h6esx9/introducing_my_latest_llm/
suayptalha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6esx9
false
null
t3_1h6esx9
/r/LocalLLaMA/comments/1h6esx9/introducing_my_latest_llm/
false
false
https://b.thumbs.redditm…w8FmeyNBSpMA.jpg
1
{'enabled': False, 'images': [{'id': '8BQiKZWpY5D-Hul8ndeuQ3nd8A3M99NcQTIHnK8IJII', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LtSUNl2npIkC3Mr3AnxR82lPssx-szQfFAXryeBzhrM.jpg?width=108&crop=smart&auto=webp&s=da3c6a3151fc982d496bece27b64b08aec3a6662', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/LtSUNl2npIkC3Mr3AnxR82lPssx-szQfFAXryeBzhrM.jpg?auto=webp&s=1f2edef64e9e28332a1531775b532166d7eca999', 'width': 200}, 'variants': {}}]}
How to disable caching on llama cpp
0
Okay, I am at my wits' end searching for this, but I haven't been able to find answers. I have looked at the source code for llama.cpp and I can see that the kv_cache is used in the context of a variable "longest_prefix", but there is no way to disable it, at least none that I found. Some background on my use case: I am using Llama 2 for an internal chatbot with retrieval QA, and I am using llama.cpp to initialise the LLM. The chatbot works great for the first 2-3 exchanges in a session but then slowly starts going wayward. I first thought this was a context size issue, since we have a pretty big system prompt, so we reduced the conversation buffer memory to hold just the last two chats in the conversation. That way we don't send the entire history every time, and the total token count stays in the range of 1500-2000 per chat in the conversation. But the problem didn't go away: the first couple of chats were fine, but then came wayward answers. That's when I noticed the "llama-generate prefix match hit: 4xx tokens" messages in the logs (the number is illustrative here). Digging down the rabbit hole and reading the source code I alluded to earlier, I found that llama.cpp usually caches a few hundred tokens of the conversation from the second chat onwards - in my case 400-500. Even the code seems to suggest that. Now this is essentially where the system prompt (which is fine to cache) and the beginning of the context of the new chat are supposed to be. So essentially the cache is storing the system prompt and some of the context from the previous chat every time, and I suspect this is what is causing the wayward behaviour 2-3 chats down the conversation. Of course this is a hypothesis. But to test it I want to know if there is any way I can disable the cache. I understand performance will take a hit, but that's a problem I can solve separately. First I need to see how good the chat system is at holding longer conversations. Any help or suggestions here would be amazing.
2024-12-04T12:49:29
https://www.reddit.com/r/LocalLLaMA/comments/1h6f7bn/how_to_disable_caching_on_llama_cpp/
Much_Discussion1490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6f7bn
false
null
t3_1h6f7bn
/r/LocalLLaMA/comments/1h6f7bn/how_to_disable_caching_on_llama_cpp/
false
false
self
0
null
how to proceed if I want to feed my info
0
Hi! Up to now I've been using Claude a lot, and I've also used Ollama and LM Studio, but only to a small degree, as my GPU is always busy with other tasks - that was the main reason to use Claude or other web services. But with Christmas coming I might be able to just focus on using Llama or Qwen... I know how to code a bit, and of course with the help of AI I can try to do things that I was not able to do one year ago. I need advice on how to proceed. I have my data in Excel and txt files... Most of the data is qualitative (text). How can I feed my data as a source into a Python script? It is not tons of data or PDFs, so I just want the data to be used as reference. But I would like, if possible, to use Python scripts so I can speed things up a bit. In the past I managed to connect to Ollama, but I also saw that LM Studio can now be used? (Am I wrong? If so, what are the steps to use the different models or system prompts?) I would like some advice on the best approaches, as I only have 12 GB on my GPU (4070) and I am not a super expert. Thanks to all. (A rough Ollama-based sketch follows this entry.)
2024-12-04T13:15:30
https://www.reddit.com/r/LocalLLaMA/comments/1h6fpmt/how_to_proceed_if_i_want_to_feed_my_info/
yayita2500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6fpmt
false
null
t3_1h6fpmt
/r/LocalLLaMA/comments/1h6fpmt/how_to_proceed_if_i_want_to_feed_my_info/
false
false
self
0
null
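A minimal sketch of the kind of script being asked for above, assuming the `ollama` Python package, pandas for the Excel file, and a model small enough for 12 GB of VRAM; it simply pastes the file contents into the prompt as reference material rather than building a full RAG pipeline. File names and the model tag are placeholders.

```python
# Minimal sketch, assuming the `ollama` Python package and a locally pulled model:
# read the Excel and text files and paste them into the prompt as reference data.
import ollama        # pip install ollama
import pandas as pd  # pip install pandas openpyxl

notes = open("notes.txt", encoding="utf-8").read()
table = pd.read_excel("data.xlsx").to_csv(index=False)

question = "Summarise the main themes in my notes."
response = ollama.chat(
    model="qwen2.5:7b",  # any model that fits comfortably in 12 GB of VRAM
    messages=[
        {"role": "system", "content": "Answer using only the reference data provided."},
        {"role": "user", "content": f"Reference data:\n{notes}\n{table}\n\nQuestion: {question}"},
    ],
)
print(response["message"]["content"])
```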
Finetuning Llama 3.1 - 8B - Instruct on GSM8k
1
[removed]
2024-12-04T13:20:00
https://www.reddit.com/r/LocalLLaMA/comments/1h6fsy2/finetuning_llama_31_8b_instruct_on_gsm8k/
Chroless
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6fsy2
false
null
t3_1h6fsy2
/r/LocalLLaMA/comments/1h6fsy2/finetuning_llama_31_8b_instruct_on_gsm8k/
false
false
self
1
null
Finetuning Llama 3.1 Instruct on GSM8k
1
[removed]
2024-12-04T13:22:12
https://www.reddit.com/r/LocalLLaMA/comments/1h6fung/finetuning_llama_31_instruct_on_gsm8k/
Chroless
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6fung
false
null
t3_1h6fung
/r/LocalLLaMA/comments/1h6fung/finetuning_llama_31_instruct_on_gsm8k/
false
false
self
1
null
Finetuning on GSM8k
1
[removed]
2024-12-04T13:23:11
https://www.reddit.com/r/LocalLLaMA/comments/1h6fvda/finetuning_on_gsm8k/
Chroless
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6fvda
false
null
t3_1h6fvda
/r/LocalLLaMA/comments/1h6fvda/finetuning_on_gsm8k/
false
false
self
1
null
GuardBench: a Python library and benchmark for guardrail models evaluation
1
[removed]
2024-12-04T13:42:39
https://www.reddit.com/r/LocalLLaMA/comments/1h6ga7e/guardbench_a_python_library_and_benchmark_for/
p53ud0_r4nd0m_numb3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6ga7e
false
null
t3_1h6ga7e
/r/LocalLLaMA/comments/1h6ga7e/guardbench_a_python_library_and_benchmark_for/
false
false
self
1
null
What software should i use for ml
1
I know a little PyTorch and want to move to running and training models on my server. I want the software to be in a Docker container and to be very stable. (My server has an RTX 4060 Ti 16GB.)
2024-12-04T13:47:27
https://www.reddit.com/r/LocalLLaMA/comments/1h6gdrn/what_software_should_i_use_for_ml/
Budget-Relief1307
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6gdrn
false
null
t3_1h6gdrn
/r/LocalLLaMA/comments/1h6gdrn/what_software_should_i_use_for_ml/
false
false
self
1
null
what do you use llama 3.1 3b and 1b for? I'm struggling even with the 8b
5
I wanted really badly to use Llama for a "doc2exam" project, but it kept giving subpar results. I'm basically trying to generate exam questions from documents, and I can't seem to be able to go below gpt-4o-mini (I already launched it btw). I'm still using llama3.1 8b for validation, but I'd like to know if you guys have any insights. Even providing 10 examples in the prompt sometimes makes it hiccup. Running under ollama. (A rough few-shot sketch follows this entry.)
2024-12-04T13:50:28
https://www.reddit.com/r/LocalLLaMA/comments/1h6gg22/what_do_you_use_llama_31_3b_and_1b_for_im/
lutian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6gg22
false
null
t3_1h6gg22
/r/LocalLLaMA/comments/1h6gg22/what_do_you_use_llama_31_3b_and_1b_for_im/
false
false
self
5
null
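A hedged sketch of one thing that sometimes helps small models with the task above: keep the few-shot examples in the system prompt and constrain the output to JSON. It assumes the `ollama` Python package and its JSON mode; the document file, model tag, and schema are illustrative.

```python
# Hedged sketch: few-shot examples in the system prompt plus JSON mode, which
# small models tend to follow more reliably than free-form instructions.
import json
import ollama  # pip install ollama

EXAMPLES = (
    'Document: Photosynthesis converts light energy into chemical energy.\n'
    'Question: {"q": "What does photosynthesis convert light energy into?", "a": "Chemical energy"}'
)

doc = open("chapter1.txt", encoding="utf-8").read()

resp = ollama.chat(
    model="llama3.1:8b",
    messages=[
        {
            "role": "system",
            "content": 'Write one exam question as a JSON object with keys "q" and "a". '
                       "Follow the examples exactly.\n" + EXAMPLES,
        },
        {"role": "user", "content": f"Document: {doc}\nQuestion:"},
    ],
    format="json",  # constrain the reply to valid JSON
)
print(json.loads(resp["message"]["content"]))
```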
LlamaIndex CitationQuery - Sources from nodes not counted properly
1
[removed]
2024-12-04T13:57:13
https://www.reddit.com/r/LocalLLaMA/comments/1h6gl66/llamaindex_citationquery_sources_from_nodes_not/
Namarrus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6gl66
false
null
t3_1h6gl66
/r/LocalLLaMA/comments/1h6gl66/llamaindex_citationquery_sources_from_nodes_not/
false
false
self
1
null
Help decoding retrieved vector values to "see" what a chunk looks like in natural language
1
[removed]
2024-12-04T14:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1h6gqbm/help_decoding_retrieved_vector_values_to_see_what/
RAGcontent
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6gqbm
false
null
t3_1h6gqbm
/r/LocalLLaMA/comments/1h6gqbm/help_decoding_retrieved_vector_values_to_see_what/
false
false
self
1
null
Early results training Llama-3.1-8B as an evaluator
20
2024-12-04T14:32:51
https://i.redd.it/gwlbriyrdu4e1.png
fortunemaple
i.redd.it
1970-01-01T00:00:00
0
{}
1h6heit
false
null
t3_1h6heit
/r/LocalLLaMA/comments/1h6heit/early_results_training_llama318b_as_an_evaluator/
false
false
https://a.thumbs.redditm…ri27nnofFu58.jpg
20
{'enabled': True, 'images': [{'id': 'krNKy6clDjxvVsmm3R0CEgmZDu95VTqVXQO8bUtkR9M', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/gwlbriyrdu4e1.png?width=108&crop=smart&auto=webp&s=4012fbe82a7dcd1054db488c8c970213e632be8a', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/gwlbriyrdu4e1.png?width=216&crop=smart&auto=webp&s=650f0bad0fea4ea249037009b1e87a846511dc92', 'width': 216}, {'height': 163, 'url': 'https://preview.redd.it/gwlbriyrdu4e1.png?width=320&crop=smart&auto=webp&s=2a7e91a93925bcc829e5368cc641c4c1bb7761b2', 'width': 320}, {'height': 327, 'url': 'https://preview.redd.it/gwlbriyrdu4e1.png?width=640&crop=smart&auto=webp&s=e2b0b7fdf1e787b689fddecae4048f4c0dea9653', 'width': 640}, {'height': 491, 'url': 'https://preview.redd.it/gwlbriyrdu4e1.png?width=960&crop=smart&auto=webp&s=6186845f673d95425adfc8ce6d3b8257c2dc0aa2', 'width': 960}], 'source': {'height': 533, 'url': 'https://preview.redd.it/gwlbriyrdu4e1.png?auto=webp&s=e7d6a42e802c34197e32506a1b780138bc233feb', 'width': 1041}, 'variants': {}}]}
Taking the latest views on laptops for local models
1
[removed] [View Poll](https://www.reddit.com/poll/1h6hl74)
2024-12-04T14:41:02
https://www.reddit.com/r/LocalLLaMA/comments/1h6hl74/taking_the_latest_views_on_laptops_for_local/
ExpressionEcstatic80
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h6hl74
false
null
t3_1h6hl74
/r/LocalLLaMA/comments/1h6hl74/taking_the_latest_views_on_laptops_for_local/
false
false
self
1
null