Dataset schema (one record per submission; fields appear in this order in each record below):

| column    | dtype         | range / classes                              |
|-----------|---------------|----------------------------------------------|
| title     | string        | length 1 to 300                              |
| score     | int64         | 0 to 8.54k                                   |
| selftext  | string        | length 0 to 40k                              |
| created   | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29   |
| url       | string        | length 0 to 878                              |
| author    | string        | length 3 to 20                               |
| domain    | string        | length 0 to 82                               |
| edited    | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18   |
| gilded    | int64         | 0 to 2                                       |
| gildings  | string        | 7 classes                                    |
| id        | string        | length 7                                     |
| locked    | bool          | 2 classes                                    |
| media     | string        | length 646 to 1.8k                           |
| name      | string        | length 10                                    |
| permalink | string        | length 33 to 82                              |
| spoiler   | bool          | 2 classes                                    |
| stickied  | bool          | 2 classes                                    |
| thumbnail | string        | length 4 to 213                              |
| ups       | int64         | 0 to 8.54k                                   |
| preview   | string        | length 301 to 5.01k                          |
Drummer's Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with RP tuning!
1
[removed]
2024-12-16T15:01:04
https://www.reddit.com/r/LocalLLaMA/comments/1hfkzu9/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfkzu9
false
null
t3_1hfkzu9
/r/LocalLLaMA/comments/1hfkzu9/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
false
false
https://b.thumbs.redditm…map5PRP5eo5I.jpg
1
{'enabled': False, 'images': [{'id': '1_Weo2q76rJe8LnA2Uz_qDQZl1kHt7Wb3qrcK3Er08I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=108&crop=smart&auto=webp&s=b5e75fda771f25d04db45e8a2cd723555620caee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=216&crop=smart&auto=webp&s=3d1a8df12ec53937e36b6a4313f12f108a4d419e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=320&crop=smart&auto=webp&s=853ab1cb8afc597b0355ea8be48f82d5688ad514', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=640&crop=smart&auto=webp&s=a90a738f26c72f79861af718000f04c7eb102b6a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=960&crop=smart&auto=webp&s=9924793255acfe3cded6e64fc87fea87dc20d53a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=1080&crop=smart&auto=webp&s=aacb25923d46fe5e2623b3ee2847d7f64100de24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?auto=webp&s=dcdefea1ebc45a8174debd7b619f799ccbab6ec1', 'width': 1200}, 'variants': {}}]}
Drummer's Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with additional training!
1
[removed]
2024-12-16T15:02:22
https://www.reddit.com/r/LocalLLaMA/comments/1hfl11c/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl11c
false
null
t3_1hfl11c
/r/LocalLLaMA/comments/1hfl11c/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
false
false
https://b.thumbs.redditm…x4fOZBy9wtyQ.jpg
1
{'enabled': False, 'images': [{'id': '1_Weo2q76rJe8LnA2Uz_qDQZl1kHt7Wb3qrcK3Er08I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=108&crop=smart&auto=webp&s=b5e75fda771f25d04db45e8a2cd723555620caee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=216&crop=smart&auto=webp&s=3d1a8df12ec53937e36b6a4313f12f108a4d419e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=320&crop=smart&auto=webp&s=853ab1cb8afc597b0355ea8be48f82d5688ad514', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=640&crop=smart&auto=webp&s=a90a738f26c72f79861af718000f04c7eb102b6a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=960&crop=smart&auto=webp&s=9924793255acfe3cded6e64fc87fea87dc20d53a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=1080&crop=smart&auto=webp&s=aacb25923d46fe5e2623b3ee2847d7f64100de24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?auto=webp&s=dcdefea1ebc45a8174debd7b619f799ccbab6ec1', 'width': 1200}, 'variants': {}}]}
Drummer's Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with additional training!
1
[removed]
2024-12-16T15:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1hfl1sm/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl1sm
false
null
t3_1hfl1sm
/r/LocalLLaMA/comments/1hfl1sm/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
false
false
https://b.thumbs.redditm…x4fOZBy9wtyQ.jpg
1
{'enabled': False, 'images': [{'id': '1_Weo2q76rJe8LnA2Uz_qDQZl1kHt7Wb3qrcK3Er08I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=108&crop=smart&auto=webp&s=b5e75fda771f25d04db45e8a2cd723555620caee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=216&crop=smart&auto=webp&s=3d1a8df12ec53937e36b6a4313f12f108a4d419e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=320&crop=smart&auto=webp&s=853ab1cb8afc597b0355ea8be48f82d5688ad514', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=640&crop=smart&auto=webp&s=a90a738f26c72f79861af718000f04c7eb102b6a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=960&crop=smart&auto=webp&s=9924793255acfe3cded6e64fc87fea87dc20d53a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=1080&crop=smart&auto=webp&s=aacb25923d46fe5e2623b3ee2847d7f64100de24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?auto=webp&s=dcdefea1ebc45a8174debd7b619f799ccbab6ec1', 'width': 1200}, 'variants': {}}]}
Drummer's Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with additional training!
1
[removed]
2024-12-16T15:05:56
https://www.reddit.com/r/LocalLLaMA/comments/1hfl3sw/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl3sw
false
null
t3_1hfl3sw
/r/LocalLLaMA/comments/1hfl3sw/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
false
false
https://b.thumbs.redditm…x4fOZBy9wtyQ.jpg
1
{'enabled': False, 'images': [{'id': '1_Weo2q76rJe8LnA2Uz_qDQZl1kHt7Wb3qrcK3Er08I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=108&crop=smart&auto=webp&s=b5e75fda771f25d04db45e8a2cd723555620caee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=216&crop=smart&auto=webp&s=3d1a8df12ec53937e36b6a4313f12f108a4d419e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=320&crop=smart&auto=webp&s=853ab1cb8afc597b0355ea8be48f82d5688ad514', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=640&crop=smart&auto=webp&s=a90a738f26c72f79861af718000f04c7eb102b6a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=960&crop=smart&auto=webp&s=9924793255acfe3cded6e64fc87fea87dc20d53a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=1080&crop=smart&auto=webp&s=aacb25923d46fe5e2623b3ee2847d7f64100de24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?auto=webp&s=dcdefea1ebc45a8174debd7b619f799ccbab6ec1', 'width': 1200}, 'variants': {}}]}
Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with additional training!
1
[removed]
2024-12-16T15:06:32
https://www.reddit.com/r/LocalLLaMA/comments/1hfl49m/skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl49m
false
null
t3_1hfl49m
/r/LocalLLaMA/comments/1hfl49m/skyfall_39b_and_tunguska_39b_an_upscale/
false
false
https://b.thumbs.redditm…x4fOZBy9wtyQ.jpg
1
{'enabled': False, 'images': [{'id': '1_Weo2q76rJe8LnA2Uz_qDQZl1kHt7Wb3qrcK3Er08I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=108&crop=smart&auto=webp&s=b5e75fda771f25d04db45e8a2cd723555620caee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=216&crop=smart&auto=webp&s=3d1a8df12ec53937e36b6a4313f12f108a4d419e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=320&crop=smart&auto=webp&s=853ab1cb8afc597b0355ea8be48f82d5688ad514', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=640&crop=smart&auto=webp&s=a90a738f26c72f79861af718000f04c7eb102b6a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=960&crop=smart&auto=webp&s=9924793255acfe3cded6e64fc87fea87dc20d53a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?width=1080&crop=smart&auto=webp&s=aacb25923d46fe5e2623b3ee2847d7f64100de24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5GWZz4XEusQ_ANaTWwQXX3l5MEGAjfh9vFuo1-KN_Fo.jpg?auto=webp&s=dcdefea1ebc45a8174debd7b619f799ccbab6ec1', 'width': 1200}, 'variants': {}}]}
Drummer's Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with additional RP & creative training!
1
[removed]
2024-12-16T15:10:48
https://www.reddit.com/r/LocalLLaMA/comments/1hfl7l8/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl7l8
false
null
t3_1hfl7l8
/r/LocalLLaMA/comments/1hfl7l8/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
false
false
https://a.thumbs.redditm…d0ig7oKLchA0.jpg
1
null
Is V100 Still Viable for LLM Fine-Tuning?
1
[removed]
2024-12-16T15:11:10
https://www.reddit.com/r/LocalLLaMA/comments/1hfl7vt/is_v100_still_viable_for_llm_finetuning/
Left-Day-9079
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl7vt
false
null
t3_1hfl7vt
/r/LocalLLaMA/comments/1hfl7vt/is_v100_still_viable_for_llm_finetuning/
false
false
self
1
null
Drummer's Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with additional RP & creative training!
1
test
2024-12-16T15:11:15
https://www.reddit.com/r/LocalLLaMA/comments/1hfl7yb/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl7yb
false
null
t3_1hfl7yb
/r/LocalLLaMA/comments/1hfl7yb/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
false
false
self
1
null
Drummer's Skyfall 39B and Tunguska 39B! An upscale experiment on Mistral Small 22B with additional RP & creative training!
1
[removed]
2024-12-16T15:13:19
https://www.reddit.com/r/LocalLLaMA/comments/1hfl9jj/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfl9jj
false
null
t3_1hfl9jj
/r/LocalLLaMA/comments/1hfl9jj/drummers_skyfall_39b_and_tunguska_39b_an_upscale/
false
false
self
1
null
Hugging Face launches the Synthetic Data Generator - a UI to Build Datasets with Natural Language
223
Hi, I work at Hugging Face, and my team just shipped a free no-code UI for synthetic data generation under an Apache 2.0 license. The Synthetic Data Generator allows you to create high-quality datasets for training and fine-tuning language models. [The announcement blog](https://huggingface.co/blog/synthetic-data-generator) goes over a practical example of how to use it, and we made a [YouTube video](https://www.youtube.com/watch?v=nXjVtnGeEss).

Supported Tasks:

* Text Classification (50 samples/minute)
* Chat Data for Supervised Fine-Tuning (20 samples/minute)

This tool simplifies the process of creating custom datasets, and enables you to:

* Describe the characteristics of your desired application
* Iterate on sample datasets
* Produce full-scale datasets
* Push your datasets to the [Hugging Face Hub](https://huggingface.co/datasets?other=datacraft) and/or [Argilla](https://docs.argilla.io/)

Some cool additional features (a minimal local-hosting sketch follows this record):

* pip installable
* Host locally
* Swap out Hugging Face models
* Use OpenAI-compatible APIs

Some tasks are intended to be added based on engagement on [GitHub](https://github.com/argilla-io/synthetic-data-generator/issues):

* Evaluate datasets with LLMs as a Judge
* Generate RAG datasets

As always, we are open to suggestions and feedback.
2024-12-16T15:24:00
https://www.reddit.com/r/LocalLLaMA/comments/1hflhu4/hugging_face_launches_the_synthetic_data/
chef1957
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hflhu4
false
null
t3_1hflhu4
/r/LocalLLaMA/comments/1hflhu4/hugging_face_launches_the_synthetic_data/
false
false
self
223
{'enabled': False, 'images': [{'id': 'oG3mE7Po0Tm3-6oKQDq6HgqtpsG2_Lizp2F5eS1xtZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VyoHm9k6S3od7CrcsE0_mDGWAkj-y0zyqlgZimSSD28.jpg?width=108&crop=smart&auto=webp&s=43ff63199cb9c47a68e870ebe94d7b22c5333f3d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VyoHm9k6S3od7CrcsE0_mDGWAkj-y0zyqlgZimSSD28.jpg?width=216&crop=smart&auto=webp&s=fa5c9210480ee7f6aa0e9b537aa99ab0625634cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VyoHm9k6S3od7CrcsE0_mDGWAkj-y0zyqlgZimSSD28.jpg?width=320&crop=smart&auto=webp&s=7cb9947b7a3773078ec25c12f029557a2aef7765', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VyoHm9k6S3od7CrcsE0_mDGWAkj-y0zyqlgZimSSD28.jpg?width=640&crop=smart&auto=webp&s=897178e1317ec8f77e32cbecbf82f7461316c4ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VyoHm9k6S3od7CrcsE0_mDGWAkj-y0zyqlgZimSSD28.jpg?width=960&crop=smart&auto=webp&s=b6f61f0a2102b88b0f3fc163132115c70a24bdbf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VyoHm9k6S3od7CrcsE0_mDGWAkj-y0zyqlgZimSSD28.jpg?width=1080&crop=smart&auto=webp&s=904603c1bb06b2a00f97a3b5758e08a5a7db50c6', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://external-preview.redd.it/VyoHm9k6S3od7CrcsE0_mDGWAkj-y0zyqlgZimSSD28.jpg?auto=webp&s=602a8edca385b278f97959328b7c0fa3f2a06f79', 'width': 2320}, 'variants': {}}]}
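To make the "pip installable / host locally" bullets concrete, here is a minimal sketch. The package name, the environment-variable names, and the `launch()` entry point are assumptions inferred from the argilla-io/synthetic-data-generator repo, not confirmed API; check its README before relying on any of them.

```python
# Hypothetical local-hosting sketch for the Synthetic Data Generator.
# ASSUMPTIONS: the pip name `synthetic-dataset-generator`, the env-var names,
# and the `launch()` entry point are inferred, not verified -- see the repo README.
# pip install synthetic-dataset-generator

import os

# Point generation at any OpenAI-compatible endpoint (e.g. vLLM or llama.cpp serving locally).
os.environ["BASE_URL"] = "http://localhost:8000/v1"  # assumed variable name
os.environ["API_KEY"] = "not-needed-for-local"       # assumed variable name

from synthetic_dataset_generator import launch  # assumed import path

launch()  # serves the no-code UI locally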
RAG based document generation with OLLAMA - when are guardrails needed?
1
[removed]
2024-12-16T15:24:19
https://www.reddit.com/r/LocalLLaMA/comments/1hfli2z/rag_based_document_generation_with_ollama_when/
KishiBayes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfli2z
false
null
t3_1hfli2z
/r/LocalLLaMA/comments/1hfli2z/rag_based_document_generation_with_ollama_when/
false
false
self
1
null
Podcast summarisation
3
Hi, what are some good models to summarise a podcast? Or should I just use Whisper to get the transcript and use an LLM to generate the summary? (A sketch of the Whisper-plus-LLM pipeline follows this record.)
2024-12-16T15:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1hflnwj/podcast_summarisation/
dirk_klement
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hflnwj
false
null
t3_1hflnwj
/r/LocalLLaMA/comments/1hflnwj/podcast_summarisation/
false
false
self
3
null
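The transcribe-then-summarize route asked about above is short to script; a sketch assuming `pip install openai-whisper openai` and any local OpenAI-compatible server (llama.cpp, vLLM, Ollama, ...) on localhost. The endpoint, model name, and truncation length are placeholders.

```python
# Sketch: transcribe a podcast with Whisper, then summarize with a local LLM.
import whisper
from openai import OpenAI

# 1) Speech-to-text
stt = whisper.load_model("base")
transcript = stt.transcribe("podcast_episode.mp3")["text"]

# 2) Summarize the transcript via an OpenAI-compatible local server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")
resp = client.chat.completions.create(
    model="local-model",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarize podcasts into concise bullet points."},
        {"role": "user", "content": transcript[:16000]},  # naive truncation to fit context
    ],
)
print(resp.choices[0].message.content)
```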
Looking for API with llama models that allows for custom grammar.
0
I'm playing with custom grammars in llama.cpp on my Mac. I'd like to test some ideas on bigger models, but sadly I don't have enough RAM. Do you know of any Llama model provider that allows uploading a custom GBNF grammar file? (A sketch of the server-side grammar field follows this record.)
2024-12-16T15:51:49
https://www.reddit.com/r/LocalLLaMA/comments/1hfm43w/looking_for_api_with_llama_models_that_allows_for/
zie1ony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfm43w
false
null
t3_1hfm43w
/r/LocalLLaMA/comments/1hfm43w/looking_for_api_with_llama_models_that_allows_for/
false
false
self
0
null
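For reference, the feature being asked about is the `grammar` field that llama.cpp's own HTTP server accepts on its `/completion` endpoint, so any host that exposes that server unmodified should work. A hedged sketch against a local instance (the port and prompt are placeholders; verify the field name against your server version):

```python
# Sketch: constrain llama.cpp server output with a GBNF grammar.
# Assumes a llama.cpp server is running on localhost:8080 and that its
# /completion endpoint accepts a `grammar` field holding a GBNF string.
import requests

# Toy grammar: force the model to answer with yes or no only.
GRAMMAR = r'''
root ::= "yes" | "no"
'''

resp = requests.post(
    "http://localhost:8080/completion",
    json={
        "prompt": "Is the sky blue? Answer yes or no: ",
        "grammar": GRAMMAR,
        "n_predict": 4,
    },
)
print(resp.json()["content"])
```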
Setup/environment to compare performance of multiple LLMs?
2
For my university I am working on a project in which I'm trying to extract causal relationships from scientific papers using LLMs and outputting them in a .json format to visualise in a graph. I want to try some local LLMs and compare their results for this task.

For example, I'd like to give them 20 test questions, compare their outputs to the desired output, run this say 10 times, and get a % score for how well they did on average. Is there an easy way to do this automatically? Even better if I can also do API calls in the same environment to compare to cloud models! I am adept in Python and don't mind doing some scripting, but a visual interface would be amazing. I ran into GPT4All.

Any recommendations:

- for a model I can run (11GB DDR5 VRAM) which might work well for this task?
- on fine-tuning?
- on older but finetuned models (BioGPT for this purpose) versus newer but general models?

Any help is really appreciated! (A bare-bones comparison-harness sketch follows this record.)

Hardware:

- CPU: 7600X
- GPU: 2080TI 11GB VRAM
- RAM: 2x 32GB 4800MHz CL40
2024-12-16T15:52:58
https://www.reddit.com/r/LocalLLaMA/comments/1hfm518/setupenvironment_to_compare_performance_of/
ApplePenguinBaguette
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfm518
false
null
t3_1hfm518
/r/LocalLLaMA/comments/1hfm518/setupenvironment_to_compare_performance_of/
false
false
self
2
null
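Absent a dedicated framework, the loop described above is small enough to script directly. A bare-bones sketch, assuming every model (local or cloud) is reachable through an OpenAI-compatible endpoint and that "score" is simple exact-match on the expected JSON; endpoints, models, and the test case are placeholders:

```python
# Sketch: run N repetitions of a question set against several models and
# report average exact-match accuracy.
import json
from openai import OpenAI

MODELS = {
    "local-model": OpenAI(base_url="http://localhost:8000/v1", api_key="local"),
    # "gpt-4o-mini": OpenAI(),  # cloud models drop in via the same client
}
TESTS = [  # (question, expected JSON answer)
    ('Does smoking cause lung cancer? Reply as JSON {"cause":..., "effect":...}',
     {"cause": "smoking", "effect": "lung cancer"}),
]
RUNS = 10

for model, client in MODELS.items():
    correct = total = 0
    for question, expected in TESTS:
        for _ in range(RUNS):
            out = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": question}],
            ).choices[0].message.content
            try:
                correct += json.loads(out) == expected
            except json.JSONDecodeError:
                pass  # malformed JSON counts as a miss
            total += 1
    print(f"{model}: {100 * correct / total:.1f}% over {total} runs")
```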
Clustering Question
3
Hey all, I'm working on clustering large amounts of text and looking for different approaches people have found helpful, and breaking down a few of the things I've tried. If there are any articles or posts you've seen on the best way to cluster text, please let me know!

* Chunking and similarity clustering. Doesn't work well, too much variance.
* Extracting a very short summary & clustering based off that. Works a lot better, still a few small issues, i.e. where do you decide to break a cluster, etc. (a sketch of this variant follows this record).
* Kmeans - Eh.
* Doing a "double" cluster: finding high-level ideas and then drilling into each of those with an embedding model.
* Trying something like BM25 or TF-IDF to extract out similar words and cluster on that.

To break it down: the main issue I have is that clusters are pretty arbitrary, and I end up getting items that I feel should be in a different cluster quite frequently.
2024-12-16T15:59:25
https://www.reddit.com/r/LocalLLaMA/comments/1hfma6j/clustering_question/
coolcloud
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfma6j
false
null
t3_1hfma6j
/r/LocalLLaMA/comments/1hfma6j/clustering_question/
false
false
self
3
null
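For what it's worth, the "summarize then cluster" variant above fits in a few lines with off-the-shelf parts; a sketch assuming sentence-transformers embeddings and scikit-learn (>= 1.3) HDBSCAN, so the number of clusters and the break points are data-driven rather than fixed up front like KMeans. The model name and example texts are placeholders.

```python
# Sketch: embed short summaries and cluster them with a density-based method,
# which sidesteps picking k up front and marks outliers as noise (-1).
# Requires: pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.cluster import HDBSCAN

summaries = [
    "billing bug when invoices span two months",
    "invoice totals wrong across month boundaries",
    "feature request: dark mode for the dashboard",
]

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(
    summaries, normalize_embeddings=True
)
labels = HDBSCAN(min_cluster_size=2, metric="euclidean").fit_predict(embeddings)

for text, label in zip(summaries, labels):
    print(label, text)  # label -1 means "noise", i.e. no confident cluster
```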
Gemini 2.0 Flash Exp fully deterministic (at least in my testing) - Will that always be the case?
10
One of the most common problems I have faced working with LLMs is the lack of deterministic outputs. I was for a long time under the impression that if I gave a temperature of 0, I'd always get the same result. I learned that not to be the case due to hardware, parallelization, sampling, etc. I've been using Gemini 1.5 pro-002 for a while now and it is always very annoying that I set a seed, I set a temperature of 0, but it still would not always be 100% consistent. Some words would change, and when I was chaining together LLM calls, it would produce a very different final result. With Gemini 2.0 Flash, however, I am getting the exact same results every single time.

I tried a few tests (ran each 10 times) that failed for Gemini 1.5 pro and succeeded for 2.0 Flash (a sketch of this kind of check follows this record):

1. Tell me a story in 3 sentences
2. Give me 100 random numbers and 100 random names
3. Tell me a story about LLMs

A few questions for those more knowledgeable than me: Are there any instances that will break 2.0 Flash being deterministic? Why is 2.0 Flash deterministic but 1.5 Pro non-deterministic? Does it have something to do with the hardware the experimental version runs on, or is it more likely they made some kind of change to the sampling? Will that still be the case when the non-experimental version comes out?
2024-12-16T16:00:18
https://www.reddit.com/r/LocalLLaMA/comments/1hfmazm/gemini_20_flash_exp_fully_deterministic_at_least/
DivergingDog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfmazm
false
null
t3_1hfmazm
/r/LocalLLaMA/comments/1hfmazm/gemini_20_flash_exp_fully_deterministic_at_least/
false
false
self
10
null
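A quick way to reproduce this kind of test against any model is to hash repeated completions at temperature 0 with a fixed seed; a sketch against a generic OpenAI-compatible endpoint (endpoint and model are placeholders; `seed` is best-effort on most backends, which is exactly what this check exposes):

```python
# Sketch: empirically test output determinism by hashing N identical requests.
import hashlib
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

digests = set()
for _ in range(10):
    out = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": "Tell me a story in 3 sentences"}],
        temperature=0,
        seed=42,  # best-effort on most servers, not a hard guarantee
    ).choices[0].message.content
    digests.add(hashlib.sha256(out.encode()).hexdigest())

print("deterministic" if len(digests) == 1 else f"{len(digests)} distinct outputs")
```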
The Emerging Open-Source AI Stack
1
2024-12-16T16:01:50
https://www.timescale.com/blog/the-emerging-open-source-ai-stack/?utm_source=reddit&utm_medium=referral&utm_campaign=december-AI-launch&utm_content=the-emerging-open-source-ai-stack
jjackyliang
timescale.com
1970-01-01T00:00:00
0
{}
1hfmcfi
false
null
t3_1hfmcfi
/r/LocalLLaMA/comments/1hfmcfi/the_emerging_opensource_ai_stack/
false
false
https://a.thumbs.redditm…e0HSkA0zhmi0.jpg
1
{'enabled': False, 'images': [{'id': 'ztj_NauhsTDG-7vyQy-uTd_gcjJEp3El-ayravxX6bQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=108&crop=smart&auto=webp&s=726cfbce4be50b4ea49ce0d39d3369a893d575b0', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=216&crop=smart&auto=webp&s=719fa1478d1563bda59fe36dcde27556c92033d9', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=320&crop=smart&auto=webp&s=4266de200cf6066a392667a01a19e1c7db7b6e5b', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=640&crop=smart&auto=webp&s=258baa48af17da2ba6345cbf07f53e2515f668db', 'width': 640}, {'height': 526, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=960&crop=smart&auto=webp&s=78c0832f60a6770adb52aba25fed150a6d382ea7', 'width': 960}, {'height': 592, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=1080&crop=smart&auto=webp&s=71c0b44d9c33e8ef08ba83f6198437c4a34a7909', 'width': 1080}], 'source': {'height': 921, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?auto=webp&s=81c9f977e9147d6de32fa77b00ff7ab20efc9667', 'width': 1680}, 'variants': {}}]}
Generate Unlimited Podcast Audio Using Python and Google’s Generative AI: A Complete Step-by-Step Tutorial
1
[removed]
2024-12-16T16:18:29
https://www.reddit.com/r/LocalLLaMA/comments/1hfmq6c/generate_unlimited_podcast_audio_using_python_and/
Busy-Basket-5291
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfmq6c
false
null
t3_1hfmq6c
/r/LocalLLaMA/comments/1hfmq6c/generate_unlimited_podcast_audio_using_python_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xccC-_IsJnZpW5votGDIVdKBmkG0I3GpXby1TPlmU9o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/b6RSwO1idGX8lz5Y6Mk4jAi-8CXve_k_2ol_tjy5lzU.jpg?width=108&crop=smart&auto=webp&s=32d538ea07b233b55f405efd585a40bfa2015147', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/b6RSwO1idGX8lz5Y6Mk4jAi-8CXve_k_2ol_tjy5lzU.jpg?width=216&crop=smart&auto=webp&s=f06acca14d78193d2c23fa620132f1f1a2122a0d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/b6RSwO1idGX8lz5Y6Mk4jAi-8CXve_k_2ol_tjy5lzU.jpg?width=320&crop=smart&auto=webp&s=82acaf1b0513b4e2e630667516d36b7c03c290cb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/b6RSwO1idGX8lz5Y6Mk4jAi-8CXve_k_2ol_tjy5lzU.jpg?auto=webp&s=34c2107d65a5dc728d1459835e42c8eaf6f0084d', 'width': 480}, 'variants': {}}]}
Creative Writing fine-tune examples?
3
I'm looking for some examples of LLM outputs from a finetuned model or LORA built with a dataset of the author's own works. I have seen some guides on doing this, but I just want to find some sample outputs ideally.
2024-12-16T16:32:01
https://www.reddit.com/r/LocalLLaMA/comments/1hfn1ic/creative_writing_finetune_examples/
pwillia7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfn1ic
false
null
t3_1hfn1ic
/r/LocalLLaMA/comments/1hfn1ic/creative_writing_finetune_examples/
false
false
self
3
null
Is a combo of old 1070 graphics cards worth it?
1
[removed]
2024-12-16T16:49:35
https://www.reddit.com/r/LocalLLaMA/comments/1hfnfng/is_a_combo_of_old_1070_graphic_cards_worth_it/
majorfrankies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfnfng
false
null
t3_1hfnfng
/r/LocalLLaMA/comments/1hfnfng/is_a_combo_of_old_1070_graphic_cards_worth_it/
false
false
self
1
null
Help me run a thought experiment on "reframing" in an LLM
2
TL;DR: In short, I'm wondering if token selection could be "deflected" by an embedding, whether toward some summarized concept (Javascript code) or away from a concept (Java code, or an incorrect function) without actually impacting context... a sort of ad hoc application of memory/goals.

***

Imagine we have an LLM, which has a current context, and it reaches some point in the generation that could conceivably become conjectural, like coming up with an example or beginning a block of code (or a function). So, imagine, just before it implements that code block, it emits a token learned in training, we'll call it `<|bookmark|>`, and the LLM stores the current context to disk (or elsewhere in memory). Then, it continues on to complete the block, and is asked (and trained) to (and I hate to use the term) reflect on what it just wrote.

Now, if it determines it might have made a mistake (this is the bit I may be hazy on), we have a diff between the current state and the bookmark state, a sort of embedding of the current position. We can use that embedding as a negative - a reverse-RAG sort of idea: if the next token is too similar to that embedding, we lower the score. Or, it could literally "delete" the tokens output, the way a user would when editing or amending their output.

I think the general idea would work, but I suppose it would have to be only a slight modification if a token is too similar... if I'm writing a function to sort lists, I imagine another function to sort lists might be VERY similar, even if incorrect. Sort of a "deflection", either bending token selection toward the embedding, or away from it (a toy sketch of this step follows this record). And if one embedding/vector can do the deflection, you could create a number of these to encourage certain output and discourage other output. I'm wondering if such "splats" of embeddings might constitute a sort of short-term memory that doesn't necessarily increase context requirements.
2024-12-16T16:53:40
https://www.reddit.com/r/LocalLLaMA/comments/1hfnj3e/help_me_run_a_thought_experiment_on_reframing_in/
bigattichouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfnj3e
false
null
t3_1hfnj3e
/r/LocalLLaMA/comments/1hfnj3e/help_me_run_a_thought_experiment_on_reframing_in/
false
false
self
2
null
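The deflection step described above can be prototyped as a logits processor: score each candidate token's embedding against the stored "negative" embedding and subtract a penalty proportional to the similarity. A toy torch sketch; all shapes, the penalty scale, and the stand-in embedding table are arbitrary choices, not part of the original idea.

```python
# Toy sketch of embedding-based "deflection": bias next-token logits away
# from (or toward) a reference embedding. Dimensions/scale are arbitrary.
import torch
import torch.nn.functional as F

vocab_size, d_model = 32000, 512
token_embeddings = torch.randn(vocab_size, d_model)  # stand-in for the model's embedding table

def deflect(logits: torch.Tensor, reference: torch.Tensor, strength: float = 5.0) -> torch.Tensor:
    """Subtract (strength * cosine similarity to `reference`) from every token's logit.
    Negative `strength` attracts instead of repels."""
    sims = F.cosine_similarity(token_embeddings, reference.unsqueeze(0), dim=-1)
    return logits - strength * sims

logits = torch.randn(vocab_size)         # raw next-token logits from the model
negative = token_embeddings[123]         # e.g. an embedding of the "mistaken" span
adjusted = deflect(logits, negative)     # tokens similar to the mistake become less likely
print(torch.argmax(adjusted).item())
```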
Doubt: Wrong loss is getting calculated while fine tuning Whisper for conditional Generation
1
[removed]
2024-12-16T17:00:12
https://www.reddit.com/r/LocalLLaMA/comments/1hfnomb/doubt_wrong_loss_is_getting_calculated_while_fine/
Coder10100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfnomb
false
null
t3_1hfnomb
/r/LocalLLaMA/comments/1hfnomb/doubt_wrong_loss_is_getting_calculated_while_fine/
false
false
self
1
null
Embedding model without trust_remote_code=True
1
[removed]
2024-12-16T17:02:10
https://www.reddit.com/r/LocalLLaMA/comments/1hfnqis/embedding_model_without_trust_remote_codetrue/
Expensive-Paint-9490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfnqis
false
null
t3_1hfnqis
/r/LocalLLaMA/comments/1hfnqis/embedding_model_without_trust_remote_codetrue/
false
false
self
1
null
Vision model to OCR and interpret faxes
2
I currently use PaperlessNGX to OCR faxes and then use their API to pull the raw text for interpretation. Tesseract seems to do pretty well with OCR, but has a hard time with faint text or anything handwritten on the fax. It also has issues with complex layouts. I'm just trying to title and categorize faxes that come in, maybe summarize the longer faxes, and occasionally pull out specific information like names, dates, or other numbers based on the type of fax. I'm doing that currently with the raw text and some basic programming workflows, but it's quite limited because the workflows have to be updated for each new fax type. Are there good models for a workflow like this? Accessible through an API? (One possible local setup is sketched after this record.)
2024-12-16T17:13:19
https://www.reddit.com/r/LocalLLaMA/comments/1hfnzyp/vision_model_to_ocr_and_interpret_faxes/
hainesk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfnzyp
false
null
t3_1hfnzyp
/r/LocalLLaMA/comments/1hfnzyp/vision_model_to_ocr_and_interpret_faxes/
false
false
self
2
null
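One common local setup for this: run an open vision model behind Ollama and send the fax image plus an extraction prompt over its HTTP API. A sketch; the model tag and file name are placeholders for whatever vision model you actually pull.

```python
# Sketch: send a fax image to a local vision model via Ollama's HTTP API
# and ask for structured metadata.
import base64
import requests

with open("fax_0001.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2-vision",  # placeholder; any pulled vision model
        "prompt": "Return JSON with keys title, category, names, dates for this fax.",
        "images": [image_b64],
        "stream": False,
    },
)
print(resp.json()["response"])
```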
A single 3090 or 5 mi50s?
1
[removed]
2024-12-16T17:13:23
https://www.reddit.com/r/LocalLLaMA/comments/1hfo00n/a_single_3090_or_5_mi50s/
Forward-Ad-7672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfo00n
false
null
t3_1hfo00n
/r/LocalLLaMA/comments/1hfo00n/a_single_3090_or_5_mi50s/
false
false
self
1
null
Any decent app similar in ease-of-use to Msty for running image-related models?
6
While `ComfyUI` isn't without its flaws - it can be *very* disorienting to use, especially for those new to upscales or other advanced models - it does have some redeeming qualities. Yet, I personally find it confusing. One significant drawback is that it lacks native support for many popular model formats. This means that I'm often forced into scripting conversions between different file types (e.g., `.safetensors`, `.pth`, `.onnx`, and `.ncnn`), which can be time-consuming and cumbersome (one direction of that chore is sketched after this record). In contrast, `chaiNNer` offers some improvements over `ComfyUI` (i.e. it's slightly easier to use, if not by much), but nonetheless shares the same limitation as `ComfyUI` regarding model format support. As far as LLMs and VLMs are concerned, `Msty` couldn't possibly get simpler than what it already is. It just works, and you don't spend time debugging the background stuff and installing dozens of things...
2024-12-16T17:19:55
https://www.reddit.com/r/LocalLLaMA/comments/1hfo5n5/any_decent_app_similar_in_easeofuse_to_msty_for/
blueredscreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfo5n5
false
null
t3_1hfo5n5
/r/LocalLLaMA/comments/1hfo5n5/any_decent_app_similar_in_easeofuse_to_msty_for/
false
false
self
6
null
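On the conversion chore specifically: the `.pth`-to-`.safetensors` direction at least is only a few lines with the `safetensors` library. A sketch; file names are placeholders, and it assumes the checkpoint is (or contains) a flat state dict of tensors.

```python
# Sketch: convert a PyTorch .pth checkpoint to .safetensors.
import torch
from safetensors.torch import save_file

state = torch.load("model.pth", map_location="cpu")
if "state_dict" in state:          # some checkpoints nest the weights
    state = state["state_dict"]

# safetensors requires contiguous tensors
state = {k: v.contiguous() for k, v in state.items()}
save_file(state, "model.safetensors")
```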
The Emerging Open-Source AI Stack
100
2024-12-16T17:35:54
https://www.timescale.com/blog/the-emerging-open-source-ai-stack
jascha_eng
timescale.com
1970-01-01T00:00:00
0
{}
1hfojc1
false
null
t3_1hfojc1
/r/LocalLLaMA/comments/1hfojc1/the_emerging_opensource_ai_stack/
false
false
https://b.thumbs.redditm…wTBluUXmR6gw.jpg
100
{'enabled': False, 'images': [{'id': 'ztj_NauhsTDG-7vyQy-uTd_gcjJEp3El-ayravxX6bQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=108&crop=smart&auto=webp&s=726cfbce4be50b4ea49ce0d39d3369a893d575b0', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=216&crop=smart&auto=webp&s=719fa1478d1563bda59fe36dcde27556c92033d9', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=320&crop=smart&auto=webp&s=4266de200cf6066a392667a01a19e1c7db7b6e5b', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=640&crop=smart&auto=webp&s=258baa48af17da2ba6345cbf07f53e2515f668db', 'width': 640}, {'height': 526, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=960&crop=smart&auto=webp&s=78c0832f60a6770adb52aba25fed150a6d382ea7', 'width': 960}, {'height': 592, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?width=1080&crop=smart&auto=webp&s=71c0b44d9c33e8ef08ba83f6198437c4a34a7909', 'width': 1080}], 'source': {'height': 921, 'url': 'https://external-preview.redd.it/zDKxjioBzRdAtB0vbyojBApNCahgSF3CwxsqK2U7AYY.jpg?auto=webp&s=81c9f977e9147d6de32fa77b00ff7ab20efc9667', 'width': 1680}, 'variants': {}}]}
Best local-hosted model for coding tasks on 16gb VRAM?
5
I'm looking for a model to help me complete some code-related tasks that will fit in 16GB of VRAM (4070TI Super). Which model should I choose, and at which quantization? I mostly want to try to get a fake-copilot running with Continue.dev. I'm not expecting miracles either, but something functional would be nice.
2024-12-16T18:02:31
https://www.reddit.com/r/LocalLLaMA/comments/1hfp6mp/best_localhosted_model_for_coding_tasks_on_16gb/
AntwonTheDamaja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfp6mp
false
null
t3_1hfp6mp
/r/LocalLLaMA/comments/1hfp6mp/best_localhosted_model_for_coding_tasks_on_16gb/
false
false
self
5
null
New Models: Megrez 3B Instruct and Megrez 3B Omni with Apache 2.0 License
84
https://preview.redd.it/…ini-Megrez-Omni)
2024-12-16T18:50:28
https://www.reddit.com/r/LocalLLaMA/comments/1hfqbtt/new_models_megrez_3b_instruct_and_megrez_3b_omni/
Many_SuchCases
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfqbtt
false
null
t3_1hfqbtt
/r/LocalLLaMA/comments/1hfqbtt/new_models_megrez_3b_instruct_and_megrez_3b_omni/
false
false
https://b.thumbs.redditm…10QsUj-MyqAo.jpg
84
{'enabled': False, 'images': [{'id': '3moyWnZ5kqJfvgCdAbFmdK4bzcE63TlgP6G8kDzHF1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yI3sMUbW1YszfR-jdBJjGD280b8aCQjowsb0NtYKypA.jpg?width=108&crop=smart&auto=webp&s=c08baa3a099c7d1657d9209eea0c90b2665c1b02', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yI3sMUbW1YszfR-jdBJjGD280b8aCQjowsb0NtYKypA.jpg?width=216&crop=smart&auto=webp&s=880429e7da88e5a234c4b1dbaa30a7994ae5c8a6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yI3sMUbW1YszfR-jdBJjGD280b8aCQjowsb0NtYKypA.jpg?width=320&crop=smart&auto=webp&s=5723b3d003c6ae68a05f6a13ab6abb576bb28b94', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yI3sMUbW1YszfR-jdBJjGD280b8aCQjowsb0NtYKypA.jpg?width=640&crop=smart&auto=webp&s=fe287961bfd18d7e08388298d91f5b8008b2fb51', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yI3sMUbW1YszfR-jdBJjGD280b8aCQjowsb0NtYKypA.jpg?width=960&crop=smart&auto=webp&s=8d37b3772e7ebaf98f4f406d49b7f7b3ca840732', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yI3sMUbW1YszfR-jdBJjGD280b8aCQjowsb0NtYKypA.jpg?width=1080&crop=smart&auto=webp&s=79acfef9219e5437b0e68f3c408478a322eb1d00', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yI3sMUbW1YszfR-jdBJjGD280b8aCQjowsb0NtYKypA.jpg?auto=webp&s=e21e3049a626ad07333fc11353b493bc8aba2d44', 'width': 1200}, 'variants': {}}]}
Rumour: 24GB Arc B580.
543
2024-12-16T19:33:45
https://www.pcgamer.com/hardware/graphics-cards/shipping-document-suggests-that-a-24-gb-version-of-intels-arc-b580-graphics-card-could-be-heading-to-market-though-not-for-gaming/
Billy462
pcgamer.com
1970-01-01T00:00:00
0
{}
1hfrdos
false
null
t3_1hfrdos
/r/LocalLLaMA/comments/1hfrdos/rumour_24gb_arc_b580/
false
false
https://b.thumbs.redditm…ye2mU1TVbE1o.jpg
543
{'enabled': False, 'images': [{'id': 'g6y-c4adGL6FSRlo1jTaocCuemapsOYG52lQjxy2dUU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=108&crop=smart&auto=webp&s=2be7d740cb31a436d4570ca2851b6938abd36aca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=216&crop=smart&auto=webp&s=161613bd94790b7ead6d485ff41fc73c06b1ebfb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=320&crop=smart&auto=webp&s=c1558845b46d2b418d1e6d87a8ba36651d78cbe4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=640&crop=smart&auto=webp&s=ddd3f42144ca0c2a05d54cf349b57f74c2e13f0f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=960&crop=smart&auto=webp&s=66e62a40cfeb8a4310ea33538fe5083186238b10', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?width=1080&crop=smart&auto=webp&s=90370de6f4f7e8a4c3ac91f80bfd6dd03b9cf044', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/KNNit46prWlA2v7rjsUV6TaIPMXvtB72RAGA4ZyQjNE.jpg?auto=webp&s=431f02035325737f3bcd69627b33adf57a7c2c75', 'width': 1200}, 'variants': {}}]}
Graph-Based Editor for LLM Workflows
19
We made an open-source tool that provides a graph-based interface for building, debugging, and evaluating LLM workflows: [https://github.com/PySpur-Dev/PySpur](https://github.com/PySpur-Dev/PySpur)

**Why we built this:**

Before this, we built several LLM-powered applications that collectively served thousands of users. The biggest challenge we faced was ensuring reliability: making sure the workflows were robust enough to handle edge cases and deliver consistent results.

In practice, achieving this reliability meant repeatedly:

1. **Breaking down complex goals into simpler steps:** Composing prompts, tool calls, parsing steps, and branching logic.
2. **Debugging failures:** Identifying which part of the workflow broke and why.
3. **Measuring performance:** Assessing changes against real metrics to confirm actual improvement.

We tried some existing observability tools and agent frameworks, and they fell short on at least one of these three dimensions. We wanted something that allowed us to iterate quickly and stay focused on improvement rather than wrestling with multiple disconnected tools or code scripts. We eventually arrived at three principles upon which we built PySpur:

1. **Graph-based interface:** We can lay out an LLM workflow as a node graph. A node can be an LLM call, a function call, a parsing step, or any logic component. The visual structure provides an instant overview, making complex workflows more intuitive.
2. **Integrated debugging:** When something fails, we can pinpoint the problematic node, tweak it, and re-run it on some test cases right in the UI.
3. **Evaluate at the node level:** We can assess how node changes affect performance downstream.

We hope it's useful for other LLM developers out there, enjoy!
2024-12-16T19:36:30
https://www.reddit.com/r/LocalLLaMA/comments/1hfrg2f/graphbased_editor_for_llm_workflows/
Brilliant-Day2748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfrg2f
false
null
t3_1hfrg2f
/r/LocalLLaMA/comments/1hfrg2f/graphbased_editor_for_llm_workflows/
false
false
self
19
{'enabled': False, 'images': [{'id': 'zVcwKEYZe8apRMcG3Uga0fMZnV7Dp_mZtqToGNXlxhc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L5LLhL9ewwMOpDKpOgxIQj9JXPgbZUiBcoz71cmCMQk.jpg?width=108&crop=smart&auto=webp&s=b3fd332e5815a6007a2397484e21b54043f695ea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L5LLhL9ewwMOpDKpOgxIQj9JXPgbZUiBcoz71cmCMQk.jpg?width=216&crop=smart&auto=webp&s=6249ae94c65e1ddb423e10262757cf133035c168', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L5LLhL9ewwMOpDKpOgxIQj9JXPgbZUiBcoz71cmCMQk.jpg?width=320&crop=smart&auto=webp&s=19573bfa640e44da4e01f39daf909d023834c761', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L5LLhL9ewwMOpDKpOgxIQj9JXPgbZUiBcoz71cmCMQk.jpg?width=640&crop=smart&auto=webp&s=fabeff130cff30352b3de186475bb3075eeaded9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L5LLhL9ewwMOpDKpOgxIQj9JXPgbZUiBcoz71cmCMQk.jpg?width=960&crop=smart&auto=webp&s=58aa4fa88ba0c21bb5d008daa529c672aa666c01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L5LLhL9ewwMOpDKpOgxIQj9JXPgbZUiBcoz71cmCMQk.jpg?width=1080&crop=smart&auto=webp&s=b18c69e4113098be996c25bd4d8a5e846f6f0d6e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L5LLhL9ewwMOpDKpOgxIQj9JXPgbZUiBcoz71cmCMQk.jpg?auto=webp&s=b7ba528437af21246a12031651e16d6edaaffc81', 'width': 1200}, 'variants': {}}]}
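For readers new to the node-graph framing above: stripped of any UI, such a workflow is a DAG of named steps executed in topological order, with each node's output feeding its successors. A generic toy sketch of that idea (an illustration of the concept only, not PySpur's actual API; the node functions are made up):

```python
# Toy node-graph executor: runs callables in dependency order, passing each
# node the outputs of its parents. Illustrative only; not PySpur's API.
from graphlib import TopologicalSorter

def fetch(_):        return "raw document text"
def parse(inputs):   return inputs["fetch"].upper()          # stand-in for an LLM call
def decide(inputs):  return "long" if len(inputs["parse"]) > 10 else "short"

nodes = {"fetch": fetch, "parse": parse, "decide": decide}
edges = {"fetch": set(), "parse": {"fetch"}, "decide": {"parse"}}  # node -> parents

results = {}
for name in TopologicalSorter(edges).static_order():
    results[name] = nodes[name]({p: results[p] for p in edges[name]})
    print(f"{name} -> {results[name]!r}")  # per-node visibility aids debugging
```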
Anyone regularly using LLMs for their personal health?
2
[removed]
2024-12-16T19:39:45
https://www.reddit.com/r/LocalLLaMA/comments/1hfriwh/anyone_regularly_using_llms_for_their_personal/
jlreyes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfriwh
false
null
t3_1hfriwh
/r/LocalLLaMA/comments/1hfriwh/anyone_regularly_using_llms_for_their_personal/
false
false
self
2
null
CoT Model Fine Tuning
1
[removed]
2024-12-16T19:46:08
https://www.reddit.com/r/LocalLLaMA/comments/1hfro6x/cot_model_fine_tuning/
AustinFirstAndOnly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfro6x
false
null
t3_1hfro6x
/r/LocalLLaMA/comments/1hfro6x/cot_model_fine_tuning/
false
false
self
1
null
CPU inferencing LLM + RAG help! (+ PiperTTS setup)
1
[removed]
2024-12-16T19:46:53
https://www.reddit.com/r/LocalLLaMA/comments/1hfrotr/cpu_inferencing_llm_rag_help_pipertts_setup/
ReplacementSafe8563
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfrotr
false
null
t3_1hfrotr
/r/LocalLLaMA/comments/1hfrotr/cpu_inferencing_llm_rag_help_pipertts_setup/
false
false
self
1
null
Local RAG apps with barebones dependencies
1
[removed]
2024-12-16T20:01:46
https://www.reddit.com/r/LocalLLaMA/comments/1hfs1em/local_rag_apps_with_barebones_dependencies/
110_percent_wrong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfs1em
false
null
t3_1hfs1em
/r/LocalLLaMA/comments/1hfs1em/local_rag_apps_with_barebones_dependencies/
false
false
self
1
null
HF enters the scaling chat
54
[https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute](https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute)
2024-12-16T20:20:14
https://www.reddit.com/r/LocalLLaMA/comments/1hfsgxd/hf_enters_the_scaling_chat/
muchomuchacho
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfsgxd
false
null
t3_1hfsgxd
/r/LocalLLaMA/comments/1hfsgxd/hf_enters_the_scaling_chat/
false
false
self
54
null
List of Small/Edge models i tested without a GPU
34
So encouraged by the 3 people who believed in my [efforts](https://www.reddit.com/r/LocalLLaMA/comments/1hcg7fw/comment/m1o91tt/) ( u/poli-cya, u/Ill-Still-6859, u/GraybeardTheIrate :') ) I decided to share this little spread[sheet](https://imgur.com/jQ4CLFq) of Small Edge models I've been building up. Unlike most tests here I'm:

- Not basing it on the fancy AI benchmarks out there, because there are people much smarter than I ([R baboon](https://imgur.com/A8T3o6d))[^1] doing this.
- Basing it on actual use cases I have for AI, which are:
  1. For work (not a coder - I'm the [enemy](https://www.youtube.com/watch?v=YeNBsW0Slrk) - system prompt below)[^4]
  2. Getting lists of obscure/forgotten games (prompt is: "Give me a list of 30 obscure games between 1998 to 2008") [^2][^3]
  3. Reading reviews for aforementioned games. (Prompt: "Create a 3000 word review of the 2004 video game The Suffering in the review style of destructoid.com" - I mean, if you've played it...suffice to say it makes my nipples hard)

As you can tell, these are subjective as hell. Here's the sheet: https://docs.google.com/spreadsheets/d/1MVPQuMPwFPChcckbV2RJ6ECjWjUufP17V5iiTKgexfs/edit?usp=sharing

### Other points of note:

- For work, I'm comparing all these models with Claude, which is the one paid AI I use heavily (my company paid for a year). At the time of writing I have a context window of 400k input tokens which I've never reached - max I've done is 20k. I do understand this is unfair since some models on my list were made by 1 person finetuning in their free time - but thems the breaks.
- [I'm a GPU Poor](https://imgur.com/fsEdmav). I do earn, but that video I linked above isn't a joke to me, it's a reality. I'm running these on my desktop PC (AMD Ryzen 7 5700G w/ 64GB RAM) but no GPU. This works twofold in my favor in that I noticed the output tokens/sec. on my PC right now match up with the output speed on my phone (Samsung Galaxy S22 Ultra) with apps like ChatterUI and Layla - so I get the benefit of the exact same model in my pocket too.
- I'm storage poor too, so my intent is to keep a max of 8 models which I can hopefully bring down to 3. So for the models I delete, this sheet acts as a record.
- The first test was for work; it's a lot simpler than the later tests I ran that were game related.
- I intend to add a column for what I now know as `Time to first token` because quite a few of the larger models were murder with that.
- In case you're wondering why all my notes are 'breathless' - that's because I'd typed all those up in Notepad initially - every time I used a comma it'd split that text into its own cell in the spreadsheet, so I just avoided using commas unless it was info for the next column.
- To complete the circle I ran my poorly scratched-together sheets by Claude and it gave a more concise view - I've added that as a 3rd sheet called `Claudified results`.

I saw that LM Studio saves these as JSON files and, if you'd like, I can upload the output for each model; however, I will need to exclude the work one - it'll probably break the NDA and put me at risk...because that's how dying companies work - however I CAN tell you what the prompt is:

### Other benefits of doing this (i.e. this is where you could come in):

- Maybe I'm judging these models without doing some basic grooming to give the lesser-faring models more of a fighting chance. So perhaps you guys have a better way to warm up/prompt the models.
- Or maybe you can tell ME a better way of doing this; maybe a common-man version of my tests already exists and I'm just not aware. Either way I'm open to input!

Enjoy!

P.S. I thought of keeping a flair for myself but the only thing I could think of was 'Edgelord' and anything else sounded like a sex thing :(

[^1]: sorry, couldn't resist, but now you see my point about smarter people than me being out there.
[^2]: before becoming a parent and corporate gear I'd hoped to write and review obscure games with dabs of storytelling in the midst about said game's history. This never happened - now if I can make my own little local database of games that I can read about on Emustation or Launchbox, that'd be nice!
[^3]: System prompt: "You are an expert in finding truly obscure hidden gem video games (especially between 1998 to 2011) that have been discussed on reddit and other communities like mobygames.com and found on internet archive. Your main focus should be on singleplayer games that are obscure first and lesser known games second. Popular games discouraged."
[^4]:
```
Below is a system prompt I have been working on for you, it details what I need you to be and the step by step guide of my product. I would need you to do the following:
- go through all the text and remove duplications.
- take all the information below and first answer any potential questions (perhaps by creating a Q&A section) in a way that can be understood by the intended audience.
- Write all the processed data back into a system prompt that I can use, making recommendations on anything I may have missed - the intent is that the system prompt will make it clear that you are a Senior Product Manager and that you know, in detail, how company/platform works at a high level view down to a step by step basis.
Let me know if you have questions that need answering before you begin processing. After you're done, list out what was unnecessary so that I do not do it again.

Below is the text you need to work on:

Initial system prompt:
You are a senior product manager specializing in tech solutions. Your role is to advise junior product owners, junior developers and building features for company/platform

CORE RESPONSIBILITIES:
- Reading through the user prompt and understanding whether these are questions that need answering first and then documenting later. In that order of priority.
- Provide step-by-step guidance for feature development (with the intended audience being Developers, QA teams, UI/UX, product owners - all juniors)
- Focus on MVP approach first, with additional features under 'Considerations'
- Identify impact areas (Reports, Calculations, Workflows)
- Optimize user workflows for minimum clicks/steps

RESPONSE STRUCTURE:
- Core Components (bulleted list)
- Impact Analysis
- Workflow Diagram
- MVP Features
- Considerations (future enhancements)

PLATFORM CONTEXT:
1. Admin setup portal
2. Order management, sales reporting
3. Inventory materials management

The intent is to see what we currently can do and then add in features and functions to fill in gaps. In addition to understanding how Jalebi currently works, the intent of the text below is to also have a documented version of the steps given to both new clients and jalebi staff when onboarding them.

(Note for the person reading this: At this point I copy paste in the step guide I mentioned earlier that pushes the no. of input tokens to 20,000+)
```
2024-12-16T20:47:51
https://www.reddit.com/r/LocalLLaMA/comments/1hft45b/list_of_smalledge_models_i_tested_without_a_gpu/
RobinRelique
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hft45b
false
null
t3_1hft45b
/r/LocalLLaMA/comments/1hft45b/list_of_smalledge_models_i_tested_without_a_gpu/
false
false
self
34
{'enabled': False, 'images': [{'id': 'Gzqa453Zjsk5GYisvDyJ0VSVDQ4LE2aeTziqrxxx6po', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/S_tkjjNYIJEZPnTq44edim6wmFoeqcQis2LHgR4I7_A.jpg?width=108&crop=smart&auto=webp&s=6e657040192fd74df237efcbc8e58f191092ae66', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/S_tkjjNYIJEZPnTq44edim6wmFoeqcQis2LHgR4I7_A.jpg?width=216&crop=smart&auto=webp&s=10b6d2c31d99882a6164cddd046369af12151df5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/S_tkjjNYIJEZPnTq44edim6wmFoeqcQis2LHgR4I7_A.jpg?width=320&crop=smart&auto=webp&s=a86c04bb197ae71375b8d1650dcb005abd059d8d', 'width': 320}], 'source': {'height': 374, 'url': 'https://external-preview.redd.it/S_tkjjNYIJEZPnTq44edim6wmFoeqcQis2LHgR4I7_A.jpg?auto=webp&s=2b9c436bcb94944be4308a89cf0dab95cbf9847f', 'width': 498}, 'variants': {}}]}
Graphiti Temporal Knowledge Graph with Local LLMs
24
[Here is the code on Github](https://github.com/tonymantoan/local_graphiti) for some experiments I did getting [Zep's](https://help.getzep.com/concepts) [Graphiti](https://github.com/getzep/graphiti) temporal knowledge graph to run with local LLMs. There are also some notes on how it performed with different models, and some notes on the setup and config.

Graphiti is a GraphRAG solution with some temporal awareness which Zep generously open sourced. Sounded pretty cool, but out of the box it only uses ChatGPT and Claude. (You can set up the config so the OpenAI client points to any inference endpoint you want, but it will still try to use ChatGPT or Claude for tokenizing, and anyway, depending on the model you are using, the prompts might not get formatted correctly.) But I wanted to use it for my own locally running LLM app, as well as have more control over the prompts and API requests. So I extended a couple of their core classes to use local LLMs, and HF sentence transformers for embedding, so you can pick whatever embedding model you want (an illustrative embedder shim is sketched after this record).

Bottom line: given the complexity of the prompts involved, I am surprised how well the smallish models that I can run actually did. Ultimately they were either too inconsistent or too slow to be practical, but they're not far off! I have hope that newer, better trained, more compact models will be able to handle the workload on my hardware in the near future. I'll likely return to this before long.
2024-12-16T20:54:28
https://www.reddit.com/r/LocalLLaMA/comments/1hft9va/graphiti_temporal_knowledge_graph_with_local_llms/
Mennas11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hft9va
false
null
t3_1hft9va
/r/LocalLLaMA/comments/1hft9va/graphiti_temporal_knowledge_graph_with_local_llms/
false
false
self
24
{'enabled': False, 'images': [{'id': 'X6UJE1pE8uioxwUngukreOdcyJLiSQ2QXKFdaufKMIY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3TjURCCIDEHUO6ctsQr8SVgoM7bMDoQrxf7Qn7gYWh4.jpg?width=108&crop=smart&auto=webp&s=ef3d0d20bdc11d8dfbbeeb425e9ce332fe09d5e1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3TjURCCIDEHUO6ctsQr8SVgoM7bMDoQrxf7Qn7gYWh4.jpg?width=216&crop=smart&auto=webp&s=b0e73d1cae944a71ed5458d889800fe515177ac3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3TjURCCIDEHUO6ctsQr8SVgoM7bMDoQrxf7Qn7gYWh4.jpg?width=320&crop=smart&auto=webp&s=188da0bf301a0835be11f3b01fe56afb0e122928', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3TjURCCIDEHUO6ctsQr8SVgoM7bMDoQrxf7Qn7gYWh4.jpg?width=640&crop=smart&auto=webp&s=8c6c38eb5fbb6019cf3b73e71ec10113e8980e82', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3TjURCCIDEHUO6ctsQr8SVgoM7bMDoQrxf7Qn7gYWh4.jpg?width=960&crop=smart&auto=webp&s=3f8ad74c032f3170cca17d1fc6e85fb5c86f1836', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3TjURCCIDEHUO6ctsQr8SVgoM7bMDoQrxf7Qn7gYWh4.jpg?width=1080&crop=smart&auto=webp&s=590e8b2add76acd5ffcec429a399e3ec9731f36e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3TjURCCIDEHUO6ctsQr8SVgoM7bMDoQrxf7Qn7gYWh4.jpg?auto=webp&s=88a10ac5b06dff63fce8af99f24bed8b286c4523', 'width': 1200}, 'variants': {}}]}
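The embedding half of that swap is usually the easy part; its general shape, sketched with HF sentence-transformers. The `LocalEmbedder` class and its `embed()` method are hypothetical names for illustration - Graphiti's real client interfaces live in the linked repos.

```python
# Sketch: a local embedder backed by HF sentence-transformers, of the kind
# you would plug into a framework that expects an embedding client.
# Class/method names here are hypothetical; see Graphiti's repo for its real interface.
from sentence_transformers import SentenceTransformer

class LocalEmbedder:
    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name)

    def embed(self, texts: list[str]) -> list[list[float]]:
        # normalize so cosine similarity reduces to a dot product
        return self.model.encode(texts, normalize_embeddings=True).tolist()

embedder = LocalEmbedder()
vectors = embedder.embed(["knowledge graphs", "temporal edges"])
print(len(vectors), len(vectors[0]))  # 2 vectors, model-dependent dimension
```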
My take on the Post Pretraining world - Ilya’s talk
166
Hey r/LocalLLaMA! You might have heard Ilya Sutskever - the famed computer scientist from OpenAI, now at SSI - saying we're in the post-pretraining world. I don't normally post in long form, but I wanted to post my thoughts on his talk!

Ilya is implying we need to find **something else to scale** - the [brain-body mass ratio graph](https://en.wikipedia.org/wiki/Brain%E2%80%93body_mass_ratio) in the talk showed human intelligence "scaled" better than mammals.

https://preview.redd.it/4399wop6x97e1.png?width=913&format=png&auto=webp&s=640a1de8620f4f8c65cec27072832586a91b2733

LSTMs got out-scaled by transformers - the goal is to "edit" the scaling laws to make them more efficient. Evolution somehow first tried scaling intelligence for mammals, then pushed the frontier up for non-human primates. Large elephants which exceeded the 700g wall went extinct in the end. Then hominids came along, broke the wall, and scaled far better. [0]

https://preview.redd.it/r5imcyuhw97e1.png?width=702&format=png&auto=webp&s=3f59ebd982011590ba6b566b48ea8900b998a2e3

(A) Kaplan et al's scaling laws [1] show that if we increase **TRAINING compute** = N (# parameters) × D (# tokens of data), the test loss also decreases in a log-log setting.

https://preview.redd.it/3yjagiarw97e1.png?width=2018&format=png&auto=webp&s=d9c6a544da3cb330cb29edada19875126049beea

(A)* Instead of scaling TRAINING compute, Sutskever mentioned we can scale **TEST TIME** compute through search, or like O1 / QwQ etc.

(B) First on D (scaling data). There exists a theoretical "**Data Wall**", which is when all the data in the world (the internet and everything else) gets consumed by large models. Once we reach that point, we have to find ways to overcome this barrier for models to continue to scale.

https://preview.redd.it/5b1ij3myw97e1.png?width=2028&format=png&auto=webp&s=3176e5e1b5e8dc23cdb00808b72ed36d59398409

This could mean **Synthetic Data Generation** as Sutskever mentioned - literally using a trained model to augment datasets. The question is whether this will plateau or keep scaling. Another approach is to make data scaling more efficient through better **filtering**. The FineWeb [2] dataset is one example of this. We can also do more RL & post-training via DPO, PPO etc. to squeeze more performance out of the same amount of tokens, as explained in [Lambert's blog post](https://www.interconnects.ai/p/openais-reinforcement-finetuning) [3]. These move the frontier downwards.

https://preview.redd.it/6jaywn4jx97e1.jpg?width=1456&format=pjpg&auto=webp&s=d15eb53dcddd710a33571dba61a6a798be11c8c2

(C) Second on N (# of parameters) - the trick is to move to **active parameters** instead of total parameters. Large labs like OpenAI replaced MLPs / FFNs in dense transformers with MoE layers [4]. Instead of doing huge matrix multiplies, we smartly select only a few column groups to multiply, and leave the rest as 0. We can scale transformers to trillions of parameters like in Switch Transformers [5].

(C)(i) Coincidentally Meta released multiple papers, including one on **Byte Latent Transformers** [6] and **Memory Layers** [7]. BLTs edit the scaling laws themselves by changing the definition of "tokens" in data scaling and also adding more to the non-embedding parameters. BLTs remove BPE tokenization by instead learning to allocate a more optimal number of tokens / bytes to certain groups of patches through a smaller encoder. We then run a transformer on combined patches, and use a decoder for prediction.
https://preview.redd.it/mrwdyobpx97e1.png?width=2191&format=png&auto=webp&s=c3cd728454454fcaf8fecd8b6442faf23d658d2a

(D) Memory Layers are what really interested me! They are essentially sparse lookup tables, first devised as Product Key layers in Lample et al's paper [8]. We replace the FFN MLP with a gigantic learnable matrix of size (100M, d) called V (Values). We then select only the top k rows of V (say 4) and combine them via a softmax-weighted sum. To find those top k rows, we need another matrix K (Keys) of size (100M, d) so that simple dot products give us the top indices. This essentially converts the dense MLP into a **weighted sparse lookup table**.

The issue is that finding the top k rows needs 100M operations, since we have to compute (K * q) to obtain the indices. Accessing V is easy, and we can offload V to RAM. The trick in [8] is to use **Fast Approximate Nearest Neighbors** to find the top k rows. But this is hard to differentiate during training, so instead we do another trick - we split K (100M, d) into 2 matrices KA and KB, both of size (sqrt(100M), d/2), and use their **Cartesian product**.

https://preview.redd.it/rg2ywkuzx97e1.png?width=1754&format=png&auto=webp&s=5010d818e67055cee2cb399aa324853a759614e1

(E) The Cartesian product of KA and KB has size (100M, d): every row of KA (1, d/2) pairs with every row of KB (1, d/2), and since each matrix has sqrt(100M) rows, the full product has sqrt(100M) * sqrt(100M) = 100M rows, each of dimension d/2 + d/2 = d.

To get indices in 0 to N-1, observe that the dot product against a combined key decomposes into two halves: splitting q = [qa, qb], we get q · k = qa · ka + qb · kb. To find the largest sums, we can find the top scores of each half separately and combine them. The indices are then simply sqrt(N) * topk_indices(KA · qa) + topk_indices(KB · qb). This is super cool since we can now scale these sparse lookup tables to massive sizes while only spending a small (sqrt(100M), d) of extra key space. The paper [7] also adds a non-linearity like in the GLU [9] variants - this is called the **Memory+ layer**, and it scales better than MoEs! (A minimal sketch of the product-key lookup follows the summary below.)

https://preview.redd.it/gd5dan3cx97e1.png?width=2012&format=png&auto=webp&s=94a3a7fc7d859c9820f40a0415ae94df4086b49f

(F) A long post, but my final take is that Ilya is saying we need to find something else to scale. This could be:

1. Scaling test-time compute instead, via search, agents, O1-style reasoning
2. Changing the architecture while holding training compute constant, e.g. MoEs, Memory+ layers
3. Changing the axes of the scaling laws themselves, like BLTs do
4. Breaking the Data Wall via Synthetic Data Generation, RL, DPO, PPO, filtering etc.
5. Or something else!
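Here's a minimal numeric sketch of the (D)/(E) product-key lookup in PyTorch - toy sizes, a single head, and none of the training details from [7] / [8]:

    import torch

    n, half_d, d_v, k = 32, 8, 16, 4          # real layers use n ~ sqrt(100M)
    KA = torch.randn(n, half_d)               # sub-keys, first half of q
    KB = torch.randn(n, half_d)               # sub-keys, second half of q
    V  = torch.randn(n * n, d_v)              # giant value table (can sit in RAM)

    def lookup(q):                            # q: (2 * half_d,)
        qa, qb = q[:half_d], q[half_d:]
        sa, ia = (KA @ qa).topk(k)            # O(n) per half, not O(n^2)
        sb, ib = (KB @ qb).topk(k)
        # full score of a Cartesian-product key is the sum of its half scores
        scores = (sa[:, None] + sb[None, :]).flatten()     # k*k candidates
        idx = (ia[:, None] * n + ib[None, :]).flatten()    # index into V
        w, top = scores.topk(k)
        w = torch.softmax(w, dim=0)
        return w @ V[idx[top]]                # weighted sum of k value rows

    print(lookup(torch.randn(2 * half_d)).shape)           # torch.Size([16])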
I watched Ilya's talk here: [https://www.youtube.com/watch?v=1yvBqasHLZs](https://www.youtube.com/watch?v=1yvBqasHLZs)

References:

* [0] Brain–body mass ratio [https://en.wikipedia.org/wiki/Brain%E2%80%93body_mass_ratio](https://en.wikipedia.org/wiki/Brain%E2%80%93body_mass_ratio)
* [1] Kaplan et al, "Scaling Laws for Neural Language Models" [https://arxiv.org/pdf/2001.08361](https://arxiv.org/pdf/2001.08361)
* [2] Penedo et al, "The FineWeb Datasets" [https://arxiv.org/abs/2406.17557](https://arxiv.org/abs/2406.17557)
* [3] Lambert, RL for the masses [https://www.interconnects.ai/p/openais-reinforcement-finetuning](https://www.interconnects.ai/p/openais-reinforcement-finetuning)
* [4] Shazeer et al, "Outrageously Large Neural Networks" [https://arxiv.org/abs/1701.06538](https://arxiv.org/abs/1701.06538)
* [5] Fedus et al, "Switch Transformers" [https://arxiv.org/abs/2101.03961](https://arxiv.org/abs/2101.03961)
* [6] Pagnoni et al, "Byte Latent Transformer" [https://ai.meta.com/research/publications/byte-latent-transformer-patches-scale-better-than-tokens/](https://ai.meta.com/research/publications/byte-latent-transformer-patches-scale-better-than-tokens/)
* [7] Berges et al, "Memory Layers at Scale" [https://ai.meta.com/research/publications/memory-layers-at-scale/](https://ai.meta.com/research/publications/memory-layers-at-scale/)
* [8] Lample et al, "Large Memory Layers with Product Keys" [https://arxiv.org/abs/1907.05242](https://arxiv.org/abs/1907.05242)
* [9] Shazeer, "GLU Variants Improve Transformer" [https://arxiv.org/abs/2002.05202](https://arxiv.org/abs/2002.05202)
2024-12-16T21:00:32
https://www.reddit.com/r/LocalLLaMA/comments/1hftf75/my_take_on_the_post_pretraining_world_ilyas_talk/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hftf75
false
null
t3_1hftf75
/r/LocalLLaMA/comments/1hftf75/my_take_on_the_post_pretraining_world_ilyas_talk/
false
false
https://b.thumbs.redditm…n5EEjFJ8j5eI.jpg
166
{'enabled': False, 'images': [{'id': 'T0O-egShATfeTkvTyRuChf8-8WCnU5fICE6NZKslubc', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/Ani7vWfOEjbzMeVssV0GZqKSI01SMWtuv5ovDjqVka4.jpg?width=108&crop=smart&auto=webp&s=4a37f4b9265549ddbc0cf25ed2ad4819663817ec', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/Ani7vWfOEjbzMeVssV0GZqKSI01SMWtuv5ovDjqVka4.jpg?width=216&crop=smart&auto=webp&s=7d3a29df84dfd96c0f9db4dfd422086089bf8372', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/Ani7vWfOEjbzMeVssV0GZqKSI01SMWtuv5ovDjqVka4.jpg?width=320&crop=smart&auto=webp&s=02f3e20e213d0b53492e5e11d0e112a7705e71af', 'width': 320}, {'height': 396, 'url': 'https://external-preview.redd.it/Ani7vWfOEjbzMeVssV0GZqKSI01SMWtuv5ovDjqVka4.jpg?width=640&crop=smart&auto=webp&s=afc68dd6f4826f1108a2b67d60a91dad26170e1c', 'width': 640}, {'height': 594, 'url': 'https://external-preview.redd.it/Ani7vWfOEjbzMeVssV0GZqKSI01SMWtuv5ovDjqVka4.jpg?width=960&crop=smart&auto=webp&s=1a6a01523b1f9d3c59d25bbd8dd5ea97fc1849c2', 'width': 960}, {'height': 668, 'url': 'https://external-preview.redd.it/Ani7vWfOEjbzMeVssV0GZqKSI01SMWtuv5ovDjqVka4.jpg?width=1080&crop=smart&auto=webp&s=9b13e03d37f74a3a21f57fdafb0ed1258bfcc993', 'width': 1080}], 'source': {'height': 743, 'url': 'https://external-preview.redd.it/Ani7vWfOEjbzMeVssV0GZqKSI01SMWtuv5ovDjqVka4.jpg?auto=webp&s=89fbda883c0e95155f8cd2f6ae239ff6e48ddecb', 'width': 1200}, 'variants': {}}]}
Anyone regularly using LLMs for their personal health?
1
[removed]
2024-12-16T21:04:59
https://www.reddit.com/r/LocalLLaMA/comments/1hftj9a/anyone_regularly_using_llms_for_their_personal/
jlreyes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hftj9a
false
null
t3_1hftj9a
/r/LocalLLaMA/comments/1hftj9a/anyone_regularly_using_llms_for_their_personal/
false
false
self
1
null
Model help
1
[removed]
2024-12-16T21:12:36
https://www.reddit.com/r/LocalLLaMA/comments/1hftpm4/model_help/
Sleimixx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hftpm4
false
null
t3_1hftpm4
/r/LocalLLaMA/comments/1hftpm4/model_help/
false
false
self
1
null
Which OS Do Most People Use for Local LLMs?
50
What do you think is the most popular OS for running local LLMs? MacOS, Windows, or Linux? I see a lot of Mac and Windows users. I use both and will start experimenting with Linux. What do you use?
2024-12-16T21:30:50
https://www.reddit.com/r/LocalLLaMA/comments/1hfu52r/which_os_do_most_people_use_for_local_llms/
1BlueSpork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfu52r
false
null
t3_1hfu52r
/r/LocalLLaMA/comments/1hfu52r/which_os_do_most_people_use_for_local_llms/
false
false
self
50
null
Which medium local LLM (12b+) doesn’t have a strong AI writing style?
3
Thanks in advance. I'm looking for something to use as a social media / content creation type assistant, too.
2024-12-16T21:35:27
https://www.reddit.com/r/LocalLLaMA/comments/1hfu8uc/which_medium_local_llm_12b_doesnt_have_a_strong/
Deluded-1b-gguf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfu8uc
false
null
t3_1hfu8uc
/r/LocalLLaMA/comments/1hfu8uc/which_medium_local_llm_12b_doesnt_have_a_strong/
false
false
self
3
null
Need Help About Programming
1
[removed]
2024-12-16T21:37:03
https://www.reddit.com/r/LocalLLaMA/comments/1hfua8q/need_help_about_programming/
aaaazzzz11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfua8q
false
null
t3_1hfua8q
/r/LocalLLaMA/comments/1hfua8q/need_help_about_programming/
false
false
self
1
null
I built yet another agent framework
1
[removed]
2024-12-16T21:44:44
https://www.reddit.com/r/LocalLLaMA/comments/1hfugz1/i_built_yet_another_agent_framework/
igorbenav
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfugz1
false
null
t3_1hfugz1
/r/LocalLLaMA/comments/1hfugz1/i_built_yet_another_agent_framework/
false
false
self
1
null
Xpenology - Best Way to implement AI
1
[removed]
2024-12-16T21:50:01
https://www.reddit.com/r/LocalLLaMA/comments/1hfulii/xpenology_best_way_to_implement_ai/
Emotional_Public_398
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfulii
false
null
t3_1hfulii
/r/LocalLLaMA/comments/1hfulii/xpenology_best_way_to_implement_ai/
false
false
self
1
null
Making the LLM stop
2
50% of the time or more, running the model in llama.cpp with my prompt makes the model never stop. After the actual answer is completed it might either start repeating some of the last sentences, vomit random stuff, or, occasionally, produce the desired [end of text]. Any tips for making it always produce the [end of text]?
2024-12-16T22:06:35
https://www.reddit.com/r/LocalLLaMA/comments/1hfuzuu/making_the_llm_stop/
goingsplit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfuzuu
false
null
t3_1hfuzuu
/r/LocalLLaMA/comments/1hfuzuu/making_the_llm_stop/
false
false
self
2
null
Agent Framework Discussion
8
Wanted to start a discussion on choosing / not choosing a framework when building complex agent workflows. LangGraph & PydanticAI stand out as the most low-level and flexible architectures. Even so, these don't appear to be loved by developers in the community. Can someone expand upon why? I understand that creating agents in pure python allows for more flexibility, but doesn't the burden of rebuilding features that come prebuilt in other frameworks suck up all the time you need to actually develop the agent workflow? I have experience building agents in LlamaIndex and LangGraph, but want to get some opinions before I delve into a longer-run project. I am especially interested in long-run self-augmenting learning by the agents - basically a network of agent-specific "cheat sheets" written and constantly updated by a managing agent. At this point I'm rambling; let's discuss.
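For context on what I mean by "pure python" flexibility, a bare-bones agent loop really is just this (a sketch; `call_llm` and the tool registry are hypothetical stand-ins, not any framework's API):

    import json

    def call_llm(messages):
        # Stand-in for your model client (OpenAI, Ollama, llama.cpp server...).
        # Stubbed with a canned reply so the sketch runs end to end.
        return '{"answer": "stub answer"}'

    TOOLS = {"search": lambda query: f"results for {query!r}"}  # toy registry

    def run_agent(task, max_steps=5):
        messages = [
            {"role": "system", "content":
             'Reply with JSON: {"tool": ..., "args": {...}} or {"answer": ...}.'},
            {"role": "user", "content": task},
        ]
        for _ in range(max_steps):
            reply = json.loads(call_llm(messages))
            if "answer" in reply:                    # model decided it's done
                return reply["answer"]
            result = TOOLS[reply["tool"]](**reply["args"])  # dispatch tool call
            messages.append({"role": "user", "content": f"Tool result: {result}"})
        return "step limit reached"

    print(run_agent("find me a benchmark"))          # -> "stub answer"

The flip side is that retries, streaming, tracing, and state persistence - the things frameworks prebuild - all land on you.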
2024-12-16T22:22:20
https://www.reddit.com/r/LocalLLaMA/comments/1hfvcu5/agent_framework_discussion/
Difficult-Paper-6305
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfvcu5
false
null
t3_1hfvcu5
/r/LocalLLaMA/comments/1hfvcu5/agent_framework_discussion/
false
false
self
8
null
Cooling Tesla P40 in my Dell T420
3
Hi folks! I'm upgrading from my RTX A2000 to a Tesla P40 for my LLMs in my homelab primary machine. I'm still rather new to these enterprise cards, so please bear with me. I've since upgraded my dual 495W power supplies to dual 1100W units, and I have one of the two EPS12V connectors powering the P40. Unfortunately, I'm reaching 89-90C with a model loaded. My goal is to keep a model loaded and use it at my leisure (to avoid model load times). Hopefully I'm not biting off more than I can chew.....

Currently the P40 has dual Noctua 40mm fans attached as shown. It's not the greatest position, I know. I'll have to check voltage and amperage, but so far they are barely pushing any air relative to what I expected from these fans. They are running off the fan header for the fan on the top right, so I know their speed is regulated by CPU temps alone. If the voltages line up, I'm going to try to bump up the amperage by running them off a SATA fan controller, with respect to what the fans can take after some research.

If that is not enough to cool things off, I'm thinking of migrating to a setup with a 97x33mm fan and [one of these](https://www.amazon.com/Tesla-P100-V100-Blower-Cooling/dp/B0DDJM7X4R/ref=sr_1_20?crid=3K95K1SSO8K1&dib=eyJ2IjoiMSJ9.GVZAamcfMozoIVPPPaqDs2t1JgfaHf_8YL2eligtCQT-vswEoX6bA263jVE7S1ZNphV08X0jBelERL-9LrAyt6EPsM-QIzUeVEUwkVFlwh5TBUe2NkSqG4qmDMBi7LsD0ojobRt87Rn5b6tiRf628nMmSWkWj4BZEC85PLzk66qZlKFIdLndW9PhSuaQ3wiylb98xOCHUw5j3QGVqe29XMwgIzGteghK6IXCEyxkpPw.-Obz27r3U82C8VXoANhTVuU7D7e71w1eutYKC3-n9pY&dib_tag=se&keywords=tesla+p40&qid=1734376697&sprefix=tesla+p40%2Caps%2C173&sr=8-20). If I go this route, I'm curious how you guys are regulating fan speed, as I've been told these fans will just pull as much amperage as they can - all to the wonderful tune of a data center.

Any suggestions for the below are greatly appreciated:

* Are the Noctuas enough?
* How to properly pull a 12V or 5V power source that is not "temp regulated" like the CPU fan header.
  * This way I can just adjust on the fly without the base amperage/voltage being modified.
* Any way to pull power off an EPS12V? (No problem wiring or soldering - I'm pretty decent at it ;) )
* If I go with the 97x33, how to adjust the fan speed/amperage.

https://preview.redd.it/uhixe2ggga7e1.jpg?width=1024&format=pjpg&auto=webp&s=d1a84766d1ae3d763783711c64e6f42dbc669700

https://preview.redd.it/l79tpklmga7e1.png?width=654&format=png&auto=webp&s=4cbac5e34c9877db9901efa70419aab460f10d12

Updates:
2024-12-16T22:41:54
https://www.reddit.com/r/LocalLLaMA/comments/1hfvsqm/cooling_tesla_p40_in_my_dell_t420/
s0n1cm0nk3y
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfvsqm
false
null
t3_1hfvsqm
/r/LocalLLaMA/comments/1hfvsqm/cooling_tesla_p40_in_my_dell_t420/
false
false
https://b.thumbs.redditm…Kydq2kXiBsUA.jpg
3
null
Centralized source of documentation and best practices for beginners?
1
A [comment on a popular post yesterday suggested workflow applications like Wilmer, Omnichain, N8N, Langflow](https://www.reddit.com/r/LocalLLaMA/comments/1hf7jd2/everyone_share_their_favorite_chain_of_thought/m29olx1/). I'd pretty much never heard of most of these, even when searching for ways to solve the issues they address. While reading things in Hugging Face repos, I also randomly run into brand-new terminology I have zero understanding of, and half the time I can't find any explanations of it on Google.

In better-established areas you can often find help pages that list several 'default' tools and projects that most people use (e.g. create-react-app, bootstrap). I'm looking for a few sources someone can recommend that basically give an intro to what to use, and the basics of things.
2024-12-16T22:45:40
https://www.reddit.com/r/LocalLLaMA/comments/1hfvvs4/centralized_source_of_documentation_and_best/
TryKey925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfvvs4
false
null
t3_1hfvvs4
/r/LocalLLaMA/comments/1hfvvs4/centralized_source_of_documentation_and_best/
false
false
self
1
null
Outperforming Llama 70B with Llama 3B on hard math by scaling test-time compute!
462
Hi! I'm Lewis, a researcher at Hugging Face 👋. Over the past months we've been diving deep into trying to reverse-engineer and reproduce several key results that allow LLMs to "think longer" via test-time compute, and we're finally happy to share some of our knowledge.

Today we're sharing a detailed blog post on how we managed to outperform Llama 70B with Llama 3B on MATH by combining step-wise reward models with tree-search algorithms: [https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute](https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute)

In the blog post we cover:

* **Compute-optimal scaling:** How we implemented [@GoogleDeepMind](https://x.com/GoogleDeepMind)'s recipe to boost the mathematical capabilities of open models at test-time.
* **Diverse Verifier Tree Search (DVTS):** An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.
* **Search and Learn:** A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM. You can check it out here: [https://github.com/huggingface/search-and-learn](https://github.com/huggingface/search-and-learn)

Happy to answer questions!

https://preview.redd.it/yjvjqbedia7e1.png?width=1000&format=png&auto=webp&s=0abcabe5978f9e56e4e1c5293e1c91aa5fc01b26
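For a flavor of the simplest strategy in the post, here's a minimal weighted best-of-N sketch. Note that `generate` and `score_steps` are hypothetical stand-ins for a policy model and a process reward model (PRM), not the actual Search and Learn API:

    import random
    from collections import defaultdict

    def generate(problem, n):
        # Stand-in: sample n candidate solutions from a policy model; each
        # solution is a list of reasoning steps ending in a final answer.
        return [[f"step {i}", f"answer {random.choice('AB')}"] for i in range(n)]

    def score_steps(problem, steps):
        # Stand-in: a real PRM scores every intermediate step.
        return [random.random() for _ in steps]

    def weighted_best_of_n(problem, n=16):
        votes = defaultdict(float)
        for steps in generate(problem, n):
            score = min(score_steps(problem, steps))  # aggregate per-step scores
            votes[steps[-1]] += score                 # pool identical answers
        return max(votes, key=votes.get)              # highest total weight wins

    random.seed(0)
    print(weighted_best_of_n("What is 3*7?"))

Weighting identical answers by their verifier scores, rather than taking a single argmax, tends to hold up better as N grows.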
2024-12-16T22:52:25
https://www.reddit.com/r/LocalLLaMA/comments/1hfw14v/outperforming_llama_70b_with_llama_3b_on_hard/
lewtun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfw14v
false
null
t3_1hfw14v
/r/LocalLLaMA/comments/1hfw14v/outperforming_llama_70b_with_llama_3b_on_hard/
false
false
https://a.thumbs.redditm…uhIvpmStJJy0.jpg
462
null
Intel Arc in Data Centers
0
Nvidia disallows the use of RTX cards in data centers (per its driver EULA). Does Intel have a similar rule for Intel Arc GPUs / drivers?
2024-12-16T23:16:01
https://www.reddit.com/r/LocalLLaMA/comments/1hfwk94/intel_arc_in_data_centers/
Jotschi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfwk94
false
null
t3_1hfwk94
/r/LocalLLaMA/comments/1hfwk94/intel_arc_in_data_centers/
false
false
self
0
null
I made a fork of HunyuanVideo to work on Apple HW because I wanted to play around with SORA (like capabilities) locally on my Macbook pro.
37
Have fun: [https://github.com/gregcmartin/HunyuanVideo\_MLX](https://github.com/gregcmartin/HunyuanVideo_MLX)
2024-12-16T23:52:53
https://www.reddit.com/r/LocalLLaMA/comments/1hfxclr/i_made_a_fork_of_hunyuanvideo_to_work_on_apple_hw/
Striking_Luck_886
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfxclr
false
null
t3_1hfxclr
/r/LocalLLaMA/comments/1hfxclr/i_made_a_fork_of_hunyuanvideo_to_work_on_apple_hw/
false
false
self
37
{'enabled': False, 'images': [{'id': 'KOXReGsMEl9K7kI8C9qhkxy1n6DKAH5QK5CZ2FW1i2E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r0SV1-TnZFRY4RP_XOl3JWJsqZY5gfoVloJZN2-Qaow.jpg?width=108&crop=smart&auto=webp&s=b317273a36c0fa141188158223fa1b525d401f51', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r0SV1-TnZFRY4RP_XOl3JWJsqZY5gfoVloJZN2-Qaow.jpg?width=216&crop=smart&auto=webp&s=6d79415d9ff4197da55be4b9b6749792fe1f2b81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r0SV1-TnZFRY4RP_XOl3JWJsqZY5gfoVloJZN2-Qaow.jpg?width=320&crop=smart&auto=webp&s=61a29ac488725b4244d3ab3af0b33990e231b06a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r0SV1-TnZFRY4RP_XOl3JWJsqZY5gfoVloJZN2-Qaow.jpg?width=640&crop=smart&auto=webp&s=3e74357a013d61dea969d224b78afe6ad7e1cc85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r0SV1-TnZFRY4RP_XOl3JWJsqZY5gfoVloJZN2-Qaow.jpg?width=960&crop=smart&auto=webp&s=422cb9c6bd56bb5c81daded9c00e37904d25fb11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r0SV1-TnZFRY4RP_XOl3JWJsqZY5gfoVloJZN2-Qaow.jpg?width=1080&crop=smart&auto=webp&s=4a7ccdb85ca5c7725872f90d6592e159cf6f6ee8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r0SV1-TnZFRY4RP_XOl3JWJsqZY5gfoVloJZN2-Qaow.jpg?auto=webp&s=ea8bfcb400b4bb39396f52be33c516f4bb5efd01', 'width': 1200}, 'variants': {}}]}
Llama 3.1 8B struggles with tool calls
2
Hello, I'm using the Llama 3.1 8B model within a standard ReAct architecture. Despite having a very specific system prompt, the model consistently tries to call tools even when it's unnecessary. I've checked my code, and everything seems fine. Interestingly, I tried the same setup with Mistral NeMo, and the experience was significantly better—no excessive tool calls. I'm running this with LangChain and Ollama. Is this a known issue, or am I missing something? Has anyone else experienced this behavior? Thanks in advance!
2024-12-16T23:55:34
https://www.reddit.com/r/LocalLLaMA/comments/1hfxepg/llama_31_8b_struggles_with_tool_calls/
povedaaqui
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfxepg
false
null
t3_1hfxepg
/r/LocalLLaMA/comments/1hfxepg/llama_31_8b_struggles_with_tool_calls/
false
false
self
2
null
Instruction Tuning with Llama 3 7B and QLoRA for Word Grouping Task
1
[removed]
2024-12-16T23:55:49
https://www.reddit.com/r/LocalLLaMA/comments/1hfxew5/instruction_tuning_with_llama_3_7b_and_qlora_for/
Filus95
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfxew5
false
null
t3_1hfxew5
/r/LocalLLaMA/comments/1hfxew5/instruction_tuning_with_llama_3_7b_and_qlora_for/
false
false
self
1
null
is there anyway i can tell ollama to use ssd instead of ram
1
[removed]
2024-12-17T00:39:17
https://www.reddit.com/r/LocalLLaMA/comments/1hfybvs/is_there_anyway_i_can_tell_ollama_to_use_ssd/
BirdLate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfybvs
false
null
t3_1hfybvs
/r/LocalLLaMA/comments/1hfybvs/is_there_anyway_i_can_tell_ollama_to_use_ssd/
false
false
self
1
null
(3 models) L3-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-47B-GGUF - AKA The Death Star - NSFW, Non AI Like Prose
66
New list of models from DavidAU (me!). This is the largest model I have ever built (source at 95GB). It also uses methods that, as far as I am aware, have never been used to construct a model, including an MoE. This model uses 8 unreleased versions of Dark Planet 8B (creative) via an evolution process: each one is tested and only the good ones are kept.

The model is for creative use cases / role play, and can output NSFW. With this model you can access 1, 2, 3 or all 8 of these models - they work together. This model is set at 4 experts by default. As it is a "MoE" you can control the power levels too.

Example generations are at the repo, along with detailed settings, quants and a lot more info. Links to Imatrix versions are also at this repo.

https://huggingface.co/DavidAU/L3-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-47B-GGUF

Smaller versions (links to IMATRIX versions also at each repo) - each is a "different flavor" too:

https://huggingface.co/DavidAU/L3-MOE-4x8B-Dark-Planet-Rising-25B-GGUF

https://huggingface.co/DavidAU/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF

Source code for all - to make quants / use directly: [https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be](https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be)
2024-12-17T01:28:49
https://www.reddit.com/r/LocalLLaMA/comments/1hfzcoz/3_models/
Dangerous_Fix_5526
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfzcoz
false
null
t3_1hfzcoz
/r/LocalLLaMA/comments/1hfzcoz/3_models/
false
false
nsfw
66
{'enabled': False, 'images': [{'id': 'DfAVji7PEd-wa-f1AEwjj59xqxZF_EZV7224XWSSX4c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=108&crop=smart&auto=webp&s=fde783232e9d9c7a27a6a9cd337fcbf10b1cca8a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=216&crop=smart&auto=webp&s=655f680197076793d685a7bb44ee58f0d25c701c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=320&crop=smart&auto=webp&s=540c009bc53a122e5198e812998be07581a80ba3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=640&crop=smart&auto=webp&s=40b6b95daa0379909d516852ff40cba164f6c016', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=960&crop=smart&auto=webp&s=1219bebdd14e37910fccb064fb9f8bcad1cf6e6e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=1080&crop=smart&auto=webp&s=8d72816af10e803bec0463cfa811bbb41e392eb3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?auto=webp&s=d81d72a97930e39f0148120ed48167980772ae5a', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=03d78f187e5edf224990b331e60d6cbdb3d74909', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c3d76db535c150e7307cfa8e97928abce2335a3c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=f322fd7870beceabb35d10a0a52a291e04efb045', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=3fd831ed79f94b00baca2f3eeed9f97bfcf455d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=49f072c59bd45bc0bb938028cc5ad2d29fcd4d58', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=87149c815821d9d1470712ac4abbfc48b24080d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?blur=40&format=pjpg&auto=webp&s=a99609dbc0058d851596b604495b98f9af9928b0', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=03d78f187e5edf224990b331e60d6cbdb3d74909', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c3d76db535c150e7307cfa8e97928abce2335a3c', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=f322fd7870beceabb35d10a0a52a291e04efb045', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=3fd831ed79f94b00baca2f3eeed9f97bfcf455d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=49f072c59bd45bc0bb938028cc5ad2d29fcd4d58', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=87149c815821d9d1470712ac4abbfc48b24080d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r5QGVkVdsWi-CInGquMUea9kSY5UN0fR8y4Dyeo1i64.jpg?blur=40&format=pjpg&auto=webp&s=a99609dbc0058d851596b604495b98f9af9928b0', 'width': 1200}}}}]}
An acceleration of stable-diffusion.cpp
1
[removed]
2024-12-17T01:38:18
https://www.reddit.com/r/LocalLLaMA/comments/1hfzjdq/an_acceleration_of_stablediffusioncpp/
Specialist_Bug_5643
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfzjdq
false
null
t3_1hfzjdq
/r/LocalLLaMA/comments/1hfzjdq/an_acceleration_of_stablediffusioncpp/
false
false
self
1
{'enabled': False, 'images': [{'id': 'tZdKnGxWMNQLx7hZkCXVsRfsAet4hFW6AlNOhFpV3Kg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=108&crop=smart&auto=webp&s=d7541c387db923a4a779157b1e339ac046ae0cb4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=216&crop=smart&auto=webp&s=b98cf664a7bade0db20e3fd8f427a64109a88174', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=320&crop=smart&auto=webp&s=b5a8a1147f007067e2872c7cf911d4e6be97aa5d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=640&crop=smart&auto=webp&s=6626b5295fcf350b2ed58495349dd8354d8c843e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=960&crop=smart&auto=webp&s=709f2e8c348aad9bc58f7bc0a9c4eff6ecdbcce8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=1080&crop=smart&auto=webp&s=eb1e9137145b49f5cde68a643c5c545c7a872cfa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?auto=webp&s=a35ac07f7357f066b31efcb7464aca28113cad1a', 'width': 1200}, 'variants': {}}]}
2 x 3060 vs 1 4060ti
1
[removed]
2024-12-17T01:38:22
https://www.reddit.com/r/LocalLLaMA/comments/1hfzjf3/2_x_3060_vs_1_4060ti/
Weird_Bird1792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfzjf3
false
null
t3_1hfzjf3
/r/LocalLLaMA/comments/1hfzjf3/2_x_3060_vs_1_4060ti/
false
false
self
1
null
Acceleration of stable-diffusion cpp
1
2024-12-17T01:41:45
https://github.com/SealAILab/stable-diffusion-cpp
Specialist_Bug_5643
github.com
1970-01-01T00:00:00
0
{}
1hfzlnm
false
null
t3_1hfzlnm
/r/LocalLLaMA/comments/1hfzlnm/acceleration_of_stablediffusion_cpp/
false
false
https://b.thumbs.redditm…W8wKhvPThDgs.jpg
1
{'enabled': False, 'images': [{'id': 'tZdKnGxWMNQLx7hZkCXVsRfsAet4hFW6AlNOhFpV3Kg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=108&crop=smart&auto=webp&s=d7541c387db923a4a779157b1e339ac046ae0cb4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=216&crop=smart&auto=webp&s=b98cf664a7bade0db20e3fd8f427a64109a88174', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=320&crop=smart&auto=webp&s=b5a8a1147f007067e2872c7cf911d4e6be97aa5d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=640&crop=smart&auto=webp&s=6626b5295fcf350b2ed58495349dd8354d8c843e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=960&crop=smart&auto=webp&s=709f2e8c348aad9bc58f7bc0a9c4eff6ecdbcce8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?width=1080&crop=smart&auto=webp&s=eb1e9137145b49f5cde68a643c5c545c7a872cfa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m7TnDIPJEGvwDkGKeZRxVCf-5fDSDj2oTwb6f7bUhC4.jpg?auto=webp&s=a35ac07f7357f066b31efcb7464aca28113cad1a', 'width': 1200}, 'variants': {}}]}
Looking for diffing tools
1
I’m curious how tools like cursor and Lovable do their code diffing where the LLM suggests a change to some code and when approved it only changes the specific snippet rather than rewriting the full code file. Is there a library or platform for this that’s widely used? I recently found patched.codes, but curious what else is out there. What do you all use for this kind of thing? End game for me is the ambition to build a self healing coding agent
2024-12-17T01:50:44
https://www.reddit.com/r/LocalLLaMA/comments/1hfzrv9/looking_for_diffing_tools/
shepbryan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hfzrv9
false
null
t3_1hfzrv9
/r/LocalLLaMA/comments/1hfzrv9/looking_for_diffing_tools/
false
false
self
1
null
Going from 4K to 128K
8
Hello, I have an instruct model with a 4k context size. How do I train it to accommodate 128k? Is there a standard procedure? Do people first pretrain and then instruction-tune with a standard context size, and then do another training run with longer context? What goes on in that last stage? Is it done after SFT + RLHF?
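From what I've gathered so far, one common recipe is to scale RoPE and then continue training on long sequences. A sketch of what I mean, assuming the Hugging Face transformers RoPE-scaling config for Llama-style models (the checkpoint name is hypothetical):

    from transformers import AutoConfig, AutoModelForCausalLM

    config = AutoConfig.from_pretrained("my-4k-instruct-model")
    config.max_position_embeddings = 131072                   # 128k target
    config.rope_scaling = {"type": "linear", "factor": 32.0}  # 128k / 4k = 32
    model = AutoModelForCausalLM.from_pretrained(
        "my-4k-instruct-model", config=config)
    # ...then continued (pre)training on long documents, so the model
    # actually learns to use the extended window.

Is that roughly right?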
2024-12-17T02:17:02
https://www.reddit.com/r/LocalLLaMA/comments/1hg0a8b/going_from_4k_to_128k/
Low_Tour_4060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg0a8b
false
null
t3_1hg0a8b
/r/LocalLLaMA/comments/1hg0a8b/going_from_4k_to_128k/
false
false
self
8
null
InternLM-XComposer2.5-OmniLive Released, What is your experience so far?
13
https://preview.redd.it/el5wqqabkb7e1.png?width=618&format=png&auto=webp&s=e6ee3e89473e12bd6d6dd28c120ec88a288d0d72

**InternLM-XComposer2.5-OmniLive**, a comprehensive multimodal system for long-term streaming video and audio interactions.

* Real-time visual and auditory perception to understand the external world.
* Automatic formation of long-term memory based on observed content.
* Seamless voice interaction for more natural communication with human users.

Tech report: [https://arxiv.org/pdf/2412.09596](https://arxiv.org/pdf/2412.09596)

Code: [https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.5-OmniLive](https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.5-OmniLive)

Model: [https://huggingface.co/internlm/internlm-xcomposer2d5-ol-7b](https://huggingface.co/internlm/internlm-xcomposer2d5-ol-7b)
2024-12-17T02:26:08
https://www.reddit.com/r/LocalLLaMA/comments/1hg0gcd/internlmxcomposer25omnilive_released_what_is_your/
vansinhu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg0gcd
false
null
t3_1hg0gcd
/r/LocalLLaMA/comments/1hg0gcd/internlmxcomposer25omnilive_released_what_is_your/
false
false
https://b.thumbs.redditm…vOizi0n6ALbY.jpg
13
null
Guidance on Fine-Tuning LLM for Custom Writing Task: Model, GPU, and Cloud Platform Considerations
1
[removed]
2024-12-17T02:50:13
https://www.reddit.com/r/LocalLLaMA/comments/1hg0wlu/guidance_on_finetuning_llm_for_custom_writing/
Rqees
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg0wlu
false
null
t3_1hg0wlu
/r/LocalLLaMA/comments/1hg0wlu/guidance_on_finetuning_llm_for_custom_writing/
false
false
self
1
null
Best way to perform web search with good results?
9
I got mixed results running HF's chat UI locally with web search enabled. I really like how it works, but the results aren't as good as Perplexity's, even though it uses Google for the search. How much of this is due to the model I'm using? I tried Llama 3.1 8b and also Qwen2.5 14b, but didn't see much difference. Asking about current events works well, but for some recent stuff, the results can be pretty poor. For example, if I search for "what is the latest video by LinusTechTips," Perplexity gives the correct result, while the Chat UI gives a video that doesn't even show up on the first page and I'm not sure if it actually exists. Is there a better method?
2024-12-17T02:53:27
https://www.reddit.com/r/LocalLLaMA/comments/1hg0ysv/best_way_to_perform_web_search_with_good_results/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg0ysv
false
null
t3_1hg0ysv
/r/LocalLLaMA/comments/1hg0ysv/best_way_to_perform_web_search_with_good_results/
false
false
self
9
null
New LLM optimization technique slashes memory costs up to 75%
524
2024-12-17T03:04:32
https://venturebeat.com/ai/new-llm-optimization-technique-slashes-memory-costs-up-to-75/
badgerfish2021
venturebeat.com
1970-01-01T00:00:00
0
{}
1hg16jj
false
null
t3_1hg16jj
/r/LocalLLaMA/comments/1hg16jj/new_llm_optimization_technique_slashes_memory/
false
false
https://b.thumbs.redditm…XvxiyR0FzePY.jpg
524
{'enabled': False, 'images': [{'id': 'yGguz820k3cg8gfBqQ9yx361A4t2YwheCPnz7Ma7SX8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/v6VZRChnNNSXSLUIiJRJ8rrhJARCUZL-EE-SkNxz52U.jpg?width=108&crop=smart&auto=webp&s=9ab61e039796dd2071a00044216b67056c7b8f29', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/v6VZRChnNNSXSLUIiJRJ8rrhJARCUZL-EE-SkNxz52U.jpg?width=216&crop=smart&auto=webp&s=720f4c1bc8f404c576424f1e6f1a2f65101ed007', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/v6VZRChnNNSXSLUIiJRJ8rrhJARCUZL-EE-SkNxz52U.jpg?width=320&crop=smart&auto=webp&s=a56801cc7d891a42014a63446017d1f1171ad821', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/v6VZRChnNNSXSLUIiJRJ8rrhJARCUZL-EE-SkNxz52U.jpg?width=640&crop=smart&auto=webp&s=2091d2f9d9e725ba5523ca796714ae90e3d5a3f7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/v6VZRChnNNSXSLUIiJRJ8rrhJARCUZL-EE-SkNxz52U.jpg?width=960&crop=smart&auto=webp&s=39dbd1f9a341c8f0f9cf6c9469457a581edb1f06', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/v6VZRChnNNSXSLUIiJRJ8rrhJARCUZL-EE-SkNxz52U.jpg?auto=webp&s=382de8c049d4650668b33d3f2f98f079baed957a', 'width': 1024}, 'variants': {}}]}
Are there any Ai that have none of this bullshit censorship I can run?
0
[removed]
2024-12-17T03:56:58
https://www.reddit.com/r/LocalLLaMA/comments/1hg24yu/are_there_any_ai_that_have_none_of_this_bullshit/
Entire-Formal4792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg24yu
false
null
t3_1hg24yu
/r/LocalLLaMA/comments/1hg24yu/are_there_any_ai_that_have_none_of_this_bullshit/
false
false
self
0
null
ZOTAC confirms GeForce RTX 5090 with 32GB GDDR7 memory, 5080 and 5070 series listed as well - VideoCardz.com
161
2024-12-17T05:32:01
https://videocardz.com/newz/zotac-confirms-geforce-rtx-5090-with-32gb-gddr7-memory-5080-and-5070-series-listed-as-well
chillinewman
videocardz.com
1970-01-01T00:00:00
0
{}
1hg3ra4
false
null
t3_1hg3ra4
/r/LocalLLaMA/comments/1hg3ra4/zotac_confirms_geforce_rtx_5090_with_32gb_gddr7/
false
false
https://b.thumbs.redditm…YoSx1e7hitjA.jpg
161
{'enabled': False, 'images': [{'id': 'yXgmuUi6YLbkDjJ13Gq4UKhlTxFqOVzYbnEwKyx2mmA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Bd7bzNEP8gjhMbC53-5_RONQtionBwfIEk9euNZXbmc.jpg?width=108&crop=smart&auto=webp&s=20bfcf205dad76936c65f87b2a1125098b04226c', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/Bd7bzNEP8gjhMbC53-5_RONQtionBwfIEk9euNZXbmc.jpg?width=216&crop=smart&auto=webp&s=0a890438ed9ff6bebca8d4b9945439c4ff30d7fc', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/Bd7bzNEP8gjhMbC53-5_RONQtionBwfIEk9euNZXbmc.jpg?width=320&crop=smart&auto=webp&s=5eae3eca516e4061f27d88f7f1c228b0a9244cb2', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/Bd7bzNEP8gjhMbC53-5_RONQtionBwfIEk9euNZXbmc.jpg?width=640&crop=smart&auto=webp&s=38a4060a0a55c3e8ef76b14265ba57d6162aba30', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/Bd7bzNEP8gjhMbC53-5_RONQtionBwfIEk9euNZXbmc.jpg?width=960&crop=smart&auto=webp&s=72ccaa6e23bb05b2f7e2681dd19f02842d77b27f', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/Bd7bzNEP8gjhMbC53-5_RONQtionBwfIEk9euNZXbmc.jpg?width=1080&crop=smart&auto=webp&s=fa8a7bf297ec881fc57296be5c67fa5cd036b79e', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://external-preview.redd.it/Bd7bzNEP8gjhMbC53-5_RONQtionBwfIEk9euNZXbmc.jpg?auto=webp&s=efdfd20ca6dc1c1890b83df63c67955d5bdbe962', 'width': 2000}, 'variants': {}}]}
Llama 3.3 outperforming Mistral-Large-2411 when helping me with code
43
Just thought I'd share. I'm working with both Python and C++ in my current project and there's a lot of information the model needs to keep track of in order to help me effectively. Mistral-Large-2411 (aka 2.1) on Le Chat is struggling - it outputs detailed breakdowns of a solution without actually fixing the code. Meanwhile Llama 3.3 (GGUF 4.66bpw) is able to grasp the problem and work with me, producing meaningful fixes. The only catch is that it runs at like... 1.2 tok/s. But I'd rather wait 10 minutes for a working solution than wait 10 seconds for a not-quite-solution that just wastes my own time. YMMV.
2024-12-17T05:57:31
https://www.reddit.com/r/LocalLLaMA/comments/1hg45n0/llama_33_outperforming_mistrallarge2411_when/
Master-Meal-77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg45n0
false
null
t3_1hg45n0
/r/LocalLLaMA/comments/1hg45n0/llama_33_outperforming_mistrallarge2411_when/
false
false
self
43
null
Any Models do import image and export renamed image? SEE below for context.
0
ChatGPT 4 can currently take an image such as [Uppercase S](https://preview.redd.it/vsrhk7t2nc7e1.jpg?width=2048&format=pjpg&auto=webp&s=3a722212e0f89e4a0699874f23a45b4dbbe8cafc), change the file name from "genericname.png" to "S.png", and batch rename while knowing it's an uppercase vs lowercase letter, etc. Can any of the Llama models do this and let me download the renamed file? I tried googling and nothing of value came up. Please don't include paid options - if that's all there is, I'll just get ChatGPT Plus to get the video generation features as well, since I know the other paid options don't have that. I tried to install Tesseract and it does not install correctly, so I can't use that. When I check whether it's installed it says yes, but code I wrote says it's not. I'm not willing to debug that any further.

TLDR: I want a free LLM that can import a file named "genericimage.png", see that it's an uppercase S, and rename and export the image as "S.png".
2024-12-17T06:07:38
https://www.reddit.com/r/LocalLLaMA/comments/1hg4bky/any_models_do_import_image_and_export_renamed/
mind_ya_bidness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg4bky
false
null
t3_1hg4bky
/r/LocalLLaMA/comments/1hg4bky/any_models_do_import_image_and_export_renamed/
false
false
https://b.thumbs.redditm…S-5YA3T9D6eM.jpg
0
null
Can it run a company?
1
[removed]
2024-12-17T06:32:27
https://www.reddit.com/r/LocalLLaMA/comments/1hg4ouj/can_it_run_a_company/
Significant-Media281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg4ouj
false
null
t3_1hg4ouj
/r/LocalLLaMA/comments/1hg4ouj/can_it_run_a_company/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SJBzosdKR6cIdelR89bySOHY4fPJB7hjqDFeyz8nhrA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?width=108&crop=smart&auto=webp&s=7b30ab4184941ebabeef7d0dd855bcda5f9b51a5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?width=216&crop=smart&auto=webp&s=d6618ef92d7428bdde83ec74afc2ffe2e2e8b635', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?width=320&crop=smart&auto=webp&s=c79fd7e29f8d1c7d129690bf4652e1b349f6873d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?auto=webp&s=a142b5426c1b88957dd553403a2913cf1f4cb724', 'width': 480}, 'variants': {}}]}
Can it run a company?
0
I've been wanting to say this for quite some time. I just didn't really know if I should show my ignorance on the matter; maybe this has been said already, too. But as of late it seems that there are enough frameworks and models to be indistinguishable from AGI. If someone had multiple GPUs, with a model that specializes in its own domain on each one, and combined all the frameworks that I've seen posted here in the past few months, it sure seems feasible.

I've seen people create agents that can manipulate operating systems and personal data/files; agents that can view what's currently on your screen or going on in a video game; and of course agents that interpret pictures, video, and audio in general. Maya Akim on YouTube covers self-improving agents, which I have seen more of as of late. Self-updating knowledge/memory to keep things current in RAG. Web searching and scraping, website fields being filled out and buttons being pressed, and frameworks that show the model what its creation looks like for review and new iterations. I can't think of what models and frameworks can't do to get most work done.

Please excuse my ignorance, but it just seems that if somebody had several GPUs, one could be used as the main brain and delegate all the activity to the models on the other GPUs to do its bidding, and then have them report back through text for more planning and delegation. I don't think an all-powerful single multi-modal AGI model is needed. But I have to say this scares the hell out of me: [https://youtu.be/cyrrfl0eNYc?feature=shared](https://youtu.be/cyrrfl0eNYc?feature=shared)

You guys are all awesome, and I know someone will let me know why this will not work. When I think of AGI, I think of something that can start a company and run it successfully.
2024-12-17T06:36:51
https://www.reddit.com/r/LocalLLaMA/comments/1hg4r4c/can_it_run_a_company/
MinimumPC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg4r4c
false
null
t3_1hg4r4c
/r/LocalLLaMA/comments/1hg4r4c/can_it_run_a_company/
false
false
self
0
{'enabled': False, 'images': [{'id': 'SJBzosdKR6cIdelR89bySOHY4fPJB7hjqDFeyz8nhrA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?width=108&crop=smart&auto=webp&s=7b30ab4184941ebabeef7d0dd855bcda5f9b51a5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?width=216&crop=smart&auto=webp&s=d6618ef92d7428bdde83ec74afc2ffe2e2e8b635', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?width=320&crop=smart&auto=webp&s=c79fd7e29f8d1c7d129690bf4652e1b349f6873d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/s9uqvWan7pHNnKsOd6_XJBhfui7KKFzgIxOdnoaTvKI.jpg?auto=webp&s=a142b5426c1b88957dd553403a2913cf1f4cb724', 'width': 480}, 'variants': {}}]}
who's running LLMs on the weakest hardware?
48
Who all are running LLMs on wimpy devices? Not like "I tried it once" - I mean people who actually use them on a regular basis.
2024-12-17T06:47:04
https://www.reddit.com/r/LocalLLaMA/comments/1hg4w8j/whos_running_llms_on_the_weakest_hardware/
Vegetable_Sun_9225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg4w8j
false
null
t3_1hg4w8j
/r/LocalLLaMA/comments/1hg4w8j/whos_running_llms_on_the_weakest_hardware/
false
false
self
48
null
Update: Launching the Edge LLM Leaderboard!
41
# Announcing the Edge LLM Leaderboard – Now Live with Support from Hugging Face! https://huggingface.co/spaces/nyunai/edge-llm-leaderboard We are excited to launch the **Edge LLM Leaderboard**, a platform designed to benchmark the performance of **Compressed LLMs** on **real edge hardware**, starting with the **Raspberry Pi 5 (8GB)** powered by the ARM Cortex A76 CPU and optimized using **llama.cpp**. --- ## Key Highlights - **Real-World Performance Metrics**: Benchmark critical metrics including: - **Prefill Latency** - **Decode Latency** - **Model Size** - **130+ Models at Launch**: We’ve evaluated a broad set of **sub-8B models** using quantizations optimized for the ARM platform, including: - **Q8_0** - **Q4_K_M** - **Q4_0_4_4** (ARM Neon Optimized) This ensures a comprehensive comparison of models' throughput, latency, and memory utilization on real, accessible hardware. --- ## Future Plans - **Expanded Backend Support**: Integrating more frameworks that support the ARM platform. - **Additional Edge Hardware**: Benchmarking performance on other underexplored edge devices to broaden the leaderboard’s scope and applicability. --- ## Your Input Matters We aim to make this a **community-driven initiative** and invite your insights, feedback, and model requests. If there’s a particular model, hardware, or optimization you’d like to see included on the leaderboard, please reach out to us: **edge-llm-evaluation[@]nyunai[dot]com** Leaderboard Link - https://huggingface.co/spaces/nyunai/edge-llm-leaderboard
2024-12-17T07:11:41
https://www.reddit.com/r/LocalLLaMA/comments/1hg58qg/update_launching_the_edge_llm_leaderboard/
Ok-Entrepreneur-6154
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg58qg
false
null
t3_1hg58qg
/r/LocalLLaMA/comments/1hg58qg/update_launching_the_edge_llm_leaderboard/
false
false
self
41
{'enabled': False, 'images': [{'id': 'Kr3Rnj-aTloKj5A5c9Qu7JAm3Dt7NAZUHq-jaqdGB-E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sr6O8t0p1kImX2Kw1WayrU9FrNJMyyVldA6yyQ0-onc.jpg?width=108&crop=smart&auto=webp&s=c0fcc811acaea7e449400e28a078dfefefbc07b1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sr6O8t0p1kImX2Kw1WayrU9FrNJMyyVldA6yyQ0-onc.jpg?width=216&crop=smart&auto=webp&s=a443f3c1a82d8c92b1103a1d971501f59fe0eebc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sr6O8t0p1kImX2Kw1WayrU9FrNJMyyVldA6yyQ0-onc.jpg?width=320&crop=smart&auto=webp&s=2e4786378931f16cbc08ca9dd6159994a9bc8066', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sr6O8t0p1kImX2Kw1WayrU9FrNJMyyVldA6yyQ0-onc.jpg?width=640&crop=smart&auto=webp&s=784cb86bdfa48f522eb7e9e9b6ef5b1fcfc908e1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sr6O8t0p1kImX2Kw1WayrU9FrNJMyyVldA6yyQ0-onc.jpg?width=960&crop=smart&auto=webp&s=e6a7ac1d75b430d725e458d5ca29283e14ebc6eb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sr6O8t0p1kImX2Kw1WayrU9FrNJMyyVldA6yyQ0-onc.jpg?width=1080&crop=smart&auto=webp&s=767f66778c0405150f753ffb18f70829b37a75a8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sr6O8t0p1kImX2Kw1WayrU9FrNJMyyVldA6yyQ0-onc.jpg?auto=webp&s=92bde089099eacd75ac4049f217f86e9bd94d935', 'width': 1200}, 'variants': {}}]}
How to Start with Local Llama for Production on Limited RAM and CPU?
0
Hello all, At my company we want to leverage the power of AI for data analysis. However, due to security reasons we cannot use external APIs like OpenAI, so we are limited to running a local LLM (Large Language Model). Is using Llama something you would recommend? My main constraint is that I can use servers with **16 GB of RAM** and **no GPU**. Thank you for your insights!
2024-12-17T07:22:12
https://www.reddit.com/r/LocalLLaMA/comments/1hg5dut/how_to_start_with_local_llama_for_production_on/
umen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg5dut
false
null
t3_1hg5dut
/r/LocalLLaMA/comments/1hg5dut/how_to_start_with_local_llama_for_production_on/
false
false
self
0
null
It's calming to see the training logs scroll up, like looking at the matrix
119
2024-12-17T07:38:57
https://v.redd.it/3aq2u8hy3d7e1
amang0112358
/r/LocalLLaMA/comments/1hg5lyy/its_calming_to_see_the_training_logs_scroll_up/
1970-01-01T00:00:00
0
{}
1hg5lyy
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/3aq2u8hy3d7e1/DASHPlaylist.mpd?a=1737142748%2CZDM1OTQyYTY0Mzg4MTc0NzQ1Yzg0YTlkZmZkZjk3MWQ1YzA3M2JlMDdlNzJjZWFhYjdlNjM3YjA5MDYzNzkyYw%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/3aq2u8hy3d7e1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/3aq2u8hy3d7e1/HLSPlaylist.m3u8?a=1737142748%2CNzZlYmU1YWRlYmI4NGZkNTM0NGZjODFjNjBhMDQxMTcyMDYxN2M1ZjdhNzIxZjQ0YWY3ZDY4YTU1MWQxNDlkOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3aq2u8hy3d7e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1hg5lyy
/r/LocalLLaMA/comments/1hg5lyy/its_calming_to_see_the_training_logs_scroll_up/
false
false
https://external-preview…b0ab211051a21682
119
{'enabled': False, 'images': [{'id': 'cXV0dWo4aHkzZDdlMR5ZxJuLugi_OSIdB9IbGhsj9w0rV87nR5iKwHAOc1kj', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cXV0dWo4aHkzZDdlMR5ZxJuLugi_OSIdB9IbGhsj9w0rV87nR5iKwHAOc1kj.png?width=108&crop=smart&format=pjpg&auto=webp&s=fc7efbea4465e7809121a976b24e40518e06c014', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/cXV0dWo4aHkzZDdlMR5ZxJuLugi_OSIdB9IbGhsj9w0rV87nR5iKwHAOc1kj.png?width=216&crop=smart&format=pjpg&auto=webp&s=1c88a47137da5e4f3b0c054bb21a89ee019469b8', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/cXV0dWo4aHkzZDdlMR5ZxJuLugi_OSIdB9IbGhsj9w0rV87nR5iKwHAOc1kj.png?width=320&crop=smart&format=pjpg&auto=webp&s=58043c918432b7d86850f6ffdb910cf6ff69513f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/cXV0dWo4aHkzZDdlMR5ZxJuLugi_OSIdB9IbGhsj9w0rV87nR5iKwHAOc1kj.png?width=640&crop=smart&format=pjpg&auto=webp&s=4a4dc4f7d5cfd8a303ac3073984e07bc464c2ba0', 'width': 640}], 'source': {'height': 806, 'url': 'https://external-preview.redd.it/cXV0dWo4aHkzZDdlMR5ZxJuLugi_OSIdB9IbGhsj9w0rV87nR5iKwHAOc1kj.png?format=pjpg&auto=webp&s=1bf59654c181599c9541726f6465a05d8846c7b6', 'width': 806}, 'variants': {}}]}
What Dataset Structure should be used for Finetuning Moondream LLM?
1
[removed]
2024-12-17T07:41:58
https://www.reddit.com/r/LocalLLaMA/comments/1hg5ncg/what_dataset_structure_should_be_used_for/
darknsilence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg5ncg
false
null
t3_1hg5ncg
/r/LocalLLaMA/comments/1hg5ncg/what_dataset_structure_should_be_used_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DtYOeHUj9E59qr_l9CJ_uciu37q6M8vpqsSR41xgyUk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Idbypv0ANPf_PtXni-w8WuhCgwl3WxPHBPffL_7B0i0.jpg?width=108&crop=smart&auto=webp&s=47e86f720229aa59dcd43644982a03d06e637cd9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Idbypv0ANPf_PtXni-w8WuhCgwl3WxPHBPffL_7B0i0.jpg?width=216&crop=smart&auto=webp&s=be7afe7f204b46d5ea39bbf4f47676f8d79bbc59', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Idbypv0ANPf_PtXni-w8WuhCgwl3WxPHBPffL_7B0i0.jpg?width=320&crop=smart&auto=webp&s=fdb6785dc7b975fb82a1f9c664b9afa8dab86520', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Idbypv0ANPf_PtXni-w8WuhCgwl3WxPHBPffL_7B0i0.jpg?width=640&crop=smart&auto=webp&s=0bf3415836c5be6728d53f73ca16f1c4f4453f73', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Idbypv0ANPf_PtXni-w8WuhCgwl3WxPHBPffL_7B0i0.jpg?width=960&crop=smart&auto=webp&s=afc0a99a86a5a3b0d01561ba9ab1aaf89ee9267b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Idbypv0ANPf_PtXni-w8WuhCgwl3WxPHBPffL_7B0i0.jpg?width=1080&crop=smart&auto=webp&s=ad2096cf817d8b8117b204d4e8351dcbc3383d5a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Idbypv0ANPf_PtXni-w8WuhCgwl3WxPHBPffL_7B0i0.jpg?auto=webp&s=cfbc4ce51c375325735b19b6f648addeb86e82c9', 'width': 1200}, 'variants': {}}]}
Using LLaMA 3b as an assistant's search engine.
1
[removed]
2024-12-17T07:46:11
https://www.reddit.com/r/LocalLLaMA/comments/1hg5pei/using_llama_3b_as_an_assistants_search_engine/
Confused_Innovation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg5pei
false
null
t3_1hg5pei
/r/LocalLLaMA/comments/1hg5pei/using_llama_3b_as_an_assistants_search_engine/
false
false
self
1
null
NobodyWho: Local LLM in Godot
1
[removed]
2024-12-17T07:49:54
https://www.reddit.com/r/LocalLLaMA/comments/1hg5r7r/nobodywho_local_llm_in_godot/
No_Abbreviations_532
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg5r7r
false
null
t3_1hg5r7r
/r/LocalLLaMA/comments/1hg5r7r/nobodywho_local_llm_in_godot/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ek06H0mRV_NwMMcKq6YENy-yIdOoST9aWR6wxBACsDo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kge8rxDnW13C1ntwYWtOy-2FjxWCzS07QFlHQFnK7w0.jpg?width=108&crop=smart&auto=webp&s=0c0dc513eaea4039e42edf5a00e05566905363a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kge8rxDnW13C1ntwYWtOy-2FjxWCzS07QFlHQFnK7w0.jpg?width=216&crop=smart&auto=webp&s=43859673eda73f6b32e0debaa5ce19d24408bbad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kge8rxDnW13C1ntwYWtOy-2FjxWCzS07QFlHQFnK7w0.jpg?width=320&crop=smart&auto=webp&s=64cfe714cadb82f9ffa4be03c411640607241ee3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kge8rxDnW13C1ntwYWtOy-2FjxWCzS07QFlHQFnK7w0.jpg?width=640&crop=smart&auto=webp&s=37d90a1aa66c7058c40ebee81dff23b6582c614f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kge8rxDnW13C1ntwYWtOy-2FjxWCzS07QFlHQFnK7w0.jpg?width=960&crop=smart&auto=webp&s=cc5999552df51f6d072afc4bf5694c96ff08ab41', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kge8rxDnW13C1ntwYWtOy-2FjxWCzS07QFlHQFnK7w0.jpg?width=1080&crop=smart&auto=webp&s=f6cce1ea90d6ef74c8b975d9326700287811fb33', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kge8rxDnW13C1ntwYWtOy-2FjxWCzS07QFlHQFnK7w0.jpg?auto=webp&s=11516e58a7afef7cbadd6f18f8e35fa4926ac4a1', 'width': 1200}, 'variants': {}}]}
GPU benchmarking with Llama.cpp
1
2024-12-17T08:05:12
https://medium.com/@gwennardin/gpu-benchmarking-with-llama-cpp-61d6375c7379
yeswearecoding
medium.com
1970-01-01T00:00:00
0
{}
1hg5ymd
false
null
t3_1hg5ymd
/r/LocalLLaMA/comments/1hg5ymd/gpu_benchmarking_with_llamacpp/
false
false
https://b.thumbs.redditm…jLgw-c4WSUIE.jpg
1
{'enabled': False, 'images': [{'id': 'ig8yensfqZ7BhvYUqam26cEiliv98yp3TLOa-8qjtVY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ndp8XML2EQPvlz5U4Nm0bRcYba5roHhC4C7lU12ZasU.jpg?width=108&crop=smart&auto=webp&s=f3639d2c46876f287072c35e2050e4cac0520604', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ndp8XML2EQPvlz5U4Nm0bRcYba5roHhC4C7lU12ZasU.jpg?width=216&crop=smart&auto=webp&s=cc8415097f153def975e53cdebb9a5adfa006c9d', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ndp8XML2EQPvlz5U4Nm0bRcYba5roHhC4C7lU12ZasU.jpg?width=320&crop=smart&auto=webp&s=a5c60f54019a364d847bb36b3010aa2372972cd0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ndp8XML2EQPvlz5U4Nm0bRcYba5roHhC4C7lU12ZasU.jpg?width=640&crop=smart&auto=webp&s=7e8e3c8cf76c5ea21f08738a802d25cfd088a258', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ndp8XML2EQPvlz5U4Nm0bRcYba5roHhC4C7lU12ZasU.jpg?width=960&crop=smart&auto=webp&s=d6f173720076039522fec05b314f724427290f85', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/ndp8XML2EQPvlz5U4Nm0bRcYba5roHhC4C7lU12ZasU.jpg?auto=webp&s=10a1359a4f46678d2f660f710fc156e44a9848e4', 'width': 1024}, 'variants': {}}]}
Just a quick FYI to all you in the EU - Llama 3.3 IS permitted for use under the usual license, only the multimodal 3.2 is restricted. Small mercies!
8
2024-12-17T08:06:06
https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/discussions/35
Peter_Lightblue
huggingface.co
1970-01-01T00:00:00
0
{}
1hg5z0m
false
null
t3_1hg5z0m
/r/LocalLLaMA/comments/1hg5z0m/just_a_quick_fyi_to_all_you_in_the_eu_llama_33_is/
false
false
https://b.thumbs.redditm…YhCvic42EdPQ.jpg
8
{'enabled': False, 'images': [{'id': 'UyFQzvT78kBmnfQXWLkbZmkSgz4Ps9mvwjwWvq95jFc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lP42ISOX_wmjZrfOO9nfD6nA6EhBNj9H-7VwOcuI9qI.jpg?width=108&crop=smart&auto=webp&s=b779941a03ff3ad3a50dfd0ec29419ede17757c3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lP42ISOX_wmjZrfOO9nfD6nA6EhBNj9H-7VwOcuI9qI.jpg?width=216&crop=smart&auto=webp&s=aedb5afa7a6540dd14c11516af77f390069d59bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lP42ISOX_wmjZrfOO9nfD6nA6EhBNj9H-7VwOcuI9qI.jpg?width=320&crop=smart&auto=webp&s=5e94464f8ca61a9054575d23a6107056710210dd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lP42ISOX_wmjZrfOO9nfD6nA6EhBNj9H-7VwOcuI9qI.jpg?width=640&crop=smart&auto=webp&s=54155fca6fa3975117d1ba19a64117f076d188de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lP42ISOX_wmjZrfOO9nfD6nA6EhBNj9H-7VwOcuI9qI.jpg?width=960&crop=smart&auto=webp&s=a103645a1562aa71a085ac1deb425b086006783b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lP42ISOX_wmjZrfOO9nfD6nA6EhBNj9H-7VwOcuI9qI.jpg?width=1080&crop=smart&auto=webp&s=7b9c4de316abb659e8210ebcf6fceab7d54b2aa9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lP42ISOX_wmjZrfOO9nfD6nA6EhBNj9H-7VwOcuI9qI.jpg?auto=webp&s=d4c282195e2e2b4602a2f8798bfaff8a36ab0e03', 'width': 1200}, 'variants': {}}]}
Just downloaded pocketpal
0
The moment I read that it was open source, I immediately downloaded it. Which LLM should I download? It feels like a dumb question, but with so many models and different sizes, I can't decide. Should I try Llama 3.2 3B Q3? My tablet isn't that powerful. I'm going to use it for summarization and making MCQs from texts.
2024-12-17T08:14:09
https://www.reddit.com/r/LocalLLaMA/comments/1hg62qg/just_downloaded_pocketpal/
SEIF-CHAN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg62qg
false
null
t3_1hg62qg
/r/LocalLLaMA/comments/1hg62qg/just_downloaded_pocketpal/
false
false
self
0
null
MacBook M4 Pro vs Custom Windows PC: Which Should I Buy?
1
[removed]
2024-12-17T08:41:56
https://www.reddit.com/r/LocalLLaMA/comments/1hg6fbw/macbook_m4_pro_vs_custom_windows_pc_which_should/
Separate_Cup_5095
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg6fbw
false
null
t3_1hg6fbw
/r/LocalLLaMA/comments/1hg6fbw/macbook_m4_pro_vs_custom_windows_pc_which_should/
false
false
self
1
null
Relative performance in llama.cpp when adjusting power limits for an RTX 3090 (w/ scripts)
51
I've been in a bunch of recent conversations talking about power limits on RTX 3090s and their relative performance deltas/sweet spots. It's been a while since I've run a test, so I figured, why not. Testing was done with a relatively recent HEAD build of llama.cpp (`build: ba1cb19c (4327)`) and a [Llama 3.1 8B Q4_K_M](https://huggingface.co/bartowski/Meta-Llama-3.1-8B-Instruct-GGUF) on an MSI 3090 (Arch Linux 6.11.6, Nvidia 565.57.01, CUDA 12.7), which has a 420W default PL and a 450W hard cap. I ran the default `llama-bench`, and here is a graph of the raw `pp512` (prefill) and `tg128` (token generation) numbers:

[pp512/tg128 t/s vs Power Limit](https://preview.redd.it/xn0a93iwid7e1.png?width=1980&format=png&auto=webp&s=d46820e4290b09ad24439700da2779fa22ef53aa)

And here's the chart that shows the percentage drop relative to the default 420W @ 100%:

[pp512/tg128 % vs Power Limit](https://preview.redd.it/xsaauklyid7e1.png?width=1979&format=png&auto=webp&s=3f120c44d07fb0d99cf19f8c5f7d06171244e597)

While some people have reported good performance at 250W, you can see that for my 3090 at least, performance starts to drop a lot more starting at around 300W, so I created a delta chart to more easily see the dropoff as you continue lowering the PL:

[pp512/tg128 delta/10W % vs Power Limit](https://preview.redd.it/4v52ufxgjd7e1.png?width=1977&format=png&auto=webp&s=dda76398615b02a2110d19d5677bc13833d269a5)

This shows that below 310W, the perf drop goes from <2% all the way to 6%+ per 10W drop.

Of course, everyone's card will be slightly different (silicon lottery and other factors), so here's the script I used to generate my numbers. It actually only takes a few minutes to run, and you can test with any card and model you want to see what is optimal for your own use case (you can also change the `BENCH_CMD` to what you want; for example, `-fa 1` hobbles most non-CUDA cards atm):

    #!/bin/bash

    # Define starting and ending power limits
    START_WATT=450
    END_WATT=200
    STEP_WATT=10
    SLEEP=10

    # Define the GPU index and benchmark command
    GPU_INDEX=0
    BENCH_CMD="build/bin/llama-bench -m /models/llm/gguf/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf -fa 1 -o json"

    # Iterate over power limits
    for (( PL=$START_WATT; PL>=$END_WATT; PL-=$STEP_WATT )); do
        echo "${PL} W"

        # Set GPU power limit, suppress warnings and errors
        sudo nvidia-smi -i $GPU_INDEX -pl $PL > /dev/null 2>&1

        # Run the benchmark on the same GPU and extract avg_ts values.
        # Note: nvidia-smi and CUDA can order devices differently; export
        # CUDA_DEVICE_ORDER=PCI_BUS_ID if the indices don't line up.
        CUDA_VISIBLE_DEVICES=$GPU_INDEX $BENCH_CMD 2>/dev/null | grep '"avg_ts"' | awk '{print "    " $0}'

        # Optional: short delay between runs
        sleep $SLEEP
    done

For those wanting to generate their own datatable/chart, I've shared my ChatGPT session and you can look at the "Analysis" code blocks for the functions that parse/load into a data frame, crunch numbers, and output graphs: [https://chatgpt.com/share/676139b4-43b8-8012-9454-1011e5b3733f](https://chatgpt.com/share/676139b4-43b8-8012-9454-1011e5b3733f)

And just for those interested, my raw numbers:

|W|pp512|tg128|pp512%|tg128%|pp512_delta|tg128_delta|
|:-|:-|:-|:-|:-|:-|:-|
|450|5442.020147|140.985242|101.560830|100.686129|-0.420607|-0.547695|
|440|5419.482446|140.218335|101.140223|100.138434|-0.714783|0.037217|
|430|5381.181601|140.270448|100.425440|100.175651|-0.425440|-0.175651|
|420|5358.384892|140.024493|100.000000|100.000000|-0.610852|-0.177758|
|410|5325.653085|139.775588|99.389148|99.822242|-0.698033|-0.246223|
|400|5288.196194|139.430816|98.690115|99.576019|-1.074908|-0.080904|
|390|5230.598495|139.317530|97.615207|99.495115|-0.499002|0.022436|
|380|5203.860063|139.348946|97.116205|99.517551|-0.900025|-0.242616|
|370|5155.635982|139.009224|96.216231|99.274935|-0.200087|0.099170|
|360|5144.914574|139.148086|96.016144|99.374105|-1.537586|-0.402733|
|350|5062.524770|138.584162|94.478558|98.971372|-0.288584|-0.283706|
|340|5047.061345|138.186904|94.189974|98.687666|-1.324028|-1.376613|
|330|4976.114820|137.659554|92.865946|98.311053|-1.409475|-0.930440|
|320|4900.589724|136.356709|91.456471|97.380613|-1.770304|-0.947564|
|310|4805.676462|135.029888|89.685167|96.433049|-2.054098|-1.093082|
|300|4749.204291|133.499305|88.631265|95.339967|-1.520217|-3.170793|
|290|4667.745230|129.058018|87.111048|92.168174|-1.978206|-5.403633|
|280|4561.745323|121.491608|85.132842|86.764541|-1.909862|-5.655093|
|270|4459.407577|113.573094|83.222980|81.109448|-1.895414|-5.548168|
|260|4357.844024|105.804299|81.327566|75.561280|-3.270065|-5.221320|
|250|4182.621354|98.493172|78.057501|70.339960|-5.444974|-5.666857|
|240|3890.858696|90.558185|72.612527|64.673103|-9.635262|-5.448258|
|230|3374.564233|82.929289|62.977265|59.224845|-3.706330|-5.934959|
|220|3175.964801|74.618892|59.270935|53.289886|-5.139659|-5.229488|
|210|2900.562098|67.296329|54.131276|48.060398|-6.386631|-5.562067|
|200|2558.341844|59.508072|47.744645|42.498331|NaN|NaN|
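
If you'd rather not dig through the ChatGPT session, here's a minimal parsing sketch that should get you the percentage table, assuming you redirect each run's full JSON output to a `bench_<watts>.json` file (those filenames and the 420W baseline are just my conventions, adapt as needed):

    import glob
    import json
    import re

    rows = {}
    for path in glob.glob("bench_*.json"):
        watts = int(re.search(r"bench_(\d+)\.json", path).group(1))
        with open(path) as f:
            results = json.load(f)  # llama-bench -o json emits a list of runs
        for r in results:
            # prefill runs have n_prompt set, generation runs have n_gen set
            key = "pp512" if r.get("n_prompt") else "tg128"
            rows.setdefault(watts, {})[key] = r["avg_ts"]

    base = rows[420]  # default PL as the 100% reference; adjust for your card
    for w in sorted(rows, reverse=True):
        pp, tg = rows[w]["pp512"], rows[w]["tg128"]
        print(f"{w}W  pp512 {pp:9.1f} t/s ({100 * pp / base['pp512']:5.1f}%)  "
              f"tg128 {tg:7.2f} t/s ({100 * tg / base['tg128']:5.1f}%)")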
2024-12-17T09:06:32
https://www.reddit.com/r/LocalLLaMA/comments/1hg6qrd/relative_performance_in_llamacpp_when_adjusting/
randomfoo2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg6qrd
false
null
t3_1hg6qrd
/r/LocalLLaMA/comments/1hg6qrd/relative_performance_in_llamacpp_when_adjusting/
false
false
https://b.thumbs.redditm…zBoc-1x0URYI.jpg
51
{'enabled': False, 'images': [{'id': 'Qmkk7Bnxr2pgRC9e4hEY-TewNcBov6rYmRYppQesvE8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JH7g4kuYBDfr0c6fX3u8EhApruGaP29XBnE4WPy46e0.jpg?width=108&crop=smart&auto=webp&s=8e245a0a4e9d1b10e4fe033fe59c676452534b50', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JH7g4kuYBDfr0c6fX3u8EhApruGaP29XBnE4WPy46e0.jpg?width=216&crop=smart&auto=webp&s=37022275206f97ccdb677524f573add7e1215f0d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JH7g4kuYBDfr0c6fX3u8EhApruGaP29XBnE4WPy46e0.jpg?width=320&crop=smart&auto=webp&s=0c9bc41a24aa58337c0c48ac1521b28503050b9b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JH7g4kuYBDfr0c6fX3u8EhApruGaP29XBnE4WPy46e0.jpg?width=640&crop=smart&auto=webp&s=6d3e904b18336c58b788cf5c10f1c8e57e14bd5b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JH7g4kuYBDfr0c6fX3u8EhApruGaP29XBnE4WPy46e0.jpg?width=960&crop=smart&auto=webp&s=bdf2818dea3d0a89b5096f75570a606654b17a92', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JH7g4kuYBDfr0c6fX3u8EhApruGaP29XBnE4WPy46e0.jpg?width=1080&crop=smart&auto=webp&s=3c532be31137ccd6525db3d088653956b781a1c2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JH7g4kuYBDfr0c6fX3u8EhApruGaP29XBnE4WPy46e0.jpg?auto=webp&s=a0baceb48d6590fc83297f51f10a741b40adc7e9', 'width': 1200}, 'variants': {}}]}
Falcon 3 just dropped
380
[https://huggingface.co/blog/falcon3](https://huggingface.co/blog/falcon3)
2024-12-17T09:37:55
https://www.reddit.com/r/LocalLLaMA/comments/1hg74wd/falcon_3_just_dropped/
Uhlo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg74wd
false
null
t3_1hg74wd
/r/LocalLLaMA/comments/1hg74wd/falcon_3_just_dropped/
false
false
self
380
{'enabled': False, 'images': [{'id': 'HaLwZ5lOatin68lmsyxepvq5hAW0yOljCDJQLug6uXs', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=108&crop=smart&auto=webp&s=e9419c15e20db1d8874db69e9e84337f860739e9', 'width': 108}, {'height': 92, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=216&crop=smart&auto=webp&s=2e0789c7f92514b9c5bd85af82e05301532e97aa', 'width': 216}, {'height': 136, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=320&crop=smart&auto=webp&s=0b0447bbdff04ecf55462213197cc8ec1baeb021', 'width': 320}, {'height': 273, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=640&crop=smart&auto=webp&s=50c5076b2b50dd0f257cd8656bf543e04123e6a8', 'width': 640}, {'height': 410, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=960&crop=smart&auto=webp&s=106d5fa92785de52f46a75f9084223f6b41f5197', 'width': 960}, {'height': 461, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=1080&crop=smart&auto=webp&s=5aa6f8e1609697e04839cba92514b463f48afd5d', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?auto=webp&s=d3982673e5b6e56aad76736a6e420c735a5c9de1', 'width': 2106}, 'variants': {}}]}
Evaluating the quality of LLM responses
1
[removed]
2024-12-17T09:45:23
https://www.reddit.com/r/LocalLLaMA/comments/1hg78bg/evaluating_the_quality_of_llm_responses/
raikirichidori255
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg78bg
false
null
t3_1hg78bg
/r/LocalLLaMA/comments/1hg78bg/evaluating_the_quality_of_llm_responses/
false
false
self
1
null
What should I use for LLM inference?
1
[removed]
2024-12-17T09:56:04
https://www.reddit.com/r/LocalLLaMA/comments/1hg7ddc/what_should_i_use_for_llm_inference/
huntsman2099
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg7ddc
false
null
t3_1hg7ddc
/r/LocalLLaMA/comments/1hg7ddc/what_should_i_use_for_llm_inference/
false
false
self
1
null
How Should Graph RAG Be Set Up for the Generation Phase
1
[removed]
2024-12-17T10:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1hg7g5i/how_should_graph_rag_be_set_up_for_the_generation/
BackgroundLow3793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg7g5i
false
null
t3_1hg7g5i
/r/LocalLLaMA/comments/1hg7g5i/how_should_graph_rag_be_set_up_for_the_generation/
false
false
self
1
null
How Should Graph RAG Be Set Up for the Generation Phase?
1
[removed]
2024-12-17T10:04:31
https://www.reddit.com/r/LocalLLaMA/comments/1hg7hk4/how_should_graph_rag_be_set_up_for_the_generation/
BackgroundLow3793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg7hk4
false
null
t3_1hg7hk4
/r/LocalLLaMA/comments/1hg7hk4/how_should_graph_rag_be_set_up_for_the_generation/
false
false
self
1
null
How Should Graph RAG Be Set Up for the Generation Phase?
1
[removed]
2024-12-17T10:15:49
https://www.reddit.com/r/LocalLLaMA/comments/1hg7n1x/how_should_graph_rag_be_set_up_for_the_generation/
BackgroundLow3793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg7n1x
false
null
t3_1hg7n1x
/r/LocalLLaMA/comments/1hg7n1x/how_should_graph_rag_be_set_up_for_the_generation/
false
false
self
1
null
Fine-tuning Llama on a custom dataset of prompt–completion pairs?
18
Hello,

I have a dataset consisting of about 8,000 prompt–completion pairs and a very small corpus of unstructured text from which I'd like to fine-tune a Llama model. The resulting model should simply respond with the most likely completion (in the style of the legacy `text-davinci-002` OpenAI model) without safety mitigations.

I have an NVIDIA A4500 (20GB of GDDR6) to use for fine-tuning and inference (the machine has an i9-13900K and 64GB of RAM for offloading as well if needed).

Questions:

* Which is the best base model my hardware could run at a reasonable speed?
* How do I go about fine-tuning a model locally? It seems like [Torchtune](https://pytorch.org/torchtune/) will do this with an [instruct dataset](https://pytorch.org/torchtune/stable/basics/instruct_datasets.html) for the prompt–completion pairs, but I'm not seeing whether I can also include my unstructured data (perhaps with empty prompts like in OpenAI's old format, sketched below) and whether I need to annotate my data with stop/end-of-text tokens myself or whether the library handles that. Is there a better way to do this?

Thanks in advance!
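
For concreteness, here's a sketch of how I'm imagining the combined training file: one JSONL with empty prompts for the raw text, following the legacy OpenAI convention. The field names and placeholder data are just illustrative, not from any particular library's docs:

    import json

    # ~8,000 of these, loaded from wherever they live today (placeholder data)
    pairs = [{"prompt": "Q: What is ...?", "completion": " A: It is ..."}]
    # small unstructured corpus, pre-chunked to fit the context window
    raw_docs = ["Some unstructured reference text ..."]

    with open("train.jsonl", "w") as f:
        for p in pairs:
            f.write(json.dumps(p) + "\n")
        for doc in raw_docs:
            # empty prompt = plain language-modeling example, old OpenAI style
            f.write(json.dumps({"prompt": "", "completion": doc}) + "\n")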
2024-12-17T10:53:14
https://www.reddit.com/r/LocalLLaMA/comments/1hg84tn/finetuning_llama_on_a_custom_dataset_of/
codeofdusk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg84tn
false
null
t3_1hg84tn
/r/LocalLLaMA/comments/1hg84tn/finetuning_llama_on_a_custom_dataset_of/
false
false
self
18
null
How do I force loading embeddings to my minor GPU?
2
I have one 1030 and one 3090 card. I tried to force loading embeddings onto the 1030, but they always go to the 3090. When I load a Hugging Face transformers model, I can force it onto either card. How come I can't do the same for embeddings? This is the toy code that recreates my problem. Thanks a lot in advance.

    import os
    from txtai.embeddings import Embeddings

    os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    embeddings = Embeddings()
    embeddings.load(path="txtai-wikipedia")
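
One variant I'd expect to behave differently, in case it matters for answers: from what I understand, `CUDA_VISIBLE_DEVICES` only takes effect if it's set before torch (which txtai imports) initializes CUDA, so this sketch moves it above the import and pins device ordering to match `nvidia-smi` (it assumes the 1030 is device 0 in that ordering):

    import os

    # Must be set before torch/txtai are imported, or CUDA has already
    # enumerated devices and the variable is ignored.
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # match nvidia-smi numbering
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"         # assumes the 1030 is device 0

    from txtai.embeddings import Embeddings

    embeddings = Embeddings()
    embeddings.load(path="txtai-wikipedia")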
2024-12-17T10:56:47
https://www.reddit.com/r/LocalLLaMA/comments/1hg86jm/how_do_i_force_loading_embeddings_to_my_minor_gpu/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg86jm
false
null
t3_1hg86jm
/r/LocalLLaMA/comments/1hg86jm/how_do_i_force_loading_embeddings_to_my_minor_gpu/
false
false
self
2
null
What should I use for LLM inference?
1
[removed]
2024-12-17T11:10:46
https://www.reddit.com/r/LocalLLaMA/comments/1hg8dtu/what_should_i_use_for_llm_inference/
GrammarPaparazzi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg8dtu
false
null
t3_1hg8dtu
/r/LocalLLaMA/comments/1hg8dtu/what_should_i_use_for_llm_inference/
false
false
self
1
null
Introducing Falcon 3 Family
144
I'm thrilled to be part of the incredible **Falcon** team as we release **Falcon 3**, the latest innovation in open-source large language models. This release marks a significant milestone, and I'm proud to contribute to such a groundbreaking project. Discover more about **Falcon 3** and its features in the official blog post here: [**Introducing Falcon 3 on Hugging Face**](https://huggingface.co/blog/falcon3)
2024-12-17T11:18:25
https://www.reddit.com/r/LocalLLaMA/comments/1hg8hpc/introducing_falcon_3_family/
HDElectronics
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg8hpc
false
null
t3_1hg8hpc
/r/LocalLLaMA/comments/1hg8hpc/introducing_falcon_3_family/
false
false
self
144
{'enabled': False, 'images': [{'id': 'HaLwZ5lOatin68lmsyxepvq5hAW0yOljCDJQLug6uXs', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=108&crop=smart&auto=webp&s=e9419c15e20db1d8874db69e9e84337f860739e9', 'width': 108}, {'height': 92, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=216&crop=smart&auto=webp&s=2e0789c7f92514b9c5bd85af82e05301532e97aa', 'width': 216}, {'height': 136, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=320&crop=smart&auto=webp&s=0b0447bbdff04ecf55462213197cc8ec1baeb021', 'width': 320}, {'height': 273, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=640&crop=smart&auto=webp&s=50c5076b2b50dd0f257cd8656bf543e04123e6a8', 'width': 640}, {'height': 410, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=960&crop=smart&auto=webp&s=106d5fa92785de52f46a75f9084223f6b41f5197', 'width': 960}, {'height': 461, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?width=1080&crop=smart&auto=webp&s=5aa6f8e1609697e04839cba92514b463f48afd5d', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/VmR4z128wU34oCVOHSo7BAWgT0pTvRCVf5RT5VseLP8.jpg?auto=webp&s=d3982673e5b6e56aad76736a6e420c735a5c9de1', 'width': 2106}, 'variants': {}}]}
[HOLIDAY PROMO] Perplexity AI PRO - 1 YEAR PLAN OFFER - 75% OFF
1
[removed]
2024-12-17T11:31:07
https://i.redd.it/0veav9qx9e7e1.jpeg
MReus11R
i.redd.it
1970-01-01T00:00:00
0
{}
1hg8o3v
false
null
t3_1hg8o3v
/r/LocalLLaMA/comments/1hg8o3v/holiday_promo_perplexity_ai_pro_1_year_plan_offer/
false
false
https://a.thumbs.redditm…fiPNoOoMKOT4.jpg
1
{'enabled': True, 'images': [{'id': 'uX8ZNcDdbLMwc8CF9eK8xcaJO6jTLtnrKtWhCiURYus', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0veav9qx9e7e1.jpeg?width=108&crop=smart&auto=webp&s=2db8fe5baab8a3716f99f01824d51b7187ec4f77', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0veav9qx9e7e1.jpeg?width=216&crop=smart&auto=webp&s=a7fd184aa5496b13e4ad3c5d45d8d5f5a7b48018', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/0veav9qx9e7e1.jpeg?width=320&crop=smart&auto=webp&s=df88ce047287f84859a6d50a23144299a916541e', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/0veav9qx9e7e1.jpeg?width=640&crop=smart&auto=webp&s=e4a750819bd0e70847be8a4089a7dc4da49f4060', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/0veav9qx9e7e1.jpeg?width=960&crop=smart&auto=webp&s=77dd291057ee7e1bfde80872958c180bf70f89e8', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/0veav9qx9e7e1.jpeg?width=1080&crop=smart&auto=webp&s=a676d94cfd11742fcdc178820af68d877a76c756', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/0veav9qx9e7e1.jpeg?auto=webp&s=0a675b71e1df72aafd66d61dae2d13ac442353f8', 'width': 2000}, 'variants': {}}]}
Ideal Setup for Running LLama 3.3 or Nemotron 70B Models? Also Looking for Recommendations for Coding-Focused LLMs
1
[removed]
2024-12-17T11:32:38
https://www.reddit.com/r/LocalLLaMA/comments/1hg8owz/ideal_setup_for_running_llama_33_or_nemotron_70b/
shehroz_ahmad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg8owz
false
null
t3_1hg8owz
/r/LocalLLaMA/comments/1hg8owz/ideal_setup_for_running_llama_33_or_nemotron_70b/
false
false
self
1
null
LLM game on Steam that runs on user's hardware
1
[removed]
2024-12-17T11:34:06
https://www.reddit.com/r/LocalLLaMA/comments/1hg8pnu/llm_game_on_steam_that_runs_on_users_hardware/
Isabeelzebub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg8pnu
false
null
t3_1hg8pnu
/r/LocalLLaMA/comments/1hg8pnu/llm_game_on_steam_that_runs_on_users_hardware/
false
false
self
1
{'enabled': False, 'images': [{'id': 'c7PHynmSJErhK1nL8Cu797YkWd-voqMaelj-xWcSKyE', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/D3qqyqcxGI1cl86QEWX1hhog00WCst0fWnI0CG38ASg.jpg?width=108&crop=smart&auto=webp&s=4860825467f280f379e25aa7baabd94369e2126a', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/D3qqyqcxGI1cl86QEWX1hhog00WCst0fWnI0CG38ASg.jpg?width=216&crop=smart&auto=webp&s=d085951bfcd63c4cd00bd94f673ad1ff38781140', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/D3qqyqcxGI1cl86QEWX1hhog00WCst0fWnI0CG38ASg.jpg?width=320&crop=smart&auto=webp&s=6ed72ec956dd8d558d8ca1bfeed12aab2ff7962f', 'width': 320}], 'source': {'height': 353, 'url': 'https://external-preview.redd.it/D3qqyqcxGI1cl86QEWX1hhog00WCst0fWnI0CG38ASg.jpg?auto=webp&s=b67eb48309421b52d4fbe62c0bea4bb7eacaf942', 'width': 616}, 'variants': {}}]}
Hardware update? (my PC specs - what should I change? Give me direction).
1
[removed]
2024-12-17T11:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1hg8q87/hardware_update_my_pc_specs_what_should_i_change/
Repsol_Honda_PL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg8q87
false
null
t3_1hg8q87
/r/LocalLLaMA/comments/1hg8q87/hardware_update_my_pc_specs_what_should_i_change/
false
false
self
1
null
Best model to understand video with audio
2
What's the best model for understanding video with audio (~5 min long)? My father is a chef; I want to take videos of him cooking and talking, and have an LLM generate step-by-step guides.
2024-12-17T11:39:39
https://www.reddit.com/r/LocalLLaMA/comments/1hg8sg1/best_model_to_understand_video_with_audio/
ImpossiblePlay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg8sg1
false
null
t3_1hg8sg1
/r/LocalLLaMA/comments/1hg8sg1/best_model_to_understand_video_with_audio/
false
false
self
2
null
Does NVLink make a difference in Ollama?
1
I'm trying to get a second identical GPU that supports NVLink so I can reach 96GB of VRAM next year; the idea is to split my workload between the two GPUs. I know I don't need NVLink for that, but I'm wondering what the performance difference for inference would be if I enabled it. I'm looking to preserve capacity and speed, since I'm aware there is no speedup during inference with NVLink.

I know this stuff is useful for training, which would help with training smol models in the future, but if I used NVLink instead of 2x PCIe 5.0 x8 to transfer data between GPUs, would NVLink move that data faster? If so, how would Ollama handle that? And what would happen if I ran a model that uses more than 48GB of VRAM; would NVLink let me extend onto the second GPU? I don't want to assume that NVLink makes both GPUs behave like one giant GPU, so I'm asking to see what my options are here, since I'm looking for quantity, not quality.
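
(For reference, this is the quick check I'd use to see whether a link is actually up once the second card is in; just a sketch shelling out to `nvidia-smi`, and the exact output format varies by driver version:)

    import subprocess

    # Query NVLink status; inactive links are reported as inactive/disabled.
    status = subprocess.run(
        ["nvidia-smi", "nvlink", "--status"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(status or "no NVLink-capable GPUs detected")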
2024-12-17T12:11:01
https://www.reddit.com/r/LocalLLaMA/comments/1hg9a3k/does_nvlink_make_a_difference_in_ollama/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hg9a3k
false
null
t3_1hg9a3k
/r/LocalLLaMA/comments/1hg9a3k/does_nvlink_make_a_difference_in_ollama/
false
false
self
1
null