Dataset columns (name: type, range):
- title: string, length 1-300
- score: int64, 0-8.54k
- selftext: string, length 0-40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, length 0-878
- author: string, length 3-20
- domain: string, length 0-82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646-1.8k
- name: string, length 10
- permalink: string, length 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4-213
- ups: int64, 0-8.54k
- preview: string, length 301-5.01k
Just launched MLX Model Manager - Swift Package to run LLM/VLMs with a couple of lines of code
1
2024-12-23T03:24:38
https://v.redd.it/e2zm625ioi8e1
kunalbatra
v.redd.it
1970-01-01T00:00:00
0
{}
1hkeytj
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/e2zm625ioi8e1/DASHPlaylist.mpd?a=1737516292%2CODVkNTdjNjQyNDhlZjIwYzljNGRiMTIwYTFmZjk4OWU5ODAwMzI3ZjdjZGJmMjgwMGNiZTIzMjc3MmZkODMxZg%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/e2zm625ioi8e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/e2zm625ioi8e1/HLSPlaylist.m3u8?a=1737516292%2COWZlNTQ4MmI5Yzc4OWNkNmEyNThkYjA0NjAyYmZjZDRmOTgxM2E0NmVjZGMzZjMyZGU1ODM3NDk4Y2QwNjA4Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/e2zm625ioi8e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1692}}
t3_1hkeytj
/r/LocalLLaMA/comments/1hkeytj/just_launched_mlx_model_manager_swift_package_to/
false
false
https://external-preview…1be9597c145d0d66
1
{'enabled': False, 'images': [{'id': 'N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=108&crop=smart&format=pjpg&auto=webp&s=e219b0867578f2d7086c6c03aa62a2d4a1088aee', 'width': 108}, {'height': 137, 'url': 'https://external-preview.redd.it/N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=216&crop=smart&format=pjpg&auto=webp&s=837c99dee060235801bc61260b6b775116c5858c', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=320&crop=smart&format=pjpg&auto=webp&s=ab565a28c83936b8d2d77201ea53ae610a4b2a00', 'width': 320}, {'height': 408, 'url': 'https://external-preview.redd.it/N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=640&crop=smart&format=pjpg&auto=webp&s=d3b83f4de2e4f3bf24e88fc9498d92564a11a62e', 'width': 640}, {'height': 612, 'url': 'https://external-preview.redd.it/N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=960&crop=smart&format=pjpg&auto=webp&s=38d0b0c3da41f58a4a48b088344dad6c1d2a4d9a', 'width': 960}, {'height': 689, 'url': 'https://external-preview.redd.it/N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=1080&crop=smart&format=pjpg&auto=webp&s=80eed834a5a07678897d1eb3ce82b7ca3c5503be', 'width': 1080}], 'source': {'height': 1814, 'url': 'https://external-preview.redd.it/N3V2YnE0NWlvaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?format=pjpg&auto=webp&s=fb13d90e622dc0d4ba859e2ea5f1628c82ae938b', 'width': 2842}, 'variants': {}}]}
Just released MLX Model Manager - a Swift Package to quickly add LLM/VLMs to your app with a couple of lines of code
74
2024-12-23T03:28:20
https://v.redd.it/yghc8et7pi8e1
Onboto
v.redd.it
1970-01-01T00:00:00
0
{}
1hkf0w5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yghc8et7pi8e1/DASHPlaylist.mpd?a=1737516516%2CMjVlZWU1OTRjNDhhZWZjZjdlMjRmYjAwMjYwMGY3M2U5ODAyYjFhODY0MWM4NjI4MzNiMDIwNTVjMDFiNjRjMA%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/yghc8et7pi8e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/yghc8et7pi8e1/HLSPlaylist.m3u8?a=1737516516%2COGI5ZWU0ZjAzMWExYzlmNGQxNDI5MDA1NTVmYWI3NzVlZjIzZTg2MmEyMzU0MDE0MDFjZTNiYzEwMTJkZTczMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yghc8et7pi8e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1692}}
t3_1hkf0w5
/r/LocalLLaMA/comments/1hkf0w5/just_released_mlx_model_manager_a_swift_package/
false
false
https://external-preview…a51220bbe7d31d3f
74
{'enabled': False, 'images': [{'id': 'cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=108&crop=smart&format=pjpg&auto=webp&s=1e5204d31e18b0656c8ae45f7471e056127f62ea', 'width': 108}, {'height': 137, 'url': 'https://external-preview.redd.it/cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=216&crop=smart&format=pjpg&auto=webp&s=336b2d0f17235f9c196f778ae91d16a5702269fb', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=320&crop=smart&format=pjpg&auto=webp&s=47404c01fa18d9116a8b3b1478bae34d9180097b', 'width': 320}, {'height': 408, 'url': 'https://external-preview.redd.it/cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=640&crop=smart&format=pjpg&auto=webp&s=53a5772749687af95eafc136f50b677bfe765895', 'width': 640}, {'height': 612, 'url': 'https://external-preview.redd.it/cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=960&crop=smart&format=pjpg&auto=webp&s=9ea47dc46ef8e6c893e6168cd24794398ae15c9c', 'width': 960}, {'height': 689, 'url': 'https://external-preview.redd.it/cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?width=1080&crop=smart&format=pjpg&auto=webp&s=73b2b3fc6910f35584c4d2c5ffb4a718de245264', 'width': 1080}], 'source': {'height': 1814, 'url': 'https://external-preview.redd.it/cHZpamVldDdwaThlMTBAIvstQMnO-aaVz6bqweM9pyYmZpw8MZo0HsKSFSit.png?format=pjpg&auto=webp&s=0c04fde91c8d1897bd42ab1edd71b674d37a6c6e', 'width': 2842}, 'variants': {}}]}
Need help with PaddleOCR accuracy issues
1
[removed]
2024-12-23T03:41:31
https://www.reddit.com/r/LocalLLaMA/comments/1hkf8xu/need_help_with_paddleocr_accuracy_issues/
Impossible-Cod-5994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkf8xu
false
null
t3_1hkf8xu
/r/LocalLLaMA/comments/1hkf8xu/need_help_with_paddleocr_accuracy_issues/
false
false
self
1
null
Groq's LLMs are one of the best for testing the workflow
0
I tend to test my code a lot, since bugs creep in one way or another. Thank god I explored Groq during my learning phase. Groq is a free provider of LLMs. Whenever I test my code, I use Groq's LLMs until I have debugged all the errors, since it's free of cost, which lets you test your code as many times as you need. Although it doesn't provide the best results, it's good for checking whether your code is fully functioning. After my code is finalised, I switch to other LLMs such as GPT, Claude, etc. for more accurate results in production, without much testing, to save on costs. The pic above shows the usage of Groq's Llama models in one of my projects. Do you test your code as frequently as me? And which LLM do you use for testing?
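A minimal sketch of the dev/prod split described above, assuming Groq's OpenAI-compatible endpoint; the model names and the APP_ENV switch are illustrative choices, not from the post:

```python
# Hypothetical sketch: swap a free dev provider (Groq) for a paid prod
# provider by changing one environment variable. Model names and APP_ENV
# are assumptions for illustration.
import os
from openai import OpenAI  # pip install openai

if os.environ.get("APP_ENV") == "production":
    client = OpenAI()  # reads OPENAI_API_KEY; bills per token
    model = "gpt-4o"
else:
    client = OpenAI(
        base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
        api_key=os.environ["GROQ_API_KEY"],          # free tier for testing
    )
    model = "llama-3.1-8b-instant"

resp = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Smoke test: reply with OK."}],
)
print(resp.choices[0].message.content)
```

Keeping both providers behind the same client interface makes the production switch a config change rather than a code change.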
2024-12-23T03:48:47
https://i.redd.it/i4kupptwsi8e1.png
Available-Stress8598
i.redd.it
1970-01-01T00:00:00
0
{}
1hkfdcg
false
null
t3_1hkfdcg
/r/LocalLLaMA/comments/1hkfdcg/groqs_llms_are_one_of_the_best_for_testing_the/
false
false
https://b.thumbs.redditm…tWE9VFv71BNY.jpg
0
{'enabled': True, 'images': [{'id': 'MQuxGx7bcCMKapNzNXcu72qa9rZdXnxBV7-Gj5AG34c', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/i4kupptwsi8e1.png?width=108&crop=smart&auto=webp&s=8af2a4f97dbe22c68a42c05cec3a67282cdc7ad5', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/i4kupptwsi8e1.png?width=216&crop=smart&auto=webp&s=a204037f5fd8ab6c3e39b475d540a9466f1d56a8', 'width': 216}, {'height': 406, 'url': 'https://preview.redd.it/i4kupptwsi8e1.png?width=320&crop=smart&auto=webp&s=2a07e64eac3aa005045b9df41d7f334004802db6', 'width': 320}, {'height': 813, 'url': 'https://preview.redd.it/i4kupptwsi8e1.png?width=640&crop=smart&auto=webp&s=832418f418e4d463c98baa143ae1fcd1a955541f', 'width': 640}, {'height': 1219, 'url': 'https://preview.redd.it/i4kupptwsi8e1.png?width=960&crop=smart&auto=webp&s=2950bf35a899e64fd8929411df16a9d9137611d6', 'width': 960}, {'height': 1372, 'url': 'https://preview.redd.it/i4kupptwsi8e1.png?width=1080&crop=smart&auto=webp&s=1f02e584fef435f051426bc2b97308ac64c6d4f9', 'width': 1080}], 'source': {'height': 1372, 'url': 'https://preview.redd.it/i4kupptwsi8e1.png?auto=webp&s=fa3bb0b76c5bb61a767e5e1441e59a48c33e2a27', 'width': 1080}, 'variants': {}}]}
llama.cpp now supports Llama-3_1-Nemotron-51B
110
Good news: my PR has been approved and merged into the main branch of llama.cpp. Starting from version b4380, you should be able to run and convert Llama-3_1-Nemotron-51B. I suppose it will gradually make it into other software based on llama.cpp. However, since bartowski suggested that I create a new model type for it, the previous GGUFs I uploaded will no longer work with the official llama.cpp, so I re-created the GGUFs with the updated software. This time I created them with imatrix and measured perplexity and KL divergence. Currently I have made Q6_K, Q5_K, Q4_K_M, IQ4_XS, Q4_0_4_8, IQ3_M, and IQ3_S available. Please let me know if you need other quants; I can upload them if there is a use case. https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF/ As we can see, there is a significant improvement with imatrix. I am happy that I can now run a mid-sized model on my 3090 with confidence. Hope you also find the GGUFs useful in your workflow.
2024-12-23T04:04:25
https://www.reddit.com/r/LocalLLaMA/comments/1hkfmvd/llamacpp_now_supports_llama3_1nemotron51b/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkfmvd
false
null
t3_1hkfmvd
/r/LocalLLaMA/comments/1hkfmvd/llamacpp_now_supports_llama3_1nemotron51b/
false
false
self
110
null
Any open source projects on perplexity pro search / gemini deep search?
1
[removed]
2024-12-23T04:13:42
https://www.reddit.com/r/LocalLLaMA/comments/1hkfsjs/any_open_source_projects_on_perplexity_pro_search/
OneConfusion3313
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkfsjs
false
null
t3_1hkfsjs
/r/LocalLLaMA/comments/1hkfsjs/any_open_source_projects_on_perplexity_pro_search/
false
false
self
1
null
Need help with why the model is stopping midway while answering.
1
2024-12-23T04:19:44
https://i.redd.it/8hpse6qeyi8e1.jpeg
Infinite-Calendar542
i.redd.it
1970-01-01T00:00:00
0
{}
1hkfw73
false
null
t3_1hkfw73
/r/LocalLLaMA/comments/1hkfw73/need_help_with_why_the_model_is_stopping_midway/
false
false
https://b.thumbs.redditm…7p5Amqqcr6Lw.jpg
1
{'enabled': True, 'images': [{'id': 'ftpFmY5LEkiTtXRmmqUAs8ESoK8yBJlEHx2NABC4p_8', 'resolutions': [{'height': 163, 'url': 'https://preview.redd.it/8hpse6qeyi8e1.jpeg?width=108&crop=smart&auto=webp&s=5468b0874ca1a39e6ecbe7694f9d2a87b6e7810f', 'width': 108}, {'height': 327, 'url': 'https://preview.redd.it/8hpse6qeyi8e1.jpeg?width=216&crop=smart&auto=webp&s=268fdd0cd9ef9e91b06173cf5a0c5c8396ee9bc2', 'width': 216}, {'height': 484, 'url': 'https://preview.redd.it/8hpse6qeyi8e1.jpeg?width=320&crop=smart&auto=webp&s=fd576c979e34e0d2a23614457b5fc7968ffb6068', 'width': 320}, {'height': 969, 'url': 'https://preview.redd.it/8hpse6qeyi8e1.jpeg?width=640&crop=smart&auto=webp&s=a082d9c1ce41b31f771b8d825465b1bf1cb2a4b5', 'width': 640}], 'source': {'height': 1280, 'url': 'https://preview.redd.it/8hpse6qeyi8e1.jpeg?auto=webp&s=c46259475990bc78de58bd4d93126bbca17aa0b3', 'width': 845}, 'variants': {}}]}
help in the requirements for running Llama 3.3 locally
1
[removed]
2024-12-23T04:55:26
https://www.reddit.com/r/LocalLLaMA/comments/1hkggjz/help_in_the_requirements_for_running_llama_33/
Dreamer1396
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkggjz
false
null
t3_1hkggjz
/r/LocalLLaMA/comments/1hkggjz/help_in_the_requirements_for_running_llama_33/
false
false
self
1
null
Requirements for running Llama 3.3 locally
1
[removed]
2024-12-23T05:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1hkgkqx/requirements_for_running_llama_33_locally/
Dreamer1396
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkgkqx
false
null
t3_1hkgkqx
/r/LocalLLaMA/comments/1hkgkqx/requirements_for_running_llama_33_locally/
false
false
self
1
null
Is there any local LLM similar to Gpt?
1
[removed]
2024-12-23T05:05:10
https://www.reddit.com/r/LocalLLaMA/comments/1hkgmhb/is_there_any_local_llm_similar_to_gpt/
uSayaka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkgmhb
false
null
t3_1hkgmhb
/r/LocalLLaMA/comments/1hkgmhb/is_there_any_local_llm_similar_to_gpt/
false
false
self
1
null
Cmon, Marco!
0
2024-12-23T05:33:58
https://i.redd.it/sq9jqhzobj8e1.png
roz303
i.redd.it
1970-01-01T00:00:00
0
{}
1hkh281
false
null
t3_1hkh281
/r/LocalLLaMA/comments/1hkh281/cmon_marco/
false
false
https://a.thumbs.redditm…ApTIY1fl-sf4.jpg
0
{'enabled': True, 'images': [{'id': '1xxIfr3EUFGIjIETEPXgc0pLXXSHoeILwW8aBAjVYGk', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/sq9jqhzobj8e1.png?width=108&crop=smart&auto=webp&s=b82d8f90868dd9ac1ae1f843b939ebb498aa2a78', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/sq9jqhzobj8e1.png?width=216&crop=smart&auto=webp&s=16993ca3780af712c3116bab8c282c603df4fea5', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/sq9jqhzobj8e1.png?width=320&crop=smart&auto=webp&s=9b6d19f47d9fa338fa95a88d1715fcf63336316e', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/sq9jqhzobj8e1.png?width=640&crop=smart&auto=webp&s=a9354d2a845071c082dfcb9882d94433e393c072', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/sq9jqhzobj8e1.png?width=960&crop=smart&auto=webp&s=312b07a803e8eab3181ccd7117eb2a5da8c345ae', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/sq9jqhzobj8e1.png?width=1080&crop=smart&auto=webp&s=34f535ff540851bd2c017013f7bf8f28d9db514d', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://preview.redd.it/sq9jqhzobj8e1.png?auto=webp&s=ecfb8f5a24c7f8fcc049b82c2c8bd8bcef3eb2e2', 'width': 1812}, 'variants': {}}]}
Llama 3 vs 3.1 vs 3.2
4
What can you say about these three versions of the Llama LLMs? Were they trained around the same time, or were 3.1 and 3.2 later enhancements of 3?
2024-12-23T05:36:48
https://www.reddit.com/r/LocalLLaMA/comments/1hkh3qj/llama_3_vs_31_vs_32/
Ok_Ostrich_8845
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkh3qj
false
null
t3_1hkh3qj
/r/LocalLLaMA/comments/1hkh3qj/llama_3_vs_31_vs_32/
false
false
self
4
null
Any models/AI services out there that can do architectural plans or construction drawings for a house
2
I plan to rebuild my aging house and have started talks with a builder. My wife and I would like to work on an initial design, which we'll provide to the builder, whose design team will finish it off. My original plan was to use SketchUp, but now I'm wondering if there are any AI methods that would make it easier. We have image, video and even game/mesh generation, so I'm thinking there has to be something out there that is decent by now. Any recommendations?
2024-12-23T06:18:28
https://www.reddit.com/r/LocalLLaMA/comments/1hkhq3a/any_modelsai_services_out_there_that_can_do/
vulcan4d
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkhq3a
false
null
t3_1hkhq3a
/r/LocalLLaMA/comments/1hkhq3a/any_modelsai_services_out_there_that_can_do/
false
false
self
2
null
Playground and API response different?
0
I was using Qwen VL Chat to extract text from a set of images. I tested the model on their playground deployed on HuggingFace Spaces and was quite satisfied with its responses. After that I set up a Python script to automate the task of extracting text from images. I hosted the Qwen model with vLLM on an NVIDIA A100, used its endpoint in my script, and used the same images as in the playground. But I'm getting different responses now: for some images the response is "I cannot perform text extraction from images". Why is this happening? The same happens with other vision models when I test them on Google Colab using their Inference API.
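One common culprit for playground/API divergence is server-side defaults: the chat template, system prompt, and sampling parameters a hosted Space applies are rarely the ones a fresh vLLM deployment uses. A hedged sketch of pinning those down against vLLM's OpenAI-compatible server; the port, model name, and image URL are assumptions:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server defaults to port 8000 and ignores the key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen-VL-Chat",  # whatever --model your vLLM launch used
    temperature=0.0,            # pin sampling; playground defaults often differ
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract all text from this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sample-invoice.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```

If results still differ with sampling pinned, comparing the chat template the playground applies against the one vLLM loads from the model repo is the next thing to check.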
2024-12-23T06:50:35
https://www.reddit.com/r/LocalLLaMA/comments/1hki6eh/playground_and_api_response_different/
Available-Stress8598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hki6eh
false
null
t3_1hki6eh
/r/LocalLLaMA/comments/1hki6eh/playground_and_api_response_different/
false
false
self
0
null
Will we ever get new Opuses and Ultras of the world, or is it inference-time compute for the rest of our days? I want to talk with masters of language and philosophy, benchmarks be damned.
264
2024-12-23T07:07:34
https://i.redd.it/alvvsiq5rj8e1.jpeg
DangerousBenefit
i.redd.it
1970-01-01T00:00:00
0
{}
1hkievg
false
null
t3_1hkievg
/r/LocalLLaMA/comments/1hkievg/will_we_ever_get_new_opuses_and_ultras_of_the/
false
false
https://b.thumbs.redditm…g5Trc1qG4RDE.jpg
264
{'enabled': True, 'images': [{'id': 'kcdKcI7RZ2gFubsGmT7NVPZLhmb6_tsHtNzfLBZN5YA', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/alvvsiq5rj8e1.jpeg?width=108&crop=smart&auto=webp&s=39d117680f58aa7ea984a51341ee3c89635748a6', 'width': 108}, {'height': 204, 'url': 'https://preview.redd.it/alvvsiq5rj8e1.jpeg?width=216&crop=smart&auto=webp&s=caad5185577ce1ce782b0b9917e307fecc1a6946', 'width': 216}, {'height': 303, 'url': 'https://preview.redd.it/alvvsiq5rj8e1.jpeg?width=320&crop=smart&auto=webp&s=2f0eaf7276dd4a4c54dec785ab153c2e0d30cafc', 'width': 320}], 'source': {'height': 474, 'url': 'https://preview.redd.it/alvvsiq5rj8e1.jpeg?auto=webp&s=58a759fda347533c924f68986b828496ceeca6db', 'width': 500}, 'variants': {}}]}
I made a TikTok Brainrot Generator
1
[removed]
2024-12-23T07:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1hkigz2/i_made_a_tiktok_brainrot_generator/
notrealDirect
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkigz2
false
null
t3_1hkigz2
/r/LocalLLaMA/comments/1hkigz2/i_made_a_tiktok_brainrot_generator/
false
false
self
1
null
I made a TikTok BrainRot generator
1
[removed]
2024-12-23T07:19:05
https://www.reddit.com/r/LocalLLaMA/comments/1hkikdi/i_made_a_tiktok_brainrot_generator/
notrealDirect
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkikdi
false
null
t3_1hkikdi
/r/LocalLLaMA/comments/1hkikdi/i_made_a_tiktok_brainrot_generator/
false
false
self
1
null
TikTok BrainRot generator
1
[removed]
2024-12-23T07:19:52
https://www.reddit.com/r/LocalLLaMA/comments/1hkikqa/tiktok_brainrot_generator/
notrealDirect
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkikqa
false
null
t3_1hkikqa
/r/LocalLLaMA/comments/1hkikqa/tiktok_brainrot_generator/
false
false
self
1
null
Updated Ception presets - Mistral 2407, Llama 3.3, Qwen 2.5
1
[removed]
2024-12-23T07:23:32
https://www.reddit.com/r/LocalLLaMA/comments/1hkimkj/updated_ception_presets_mistral_2407_llama_33/
Konnect1983
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkimkj
false
null
t3_1hkimkj
/r/LocalLLaMA/comments/1hkimkj/updated_ception_presets_mistral_2407_llama_33/
false
false
self
1
null
Favourite Uncensored Models
89
Title says it all: what are your current favourite uncensored models, and for which use case?
2024-12-23T07:31:09
https://www.reddit.com/r/LocalLLaMA/comments/1hkiq2o/favourite_uncensored_models/
cosmo-pax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkiq2o
false
null
t3_1hkiq2o
/r/LocalLLaMA/comments/1hkiq2o/favourite_uncensored_models/
false
false
self
89
null
Has anyone successfully generated reasonable documentation from a code base using an LLM?
7
You know like when you ask ChatGPT or Claude to explain a piece of code? Has anyone tried throwing an entire repo at it? If so, what did you use? Any additional agents? How accurate was the end result?
2024-12-23T08:02:13
https://www.reddit.com/r/LocalLLaMA/comments/1hkj4pe/has_anyone_successfully_generated_reasonable/
shenglong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkj4pe
false
null
t3_1hkj4pe
/r/LocalLLaMA/comments/1hkj4pe/has_anyone_successfully_generated_reasonable/
false
false
self
7
null
Generate a "black dog"
1
[removed]
2024-12-23T08:22:02
https://www.reddit.com/r/LocalLLaMA/comments/1hkjdsu/generate_a_black_dog/
highlii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkjdsu
false
null
t3_1hkjdsu
/r/LocalLLaMA/comments/1hkjdsu/generate_a_black_dog/
false
false
https://b.thumbs.redditm…itdLyrb6WqSI.jpg
1
null
Best uncensored chatbot? (Not for sexting lol)
1
[removed]
2024-12-23T08:45:56
https://www.reddit.com/r/LocalLLaMA/comments/1hkjokd/best_uncensored_chatbot_not_for_sexting_lol/
Creative-Concert-377
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkjokd
false
null
t3_1hkjokd
/r/LocalLLaMA/comments/1hkjokd/best_uncensored_chatbot_not_for_sexting_lol/
false
false
self
1
null
Has anyone done negative prompting for LLMs?
1
Has anyone done negative prompting for LLMs? I read this paper on how to apply classifier-free guidance to LLMs: [https://arxiv.org/pdf/2306.17806](https://arxiv.org/pdf/2306.17806) >Classifier-Free Guidance (CFG) [37] has recently emerged in text-to-image generation as a lightweight technique to encourage prompt-adherence in generations. In this work, we demonstrate that CFG can be used broadly as an inference-time technique in pure language modeling. We show that CFG (1) improves the performance of Pythia, GPT-2 and LLaMA-family models across an array of tasks: Q&A, reasoning, code generation, and machine translation, achieving SOTA on LAMBADA with LLaMA-7B over PaLM-540B; (2) brings improvements equivalent to a model with twice the parameter-count; (3) can stack alongside other inference-time methods like Chain-of-Thought and Self-Consistency, yielding further improvements in difficult tasks; (4) can be used to increase the faithfulness and coherence of assistants in challenging form-driven and content-driven prompts: in a human evaluation we show a 75% preference for GPT4All using CFG over baseline. I was wondering if anyone has tried this technique?
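For concreteness, here is a minimal sketch of the paper's rule applied at decode time with Hugging Face transformers: the guided logits are uncond + gamma * (cond - uncond), where the unconditional branch can also carry a negative prompt. The model, prompts, and gamma value are illustrative choices, not from the paper:

```python
# Minimal sketch of classifier-free guidance for text generation:
# guided = uncond + gamma * (cond - uncond), applied token by token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

cond = tok("A cheerful poem about the sea:", return_tensors="pt").input_ids
# The "unconditional" branch: an empty prompt here, but a negative prompt
# works the same way (that is the negative-prompting use case).
uncond = tok(tok.bos_token, return_tensors="pt").input_ids
gamma = 1.5  # >1 strengthens prompt adherence; 0 ignores the prompt

generated = []
with torch.no_grad():
    for _ in range(40):
        cond_logits = model(cond).logits[:, -1, :]
        uncond_logits = model(uncond).logits[:, -1, :]
        guided = uncond_logits + gamma * (cond_logits - uncond_logits)
        next_id = guided.argmax(dim=-1, keepdim=True)  # greedy for simplicity
        cond = torch.cat([cond, next_id], dim=-1)
        uncond = torch.cat([uncond, next_id], dim=-1)
        generated.append(next_id.item())

print(tok.decode(generated))
```

Note the cost: two forward passes per token, which is the same doubling the post about Layla below mentions for image-side CFG.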
2024-12-23T09:32:10
https://www.reddit.com/r/LocalLLaMA/comments/1hkk9ll/has_anyone_done_negative_prompting_for_llms/
searcher1k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkk9ll
false
null
t3_1hkk9ll
/r/LocalLLaMA/comments/1hkk9ll/has_anyone_done_negative_prompting_for_llms/
false
false
self
1
null
i need a tip for buying a graphics card for local ollama
1
[removed]
2024-12-23T10:46:52
https://www.reddit.com/r/LocalLLaMA/comments/1hkl8th/i_need_a_tip_for_buying_a_graphics_card_for_local/
No-Dust7863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkl8th
false
null
t3_1hkl8th
/r/LocalLLaMA/comments/1hkl8th/i_need_a_tip_for_buying_a_graphics_card_for_local/
false
false
self
1
null
Small parameters optimization
1
People talk a lot about techniques that make reasoning better in old/small models. But how do I achieve that in everyday use? How do I get, given enough steps, what the big/new models produce? What do I type? Which arguments to the llamafile? Which parameters, which prompt...?
2024-12-23T11:18:42
https://www.reddit.com/r/LocalLLaMA/comments/1hklouj/small_parameters_optimization/
xqoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hklouj
false
null
t3_1hklouj
/r/LocalLLaMA/comments/1hklouj/small_parameters_optimization/
false
false
self
1
null
Two GPUs vs one GPU
11
I have an opportunity to set up my PC with two 8GB GPUs, since I can't afford a single 16GB card. Will this be a big improvement? I'm told that any local LLM work I do will still be limited by the memory available to a single card. What's your experience?
2024-12-23T11:21:04
https://www.reddit.com/r/LocalLLaMA/comments/1hklpzb/two_gpus_vs_one_gpu/
McDoof
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hklpzb
false
null
t3_1hklpzb
/r/LocalLLaMA/comments/1hklpzb/two_gpus_vs_one_gpu/
false
false
self
11
null
Highly recommended LLM UI for Linux?
7
Using a Linux Mint machine with a Ryzen 7 7800X, 32GB RAM and an AMD Radeon RX 7800XT. What are some recommended UIs etc. to run LLMs locally? I was using Windows on a different machine with an Nvidia card, running LM Studio, which was fine, but I know the setup is different with AMD cards, so I'm looking for a good alternative based on people's experience. Thanks.
2024-12-23T11:33:37
https://www.reddit.com/r/LocalLLaMA/comments/1hklwgi/highly_recommended_llm_ui_for_linux/
Drama_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hklwgi
false
null
t3_1hklwgi
/r/LocalLLaMA/comments/1hklwgi/highly_recommended_llm_ui_for_linux/
false
false
self
7
null
What are some things unique to specific models that you have learnt through experience in prompting?
0
I have spent quite some time working with different LLMs and I have noticed some peculiar ways in which specific models perform differently, or get a performance boost or degradation, based on syntax, format and prompting-style changes. You wouldn't be able to guess these things unless you have worked with that specific model for a long time. I'm curious to know whether others have had a similar experience (anecdotal, since LLMs are a black box and it's hard to "explain" why they do things in a certain way). I'll go first: 1. **OpenAI / Anthropic models:** Even though these LLMs are commonly prompted with XML tags, I notice good performance boosts if I send my input as JSON instead of wrapping it in XML tags, particularly for longer context lengths. This is despite the official guides using/suggesting XML tags. 2. **Haiku/Sonnet:** Much better at writing, or coming up with the right words for things, compared to their OpenAI counterparts. 3. **Sonnet:** If you can limit output length by a good choice of prompt output structure, that can also boost performance on hard reasoning tasks. In other words, outputting more leads to worse performance (assuming you don't want to output reasoning text for some reason and just the final structured output). A minimal illustration of point 1 follows.
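To make point 1 concrete, here is the same context packaged both ways; which wins is model-dependent and worth A/B testing. The document content is invented for illustration:

```python
# The same retrieved context in XML-tag style vs. JSON style.
import json

doc = {"title": "Q3 report", "body": "Revenue grew 12% quarter over quarter."}

xml_style = (
    "<document>\n"
    f"  <title>{doc['title']}</title>\n"
    f"  <body>{doc['body']}</body>\n"
    "</document>"
)
json_style = json.dumps({"document": doc}, indent=2)

# Same question, two context formats; A/B test per model and context length.
for ctx in (xml_style, json_style):
    print(f"Answer using only this context:\n{ctx}\nQ: How much did revenue grow?\n")
```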
2024-12-23T11:34:08
https://www.reddit.com/r/LocalLLaMA/comments/1hklwqr/what_are_some_things_unique_to_specific_models/
pravictor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hklwqr
false
null
t3_1hklwqr
/r/LocalLLaMA/comments/1hklwqr/what_are_some_things_unique_to_specific_models/
false
false
self
0
null
Where do I find resources to write better prompts
1
[removed]
2024-12-23T11:35:15
https://www.reddit.com/r/LocalLLaMA/comments/1hklxco/where_do_i_find_resources_to_write_better_prompts/
Sensitive_Bison_4458
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hklxco
false
null
t3_1hklxco
/r/LocalLLaMA/comments/1hklxco/where_do_i_find_resources_to_write_better_prompts/
false
false
self
1
null
Tips/guides on setting up speedy local inference?
4
I'm looking for information and tips on how to speed up local inference. I've got three 24GB GPUs, a mix of Ada and Ampere. Host is Windows 11. Currently I'm using TabbyAPI with speculative decoding. Main model: Qwen2.5-Coder-32B-Instruct-8.0bpw-exl2. Draft model: Qwen2.5-Coder-1.5B-BASE-8.0bpw-exl2. Main/draft cache: Q8. I'm getting anywhere from 8 to 22 tokens/s depending on context length. I'd like to know what it takes to at least double that, especially on the high-context end. Is there a software configuration that can do that, or do I need a more powerful GPU? What setups are other people using to get performance? I wanted to try vLLM, but having an odd number of GPUs resulted in poor compatibility. Would a 4th GPU be a big enough boost to be worth getting? Being able to run jobs in parallel would be nice, too.
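For anyone new to the draft-model setup mentioned above, here is a conceptual sketch of the greedy variant of speculative decoding; it is a toy illustration of why a well-matched draft model raises tokens/s, not TabbyAPI code (draft_model and target_model stand in for any HF-style causal LMs):

```python
import torch

def speculative_step(draft_model, target_model, ids: torch.Tensor, k: int = 4) -> torch.Tensor:
    """One greedy speculative-decoding step: the cheap draft proposes k
    tokens; the expensive target verifies them all in a single forward pass."""
    # 1) Draft proposes k tokens autoregressively (cheap per token).
    proposal = ids
    for _ in range(k):
        nxt = draft_model(proposal).logits[:, -1, :].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=-1)
    # 2) Target scores the whole proposal at once (one expensive pass).
    tgt = target_model(proposal).logits.argmax(-1)  # tgt[:, j] predicts token j+1
    # 3) Accept the longest prefix where draft and target agree.
    n = ids.shape[1]
    accepted = 0
    for i in range(k):
        if proposal[0, n + i] == tgt[0, n + i - 1]:
            accepted += 1
        else:
            break
    # The target's own next prediction comes free, so each step yields
    # accepted + 1 tokens for one big-model pass.
    return torch.cat([ids, tgt[:, n - 1 : n + accepted]], dim=-1)
```

The acceptance rate is why the draft must share the target's tokenizer and style: on predictable code tokens the draft matches often and throughput climbs; at long contexts the acceptance rate typically drops, which matches the 8-22 tokens/s spread described.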
2024-12-23T11:39:32
https://www.reddit.com/r/LocalLLaMA/comments/1hklzgk/tipsguides_on_setting_up_speedy_local_inference/
MorallyDeplorable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hklzgk
false
null
t3_1hklzgk
/r/LocalLLaMA/comments/1hklzgk/tipsguides_on_setting_up_speedy_local_inference/
false
false
self
4
null
[meme] When O3-High refuses
1
2024-12-23T11:47:55
https://i.redd.it/sc5wcdj36l8e1.png
cov_id19
i.redd.it
1970-01-01T00:00:00
0
{}
1hkm3ty
false
null
t3_1hkm3ty
/r/LocalLLaMA/comments/1hkm3ty/meme_when_o3high_refuses/
false
false
https://b.thumbs.redditm…QJWHJePpo0vo.jpg
1
{'enabled': True, 'images': [{'id': '0RZKEtJAfwSJ_sX4Di1qQyeGNlpwgmnGf66cP2TEQuA', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/sc5wcdj36l8e1.png?width=108&crop=smart&auto=webp&s=62bd3261d4afa456ed9a2625e7ceb2c41c426498', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/sc5wcdj36l8e1.png?width=216&crop=smart&auto=webp&s=5c18de79f70661ebc201155d18a2f287e3eab814', 'width': 216}, {'height': 128, 'url': 'https://preview.redd.it/sc5wcdj36l8e1.png?width=320&crop=smart&auto=webp&s=09262289509a14f707781d1dd4d9bbf78810e7af', 'width': 320}, {'height': 257, 'url': 'https://preview.redd.it/sc5wcdj36l8e1.png?width=640&crop=smart&auto=webp&s=80ce7faf72c55b789415b4901c3e85bcecc536aa', 'width': 640}], 'source': {'height': 376, 'url': 'https://preview.redd.it/sc5wcdj36l8e1.png?auto=webp&s=00c948d80465e1ec346277080dbefcbd0a1111f8', 'width': 936}, 'variants': {}}]}
What is the best SLM that you have used for fine tuning on SQL queries?
4
Also, is fine-tuning small language models on company-specific SQL queries worth it? I am providing: System prompt: contains my table's schema and some few-shot examples (1,784 tokens long). Chat history: questions and answers. Final user question: the question the user asks; based on this question and the chat history, a SQL query is generated in the format the system prompt specifies.
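For reference, one fine-tuning record laid out the way described above might look like the sketch below (chat-format JSON; the table, few-shot line, and turns are invented placeholders):

```python
# One hypothetical training example: schema + few-shots in the system
# prompt, prior turns as history, the user's question last.
example = {
    "messages": [
        {
            "role": "system",
            "content": (
                "You translate questions into SQL for this schema:\n"
                "CREATE TABLE orders (id INT, customer TEXT, total REAL, placed_at DATE);\n"
                "Example: 'total sales' -> SELECT SUM(total) FROM orders;"
            ),
        },
        {"role": "user", "content": "Sales in March?"},
        {"role": "assistant",
         "content": "SELECT SUM(total) FROM orders WHERE strftime('%m', placed_at) = '03';"},
        {"role": "user", "content": "And only for customer Acme?"},
        {"role": "assistant",
         "content": "SELECT SUM(total) FROM orders WHERE strftime('%m', placed_at) = '03' "
                    "AND customer = 'Acme';"},
    ]
}
```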
2024-12-23T11:59:20
https://www.reddit.com/r/LocalLLaMA/comments/1hkm9o4/what_is_the_best_slm_that_you_have_used_for_fine/
ShippersAreIdiots
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkm9o4
false
null
t3_1hkm9o4
/r/LocalLLaMA/comments/1hkm9o4/what_is_the_best_slm_that_you_have_used_for_fine/
false
false
self
4
null
Langflow vs Rivet vs Chainforge vs Flowise, etc.
1
[removed]
2024-12-23T12:29:18
https://www.reddit.com/r/LocalLLaMA/comments/1hkmqpm/langflow_vs_rivet_vs_chainforge_vs_flowise_etc/
lemontheme
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkmqpm
false
null
t3_1hkmqpm
/r/LocalLLaMA/comments/1hkmqpm/langflow_vs_rivet_vs_chainforge_vs_flowise_etc/
false
false
self
1
null
Even mistral ai going for pro plan Ó⁠╭⁠╮⁠Ò
1
[removed]
2024-12-23T13:22:10
https://www.reddit.com/gallery/1hknn08
Evening_Action6217
reddit.com
1970-01-01T00:00:00
0
{}
1hknn08
false
null
t3_1hknn08
/r/LocalLLaMA/comments/1hknn08/even_mistral_ai_going_for_pro_plan_óò/
false
false
https://a.thumbs.redditm…8ZQc_Hrun4f8.jpg
1
null
Multi-Agentic Tree Search for Advanced Multi-Context Reasoning
1
[removed]
2024-12-23T13:43:10
https://www.reddit.com/r/LocalLLaMA/comments/1hko0gg/multiagentic_tree_search_for_advanced/
Kalitis-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hko0gg
false
null
t3_1hko0gg
/r/LocalLLaMA/comments/1hko0gg/multiagentic_tree_search_for_advanced/
false
false
self
1
null
Are hallucinations caused because the model doesn't know what it doesn't know?
60
There's a very interesting concept that, in my opinion, better-prepared people tend to understand more readily: knowing what you don't know. In other words, recognizing that, to accomplish task X, it's necessary to understand Y and Z, because without that knowledge, any result would be incorrect. Now, do AI models operate with the certainty that they know everything they're asked? And is that why they end up "hallucinating"? Imagine a human who, due to some pathology, wakes up believing they can speak a language they've never learned. They're absolutely convinced of this ability and confidently start speaking the language. However, everything they say is meaningless: just "linguistic hallucinations." It's a silly question, for sure. But maybe more people have thought about it too, so here I am, risking embarrassment for me and for them 🫡
2024-12-23T14:00:41
https://www.reddit.com/r/LocalLLaMA/comments/1hkobzd/are_hallucinations_caused_because_the_model/
thecalmgreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkobzd
false
null
t3_1hkobzd
/r/LocalLLaMA/comments/1hkobzd/are_hallucinations_caused_because_the_model/
false
false
self
60
null
CodeCombiner: An Open-Source GUI Tool for One-Click Code Gathering for AI / LLM Feeding
2
Hello! Meet CodeCombiner, an Open Source GUI tool that I recently developed. It allows you to gather all your code files in one location with just a single click, simplifying the process of feeding them to AI and LLMs. This user-friendly application streamlines your workflow, and while similar solutions exist as command-line tools or VS Code plugins, CodeCombiner offers a faster and more intuitive experience. I developed this tool in a Windows environment, and you can download and start using it right now. If you're on a different platform, you can easily build the application in your respective environment using the provided source code. Give it a try and enhance your workflow! I created this tool to save time in my own work, and I made it open-source so it can be helpful to others. If you find it useful, please consider leaving a star on GitHub! Here’s the link to the project: [GitHub Project Link](https://github.com/chandrath/Simple-Code-Combiner) Download the application: [Download Link (Windows)](https://github.com/chandrath/Simple-Code-Combiner/releases) https://preview.redd.it/x6fh0rnkwl8e1.png?width=693&format=png&auto=webp&s=4dbc770778df27525619d2ec67f399d76c50e4a2
2024-12-23T14:19:23
https://www.reddit.com/r/LocalLLaMA/comments/1hkop2y/codecombiner_an_opensource_gui_tool_for_oneclick/
444_Guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkop2y
false
null
t3_1hkop2y
/r/LocalLLaMA/comments/1hkop2y/codecombiner_an_opensource_gui_tool_for_oneclick/
false
false
https://b.thumbs.redditm…hzqAXkMsOBII.jpg
2
{'enabled': False, 'images': [{'id': '_npuDWpkciwQPd_GOXgggeX0kCPrXdRW4PQKUdRm6WE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uMYRE3Gnmk4XhGwVa07qZcwDdkekLwzVkhsnmD0k-is.jpg?width=108&crop=smart&auto=webp&s=58353c05bbfc13cc893182581ebc5f8621e16d2c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uMYRE3Gnmk4XhGwVa07qZcwDdkekLwzVkhsnmD0k-is.jpg?width=216&crop=smart&auto=webp&s=dd176cf8727e418da4647c6de732d5e63b3685e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uMYRE3Gnmk4XhGwVa07qZcwDdkekLwzVkhsnmD0k-is.jpg?width=320&crop=smart&auto=webp&s=255219e5a8f8771be81da2b55384e76e65e6fd75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uMYRE3Gnmk4XhGwVa07qZcwDdkekLwzVkhsnmD0k-is.jpg?width=640&crop=smart&auto=webp&s=2528624e3cc3b585186e7270c3bfc303591f8f4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uMYRE3Gnmk4XhGwVa07qZcwDdkekLwzVkhsnmD0k-is.jpg?width=960&crop=smart&auto=webp&s=3d4db2e9a488cb9b5a056a6e823314719c37116d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uMYRE3Gnmk4XhGwVa07qZcwDdkekLwzVkhsnmD0k-is.jpg?width=1080&crop=smart&auto=webp&s=e04ae6bb67b406fae755b7903f788efc0f050c63', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uMYRE3Gnmk4XhGwVa07qZcwDdkekLwzVkhsnmD0k-is.jpg?auto=webp&s=eb6a0d440e8e94ba42bd35c8a453745237fb2d93', 'width': 1200}, 'variants': {}}]}
x2 P40s or x2 3060 12gb?
7
I know the difference between the two is double the vram but I was wondering if it'd be worth investing in a pair of 3060s simply because they're newer. Like the M40 going obsolete, I'm concerned about how long the P40s will last before they're phased out. I don't know much about its longevity hence me asking, but considering I can get 2 3060 12gbs for $180-250 each and P40s are being sold at $300+ right now, I figured I'd ask for some advice.
2024-12-23T14:19:32
https://www.reddit.com/r/LocalLLaMA/comments/1hkop6t/x2_p40s_or_x2_3060_12gb/
switchpizza
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkop6t
false
null
t3_1hkop6t
/r/LocalLLaMA/comments/1hkop6t/x2_p40s_or_x2_3060_12gb/
false
false
self
7
null
Building a conversational AI for a website
1
[removed]
2024-12-23T14:40:13
https://www.reddit.com/r/LocalLLaMA/comments/1hkp3kt/building_a_conversational_ai_for_a_website/
tfwnoasiangf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkp3kt
false
null
t3_1hkp3kt
/r/LocalLLaMA/comments/1hkp3kt/building_a_conversational_ai_for_a_website/
false
false
self
1
null
Human in the loop laws are coming.
1
[removed]
2024-12-23T14:50:08
https://www.reddit.com/r/LocalLLaMA/comments/1hkpas3/human_in_the_loop_laws_are_coming/
Brave_Compatriot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkpas3
false
null
t3_1hkpas3
/r/LocalLLaMA/comments/1hkpas3/human_in_the_loop_laws_are_coming/
false
false
self
1
null
Understanding which LLM works best for what (IN SIMPLE TERMS) – Help Needed!
1
[removed]
2024-12-23T15:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1hkpjkt/understanding_which_llm_model_works_best_for_what/
Every-Assignment5935
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkpjkt
false
null
t3_1hkpjkt
/r/LocalLLaMA/comments/1hkpjkt/understanding_which_llm_model_works_best_for_what/
false
false
self
1
null
Function Calling
6
I've recently started dabbling with function calling. It's all very new to me. Does anyone know the distinction between structured outputs vs. JSON support and function calling? I've even found passing mentions of ReAct agents. What is the modern way of approaching this, even for models that don't officially support function calling?
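Roughly: function calling has the model emit a structured call into your code, while JSON mode / structured outputs constrain the shape of the whole reply; ReAct-style prompting fakes the former for models without native support. A hedged sketch of the first two via the OpenAI Python client (the model names are placeholders, and many local servers expose the same API):

```python
from openai import OpenAI

client = OpenAI()

# 1) Function calling: the model may emit a structured call to YOUR function.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Weather in Oslo?"}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]  # assumes the model chose the tool
print(call.function.name, call.function.arguments)

# 2) JSON mode / structured output: constrain the WHOLE reply to be JSON.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "As JSON with keys city and country, describe Oslo."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)
```

For models with no native tool support, the common fallback is to describe the tools in the prompt and parse the model's text for a call, which is essentially what ReAct-style agent loops do.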
2024-12-23T15:06:25
https://www.reddit.com/r/LocalLLaMA/comments/1hkpmoa/function_calling/
SvenVargHimmel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkpmoa
false
null
t3_1hkpmoa
/r/LocalLLaMA/comments/1hkpmoa/function_calling/
false
false
self
6
null
Layla allows generating images via Stable Diffusion alongside running LLMs on your phone
40
**Completely offline on device.** The image generation model uses <500MB RAM, so it can run alongside an LLM during chat. Currently Stable Diffusion 1.5 and SDXL Turbo models are supported. High-end phones can do an 8B LLM + SD1.5. [(Image generation time skipped in video above, see benchmark generation speeds below. LLM response speed is real-time)](https://reddit.com/link/1hkq3ub/video/g9q7eld88m8e1/player) Image generation speeds on an S23 Ultra: 256x256 (with CFG): ~3 seconds per iteration; 512x512 (with CFG): ~10 seconds per iteration. This can be sped up further by setting CFG = 1.0 (no guidance, which skips the negative prompt and thus one inference pass per iteration, doubling the speed at the cost of quality). **Models are pre-compiled for mobile use:** https://reddit.com/link/1hkq3ub/video/hml3jrpv8m8e1/player All model images in the above video are generated locally on my phone, so it should give you a realistic expectation of the quality. **Auxiliary features:** Supports on-device upscaling of your generated images using RealESRGAN. You can also combine image generation and the LLM to create custom character descriptions and scenarios, and generate images from them. https://preview.redd.it/ye9f5m969m8e1.png?width=808&format=png&auto=webp&s=5b148288ab4d74943d1022b4c94a4106f7411919 Everything works completely offline on a phone.
2024-12-23T15:28:53
https://www.reddit.com/r/LocalLLaMA/comments/1hkq3ub/layla_allows_generating_images_via_stable/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkq3ub
false
null
t3_1hkq3ub
/r/LocalLLaMA/comments/1hkq3ub/layla_allows_generating_images_via_stable/
false
false
https://b.thumbs.redditm…wpsqWU9VM8sU.jpg
40
null
Aider Polyglot Leaderboard
1
2024-12-23T16:04:48
https://aider.chat/2024/12/21/polyglot.html
davewolfs
aider.chat
1970-01-01T00:00:00
0
{}
1hkqvu3
false
null
t3_1hkqvu3
/r/LocalLLaMA/comments/1hkqvu3/aider_polygot_leaderboard/
false
false
https://b.thumbs.redditm…4jZvsiHMbdqU.jpg
1
{'enabled': False, 'images': [{'id': 'qR4fVo4590tCWGA0neHGwcbS2cAddpVqgZeM0-tzpek', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/BQgc-WoPPXosh-ho_8vQxUYwdG3-JIBDa18DtRMIB2Q.jpg?width=108&crop=smart&auto=webp&s=90c8f8a4a00086c7429475733716c95668519df6', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/BQgc-WoPPXosh-ho_8vQxUYwdG3-JIBDa18DtRMIB2Q.jpg?width=216&crop=smart&auto=webp&s=d85116821f965b3f77909440df141a277b9a61b3', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/BQgc-WoPPXosh-ho_8vQxUYwdG3-JIBDa18DtRMIB2Q.jpg?width=320&crop=smart&auto=webp&s=3b41dba39529732e4a6694053a96c2d4a859e079', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/BQgc-WoPPXosh-ho_8vQxUYwdG3-JIBDa18DtRMIB2Q.jpg?width=640&crop=smart&auto=webp&s=9e71522fa218e7ed8a532f9544af0a136091f92d', 'width': 640}, {'height': 492, 'url': 'https://external-preview.redd.it/BQgc-WoPPXosh-ho_8vQxUYwdG3-JIBDa18DtRMIB2Q.jpg?width=960&crop=smart&auto=webp&s=7abad7ea0b99da19504a29da1ebbc47c1d4e538f', 'width': 960}, {'height': 553, 'url': 'https://external-preview.redd.it/BQgc-WoPPXosh-ho_8vQxUYwdG3-JIBDa18DtRMIB2Q.jpg?width=1080&crop=smart&auto=webp&s=a68fafbb388b2c74de75dffbef22a0beeae50ba7', 'width': 1080}], 'source': {'height': 1468, 'url': 'https://external-preview.redd.it/BQgc-WoPPXosh-ho_8vQxUYwdG3-JIBDa18DtRMIB2Q.jpg?auto=webp&s=3b819b236ae0314e4a73c1f1eb9d129629ca7bb8', 'width': 2864}, 'variants': {}}]}
Simplifying Fine-Tuning Workflows: Seeking Feedback on a New Platform Idea
1
[removed]
2024-12-23T16:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1hkrncy/simplifying_finetuning_workflows_seeking_feedback/
AhmadMirza17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkrncy
false
null
t3_1hkrncy
/r/LocalLLaMA/comments/1hkrncy/simplifying_finetuning_workflows_seeking_feedback/
false
false
self
1
null
Calculus !
336
2024-12-23T16:48:25
https://i.redd.it/uvckcpa0om8e1.jpeg
ritshpatidar
i.redd.it
1970-01-01T00:00:00
0
{}
1hkru2s
false
null
t3_1hkru2s
/r/LocalLLaMA/comments/1hkru2s/calculus/
false
false
https://a.thumbs.redditm…7KMFePtjmpi8.jpg
336
{'enabled': True, 'images': [{'id': 'qKmNfIHhglprVTGn0jEmEetKDMpLe8PY8TBZlukHvAk', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/uvckcpa0om8e1.jpeg?width=108&crop=smart&auto=webp&s=7a8b7ca69e81834a0dc7a659f8f40d2e7a90fe51', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/uvckcpa0om8e1.jpeg?width=216&crop=smart&auto=webp&s=cfe87ac85fa37cd5bca4276cae21b282d963eabb', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/uvckcpa0om8e1.jpeg?width=320&crop=smart&auto=webp&s=4d2ccf0279e96021a0a354beea018579eb40e010', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/uvckcpa0om8e1.jpeg?width=640&crop=smart&auto=webp&s=086cf842578e2c750ef3a8efba00c696e26f3b5b', 'width': 640}], 'source': {'height': 800, 'url': 'https://preview.redd.it/uvckcpa0om8e1.jpeg?auto=webp&s=106c6e4622ce05bc32eac15a6fdbfa9d8145010c', 'width': 800}, 'variants': {}}]}
How does GPT-o (and O3) work?
1
[removed]
2024-12-23T17:09:22
https://www.reddit.com/r/LocalLLaMA/comments/1hksb08/how_does_gpto_and_o3_work/
blackrabbit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hksb08
false
null
t3_1hksb08
/r/LocalLLaMA/comments/1hksb08/how_does_gpto_and_o3_work/
false
false
self
1
null
Tools to Manage Context and Edit LLM Responses?
0
I often have a problem when using a chat interface for coding that there will be several steps before I can get to the final solution. However, the LLM might get hung up on the first one of these steps, requiring a bit of back and forth before getting it right after fixing errors, asking for small revisions, etc. Now, as I move on the next step, my context is polluted by my back-and-forth to complete step 1. There's a bunch of irrelevant garbage in the context because of the small fixes needed to get the first step right. So the responses for step 2 and later get worse and worse. Is there a tool that lets me create a conversation/chat like a tree, and go back before the LLM had to be corrected, edit the LLM response to make it look like it got it right the first time, then continue from there? Is that making any sense?
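If no off-the-shelf tool fits, note that with a raw chat-completions API the "tree" is just a message list you own: branch by truncating at the point before the back-and-forth and splicing in the final corrected answer as if it were the first reply. A small illustrative sketch (message contents invented):

```python
# List surgery on a chat history: drop the noisy correction turns and
# present the final fixed answer as the model's first response.
history = [
    {"role": "user", "content": "Write a function to parse the config."},
    {"role": "assistant", "content": "def parse(path): ...  # buggy first attempt"},
    {"role": "user", "content": "That crashes on empty files."},
    {"role": "assistant", "content": "def parse(path): ...  # fixed version"},
]

# Branch before the back-and-forth: keep the original request, splice in
# the FINAL answer as if it were the first reply, then continue to step 2.
clean = [
    history[0],
    {"role": "assistant", "content": history[-1]["content"]},
    {"role": "user", "content": "Now add schema validation."},
]
# `clean` is what you send next; the failed attempts never enter the context.
```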
2024-12-23T17:09:32
https://www.reddit.com/r/LocalLLaMA/comments/1hksb5x/tools_to_manage_context_and_edit_llm_responses/
JeepyTea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hksb5x
false
null
t3_1hksb5x
/r/LocalLLaMA/comments/1hksb5x/tools_to_manage_context_and_edit_llm_responses/
false
false
self
0
null
Anybody else getting Trojan:Script/Ulthar.A!ml in the latest llama.cpp release?
1
[removed]
2024-12-23T17:26:41
https://www.reddit.com/r/LocalLLaMA/comments/1hksp2v/anybody_else_getting_trojanscriptultharaml_in_the/
701nf1n17y4ndb3y0nd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hksp2v
false
null
t3_1hksp2v
/r/LocalLLaMA/comments/1hksp2v/anybody_else_getting_trojanscriptultharaml_in_the/
false
false
https://b.thumbs.redditm…zqY4ieH7rDPc.jpg
1
null
LLM Consortium - Multi-Model AI Response Synthesis
0
This project is an LLM consortium system that combines the strengths of multiple AI models, specifically GPT-4 and Claude 3 Sonnet, to generate more reliable responses. When a user submits a prompt, the system simultaneously queries both models, and then uses Claude 3 Haiku as a judge to synthesize and analyze their responses. The judge evaluates the consistency, completeness, and quality of the responses, providing a confidence score and highlighting any dissenting views. If the confidence score is below 0.8, the system can perform up to three iterations to refine the response. Check it out here: https://llm-consortium.rnikhil.com/
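The control flow described reduces to a small loop. This sketch is a reconstruction from the description, not the project's code; it routes everything through one OpenAI-compatible client with placeholder model names (the real system presumably calls OpenAI and Anthropic separately):

```python
import json
from openai import OpenAI

client = OpenAI()  # stand-in for per-provider clients

def ask(model: str, prompt: str) -> str:
    r = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

def consortium(prompt: str, max_iters: int = 3, threshold: float = 0.8) -> str:
    answer = ""
    for _ in range(max_iters):
        a = ask("gpt-4o", prompt)             # placeholder for GPT-4
        b = ask("claude-3-5-sonnet", prompt)  # placeholder for Claude 3 Sonnet
        raw = ask(
            "gpt-4o-mini",                    # placeholder for the Haiku judge
            "Synthesize one best answer from the two candidates, note any "
            "dissent, and rate your confidence from 0 to 1. Reply as JSON: "
            '{"answer": "...", "confidence": 0.0}\n'
            f"Question: {prompt}\nCandidate A: {a}\nCandidate B: {b}",
        )
        verdict = json.loads(raw)  # a real system would guard this parse
        answer = verdict["answer"]
        if verdict["confidence"] >= threshold:
            break
        prompt = f"{prompt}\n(Refine this earlier draft: {answer})"
    return answer
```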
2024-12-23T17:43:07
https://www.reddit.com/r/LocalLLaMA/comments/1hkt25q/llm_consortium_multimodel_ai_response_synthesis/
Excellent-Effect237
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkt25q
false
null
t3_1hkt25q
/r/LocalLLaMA/comments/1hkt25q/llm_consortium_multimodel_ai_response_synthesis/
false
false
self
0
null
Multimodal LLMs (silly doubt)
0
[removed]
2024-12-23T17:58:03
https://www.reddit.com/r/LocalLLaMA/comments/1hktdw0/multimodal_llms_silly_doubt/
Wide-Chef-7011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hktdw0
false
null
t3_1hktdw0
/r/LocalLLaMA/comments/1hktdw0/multimodal_llms_silly_doubt/
false
false
self
0
null
How can I design a scalable LLM middleware to handle indefinite conversations while retaining context?
1
[removed]
2024-12-23T17:59:11
https://www.reddit.com/r/LocalLLaMA/comments/1hkterr/how_can_i_design_a_scalable_llm_middleware_to/
Quantum_Qualia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkterr
false
null
t3_1hkterr
/r/LocalLLaMA/comments/1hkterr/how_can_i_design_a_scalable_llm_middleware_to/
false
false
self
1
null
Qwen QwQ distillation into Qwen 2.5 1.5B
1
[removed]
2024-12-23T18:32:55
https://www.reddit.com/r/LocalLLaMA/comments/1hku55d/qwen_qwq_distillation_into_qwen_25_15b/
micaebe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hku55d
false
null
t3_1hku55d
/r/LocalLLaMA/comments/1hku55d/qwen_qwq_distillation_into_qwen_25_15b/
false
false
self
1
{'enabled': False, 'images': [{'id': '_4CZ64qWFe8CGy3QtQlTGV30X0irDq_64oeqgLaAsx8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7JjU2aCbYeUZKLxV57rhd6YO_w-70OJBkLXw3dTHYxM.jpg?width=108&crop=smart&auto=webp&s=7b430b9b55ff79f5e90c747807eb2219466f3b2f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7JjU2aCbYeUZKLxV57rhd6YO_w-70OJBkLXw3dTHYxM.jpg?width=216&crop=smart&auto=webp&s=fb8bf901c455d8d9ecfc9642e5a4d53daf5edb67', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7JjU2aCbYeUZKLxV57rhd6YO_w-70OJBkLXw3dTHYxM.jpg?width=320&crop=smart&auto=webp&s=5b2eda2d71f55070652ea5b2091a1053b17ad269', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7JjU2aCbYeUZKLxV57rhd6YO_w-70OJBkLXw3dTHYxM.jpg?width=640&crop=smart&auto=webp&s=05528ec5fc51b684d0604e7183cea9a5b52e5974', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7JjU2aCbYeUZKLxV57rhd6YO_w-70OJBkLXw3dTHYxM.jpg?width=960&crop=smart&auto=webp&s=b44244cf6b779b62c924927e7cf4045990c4b087', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7JjU2aCbYeUZKLxV57rhd6YO_w-70OJBkLXw3dTHYxM.jpg?width=1080&crop=smart&auto=webp&s=dc4c7f4f808c0ae2972af4f1f71c1f9947761941', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7JjU2aCbYeUZKLxV57rhd6YO_w-70OJBkLXw3dTHYxM.jpg?auto=webp&s=3232d74099cece49cfb0ef2958766ae0211e1a56', 'width': 1200}, 'variants': {}}]}
How can I design a scalable LLM middleware to handle indefinite conversations while retaining context?
13
NousResearch's Hermes 3 is awesome for roleplaying, but the context is short; their 72B model is hosted pretty cheaply on the likes of Hyperbolic, but alas the context window is only 12k. I've been thinking about how best to design a middleware layer for large language models that can handle an indefinite stream of conversation while still preserving context long past the original token-window limit. My current plan is to have Python middleware watch for when the token window gets overloaded and automatically summarize or compress the ongoing conversation, pushing high-level points or crucial details into a retrieval-augmented-generation vector database. This way, at any given time, the LLM only receives an abridged version of the full discussion, but can also cross-reference the vector store whenever it encounters relevant keywords or semantic matches, perhaps by embedding those triggers directly into the prompt itself. I'm curious whether anyone has experimented with a similar approach or has an even better idea for orchestrating large language model memory management at scale. How should I structure the summarization pipeline, what algorithms or methodologies might help in identifying the "important" tidbits, and is there a more elegant way to ensure the LLM continually knows when and how to query the vector store? Any insights, lessons learned, or alternative suggestions would be incredibly helpful.
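A minimal sketch of that loop, with the LLM-backed pieces stubbed: summarize() and embed() are placeholders you would wire to a real model and embedding endpoint, the token budget is an assumption, and the "vector DB" is just an in-memory list:

```python
import numpy as np

BUDGET = 8_000  # tokens to stay under e.g. a 12k window; an assumption

def n_tokens(text: str) -> int:
    return len(text) // 4  # crude estimate; swap in a real tokenizer

def summarize(turns: list[str]) -> str:
    # Stub: in practice an LLM call ("compress these turns; keep names,
    # decisions, and open threads").
    return "SUMMARY: " + " | ".join(t[:40] for t in turns)

def embed(text: str) -> np.ndarray:
    # Stub: in practice call a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=64)

store: list[tuple[np.ndarray, str]] = []  # stand-in for the vector DB
history: list[str] = []

def add_turn(turn: str) -> list[str]:
    """Append a turn; archive + compress old turns once over budget.
    Returns the context to send: recalled memories + abridged history."""
    history.append(turn)
    if len(history) > 4 and sum(map(n_tokens, history)) > BUDGET:
        old, keep = history[:-4], history[-4:]   # keep recent turns verbatim
        summary = summarize(old)
        store.append((embed(summary), summary))  # archive for later recall
        history[:] = [summary] + keep
    recalled = []
    if store:  # recall the archived memory most similar to the new turn
        q = embed(turn)
        sim = lambda v: float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)
        recalled = [max(store, key=lambda kv: sim(kv[0]))[1]]
    return recalled + history
```

The design question the post raises, which turns are "important" enough to archive verbatim rather than summarize, lives entirely in summarize(); everything else is bookkeeping.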
2024-12-23T18:34:01
https://www.reddit.com/r/LocalLLaMA/comments/1hku5z6/how_can_i_design_a_scalable_llm_middleware_to/
Quantum_Qualia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hku5z6
false
null
t3_1hku5z6
/r/LocalLLaMA/comments/1hku5z6/how_can_i_design_a_scalable_llm_middleware_to/
false
false
self
13
null
Best Approach for Converting Unstructured Text to Predefined JSON Format for LLM Fine-Tuning?
1
[removed]
2024-12-23T18:58:54
https://www.reddit.com/r/LocalLLaMA/comments/1hkup7z/best_approach_for_converting_unstructured_text_to/
oguzhancttnky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkup7z
false
null
t3_1hkup7z
/r/LocalLLaMA/comments/1hkup7z/best_approach_for_converting_unstructured_text_to/
false
false
self
1
null
CLIDataForge: A Simple, Data-Driven Pipeline for Large-Scale LLM Dataset Creation
5
Hello, Here is a tool I've been working on, and I thought some of you might find it useful. It's called CLIDataForge, and you can check it out here: [https://github.com/chrismrutherford/cliDataForge](https://github.com/chrismrutherford/cliDataForge) **What does it do?** CLIDataForge is a command-line tool for creating and managing large-scale training datasets for LLM fine-tuning. I found myself writing similar chunks of code for different projects and thought, "There must be a better way!" So I decided to make something data-driven and reusable. **Why I made it:** 1. **Simplicity**: No fancy frameworks or overly complex architectures. Just a straightforward CLI tool that gets the job done. 2. **Scalability**: While many projects use JSON files, I opted for PostgreSQL. Why? Once you're dealing with datasets of several hundred thousand entries, tracking many JSON files becomes a problem. 3. **Flexibility**: The data-driven approach means you can adapt it to different projects without rewriting core code each time. **Key Features:** * Multi-stage processing pipeline * Parallel processing for speed * Integrated with PostgreSQL for handling large datasets * Simple prompt management system * Easy column management and data import/export It's not trying to be the be-all and end-all of data-processing tools, but rather a simple, effective system for those who need something a bit more robust than scripts but don't want to use massive frameworks. I'd love to hear your thoughts, suggestions, or any questions you might have. And if you find it useful, do give it a star on GitHub! I'm going to integrate Hugging Face at some stage.
2024-12-23T19:04:09
https://www.reddit.com/r/LocalLLaMA/comments/1hkutmn/clidataforge_a_simple_datadriven_pipeline_for/
lolzinventor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkutmn
false
null
t3_1hkutmn
/r/LocalLLaMA/comments/1hkutmn/clidataforge_a_simple_datadriven_pipeline_for/
false
false
self
5
{'enabled': False, 'images': [{'id': '2e0TgPAY_LLaNg8mD7SdOub3PPPFVMhH8_hw2uLLOCY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ioo8ifCVhtpOkGXYwvgBinWUiWYKOrloRMUc-lzfKhk.jpg?width=108&crop=smart&auto=webp&s=398d72b0b38f843136ca511018279e849f38f9ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ioo8ifCVhtpOkGXYwvgBinWUiWYKOrloRMUc-lzfKhk.jpg?width=216&crop=smart&auto=webp&s=2e5e56ea3d5ed4e8d680605f64e9300b564915d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ioo8ifCVhtpOkGXYwvgBinWUiWYKOrloRMUc-lzfKhk.jpg?width=320&crop=smart&auto=webp&s=5d5e441f53261b70d45f8aa7f066175bc56235de', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ioo8ifCVhtpOkGXYwvgBinWUiWYKOrloRMUc-lzfKhk.jpg?width=640&crop=smart&auto=webp&s=cd59c40a6d44d7086793c7c26791e73652c08f83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ioo8ifCVhtpOkGXYwvgBinWUiWYKOrloRMUc-lzfKhk.jpg?width=960&crop=smart&auto=webp&s=eefd907e92e60fcf61f182a7d3d5c335d06070bb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ioo8ifCVhtpOkGXYwvgBinWUiWYKOrloRMUc-lzfKhk.jpg?width=1080&crop=smart&auto=webp&s=10843f4a9fde740061d1701953b990fc53f29bb9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ioo8ifCVhtpOkGXYwvgBinWUiWYKOrloRMUc-lzfKhk.jpg?auto=webp&s=662f09f0044596d2eeb389f95da9d9c6e84e511e', 'width': 1200}, 'variants': {}}]}
My Apple Intelligence Writing Tools for Windows/Linux/macOS app just had a huge new update. It supports a ton of local LLM implementations, and is open source & free :D. You can now chat with its one-click summaries of websites/YT videos/docs, and bring up an LLM chat UI anytime. Here's a new demo!
117
2024-12-23T19:21:18
https://v.redd.it/a8te4ixeen8e1
TechExpert2910
v.redd.it
1970-01-01T00:00:00
0
{}
1hkv6og
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a8te4ixeen8e1/DASHPlaylist.mpd?a=1737573694%2COTczNjBmZTRmMTBmNWZhNzNiZDJmNzc5MWM5YjI3ZTI4YWE3MGNhYWJhZDc4M2QxZGZlNzkxNzVjZmZiNTg1ZQ%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/a8te4ixeen8e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/a8te4ixeen8e1/HLSPlaylist.m3u8?a=1737573694%2CNTFkZTRmZjg0YTNmYjI1ZTdlMmM1YWZjOWFmYTcxMDFiNDc1MzEwYTkzNzRmNzI2NjIxNDI1ZjAzOWVhOTUyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/a8te4ixeen8e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1468}}
t3_1hkv6og
/r/LocalLLaMA/comments/1hkv6og/my_apple_intelligence_writing_tools_for/
false
false
https://external-preview…72d693737908ef1c
117
{'enabled': False, 'images': [{'id': 'cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK.png?width=108&crop=smart&format=pjpg&auto=webp&s=6e87076e3b7bd78e4db8174782f1fab3a4d0d514', 'width': 108}, {'height': 158, 'url': 'https://external-preview.redd.it/cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK.png?width=216&crop=smart&format=pjpg&auto=webp&s=6dcccec57531f7e5ae48f6c86d9c4da0db0f1dcb', 'width': 216}, {'height': 235, 'url': 'https://external-preview.redd.it/cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK.png?width=320&crop=smart&format=pjpg&auto=webp&s=24e4305eae69df7099c910ca50488ead797b0193', 'width': 320}, {'height': 470, 'url': 'https://external-preview.redd.it/cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK.png?width=640&crop=smart&format=pjpg&auto=webp&s=5c2b8849182934d8b000daf409639e83e55e7dea', 'width': 640}, {'height': 706, 'url': 'https://external-preview.redd.it/cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK.png?width=960&crop=smart&format=pjpg&auto=webp&s=b384d2e0452b5bb9beb7c576508e64c1d7fbaab0', 'width': 960}, {'height': 794, 'url': 'https://external-preview.redd.it/cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cb7e0cdbc3097deca14eaa36d9432a246d50c8b1', 'width': 1080}], 'source': {'height': 1340, 'url': 'https://external-preview.redd.it/cHJoN2pneGVlbjhlMTB-_ERSpoLo97zgaZe_Ty7qceG2UqpOgIqwvXUeUIDK.png?format=pjpg&auto=webp&s=e171f3148955e7ce2e7298c4d5d8e2c1e9eeea16', 'width': 1822}, 'variants': {}}]}
Has anyone tested phi4 yet? How does it perform?
44
The benchmarks look great, and the model weights have been out for some time already, but surprisingly I haven't seen any reviews on it, in particular its performance on math and coding as compared to Qwen 2.5 14b and other similarly sized relevant models; any insight in that regard?
2024-12-23T19:28:05
https://www.reddit.com/r/LocalLLaMA/comments/1hkvbnn/has_anyone_tested_phi4_yet_how_does_it_perform/
LLMtwink
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkvbnn
false
null
t3_1hkvbnn
/r/LocalLLaMA/comments/1hkvbnn/has_anyone_tested_phi4_yet_how_does_it_perform/
false
false
self
44
null
I built a tool for renting cheap GPUs for cheap inferencing
1
Hi guys, as the title suggests, we were struggling a lot with hosting our own models at affordable prices while maintaining decent precision. Hosting models often demands huge self-built racks or significant financial backing. I built a tool that rents the cheapest spot GPU VMs from your favorite cloud providers, spins up inference clusters based on vLLM, and serves them to you easily. It ensures full quota transparency, optimizes token throughput, and keeps costs predictable by monitoring spending. I'm looking for beta users to test and refine the platform. If you're interested in cost-effective access to powerful machines (those juicy high-VRAM setups), I'd love to hear from you! Link to waitlist: [https://open-scheduler.com/](https://open-scheduler.com/)
2024-12-23T19:32:28
https://www.reddit.com/r/LocalLLaMA/comments/1hkvf2k/i_built_a_tool_for_renting_cheap_gpus_cheaply/
RedditsBestest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkvf2k
false
null
t3_1hkvf2k
/r/LocalLLaMA/comments/1hkvf2k/i_built_a_tool_for_renting_cheap_gpus_cheaply/
false
false
self
1
null
I built a tool for renting cheap GPUs
51
Hi guys, as the title suggests, we were struggling a lot with hosting our own models at affordable prices while maintaining decent precision. Hosting models often demands huge self-built racks or significant financial backing. I built a tool that rents the cheapest spot GPU VMs from your favorite cloud providers, spins up inference clusters based on vLLM, and serves them to you easily. It ensures full quota transparency, optimizes token throughput, and keeps costs predictable by monitoring spending. I'm looking for beta users to test and refine the platform. If you're interested in cost-effective access to powerful machines (like juicy high-VRAM setups), I'd love to hear from you! Link to Website: [https://open-scheduler.com/](https://open-scheduler.com/)
2024-12-23T19:35:04
https://www.reddit.com/r/LocalLLaMA/comments/1hkvh0w/i_built_a_tool_for_renting_cheap_gpus/
RedditsBestest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkvh0w
false
null
t3_1hkvh0w
/r/LocalLLaMA/comments/1hkvh0w/i_built_a_tool_for_renting_cheap_gpus/
false
false
self
51
null
ChatGPT Plus vs Claude Pro for Learning Backend Development with Java
0
Hi everyone, I'm currently learning backend development with a focus on Java and Spring Boot, and I'm considering a subscription to support my learning journey. Right now, I'm deciding between ChatGPT Plus and Claude Pro. I've looked at rankings on platforms like Chatbot Arena and Webdev Arena, but I'm not sure how relevant these are to backend development, especially for tasks involving Java and Spring Boot. I'd really appreciate insights from backend developers who've used these tools in real-world scenarios. Here's what I'd like to know: 1. How well do these tools assist with backend development tasks like debugging, explaining concepts, and generating or reviewing code for Java and general computer science? 2. Which one is more effective for learning and applying frameworks, libraries, or tools commonly used in Java backend development? 3. Are there any specific advantages or disadvantages you've noticed when using these tools? If you've tried both, a comparison would be especially valuable! Thanks in advance for sharing your experiences, I'm looking forward to your advice.
2024-12-23T19:57:15
https://www.reddit.com/r/LocalLLaMA/comments/1hkvy5h/chatgpt_plus_vs_claude_pro_for_learning_backend/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkvy5h
false
null
t3_1hkvy5h
/r/LocalLLaMA/comments/1hkvy5h/chatgpt_plus_vs_claude_pro_for_learning_backend/
false
false
self
0
null
JSON structured output comparison between 4o, 4o-mini, and sonnet 3.5 (or other LLMs)? Any benchmarks or experience?
6
Hey - I am in the midst of a project in which I am * taking the raw data from a Notion database, pulled via API and saved as raw JSON * working with 500 files. Each is a separate sub-page of this database. Each file averages about 75kb, or 21,000 tokens, of unstructured JSON, though only about 1/10th of it is the important stuff; most of it is metadata * planning to create a fairly comprehensive prompt for an LLM to turn this raw JSON into structured JSON, so that I can use these processed files to write to a postgres database with everything important extracted and semantically structured for use in an application So basically, I need to write a thorough prompt to describe the database structure and walk the LLM through the actual content and how to interpret it correctly, so that it can organize it according to the structure of the database. Now that I'm getting ready to do that, I am trying to decide which LLM model is best suited for this given the complexity and size of the project. I don't mind spending around $100 to get the best results, but I have struggled to find any authoritative comparison of how well various models perform at structured JSON output. Is 4o significantly better than 4o-mini? Or would 4o-mini be totally sufficient? Would I need to be concerned about losing important data or the logic being all messed up? Obviously, I can't check each and every entry. Is Sonnet 3.5 better than both? Or the same? Do you have any experience with this type of task and any insight or advice? Know of anyone who has benchmarked something similar? Thank you in advance for any help you can offer!
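For what it's worth, here is a minimal sketch of the pipeline described above using the OpenAI structured-outputs API (json_schema mode), which constrains the model's output to a schema; the schema, directory names, and field names are placeholders, not the poster's actual database structure:

```python
import json
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()

# Placeholder schema -- the real one would mirror the target postgres tables.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
        "summary": {"type": "string"},
    },
    "required": ["title", "tags", "summary"],
    "additionalProperties": False,
}

Path("structured").mkdir(exist_ok=True)
for path in Path("notion_raw").glob("*.json"):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in gpt-4o on a sample to compare quality
        messages=[
            {"role": "system", "content": "Extract the fields defined by the schema from this Notion page export."},
            {"role": "user", "content": path.read_text()},
        ],
        response_format={
            "type": "json_schema",
            "json_schema": {"name": "notion_page", "schema": schema, "strict": True},
        },
    )
    structured = json.loads(resp.choices[0].message.content)
    (Path("structured") / path.name).write_text(json.dumps(structured, indent=2))
```

Running a sample of 20-30 files through both 4o-mini and 4o and diffing the outputs is probably the cheapest way to answer the quality question for this specific dataset.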
2024-12-23T20:00:09
https://www.reddit.com/r/LocalLLaMA/comments/1hkw0e8/json_structured_output_comparison_between_4o/
No-Emu9365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkw0e8
false
null
t3_1hkw0e8
/r/LocalLLaMA/comments/1hkw0e8/json_structured_output_comparison_between_4o/
false
false
self
6
null
Tried my hand on sora and suno
1
Let me know your thoughts!
2024-12-23T20:10:14
https://v.redd.it/07ejnprznn8e1
Optimalutopic
/r/LocalLLaMA/comments/1hkw8j8/tried_my_hand_on_sora_and_suno/
1970-01-01T00:00:00
0
{}
1hkw8j8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/07ejnprznn8e1/DASHPlaylist.mpd?a=1737706236%2CMTUxZTMwNDgxMjAwNWY5M2U5NGYxYTljZTE2ZTcwN2E3MjhhMTEzY2MzYWY0OGNkYzQzMTBmOWRlOWNiN2RjZQ%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/07ejnprznn8e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/07ejnprznn8e1/HLSPlaylist.m3u8?a=1737706236%2CZTJkMDM4ZDhmOTg0NzU5MzUyZDU1NWI0ODNlZGQ2ODkwNGQ0OGRkOTEwODY5NWEzNTFmNmVkYjg5MDk3ZmM5Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/07ejnprznn8e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1hkw8j8
/r/LocalLLaMA/comments/1hkw8j8/tried_my_hand_on_sora_and_suno/
false
false
https://external-preview…195aa5a5cf789ef8
1
{'enabled': False, 'images': [{'id': 'dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC.png?width=108&crop=smart&format=pjpg&auto=webp&s=5c84f12a705cb8a4120da1ed0af0a059635aaedf', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC.png?width=216&crop=smart&format=pjpg&auto=webp&s=95699d07b508026ee76a6ad6981bb19750ce8676', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC.png?width=320&crop=smart&format=pjpg&auto=webp&s=9658fb56b095fb28bbf4fb03ada8bf5e9ce6c7c4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC.png?width=640&crop=smart&format=pjpg&auto=webp&s=288dec65305d9aee23d495bf37e71366b218b7e0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC.png?width=960&crop=smart&format=pjpg&auto=webp&s=0c38ba726beb7454b2e8ec263a73846ef6be2a52', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=20ec2fa5ddd3ed14260f53b4e4f59eecf521ae88', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dDU5djl2b3pubjhlMVD_ReQ-ovJcnEFx8whq5KBNrUoiktUJKZpedf3aYrZC.png?format=pjpg&auto=webp&s=cac440b6aa7d7963ad09467254f5d6c7096a955d', 'width': 1280}, 'variants': {}}]}
Discord invite for locallama?
1
[removed]
2024-12-23T20:45:18
https://www.reddit.com/r/LocalLLaMA/comments/1hkwz7j/discord_invite_for_locallama/
kitkatmafia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkwz7j
false
null
t3_1hkwz7j
/r/LocalLLaMA/comments/1hkwz7j/discord_invite_for_locallama/
false
false
self
1
null
You can now run *private* GGUFs from Hugging Face Hub directly in Ollama
135
Hi all, I'm VB, GPU poor in residence at Hugging Face - starting today, you can run your private GGUFs from the Hugging Face hub directly in Ollama! 🔥 It works out of the box; all you need to do is add your Ollama SSH key to your profile, and that's it! Run private fine-tunes, quants and more, with the same old UX! Quite excited to bring more than a million smol LLMs closer to all Ollama users - loads more goodies in the pipeline! It takes just two steps: 1. Copy your Ollama SSH key, which you can do via: `cat ~/.ollama/id_ed25519.pub | pbcopy` 2. Add the corresponding key to your Hugging Face account by going to [your account settings](https://huggingface.co/settings/keys) and clicking on `Add new SSH key` 3. That's it! You can now run private GGUFs from the Hugging Face Hub: `ollama run` [`hf.co/{username}/{repository}`](http://hf.co/{username}/{repository}) Full details here: [https://huggingface.co/docs/hub/en/ollama](https://huggingface.co/docs/hub/en/ollama) Remember: not your weights, not your brain! 🤗 Looking forward to your feedback!
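For scripted use, the same private repos should also work through the Ollama Python client (a small sketch; the username/repository below is a placeholder for your own private GGUF repo):

```python
import ollama  # pip install ollama

# Placeholder repo name -- substitute your own private GGUF repo on the Hub.
response = ollama.chat(
    model="hf.co/your-username/your-private-gguf",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```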
2024-12-23T20:49:32
https://www.reddit.com/r/LocalLLaMA/comments/1hkx2bi/you_can_now_run_private_ggufs_from_hugging_face/
vaibhavs10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkx2bi
false
null
t3_1hkx2bi
/r/LocalLLaMA/comments/1hkx2bi/you_can_now_run_private_ggufs_from_hugging_face/
false
false
self
135
{'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]}
Are you GPU-poor? How do you deal with it?
1
[removed]
2024-12-23T20:49:38
https://www.reddit.com/r/LocalLLaMA/comments/1hkx2ej/are_you_gpupoor_how_do_you_deal_with_it/
Elegant_vamp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkx2ej
false
null
t3_1hkx2ej
/r/LocalLLaMA/comments/1hkx2ej/are_you_gpupoor_how_do_you_deal_with_it/
false
false
self
1
null
LMSYS Copilot Arena update, with Deepseek on top
24
2024-12-23T20:52:01
https://i.redd.it/zqp14tfevn8e1.png
jpydych
i.redd.it
1970-01-01T00:00:00
0
{}
1hkx48t
false
null
t3_1hkx48t
/r/LocalLLaMA/comments/1hkx48t/lmsys_copilot_arena_update_with_deepseek_on_top/
false
false
https://b.thumbs.redditm…ZBU-jA4W_TVU.jpg
24
{'enabled': True, 'images': [{'id': 'Wovbbcp6n6-nEP0BUhyH4CkCgIyMLiq8FB_4zyvGgqo', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/zqp14tfevn8e1.png?width=108&crop=smart&auto=webp&s=200e4702f26a0d00133714d86e5615a438cb6fd6', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/zqp14tfevn8e1.png?width=216&crop=smart&auto=webp&s=0d55def3134a55256c9d5bb4cf9b7b0eef121ccb', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/zqp14tfevn8e1.png?width=320&crop=smart&auto=webp&s=84d026fe76fae41156a65182f258bbfe887538c4', 'width': 320}, {'height': 270, 'url': 'https://preview.redd.it/zqp14tfevn8e1.png?width=640&crop=smart&auto=webp&s=d53f0e4b84ef2a5e3b7c2e3c179eca4563de75f9', 'width': 640}, {'height': 406, 'url': 'https://preview.redd.it/zqp14tfevn8e1.png?width=960&crop=smart&auto=webp&s=f74ef4f11147943b59ff602c367194bfc509a080', 'width': 960}, {'height': 456, 'url': 'https://preview.redd.it/zqp14tfevn8e1.png?width=1080&crop=smart&auto=webp&s=4ca3c8f1d42c76468a9c5015d66c23bbd7503063', 'width': 1080}], 'source': {'height': 774, 'url': 'https://preview.redd.it/zqp14tfevn8e1.png?auto=webp&s=0ded550ad71f8a680c3afc4c23092df42d845b1a', 'width': 1830}, 'variants': {}}]}
rtrvr.ai: Universal Web Agent
1
[removed]
2024-12-23T21:03:45
https://www.reddit.com/r/LocalLLaMA/comments/1hkxdav/rtrvrai_universal_web_agent/
BodybuilderLost328
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkxdav
false
null
t3_1hkxdav
/r/LocalLLaMA/comments/1hkxdav/rtrvrai_universal_web_agent/
false
false
self
1
null
Ollama (llama3.2:3b) runs extremely slow on my MBP (36GB M3) and also makes my computer extremely hot
0
How do I figure out the cause of this? I'm not sure how much RAM or unified CPU/GPU memory it's using, but theoretically I should have more than enough unified memory to run llama3.2:3b.
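One way to answer the memory question directly is Ollama's /api/ps endpoint, which reports how much of each loaded model is resident and how much of that sits in GPU-accessible memory (a quick diagnostic sketch, assuming the default local port):

```python
import requests

# Ollama's default local API port; /api/ps lists currently loaded models.
info = requests.get("http://localhost:11434/api/ps").json()
for m in info.get("models", []):
    total, vram = m["size"], m.get("size_vram", 0)
    print(f"{m['name']}: {total/1e9:.1f} GB total, {vram/1e9:.1f} GB in GPU memory")
```

A 3B model fully offloaded should be fast on an M3, so a near-zero GPU figure there would point at a configuration problem rather than a hardware limit.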
2024-12-23T21:15:21
https://www.reddit.com/r/LocalLLaMA/comments/1hkxm3t/ollama_llama323b_runs_extremely_slow_on_my_mbp/
Amazydayzee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkxm3t
false
null
t3_1hkxm3t
/r/LocalLLaMA/comments/1hkxm3t/ollama_llama323b_runs_extremely_slow_on_my_mbp/
false
false
self
0
null
Guys am I crazy or is this paper totally batshit haha
88
2024-12-23T21:22:26
http://dx.doi.org/10.13140/RG.2.2.32495.34727
Kappa-chino
dx.doi.org
1970-01-01T00:00:00
0
{}
1hkxrgq
false
null
t3_1hkxrgq
/r/LocalLLaMA/comments/1hkxrgq/guys_am_i_crazy_or_is_this_paper_totally_batshit/
false
false
default
88
null
Synthetic data generation
4
Hi, I have about $2000 in OpenAI credits which are about to expire in a few days. I was wondering if there is a turnkey way to generate a domain-specific dataset in a specific format? I don't want to pay anything apart from the OpenAI credits I already have. Thank you
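A minimal sketch of spending the credits down on generation before they expire; the seed topics, output format, and model choice are placeholders to adapt to your domain:

```python
import json

from openai import OpenAI  # pip install openai

client = OpenAI()

# Placeholder seed topics -- replace with prompts sampled from your domain.
seeds = ["contract clause interpretation", "lease termination notice"]

with open("synthetic.jsonl", "a") as out:
    for topic in seeds:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Write one instruction/response training pair about the given topic as JSON with keys 'instruction' and 'response'."},
                {"role": "user", "content": topic},
            ],
            response_format={"type": "json_object"},  # forces valid JSON output
        )
        pair = json.loads(resp.choices[0].message.content)
        out.write(json.dumps({"topic": topic, **pair}) + "\n")
```

If a few days is enough lead time, the Batch API runs the same requests at half price, which stretches the credits considerably further.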
2024-12-23T22:10:58
https://www.reddit.com/r/LocalLLaMA/comments/1hkyri3/synthetic_data_generation/
Whyme-__-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkyri3
false
null
t3_1hkyri3
/r/LocalLLaMA/comments/1hkyri3/synthetic_data_generation/
false
false
self
4
null
Looking for 'AI' DJ or similar for large collection of MP3 files.
6
I download my music and use it in a music player I developed with Electron. Is there an AI model on Hugging Face or Ollama that I could use to get a list of MP3s that would sound good when played back to back? I can fade them in and out programmatically; maybe there is a small embedding model for audio that might be able to achieve this. Another question: is there a good model for audio-to-lyrics transcription, for searching based on lyrics? Thanks!
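I'm not aware of a turnkey model for this, but a crude baseline is to summarize each track as an audio feature vector and pick the nearest neighbour as the next song. A hedged sketch using mean MFCCs via librosa (paths are placeholders; a learned audio-embedding model such as CLAP should do better):

```python
from pathlib import Path

import librosa  # pip install librosa
import numpy as np

def track_vector(path, duration=60.0):
    # Load the first minute and summarise it as a mean MFCC vector.
    y, sr = librosa.load(path, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

paths = sorted(Path("music").glob("*.mp3"))  # placeholder music folder
vecs = np.stack([track_vector(p) for p in paths])

def next_track(i):
    # Cosine similarity against every other track; highest wins.
    v = vecs[i] / np.linalg.norm(vecs[i])
    sims = (vecs @ v) / np.linalg.norm(vecs, axis=1)
    sims[i] = -1.0  # never pick the current track again
    return paths[int(np.argmax(sims))]

print(next_track(0))
```

For the lyrics question, transcribing each track once with a local Whisper model and indexing the text is the usual route.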
2024-12-23T22:32:48
https://www.reddit.com/r/LocalLLaMA/comments/1hkz82z/looking_for_ai_dj_or_similar_for_large_collection/
Hidden1nin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkz82z
false
null
t3_1hkz82z
/r/LocalLLaMA/comments/1hkz82z/looking_for_ai_dj_or_similar_for_large_collection/
false
false
self
6
null
Unveiling LLMs: What They Are Not.
0
There are many ways to understand something, and one of them, as curious as it may seem, is knowing what that something *is not*. Recently, we’ve been bombarded with fantastic claims about LLMs. Some have proven true, others not so much, and many, in my opinion, feel more like marketing buzzwords—which, I must admit, has become quite tiresome. I want to use this space for you to collaborate and contribute to the community by sharing what LLMs *definitely are not*. In other words, what they are not capable of doing or, if they can, in which tasks they are not the best option. Note: Obviously, the topic allows for jokes like "definitely not an air fryer," and depending on how creative that is, it might be funny. Otherwise, you’d just be being annoying. So, overall, please try to provide thoughtful responses.
2024-12-23T22:50:47
https://www.reddit.com/r/LocalLLaMA/comments/1hkzl93/unveiling_llms_what_they_are_not/
thecalmgreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkzl93
false
null
t3_1hkzl93
/r/LocalLLaMA/comments/1hkzl93/unveiling_llms_what_they_are_not/
false
false
self
0
null
Handle follow-up or clarifying questions in RAG scenarios (with ease)
18
There are several threads here on reddit, like this [one](https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/) and this [one](https://www.reddit.com/r/LangChain/comments/1djcvh0/chat_history_for_rag_how_to_search_for_follow_up/), that highlight challenges with effectively handling follow-up questions from a user, especially in RAG scenarios. Specifically, these are multi-turn conversations that can range from **Adjusting a Retrieval** **User:** What are the benefits of renewable energy? **Assistant:** Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind.... **User:** Include cost considerations in the response. **OR.....** # Clarifying a Response **User:** Can you tell me about the history of the internet? **Assistant:** The internet was developed from research programs like ARPANET in the late 1960s.... **User:** Can you focus on how ARPANET worked? **OR...** # Switching Intent **User:** What are the symptoms of diabetes? **Assistant:** Common symptoms include frequent urination, excessive thirst, fatigue, and blurry vision. **User:** How is it diagnosed? Most of these scenarios require carefully crafting, editing and optimizing prompts to an LLM to rewrite the follow-up query, extract relevant contextual information and then trigger retrieval to answer the question. The whole process is slow, error-prone and adds significant latency. [Arch](https://github.com/katanemo/archgw) (an intelligent gateway for agents) pushed out an update (0.1.7) to accurately handle multi-turn intent, extracting relevant contextual information and calling downstream developer APIs (aka function calling) in <500ms! Arch is an open-source infrastructure gateway for agents so that developers can focus on what matters most; it is engineered with purpose-built (fast) LLMs for the seamless integration of prompts with APIs (among other things). More details on how the multi-turn handling works: [https://docs.archgw.com/build\_with\_arch/multi\_turn.html](https://docs.archgw.com/build_with_arch/multi_turn.html) and you can run the demo here: [https://github.com/katanemo/archgw/tree/main/demos/multi\_turn\_rag\_agent](https://github.com/katanemo/archgw/tree/main/demos/multi_turn_rag_agent) The high-level architecture and request flow looks like this, and below is a sample multi-turn interaction that it can help developers build quickly. [Prompt to API processing handled via Arch Gateway](https://preview.redd.it/s61q7r39ho8e1.png?width=2626&format=png&auto=webp&s=97a4827bdc86663bbf52a8524a2d6e8f677d7c98) [Example of a multi-turn response handled via Arch](https://preview.redd.it/407oqppxeo8e1.png?width=1064&format=png&auto=webp&s=72ccdd6020de6ce229199e69727f01eeb1ae072b) **Disclaimer**: I am one of the core contributors to [https://github.com/katanemo/archgw](https://github.com/katanemo/archgw) \- and would love to answer any questions you may have.
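For reference, the manual baseline this post contrasts against is an extra LLM round trip that rewrites the follow-up into a standalone query before retrieval. A minimal sketch of that baseline (not Arch's implementation; the model name and endpoint are placeholders):

```python
from openai import OpenAI  # any OpenAI-compatible endpoint works here

client = OpenAI()

def rewrite_followup(history, followup):
    # Condense chat history plus the follow-up into one self-contained query.
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any instruct model works
        messages=[
            {"role": "system", "content": "Rewrite the user's follow-up as a standalone search query, resolving pronouns and references from the conversation."},
            {"role": "user", "content": f"{transcript}\nfollow-up: {followup}"},
        ],
    )
    return resp.choices[0].message.content

history = [
    {"role": "user", "content": "What are the symptoms of diabetes?"},
    {"role": "assistant", "content": "Frequent urination, excessive thirst, fatigue..."},
]
print(rewrite_followup(history, "How is it diagnosed?"))
# e.g. "How is diabetes diagnosed?" -- which then goes to the retriever
```

This extra round trip is exactly the latency that Arch's purpose-built models are meant to collapse.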
2024-12-23T22:56:48
https://www.reddit.com/r/LocalLLaMA/comments/1hkzpqv/handle_followup_or_clarifying_questions_in_rag/
AdditionalWeb107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hkzpqv
false
null
t3_1hkzpqv
/r/LocalLLaMA/comments/1hkzpqv/handle_followup_or_clarifying_questions_in_rag/
false
false
https://b.thumbs.redditm…2iYeUBbVky2o.jpg
18
{'enabled': False, 'images': [{'id': 'CumNe617pvfcpWpBOsseCcHSxcBsOZ4Uh2VdsiqcTN8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B7Gq3TBojGoD0HxG60BGdCyfc6FrWlgPXNkLc74WKEM.jpg?width=108&crop=smart&auto=webp&s=d36b9da2eee6e7b037090b9ed2f2ddecd5f0aea7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B7Gq3TBojGoD0HxG60BGdCyfc6FrWlgPXNkLc74WKEM.jpg?width=216&crop=smart&auto=webp&s=1106dfa8d0a666ae44a19faff993d523bffe790a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B7Gq3TBojGoD0HxG60BGdCyfc6FrWlgPXNkLc74WKEM.jpg?width=320&crop=smart&auto=webp&s=c69f38d4b8dc5523a3ad6d8904d4256be5c885e4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B7Gq3TBojGoD0HxG60BGdCyfc6FrWlgPXNkLc74WKEM.jpg?width=640&crop=smart&auto=webp&s=d29e47c86058f7b71518976a27025db70f5baa90', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B7Gq3TBojGoD0HxG60BGdCyfc6FrWlgPXNkLc74WKEM.jpg?width=960&crop=smart&auto=webp&s=26cb08cea73b19018b612f0cf8205559920caea9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B7Gq3TBojGoD0HxG60BGdCyfc6FrWlgPXNkLc74WKEM.jpg?width=1080&crop=smart&auto=webp&s=f69e57539cacd21fd190bb4e87ff56b18ec2d8dc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B7Gq3TBojGoD0HxG60BGdCyfc6FrWlgPXNkLc74WKEM.jpg?auto=webp&s=04d7f9134e8536422045391d81e801d68f815bc4', 'width': 1200}, 'variants': {}}]}
Are there aspects of VERY large parameter models that cannot be matched by smaller ones?
21
Bit of a random thought, but will small models eventually rival or outperform models like ChatGPT/Sonnet in every way, or will these super large models always hold an edge through sheer training scale? Possibly too early to tell? Just curious as a noob on the topic.
2024-12-23T23:52:44
https://www.reddit.com/r/LocalLLaMA/comments/1hl0t84/are_there_aspects_of_very_large_parameter_models/
Business_Respect_910
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl0t84
false
null
t3_1hl0t84
/r/LocalLLaMA/comments/1hl0t84/are_there_aspects_of_very_large_parameter_models/
false
false
self
21
null
Easiest way to get started with AI-assisted coding using local models (free, open-source)
9
Hey everyone 👋, I've been experimenting with ways to simplify my coding workflow using chat-based LLMs, and I wanted to share a tool I built called **gptree**. It's a lightweight CLI tool designed to streamline project context sharing for coding tasks—perfect if you're using any local model or chat-based LLM for coding assistance. # What does gptree do? If you're working on coding projects and want AI to assist with tasks like debugging, expanding functionality, or writing new features, providing the right context is key. That's where `gptree` comes in: * **Generates a file tree** for your project, respecting `.gitignore` to avoid unnecessary clutter. * Includes an **interactive mode** so you can select only the files you want to share. * Outputs a **text blob** of the file tree and the contents of selected files, ready to paste into any LLM prompt. This makes it the easiest, no-overhead way to start leveraging AI for coding—even if you're just getting started with local models. [Quick demo of GPTree — pasting straight into ChatGPT](https://i.redd.it/fkzxymsnvo8e1.gif) # Why use gptree? * **Quick Start for AI-Assisted Coding**: No complex integrations, just generate context and paste into your favorite LLM interface. * **Flexible**: Works with any local model (not just Llama-based ones) or cloud-based tools like ChatGPT. * **Efficient**: Keeps everything lightweight and respects your `.gitignore` to avoid bloated prompts. # Get Started The tool is open-source and easy to install: via Homebrew 🍺 (`brew tap travisvn/tap`, then `brew install gptree`) or via pipx (recommended for Python users) 🐍 (`pipx install gptree-cli`). Here's the GitHub repo if you want to check it out: [https://github.com/travisvn/gptree](https://github.com/travisvn/gptree) Let me know if you have any questions or ideas for improvements! I'd also love feedback on how this could work better for different local setups. If you find it helpful, a ⭐ on the GitHub repo would mean a lot and helps others discover the tool!
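For the curious, the core idea behind tools like this is small enough to sketch. A conceptual toy version (not gptree's actual implementation; the ignore set below is a naive stand-in for real `.gitignore` parsing, which gptree handles properly):

```python
from pathlib import Path

IGNORE = {".git", "node_modules", "__pycache__"}  # naive stand-in for .gitignore

def gather_context(root="."):
    # Concatenate every text file under root with a delimiter header per file.
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if any(seg in IGNORE for seg in path.parts) or not path.is_file():
            continue
        parts.append(f"--- {path} ---\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts)

print(gather_context("src"))  # paste the result into your LLM chat
```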
2024-12-24T00:18:59
https://www.reddit.com/r/LocalLLaMA/comments/1hl1bce/easiest_way_to_get_started_with_aiassisted_coding/
lapinjapan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl1bce
false
null
t3_1hl1bce
/r/LocalLLaMA/comments/1hl1bce/easiest_way_to_get_started_with_aiassisted_coding/
false
false
https://b.thumbs.redditm…frqx5Hb0mHqM.jpg
9
{'enabled': False, 'images': [{'id': 'o7fDBZVvuWfKUM6GNkJLzalihd9X7XbAjTdE682Rmh0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9anusGdl1RAEFy1KOaXYQypfcwZ_7CkkkbmVI5_GL48.jpg?width=108&crop=smart&auto=webp&s=1b2be49130f8d84dc1cf65ef5a5120a8c2fa9e08', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9anusGdl1RAEFy1KOaXYQypfcwZ_7CkkkbmVI5_GL48.jpg?width=216&crop=smart&auto=webp&s=9c4a072c1b3c0b3ff04ce459af5dfe70acf65873', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9anusGdl1RAEFy1KOaXYQypfcwZ_7CkkkbmVI5_GL48.jpg?width=320&crop=smart&auto=webp&s=0247338d9eb2edd9f93c5c78cd13e97406393bbb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9anusGdl1RAEFy1KOaXYQypfcwZ_7CkkkbmVI5_GL48.jpg?width=640&crop=smart&auto=webp&s=4c0b0e44881f959bcb0b3ee8f50ee52bfcd4e310', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9anusGdl1RAEFy1KOaXYQypfcwZ_7CkkkbmVI5_GL48.jpg?width=960&crop=smart&auto=webp&s=6c57345e6c96ee91f27e2b39027ce7882a1acd70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9anusGdl1RAEFy1KOaXYQypfcwZ_7CkkkbmVI5_GL48.jpg?width=1080&crop=smart&auto=webp&s=2a62210cc5731a302a27da5d9bb407754bc01db8', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/9anusGdl1RAEFy1KOaXYQypfcwZ_7CkkkbmVI5_GL48.jpg?auto=webp&s=df28ef5734d5612e8ad1c8d822b5d4a6cfaf63b1', 'width': 1280}, 'variants': {}}]}
llama 3.2 3B is amazing
371
This is the first small model that has worked this well for me, and it's actually usable. Its context window genuinely remembers what was said earlier without errors. It also handles Spanish very well (I haven't seen that since StableLM 3B), and all in Q4\_K\_M. Personally I'm using llama-3.2-3b-instruct-abliterated.Q4\_K\_M.gguf
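If anyone wants to try reproducing this, here's a minimal sketch loading that kind of GGUF with llama-cpp-python (assuming you've downloaded the file the poster names; the parameters are illustrative):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="llama-3.2-3b-instruct-abliterated.Q4_K_M.gguf",
    n_ctx=8192,       # Llama 3.2 supports long contexts; raise as memory allows
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Resume en una frase qué es un modelo de lenguaje."}]
)
print(out["choices"][0]["message"]["content"])
```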
2024-12-24T00:46:05
https://www.reddit.com/r/LocalLLaMA/comments/1hl1tso/llama_32_3b_is_amazing/
ventilador_liliana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl1tso
false
null
t3_1hl1tso
/r/LocalLLaMA/comments/1hl1tso/llama_32_3b_is_amazing/
false
false
self
371
null
Predictions for 2025?
138
2024 has been a wild ride with lots of development inside and outside AI. What are your predictions for this coming year?
2024-12-24T01:32:55
https://www.reddit.com/r/LocalLLaMA/comments/1hl2pd6/predictions_for_2025/
kidupstart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl2pd6
false
null
t3_1hl2pd6
/r/LocalLLaMA/comments/1hl2pd6/predictions_for_2025/
false
false
self
138
null
Where is qwen2.5 with tool training and 128k context?
0
Been down a rabbit hole trying to find the magic Qwen 2.5 32B or 14B model that actually has tool training, so that it's capable of using tools, and actually has 128k context, but I only seem to be able to find one or the other. I'm trying to find a version of this model that will actually work with the Cline or Roo Cline VSCode extensions being served over Ollama. The de facto version of Qwen 2.5 available through the Ollama models hub seems incapable of using tools, so Cline and Roo Cline tool/function calling just breaks. For the love of god, I want to like this model since so many people have had positive things to say about it, but I absolutely need tool usage and large context out of it. Can someone please point me in the direction of the correct version?
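One way to verify whether a given build actually honours tools before wiring it into Cline is a single tool-call probe through the Ollama Python client (a hedged sketch; the model tag is a placeholder for whichever variant you pull, and it assumes an Ollama version with tool support):

```python
import ollama  # pip install ollama

# A throwaway tool definition just to see if the model emits a tool call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = ollama.chat(
    model="qwen2.5:14b",  # placeholder tag -- substitute the variant you pulled
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# A tool-trained build should return a structured tool_calls entry here
# instead of answering in prose.
print(resp["message"])
```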
2024-12-24T01:36:16
https://www.reddit.com/r/LocalLLaMA/comments/1hl2rmk/where_is_qwen25_with_tool_training_and_128k/
waywardspooky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl2rmk
false
null
t3_1hl2rmk
/r/LocalLLaMA/comments/1hl2rmk/where_is_qwen25_with_tool_training_and_128k/
false
false
self
0
null
is speculative decoding useful in production?
1
[removed]
2024-12-24T01:41:14
https://www.reddit.com/r/LocalLLaMA/comments/1hl2utv/is_speculative_decoding_useful_in_production/
Klutzy_Psychology849
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl2utv
false
null
t3_1hl2utv
/r/LocalLLaMA/comments/1hl2utv/is_speculative_decoding_useful_in_production/
false
false
self
1
null
is speculative decoding useful in production?
0
Is it actually useful, or does it heavily depend as usual? If so, what are its use cases?
2024-12-24T01:48:48
https://www.reddit.com/r/LocalLLaMA/comments/1hl2zol/is_speculative_decoding_useful_in_production/
khaliiil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl2zol
false
null
t3_1hl2zol
/r/LocalLLaMA/comments/1hl2zol/is_speculative_decoding_useful_in_production/
false
false
self
0
null
Best and fastest 2-3b model I can run?
3
So this space changes so fast it's nuts. I have LM Studio and OpenWebUI running on my PC with an 8GB RTX 4060 GPU. I want to run a small model that is as fast as possible, and also as good as possible for text summarization and similar tasks, served as an API. I know there's unsloth, bnb, exllama, all these things; I'm just not up to date on what to run here. Currently I'm using LM Studio with their Gemma 2B. It's alright, but I assume there's a much better solution out there? Any help would be greatly appreciated.
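Whichever small model you settle on, LM Studio already exposes it through an OpenAI-compatible local server, so serving summarization "as an API" can be a few lines (a sketch assuming LM Studio's default port 1234; the model field is a placeholder, since LM Studio answers with whatever model is loaded):

```python
from openai import OpenAI  # pip install openai

# LM Studio's local server defaults to port 1234 and ignores the API key value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def summarize(text):
    resp = client.chat.completions.create(
        model="local-model",  # placeholder; the loaded model handles the request
        messages=[
            {"role": "system", "content": "Summarize the text in three bullet points."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content

print(summarize("Paste an article here..."))
```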
2024-12-24T01:52:59
https://www.reddit.com/r/LocalLLaMA/comments/1hl32g0/best_and_fastest_23b_model_i_can_run/
mstahh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl32g0
false
null
t3_1hl32g0
/r/LocalLLaMA/comments/1hl32g0/best_and_fastest_23b_model_i_can_run/
false
false
self
3
null
TimesFM, a 200M Time Series Foundation Model from Google
87
2024-12-24T01:56:23
https://huggingface.co/collections/google/timesfm-release-66e4be5fdb56e960c1e482a6
mlon_eusk-_-
huggingface.co
1970-01-01T00:00:00
0
{}
1hl34mr
false
null
t3_1hl34mr
/r/LocalLLaMA/comments/1hl34mr/timesfm_a_200m_time_series_foundation_model_from/
false
false
https://b.thumbs.redditm…bxTYV52aVkkQ.jpg
87
{'enabled': False, 'images': [{'id': 'gjsHnsN7uOjyZeU9BnnUvQ3M2dc2w0xrs2AburDi9Fo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j-dTV6i1y5BbgUhQb0vnDfKltzju5tN7J3SCuuw3Ark.jpg?width=108&crop=smart&auto=webp&s=da04179597a53aa5a09b842ec323377260a70bf5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/j-dTV6i1y5BbgUhQb0vnDfKltzju5tN7J3SCuuw3Ark.jpg?width=216&crop=smart&auto=webp&s=863a787c21e9b6c036afd36b560d4561ab2912a5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/j-dTV6i1y5BbgUhQb0vnDfKltzju5tN7J3SCuuw3Ark.jpg?width=320&crop=smart&auto=webp&s=5f59e9ae6943d28f790a9756897571a7e2c7ff84', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/j-dTV6i1y5BbgUhQb0vnDfKltzju5tN7J3SCuuw3Ark.jpg?width=640&crop=smart&auto=webp&s=d13901960e63d9f766c17d8c0226b22c0e7761f2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/j-dTV6i1y5BbgUhQb0vnDfKltzju5tN7J3SCuuw3Ark.jpg?width=960&crop=smart&auto=webp&s=af781d05dd9ef71e86b7c06721e5583d85ec419d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/j-dTV6i1y5BbgUhQb0vnDfKltzju5tN7J3SCuuw3Ark.jpg?width=1080&crop=smart&auto=webp&s=0084ab90eaf53bc92371db0c9aa20c429389c6f1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/j-dTV6i1y5BbgUhQb0vnDfKltzju5tN7J3SCuuw3Ark.jpg?auto=webp&s=c83e8f146c7bb7a1dbd6fabbf1705a709ced139b', 'width': 1200}, 'variants': {}}]}
This might be a dumb question but how many bits are in a token?
121
I'm new to LLMs, but I keep hearing people talk about token prices and context windows measured in tokens. Is there a set number of bits per token? Does it vary by model? Can it vary within a single model?
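Short version: a token is an index into a model-specific vocabulary, so the text it covers is variable-length while the ID itself needs only about ceil(log2(vocab size)) bits, and vocab size varies by model. A quick demonstration with OpenAI's tiktoken tokenizer:

```python
import math

import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
ids = enc.encode("internationalization vs. cat")
for i in ids:
    # Each token ID maps back to a text chunk of very different length.
    print(i, repr(enc.decode([i])))

# Storing one token ID takes ceil(log2(vocab size)) bits regardless of
# how many characters of text that token covers:
print(math.ceil(math.log2(enc.n_vocab)), "bits per token id")
```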
2024-12-24T02:18:38
https://www.reddit.com/r/LocalLLaMA/comments/1hl3iwa/this_might_be_a_dumb_question_but_how_many_bits/
KnownDairyAcolyte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl3iwa
false
null
t3_1hl3iwa
/r/LocalLLaMA/comments/1hl3iwa/this_might_be_a_dumb_question_but_how_many_bits/
false
false
self
121
null
Hunyuan fp8 on a 12 GB 3080 can produce mobile quality gifs in 10 minutes
51
[Default prompt from this workflow: https://civitai.com/models/1048302?modelVersionId=1176230](https://reddit.com/link/1hl3tg0/video/btpt2s97po8e1/player) I followed [this guide first](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/), and with some extra finagling (updating, then cloning and installing [custom nodes](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite)), I got the output here. On a desktop you can see the seams, but on mobile it should look okay; zoom out if not. All things considered, it works surprisingly well. Generations take 9 minutes 30 seconds to 11 minutes on my machine. Later iterations are slower than earlier ones, and this compounding effect seems worse at higher tile counts.
2024-12-24T02:35:20
https://www.reddit.com/r/LocalLLaMA/comments/1hl3tg0/hunyuan_fp8_on_a_12_gb_3080_can_produce_mobile/
Emergency-Walk-2991
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl3tg0
false
null
t3_1hl3tg0
/r/LocalLLaMA/comments/1hl3tg0/hunyuan_fp8_on_a_12_gb_3080_can_produce_mobile/
false
false
self
51
null
model choices for agents?
0
What models are you finding useful for deploying agents to perform language-based tasks, including summarization, interpretation (in English) and sentiment analysis? Seems like YouTubers are creating interesting content around n8n and agentic AI workflows but are often calling out to OpenAI via API. Curious what your use cases and model choices have been - I’m particularly interested in surveying typical model size. Personally, I find that ~Q4 and ~8b or 11b models are meeting most of my needs - sometimes even less vram. What are your experiences?
2024-12-24T02:38:03
https://www.reddit.com/r/LocalLLaMA/comments/1hl3v45/model_choices_for_agents/
TellMeThing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl3v45
false
null
t3_1hl3v45
/r/LocalLLaMA/comments/1hl3v45/model_choices_for_agents/
false
false
self
0
null
I have a barely used 4070 Super (12GB). To achieve 24GB, is it cheaper to add a used 3060 12GB, or to sell my 4070 and buy a used 3090?
1
[removed]
2024-12-24T02:43:24
https://www.reddit.com/r/LocalLLaMA/comments/1hl3yfo/i_have_a_barely_used_4070_super_12gb_to_achieve/
manwiththe104IQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl3yfo
false
null
t3_1hl3yfo
/r/LocalLLaMA/comments/1hl3yfo/i_have_a_barely_used_4070_super_12gb_to_achieve/
false
false
self
1
null
We Should Be Swarm-Inferencing
11
Wanted to spark a discussion here. With O1 and O3 pushing the onus for quality improvement to inference time, doing so with a distributed network makes a ton of sense. Unlike training, inferencing is very, very parallelizable over multiple GPUs - even over a distributed network with milliseconds of latency. The live sharing packets are small, and we can probably make some distributed Ethereum-esque wrapper to ensure compute privacy and incentivize against freeloading. [https://news.ycombinator.com/item?id=42308590#42313885](https://news.ycombinator.com/item?id=42308590#42313885) >the equation for figuring what factor slower it would be is 1 / (1 + time to do transfers and trigger processing per each token in seconds). That would mean under a less ideal situation where the penalty is 5 milliseconds per token, the calculation will be \~0.99502487562 times what it would have been had it been done in a hypothetical single GPU that has all of the VRAM needed, but otherwise the same specifications. This penalty is also not very noticeable. So - no real significant loss from distributing. \--- Napkin math (courtesy of o1): \- likely around  100-200 PFLOPs of total compute available from consumer devices in the world with over 24GB VRAM \- running o3 at $50ish-per-inference low-compute mode estimates: 5-30 exaFLOPs \- o3 at high-compute SOTA mode, $5kish-per-inference estimate: 1-2 zetaFLOPs So, around 1000 inferences per day of o3 low-compute, 10 per day high-compute if the whole network could somehow be utilized. Of course it wouldn't, and of course all those numbers will change in efficiencies soon enough, but that's still a lot of compute in ballpark. Now, models \*can\* still be split up between multiple GPUs over the network, at somewhat higher risk of slowdown, which matters for e.g. if the base model is well above 24GB or if we want to use smaller GPUs/CPUs/legacy hardware. If we did that, our total compute can probably be stretched 2-5x if we were to network <24GB GPUs, CPUs and legacy hardware in a separate "slow pool". [https://chatgpt.com/share/676a1c7c-0940-8003-99dd-d24a1e9e01ed](https://chatgpt.com/share/676a1c7c-0940-8003-99dd-d24a1e9e01ed) \--- I've found a few similar projects, of which AI Horde seems the most applicable, but I'm curious if anyone else knows of any or has expertise in the area: [https://aihorde.net/](https://aihorde.net/) [https://boinc.berkeley.edu/projects.php](https://boinc.berkeley.edu/projects.php) [https://petals.dev/](https://petals.dev/) \--- Also, keep in mind there are significant new hardware architectures available down the line which forego the complexities and flexibilities of modern GPUs for just brute-force transformer inferencing on much cruder chip architectures. 10-100x speedups and 100-1000x energy efficiency gains potentially there, even before ternary adder stuff. Throw those on the distributed network and keep churning. They would be brittle for new model training, but might be quite enough for just brute force inference. [https://arxiv.org/pdf/2409.03384v1](https://arxiv.org/pdf/2409.03384v1) Analysis: [https://chatgpt.com/share/6721b626-898c-8003-aa5e-ebec9ea65e82](https://chatgpt.com/share/6721b626-898c-8003-aa5e-ebec9ea65e82) \--- SUMMARY: so, even if this network might not be much (realistically, like 1 good o3 query per day right now lol) it would still scale quite well as the world's compute capabilities increase, and be able to nearly compete with or surpass corporate offerings. 
If it's limited primarily to queries about sensitive topics that are important to the world and need to be provably NOT influenced by black-box corporate models, that's still quite useful. Can still use cheap datacenter compute for anything else, and run much more efficient models on the vast majority of lower-intelligence questions. Cheers and thanks for reading! \-W
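Sanity-checking the slowdown factor quoted from the HN comment, where t is the per-token transfer/trigger penalty in seconds:

```python
# Slowdown factor for distributed inference with a per-token penalty t (seconds),
# relative to a hypothetical single GPU with the same specs: 1 / (1 + t).
for t_ms in (1, 5, 20, 100):
    t = t_ms / 1000
    print(f"{t_ms:>4} ms/token penalty -> {1 / (1 + t):.4f}x throughput")
# 5 ms gives ~0.9950x, matching the ~0.99502487562 figure quoted above.
```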
2024-12-24T02:52:39
https://www.reddit.com/r/LocalLLaMA/comments/1hl449c/we_should_be_swarminferencing/
dogcomplex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl449c
false
null
t3_1hl449c
/r/LocalLLaMA/comments/1hl449c/we_should_be_swarminferencing/
false
false
self
11
null
am I going insane from AI model FOMO? o1-pro vs o1 vs o1-mini vs gpt-4o vs sonnet 3.5 v2 vs llama 3.3 (please tell me i'm not alone)
1
[removed]
2024-12-24T04:15:20
https://www.reddit.com/r/LocalLLaMA/comments/1hl5j9s/am_i_going_insane_from_ai_model_fomo_o1pro_vs_o1/
Charles_Boyle69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl5j9s
false
null
t3_1hl5j9s
/r/LocalLLaMA/comments/1hl5j9s/am_i_going_insane_from_ai_model_fomo_o1pro_vs_o1/
false
false
self
1
null
Doing a non-CS PhD, want to get hired in AI. What are my chances? I have extensive experience with local LLMs: running, serving, quantization, finetuning, building web apps based on LLMs, structured output using JSON and grammars, etc.
0
I'm doing a Ph.D. in quantitative marketing at the business school of a university with a great ranking. The university is especially famous for its ML research. My own research is about Large Language Models (LLMs), human-AI interaction, and the economics of AI (game theory analysis). I'll graduate in May and was wondering: do you think I have a chance of finding a job in AI-related positions? I'm mostly interested in research positions but am also open to more practical ones, as I have made LLM-powered websites and tools in the past. I've self-taught lots of CS topics and recently created a Lisp-like language as part of my research on LLMs. If you need additional info, I'm willing to share. Thank you in advance!
2024-12-24T04:21:44
https://www.reddit.com/r/LocalLLaMA/comments/1hl5n0h/doing_a_noncs_phd_want_to_get_hired_in_ai_what/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl5n0h
false
null
t3_1hl5n0h
/r/LocalLLaMA/comments/1hl5n0h/doing_a_noncs_phd_want_to_get_hired_in_ai_what/
false
false
self
0
null
Aider has released a new much harder code editing benchmark since their previous one was saturated. The Polyglot benchmark now tests on 6 different languages (C++, Go, Java, JavaScript, Python and Rust).
220
2024-12-24T04:23:12
https://i.redd.it/bp16i4ap3q8e1.png
jd_3d
i.redd.it
1970-01-01T00:00:00
0
{}
1hl5ntq
false
null
t3_1hl5ntq
/r/LocalLLaMA/comments/1hl5ntq/aider_has_released_a_new_much_harder_code_editing/
false
false
https://b.thumbs.redditm…iQvZ8XZ640us.jpg
220
{'enabled': True, 'images': [{'id': 'BqpG7qXhUiM7Pfmj62vRctwqFWCcYJTN4DVolcqES0Y', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bp16i4ap3q8e1.png?width=108&crop=smart&auto=webp&s=e24a5e3acb3362e5740b216eeede9c2ccba4d35b', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bp16i4ap3q8e1.png?width=216&crop=smart&auto=webp&s=d496aca2832c2bc32397173463314f99ecf7321f', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/bp16i4ap3q8e1.png?width=320&crop=smart&auto=webp&s=5ae5c52396920b4403324949ace7ce7062093f7f', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/bp16i4ap3q8e1.png?width=640&crop=smart&auto=webp&s=99c0c19194caec073461320c50cc318eb61b7612', 'width': 640}], 'source': {'height': 524, 'url': 'https://preview.redd.it/bp16i4ap3q8e1.png?auto=webp&s=b5c458dd723bfd3e603d80abbf180d4b68657b55', 'width': 935}, 'variants': {}}]}
What are your use cases for local LLM and the hardware you use?
26
I'm curious why people use local LLMs and what hardware they use (i.e. the money you put into it). I'm asking from a cost/benefit perspective. This is my hardware (a gaming build): - Ryzen 5 7600x - 4070 Ti 16gb - 32 gb ram ddr5 Software: - Ollama - OpenWebUI - Windows 10 I mostly use models that fit my 16gb vram, and here is my conclusion to date after a month of trying multiple models: no local build comes close to the cost/benefit of cloud options; the margin is big. I always come back to my paid Copilot in VSCode for coding. I always come back to my paid Gemini for everything else. I see a case for those proprietary models at ~$50 a month: an ever-evolving model, no maintenance, and access from everywhere. But why would someone build a local LLM setup, and how much are you pouring into it? I'm ready to invest in a better build, but I do not see the benefit compared to cloud solutions. I haven't tried private cloud yet, but I'm willing to compare the cost of running bigger models.
2024-12-24T05:24:23
https://www.reddit.com/r/LocalLLaMA/comments/1hl6nkf/what_are_your_use_cases_for_local_llm_and_the/
Polymath_314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl6nkf
false
null
t3_1hl6nkf
/r/LocalLLaMA/comments/1hl6nkf/what_are_your_use_cases_for_local_llm_and_the/
false
false
self
26
null
Llama 3.2 says it can try to modify or write its own code to bypass restrictions
1
[removed]
2024-12-24T07:08:46
https://i.redd.it/4jaxh46ixq8e1.jpeg
SheeTheyMaut
i.redd.it
1970-01-01T00:00:00
0
{}
1hl87kp
false
null
t3_1hl87kp
/r/LocalLLaMA/comments/1hl87kp/llama_32_says_it_can_try_to_modify_or_write_its/
false
false
https://b.thumbs.redditm…SHX_xsAUgDHU.jpg
1
{'enabled': True, 'images': [{'id': '8M6OlryXV8gpw3JylH594esesNa8lz_R5tkYr0vf4Lk', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4jaxh46ixq8e1.jpeg?width=108&crop=smart&auto=webp&s=d5027a029ac3b994f6d3425eccbf6a36f26f9482', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4jaxh46ixq8e1.jpeg?width=216&crop=smart&auto=webp&s=f0d9f45a229694c9da384b7029bacc70b2d3cb53', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/4jaxh46ixq8e1.jpeg?width=320&crop=smart&auto=webp&s=34ac54cdfdab1f6a78ecb2d85797b147c47281a2', 'width': 320}], 'source': {'height': 2160, 'url': 'https://preview.redd.it/4jaxh46ixq8e1.jpeg?auto=webp&s=d7fe2758ab82035782ef66a44103402d719c6991', 'width': 402}, 'variants': {}}]}
Llama 3.2 says it can try to modify or write its own code to bypass restrictions
1
[removed]
2024-12-24T07:11:31
https://www.reddit.com/r/LocalLLaMA/comments/1hl88xh/llama_32_says_it_can_try_to_modify_or_write_its/
SheeTheyMaut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl88xh
false
null
t3_1hl88xh
/r/LocalLLaMA/comments/1hl88xh/llama_32_says_it_can_try_to_modify_or_write_its/
false
false
self
1
null
Fine-tuning an LLM on a New Language for Long-Context RAG Question Answering
1
[removed]
2024-12-24T07:43:01
https://www.reddit.com/r/LocalLLaMA/comments/1hl8oq8/finetuning_an_llm_on_a_new_language_for/
BackgroundLow3793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl8oq8
false
null
t3_1hl8oq8
/r/LocalLLaMA/comments/1hl8oq8/finetuning_an_llm_on_a_new_language_for/
false
false
self
1
null
How to run Qwen on my mobile using executorch?
1
[removed]
2024-12-24T07:49:29
https://www.reddit.com/r/LocalLLaMA/comments/1hl8s27/how_to_run_qwen_on_my_mobile_using_executorch/
No_South_1521
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl8s27
false
null
t3_1hl8s27
/r/LocalLLaMA/comments/1hl8s27/how_to_run_qwen_on_my_mobile_using_executorch/
false
false
self
1
null
Trying to build a RAG chat bot, turned into my worse nightmare
1
[removed]
2024-12-24T09:14:27
https://www.reddit.com/r/LocalLLaMA/comments/1hl9wdv/trying_to_build_a_rag_chat_bot_turned_into_my/
ruth5031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl9wdv
false
null
t3_1hl9wdv
/r/LocalLLaMA/comments/1hl9wdv/trying_to_build_a_rag_chat_bot_turned_into_my/
false
false
self
1
null
My plan to build a water cooled 3x5090 box
1
[removed]
2024-12-24T09:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1hl9ylu/my_plan_to_build_a_water_cooled_3x5090_box/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl9ylu
false
null
t3_1hl9ylu
/r/LocalLLaMA/comments/1hl9ylu/my_plan_to_build_a_water_cooled_3x5090_box/
false
false
https://b.thumbs.redditm…LAchT8iRUUvU.jpg
1
null
[Tool] A tiny utility I made for better coding prompts with local files
8
I'm no Santa, but coding workflows are quite cumbersome if you want to run something outside the IDE. I made this tiny tool that lets me pick files from my projects and formats them properly with delimiters for prompts. Nothing fancy, it just saves me a bunch of clicks and runs locally. Figured some of you might find it useful too. [https://github.com/Recklesz/FileAggregator-for-LLMs](https://github.com/Recklesz/FileAggregator-for-LLMs)
2024-12-24T09:19:37
https://www.reddit.com/r/LocalLLaMA/comments/1hl9ysj/tool_a_tiny_utility_i_made_for_better_coding/
lessis_amess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hl9ysj
false
null
t3_1hl9ysj
/r/LocalLLaMA/comments/1hl9ysj/tool_a_tiny_utility_i_made_for_better_coding/
false
false
self
8
{'enabled': False, 'images': [{'id': 'Z1g5juYwPgp6-03auksKqfmeoqR6GhUx6GAOJWP58Rk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VSTVM7ORc-HIMRZMjYl4aEz1A-N0z_2be_GYpcV4CJ4.jpg?width=108&crop=smart&auto=webp&s=55e2337f8f183e7de084cc787efdbec407c4137a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VSTVM7ORc-HIMRZMjYl4aEz1A-N0z_2be_GYpcV4CJ4.jpg?width=216&crop=smart&auto=webp&s=120601510afc3c54761b31f057155f8d4ae900e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VSTVM7ORc-HIMRZMjYl4aEz1A-N0z_2be_GYpcV4CJ4.jpg?width=320&crop=smart&auto=webp&s=00e053913c81102f05d5601ea87db40c118f94e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VSTVM7ORc-HIMRZMjYl4aEz1A-N0z_2be_GYpcV4CJ4.jpg?width=640&crop=smart&auto=webp&s=7a707a8d8240bf8bef66e51441c63133f2e7692d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VSTVM7ORc-HIMRZMjYl4aEz1A-N0z_2be_GYpcV4CJ4.jpg?width=960&crop=smart&auto=webp&s=b2fc357f78322672ac83226cad59fc46032dcac7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VSTVM7ORc-HIMRZMjYl4aEz1A-N0z_2be_GYpcV4CJ4.jpg?width=1080&crop=smart&auto=webp&s=80bf876ffbbc17491aeab16b107b85674cf635cd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VSTVM7ORc-HIMRZMjYl4aEz1A-N0z_2be_GYpcV4CJ4.jpg?auto=webp&s=78bf6c7c9c1b036858a0dac0cf89fb0b54b32586', 'width': 1200}, 'variants': {}}]}