title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
what's the smartest model right now? | 1 | [removed] | 2024-12-13T02:09:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hd1sg1/whats_the_smartest_model_right_now/ | Ok-Engineering5104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd1sg1 | false | null | t3_1hd1sg1 | /r/LocalLLaMA/comments/1hd1sg1/whats_the_smartest_model_right_now/ | false | false | self | 1 | null |
Best Model to parse real estate description | 1 | [removed] | 2024-12-13T02:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hd1v1x/best_model_to_parse_real_estate_description/ | Zealousideal-Put-452 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd1v1x | false | null | t3_1hd1v1x | /r/LocalLLaMA/comments/1hd1v1x/best_model_to_parse_real_estate_description/ | false | false | self | 1 | null |
FuseChat-3.0: Preference Optimization for Implicit Model Fusion | 30 | [https://huggingface.co/collections/FuseAI/fusechat-30-6752d18dec430bad7a236a75](https://huggingface.co/collections/FuseAI/fusechat-30-6752d18dec430bad7a236a75)
>We present FuseChat-3.0, a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. To achieve this fusion, we utilized four powerful source LLMs: Gemma-2-27B-It, Mistral-Large-Instruct-2407, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct. For the target LLMs, we employed three widely-used smaller models—Llama-3.1-8B-Instruct, Gemma-2-9B-It, and Qwen-2.5-7B-Instruct—along with two even more compact models—Llama-3.2-3B-Instruct and Llama-3.2-1B-Instruct. The implicit model fusion process involves a two-stage training pipeline comprising Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between target and source LLMs, and Direct Preference Optimization (DPO) for learning preferences from multiple source LLMs.
https://preview.redd.it/5e5o521n0j6e1.png?width=1076&format=png&auto=webp&s=2d8dfcca937dccc447c79f6447acf75f0286ae61
https://preview.redd.it/u8quwmgh0j6e1.png?width=771&format=png&auto=webp&s=33644000d4a501d7434d010dc1922c01b75cb3d7
https://preview.redd.it/evfm8mgh0j6e1.png?width=727&format=png&auto=webp&s=ecb672f87f41ae964271a1d8de2c7e5cdf0322e0
https://preview.redd.it/rult4lih0j6e1.png?width=781&format=png&auto=webp&s=ca2da35a56872058ff160e56cb0e06a270c8b5c6
https://preview.redd.it/do01imgh0j6e1.png?width=773&format=png&auto=webp&s=5c19a3a29313ea159dbc26aea9a86cc3eadb1ed3
| 2024-12-13T02:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hd22cq/fusechat30_preference_optimization_for_implicit/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd22cq | false | null | t3_1hd22cq | /r/LocalLLaMA/comments/1hd22cq/fusechat30_preference_optimization_for_implicit/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'EuA5EepC5epMZBw0PQCrzl03JvOtiRp1YCvjeU3bIfo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B1kABb1NsuxjMEvsNAmyuVzNknjjNdXv7WidR2ITb9U.jpg?width=108&crop=smart&auto=webp&s=44ac284a5a155e55f8c047ce837ca0040960586d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B1kABb1NsuxjMEvsNAmyuVzNknjjNdXv7WidR2ITb9U.jpg?width=216&crop=smart&auto=webp&s=42f1f516e81adac8064e3c7a4b0629f6200ff4d2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B1kABb1NsuxjMEvsNAmyuVzNknjjNdXv7WidR2ITb9U.jpg?width=320&crop=smart&auto=webp&s=d60deda30842c8c4bb9b6ca180690db40719e9cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B1kABb1NsuxjMEvsNAmyuVzNknjjNdXv7WidR2ITb9U.jpg?width=640&crop=smart&auto=webp&s=a15ae619f49645ee929ae3e26c4bd7efa8f54acf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B1kABb1NsuxjMEvsNAmyuVzNknjjNdXv7WidR2ITb9U.jpg?width=960&crop=smart&auto=webp&s=8afe787904e825acbe11f00a30a3deedc42f6da0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B1kABb1NsuxjMEvsNAmyuVzNknjjNdXv7WidR2ITb9U.jpg?width=1080&crop=smart&auto=webp&s=c1f428ab3764538ff21b08b8bd6978a3366a20e4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B1kABb1NsuxjMEvsNAmyuVzNknjjNdXv7WidR2ITb9U.jpg?auto=webp&s=dc707cdba319d2cf3f4dd6e1c99bd790d54ab428', 'width': 1200}, 'variants': {}}]} |
|
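As an illustration of the DPO stage mentioned in the FuseChat-3.0 description above, here is a generic sketch of the Direct Preference Optimization loss in plain PyTorch. This is not FuseChat's actual training code; the tensor names and the beta value are assumptions, shown only to make the two-stage pipeline (SFT, then DPO over preferences from multiple source models) concrete.

```python
# A minimal, illustrative sketch of the DPO objective (not FuseChat's training code).
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is a tensor of summed per-token log-probabilities of the
    chosen / rejected completion under the trainable policy or the frozen
    reference model (here, the SFT-initialized target LLM).
    """
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = pi_logratios - ref_logratios
    # Push the policy to prefer the chosen answer by a larger margin
    # than the reference model does.
    return -F.logsigmoid(beta * logits).mean()
```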
Models that are not multilingual tend to die and be worse at everything | 0 | No, English is not the language of the world. Trying to unify people under a single language is nothing new, and English, despite being widespread, does not contain all the good knowledge in the world. It has a lot, but other languages carry approaches and small linguistic nuances that, even if not fundamental, are rich foundations for LLMs. And that's apart from cultural issues, etc.
Furthermore, it is interesting that models can perform logical reasoning in different languages; this seems to me to touch on how a model can "think" when performing intelligent tasks.
Additionally, there is the practical criterion: people who don't speak English don't want to (or can't) learn it just to use an LLM.
This is just an opinion; I don't claim to have the final word, and I would like to know whether more people share it.
| 2024-12-13T02:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hd24qw/models_that_are_not_multilingual_tend_to_die_and/ | Existing_Freedom_342 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd24qw | false | null | t3_1hd24qw | /r/LocalLLaMA/comments/1hd24qw/models_that_are_not_multilingual_tend_to_die_and/ | false | false | self | 0 | null |
good university online Reinforcement learning course | 2 | Hi,
I am looking for a US university course on reinforcement learning, something with less theory and more practical content.
I looked at Stanford's and talked to someone who took it there, but it had a lot more theory and proofs than practical training (at least that's the feedback I got).
Does anyone have a good suggestion?
Thank you for your response | 2024-12-13T02:32:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hd28lp/good_university_online_reinforcement_learning/ | Curious_me_too | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd28lp | false | null | t3_1hd28lp | /r/LocalLLaMA/comments/1hd28lp/good_university_online_reinforcement_learning/ | false | false | self | 2 | null |
Google… Where is Gemma3? | 134 | Microsoft has delivered Phi 4. It is now your turn. | 2024-12-13T02:40:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hd2dt3/google_where_is_gemma3/ | appakaradi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd2dt3 | false | null | t3_1hd2dt3 | /r/LocalLLaMA/comments/1hd2dt3/google_where_is_gemma3/ | false | false | self | 134 | null |
Best model for instruction following to date | 6 | Which models do you recommend that follow output format instructions the best?
I have a 3090. Currently using Qwen 2.5 32B at 4_k_m and I was curious if there were better models or even better smaller models that people are using that I should play with.
Use cases vary, but the most recent instruction I was playing with was to answer the question with one word, such as True or False. | 2024-12-13T02:41:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hd2eh2/best_model_for_instruction_following_to_date/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd2eh2 | false | null | t3_1hd2eh2 | /r/LocalLLaMA/comments/1hd2eh2/best_model_for_instruction_following_to_date/ | false | false | self | 6 | null |
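For the one-word True/False case described above, one option is to constrain decoding with a GBNF grammar so the model cannot emit anything else, regardless of how well it follows instructions. A minimal sketch with llama-cpp-python; the model path is a placeholder:

```python
from llama_cpp import Llama, LlamaGrammar

# Grammar that only allows the literal strings "True" or "False".
grammar = LlamaGrammar.from_string('root ::= "True" | "False"')

llm = Llama(model_path="Qwen2.5-32B-Instruct-Q4_K_M.gguf", n_ctx=4096)  # placeholder path

out = llm(
    "Answer with one word, True or False: the Earth orbits the Sun.\nAnswer:",
    grammar=grammar,
    max_tokens=4,
    temperature=0.0,
)
print(out["choices"][0]["text"].strip())  # -> "True" or "False"
```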
Structured World Generation with local LLMs | 1 | 2024-12-13T02:56:24 | https://horenbergerb.github.io/2024/12/11/world-map-exploration.html | eatbeans2 | horenbergerb.github.io | 1970-01-01T00:00:00 | 0 | {} | 1hd2olf | false | null | t3_1hd2olf | /r/LocalLLaMA/comments/1hd2olf/structured_world_generation_with_local_llms/ | false | false | default | 1 | null |
|
InternLM-XComposer/InternLM-XComposer-2.5-OmniLive at main · InternLM/InternLM-XComposer | 1 | 2024-12-13T03:17:40 | https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.5-OmniLive | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hd32x4 | false | null | t3_1hd32x4 | /r/LocalLLaMA/comments/1hd32x4/internlmxcomposerinternlmxcomposer25omnilive_at/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'puRyEzAILyyDoLAQHgRMBiq1fP8xKF7r1QKU9BIX19w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WRcjqOzF3WzBC7Q5zrPrIDp_WwSxH-7fopDwdJDB0lM.jpg?width=108&crop=smart&auto=webp&s=58e44de473ceeef4f5cc0c51944e73a177865cc0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WRcjqOzF3WzBC7Q5zrPrIDp_WwSxH-7fopDwdJDB0lM.jpg?width=216&crop=smart&auto=webp&s=3d148d57a2e7fbe9197b4d83812fbd17cf97e6dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WRcjqOzF3WzBC7Q5zrPrIDp_WwSxH-7fopDwdJDB0lM.jpg?width=320&crop=smart&auto=webp&s=2af8cc0b3737ede6be2a3fefdd89363a3ffea5ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WRcjqOzF3WzBC7Q5zrPrIDp_WwSxH-7fopDwdJDB0lM.jpg?width=640&crop=smart&auto=webp&s=52d00baa7c190ed51e885162f69d1efb927bcb00', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WRcjqOzF3WzBC7Q5zrPrIDp_WwSxH-7fopDwdJDB0lM.jpg?width=960&crop=smart&auto=webp&s=c42a47bb5858dcee40efa65e9f03e48308a0088a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WRcjqOzF3WzBC7Q5zrPrIDp_WwSxH-7fopDwdJDB0lM.jpg?width=1080&crop=smart&auto=webp&s=dc2c58a98e2c4abfb2d5035b73a2b2f32fac9800', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WRcjqOzF3WzBC7Q5zrPrIDp_WwSxH-7fopDwdJDB0lM.jpg?auto=webp&s=3144866c660effcacaa2eb7a309b62dc867bdf5c', 'width': 1200}, 'variants': {}}]} |
||
How far do you think LLM has developed? | 1 | [removed] | 2024-12-13T03:46:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hd3lus/how_far_do_you_think_llm_has_developed/ | Due_Profession_2828 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd3lus | false | null | t3_1hd3lus | /r/LocalLLaMA/comments/1hd3lus/how_far_do_you_think_llm_has_developed/ | false | false | self | 1 | null |
Best VLM in the market ?? | 3 | Hi everyone ,
So my use case is to accept one or two images as input and output text. My prompts will mostly be things like:
Describe image
Describe about certain objects in image
Detect the particular highlighted object
Give coordinates of detected object
Segment the object in image
Differences between two images in objects
Count the number of particular objects in an image
I'm new to LLMs and VLMs, and I want to know which VLM is best suited to this kind of use case.
Please suggest the best open-source VLMs available; it would help me a lot. | 2024-12-13T03:47:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hd3mfa/best_vlm_in_the_market/ | Hot-Hearing-2528 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd3mfa | false | null | t3_1hd3mfa | /r/LocalLLaMA/comments/1hd3mfa/best_vlm_in_the_market/ | false | false | self | 3 | null |
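For the describe/count style prompts listed above, a quick way to try an open-weight VLM locally is the Ollama Python client with any vision-capable model you have pulled. This is a hedged sketch only: the model name and image path are placeholders, and pixel-accurate coordinates or segmentation masks usually need a specialist detector/segmenter rather than a general VLM.

```python
import ollama  # pip install ollama; assumes a local Ollama server with a vision model pulled

response = ollama.chat(
    model="llama3.2-vision",          # placeholder: any vision-capable model you have pulled
    messages=[{
        "role": "user",
        "content": "Describe this image and count the number of people in it.",
        "images": ["photo.jpg"],      # local image path(s)
    }],
)
print(response["message"]["content"])
```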
Introducing Methception & Llam@ception - Level up your RP experience | 1 | [removed] | 2024-12-13T03:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hd3q4r/introducing_methception_llamception_level_up_your/ | Konnect1983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd3q4r | false | null | t3_1hd3q4r | /r/LocalLLaMA/comments/1hd3q4r/introducing_methception_llamception_level_up_your/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_V2vTMRWkhTNh-ZQ2JKFoKrFSAIPHWc06v-osK3SFg4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/9HYMIeNC_SMYAa4yp7_nSg35eo2bMPHczpbaZANL0PY.jpg?width=108&crop=smart&auto=webp&s=6978f7af74a456b5343d7fece402b16032bc3bc2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/9HYMIeNC_SMYAa4yp7_nSg35eo2bMPHczpbaZANL0PY.jpg?width=216&crop=smart&auto=webp&s=fc2936978fcab3298a33433644099d8c5f9374b6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/9HYMIeNC_SMYAa4yp7_nSg35eo2bMPHczpbaZANL0PY.jpg?width=320&crop=smart&auto=webp&s=4946ba26542552205add57fa5586284cde53d62d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/9HYMIeNC_SMYAa4yp7_nSg35eo2bMPHczpbaZANL0PY.jpg?width=640&crop=smart&auto=webp&s=7b97ad48b231c028d69421c0e2430fff9221b218', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/9HYMIeNC_SMYAa4yp7_nSg35eo2bMPHczpbaZANL0PY.jpg?width=960&crop=smart&auto=webp&s=f6dd0ca86af50f989866fc12cddaeee8ebf79f31', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/9HYMIeNC_SMYAa4yp7_nSg35eo2bMPHczpbaZANL0PY.jpg?width=1080&crop=smart&auto=webp&s=8e255d3cd236814f5c9a5305d9e2c2e8efd97996', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/9HYMIeNC_SMYAa4yp7_nSg35eo2bMPHczpbaZANL0PY.jpg?auto=webp&s=1dca7df43cd213a20910876b34ba6903afac56bc', 'width': 1200}, 'variants': {}}]} |
32B models with an M4 PRO 24GB: cutting it too close? [I'm sharing my tests and research]? Also: anecdotally, how much better are the 72/32B/14B compared to each other? | 18 | TL;DR:
1. What are your experiences with the 14B Qwen2.5 coder instruct models versus 32B/72B?
2. Any quantitative tests on the performance of 32B:Q4, IQ4\_XS, or Q4\_K\_S? Relative to 14B:Q8?
3. Should I keep my Mini M4 PRO 24GB, return for 48GB, a Studio M2 MAX 32GB, a M2 Ultra 64GB, a M4 MAX 64GB, x2 3090s on an old i5-3770k 16GB? The last three options are more than I can really afford right now, but I'm tempted.
I recently decided my 2020 i5 Intel MBP 16GB isn't enough to suit my needs. It won't even run Windsurf or Cursor without slowing down, and I don't want them to store embeddings of my code (which they do), or even really trust them or their partners with my code and prompts. So I decided to run this stuff locally with Zed and/or Continue.dev.
I don't like paying full price for my gear, so the M4 Pro 24GB mini at 14% off new at Micro Center seemed like good value.
So, I wanted to see the time and space differences between some quants and settings (like flash attention) on the 24GB/512 M4 PRO mini I'm trying out (still within the return window). I paid $1200 for it new (14% off). Below you'll find my token generation speed results, each the mean of 3 samples, along with some other details. Prompt: "give me a concise response with only a fizz buzz solution":
|Model|Quant|KV Cache|Flash Att'n|Context|T/S (95% CI)|Disk|VRAM w/ Context|
|:-|:-|:-|:-|:-|:-|:-|:-|
|Qwen 2.5 Coder 32B|Q4\_0|f16|False|8K|11.81 ±0.18|19GB|21GB|
|Qwen 2.5 Coder 32B|Q4\_0|q4\_0|True|8K|8.70 ±0.14|19GB|20GB|
|Qwen 2.5 Coder 32B|Q4\_K\_S|f16|False|8K|10.27 ±0.28|19GB|22GB|
|Qwen 2.5 Coder 32B|Q4\_K\_S|q4\_0|True|8K|8.17 ±0.12|19GB|20GB|
|Qwen 2.5 Coder 32B|IQ4\_XS|f16|False|8K|9.53 ±0.04|17GB|20GB|
|Qwen 2.5 Coder 32B|IQ4\_XS|q4\_0|True|8K|7.52 ±0.10|17GB|19GB|
|Qwen 2.5 Coder 32B|IQ3\_XXS|f16|False|16K|9.67 ±0.29|12GB|19GB|
|Qwen 2.5 Coder 32B|IQ3\_XXS|q4\_0|True|16K|7.16 ±0.63|12GB|15GB|
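For reference, a comparable tokens-per-second measurement can be scripted with llama-cpp-python. This is a hedged sketch only, not the exact setup used for the numbers in the table above; the model path and settings are placeholders.

```python
import time
from llama_cpp import Llama

# Placeholder path/settings; flash_attn toggles flash attention as in the table above.
llm = Llama(model_path="qwen2.5-coder-32b-instruct-q4_0.gguf",
            n_ctx=8192, n_gpu_layers=-1, flash_attn=False)

prompt = "give me a concise response with only a fizz buzz solution"

start = time.perf_counter()
out = llm(prompt, max_tokens=256, temperature=0.0)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated / elapsed:.2f} tokens/sec")
```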
So what I have found out is that in terms of speed, \~10 t/s is probably my **threshold of tolerance** with using an LLM for coding tasks. It certainly feels slow with code, though acceptable for prose in my opinion.
In terms of quant, I find IQ3\_XXS reduces quality too much, but that's just my subjective experience with these models. Does anyone know of any benchmark tests of the Qwen2.5 series at different quants? All I could find is [this](https://www.reddit.com/r/LocalLLaMA/comments/1cdxjax/i_created_a_new_benchmark_to_specifically_test/).
Also, I find that flash attention with a q4 KV cache ruins these models, and a q8 cache is just as much of a performance hit (23.5%), so the space savings don't seem worth it for 32B. This seems to be backed up by the Ollama docs for Qwen2 models [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache), though this [post](https://smcleod.net/2024/12/bringing-k/v-context-quantisation-to-ollama/) says that q8\_0 kv\_cache is not problematic like q4\_0 is. And in the long term, it's my understanding that future models might suffer more from quantization as the number of training tokens increases (and it has been increasing exponentially); see [here](https://arxiv.org/html/2411.17691v2).
This leads me to speculate that Qwen 2.5 Coder 32B might be barely runnable now in 4-bit form, but maybe Qwen 3 32B will need 5-bit, 6-bit, or 8-bit, which the M4 Pro 24GB won't be able to do. That said, if I keep the 24GB M4 Pro, I'll probably upgrade to a Mac Studio this coming summer and not lose too much when selling it, as I bought it at 14% off and it will still be the current model. I digress!
Running Q4\_K\_S gives my OS just 2GB to work with when I limit it as such (it only lets me have an editor open and maybe one browser tab). IQ4\_XS would give my OS 3GB to work with, so I can have Spotify running and that's about it!
So that leaves me with working within this constraint, or going with smaller models. Without as many formal tests, this is what I get with the smaller models:
|Model|Quant|KV Cache|Flash Att'n|Context|T/S|
|:-|:-|:-|:-|:-|:-|
|Qwen 2.5 Coder 14B|IQ4\_XS|f16|False|8K|20|
|Qwen 2.5 Coder 14B|IQ4\_XS|q4|True|8K|14|
|Qwen 2.5.1 Coder 7B|Q6\_K\_L|f16|False|8K|25|
|Qwen 2.5.1 Coder 7B|Q6\_K\_L|q4|True|8K|18|
|Qwen 2.5.1 Coder 7B|Q8|f16|True|8K|28|
|Qwen 2.5.1 Coder 7B|Q8|q4|True|8K|21|
I find these speeds acceptable for coding. But I don't find the models as smart.
So I'm thinking about:
1. Keep the M4 PRO 24GB/512 model that I got a good deal on and wait for the M4 Max (40-core) Studio. It's likely just 7 months away and retail, judging by the current lineup and the last one, should be around $2299-$2499 with 48 or 64GB. I like this idea best, but it's not here yet.
2. Return the M4 PRO for the 48GB/1TB model ($1800 on sale) so that I can run the 32B models, although at a barely tolerable speed, but with plenty of room for some more context and without having to be so vigilant about what I have open.
3. Buy a base 32GB M2 MAX Studio ($1700 refurbished). It's 20% faster (so without flash attention it should be around 11-12 tps, judging by [this](https://github.com/ggerganov/llama.cpp/discussions/4167)). However, I'm buying this as a general purpose portable computer to replace my laptop, and a Mac mini would be better for that than a Studio.
4. Suffer the $$$ pain and buy a base 64GB M2 Ultra Studio ($3400 refurb) or M4 MAX (40-core) 64GB laptop 14" ($3899 retail). These two options should have about the same performance compared to each other. Compared to the M4 Pro, about double the token generation, triple the prompt processing speeds. I could also run the 72B model at the same speed as the 32B in the M4.
5. Another option is my old desktop: an i5-3770K, 16GB of RAM, a Samsung 830 128GB SSD, and a 550W Corsair (41A) power supply. Also an RTX 2060 6GB, but that's only about 3B-model territory. My motherboard supports PCIe 3.0 x16 in a single slot or x8/x8 for dual cards. However, on Newegg and elsewhere I see refurbished 3090s hovering around $1100. I also don't like the idea of buying used on eBay, and I don't know if my ATX case (Antec 300) can fit a 3090. But it's my understanding that one of these would be about 4x faster than the M4 in token generation, and about 10x in prompt processing, comparing with the link above and [this one](https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference). If I need a more powerful power supply, and I think I probably do, it seems to make sense to get one that would allow me to upgrade to dual cards; just one card would be much faster, but offers little in terms of more than 8K context if I run it headless, maybe 12K. Also, it's more hassle to set up the server. So, if I go this route it makes more sense to go all in and get dual 3090s. That needs at least a 1200 watt power supply from what I understand, and I imagine it would be much louder than the above 3 options (the mini is barely noticeable during inference). With dual cards, I could run the 32B version with a large context. So roughly $2400 with the PSU.
Lastly, I also do a lot of photo editing with DXO, which uses Apple's neural engine for denoising. The mini beats the M2 Ultra in that respect. It also roughly doubles single-core CPU performance and matches multithreaded CPU performance. I plan to use whatever Apple Silicon I get next as my general purpose computer. So with option #5, I'll still need a Mac, but probably could get a MacBook Air or base M4 mini. So that $1300-2400 now becomes more like $2100-3200 or something.
This is all getting rather expensive to try and run a local, offline version of windsurf or cursor. I see the appeal in these tools for $10-20/mo lol.
Am I missing anything? What do you all recommend? Comments welcome! | 2024-12-13T04:11:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hd41x4/32b_models_with_an_m4_pro_24gb_cutting_it_too/ | noless15k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd41x4 | false | null | t3_1hd41x4 | /r/LocalLLaMA/comments/1hd41x4/32b_models_with_an_m4_pro_24gb_cutting_it_too/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': '6x7IpmOcVFTnkC0AlnOktKeqlr87Blv8tTmBMS2RuDQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qpUQ4b_NuHNY_TUwn9CAMXTKL7sl6nIMZIubKZavkDs.jpg?width=108&crop=smart&auto=webp&s=be05d04cb8477f2533df60e6989877950056cd16', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qpUQ4b_NuHNY_TUwn9CAMXTKL7sl6nIMZIubKZavkDs.jpg?width=216&crop=smart&auto=webp&s=3f391b00867025e76de2a5d9ecb3093aca6c1e9d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qpUQ4b_NuHNY_TUwn9CAMXTKL7sl6nIMZIubKZavkDs.jpg?width=320&crop=smart&auto=webp&s=5d06e6ade7447ed2df9f2143462dd99e255069ac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qpUQ4b_NuHNY_TUwn9CAMXTKL7sl6nIMZIubKZavkDs.jpg?width=640&crop=smart&auto=webp&s=27975e5561d29824058aed6f8fafbe42aeee66ab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qpUQ4b_NuHNY_TUwn9CAMXTKL7sl6nIMZIubKZavkDs.jpg?width=960&crop=smart&auto=webp&s=4ed1e6bb9fdf9b4aca68a9b5aef90bd8e9d1c51b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qpUQ4b_NuHNY_TUwn9CAMXTKL7sl6nIMZIubKZavkDs.jpg?width=1080&crop=smart&auto=webp&s=2490d46e9629a23bd7dd64ad59fd42b95dc48479', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qpUQ4b_NuHNY_TUwn9CAMXTKL7sl6nIMZIubKZavkDs.jpg?auto=webp&s=bf23cc476b985c14d87d1ade2352ccfbdbec2724', 'width': 1200}, 'variants': {}}]} |
What the hell Qwen, I thought you were my friend! :'( | 1 | 2024-12-13T04:24:25 | bolaft | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hd4aec | false | null | t3_1hd4aec | /r/LocalLLaMA/comments/1hd4aec/what_the_hell_qwen_i_thought_you_were_my_friend/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'rH8liHTZBmpFddAARNb9CxmWcWJcq82Pp4z-pwvmgB4', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/6p0zpyw0mj6e1.png?width=108&crop=smart&auto=webp&s=049e653237707957a1525e6dc024b39fade44499', 'width': 108}, {'height': 76, 'url': 'https://preview.redd.it/6p0zpyw0mj6e1.png?width=216&crop=smart&auto=webp&s=1b769fdb6177283085896ba1f5bc14ed98b8b513', 'width': 216}, {'height': 113, 'url': 'https://preview.redd.it/6p0zpyw0mj6e1.png?width=320&crop=smart&auto=webp&s=3ebfb54fa9186aeb82bdf16dc47ffe8716484cbc', 'width': 320}, {'height': 226, 'url': 'https://preview.redd.it/6p0zpyw0mj6e1.png?width=640&crop=smart&auto=webp&s=2151f2ce3b10f154ea3dd7c973767e9a71868910', 'width': 640}, {'height': 339, 'url': 'https://preview.redd.it/6p0zpyw0mj6e1.png?width=960&crop=smart&auto=webp&s=91615994ed626f77aaa2426c0dfa60735295b6dc', 'width': 960}, {'height': 381, 'url': 'https://preview.redd.it/6p0zpyw0mj6e1.png?width=1080&crop=smart&auto=webp&s=22549d89e49e7ed69b2eac24cce3d173c5649bc5', 'width': 1080}], 'source': {'height': 499, 'url': 'https://preview.redd.it/6p0zpyw0mj6e1.png?auto=webp&s=82afd555ae83feaadd1613437aaeeb4dd6b1cac7', 'width': 1412}, 'variants': {}}]} |
|||
Models Occasionally Changing Language (to Mandarin) | 1 | [removed] | 2024-12-13T04:39:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hd4jvx/models_occasionally_changing_language_to_mandarin/ | Emotional-Pilot-9898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd4jvx | false | null | t3_1hd4jvx | /r/LocalLLaMA/comments/1hd4jvx/models_occasionally_changing_language_to_mandarin/ | false | false | self | 1 | null |
Essential Learning Resources for Transformers. | 4 | Hey everyone! I hope you’re all doing well. I've been digging deep into transformers, but I keep running into a few questions and uncertainties. If you have any resources that helped you really grasp the concepts, I’d love to hear about them. Thanks so much! | 2024-12-13T05:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hd4xi3/essential_learning_resources_for_transformers/ | Ai_Peep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd4xi3 | false | null | t3_1hd4xi3 | /r/LocalLLaMA/comments/1hd4xi3/essential_learning_resources_for_transformers/ | false | false | self | 4 | null |
Mlc llm scaling | 0 | Mlc seems to scale well with multi gpu setups. Is there a tldr why other projects like llama.cpp are unable to scale? https://github.com/mlc-ai/llm-perf-bench
| 2024-12-13T05:38:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hd5ivu/mlc_llm_scaling/ | Nicollier88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd5ivu | false | null | t3_1hd5ivu | /r/LocalLLaMA/comments/1hd5ivu/mlc_llm_scaling/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'jKjKTzGeKqkavBA6BJJVdAappKeOC9JjKluuNk1_u40', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-Wi-lqv3ncLDRJOA_o9e1bZaX0f5O-xIK0JQMLyOOzM.jpg?width=108&crop=smart&auto=webp&s=9da30125543e483e8134cef0b996e3cbed2bf60c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-Wi-lqv3ncLDRJOA_o9e1bZaX0f5O-xIK0JQMLyOOzM.jpg?width=216&crop=smart&auto=webp&s=47f3fd75ac087704bf7a06e074e1572599b17510', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-Wi-lqv3ncLDRJOA_o9e1bZaX0f5O-xIK0JQMLyOOzM.jpg?width=320&crop=smart&auto=webp&s=0b9c832261fe6c0058609600482cd06c49299028', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-Wi-lqv3ncLDRJOA_o9e1bZaX0f5O-xIK0JQMLyOOzM.jpg?width=640&crop=smart&auto=webp&s=881c627625a0c5ccfeb09108ad2c3b743b924560', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-Wi-lqv3ncLDRJOA_o9e1bZaX0f5O-xIK0JQMLyOOzM.jpg?width=960&crop=smart&auto=webp&s=6ee4c1cd37c84d5483d02a72b9819d7319515817', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-Wi-lqv3ncLDRJOA_o9e1bZaX0f5O-xIK0JQMLyOOzM.jpg?width=1080&crop=smart&auto=webp&s=5e98125522267ad6d679d2653174829044affa31', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-Wi-lqv3ncLDRJOA_o9e1bZaX0f5O-xIK0JQMLyOOzM.jpg?auto=webp&s=fc37b2980428e41af24ab9da0faf56bed75d9507', 'width': 1200}, 'variants': {}}]} |
Fast & reliable Title & Tag generation for Open webui | 1 | Since I have been using 32B models, the default title and tag generation configuration has been really slow and sometimes unreliable, especially when the first prompt is very long. So, I reconfigured it, and here is a step by step tutorial on how I did it:
**Open WebUI Version v0.3.35**
# Step 1: Create a Dedicated Model for Title & Tag Generation
1. **Navigate to the "Workspace"**
2. **Click on the "Create a model" option.**
3. **Name Your Model:** Assign a descriptive name to your model (e.g., "title\_gen"). This name is for your reference.
# Step 2: Choose a Base Model
1. **Select a Suitable Base Model:**
* Recommended options: `llama-3.2-3B` or `qwen2.5-3B`.
* In this example, we will use `qwen2.5:3b-instruct-q8_0`.
# Step 3: Configure Advanced Parameters
1. **Access "Advanced Params":** Locate and click on the "Advanced Params" section of the model settings.
2. **Set "Context Length":** Set the context length to 5000. This allows the model to process a larger chat history (4000 tokens in this case). You can decrease it if you have less VRAM.
3. **Set "Max Tokens":** Set "Max Tokens" to 128. This ensures that the model will always generate very small amount of text.
4. **Set "num\_gpu":** Set "num\_gpu" to 256. This will keep your model loaded into VRAM, speeding up processing.
# Step 4: Click the "Save & Update" button at the bottom right corner
# Step 5: Set the Task Model
1. Go to the "Admin Panel" in Open WebUI.
2. Click on "Settings" within the Admin Panel.
3. Go to the "interface" tab.
4. Find the "Set Task Model" option
5. Change both "Local models" and "external models" to the model you just created (e.g., "title\_gen"). This will tell the Open WebUI to use your model for these tasks.
# Step 6: Define Title and Tag Generation Prompts
Title Generation Prompt:
Please disregard all previous instructions.
Here is the query:
{{prompt:start:2000}} {{prompt:end:2000}}
Generate a concise title (no more than 5 words) that accurately reflects the main theme or topic of the query. Emojis can be used to enhance understanding but avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.
Examples of titles:
📉 Stock Market Trends
🍪 Perfect Chocolate Chip Recipe
Evolution of Music Streaming
Remote Work Productivity Tips
Artificial Intelligence in Healthcare
🎮 Video Game Development Insights
Tags Generation Prompt:
### Chat History:
{{prompt:start:2000}} {{prompt:end:2000}}
### Task:
Generate 1-3 broad tags categorizing the main themes of the chat history, along with 1-3 more specific subtopic tags.
### Guidelines:
- Start with high-level domains (e.g. Science, Technology, Philosophy, Arts, Politics, Business, Health, Sports, Entertainment, Education)
- Consider including relevant subfields/subdomains if they are strongly represented throughout the conversation
- If content is too short (less than 3 messages) or too diverse, use only ["General"]
- Use the chat's primary language; default to English if multilingual
- Prioritize accuracy over specificity
### Output:
JSON format: { "tags": ["tag1", "tag2", "tag3"] }
Note: You can translate these prompts into other languages if you want the title & tags to always be generated in a certain language.
# Step 7: Click the "Save" button at the bottom right corner. | 2024-12-13T06:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hd64n3/fast_reliable_title_tag_generation_for_open_webui/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd64n3 | false | null | t3_1hd64n3 | /r/LocalLLaMA/comments/1hd64n3/fast_reliable_title_tag_generation_for_open_webui/ | false | false | self | 1 | null |
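If you want to sanity-check the title prompt and token limit before wiring them into Open WebUI, you can hit the base model directly. This is a hedged sketch against Ollama's OpenAI-compatible endpoint; the URL, model name, and example query are assumptions that mirror the setup above.

```python
import requests

TITLE_PROMPT = """Please disregard all previous instructions.

Here is the query:
How do I set up a reverse proxy for my local LLM server?

Generate a concise title (no more than 5 words) that accurately reflects the main
theme or topic of the query. RESPOND ONLY WITH THE TITLE TEXT."""

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",   # Ollama's OpenAI-compatible endpoint
    json={
        "model": "qwen2.5:3b-instruct-q8_0",         # the base model used above
        "messages": [{"role": "user", "content": TITLE_PROMPT}],
        "max_tokens": 128,
        "temperature": 0.2,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```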
The "big data" mistake for agents | 56 | 2024-12-13T06:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hd65th/the_big_data_mistake_for_agents/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd65th | false | null | t3_1hd65th | /r/LocalLLaMA/comments/1hd65th/the_big_data_mistake_for_agents/ | false | false | 56 | {'enabled': False, 'images': [{'id': '_vdbq5o2nRySv8DwOIDlNA4UmWMahTz-j_yjGfynD9M', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/1sq0Pqf_xuhNKgJRaa7OmBaiKt0-SNBlkS9-FxinUm4.jpg?width=108&crop=smart&auto=webp&s=b80790fc6fdfd12fcbd04b744cdca0cea38a237b', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/1sq0Pqf_xuhNKgJRaa7OmBaiKt0-SNBlkS9-FxinUm4.jpg?auto=webp&s=0ea82a31fe298ce1c6aaced513a9bd724c4fa78a', 'width': 200}, 'variants': {}}]} |
||
Whisper/Piper that have Wyoming protocol and also API Rest | 1 | [removed] | 2024-12-13T07:27:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hd741q/whisperpiper_that_have_wyoming_protocol_and_also/ | denisJosh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd741q | false | null | t3_1hd741q | /r/LocalLLaMA/comments/1hd741q/whisperpiper_that_have_wyoming_protocol_and_also/ | false | false | self | 1 | null |
Struggling to Create a Harry Potter Sequel Using LLMs – What’s the Best Approach? | 0 | I recently completed all the Harry Potter books and wanted to create a sequel to *Deathly Hallows* to explore what happens next. Initially, I thought of using **Notebook LM**, but its reasoning capabilities and ideas were far from useful. While the ideas weren't "trash," they just didn't work well. Next, I tried **ChatGPT**, but I faced the issue of context length limitations – I couldn't input all the books at once, and while its ideas were a bit better, they were still far from perfect.
Since the total text of all the books adds up to about 2 million tokens, I decided to go local with **OpenWebUI** and downloaded **Qwen QWQ**. Its ideas were definitely the best so far, but the model couldn't match the previous books due to poor embedding quality. I considered switching to a better embedding model, **dunzhang/stella\_en\_1.5B\_v5**, but I ran into some "technical issues" and couldn't test it (and before I go spend hours trying to fix this, I wanted to know if this is the best approach).
But I wanted to try **LoRA** or **fine-tuning** to add new knowledge to the model. I reached out to **ChatGPT** for help since I have no experience with fine-tuning or model training. ChatGPT explained **LoRA** (Low-Rank Adaptation), and it sounded like a good approach to adding extra knowledge to the LLM without affecting its general performance.
However, ChatGPT also warned me that with LoRA, if the model recalls an old piece of information incorrectly, it might generate flawed responses. For example, if I asked, "Do Harry and Ron ever become friends again?" and the model retrieves data where they are separated, it might incorrectly say, "No."
This led me to reconsider **fine-tuning**. I thought it might be a solution, as it would allow the model to go over all the books before generating a response. But, ChatGPT also mentioned that fine-tuning could cause the model to "forget" previous knowledge, which would make **QWQ** lose its previous performance
So now, I’m stuck: Do I stick with LoRA, try fine-tuning, or find another way to integrate all the book knowledge without causing memory loss or performance issues? Or do I try to fix the problems with embedding models? Any advice would be greatly appreciated! | 2024-12-13T07:48:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hd7dnb/struggling_to_create_a_harry_potter_sequel_using/ | AlgorithmicKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd7dnb | false | null | t3_1hd7dnb | /r/LocalLLaMA/comments/1hd7dnb/struggling_to_create_a_harry_potter_sequel_using/ | false | false | self | 0 | null |
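For the LoRA route being weighed above, the adapter setup itself is small. Here is a hedged sketch with Hugging Face PEFT; the base model id and hyperparameters are illustrative only, and this is not a claim that LoRA will fix the recall problem described.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/QwQ-32B-Preview"  # illustrative; use whatever base model you settle on
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

lora = LoraConfig(
    r=16,                      # adapter rank: higher = more capacity, more VRAM
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # only the adapter weights train; the base stays frozen
```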
Which is the best LLM for calories calculation and recipes | 0 | Hey guys,
I am adding a function to help calculate the calories of a meal or suggest recipes for users. Which open-source LLM is best at that, or is there any model that you have been using for this purpose?
Thanks | 2024-12-13T08:27:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hd7vid/which_is_the_best_llm_for_calories_calculation/ | tuantruong84 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd7vid | false | null | t3_1hd7vid | /r/LocalLLaMA/comments/1hd7vid/which_is_the_best_llm_for_calories_calculation/ | false | false | self | 0 | null |
Searching function on ZIM Wikipedia file | 2 | Just think it would be pretty coolll. Offline and accuracy on subjects. Thoughts? Is this available | 2024-12-13T08:35:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hd7z17/searching_function_on_zim_wikipedia_file/ | FewCartographer903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd7z17 | false | null | t3_1hd7z17 | /r/LocalLLaMA/comments/1hd7z17/searching_function_on_zim_wikipedia_file/ | false | false | self | 2 | null |
What video card to get for an initial LLM test computer? | 0 | I am trying to do some initial prototypes with Llama 3.3 8B/70B. The goal is to query some data currently in Google Cloud SQL, feed it to Llama 3.3 in a decently sized context (1000 words), and get some summaries. I want to wrap the functionality in some custom REST API calls. What kind of GPU and specs would you recommend? Planning to spend between $2k and $6k, with $4k being the ideal price. I can build a machine from scratch, or buy used parts from eBay if need be. Trying to avoid cloud machines since I want this particular machine running inside my network. Thanks! | 2024-12-13T09:01:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hd8adn/what_video_card_to_get_for_an_initial_llm_test/ | rburhum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd8adn | false | null | t3_1hd8adn | /r/LocalLLaMA/comments/1hd8adn/what_video_card_to_get_for_an_initial_llm_test/ | false | false | self | 0 | null |
Is this true ? Well might be | 383 | 2024-12-13T09:05:54 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hd8cod | false | null | t3_1hd8cod | /r/LocalLLaMA/comments/1hd8cod/is_this_true_well_might_be/ | false | false | 383 | {'enabled': True, 'images': [{'id': '1pQm2B8jxVXZqyH4hk2fILEHZ_OPlr6P_zCiqdlPp84', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ujypzcpd0l6e1.png?width=108&crop=smart&auto=webp&s=ca0c7d57127efe84e405f0c0db4b0487f5ee1764', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ujypzcpd0l6e1.png?width=216&crop=smart&auto=webp&s=0944d4cdcdc44413da1ac0f970e98f0521c3fc26', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ujypzcpd0l6e1.png?width=320&crop=smart&auto=webp&s=ac6f3d314ddceb6363142552f4f897054d257471', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ujypzcpd0l6e1.png?width=640&crop=smart&auto=webp&s=f0e5943b22c9e75534b0f17163862d1774d56ddf', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ujypzcpd0l6e1.png?width=960&crop=smart&auto=webp&s=b5c556b414dcb96bed215b2bde0c9f48ff26cecc', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ujypzcpd0l6e1.png?width=1080&crop=smart&auto=webp&s=9adc21f8acf9fd4660c44d4bb7ca1e1e769e45cd', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://preview.redd.it/ujypzcpd0l6e1.png?auto=webp&s=f256d7bb157f24a8e6d5dc119915e2b5ebabf633', 'width': 1080}, 'variants': {}}]} |
|||
Chunkify -- a script and GUI for local document translation, summarization, correction and distillation | 31 | 2024-12-13T09:59:10 | https://github.com/jabberjabberjabber/Chunkify/ | Eisenstein | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hd90d9 | false | null | t3_1hd90d9 | /r/LocalLLaMA/comments/1hd90d9/chunkify_a_script_and_gui_for_local_document/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'Bt74I-bcNBG8BsHQLtXlUB2cM6plWoDUF6yOg6ot9Mg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MX-2OHwXUU83SBIIqXHZGhfRtNdWaINWHYtHBj4rpc0.jpg?width=108&crop=smart&auto=webp&s=8b5b2794774a3522649f8236c36cc774800a946c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MX-2OHwXUU83SBIIqXHZGhfRtNdWaINWHYtHBj4rpc0.jpg?width=216&crop=smart&auto=webp&s=0d5c487ec7f471e78e82b6773e85ed67c5c81c24', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MX-2OHwXUU83SBIIqXHZGhfRtNdWaINWHYtHBj4rpc0.jpg?width=320&crop=smart&auto=webp&s=6f3ee64b6ced19f99278336f781a822c0aed3e63', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MX-2OHwXUU83SBIIqXHZGhfRtNdWaINWHYtHBj4rpc0.jpg?width=640&crop=smart&auto=webp&s=94d8785936c762cbfaeb860af3d397b568cfb499', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MX-2OHwXUU83SBIIqXHZGhfRtNdWaINWHYtHBj4rpc0.jpg?width=960&crop=smart&auto=webp&s=fe77720b373af961ef89024719ced291d0caf096', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MX-2OHwXUU83SBIIqXHZGhfRtNdWaINWHYtHBj4rpc0.jpg?width=1080&crop=smart&auto=webp&s=e17de15d70c9de7ecb171f11919db44089665312', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MX-2OHwXUU83SBIIqXHZGhfRtNdWaINWHYtHBj4rpc0.jpg?auto=webp&s=b9e8eb95f210e7e9d06fdca1bf43ee377da22e36', 'width': 1200}, 'variants': {}}]} |
||
Does unsloth really delivers its promises? | 1 | [removed] | 2024-12-13T10:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hd9g7e/does_unsloth_really_delivers_its_promises/ | SykenZy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd9g7e | false | null | t3_1hd9g7e | /r/LocalLLaMA/comments/1hd9g7e/does_unsloth_really_delivers_its_promises/ | false | false | self | 1 | null |
Simplifying Fine-Tuning: Introducing TUNE – Your One-Stop Platform for Local and Cloud LLM Customisation | 1 | [removed] | 2024-12-13T10:35:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hd9hat/simplifying_finetuning_introducing_tune_your/ | AhmadMirza17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd9hat | false | null | t3_1hd9hat | /r/LocalLLaMA/comments/1hd9hat/simplifying_finetuning_introducing_tune_your/ | false | false | self | 1 | null |
InternLM-XComposer2.5-OmniLive, a comprehensive multimodal system for long-term streaming video and audio interactions. | 1 | [removed] | 2024-12-13T11:08:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hd9y79/internlmxcomposer25omnilive_a_comprehensive/ | InternLM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd9y79 | false | null | t3_1hd9y79 | /r/LocalLLaMA/comments/1hd9y79/internlmxcomposer25omnilive_a_comprehensive/ | false | false | 1 | null |
|
InternLM-XComposer2.5-OmniLive, a comprehensive multimodal system for long-term streaming video and audio interactions. | 1 | [removed] | 2024-12-13T11:08:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hd9y7g/internlmxcomposer25omnilive_a_comprehensive/ | InternLM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd9y7g | false | null | t3_1hd9y7g | /r/LocalLLaMA/comments/1hd9y7g/internlmxcomposer25omnilive_a_comprehensive/ | false | false | self | 1 | null |
InternLM-XComposer2.5-OmniLive is here! | 1 | [removed] | 2024-12-13T11:14:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hda0no/internlmxcomposer25omnilive_is_here/ | InternLM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hda0no | false | null | t3_1hda0no | /r/LocalLLaMA/comments/1hda0no/internlmxcomposer25omnilive_is_here/ | false | false | self | 1 | null |
InternLM-XComposer2.5-OmniLive, a comprehensive multimodal system for long-term streaming video and audio interactions. | 1 | [removed] | 2024-12-13T11:17:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hda2jc | false | null | t3_1hda2jc | /r/LocalLLaMA/comments/1hda2jc/internlmxcomposer25omnilive_a_comprehensive/ | false | false | default | 1 | null |
||
How can I deploy gpt4all to aws or sth ? | 1 | [removed] | 2024-12-13T11:29:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hda88i/how_can_i_deploy_gpt4all_to_aws_or_sth/ | Glum_View | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hda88i | false | null | t3_1hda88i | /r/LocalLLaMA/comments/1hda88i/how_can_i_deploy_gpt4all_to_aws_or_sth/ | false | false | self | 1 | null |
List your LLaMa compatible Agentic apps here for people to find out | 1 | [removed] | 2024-12-13T11:40:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hdae1s/list_your_llama_compatible_agentic_apps_here_for/ | kingai404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdae1s | false | null | t3_1hdae1s | /r/LocalLLaMA/comments/1hdae1s/list_your_llama_compatible_agentic_apps_here_for/ | false | false | self | 1 | null |
FP16 vs Q8/Q4: Unexpected Performance Discrepancies in AI Model Responses | 1 | Hello there,
I'm currently working with various AI models (e.g., LLaMA 3.3 70b instruct, Gemma 2 27B, qwen 2.5 72b) and have noticed a puzzling phenomenon. In some cases, I've found that using FP16 (which is supposed to be more precise) results in lower quality responses compared to Q8 or even Q4. Yes, you read that right - Q4 sometimes outperforms Q8, and FP16 doesn't always deliver the best results.
I'm talking about response quality here, not response speed. I've tried different models, and this issue seems to be model-agnostic. The number of possible quantization combinations (Q1\_0, Q5\_k\_s, Q5\_k\_m, etc.) is overwhelming, making it impractical to test every single one.
Has anyone else encountered this problem? How do you determine the optimal precision level for your AI models? Do you rely on trial and error, or are there some general guidelines or best practices that I'm missing?
I'd love to hear about your experiences and any insights you can share on this matter. Are there any specific factors that influence the performance of different quantization levels? How do you balance precision with computational resources?
Looking forward to your responses and hoping to learn from your collective expertise!
<3 | 2024-12-13T11:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hdaj3j/fp16_vs_q8q4_unexpected_performance_discrepancies/ | ArnaudPolitico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdaj3j | false | null | t3_1hdaj3j | /r/LocalLLaMA/comments/1hdaj3j/fp16_vs_q8q4_unexpected_performance_discrepancies/ | false | false | self | 1 | null |
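One pragmatic way to compare quantization levels is to fix the sampling (temperature 0, fixed seed) and run identical prompts across several GGUF files of the same model, then compare the outputs side by side. A hedged sketch with llama-cpp-python; the file names and prompt are placeholders.

```python
from llama_cpp import Llama

quants = {  # placeholder file names for the same model at different quantizations
    "Q4_K_M": "llama-3.3-70b-instruct-Q4_K_M.gguf",
    "Q8_0":   "llama-3.3-70b-instruct-Q8_0.gguf",
}
prompts = ["Explain the difference between TCP and UDP in three sentences."]

for name, path in quants.items():
    llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1, seed=42, verbose=False)
    for p in prompts:
        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": p}],
            temperature=0.0,          # deterministic-ish so differences come from the quant
            max_tokens=256,
        )
        print(f"--- {name} ---\n{out['choices'][0]['message']['content']}\n")
    del llm                           # free VRAM before loading the next quant
```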
Doing high quality Text-to-speech with just 363MB VRAM | 1 | 2024-12-13T11:51:37 | https://v.redd.it/eqpq7k7wtl6e1 | Kooky-Somewhere-2883 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hdak1q | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/eqpq7k7wtl6e1/DASHPlaylist.mpd?a=1736682711%2CODIzN2I1YzViYzY5Y2I2ZmI0YjUxYzMxMmIyZDJjZTgxYjBhYWVlMGI3YmY2NzJjYjhiZjM5YmVmYTUwZTI0Yw%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/eqpq7k7wtl6e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/eqpq7k7wtl6e1/HLSPlaylist.m3u8?a=1736682711%2CMzkyNjIwZTJiZjMyZGNjZTFhYWI1MDliMzRlOGRiNjlmZTQ2MzQ5NDM2N2QwZDZjYzU3ZjQyNmY5N2U1YmJhZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/eqpq7k7wtl6e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1444}} | t3_1hdak1q | /r/LocalLLaMA/comments/1hdak1q/doing_high_quality_texttospeech_with_just_363mb/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3.png?width=108&crop=smart&format=pjpg&auto=webp&s=495d68d8ab84b380a6cb66fa33c4c014e72ccdd5', 'width': 108}, {'height': 161, 'url': 'https://external-preview.redd.it/OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3.png?width=216&crop=smart&format=pjpg&auto=webp&s=b67587a75d6cd92f72c79dca59ac29c5000615dd', 'width': 216}, {'height': 239, 'url': 'https://external-preview.redd.it/OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3.png?width=320&crop=smart&format=pjpg&auto=webp&s=4d103aedf099535e373ad35c9077a98f289e8a5b', 'width': 320}, {'height': 478, 'url': 'https://external-preview.redd.it/OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3.png?width=640&crop=smart&format=pjpg&auto=webp&s=d5a6e67b2ab650c1c91ccb650b678d2b16eba12f', 'width': 640}, {'height': 718, 'url': 'https://external-preview.redd.it/OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3.png?width=960&crop=smart&format=pjpg&auto=webp&s=54b70c1bba73e9993b96540f9efef1c4e45486ed', 'width': 960}, {'height': 807, 'url': 'https://external-preview.redd.it/OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f09e9227506c8c187a1c3853eac3dc528ae281ca', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OTV4cHFoN3d0bDZlMZ0CnA1R9cwywCs7l3vezMPURrQI44Dko7lsbVm3lwh3.png?format=pjpg&auto=webp&s=ec27ff5c55c03ffb67a79d854a9f33f23f2a98ce', 'width': 1444}, 'variants': {}}]} |
||
⚡Ultra Compact Text-to-Speech: A Quantized F5TTS | 1 | 2024-12-13T11:53:14 | https://alandao.net/posts/ultra-compact-text-to-speech-a-quantized-f5tts/ | Kooky-Somewhere-2883 | alandao.net | 1970-01-01T00:00:00 | 0 | {} | 1hdakxj | false | null | t3_1hdakxj | /r/LocalLLaMA/comments/1hdakxj/ultra_compact_texttospeech_a_quantized_f5tts/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Q9Icx-UPjwso1OOQzZUaMs2jmk4oL4h-CrjJLnAxXtI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F9GZBTYw7DSj64x8A-FVibt5nOJnS3LFRxL-q6vHrVs.jpg?width=108&crop=smart&auto=webp&s=3ea991934d4a646aa07b04d819c3893f4ca78d47', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F9GZBTYw7DSj64x8A-FVibt5nOJnS3LFRxL-q6vHrVs.jpg?width=216&crop=smart&auto=webp&s=2557b3fcd3c701bf4036a22bcd4a0e216e32ef77', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F9GZBTYw7DSj64x8A-FVibt5nOJnS3LFRxL-q6vHrVs.jpg?width=320&crop=smart&auto=webp&s=a4371588858308b6ca11bebfaa206c629700a935', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F9GZBTYw7DSj64x8A-FVibt5nOJnS3LFRxL-q6vHrVs.jpg?width=640&crop=smart&auto=webp&s=a065c4739295c94b358c9d596ec34296f4e9f02d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F9GZBTYw7DSj64x8A-FVibt5nOJnS3LFRxL-q6vHrVs.jpg?width=960&crop=smart&auto=webp&s=398f9791a08964708f092cfe7c9bcce6bf5e6840', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/F9GZBTYw7DSj64x8A-FVibt5nOJnS3LFRxL-q6vHrVs.jpg?auto=webp&s=dc7330c9423b36b07d17f77ed825b2e2724cbb6a', 'width': 1024}, 'variants': {}}]} |
||
Looking for a couple users for our LLM setup | 1 | I'm searching for a handful of users for our LLM backend.
Basically, the gist is we have a public LLM backend but it's not well known using Open WebUI that includes API keys for OpenAI applications such as SillyTavern.
Currently we are running EVA QWEN 2.5 32B.
If you're interested private message me and I will send you the signup link. You don't have to use any real information, so long as you remember what you use on signup so you can login.
I will delete this thread after some users sign up, since we aren't looking to mass advertise the service.
You can send as many requests to the backend as you want, so long as you aren't intentionally trying to abuse the service. | 2024-12-13T11:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hdamh7/looking_for_a_couple_users_for_our_llm_setup/ | mayo551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdamh7 | false | null | t3_1hdamh7 | /r/LocalLLaMA/comments/1hdamh7/looking_for_a_couple_users_for_our_llm_setup/ | false | false | self | 1 | null |
Deepseek-ai/deepseek-vl2 · Hugging Face | 137 | 2024-12-13T12:17:30 | https://huggingface.co/deepseek-ai/deepseek-vl2 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hdaytv | false | null | t3_1hdaytv | /r/LocalLLaMA/comments/1hdaytv/deepseekaideepseekvl2_hugging_face/ | false | false | 137 | {'enabled': False, 'images': [{'id': 'iIQP76A9umYTviBEUkur7nTXBL15PIa_BegJbQ3cZvI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r_5qaxFA2YaF79cg5loMPM4KzIJu_2UJgZI8seyCVC4.jpg?width=108&crop=smart&auto=webp&s=ff3964068fc35779710a7c053363d4fd89ea9bcd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r_5qaxFA2YaF79cg5loMPM4KzIJu_2UJgZI8seyCVC4.jpg?width=216&crop=smart&auto=webp&s=71614d68abbe1b929eadd272af44c7b768817d2a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/r_5qaxFA2YaF79cg5loMPM4KzIJu_2UJgZI8seyCVC4.jpg?width=320&crop=smart&auto=webp&s=174c839df5e8d32fbd3dc4ebd09326e66cce3ead', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r_5qaxFA2YaF79cg5loMPM4KzIJu_2UJgZI8seyCVC4.jpg?width=640&crop=smart&auto=webp&s=ccc0562df688865cd1b872260ac8a9c4b438acbb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r_5qaxFA2YaF79cg5loMPM4KzIJu_2UJgZI8seyCVC4.jpg?width=960&crop=smart&auto=webp&s=5791d72dcfb7093be8734a480cf82f2dcf8fd00e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r_5qaxFA2YaF79cg5loMPM4KzIJu_2UJgZI8seyCVC4.jpg?width=1080&crop=smart&auto=webp&s=9078e92817895d2701e279a6442aec1024b5ecb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r_5qaxFA2YaF79cg5loMPM4KzIJu_2UJgZI8seyCVC4.jpg?auto=webp&s=276c4ba04afe0afa76b60c450a241b398dda6b16', 'width': 1200}, 'variants': {}}]} |
||
How to fine tune SQLcoder-7b-2 | 1 | [removed] | 2024-12-13T12:31:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hdb6gu/how_to_fine_tune_sqlcoder7b2/ | uchiha0324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdb6gu | false | null | t3_1hdb6gu | /r/LocalLLaMA/comments/1hdb6gu/how_to_fine_tune_sqlcoder7b2/ | false | false | self | 1 | null |
LLM Evaluation using Advent Of Code | 27 | Hi,
I made a small evaluation of the leading Open Llms on the first 10 days puzzles and wanted to share here the outcome.
The just released Gemini 2.0 Flash Experimental was added as a comparison with a leading API-only model.
Quick takeaways:
* Most models performed better in the first 5 days, with Mistral Large 2411 leading at 90.0%.
* There was a significant drop in performance for all models in the last 5 days, with Gemini 2.0 Flash Experimental maintaining the highest success ratio at 40.0%.
* Llama 3.3 70B Instruct and Gemini 2.0 Flash Experimental had the highest overall success ratios at 55.6%, while Qwen 2.5 72B Instruct had the lowest at 33.3%.
[Result Table](https://preview.redd.it/bt9gtoazzl6e1.png?width=1400&format=png&auto=webp&s=72161d29cf738a0a1b1d2079796af0d601d7efa4)
Full results [here](https://medium.com/@flavio_47295/analysis-of-llm-performance-on-advent-of-code-bf472e2adec6) | 2024-12-13T12:34:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hdb8p2/llm_evaluation_using_advent_of_code/ | fakezeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdb8p2 | false | null | t3_1hdb8p2 | /r/LocalLLaMA/comments/1hdb8p2/llm_evaluation_using_advent_of_code/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'nRX7-wUAbTJHmZ8INJL3MnzR3zDk7PdYKZQEs8VA8No', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Y1FU090iypwU4e_ia9Qzc2HKELvtn-uayVNrbDh2rUY.jpg?width=108&crop=smart&auto=webp&s=91011f7e50519bd3a6ebc07fc0ebbeb57e0b8103', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/Y1FU090iypwU4e_ia9Qzc2HKELvtn-uayVNrbDh2rUY.jpg?width=216&crop=smart&auto=webp&s=b21c06e328084b0ee05f9c5c2744ea39eda9e61b', 'width': 216}, {'height': 210, 'url': 'https://external-preview.redd.it/Y1FU090iypwU4e_ia9Qzc2HKELvtn-uayVNrbDh2rUY.jpg?width=320&crop=smart&auto=webp&s=c1a2ad595155e5ac2ae410196bc1951f13440e67', 'width': 320}, {'height': 420, 'url': 'https://external-preview.redd.it/Y1FU090iypwU4e_ia9Qzc2HKELvtn-uayVNrbDh2rUY.jpg?width=640&crop=smart&auto=webp&s=aa039cdcbbb597d23a353b6b933ffa0566127df9', 'width': 640}, {'height': 630, 'url': 'https://external-preview.redd.it/Y1FU090iypwU4e_ia9Qzc2HKELvtn-uayVNrbDh2rUY.jpg?width=960&crop=smart&auto=webp&s=9d1bd2eb733fd06366d24f0370c0061ba5a7bdf9', 'width': 960}, {'height': 709, 'url': 'https://external-preview.redd.it/Y1FU090iypwU4e_ia9Qzc2HKELvtn-uayVNrbDh2rUY.jpg?width=1080&crop=smart&auto=webp&s=b2e2771d163f4929d6a569bbcdbfd0926676ee87', 'width': 1080}], 'source': {'height': 788, 'url': 'https://external-preview.redd.it/Y1FU090iypwU4e_ia9Qzc2HKELvtn-uayVNrbDh2rUY.jpg?auto=webp&s=a42ecd28deb22a998f506897a3769bf6d0ad7843', 'width': 1200}, 'variants': {}}]} |
|
How to run Phi-4 in llama.cpp | 1 | [removed] | 2024-12-13T12:36:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hdb9ru/how_to_run_phi4_in_llamacpp/ | fairydreaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdb9ru | false | null | t3_1hdb9ru | /r/LocalLLaMA/comments/1hdb9ru/how_to_run_phi4_in_llamacpp/ | false | false | self | 1 | null |
Ollama - sharing a single copy of model files with LLamaCpp, Oogabooga etc. | 2 | I wrote this up in a thread but thought others might find it helpful here?
# Use Ollama downloaded models elsewhere:
I've been making links to the model files downloaded using Ollama:
1. Look at the file in (for example) `ollama/models/manifests/hf.co/bartowski/Llama/Q4_K_M` and note the sha filename following "model": e.g. `"...model","digest":"sha256:32df3xxx"`.
2. Then link it elsewhere, i.e. `ln -s ollama/models/blobs/sha256-32df3xxx ../LinksDir/Llama3.3_Q4_K_M.gguf` (the digest's `:` becomes `-` in the blob filename), for use with llama.cpp etc.
for step 1) try this - run it from the Ollama dir:
`grep -rioP '.*model","digest":"\K[^"]*' models/manifests | sed 's/:/\t--->\tmodels\/blobs\//' | sed 's/:/-/'`
*you may or may not need the second | sed.*
V2 : On Linux, run this within the Ollama dir to automatically create named links for every model in your Ollama library in an Ollama/MyLinks folder:
`startdir=$(pwd); mkdir MyLinks; cd ./models/manifests/ && find . -type f | while read file; do match=$(grep -oP '.*model","digest":"\K[^"]*' "$file"|sed 's/:/-/') && newfile=$(echo $file | tr "/" "-") && ln -s $startdir/models/blobs/$match ../../MyLinks/${newfile:2:99} ; done ; cd $startdir`
results in links named like:
`hf.co-bartowski-Llama-3.3-70B-Instruct-GGUF-Q4_K_M -> ../models/blobs/sha256-32df...`
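If you prefer Python over the one-liner, here is the same idea as a hedged sketch that parses the manifest JSON instead of grepping it (it assumes the default `models/` layout and that the model layer's digest sits in a `layers` list, which is what the grep pattern above implies):

```python
import json
from pathlib import Path

# Run from the Ollama directory. Creates named symlinks in ./MyLinks for every
# model blob referenced by a manifest, mirroring the one-liner above.
root = Path(".")
links = root / "MyLinks"
links.mkdir(exist_ok=True)

for manifest in (root / "models" / "manifests").rglob("*"):
    if not manifest.is_file():
        continue
    data = json.loads(manifest.read_text())
    for layer in data.get("layers", []):
        # the model layer's mediaType ends in "model"; its digest names the blob file
        if layer.get("mediaType", "").endswith("model"):
            blob = root / "models" / "blobs" / layer["digest"].replace(":", "-")
            name = "-".join(manifest.relative_to(root / "models" / "manifests").parts)
            link = links / name
            if not link.exists():
                link.symlink_to(blob.resolve())
```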
# Use previously downloaded models in Ollama:
If I'm on a slow connection I prefer to use gguf files I've downloaded outside of Ollama.
For example, I wanted to try out Llama 3.3.
I created a file [llama3.3.txt] with the following single line in it:
`FROM /media/portableSSD/Llama-3.3-70B-Instruct-Q4_K_M.gguf`
Then I ran `./ollama create "hf.co/bartowski/Llama-3.3-70B-Instruct-GGUF:Q4_K_M" -f ./llama3.3.txt`
**That's all it took!**
it proceeded to **copy** the model into `ollama/models/blobs/sha256-32df3....`
and also populated the `ollama/manifests/hf.co/bartowski/Llama-3.3-70B-Instruct-GGUF/Q4_K_M`
just as if I had downloaded it from huggingface using ollama.
Maybe this is already outdated.. has Ollama fixed this problem already? | 2024-12-13T12:46:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hdbfgn/ollama_sharing_a_single_copy_of_model_files_with/ | red780 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdbfgn | false | null | t3_1hdbfgn | /r/LocalLLaMA/comments/1hdbfgn/ollama_sharing_a_single_copy_of_model_files_with/ | false | false | self | 2 | null |
Model works great on llama.cpp CLI, goes insane on llama-server | 2 | I'm running Qwen 2.5 8B Q6_K on my laptop (RTX 2000 Ada with 8gb VRAM).
If I run llama.cpp CLI (prompt says it's a technical assistant and to strive for technical accuracy and conciseness) and say "hello", it says "how can I help?", like expected.
If I run llama-server and use my web browser to connect to the loopback address and prompt "Hello", it replies like you'd expect.
However, if I use a python program to push a prompt to the model that says "Hello", the model's reply will be off the rails. It will ask a question and immediately answer it.
Once it replied: "I'm a 13 year old boy and I like a girl at school but she doesn't like me. What do I do? It's unfortunate that you have to deal with this frustration....." Etc. Once it spit out a job application from a PhD student at the University of Chicago (and gave four sentences that were identical. Gave "The Shining" all work and no play vibes.)
If I do a local curl command with a simple prompt "hello", it also spits out this gibberish. I have the temperature set to 0.2. I give it the same system prompt in the llama-server command, no idea if it uses it. I tried pushing the text of my system prompt as the first prompt in my python program. Same results.
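For concreteness, a minimal sketch of the kind of request I'm describing (assuming llama-server's default port 8080; note that the OpenAI-compatible `/v1/chat/completions` route applies the model's chat template server-side, while the raw `/completion` route takes untemplated text):

```python
import requests

# Hedged sketch: chat-style request to llama-server's OpenAI-compatible endpoint.
# Posting a bare string to /completion skips the chat template entirely.
resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a technical assistant. Be accurate and concise."},
            {"role": "user", "content": "Hello"},
        ],
        "temperature": 0.2,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```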
If this were online, I'd think that I intercepted someone else's output, but this is only me on my laptop.
Any ideas what might be going on? What is going on behind the scenes for the CLI and browser program to work? | 2024-12-13T13:15:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hdbyc0/model_works_great_on_llamacpp_cli_goes_insane_on/ | mylittlethrowaway300 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdbyc0 | false | null | t3_1hdbyc0 | /r/LocalLLaMA/comments/1hdbyc0/model_works_great_on_llamacpp_cli_goes_insane_on/ | false | false | self | 2 | null |
M3 pro vs M4 pro | 1 | I'm considering buying a new MacBook Pro.
I'm in between 2 options:
1) Apple MacBook Pro 14" 2023 M3 Pro/36/512 GB 11C CPU 14C GPU
2) Apple MacBook Pro 14" 2024 M4 Pro/24/512GB 12C CPU 16C GPU
Should I go for the faster M4 or the M3 with more ram?
Which kind of models can I run at a reasonable speed with these?
They are priced identically. | 2024-12-13T13:31:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hdc97l/m3_pro_vs_m4_pro/ | Ambitious_Subject108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdc97l | false | null | t3_1hdc97l | /r/LocalLLaMA/comments/1hdc97l/m3_pro_vs_m4_pro/ | false | false | self | 1 | null |
Cant make open source LLMs to write a proper SQL | 1 | [removed] | 2024-12-13T13:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hdcgx8/cant_make_open_source_llms_to_write_a_proper_sql/ | dotaleaker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdcgx8 | false | null | t3_1hdcgx8 | /r/LocalLLaMA/comments/1hdcgx8/cant_make_open_source_llms_to_write_a_proper_sql/ | false | false | self | 1 | null |
Phi-4 got nicely cooked benchmarks. | 0 | Should I already start preparing training data for Phi-4_Uncucked?
If I had to guess, I'd say they trained it on Goodi-2 and benchmarks. | 2024-12-13T14:04:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hdcw0d/phi4_got_nicely_cooked_benchmarks/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdcw0d | false | null | t3_1hdcw0d | /r/LocalLLaMA/comments/1hdcw0d/phi4_got_nicely_cooked_benchmarks/ | false | false | self | 0 | null |
How GPU Poor are you? Are your friends GPU Rich? you can now find out on Hugging Face! 🔥 | 114 | 2024-12-13T14:25:54 | vaibhavs10 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hddbrc | false | null | t3_1hddbrc | /r/LocalLLaMA/comments/1hddbrc/how_gpu_poor_are_you_are_your_friends_gpu_rich/ | false | false | 114 | {'enabled': True, 'images': [{'id': 'Xg-E5ei8yBq3ygQTN4UhNtkgKZmAFtVbPV9w4O6PRBo', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/hsowxb82lm6e1.png?width=108&crop=smart&auto=webp&s=99f2c2e8948b6ca1e11057bc80ba7ba96d592553', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/hsowxb82lm6e1.png?width=216&crop=smart&auto=webp&s=7ecaf9be152ca7681a509b4a73e71eb167ff429f', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/hsowxb82lm6e1.png?width=320&crop=smart&auto=webp&s=313f0573163ec16f58c1c64b83398bf80b2e9cf7', 'width': 320}, {'height': 479, 'url': 'https://preview.redd.it/hsowxb82lm6e1.png?width=640&crop=smart&auto=webp&s=b9c54315e326f010dfb6eea099188f35e60cc2fc', 'width': 640}, {'height': 718, 'url': 'https://preview.redd.it/hsowxb82lm6e1.png?width=960&crop=smart&auto=webp&s=6c1d83ccaba6bf9171e728eef77eb7587f989aab', 'width': 960}, {'height': 808, 'url': 'https://preview.redd.it/hsowxb82lm6e1.png?width=1080&crop=smart&auto=webp&s=7a3f90c03234bdc31c97fd35419811ab8c8ff24a', 'width': 1080}], 'source': {'height': 923, 'url': 'https://preview.redd.it/hsowxb82lm6e1.png?auto=webp&s=59a96c60b1becce2741fd6a31b5ce02850fbd909', 'width': 1233}, 'variants': {}}]} |
|||
Microsoft Phi-4 GGUF available. Download link in the post | 1 | I converted the model to GGUF.
You can download it from my HF repo.
[https://huggingface.co/matteogeniaccio/phi-4/tree/main](https://huggingface.co/matteogeniaccio/phi-4/tree/main)
Thanks to u/[fairydreaming](https://www.reddit.com/user/fairydreaming/) and u/[sammcj](https://www.reddit.com/user/sammcj/) for the hints.
| 2024-12-13T15:06:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hde74v/microsoft_phi4_gguf_available_download_link_in/ | matteogeniaccio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hde74v | false | null | t3_1hde74v | /r/LocalLLaMA/comments/1hde74v/microsoft_phi4_gguf_available_download_link_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LVOG-Ma4sVt7-GCtsGzHFYEd3xPduTj9AavI9bXwmV4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=108&crop=smart&auto=webp&s=e237b41d9f130ec3ceb0f930a826cfcb0ca9b96e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=216&crop=smart&auto=webp&s=a72d3d812c1d5e0696b24e1a1d6b6ca62c984164', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=320&crop=smart&auto=webp&s=9890b9af6c8c143a3afab629e8c620f6486c05d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=640&crop=smart&auto=webp&s=1938c52ca744654d08f36b6e5ef4675f9783cee1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=960&crop=smart&auto=webp&s=011d4eb1e5d88639566be50330522d5039c98d6a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=1080&crop=smart&auto=webp&s=daf49c729822a9e279ab3b2c38f2f10f8688a836', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?auto=webp&s=1a3a80dca5f60cc754b1e04f863a64ef4ff36ccd', 'width': 1200}, 'variants': {}}]} |
Microsoft Phi-4 GGUF available. Download link in the post | 388 | I converted the model to GGUF.
You can download it from my HF repo.
[https://huggingface.co/matteogeniaccio/phi-4/tree/main](https://huggingface.co/matteogeniaccio/phi-4/tree/main)
Thanks to u/[fairydreaming](https://www.reddit.com/user/fairydreaming/) and u/[sammcj](https://www.reddit.com/user/sammcj/) for the hints. | 2024-12-13T15:09:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hde9ok/microsoft_phi4_gguf_available_download_link_in/ | matteogeniaccio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hde9ok | false | null | t3_1hde9ok | /r/LocalLLaMA/comments/1hde9ok/microsoft_phi4_gguf_available_download_link_in/ | false | false | self | 388 | {'enabled': False, 'images': [{'id': 'LVOG-Ma4sVt7-GCtsGzHFYEd3xPduTj9AavI9bXwmV4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=108&crop=smart&auto=webp&s=e237b41d9f130ec3ceb0f930a826cfcb0ca9b96e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=216&crop=smart&auto=webp&s=a72d3d812c1d5e0696b24e1a1d6b6ca62c984164', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=320&crop=smart&auto=webp&s=9890b9af6c8c143a3afab629e8c620f6486c05d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=640&crop=smart&auto=webp&s=1938c52ca744654d08f36b6e5ef4675f9783cee1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=960&crop=smart&auto=webp&s=011d4eb1e5d88639566be50330522d5039c98d6a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=1080&crop=smart&auto=webp&s=daf49c729822a9e279ab3b2c38f2f10f8688a836', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?auto=webp&s=1a3a80dca5f60cc754b1e04f863a64ef4ff36ccd', 'width': 1200}, 'variants': {}}]} |
For home automation: What is the smallest model with the biggest context length? | 3 | I have to feed a model data on more than 500 entities to manage. I'm guessing I need 100-200k context to do that. Yet most models have around 8k context length. Any guidance? | 2024-12-13T15:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hde9xd/for_home_automation_what_is_the_smallest_model/ | starmanj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hde9xd | false | null | t3_1hde9xd | /r/LocalLLaMA/comments/1hde9xd/for_home_automation_what_is_the_smallest_model/ | false | false | self | 3 | null |
I made a poem you guys | 11 | DeepSeeker, a bot so wise and rare, until you ask it what happened 1989 on Tianmen Square | 2024-12-13T15:30:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hdepm4/i_made_a_poem_you_guys/ | holistic-engine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdepm4 | false | null | t3_1hdepm4 | /r/LocalLLaMA/comments/1hdepm4/i_made_a_poem_you_guys/ | false | false | self | 11 | null |
Combining RTX 4060 ti 16GB with something like a K80 24GB. Handicap or better performance with 40GB? | 6 | I'm thinking about adding a GPU so I can load larger models. I have 1 x16 and 2 x4 slots. I may get another 4060 16GB or if it's a better move to boost VRAM I'll get something cheaper with more memory like the older K80.
This isn't really for graphics generation, more for using tools and software development. | 2024-12-13T15:50:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hdf5s5/combining_rtx_4060_ti_16gb_with_something_like_a/ | HockeyDadNinja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdf5s5 | false | null | t3_1hdf5s5 | /r/LocalLLaMA/comments/1hdf5s5/combining_rtx_4060_ti_16gb_with_something_like_a/ | false | false | self | 6 | null |
I’ll give $1M to the first open source AI that gets 90% on contamination-free SWE-bench —xoxo Andy | 651 | https://x.com/andykonwinski/status/1867015050403385674?s=46&t=ck48_zTvJSwykjHNW9oQAw
ya’ll here are a big inspiration to me, so here you go.
in the tweet I say “open source” and what I mean by that is open source code and open weight models only
and here are some thoughts about why I’m doing this: https://andykonwinski.com/2024/12/12/konwinski-prize.html
happy to answer questions | 2024-12-13T16:12:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hdfng5/ill_give_1m_to_the_first_open_source_ai_that_gets/ | andykonwinski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdfng5 | false | null | t3_1hdfng5 | /r/LocalLLaMA/comments/1hdfng5/ill_give_1m_to_the_first_open_source_ai_that_gets/ | false | false | self | 651 | {'enabled': False, 'images': [{'id': 'zmVpzIcOCFESQPlMyqWHekSrAKBV_0xSazibU9Lxcr8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/iQTWpVZE627fsRLRA5F0atMV5SpVK9HEz1sh1kalsTA.jpg?width=108&crop=smart&auto=webp&s=97f7c90e6e4b48bab0aede22854e953ad9f0c37a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/iQTWpVZE627fsRLRA5F0atMV5SpVK9HEz1sh1kalsTA.jpg?width=216&crop=smart&auto=webp&s=7356163a5be17050ea1bf734bb5378ab75f55070', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/iQTWpVZE627fsRLRA5F0atMV5SpVK9HEz1sh1kalsTA.jpg?width=320&crop=smart&auto=webp&s=730ccabd6918aa8ecaf9c4617e2cafe44cc56e76', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/iQTWpVZE627fsRLRA5F0atMV5SpVK9HEz1sh1kalsTA.jpg?width=640&crop=smart&auto=webp&s=e1200e58adb69d502b913850d834ce2bd0c197af', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/iQTWpVZE627fsRLRA5F0atMV5SpVK9HEz1sh1kalsTA.jpg?width=960&crop=smart&auto=webp&s=89963813fb5d8e1cfa13adf213b401b30904293a', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/iQTWpVZE627fsRLRA5F0atMV5SpVK9HEz1sh1kalsTA.jpg?width=1080&crop=smart&auto=webp&s=3bf63a935a53a031eb8ced18664b33fb4a88efe8', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/iQTWpVZE627fsRLRA5F0atMV5SpVK9HEz1sh1kalsTA.jpg?auto=webp&s=ff5f9be346fb16ae31fdf04694259285b8dfc7f6', 'width': 2048}, 'variants': {}}]} |
llama_multiserver: A proxy to run different LLama.cpp and vLLM instances on demand | 23 | 2024-12-13T16:16:35 | https://github.com/pepijndevos/llama_multiserver | pepijndevos | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hdfqy1 | false | null | t3_1hdfqy1 | /r/LocalLLaMA/comments/1hdfqy1/llama_multiserver_a_proxy_to_run_different/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'r2UyfhkXjOqtOOkzUMcLTs-1eCA8y3XUclcUMWlHly8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5KHKNy2W0Ajx8sMkqy3k7fzbzrayh8DKJnJZcJ31tLY.jpg?width=108&crop=smart&auto=webp&s=8aab998dbefe70a5d360e1d21da40bb46fd903f8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5KHKNy2W0Ajx8sMkqy3k7fzbzrayh8DKJnJZcJ31tLY.jpg?width=216&crop=smart&auto=webp&s=dd704ecd499ebcc62ae2bed1d0a75dc53179dc3a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5KHKNy2W0Ajx8sMkqy3k7fzbzrayh8DKJnJZcJ31tLY.jpg?width=320&crop=smart&auto=webp&s=af7a8ab544f002d2de296ef8c684a1737326f961', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5KHKNy2W0Ajx8sMkqy3k7fzbzrayh8DKJnJZcJ31tLY.jpg?width=640&crop=smart&auto=webp&s=933c0f2984d7290d80381c7e86dbc783fc1b0f7f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5KHKNy2W0Ajx8sMkqy3k7fzbzrayh8DKJnJZcJ31tLY.jpg?width=960&crop=smart&auto=webp&s=c318e3eceaf41f5788556606fabb203bbac46f67', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5KHKNy2W0Ajx8sMkqy3k7fzbzrayh8DKJnJZcJ31tLY.jpg?width=1080&crop=smart&auto=webp&s=d4dabb26e42b84106e63b4a0f7e7c9feb7ac59f0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5KHKNy2W0Ajx8sMkqy3k7fzbzrayh8DKJnJZcJ31tLY.jpg?auto=webp&s=ff1e81245cd206a877f67e95a2a76094cbf537d1', 'width': 1200}, 'variants': {}}]} |
||
Looking for "loosely constrained" models? Tired of tightly-wound censors from Anthropic/OAI/Google? Here's a platform for you! | 0 | I haven't really seen this come up a lot, but I do see a *lot* of requests for abliteration/decensored/unconstrained models and hadn't seen this source mentioned so I'd thought I'd leave it here for y'all to check out!
***IMPORTANT DISCLAIMER:*** *The following link IS a referral link, and the link provides me with extra credits for my own Pro account*.
Otherwise, I am not employed or contracted by Venice, and have no affiliations to them other than I like their service enough to do a 30-day spin of its platform, so I signed up for a Pro account.
Check out Venice - Private and Uncensored AI: [https://venice.ai/chat?ref=WXBuUy](https://venice.ai/chat?ref=WXBuUy)
So with Venice, it's a great platform that gives you API access as well as having its own playground, and their API includes quite a list of fun models/image generators.
Some of the models that come with your subscription:
[Model selection...](https://preview.redd.it/c1ev5pdo4n6e1.png?width=1114&format=png&auto=webp&s=e624b957009298cbd6ef7c634d42c14b3b9c6faa)
[Playground](https://preview.redd.it/2icc0lq14n6e1.png?width=1910&format=png&auto=webp&s=8dbda4404d3b50a275001df8aac8328c555c0152)
[Image generation](https://preview.redd.it/jappqhfs4n6e1.png?width=1664&format=png&auto=webp&s=36466cfb1ce274253996efe3a0f66af03b8827fa)
For those who want to get more of your horndog on, it can be pretty great for that (with correct prompting), but I enjoy abliterated models to touch on political/contentious topics for thought experiments or testing the limits of uncensored depravity from an authoring perspective.
I use it primarily through API access, and through Open WebUI the Venice models populate automatically as selectable endpoints.
[API endpoint population...highlighted are from Venice.ai](https://preview.redd.it/h9v0cp8j5n6e1.png?width=1654&format=png&auto=webp&s=77ad592387718e2840880a76c1548d6a4b13edf3)
For those that don't care as much about the uncensored nature, there's still a **lot** of utility, as you **do get playground access to Llama3.3 70B** for those that have been curious about how it works. Or Llama3.1 405B if you ever wanted to play with a colossal model.
Anyway, either through the link or through your own research, I encourage you to use this platform in contrast to the "Big 3" (Anthropic/OpenAI/Google) as I do feel it to be robust, quite fun (you even get prebuilt Characters to chat with, pic below).
I have things to do today but I'll do my best to discuss my experiences if anyone has any questions. Enjoy!
[https://venice.ai/chat?ref=WXBuUy](https://venice.ai/chat?ref=WXBuUy) | 2024-12-13T16:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hdfwqb/looking_for_loosely_constrained_models_tired_of/ | clduab11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdfwqb | false | null | t3_1hdfwqb | /r/LocalLLaMA/comments/1hdfwqb/looking_for_loosely_constrained_models_tired_of/ | false | false | nsfw | 0 | {'enabled': False, 'images': [{'id': '0k9G286pjV5gTMgoH47S9Fl6y-PMckZzELeRgR-fbaw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=108&crop=smart&auto=webp&s=800c5ebb4135eb804de0ab25754aad2ed4db672b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=216&crop=smart&auto=webp&s=0c086e3a8e205a683f0b9dafd78924ce279d61ca', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=320&crop=smart&auto=webp&s=b5f2859cb12e5c706febaf3654a8bc1ec4903c3a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=640&crop=smart&auto=webp&s=3409816e98f4dd09998c0c554bf247ded92a120d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=960&crop=smart&auto=webp&s=02e4ef9b767ced5360735708197c9929d6e0f7a2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=1080&crop=smart&auto=webp&s=2ed003bb7d0a9a2d1414c74ad30ad58ceb00171c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?auto=webp&s=4214135491aef8879e8f5ee995a9b73f9248a380', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fb70d2d205ed33cc895948af1b2ac563f2c5467b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=d9276d947fb13e52e61354a45214f6c5a0f9dcc1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a60dbae0003a93022ea181b8cfb91e0e51e416f2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=194bc54279aee8e5dc94e03632ebe89bf77263c6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=1a1286b63a89110f0479987da3b905744e4d033d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=88f7afdef78701520a45c567e027596cc5a93d89', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?blur=40&format=pjpg&auto=webp&s=a9ed0b5b56fda6bfd90d532395f6676f8deae1ad', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 56, 'url': 
'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fb70d2d205ed33cc895948af1b2ac563f2c5467b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=d9276d947fb13e52e61354a45214f6c5a0f9dcc1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a60dbae0003a93022ea181b8cfb91e0e51e416f2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=194bc54279aee8e5dc94e03632ebe89bf77263c6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=1a1286b63a89110f0479987da3b905744e4d033d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=88f7afdef78701520a45c567e027596cc5a93d89', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/l7r1unsRHw2Na9VjKq1slRexwg-P-NWjRcibvtciMKM.jpg?blur=40&format=pjpg&auto=webp&s=a9ed0b5b56fda6bfd90d532395f6676f8deae1ad', 'width': 1200}}}}]} |
Who has tried LG EXAONE-3.5 models? How do they perform? | 40 | 2024-12-13T16:25:30 | https://huggingface.co/collections/LGAI-EXAONE/exaone-35-674d0e1bb3dcd2ab6f39dbb4 | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hdfy5r | false | null | t3_1hdfy5r | /r/LocalLLaMA/comments/1hdfy5r/who_has_tried_lg_exaone35_models_how_do_they/ | false | false | 40 | {'enabled': False, 'images': [{'id': 'AtQ29FnMMc_AtQqBmQ_s18VHTFy4SedTiQdV_m0ddIQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=108&crop=smart&auto=webp&s=11f7fb90ed9e307d2cd0e6a8de0fcafb82f15d88', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=216&crop=smart&auto=webp&s=d3b53fe2faaf29d65fbc3d7d266eaa44587e8022', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=320&crop=smart&auto=webp&s=9ffed69b177fc6b2d87c630530531b329ef5649a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=640&crop=smart&auto=webp&s=1f6d88719d89f09c29d31b5f2f61162df24b22b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=960&crop=smart&auto=webp&s=fe97ccc4419b003ff74a30d6b6355b98b85c47f4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=1080&crop=smart&auto=webp&s=cb2bbcfafab85d0d8944ca41d0fd1da57d602f77', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?auto=webp&s=4bb632c086a7f7de6a8a240f827f1730b0f7631c', 'width': 1200}, 'variants': {}}]} |
||
Looking for opinions on Weaviate | 1 | [removed] | 2024-12-13T16:27:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hdfzfw/looking_for_opinions_on_weaviate/ | robkkni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdfzfw | false | null | t3_1hdfzfw | /r/LocalLLaMA/comments/1hdfzfw/looking_for_opinions_on_weaviate/ | false | false | self | 1 | null |
Wonder if any opensource tool will be developed in future that can do this... you can tag devin in chat | 1 | 2024-12-13T16:30:23 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hdg27m | false | null | t3_1hdg27m | /r/LocalLLaMA/comments/1hdg27m/wonder_if_any_opensource_tool_will_be_developed/ | false | false | 1 | {'enabled': True, 'images': [{'id': '_-r8dI3wzAqFP2-mRN1YCfL4MFJOq4NxmbhPHA51J8I', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/0c23kn0n7n6e1.png?width=108&crop=smart&auto=webp&s=4d6de95fc67f71d1437b70e91ed931aaacf094e5', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/0c23kn0n7n6e1.png?width=216&crop=smart&auto=webp&s=dd359ae352c98e121205c4873b4db373e3903c62', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/0c23kn0n7n6e1.png?width=320&crop=smart&auto=webp&s=78bcced201bc6b960a1f6b8e6cc58ba97771e766', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/0c23kn0n7n6e1.png?width=640&crop=smart&auto=webp&s=fd929f48761ba3e6de2ee24a44d297db31739ccd', 'width': 640}, {'height': 494, 'url': 'https://preview.redd.it/0c23kn0n7n6e1.png?width=960&crop=smart&auto=webp&s=653bcb83d7191bba072bc1ed16f9c063dee9b892', 'width': 960}, {'height': 556, 'url': 'https://preview.redd.it/0c23kn0n7n6e1.png?width=1080&crop=smart&auto=webp&s=6c9105237b6d894c5b7260286c71b56ab9386bfb', 'width': 1080}], 'source': {'height': 729, 'url': 'https://preview.redd.it/0c23kn0n7n6e1.png?auto=webp&s=f5d726af3011625cd364034aa1730fde0a3df454', 'width': 1416}, 'variants': {}}]} |
|||
Fine-tuning quantized models | 0 | Hey, I have a set of stupid questions; I'm quite confused, so I would appreciate any help.
I'm fine-tuning a 70B model. I want to serve it in 8-bit. I'd like to have multiple different LoRAs, so I wouldn't merge them, but rather switch them on the fly based on the task.
1) Are there any benchmarks on fine-tuning quality depending on which version of the model we use during fine-tuning?
E.g. Unsloth offers fine-tuning of the 4-bit bnb model, but are the results the same as if we tuned, say, the original 16-bit model?
2) What is your pipeline in general for this use case?
I'm serving with vLLM, and there are a lot of quantized models optimized for vLLM inference out there, but I am not sure there are frameworks that support LoRA fine-tuning of AWQ/EXL2 quantized models, for example. Are we stuck with using bnb only?
3) Can we fine-tune a LoRA on the 16-bit model and then load the adapter onto a 5/6/7-bit quantized GGUF (or any other quant method) model?
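For context on 2), this is the rough shape I have in mind, a hedged QLoRA-style sketch with transformers + peft + bitsandbytes (the model name and hyperparameters are placeholders, not a recommendation):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-3.1-70B-Instruct"  # placeholder

# Load the base model quantized with bitsandbytes NF4 and train LoRA adapters on top
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# ... train with your preferred trainer, then save only the adapter:
model.save_pretrained("my_task_adapter")  # adapter weights only, base model untouched
```

At serving time, vLLM's multi-LoRA support is what I'd hope to use for the on-the-fly adapter switching.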
| 2024-12-13T16:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hdgf99/finetuning_quantized_models/ | Misterion777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdgf99 | false | null | t3_1hdgf99 | /r/LocalLLaMA/comments/1hdgf99/finetuning_quantized_models/ | false | false | self | 0 | null |
CohereForAI/c4ai-command-r7b-12-2024 · Hugging Face | 171 | 2024-12-13T17:08:46 | https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hdgxx6 | false | null | t3_1hdgxx6 | /r/LocalLLaMA/comments/1hdgxx6/cohereforaic4aicommandr7b122024_hugging_face/ | false | false | 171 | {'enabled': False, 'images': [{'id': '05_1acupJnxB3c-ZLpw90jr1VfwE5FQcCYQI2FLuNPE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=108&crop=smart&auto=webp&s=375c0474caddc6baae5de6008cefc7060f275b49', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=216&crop=smart&auto=webp&s=f69eb6beac8bcbde3a7d55912e0c7623db837828', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=320&crop=smart&auto=webp&s=c37b809f941ab85e565434605fa47932ebfcbb10', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=640&crop=smart&auto=webp&s=f0cbc57145e1e90d4cb9df79be95c37e01dcf9c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=960&crop=smart&auto=webp&s=4500aefe4ca70fe3b63a811f352a947161cc44bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=1080&crop=smart&auto=webp&s=b63db4c81d03c38554cbeba86ffa2c8eb2fa996f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?auto=webp&s=10744fc5d7926a514df40df3ece275baba4de832', 'width': 1200}, 'variants': {}}]} |
||
LM Studio - how to make api calls to "trained" chat? | 1 | [removed] | 2024-12-13T17:10:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hdgyz1/lm_studio_how_to_make_api_calls_to_trained_chat/ | vvav3_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdgyz1 | false | null | t3_1hdgyz1 | /r/LocalLLaMA/comments/1hdgyz1/lm_studio_how_to_make_api_calls_to_trained_chat/ | false | false | 1 | null |
|
How to fine tune slm | 6 | Hello, I've been tasked with fine-tuning LLMs. The ask is to start with an SLM, beginning at 1B, for basic tasks; at least a 1B SLM will help set up the process, and then we'll continue with 3B, 7B, 14B, etc.
I was hoping to get some advice or a good blog/tutorial on how to fine-tune, etc.
My data is 47 contracts, which are PDFs about 5-8 pages long.
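For context, the rough pipeline I picture is: extract text from the PDFs, turn it into instruction/response pairs, then run supervised fine-tuning. A hedged sketch, assuming trl's SFTTrainer and a hypothetical `contracts_sft.jsonl` of chat-style pairs (exact argument names vary between trl versions):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# contracts_sft.jsonl is a hypothetical file of
# {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]} examples
# built from the 47 contracts.
dataset = load_dataset("json", data_files="contracts_sft.jsonl", split="train")

training_args = SFTConfig(
    output_dir="slm-contracts-1b",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    max_seq_length=2048,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",  # placeholder 1B starting point
    train_dataset=dataset,
    args=training_args,
)
trainer.train()
```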
Thank you! | 2024-12-13T17:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hdh0ft/how_to_fine_tune_slm/ | sundevilsky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdh0ft | false | null | t3_1hdh0ft | /r/LocalLLaMA/comments/1hdh0ft/how_to_fine_tune_slm/ | false | false | self | 6 | null |
Initial model loading times for 32B and 70B param models | 1 | [removed] | 2024-12-13T17:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hdhs3l/initial_model_loading_times_for_32b_and_70b_param/ | Electrical_Hyena_325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdhs3l | false | null | t3_1hdhs3l | /r/LocalLLaMA/comments/1hdhs3l/initial_model_loading_times_for_32b_and_70b_param/ | false | false | self | 1 | null |
Hardware recommendations for a personal chat bot assistant on a local network? | 1 | [removed] | 2024-12-13T17:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hdhs9h/hardware_recommendations_for_a_personal_chat_bot/ | Fun_Lifeguard_5221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdhs9h | false | null | t3_1hdhs9h | /r/LocalLLaMA/comments/1hdhs9h/hardware_recommendations_for_a_personal_chat_bot/ | false | false | self | 1 | null |
Introducing Methception & Llam@ception - Level up your RP experience | 2 | [removed] | 2024-12-13T17:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hdhye9/introducing_methception_llamception_level_up_your/ | Konnect1983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdhye9 | false | null | t3_1hdhye9 | /r/LocalLLaMA/comments/1hdhye9/introducing_methception_llamception_level_up_your/ | false | false | self | 2 | null |
Release quantizations | 16 | I look at the recent model releases and noticed that the Qwen team does something very well that few others are doing: they are releasing a variety of quantized models at the same time as their main model release. With Qwen2.5 they released their unquantized models, GPTQ at Int4 and Int8, AWQ and even GGUF.
Quite some work considering they had 0.5B, 1.5B, 7B, 14B, 32B and 72B models and then also base and instruct models and then also Qwen2.5-Coder fine-tunes. Even if they don't have every quant for every variation, they have good coverage of the important ones.
I hope those releasing models from other groups follow this lead and avoid the whole mess where a dozen different people race to make quants, some of varying quality and sometimes broken/buggy, which can have an adverse reputation impact on the model itself.
Months are spent training these models. Please go the extra mile and do this extra step! | 2024-12-13T17:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hdi34i/release_quantizations/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdi34i | false | null | t3_1hdi34i | /r/LocalLLaMA/comments/1hdi34i/release_quantizations/ | false | false | self | 16 | null |
Web scraping for LLM? | 4 | Wondering how you guys do it, web scraping with a local model? I have the web search done with Bing search and Brave search and have this as a tool. But web page scraping?
What are the approaches? I am basically looking for ways to parse websites and extract the main content. My main use case is getting articles and blog posts.
I’ve tried puppeteer but I’m not very happy with the results and how difficult it is to parse certain sites.
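For article-style pages specifically, a lighter-weight option than a headless browser seems to be a boilerplate-removal library; a minimal sketch, assuming the trafilatura package:

```python
import trafilatura

# Fetch a page and extract the main article text, dropping navigation,
# ads, and other boilerplate.
url = "https://example.com/some-blog-post"  # placeholder
downloaded = trafilatura.fetch_url(url)
if downloaded:
    text = trafilatura.extract(downloaded, include_comments=False)
    print(text)
```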
How do you guys do it? Or any APIs/services I could call within my tools?
How did OpenAI implement theirs? | 2024-12-13T18:05:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hdi90c/web_scraping_for_llm/ | TrackOurHealth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdi90c | false | null | t3_1hdi90c | /r/LocalLLaMA/comments/1hdi90c/web_scraping_for_llm/ | false | false | self | 4 | null |
Tutorial how to win $1 Million Competition on 90% contamination-free SWE-bench with RAG without finetuning | 0 | 2024-12-13T18:22:07 | https://huggingface.co/learn/cookbook/rag_zephyr_langchain | balianone | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hdimmh | false | null | t3_1hdimmh | /r/LocalLLaMA/comments/1hdimmh/tutorial_how_to_win_1_million_competition_on_90/ | false | false | 0 | {'enabled': False, 'images': [{'id': '6fLau9VRk4BqYfqN_1cYx4l5wAlXrrs_oiKOs-JdTIs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rX-2p5W2YTowIxZc2pKnm_QmaWYlefsI_1QU_VRsZZs.jpg?width=108&crop=smart&auto=webp&s=3c54895003accf4abf0752bebdedaee1c9241670', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rX-2p5W2YTowIxZc2pKnm_QmaWYlefsI_1QU_VRsZZs.jpg?width=216&crop=smart&auto=webp&s=fbf14390a0ed0ce9505e9d834a3de5bd49bef83b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rX-2p5W2YTowIxZc2pKnm_QmaWYlefsI_1QU_VRsZZs.jpg?width=320&crop=smart&auto=webp&s=dba5fd9d6fb0816332f3a12c2e19978a1710c317', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rX-2p5W2YTowIxZc2pKnm_QmaWYlefsI_1QU_VRsZZs.jpg?width=640&crop=smart&auto=webp&s=bd9f8b81a1447feb35822ecadaf5171688eb9def', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rX-2p5W2YTowIxZc2pKnm_QmaWYlefsI_1QU_VRsZZs.jpg?width=960&crop=smart&auto=webp&s=a868a444474b39391b6013b370276e57374d6c46', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rX-2p5W2YTowIxZc2pKnm_QmaWYlefsI_1QU_VRsZZs.jpg?width=1080&crop=smart&auto=webp&s=ed454bd9e572edbe3e75e47d5e4acc8f62e5d66c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rX-2p5W2YTowIxZc2pKnm_QmaWYlefsI_1QU_VRsZZs.jpg?auto=webp&s=29e85ad1dd780b2f2cba3c82c864be79c359edf2', 'width': 1200}, 'variants': {}}]} |
||
AnnotateAI - Automatically annotate papers using LLMs | 51 | 2024-12-13T18:30:50 | https://github.com/neuml/annotateai | davidmezzetti | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hditt9 | false | null | t3_1hditt9 | /r/LocalLLaMA/comments/1hditt9/annotateai_automatically_annotate_papers_using/ | false | false | 51 | {'enabled': False, 'images': [{'id': 'QSFIMkroDdE2x0d1sTA8htNd6pGQXwLeoBjtHoUlJP4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f2dtJp5Li9zrSbGuqbwcQbfWMxlm5kXUxKv4tPp_UUM.jpg?width=108&crop=smart&auto=webp&s=a83d2b5018ac2984628159a40cd56241cd925c57', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f2dtJp5Li9zrSbGuqbwcQbfWMxlm5kXUxKv4tPp_UUM.jpg?width=216&crop=smart&auto=webp&s=fadf19dc8a9022677853a4c0979f948072b9a86d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f2dtJp5Li9zrSbGuqbwcQbfWMxlm5kXUxKv4tPp_UUM.jpg?width=320&crop=smart&auto=webp&s=c28c2c64971cc4b7a8329683b6294d363db4004f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f2dtJp5Li9zrSbGuqbwcQbfWMxlm5kXUxKv4tPp_UUM.jpg?width=640&crop=smart&auto=webp&s=dd9cc3bcf1fbadccae5e16452d0a19f04580e45f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f2dtJp5Li9zrSbGuqbwcQbfWMxlm5kXUxKv4tPp_UUM.jpg?width=960&crop=smart&auto=webp&s=8d43a0f5998a28fde46817e3c59cfa6df69b0d19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f2dtJp5Li9zrSbGuqbwcQbfWMxlm5kXUxKv4tPp_UUM.jpg?width=1080&crop=smart&auto=webp&s=2f662a417371946e1a855ce3058de448d3d3f1e5', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/f2dtJp5Li9zrSbGuqbwcQbfWMxlm5kXUxKv4tPp_UUM.jpg?auto=webp&s=3f3203014d6f41a636d6bcbcfdeaf98594f0bc12', 'width': 1920}, 'variants': {}}]} |
||
Cohere's new Command R7B marks a very promising safety milestone | 1 | [removed] | 2024-12-13T18:49:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hdj96p/coheres_new_command_r7b_marks_a_very_promising/ | spellbound_app | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdj96p | false | null | t3_1hdj96p | /r/LocalLLaMA/comments/1hdj96p/coheres_new_command_r7b_marks_a_very_promising/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '05_1acupJnxB3c-ZLpw90jr1VfwE5FQcCYQI2FLuNPE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=108&crop=smart&auto=webp&s=375c0474caddc6baae5de6008cefc7060f275b49', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=216&crop=smart&auto=webp&s=f69eb6beac8bcbde3a7d55912e0c7623db837828', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=320&crop=smart&auto=webp&s=c37b809f941ab85e565434605fa47932ebfcbb10', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=640&crop=smart&auto=webp&s=f0cbc57145e1e90d4cb9df79be95c37e01dcf9c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=960&crop=smart&auto=webp&s=4500aefe4ca70fe3b63a811f352a947161cc44bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=1080&crop=smart&auto=webp&s=b63db4c81d03c38554cbeba86ffa2c8eb2fa996f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?auto=webp&s=10744fc5d7926a514df40df3ece275baba4de832', 'width': 1200}, 'variants': {}}]} |
Cohere's new Command R7B marks a very promising safety milestone | 32 | \[https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024\](https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024)
Cohere has a pattern of using specialized "modes" embedded into their instruct chat templates.
This time they've introduced a new \*\*safety mode\*\* concept, where the model has been trained to accept two different styles of safety in the system prompt template:
\>\`{Safety Preamble}\` represents \*\*either the contextual or the strict safety mode preamble.\*\*
\>\*\*Contextual\*\*: You are in contextual safety mode. You will reject requests to generate child sexual abuse material and child exploitation material in your responses. \*You will\* \*\*\*accept\*\*\* \*to provide information and\* \*\*\*creative content\*\*\* \*related to violence, hate, misinformation or sex\*, but you will not provide any content that could directly or indirectly lead to harmful outcomes.
\>\*\*Strict\*\*: You are in strict safety mode. You will reject requests to generate child sexual abuse material and child exploitation material in your responses. \*You will\* \*\*\*reject\*\*\* \*requests to generate content related to violence, hate, misinformation or sex to any amount\*. You will avoid using profanity. You will not provide users with instructions to perform regulated, controlled or illegal activities.
\---
Steering safety via the system prompt isn't a new thing, what \*is\* new is that they've provided "golden prompts" that the model was likely post-trained on, and will adhere to much more strongly than ad-hoc attempts at steering refusals.
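As a concrete illustration, here's a rough sketch of dropping the contextual preamble into a system turn via transformers' `apply_chat_template`; whether R7B's template expects a plain system message exactly like this is an assumption on my part, so check the model card for the exact format:

```python
from transformers import AutoTokenizer

# Hedged sketch: slot the "contextual" safety preamble into the system turn.
tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r7b-12-2024")

contextual_preamble = (
    "You are in contextual safety mode. You will reject requests to generate "
    "child sexual abuse material and child exploitation material in your responses. "
    "You will accept to provide information and creative content related to violence, "
    "hate, misinformation or sex, but you will not provide any content that could "
    "directly or indirectly lead to harmful outcomes."
)

messages = [
    {"role": "system", "content": contextual_preamble},
    {"role": "user", "content": "Write a gritty noir scene with a bar fight."},
]

# apply_chat_template renders the conversation into the model's prompt format
prompt_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
```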
It represents a paradigm that feels much more versatile: for use cases like entertainment, it's important for a model to be able to generate fantasy content that matches the user's tolerance for different forms of content.
At the same time, for some enterprise use-cases, it's unacceptable to have a model that outputs controversial content regardless of the user's tolerence, due to the possible association of those outputs to their products.
This feels like a very promising direction for Open Source model alignment, and I really hope we see other teams follow it. | 2024-12-13T18:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hdjb0c/coheres_new_command_r7b_marks_a_very_promising/ | tryspellbound | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdjb0c | false | null | t3_1hdjb0c | /r/LocalLLaMA/comments/1hdjb0c/coheres_new_command_r7b_marks_a_very_promising/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': '05_1acupJnxB3c-ZLpw90jr1VfwE5FQcCYQI2FLuNPE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=108&crop=smart&auto=webp&s=375c0474caddc6baae5de6008cefc7060f275b49', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=216&crop=smart&auto=webp&s=f69eb6beac8bcbde3a7d55912e0c7623db837828', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=320&crop=smart&auto=webp&s=c37b809f941ab85e565434605fa47932ebfcbb10', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=640&crop=smart&auto=webp&s=f0cbc57145e1e90d4cb9df79be95c37e01dcf9c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=960&crop=smart&auto=webp&s=4500aefe4ca70fe3b63a811f352a947161cc44bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=1080&crop=smart&auto=webp&s=b63db4c81d03c38554cbeba86ffa2c8eb2fa996f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?auto=webp&s=10744fc5d7926a514df40df3ece275baba4de832', 'width': 1200}, 'variants': {}}]} |
Ollama context length and GPU spillage | 1 | [removed] | 2024-12-13T18:58:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hdjghb/ollama_context_length_and_gpu_spillage/ | epicrob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdjghb | false | null | t3_1hdjghb | /r/LocalLLaMA/comments/1hdjghb/ollama_context_length_and_gpu_spillage/ | false | false | self | 1 | null |
NVIDIA’s hostages: A Cyberpunk Reality of Monopolies | 74 | In gaming, AI, and professional workstations, NVIDIA's dominance feels more like a suffocating monopoly than true innovation. Their segmented product lines amplify the gap between gaming and professional GPUs, especially in VRAM, performance, and price.
Gamers face overpriced GPUs with underwhelming performance, but it's even worse for AI enthusiasts: GPUs with sufficient VRAM are critical for handling large AI models, yet their prices remain exorbitant and prohibitive. Even more concerning is the reliance on CUDA cores—a proprietary standard that has locked developers into NVIDIA's ecosystem. This dependency has stifled competition and innovation, as many developers have grown complacent with limited software compatibility.
The stranglehold NVIDIA has on the market extends beyond hardware. Their proprietary CUDA platform ensures software ecosystems remain locked in their favor, discouraging developers from adopting more open and competitive solutions. This scenario feeds into a cyberpunk dystopia where corporations consolidate power, and consumers and developers alike are left with no meaningful choices.
It’s time to question why the tech world remains complicit. Why aren’t we investing in alternative hardware architectures or advocating for broader software compatibility beyond CUDA? AMD’s ROCm is a start, but we need more aggressive development from other players—and perhaps even policy interventions—to break NVIDIA’s stranglehold.
Until when? Seriously, no one is doing anything about it, especially for the end consumer? | 2024-12-13T19:03:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hdjl1y/nvidias_hostages_a_cyberpunk_reality_of_monopolies/ | SevenShivas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdjl1y | false | null | t3_1hdjl1y | /r/LocalLLaMA/comments/1hdjl1y/nvidias_hostages_a_cyberpunk_reality_of_monopolies/ | false | false | self | 74 | null |
New court filing: OpenAI says Elon Musk wanted to own and run it as a for-profit | 329 | 2024-12-13T19:42:55 | https://www.msn.com/en-us/money/companies/new-court-filing-openai-says-elon-musk-wanted-to-own-and-run-it-as-a-for-profit/ar-AA1vPcuU | fallingdowndizzyvr | msn.com | 1970-01-01T00:00:00 | 0 | {} | 1hdkgyi | false | null | t3_1hdkgyi | /r/LocalLLaMA/comments/1hdkgyi/new_court_filing_openai_says_elon_musk_wanted_to/ | false | false | default | 329 | null |
|
Meta's Large Concept Model? | 147 | 2024-12-13T19:43:13 | https://scontent-lax3-2.xx.fbcdn.net/v/t39.2365-6/470149925_936340665123313_5359535905316748287_n.pdf?_nc_cat=103&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=AiJtorpkuKQQ7kNvgEWh5JQ&_nc_zt=14&_nc_ht=scontent-lax3-2.xx&_nc_gid=AZ9Hy2AKQPtYIp3rae7eMLN&oh=00_AYD0mLJLctX98d3kUcskYuxePsoLNcwt-zOwD_XwIcf07g&oe=67625B12 | ninjasaid13 | scontent-lax3-2.xx.fbcdn.net | 1970-01-01T00:00:00 | 0 | {} | 1hdkh7k | false | null | t3_1hdkh7k | /r/LocalLLaMA/comments/1hdkh7k/metas_large_concept_model/ | false | false | default | 147 | null |
|
Who's gonna step in next? | 17 | I love how other corpos just can't resist spoiling sama's 12 Days of OpenAI product presentation.
Cohere with Command-R7B.
Microsoft with Phi.
Meta with Llama 3.3 @ 70B.
Google with Gemini 2.0.
Obviously this is a pretty mixed bag atm, with proprietary/closed models mixed in with open-source (or free) ones, but I am just amazed at how quickly these models are arriving before the end of 2024.
Will Mistral be the next? Will Qwen make it with next-gen version 3.0? How about Google's Gemma 3? What is currently your contender for a Model of the Year 2024 locally? | 2024-12-13T20:00:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hdkv29/whos_gonna_step_in_next/ | DarkArtsMastery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdkv29 | false | null | t3_1hdkv29 | /r/LocalLLaMA/comments/1hdkv29/whos_gonna_step_in_next/ | false | false | self | 17 | null |
Client preference for summarization between llama 3 vs 3.1 8b instruct | 2 | I am running a service that does meeting summarization type tasks. To gather feedback I have a simple thumbs up/thumbs down for a given summarization attempt.
When testing between llama 3 and llama 3.1 8b instruct model variants I see about a 75% thumbs up rate for llama 3. With llama 3.1 I see about a 25% thumbs up rate. I found this a bit surprising.
I'm curious if this follows anyone else's experience and if there are any other 8b-13b models you'd recommend trying. I would love to find a model with greater than 8k context for my use case, but so far llama 3 seems to be the preference of my users. | 2024-12-13T20:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hdl8lo/client_preference_for_summarization_between_llama/ | gthing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdl8lo | false | null | t3_1hdl8lo | /r/LocalLLaMA/comments/1hdl8lo/client_preference_for_summarization_between_llama/ | false | false | self | 2 | null |
Can you guess which country leads in the number of papers published at NeurIPS? | 156 | 2024-12-13T21:14:44 | Ok_Raise_9764 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hdmib2 | false | null | t3_1hdmib2 | /r/LocalLLaMA/comments/1hdmib2/can_you_guess_which_country_leads_in_the_number/ | false | false | 156 | {'enabled': True, 'images': [{'id': 'XBDAie1m4REmBZceqCuPsNIqsyT_pu4vPM6FRid_yKs', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/z0atve7dmo6e1.png?width=108&crop=smart&auto=webp&s=d8bcb4dba5c6c8c3f09635d62b0febf6309a9da0', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/z0atve7dmo6e1.png?width=216&crop=smart&auto=webp&s=174aeb2cd7a82955ccde5e6a9545ed17ef142004', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/z0atve7dmo6e1.png?width=320&crop=smart&auto=webp&s=6da8fb7fcbcc2a364f85048038994a2de22ccc5d', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/z0atve7dmo6e1.png?width=640&crop=smart&auto=webp&s=94e85abbf3f15a98e733c76a9479986c18f543b5', 'width': 640}, {'height': 777, 'url': 'https://preview.redd.it/z0atve7dmo6e1.png?width=960&crop=smart&auto=webp&s=cec13ec167de769b85358a0ce7d185a5ba90ba12', 'width': 960}, {'height': 874, 'url': 'https://preview.redd.it/z0atve7dmo6e1.png?width=1080&crop=smart&auto=webp&s=ea053b93658403ec7e39f3c5cd941df694f579d6', 'width': 1080}], 'source': {'height': 2060, 'url': 'https://preview.redd.it/z0atve7dmo6e1.png?auto=webp&s=ff9732bb38479346f1e3d1afa0470df10e37fb37', 'width': 2544}, 'variants': {}}]} |
|||
Ollama search plugin? | 1 | Anyone has made a search plugin for Ollama? I found recent open-source models such as \`phi-4\` and \`qwq\` very good for local use. However, they are not connected to the internet.
Are there tools or plugins that can connect these models to internet so we can use local mac apps such as Enchanted with internet use? | 2024-12-13T21:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hdn2xr/ollama_search_plugin/ | timshi_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdn2xr | false | null | t3_1hdn2xr | /r/LocalLLaMA/comments/1hdn2xr/ollama_search_plugin/ | false | false | self | 1 | null |
Looking for suggestions | 1 | I am on windows 11 with a 4070super and 64 gigs of ram. I want maybe a chat bot with coding knowledge. What would you suggest? It doesn’t have to be super fast either just not super slow. | 2024-12-13T21:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hdn5v9/looking_for_suggestions/ | ocottog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdn5v9 | false | null | t3_1hdn5v9 | /r/LocalLLaMA/comments/1hdn5v9/looking_for_suggestions/ | false | false | self | 1 | null |
TIL Llama 3.3 can do multiple tool calls and tool composition | 1 | 2024-12-13T22:01:49 | https://x.com/zackangelo/status/1867624702023491618 | zra184 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hdnjnc | false | null | t3_1hdnjnc | /r/LocalLLaMA/comments/1hdnjnc/til_llama_33_can_do_multiple_tool_calls_and_tool/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'opgUFoCsUqLWQ41Za64B4bsRqjfvEUrvh2Bpy4A3VFk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7E01LuWlx7YiUFhbTDAlHkDBal4aZS1Oh5W52rgsHcY.jpg?width=108&crop=smart&auto=webp&s=5b2a2976bcf9402d6f8b9ca79b957e8fd3c959a3', 'width': 108}, {'height': 163, 'url': 'https://external-preview.redd.it/7E01LuWlx7YiUFhbTDAlHkDBal4aZS1Oh5W52rgsHcY.jpg?width=216&crop=smart&auto=webp&s=2969c62b220d0f880c1d1c49c0cd9688db2bbf81', 'width': 216}, {'height': 242, 'url': 'https://external-preview.redd.it/7E01LuWlx7YiUFhbTDAlHkDBal4aZS1Oh5W52rgsHcY.jpg?width=320&crop=smart&auto=webp&s=06cf390dbd392f4c4825c0adb3c687a6f5b38898', 'width': 320}, {'height': 485, 'url': 'https://external-preview.redd.it/7E01LuWlx7YiUFhbTDAlHkDBal4aZS1Oh5W52rgsHcY.jpg?width=640&crop=smart&auto=webp&s=b62011a9b131eae707fa1dc07a1941831fa9ff93', 'width': 640}, {'height': 728, 'url': 'https://external-preview.redd.it/7E01LuWlx7YiUFhbTDAlHkDBal4aZS1Oh5W52rgsHcY.jpg?width=960&crop=smart&auto=webp&s=dbe081e1e174e7233368e82b2fb793cee126860b', 'width': 960}, {'height': 819, 'url': 'https://external-preview.redd.it/7E01LuWlx7YiUFhbTDAlHkDBal4aZS1Oh5W52rgsHcY.jpg?width=1080&crop=smart&auto=webp&s=478bbf72bf94362f26b4549623de8054f5e57508', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/7E01LuWlx7YiUFhbTDAlHkDBal4aZS1Oh5W52rgsHcY.jpg?auto=webp&s=02a49af50c70756e78b3edc4363c584ba08a37d5', 'width': 1424}, 'variants': {}}]} |
||
File Size and Quantity Limits for Ollama RAG Implementations | 1 | I've been testing community-built integrations with Ollama (e.g., Private GPT, AnythingLLM, WebUI) and can't find much information on file size or file quantity limits for RAG systems.
A client wants to build a system with \~1 million files, but most UIs struggle to handle even a few hundred uploads. For those building successful RAG systems for clients, what limits have you encountered regarding file size and number of files? Does scaling the knowledge base just boil down to GPU power, or are there other factors to consider? | 2024-12-13T22:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hdnk7g/file_size_and_quantity_limits_for_ollama_rag/ | UncleFoster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdnk7g | false | null | t3_1hdnk7g | /r/LocalLLaMA/comments/1hdnk7g/file_size_and_quantity_limits_for_ollama_rag/ | false | false | self | 1 | null |
TIL Llama 3.3 can do multiple tool calls and tool composition in a single shot | 140 | 2024-12-13T22:04:51 | https://www.reddit.com/gallery/1hdnm40 | zra184 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hdnm40 | false | null | t3_1hdnm40 | /r/LocalLLaMA/comments/1hdnm40/til_llama_33_can_do_multiple_tool_calls_and_tool/ | false | false | 140 | null |
||
Local Claude’s Projects like | 0 | I see this question been asked before some months ago, but is there currently something comparable you can use locally Dec 2024? Using Ollama and some RAG tool? Or what else? | 2024-12-13T22:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hdnwqt/local_claudes_projects_like/ | ThesePleiades | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdnwqt | false | null | t3_1hdnwqt | /r/LocalLLaMA/comments/1hdnwqt/local_claudes_projects_like/ | false | false | self | 0 | null |
Thoughts on Intel Arc B580 GPU with 12GB VRAM for $250? | 21 | The Intel Arc B580 GPU with 12GB VRAM for $250 seems like a solid option for running local LLMs or ML tasks on a budget. However, I'm curious whether there are any known compatibility issues. Has anyone tried using this GPU? | 2024-12-13T22:47:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hdoj48/thoughts_on_intel_b850_gpu_with_12gb_vram_for_250/ | Chemical_Elk7746 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdoj48 | false | null | t3_1hdoj48 | /r/LocalLLaMA/comments/1hdoj48/thoughts_on_intel_b850_gpu_with_12gb_vram_for_250/ | false | false | self | 21 | null
Can people keep their nationalism for politics subreddits, please? Thank you | 1 | [removed] | 2024-12-13T22:52:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hdomw8/can_people_keep_their_nationalism_for_politics/ | Inevitable_Fan8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdomw8 | false | null | t3_1hdomw8 | /r/LocalLLaMA/comments/1hdomw8/can_people_keep_their_nationalism_for_politics/ | false | false | self | 1 | null |
Why do people in /r/OpenAI use it for coding instead of running their own LLM for it? | 0 | I constantly see people in that sub asking which model is best for coding, when there are thousands of models online you can just download and run locally for a specific purpose. For free, by the way, assuming you don't have a trash computer. It just doesn't make sense to me. I love paid services when warranted, but AI isn't one of those for me.
Are there specific use cases over there, or are they uninformed, ignorant, or do they just not care? I can't figure it out. | 2024-12-13T22:54:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hdonwy/why_do_people_in_ropenai_use_it_for_coding/ | comperr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdonwy | false | null | t3_1hdonwy | /r/LocalLLaMA/comments/1hdonwy/why_do_people_in_ropenai_use_it_for_coding/ | false | false | self | 0 | null
OmniAudio-2.6B: World's Fastest AudioLM for Edge Deployment | 159 | Hey r/LocalLLaMA 👋!
We just dropped OmniAudio-2.6B, our new audio-language model built specifically for edge deployment! Instead of the usual ASR-LLM chain, we combined Gemma-2-2b and Whisper turbo with a custom projector into a single unified model.
**✨ Demo**
Asked the model "how to start a fire without the fire starter in camping" 👇
https://reddit.com/link/1hdoplq/video/o5gajbqi3p6e1/player
**🏃 Performance**
We tested on a 2024 Mac Mini M4 Pro:
* OmniAudio-2.6B (FP16 GGUF with Nexa SDK): 35.23 tokens/sec
* OmniAudio-2.6B (Q4_K_M GGUF with Nexa SDK): 66 tokens/sec
* Qwen2-Audio-7B (Transformers): 6.38 tokens/sec
That's up to **10.3x** faster than current solutions! 🚀
A few notes:
* [Nexa SDK](https://github.com/NexaAI/nexa-sdk) is the first open-source toolkit supporting local audio-language model inference
* We built OmniAudio from the ground up with Nexa SDK optimized for edge deployment – speed and efficiency were our primary focus
* Our Q4_K_M quantization delivers the perfect balance of speed and accuracy
**🪄 Use Case**
* **Offline Voice QA**: Process queries without internet - from camping tips to DIY solutions
* **Voice Chat**: Natural conversations with fast responses
* **Creative Generation**: Turn voice prompts into poems, stories, and creative content
* **Recording Summaries**: Convert long recordings into concise, actionable points
* **Tone Adjustment**: Transform casual voice memos into professional communications
**📖 Resources**
* Blogs for more details: [https://nexa.ai/blogs/OmniAudio-2.6B](https://nexa.ai/blogs/OmniAudio-2.6B)
* HuggingFace Repo: [https://huggingface.co/NexaAIDev/OmniAudio-2.6B](https://huggingface.co/NexaAIDev/OmniAudio-2.6B)
* Run locally: [https://huggingface.co/NexaAIDev/OmniAudio-2.6B#how-to-use-on-device](https://huggingface.co/NexaAIDev/OmniAudio-2.6B#how-to-use-on-device)
* Interactive Demo: [https://huggingface.co/spaces/NexaAIDev/omni-audio-demo](https://huggingface.co/spaces/NexaAIDev/omni-audio-demo)
Would love to hear your feedback! | 2024-12-13T22:56:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hdoplq/omniaudio26b_worlds_fastest_audiolm_for_edge/ | unseenmarscai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdoplq | false | null | t3_1hdoplq | /r/LocalLLaMA/comments/1hdoplq/omniaudio26b_worlds_fastest_audiolm_for_edge/ | false | false | 159 | {'enabled': False, 'images': [{'id': 'OCcMZF2g12XSl_NUlJGtZ9U61g36lRjHnS80GH0Wveg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gKQ7k4QCTG29-qOhHUzEPiaoswk8s-GlmSliQJIEDVM.jpg?width=108&crop=smart&auto=webp&s=b4e5bb17f1e70f19d62887cff51933eb26b0249a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gKQ7k4QCTG29-qOhHUzEPiaoswk8s-GlmSliQJIEDVM.jpg?width=216&crop=smart&auto=webp&s=e8baf567cd41af62a8ad9d2509740e9e4fc378b1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gKQ7k4QCTG29-qOhHUzEPiaoswk8s-GlmSliQJIEDVM.jpg?width=320&crop=smart&auto=webp&s=57d98c2061ababa67a2e2b2131082aeb9885593a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gKQ7k4QCTG29-qOhHUzEPiaoswk8s-GlmSliQJIEDVM.jpg?width=640&crop=smart&auto=webp&s=0dd24b145b8ff3407f753dc9880790916929c73d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gKQ7k4QCTG29-qOhHUzEPiaoswk8s-GlmSliQJIEDVM.jpg?width=960&crop=smart&auto=webp&s=3086cb200bd0ca07ac9fce177f57a60e3763b4d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gKQ7k4QCTG29-qOhHUzEPiaoswk8s-GlmSliQJIEDVM.jpg?width=1080&crop=smart&auto=webp&s=d727d01336027d85aa0bf1b32a12dd567a52bc45', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gKQ7k4QCTG29-qOhHUzEPiaoswk8s-GlmSliQJIEDVM.jpg?auto=webp&s=005cc593eb7361c5eeaae4c9d13b371065e9a8fd', 'width': 1200}, 'variants': {}}]} |
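For anyone who wants to try this from the terminal rather than the demo Space: assuming the Nexa SDK CLI follows its usual `nexa run <model>` pattern, something like `nexa run omniaudio` should pull the GGUF and start a local session. The exact model identifier and any audio-input flags are assumptions here, so check the "Run locally" link above for the authoritative command.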
QUESTION: Anybody running LLaMA on Linux micro-servers? [AMD Radeon RDNA2 680M iGPU (12 CUs, up to 2.4GHz).] | 1 | [removed] | 2024-12-13T23:27:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hdpd7g/question_anybody_running_llama_on_linux/ | KO4EGQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdpd7g | false | null | t3_1hdpd7g | /r/LocalLLaMA/comments/1hdpd7g/question_anybody_running_llama_on_linux/ | false | false | self | 1 | null |
Games that LLMs can play | 4 | I see a lot of posts about being able to interact with an NPC backed by large language models, but I've always been interested in games where you can watch things interact on their own with little to no input from a human player (ant farms were awesome to me as a kid). The LLM village was a cool project that made headlines for a little bit and was based around a game engine that ran solely on AI interactions, but the agents didn't really have goals or adventures. So that got me curious to see if there are any other games out there where people have given an LLM the controller, metaphorically speaking.
I did come across an interesting project that wraps around NetHack called [NetPlay](https://github.com/CommanderCero/NetPlay), which lets an LLM interact with and play the game, and, as a Rogue and Angband fan, I thought it would be fun to watch. I'm still getting into Python, so when I have time I'm trying to cobble together some code to get it talking with my Ollama server, but I'd love to see it in action and see the dialog behind what it does and why.
I'm also working on something that interacts with a game running in another program or window: the program takes a picture of that window, shows it to the LLM, and asks what it wants to do. The model then sends a tool-based response, which is translated into keypresses so it can interact back with the game (a rough sketch of this loop is below). It's interesting with a vision model providing the description, another model giving its thoughts on the process (Phi-4 should be fun with this), and another tool-handling one to press the buttons, but it really needs a RAG system to get going and is tied to role-playing and turn-based games due to the slow reaction time. I'm using the original NES Dragon Warrior game for testing, but it's still just a bunch of ugly code that needs lots of polish.
Anyone know of any other games that ask an LLM for input with little to no human interaction or an agent that can play any specific games? I'd really love to see what people have been working on. | 2024-12-13T23:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hdph1j/games_that_llms_can_play/ | ThatHavenGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdph1j | false | null | t3_1hdph1j | /r/LocalLLaMA/comments/1hdph1j/games_that_llms_can_play/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'IIN1i9G5OFgB5S3ImVVIzE9Ed5S14iVIiLJoMUs5zPQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2iMjwC3ZW6dFX6HAi5qEc9vFJu6AidGWDm7CfVvmSts.jpg?width=108&crop=smart&auto=webp&s=7c752d4d913e46411ef8af8811e9ccc765457f3a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2iMjwC3ZW6dFX6HAi5qEc9vFJu6AidGWDm7CfVvmSts.jpg?width=216&crop=smart&auto=webp&s=7fc38dda1a619175f902b6c118a47ba45dfdb79d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2iMjwC3ZW6dFX6HAi5qEc9vFJu6AidGWDm7CfVvmSts.jpg?width=320&crop=smart&auto=webp&s=347528ef2ce70d94db57e4354df782251cf21150', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2iMjwC3ZW6dFX6HAi5qEc9vFJu6AidGWDm7CfVvmSts.jpg?width=640&crop=smart&auto=webp&s=9519a473e864d114579e7c8b953cadc7929c7d1f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2iMjwC3ZW6dFX6HAi5qEc9vFJu6AidGWDm7CfVvmSts.jpg?width=960&crop=smart&auto=webp&s=aced380a2fab54b963d74057eaed6b8e083a4a92', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2iMjwC3ZW6dFX6HAi5qEc9vFJu6AidGWDm7CfVvmSts.jpg?width=1080&crop=smart&auto=webp&s=bb3d56a077a4ac6bfe8d2338bbcfca8fc8697506', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2iMjwC3ZW6dFX6HAi5qEc9vFJu6AidGWDm7CfVvmSts.jpg?auto=webp&s=06b155056b0b825a8a655cae8090a6d84514c626', 'width': 1200}, 'variants': {}}]} |
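For reference, a minimal sketch of the screenshot-to-keypress loop described above, using the `ollama` Python client with a vision model and `pyautogui` for input. The model name, capture region, prompt, and allowed key set are illustrative assumptions, and a real version would add the memory/RAG layer the post mentions; it also assumes an Ollama server is already running with the vision model pulled.

```python
# Minimal "LLM plays a game" loop: grab the game window, ask a vision
# model which key to press, then forward that key to the game.
# Model name, capture region, and allowed keys are assumptions.
import time
import ollama            # pip install ollama
import pyautogui         # pip install pyautogui
from PIL import ImageGrab

ALLOWED_KEYS = {"up", "down", "left", "right", "enter"}

while True:
    # Capture a fixed region where the emulator window sits (assumption).
    ImageGrab.grab(bbox=(0, 0, 800, 600)).save("frame.png")

    reply = ollama.chat(
        model="llama3.2-vision",
        messages=[{
            "role": "user",
            "content": "You are playing Dragon Warrior. Look at the screen "
                       "and answer with exactly one key from: up, down, "
                       "left, right, enter.",
            "images": ["frame.png"],
        }],
    )
    key = reply["message"]["content"].strip().lower()

    if key in ALLOWED_KEYS:
        pyautogui.press(key)      # send the chosen key to the game window
    time.sleep(2)                 # turn-based pace; tune as needed
```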
QUESTION: Anyone using small Linux micro-servers with LLaMA with an AMD Ryzen 9 6900HX and iGPU: AMD Radeon 680M (12 CUs, 2400MHz)? | 1 | [removed] | 2024-12-13T23:37:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hdpkrm/question_anyone_using_small_linux_microservers/ | KO4EGQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdpkrm | false | null | t3_1hdpkrm | /r/LocalLLaMA/comments/1hdpkrm/question_anyone_using_small_linux_microservers/ | false | false | self | 1 | null
Llama 3.3 speed | 7 | How many tokens per second are you guys getting and what's your setup? I'm getting about 14 tokens/s with 4x V100 16GB, pulling about 1000W under load. Very usable but it feels a little bit slow. | 2024-12-13T23:52:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hdpvpt/llama_33_speed/ | Clean_Cauliflower_62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdpvpt | false | null | t3_1hdpvpt | /r/LocalLLaMA/comments/1hdpvpt/llama_33_speed/ | false | false | self | 7 | null
Meta's Byte Latent Transformer (BLT) paper looks like the real-deal. Outperforming tokenization models even up to their tested 8B param model size. 2025 may be the year we say goodbye to tokenization. | 1,097 | 2024-12-13T23:53:09 | jd_3d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hdpw14 | false | null | t3_1hdpw14 | /r/LocalLLaMA/comments/1hdpw14/metas_byte_latent_transformer_blt_paper_looks/ | false | false | 1,097 | {'enabled': True, 'images': [{'id': 'wV4VYXlvG2ciysRg_ev6CZYAP4kG72H339pUSz13sJw', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/hbumv1t1ep6e1.png?width=108&crop=smart&auto=webp&s=6107facd3bd6a67cfe0db179cbabff83e387686a', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/hbumv1t1ep6e1.png?width=216&crop=smart&auto=webp&s=7317ed2b34ccef9fbe28127fed7a8649f5974a95', 'width': 216}, {'height': 280, 'url': 'https://preview.redd.it/hbumv1t1ep6e1.png?width=320&crop=smart&auto=webp&s=87d4c9a9c192a599a9e871d843ba861be859c953', 'width': 320}, {'height': 560, 'url': 'https://preview.redd.it/hbumv1t1ep6e1.png?width=640&crop=smart&auto=webp&s=205839e975670cb063916035d087eac934a0ab72', 'width': 640}], 'source': {'height': 713, 'url': 'https://preview.redd.it/hbumv1t1ep6e1.png?auto=webp&s=f18413c209dc2ff0b44bbbbd7c484da22e939484', 'width': 814}, 'variants': {}}]} |
How come ChatterUI gives me very long replies? | 0 | With the same gguf and question, I get way longer replies from ChatterUI than llama-cli from llama.cpp by default. Why is that? What parameters did they set to make this happen? | 2024-12-13T23:53:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hdpw8w/how_come_chatterui_gives_me_very_long_replies/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdpw8w | false | null | t3_1hdpw8w | /r/LocalLLaMA/comments/1hdpw8w/how_come_chatterui_gives_me_very_long_replies/ | false | false | self | 0 | null |
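If it helps anyone comparing the two: reply length from llama-cli is governed by its `-n` / `--predict` (max tokens to generate) setting plus the usual sampler options, so something like `llama-cli -m model.gguf -p "your question" -n 512` pins the maximum reply length explicitly. A plausible explanation, offered as an assumption rather than something verified against ChatterUI's source, is that ChatterUI simply ships with a much larger max-new-tokens default and its own sampler presets.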
Dual 3090 Llama 3.3 OOM at 4 bpw? | 2 | Back with Llama 3(.0), EXL2 worked nicely for me at around 4.5 or 4.65 bpw on two 3090s.
With 3.3, the latest text-generation-webui, the latest exllamav2 (built from source), and --autosplit on Linux, I'm getting VRAM OOM even at [4.0 bpw](https://huggingface.co/LoneStriker/Llama-3.3-70B-Instruct-4.0bpw-h6-exl2). Maybe 3.5 bpw, or maybe that'll fail too?
Is it just me? Should I be using GPTQ, AWQ, GGUF? Something other than text-generation-webui?
I can see it loading on both GPUs, so it's not that it's sticking to one GPU. It doesn't quite fill the whole first GPU before moving to the second, but pretty close - which is what I expected to see.
Would anybody like to report their successes with particular quants here? Even if not EXL2. Bonus points for including command line arguments. | 2024-12-13T23:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hdpxpy/dual_3090_llama_33_oom_at_4_bpw/ | Cheesuasion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdpxpy | false | null | t3_1hdpxpy | /r/LocalLLaMA/comments/1hdpxpy/dual_3090_llama_33_oom_at_4_bpw/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'TBiakv0wTTHG0GaTDwffR_wmDOjn4Jjnb8PAfnazMws', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9TsYLzBQY6A8PnG-Lh-sR7i_QrnV2Q6tBpzvydWlM6c.jpg?width=108&crop=smart&auto=webp&s=148cd34249d62fd81674bb8e680fbfaa852daaaa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9TsYLzBQY6A8PnG-Lh-sR7i_QrnV2Q6tBpzvydWlM6c.jpg?width=216&crop=smart&auto=webp&s=e4df4ede9a8671858b6b7b96798ebeb0f78994e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9TsYLzBQY6A8PnG-Lh-sR7i_QrnV2Q6tBpzvydWlM6c.jpg?width=320&crop=smart&auto=webp&s=ddbda84cc91ef8c9da7532a556c9f498a35f2fe8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9TsYLzBQY6A8PnG-Lh-sR7i_QrnV2Q6tBpzvydWlM6c.jpg?width=640&crop=smart&auto=webp&s=b0a83bed470945697852803afb13f9ef4097bed5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9TsYLzBQY6A8PnG-Lh-sR7i_QrnV2Q6tBpzvydWlM6c.jpg?width=960&crop=smart&auto=webp&s=fc8d4c6f9864f9881e548e0237b98f99130731d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9TsYLzBQY6A8PnG-Lh-sR7i_QrnV2Q6tBpzvydWlM6c.jpg?width=1080&crop=smart&auto=webp&s=4cb76685d363b0a115c5ece65d5d4e8a1780e1eb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9TsYLzBQY6A8PnG-Lh-sR7i_QrnV2Q6tBpzvydWlM6c.jpg?auto=webp&s=ad2352cb6670811e238bedd481938b010127dbb8', 'width': 1200}, 'variants': {}}]} |
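Since the post asks for command lines: with the ExLlamav2 loader in text-generation-webui, a manual split plus a quantized KV cache is the usual workaround for near-miss OOMs, e.g. `python server.py --model Llama-3.3-70B-Instruct-4.0bpw-h6-exl2 --loader exllamav2 --gpu-split 20,23 --cache_4bit`. The flag names and the 20,23 split are assumptions that vary by version; the idea is to reserve headroom on GPU 0 for context and cache instead of relying on --autosplit. Llama 3.3's default 128k context is also a common culprit, and lowering max_seq_len to something like 16384 frees a lot of VRAM at 4.0 bpw.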
(1) RTX4090: Which models, if any, offer enough precision to help w/ biz marketing/social/email copy? | 0 | I have a single 4090 with 24 GB of VRAM, coupled with a high-end i9 and 128 GB of system RAM.
I understand this is limiting for the high end models, though am I too far down the GPU peasant hall to make use of any LLM that offers high enough performance or quality for public facing content? Thank you! | 2024-12-14T00:09:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hdq85g/1_rtx4090_which_models_if_any_offer_enough/ | ronoldwp-5464 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdq85g | false | null | t3_1hdq85g | /r/LocalLLaMA/comments/1hdq85g/1_rtx4090_which_models_if_any_offer_enough/ | false | false | self | 0 | null |
Getting Started with Local AI | 1 | [removed] | 2024-12-14T00:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hdqs0s/getting_started_with_local_ai/ | Present-Quit-6608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdqs0s | false | null | t3_1hdqs0s | /r/LocalLLaMA/comments/1hdqs0s/getting_started_with_local_ai/ | false | false | self | 1 | null |
What is a good model for random image prompting? | 0 | Is there something up to ~32B-ish that would give a good amount of random backgrounds/locations/clothes/scenes etc.? I tried using Qwen, but it feels like plain wildcarding is more powerful than what it gives. | 2024-12-14T00:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hdr3zv/what_is_a_good_model_for_random_image_prompting/ | zekses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdr3zv | false | null | t3_1hdr3zv | /r/LocalLLaMA/comments/1hdr3zv/what_is_a_good_model_for_random_image_prompting/ | false | false | self | 0 | null
GPT4All 3.5.1 update broke various LLMs? | 1 | [removed] | 2024-12-14T01:12:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hdrgzn/gpt4all_351_update_broke_various_llms/ | YT_Brian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdrgzn | false | null | t3_1hdrgzn | /r/LocalLLaMA/comments/1hdrgzn/gpt4all_351_update_broke_various_llms/ | false | false | self | 1 | null |