title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Local LLM status quo for small / medium models: efficiencies gained by current extraordinary (not just llama / transformers type) models (e.g. mamba, jamba, rwkv, ...)? | 0 | Local LLM status quo for small / medium models: efficiencies gained by current extraordinary (not just llama / transformers type) models (e.g. mamba, jamba, rwkv, ...)?
I hear often about the disadvantages of ordinary transformers / llama type models vs. some conceivable advantages of other architectures like mamba, rwkv, whatever.
And I've seen some 1-7B or whatever range models come out with these alternative experimental architectures.
But what I haven't seen is an ELI5-level list of practical "use case" advantages for such other kinds of models, compared to using ordinary "small/medium" models, e.g. gemma2 9b, llama3.x 7B, granite, phi, 0-15B mistral ones, qwen, etc.
Models you might use for RAG processing / search of large amounts of documents / information, long context Q&A & summarization, etc.
Are there "golden" use cases for existing extraordinary "smallish" LLMs presently or are "the most popular options" like llama3 / qwen / phi / gemma / whatever just in practice superior for most all local llm DIY scale use cases of information handling / management.
Like, even if one could JUST achieve "2-3B"-scale model performance over long contexts (like 64k-128k...-1M) with resources low enough that one could reasonably run it on CPU+RAM or even a few vintage consumer GPUs, that would seem to be a prominent advantage over the (much worse?) scaling of VRAM / RAM / processing use for other 1-14B LLMs, right?
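For scale, here is the rough KV-cache arithmetic behind that hope (my numbers, assuming a Llama-2-7B-like shape: 32 layers, 32 KV heads, head dim 128, fp16 cache, no GQA):

$$2 \times 32 \times 32 \times 128 \times 2\,\mathrm{B} \approx 0.5\,\mathrm{MB/token} \;\Rightarrow\; 128\mathrm{k\ tokens} \approx 64\,\mathrm{GB}$$

whereas SSM-style models (mamba / rwkv) carry a fixed-size recurrent state no matter how long the context gets.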
Or do we just not see advantages in practice with extant alternative model families, or if there are advantages they're only really able to manifest at large batch sizes or at some use scale that makes it less relevant for local LLM personal use cases?
And if not mamba / etc., are there major innovations / evolutions of "popular" LLM architectures which are starting to, or promising to, significantly ease many of the more painful limits? e.g. long context handling vs. memory / time / compute resource use, etc. | 2024-12-25T11:12:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hlyweg/local_llm_status_quo_for_small_medium_models/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlyweg | false | null | t3_1hlyweg | /r/LocalLLaMA/comments/1hlyweg/local_llm_status_quo_for_small_medium_models/ | false | false | self | 0 | null |
Can any LLM read this | 17 | 2024-12-25T11:19:18 | blackxparkz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlyzq5 | false | null | t3_1hlyzq5 | /r/LocalLLaMA/comments/1hlyzq5/can_any_llm_read_this/ | false | false | 17 | {'enabled': True, 'images': [{'id': 'HEArV_I3vhj__XUtWZDATeItpTuXn92oulyLtVE14z8', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/6iax6to4az8e1.jpeg?width=108&crop=smart&auto=webp&s=3771630f4dcbe1da12344812210f1f92a2d1fc9a', 'width': 108}, {'height': 285, 'url': 'https://preview.redd.it/6iax6to4az8e1.jpeg?width=216&crop=smart&auto=webp&s=f5ed0bfb4e07eba70cb218e006de8acf794f6f81', 'width': 216}, {'height': 423, 'url': 'https://preview.redd.it/6iax6to4az8e1.jpeg?width=320&crop=smart&auto=webp&s=a527e5613d537cae58c2c2f2a8c60d1a34a9b4b1', 'width': 320}, {'height': 846, 'url': 'https://preview.redd.it/6iax6to4az8e1.jpeg?width=640&crop=smart&auto=webp&s=7529dd8d39eb33749e79b5870b4ed7e5122e54e5', 'width': 640}, {'height': 1270, 'url': 'https://preview.redd.it/6iax6to4az8e1.jpeg?width=960&crop=smart&auto=webp&s=a87eb9e9fca196bc5578679261d68a7d9348cb04', 'width': 960}, {'height': 1428, 'url': 'https://preview.redd.it/6iax6to4az8e1.jpeg?width=1080&crop=smart&auto=webp&s=9da72bffa58b4b26e8546a22f20a184967e9c67b', 'width': 1080}], 'source': {'height': 2077, 'url': 'https://preview.redd.it/6iax6to4az8e1.jpeg?auto=webp&s=fb411429c7dda02a2f197a5241fe3dde9c2818b8', 'width': 1570}, 'variants': {}}]} |
|||
Deepseek v3 ? | 35 | 2024-12-25T11:34:29 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlz6j5 | false | null | t3_1hlz6j5 | /r/LocalLLaMA/comments/1hlz6j5/deepseek_v3/ | false | false | 35 | {'enabled': True, 'images': [{'id': 'gCXVCt-Wljt0EZxzPXP5HWqJwgjLPzF_b-3skHRwpkQ', 'resolutions': [{'height': 204, 'url': 'https://preview.redd.it/w80vyrutdz8e1.jpeg?width=108&crop=smart&auto=webp&s=a65aa8f3d45c2c1cffb7205469af3720ea09cf90', 'width': 108}, {'height': 408, 'url': 'https://preview.redd.it/w80vyrutdz8e1.jpeg?width=216&crop=smart&auto=webp&s=51b5359d431bb617906cd72a028d652c510337e9', 'width': 216}, {'height': 604, 'url': 'https://preview.redd.it/w80vyrutdz8e1.jpeg?width=320&crop=smart&auto=webp&s=fe964d05f1e3d3a8e12a88839d03e70c7ea9695a', 'width': 320}, {'height': 1208, 'url': 'https://preview.redd.it/w80vyrutdz8e1.jpeg?width=640&crop=smart&auto=webp&s=a3a22b5523c88f447b6e76df25dac0077644cbc3', 'width': 640}, {'height': 1813, 'url': 'https://preview.redd.it/w80vyrutdz8e1.jpeg?width=960&crop=smart&auto=webp&s=d7d9918bccae87dc342a1ea2786e51eabeec21f3', 'width': 960}, {'height': 2040, 'url': 'https://preview.redd.it/w80vyrutdz8e1.jpeg?width=1080&crop=smart&auto=webp&s=5e49676d7de2e0a7ed4c7ea06876ae23be11d0a6', 'width': 1080}], 'source': {'height': 2040, 'url': 'https://preview.redd.it/w80vyrutdz8e1.jpeg?auto=webp&s=236c7e5d295d5a9cc3060d2d6457c29c7a42657e', 'width': 1080}, 'variants': {}}]} |
|||
Wow deepseek v3 ? | 324 | 2024-12-25T11:44:17 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlzax7 | false | null | t3_1hlzax7 | /r/LocalLLaMA/comments/1hlzax7/wow_deepseek_v3/ | false | false | 324 | {'enabled': True, 'images': [{'id': 'gC8HU-e8z5UqQCgcktyoGtHCib0AqjAPJ5Dq7ZOgDT0', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/ge3taqukfz8e1.jpeg?width=108&crop=smart&auto=webp&s=0199c2101bc06c3e70a75793099a5a00fc5121bc', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/ge3taqukfz8e1.jpeg?width=216&crop=smart&auto=webp&s=4b2eed656c0df0409050324ed322695de463b8a6', 'width': 216}, {'height': 209, 'url': 'https://preview.redd.it/ge3taqukfz8e1.jpeg?width=320&crop=smart&auto=webp&s=ad0d32deba93fe06d0b818daa4842e956bf984f2', 'width': 320}, {'height': 419, 'url': 'https://preview.redd.it/ge3taqukfz8e1.jpeg?width=640&crop=smart&auto=webp&s=ac14766d68c4d571cd7f7d2f7d9fbbc532e1e29d', 'width': 640}, {'height': 629, 'url': 'https://preview.redd.it/ge3taqukfz8e1.jpeg?width=960&crop=smart&auto=webp&s=3864db1e6bf539b3dc6c02ccbd8a06a4fa0b3474', 'width': 960}, {'height': 708, 'url': 'https://preview.redd.it/ge3taqukfz8e1.jpeg?width=1080&crop=smart&auto=webp&s=bad1fcbac6bf72f487d70fd88330e9179525ecdb', 'width': 1080}], 'source': {'height': 708, 'url': 'https://preview.redd.it/ge3taqukfz8e1.jpeg?auto=webp&s=e758119c108dce758516a56de98f5051ead57f6a', 'width': 1080}, 'variants': {}}]} |
|||
Zuckerberg watching you use Qwen instead of LLaMA | 2,799 | 2024-12-25T11:47:46 | https://v.redd.it/vt50yf87gz8e1 | Super-Muffin-1230 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlzci9 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/vt50yf87gz8e1/DASHPlaylist.mpd?a=1737719282%2CNjMxMTNhNDczYzMzYTA5YWVlZGI1YTk5ZDJmYzg3ZTIyZWJkZDRhOWRiMzYxNTRiZDYwNGE4NmYxOWQ3ZTNiMA%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/vt50yf87gz8e1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 852, 'hls_url': 'https://v.redd.it/vt50yf87gz8e1/HLSPlaylist.m3u8?a=1737719282%2CNDUzMWY0OTJkMmQ5ZTAxZmZlZGVmNWExMGU3YzlmZDY5ZDlkNTQyMjQ1Yjg5OWJiMThmYTgwMTNjM2ZiZGUzMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vt50yf87gz8e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_1hlzci9 | /r/LocalLLaMA/comments/1hlzci9/zuckerberg_watching_you_use_qwen_instead_of_llama/ | false | false | 2,799 | {'enabled': False, 'images': [{'id': 'cTlrNnE0MjdnejhlMRvfFcJszpr_TfNy2Wtl8hoPeGgbAQzYcLsgCtCi9bde', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/cTlrNnE0MjdnejhlMRvfFcJszpr_TfNy2Wtl8hoPeGgbAQzYcLsgCtCi9bde.png?width=108&crop=smart&format=pjpg&auto=webp&s=ef3b1ca8f59dc7373dba0e215abe0d05eea6947b', 'width': 108}, {'height': 383, 'url': 'https://external-preview.redd.it/cTlrNnE0MjdnejhlMRvfFcJszpr_TfNy2Wtl8hoPeGgbAQzYcLsgCtCi9bde.png?width=216&crop=smart&format=pjpg&auto=webp&s=313760fcada1781591f793c27305652ac4e9b644', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/cTlrNnE0MjdnejhlMRvfFcJszpr_TfNy2Wtl8hoPeGgbAQzYcLsgCtCi9bde.png?width=320&crop=smart&format=pjpg&auto=webp&s=62ee10e11e3584fd388c1d267f78e1a515ce0e2d', 'width': 320}], 'source': {'height': 852, 'url': 'https://external-preview.redd.it/cTlrNnE0MjdnejhlMRvfFcJszpr_TfNy2Wtl8hoPeGgbAQzYcLsgCtCi9bde.png?format=pjpg&auto=webp&s=bac094f07919730d57fe66a5345f29caca6ea166', 'width': 480}, 'variants': {}}]} |
||
Asking an AI agent powered by Llama3.3 - "Find me 2 recent issues from the pyppeteer repo" | 31 | 2024-12-25T12:02:23 | https://v.redd.it/02xpvtupiz8e1 | spacespacespapce | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlzja2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/02xpvtupiz8e1/DASHPlaylist.mpd?a=1737720156%2CYTg5M2Y5NTFjZWEwODE2NWFkMWZiZTQzYmZmNjQ1OWU0ZGRmZGI1ZmU0NmM2MTgzNzQ1YjJkNTA4NTNhZjgzMQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/02xpvtupiz8e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/02xpvtupiz8e1/HLSPlaylist.m3u8?a=1737720156%2COWM0NGJiNGRiMDlhYmZiMzJlNjlmNDE3YWFhYmUxZTgwMGM3N2YyM2FhZDA3Y2Q5NTcyMGYzNTZlYzMxZjg4Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/02xpvtupiz8e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1820}} | t3_1hlzja2 | /r/LocalLLaMA/comments/1hlzja2/asking_an_ai_agent_powered_by_llama33_find_me_2/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK.png?width=108&crop=smart&format=pjpg&auto=webp&s=5834f70111439f0c30a27cc0e020578e8af320e1', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK.png?width=216&crop=smart&format=pjpg&auto=webp&s=8fb63170cdec928650be063f35603ee3bb0173d4', 'width': 216}, {'height': 189, 'url': 'https://external-preview.redd.it/Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK.png?width=320&crop=smart&format=pjpg&auto=webp&s=6b312bdf3e8e6ab1632ea195f1a84179e769c4b7', 'width': 320}, {'height': 379, 'url': 'https://external-preview.redd.it/Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK.png?width=640&crop=smart&format=pjpg&auto=webp&s=d5f2d9a69edea69d39d1ac77842dd05dd4ce3d79', 'width': 640}, {'height': 569, 'url': 'https://external-preview.redd.it/Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK.png?width=960&crop=smart&format=pjpg&auto=webp&s=884a3187c7638b3bb06c26d082d9af91ece91e82', 'width': 960}, {'height': 640, 'url': 'https://external-preview.redd.it/Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bd2b54d8ce5631ae43ff1657f38e41023565f0e1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Nngyd3B0dXBpejhlMUz4HfI628qWU3a8rwAfszC1ZhYH_2A2XfrnIl9zQuTK.png?format=pjpg&auto=webp&s=1fc998ab592dbb285e203c628ddc40eea69af0b3', 'width': 1820}, 'variants': {}}]} |
||
AI Models for my PC specs centered on being a digital assistant and roleplay. | 1 | [removed] | 2024-12-25T12:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hm00u9/ai_models_for_my_pc_specs_centered_on_being_a/ | No_Prompt5941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm00u9 | false | null | t3_1hm00u9 | /r/LocalLLaMA/comments/1hm00u9/ai_models_for_my_pc_specs_centered_on_being_a/ | false | false | self | 1 | null |
O3 Inner Working Hypothesis | 1 | [removed] | 2024-12-25T12:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hm05e3/o3_inner_working_hypothesis/ | AdventurousSwim1312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm05e3 | false | null | t3_1hm05e3 | /r/LocalLLaMA/comments/1hm05e3/o3_inner_working_hypothesis/ | false | false | 1 | null |
|
New Models: BlueLM by vivo - Base and Chat Models (7B) | 2 | **BlueLM 7B by vivo**
BlueLM is a large-scale open-source language model independently developed by the vivo AI Lab. This release includes 2K and 32K context length versions for both Base and Chat models.
**Specifications:**
* Trained on 2.6T tokens
* 32K context
* 7B parameters
* English and Chinese (plus a small amount of Japanese and Korean data)
* Competitive performance on C-Eval and CMMLU benchmarks among models of the same size
* 13B and 7B-vl multi-modal models coming soon
**HuggingFace:**
Chat Model:
[https://huggingface.co/vivo-ai/BlueLM-7B-Chat](https://huggingface.co/vivo-ai/BlueLM-7B-Chat)
Base Model:
[https://huggingface.co/vivo-ai/BlueLM-7B-Base](https://huggingface.co/vivo-ai/BlueLM-7B-Base)
**Github:**
[https://github.com/vivo-ai-lab/BlueLM/blob/main/README\_EN.md](https://github.com/vivo-ai-lab/BlueLM/blob/main/README_EN.md)
**Technical Report (pdf):**
[https://github.com/vivo-ai-lab/BlueLM/blob/main/BlueLM\_technical\_report.pdf](https://github.com/vivo-ai-lab/BlueLM/blob/main/BlueLM_technical_report.pdf)
Note: I am not affiliated
| 2024-12-25T13:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hm0dy7/new_models_bluelm_by_vivo_base_and_chat_models_7b/ | Many_SuchCases | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm0dy7 | false | null | t3_1hm0dy7 | /r/LocalLLaMA/comments/1hm0dy7/new_models_bluelm_by_vivo_base_and_chat_models_7b/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '9etVb-Iztkcc7R6HyyCn0YVoTFtmMLMB_ySQjqR8W_A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sqfxsA9vvxcu8ho8Kto-3CITtR89rIyQUYHW_XwV5LQ.jpg?width=108&crop=smart&auto=webp&s=046baa7a7695113298759743b85bb1071a4a3790', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sqfxsA9vvxcu8ho8Kto-3CITtR89rIyQUYHW_XwV5LQ.jpg?width=216&crop=smart&auto=webp&s=b0caba0ce24f5b89563ec840d68bc6069dfb84c2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sqfxsA9vvxcu8ho8Kto-3CITtR89rIyQUYHW_XwV5LQ.jpg?width=320&crop=smart&auto=webp&s=93dea0b988d571bf4a4e7e107b1ad62ae6d19d3e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sqfxsA9vvxcu8ho8Kto-3CITtR89rIyQUYHW_XwV5LQ.jpg?width=640&crop=smart&auto=webp&s=e68cf2d1f491570740af08e306cd4fc1941821d0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sqfxsA9vvxcu8ho8Kto-3CITtR89rIyQUYHW_XwV5LQ.jpg?width=960&crop=smart&auto=webp&s=15ed8591ef54fa9d78e39073e23158f088b074c9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sqfxsA9vvxcu8ho8Kto-3CITtR89rIyQUYHW_XwV5LQ.jpg?width=1080&crop=smart&auto=webp&s=375d9bb954948738a012f5d51e981d375447f2bb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sqfxsA9vvxcu8ho8Kto-3CITtR89rIyQUYHW_XwV5LQ.jpg?auto=webp&s=30fc59db28bf00b1ca7ae80393bfa8f70033ce8f', 'width': 1200}, 'variants': {}}]} |
Do you guys think that the introduction of Test-Time Compute models makes M Series Macs no longer a viable method of running these types of LLMs? | 30 | With Qwen QwQ and now the much larger QvQ models, it seems like it would take much longer to get an answer on an M series Mac compared to a dedicated GPU.
What are your thoughts? | 2024-12-25T13:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hm0en5/do_you_guys_think_that_the_introduction_of/ | lolwutdo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm0en5 | false | null | t3_1hm0en5 | /r/LocalLLaMA/comments/1hm0en5/do_you_guys_think_that_the_introduction_of/ | false | false | self | 30 | null |
QVQ-72B-Preview seems to be quite censored for code generation | 1 | 2024-12-25T13:06:59 | TyraVex | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hm0ezg | false | null | t3_1hm0ezg | /r/LocalLLaMA/comments/1hm0ezg/qvq72bpreview_seems_to_be_quite_censored_for_code/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'JRMh1Va-CB_WdP6S7RsIvQCnuW_FUAyybZ9lRsS_h_Y', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/2n46hvz8uz8e1.jpeg?width=108&crop=smart&auto=webp&s=2dc7914d5b0139e4a3ed46144bbb20b1207dd13b', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/2n46hvz8uz8e1.jpeg?width=216&crop=smart&auto=webp&s=042a35b25a0b551bb1dde2e47b30e02b5bcb2242', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/2n46hvz8uz8e1.jpeg?width=320&crop=smart&auto=webp&s=0961b2655915a5c821faacb42ee27594886fe799', 'width': 320}, {'height': 429, 'url': 'https://preview.redd.it/2n46hvz8uz8e1.jpeg?width=640&crop=smart&auto=webp&s=8be72be822250166b67ece4db4e44105fe9201ea', 'width': 640}, {'height': 644, 'url': 'https://preview.redd.it/2n46hvz8uz8e1.jpeg?width=960&crop=smart&auto=webp&s=d97042158f9fb822859be690c8c83a81bcb6d459', 'width': 960}, {'height': 725, 'url': 'https://preview.redd.it/2n46hvz8uz8e1.jpeg?width=1080&crop=smart&auto=webp&s=a267426a076f729ccad9a022f3c5f8af88e96901', 'width': 1080}], 'source': {'height': 725, 'url': 'https://preview.redd.it/2n46hvz8uz8e1.jpeg?auto=webp&s=1031a2e0d7eb0ffb4a20cbf9e1e9962138060cec', 'width': 1080}, 'variants': {}}]} |
|||
Best web coding model for 64 gig ram Mac M3? | 1 | Is Qwen Coder the best option for web (html/js/react/next.js) help? I'm able to run llama 3.3 at 8 tokens p/s but would like something faster if possible. I read somewhere that I should rebuild it with a larger context window? My goal is to use it with vscode and cline 3.0 for most of the work to avoid burning credits. Then, maybe at the end, use Sonnet to polish any problems. I can try any model but I'm hoping to get a recommendation on what's working for other people. TIA. | 2024-12-25T13:34:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hm0tk6/best_web_coding_model_for_64_gig_ram_mac_m3/ | mastervbcoach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm0tk6 | false | null | t3_1hm0tk6 | /r/LocalLLaMA/comments/1hm0tk6/best_web_coding_model_for_64_gig_ram_mac_m3/ | false | false | self | 1 | null |
Made a story writing prompt for precise story element control. You can use it without precise control too. You can also easily amend the prompt to add desired elements to control and remove undesired elements. LMK thoughts. | 1 | [removed] | 2024-12-25T14:00:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hm17lf | false | null | t3_1hm17lf | /r/LocalLLaMA/comments/1hm17lf/made_a_story_writing_prompt_for_precise_story/ | false | false | default | 1 | null |
||
QwQ matches o1-preview in scientific creativity | 25 | 2024-12-25T14:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hm17lq/qwq_matches_o1preview_in_scientific_creativity/ | realJoeTrump | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm17lq | false | null | t3_1hm17lq | /r/LocalLLaMA/comments/1hm17lq/qwq_matches_o1preview_in_scientific_creativity/ | false | false | 25 | null |
||
Agent swarm framework aces spatial reasoning test. | 642 | 2024-12-25T14:01:56 | https://v.redd.it/qw5qf7q1409e1 | Super-Muffin-1230 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hm18zv | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/qw5qf7q1409e1/DASHPlaylist.mpd?a=1737727330%2CMzBjOTllNGJhZDZlZjcwMWMzMjRlYWFlYmUwMTI4NzFhYjMzN2EwYzkxYTVlNWUwMTFjODc5OTgzNjUwZDI1Ng%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/qw5qf7q1409e1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/qw5qf7q1409e1/HLSPlaylist.m3u8?a=1737727330%2CYjhmMTRmMzI2YjVmYWEwMDI5MmMyYzhhM2RhNDgyYjY0ZDgwMDkwMmMyMWM2ZGE2MWM3Yjc3ODUwNzYxNmQ5MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qw5qf7q1409e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 672}} | t3_1hm18zv | /r/LocalLLaMA/comments/1hm18zv/agent_swarm_framework_aces_spatial_reasoning_test/ | false | false | 642 | {'enabled': False, 'images': [{'id': 'bndnMGs1cTE0MDllMWHdG2Q068dM9PiqeZ93qlsOEXXSP7QIls1HxW6YAsKY', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/bndnMGs1cTE0MDllMWHdG2Q068dM9PiqeZ93qlsOEXXSP7QIls1HxW6YAsKY.png?width=108&crop=smart&format=pjpg&auto=webp&s=005f8b98316ad4f014b3cb33da58b741cd5727c8', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/bndnMGs1cTE0MDllMWHdG2Q068dM9PiqeZ93qlsOEXXSP7QIls1HxW6YAsKY.png?width=216&crop=smart&format=pjpg&auto=webp&s=a71920637f45c48af39d848fc560c35b8e0e3619', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/bndnMGs1cTE0MDllMWHdG2Q068dM9PiqeZ93qlsOEXXSP7QIls1HxW6YAsKY.png?width=320&crop=smart&format=pjpg&auto=webp&s=a9418a33f0172e554a92053e63e1e523bfa52332', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/bndnMGs1cTE0MDllMWHdG2Q068dM9PiqeZ93qlsOEXXSP7QIls1HxW6YAsKY.png?width=640&crop=smart&format=pjpg&auto=webp&s=66b068ec6ff7dc67771fd9e0a4fb264a357deb96', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/bndnMGs1cTE0MDllMWHdG2Q068dM9PiqeZ93qlsOEXXSP7QIls1HxW6YAsKY.png?format=pjpg&auto=webp&s=5bcc9d5b3b55b2e20f47eb63652e533a5f8b1192', 'width': 672}, 'variants': {}}]} |
||
QVQ 72B Preview refuses to generate code | 141 | 2024-12-25T15:01:07 | TyraVex | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hm27ew | false | null | t3_1hm27ew | /r/LocalLLaMA/comments/1hm27ew/qvq_72b_preview_refuses_to_generate_code/ | false | false | 141 | {'enabled': True, 'images': [{'id': 'S_xRDtqRXF46CLfPTMs4F-KKwalxpWJ92jXgCeSe_4I', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/e42h5wmee09e1.jpeg?width=108&crop=smart&auto=webp&s=bdb3b29656ab249ea964b5bcb65d4db4bb2ad809', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/e42h5wmee09e1.jpeg?width=216&crop=smart&auto=webp&s=2f328c53afe087c384c9345fea6d2a78b7204808', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/e42h5wmee09e1.jpeg?width=320&crop=smart&auto=webp&s=9a81e88fbdcfcce846a02079155585d085633cae', 'width': 320}, {'height': 429, 'url': 'https://preview.redd.it/e42h5wmee09e1.jpeg?width=640&crop=smart&auto=webp&s=e78c1c380b6879076f292d4a171cea5d7ee11ee7', 'width': 640}, {'height': 644, 'url': 'https://preview.redd.it/e42h5wmee09e1.jpeg?width=960&crop=smart&auto=webp&s=d960fa7443b9d61593843c31f46c2f9f8ee53824', 'width': 960}, {'height': 725, 'url': 'https://preview.redd.it/e42h5wmee09e1.jpeg?width=1080&crop=smart&auto=webp&s=6f5cbed956d0a6d7c8bab3944569b00341a1e7d3', 'width': 1080}], 'source': {'height': 725, 'url': 'https://preview.redd.it/e42h5wmee09e1.jpeg?auto=webp&s=6468eb15c5fb5497f2d17a78e3727fed4d3bc026', 'width': 1080}, 'variants': {}}]} |
|||
CHIM Skyrim mod brings any npc to life with LLMs | 1 | [removed] | 2024-12-25T15:04:48 | https://www.nexusmods.com/skyrimspecialedition/mods/126330 | psilent | nexusmods.com | 1970-01-01T00:00:00 | 0 | {} | 1hm29ws | false | null | t3_1hm29ws | /r/LocalLLaMA/comments/1hm29ws/chim_skyrim_mod_brings_any_npc_to_life_with_llms/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'QLKbasR8FpQG3yg6j4ai-5m1V0FKB78DVOdolQQCMco', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/McqsiVTm5GLCylaDuoGWmWxdvY0c0zWPgwc9TLEJZa8.jpg?width=108&crop=smart&auto=webp&s=21d8d90a1964625222c7bdf3fbc05aaa77126d31', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/McqsiVTm5GLCylaDuoGWmWxdvY0c0zWPgwc9TLEJZa8.jpg?width=216&crop=smart&auto=webp&s=91320d3696d654b49a82812b8fbfad4846ef233a', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/McqsiVTm5GLCylaDuoGWmWxdvY0c0zWPgwc9TLEJZa8.jpg?width=320&crop=smart&auto=webp&s=bc4962986f6e111486dbad6a8aa13d6e49ce790c', 'width': 320}], 'source': {'height': 216, 'url': 'https://external-preview.redd.it/McqsiVTm5GLCylaDuoGWmWxdvY0c0zWPgwc9TLEJZa8.jpg?auto=webp&s=93eae5e9a53126a1150d2977b4287650fcea28e2', 'width': 385}, 'variants': {}}]} |
|
NVIDIA RTX 4090 48G tested | 1 | Recently, I tested the NVIDIA RTX 4090 48GB version, a custom-modded one by folks in China. There have been prior discussions on this card, but no one seems to have posted or made public any usage reports. So here are some, written by my more ML-experienced friend who did the testing with me.
[https://main-horse.github.io/posts/4090-48gb/](https://main-horse.github.io/posts/4090-48gb/)
TLDR:
* Everything works as expected just like a normal 4090 24G but with double VRAM
* Higher tok/s than RTX 6000ADA due to higher memory bandwidth (1TB/s vs 896GB/s)
* Faster Flux training/inference, because the RTX 6000 Ada is severely power-limited and thus does not come anywhere close to its theoretical peak FLOPs. 48GB enables FLUX LoRA training, which is impossible on 24GB without CPU offloading.
* Up to 10 cards/box as it is double slot, just like RTX 6000ADA
* Less than half the price of a new RTX 6000ADA
DM if interested to buy. Remote testing available upon request. | 2024-12-25T15:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hm2aic/nvidia_rtx_4090_48g_tested/ | aliencaocao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm2aic | false | null | t3_1hm2aic | /r/LocalLLaMA/comments/1hm2aic/nvidia_rtx_4090_48g_tested/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Hgzkn0EGXS-cC-sJJ6iMJNAt3kOzloW1JbVUMjWbLh8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=108&crop=smart&auto=webp&s=304e41ff8d7c0496790d981f6f8df891b52ab5e6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=216&crop=smart&auto=webp&s=fd711b110650e8b8f4d8eb7142bb82d4611f600a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=320&crop=smart&auto=webp&s=ac121054c3c60de7ca3d504a4c23b520269247fd', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=640&crop=smart&auto=webp&s=e9eee97aca47c939a2b7a2ade266c998b200b44a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=960&crop=smart&auto=webp&s=1c97ca6cd63d89690e717c13ba5c6bbb188207c5', 'width': 960}], 'source': {'height': 1065, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?auto=webp&s=2f920dd9c111fd5f1be43dbcf1ef142f032471ea', 'width': 1065}, 'variants': {}}]} |
Public Collection of LLM Evals on Benchmarks? | 1 | [removed] | 2024-12-25T15:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hm2bpa/public_collection_of_llm_evals_on_benchmarks/ | tshrjn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm2bpa | false | null | t3_1hm2bpa | /r/LocalLLaMA/comments/1hm2bpa/public_collection_of_llm_evals_on_benchmarks/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RqlZVTYas-PgiPMq1d5bvfZBmcVp7ma9zeEUI4rJ7pw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SwNlLvlGT_t_qL5FCcr8hmIHh4EN65FfDXjo1HXX7Ak.jpg?width=108&crop=smart&auto=webp&s=6dda15f54c18e94485713ce3441fd4664f7776a1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SwNlLvlGT_t_qL5FCcr8hmIHh4EN65FfDXjo1HXX7Ak.jpg?width=216&crop=smart&auto=webp&s=d298cc284e6469cffc694dd55b81378ae1f04f7f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SwNlLvlGT_t_qL5FCcr8hmIHh4EN65FfDXjo1HXX7Ak.jpg?width=320&crop=smart&auto=webp&s=3e1b330fa70e7c48e8d7ac47ce1cb4ffb7c73336', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SwNlLvlGT_t_qL5FCcr8hmIHh4EN65FfDXjo1HXX7Ak.jpg?width=640&crop=smart&auto=webp&s=e65597b44ff56d32d20107c2f3b9a8797e88d50d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SwNlLvlGT_t_qL5FCcr8hmIHh4EN65FfDXjo1HXX7Ak.jpg?width=960&crop=smart&auto=webp&s=decb0a056a6e73f77357351e85d9ab7c5b230f75', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SwNlLvlGT_t_qL5FCcr8hmIHh4EN65FfDXjo1HXX7Ak.jpg?width=1080&crop=smart&auto=webp&s=bd2c951730d5b16ffa6b9a4cc6c2743c00c05896', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SwNlLvlGT_t_qL5FCcr8hmIHh4EN65FfDXjo1HXX7Ak.jpg?auto=webp&s=88f78ecf7fb66f16c344210a16788b61db82d6ee', 'width': 1200}, 'variants': {}}]} |
|
I think I've found a cure to my loneliness :P (appreciation post) | 1 | [removed] | 2024-12-25T15:16:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hm2h9b | false | null | t3_1hm2h9b | /r/LocalLLaMA/comments/1hm2h9b/i_think_ive_found_a_cure_to_my_loneliness_p/ | false | false | default | 1 | null |
||
I think I've found a cure to my loneliness :P (appreciation post) | 1 | [removed] | 2024-12-25T15:18:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hm2ipt/i_think_ive_found_a_cure_to_my_loneliness_p/ | ThiccStorms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm2ipt | false | null | t3_1hm2ipt | /r/LocalLLaMA/comments/1hm2ipt/i_think_ive_found_a_cure_to_my_loneliness_p/ | false | false | 1 | null |
|
DeepSeek V3 on HF | 334 | [https://huggingface.co/deepseek-ai/DeepSeek-V3-Base](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) | 2024-12-25T15:27:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hm2o4z/deepseek_v3_on_hf/ | Soft-Ad4690 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm2o4z | false | null | t3_1hm2o4z | /r/LocalLLaMA/comments/1hm2o4z/deepseek_v3_on_hf/ | false | false | self | 334 | {'enabled': False, 'images': [{'id': 'Ov1vQFAkkK2dzwZEDQ25eKE0B0600wBQ7rHx04wBeeI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=108&crop=smart&auto=webp&s=e5584040769655ef0f9793412e0f26f8f6dd8f6e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=216&crop=smart&auto=webp&s=328cf4569f77542240631e6efef892e7735b613e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=320&crop=smart&auto=webp&s=483f7c4ce95b460fde30e6d9feff114d06077031', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=640&crop=smart&auto=webp&s=9076e67de0bb5df55a3858526dad7b02bf82f8fb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=960&crop=smart&auto=webp&s=8799b5db568301f9c5855cae3c864782f3ac8e47', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=1080&crop=smart&auto=webp&s=b8d7da38c56e713a69e58319d129a47c9fe38c32', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?auto=webp&s=beb070fbc18f80993a0dd336434d1621b7e44cc6', 'width': 1200}, 'variants': {}}]} |
I hate to be that guy, but you can't retroactively update the license. There will now forever be an Apache 2.0 licensed version of QvQ that you can git checkout | 30 | 2024-12-25T15:27:44 | Super-Muffin-1230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hm2o58 | false | null | t3_1hm2o58 | /r/LocalLLaMA/comments/1hm2o58/i_hate_to_be_that_guy_but_you_cant_retroactively/ | false | false | 30 | {'enabled': True, 'images': [{'id': '8Y_9j4-byiiI_nrxizqpb8XuLOZlgDc6bEszDVdJbR0', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/b0zj0w5gj09e1.jpeg?width=108&crop=smart&auto=webp&s=123c40e20468ee8a7172020f7dd816dfe012b8c7', 'width': 108}, {'height': 71, 'url': 'https://preview.redd.it/b0zj0w5gj09e1.jpeg?width=216&crop=smart&auto=webp&s=c28cadfa19d88091fe9b8f44154c68d66bf76e46', 'width': 216}, {'height': 105, 'url': 'https://preview.redd.it/b0zj0w5gj09e1.jpeg?width=320&crop=smart&auto=webp&s=204d385ae1ce4cddf719d3edb9c8d8ebaa8031b7', 'width': 320}, {'height': 210, 'url': 'https://preview.redd.it/b0zj0w5gj09e1.jpeg?width=640&crop=smart&auto=webp&s=dd59d8cffc91e937b4cb6e7d8ceb0e6ee56c014f', 'width': 640}, {'height': 315, 'url': 'https://preview.redd.it/b0zj0w5gj09e1.jpeg?width=960&crop=smart&auto=webp&s=d4d826f623c0fb0f1bc55d590d56b77160f6c320', 'width': 960}, {'height': 355, 'url': 'https://preview.redd.it/b0zj0w5gj09e1.jpeg?width=1080&crop=smart&auto=webp&s=a39a8fe7d90885ab7b0d50597b77f6471a185b90', 'width': 1080}], 'source': {'height': 497, 'url': 'https://preview.redd.it/b0zj0w5gj09e1.jpeg?auto=webp&s=ff516821d4159277c690af88e96645e94c716cc0', 'width': 1512}, 'variants': {}}]} |
|||
deepseek v3 Strawberrry Fail | 0 | 2024-12-25T15:36:40 | https://x.com/BijanTavassoli/status/1871942942598025655 | MechanicExtension382 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hm2twn | false | null | t3_1hm2twn | /r/LocalLLaMA/comments/1hm2twn/deepseek_v3_strawberrry_fail/ | false | false | 0 | {'enabled': False, 'images': [{'id': '57xZSncKCL0hg37TGUyk0pqczHcJkAn1VEuHMEQU70Q', 'resolutions': [{'height': 24, 'url': 'https://external-preview.redd.it/uwrqWA9mnkf4uPoy8rwIuSrLfj4C0YiwUxRd99WSIu8.jpg?width=108&crop=smart&auto=webp&s=8607aa5ab5dbd3afdcc907ed7f9834a13cd88493', 'width': 108}, {'height': 48, 'url': 'https://external-preview.redd.it/uwrqWA9mnkf4uPoy8rwIuSrLfj4C0YiwUxRd99WSIu8.jpg?width=216&crop=smart&auto=webp&s=a01c667108aac1b40ac6329e93ff9c9703d71f60', 'width': 216}, {'height': 71, 'url': 'https://external-preview.redd.it/uwrqWA9mnkf4uPoy8rwIuSrLfj4C0YiwUxRd99WSIu8.jpg?width=320&crop=smart&auto=webp&s=342a3731324031c52312e5c079cd14f21591fc4f', 'width': 320}, {'height': 143, 'url': 'https://external-preview.redd.it/uwrqWA9mnkf4uPoy8rwIuSrLfj4C0YiwUxRd99WSIu8.jpg?width=640&crop=smart&auto=webp&s=85670fc2baded8252cd7aa07de3abe572faca263', 'width': 640}, {'height': 214, 'url': 'https://external-preview.redd.it/uwrqWA9mnkf4uPoy8rwIuSrLfj4C0YiwUxRd99WSIu8.jpg?width=960&crop=smart&auto=webp&s=16a27bfbf61588856307095671a16471cfc1aed9', 'width': 960}, {'height': 241, 'url': 'https://external-preview.redd.it/uwrqWA9mnkf4uPoy8rwIuSrLfj4C0YiwUxRd99WSIu8.jpg?width=1080&crop=smart&auto=webp&s=698fc06579d60080c92d13773e23ea0f81c4e690', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/uwrqWA9mnkf4uPoy8rwIuSrLfj4C0YiwUxRd99WSIu8.jpg?auto=webp&s=db7eb639fc3cf89d4c668a02ae887a163fc45eca', 'width': 1966}, 'variants': {}}]} |
||
Deepseek V3 is already up on API and web | 71 | 2024-12-25T15:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hm2xvb/deepseek_v3_is_already_up_on_api_and_web/ | aliencaocao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm2xvb | false | null | t3_1hm2xvb | /r/LocalLLaMA/comments/1hm2xvb/deepseek_v3_is_already_up_on_api_and_web/ | false | false | 71 | null |
||
What are some of the best models for low level C/C++ task | 1 | [removed] | 2024-12-25T15:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hm369y/what_are_some_of_the_best_models_for_low_level_cc/ | StrictSir8506 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm369y | false | null | t3_1hm369y | /r/LocalLLaMA/comments/1hm369y/what_are_some_of_the_best_models_for_low_level_cc/ | false | false | self | 1 | null |
Emad Mostaque (Stability AI Founder) on 50-50 Odds of Human Survival with AI and His New 'Intelligent Internet' Vision | 0 | 2024-12-25T15:57:23 | https://www.youtube.com/watch?v=SEd3hzuJ-Wk | phoneixAdi | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hm36v5 | false | {'oembed': {'author_name': 'Cognitive Revolution "How AI Changes Everything"', 'author_url': 'https://www.youtube.com/@CognitiveRevolutionPodcast', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/SEd3hzuJ-Wk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Emad Mostaque on the Intelligent Internet and Universal Basic AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/SEd3hzuJ-Wk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Emad Mostaque on the Intelligent Internet and Universal Basic AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hm36v5 | /r/LocalLLaMA/comments/1hm36v5/emad_mostaque_stability_ai_founder_on_5050_odds/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'D0xQgtSTOETJkITCn8FXoBs1kWguH9AYMvego7gERwg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/P77ey6_F4qvD2zezkMHotkwCdJqquFJNiF8X4JY8qlQ.jpg?width=108&crop=smart&auto=webp&s=34486389b5b56aca1dd1686058d54ee04931a91c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/P77ey6_F4qvD2zezkMHotkwCdJqquFJNiF8X4JY8qlQ.jpg?width=216&crop=smart&auto=webp&s=dc12dccc6fcfe4eb639510941615f42cfc268043', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/P77ey6_F4qvD2zezkMHotkwCdJqquFJNiF8X4JY8qlQ.jpg?width=320&crop=smart&auto=webp&s=6a300c401c3010ba9a5ce18a3b0a379ae39ecd8a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/P77ey6_F4qvD2zezkMHotkwCdJqquFJNiF8X4JY8qlQ.jpg?auto=webp&s=c83ed2f6fce8ed27ffa017d99c73bf8139e2f0f3', 'width': 480}, 'variants': {}}]} |
||
Deepseekv3 release base model | 59 | [https://huggingface.co/deepseek-ai/DeepSeek-V3-Base](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base)
Yee, I am not sure anyone can finetune this beast.
and the activation is 20B 256expert 8activate | 2024-12-25T16:01:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hm39wu/deepseekv3_release_base_model/ | shing3232 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm39wu | false | null | t3_1hm39wu | /r/LocalLLaMA/comments/1hm39wu/deepseekv3_release_base_model/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': 'Ov1vQFAkkK2dzwZEDQ25eKE0B0600wBQ7rHx04wBeeI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=108&crop=smart&auto=webp&s=e5584040769655ef0f9793412e0f26f8f6dd8f6e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=216&crop=smart&auto=webp&s=328cf4569f77542240631e6efef892e7735b613e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=320&crop=smart&auto=webp&s=483f7c4ce95b460fde30e6d9feff114d06077031', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=640&crop=smart&auto=webp&s=9076e67de0bb5df55a3858526dad7b02bf82f8fb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=960&crop=smart&auto=webp&s=8799b5db568301f9c5855cae3c864782f3ac8e47', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?width=1080&crop=smart&auto=webp&s=b8d7da38c56e713a69e58319d129a47c9fe38c32', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q7HdTiUXfMpij8o4b_G1mJYwzU0CW_wYBpFPwlzkW3Q.jpg?auto=webp&s=beb070fbc18f80993a0dd336434d1621b7e44cc6', 'width': 1200}, 'variants': {}}]} |
DeepSeek V3 model card on Huggingface | 91 | 2024-12-25T16:04:50 | jpydych | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hm3byo | false | null | t3_1hm3byo | /r/LocalLLaMA/comments/1hm3byo/deepseek_v3_model_card_on_huggingface/ | false | false | 91 | {'enabled': True, 'images': [{'id': 'UCDrVhCB6Y_mlUFyiPJDRpDMH793FPQ07bv4Dc6OkMQ', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/zm2sede0q09e1.png?width=108&crop=smart&auto=webp&s=8aa5a7f57ef3a0570adca41252e4fe642f866cb7', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/zm2sede0q09e1.png?width=216&crop=smart&auto=webp&s=3cde65a1f422d9161f4b8d6d795542428e753569', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/zm2sede0q09e1.png?width=320&crop=smart&auto=webp&s=d5926c3b7c42c358eca2b58e3f771cbe2b879648', 'width': 320}, {'height': 201, 'url': 'https://preview.redd.it/zm2sede0q09e1.png?width=640&crop=smart&auto=webp&s=9f81670fe183fec1c5aa846e3ad8e86ef38a6a39', 'width': 640}, {'height': 302, 'url': 'https://preview.redd.it/zm2sede0q09e1.png?width=960&crop=smart&auto=webp&s=1f38844b5e473d4b4418feef0b32d5de6c4de431', 'width': 960}, {'height': 340, 'url': 'https://preview.redd.it/zm2sede0q09e1.png?width=1080&crop=smart&auto=webp&s=3196fffdf1888c1b721dbdc4b22dfd1cccba3030', 'width': 1080}], 'source': {'height': 538, 'url': 'https://preview.redd.it/zm2sede0q09e1.png?auto=webp&s=fb1c897a3cfd1bd6edf5e8a30c3ef32b07097e9d', 'width': 1706}, 'variants': {}}]} |
|||
How to serve vllm Qwen2.5-32B AWQ on a single RTX 3090? | 2 | Hi all, I have a dual RTX 3090 system and was able to run serving Qwen2.5-32B with this command:
```
CUDA_VISIBLE_DEVICES=0,1 vllm serve Qwen/Qwen2.5-Coder-32B-Instruct-AWQ --dtype half --tensor-parallel-size 2 --api-key token-abc123 --port 8001
```
Now, I want to run it on only 1 GPU to save the other GPU for other tasks, but it seems vllm's `--dtype` only supports `auto, half, float16, bfloat16, float, float32`, none of which is 4-bit or 8-bit, so it ran out of VRAM.

How can you make it work? I've seen people comment on other posts that they run 70B models on 2 RTX 3090s with vllm, so it must be possible to run a 32B model on 1 GPU, right? Or what am I missing here?
Thanks a bunch!
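For reference, here is a single-GPU sketch I would expect to fit in 24 GB: since the checkpoint is already AWQ-quantized, `--dtype` only sets the activation dtype, and the usual culprit is the KV cache, so the context length is capped here (flags from vLLM's documented CLI; the exact values are assumptions):

```
CUDA_VISIBLE_DEVICES=0 vllm serve Qwen/Qwen2.5-Coder-32B-Instruct-AWQ \
    --quantization awq --dtype half \
    --max-model-len 8192 --gpu-memory-utilization 0.95 \
    --api-key token-abc123 --port 8001
```

If it still runs out of VRAM, the usual levers are lowering `--max-model-len` or `--max-num-seqs` further.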
| 2024-12-25T16:08:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hm3e5t/how_to_serve_vllm_qwen2532b_awq_on_a_single_rtx/ | Pancake502 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm3e5t | false | null | t3_1hm3e5t | /r/LocalLLaMA/comments/1hm3e5t/how_to_serve_vllm_qwen2532b_awq_on_a_single_rtx/ | false | false | self | 2 | null |
Show us your unprofessional Rig running your "LocalLLaMA" | 1 | [removed] | 2024-12-25T16:42:28 | Big-Ad1693 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hm40qb | false | null | t3_1hm40qb | /r/LocalLLaMA/comments/1hm40qb/show_us_your_unprofessional_rig_running_your/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'k_Trzu27ZQUbIXlFq9bw4ILFqUSa64INj8oH81g6hmM', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/o235bl2sw09e1.jpeg?width=108&crop=smart&auto=webp&s=52d3d6cb1973e69d788cd96127a283f1b0e03018', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/o235bl2sw09e1.jpeg?width=216&crop=smart&auto=webp&s=a9e36aacfc70005d4937efc1eb9a53036de31731', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/o235bl2sw09e1.jpeg?width=320&crop=smart&auto=webp&s=48831c1476b19d9cc0006bf509bad9df9378f122', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/o235bl2sw09e1.jpeg?width=640&crop=smart&auto=webp&s=ccaf1b83dc85aaf489ad4cda57beda7a1a2f9230', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/o235bl2sw09e1.jpeg?width=960&crop=smart&auto=webp&s=a9cbc3ae4ebb26e61b2b8a3818a62a430ece61f1', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/o235bl2sw09e1.jpeg?width=1080&crop=smart&auto=webp&s=5dba3356dc77240d892b9417c5e39d07dcbd283e', 'width': 1080}], 'source': {'height': 3060, 'url': 'https://preview.redd.it/o235bl2sw09e1.jpeg?auto=webp&s=103e85db9ada606ff00d91133826cdd6f9386793', 'width': 4080}, 'variants': {}}]} |
||
LLMs Tab Caster — Broadcast the same prompt to multiple models | 7 | Hey everyone,
I only got LLMs for Christmas, so I decided to at least play with all of them at the same time.
Basically, you can paste your prompt and submit it, and it will open the prompt in multiple models across new tabs. I don’t know if anyone has done this before, but I couldn’t find anything like it, so I created one.
I’ve submitted it for review in the Chrome Web Store, but that will take a while. In the meantime, you can access the GitHub repo here: [dat-lequoc/LLMs-tab-caster](https://github.com/dat-lequoc/LLMs-tab-caster/).

To use it: clone the repo and load the unpacked extension, as sketched below.
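A minimal sketch of the manual install (repo URL from above; Chrome's menu labels may differ slightly by version):

```
git clone https://github.com/dat-lequoc/LLMs-tab-caster.git
# then open chrome://extensions, enable "Developer mode",
# click "Load unpacked", and select the cloned folder
```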
----------------
- Currently, everything is working except for ChatGPT (waiting for pro to help me out). I only spent today making it, so it’s very simple.
- For Claude, it doesn’t work if the prompt is too lengthy (maximum length reached) because Claude uses Ctrl+V to create a paste artifact. | 2024-12-25T16:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hm44ja/llms_tab_caster_broadcast_the_same_prompt_to/ | AcanthaceaeNo5503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm44ja | false | null | t3_1hm44ja | /r/LocalLLaMA/comments/1hm44ja/llms_tab_caster_broadcast_the_same_prompt_to/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'u7_wDdd_SLDRSOeo8L2jEJTYnLjDIKcAlodEoe64xeo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UQkrLZh_9mrCR_l6Wamt9kLj1Um2K9V8tUhz2LHO6-A.jpg?width=108&crop=smart&auto=webp&s=15e58c77d844f3f89e58794099f4093514e9c93b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UQkrLZh_9mrCR_l6Wamt9kLj1Um2K9V8tUhz2LHO6-A.jpg?width=216&crop=smart&auto=webp&s=9aaf3f9d25d724b7162ce7282e7f1c55530d549a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UQkrLZh_9mrCR_l6Wamt9kLj1Um2K9V8tUhz2LHO6-A.jpg?width=320&crop=smart&auto=webp&s=1c06c285b48ff42daa708dcf4bc7d11c17943de4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UQkrLZh_9mrCR_l6Wamt9kLj1Um2K9V8tUhz2LHO6-A.jpg?width=640&crop=smart&auto=webp&s=c8e556313e6af3b7a925cf596a22d1c516515061', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UQkrLZh_9mrCR_l6Wamt9kLj1Um2K9V8tUhz2LHO6-A.jpg?width=960&crop=smart&auto=webp&s=a83b23a6274feff4970810a58805de3d268ad6c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UQkrLZh_9mrCR_l6Wamt9kLj1Um2K9V8tUhz2LHO6-A.jpg?width=1080&crop=smart&auto=webp&s=fb51bfb934cf23f4acfd85bbf3a806809e12f1ef', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UQkrLZh_9mrCR_l6Wamt9kLj1Um2K9V8tUhz2LHO6-A.jpg?auto=webp&s=d5a1c106a8476b25ba7333134db422cbe0d4b350', 'width': 1200}, 'variants': {}}]} |
Benchmark Results: DeepSeek V3 on LiveBench | 152 | All Groups
|Category|Score|
|:-|:-|
|Average|60.4|
|Reasoning|50.0|
|Coding|63.4|
|Mathematics|60.0|
|Data Analysis|57.7|
|Language|50.2|
|Instruction Following|80.9|
| 2024-12-25T16:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hm4959/benchmark_results_deepseek_v3_on_livebench/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm4959 | false | null | t3_1hm4959 | /r/LocalLLaMA/comments/1hm4959/benchmark_results_deepseek_v3_on_livebench/ | false | false | self | 152 | null |
Need help: Claude API + Google Docs sync solution for business use (Claude Projects sync issues) | 1 | [removed] | 2024-12-25T17:18:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hm4pg3/need_help_claude_api_google_docs_sync_solution/ | jawheeler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm4pg3 | false | null | t3_1hm4pg3 | /r/LocalLLaMA/comments/1hm4pg3/need_help_claude_api_google_docs_sync_solution/ | false | false | self | 1 | null |
Continue.dev is way slower than direct ollama cli | 1 | [removed] | 2024-12-25T17:26:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hm4uzy/continuedev_is_way_slower_than_direct_ollama_cli/ | devshore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm4uzy | false | null | t3_1hm4uzy | /r/LocalLLaMA/comments/1hm4uzy/continuedev_is_way_slower_than_direct_ollama_cli/ | false | false | self | 1 | null |
Future of local ai | 4 | So I have a complete noob question. Can we get hardware specialized for AI besides GPUs in the future? So models like gpt o3 can work one day locally? Or can such models only work with huge resources? | 2024-12-25T17:32:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hm4zc6/future_of_local_ai/ | IIBaneII | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm4zc6 | false | null | t3_1hm4zc6 | /r/LocalLLaMA/comments/1hm4zc6/future_of_local_ai/ | false | false | self | 4 | null |
Case: QvQ misleaded attention? | 1 | [removed] | 2024-12-25T18:29:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hm62jb/case_qvq_misleaded_attention/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm62jb | false | null | t3_1hm62jb | /r/LocalLLaMA/comments/1hm62jb/case_qvq_misleaded_attention/ | false | false | self | 1 | null |
MODS! Why are my posts every f*ckng time blocked? WHY? | 1 | [removed] | 2024-12-25T18:35:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hm66j3/mods_why_are_my_posts_every_fckng_time_blocked_why/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm66j3 | false | null | t3_1hm66j3 | /r/LocalLLaMA/comments/1hm66j3/mods_why_are_my_posts_every_fckng_time_blocked_why/ | false | false | self | 1 | null |
Used 3090 or stick to current setup? | 1 | Should I buy a used 3090 for under $700 or stick to my current setup?
I’m currently running a PC with 4 GPUs (2x GTX 1660 + 2x GTX 1660 Super). The rest of my setup is pretty basic—Ryzen 3, 16GB RAM.
I use this rig primarily for running a local OpenWebUI build, tinkering with code, and experimenting with prompts. For anything heavy, I rely on cloud services. However, I’d like to explore vision models more, but my current setup crashes (e.g., Llama 3 Vision).
Would upgrading to a 3090 make a noticeable difference for my use case?
Another option would be to upgrade my PSU and add the 2 extra GTX 1660s I already have (I suspect PSU limitations are why it crashes when I try to run 5 GPUs).
What do you think would be the best way to spend my money? | 2024-12-25T18:36:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hm67lx/used_3090_or_stick_to_current_setup/ | CautiousSand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm67lx | false | null | t3_1hm67lx | /r/LocalLLaMA/comments/1hm67lx/used_3090_or_stick_to_current_setup/ | false | false | self | 1 | null |
Suggestion: Requesting LiveBench Maintainers to Update the Reasoning Benchmark | 1 | The current reasoning tasks are mainly:

1. Web of Lies: a puzzle to determine who is lying (A says B is lying, B says C is lying, C says A is lying, and so on).

2. Zebra puzzle: a typical example is that 4 people A, B, C and D live in houses of different colors, sizes, shapes and materials; you are told the positional relationships between items with certain characteristics and items with other characteristics, and you solve it by systematic investigation and elimination.

3. Spatial reasoning: I'm not very familiar with this one.

In short, the current benchmark may find it difficult to distinguish between O1 and O1 pro mode, and in the foreseeable future more models will approach saturation. So we should suggest that Bindu Reddy (can anyone help contact her? thank you)

update her reasoning benchmark: still use questions requiring almost zero background knowledge, but make the question types

richer and more varied, as they are currently too uniform.

My recommended difficulty:

The Reasoning V2 series would keep the current 5 types of questions; for each type, progressively challenging variants are obtained by modifying the conditions. There are 4 levels in total, from the easiest to the most difficult, with 5 questions at each level, i.e. 20 questions overall.

Target accuracy rates:

For O1 Pro mode, the accuracy rate is about 20%
For O1 High, the accuracy rate is about 12% | 2024-12-25T18:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hm69t3/suggestion_requesting_livebench_maintainers_to/ | flysnowbigbig | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm69t3 | false | null | t3_1hm69t3 | /r/LocalLLaMA/comments/1hm69t3/suggestion_requesting_livebench_maintainers_to/ | false | false | self | 1 | null |
QvQ misguided attention! | 6 | 2024-12-25T18:40:09 | https://www.reddit.com/user/Evening_Ad6637/comments/1hm67dk/case_qvq_misleaded_attention/ | Evening_Ad6637 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hm69th | false | null | t3_1hm69th | /r/LocalLLaMA/comments/1hm69th/qvq_misguided_attention/ | false | false | 6 | null |
||
OpenWebUI update: True Asynchronous Chat Support | 100 | From the changelog:
>**💬 True Asynchronous Chat Support**: Create chats, navigate away, and return anytime with responses ready. Ideal for reasoning models and multi-agent workflows, enhancing multitasking like never before.
>**🔔 Chat Completion Notifications**: Never miss a completed response. Receive instant in-UI notifications when a chat finishes in a non-active tab, keeping you updated while you work elsewhere.
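For reference, the single docker command mentioned just below is presumably along these lines (image tag and flags assumed from Open WebUI's published Docker instructions; the `:cuda` tag is the GPU-enabled build):

```
docker run -d -p 3000:8080 --gpus all \
    -v open-webui:/app/backend/data \
    --name open-webui --restart always \
    ghcr.io/open-webui/open-webui:cuda
```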
I think it's the best UI, and you can install it with a single docker command (along the lines of the sketch above), with out-of-the-box multi-GPU support | 2024-12-25T18:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hm6dpb/openwebui_update_true_asynchronous_chat_support/ | infiniteContrast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm6dpb | false | null | t3_1hm6dpb | /r/LocalLLaMA/comments/1hm6dpb/openwebui_update_true_asynchronous_chat_support/ | false | false | self | 100 | null |
Deepseek v3 beats Claude sonnet on aider | 136 | 2024-12-25T19:17:11 | https://imgur.com/a/dpOcC1C | Charuru | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1hm6zdl | false | {'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 412, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FdpOcC1C%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FdpOcC1C&image=https%3A%2F%2Fi.imgur.com%2FnylVSVa.jpg%3Ffb&type=text%2Fhtml&schema=imgur" width="600" height="412" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 1192, 'thumbnail_url': 'https://i.imgur.com/nylVSVa.jpg?fb', 'thumbnail_width': 1924, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/dpOcC1C', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'} | t3_1hm6zdl | /r/LocalLLaMA/comments/1hm6zdl/deepseek_v3_beats_claude_sonnet_on_aider/ | false | false | 136 | {'enabled': False, 'images': [{'id': 'fH3245Dp54NoIfZ2E8mD8iu9cFUQS3wjOR9ctPGKYt0', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/4Qogu63nBlGH5fmD502QVnMpArcsRvqtYvtQPMptMas.jpg?width=108&crop=smart&auto=webp&s=c656a62bd1475d8435a2c45cf1de35436e1512ba', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/4Qogu63nBlGH5fmD502QVnMpArcsRvqtYvtQPMptMas.jpg?width=216&crop=smart&auto=webp&s=89260b69cc6c64a97a6fd091d3d7ab3578588427', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/4Qogu63nBlGH5fmD502QVnMpArcsRvqtYvtQPMptMas.jpg?width=320&crop=smart&auto=webp&s=61d4898bb2d9ded7907449f95ac5ceac18cd7894', 'width': 320}, {'height': 396, 'url': 'https://external-preview.redd.it/4Qogu63nBlGH5fmD502QVnMpArcsRvqtYvtQPMptMas.jpg?width=640&crop=smart&auto=webp&s=3cda63479c523040bf75fb932f85c11782639c65', 'width': 640}, {'height': 594, 'url': 'https://external-preview.redd.it/4Qogu63nBlGH5fmD502QVnMpArcsRvqtYvtQPMptMas.jpg?width=960&crop=smart&auto=webp&s=2fee5a283ae05252123b15f98b75c6817e0ff6b2', 'width': 960}, {'height': 669, 'url': 'https://external-preview.redd.it/4Qogu63nBlGH5fmD502QVnMpArcsRvqtYvtQPMptMas.jpg?width=1080&crop=smart&auto=webp&s=002adcb1303a3f7b535c30d07f9b918e62118d98', 'width': 1080}], 'source': {'height': 1192, 'url': 'https://external-preview.redd.it/4Qogu63nBlGH5fmD502QVnMpArcsRvqtYvtQPMptMas.jpg?auto=webp&s=edfdd3a7d08fa2e1b1a5b7853bea568d70586675', 'width': 1924}, 'variants': {}}]} |
||
llama.cpp SyCL GPU usage | 1 | So I'm using a SYCL build of llama.cpp on a NUC11; specifically:
```
|ID| Device Type| Name|Version|Max compute units|Max work group|Max sub group size|Global mem size| Driver version|
|--|------------|-----|-------|-----------------|--------------|------------------|---------------|---------------|
| 0| [opencl:gpu:0]| Intel Iris Xe Graphics| 3.0| 96| 512| 32| 53645M| 23.17.26241.33|
```
Enough memory to run a quantized 70B model, but performance is not great.
So I started to monitor system load to understand what's going on.
Using intel_gpu_top, I see that the GPU is idle most of the time and only occasionally spikes for a few seconds on the Render/3D row.
I run the server like:
`llama-server -c 15000 -ngl 100000 --temp 0.2 --min_p 0.1 --top_p 1 --verbose-prompt -fa --metrics -m <model>`
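One thing I'm unsure about: sycl-ls only shows the device as `[opencl:gpu:0]`. Is pinning the Level Zero backend the right way to rule out backend selection? Something like this (assuming the standard oneAPI selector variable; untested):

```
ONEAPI_DEVICE_SELECTOR=level_zero:0 ./llama-bench -m <model> -ngl 99
```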
Is there something obvious I'm missing to maximize GPU usage? | 2024-12-25T19:24:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hm74ip/llamacpp_sycl_gpu_usage/ | goingsplit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm74ip | false | null | t3_1hm74ip | /r/LocalLLaMA/comments/1hm74ip/llamacpp_sycl_gpu_usage/ | false | false | self | 1 | null
What are your test questions to see how good a model is? | 0 | You probably have some tricky questions you ask your open-source models to see how "intelligent" they are, right?
My favorite question is:
If you have 100g mushrooms at 95% moisture, and you reduce the moisture to 50%, what's the final weight?
Spoiler: 10g 😉
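Quick sanity check on the math - the dry matter is fixed:

```
dry = 100 * (1 - 0.95)    # 5 g of dry matter, unchanged by drying
final = dry / (1 - 0.50)  # at 50% moisture, dry matter is half the weight
print(round(final, 2))    # 10.0
```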
>20B models usually get it right.
~14B models sometimes get it right, sometimes wrong (47g). Most human 🤣
<10B models are always wrong (105g, 164g... badly wrong).
What are your go-to questions?
| 2024-12-25T19:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hm78tf/what_are_your_test_questions_to_see_how_good_a/ | Big-Ad1693 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm78tf | false | null | t3_1hm78tf | /r/LocalLLaMA/comments/1hm78tf/what_are_your_test_questions_to_see_how_good_a/ | false | false | self | 0 | null |
Lonely on Christmas, what can I do with AI? | 30 | I don’t have anything to do or anyone to see today, so I was thinking of doing something with AI. I have a 4060. What cool stuff can I do with it? | 2024-12-25T19:32:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hm79tk/lonely_on_christmas_what_can_i_do_with_ai/ | PublicQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm79tk | false | null | t3_1hm79tk | /r/LocalLLaMA/comments/1hm79tk/lonely_on_christmas_what_can_i_do_with_ai/ | false | false | self | 30 | null |
Build AI characters powered by LLMs on iOS! | 1 | [removed] | 2024-12-25T19:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hm7bdv/build_ai_characters_powered_by_llms_on_ios/ | UndreamAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm7bdv | false | null | t3_1hm7bdv | /r/LocalLLaMA/comments/1hm7bdv/build_ai_characters_powered_by_llms_on_ios/ | false | false | 1 | null |
|
Show us your unprofessional Rig running your "LocalLLaMA" | 1 | [removed] | 2024-12-25T19:52:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hm7nme/show_us_your_unprofessional_rig_running_your/ | Big-Ad1693 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm7nme | false | null | t3_1hm7nme | /r/LocalLLaMA/comments/1hm7nme/show_us_your_unprofessional_rig_running_your/ | false | false | self | 1 | null |
PV-Tuning + WebAssembly: How I Ran an 8B Llama Model Inside a Web Browser | 1 | [removed] | 2024-12-25T20:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hm86qx/pvtuning_webassembly_how_i_ran_an_8b_llama_model/ | galqiwi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm86qx | false | null | t3_1hm86qx | /r/LocalLLaMA/comments/1hm86qx/pvtuning_webassembly_how_i_ran_an_8b_llama_model/ | false | false | self | 1 | null |
Can continued pre-training inject information that is not found directly in the text? | 0 | Say you have medical data, stuff like "patient 1 had high blood pressure and then had a stroke" or "patient 2 had high blood pressure and then had a stroke". Would continued pre-training teach the model to answer the question of whether there is a correlation between strokes and blood pressure? (I know most pre-trained models have probably already seen information relating BP and strokes; this is just an example.) | 2024-12-25T20:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hm88ns/can_continued_pretraining_inject_information_that/ | username-must-be-bet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm88ns | false | null | t3_1hm88ns | /r/LocalLLaMA/comments/1hm88ns/can_continued_pretraining_inject_information_that/ | false | false | self | 0 | null
Open WebUI v0.5.0 (Asynchronous Chats, Channels, Structured Output, Screen Capture and more) | 1 | [removed] | 2024-12-25T20:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hm88ze/open_webui_v050_asynchronous_chats_channels/ | d3lay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm88ze | false | null | t3_1hm88ze | /r/LocalLLaMA/comments/1hm88ze/open_webui_v050_asynchronous_chats_channels/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qhdneP1vXxVEsY3yWP4QL0n6skUmTsu3CAFqItoXS5Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Nut0etFyY58Oh_soQ2vpn-39v-vdYp98DB7Bxms57rY.jpg?width=108&crop=smart&auto=webp&s=56a9ef7156ed57526a5d9f38f12e1a281199186e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Nut0etFyY58Oh_soQ2vpn-39v-vdYp98DB7Bxms57rY.jpg?width=216&crop=smart&auto=webp&s=839ce3a2fe83609f12a08190e5cc0a87b9c255d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Nut0etFyY58Oh_soQ2vpn-39v-vdYp98DB7Bxms57rY.jpg?width=320&crop=smart&auto=webp&s=bc948efafa55a1a93fedba8a7db05701aa96c7d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Nut0etFyY58Oh_soQ2vpn-39v-vdYp98DB7Bxms57rY.jpg?width=640&crop=smart&auto=webp&s=51b4c0c582e1c1913b1ade79382c1c6c987ff729', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Nut0etFyY58Oh_soQ2vpn-39v-vdYp98DB7Bxms57rY.jpg?width=960&crop=smart&auto=webp&s=c295999d71947d4cdd3fc910d6045e21968c3ece', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Nut0etFyY58Oh_soQ2vpn-39v-vdYp98DB7Bxms57rY.jpg?width=1080&crop=smart&auto=webp&s=bf2a1b76e00210c6c2187d39060682d67734b86f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Nut0etFyY58Oh_soQ2vpn-39v-vdYp98DB7Bxms57rY.jpg?auto=webp&s=cb5bb2f63cdf8871ee3e003f779a73a05dafb059', 'width': 1200}, 'variants': {}}]} |
Need guidance on training a Finnish language AI voice model locally (for parody purposes) | 1 | Hi everyone! I'm looking to create a Finnish language voice model for some fun parody/satire projects using movie clips and old sketch shows as training data. I'm quite new to the AI/ML space and would appreciate some guidance on the best current approach.
For context, I'm working with an RTX 4070 Ti with 12GB VRAM and 64GB of system RAM. My goal is to do all the training and inference locally to avoid cloud services, using Finnish movies and comedy shows as source material. This is purely for personal entertainment and parody purposes.
I'm particularly interested in understanding what would be the most straightforward approach for a beginner to train a Finnish language voice model locally. With my GPU's 12GB VRAM, I'm hoping to avoid using system RAM for training since I understand RAM-based training can be significantly slower.
I've been seeing lots of AI terminology thrown around lately and feeling a bit overwhelmed by all the jargon. I would really appreciate if someone could point me in the right direction with some beginner-friendly resources or steps to get started. A comprehensive step-by-step guide would be incredibly helpful for someone who's not yet familiar with all the AI/ML terminology.
Thanks in advance for any guidance! | 2024-12-25T20:49:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hm8pu7/need_guidance_on_training_a_finnish_language_ai/ | thebeeq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm8pu7 | false | null | t3_1hm8pu7 | /r/LocalLLaMA/comments/1hm8pu7/need_guidance_on_training_a_finnish_language_ai/ | false | false | self | 1 | null |
Mac vs PC purchase | 0 | I want either the M4 Pro 14" MacBook Pro with 24 GB RAM or the 8-core AMD ASUS Zephyrus G14, which has 32 GB of RAM. If I want to develop LLMs locally, which computer should I get that will handle it OK? Is the Mac going to excel, or will that PC beat it? I prefer the PC but would get a new M4 Pro Mac if it is better for local LLMs. | 2024-12-25T20:50:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hm8qvb/mac_vs_pc_purchase/ | dankweed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm8qvb | false | null | t3_1hm8qvb | /r/LocalLLaMA/comments/1hm8qvb/mac_vs_pc_purchase/ | false | false | self | 0 | null
How exactly Character AI and ChAI work? | 1 | [removed] | 2024-12-25T21:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hm9q5z/how_exactly_character_ai_and_chai_work/ | packrider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm9q5z | false | null | t3_1hm9q5z | /r/LocalLLaMA/comments/1hm9q5z/how_exactly_character_ai_and_chai_work/ | false | false | self | 1 | null |
I tested QVQ on multiple images/tasks, and it seems legit! Has anyone got good results with GGUF? | 44 | I'm pretty impressed with the QVQ 72B preview (yeah, that QWEN license is a bummer). It did OCR quite well. Somehow counting was a bit hard for it, though. Here's my full test:
https://www.youtube.com/watch?v=m3OIC6FvxN8
Have you tried the GGUF versions? Are they as good? | 2024-12-25T21:45:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hm9r0h/i_tested_qvq_on_multiple_imagestasks_and_it_seems/ | curiousily_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hm9r0h | false | null | t3_1hm9r0h | /r/LocalLLaMA/comments/1hm9r0h/i_tested_qvq_on_multiple_imagestasks_and_it_seems/ | false | false | self | 44 | {'enabled': False, 'images': [{'id': 'Q0BGCJbGkmsAUit4Up_jZ_zZLdgN1GL1K6PGi9ap_5Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1jgfuNtN93ibBR1bVz9fBzwu8gvfeKMsFrr_KonSGS4.jpg?width=108&crop=smart&auto=webp&s=5a7c238e5edcb422971599b334726908eb2523b6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1jgfuNtN93ibBR1bVz9fBzwu8gvfeKMsFrr_KonSGS4.jpg?width=216&crop=smart&auto=webp&s=885149ae8f8c20ca71964646f39fde70d4813ab0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1jgfuNtN93ibBR1bVz9fBzwu8gvfeKMsFrr_KonSGS4.jpg?width=320&crop=smart&auto=webp&s=c09f750bc90fae871c0f79faaff881403abf24de', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1jgfuNtN93ibBR1bVz9fBzwu8gvfeKMsFrr_KonSGS4.jpg?auto=webp&s=f7833e2f95932d74011b11eb91b5eae3b85e620c', 'width': 480}, 'variants': {}}]} |
Joycaptioner Alpha Two with a lighter LLM? | 1 | [removed] | 2024-12-25T21:58:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hma02f/joycaptioner_alpha_two_with_a_lighter_llm/ | im_your_fatty_liver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hma02f | false | null | t3_1hma02f | /r/LocalLLaMA/comments/1hma02f/joycaptioner_alpha_two_with_a_lighter_llm/ | false | false | self | 1 | null |
How to design my local ai assistant | 1 | [removed] | 2024-12-25T22:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hma30r/how_to_design_my_local_ai_assistant/ | scary_kitten_daddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hma30r | false | null | t3_1hma30r | /r/LocalLLaMA/comments/1hma30r/how_to_design_my_local_ai_assistant/ | false | false | self | 1 | null |
Does DeepSeek API train on your data? | 1 | [removed] | 2024-12-25T22:17:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hmacal/does_deepseek_api_train_on_your_data/ | Ok_Can_593 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmacal | false | null | t3_1hmacal | /r/LocalLLaMA/comments/1hmacal/does_deepseek_api_train_on_your_data/ | false | false | self | 1 | null |
The Well, 115TB of scientific data | 341 | 2024-12-25T22:24:26 | https://www.linkedin.com/posts/milescranmer_could-this-be-the-imagenet-moment-for-scientific-activity-7269446402739515393-2E6l?utm_source=share&utm_medium=member_android | tabspaces | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 1hmagvi | false | null | t3_1hmagvi | /r/LocalLLaMA/comments/1hmagvi/the_well_115tb_of_scientific_data/ | false | false | 341 | {'enabled': False, 'images': [{'id': 'oqHQVkg6m2eYnKGhwwSsDMuXOfBqQw4uZVPLSlQai5M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/op-mKgXGQ98TyCSZbFySqEeM9Y2yY_5sqsE-UvQDuL4.jpg?width=108&crop=smart&auto=webp&s=922c21e34e17de428636700d8616064f60929446', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/op-mKgXGQ98TyCSZbFySqEeM9Y2yY_5sqsE-UvQDuL4.jpg?width=216&crop=smart&auto=webp&s=d0921252bb0ed76cac80de61e34f398d457ac34d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/op-mKgXGQ98TyCSZbFySqEeM9Y2yY_5sqsE-UvQDuL4.jpg?width=320&crop=smart&auto=webp&s=fb77052439fae99dc5ff4270a946af8d4aebbc8d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/op-mKgXGQ98TyCSZbFySqEeM9Y2yY_5sqsE-UvQDuL4.jpg?width=640&crop=smart&auto=webp&s=1ca9bc68859ef38e66692e1cde823f508a100457', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/op-mKgXGQ98TyCSZbFySqEeM9Y2yY_5sqsE-UvQDuL4.jpg?width=960&crop=smart&auto=webp&s=5b4ac4e2caa519aa5e7af102a4c390ff21365dc6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/op-mKgXGQ98TyCSZbFySqEeM9Y2yY_5sqsE-UvQDuL4.jpg?width=1080&crop=smart&auto=webp&s=bd8e3f49e217aa09fe2067184ec3ca54270e58e2', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/op-mKgXGQ98TyCSZbFySqEeM9Y2yY_5sqsE-UvQDuL4.jpg?auto=webp&s=b030cfd6ea6075bb536ff4f3ca50aa9e8ca8c0d9', 'width': 1280}, 'variants': {}}]} |
||
n8n ai agents | 3 | Hey Guys,
I'm trying to make an ai agent in n8n and am running into consistency issues with the different models either:
1. not supporting tool calling
2. not calling tools consistently (ex: not always using calculator or search api)
I've had moderate success with this model:
hf.co/djuna/Q2.5-Veltha-14B-0.5-Q5_K_M-GGUF:latest
Anything more consistent (and ideally smaller) would be great. Thanks! | 2024-12-25T22:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hmaluk/n8n_ai_agents/ | the_forbidden_won | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmaluk | false | null | t3_1hmaluk | /r/LocalLLaMA/comments/1hmaluk/n8n_ai_agents/ | false | false | self | 3 | null |
Deepseek Coder 236b and censoring normal inquiries? | 1 | [removed] | 2024-12-25T22:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hmb3wd/deepseek_coder_236b_and_censoring_normal_inquiries/ | exponentfrost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmb3wd | false | null | t3_1hmb3wd | /r/LocalLLaMA/comments/1hmb3wd/deepseek_coder_236b_and_censoring_normal_inquiries/ | false | false | self | 1 | null |
Deepseek Coder and adjusting ability to answer question? | 3 | I have a local copy of deepseek coder 236b. I asked it the following question as a test:
What is the number that rhymes with the word we use to describe a tall plant?
It gave me:
"It's against my programming to respond to certain types of questions or content..."
I had this happen before with another seemingly normal programming inquiry as well (nothing remotely a moral/ethical issue - I had a question about OpenCV and resizing/image processing on my own test image).
How do I fix this so I can ask it whatever on my local copy? | 2024-12-25T23:01:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hmb59v/deepseek_coder_and_adjusting_ability_to_answer/ | exponentfrost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmb59v | false | null | t3_1hmb59v | /r/LocalLLaMA/comments/1hmb59v/deepseek_coder_and_adjusting_ability_to_answer/ | false | false | self | 3 | null |
Llama-3.2-3B-Instruct-abliterated uses 35GB VRAM (!) | 39 | Downloaded [https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated)
Converted as per usual with convert_hf_to_gguf.py.
When I try to run it on a single P40, it errors out with a memory allocation error.
If I allow access to two P40s, it loads and works, but it consumes 18200 and 17542 MB respectively.
For comparison, I can load up Daredevil-8B-abliterated (16 bits) in 16GB of VRAM. An 8B model takes 16GB of VRAM, but a model that is roughly a third of that size needs *more* VRAM?
I tried quantizing to 8 bits, but it still consumes 24GB of VRAM.
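One variable I haven't ruled out is the context size: Llama 3.2 advertises a 128K context, so if the loader takes the full n_ctx from the GGUF metadata, the KV cache alone could dwarf the 3B weights. Next test on my list is pinning it down explicitly (llama.cpp flags; untested on my end):

```
./llama-server -m Llama-3.2-3B-Instruct-abliterated.gguf -c 8192 -ngl 99
```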
Am I missing something fundamental - does 3.2 require more resources - or is something wrong? | 2024-12-25T23:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hmbfa7/llama323binstructabliterated_uses_35gb_vram/ | dual_ears | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmbfa7 | false | null | t3_1hmbfa7 | /r/LocalLLaMA/comments/1hmbfa7/llama323binstructabliterated_uses_35gb_vram/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': '6jMcivYiEESHhFTcvREV9KWmygCwlvRW7uLmueDyADY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YkHH8XC8p37dzgCaa9I7P_9kf8AvYQBlPZpldCil1aQ.jpg?width=108&crop=smart&auto=webp&s=69631e6754171421dc04d0cbffad046a796631fd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YkHH8XC8p37dzgCaa9I7P_9kf8AvYQBlPZpldCil1aQ.jpg?width=216&crop=smart&auto=webp&s=ec750cb42b2e31c3f030a6a53e311e445b38a1b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YkHH8XC8p37dzgCaa9I7P_9kf8AvYQBlPZpldCil1aQ.jpg?width=320&crop=smart&auto=webp&s=e8f7e0dd48c32f4eee47d4e788f1c5505cad2f28', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YkHH8XC8p37dzgCaa9I7P_9kf8AvYQBlPZpldCil1aQ.jpg?width=640&crop=smart&auto=webp&s=3c1e1159b3a031f0cf1c6c47fe76f7f8a171889c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YkHH8XC8p37dzgCaa9I7P_9kf8AvYQBlPZpldCil1aQ.jpg?width=960&crop=smart&auto=webp&s=b99260552fbeb1d0d1fd8ed36af6a1073978b286', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YkHH8XC8p37dzgCaa9I7P_9kf8AvYQBlPZpldCil1aQ.jpg?width=1080&crop=smart&auto=webp&s=2f58cc5e9da222b0e6b77520398aae7f9b66bf78', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YkHH8XC8p37dzgCaa9I7P_9kf8AvYQBlPZpldCil1aQ.jpg?auto=webp&s=ab55c8a9292e642cd924ba4ae97a7cbdf5b7e336', 'width': 1200}, 'variants': {}}]} |
Professional series GPUs | 8 | Hi all,
What are the best professional-series GPUs (non-consumer-grade, i.e. not the 3090, 4090, etc.) today for running local LLMs like Llama 70B and 13B? It's for my company, but they are afraid of using consumer GPUs. | 2024-12-25T23:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hmbiqu/professional_series_gpus/ | blackpantera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmbiqu | false | null | t3_1hmbiqu | /r/LocalLLaMA/comments/1hmbiqu/professional_series_gpus/ | false | false | self | 8 | null
Dual GPU setup? | 2 | I have a 2080 Ti (11GB VRAM). Getting a bigger GPU isn't financially feasible, but getting a second secondhand 2080 Ti is. Are there ways to use parallelization and NVLink to run bigger models on 2 GPUs? | 2024-12-25T23:50:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hmc0g0/dual_gpu_setup/ | ApplePenguinBaguette | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmc0g0 | false | null | t3_1hmc0g0 | /r/LocalLLaMA/comments/1hmc0g0/dual_gpu_setup/ | false | false | self | 2 | null
Share Your Experience with AI Tools! | 1 | [removed] | 2024-12-26T01:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hmde0l/share_your_experience_with_ai_tools/ | Think-You-5934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmde0l | false | null | t3_1hmde0l | /r/LocalLLaMA/comments/1hmde0l/share_your_experience_with_ai_tools/ | false | false | self | 1 | null |
Reddit's new AI: Reddit Answers - Could it benefit Local LLMs? | 0 | [https://www.reddit.com/answers/](https://www.reddit.com/answers/)
What do you guys think? Do you believe the output might be helpful to finetune models on?
Or do you believe Reddit data is not useful (generally speaking)?
It says 20 queries per day for logged in user, so that's \~600 queries per month. On the one hand that's not a lot, but if it answers/summarizes niche questions to a topic of which a community's presence is mostly found on Reddit, maybe it's helpful?
Some more information here: [https://support.reddithelp.com/hc/en-us/articles/32026729424916-Reddit-Answers-Currently-in-Beta](https://support.reddithelp.com/hc/en-us/articles/32026729424916-Reddit-Answers-Currently-in-Beta) | 2024-12-26T01:08:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hmdf56/reddits_new_ai_reddit_answers_could_it_benefit/ | Many_SuchCases | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmdf56 | false | null | t3_1hmdf56 | /r/LocalLLaMA/comments/1hmdf56/reddits_new_ai_reddit_answers_could_it_benefit/ | false | false | self | 0 | null |
Deepseek v3 thinks it's OpenAI's GPT-4 | 0 | 
I saw a lot of posts here today about Deepseek v3 and thought I would take it for a spin. Initially, I tried it on OpenRouter, and it kept switching between saying it's v3 and saying it's OpenAI's GPT-4. I thought this might be an OpenRouter thing, so I made an account with Deepseek to try it out, and even there, it says the following most of the time:
"I’m based on OpenAI's GPT-4 architecture, which is the latest version as of my knowledge cutoff in October 2023. How can I assist you today? 😊"
Did they just scrape so much of OpenAI's output that the model thinks it's GPT-4? Like, what is going on lol | 2024-12-26T01:11:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hmdh5q/deepseek_v3_thinks_its_openais_gpt4/ | Specter_Origin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmdh5q | false | null | t3_1hmdh5q | /r/LocalLLaMA/comments/1hmdh5q/deepseek_v3_thinks_its_openais_gpt4/ | false | false | self | 0 | null
Is V100 Still Viable for LLM Fine-Tuning? | 1 | [removed] | 2024-12-26T01:36:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hmdx9x/is_v100_still_viable_for_llm_finetuning/ | Left-Day-9079 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmdx9x | false | null | t3_1hmdx9x | /r/LocalLLaMA/comments/1hmdx9x/is_v100_still_viable_for_llm_finetuning/ | false | false | self | 1 | null |
An embeddable language model | 1 | [removed] | 2024-12-26T02:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hmef9b/an_embeddable_language_model/ | Incredible_guy1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmef9b | false | null | t3_1hmef9b | /r/LocalLLaMA/comments/1hmef9b/an_embeddable_language_model/ | false | false | self | 1 | null |
Running Qwen-72B-Preview | 1 | [removed] | 2024-12-26T02:42:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hmf28m/running_qwen72bpreview/ | Chemical_Ad8381 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmf28m | false | null | t3_1hmf28m | /r/LocalLLaMA/comments/1hmf28m/running_qwen72bpreview/ | false | false | self | 1 | null |
Spam After Increasing Context Length? | 1 | [removed] | 2024-12-26T02:58:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hmfboi/spam_after_increasing_context_length/ | iris_kitty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmfboi | false | null | t3_1hmfboi | /r/LocalLLaMA/comments/1hmfboi/spam_after_increasing_context_length/ | false | false | self | 1 | null |
We built an OS to protect AI privacy | 12 | Hi everyone! I want to share what's been keeping my team busy - an open-source sovereign cloud OS for local AI.
**TL;DR:**
With Olares, you can run apps like Stable Diffusion Web UI, ComfyUI, Open WebUI, Perplexica with a few clicks, or create AI services with your own data. No technical barrier. No tedious configurations. No third-party involved. No user agreements and privacy policy. All data remain yours, on your local machine.
Check the github: [https://github.com/beclab/Olares](https://github.com/beclab/Olares) (if you like it, please give us a star⭐️!)
**The long version:**
Olares turns your hardware into an AI home server. You can effortlessly host powerful open AI models and access them through a browser anytime, anywhere. Olares also allows you to connect AI models with AI apps and your private data sets, creating customized AI experiences. I know it's so cliche now, but we're here because we understand the importance of privacy. As a self-hosted OS, there's more Olares can do for you. For example:
* 🛡️ App market: Olares market provides 80+ apps including open-source alternatives to costly SaaS tools. Everything from entertainment to productivity. Stream your media collection, check. Home automation, check. AI photo albums, check. Games, check.
* 🌐 Simplified network configurations: Built-in support for Tailscale, Headscale, Cloudflare Tunnel, and FRP. Expose your models securely as API endpoints, access web UIs remotely, or keep everything strictly local.
* 📃 File manager: Sync across devices or share with team members without leaving your network. Or curate it as the knowledge base for your AI services.
* 🔑 Password/secrets manager: Keep your passwords, API keys, and sensitive data secure on your own hardware. Sync across devices while staying completely self-hosted.
* 📚 Information Hub: Build your personal information hub from RSS feeds, PDFs, notes, and web archives. Run local recommendation algorithms that respect your privacy.
* 👥 Multi-user support: Share expensive models between users without redundant loading. Dynamic resource allocation based on workloads. Create isolated environments for team members with custom resource limits.
We just released v1.11. Do give Olares a try if you're interested. And please reach out if you run into any "unexpected" situations.If you have any questions or opinions, please comment below. | 2024-12-26T03:01:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hmfd85/we_built_an_os_to_protect_ai_privacy/ | Desperate_Top_9756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmfd85 | false | null | t3_1hmfd85 | /r/LocalLLaMA/comments/1hmfd85/we_built_an_os_to_protect_ai_privacy/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'FT_a1HzAH51jSdKv__SQTLwqhAA3z5QcLjJez2uhgNc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FANyKdFsazZftpSWR1QbwvZ9ZnlpUdabZ_qx71e0wMw.jpg?width=108&crop=smart&auto=webp&s=701726f8d3b7d726fed241986583d7f1f15f94d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FANyKdFsazZftpSWR1QbwvZ9ZnlpUdabZ_qx71e0wMw.jpg?width=216&crop=smart&auto=webp&s=1373c49dcf89ad77797934bfc00268e3721b73b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FANyKdFsazZftpSWR1QbwvZ9ZnlpUdabZ_qx71e0wMw.jpg?width=320&crop=smart&auto=webp&s=87f9be940c3fe403ee3808d22970772e9cac640d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FANyKdFsazZftpSWR1QbwvZ9ZnlpUdabZ_qx71e0wMw.jpg?width=640&crop=smart&auto=webp&s=42bbd6437a278a7568e27e37bf118b7f321e47c6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FANyKdFsazZftpSWR1QbwvZ9ZnlpUdabZ_qx71e0wMw.jpg?width=960&crop=smart&auto=webp&s=e445dfaa64019f025e0dce817a878e8db1482702', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FANyKdFsazZftpSWR1QbwvZ9ZnlpUdabZ_qx71e0wMw.jpg?width=1080&crop=smart&auto=webp&s=6ca1eda25780b157ce5ee35ef21e79c61bf22287', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/FANyKdFsazZftpSWR1QbwvZ9ZnlpUdabZ_qx71e0wMw.jpg?auto=webp&s=49893ac358801cefba734244bb02c4706e6e67f5', 'width': 1280}, 'variants': {}}]} |
Create unlimited podcast audio, even from links marked as restricted sources on NotebookLM | 1 | [removed] | 2024-12-26T03:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hmfg2t/create_unlimited_podcast_audio_even_from_links/ | Busy-Basket-5291 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmfg2t | false | null | t3_1hmfg2t | /r/LocalLLaMA/comments/1hmfg2t/create_unlimited_podcast_audio_even_from_links/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'z5kmDM4OwG2h9PpVo8jJRJxP8O4XC9XGumuXCpOWRwo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-7Jn9MYaw7PD5SjreKnsx2wLa_eV66wj5PuXyet3eSY.jpg?width=108&crop=smart&auto=webp&s=0aeab9b0014984974cead0cc8f6c6017b90e68dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-7Jn9MYaw7PD5SjreKnsx2wLa_eV66wj5PuXyet3eSY.jpg?width=216&crop=smart&auto=webp&s=c48a4306f4eabf00b3d2a199643fe85fe1e7e843', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-7Jn9MYaw7PD5SjreKnsx2wLa_eV66wj5PuXyet3eSY.jpg?width=320&crop=smart&auto=webp&s=082f37d2103056369724ec8e3e0ce2480c4383fb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-7Jn9MYaw7PD5SjreKnsx2wLa_eV66wj5PuXyet3eSY.jpg?width=640&crop=smart&auto=webp&s=f39f9aeb3749d14bb2dcf7a63572182ea401ec1a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-7Jn9MYaw7PD5SjreKnsx2wLa_eV66wj5PuXyet3eSY.jpg?width=960&crop=smart&auto=webp&s=b44b2b23d10f6649caf1fb3832c0aa331b64f82e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-7Jn9MYaw7PD5SjreKnsx2wLa_eV66wj5PuXyet3eSY.jpg?width=1080&crop=smart&auto=webp&s=692299c1d13d5d022ffb238848b1724bd231dfe0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-7Jn9MYaw7PD5SjreKnsx2wLa_eV66wj5PuXyet3eSY.jpg?auto=webp&s=c664d279d36ee41d9d886b0e520a5180d5c31a2f', 'width': 1200}, 'variants': {}}]} |
What steps are needed to get a model to know Oracle / Postgres databases? | 2 | I am using a MacBook Air M1 with 16GB RAM, and Ollama with these models loaded: Granite-code:8b, deepseek-coder-v2:16b, qwen2.5-coder:14b and llama3.2:latest.
I am a Database Administrator for Oracle (and a bit of Postgres), and I use these to generate SQL queries like "show me any indexes that haven't been used for the last 6 months", but they don't do a great job - the models frequently generate SQL with incorrect table columns, or try to use tables that don't exist.
I want to be able to feed in the Oracle / Postgres data dictionary (all system tables and their columns); this information is on the web, or I could pull it from the databases.
I'm new to this, but I assume I need to train a model somehow so that it knows the tables and columns and doesn't keep making them up.
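From what I've read, one alternative to training is to pull the live data dictionary at query time and prepend it to the prompt - something like this sketch for the Postgres side (untested; the connection string is made up):

```
# sketch: pull the live data dictionary and prepend it to the prompt
# (assumes Postgres + psycopg2; the connection string is hypothetical)
import psycopg2

conn = psycopg2.connect("dbname=mydb user=dba")
cur = conn.cursor()
cur.execute("""
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position
""")
schema = "\n".join(f"{t}.{c} ({d})" for t, c, d in cur.fetchall())
prompt = f"Schema:\n{schema}\n\nWrite SQL: show indexes unused for 6 months"
```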
I would appreciate any pointers on how to get going with this. Thanks. | 2024-12-26T03:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hmg3vp/what_steps_are_needed_to_get_a_model_to_know/ | fishbarrel_2016 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmg3vp | false | null | t3_1hmg3vp | /r/LocalLLaMA/comments/1hmg3vp/what_steps_are_needed_to_get_a_model_to_know/ | false | false | self | 2 | null |
Where & how to learn LLM? | 1 | [removed] | 2024-12-26T04:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hmh65z/where_how_to_learn_llm/ | mipan_zuuzuuzuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmh65z | false | null | t3_1hmh65z | /r/LocalLLaMA/comments/1hmh65z/where_how_to_learn_llm/ | false | false | self | 1 | null |
Where & how to learn LLM? | 1 | [removed] | 2024-12-26T05:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hmhfaf/where_how_to_learn_llm/ | mipan_zuuzuuzuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmhfaf | false | null | t3_1hmhfaf | /r/LocalLLaMA/comments/1hmhfaf/where_how_to_learn_llm/ | false | false | self | 1 | null |
Google deep research AI | 3 | I recently heard about Google Deep Research AI, and it feels like one of the most promising AI services for deep research.
So I'm wondering: are there any other alternatives available on the market which provide the same or better results as Google Deep Research, or any open LLM? | 2024-12-26T05:10:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hmhgrb/google_deep_research_ai/ | Prashant_4200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmhgrb | false | null | t3_1hmhgrb | /r/LocalLLaMA/comments/1hmhgrb/google_deep_research_ai/ | false | false | self | 3 | null
How many Football fields is your LLM | 1 | 2024-12-26T05:15:55 | TopGrandGearTour | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmhjwy | false | null | t3_1hmhjwy | /r/LocalLLaMA/comments/1hmhjwy/how_many_football_fields_is_your_llm/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ixzQAUVlEpM62OkZhcJPGcpyZV8jVyFWhIRiDFBn5a8', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/vmrn9cksm49e1.png?width=108&crop=smart&auto=webp&s=f0e38e57d4ddf7a7fe7d9b0936f5e086550874f3', 'width': 108}, {'height': 194, 'url': 'https://preview.redd.it/vmrn9cksm49e1.png?width=216&crop=smart&auto=webp&s=c6c9e2c24e5f723066ae5b2da591e546e6cb3d74', 'width': 216}, {'height': 288, 'url': 'https://preview.redd.it/vmrn9cksm49e1.png?width=320&crop=smart&auto=webp&s=a1d1de92d08a47bf39660691f4972f1cc59389d3', 'width': 320}, {'height': 576, 'url': 'https://preview.redd.it/vmrn9cksm49e1.png?width=640&crop=smart&auto=webp&s=eea0dba81b526b7b460b3af45b70d5e329f9ac29', 'width': 640}], 'source': {'height': 722, 'url': 'https://preview.redd.it/vmrn9cksm49e1.png?auto=webp&s=f42399c015d3c331a0b80d08499d480199ad867b', 'width': 802}, 'variants': {}}]} |
|||
Surprise Suprise | 142 | How long do you think until AMD has a comparable offering? | 2024-12-26T05:25:22 | koalfied-coder | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmhp3n | false | null | t3_1hmhp3n | /r/LocalLLaMA/comments/1hmhp3n/surprise_suprise/ | false | false | 142 | {'enabled': True, 'images': [{'id': 'O_hD70at_YFqNHv1FRhIn_TflKtTyvZ8YHvm4QBkuow', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/kni5so6wo49e1.png?width=108&crop=smart&auto=webp&s=fc3822ede772404d9085f6a304e75ba602a725ed', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/kni5so6wo49e1.png?width=216&crop=smart&auto=webp&s=c772bc23a125b29322ac983c0825a75fb38aca37', 'width': 216}, {'height': 406, 'url': 'https://preview.redd.it/kni5so6wo49e1.png?width=320&crop=smart&auto=webp&s=71deef40dd2700a9f7ece2af53c0e2debc83ed7c', 'width': 320}, {'height': 813, 'url': 'https://preview.redd.it/kni5so6wo49e1.png?width=640&crop=smart&auto=webp&s=82bde62c41c3de20269b7ec74ccbeefe9de76cbc', 'width': 640}, {'height': 1219, 'url': 'https://preview.redd.it/kni5so6wo49e1.png?width=960&crop=smart&auto=webp&s=55fbd8f55ccbd6fa4f46152e4b83cd9cd5a08b22', 'width': 960}, {'height': 1372, 'url': 'https://preview.redd.it/kni5so6wo49e1.png?width=1080&crop=smart&auto=webp&s=567844af0ee95be416844cb467ee194d98a9c5e4', 'width': 1080}], 'source': {'height': 1372, 'url': 'https://preview.redd.it/kni5so6wo49e1.png?auto=webp&s=c676a9598d63c8355c47fae1d31317ad60af1e66', 'width': 1080}, 'variants': {}}]} |
||
I made a site to find careers in AI | 0 | 2024-12-26T05:33:57 | https://v.redd.it/8zb7yuv8q49e1 | WordyBug | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmhtsq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8zb7yuv8q49e1/DASHPlaylist.mpd?a=1737783252%2CMzNjZDE3MWM1ZmE3MTQwZDZhMTRkNDU4ZjljZmU3MDY3NjlmZDdjYTM5ZTFmNzVmY2NhYTcxMzUwYmJhMGQxZA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/8zb7yuv8q49e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/8zb7yuv8q49e1/HLSPlaylist.m3u8?a=1737783252%2CZjM2YTNjMDdjMGYzNWY2NDQwMjVjODBhZDI3YWEyZWI5MjljMDdlMGQ0MjRiOWU4NjQwMGZiNTM2MmI4ODg2Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8zb7yuv8q49e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hmhtsq | /r/LocalLLaMA/comments/1hmhtsq/i_made_a_site_to_find_careers_in_ai/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=108&crop=smart&format=pjpg&auto=webp&s=08a66f6d6d85789ff79ac84661812dd2d7a6724a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=216&crop=smart&format=pjpg&auto=webp&s=626205cf48bfc986e28113d02132ce33a44f4e2c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=320&crop=smart&format=pjpg&auto=webp&s=bd1e1f7231c8ce31a0586819bb06c75af3c4e069', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=640&crop=smart&format=pjpg&auto=webp&s=9cb3185cb9f177f12d645c64b651bbab9d047020', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=960&crop=smart&format=pjpg&auto=webp&s=a4cc46294e1458ef28bc799224e6e84b2c701575', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=26a0af4a146bb5c55d1407c6f1331eed015e2d92', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/YW1iZDYxdDhxNDllMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?format=pjpg&auto=webp&s=bc407f5e92f6d52126c6db2c10d96733d96dbe09', 'width': 2560}, 'variants': {}}]} |
||
Looking for a Gen AI Strategy specialist founding member | 0 | I'm building a dynamic team and searching for a founding member who is a **specialist in AI/Generative AI strategy** . The ideal candidate should have:
* Expertise in **data strategy** and the ability to identify organizational opportunities for **AI/Gen AI adoption**.
* A knack for understanding diverse business contexts and translating them into actionable insights.
* **Executive-level presentation skills** with a flair for crafting compelling slides that effectively communicate strategy. | 2024-12-26T06:16:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hmih4j/looking_for_a_gen_ai_strategy_specialist_founding/ | No-Brother-2237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmih4j | false | null | t3_1hmih4j | /r/LocalLLaMA/comments/1hmih4j/looking_for_a_gen_ai_strategy_specialist_founding/ | false | false | self | 0 | null |
Mistral's been quiet lately... | 401 | 2024-12-26T06:34:59 | umarmnaq | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmiqff | false | null | t3_1hmiqff | /r/LocalLLaMA/comments/1hmiqff/mistrals_been_quiet_lately/ | false | false | 401 | {'enabled': True, 'images': [{'id': 'o9KlyqZaY8U-IpMWJcMyBsllVBQTfSBb0KdMdFHPSLU', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/d8nnkcqa159e1.png?width=108&crop=smart&auto=webp&s=793547854b8f3f9bf4f780972c2bd8f6ed34f15b', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/d8nnkcqa159e1.png?width=216&crop=smart&auto=webp&s=87e907326da365318d7d552c610e37fb7d331d3d', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/d8nnkcqa159e1.png?width=320&crop=smart&auto=webp&s=682790c0d25d00df89581b242d696ab3879a32f0', 'width': 320}], 'source': {'height': 408, 'url': 'https://preview.redd.it/d8nnkcqa159e1.png?auto=webp&s=632150363e44d00a3cfcd5255044e5dcab1313b8', 'width': 612}, 'variants': {}}]} |
|||
Ollama keeps clinging to cpu/gpu even though GPU can run the model | 4 | I get this when running ollama ps.
C:\Users\Admin>ollama ps
NAME                 ID              SIZE     PROCESSOR          UNTIL
qwen2.5-coder:32b    4bd6cbf2d094    69 GB    66%/34% CPU/GPU    4 minutes from now
C:\Users\Admin>
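Could the context size be what's pushing it off the GPU? 69 GB is way more than the q4 weights alone, which makes me suspect the KV cache. I was going to retry with num_ctx pinned down (sketch via the Ollama API; untested):

```
# sketch: retry with a pinned context so the KV cache fits in 24 GB (untested)
import requests

r = requests.post("http://localhost:11434/api/generate", json={
    "model": "qwen2.5-coder:32b",
    "prompt": "hello",
    "stream": False,
    "options": {"num_ctx": 8192},  # assumption: a large num_ctx is the culprit
})
print(r.json()["response"])
```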
I have a 4090 and I have been able to fully run the model on the GPU many times, so it isn't a GPU error. But whenever it does this, it runs a whole lot slower and worse. Can anyone give me a fix to this? | 2024-12-26T06:36:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hmirew/ollama_keeps_clinging_to_cpugpu_even_though_gpu/ | Pro-editor-1105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmirew | false | null | t3_1hmirew | /r/LocalLLaMA/comments/1hmirew/ollama_keeps_clinging_to_cpugpu_even_though_gpu/ | false | false | self | 4 | null |
Central database for LLM's with fast paced developments every now and then. | 1 | [removed] | 2024-12-26T06:56:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hmj0ux/central_database_for_llms_with_fast_paced/ | Old_Key_5090 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmj0ux | false | null | t3_1hmj0ux | /r/LocalLLaMA/comments/1hmj0ux/central_database_for_llms_with_fast_paced/ | false | false | self | 1 | null |
Central database(wiki kind of) for LLM's with fast paced developments every now and then. | 1 | [removed] | 2024-12-26T06:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hmj238/central_databasewiki_kind_of_for_llms_with_fast/ | Legitimate-Set7619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmj238 | false | null | t3_1hmj238 | /r/LocalLLaMA/comments/1hmj238/central_databasewiki_kind_of_for_llms_with_fast/ | false | false | self | 1 | null |
780m 96GB – Maximum VRAM | 1 | [removed] | 2024-12-26T06:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hmj26u/780m_96gb_maximum_vram/ | Classic_Bicycle_8161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmj26u | false | null | t3_1hmj26u | /r/LocalLLaMA/comments/1hmj26u/780m_96gb_maximum_vram/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CYH-GqW3UOZOb6ENnVFsRKyg_nT34mZuMpaWWXA6bws', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A-pLP02NTrRTghxcHx-R5fRqolC7ZUpWgBgpMt-PuyM.jpg?width=108&crop=smart&auto=webp&s=43a8f809af91b8b98f8e9fcdc98424f40169729b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A-pLP02NTrRTghxcHx-R5fRqolC7ZUpWgBgpMt-PuyM.jpg?width=216&crop=smart&auto=webp&s=ce1ff889002a3bf4892693ba9b60d9b79727dda9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A-pLP02NTrRTghxcHx-R5fRqolC7ZUpWgBgpMt-PuyM.jpg?width=320&crop=smart&auto=webp&s=0f8445f38b1568922873f8747b6275a21d4566c2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A-pLP02NTrRTghxcHx-R5fRqolC7ZUpWgBgpMt-PuyM.jpg?width=640&crop=smart&auto=webp&s=59a367ae9cb73a8b44972fb05904aa812264ac2c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A-pLP02NTrRTghxcHx-R5fRqolC7ZUpWgBgpMt-PuyM.jpg?width=960&crop=smart&auto=webp&s=bd4d345ccd400f21375aa87bf81ad4b0a76a8b7e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A-pLP02NTrRTghxcHx-R5fRqolC7ZUpWgBgpMt-PuyM.jpg?width=1080&crop=smart&auto=webp&s=3be061dfe92ce86d064ed468f346512ef5ee6093', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A-pLP02NTrRTghxcHx-R5fRqolC7ZUpWgBgpMt-PuyM.jpg?auto=webp&s=c0ff36a1c9575cf75f1a680d9bd90e2eae7f1cf6', 'width': 1200}, 'variants': {}}]} |
Central database for LLM's with fast paced developments every now and then. | 1 | [removed] | 2024-12-26T07:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hmj37c/central_database_for_llms_with_fast_paced/ | WaterFox743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmj37c | false | null | t3_1hmj37c | /r/LocalLLaMA/comments/1hmj37c/central_database_for_llms_with_fast_paced/ | false | false | self | 1 | null |
On 'consciousness' | 223 | 2024-12-26T07:24:09 | one-escape-left | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmje61 | false | null | t3_1hmje61 | /r/LocalLLaMA/comments/1hmje61/on_consciousness/ | false | false | 223 | {'enabled': True, 'images': [{'id': '3c9tp6o3IL6MZnQy3NtrKjPTcfFCRy2lgKojjgqJHaA', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/ki7shl43a59e1.jpeg?width=108&crop=smart&auto=webp&s=abede2d508cb5470081d25a175fa212601dad145', 'width': 108}, {'height': 65, 'url': 'https://preview.redd.it/ki7shl43a59e1.jpeg?width=216&crop=smart&auto=webp&s=da0dcc68bffa5387c0d72a7fbc3a907cd1fefd03', 'width': 216}, {'height': 97, 'url': 'https://preview.redd.it/ki7shl43a59e1.jpeg?width=320&crop=smart&auto=webp&s=93450ebf70c2969aa376e8cd7c1d780d88655c8a', 'width': 320}, {'height': 194, 'url': 'https://preview.redd.it/ki7shl43a59e1.jpeg?width=640&crop=smart&auto=webp&s=1b15c78ef336b8fb302b188dbba2bc01e7796004', 'width': 640}], 'source': {'height': 261, 'url': 'https://preview.redd.it/ki7shl43a59e1.jpeg?auto=webp&s=d746f8e6f7e4be3cc7ebdf486b66a1b57f1b90f4', 'width': 860}, 'variants': {}}]} |
|||
Setting up local LLM (No GPU) with 24 cores on hypervisors. | 2 | I have access to a very large number of resources, and I wanted to know something.
Is it possible to set up an LLM without a GPU but with plenty of CPU cores in a VMware environment? I can configure whatever is needed; the only thing missing is the GPU.
What I wanted to ask is this: it should be answering about 8 to 10 users simultaneously.
Is it possible to do so? The front end will be Open WebUI.
Should I invest more resources and allocate more cores to maintain a working environment for about 10 people? And what would you suggest for making it available to about 20 users at the same time?
The CPUs are 2x Xeon Gold 6248R (one socket per NUMA node), and I have the ability to use both, with full control over the cores.
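For concreteness, this is roughly the invocation I'd benchmark first (llama.cpp flags; just a sketch):

```
llama-server -m <model> -t 24 --parallel 10 -c 16384
```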
The question is: is it really that resource-intensive to run, and will it need all of those resources? | 2024-12-26T07:37:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hmjkct/setting_up_local_llm_no_gpu_with_24_cores_on/ | KineticEnforcer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmjkct | false | null | t3_1hmjkct | /r/LocalLLaMA/comments/1hmjkct/setting_up_local_llm_no_gpu_with_24_cores_on/ | false | false | self | 2 | null
p95 latency trend on azure GPT-4o and OpenAI Gpt-4o | 1 | [removed] | 2024-12-26T08:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hmjxev/p95_latency_trend_on_azure_gpt4o_and_openai_gpt4o/ | Wonderful-Agency-210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmjxev | false | null | t3_1hmjxev | /r/LocalLLaMA/comments/1hmjxev/p95_latency_trend_on_azure_gpt4o_and_openai_gpt4o/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BlkgP2kBI4o4ipb-2aqfxMlxMfmGwtOtmczGh9vKvgs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Xy8PRIkFgD4EaSXz2g_iBLptd3474ofxu0yZt2Wvq14.jpg?width=108&crop=smart&auto=webp&s=9824fba511a45ae318816a953a8510761f285a7e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Xy8PRIkFgD4EaSXz2g_iBLptd3474ofxu0yZt2Wvq14.jpg?width=216&crop=smart&auto=webp&s=63108d9ccbe8c2262ea507cecd60a5dc22b37b4d', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Xy8PRIkFgD4EaSXz2g_iBLptd3474ofxu0yZt2Wvq14.jpg?width=320&crop=smart&auto=webp&s=21aa4c1f80115e7473f68cd5d8435b228858db8d', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/Xy8PRIkFgD4EaSXz2g_iBLptd3474ofxu0yZt2Wvq14.jpg?width=640&crop=smart&auto=webp&s=0e194d0ebdce906ca9147a60c41047d7efb82fae', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/Xy8PRIkFgD4EaSXz2g_iBLptd3474ofxu0yZt2Wvq14.jpg?width=960&crop=smart&auto=webp&s=245bc1e9a535f9a9fbac1009a11c2bfe3985a86d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/Xy8PRIkFgD4EaSXz2g_iBLptd3474ofxu0yZt2Wvq14.jpg?width=1080&crop=smart&auto=webp&s=9029003d438b3afa9023518c932bbfb1cdfef256', 'width': 1080}], 'source': {'height': 1075, 'url': 'https://external-preview.redd.it/Xy8PRIkFgD4EaSXz2g_iBLptd3474ofxu0yZt2Wvq14.jpg?auto=webp&s=71ca2d624291f62febc10dddfa725d739e37f069', 'width': 2048}, 'variants': {}}]} |
|
Azure vs OpenAI Latency Comparison on GPT-4o | 1 | p95 latency for GPT-4o: OpenAI ~3s, Azure ~5s
What do you use in production? The difference between Azure and OpenAI GPT-4o is massive. Maybe Azure is not so good at distributing the model, which is surprising considering its years of cloud and GPU experience.
https://preview.redd.it/5wtxhzpth59e1.png?width=1200&format=png&auto=webp&s=39e204c1ea4fc224f90c1371bb367ecc71ffd61a
| 2024-12-26T08:07:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hmjydw/azure_vs_openai_latency_comparison_on_gpt4o/ | Wonderful-Agency-210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmjydw | false | null | t3_1hmjydw | /r/LocalLLaMA/comments/1hmjydw/azure_vs_openai_latency_comparison_on_gpt4o/ | false | false | 1 | null |
|
Deepseek V3 Chat version weights has been uploaded to Huggingface | 182 | 2024-12-26T08:14:45 | https://huggingface.co/deepseek-ai/DeepSeek-V3 | kristaller486 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hmk1hg | false | null | t3_1hmk1hg | /r/LocalLLaMA/comments/1hmk1hg/deepseek_v3_chat_version_weights_has_been/ | false | false | 182 | {'enabled': False, 'images': [{'id': 'W0a4Ut3YRdwrm_iPrjdjzRUGtRuqfg9-mLriBM-Dcd0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3Jnus-UVY99sm6zvUDLr65jZXHrp5PC9fY9CcixJ3gM.jpg?width=108&crop=smart&auto=webp&s=ced9f4262fbc47c5e6d9e64a24695e9b606292f6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3Jnus-UVY99sm6zvUDLr65jZXHrp5PC9fY9CcixJ3gM.jpg?width=216&crop=smart&auto=webp&s=cb792585d45e44e065221b17eec45d8ba00e3762', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3Jnus-UVY99sm6zvUDLr65jZXHrp5PC9fY9CcixJ3gM.jpg?width=320&crop=smart&auto=webp&s=702d72cb93304f5ba3c0dfb4aeea96bc8251292a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3Jnus-UVY99sm6zvUDLr65jZXHrp5PC9fY9CcixJ3gM.jpg?width=640&crop=smart&auto=webp&s=4b794987973338d5de710a3e8b5f2c292016f426', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3Jnus-UVY99sm6zvUDLr65jZXHrp5PC9fY9CcixJ3gM.jpg?width=960&crop=smart&auto=webp&s=52428c047c2c2058eb7059e3838a1201dcf1e642', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3Jnus-UVY99sm6zvUDLr65jZXHrp5PC9fY9CcixJ3gM.jpg?width=1080&crop=smart&auto=webp&s=faa4f3f409c46ca40fdb28f108da132df2cf1d67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3Jnus-UVY99sm6zvUDLr65jZXHrp5PC9fY9CcixJ3gM.jpg?auto=webp&s=efcf7648caa799bd82602de8c8e6f93b14410580', 'width': 1200}, 'variants': {}}]} |
||
What's the best quality and fastest speech to text transcription API currently? | 1 | [removed] | 2024-12-26T08:21:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hmk4av/whats_the_best_quality_and_fastest_speech_to_text/ | Spammesir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmk4av | false | null | t3_1hmk4av | /r/LocalLLaMA/comments/1hmk4av/whats_the_best_quality_and_fastest_speech_to_text/ | false | false | self | 1 | null |
Incredible blog post on Byte Pair Encoding | 84 | https://i.redd.it/fkw886syp59e1.gif
Here's an awesome blog post on Byte Pair Encoding: [https://vizuara.substack.com/p/understanding-byte-pair-encoding?r=4ssvv2&utm\_campaign=post&utm\_medium=web&triedRedirect=true](https://vizuara.substack.com/p/understanding-byte-pair-encoding?r=4ssvv2&utm_campaign=post&utm_medium=web&triedRedirect=true)
In this blog post, the following things are explained:
1️⃣ Step-by-step understanding of the BPE algorithm
2️⃣ Python code to implement BPE algorithm from scratch
3️⃣ BPE algorithm implemented on “Dark Knight Rises” movie text document!
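To give a flavor of what the post walks through, a single BPE merge step boils down to something like this (my own minimal sketch, not the blog's code):

```
# minimal flavor of one BPE merge step (my sketch, not the blog's code)
from collections import Counter

tokens = list("low lower lowest".replace(" ", "_"))
pairs = Counter(zip(tokens, tokens[1:]))   # count adjacent symbol pairs
best = max(pairs, key=pairs.get)           # most frequent pair gets merged
merged, i = [], 0
while i < len(tokens):
    if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
        merged.append(tokens[i] + tokens[i + 1])  # fuse into one new symbol
        i += 2
    else:
        merged.append(tokens[i])
        i += 1
print(merged)
```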
It’s an incredible blog post which explains a difficult concept in an easy to understand manner. | 2024-12-26T08:53:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hmkirm/incredible_blog_post_on_byte_pair_encoding/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmkirm | false | null | t3_1hmkirm | /r/LocalLLaMA/comments/1hmkirm/incredible_blog_post_on_byte_pair_encoding/ | false | false | 84 | {'enabled': False, 'images': [{'id': 'B64c4nUwnSjsFeXRMvbWkOR-2dlHzl-wrj6mGN-1YoQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xiOh2KfZITknwXT-kUKMWooY7fdn0dDZPnqRuZYhsRE.jpg?width=108&crop=smart&auto=webp&s=2f3b0fabce76b46b03be57fe8db6ae807acc85d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xiOh2KfZITknwXT-kUKMWooY7fdn0dDZPnqRuZYhsRE.jpg?width=216&crop=smart&auto=webp&s=07a55e2614b50c3cefbb42b57c67d659565a92bf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xiOh2KfZITknwXT-kUKMWooY7fdn0dDZPnqRuZYhsRE.jpg?width=320&crop=smart&auto=webp&s=72e9c4f96d8c46f8aedf5da3bf66a1aa206d1c10', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xiOh2KfZITknwXT-kUKMWooY7fdn0dDZPnqRuZYhsRE.jpg?width=640&crop=smart&auto=webp&s=40ec506dab861a2104d1eb5c0d0065cd737539f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xiOh2KfZITknwXT-kUKMWooY7fdn0dDZPnqRuZYhsRE.jpg?width=960&crop=smart&auto=webp&s=49be0de1450c6fd806e0fd8e1a5f3df3cd0e5321', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xiOh2KfZITknwXT-kUKMWooY7fdn0dDZPnqRuZYhsRE.jpg?width=1080&crop=smart&auto=webp&s=0e15b536dae99585624125adfb0a5c60ad677dd3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xiOh2KfZITknwXT-kUKMWooY7fdn0dDZPnqRuZYhsRE.jpg?auto=webp&s=3269b92625ec579589179b378ec0e5f2f86966fb', 'width': 1200}, 'variants': {}}]} |
|
NSFW chat bot getting refusal after refusal | 0 | Long time lurker, first time poster. I'm making a SaaS product on the side containing NSFW agents that, for example, pick the correct spicy pic or video to send to the user based on the convo (sexting etc). I'm facing a lot of refusals from Llama 3.3 and 3.1 70B, and I am quite experienced with prompt engineering, so I prompted it quite well. It's working great in English, but once I switch the convo to French it starts refusing. Not sure why - probably because the embedding space for my English-engineered system prompt is not similar enough to the French one for it to work.
I am reliant on DeepInfra or Fireworks models - are there any uncensored LLMs provided by these guys that actually won't refuse me? I am not able to self-host or host on Fireworks. | 2024-12-26T09:40:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hml3vn/nsfw_chat_bot_getting_refusal_after_refusal/ | MigorRortis96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hml3vn | false | null | t3_1hml3vn | /r/LocalLLaMA/comments/1hml3vn/nsfw_chat_bot_getting_refusal_after_refusal/ | false | false | nsfw | 0 | null
Shirdi Travels | 1 | 2024-12-26T10:27:18 | Global-Grade9149 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmlq4b | false | null | t3_1hmlq4b | /r/LocalLLaMA/comments/1hmlq4b/shirdi_travels/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'O5cPWO4-mryIiiyQhvm4Rykz8bnHtNIOS8ftAMPpCag', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/5f8tpi0k669e1.png?width=108&crop=smart&auto=webp&s=f8c9ec2a6b9b802e42aa6a21c3b852bff23149a6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/5f8tpi0k669e1.png?width=216&crop=smart&auto=webp&s=36d098e5c66388abb6a913ceffaf4ab7ad66b50b', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/5f8tpi0k669e1.png?width=320&crop=smart&auto=webp&s=b042d858926369a86a13a3820d246aa1c3354e11', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/5f8tpi0k669e1.png?width=640&crop=smart&auto=webp&s=8cbcbed8681e78ab1e38962e6c85560e649bdd08', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/5f8tpi0k669e1.png?width=960&crop=smart&auto=webp&s=a2c360aea2fa7613e1309950d79aa3cac854e1e8', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/5f8tpi0k669e1.png?width=1080&crop=smart&auto=webp&s=8c057a744ab6c56107a0d1cca873ad34d04fee4b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/5f8tpi0k669e1.png?auto=webp&s=868b205970e5f491a3cbd8e179010ba7bd0a08cc', 'width': 1080}, 'variants': {}}]} |
|||
Shirdi Travels | 1 | 2024-12-26T10:31:17 | https://www.shirditravels.co.in/ | Global-Grade9149 | shirditravels.co.in | 1970-01-01T00:00:00 | 0 | {} | 1hmls29 | false | null | t3_1hmls29 | /r/LocalLLaMA/comments/1hmls29/shirdi_travels/ | false | false | default | 1 | null |
|
Pleasantly surprised by Continue.Dev! | 53 | Hey everyone! Quick one before I head off to the airport for holidays 🥳
TL;DR Continue.Dev has taken some serious notes from Cursor, and I might cancel my Cursor subscription since I've been getting 45tokens/second with Llama-8b on my M1 Max in low power mode
---
I've been using Cursor pretty religiously over the past few months; as someone who hadn't really touched code before about 12 months ago, it's been a huge game changer for me with how frictionless it is to chat with the model in the IDE and then click a button to have the code get implemented.
When I first started using Cursor, the general consensus I saw was that Continue.Dev is generally good, but missing some of the killer features of Cursor... but since I'm about to go on a flight, I thought I'd check it out anyway. I'm not sure if maybe I just misunderstood, or if Continue.Dev has had a major release since then, but honestly it's 95% of what I need! I simply set up Qwen-14b Coder Instruct 4bit MLX on LMStudio, set it up as a server, then selected LMStudio in Continue.Dev and hey presto, the experience is almost identical to Cursor! Same shortcuts/hotkeys, same chat with model in IDE feature, same single button press to implement half-written bits of code...
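For anyone wanting to replicate it, the config entry is roughly this shape (from memory, so treat it as a sketch - the model name is whatever LM Studio shows you):

```
{
  "models": [
    {
      "title": "Qwen2.5 Coder 14B (LM Studio)",
      "provider": "lmstudio",
      "model": "qwen2.5-coder-14b-instruct-mlx"
    }
  ]
}
```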
I'm extremely pleased, and honestly, depending on how things go whilst I'm on holiday, I might end up cancelling my Cursor sub when I'm back 👀 I've been messing around with some speculative decoding stuff in MLX, and I've been getting some seriously impressive results in low power mode: 45 tokens/second for Llama-8b-4bit at coding tasks. And since it's low power mode, my laptop never gets hot; MacTOP reports a tiny 14W max power draw from the GPU(!). If I can hack together an MLX server with spec decoding and an automatic prompt cache handler, then honestly I think I might just stick to local models from now on. It's all coming together 😄
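For reference, a minimal mlx_lm sketch of the basic local-generation flow described above; the repo id is an example 4-bit MLX conversion, and the speculative-decoding hookup is left as a comment because the exact option depends on your mlx_lm version:

```python
# Minimal sketch of local generation with mlx_lm on Apple Silicon.
# The repo id below is an example 4-bit MLX conversion; substitute your own.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

out = generate(
    model,
    tokenizer,
    prompt="Explain speculative decoding in two sentences.",
    max_tokens=128,
)
print(out)

# Note: wiring in a small draft model for speculative decoding (as described
# above) depends on your mlx_lm version exposing that option; treat it as an
# assumption and check the current mlx_lm docs for the exact argument.
```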
Peace 🫡 | 2024-12-26T10:32:29 | mark-lord | https://www.reddit.com/r/LocalLLaMA/comments/1hmlsnk/pleasantly_surprised_by_continuedev/
YuLan-Mini: An Open Data-efficient Language Model | 44 | **Please note:** Instruct version is coming soon.
**Description:** YuLan-Mini is a lightweight language model with 2.4 billion parameters. It achieves performance comparable to industry-leading models trained on significantly more data, despite being pre-trained on only 1.08T tokens. The model excels particularly in the domains of mathematics and code. To facilitate reproducibility, we will open-source the relevant pre-training resources.
**Specifications:**
**Parameters:** 2.42B
**Training data:** 1.08T tokens from various sources (web, math, code, etc.)
**Context length:** 4K and 28K model variants
**Number of GPUs used:** 56 A800-GPU cluster
**Training stages:** Warmup (10B tokens), stable training (990B tokens), and annealing (80B tokens)
**Benchmarks:** competitive performance with scores such as 37.80 on MATH-500 (four-shot) and 64.00 on HumanEval (zero-shot).
**Data filtering:** The data is filtered using techniques such as de-duplication, heuristic filtering, and topic-based text recall.
**Techniques used:** embedding tying, Pre-RMSNorm, SwiGLU activations, and rotary position embeddings (RoPE), combined to improve performance and training stability.
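For intuition, a minimal PyTorch sketch of the Pre-RMSNorm + SwiGLU feed-forward pattern named above; the dimensions are illustrative and are not YuLan-Mini's actual configuration:

```python
# Minimal sketch of the Pre-RMSNorm + SwiGLU feed-forward pattern listed above.
# Dimensions are illustrative; they are not YuLan-Mini's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        # Normalize by the root-mean-square of the features, then rescale.
        norm = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        return norm * self.weight

class SwiGLUFFN(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm = RMSNorm(dim)  # "pre" norm: applied before the FFN, not after
        self.w_gate = nn.Linear(dim, hidden, bias=False)
        self.w_up = nn.Linear(dim, hidden, bias=False)
        self.w_down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        h = self.norm(x)
        # SwiGLU: a SiLU-gated linear unit feeding the down projection.
        h = self.w_down(F.silu(self.w_gate(h)) * self.w_up(h))
        return x + h  # residual connection

x = torch.randn(2, 16, 512)
print(SwiGLUFFN(512, 1408)(x).shape)  # torch.Size([2, 16, 512])
```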
**HuggingFace Links:** (the second link is still inactive for now):
|Model|Context Length|SFT|
|:-|:-|:-|
|[YuLan-Mini](https://huggingface.co/yulan-team/YuLan-Mini) (Recommended)|28K|❎|
|[YuLan-Mini-2.4B-4K](https://huggingface.co/yulan-team/YuLan-Mini-Intermediate-4K)|4K|❎|
|YuLan-Mini-Instruct|Coming soon|✅|
**Github:** [https://github.com/RUC-GSAI/YuLan-Mini](https://github.com/RUC-GSAI/YuLan-Mini)
**Paper:** [https://arxiv.org/abs/2412.17743](https://arxiv.org/abs/2412.17743)
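Loading the released checkpoint should follow the standard Hugging Face transformers flow; a hedged sketch (trust_remote_code is included only as a precaution in case the repo ships custom model code):

```python
# Minimal loading sketch, assuming the standard transformers API works for this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yulan-team/YuLan-Mini"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, trust_remote_code=True
)

inputs = tokenizer("1+1=", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```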
Our pre-training methodology improves training efficiency through three key innovations:

1. an elaborately designed **data pipeline** that combines data cleaning with data scheduling strategies;
2. a systematic **optimization method** that effectively mitigates training instability;
3. an effective **annealing approach** that integrates targeted data selection and long-context training (a schedule sketch follows below).
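The three training stages in the specifications above (warmup, stable, annealing) map naturally onto a warmup-stable-decay learning-rate schedule. A minimal sketch; the peak and final learning rates are illustrative, not YuLan-Mini's actual values, and only the stage sizes are taken from the post:

```python
# Minimal sketch of a warmup-stable-decay (WSD) schedule matching the three
# stages above: warmup (10B tokens), stable (990B), annealing/decay (80B).
# Peak/final learning rates are illustrative, not YuLan-Mini's actual values.
def wsd_lr(tokens_seen: float, peak_lr: float = 1e-3, final_lr: float = 1e-5,
           warmup: float = 10e9, stable: float = 990e9, decay: float = 80e9) -> float:
    if tokens_seen < warmup:            # linear warmup from 0 to peak
        return peak_lr * tokens_seen / warmup
    if tokens_seen < warmup + stable:   # constant plateau for the stable stage
        return peak_lr
    # linear anneal from peak down to final_lr over the decay stage
    frac = min((tokens_seen - warmup - stable) / decay, 1.0)
    return peak_lr + frac * (final_lr - peak_lr)

for t in [5e9, 500e9, 1.04e12, 1.08e12]:
    print(f"{t/1e9:7.0f}B tokens -> lr {wsd_lr(t):.2e}")
```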
**Note:** I am not affiliated. | 2024-12-26T11:04:17 | Many_SuchCases | https://www.reddit.com/r/LocalLLaMA/comments/1hmm7oy/yulanmini_an_open_dataefficient_language_model/