title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What can I run on a 2080 Super 8GB? | 0 | I am just looking to start experimenting, although I am interested to see if I can build a "toy robot", something that can give fun, simple responses to stimuli, without needing any kind of accuracy.
Although coding support would be cool too, I'm guessing I'd be better off with a hosted solution for that if I get serious. | 2025-01-02T02:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hrix7y/what_can_i_run_on_a_2080_super_8gb/ | Lightspeedius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrix7y | false | null | t3_1hrix7y | /r/LocalLLaMA/comments/1hrix7y/what_can_i_run_on_a_2080_super_8gb/ | false | false | self | 0 | null |
New year (2025) poll: At what MSRP would you purchase RTX 5090 32GB? | 0 | NVIDIA will showcase the 5090 at CES soon. A consumer-grade GPU with 32GB VRAM is pretty darn exciting. What will Nvidia set its MSRP at? More importantly, at what price would you purchase the GPU? Price is in USD ($).
[View Poll](https://www.reddit.com/poll/1hrj853) | 2025-01-02T02:22:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hrj853/new_year_2025_poll_at_what_msrp_would_you/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrj853 | false | null | t3_1hrj853 | /r/LocalLLaMA/comments/1hrj853/new_year_2025_poll_at_what_msrp_would_you/ | false | false | self | 0 | null |
How to gauge improvements to llama as a user of Ray-Ban Meta | 1 | [removed] | 2025-01-02T02:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hrjcm6/how_to_gauge_improvements_to_llama_as_a_user_of/ | Alarmed-Instance5356 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrjcm6 | false | null | t3_1hrjcm6 | /r/LocalLLaMA/comments/1hrjcm6/how_to_gauge_improvements_to_llama_as_a_user_of/ | false | false | self | 1 | null |
My Makeshift Budget AI Rig. Where Should I Go From Here? | 13 | I’ve recently upgraded my old computer for ai and here’s what I have now
1x 3090 24 GB VRAM
1x 2060 super 8 GB VRAM
64 GB 3200 DDR4 ram
On a ROG STRIX X370-F motherboard
I had the 2060 before I upgraded to the 3090, put em both in for 32 gb of vram, it’s really nice but now I’m thinking where I should go from here.
My motherboard has 3 PCIE slots for gpus so that and case space are my main limiters
I’m thinking if the price for the intel 24gb is cheaper than used 3090s I’ll get 2 of them and replace the 2060 but I’m open to all suggestions! | 2025-01-02T02:32:10 | https://www.reddit.com/gallery/1hrjf6h | jeremiahn4 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hrjf6h | false | null | t3_1hrjf6h | /r/LocalLLaMA/comments/1hrjf6h/my_makeshift_budget_ai_rig_where_should_i_go_from/ | false | false | 13 | null |
I can't run 3B LLMs on my Android phone... | 2 | I'm using a POCO X6 Pro 12/512 GB and, strangely, it appears that bigger LLMs simply crash PocketPal.
I've tried Llama-3.2-3B-Instruct-Q4_0_4_8, Phi-3.5 mini 4k instruct (Q4_K_M) and Llama-3.2-3B-Instruct (Q6_K) and all of them crashed the application. The only one so far that worked was Llama-3.2-1b-instruct (Q8_0), but it did not meet my expectations (around 8 tokens per second with Flash Attention enabled w/ F16 Key and Value cache, around 4 - 5 tokens per second with Flash Attention disabled).
I cannot find the problem here. I've searched some solutions, but I did not find anything. Can anyone please help me? Where is the problem? Am I using the wrong settings, app or LLM? | 2025-01-02T03:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hrkd05/i_cant_run_3b_llms_on_my_android_phone/ | Azeitonius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrkd05 | false | null | t3_1hrkd05 | /r/LocalLLaMA/comments/1hrkd05/i_cant_run_3b_llms_on_my_android_phone/ | false | false | self | 2 | null |
M4 24GB Ollama performance with llama3.2, llama3.2-vision:11b, mistral:7b | 30 | I had an RTX 3070 rig, and recently got an M4 24GB 512 Mac mini, and decided to test the AI performance on both. Initially I wanted to benchmark with popular benchmark tools like Geekbench AI, but then figured out the benchmarks are probably not going to give me real data. So I ran Ollama on these devices and got a few interesting results.
|**Metric**|**Mac Mini (llama3.2)**|**RTX 3070 (llama3.2)**|**Mac Mini (vision:11b)**|**RTX 3070 (vision:11b)**|**Mac Mini (mistral-7b)**|**RTX 3070 (mistral-7b)**|**Mac Mini (wizard-13b)**|**RTX 3070 (wizard-13b)**|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|**Total Duration**|7.5-8.8s|2.2-2.7s|17.4-23.4s|25.3-29.7s|7.1-8.8s|1.7-2.0s|6.11-7.47s|4.03-6.19s|
|**Load Duration**|25-29ms|9-9.5ms|23-29ms|9258-15386ms|25-29ms|25-29ms|9.73ms|2.70ms|
|**Prompt Eval Duration**|106-158ms|6-11ms|182-204ms|63-432ms|86-204ms|102-158ms|212-449ms|47-415ms|
|**Eval Duration**|7.3-8.0s|2.1-2.2s|17.4-23.4s|22.0-29.7s|7.0-8.6s|1.5-2.0s|6.37s|4.74s|
|**Tokens/Second**|41.3 t/s|140.5 t/s|31.2 t/s|89.7 t/s|39.8 t/s|135.2 t/s|12.8 t/s|19.6 t/s|
It was pretty clear the M4 24GB could not compare to the RTX 3070, but that is just for reference. The M4 Mac mini is powerful in its own right. I don't have an M4 Pro machine at hand right now, and all the benchmarks on the web compare the base M4 with the M4 Pro, which did not satisfy the need. | 2025-01-02T03:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hrkend/m4_24gb_ollama_performance_with_llama32/ | entrptaher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrkend | false | null | t3_1hrkend | /r/LocalLLaMA/comments/1hrkend/m4_24gb_ollama_performance_with_llama32/ | false | false | self | 30 | null |
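The duration and tokens/second metrics in the table above correspond to the timing fields Ollama's REST API reports (in nanoseconds). A minimal sketch of how such numbers can be collected, assuming a local Ollama server on the default port and a model that has already been pulled:

```python
import requests

def benchmark(model: str, prompt: str) -> dict:
    # Non-streaming generate call; Ollama returns its timings in nanoseconds.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    data = resp.json()
    ns = 1e9
    return {
        "total_s": data["total_duration"] / ns,
        "load_s": data["load_duration"] / ns,
        "prompt_eval_s": data["prompt_eval_duration"] / ns,
        "eval_s": data["eval_duration"] / ns,
        "tokens_per_s": data["eval_count"] / (data["eval_duration"] / ns),
    }

if __name__ == "__main__":
    print(benchmark("llama3.2", "Explain the Doppler effect in two sentences."))
```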
Any chance the cheap Chinese Nvidia L4s on eBay aren't scams? | 1 | There are currently several insanely low-priced L4s on eBay from low-rep Chinese sellers. Any chance it'd be at all worth it to buy one and either be pleasantly surprised or have to go through eBay's guaranteed money-back policy? | 2025-01-02T03:23:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hrkf9e/any_change_the_cheap_chinese_nvidia_l4s_on_ebay/ | theaaronlockhart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrkf9e | false | null | t3_1hrkf9e | /r/LocalLLaMA/comments/1hrkf9e/any_change_the_cheap_chinese_nvidia_l4s_on_ebay/ | false | false | self | 1 | null |
Youtube Transcript Summarizer extension with ollama. | 1 | [removed] | 2025-01-02T03:27:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hrkhwd/youtube_transcript_summarizer_extension_with/ | loktar_00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrkhwd | false | null | t3_1hrkhwd | /r/LocalLLaMA/comments/1hrkhwd/youtube_transcript_summarizer_extension_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'O1IUQNe-COZt5DFxlgqfZzdMKFyEwlqRJT27w26r3po', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ppWL2ptKQCjMCY-fbvgfFzGWAdccrLn85PL3yH3SSfU.jpg?width=108&crop=smart&auto=webp&s=6bac9511b6ac94cfd7e056584d14ddfed7bb43c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ppWL2ptKQCjMCY-fbvgfFzGWAdccrLn85PL3yH3SSfU.jpg?width=216&crop=smart&auto=webp&s=1104ecd2923950427cda52deb4992b520619af80', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ppWL2ptKQCjMCY-fbvgfFzGWAdccrLn85PL3yH3SSfU.jpg?width=320&crop=smart&auto=webp&s=81b9df73964459c93a98bf59cdb8d9e2943e485f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ppWL2ptKQCjMCY-fbvgfFzGWAdccrLn85PL3yH3SSfU.jpg?width=640&crop=smart&auto=webp&s=8dfb8383fa082327eb40bce3d35aa8842ca1d618', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ppWL2ptKQCjMCY-fbvgfFzGWAdccrLn85PL3yH3SSfU.jpg?width=960&crop=smart&auto=webp&s=772ee37d65a05a500ec265f5c349f1a79bf21e42', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ppWL2ptKQCjMCY-fbvgfFzGWAdccrLn85PL3yH3SSfU.jpg?width=1080&crop=smart&auto=webp&s=4e4275246c8710a4b5cea1aaed2f9e1f42b3ec60', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ppWL2ptKQCjMCY-fbvgfFzGWAdccrLn85PL3yH3SSfU.jpg?auto=webp&s=6f8d2f1d6ff139f7977f79a123bbc0e79d530766', 'width': 1200}, 'variants': {}}]} |
Neuroscience-Inspired Memory Layer for LLM Applications | 99 |
I work as a security researcher, but I have been following and building AI agents for a while, and I also did some research on LLM reasoning, which became trending and lets many people do things they could not do before. During this learning process I experimented with various open-source LLM memory libraries such as mem0, but they did not work well for me and my use cases. Eventually I read the book A Thousand Brains by Jeff Hawkins, which gave me an idea of how the human brain might store knowledge across thousands of map-like structures in the neocortex! I used this idea, along with the ConceptNet project from MIT, to build an open-source, Python-based neuroscience-inspired memory layer for LLM applications called HawkinsDB! It is purely experimental, and HawkinsDB supports semantic, procedural and episodic types of memory.
I need honest feedback from the community on what you think about this work.
[https://github.com/harishsg993010/HawkinsDB](https://github.com/harishsg993010/HawkinsDB) | 2025-01-02T03:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hrku28/neuroscienceinspired_memory_layer_for_llm/ | Altruistic-Tea-5612 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrku28 | false | null | t3_1hrku28 | /r/LocalLLaMA/comments/1hrku28/neuroscienceinspired_memory_layer_for_llm/ | false | false | self | 99 | {'enabled': False, 'images': [{'id': '8PqUl967unC04BdEI7qesyH2EJyfdm9EzL15hRXilRY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7zcBdeaLgB1F8jGnqO85si1ej-ivVvC2Yk2IW8oJHkk.jpg?width=108&crop=smart&auto=webp&s=c412a6aa054373bacba9efde029571031005db8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7zcBdeaLgB1F8jGnqO85si1ej-ivVvC2Yk2IW8oJHkk.jpg?width=216&crop=smart&auto=webp&s=61e03c18628f3784beb124419a336d532fc5b4bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7zcBdeaLgB1F8jGnqO85si1ej-ivVvC2Yk2IW8oJHkk.jpg?width=320&crop=smart&auto=webp&s=94f23abec076b4a11fbaaeca38726f219a1c6cd8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7zcBdeaLgB1F8jGnqO85si1ej-ivVvC2Yk2IW8oJHkk.jpg?width=640&crop=smart&auto=webp&s=539dd0ff0b172a65e5eb3b615b6c1c33afab2373', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7zcBdeaLgB1F8jGnqO85si1ej-ivVvC2Yk2IW8oJHkk.jpg?width=960&crop=smart&auto=webp&s=87471f72ed82fbd0170e376ecbd8449593eb7f12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7zcBdeaLgB1F8jGnqO85si1ej-ivVvC2Yk2IW8oJHkk.jpg?width=1080&crop=smart&auto=webp&s=dd489d3e484e7d73259dcafa292bfe1cc63c5caa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7zcBdeaLgB1F8jGnqO85si1ej-ivVvC2Yk2IW8oJHkk.jpg?auto=webp&s=ddbb50794dba46f6c8067172cb2078f5b4cd4e47', 'width': 1200}, 'variants': {}}]} |
I made a site to find the latest jobs in AI | 2 | 2025-01-02T03:50:11 | https://v.redd.it/f1tg6sc66iae1 | WordyBug | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hrkwzi | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/f1tg6sc66iae1/DASHPlaylist.mpd?a=1738381826%2CMmQwMTI3OTBkZDFhNDUyY2Q5ZTI5ZmIxYWY2OGY5ODFiZjdjNGYyNGUzMjUyMmI4YjdkYTA1YTk0ZTY2YmU1Zg%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/f1tg6sc66iae1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/f1tg6sc66iae1/HLSPlaylist.m3u8?a=1738381826%2CNGJiYWQxOTZhMmJlZDZkMmVmNDNiMGJjMWRmZmVkOWYxOGI4ZTdlNWI1N2MwNDA4NTM2MzdlMWU0NmVkZDkyYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/f1tg6sc66iae1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hrkwzi | /r/LocalLLaMA/comments/1hrkwzi/i_made_a_site_to_find_the_latest_jobs_in_ai/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=108&crop=smart&format=pjpg&auto=webp&s=c163cd5c50baa50dc0b5e847729710b51e8cd090', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=216&crop=smart&format=pjpg&auto=webp&s=544bf9abd4d1307197fc003a764a3199ec7aac06', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=320&crop=smart&format=pjpg&auto=webp&s=3122a5aacd58188e4f1b8939b25dd2bf12e1bf89', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=640&crop=smart&format=pjpg&auto=webp&s=d94d9ce9020bbc024e86cff9055637ae659a4218', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=960&crop=smart&format=pjpg&auto=webp&s=f26254eb32a448fbc0f8707bcf03d5b1df9fb9fe', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b0eefa87813795b30ef143ee69e5911f1d52f206', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/emN3MWVyYzY2aWFlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?format=pjpg&auto=webp&s=4db49c929d7b87a28c71cb03d3be99d80bd072bc', 'width': 2560}, 'variants': {}}]} |
China entering dram market | 1 | [https://biz.chosun.com/en/en-it/2024/12/26/T75PCZ4NK5HD5M4K6IUSEMQAEA/](https://biz.chosun.com/en/en-it/2024/12/26/T75PCZ4NK5HD5M4K6IUSEMQAEA/)
Should help memory prices drop. Maybe good news this time next year for 6090 card? :) AMD, Intel wherefore art thou?
The article says that since Chinese DRAM manufacturers are massively state-funded, the competition isn't fair. | 2025-01-02T04:34:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hrlotz/china_entering_dram_market/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrlotz | false | null | t3_1hrlotz | /r/LocalLLaMA/comments/1hrlotz/china_entering_dram_market/ | false | false | self | 1 | null |
China advancing in the memory market | 92 | [https://biz.chosun.com/en/en-it/2024/12/26/T75PCZ4NK5HD5M4K6IUSEMQAEA/](https://biz.chosun.com/en/en-it/2024/12/26/T75PCZ4NK5HD5M4K6IUSEMQAEA/)
Should help memory prices drop. Maybe good news this time next year for 6090 card? AMD, Intel wherefore art thou?
The above article says that since Chinese DRAM manufacturers are massively state-funded, the competition isn't fair.
Quote from above article: Chinese corporations, which are effectively operated like state-owned enterprises with massive government funding, is practically unfair competition.
Quote from first article below: It is worth noting that CXMT’s advancements in HBM2 is crucial for the development of domestically produced AI hardware, such as Huawei’s Ascend 910 series accelerator, which relies on CXMT’s HBM products, the report notes.
Similar articles:
1. [https://www.trendforce.com/news/2024/12/30/news-chinese-dram-giant-cxmt-reportedly-achieves-80-ddr5-yield-targeting-90-by-2025/](https://www.trendforce.com/news/2024/12/30/news-chinese-dram-giant-cxmt-reportedly-achieves-80-ddr5-yield-targeting-90-by-2025/)
2. [https://www.tomshardware.com/pc-components/dram/chinese-memory-maker-could-grab-15-percent-of-market-in-the-coming-years-stoking-price-wars](https://www.tomshardware.com/pc-components/dram/chinese-memory-maker-could-grab-15-percent-of-market-in-the-coming-years-stoking-price-wars) | 2025-01-02T04:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hrlvlb/china_advancing_in_the_memory_market/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrlvlb | false | null | t3_1hrlvlb | /r/LocalLLaMA/comments/1hrlvlb/china_advancing_in_the_memory_market/ | false | false | self | 92 | {'enabled': False, 'images': [{'id': 'GYVosXxLdLYzIsUqW5wa2r57CI0C7yW7bJ5zzohSdcY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VyrrdDpjlOdeedfVqKQQMvxiLiMcVdaeMeQtfUMecjc.jpg?width=108&crop=smart&auto=webp&s=2cf085aa92d7cca672ba528b1e716bd48bb43994', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VyrrdDpjlOdeedfVqKQQMvxiLiMcVdaeMeQtfUMecjc.jpg?width=216&crop=smart&auto=webp&s=2e7feba6ffa293384d4dd6162e2bbaafec3373e7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/VyrrdDpjlOdeedfVqKQQMvxiLiMcVdaeMeQtfUMecjc.jpg?width=320&crop=smart&auto=webp&s=39ba20f697b42446c0c257ebeeb568b37297f164', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/VyrrdDpjlOdeedfVqKQQMvxiLiMcVdaeMeQtfUMecjc.jpg?width=640&crop=smart&auto=webp&s=fad6b7a8eefb09eda2dd2b7ca161b8b225d996ee', 'width': 640}], 'source': {'height': 393, 'url': 'https://external-preview.redd.it/VyrrdDpjlOdeedfVqKQQMvxiLiMcVdaeMeQtfUMecjc.jpg?auto=webp&s=d9cb7b7ca674ba113d30c5069335c60c575a5071', 'width': 750}, 'variants': {}}]} |
Can we share compute to train reasoning models? | 0 | I have seen a post here stating that ClosedAI and other evil corporations are GPU-rich as hell, so they always win. Is it possible for all of us to collectively combine compute to get a better chance at propelling open-weight models?
I know distributed learning is difficult, but there are also places where we can contribute GPU resources for research and scientific purposes. Maybe we can replicate a similar thing for reasoning model training.
Just a thought. | 2025-01-02T05:10:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hrmbt9/can_we_share_compute_to_train_reasoning_models/ | GodCREATOR333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrmbt9 | false | null | t3_1hrmbt9 | /r/LocalLLaMA/comments/1hrmbt9/can_we_share_compute_to_train_reasoning_models/ | false | false | self | 0 | null |
Need a good model that can detect and bound UI elements | 1 | [removed] | 2025-01-02T05:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hrmcn1/need_a_good_model_that_can_detect_and_bound_ui/ | ComfortableDivide640 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrmcn1 | false | null | t3_1hrmcn1 | /r/LocalLLaMA/comments/1hrmcn1/need_a_good_model_that_can_detect_and_bound_ui/ | false | false | self | 1 | null |
16GB VRAM Models | 42 | I just got an RTX 4060 TI 16GB last night, after being limited by a 4GB RX580 for a whole year. I could only run 1-3B models. Before that, I used RWKV-4-Raven 7B and RWKV-Novel 3B for a few months on an RTX 3060 12GB. But now, after getting a worthy upgrade (I hope), I want to try some good big models. So, please suggest some good gguf models to me.
I prefer speed over size > 10 T/s.
And do you recommend I use Qwen-Coder 7B or move to 14B, or is the DeepSeek-V2-Lite-Chat still the king of its class?
And is SuperNova-Medius that good for general or roleplay purposes?
And I want a good roleplay model, SFW & NSFW, that sticks to persona (Gemmasutra is peak fiction, but sometimes it mixes between male and female) and is talkative and can take the lead if possible (I suck with communication skills and prefer reading to writing). And I also want a good creative/story writing model, SFW & NSFW, with a high context length.
And what's the suitable Flux quantization to run?
I yap a lot, sorry, but I'm a bit enthusiastic about using it.
I have these models (Never got the chance to test them all yet).
| **300M** | **1B** | **2B** | **3B** | **7B** | **9B** |
|---|---|---|---|---|---|
| Lite-Oute-1-300M-Instruct | FuseChat-Llama-3.2-1B-Instruct | Gemma-2-2B-It-Abliterated | Impish-Llama-3B | CodeLlama-7B-Instruct | Gemma-2-Darkest-Muse-v1-9B |
| | Index-1.9B-Character | Gemmasutra-Mini-2B-v1 | Llama-3.2-3B-Instruct | Gorilla-OpenFunctions-v2-7B | |
| | KobbleTinyV2-1.1B | Octopus-v2-2B | Phi-3-mini-128k-Instruct | Kunoichi-DPO-v2-7B | |
| | Llama-3.2-1B-Instruct | MoE-Girl_400MA_1BT | Qwen2.5-3B-instruct | LLava-v1.6-Mistral-7B | |
| | Moondream2-1.86B | NuExtract-v1.5 | replit-code-v1_5-3b | LLama-3.2-3B-Instruct | |
| | OLMoe-1B-7B-0924-Instruct | | Reasoning-Llama-3b-v0.1 | Mistral-7B-Instruct-v0.1-GPTQ | |
| | RWKV-6-World-1.6B | | RWKV-6-World-3B-v2.1 | mpt-7b-storywriter | |
| | | | | OpenChat-3.5-7B-16K | |
| | | | | Qwen2.5-Coder-7B-Instruct | |
| | | | | SiliconMaid-7B | |
| | | | | Yi-Coder-1.5B-Chat | |
| | | | | zephyr-7B-beta-GPTQ | |
| 2025-01-02T05:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hrmpjw/16gb_vram_models/ | LSXPRIME | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrmpjw | false | null | t3_1hrmpjw | /r/LocalLLaMA/comments/1hrmpjw/16gb_vram_models/ | false | false | self | 42 | null |
Asked Claude to roast open-source models | 1 | 2025-01-02T06:18:00 | OrangePotatoFarmer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hrne7d | false | null | t3_1hrne7d | /r/LocalLLaMA/comments/1hrne7d/asked_claude_to_roast_opensource_models/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qaSOzwNZmp486dma8rLGeOpVxaLXFg8_roIhu-8GCt0', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/spzozz88wiae1.png?width=108&crop=smart&auto=webp&s=4bf99798ebad448c923686c8cb43521148ec69fa', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/spzozz88wiae1.png?width=216&crop=smart&auto=webp&s=7f090c149285480d67468e48b186df4e5c285af2', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/spzozz88wiae1.png?width=320&crop=smart&auto=webp&s=a7ef833d94409cb4e5c50b103a6aa4ae9a27e01e', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/spzozz88wiae1.png?width=640&crop=smart&auto=webp&s=56c4668a1d176335451a3536a9cb0c5b8d9f056c', 'width': 640}, {'height': 614, 'url': 'https://preview.redd.it/spzozz88wiae1.png?width=960&crop=smart&auto=webp&s=fd7dcfbff5b9c015a21b90246e69a89411e9cc57', 'width': 960}, {'height': 691, 'url': 'https://preview.redd.it/spzozz88wiae1.png?width=1080&crop=smart&auto=webp&s=d42349e8251cc2037f342ce5abff89f43b610971', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/spzozz88wiae1.png?auto=webp&s=f9315e168f95f666b86a59debe2cb33423d2189d', 'width': 1406}, 'variants': {}}]} |
DeepSeek v3 vs. Claude 3.5 Sonnet 1022: DeepSeek tends to write simpler code (My Experience) | 108 | Hey everyone, I've been experimenting with using LLMs to add new features to my code, and I've noticed some interesting differences between DeepSeek Coder and Claude 3.5 Sonnet. Specifically, DeepSeek tends to generate simpler, more straightforward code compared to Claude, which often leans towards more complex, object-oriented solutions.
I wanted to share a couple of examples and get your thoughts.
**Example 1: Implementing an Undo Feature for File Changes**
When I asked both models to implement an undo feature for file operations, they took very different approaches. Here's a summary of what I observed, based on Claude's own analysis of DeepSeek's code:
**Key Differences:**
* **Complexity:** Claude opted for an object-oriented design with a dedicated manager class to handle undo logic. This approach promotes better organization and separation of concerns, potentially making it easier to scale for more complex scenarios. DeepSeek, on the other hand, used a simpler, procedural method with a global list to track changes. This is easier to understand at a glance, especially for basic undo/redo.
* **Data Structures:** Claude tracked changes using a list of objects, each containing detailed information about the operation (type, path, timestamp, content). DeepSeek used a list of tuples, holding just the essential data (action, path, backup). Both are functional, but DeepSeek's approach is less verbose.
* **Error Handling:** Claude included more robust error handling, providing feedback to the user in case of issues. DeepSeek's error handling was more basic, primarily focusing on file deletion during undo.
* **Readability:** For those familiar with object-oriented programming, Claude's code is well-structured and easier to maintain in larger projects. DeepSeek's linear code is arguably easier to follow for those less comfortable with OOP concepts.
**Deep Diving into the Differences**
The differences go even deeper than just complexity. Here are some additional observations about their respective approaches:
**DeepSeek's Approach - Simplicity and Directness:**
* **Fewer moving parts:** It avoids introducing new classes and enums, relying on basic Python data structures and control flow.
* **Directness:** The logic for backing up and undoing is embedded directly within the functions that modify files.
* **Less abstraction:** There's less indirection, making it easier to see the direct relationship between the action and the undo operation.
* **Pragmatic Approach:** DeepSeek appears to focus on providing a functional solution with minimal overhead, prioritizing simplicity and ease of understanding.
**Claude's Approach - Robustness and Extensibility:**
* **Focus on Structure:** Claude seems to prioritize building a more robust and well-structured solution, anticipating potential future complexities. The use of classes and enums is characteristic of this approach.
* **Detailed Documentation:** Claude includes more detailed comments and docstrings, explaining the purpose of the classes and methods.
* **Experience Assumption:** Claude's response might assume a user with more software engineering experience who appreciates structured design patterns.
* **Communication Style:** It's more conversational, asking for confirmation (e.g., "Would you like me to explain...").
Interestingly, when I asked Claude to compare the two implementations, it acknowledged the simplicity and effectiveness of DeepSeek's code:
> "Yes, that's a good implementation! Your approach is actually simpler and more straightforward than my suggestion while still accomplishing the core functionality... One small improvement you might consider is removing the entry from `file_history` if the undo operation fails..."
**Example 2: Adding a Message Override Feature**
I saw a similar pattern when adding a message override feature. Again, Claude praised DeepSeek's implementation as "clearer and more straightforward," highlighting its advantages:
> "Yes, that's correct! This implementation is actually clearer and more straightforward than my previous suggestion. Your version has several advantages: 1. It keeps the message collection logic together in one place. 2. It makes it very clear when the default message is being used vs. the user message..."
**Which Approach is "Better"?**
Both implementations achieve the desired functionality. The "better" approach depends on the context and your priorities:
* **Choose Claude's approach if:**
* You anticipate needing more complex undo scenarios in the future.
* You value a well-structured and maintainable codebase.
* You are comfortable with object-oriented programming.
* **Choose DeepSeek's approach if:**
* You need a simple and quick solution.
* You prioritize ease of understanding and implementation.
* Your undo requirements are unlikely to change too much.
**My Takeaway:**
My experience suggests that DeepSeek Coder might be a better choice when you need a quick, clean implementation of a new feature. Claude, while capable of generating more sophisticated code, sometimes leans towards over-engineered solutions unless you provide very specific instructions. It also seems like DeepSeek might be more suitable for users that are less experienced, whereas Claude might target more experienced programmers.
**What are your thoughts?** Have you noticed similar differences between these or other LLMs? Do you prefer simpler or more complex solutions when using LLMs for coding? I'd love to hear about your experiences and any tips you might have! | 2025-01-02T06:50:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hrnvjo/deepseek_v3_vs_claude_35_sonnet_1022_deepseek/ | Miscend | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrnvjo | false | null | t3_1hrnvjo | /r/LocalLLaMA/comments/1hrnvjo/deepseek_v3_vs_claude_35_sonnet_1022_deepseek/ | false | false | self | 108 | null |
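For concreteness, here is a minimal sketch of the simpler, procedural style the post above attributes to DeepSeek: a module-level history of (action, path, backup) tuples, with the undo logic kept right next to the file operations. The function and variable names are illustrative and not taken from either model's actual output.

```python
import os
import shutil

# (action, path, backup_path_or_None) entries, newest last.
file_history = []

def write_file(path: str, content: str) -> None:
    backup = None
    if os.path.exists(path):
        backup = path + ".bak"
        shutil.copy2(path, backup)      # keep a copy so the write can be undone
    with open(path, "w") as f:
        f.write(content)
    file_history.append(("write", path, backup))

def undo_last() -> bool:
    if not file_history:
        return False
    action, path, backup = file_history.pop()
    if action == "write":
        if backup is not None:
            shutil.move(backup, path)   # restore the previous contents
        else:
            os.remove(path)             # file did not exist before: delete it
    return True
```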
Improving prompts | 1 | I use DeepSeek, Claude, and GPT to rewrite prompts in a better way, but all of them always drop some instructions. What do you think is the best approach to rewrite a prompt and keep all its instructions? | 2025-01-02T07:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hro9xw/improving_prompts/ | Apprehensive_Dog1267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hro9xw | false | null | t3_1hro9xw | /r/LocalLLaMA/comments/1hro9xw/improving_prompts/ | false | false | self | 1 | null |
Do you still think larger model with more quant is better than smaller model with less quant? | 94 | 18 months ago [it was shown that Q2 on a large model was equal to the next smallest model](https://www.reddit.com/r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/), at least when using perplexity as a measure.
Today, imo, it is different. Models [have a larger vocabulary so are more negatively affected by quant](https://www.reddit.com/r/LocalLLaMA/comments/1fjo7zx/comment/lnpiwxx/), though as that commenter also says, larger models are more resilient to quant loss than smaller models.
Recently people on here have said that very dense modern models suffer quality drop quickly after Q6, so it's better to run a smaller model with higher quant.
That certainly seems the case with Llama 3.1 and 3.2. In a way it makes sense that these highly-tuned high-vocab high-context 8B and 3B models would suffer from almost any quantization.
Anecdotally, I think I have gotten better answers from Qwen 2.5 Coder 7B at Q8_0 than from the 32B version at Q4_K_L.
What do you think? What's your experience? | 2025-01-02T07:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hrogx6/do_you_still_think_larger_model_with_more_quant/ | suprjami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrogx6 | false | null | t3_1hrogx6 | /r/LocalLLaMA/comments/1hrogx6/do_you_still_think_larger_model_with_more_quant/ | false | false | self | 94 | null |
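Perplexity comparisons like the one linked above can be reproduced locally with llama.cpp's perplexity tool. A rough sketch driven from Python; the binary name, quant filenames, and evaluation file are assumptions that depend on how llama.cpp was built and which test text you use:

```python
import subprocess

# Hypothetical GGUF filenames: a small model at high quant vs. a larger one at low quant.
models = ["qwen2.5-coder-7b-q8_0.gguf", "qwen2.5-coder-32b-q4_k_l.gguf"]

for gguf in models:
    # llama.cpp ships a perplexity tool (named llama-perplexity in recent builds);
    # -f points at a plain-text evaluation file such as wikitext-2's test split.
    subprocess.run(
        ["./llama-perplexity", "-m", gguf, "-f", "wiki.test.raw", "-c", "2048"],
        check=True,
    )
```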
Why does't groq host Deepseek V3 | 1 | [removed] | 2025-01-02T07:34:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hroi2l/why_doest_groq_host_deepseek_v3/ | Ok-Length-9762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hroi2l | false | null | t3_1hroi2l | /r/LocalLLaMA/comments/1hroi2l/why_doest_groq_host_deepseek_v3/ | false | false | self | 1 | null |
New reasoning model | 5 | 2025-01-02T08:24:41 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hrp6ni | false | null | t3_1hrp6ni | /r/LocalLLaMA/comments/1hrp6ni/new_reasoning_model/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'DQNbo5X6Uqyxr8wihUq1xiuc-CJqoS9dB5BurSogBU4', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/cqe0lfg9jjae1.jpeg?width=108&crop=smart&auto=webp&s=e0163893ce519b3792ba58f0230ae28f732a5a10', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/cqe0lfg9jjae1.jpeg?width=216&crop=smart&auto=webp&s=a97fff8a5d3a57f3dd2260328ee19a22c49a1ea6', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/cqe0lfg9jjae1.jpeg?width=320&crop=smart&auto=webp&s=65d29e38d7905cc83e44dd5e746fbd5e5901784e', 'width': 320}, {'height': 473, 'url': 'https://preview.redd.it/cqe0lfg9jjae1.jpeg?width=640&crop=smart&auto=webp&s=b4034c7f1014832c455f10f5f5517a8e851f03f6', 'width': 640}, {'height': 710, 'url': 'https://preview.redd.it/cqe0lfg9jjae1.jpeg?width=960&crop=smart&auto=webp&s=4dc838fde93fd2f8c331ec31cffa8cf1cc27679e', 'width': 960}, {'height': 799, 'url': 'https://preview.redd.it/cqe0lfg9jjae1.jpeg?width=1080&crop=smart&auto=webp&s=aabe6926ce801312b26d4aa6f019aba5cd50c9eb', 'width': 1080}], 'source': {'height': 888, 'url': 'https://preview.redd.it/cqe0lfg9jjae1.jpeg?auto=webp&s=db53cef4f998527f370c81ea53fdf9e1ac6e2f1c', 'width': 1200}, 'variants': {}}]} |
Here’s a way to get started with any Github Project: | 0 | FREE GitHub Project Hack!
Here’s a way to get started with any Github Project:
Use Git Ingest: Change 'hub' to 'ingest' in any GitHub repo URL. Download the text file!
Paste that text into Google AI Studio (2M context!).
Ask Gemini for a detailed cookbook! Unlock project secrets instantly. | 2025-01-02T08:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hrpl9f/heres_a_way_to_get_started_with_any_github_project/ | Dart7989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrpl9f | false | null | t3_1hrpl9f | /r/LocalLLaMA/comments/1hrpl9f/heres_a_way_to_get_started_with_any_github_project/ | false | false | self | 0 | null |
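A small sketch of that URL trick in code; whether the rewritten URL can be fetched directly as plain text is an assumption, since the site may require using its UI or a separate download endpoint for the generated .txt file:

```python
import requests

def to_ingest_url(github_url: str) -> str:
    # "https://github.com/owner/repo" -> "https://gitingest.com/owner/repo"
    return github_url.replace("github.com", "gitingest.com", 1)

repo = "https://github.com/huggingface/transformers"  # example repo, any public one works
ingest_url = to_ingest_url(repo)
print(ingest_url)

# Fetching the flattened repo text this way is an assumption; check the site for
# the exact download flow if this returns HTML instead of plain text.
resp = requests.get(ingest_url, timeout=60)
print(resp.status_code, len(resp.text), "characters")
```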
How to find phishing/spam/safe email dataset ? | 1 | [removed] | 2025-01-02T09:14:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hrpu6t/how_to_find_phishingspamsafe_email_dataset/ | EstebanbanC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrpu6t | false | null | t3_1hrpu6t | /r/LocalLLaMA/comments/1hrpu6t/how_to_find_phishingspamsafe_email_dataset/ | false | false | self | 1 | null |
Intle GPU's? | 1 | [removed] | 2025-01-02T09:33:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hrq367/intle_gpus/ | JamesAibr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrq367 | false | null | t3_1hrq367 | /r/LocalLLaMA/comments/1hrq367/intle_gpus/ | false | false | self | 1 | null |
Best (24GB VRAM) local model for Obsidian / Finetuning Captions? | 1 | [removed] | 2025-01-02T09:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hrqesb/best_24gb_vram_local_model_for_obsidian/ | SubstantialFlounder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrqesb | false | null | t3_1hrqesb | /r/LocalLLaMA/comments/1hrqesb/best_24gb_vram_local_model_for_obsidian/ | false | false | self | 1 | null |
Best (24GB) local model for Obsidian, Coding, and Rewriting my Finetuning Captions? | 1 | [removed] | 2025-01-02T10:04:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hrqial/best_24gb_local_model_for_obsidian_coding_and/ | Travistyse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrqial | false | null | t3_1hrqial | /r/LocalLLaMA/comments/1hrqial/best_24gb_local_model_for_obsidian_coding_and/ | false | false | self | 1 | null |
Who cares if it is a Chinese Communist Party propaganda slave? At least it can code well enough. | 0 | 2025-01-02T10:09:13 | https://www.reddit.com/gallery/1hrqklm | _idkwhattowritehere_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hrqklm | false | null | t3_1hrqklm | /r/LocalLLaMA/comments/1hrqklm/who_cares_if_it_is_a_chinese_communist_party/ | false | false | 0 | null |
What LLM is "tippu" on llm arena? | 11 | Today in the battle I came across TIPPU model (which is not available in direct chat).
It was pretty decent but I have no idea what that is.
Any clue?
| 2025-01-02T10:12:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hrqmjj/what_llm_is_tippu_on_llm_arena/ | Robert__Sinclair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrqmjj | false | null | t3_1hrqmjj | /r/LocalLLaMA/comments/1hrqmjj/what_llm_is_tippu_on_llm_arena/ | false | false | self | 11 | null |
Simple universal jailbreak prompt for most local llms(not qwen) and google gemini. | 1 | [removed] | 2025-01-02T10:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hrr5xa/simple_universal_jailbreak_prompt_for_most_local/ | automatickk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrr5xa | false | null | t3_1hrr5xa | /r/LocalLLaMA/comments/1hrr5xa/simple_universal_jailbreak_prompt_for_most_local/ | false | false | nsfw | 1 | null |
What are the current best T2S models that I could maybe fine-tune on a 4070 Super PC? | 1 | [removed] | 2025-01-02T11:05:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hrrc86/what_are_the_current_best_t2s_models_that_i_could/ | Personal-Leather-933 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrrc86 | false | null | t3_1hrrc86 | /r/LocalLLaMA/comments/1hrrc86/what_are_the_current_best_t2s_models_that_i_could/ | false | false | self | 1 | null |
Resources for MetaData | 1 | [removed] | 2025-01-02T11:28:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hrrol5/resources_for_metadata/ | Alarming-East1193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrrol5 | false | null | t3_1hrrol5 | /r/LocalLLaMA/comments/1hrrol5/resources_for_metadata/ | false | false | self | 1 | null |
Awesome blog post on Llama Mesh | 15 | When I read NVIDIA's LLaMa-Mesh paper, I was amazed at how LLMs can interface so seamlessly with the 3D world.
https://preview.redd.it/1wb5stqhikae1.png?width=1390&format=png&auto=webp&s=2b038c811748568b7a977a7692d04d046a06ffd4
Here's an incredible blog post which explains this paper in an easy to understand manner: [https://open.substack.com/pub/vizuara/p/llama-mesh-unifying-3d-mesh-generation?r=4ssvv2&utm\_campaign=post&utm\_medium=web&showWelcomeOnShare=true](https://open.substack.com/pub/vizuara/p/llama-mesh-unifying-3d-mesh-generation?r=4ssvv2&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true) | 2025-01-02T11:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hrrvt8/awesome_blog_post_on_llama_mesh/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrrvt8 | false | null | t3_1hrrvt8 | /r/LocalLLaMA/comments/1hrrvt8/awesome_blog_post_on_llama_mesh/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'V_fPPQJ13fupLY_PdfV_mshf3WQ_Q_b8yl0FgM4uQkM', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/0AeRtBDCyPMl-HwdXi4IZpoeFozYV7fGDrVuY8LoKmk.jpg?width=108&crop=smart&auto=webp&s=6a747db40c8db7d57fa38f7fc5fa84d06753b7ef', 'width': 108}, {'height': 186, 'url': 'https://external-preview.redd.it/0AeRtBDCyPMl-HwdXi4IZpoeFozYV7fGDrVuY8LoKmk.jpg?width=216&crop=smart&auto=webp&s=07529c813a21faca64c52c3ba84cb8d70510d8d4', 'width': 216}, {'height': 275, 'url': 'https://external-preview.redd.it/0AeRtBDCyPMl-HwdXi4IZpoeFozYV7fGDrVuY8LoKmk.jpg?width=320&crop=smart&auto=webp&s=86e2c5dc7f807ffaefd8e05ccdc22517896457fd', 'width': 320}, {'height': 551, 'url': 'https://external-preview.redd.it/0AeRtBDCyPMl-HwdXi4IZpoeFozYV7fGDrVuY8LoKmk.jpg?width=640&crop=smart&auto=webp&s=b23dd26dac74372c28b101b20f9455876e6de7f3', 'width': 640}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0AeRtBDCyPMl-HwdXi4IZpoeFozYV7fGDrVuY8LoKmk.jpg?auto=webp&s=e7e6c38157bdb284bd5f56bf904f5deecbe3fb93', 'width': 696}, 'variants': {}}]} |
We fine-tuned Llama and got 4.2x Sonnet 3.5 accuracy for code generation | 0 | 2025-01-02T11:52:09 | https://finecodex.com/ | AppearanceHeavy6724 | finecodex.com | 1970-01-01T00:00:00 | 0 | {} | 1hrs12b | false | null | t3_1hrs12b | /r/LocalLLaMA/comments/1hrs12b/we_finetuned_llama_and_got_42x_sonnet_35_accuracy/ | false | false | default | 0 | null |
GPT-like tool to query internal YouTrack documentation for solving customer issues | 1 | [removed] | 2025-01-02T12:15:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hrsdyi/gptlike_tool_to_query_internal_youtrack/ | NikkEvan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrsdyi | false | null | t3_1hrsdyi | /r/LocalLLaMA/comments/1hrsdyi/gptlike_tool_to_query_internal_youtrack/ | false | false | self | 1 | null |
GPT-like tool to query internal YouTrack documentation for solving customer issues | 1 | [removed] | 2025-01-02T12:31:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hrsnvs/gptlike_tool_to_query_internal_youtrack/ | NikkEvan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrsnvs | false | null | t3_1hrsnvs | /r/LocalLLaMA/comments/1hrsnvs/gptlike_tool_to_query_internal_youtrack/ | false | false | self | 1 | null |
GPT-like tool to query internal documentation for solving customer issues | 1 | [removed] | 2025-01-02T12:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hrsq2f/gptlike_tool_to_query_internal_documentation_for/ | NikkEvan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrsq2f | false | null | t3_1hrsq2f | /r/LocalLLaMA/comments/1hrsq2f/gptlike_tool_to_query_internal_documentation_for/ | false | false | self | 1 | null |
Is anyone using PocketPal AI and LLM Farm on iPadOS? Are there better ones? | 0 | I’m trying these out on my iPad Pro. PocketPal runs ok and I have tried a few models. I downloaded them in the app, but when I go to the Files app I can’t see the models I downloaded.
I’d like to import these models if possible to LLM Farm. The model size is 3gb, which isn’t usually a problem, but I’m road tripping and only have cellular on my iPad.
Or if you have suggestions on a different app to run on iPad Pro, lmk. Thanks | 2025-01-02T13:09:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hrtbpv/is_anyone_using_pocketpal_ai_and_llm_farm_on/ | moldyjellybean | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrtbpv | false | null | t3_1hrtbpv | /r/LocalLLaMA/comments/1hrtbpv/is_anyone_using_pocketpal_ai_and_llm_farm_on/ | false | false | self | 0 | null |
Feedback needed: Live Competition Platform for Building & Training Language Models | 1 | [removed] | 2025-01-02T13:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hrtz5w/feedback_needed_live_competition_platform_for/ | IUCSWTETDFWTF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrtz5w | false | null | t3_1hrtz5w | /r/LocalLLaMA/comments/1hrtz5w/feedback_needed_live_competition_platform_for/ | false | false | self | 1 | null |
Is it currently possible to build a cheap but powerful pdf chatbot solution? | 4 | Hello everyone, I would start by saying that I am not a programmer unfortunately.
I want to build a Local and super powerful AI chatbots system where I can upload (i.e. store on a computer or local server) tons of pdf textbooks and ask any kind of questions I want (Particularly difficult ones to help me understand complex scientific problems etc.) and also generate connections automatically done by AI between different concepts explained on different files for a certain subject (Maths, Physics whatever!!!). This is currently possible but online, with OpenAI API key etc. (And relying on third-party tools. Afforai for example). Since I am planning to use it extensively and by uploading very large textbooks and resources (terabytes of knowledge), it will be super expensive to rely on AI keys and SaaS solutions. I am an individual user at the end, not a company!! IS there a SUITABLE SOLUTION FOR MY USE CASE? 😭😭 If yes, which one? What is required to build something like this (both hardware and software)? Any recurring costs?
I want to build separate "folders" or knowledge bases for different Subjects and have different chatbots for each folder. In other words, upload maths textbooks and create a chatbot as my "Maths teacher" in order to help me with maths based only on maths folder, another one for chemistry and so on.
Thank you so much! | 2025-01-02T13:51:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hru4ks/is_it_currently_possible_to_build_a_cheap_but/ | ahmedfarrag17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hru4ks | false | null | t3_1hru4ks | /r/LocalLLaMA/comments/1hru4ks/is_it_currently_possible_to_build_a_cheap_but/ | false | false | self | 4 | null |
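The usual pattern for this use case is local retrieval-augmented generation: extract and chunk the PDFs, embed the chunks, retrieve the most relevant ones per question, and hand them to a local model. A minimal sketch, assuming sentence-transformers for embeddings and a local Ollama model for generation; the chunk contents and model names are placeholders, and real PDF extraction (e.g. with pypdf) is omitted for brevity:

```python
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# In practice these chunks come from PDF extraction and text splitting.
chunks = [
    "Newton's second law states that force equals mass times acceleration.",
    "Entropy measures the number of microstates consistent with a macrostate.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def ask(question: str, k: int = 2) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec                      # cosine similarity (vectors are normalized)
    context = "\n".join(chunks[i] for i in np.argsort(-scores)[:k])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1", "prompt": prompt, "stream": False},
        timeout=300,
    )
    return resp.json()["response"]

print(ask("What does Newton's second law say?"))
```

Separate "folders" per subject then just mean separate chunk stores (one embedding index per subject) selected before answering.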
I've tried Granite 3.1 3b. It was very fast and very bad. | 45 | So I've tried Granite 3.1 3b MoE. To be very honest, I rarely see LLMs worse than IBM models; even Llama 3.2 1b was better at coding than Granite 3.1 3b, and 8b was not great either. I was thinking about using it for autocompletion, as it is very fast, but the quality of the code is terrible. It might be useful for very quick summaries though, as it gives me 30 tok/sec on CPU only.
What is your take on Granite? Any success using it? I personally feel like Gemma 2 2b, LLama 3.2 3b and Qwen 2.5 1.5/3b are the only decent tiny/small models around | 2025-01-02T13:57:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hru986/ive_tried_granite_31_3b_it_was_very_fast_and_very/ | AppearanceHeavy6724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hru986 | false | null | t3_1hru986 | /r/LocalLLaMA/comments/1hru986/ive_tried_granite_31_3b_it_was_very_fast_and_very/ | false | false | self | 45 | null |
I don't know what should I do | 1 | [removed] | 2025-01-02T14:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hrunat/i_dont_know_what_should_i_do/ | Flat-Lengthiness7141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrunat | false | null | t3_1hrunat | /r/LocalLLaMA/comments/1hrunat/i_dont_know_what_should_i_do/ | false | false | self | 1 | null |
Local LLama 3.3 doesn't use Vram do 3090ti's Gpu | 1 | [removed] | 2025-01-02T14:28:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hruwg8/local_llama_33_doesnt_use_vram_do_3090tis_gpu/ | AggravatingHat6061 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hruwg8 | false | null | t3_1hruwg8 | /r/LocalLLaMA/comments/1hruwg8/local_llama_33_doesnt_use_vram_do_3090tis_gpu/ | false | false | self | 1 | null |
Anyone use hyperbolic as third party API for deepseekv3 | 1 | [removed] | 2025-01-02T14:31:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hruygg/anyone_use_hyperbolic_as_third_party_api_for/ | shing3232 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hruygg | false | null | t3_1hruygg | /r/LocalLLaMA/comments/1hruygg/anyone_use_hyperbolic_as_third_party_api_for/ | false | false | self | 1 | null |
Feedback needed: Live Competition Platform for Building & Training Language Models | 1 | [removed] | 2025-01-02T14:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hrv2us/feedback_needed_live_competition_platform_for/ | IUCSWTETDFWTF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrv2us | false | null | t3_1hrv2us | /r/LocalLLaMA/comments/1hrv2us/feedback_needed_live_competition_platform_for/ | false | false | self | 1 | null |
Best-performing multilingual embedding model for RAG | 1 | [removed] | 2025-01-02T14:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hrv731/bestperforming_multilingual_embedding_model_for/ | fantaisiesaugrenue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrv731 | false | null | t3_1hrv731 | /r/LocalLLaMA/comments/1hrv731/bestperforming_multilingual_embedding_model_for/ | false | false | self | 1 | null |
Choosing Between Python WebSocket Libraries and FastAPI for Scalable, Containerized Projects. | 10 | Hi everyone,
I'm currently at a crossroads in selecting the optimal framework for my project and would greatly appreciate your insights.
**Project Overview**:
* **Scalability**: Anticipate multiple concurrent users utilising several generative AI models.
* **Containerization**: Plan to deploy using Docker for consistent environments and streamlined deployments for each model, to be hosted on the cloud or our servers.
* **Potential vLLM Integration**: Currently using Transformers and LlamaCpp; however, plans may involve transitioning to vLLM, TGI, or other frameworks.
**Options Under Consideration**:
1. **Python WebSocket Libraries**: Considering lightweight libraries like `websockets` for direct WebSocket management.
2. **FastAPI**: A modern framework that supports both REST APIs and WebSockets, built on ASGI for asynchronous operations.
I am currently developing two projects: one using Python WebSocket libraries and another using FastAPI for REST APIs. I recently discovered that FastAPI also supports WebSockets. My goal is to gradually learn the architecture and software development for AI models. It seems that transitioning to FastAPI might be beneficial due to its widespread adoption and also because it manages REST APIs and WebSocket. This would allow me to start new projects with FastAPI and potentially refactor existing ones.
I am uncertain about the performance implications, particularly concerning scalability and latency. Could anyone share their experiences or insights on this matter? Am I overlooking any critical factors or other frameworks, such as WebRTC or something else?
**To summarize**, I am seeking a solution that offers high-throughput operations, maintains low latency, is compatible with Docker, and provides straightforward scaling strategies for real applications | 2025-01-02T14:50:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hrvd0i/choosing_between_python_websocket_libraries_and/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrvd0i | false | null | t3_1hrvd0i | /r/LocalLLaMA/comments/1hrvd0i/choosing_between_python_websocket_libraries_and/ | false | false | self | 10 | null |
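Since FastAPI serves REST routes and WebSockets from the same ASGI application, one containerized service can expose both. A minimal sketch with the model call stubbed out; the route names and token stream are illustrative:

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

def generate_tokens(prompt: str):
    # Stand-in for a real model call (Transformers, llama.cpp, vLLM, ...).
    for token in ["Hello", ", ", "world", "!"]:
        yield token

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.websocket("/ws/generate")
async def ws_generate(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            prompt = await websocket.receive_text()
            for token in generate_tokens(prompt):
                await websocket.send_text(token)   # stream tokens back as they are produced
            await websocket.send_text("[DONE]")
    except WebSocketDisconnect:
        pass

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000 (and scale by running more replicas behind a load balancer)
```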
Achieved 62X faster inference times than Hugging Face with Monster Deploy | 1 | [removed] | 2025-01-02T15:17:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hrvyxs | false | null | t3_1hrvyxs | /r/LocalLLaMA/comments/1hrvyxs/achieved_62x_faster_inference_times_than_hugging/ | false | false | default | 1 | null |
Achieved 62X faster inference times than Hugging Face with Monster Deploy | 1 | [removed] | 2025-01-02T15:19:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hrw0qb/achieved_62x_faster_inference_times_than_hugging/ | stupidauthor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrw0qb | false | null | t3_1hrw0qb | /r/LocalLLaMA/comments/1hrw0qb/achieved_62x_faster_inference_times_than_hugging/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ixH7MjWJv3aT_mRE_imTJEtv95ewwkUYzqIZ-xlQ2kw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jqUhzkroGl6qsUU3PjZmFOcu8f-E5gCpe7J005sn90Y.jpg?width=108&crop=smart&auto=webp&s=cdb2dfba76b3ed55902ba842eec84097245f4901', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/jqUhzkroGl6qsUU3PjZmFOcu8f-E5gCpe7J005sn90Y.jpg?width=216&crop=smart&auto=webp&s=9b530ec80f52f6b530e250a3712bea8894a5fab7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/jqUhzkroGl6qsUU3PjZmFOcu8f-E5gCpe7J005sn90Y.jpg?width=320&crop=smart&auto=webp&s=eed426e8a6612afdeba358bffc81d829c34e95e9', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/jqUhzkroGl6qsUU3PjZmFOcu8f-E5gCpe7J005sn90Y.jpg?width=640&crop=smart&auto=webp&s=5906448ab765d3406b120df2b92cb6e143ffb806', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/jqUhzkroGl6qsUU3PjZmFOcu8f-E5gCpe7J005sn90Y.jpg?width=960&crop=smart&auto=webp&s=2554ca042ad9ee9cc91872a6b166393d002cbf77', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/jqUhzkroGl6qsUU3PjZmFOcu8f-E5gCpe7J005sn90Y.jpg?auto=webp&s=d126cdea9eae399c38a1dfa262a6e00a81931350', 'width': 1024}, 'variants': {}}]} |
State-of-the-art local Vision, TTS and STT? | 23 | Hi, what is the current SOTA for local img to text, text to speech and speech to text? I do not want to use corpo APIs, as this project is supposed to babysit me to decrease my distractibility by shouting at me when I do something that is not helping with my current goal (like doing taxes).
I have tried minicpm-v, which is decent, but still not good enough to interpret a screen. Are there vision models between 13 and 90b? I couldn't find any on ollama. Also, TTS is probably easy, but STT? What could run there? Is Whisper still the best for that? | 2025-01-02T15:41:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hrwio7/stateoftheart_local_vision_tts_and_stt/ | ComprehensiveBird317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrwio7 | false | null | t3_1hrwio7 | /r/LocalLLaMA/comments/1hrwio7/stateoftheart_local_vision_tts_and_stt/ | false | false | self | 23 | null |
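On the STT side, Whisper remains the common local default. A minimal sketch using the openai-whisper package (faster-whisper is a popular lower-latency alternative); the audio filename is a placeholder:

```python
import whisper

# "base" is small enough for CPU; larger checkpoints ("small", "medium") are more accurate.
model = whisper.load_model("base")
result = model.transcribe("meeting.wav")
print(result["text"])
```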
Merging models from two different base models? | 6 | Has anyone managed to merge two or more models that come from two different base models of the same family?
I'm looking at mergekit, and the ties example requires a "base_model:", and I'm unsure which one to use.
I'm trying to merge different fine-tunes of qwen2.5 family. I think that code and math variants are further pre-trained on their respective tasks, even before doing the instruct fine-tunes. So I'd like to play around with merging them to see if they work for my downstream tasks.
Has anyone done anything like this? Thanks! | 2025-01-02T15:55:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hrwu1p/merging_models_from_two_different_base_models/ | ResidentPositive4122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrwu1p | false | null | t3_1hrwu1p | /r/LocalLLaMA/comments/1hrwu1p/merging_models_from_two_different_base_models/ | false | false | self | 6 | null |
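For fine-tunes that share an ancestor, the `base_model` in a TIES merge is normally that shared base rather than either fine-tune (here, plain Qwen2.5-7B). A sketch that writes a config and launches the merge from Python; the exact model names, density/weight values, and CLI invocation are assumptions to double-check against the mergekit docs:

```python
import pathlib
import subprocess

config = """\
merge_method: ties
base_model: Qwen/Qwen2.5-7B
models:
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters: {density: 0.5, weight: 0.5}
  - model: Qwen/Qwen2.5-Coder-7B-Instruct
    parameters: {density: 0.5, weight: 0.5}
parameters:
  normalize: true
dtype: bfloat16
"""

pathlib.Path("ties.yml").write_text(config)
# mergekit's YAML entry point; add --cuda if a GPU is available.
subprocess.run(["mergekit-yaml", "ties.yml", "./qwen2.5-7b-ties-merge"], check=True)
```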
Budget is $30,000. What future-proof hardware (GPU cluster) can I buy to train and inference LLMs? Is it better to build it myself or purchase a complete package from websites like SuperMicro? | 95 | I know I can purchase a Mac Studio for $10000, but Macs aren't great at training models and inference is slow on them. I want to purchase a cluster of GPUs so I can train/finetune my models and mostly inference them. Being upgradeable in the future is very important for me. I understand that power consumption and cooling might be an issue, so I was wondering how I should go about building such a GPU cluster? | 2025-01-02T16:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hrx2qx/budget_is_30000_what_futureproof_hardware_gpu/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrx2qx | false | null | t3_1hrx2qx | /r/LocalLLaMA/comments/1hrx2qx/budget_is_30000_what_futureproof_hardware_gpu/ | false | false | self | 95 | null |
What other components for a 20k Euro 4x5090 Rig would you pick? | 1 | [removed] | 2025-01-02T16:13:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hrx9tj/what_other_components_for_a_20k_euro_4x5090_rig/ | PreatorAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrx9tj | false | null | t3_1hrx9tj | /r/LocalLLaMA/comments/1hrx9tj/what_other_components_for_a_20k_euro_4x5090_rig/ | false | false | self | 1 | null |
Is Deepseek v3 at 600 tok / s possible? | 12 | I have to get near instant speeds to a good local LLM for coding purposes.
Is there anything possible like this?
What if, say, I bought 4 tinygrad pros at $160k, which would give me 368GB of GPU RAM? | 2025-01-02T16:23:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hrxiaa/is_deepseek_v3_at_600_tok_s_possible/ | doctorjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrxiaa | false | null | t3_1hrxiaa | /r/LocalLLaMA/comments/1hrxiaa/is_deepseek_v3_at_600_tok_s_possible/ | false | false | self | 12 | null |
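A back-of-envelope check for a single decode stream, assuming DeepSeek-V3's roughly 37B activated parameters per token and FP8 weights; the aggregate-bandwidth figure is a placeholder to swap for whatever hardware is under consideration, and batching or speculative decoding would change the picture:

```python
# Rough, batch-size-1 estimate: decode speed is roughly memory-bandwidth bound,
# since the activated parameters must be read from memory for every token.
active_params = 37e9          # DeepSeek-V3 activated parameters per token
bytes_per_param = 1.0         # FP8 weights (assumption)
bytes_per_token = active_params * bytes_per_param

aggregate_bandwidth = 8e12    # placeholder: 8 TB/s of usable aggregate memory bandwidth

tokens_per_second = aggregate_bandwidth / bytes_per_token
print(f"~{tokens_per_second:.0f} tok/s upper bound per stream")   # ~216 tok/s with these numbers

required_bandwidth = 600 * bytes_per_token
print(f"600 tok/s would need ~{required_bandwidth / 1e12:.1f} TB/s effective bandwidth")
```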
Sonus-1 has just released? | 1 | [removed] | 2025-01-02T16:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hry33i/sonus1_has_just_released/ | Previous_Echo7758 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hry33i | false | null | t3_1hry33i | /r/LocalLLaMA/comments/1hry33i/sonus1_has_just_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'twQ1CN0TeK-pFcaA4PtM9J0H_U1cA3Fs6j-kFQkyB5Y', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/wOTLcbSsHuWQOXPNYmnXF3FzermcDlicjVEX2fPlNcs.jpg?width=108&crop=smart&auto=webp&s=501b3de146b4169afbc0d3605ff3dfffdac415e6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/wOTLcbSsHuWQOXPNYmnXF3FzermcDlicjVEX2fPlNcs.jpg?width=216&crop=smart&auto=webp&s=0f792dfed95be9f005fff25e8e136aebf148d460', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/wOTLcbSsHuWQOXPNYmnXF3FzermcDlicjVEX2fPlNcs.jpg?width=320&crop=smart&auto=webp&s=47cae9895743bb06bf7f3fa765b179e077188d05', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/wOTLcbSsHuWQOXPNYmnXF3FzermcDlicjVEX2fPlNcs.jpg?width=640&crop=smart&auto=webp&s=16dce700667b92b95db1705110f75225f988907b', 'width': 640}], 'source': {'height': 674, 'url': 'https://external-preview.redd.it/wOTLcbSsHuWQOXPNYmnXF3FzermcDlicjVEX2fPlNcs.jpg?auto=webp&s=7883f47a8f78c3985e36451ec78217f6406e4349', 'width': 674}, 'variants': {}}]} |
µLocalGLaDOS - offline Personality Core | 797 | 2025-01-02T17:01:31 | https://v.redd.it/j5uj9w183mae1 | Reddactor | /r/LocalLLaMA/comments/1hryfs6/µlocalglados_offline_personality_core/ | 1970-01-01T00:00:00 | 0 | {} | 1hryfs6 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/j5uj9w183mae1/DASHPlaylist.mpd?a=1738558898%2CYWQ2ZWEwYTRmMWEzYTRkOWY3MWNiYmRjNTQ0NzcwMTkwNTE0ZWIwYjhjYWFkZTY5YWNhOTFkYmE2NTE2NGMzNA%3D%3D&v=1&f=sd', 'duration': 105, 'fallback_url': 'https://v.redd.it/j5uj9w183mae1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/j5uj9w183mae1/HLSPlaylist.m3u8?a=1738558898%2CZDhlYzUzYjY0YjdlOTc3MjBhZmFhYmY5Y2FlYWZlMWFmM2JlYmEwOGQyNmY4YWFhNzQ4MGMxNzdiN2E2NThiNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j5uj9w183mae1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 848}} | t3_1hryfs6 | /r/LocalLLaMA/comments/1hryfs6/µlocalglados_offline_personality_core/ | false | false | 797 | {'enabled': False, 'images': [{'id': 'cGdpM3J2MTgzbWFlMVb9CkkpinIiM_BZjMpG2UgcP7Pom4IZ-6oHdSJucDFh', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/cGdpM3J2MTgzbWFlMVb9CkkpinIiM_BZjMpG2UgcP7Pom4IZ-6oHdSJucDFh.png?width=108&crop=smart&format=pjpg&auto=webp&s=f500bab9fdd73aef218c0e68bf7687103ebb4100', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/cGdpM3J2MTgzbWFlMVb9CkkpinIiM_BZjMpG2UgcP7Pom4IZ-6oHdSJucDFh.png?width=216&crop=smart&format=pjpg&auto=webp&s=69f93f3513216d3bfd127ef10ee4f00bc838e919', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/cGdpM3J2MTgzbWFlMVb9CkkpinIiM_BZjMpG2UgcP7Pom4IZ-6oHdSJucDFh.png?width=320&crop=smart&format=pjpg&auto=webp&s=f429c2888597674cbae44cc24d900062cd3f6e63', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/cGdpM3J2MTgzbWFlMVb9CkkpinIiM_BZjMpG2UgcP7Pom4IZ-6oHdSJucDFh.png?width=640&crop=smart&format=pjpg&auto=webp&s=551bbd07e29143231a0c502f4b49999bb6afc19a', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/cGdpM3J2MTgzbWFlMVb9CkkpinIiM_BZjMpG2UgcP7Pom4IZ-6oHdSJucDFh.png?format=pjpg&auto=webp&s=2357af667fb3335a0df1e60e37d11d6db954b2a5', 'width': 848}, 'variants': {}}]} |
||
llama deduces that all articles are relevant to my topics | 1 | [removed] | 2025-01-02T17:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hryk1l/llama_deduces_that_all_articles_are_relevant_to/ | LandMobileJellyfish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hryk1l | false | null | t3_1hryk1l | /r/LocalLLaMA/comments/1hryk1l/llama_deduces_that_all_articles_are_relevant_to/ | false | false | self | 1 | null |
Doing the thing: speculations about the next DBRX release? | 9 | DeepSeek V3 has gotten me thinking about large MOE models. As I was reading over [this post](https://old.reddit.com/r/LocalLLaMA/comments/1gprkxw/overview_of_the_largest_mixture_of_expert_models/), it struck me that we haven't seen anything from DBRX in a while. Any speculation about when we might see something? | 2025-01-02T17:11:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hryosb/doing_the_thing_speculations_about_the_next_dbrx/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hryosb | false | null | t3_1hryosb | /r/LocalLLaMA/comments/1hryosb/doing_the_thing_speculations_about_the_next_dbrx/ | false | false | self | 9 | null |
Is there any model for naturally translating text from English to Hebrew that can fit in 8 GB of VRAM? | 1 | [removed] | 2025-01-02T17:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hryqa1/is_there_any_model_for_natural_translating_text/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hryqa1 | false | null | t3_1hryqa1 | /r/LocalLLaMA/comments/1hryqa1/is_there_any_model_for_natural_translating_text/ | false | false | self | 1 | null |
Tell me about the RTX 8000 - 48GB is cheap right now | 5 | I'm seeing new RTX 8000's for $2400 CAD. 48GB VRAM, 2 slot blower, PCIe 3.
I know it's an older architecture, but this seems like a great price.
Anyone have experience running 4 of these in a monster workstation or home server? | 2025-01-02T17:17:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hrytcp/tell_me_about_the_rtx_8000_48gb_is_cheap_right_now/ | Thrumpwart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrytcp | false | null | t3_1hrytcp | /r/LocalLLaMA/comments/1hrytcp/tell_me_about_the_rtx_8000_48gb_is_cheap_right_now/ | false | false | self | 5 | null |
How to use Gemini AI to extract text from a folder full of images | 1 | [removed] | 2025-01-02T17:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hryuop/how_to_use_gemini_ai_to_extrach_text_from_folder/ | Careful_Thing622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hryuop | false | null | t3_1hryuop | /r/LocalLLaMA/comments/1hryuop/how_to_use_gemini_ai_to_extrach_text_from_folder/ | false | false | self | 1 | null |
This is the first time my local LLM that got the reference. | 103 | 2025-01-02T17:19:54 | hummingbird1346 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hryvtw | false | null | t3_1hryvtw | /r/LocalLLaMA/comments/1hryvtw/this_is_the_first_time_my_local_llm_that_got_the/ | false | false | 103 | {'enabled': True, 'images': [{'id': 'VmxwbPyYpTOf6cX4KuAdoGSavjp0DHs8nkWutAEldAc', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/gw79ikap6mae1.png?width=108&crop=smart&auto=webp&s=deaaf9dd41b3b38b884a0525f5d2a0af26625a13', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/gw79ikap6mae1.png?width=216&crop=smart&auto=webp&s=edba243ca8aade81df813de9ff86c77e9464f04e', 'width': 216}, {'height': 287, 'url': 'https://preview.redd.it/gw79ikap6mae1.png?width=320&crop=smart&auto=webp&s=a2ad2bafcb9eddff79b509381fa67be5ac74a92a', 'width': 320}], 'source': {'height': 527, 'url': 'https://preview.redd.it/gw79ikap6mae1.png?auto=webp&s=f6cd2a7647bd8b307b861548906bb59aa8703c7b', 'width': 587}, 'variants': {}}]} |
|||
Why does deepseek v3 insist that it is from OpenAI 😲? | 1 | 2025-01-02T17:22:08 | andrew-atef-dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hryxto | false | null | t3_1hryxto | /r/LocalLLaMA/comments/1hryxto/why_does_deepseek_v3_insist_that_it_is_from_openai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'r9JcOT6Ijtba0aOzhue_4u-WFrF-jw2iPIWqSH74WH0', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/p9bp2wws6mae1.png?width=108&crop=smart&auto=webp&s=a53af826e9f4d9c6ce85effe4b7478b3c1eacece', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/p9bp2wws6mae1.png?width=216&crop=smart&auto=webp&s=be99d330fb59333528c20df2121d64b4d6dff4eb', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/p9bp2wws6mae1.png?width=320&crop=smart&auto=webp&s=19e9bde90a2a0300e6c1f3c193ccc3ccb49697c6', 'width': 320}, {'height': 351, 'url': 'https://preview.redd.it/p9bp2wws6mae1.png?width=640&crop=smart&auto=webp&s=404856bce5d5cc73e5613b73bb5638b284b9f7d8', 'width': 640}, {'height': 527, 'url': 'https://preview.redd.it/p9bp2wws6mae1.png?width=960&crop=smart&auto=webp&s=fc55df4e999310338baced0363da4e0fdc9b5438', 'width': 960}, {'height': 593, 'url': 'https://preview.redd.it/p9bp2wws6mae1.png?width=1080&crop=smart&auto=webp&s=cff3f58820a888da7d4e9d2111c7da9a7c759f40', 'width': 1080}], 'source': {'height': 876, 'url': 'https://preview.redd.it/p9bp2wws6mae1.png?auto=webp&s=1c99bc03335a9f25b38ee75c17b16b9b95669b9d', 'width': 1595}, 'variants': {}}]} |
|||
Bye RAG Servers: I made a vector db directly in the browser using WebAssembly, IndexedDB and Transformers.js so we don't have to set up servers for doing RAG anymore | 108 | I was setting up Docker instances and servers every time I wanted to do some simple agentic RAG work. Soon I realized there is wasted potential in the database built into browsers like Chrome, named IndexedDB.
Pairing it with Transformers.js over WebAssembly, we can run our own embedding models quickly and directly in the browser, storing all the vector embeddings locally for cosine-similarity search.
I wrapped all of that in a library called EntityDB. I hope you like it.
For anyone interested in the project, it is in the github repo for EntityDB. Happy hacking year
[https://github.com/babycommando/entity-db](https://github.com/babycommando/entity-db) | 2025-01-02T17:22:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hryy21/bye_rag_servers_i_made_a_vector_db_directly_in/ | babydriver808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hryy21 | false | null | t3_1hryy21 | /r/LocalLLaMA/comments/1hryy21/bye_rag_servers_i_made_a_vector_db_directly_in/ | false | false | self | 108 | {'enabled': False, 'images': [{'id': 'z2CeukTLfWwfEmGN7_yt3GcgYiuOC9V9mi1mivkcQJ0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=108&crop=smart&auto=webp&s=3564e65ca619be503d63d87a8431d5ac79df01d4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=216&crop=smart&auto=webp&s=7d46d0b1a5c70b52b0e284a4ad0a7633e22a4f21', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=320&crop=smart&auto=webp&s=88e288df41ba62d4591e02225282b5e970484046', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=640&crop=smart&auto=webp&s=8e85c424d9a349610c35543d9399e42d89416927', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=960&crop=smart&auto=webp&s=b1c56c9d74f5f1a9d6c2f033b6e57854792f039d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?width=1080&crop=smart&auto=webp&s=416ba5162f3718061b177f55a5abea9f1812c765', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/UrTz5gAMe2cu_V5Odmpv5jsnKAbJygrLxwoy9CFFEFc.jpg?auto=webp&s=a53988222b666cb8a3bb6bcbf596f2b26686a40f', 'width': 1200}, 'variants': {}}]} |
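To make the embed, store, and cosine-search loop concrete: EntityDB itself is JavaScript (Transformers.js for embeddings, IndexedDB for storage), so the Python below is only an illustrative sketch of the same flow. The model name, the in-memory `db` list standing in for IndexedDB, and the helper functions are assumptions, not EntityDB's actual API.

```python
# Illustrative sketch of the embed -> store -> cosine-search loop.
# EntityDB itself is JavaScript (Transformers.js + IndexedDB); the model name,
# the in-memory `db` list, and these helpers are assumptions, not its real API.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
db = []  # stand-in for IndexedDB records: {"text": ..., "vector": ...}

def insert(text: str) -> None:
    vec = model.encode(text, normalize_embeddings=True)
    db.append({"text": text, "vector": vec})

def query(text: str, k: int = 3):
    q = model.encode(text, normalize_embeddings=True)
    # With normalized vectors, cosine similarity reduces to a dot product.
    scored = [(float(np.dot(q, rec["vector"])), rec["text"]) for rec in db]
    return sorted(scored, reverse=True)[:k]

if __name__ == "__main__":
    for doc in ["llamas are camelids",
                "RAG retrieves context for an LLM",
                "IndexedDB is a key-value store built into the browser"]:
        insert(doc)
    print(query("how does retrieval-augmented generation work?", k=2))
```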
not sure what to get for faster inference | 1 | [removed] | 2025-01-02T17:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hrza4v/not_sure_what_to_get_for_faster_inference/ | Tall-Maintenance-198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrza4v | false | null | t3_1hrza4v | /r/LocalLLaMA/comments/1hrza4v/not_sure_what_to_get_for_faster_inference/ | false | false | self | 1 | null |
Email management solution? | 1 | I've asked this question before but didn't get any responses.
I'd like to take better control of my email using local LLMs. I'm looking for a good small fine-tuned model and a framework or prompts that will make it easier to automatically organize my emails, detect and delete spam, auto-respond, and answer questions using the contents of all my emails. | 2025-01-02T17:44:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hrzhga/email_management_solution/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrzhga | false | null | t3_1hrzhga | /r/LocalLLaMA/comments/1hrzhga/email_management_solution/ | false | false | self | 1 | null |
Need Suggestions for Building a Local PDF-Based Assistant with Llama 3.2-Vision | 1 | [removed] | 2025-01-02T17:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hrzi3i/need_suggestions_for_building_a_local_pdfbased/ | No_Hovercraft_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrzi3i | false | null | t3_1hrzi3i | /r/LocalLLaMA/comments/1hrzi3i/need_suggestions_for_building_a_local_pdfbased/ | false | false | self | 1 | null |
Need Suggestions for Building a Local PDF-Based Assistant with Llama 3.2-Vision | 1 | [removed] | 2025-01-02T17:46:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hrzj6g/need_suggestions_for_building_a_local_pdfbased/ | No_Hovercraft_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrzj6g | false | null | t3_1hrzj6g | /r/LocalLLaMA/comments/1hrzj6g/need_suggestions_for_building_a_local_pdfbased/ | false | false | self | 1 | null |
Need Suggestions for Building a Local PDF-Based Assistant with Llama 3.2-Vision
Post Content: | 1 | [removed] | 2025-01-02T17:55:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hrzrb2/need_suggestions_for_building_a_local_pdfbased/ | No_Hovercraft_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrzrb2 | false | null | t3_1hrzrb2 | /r/LocalLLaMA/comments/1hrzrb2/need_suggestions_for_building_a_local_pdfbased/ | false | false | self | 1 | null |
What is the best open source text-to-image models out there? | 1 | [deleted] | 2025-01-02T17:56:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hrzrrp | false | null | t3_1hrzrrp | /r/LocalLLaMA/comments/1hrzrrp/what_is_the_best_open_source_texttoimage_models/ | false | false | default | 1 | null |
||
Text-to-image models | 0 | What are the best open source text-to-image models out there? | 2025-01-02T17:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hrzse4/texttoimage_models/ | Glad-Communication60 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrzse4 | false | null | t3_1hrzse4 | /r/LocalLLaMA/comments/1hrzse4/texttoimage_models/ | false | false | self | 0 | null |
Need Suggestions for Building a Local PDF-Based Assistant with Llama 3.2-Vision | 1 | [removed] | 2025-01-02T18:02:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hrzxmv/need_suggestions_for_building_a_local_pdfbased/ | ExreiN19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hrzxmv | false | null | t3_1hrzxmv | /r/LocalLLaMA/comments/1hrzxmv/need_suggestions_for_building_a_local_pdfbased/ | false | false | self | 1 | null |
Multimodal LLMs and 3D data | 5 | Hi, just curious, is it possible to get existing multimodal LLMs to parse higher dimensional data, just like they can currently parse 2D data?
The idea would be to take a 3D image made up of voxels or whatever, flatten it into a vector, and then train the model on a data set of such images + text descriptions, similarly to how things were done with 2D.
One could encode the 3D image as a set of 2D images corresponding to each slice, so it might not be much different from the way the model is already trained. There are already LLMs that can parse video, so perhaps you could send the data into the model that way, passing a stack of parallel 2D subplanes of the 3D voxel image as the "frames" of a video. One could then fine-tune it to interpret such data as a single 3D image and train it on a bunch of rotations, etc., so it can tell that it's the same object, just rotated, if you cross-section it differently.
Has anyone tried this kind of thing? In principle this ought to be possible for 4D, 5D, etc data as well. | 2025-01-02T18:12:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hs06r5/multimodal_llms_and_3d_data/ | RiemannZetaFunction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs06r5 | false | null | t3_1hs06r5 | /r/LocalLLaMA/comments/1hs06r5/multimodal_llms_and_3d_data/ | false | false | self | 5 | null |
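To make the slicing idea concrete, here is a rough numpy sketch of treating a voxel grid as a stack of 2D slices; the array shapes and the pseudo-video framing are illustrative assumptions, not a tested recipe.

```python
# Rough numpy sketch of the "3D volume as a stack of 2D slices" idea.
# Shapes and the pseudo-video framing are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((32, 64, 64))  # (depth, height, width) voxel grid

# Naive flattening of the whole volume into a single vector.
flat = volume.reshape(-1)  # shape (32*64*64,)

# Each axial slice becomes one 2D "frame" of a pseudo-video.
frames = [volume[d] for d in range(volume.shape[0])]  # 32 frames of (64, 64)

# Slice along a different axis (a "rotation") for view-invariance training data.
rotated = np.transpose(volume, (2, 0, 1))  # sliced along the original width axis
rotated_frames = [rotated[i] for i in range(rotated.shape[0])]

print(flat.shape, len(frames), frames[0].shape, len(rotated_frames))
```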
Google Veo 2 and How it works ? (Informative) | 1 | [removed] | 2025-01-02T18:16:00 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hs09it | false | {'oembed': {'author_name': 'TheAILabsCanada', 'author_url': 'https://www.youtube.com/@TheAILabsCanada', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/jvglR86_IdM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How Google Veo 2 Works ? | Veo 2 is future of AI | SORA2 Is a disaster #veo2 #openAI #movies"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/jvglR86_IdM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'How Google Veo 2 Works ? | Veo 2 is future of AI | SORA2 Is a disaster #veo2 #openAI #movies', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hs09it | /r/LocalLLaMA/comments/1hs09it/google_veo_2_and_how_it_works_informative/ | false | false | default | 1 | null |
||
Phi-4 Count the number of R's :( | 0 | 2025-01-02T18:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hs0d0n/phi4_count_the_number_of_rs/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs0d0n | false | null | t3_1hs0d0n | /r/LocalLLaMA/comments/1hs0d0n/phi4_count_the_number_of_rs/ | false | false | 0 | null |
||
What's the best uncensored model that isn't too... horny? | 245 | I'm currently using Magnum-v4-Cydonia because I feel like it writes well and follows formatting correctly, but it is SO horny, lol. I've written my characters to be as bland and non-sexual as possible but it will go straight to hornsville within a few exchanges 80% of the time.
I just want a model that comes across as more... natural and neutral but isn't censored. | 2025-01-02T18:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hs0q4e/whats_the_best_uncensored_model_that_isnt_too/ | PangurBanTheCat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs0q4e | false | null | t3_1hs0q4e | /r/LocalLLaMA/comments/1hs0q4e/whats_the_best_uncensored_model_that_isnt_too/ | false | false | self | 245 | null |
FastMCP – a feature complete MCP framework (logging, error handling, SSE, notifications, typed server events, completions, roots, sampling) | 12 | 2025-01-02T18:38:09 | https://github.com/punkpeye/fastmcp | punkpeye | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hs0tbr | false | null | t3_1hs0tbr | /r/LocalLLaMA/comments/1hs0tbr/fastmcp_a_feature_complete_mcp_framework_logging/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'SIDp1Oh318RItbFkX66rwBuG3kjpleux46AYooDCCi4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1lggpCyG3vWaeT156zD5lYoSQ-aj_1MQbw1fuiAL8lA.jpg?width=108&crop=smart&auto=webp&s=3cd844411056dd426dcb3326e74773f038677bef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1lggpCyG3vWaeT156zD5lYoSQ-aj_1MQbw1fuiAL8lA.jpg?width=216&crop=smart&auto=webp&s=9412dced2dec6c9d59a0742e1ed8644176cab381', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1lggpCyG3vWaeT156zD5lYoSQ-aj_1MQbw1fuiAL8lA.jpg?width=320&crop=smart&auto=webp&s=c8aaa147c08f768a9d2363633b5b73c0a9050531', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1lggpCyG3vWaeT156zD5lYoSQ-aj_1MQbw1fuiAL8lA.jpg?width=640&crop=smart&auto=webp&s=e29ca18026105937dcc33d3a2fa748444b3dae7b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1lggpCyG3vWaeT156zD5lYoSQ-aj_1MQbw1fuiAL8lA.jpg?width=960&crop=smart&auto=webp&s=db21d7e3a93d0e86d2edfa27b53e774bd2fa5e2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1lggpCyG3vWaeT156zD5lYoSQ-aj_1MQbw1fuiAL8lA.jpg?width=1080&crop=smart&auto=webp&s=ccb901158eb555489585370d1eabc71ff22a4b97', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1lggpCyG3vWaeT156zD5lYoSQ-aj_1MQbw1fuiAL8lA.jpg?auto=webp&s=ff9561e31a1ce843c1960d76c4961c1580549438', 'width': 1200}, 'variants': {}}]} |
||
Easily make a clone of NotebookLM! | 0 | You can now easily create a clone of NotebookLM using PlayHT's LDM (Large Dialogue Model)!
Just enter the speaker conversation and watch the model churn out the podcast!
The model is now available on fal.ai!
https://preview.redd.it/65cali6ommae1.png?width=1258&format=png&auto=webp&s=9e96c100857d0c0f1f372578beecb9a2af0bb746
| 2025-01-02T18:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hs13gc/easily_make_a_clone_of_notebooklm/ | Dart7989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs13gc | false | null | t3_1hs13gc | /r/LocalLLaMA/comments/1hs13gc/easily_make_a_clone_of_notebooklm/ | false | false | 0 | null |
|
🐺🐦⬛ LLM Comparison/Test: DeepSeek-V3, QVQ-72B-Preview, Falcon3 10B, Llama 3.3 70B, Nemotron 70B in my updated MMLU-Pro CS benchmark | 173 | 2025-01-02T19:13:46 | https://huggingface.co/blog/wolfram/llm-comparison-test-2025-01-02 | WolframRavenwolf | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hs1oqy | false | null | t3_1hs1oqy | /r/LocalLLaMA/comments/1hs1oqy/llm_comparisontest_deepseekv3_qvq72bpreview/ | false | false | 173 | {'enabled': False, 'images': [{'id': '7QPEzTAqX5cjwSk5RjB7PghgfmYOFx-3atu8E-qUPaA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zQcFpuvGsouWY1eHUAAQKo-HpiNXMufIfDLFNbj6wpw.jpg?width=108&crop=smart&auto=webp&s=337eeba80217472ae9e018de77f2565a0eb5699b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zQcFpuvGsouWY1eHUAAQKo-HpiNXMufIfDLFNbj6wpw.jpg?width=216&crop=smart&auto=webp&s=0b39e0ec73a952221fbbfb856509d4cee14bfad9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zQcFpuvGsouWY1eHUAAQKo-HpiNXMufIfDLFNbj6wpw.jpg?width=320&crop=smart&auto=webp&s=09ab097aeb75f3172b0014d8f60c112a9cee3b89', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zQcFpuvGsouWY1eHUAAQKo-HpiNXMufIfDLFNbj6wpw.jpg?width=640&crop=smart&auto=webp&s=8d2bf2cb12f96e6ea69be7a444fd2427480ec3a0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zQcFpuvGsouWY1eHUAAQKo-HpiNXMufIfDLFNbj6wpw.jpg?width=960&crop=smart&auto=webp&s=34c19656529289fb30d4cfc8b9fb04d26bdce788', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zQcFpuvGsouWY1eHUAAQKo-HpiNXMufIfDLFNbj6wpw.jpg?width=1080&crop=smart&auto=webp&s=bbda4dcfc76756f712fa90f61f3d822e636227bb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zQcFpuvGsouWY1eHUAAQKo-HpiNXMufIfDLFNbj6wpw.jpg?auto=webp&s=25df759f40fcfb0951ed91c3326daa5445bb1a9a', 'width': 1200}, 'variants': {}}]} |
||
Introducing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset for Reasoning Model Post-Training | 1 | [removed] | 2025-01-02T19:27:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hs20yl/introducing_longtalkcot_v01_a_very_long/ | transformer_ML | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs20yl | false | null | t3_1hs20yl | /r/LocalLLaMA/comments/1hs20yl/introducing_longtalkcot_v01_a_very_long/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200}, 'variants': {}}]} |
I used AI agents to see if I could write an entire book | AutoGen + Mistral-Nemo | 22 | 2025-01-02T19:30:03 | https://www.youtube.com/watch?v=EVrL6Qg7e9A | maddogawl | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hs23gv | false | {'oembed': {'author_name': 'GosuCoder', 'author_url': 'https://www.youtube.com/@GosuCoder', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/EVrL6Qg7e9A?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="I Made an AI Write an Entire Book | Using AI Agents and Local LLMs"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/EVrL6Qg7e9A/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'I Made an AI Write an Entire Book | Using AI Agents and Local LLMs', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hs23gv | /r/LocalLLaMA/comments/1hs23gv/i_used_ai_agents_to_see_if_i_could_write_an/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'zJfTzW9uPov3JLCINr7Lh56Tw8E70DN7OdTralEa0SQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/X6KABWWrRzeSpKxYjxFaaKcjUG78Bvw8XZW7sx-1Gqw.jpg?width=108&crop=smart&auto=webp&s=aad4173d2f51389be74667291ea8adb717fb6f50', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/X6KABWWrRzeSpKxYjxFaaKcjUG78Bvw8XZW7sx-1Gqw.jpg?width=216&crop=smart&auto=webp&s=26ed09c056ce0d43c967146fd0d66e2a5ee8b40e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/X6KABWWrRzeSpKxYjxFaaKcjUG78Bvw8XZW7sx-1Gqw.jpg?width=320&crop=smart&auto=webp&s=18e1de3a8da6a373f603b9459d11b2f54585e218', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/X6KABWWrRzeSpKxYjxFaaKcjUG78Bvw8XZW7sx-1Gqw.jpg?auto=webp&s=7f884c0e8aa7a165f6560ac6a5bf23e671e29054', 'width': 480}, 'variants': {}}]} |
||
If I'm out of space in my server chassis, is there something like a dedicated GPU chassis? | 3 | Well, I'm out of space in my server chassis, as the title says.
But I want to get into local LLMs, so I'm wondering if there's anything like an external GPU chassis, similar to an external GPU for laptops, where you can connect it using USB-C or some other high-bandwidth cable.
I'm talking about a chassis where I could put multiple GPUs and connect the entire thing to my server, which would sit underneath it. Is that a thing/possible? | 2025-01-02T19:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hs2q3i/if_im_out_of_space_in_my_server_chassis_is_there/ | TomerHorowitz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs2q3i | false | null | t3_1hs2q3i | /r/LocalLLaMA/comments/1hs2q3i/if_im_out_of_space_in_my_server_chassis_is_there/ | false | false | self | 3 | null |
NSFW 70B API | 0 | Is there a service where I can call a NSFW/uncensored 70B model besides RunPod? No shade on RunPod, I just want something a little more streamlined. | 2025-01-02T20:13:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hs36em/nsfw_70b_api/ | renegadellama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs36em | false | null | t3_1hs36em | /r/LocalLLaMA/comments/1hs36em/nsfw_70b_api/ | false | false | nsfw | 0 | null |
Why bother with RWKV/Mamba instead of decoder transformers? | 7 | The classic transformers are quadratic both in space and time complexity. However, since most of the models now use FA1/2/3 (which is linear in space) and during inference we are using kv caching (which is linear in time), during inference decoder transformers are basically linear.
During training, decoder transformers are obviously quadratic in time, since there is no kv caching, but the context length stays reasonable, since they are mostly trained on shorter sequences. To increase the context size, they are being rope scaled and finetuned on longer sequences, so the time complexity stays low with lower n for most of the training time.
So, why bother with alternative architectures if transformers are already proven and linear? Why are people deriving linear approximations of attention, which perform worse? What is the catch, and what am I missing?
P.S. I am talking about practicality, not research — obviously, more research is better. | 2025-01-02T20:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hs3966/why_bother_with_rwkvmamba_instead_of_decoder/ | netikas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs3966 | false | null | t3_1hs3966 | /r/LocalLLaMA/comments/1hs3966/why_bother_with_rwkvmamba_instead_of_decoder/ | false | false | self | 7 | null |
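One way to see the trade-off being discussed: with a KV cache, each decode step attends over all previously cached keys, so the cache memory grows linearly with context while each step costs work proportional to the current length. A toy single-head sketch, where the dimensions and the identity "projections" are made up purely for illustration:

```python
# Toy single-head attention decode step with a KV cache (illustrative only).
# Per step the work is O(n * d) over the n cached keys, and cache memory is
# linear in n; summed over n generated tokens the attention work is O(n^2 * d).
import numpy as np

d = 16                     # head dimension (made up for the sketch)
k_cache, v_cache = [], []  # grows by one entry per generated token

def decode_step(x: np.ndarray) -> np.ndarray:
    """x: (d,) hidden state of the newest token; returns its attention output."""
    q = k = v = x          # identity "projections"; real models use learned W_q/k/v
    k_cache.append(k)
    v_cache.append(v)
    K, V = np.stack(k_cache), np.stack(v_cache)   # (n, d)
    scores = K @ q / np.sqrt(d)                   # (n,) -- linear in current length n
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                            # (d,)

rng = np.random.default_rng(0)
out = None
for _ in range(8):         # "generate" 8 tokens
    out = decode_step(rng.standard_normal(d))
print(out.shape, len(k_cache))  # (16,) 8
```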
What model can I run locally that can read my whole codebase? | 1 | My PC has a 5800X, 32GB of RAM and a 3070. I don't care about inference speed; I want it to give me meaningful solutions, so something where I can ask a question, go scroll my phone for half an hour and then come back to it. | 2025-01-02T20:23:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hs3fcp/what_model_i_can_run_locally_that_can_read_my/ | Theboyscampus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs3fcp | false | null | t3_1hs3fcp | /r/LocalLLaMA/comments/1hs3fcp/what_model_i_can_run_locally_that_can_read_my/ | false | false | self | 1 | null |
NEED HELP IN UI AND BACKEND, READY TO PAY BETWEEN 500-1000 | 1 | [removed] | 2025-01-02T20:24:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hs3fks/need_help_in_ui_and_backend_ready_to_pay_between/ | Ill-Midnight9539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs3fks | false | null | t3_1hs3fks | /r/LocalLLaMA/comments/1hs3fks/need_help_in_ui_and_backend_ready_to_pay_between/ | false | false | self | 1 | null |
I used local LLMs and local image generators to illustrate the first published Conan story: The Phoenix on the Sword | 2 | 2025-01-02T20:44:55 | https://brianheming.substack.com/p/illustrated-conan-adventures-the | RobertTetris | brianheming.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1hs3xv4 | false | null | t3_1hs3xv4 | /r/LocalLLaMA/comments/1hs3xv4/i_used_local_llms_and_local_image_generators_to/ | false | false | spoiler | 2 | {'enabled': False, 'images': [{'id': 's9_eBByoBWj-fuZ2pgKMeA89w3sQ_zH_ELtMUcwsuoc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=108&crop=smart&auto=webp&s=d27a6ea679260fa0afba4d41f136fcf2d6b5879d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=216&crop=smart&auto=webp&s=5e6c95c2150c498489f392c9b2b340b760bb78c3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=320&crop=smart&auto=webp&s=7e6164b1610f4cc8a5333e7c07cc255af2047d48', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=640&crop=smart&auto=webp&s=f69555857fe2a5cbba89c755916243d4aa1ed422', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=960&crop=smart&auto=webp&s=5fe68f4a07c8802e65e2e672f03c20ba38ca8f03', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=1080&crop=smart&auto=webp&s=2dfcd6774196e7d886fcc2efd68bbf4b4c0cb66e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?auto=webp&s=f1c1bb63f4f2dffd5cdbe0b6fdcc584252282fa2', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=4b4aa35c83527f55655d75e35f1dc9497ab0b8e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=fd94f79c255ca372201c97c11c464f16ad047c85', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=d81ce8842d32201a18c95b3cc3f4697a07be76cb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=30b8a18a4e617f0b05277241916ab37d9b953270', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=7b7c3fb758979d96f05ca16b2c2bf72bafd0d781', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=b3b952a24ce1a5779737aabf29e254eea0d8ba15', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UyjNsS3D6ngXJ1fu_Xqa6MalaNFE05iWeMtIY7l6W8A.jpg?blur=40&format=pjpg&auto=webp&s=4c70ffb0322fcb1f42a11c33d9244c1c3397a279', 'width': 1200}}}}]} |
|
Did anyone manage to run autocomplete on NPU? | 12 | Recently Intel published [various tools](https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/CPP_Examples/README.md) to run LLMs on the NPU (the last two generations of Intel chips and Intel Arc discrete cards have a neural processing unit). Basically one has to convert the models to run on the NPU via a suitable quantization, and then run an inference engine. Since I bought this laptop, I had never seen the NPU at any load other than 0%, both on Windows and Linux, so it was a good occasion to heat it up a bit.
I was expecting lacklustre performance due to the limited memory bandwidth of the NPU. Add to this that the support is at a very early stage, as is visible from the ipex-llm[npu] dependencies, which are all pretty old. (Ironically, the only package used in its latest version is '[deprecated](https://anaconda.org/conda-forge/deprecated/)'.) Instead, I was surprised by the result. Qwen 7B ran surprisingly fast. Low latency was an expected feature of the NPU, but the token count was fair (there is no official bench tool), and mostly it runs on very low energy. I have an Intel 258V processor in a thin laptop working mostly on passive cooling, and, as opposed to the GPU models, the fan never kicked in. Accuracy seemed fair. I told myself that the NPU can be an awesome way to run autocomplete models.
There are quite a few setbacks, however. It only runs on Windows, with MSVC. There are a limited number of verified models (but mostly any model works). And there is no server to run it in Continue. TBH I had decided to implement a server, just patching their cpp file, but I gave up when I saw how much pain it would be just to build all the dependencies on Windows.
I was wondering if anyone managed to run model on NPU in Continue or similar software. In any case, it is some very welcome development with many possible applications. | 2025-01-02T21:02:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hs4d68/did_anyone_manage_to_run_autocomplete_on_npu/ | HairyAd9854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs4d68 | false | null | t3_1hs4d68 | /r/LocalLLaMA/comments/1hs4d68/did_anyone_manage_to_run_autocomplete_on_npu/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'TKhUPZbeciRBilZ773_uzw-vCeeOLe9H4NmkhnVNmwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KQWQdUOF7j_8kbHzMMrjlsvNn8vGOamrpJsgnmdSq3o.jpg?width=108&crop=smart&auto=webp&s=d018a5970cc2f11c43733fa94651fbaa22d5bcdb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KQWQdUOF7j_8kbHzMMrjlsvNn8vGOamrpJsgnmdSq3o.jpg?width=216&crop=smart&auto=webp&s=b3e755ef45b6730676d086f6a499ccf2cc97474f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KQWQdUOF7j_8kbHzMMrjlsvNn8vGOamrpJsgnmdSq3o.jpg?width=320&crop=smart&auto=webp&s=e6cce853fd630dfcb242366e30d39ac4ab665348', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KQWQdUOF7j_8kbHzMMrjlsvNn8vGOamrpJsgnmdSq3o.jpg?width=640&crop=smart&auto=webp&s=6039e0591ac076c337b12a8e353a70a4d1129b08', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KQWQdUOF7j_8kbHzMMrjlsvNn8vGOamrpJsgnmdSq3o.jpg?width=960&crop=smart&auto=webp&s=21adf5ad29f2d29b92312702f69cecf52cc2b717', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KQWQdUOF7j_8kbHzMMrjlsvNn8vGOamrpJsgnmdSq3o.jpg?width=1080&crop=smart&auto=webp&s=8490483b98507061f5516d0a5a5844489c1a8ff8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KQWQdUOF7j_8kbHzMMrjlsvNn8vGOamrpJsgnmdSq3o.jpg?auto=webp&s=0246a824afaa958578170e0366123bb4f4c9f2d4', 'width': 1200}, 'variants': {}}]} |
Question about SLI | 2 | Hello all!
First post on here, but I’ve been lurking for a while. Wondering if I would need an SLI-compatible motherboard for my dual 3090 build. I plan on just interacting with 70B models. | 2025-01-02T21:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hs4i8l/question_about_sli/ | DersWasTaken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs4i8l | false | null | t3_1hs4i8l | /r/LocalLLaMA/comments/1hs4i8l/question_about_sli/ | false | false | self | 2 | null |
Running models on Lenovo Legion Go | 1 | Hi community,
I want to buy a Lenovo Legion Go due to its versatility (gaming device, tablet and x86 device), but it would also be pretty cool to use it to continue with my AI projects. Does anyone have one of these devices with Linux installed, and what was the performance when running small models (16GB of RAM is not so much these days)? | 2025-01-02T21:17:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hs4q65/running_models_on_lenovo_legion_go/ | danigoncalves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs4q65 | false | null | t3_1hs4q65 | /r/LocalLLaMA/comments/1hs4q65/running_models_on_lenovo_legion_go/ | false | false | self | 1 | null |
Table Search? | 1 | [removed] | 2025-01-02T21:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hs4qmo/table_search/ | Ill_Recipe7620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs4qmo | false | null | t3_1hs4qmo | /r/LocalLLaMA/comments/1hs4qmo/table_search/ | false | false | self | 1 | null |
Llamafile for offline LLM won't run on each reboot on my PC | 2 | After each reboot, when I try to run my TinyLlama model, I get:
run-detectors: unable to find an interpreter for ./TinyLlama-1.1B-Chat-v1.0.Q8_0.llamafile
I have to run these commands on my Ubuntu 22.04:
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt\_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt\_misc/register"
What's even crazier is that the first time I installed the model, everything went fine, even though at the next reboot the "ape" interpreter had to be installed by typing this command:
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
And later all the previous commands.
Is there a way to set up the ape interpreter to work with llamafile automatically, without writing to those files each time?
A way for it to be automatically recognized by my Ubuntu system, without having to set things up again, so it runs like a regular file as it did the first time I installed it.
Not sure if this was the right place to post, if not, I'd appreciate if you tell me | 2025-01-02T21:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hs4rng/llamafile_for_offline_llm_wont_run_on_each_reboot/ | EerieKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs4rng | false | null | t3_1hs4rng | /r/LocalLLaMA/comments/1hs4rng/llamafile_for_offline_llm_wont_run_on_each_reboot/ | false | false | self | 2 | null |
Claude 3.5 Sonnet Alternative for Roleplaying | 1 | [removed] | 2025-01-02T21:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hs5j4o/claude_35_sonnet_alternative_for_roleplaying/ | Sufficient_Count3889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs5j4o | false | null | t3_1hs5j4o | /r/LocalLLaMA/comments/1hs5j4o/claude_35_sonnet_alternative_for_roleplaying/ | false | false | self | 1 | null |
Has anyone successfully install the TTS Fish Speech? I'm struggling a bit. I think it is just a dumb mistake I made. Apologies if this is not the right sub to ask for help but I couldn't find another appropriate sub. | 1 | [removed] | 2025-01-02T22:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hs5vou/has_anyone_successfully_install_the_tts_fish/ | Tezozomoctli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs5vou | false | null | t3_1hs5vou | /r/LocalLLaMA/comments/1hs5vou/has_anyone_successfully_install_the_tts_fish/ | false | false | self | 1 | null |
Local LLM in LM studio responds with placeholder variables | 2 | This is probably and obvious one but I'm querying ollama from postman and it's responding with place holders like {a} and {b} in place of subjects. See the return below
''' "messages": [
{
"role": "system",
"content": "You are a helpful jokester."
},
{
"role": "user",
"content": "Tell me a joke."
},
{
"role": "assistant",
"content": "{\"joke\":\"{a} Why was {b} unable to find a needle in the haystack?\"}"
},
{
"role": "user",
"content": "Please finish the joke."
}
],
''' | 2025-01-02T22:25:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hs6duo/local_llm_in_lm_studio_responds_with_placeholder/ | beeshavekneestoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs6duo | false | null | t3_1hs6duo | /r/LocalLLaMA/comments/1hs6duo/local_llm_in_lm_studio_responds_with_placeholder/ | false | false | self | 2 | null |
Killed by LLM – I collected data on AI benchmarks we thought would last years | 105 | For my year-end I collected data on how quickly AI benchmarks are becoming obsolete: https://r0bk.github.io/killedbyllm/.
It's interesting to look back:
2023: GPT-4 was truly something new
- It didn't just beat SOTA scores, it completely saturated benchmarks
- It was the first time humanity created something that could beat the Turing test
- It created a clear "before/after" divide
2024: Others caught up, progress in fits and spurts
- O1/O3 used test-time compute to saturate math and reasoning benchmarks
- Sonnet 3.5 / 4o incremented some benchmarks into saturation, and pushed new visual evals into saturation
- Llama 3 / Qwen 2.5 made open-weight models competitive across the board
Today: We need better benchmarks
- I'm amazed seeing tasks I didn't think we'd solve until 2030 become obsolete, and yet we still can't trust a model to do the same tasks as a junior
- It's clear our benchmarks aren't yet measuring real-world reliability; I hope we see as much progress in benchmarks as we do in models in 2025.
Let me know what you think!
Code + data (if you'd like to contribute): [https://github.com/R0bk/killedbyllm](https://github.com/R0bk/killedbyllm)
Interactive view: [https://r0bk.github.io/killedbyllm/](https://r0bk.github.io/killedbyllm/)
P.S. I've had a hard time deciding what benchmarks are important enough to include. If you know of other benchmarks (including those yet to be saturated) that help answer "can AI do X" questions then please let me know. | 2025-01-02T22:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hs6ftc/killed_by_llm_i_collected_data_on_ai_benchmarks/ | robk001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs6ftc | false | null | t3_1hs6ftc | /r/LocalLLaMA/comments/1hs6ftc/killed_by_llm_i_collected_data_on_ai_benchmarks/ | false | false | self | 105 | null |
What are we expecting from Llama 4? | 74 | And when is it coming out? | 2025-01-02T22:32:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hs6jjq/what_are_we_expecting_from_llama_4/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs6jjq | false | null | t3_1hs6jjq | /r/LocalLLaMA/comments/1hs6jjq/what_are_we_expecting_from_llama_4/ | false | false | self | 74 | null |
"anthropic-firefly-preview" - a new upcoming model from anthropic?
| 59 | 2025-01-02T22:45:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hs6uam/anthropicfireflypreview_a_new_upcoming_model_from/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs6uam | false | null | t3_1hs6uam | /r/LocalLLaMA/comments/1hs6uam/anthropicfireflypreview_a_new_upcoming_model_from/ | false | false | 59 | {'enabled': False, 'images': [{'id': 'PahzH5QoRgsYkHI_6-Qe6jUIMUES2PqW2L3RXFeJG2M', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/BSsM_eSKNn-1E3fOem3HLaN-TGvwg3A_QYu7hcRl42Q.jpg?width=108&crop=smart&auto=webp&s=875a016160cb774beb20dcc0396649f5092b661a', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/BSsM_eSKNn-1E3fOem3HLaN-TGvwg3A_QYu7hcRl42Q.jpg?auto=webp&s=c5f3a4ab31f38e87a088e9e93c2e60b714eea6e5', 'width': 200}, 'variants': {}}]} |
||
for some reason god on twitter follows nous research | 0 | 2025-01-02T22:45:27 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hs6ui1 | false | null | t3_1hs6ui1 | /r/LocalLLaMA/comments/1hs6ui1/for_some_reason_god_on_twitter_follows_nous/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'rkxj7z4msnae1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/rkxj7z4msnae1.png?width=108&crop=smart&auto=webp&s=c70284e023ae1459690b7700ee07c5e28ed52130', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/rkxj7z4msnae1.png?width=216&crop=smart&auto=webp&s=e68879acf51d08f139e958132561e20f29529177', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/rkxj7z4msnae1.png?width=320&crop=smart&auto=webp&s=18b2badc7afaef99a344e0d5ca3940158470270e', 'width': 320}, {'height': 299, 'url': 'https://preview.redd.it/rkxj7z4msnae1.png?width=640&crop=smart&auto=webp&s=82925700dd1ef846db3ac3b85d32a626108e4f0c', 'width': 640}, {'height': 449, 'url': 'https://preview.redd.it/rkxj7z4msnae1.png?width=960&crop=smart&auto=webp&s=344f1811aa1417f75452c9006252c1bcdb228934', 'width': 960}, {'height': 505, 'url': 'https://preview.redd.it/rkxj7z4msnae1.png?width=1080&crop=smart&auto=webp&s=6091c1d1d551e0b770df7f9180a57c0051d630f1', 'width': 1080}], 'source': {'height': 554, 'url': 'https://preview.redd.it/rkxj7z4msnae1.png?auto=webp&s=e429fc496a5d397df31609b4c120b03d32181353', 'width': 1184}, 'variants': {}}]} |
||
Beyond RAG: Building a Knowledge Management System That Enhances Rather Than Replaces Thought | 11 | 2025-01-02T23:09:01 | https://nsavage.substack.com/p/beyond-rag-building-a-knowledge-management | Naga | nsavage.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1hs7enp | false | null | t3_1hs7enp | /r/LocalLLaMA/comments/1hs7enp/beyond_rag_building_a_knowledge_management_system/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'fzAtgbwYpWMCVPnqRfF944TV8Ai-bHG230SMh8SLWvM', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/cwKNjdzuHIhqda-jAsK609i5QySWxvCz6VReo_06xWA.jpg?width=108&crop=smart&auto=webp&s=19399e2cc1eb315bcc40d43b63ffd047703d401b', 'width': 108}, {'height': 202, 'url': 'https://external-preview.redd.it/cwKNjdzuHIhqda-jAsK609i5QySWxvCz6VReo_06xWA.jpg?width=216&crop=smart&auto=webp&s=1b164c029a9d38d11163b6797f45e4ba2053cfd6', 'width': 216}, {'height': 300, 'url': 'https://external-preview.redd.it/cwKNjdzuHIhqda-jAsK609i5QySWxvCz6VReo_06xWA.jpg?width=320&crop=smart&auto=webp&s=e5132eac1efa2350e4578c6aea90424dfdc409d5', 'width': 320}, {'height': 600, 'url': 'https://external-preview.redd.it/cwKNjdzuHIhqda-jAsK609i5QySWxvCz6VReo_06xWA.jpg?width=640&crop=smart&auto=webp&s=837a0ff4b0da2842e11ff05e738f185c05477469', 'width': 640}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cwKNjdzuHIhqda-jAsK609i5QySWxvCz6VReo_06xWA.jpg?auto=webp&s=63b9ffd275964e3441e0014ac1f6ce735e829978', 'width': 640}, 'variants': {}}]} |
||
deepseek suks | 0 | I've been using the API and everything after its first response is complete garbage. It can't keep context and can't follow instructions. What's all the hype for? I'm back to using GPT-4o for my API calls | 2025-01-02T23:55:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hs8gvr/deepseek_suks/ | RouteGuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hs8gvr | false | null | t3_1hs8gvr | /r/LocalLLaMA/comments/1hs8gvr/deepseek_suks/ | false | false | self | 0 | null |