title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PSA - Deepseek v3 outperforms Sonnet at 53x cheaper pricing (API rates) | 433 | Considering that even a 3x price difference w/ these benchmarks would be extremely notable, this is pretty damn absurd. I have my eyes on Anthropic, and I'm curious to see what they have on the way. Personally, I would still likely pay a premium for coding tasks if they can provide a more performant model (by a decent margin). | 2024-12-26T11:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hmm8v9/psa_deepseek_v3_outperforms_sonnet_at_53x_cheaper/ | cobalt1137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmm8v9 | false | null | t3_1hmm8v9 | /r/LocalLLaMA/comments/1hmm8v9/psa_deepseek_v3_outperforms_sonnet_at_53x_cheaper/ | false | false | self | 433 | null |
Speculative Decoding: My findings | 25 | TL;DR: 1. I actually find that speculative decoding works best in 4bit and *not full precision*
2. In MLX, I got Llama-3.3-70b running at 11.5 tokens/second on my M1 Max MacBook
3. I also found that for MLX, the proportional gains are much higher in Low Power Mode (up to 3x greater speed boosts)
---
Hi everyone! Second quick post, as I've been super excited about spec decoding this past week 😄
MLX has a new PR waiting to be merged which will enable speculative decoding. Impatient as I am, I couldn't wait for the PR to merge, so I've been using that branch to do some early investigations!
I documented my findings as I was going, which you can see here https://x.com/priontific/status/1871155918689468530
And also here
https://x.com/priontific/status/1871355678167814523
That second one is what has me really excited. For coding tasks, I managed to get Llama3.3-70b running at 11.5 tokens/second... on my laptop 🤯
Anyway I gotta hop in the car, peace everyone! ✌️
| 2024-12-26T11:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hmmmdg/speculative_decoding_my_findings/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmmmdg | false | null | t3_1hmmmdg | /r/LocalLLaMA/comments/1hmmmdg/speculative_decoding_my_findings/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'YiTfd5gvFtlzRXOfRtLWbh4ljGv_c-lMXPn2cwFZ_sA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/abxpwkzUF_fUJjHjVQ6SIZUfum3v1eZcDb0EfcbNWso.jpg?width=108&crop=smart&auto=webp&s=f2c4fb0a583b0de7a7692612596771d2d6b24793', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/abxpwkzUF_fUJjHjVQ6SIZUfum3v1eZcDb0EfcbNWso.jpg?auto=webp&s=7fa58c23b2c5b3e89c37d9e218e0724fdea477a0', 'width': 200}, 'variants': {}}]} |
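For anyone new to the technique discussed in the post above, here is a minimal, self-contained sketch of the propose-and-verify loop behind speculative decoding. It is not the MLX PR's code: real implementations verify all k draft tokens in a single batched target forward pass and use rejection sampling, while this toy version checks greedily, token by token, and the `draft`/`target` "models" are stand-in functions so the script runs on its own.

```python
# Conceptual sketch of speculative decoding (not the MLX implementation).
# A cheap draft model proposes k tokens; the larger target model checks them
# and keeps the longest prefix it agrees with, emitting its own token at the
# first disagreement. Toy stand-in models let this run end to end.
from typing import Callable, List

Model = Callable[[List[int]], List[float]]  # context -> next-token probabilities

def greedy_next(model: Model, ctx: List[int]) -> int:
    probs = model(ctx)
    return max(range(len(probs)), key=probs.__getitem__)

def speculative_step(draft: Model, target: Model, ctx: List[int], k: int = 4) -> List[int]:
    # 1) draft proposes k tokens autoregressively (cheap)
    proposal, tmp = [], list(ctx)
    for _ in range(k):
        t = greedy_next(draft, tmp)
        proposal.append(t)
        tmp.append(t)
    # 2) target verifies: accept agreeing tokens, substitute its own token at
    #    the first disagreement, then stop (worst case still yields one token)
    accepted, tmp = [], list(ctx)
    for t in proposal:
        t_target = greedy_next(target, tmp)
        if t_target != t:
            accepted.append(t_target)
            break
        accepted.append(t)
        tmp.append(t)
    return accepted

# Toy models over a 4-token vocabulary; the draft mostly agrees with the
# target, which is the regime where speculative decoding pays off.
target: Model = lambda ctx: [0.7, 0.1, 0.1, 0.1] if len(ctx) % 2 == 0 else [0.1, 0.2, 0.6, 0.1]
draft: Model = lambda ctx: [0.6, 0.2, 0.1, 0.1] if len(ctx) % 2 == 0 else [0.1, 0.1, 0.7, 0.1]

ctx = [0]
for _ in range(3):
    ctx += speculative_step(draft, target, ctx)
print(ctx)
```

In the real setup the speedup comes from the target checking all k draft tokens in one forward pass, which is presumably also why a 4-bit draft can win overall: it proposes much faster while still agreeing with the target often enough.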
Deepseek V3 is officially released (code, paper, benchmark results) | 569 | 2024-12-26T11:50:17 | https://github.com/deepseek-ai/DeepSeek-V3 | kristaller486 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hmmtt3 | false | null | t3_1hmmtt3 | /r/LocalLLaMA/comments/1hmmtt3/deepseek_v3_is_officially_released_code_paper/ | false | false | 569 | {'enabled': False, 'images': [{'id': 'V4gRfsvMqp-sdfuPlFF58md70uEG4mpMSxG_31A3Ww8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pFrHoxtWvw0kTyGjl2tvnmrhzv0d4lz0kTv8A_vWuaM.jpg?width=108&crop=smart&auto=webp&s=990981a8a0d958bc62b777ac945d51097c5366b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pFrHoxtWvw0kTyGjl2tvnmrhzv0d4lz0kTv8A_vWuaM.jpg?width=216&crop=smart&auto=webp&s=2aa4ab7a06e39a6abc8510db5d872d5f14275e37', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pFrHoxtWvw0kTyGjl2tvnmrhzv0d4lz0kTv8A_vWuaM.jpg?width=320&crop=smart&auto=webp&s=3d6c8f00d1c0735a0d4ae8bd4ddfc9a1c34cd5b0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pFrHoxtWvw0kTyGjl2tvnmrhzv0d4lz0kTv8A_vWuaM.jpg?width=640&crop=smart&auto=webp&s=a4e84dd3877b3e8e65b413c654a0abdcf52c3176', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pFrHoxtWvw0kTyGjl2tvnmrhzv0d4lz0kTv8A_vWuaM.jpg?width=960&crop=smart&auto=webp&s=3b8b47acc3fa506935279d134e461ced2cba2a03', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pFrHoxtWvw0kTyGjl2tvnmrhzv0d4lz0kTv8A_vWuaM.jpg?width=1080&crop=smart&auto=webp&s=5a15e4ac208c45bcfaa770b44ccf781df7c52f72', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pFrHoxtWvw0kTyGjl2tvnmrhzv0d4lz0kTv8A_vWuaM.jpg?auto=webp&s=820af0ecb52ee602c54ce2caa6038735e9ae5a11', 'width': 1200}, 'variants': {}}]} |
||
DeepSeek-V3 Officially Released | 168 | Today, DeepSeek has released and open-sourced the first version of their new model series, DeepSeek-V3.
You can chat with the latest V3 model directly on their official website chat.deepseek.com. API services have been updated accordingly, with no changes required to existing API configurations. The current version of DeepSeek-V3 does not yet support multimodal input/output.
**Performance Matches Leading Proprietary Models**
Key specifications:
* Based on DeepSeek's self-developed MoE (Mixture of Experts) architecture
* 671B total parameters
* 37B activated parameters
* Pretrained on 14.8T tokens
**Research Paper:**
[https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek\_V3.pdf](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf)
Benchmark results show that DeepSeek-V3 outperforms other open-source models including Qwen2.5-72B and Llama-3.1-405B. Its performance is on par with world-leading proprietary models like GPT-4o and Claude-3.5-Sonnet.
https://preview.redd.it/1wv0hkomn69e1.png?width=1080&format=png&auto=webp&s=2c5725bc3c09f8599b03826d4cd36a2e538201c5
**Encyclopedia Knowledge**: DeepSeek-V3 shows significant improvement over its predecessor DeepSeek-V2.5 in knowledge-based tasks (MMLU, MMLU-Pro, GPQA, SimpleQA), approaching the performance of the current best model Claude-3.5-Sonnet-1022.
**Long Text**: In long text evaluations, DeepSeek-V3 outperforms other models on average across DROP, FRAMES, and LongBench v2.
**Code**: DeepSeek-V3 significantly leads all non-o1 models in algorithmic coding scenarios (Codeforces), and approaches Claude-3.5-Sonnet-1022 in software engineering scenarios (SWE-Bench Verified).
**Mathematics**: On the American Invitational Mathematics Examination (AIME 2024, MATH) and China National Math Olympiad (CNMO 2024), DeepSeek-V3 substantially surpasses all open-source and proprietary models.
**Chinese Language Capabilities**: DeepSeek-V3 performs similarly to Qwen2.5-72B on educational evaluation sets like C-Eval and pronoun disambiguation, while showing superior performance on factual knowledge tests like C-SimpleQA.
https://preview.redd.it/z0buyr37o69e1.png?width=1080&format=png&auto=webp&s=692484c0700691e3b5a22b452146c82d2202dbc7
**Generation Speed Increased by 3x**
Through algorithmic and engineering innovations, DeepSeek-V3's token generation speed has significantly increased from 20 TPS to 60 TPS, achieving a 3x improvement compared to the V2.5 model. This brings users a faster and more fluid experience.
https://i.redd.it/vpv82tjoo69e1.gif
**API Service Price Adjustment**
With the release of the more powerful and faster DeepSeek-V3, our model API service pricing will be adjusted to **0.5 CNY (cache hit) / 2 CNY (cache miss) per million input tokens, and 8 CNY per million output tokens**, aiming to continuously provide better model services.
https://preview.redd.it/b3c2rveuo69e1.png?width=1080&format=png&auto=webp&s=b94e7f7ea4c22edb3740ab7c5572701f529ce1bc
Meanwhile, we have decided to offer a **45-day** promotional pricing period for the new model: From now until **February 8, 2025**, DeepSeek-V3's API service will maintain the familiar pricing of **0.1 CNY (cache hit) / 1 CNY (cache miss) per million input tokens, and 2 CNY per million output tokens**. Both existing registered users and new users who register during this period can enjoy these promotional rates.
https://preview.redd.it/7fc3pfu2p69e1.png?width=916&format=png&auto=webp&s=74a7f2d8a4005f9c612b7d7445e6d2d4f47ce17c
**Open Source Weights and Local Deployment**
DeepSeek-V3 is trained in FP8 and provides native FP8 weights as open source.
Thanks to the support of the open-source community, **SGLang** and **LMDeploy** have immediately added support for native FP8 inference of the V3 model, while **TensorRT-LLM** and **MindIE** have implemented BF16 inference. Additionally, to facilitate community adaptation and expand application scenarios, we provide conversion scripts from FP8 to BF16.
For model weight downloads and more local deployment information, please refer to:
[https://huggingface.co/deepseek-ai/DeepSeek-V3-Base](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base)
**"Pursuing inclusive AGI with open-source spirit and long-term commitment"** has always been DeepSeek's firm belief. We are very excited to share our progress in model pre-training with the community and are delighted to see the capability gap between open-source and closed-source models continuing to narrow.
This is a new beginning, and in the future, we will continue to develop richer features such as deep thinking and multimodality based on the DeepSeek-V3 base model, while continuing to share our latest exploration results with the community. | 2024-12-26T12:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hmn55p/deepseekv3_officially_released/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmn55p | false | null | t3_1hmn55p | /r/LocalLLaMA/comments/1hmn55p/deepseekv3_officially_released/ | false | false | 168 | {'enabled': False, 'images': [{'id': 'DfcTASMEZ4eJnZUuSAFW_3aU3aYHgTGYKXqIi3qNbwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OdcXWxfZMxLBdp48pDY3KtBo5eblllCy64NCTeKB128.jpg?width=108&crop=smart&auto=webp&s=445364411532586650872e667bbc1d3b844e1d96', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OdcXWxfZMxLBdp48pDY3KtBo5eblllCy64NCTeKB128.jpg?width=216&crop=smart&auto=webp&s=507f92db2c9f124f0b31a36ae87aeb2b9e153375', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OdcXWxfZMxLBdp48pDY3KtBo5eblllCy64NCTeKB128.jpg?width=320&crop=smart&auto=webp&s=38f399f5ef3decdd57a494f5226a45daf614977e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OdcXWxfZMxLBdp48pDY3KtBo5eblllCy64NCTeKB128.jpg?width=640&crop=smart&auto=webp&s=7ce5490fd085c6e6f1e9fddc7f64ec5e1402ec50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OdcXWxfZMxLBdp48pDY3KtBo5eblllCy64NCTeKB128.jpg?width=960&crop=smart&auto=webp&s=0c72472ef393fc3fade92e108e957fbc54987622', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OdcXWxfZMxLBdp48pDY3KtBo5eblllCy64NCTeKB128.jpg?width=1080&crop=smart&auto=webp&s=2c0e3742076bcca537ef4c8b1683dc0618726f26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OdcXWxfZMxLBdp48pDY3KtBo5eblllCy64NCTeKB128.jpg?auto=webp&s=8823ed7fa6cfa1cdd0a4b8c333a5d1b6a8c576d2', 'width': 1200}, 'variants': {}}]} |
|
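Since the announcement notes the API is drop-in compatible with existing configurations, here is a hedged sketch of calling DeepSeek-V3 through the OpenAI-compatible client. The `base_url` and the `deepseek-chat` model id are assumptions to verify against DeepSeek's API docs; the cost line simply applies the promotional per-million-token prices quoted above.

```python
# Hedged sketch of calling DeepSeek-V3 via the OpenAI-compatible API.
# base_url and model id are assumptions; check DeepSeek's API documentation.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                     # your DeepSeek API key
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="deepseek-chat",  # assumed model id pointing at DeepSeek-V3
    messages=[{"role": "user", "content": "Summarize the DeepSeek-V3 release in two sentences."}],
)
print(resp.choices[0].message.content)

# Rough cost at the promotional rates quoted above
# (1 CNY per 1M input tokens on a cache miss, 2 CNY per 1M output tokens):
usage = resp.usage
cost_cny = usage.prompt_tokens / 1e6 * 1 + usage.completion_tokens / 1e6 * 2
print(f"~{cost_cny:.6f} CNY for this request")
```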
Where & how to learn LLM? | 1 | [removed] | 2024-12-26T12:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hmnh0r/where_how_to_learn_llm/ | mipan_zuuzuuzuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmnh0r | false | null | t3_1hmnh0r | /r/LocalLLaMA/comments/1hmnh0r/where_how_to_learn_llm/ | false | false | self | 1 | null |
Wow this maybe probably best open source model ? | 477 | 2024-12-26T12:38:12 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmnj93 | false | null | t3_1hmnj93 | /r/LocalLLaMA/comments/1hmnj93/wow_this_maybe_probably_best_open_source_model/ | false | false | 477 | {'enabled': True, 'images': [{'id': 'U_CCcLeowoA02j3Yljc5VxZ72dQ4a_VzGfQUh03O8V8', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/vry52nz3u69e1.jpeg?width=108&crop=smart&auto=webp&s=85010037cef84934d0024baa8b73c4e206e30ace', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/vry52nz3u69e1.jpeg?width=216&crop=smart&auto=webp&s=f93a9ad091750db3d14355a611cc3b87c19808c6', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/vry52nz3u69e1.jpeg?width=320&crop=smart&auto=webp&s=09ab1984fb9f6b815b3b845d484eee6ae2c4c508', 'width': 320}, {'height': 417, 'url': 'https://preview.redd.it/vry52nz3u69e1.jpeg?width=640&crop=smart&auto=webp&s=b521d4e1200b3f3cd89297c6a568bdaf1d3e7234', 'width': 640}, {'height': 626, 'url': 'https://preview.redd.it/vry52nz3u69e1.jpeg?width=960&crop=smart&auto=webp&s=9c1c5a785d4f0e0be0146e53257523af9c763c1b', 'width': 960}, {'height': 705, 'url': 'https://preview.redd.it/vry52nz3u69e1.jpeg?width=1080&crop=smart&auto=webp&s=6828276261146e30f57a4d3a1d60dc9834f73916', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://preview.redd.it/vry52nz3u69e1.jpeg?auto=webp&s=6f4f7d8c9b3a11d00965cba508ac0a582d75ba33', 'width': 1574}, 'variants': {}}]} |
|||
So.... Was Reflection right all along? | 0 | That guy was still a complete liar, but in all fairness... he did see some potential in LLMs talking to themselves before the term "test-time compute" was even coined... that's amazing in a sense. | 2024-12-26T13:09:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hmo11r/so_was_reflection_right_all_along/ | KillerX629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmo11r | false | null | t3_1hmo11r | /r/LocalLLaMA/comments/1hmo11r/so_was_reflection_right_all_along/ | false | false | self | 0 | null |
Automatic image translation | 1 | [removed] | 2024-12-26T13:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hmo6bl/automatic_image_translation/ | drawning_ness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmo6bl | false | null | t3_1hmo6bl | /r/LocalLLaMA/comments/1hmo6bl/automatic_image_translation/ | false | false | self | 1 | null |
Deepseek V3 Vram Requirements. | 4 | I have access to two A100 GPUs through my university. Could I do inference using Deepseek V3? The model is huge; 685B would probably be too big even for 80-160GB of VRAM, but I read that mixture-of-experts models run a lot lighter than their total parameter count suggests. | 2024-12-26T13:49:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hmoplg/deepseek_v3_vram_requirements/ | ApplePenguinBaguette | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmoplg | false | null | t3_1hmoplg | /r/LocalLLaMA/comments/1hmoplg/deepseek_v3_vram_requirements/ | false | false | self | 4 | null |
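As a quick sanity check on the question above: MoE reduces compute per token (only 37B parameters are activated), but all 671B parameters from the release post still have to be resident in memory for inference, so weight memory scales with the total count. A rough, assumption-laden estimate:

```python
# Back-of-the-envelope weight memory for DeepSeek-V3 (671B total parameters,
# per the release post above). Ignores KV cache, activations, and runtime overhead.
total_params = 671e9
for label, bytes_per_param in [("FP8 (native)", 1.0), ("4-bit quant", 0.5)]:
    gib = total_params * bytes_per_param / 2**30
    print(f"{label}: ~{gib:,.0f} GiB of weights")
# Either figure is far beyond two 80GB A100s (160 GB total), so local inference
# on that hardware would need aggressive offloading or a much larger cluster.
```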
Satire AI news feed | 1 | [removed] | 2024-12-26T13:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hmopmi/satire_ai_news_feed/ | Upstairs_Bedroom6541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmopmi | false | null | t3_1hmopmi | /r/LocalLLaMA/comments/1hmopmi/satire_ai_news_feed/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'FqkvTQqkgQdAXCbStpZHoLnLuLiJrxrHpSlw-A2STn4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/v94vAyZyejnt0X-o3Ix4AG2d43ZXgbDRTYyRA3hb2jU.jpg?width=108&crop=smart&auto=webp&s=c167ae1fb00bef3e449becb52d7c6064cbc72d63', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/v94vAyZyejnt0X-o3Ix4AG2d43ZXgbDRTYyRA3hb2jU.jpg?width=216&crop=smart&auto=webp&s=08f6ccbdc3d42055fa835e1f371b19c9dfc35bc7', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/v94vAyZyejnt0X-o3Ix4AG2d43ZXgbDRTYyRA3hb2jU.jpg?auto=webp&s=79d8d69d202070d0cf0d88687278fa354e8ddd70', 'width': 300}, 'variants': {}}]} |
Who's this guy? and is he serious ? | 0 | 2024-12-26T14:03:36 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmoyrq | false | null | t3_1hmoyrq | /r/LocalLLaMA/comments/1hmoyrq/whos_this_guy_and_is_he_serious/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'sMayCDPAv-cNJUoWLi70a9UXw46xMYFzERGg8f3l-eI', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/fg2e4oec979e1.jpeg?width=108&crop=smart&auto=webp&s=2e07fdcbd88643286acdc27cbbed38411b7716f5', 'width': 108}, {'height': 263, 'url': 'https://preview.redd.it/fg2e4oec979e1.jpeg?width=216&crop=smart&auto=webp&s=1a2c04e9168936fdc843758b8597363263d2abcf', 'width': 216}, {'height': 389, 'url': 'https://preview.redd.it/fg2e4oec979e1.jpeg?width=320&crop=smart&auto=webp&s=e80c6a09324f1f172dd5cfbd1f5ea3b3a3354b0d', 'width': 320}, {'height': 779, 'url': 'https://preview.redd.it/fg2e4oec979e1.jpeg?width=640&crop=smart&auto=webp&s=8fd0a1bcd1e413dc8d9859a1fc2062ec33cf2c53', 'width': 640}, {'height': 1169, 'url': 'https://preview.redd.it/fg2e4oec979e1.jpeg?width=960&crop=smart&auto=webp&s=18464305c219d311a22599921a5dc31e0633150c', 'width': 960}, {'height': 1316, 'url': 'https://preview.redd.it/fg2e4oec979e1.jpeg?width=1080&crop=smart&auto=webp&s=f712417f676799e1eff6f54770bd6f3f0055bdef', 'width': 1080}], 'source': {'height': 1316, 'url': 'https://preview.redd.it/fg2e4oec979e1.jpeg?auto=webp&s=41dfab50847934f93f82b62c3d0eebf503e9de3f', 'width': 1080}, 'variants': {}}]} |
|||
Fastest Token/s Solution | 0 | What is the fastest token/s/llm-parameter/$ solution out there currently?
Is it running 2x EPYC with loads of RAM or a single A6000 or some older GPUs in some weird parallelised config? | 2024-12-26T14:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hmp0n6/fastest_tokens_solution/ | Solvicode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmp0n6 | false | null | t3_1hmp0n6 | /r/LocalLLaMA/comments/1hmp0n6/fastest_tokens_solution/ | false | false | self | 0 | null |
Uncensored models for politics, religion etc. | 19 | Are there any newer models that will hold an intelligent discussion about religion, politics, conspiracy theories etc without refusals, devolving into moralizing, or trying to be politically correct and please everybody? QwQ is amazing for reasoning but shits the bed when asked about politics etc.
There is a mountain of bullshit floating about on social media at the moment.
It would be awesome to have a model to discuss things with rationally, or at least to run current events by, to determine whether I am being gaslit or not. The max I can run is a 70B model at Q4 at usable speeds.
Maybe too much to ask at the current stage of open source. | 2024-12-26T14:18:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hmp8ej/uncensored_models_for_politics_religion_etc/ | Nobby_Binks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmp8ej | false | null | t3_1hmp8ej | /r/LocalLLaMA/comments/1hmp8ej/uncensored_models_for_politics_religion_etc/ | false | false | self | 19 | null |
Decided to build a fully open source Local Ai Meeting assistant openly. Using ollama, fastAPI, Next.js, Electron. | 2 | [UI V1.0](https://preview.redd.it/qb6gywwkh79e1.png?width=2688&format=png&auto=webp&s=4f0f355237aa38c02e2bf7ecd4d30f6781134174)
**TL;DR:** In this approach, I plan to build openly—meaning I’ll gather feedback and develop step by step. The initial UI development is complete, and I intend to build the rest as time allows. Contributions are welcome.
This is my humble attempt to solve a problem I face within my company: taking meeting notes while a client call is ongoing. The solution is a fully open-source tool that uses open-source models and tools.
When I explored existing tools to make this process easier, I encountered a significant issue: I don't want my company’s confidential data stored in someone else’s database.
Since I am already building my own local AI-based tools and agents to automate most of my tasks, I decided to create this tool—a privacy-first, open-source meeting assistant that transcribes and summarizes meetings, all locally on my own device.
This week, I focused on the UI, and here’s a sneak peek 👀 of what I’ve been working on! (Check out the video!)
[Fully open source local meeting minutes capturing tool](https://reddit.com/link/1hmq1x1/video/fn3uau2ei79e1/player)
Here's the architecture diagram. Curious to get feedbacks.
[Architecture diagram](https://preview.redd.it/umx4qzupi79e1.jpg?width=1394&format=pjpg&auto=webp&s=1795dc5d0c7a9e51715421181ff2e5a2e3c88a0b)
I'm planning to work on the backend during the next few weeks. I hope this will be helpful for at least a few of the community members. | 2024-12-26T15:01:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hmq1x1/decided_to_build_a_fully_open_source_local_ai/ | Sorry_Transition_599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmq1x1 | false | null | t3_1hmq1x1 | /r/LocalLLaMA/comments/1hmq1x1/decided_to_build_a_fully_open_source_local_ai/ | false | false | 2 | null |
|
Dual RTX 3060 to run Pixtral? | 1 | [removed] | 2024-12-26T15:03:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hmq3fm/dual_rtx_3060_to_run_pixtral/ | lilythompsilly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmq3fm | false | null | t3_1hmq3fm | /r/LocalLLaMA/comments/1hmq3fm/dual_rtx_3060_to_run_pixtral/ | false | false | self | 1 | null |
Sonnet3.5 vs v3 | 192 | 2024-12-26T15:14:29 | vinam_7 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmqb2j | false | null | t3_1hmqb2j | /r/LocalLLaMA/comments/1hmqb2j/sonnet35_vs_v3/ | false | false | 192 | {'enabled': True, 'images': [{'id': 'lcCcQRP2Sohog18T2nbNt2AICiWNbHR7u0nVQ3DBMe4', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/y5zmucuql79e1.png?width=108&crop=smart&auto=webp&s=3bf1eda691e335a09b6dfcb16cbf7c67845e24fc', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/y5zmucuql79e1.png?width=216&crop=smart&auto=webp&s=ab75a0503d6de9ad1c75d632c4caab13f8c222ea', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/y5zmucuql79e1.png?width=320&crop=smart&auto=webp&s=37460626c9c107f8c0066b065ebdb2934abe65c6', 'width': 320}, {'height': 315, 'url': 'https://preview.redd.it/y5zmucuql79e1.png?width=640&crop=smart&auto=webp&s=043a80d1204a10a79d26eaccd5ad00d804dc37aa', 'width': 640}, {'height': 472, 'url': 'https://preview.redd.it/y5zmucuql79e1.png?width=960&crop=smart&auto=webp&s=87bb29393d06aa05f2af268b7f4ff110f6be3bf8', 'width': 960}, {'height': 531, 'url': 'https://preview.redd.it/y5zmucuql79e1.png?width=1080&crop=smart&auto=webp&s=760aabd3e6e486a580f0729a629704013bb8bc7b', 'width': 1080}], 'source': {'height': 664, 'url': 'https://preview.redd.it/y5zmucuql79e1.png?auto=webp&s=32e8359cfc60bd1dfc5cae322a9d9c8a94517276', 'width': 1348}, 'variants': {}}]} |
|||
Building a fully open source local LLM based meeting minutes recording and analysis | 44 | [UI Screenshot](https://preview.redd.it/ex9weqwvl79e1.png?width=2678&format=png&auto=webp&s=bbe485232c77c4c721cee130c93ed2c99686de28)
**TL;DR:** In this approach, I plan to build openly—meaning I’ll gather feedback and develop step by step. The initial UI development is complete, and I intend to build the rest as time allows. Contributions are welcome.
This is my humble attempt to solve a problem I face within my company: taking meeting notes while a client call is ongoing. The solution is a fully open-source tool that uses open-source models and tools.
When I explored existing tools to make this process easier, I encountered a significant issue: I don't want my company’s confidential data stored in someone else’s database.
Since I am already building my own local AI-based tools and agents to automate most of my tasks, I decided to create this tool—a privacy-first, open-source meeting assistant that transcribes and summarizes meetings, all locally on my own device.
This week, I focused on the UI, and here’s a sneak peek 👀 of what I’ve been working on! (Check out the video!)
[UI Demo of fully open source AI meeting minutes recorded](https://reddit.com/link/1hmqc1a/video/8dur1yc1m79e1/player)
Here's the architecture diagram. Curious to get feedbacks.
[Architecture diagram](https://preview.redd.it/umx4qzupi79e1.jpg?width=1394&format=pjpg&auto=webp&s=1795dc5d0c7a9e51715421181ff2e5a2e3c88a0b)
Repo Link : [https://github.com/Zackriya-Solutions/meeting-minutes](https://github.com/Zackriya-Solutions/meeting-minutes)
I'm planning to work on the backend coming weeks. I hope this will be helpful for at least a few of the community members. | 2024-12-26T15:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hmqc1a/building_a_fully_open_source_local_llm_based/ | Sorry_Transition_599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmqc1a | false | null | t3_1hmqc1a | /r/LocalLLaMA/comments/1hmqc1a/building_a_fully_open_source_local_llm_based/ | false | false | 44 | {'enabled': False, 'images': [{'id': 'KMtodORB2jbBuO3ODZiwifkjQQwYF_GFvTl53tEFgCQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UEepwuumwfSvCUmzYiRnAPXqmk32MSxjKjgl9XC7ZME.jpg?width=108&crop=smart&auto=webp&s=3425bcea2db7a4c50ea6106fa3d01bf44e57f8b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UEepwuumwfSvCUmzYiRnAPXqmk32MSxjKjgl9XC7ZME.jpg?width=216&crop=smart&auto=webp&s=93fe78f3720ad999355559f7c27b297da4aaa369', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UEepwuumwfSvCUmzYiRnAPXqmk32MSxjKjgl9XC7ZME.jpg?width=320&crop=smart&auto=webp&s=6b4c58bb5e753b9e7a46f355c6fc14afb9c0e0d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UEepwuumwfSvCUmzYiRnAPXqmk32MSxjKjgl9XC7ZME.jpg?width=640&crop=smart&auto=webp&s=51acda99f885f0114b90dcb5fb1f74fc2bb15d0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UEepwuumwfSvCUmzYiRnAPXqmk32MSxjKjgl9XC7ZME.jpg?width=960&crop=smart&auto=webp&s=0468f235e7a81a6bc30c0a34e7469184f12398d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UEepwuumwfSvCUmzYiRnAPXqmk32MSxjKjgl9XC7ZME.jpg?width=1080&crop=smart&auto=webp&s=4eeeb9b4342bb7ef398321535d8ce1e6a8766ded', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UEepwuumwfSvCUmzYiRnAPXqmk32MSxjKjgl9XC7ZME.jpg?auto=webp&s=12254002d77f0f0484201e556d940ff872d6e606', 'width': 1200}, 'variants': {}}]} |
|
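To make the architecture in the post above concrete, here is a hedged sketch of the summarization leg (transcript in, local Ollama model out) as a FastAPI endpoint. The endpoint path, model name, and prompt are placeholders rather than the project's actual code; the only real assumption is Ollama's default local REST endpoint.

```python
# Hedged sketch of a transcript-summarization endpoint backed by a local
# Ollama model. Model name, route, and prompt are illustrative placeholders.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

class Transcript(BaseModel):
    text: str

@app.post("/summarize")
def summarize(t: Transcript) -> dict:
    # Forward the transcript to the local model and return its summary.
    resp = requests.post(OLLAMA_URL, json={
        "model": "llama3.2",  # placeholder local model
        "prompt": "Summarize this meeting transcript as action items:\n" + t.text,
        "stream": False,
    }, timeout=300)
    resp.raise_for_status()
    return {"summary": resp.json()["response"]}
```

A local Whisper-style transcription step would feed `text` here, keeping everything on-device in line with the privacy goal.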
LLM recommendations | 0 | Hi All,
Like most of you I'm looking to run my own LLM in my local network. But I'm very new to this game and am looking for some advice and information.
I'm looking for something simple to start with, with good documentation, which I can use to make Home Assistant's assistant more useful.
I have a server at home I can use, it has an NVIDIA RTX 2070 Super TI with 8GB of memory.
Does anyone have some information on where to start, and a recommendation about which LLM to try?
I hope this is an okay topic. If not please let me know.
Thanks for any help. | 2024-12-26T15:20:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hmqfgf/llm_recommendations/ | korsten123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmqfgf | false | null | t3_1hmqfgf | /r/LocalLLaMA/comments/1hmqfgf/llm_recommendations/ | false | false | self | 0 | null |
Deepseek V3 benchmarks are a reminder that Qwen 2.5 72B is the real king and everyone else is joking! | 159 | 2024-12-26T15:33:50 | ParaboloidalCrest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmqpca | false | null | t3_1hmqpca | /r/LocalLLaMA/comments/1hmqpca/deepseek_v3_benchmarks_are_a_reminder_that_qwen/ | false | false | 159 | {'enabled': True, 'images': [{'id': 'lgXk_p4wPW4z6RFkVzy8FqmDie90Rilq-CEy63xokAc', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/q4gg1cobp79e1.png?width=108&crop=smart&auto=webp&s=f5ee1f0ea5e9d7c21282dfdac9e5474dd54b58f1', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/q4gg1cobp79e1.png?width=216&crop=smart&auto=webp&s=1710a29248a93890a58a7937f9c3e52fda84e777', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/q4gg1cobp79e1.png?width=320&crop=smart&auto=webp&s=67cfa307e996094ae768f3e446be7bd4fb3fbc00', 'width': 320}, {'height': 373, 'url': 'https://preview.redd.it/q4gg1cobp79e1.png?width=640&crop=smart&auto=webp&s=9b289ddf09282308739393e00d4e89acfe8c47cb', 'width': 640}, {'height': 560, 'url': 'https://preview.redd.it/q4gg1cobp79e1.png?width=960&crop=smart&auto=webp&s=e180c498fbefe101ae8bb9035d012a1aa6ad918e', 'width': 960}, {'height': 630, 'url': 'https://preview.redd.it/q4gg1cobp79e1.png?width=1080&crop=smart&auto=webp&s=7568ba3a7c22c9596093d4cc76ea3add9a01479a', 'width': 1080}], 'source': {'height': 994, 'url': 'https://preview.redd.it/q4gg1cobp79e1.png?auto=webp&s=3f9eb2e00c296c29d0910fa7cc23d8174c7117d8', 'width': 1702}, 'variants': {}}]} |
|||
Do companies use GPUs for inference too? (I mean companies like Deepseek) | 0 | I heard that Deepseek V3 was trained on around 2000 GPUs. There are many people using [chat.deepseek.com](http://chat.deepseek.com) for inference, and if they also use Nvidia GPUs for inference, won't that require a similar number of GPUs?
How are companies like deepseek able to provide cheap inference if they are using H100s for inference ?
Genuinely curious about this. | 2024-12-26T16:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hmrfs0/does_companies_use_gpus_for_inference_also_i_mean/ | Dazzling-Albatross72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmrfs0 | false | null | t3_1hmrfs0 | /r/LocalLLaMA/comments/1hmrfs0/does_companies_use_gpus_for_inference_also_i_mean/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'QNal655ISMiRXiFLzRYhoi4RbPV9fBimpiuad0nkkzc', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/llTUq73RG_bpacHm7caxVPEFDWjjrOeGjq7d4Q_Axak.jpg?width=108&crop=smart&auto=webp&s=0e6be06e62e3b734916ea06b0fc9ad501ce7533c', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/llTUq73RG_bpacHm7caxVPEFDWjjrOeGjq7d4Q_Axak.jpg?width=216&crop=smart&auto=webp&s=e2051e4967ec7f06b5e4b8b24e9a9ae6c592e910', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/llTUq73RG_bpacHm7caxVPEFDWjjrOeGjq7d4Q_Axak.jpg?width=320&crop=smart&auto=webp&s=889f3061e4f5586e1d8ab135b795a111e3d52e07', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/llTUq73RG_bpacHm7caxVPEFDWjjrOeGjq7d4Q_Axak.jpg?width=640&crop=smart&auto=webp&s=8e929faed068936fc9674d0c85d17c6e47941b52', 'width': 640}, {'height': 515, 'url': 'https://external-preview.redd.it/llTUq73RG_bpacHm7caxVPEFDWjjrOeGjq7d4Q_Axak.jpg?width=960&crop=smart&auto=webp&s=ddf5e843200ff93304fe68e0567b7038f14a18c6', 'width': 960}, {'height': 579, 'url': 'https://external-preview.redd.it/llTUq73RG_bpacHm7caxVPEFDWjjrOeGjq7d4Q_Axak.jpg?width=1080&crop=smart&auto=webp&s=ca0c5814564964248e052576a8aa9851a51e4827', 'width': 1080}], 'source': {'height': 1118, 'url': 'https://external-preview.redd.it/llTUq73RG_bpacHm7caxVPEFDWjjrOeGjq7d4Q_Axak.jpg?auto=webp&s=fc766ea391baae2b31dd1f1d7446cd20c2b70d70', 'width': 2082}, 'variants': {}}]} |
Any people here freelancing using local AI? | 5 | What kind of AI applications are (small) companies looking for?
Did they find you or did you cold call a bunch of companies? | 2024-12-26T16:09:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hmrh0r/any_people_here_freelancing_using_local_ai/ | mmmm_frietjes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmrh0r | false | null | t3_1hmrh0r | /r/LocalLLaMA/comments/1hmrh0r/any_people_here_freelancing_using_local_ai/ | false | false | self | 5 | null |
Clipboard Conqueror is 1.0. Now supports ESC to cancel and prompt operator comments. | 12 | **TLDR:** [**Clipboard Conqueror**](https://github.com/aseichter2007/ClipboardConqueror) **good. New feature very nice. It's pretty complete. 1.0 Release. No binaries, don't worry it's just a couple easy clicks, no console required.**
Clipboard Conqueror now detects the escape key and aborts the generation. Koboldcpp single user mode only for now. If anyone knows why genkey doesn't work, or about other inference server cancel/abort protocols for non streaming responses, I'd be obliged.
Ctrl+Alt+C now sends the last copy with the default or |||prompts, set|
You can comment out prompts now, like:
||| prompt, <skipped prompt text till the next comma, clearHistory|
Clipboard Conqueror is a unique copy paste LLM command center that works in all applications. It works in any text box where you can select text. In a console, a 3d game, gmail, anywhere. It has no UI other than notification popups. Perfect for minimal context shifting. Just talk to your document right here. Hey document, why did the chicken cross the road? ||| ctrl+a ctrl+c.
Write codes to completely control inference with any server:
|||
inferenceServerName,
promptFormat,
promptTitle,
<testPromptDoNotTouch,
! chatAssistantName,
numberOfTokensAsAnInteger
|
optional instant system prompt text
|
user text
~~~ continue text
Copy that code block and baby you're cooking with Clipboard Conqueror.
Everything but the initializer ||| is optional and most require no particular order. CC will use the default personality unless another prompt is called, you have a set prompt or you write an instant system prompt.
Everything is configurable.
You can type anywhere and command collaboration between multiple inference servers. Tell Llama 3 to prompt Claude with ease, and retain complete control of every stage.
A real query to copy:
|||mistral, < 8B,! brief:, _brief, @! South Park Production Script\n\n Episode Title:, @chatcom,cf,@c,2500| Produce a long and funny South Park (Season 2) episode script.
Pitch: Open on Eric Cartman telling the boys that Sam Altman hired them to incite regulatory capture; Sam is really hung up on how it's dangerous to let open models spread or breed, He wants the boys to suppress open source LLMs by spreading fear of LLMs, but not ChatGPT because his safety people will tame it. Cartman only cares about the cash and the model file. Cartman demands the weights and Sam agrees to give the boys a million dollars and a copy of the chatgpt 5 weights. Through the episode, Sam calls repeatedly to deliver multiple wildly different unhinged warnings about preposterous dangers like mind control for the entire planet- while telling the boys to make more videos. Fill in some zany scenes where the boys produce media about many silly preposterous threats of AI for their terror campaign. The boys meet Sam to close the deal. Sam Altman delivers usernames and passwords to the indignant children- rather than cash and the model weights. Cameo appearance by Rick Sanchez. Rick appears and casually mugs Sam Altman. Kenny dies in the struggle as Rick and Sam grapple. Rick steals Sam Altman's thumb drive with Chatgpt 5 on it, muttering about "for a friend". After their meeting the boys are mad about not getting paid and just getting ChatGPT pro accounts instead of the model weights, and that Altman killed Kenny. Cartman had secretly intended to sell the weights on the dark web for a billion dollars and goes on a detailed rant about his crumbling plans. Cartman is really heartbroken about it. Butters explains the lessons learned. Rick gives chatgpt 5 to Butters after the credits. Butters is Rick's favorite south park character, he admires Professor Chaos's honest ambition. Rick gives Butters cryptic advice about being a supervillain.
Fin.
This brief only scratches the surface of how you can manipulate the chat with Clipboard Conqueror to rule over AI. I have nearly every feature you want. The only thing I don't have is the OpenAI agents API.
You want a wild west of agents no code frameworked together across 4 different services? Just do it like this in minutes.@Define #@each ##@step. No science lab required.
|||novita, llama3, ! Prompt Master 3000, yourSavedPrompt, cf,@!assistant,@claude,@prompt, #@novita, #@chatML, ##@replicate, <##@model: mistral Large correct string I'm too lazy to look up, ##@anotherPrompt, ###@ollama, ###@runpodYouConfiguredInSetup, ###@!Eric Cartman's Commentary Corner:, ###@~Today on Eric,###@4000| Today is a funny day LLama3. Be humorous.| Tell me how to do the Jungle Jiggle.
Anyway, enough of that.
It's past the one-year anniversary of Clipboard Conqueror. I'm curious about users of my software. I can see a lot of people have downloaded it, and I really want to know how it's working out for them or if there are any notable difficulties. CC has 360 GitHub stars, so it must be working. I just expected a few more people looking for help with it, a little more collaboration and interaction. Is my code ugly or what? Does it just work?
# Lessons learned over the year:
\-Open source doesn't stack cash.
\-An infant makes coding really incredibly difficult.
When I was slamming CC out 80 hours a week I figured it would pay me back a few dollars an hour or get me a job or something, but mostly I just got silence. A handful of kind folks donated, a few helped me through multiplatform issues. A few dudes said my readme was hard to parse. I made a couple bucks a day if I absolutely flogged it across the internet as hard as I could. I was kinda panicking what with the software hiring crash last year.
I could see it getting 20+ downloads a day but aside from a couple github issues and a few forks, it's whisper quiet. I did get a few good tips for the performance of Clipboard Conqueror here and there though. I had fun using my tool to craft various content and I made a few friends on the journey.
I got a job doing 3d drafting, the pay sucks but it's chill and fun. Four tens in an office nearby, with an option to work friday if I need to run around tuesday is pretty swell.
Thank you, everyone who has interacted with me about it. Have a happy new year. Maybe I'll get the function calling filled in next. I'm kinda waiting on CC# for an LLM and apparatus able to transcribe what I have. | 2024-12-26T16:17:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hmrmut/clipboard_conqueror_is_10_now_supports_esc_to/ | aseichter2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmrmut | false | null | t3_1hmrmut | /r/LocalLLaMA/comments/1hmrmut/clipboard_conqueror_is_10_now_supports_esc_to/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'gn85iSDn83tw6wx7jxeWi3Wce6GMvy-Geczg1qA4EKQ', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=108&crop=smart&auto=webp&s=13c43de44c4501c4a0e011cb6dcf46194057250c', 'width': 108}, {'height': 67, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=216&crop=smart&auto=webp&s=f99a1813a6c7001c6734c4a91579862e521b1167', 'width': 216}, {'height': 100, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=320&crop=smart&auto=webp&s=ed39917419dde295ecc6c72e79709a24a0e3ee11', 'width': 320}, {'height': 200, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=640&crop=smart&auto=webp&s=6cd4f51591eeac1a514326d9872a1821b86f0f44', 'width': 640}, {'height': 300, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=960&crop=smart&auto=webp&s=72f156a7a947d458a913289a1503e33d5257719a', 'width': 960}, {'height': 337, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=1080&crop=smart&auto=webp&s=cb00cf745b6473ddbaef189691c467fd52c0056c', 'width': 1080}], 'source': {'height': 516, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?auto=webp&s=f6a6198a3dc6a55d6a6a3da4467844cf4902030b', 'width': 1650}, 'variants': {}}]} |
Website for seeing best open model for a specific computer | 3 | Is there a website for seeing the best local model to run on a specific computer? | 2024-12-26T17:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hmt4vi/website_for_seeing_best_open_model_for_a_specific/ | Benna100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmt4vi | false | null | t3_1hmt4vi | /r/LocalLLaMA/comments/1hmt4vi/website_for_seeing_best_open_model_for_a_specific/ | false | false | self | 3 | null |
What is the easiest / most reasonable way to run V3 in the cloud? | 4 | I know not local, but I can't run that model local right now.
Any suggestions? Thanks a lot! | 2024-12-26T17:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hmthqh/what_is_the_easiest_most_reasonable_way_to_run_v3/ | Funny_Acanthaceae285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmthqh | false | null | t3_1hmthqh | /r/LocalLLaMA/comments/1hmthqh/what_is_the_easiest_most_reasonable_way_to_run_v3/ | false | false | self | 4 | null |
The DeepSeek v3 in OpenRouter is not the latest v3 we expected. | 4 | [OpenRouter snapshot](https://preview.redd.it/m5ulcn01g89e1.png?width=1160&format=png&auto=webp&s=ebba806ad2a6c146a5be21feb7831c15fc4d048d)
Though from the model description, it seems like it is the V3 and the URL links to the official DeepSeek v3 repository. It is NOT the V3 i expected. Why? | 2024-12-26T18:05:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hmu0st/the_deepseek_v3_in_openrouter_is_not_the_latest/ | houchenglin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmu0st | false | null | t3_1hmu0st | /r/LocalLLaMA/comments/1hmu0st/the_deepseek_v3_in_openrouter_is_not_the_latest/ | false | false | 4 | null |
|
Phi-4 finetuning | 4 | I know Microsoft has not released an official Phi-4 , and people have unofficially released it. Does anyone know if pre-made trainers like huggingface autotrain advanced or Unsloth can fine tune this model yet? I know both can do phi 3.5 so I assume the answer is yes but if anyone has tried it i would love to hear from you! | 2024-12-26T18:09:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hmu4j8/phi4_finetuning/ | jay2jp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmu4j8 | false | null | t3_1hmu4j8 | /r/LocalLLaMA/comments/1hmu4j8/phi4_finetuning/ | false | false | self | 4 | null |
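One way to answer the question above empirically is to just try loading one of the unofficial uploads. Below is a hedged sketch using Unsloth's loader; the repo id is a placeholder, and whether current Unsloth/transformers versions accept the Phi-4 architecture is exactly the open question in the post.

```python
# Hedged sketch: probe whether an unofficial Phi-4 upload loads for LoRA training.
# The repo id is a placeholder, not a real upload; an architecture error from
# from_pretrained would mean the trainer doesn't support the model yet.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="someuser/phi-4-unofficial",  # placeholder repo id
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
print("Loaded and LoRA-wrapped OK; the usual SFT recipe should apply from here.")
```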
Is this the SOTA reward model for guided search currently? | 1 | [removed] | 2024-12-26T18:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hmui6x/is_this_the_sota_reward_model_for_guided_search/ | hyperna21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmui6x | false | null | t3_1hmui6x | /r/LocalLLaMA/comments/1hmui6x/is_this_the_sota_reward_model_for_guided_search/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pE9NPQSh6krdr70aeyDX26JtfInHjn0t491UmmN7yUQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uRZz2q516kakokAM-OSqBWwOYGiUeOkQyTJtGuHGKbU.jpg?width=108&crop=smart&auto=webp&s=76049c182f9a4d2fa5fece26c51ef2f5bf1a0740', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uRZz2q516kakokAM-OSqBWwOYGiUeOkQyTJtGuHGKbU.jpg?width=216&crop=smart&auto=webp&s=c55b286072698fada8ae532d1b499362cd303e3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uRZz2q516kakokAM-OSqBWwOYGiUeOkQyTJtGuHGKbU.jpg?width=320&crop=smart&auto=webp&s=0a00c67f2963bdda9afe6b8183194dcab3875ba8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uRZz2q516kakokAM-OSqBWwOYGiUeOkQyTJtGuHGKbU.jpg?width=640&crop=smart&auto=webp&s=d61087622518cf1e0e94b010af4dab24853806a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uRZz2q516kakokAM-OSqBWwOYGiUeOkQyTJtGuHGKbU.jpg?width=960&crop=smart&auto=webp&s=01157cb375e965bc19758a9b2490859e48be3f46', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uRZz2q516kakokAM-OSqBWwOYGiUeOkQyTJtGuHGKbU.jpg?width=1080&crop=smart&auto=webp&s=50551805f21fe1f23478320736f7b3c3465260ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uRZz2q516kakokAM-OSqBWwOYGiUeOkQyTJtGuHGKbU.jpg?auto=webp&s=f6e45f05bf6156b650ac71351be9c888074534d2', 'width': 1200}, 'variants': {}}]} |
Guidance on how to get started with LLMs in low spec laptop | 1 | [removed] | 2024-12-26T18:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hmup2d/guidance_on_how_to_get_started_with_llms_in_low/ | FastCommission2913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmup2d | false | null | t3_1hmup2d | /r/LocalLLaMA/comments/1hmup2d/guidance_on_how_to_get_started_with_llms_in_low/ | false | false | self | 1 | null |
Deepseek v3 tops Kagi LLM benchmark for open-weights models - good, fast, cheap | 43 | 2024-12-26T19:04:16 | https://help.kagi.com/kagi/ai/llm-benchmark.html | anti-hero | help.kagi.com | 1970-01-01T00:00:00 | 0 | {} | 1hmvc3p | false | null | t3_1hmvc3p | /r/LocalLLaMA/comments/1hmvc3p/deepseek_v3_tops_kagi_llm_benchmark_for/ | false | false | default | 43 | null |
|
What’s the best quality Speech to Text Transcription API? | 1 | [removed] | 2024-12-26T19:24:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hmvsbu/whats_the_best_quality_speech_to_text/ | Spammesir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmvsbu | false | null | t3_1hmvsbu | /r/LocalLLaMA/comments/1hmvsbu/whats_the_best_quality_speech_to_text/ | false | false | self | 1 | null |
Anyone get the Bartowski Q4_K_M quant of QVQ-72b-Preview to spit out English instead of Mandarin (using Ollama)? | 10 | Loaded up the quant in Ollama off of HF, but all I get is Mandarin. Watched a couple of YouTubers getting the same results as me.
Is this a llama.cpp vision support issue, or am I just missing something?
Not seeing it in anyone’s Ollama repositories yet either, so I guess I’ll just wait. 🤷♂️. I know I could run vLLM, but for some reason I can’t ever get it to play nice with WSL in my environment.
Any help is appreciated.
The command I used was
ollama run hf.co/bartowski/QVQ-72b-Preview-GGUF:Q4_K_M
What’s the best quality Speech to Text transcription API? | 1 | [removed] | 2024-12-26T19:31:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hmvy44/whats_the_best_quality_speech_to_text/ | Spammesir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmvy44 | false | null | t3_1hmvy44 | /r/LocalLLaMA/comments/1hmvy44/whats_the_best_quality_speech_to_text/ | false | false | self | 1 | null |
Depth map generator for faces | 2 | Over the past several months, I’ve seen news about models that can, often in real time, track depth of images or videos (perhaps like that which is used by Apple Vision Pro).
I’m seeking to create a depth map that is as detailed and accurate as possible of primarily headshots/portraits. Stuff I’ve seen is good at discriminating foreground from background elements, but maybe not so much at the level of detail characteristic of the human face.
This would be for a retouching/photo editing application. Is anyone aware of local models or even paid online services that can do this? Photoshop has a neural filter option that can generate a depth map, but it’s not very useful for my needs. At least as far as I’ve been able to make that work. | 2024-12-26T19:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hmwcua/depth_map_generator_for_faces/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmwcua | false | null | t3_1hmwcua | /r/LocalLLaMA/comments/1hmwcua/depth_map_generator_for_faces/ | false | false | self | 2 | null |
Best Vision Model on LM Studio? | 4 | Wanted to try a few different vision models like Molmo or Pixtral but they're unavailable on LM Studio I guess? 'Pixtral Text Only' is there but that's about it.
I've tried a few other programs but LM Studio is just... easy to use, tbh.
Any suggestions? | 2024-12-26T19:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hmwgdj/best_vision_model_on_lm_studio/ | PangurBanTheCat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmwgdj | false | null | t3_1hmwgdj | /r/LocalLLaMA/comments/1hmwgdj/best_vision_model_on_lm_studio/ | false | false | self | 4 | null |
Title: Experimenting with AutoCode in LocalLLaMA v3 | 1 | [removed] | 2024-12-26T20:00:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hmwkw0/title_experimenting_with_autocode_in_localllama_v3/ | Ok_Recover_4730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmwkw0 | false | null | t3_1hmwkw0 | /r/LocalLLaMA/comments/1hmwkw0/title_experimenting_with_autocode_in_localllama_v3/ | false | false | self | 1 | null |
Best small local llm for laptops | 6 | I was wondering if anyone knows the best small llm I can run locally on my laptop, cpu only.
I’ve tried out different sizes and qwen 2.5 32b was the largest that would fit on my laptop (32gb ram, i7 10th gen cpu) but it ran at about 1 tok/sec which is unusable.
Gemma 2 9b at q4 runs at 3tok/sec which is slightly better but still unusable. | 2024-12-26T20:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hmwu0u/best_small_local_llm_for_laptops/ | The_GSingh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmwu0u | false | null | t3_1hmwu0u | /r/LocalLLaMA/comments/1hmwu0u/best_small_local_llm_for_laptops/ | false | false | self | 6 | null |
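For apples-to-apples tokens/sec numbers on a CPU-only laptop like the one above, a small timing harness helps. Here is a hedged sketch with llama-cpp-python; the GGUF path, thread count, and prompt are placeholders to adjust for your machine.

```python
# Hedged sketch for timing CPU-only generation speed with llama-cpp-python.
# The model path is a placeholder: point it at whatever small quant you test.
import time
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-9b-it-Q4_K_M.gguf", n_ctx=2048, n_threads=8)

start = time.time()
out = llm("Explain speculative decoding in one paragraph.", max_tokens=128)
elapsed = time.time() - start

n_out = out["usage"]["completion_tokens"]
print(f"{n_out} tokens in {elapsed:.1f}s -> {n_out / elapsed:.2f} tok/s")
```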
Which models and of what size does Apple Intelligence use? | 1 | Just curious—I have it enabled on my Mac and iPhone, but I haven't really explored it yet since I'm used to using BoltAI with local models. The few things I've tried so far are surprisingly fast, which makes me think the models must be pretty small, right? Also, do these models just use the GPU like the ones I run with Ollama, or do they also utilize the Neural Engine? | 2024-12-26T20:32:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hmxarg/which_models_and_of_what_size_does_apple/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmxarg | false | null | t3_1hmxarg | /r/LocalLLaMA/comments/1hmxarg/which_models_and_of_what_size_does_apple/ | false | false | self | 1 | null |
BEFORE WE KEEP MOGGING US Labs and praising Deepseek | 0 | can we pls acknowledge:
1. Benchmark performance does not always equate with "the vibes"
2. How much cheaper training a model would be if you just train on others' outputs
p.s. Im not a Deepseek hater - Im currently using it and I really like it | 2024-12-26T20:34:24 | lessis_amess | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmxbvl | false | null | t3_1hmxbvl | /r/LocalLLaMA/comments/1hmxbvl/before_we_keep_mogging_us_labs_and_praising/ | false | false | 0 | {'enabled': True, 'images': [{'id': '13DHEF1B3-_1Yrcj85FG05glbRxL_c_cb3DP3q8wUwU', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/lj8gwxa2799e1.jpeg?width=108&crop=smart&auto=webp&s=51077e00cd012d5b5701a092934ea1091314cb7c', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/lj8gwxa2799e1.jpeg?width=216&crop=smart&auto=webp&s=b23c934ecc21fbd10a4d5c4a5bf94c9aadc4a9c2', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/lj8gwxa2799e1.jpeg?width=320&crop=smart&auto=webp&s=9e42c2c11e70807079bd24dc04d4085e0e2088d5', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/lj8gwxa2799e1.jpeg?width=640&crop=smart&auto=webp&s=2c2fa9c8c2fcabf00c5d16b903895320dcea618c', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/lj8gwxa2799e1.jpeg?width=960&crop=smart&auto=webp&s=d460ebf61b5b1d87707b58892530f43f242ca1ef', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/lj8gwxa2799e1.jpeg?width=1080&crop=smart&auto=webp&s=c45c17f901a36e7e2ae5966198b0dd8a03dd6f46', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/lj8gwxa2799e1.jpeg?auto=webp&s=d3ba36d226997ffefb89554499cbcfc69f3898ba', 'width': 3024}, 'variants': {}}]} |
||
Creating a .exe to download ollama / open web ui / docker and possibly python lol all in one click | 2 | Just need to hear from people much more experienced than I am, to see if this is even possible before I waste any more mental bandwidth lol.
I essentially want to know if it is feasible to build a one-click "installer" for Python, Ollama, 2-3 models, Open WebUI, and Docker. My company has been flirting with AI but we can't get past HIPAA, and I know one thing that does lol.
I'm running all my models on my home pc and hosting them with NGROK to use at the office, so I can already see the future of AI, just checking to see if anything like this has already been built or if it's even currently possible.
| 2024-12-26T20:37:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hmxemm/creating_a_exe_to_download_ollama_open_web_ui/ | B_Anthony12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmxemm | false | null | t3_1hmxemm | /r/LocalLLaMA/comments/1hmxemm/creating_a_exe_to_download_ollama_open_web_ui/ | false | false | self | 2 | null |
Diffusion Model for ASCII Art? | 1 | Hello, I'm a student researcher with a bit of time on their hands over winter break, recently on YouTube I saw a video with CLIs and the ASCII artwork within them, and I got curious if there were any diffusion models specifically for this (not generating images of ascii art but ascii art directly). Curious if anyone else has found them or if they would be interested in one and was debating training my own, not really my field of ML research (I used to dabble with them a bit but moved away as it was a very competitive space and I was still learning a lot and not good enough to make meaningful contributions) so I'm not sure how diffusion architecture would adapt to a more discrete space as well as the data sparsity that comes with it but I'd be willing to give it a shot.
TL;DR: Basically I'm curious if anybody knows of something that has been done like this, if people are interested if it hasn't, and any advice as to how one might do it. | 2024-12-26T20:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hmxet9/diffusion_model_for_ascii_art/ | Stelath45634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmxet9 | false | null | t3_1hmxet9 | /r/LocalLLaMA/comments/1hmxet9/diffusion_model_for_ascii_art/ | false | false | self | 1 | null |
DeepSeek is better than 4o on most benchmarks at 10% of the price? | 831 | 2024-12-26T20:43:43 | Odd_Tumbleweed574 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hmxjbn | false | null | t3_1hmxjbn | /r/LocalLLaMA/comments/1hmxjbn/deepseek_is_better_than_4o_on_most_benchmarks_at/ | false | false | 831 | {'enabled': True, 'images': [{'id': 'fEm6_CGl6TRB3ONQ3Wdbt8HG6xlA9Qg0ytPJR-P3rDI', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/gwmj6ili899e1.png?width=108&crop=smart&auto=webp&s=8543d88a5826c5f7605031fcb8df0d0e1feb6746', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/gwmj6ili899e1.png?width=216&crop=smart&auto=webp&s=971eb81ea1c1a83ccb1d33a3ce212616758dce49', 'width': 216}, {'height': 189, 'url': 'https://preview.redd.it/gwmj6ili899e1.png?width=320&crop=smart&auto=webp&s=3da300cac09589860f28f4f8486dd1ab9fd08a3b', 'width': 320}, {'height': 379, 'url': 'https://preview.redd.it/gwmj6ili899e1.png?width=640&crop=smart&auto=webp&s=cdc6cb09d7461bbfff8791506f21ea7909ec0406', 'width': 640}, {'height': 568, 'url': 'https://preview.redd.it/gwmj6ili899e1.png?width=960&crop=smart&auto=webp&s=978e3d7a60a0df77ded31925dfba824a4c0bddca', 'width': 960}, {'height': 639, 'url': 'https://preview.redd.it/gwmj6ili899e1.png?width=1080&crop=smart&auto=webp&s=7654b52207404c048b7178801acd41d2c1cda69d', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://preview.redd.it/gwmj6ili899e1.png?auto=webp&s=9da000b695d297beffc039461a1413636f52239a', 'width': 1722}, 'variants': {}}]} |
|||
RAG Application for a specific domain. | 1 | [removed] | 2024-12-26T20:53:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hmxqut/rag_application_for_a_specific_domain/ | Internal-Plate5893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmxqut | false | null | t3_1hmxqut | /r/LocalLLaMA/comments/1hmxqut/rag_application_for_a_specific_domain/ | false | false | self | 1 | null |
DeepSeek V3 codeforces benchmark | 15 | Hey, does this mean that a higher score is better?
https://preview.redd.it/6o0ajmu2j99e1.png?width=323&format=png&auto=webp&s=1ffe3445088a9896833b9015b1bf073f9e7186e5
| 2024-12-26T21:43:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hmyuvi/deepseek_v3_codeforces_benchmark/ | OkStatement3655 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmyuvi | false | null | t3_1hmyuvi | /r/LocalLLaMA/comments/1hmyuvi/deepseek_v3_codeforces_benchmark/ | false | false | 15 | null |
|
How do you use LLMs? | 0 | I know you might fit in multiple categories, but I'm curious to know whether people here just pay for ChatGPT Plus and call it a day (a month), or whether you're using your own UI or VSCode, etc., with an API and paying per million tokens or something.
[View Poll](https://www.reddit.com/poll/1hmz2zn) | 2024-12-26T21:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hmz2zn/how_do_you_use_llms/ | DrVonSinistro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmz2zn | false | null | t3_1hmz2zn | /r/LocalLLaMA/comments/1hmz2zn/how_do_you_use_llms/ | false | false | self | 0 | null |
Evaluating performance of zero shot/ few shot classification on unannotated data | 1 | [removed] | 2024-12-26T21:54:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hmz3yv/evaluating_performance_of_zero_shot_few_shot/ | MaterialThing9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmz3yv | false | null | t3_1hmz3yv | /r/LocalLLaMA/comments/1hmz3yv/evaluating_performance_of_zero_shot_few_shot/ | false | false | self | 1 | null |
🚀 Ng Infinity Craft: A 7-Day AI Experiment in Browser Gaming 🌌 | 1 | [removed] | 2024-12-26T21:59:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hmz7xx/ng_infinity_craft_a_7day_ai_experiment_in_browser/ | Roseldine | self.LocalLLaMA | 2024-12-26T22:08:12 | 0 | {} | 1hmz7xx | false | null | t3_1hmz7xx | /r/LocalLLaMA/comments/1hmz7xx/ng_infinity_craft_a_7day_ai_experiment_in_browser/ | false | false | self | 1 | null |
OAI System Prompt 26122024 | 1 | [removed] | 2024-12-26T22:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hmzhhi/oai_system_prompt_26122024/ | AdventurousSwim1312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hmzhhi | false | null | t3_1hmzhhi | /r/LocalLLaMA/comments/1hmzhhi/oai_system_prompt_26122024/ | false | false | self | 1 | null |
how to run deepseek v3 on ollama or lmstudio? | 1 | [removed] | 2024-12-26T22:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hn07nh/how_to_run_deepseek_v3_on_ollama_or_lmstudio/ | ETBigPhone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn07nh | false | null | t3_1hn07nh | /r/LocalLLaMA/comments/1hn07nh/how_to_run_deepseek_v3_on_ollama_or_lmstudio/ | false | false | self | 1 | null |
how to run deepseek v3 on ollama or lmstudio? | 2 | Sorry if this is a stupid question, but how do I run Deepseek V3? It's not showing up in lmstudio or ollama library. I git cloned the entire 600GB+ repo. Can I run this locally? I've only ever used models that appear in lmstudio or ollama library, never added one that doesnt exist yet. | 2024-12-26T22:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hn091o/how_to_run_deepseek_v3_on_ollama_or_lmstudio/ | RouteGuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn091o | false | null | t3_1hn091o | /r/LocalLLaMA/comments/1hn091o/how_to_run_deepseek_v3_on_ollama_or_lmstudio/ | false | false | self | 2 | null |
Llama 3.3:70b lessons ran local on my Apple MBP max M3 | 1 | [removed] | 2024-12-26T23:22:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hn0z7p/llama_3370b_lessons_ran_local_on_my_apple_mbp_max/ | AIForOver50Plus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn0z7p | false | null | t3_1hn0z7p | /r/LocalLLaMA/comments/1hn0z7p/llama_3370b_lessons_ran_local_on_my_apple_mbp_max/ | false | false | self | 1 | null |
Lessons from playing around locally with Llama 3.3:70b coding against OpenAPI | 1 | [removed] | 2024-12-26T23:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hn121r/lessons_from_playing_around_locally_with_llama/ | AIForOver50Plus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn121r | false | null | t3_1hn121r | /r/LocalLLaMA/comments/1hn121r/lessons_from_playing_around_locally_with_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TI5OvEM_RS1oA_lgxMqPf686qYboA_j-y87Jhc4ynK8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rXbT3cAnAP4-fbZ6TGoHSI1zGlKPjTVEJUAhOsZE59A.jpg?width=108&crop=smart&auto=webp&s=6e874e8b2d91fd8e0c7138956b964928ab9fecc8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rXbT3cAnAP4-fbZ6TGoHSI1zGlKPjTVEJUAhOsZE59A.jpg?width=216&crop=smart&auto=webp&s=43ec85e594da7217a6a9d6f5427f5f1fceb5acba', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rXbT3cAnAP4-fbZ6TGoHSI1zGlKPjTVEJUAhOsZE59A.jpg?width=320&crop=smart&auto=webp&s=32bf344d291aa3a8517e9c32cca823a2005bd7d1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rXbT3cAnAP4-fbZ6TGoHSI1zGlKPjTVEJUAhOsZE59A.jpg?auto=webp&s=758d1e708111a8d39154d8b9bd9ba6e7628bb2bf', 'width': 480}, 'variants': {}}]} |
Does anyone have a guide for implementing an open source model for document understanding? | 2 | Hello, I am working on a project where I download an LLM and use it to locally query a pdf/document I have (to be able to ask my documents any questions in any context and retrieve an accurate answer).
I am using this option for privacy reasons; I don't want my documents to be used for training.
I am running into issues where any model I use just doesn't understand my document, even if I scrape the text with a text scraper in Python.
I appreciate any help or guidance with this! | 2024-12-27T00:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hn2mdd/does_anyone_have_a_guide_for_implementing_an_open/ | Pointfit_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn2mdd | false | null | t3_1hn2mdd | /r/LocalLLaMA/comments/1hn2mdd/does_anyone_have_a_guide_for_implementing_an_open/ | false | false | self | 2 | null |
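For readers hitting the same wall, here is a minimal sketch of the kind of local pipeline being asked about (not from the post; the libraries and model path are illustrative choices): chunk the scraped text, embed the chunks, retrieve the closest ones, and hand only those to the model.

```python
from llama_cpp import Llama
from sentence_transformers import SentenceTransformer, util

def chunk(text, size=800, overlap=200):
    # fixed-size character chunks with overlap so answers spanning a boundary survive
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

embedder = SentenceTransformer("all-MiniLM-L6-v2")    # small local embedding model
llm = Llama(model_path="model.gguf", n_ctx=8192)      # any local GGUF chat model

def ask(doc_text: str, question: str, k: int = 4) -> str:
    chunks = chunk(doc_text)
    chunk_emb = embedder.encode(chunks, convert_to_tensor=True)
    q_emb = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, chunk_emb, top_k=k)[0]
    context = "\n\n".join(chunks[h["corpus_id"]] for h in hits)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt, max_tokens=256)["choices"][0]["text"]
```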
o1 is so unimpressive for coding | 112 | It can occasionally fix an issue that sonnet is struggling with, but more often than not it is useless. | 2024-12-27T01:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hn31k0/o1_is_so_unimpressive_for_coding/ | dalhaze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn31k0 | false | null | t3_1hn31k0 | /r/LocalLLaMA/comments/1hn31k0/o1_is_so_unimpressive_for_coding/ | false | false | self | 112 | null |
Deepseek V3 on livecodebench (highest non-reasoning model) | 186 | 2024-12-27T01:22:35 | https://imgur.com/a/RVC6gB0 | Charuru | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1hn3fvx | false | {'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 712, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FRVC6gB0%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FRVC6gB0&image=https%3A%2F%2Fi.imgur.com%2F9dKAXSs.jpg%3Ffb&type=text%2Fhtml&schema=imgur" width="600" height="712" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 2322, 'thumbnail_url': 'https://i.imgur.com/9dKAXSs.jpg?fb', 'thumbnail_width': 2074, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/RVC6gB0', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'} | t3_1hn3fvx | /r/LocalLLaMA/comments/1hn3fvx/deepseek_v3_on_livecodebench_highest_nonreasoning/ | false | false | 186 | {'enabled': False, 'images': [{'id': 'A5jiHOAU005M06IL-Lo210dDZz7W8nP3S_IotLgbM08', 'resolutions': [{'height': 120, 'url': 'https://external-preview.redd.it/IUCaWGza9pGfCbuLJ3eC3HGRuLWS27JhhXKDeTFT4Rc.jpg?width=108&crop=smart&auto=webp&s=048cc48c3a533ef20263ff2b45913ab69d6967d9', 'width': 108}, {'height': 241, 'url': 'https://external-preview.redd.it/IUCaWGza9pGfCbuLJ3eC3HGRuLWS27JhhXKDeTFT4Rc.jpg?width=216&crop=smart&auto=webp&s=8b19d6411022647b28683dbd700913b43eaf3480', 'width': 216}, {'height': 358, 'url': 'https://external-preview.redd.it/IUCaWGza9pGfCbuLJ3eC3HGRuLWS27JhhXKDeTFT4Rc.jpg?width=320&crop=smart&auto=webp&s=2199f1f27aac87fc6d7a04af1a9e45edb3d30c7d', 'width': 320}, {'height': 716, 'url': 'https://external-preview.redd.it/IUCaWGza9pGfCbuLJ3eC3HGRuLWS27JhhXKDeTFT4Rc.jpg?width=640&crop=smart&auto=webp&s=f85ee0ae6dd72e4f3fa89c7878549fc7912d05e5', 'width': 640}, {'height': 1074, 'url': 'https://external-preview.redd.it/IUCaWGza9pGfCbuLJ3eC3HGRuLWS27JhhXKDeTFT4Rc.jpg?width=960&crop=smart&auto=webp&s=7882e6bbcc084ddee97804acac5b9538dac48c01', 'width': 960}, {'height': 1209, 'url': 'https://external-preview.redd.it/IUCaWGza9pGfCbuLJ3eC3HGRuLWS27JhhXKDeTFT4Rc.jpg?width=1080&crop=smart&auto=webp&s=aac01a3f839159fcfb6ef0873af696df580de296', 'width': 1080}], 'source': {'height': 2322, 'url': 'https://external-preview.redd.it/IUCaWGza9pGfCbuLJ3eC3HGRuLWS27JhhXKDeTFT4Rc.jpg?auto=webp&s=08462f7263e60fed03cceb7b85876879345b3a00', 'width': 2074}, 'variants': {}}]} |
||
3 new 32 bit mastered models W augmented quants + 3 updated models with augmented quants... for creative, roleplay and other usage. | 27 | Hey, from DavidAU:
Just been pushing the envelope a bit (okay sorry about the pun), with 3 float32/32bit mastered models with additional quant (all: q2k to q8) augments too.
And I've updated some of my most popular models with new quants and augmented quants as well.
More bits / augmented quants increase performance in terms of both instruction following and output generation.
Example generations at each model repo.
**New 32 bit/float 32 mastered with augmented quants:**
[https://huggingface.co/DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf](https://huggingface.co/DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf)
(first 32 bit MOE, all experts activated by default, with augmented quants)
[https://huggingface.co/DavidAU/Gemma-The-Writer-Mighty-Sword-9B-GGUF](https://huggingface.co/DavidAU/Gemma-The-Writer-Mighty-Sword-9B-GGUF)
(user reports now 16K context works without issue)
[https://huggingface.co/DavidAU/Gemma-The-Writer-N-Restless-Quill-V2-Enhanced32-10B-Uncensored-GGUF](https://huggingface.co/DavidAU/Gemma-The-Writer-N-Restless-Quill-V2-Enhanced32-10B-Uncensored-GGUF)
**Updated models, with new quants (revised/refreshed) and augmented quants:**
[https://huggingface.co/DavidAU/L3-DARKEST-PLANET-16.5B-GGUF](https://huggingface.co/DavidAU/L3-DARKEST-PLANET-16.5B-GGUF)
[https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF](https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF)
[https://huggingface.co/DavidAU/Gemma-The-Writer-N-Restless-Quill-10B-Uncensored-GGUF](https://huggingface.co/DavidAU/Gemma-The-Writer-N-Restless-Quill-10B-Uncensored-GGUF)
**Bonus: Monster "MOE":**
[https://huggingface.co/DavidAU/L3-MOE-2X16.5B-DARKEST-Planet-Song-of-Fire-29B-GGUF](https://huggingface.co/DavidAU/L3-MOE-2X16.5B-DARKEST-Planet-Song-of-Fire-29B-GGUF)
**For all models / listed by category:**
[https://huggingface.co/DavidAU](https://huggingface.co/DavidAU)
**Source / full precision also available here:**
[https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be](https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be) | 2024-12-27T01:35:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hn3p0o/3_new_32_bit_mastered_models_w_augmented_quants_3/ | Dangerous_Fix_5526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn3p0o | false | null | t3_1hn3p0o | /r/LocalLLaMA/comments/1hn3p0o/3_new_32_bit_mastered_models_w_augmented_quants_3/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'dRRAMzkeY36TTK_Xmv9eXV_-pWibCmTRqFbPzzoyZa0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/82A5sjevH-THZpM7rldRMCpp2I0N-Ipltl8MzpGGUIY.jpg?width=108&crop=smart&auto=webp&s=4fd24886310237b91131ee141d688020a4c860c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/82A5sjevH-THZpM7rldRMCpp2I0N-Ipltl8MzpGGUIY.jpg?width=216&crop=smart&auto=webp&s=cdf9fbf3b1cbb96ab5141a5805400ada5a23b044', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/82A5sjevH-THZpM7rldRMCpp2I0N-Ipltl8MzpGGUIY.jpg?width=320&crop=smart&auto=webp&s=f9eb3fe3fdb803f969af5b0f6687e01dea2c867f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/82A5sjevH-THZpM7rldRMCpp2I0N-Ipltl8MzpGGUIY.jpg?width=640&crop=smart&auto=webp&s=048a0df68e21461d55a165f2899e33f6a7414827', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/82A5sjevH-THZpM7rldRMCpp2I0N-Ipltl8MzpGGUIY.jpg?width=960&crop=smart&auto=webp&s=e737f43b9dffbd64d09afb888c25be60fb821c90', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/82A5sjevH-THZpM7rldRMCpp2I0N-Ipltl8MzpGGUIY.jpg?width=1080&crop=smart&auto=webp&s=c39faa905737891743b55d010853bfd45ec9658f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/82A5sjevH-THZpM7rldRMCpp2I0N-Ipltl8MzpGGUIY.jpg?auto=webp&s=592f40e10c95fd002d46b00c6732aed4d0f04ab4', 'width': 1200}, 'variants': {}}]} |
I believe I have built AGI, or at least the operating system for AGI. Does this align with what you believe to be the definition of AGI? Am I incorrect to call this AGI? [No Links] | 0 | No links and name left out because I want to get feedback and discussion. I built software that runs AI on your computer (without cloud services) and gives AI (LLMs) the ability to interact with your digital ecosystem the same way a human can. It can interact with your filesystem, generate files, and even use your browser the same way you can. It's also integrated with your calendar, email, and other services and can execute actions on those too. It can also do all this under the hood, so you can use your computer as normal while it still runs. Bear with me.
My understanding of the definition of AGI is an AI that can complete a range of tasks better than, or at least at the same level as, a human. Based on this, if an LLM is given the tools to interact with a computer and is smart enough, wouldn't that mean AGI is achieved?
Moreover, the software I built has a feature that allows you to create workflows and automations. Yes, right now it requires a human in the loop for safety and observability reasons, but it's very easy to abstract that away and leave the decision to the AI.
So with these features you now have an AI that can interact with your digital ecosystem just like a human, so the only missing piece is intelligence. But I’d like to argue that we already have the baseline intelligence to call such a system AGI. The way I like to think of it is through the lens of a quote I came across years ago.
"Think of how stupid the average person is, and realize half of them are stupider than that." - George Carlin
Current LLMs are definitely way smarter than most people, so is it fair to call this AGI? Or must it be smarter than every person, in which case isn’t that ASI?
Thank you for reading this far and I’d like to see what you all think. | 2024-12-27T01:37:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hn3q6h/i_believe_i_have_built_agi_or_at_least_the/ | numinouslymusing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn3q6h | false | null | t3_1hn3q6h | /r/LocalLLaMA/comments/1hn3q6h/i_believe_i_have_built_agi_or_at_least_the/ | false | false | self | 0 | null |
Naive noob question - How much percent will the profit increase for your startup after you implemented deepseekv3 | 1 | [removed] | 2024-12-27T01:37:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hn3q9r/naive_noob_question_how_much_percent_will_the/ | chatsgpt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn3q9r | false | null | t3_1hn3q9r | /r/LocalLLaMA/comments/1hn3q9r/naive_noob_question_how_much_percent_will_the/ | false | false | self | 1 | null |
DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought | 1 | [removed] | 2024-12-27T01:46:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hn3w73/drto1_optimized_deep_reasoning_translation_via/ | rank_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn3w73 | false | null | t3_1hn3w73 | /r/LocalLLaMA/comments/1hn3w73/drto1_optimized_deep_reasoning_translation_via/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rQftdVkVyWBI-uwh5y-pvRnZpivO1AyUUCvNqhXexLw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CMRmrDLvQ95QC7POGWcAAJ9xO5J40-z4fAiJ-WJDyOk.jpg?width=108&crop=smart&auto=webp&s=1767007d6a97d5cca23f4de852d985967fbd587c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CMRmrDLvQ95QC7POGWcAAJ9xO5J40-z4fAiJ-WJDyOk.jpg?width=216&crop=smart&auto=webp&s=58e8517ac19fbca1c6b6c50ea92ad10aa9221aa9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CMRmrDLvQ95QC7POGWcAAJ9xO5J40-z4fAiJ-WJDyOk.jpg?width=320&crop=smart&auto=webp&s=2fc54509e759a3c5ebab02863d4311abc1a5600b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CMRmrDLvQ95QC7POGWcAAJ9xO5J40-z4fAiJ-WJDyOk.jpg?width=640&crop=smart&auto=webp&s=705139468699a322091f91e7d7c2c3b5a285147d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CMRmrDLvQ95QC7POGWcAAJ9xO5J40-z4fAiJ-WJDyOk.jpg?width=960&crop=smart&auto=webp&s=ca7a76fc2eacf51c6becd2d746433471709c1e95', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CMRmrDLvQ95QC7POGWcAAJ9xO5J40-z4fAiJ-WJDyOk.jpg?width=1080&crop=smart&auto=webp&s=59e120f070bb98908dacf4496a5be2eda9367c9b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CMRmrDLvQ95QC7POGWcAAJ9xO5J40-z4fAiJ-WJDyOk.jpg?auto=webp&s=396c1ca605d426f306c08c1d301ddede504b8431', 'width': 1200}, 'variants': {}}]} |
Does OpenRouter connect to the correct version of DeepSeek-V3, or does the official DeepSeek website use a system prompt? | 0 | 2024-12-27T01:53:26 | https://www.reddit.com/gallery/1hn4143 | Emotional-Metal4879 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hn4143 | false | null | t3_1hn4143 | /r/LocalLLaMA/comments/1hn4143/does_openrouter_connect_to_the_correct_version_of/ | false | false | 0 | null |
||
Where to hire LLM engineers or AI devs? | 1 | [removed] | 2024-12-27T02:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hn4opo/where_to_hire_llm_engineers_or_ai_devs/ | TimeWizardStudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn4opo | false | null | t3_1hn4opo | /r/LocalLLaMA/comments/1hn4opo/where_to_hire_llm_engineers_or_ai_devs/ | false | false | self | 1 | null |
Godview | 9 | Super experimental project I made over Christmas.
TLDR: map + LLM
This came from a personal need of mine. I'm currently planning an expansion for a brick-and-mortar business, and I wanted to see some geo data for the locations I'm scoping out around the country, so I made this to help me out.
My personal application is seeing the geographic competitive landscape, ideal location placement, etc.
Thinking about adding zoning data and demographic data too. Would love some ideas on additional data for display.
Link: [godview.ai](https://godview.ai)
| 2024-12-27T02:47:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hn51z6/godview/ | ranoutofusernames__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn51z6 | false | null | t3_1hn51z6 | /r/LocalLLaMA/comments/1hn51z6/godview/ | false | false | self | 9 | null |
Who is using MCP with local LL MN s? | 1 | [deleted] | 2024-12-27T03:15:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hn5k53 | false | null | t3_1hn5k53 | /r/LocalLLaMA/comments/1hn5k53/who_is_using_mcp_with_local_ll_mn_s/ | false | false | default | 1 | null |
||
How to retrieve internal variables with inputting something into LLM | 1 | If I understand correctly, when I input something into an LLM, I can only directly access the output, right? How can I retrieve the internal variables? | 2024-12-27T03:16:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hn5l2c/how_to_retrieve_internal_variables_with_inputting/ | Ok_Web_2949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn5l2c | false | null | t3_1hn5l2c | /r/LocalLLaMA/comments/1hn5l2c/how_to_retrieve_internal_variables_with_inputting/ | false | false | self | 1 | null |
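For what it's worth, with Hugging Face Transformers the internal activations are exposed when you request them explicitly; a small sketch (the model name is just an example, any causal LM works the same way):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # illustrative; swap in any locally available causal LM
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True, output_attentions=True)

print(len(out.hidden_states))       # embedding layer + one tensor per transformer layer
print(out.hidden_states[-1].shape)  # (batch, seq_len, hidden_dim)
print(out.attentions[0].shape)      # (batch, num_heads, seq_len, seq_len)
```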
Who is using MCP with local LLMs? | 3 | Has anyone gotten it to work, which inference engine, which API, which models and which MCP servers? Is it useful or hype? I don't see a lot of chatter about it anymore. | 2024-12-27T03:17:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hn5lek/who_is_using_mcp_with_local_llms/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn5lek | false | null | t3_1hn5lek | /r/LocalLLaMA/comments/1hn5lek/who_is_using_mcp_with_local_llms/ | false | false | self | 3 | null |
Watch Groq Llama3.3 triumph over xAI Grok in the LLM Chess Arena! | 149 | 2024-12-27T03:17:14 | https://v.redd.it/erjdw6ej6b9e1 | estebansaa | /r/LocalLLaMA/comments/1hn5lii/watch_groq_llama33_triumph_over_xai_grok_in_the/ | 1970-01-01T00:00:00 | 0 | {} | 1hn5lii | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/erjdw6ej6b9e1/DASHPlaylist.mpd?a=1737991073%2CZGEyZDhjMTc1MmI0NzgwZTRmOWM5MDNkZDIzYjM4OGJkNzg4NThlMjMwOGU2YTc2NDNiYTMzODkyNjkxMzNhZA%3D%3D&v=1&f=sd', 'duration': 299, 'fallback_url': 'https://v.redd.it/erjdw6ej6b9e1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/erjdw6ej6b9e1/HLSPlaylist.m3u8?a=1737991073%2CYzkyYjQ4ZWIyNjZkNzY2ZGZmYWFlNTFjZDhlYzZmZDkwNmFiMmFhYmQyMmNlZGEyZDJkNjEzMTViMzhlNWZkYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/erjdw6ej6b9e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 940}} | t3_1hn5lii | /r/LocalLLaMA/comments/1hn5lii/watch_groq_llama33_triumph_over_xai_grok_in_the/ | false | false | 149 | {'enabled': False, 'images': [{'id': 'bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM.png?width=108&crop=smart&format=pjpg&auto=webp&s=176ce8a41d50242a9915a76b2a392fdb8cc5a9db', 'width': 108}, {'height': 165, 'url': 'https://external-preview.redd.it/bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM.png?width=216&crop=smart&format=pjpg&auto=webp&s=a2eb57884b0482447f311aa8b8f2d5bab49fe12d', 'width': 216}, {'height': 245, 'url': 'https://external-preview.redd.it/bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM.png?width=320&crop=smart&format=pjpg&auto=webp&s=95858907ce4573509f566530ea9c2fdfd3214670', 'width': 320}, {'height': 490, 'url': 'https://external-preview.redd.it/bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM.png?width=640&crop=smart&format=pjpg&auto=webp&s=9147163e3c926b7a43ec3fb131c9bed36ae0ff1e', 'width': 640}, {'height': 735, 'url': 'https://external-preview.redd.it/bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM.png?width=960&crop=smart&format=pjpg&auto=webp&s=e932a0a4d6af5548c1854d955318b781bae24ad8', 'width': 960}, {'height': 826, 'url': 'https://external-preview.redd.it/bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=36578c7d08f60f0372ba795e776e7c4f7bc7f721', 'width': 1080}], 'source': {'height': 882, 'url': 'https://external-preview.redd.it/bTcyemVsZGo2YjllMUbKcHMAO9GEM7SYyNcW5GdBVoX7LFjUS6bGW2ypRJMM.png?format=pjpg&auto=webp&s=dc8e0a0edcbde3ce105c34634d119c59b9b30a20', 'width': 1152}, 'variants': {}}]} |
||
Have you seen an LLM better than the one powering Neuro-sama? | 1 | [removed] | 2024-12-27T03:45:40 | Express_Seesaw_8418 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hn63v2 | false | null | t3_1hn63v2 | /r/LocalLLaMA/comments/1hn63v2/have_you_seen_an_llm_better_than_the_one_powering/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'TwwgSG7esn9h7uSJjhkQGR2fd5SPIU7kThsReDATdDM', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/fjnb40l0cb9e1.jpeg?width=108&crop=smart&auto=webp&s=2f80970a20107e873d4069052e1b92209c05a304', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/fjnb40l0cb9e1.jpeg?width=216&crop=smart&auto=webp&s=1e8d0c53cd3a3ba5c460be93f233bc7fe6853c92', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/fjnb40l0cb9e1.jpeg?width=320&crop=smart&auto=webp&s=8b7e248641168a9e0d4395052fcbc770f51ea39e', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/fjnb40l0cb9e1.jpeg?width=640&crop=smart&auto=webp&s=91c2c843f0f90ce973acfd3dd9a813f65350af17', 'width': 640}, {'height': 513, 'url': 'https://preview.redd.it/fjnb40l0cb9e1.jpeg?width=960&crop=smart&auto=webp&s=0c18125a41cd71b0622fa7e4cf294812d7d93c18', 'width': 960}, {'height': 577, 'url': 'https://preview.redd.it/fjnb40l0cb9e1.jpeg?width=1080&crop=smart&auto=webp&s=d47caa6020ff7c2304eb4831ca5848b54287f0bb', 'width': 1080}], 'source': {'height': 645, 'url': 'https://preview.redd.it/fjnb40l0cb9e1.jpeg?auto=webp&s=e41fb274fd67e6c0f8a7e73a41b8abf25bbc70ee', 'width': 1206}, 'variants': {}}]} |
||
Need Guidance on LLMs for Low specs Laptop | 1 | [removed] | 2024-12-27T03:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hn687n/need_guidance_on_llms_for_low_specs_laptop/ | FastCommission2913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn687n | false | null | t3_1hn687n | /r/LocalLLaMA/comments/1hn687n/need_guidance_on_llms_for_low_specs_laptop/ | false | false | self | 1 | null |
Does Anyone have Resources on LLM Performance Degradation as Input Tokens Increase? | 5 | It seems reasonable to me to claim that LLMs perform worse as the amount of tokens given to them increases but does anyone have hard data or resources I can look into to support those claims. I’m particular interested in the context of LLM agents and determining the optimal number of instructions to give them at once so that they complete all instructions. Thank you for any help! | 2024-12-27T04:18:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hn6oy6/does_anyone_have_resources_on_llm_performance/ | W0keBl0ke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn6oy6 | false | null | t3_1hn6oy6 | /r/LocalLLaMA/comments/1hn6oy6/does_anyone_have_resources_on_llm_performance/ | false | false | self | 5 | null |
Web RAG to generate answers like perplexity from your doc | 4 | Hey everyone,
I have been working on building a web-based RAG system that basically does embedding and answer generation all using WebLLM and Transformers.js. Data is stored in a sqlite3 DB at build time, and we load it via WASM to get the embeddings for existing docs.
This is a basic version, but would love your thoughts and feedback on how we can improve this system.
You can try it out here; it does take some time to load, and I'm looking to optimize that.
https://docs.akiradocs.ai/aiSearch
If anyone knows better ways to improve this, would love to chat!
| 2024-12-27T04:20:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hn6qai/web_rag_to_generate_answers_like_perplexity_from/ | pandasaurav | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn6qai | false | null | t3_1hn6qai | /r/LocalLLaMA/comments/1hn6qai/web_rag_to_generate_answers_like_perplexity_from/ | false | false | self | 4 | null |
Why aren't more people talking about Phi-4? | 56 | Microsoft released the model on their Azure registry for download and mentioned they’d release it on Hugging Face a week later—but that hasn’t happened yet. I might be answering my own question here, but for a model that’s supposedly so performant, I expected to see more buzz by now. Why aren’t more people talking about it, sharing real-world use & performance, attempting fine-tunes, or just hacking and tinkering with it already? Or are people just waiting on the HF drop...
I've got the model myself running on Ollama and it's fairly intelligent, pretty decent as an agent, though pretty censored as always. | 2024-12-27T04:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hn6vj9/why_arent_more_people_talking_about_phi4/ | HadesTerminal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn6vj9 | false | null | t3_1hn6vj9 | /r/LocalLLaMA/comments/1hn6vj9/why_arent_more_people_talking_about_phi4/ | false | false | self | 56 | null |
tried my hand at coding a LLM sandbox | 2 | Hi LLM nerds
Been trying to make my own LLM player in c# to fit what I wanted. I wanted something that was a sandbox based on just having fun and focused on roleplay without any writing to disk and 100% local. After a few weeks of playing around I chucked it up on a website for others / friends.
The Forms app was ported from a console app and uses the LLamaSharp wrapper for llama.cpp. With a 3060 Ti and CUDA installed I get around the same tokens per second as LM Studio.
[https://aimultifool.com/](https://aimultifool.com/)
Any feedback is welcome, even if negative; I've only shown it to the dev guys I work with. I'm not really much of a coder but enjoy it as a hobby. In hindsight it probably should have been Python-based, but I've always wanted to learn C# :P | 2024-12-27T04:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hn6w3q/tried_my_hand_at_coding_a_llm_sandbox/ | doornailbarley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn6w3q | false | null | t3_1hn6w3q | /r/LocalLLaMA/comments/1hn6w3q/tried_my_hand_at_coding_a_llm_sandbox/ | false | false | self | 2 | null |
Finetuning Llama 3.3 3B/3.2 8B - Seeking Input | 1 | [removed] | 2024-12-27T04:40:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hn72ck/finetuning_llama_33_3b32_8b_seeking_input/ | No-Abalone1029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn72ck | false | null | t3_1hn72ck | /r/LocalLLaMA/comments/1hn72ck/finetuning_llama_33_3b32_8b_seeking_input/ | false | false | self | 1 | null |
Fine-tuning an LLM on a Huge Conversation Dataset | 4 | Hi everyone,
I'm trying to fine-tune a large language model using a massive dataset of 400,000 message pairs. Read in order, these messages tell a story built up through a back-and-forth between the bot and the user.
To give the model the full picture, I'm using a sliding window to include the 6 messages before each one – both from the user and the bot. This should help the model understand the conversation flow better - at least I hope it does.
I'm stuck on how to actually fine-tune the model. I'm thinking LoRA might not be the best fit for such a large dataset.
I'm interested in using a strong base model like Mistral NeMo. Most of the tutorials I've found focus on LoRA, QLoRA, and PEFT, which don't help me at all.
Does anyone have any experience fine-tuning LLMs on this scale? Or can point me towards some helpful resources? | 2024-12-27T05:09:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hn7kqm/finetuning_an_llm_on_a_huge_conversation_dataset/ | Ruffi- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn7kqm | false | null | t3_1hn7kqm | /r/LocalLLaMA/comments/1hn7kqm/finetuning_an_llm_on_a_huge_conversation_dataset/ | false | false | self | 4 | null |
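A sketch of the sliding-window packing described above (assuming the 400k pairs are flattened into one chronological list of {"role", "content"} dicts; function and field names are illustrative): each assistant turn becomes a training example carrying its six preceding messages.

```python
def build_examples(messages, history=6):
    """Turn an ordered conversation into supervised examples with rolling context."""
    examples = []
    for i, msg in enumerate(messages):
        if msg["role"] != "assistant":
            continue
        context = messages[max(0, i - history):i]       # up to 6 preceding turns
        examples.append({"messages": context + [msg]})  # the final message is the target
    return examples
```

For what it's worth, LoRA isn't ruled out by dataset size alone; it changes how many parameters are trainable, not how much data you can pass through, so the choice between LoRA and full-parameter SFT is more about hardware budget and how much the base model's style needs to shift.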
Running BERT Sentence Classifier on Android/ embedded | 8 | I came across this really nice C++ implementation ([https://github.com/yilong2001/berts.cpp](https://github.com/yilong2001/berts.cpp)) for running BERT models to perform sentence classification. llama.cpp does not have support for BertForSequenceClassification, so you might not be able to run a lightweight BERT classifier using the llama.cpp backend.
If you are like me who wants to run stuff on embedded arm platforms, take a look at my fork ([https://github.com/v-prgmr/berts.cpp-on-android](https://github.com/v-prgmr/berts.cpp-on-android)) of that original implementation that lets you build the project for android :) | 2024-12-27T05:20:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hn7rcc/running_bert_sentence_classifier_on_android/ | Aware_Self2205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn7rcc | false | null | t3_1hn7rcc | /r/LocalLLaMA/comments/1hn7rcc/running_bert_sentence_classifier_on_android/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'wLtMkxDxgtqTPLKnlgjTf0eqd43GBy7YuUQG8TlGEIo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r_yw6Qn4ClrUCFVTZjVWmhnbs7DzIV1oKMq-gNsut3k.jpg?width=108&crop=smart&auto=webp&s=9cbf04a4de7c9ad4f6488f5a4d434abd048b03d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r_yw6Qn4ClrUCFVTZjVWmhnbs7DzIV1oKMq-gNsut3k.jpg?width=216&crop=smart&auto=webp&s=d0df61831722d79410423549ca8b7f1e21ca35e8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r_yw6Qn4ClrUCFVTZjVWmhnbs7DzIV1oKMq-gNsut3k.jpg?width=320&crop=smart&auto=webp&s=428376b4446fc7a3ad80700ceaf1d2c539064a9c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r_yw6Qn4ClrUCFVTZjVWmhnbs7DzIV1oKMq-gNsut3k.jpg?width=640&crop=smart&auto=webp&s=d4a90499f0b08ba436794b5707342a4ffc94b578', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r_yw6Qn4ClrUCFVTZjVWmhnbs7DzIV1oKMq-gNsut3k.jpg?width=960&crop=smart&auto=webp&s=283ccd494932df1989fb4a0f2ef5432d32a090ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r_yw6Qn4ClrUCFVTZjVWmhnbs7DzIV1oKMq-gNsut3k.jpg?width=1080&crop=smart&auto=webp&s=4dcb4738392fa84056d6b511c28276b74e569822', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r_yw6Qn4ClrUCFVTZjVWmhnbs7DzIV1oKMq-gNsut3k.jpg?auto=webp&s=66f4d38cef8937a136766ea378fc8962639ef9bf', 'width': 1200}, 'variants': {}}]} |
Vllm - Qwen speculative decoding | 1 | [removed] | 2024-12-27T05:26:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hn7uqm/vllm_qwen_speculative_decoding/ | Wonderful_Alfalfa115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn7uqm | false | null | t3_1hn7uqm | /r/LocalLLaMA/comments/1hn7uqm/vllm_qwen_speculative_decoding/ | false | false | self | 1 | null |
Vllm speculative decoding issue | 1 | [removed] | 2024-12-27T05:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hn7zs3/vllm_speculative_decoding_issue/ | hyperna21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn7zs3 | false | null | t3_1hn7zs3 | /r/LocalLLaMA/comments/1hn7zs3/vllm_speculative_decoding_issue/ | false | false | self | 1 | null |
REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models | 6 | RLHF (Reinforcement Learning from Human Feedback) is rapidly evolving, with algorithms such as PPO, DPO, RLOO, ReMax and GRPO emerging one after another. **By integrating various optimization techniques from Proximal Policy Optimization (PPO) into the traditional REINFORCE algorithm**, we “proposed” **REINFORCE++,** which aims to enhance performance and stability in RLHF while reducing computational resource requirements without the critic network.
**The key feature of REINFORCE++ is that it is more stable than GRPO and faster than PPO.**
**REINFORCE++'s** technical details are in:
[https://hijkzzz.notion.site/reinforce-plus-plus](https://hijkzzz.notion.site/reinforce-plus-plus)
and (technical report)
[https://github.com/hijkzzz/Awesome-LLM-Strawberry/blob/main/resources/REINFORCE%2B%2B.pdf](https://github.com/hijkzzz/Awesome-LLM-Strawberry/blob/main/resources/REINFORCE%2B%2B.pdf) | 2024-12-27T05:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hn83ms/reinforce_a_simple_and_efficient_approach_for/ | seventh_day123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn83ms | false | null | t3_1hn83ms | /r/LocalLLaMA/comments/1hn83ms/reinforce_a_simple_and_efficient_approach_for/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'VE0FpjeIYmWLto7r_27lnSz_PI40SQNbFhBEdLZEUg0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/h_mn_E6vGS5UpBqZfOGfJXtRBtt1bEFJqhobbXqwmJU.jpg?width=108&crop=smart&auto=webp&s=a06649b172ceb8dd46d5f4e27b739e5eb6107828', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/h_mn_E6vGS5UpBqZfOGfJXtRBtt1bEFJqhobbXqwmJU.jpg?width=216&crop=smart&auto=webp&s=e13cc789e1c49fd0cce7fef9cf676dbf49e6efd2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/h_mn_E6vGS5UpBqZfOGfJXtRBtt1bEFJqhobbXqwmJU.jpg?width=320&crop=smart&auto=webp&s=f3db7b228ce4d9694d830e60d891d2bb935679fc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/h_mn_E6vGS5UpBqZfOGfJXtRBtt1bEFJqhobbXqwmJU.jpg?width=640&crop=smart&auto=webp&s=f925e160f5c3236618068ebf983c24f8fc24a0e2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/h_mn_E6vGS5UpBqZfOGfJXtRBtt1bEFJqhobbXqwmJU.jpg?width=960&crop=smart&auto=webp&s=29d8aaacd92e52cfbfe21503ce52fcb107f6626f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/h_mn_E6vGS5UpBqZfOGfJXtRBtt1bEFJqhobbXqwmJU.jpg?width=1080&crop=smart&auto=webp&s=2cea661ca6361ed17f9af1e146353b04be613808', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/h_mn_E6vGS5UpBqZfOGfJXtRBtt1bEFJqhobbXqwmJU.jpg?auto=webp&s=22a678ebf2f2758e34cd3a23ff1a57fb04546d6a', 'width': 1200}, 'variants': {}}]} |
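A rough sketch of one reading of the update described above (not the authors' code, and the exact reward shaping in the report may differ): a critic-free policy-gradient step with a per-token KL penalty folded into the reward, batch-wide advantage normalization, and PPO-style clipping.

```python
import torch

def reinforce_pp_loss(logp, logp_old, logp_ref, reward, clip_eps=0.2, kl_coef=0.01):
    # logp, logp_old, logp_ref: (batch, seq) per-token log-probs under the current,
    # rollout, and frozen reference policies; reward: (batch,) sequence-level scores.
    kl = logp_old - logp_ref                         # per-token KL estimate vs. reference
    shaped = reward.unsqueeze(1) - kl_coef * kl      # reward broadcast to tokens, KL-penalized
    adv = (shaped - shaped.mean()) / (shaped.std() + 1e-8)  # global batch normalization
    ratio = torch.exp(logp - logp_old)               # importance ratio, as in PPO
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()     # note: no value network anywhere
```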
China is building a DAM that has 3 x more capacity than the world’s largest power station i.e. three Gorges Dam (22,500 MW; Also Chinese). Wonder how big a cluster it could power. Deepseek should make a proposal. | 0 | https://www.reuters.com/world/asia-pacific/china-build-worlds-largest-hydropower-dam-tibet-2024-12-26/ | 2024-12-27T05:44:24 | https://www.reddit.com/gallery/1hn85p7 | Super-Muffin-1230 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hn85p7 | false | null | t3_1hn85p7 | /r/LocalLLaMA/comments/1hn85p7/china_is_building_a_dam_that_has_3_x_more/ | false | false | 0 | null |
|
Deepseek v3 was trained on 8-11x less the normal budget of these kinds of models: specifically 2048 H800s (aka "nerfed H100s"), in 2 months. Llama 3 405B was, per their paper, trained on 16k H100s. DeepSeek estimate the cost was $5.5m USD. | 666 | 2024-12-27T05:52:41 | Super-Muffin-1230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hn8ams | false | null | t3_1hn8ams | /r/LocalLLaMA/comments/1hn8ams/deepseek_v3_was_trained_on_811x_less_the_normal/ | false | false | 666 | {'enabled': True, 'images': [{'id': '2T-2KFoSaP0Mv9xwBFJRIuZFW8QPb4gjMGiCGJ7aimA', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/n7nn4r9oyb9e1.jpeg?width=108&crop=smart&auto=webp&s=c48112bdd6515081d1e52bab674ccce9910c1f1c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/n7nn4r9oyb9e1.jpeg?width=216&crop=smart&auto=webp&s=a3356d012ae252056e5f7e2bc713548f2e024ddc', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/n7nn4r9oyb9e1.jpeg?width=320&crop=smart&auto=webp&s=bbc088136630ce0d7b26b99fbf589ef44f19a749', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/n7nn4r9oyb9e1.jpeg?width=640&crop=smart&auto=webp&s=ec55784cc8b1bb2a72d9175b06746ded57d29e92', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/n7nn4r9oyb9e1.jpeg?auto=webp&s=368de706b80c50b6eb25f7f3efa27f4345a99fab', 'width': 889}, 'variants': {}}]} |
|||
Any good local voice conversion? | 1 | [removed] | 2024-12-27T05:56:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hn8cyw/any_good_local_voice_conversion/ | kimaust | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn8cyw | false | null | t3_1hn8cyw | /r/LocalLLaMA/comments/1hn8cyw/any_good_local_voice_conversion/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XNUfKMLsN4spqhyFi5kZmu2LxAak4eev8011z4do41A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cS4v8JuOVJabZFu1AVa0dwK0ciLEgAHGsPWvVxtZvCk.jpg?width=108&crop=smart&auto=webp&s=e3d6e142b8ca8a39b49cd46c6f8828b31c34e24a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cS4v8JuOVJabZFu1AVa0dwK0ciLEgAHGsPWvVxtZvCk.jpg?width=216&crop=smart&auto=webp&s=11b12d08c16b50255ab3f0137dadc488f2f04f8e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cS4v8JuOVJabZFu1AVa0dwK0ciLEgAHGsPWvVxtZvCk.jpg?width=320&crop=smart&auto=webp&s=522898a83ad4968d9f698984e631066f3f200b2d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cS4v8JuOVJabZFu1AVa0dwK0ciLEgAHGsPWvVxtZvCk.jpg?width=640&crop=smart&auto=webp&s=21ec7e52a16c3f8fab56194f31075ca0c6f9ae6d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cS4v8JuOVJabZFu1AVa0dwK0ciLEgAHGsPWvVxtZvCk.jpg?width=960&crop=smart&auto=webp&s=fccb7ab3002e027fe3279a948b1ac691a512f2b9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cS4v8JuOVJabZFu1AVa0dwK0ciLEgAHGsPWvVxtZvCk.jpg?width=1080&crop=smart&auto=webp&s=0aa1cfd770547d139a6d4f052de1ccb14ca66f21', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cS4v8JuOVJabZFu1AVa0dwK0ciLEgAHGsPWvVxtZvCk.jpg?auto=webp&s=6091aa6333edb69e9b953333498b7002be926d4f', 'width': 1200}, 'variants': {}}]} |
DeepSeek has released exclusive footage of their AI researchers training DeepSeek-V3 671B Mixture-of-Experts (MoE) on 2048 H800s. | 938 | 2024-12-27T06:22:05 | https://v.redd.it/tagjczxw3c9e1 | Super-Muffin-1230 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hn8rcx | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/tagjczxw3c9e1/DASHPlaylist.mpd?a=1737872539%2CMWZiOTFlYjEzNDNlOTlmZmU2NDQ4OTAzNWI0Nzg4MjYxYjNkODdjODE4ZTU0NTc3ZTUwMDliOGE2MzVlOTIwZg%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/tagjczxw3c9e1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 638, 'hls_url': 'https://v.redd.it/tagjczxw3c9e1/HLSPlaylist.m3u8?a=1737872539%2CMzRlMjdhZGNmYzMwODljMjczNmE3YjdkY2ExMDIzOTVkM2NjNzExMDA0NTRjODA3NDMxZThmMGE1ZjUyNTQ2NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tagjczxw3c9e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 360}} | t3_1hn8rcx | /r/LocalLLaMA/comments/1hn8rcx/deepseek_has_released_exclusive_footage_of_their/ | false | false | 938 | {'enabled': False, 'images': [{'id': 'bW04ZHJub3czYzllMYBidLKxRm0KzuKP3i5QQ-Yf_TqhionZGA_vzdqJlrlc', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/bW04ZHJub3czYzllMYBidLKxRm0KzuKP3i5QQ-Yf_TqhionZGA_vzdqJlrlc.png?width=108&crop=smart&format=pjpg&auto=webp&s=7414fc8aef5307978a5a90e0f28aefd7556312ae', 'width': 108}, {'height': 383, 'url': 'https://external-preview.redd.it/bW04ZHJub3czYzllMYBidLKxRm0KzuKP3i5QQ-Yf_TqhionZGA_vzdqJlrlc.png?width=216&crop=smart&format=pjpg&auto=webp&s=d4502c401d719a2bf80044d47c446199b423c099', 'width': 216}, {'height': 567, 'url': 'https://external-preview.redd.it/bW04ZHJub3czYzllMYBidLKxRm0KzuKP3i5QQ-Yf_TqhionZGA_vzdqJlrlc.png?width=320&crop=smart&format=pjpg&auto=webp&s=130608658839b4ad0a295f349c05edfddfab7466', 'width': 320}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bW04ZHJub3czYzllMYBidLKxRm0KzuKP3i5QQ-Yf_TqhionZGA_vzdqJlrlc.png?format=pjpg&auto=webp&s=73e69a445927cb85afe6e3bc53ed1ce037690d87', 'width': 406}, 'variants': {}}]} |
||
Possible to obtain context directly from vectorDB without using LLM (using Ollama)? | 1 | [removed] | 2024-12-27T06:40:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hn90zu/possible_to_obtain_context_directly_from_vectordb/ | anewaccount4yourmum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn90zu | false | null | t3_1hn90zu | /r/LocalLLaMA/comments/1hn90zu/possible_to_obtain_context_directly_from_vectordb/ | false | false | self | 1 | null |
How do multilingual LLMs handle concepts across languages? | 12 | Hi everyone!
I was wondering if multilingual LLMs activate similarly when asked about the same concepts in different languages. Since they are language models, they are trained on textual data, and I was wondering if their translation capabilities come from "simply" learning translation pairs, or rather from projecting higher-level concepts onto specific languages. That is, do they "understand" concepts at a higher level, independent of language?
I think this could be partially measured by comparing the activation patterns of a model (the latent space) when asked about the same thing in different languages. If the activation patterns are similar, it might indicate that the model "projects" higher-level concepts onto the language. If the activation patterns differ significantly, it might suggest that the model relies more on translation pairs, or is not modeling higher-level concepts independently of the language; in other words, the concepts are modeled independently (separately) for each language. I would assume that at an early stage they are modeled separately, but then possibly "concept understanding" emerges, if that makes sense.
I’m curious if the results would be similar across different models or if it’s model-dependent. This could have practical implications, especially for interactions in languages other than English (the most commonly used language in LLMs). For example, if a non-English prompt is translated into English, and the LLM’s response is translated back, would it yield better results than asking directly in the original language?
Also, does anyone know how this relates to humans? I mean the brain activation patterns in multilingual people. I'm assuming there is a lot of research on this topic, so I thought I'd ask here first. | 2024-12-27T06:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hn97lz/how_do_multilingual_llms_handle_concepts_across/ | NewTestAccount2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn97lz | false | null | t3_1hn97lz | /r/LocalLLaMA/comments/1hn97lz/how_do_multilingual_llms_handle_concepts_across/ | false | false | self | 12 | null |
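One concrete way to run the probe described above (a sketch; the model id and sentence pair are placeholders): mean-pool each layer's hidden states for a prompt and its translation, then compare the layers with cosine similarity.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B"  # illustrative multilingual model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

def layer_states(text):
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # mean-pool over tokens so prompts of different lengths stay comparable
    return [h.mean(dim=1).squeeze(0) for h in out.hidden_states]

en = layer_states("The cat is sleeping on the sofa.")
de = layer_states("Die Katze schläft auf dem Sofa.")
sims = [F.cosine_similarity(a, b, dim=0).item() for a, b in zip(en, de)]
print(sims)  # rising similarity in middle layers would hint at shared, language-independent concepts
```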
Stream of Thought - prompt that makes LLMs more contextually rich and empathetic | 1 | Hi folks,
I was exploring LLM capabilities, especially on cheaper models like Llama 3.3 70b and Gemini, but also on incumbents like Claude or ChatGPT, and noticed that they often miss context that is inferable but not explicitly stated.
For example, given a prompt such as "What is PBFT? Context: Priya is a high school student from Mumbai", the model usually won't switch its communication style to match, and is less likely to address Priya by name.
However, when asked to figure out how an LLM might adjust its tone based on context, it makes smart assumptions, and if those are used as instructions, the conversation feels a lot more personalized and engaging.
Then I explored Chain of Thought (CoT) and found that it's much more useful for reasoning tasks or tasks that require IQ; however, it doesn't necessarily adjust the conversational tone on the fly while adhering to certain guidelines.
This led me to develop something I am calling "Stream of Thought" where the LLM intermittently switches between "thinking" and "generating".
My expectation was that, without fine-tuning, it wouldn't work. But to my surprise, it did. Both Llama 3.3 70b and Grok 2 did very well, but Claude 3.5 Haiku was extremely impressive (more so than Sonnet).
Anyway, the trick is to tell the LLM, via the system prompt, to add thoughts in a special markup such as [thought]...[/thought] or [reasoning]...[/reasoning], and to reassure it that anything enclosed there isn't visible to the user, so it can make honest, or even inappropriate, comments.
Then we can add some handcrafted examples of reasoning. This causes the LLM to deliberate on the context and produces metacognitive behavior: later tokens take those reasoning tokens into consideration, and the result improves a lot.
Please check out the complete article and the huggingface space where I have put out some examples. I intend to publish live demo soon.
I also want to find some ways to objectify the outputs and possibly make the difference more concrete. Would love to know if anyone's interested. | 2024-12-27T06:59:41 | https://blog.iamsohan.in/blog/stream-of-thought/ | ronniebasak | blog.iamsohan.in | 1970-01-01T00:00:00 | 0 | {} | 1hn9awc | false | null | t3_1hn9awc | /r/LocalLLaMA/comments/1hn9awc/stream_of_thought_prompt_that_makes_llms_more/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BZ1i3byJg1Jfw6EPSX6Uqnfhgg3uu0GaKeeSwLHAYlY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jRMLQthWWPCBJlq-MhhFMYMTDSyYPRZFdknRnRGTV28.jpg?width=108&crop=smart&auto=webp&s=d78dfac36e46c1aea206f0e4da8edd87db65babf', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jRMLQthWWPCBJlq-MhhFMYMTDSyYPRZFdknRnRGTV28.jpg?width=216&crop=smart&auto=webp&s=e64843076348cae3a31dc161f85efe4360592160', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jRMLQthWWPCBJlq-MhhFMYMTDSyYPRZFdknRnRGTV28.jpg?width=320&crop=smart&auto=webp&s=a0c0167426564eb5e2da4a9b36126f24cde79d86', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/jRMLQthWWPCBJlq-MhhFMYMTDSyYPRZFdknRnRGTV28.jpg?width=640&crop=smart&auto=webp&s=524cf6373518824057df21e2cfea39b327dedb61', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/jRMLQthWWPCBJlq-MhhFMYMTDSyYPRZFdknRnRGTV28.jpg?width=960&crop=smart&auto=webp&s=54e02faae38798d55761b3cd1f0bcf9803e44364', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/jRMLQthWWPCBJlq-MhhFMYMTDSyYPRZFdknRnRGTV28.jpg?auto=webp&s=ede998b969dc85e609e74cc81ec548f5b7bd28c2', 'width': 1024}, 'variants': {}}]} |
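For readers who want to try the idea, here is a hedged example of the kind of system prompt the post describes (my wording, not the author's exact prompt); the client is assumed to strip the [thought] spans before showing the reply.

```python
SYSTEM_PROMPT = """Before and while answering, you may write private notes inside
[thought]...[/thought] tags. Nothing inside these tags is shown to the user, so be candid:
note the user's likely background, tone, and what they actually need, then let those notes
shape the visible reply.

Example:
[thought]Priya is a high-school student; keep it jargon-free and address her by name.[/thought]
Hi Priya! PBFT stands for Practical Byzantine Fault Tolerance..."""

# At inference time the client removes every [thought]...[/thought] span before display.
```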
|
System ram setup for very large context models? | 1 | I've been doing a lot of research lately on the best bang for my buck and have settled on getting a 5070 Ti in the future instead of a 3090, due to better all-rounder performance and the fact that with Command R+ at 128k context (I'll probably stick to 40-64k) I can't fit it all in VRAM anyway, so the limiting factor becomes my system RAM.
I think I've settled on 2x48GB Corsair 6000MT/s at CL30, which gets me a fair bit of capacity and fast speeds at 96 GB/s bandwidth, vs. 83 GB/s with 4 slots of 32GB at 5200MT/s (that's the fastest you can easily get 4 sticks to run on AM5 with a 9000-series CPU). Of course this is all super slow compared to the 400-1400 GB/s current-gen video cards can do, but unless I can get a bunch of cards for a very reasonable price that isn't happening.
One important thing I wanted to ask: does GPU offloading have any benefit for speed, or is it always tied to the slowest component (system RAM)? Or is it model dependent? Also, my goal is 3-4 tokens a second; is this feasible or a pipe dream? | 2024-12-27T07:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hn9l04/system_ram_setup_for_very_large_context_models/ | Massive-Question-550 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn9l04 | false | null | t3_1hn9l04 | /r/LocalLLaMA/comments/1hn9l04/system_ram_setup_for_very_large_context_models/ | false | false | self | 1 | null |
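The bandwidth figures quoted above follow from a simple rule of thumb (MT/s x 8 bytes per channel x number of channels), and the same arithmetic gives a rough ceiling on decode speed for whatever part of the model stays in system RAM; the 60 GB figure below is only an illustrative assumption for a large quantized model.

```python
def ddr5_bandwidth_gb_s(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 8 * channels / 1000

print(ddr5_bandwidth_gb_s(6000))  # 96.0 GB/s for 2x DDR5-6000
print(ddr5_bandwidth_gb_s(5200))  # 83.2 GB/s for 2x DDR5-5200

# Rough upper bound: tokens/s ~= memory bandwidth / bytes of weights read per token,
# so ~60 GB of CPU-resident Q4 weights on 96 GB/s RAM caps out around 1.6 t/s;
# whatever layers are offloaded to VRAM read far faster, which is why offload helps.
print(96 / 60)
```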
New to running local LLMs, need some advice on hardware | 0 | So I have a homelab with a couple of different systems running in it. Currently it's quite unimpressive, but I am inching closer to wanting to run my own local LLM to dick around with. Here is my current stack of hardware:
- i9-9900k, 64GB RAM box with integrated graphics running TrueNAS with one Ubuntu VM; this is primarily my NAS, so not much help
- HP 800G2 i5-6500T, 32GB RAM running two VMs (Ubuntu Server and Home Assistant)
- Mac mini 2018 i5-8500T, 64GB RAM, purely for iMessage relay via AirMessage
I do have a MacBook M4 Max as well that I will do some LLM testing on, but I want an LLM I can use from anywhere on my network.
So my question is: if I wanted to get started with local LLM stuff and learn/tinker around with it, what would be the best upgrade/system to use? I was lucky enough to get a sizable Apple gift card for Xmas, so I was considering getting a specced-out M4 or M4 Pro Mac mini to replace/augment my current Mac mini.
Either way, if you have any suggestions (and maybe any suggested articles/videos to follow on what to get started with, I'm VERY green), then I would appreciate it! | 2024-12-27T07:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hn9lsy/new_to_running_local_llms_need_some_advice_on/ | Spudly2319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn9lsy | false | null | t3_1hn9lsy | /r/LocalLLaMA/comments/1hn9lsy/new_to_running_local_llms_need_some_advice_on/ | false | false | self | 0 | null |
What hardware for an ultra low power PC for just inference for an always on bot? Mac mini M4, Jetson Orin Nano Super, or something else? | 1 | [removed] | 2024-12-27T07:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hn9uj6/what_hardware_for_an_ultra_low_power_pc_for_just/ | TetsujinXLIV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hn9uj6 | false | null | t3_1hn9uj6 | /r/LocalLLaMA/comments/1hn9uj6/what_hardware_for_an_ultra_low_power_pc_for_just/ | false | false | self | 1 | null |
The finance people believe that next year, companies will need to manage both human and AI employees - that they both have career paths. | 12 | 2024-12-27T07:47:55 | https://www.youtube.com/watch?v=gE8dPv6DZ9g | Internet--Traveller | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hn9z68 | false | {'oembed': {'author_name': 'CNBC Television', 'author_url': 'https://www.youtube.com/@CNBCtelevision', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/gE8dPv6DZ9g?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Goldman Sachs' 2025 AI predictions: Here's what to expect"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/gE8dPv6DZ9g/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Goldman Sachs' 2025 AI predictions: Here's what to expect", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hn9z68 | /r/LocalLLaMA/comments/1hn9z68/the_finance_people_believe_that_next_year/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'fJNtiUn5EeQN3Rqn08Lvzl901rezxTahk2ChYDwMOdA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Gnfdx0KqE8jKfcFKKZdMOyLLtCZ8cV2OccbjD0Jm8bk.jpg?width=108&crop=smart&auto=webp&s=91569c3c6dcd755fb93029a4586fd52a1f9a3653', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Gnfdx0KqE8jKfcFKKZdMOyLLtCZ8cV2OccbjD0Jm8bk.jpg?width=216&crop=smart&auto=webp&s=fc7370374c7686b5999e5441a8c06c05782627cb', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Gnfdx0KqE8jKfcFKKZdMOyLLtCZ8cV2OccbjD0Jm8bk.jpg?width=320&crop=smart&auto=webp&s=591f73c9ac702e7bab12e32e77b6a73bf34c55f0', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Gnfdx0KqE8jKfcFKKZdMOyLLtCZ8cV2OccbjD0Jm8bk.jpg?auto=webp&s=aa1fa489655cdd7a97e2decde87e01802176e7b4', 'width': 480}, 'variants': {}}]} |
||
Acting as Claude's Research Helper in AI (Sheaf Cohomology and LLMs) | 2 |
I had a blast writing this blog post and learned a tremendous amount. I suspect that we will soon see a blizzard of papers where the researcher/author isn't just "assisted" by really smart AI models, but rather where the human author becomes more of a research assistant/facilitator!
That is, the model itself would dictate the core direction the research should take, with some feedback and input from the human researcher to keep things on track and focused.
The human becomes more of a "token dispenser" and also facilitates cooperation between AI models from different labs (e.g., Claude 3.5 Sonnet and o1-pro, which I had working together by the end of this).
If anyone reading this is an expert, I'd love to hear your take on whether these ideas have real merit. I suspect they do, since O1-Pro certainly thought so, and I would guess that it would be skeptical of ideas that it knew were generated by its arch-rival, Claude… | 2024-12-27T08:18:38 | https://fixmydocuments.com/blog/01_acting_as_claudes_research_helper_in_ai | dicklesworth | fixmydocuments.com | 1970-01-01T00:00:00 | 0 | {} | 1hnaea3 | false | null | t3_1hnaea3 | /r/LocalLLaMA/comments/1hnaea3/acting_as_claudes_research_helper_in_ai_sheaf/ | false | false | 2 | {'enabled': False, 'images': [{'id': '6PRe99Kv_zBGNTK8abHzbk4-nLnwIr3uITbllemPLkY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u6IBsFfJNPNJmXgYxKVs2U5CUyTa1Ftn787tLOcuyT4.jpg?width=108&crop=smart&auto=webp&s=715d47deda320ba2e2577811a5ae7f2d3b991b3c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u6IBsFfJNPNJmXgYxKVs2U5CUyTa1Ftn787tLOcuyT4.jpg?width=216&crop=smart&auto=webp&s=5e094133045dbd1da7a0bbc78a331077a0ca6fb4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u6IBsFfJNPNJmXgYxKVs2U5CUyTa1Ftn787tLOcuyT4.jpg?width=320&crop=smart&auto=webp&s=5a27da396fe467d2ec7929ca14ab59d435c51b69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u6IBsFfJNPNJmXgYxKVs2U5CUyTa1Ftn787tLOcuyT4.jpg?width=640&crop=smart&auto=webp&s=fa89c7af102e32cd2291f710a414a1214fa4a92f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u6IBsFfJNPNJmXgYxKVs2U5CUyTa1Ftn787tLOcuyT4.jpg?width=960&crop=smart&auto=webp&s=eb3ac61c57c72ca1bec6324dd6c9bbcd952f2402', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u6IBsFfJNPNJmXgYxKVs2U5CUyTa1Ftn787tLOcuyT4.jpg?width=1080&crop=smart&auto=webp&s=bd3774a22174e2863d10e2c0d4cfbb6af5880d6f', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://external-preview.redd.it/u6IBsFfJNPNJmXgYxKVs2U5CUyTa1Ftn787tLOcuyT4.jpg?auto=webp&s=aadbe299d21c268eb8463e7d98bb8d42e9a04fa3', 'width': 3000}, 'variants': {}}]} |
|
A story writing prompt I made that gives you precise control over each story element. You can use without precise control too and let the model decide. You can also easily remove or add elements. | 1 | [removed] | 2024-12-27T08:22:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hnag9g | false | null | t3_1hnag9g | /r/LocalLLaMA/comments/1hnag9g/a_story_writing_prompt_i_made_that_gives_you/ | false | false | default | 1 | null |
||
[ Removed by Reddit ] | 1 | [removed] | 2024-12-27T08:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hnak0w/removed_by_reddit/ | Elven77AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hnak0w | false | null | t3_1hnak0w | /r/LocalLLaMA/comments/1hnak0w/removed_by_reddit/ | false | false | self | 1 | null |
How likely was it for DeepSeek to bootstrap training with outputs from other LLMs? | 0 | 2024-12-27T09:05:37 | https://x.com/AndrewMayne/status/1872497498868338800 | Snoo_64233 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hnb0mb | false | null | t3_1hnb0mb | /r/LocalLLaMA/comments/1hnb0mb/how_likely_was_it_for_deepseek_to_bootstrap/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'JOVl6otmdOjSrjFdJUlABc78cQGYJKDDEUW190JW-BM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/gaL_Qv60mtV4U8PxPpiSwFtwItSDHT6vPecT0btmXEU.jpg?width=108&crop=smart&auto=webp&s=a7f3f2456b598c6bf11a8cfbf5d55e6f003c6440', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/gaL_Qv60mtV4U8PxPpiSwFtwItSDHT6vPecT0btmXEU.jpg?width=216&crop=smart&auto=webp&s=dc85708989370e59eb1dc8056ba23ffd16697bde', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/gaL_Qv60mtV4U8PxPpiSwFtwItSDHT6vPecT0btmXEU.jpg?width=320&crop=smart&auto=webp&s=7936cda189141fae6e1f0ef2e8c2705145244bf0', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/gaL_Qv60mtV4U8PxPpiSwFtwItSDHT6vPecT0btmXEU.jpg?width=640&crop=smart&auto=webp&s=8a550504601ec356ba2605b235bb69942a08784e', 'width': 640}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/gaL_Qv60mtV4U8PxPpiSwFtwItSDHT6vPecT0btmXEU.jpg?auto=webp&s=fc10d29128d8ce447183676385aaa372d967d91d', 'width': 943}, 'variants': {}}]} |
||
Where do you spend most of your time when building RAG? | 8 | I am curious.
Where are you guys spending most of your time when building production RAG solutions?
I have been building RAGs of all shapes and sizes for a while now, and I want to know whether my pain correlates with others' - chunking.
Chunking, chunking, chunking.
I spend most of my time refining the chunking pipeline, not actually refining the RAG architecture. I find that if I get the chunking right then I can use naive RAG with no issues.
Is this normal? | 2024-12-27T09:26:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hnbam5/where_do_you_spend_most_of_your_time_when/ | Solvicode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hnbam5 | false | null | t3_1hnbam5 | /r/LocalLLaMA/comments/1hnbam5/where_do_you_spend_most_of_your_time_when/ | false | false | self | 8 | null |
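To make the chunking point above concrete, here is a minimal sketch of the kind of pipeline being described: fixed-size chunks split on sentence boundaries with a small overlap. The size limit, overlap, and sentence splitter are illustrative assumptions, not anything prescribed in the post.

```python
# Minimal chunking sketch: fixed-size chunks with sentence-boundary splits
# and a small sentence overlap between consecutive chunks.
import re

def chunk_text(text: str, max_chars: int = 1000, overlap_sentences: int = 2) -> list[str]:
    # Naive sentence split; a production pipeline would use a proper segmenter.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, length = [], [], 0
    for sent in sentences:
        if length + len(sent) > max_chars and current:
            chunks.append(" ".join(current))
            # Carry the last few sentences over so context continues across chunks.
            current = current[-overlap_sentences:]
            length = sum(len(s) for s in current)
        current.append(sent)
        length += len(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

In practice most of the tuning effort goes into exactly these knobs: chunk size, overlap, and where the boundaries fall, which matches the experience described above.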
Why/why not momentum in the residual stream space | 7 | I've noticed that it's debated whether transformers perform gradient descent for in-context learning ([1](https://arxiv.org/abs/2212.10559v2) [2](https://arxiv.org/abs/2310.08540))
In training, gradient descent is performed with **momentum** because it helps traverse the loss landscape more smoothly and avoid getting stuck in local minima.
What do you think of the idea of implementing momentum between decoder modules, along the residual stream?
My guess is that the residual stream with its skip connections already enables momentum-like behaviour.
But then again, an explicit momentum term might be beneficial as a way to encourage the desirable trait of long-range dependencies.
Maybe there's a reason we don't use it already.
Thoughts? | 2024-12-27T09:34:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hnbe9c/whywhy_not_momentum_in_the_residual_stream_space/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hnbe9c | false | null | t3_1hnbe9c | /r/LocalLLaMA/comments/1hnbe9c/whywhy_not_momentum_in_the_residual_stream_space/ | false | false | self | 7 | null |
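One way to read the proposal above is as an exponential moving average over per-block updates on the residual stream. The sketch below is just that reading, in PyTorch, with toy stand-in blocks; the beta value, block definition, and dimensions are all assumptions for illustration, not a description of any existing architecture.

```python
import torch
import torch.nn as nn

class MomentumResidualStack(nn.Module):
    """Toy decoder stack where the residual update carries momentum across layers."""
    def __init__(self, blocks: nn.ModuleList, beta: float = 0.9):
        super().__init__()
        self.blocks = blocks
        self.beta = beta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = torch.zeros_like(x)           # "velocity" along the residual stream
        for block in self.blocks:
            v = self.beta * v + block(x)  # momentum over per-block updates
            x = x + v                     # skip connection, now with memory
        return x

# Usage with stand-in blocks (real decoder layers would include attention + MLP):
blocks = nn.ModuleList(
    [nn.Sequential(nn.LayerNorm(64), nn.Linear(64, 64), nn.GELU()) for _ in range(4)]
)
model = MomentumResidualStack(blocks, beta=0.9)
out = model(torch.randn(2, 16, 64))
```

With beta set to 0 this reduces to the ordinary residual update x = x + block(x), which is consistent with the guess that plain skip connections already provide a weak form of momentum.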
Deep Seek v3 has a "Deep Think" option. It shows the the chain of thought and it is fascinating. https://chat.deepseek.com/ | 87 | 2024-12-27T09:40:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hnbgtu/deep_seek_v3_has_a_deep_think_option_it_shows_the/ | appakaradi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hnbgtu | false | null | t3_1hnbgtu | /r/LocalLLaMA/comments/1hnbgtu/deep_seek_v3_has_a_deep_think_option_it_shows_the/ | false | false | 87 | null |
||
Local AI Assistant That Operates Automatically In Background | 7 | Does anyone know of any tools that allow you to ask a question and then automatically check periodically for updates?
For example, let's say I want the latest news on some ongoing situation: I can go to Open WebUI and use the web search to ask "What's the latest news on blah in December 2024" or something. Perplexity is pretty good for that too (not local, obviously). But are there any tools where I can just say "Keep me apprised of the latest news available regarding \*blah\*" and it will do so?
Or even something like "Let me know when the web page \*blah\* changes from \*out of stock\* to \*in stock\*". Then maybe it automatically checks every 4 hours and lets me know. | 2024-12-27T10:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hnbwbi/local_ai_assistant_that_operates_automatically_in/ | GhostInThePudding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hnbwbi | false | null | t3_1hnbwbi | /r/LocalLLaMA/comments/1hnbwbi/local_ai_assistant_that_operates_automatically_in/ | false | false | self | 7 | null |
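For the "check every few hours and tell me when something changes" use case, a rough sketch is below. It assumes a local OpenAI-compatible endpoint; the URL, model name, and target page are placeholders to adjust, and a real tool would add scheduling, error handling, and a proper notification channel.

```python
# Rough sketch of a background "check and notify" loop against a local,
# OpenAI-compatible server (e.g. Ollama / llama.cpp). Endpoint, model name,
# and target page below are assumptions; swap in your own.
import time
import requests

LLM_URL = "http://localhost:11434/v1/chat/completions"  # assumed local server
MODEL = "llama3.1"                                       # assumed installed model
PAGE = "https://example.com/product"                     # page to watch
CHECK_EVERY = 4 * 60 * 60                                # seconds between checks

last_answer = None
while True:
    page_text = requests.get(PAGE, timeout=30).text[:8000]  # truncate for context window
    prompt = ("Based only on the page content below, is the item in stock? "
              "Answer IN_STOCK or OUT_OF_STOCK.\n\n" + page_text)
    resp = requests.post(LLM_URL, json={
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=120)
    answer = resp.json()["choices"][0]["message"]["content"].strip()
    if last_answer is not None and answer != last_answer:
        print(f"Status changed: {last_answer} -> {answer}")  # swap in email/push here
    last_answer = answer
    time.sleep(CHECK_EVERY)
```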
DeepSeek V3 was made with synthetic data for coding and math. They used distillation from R1 (a reasoner model). They also implemented a novel Multi-Token Prediction technique | 221 | There are many more interesting details in their paper.
[https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek\_V3.pdf](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf)
https://preview.redd.it/lu0wps2dbd9e1.png?width=1459&format=png&auto=webp&s=edb03fa323bacc2fc77bf7c247798fb4f2d2e099
| 2024-12-27T10:27:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hnc4d5/deepseek_v3_was_made_with_synthetic_data_for/ | Badjaniceman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hnc4d5 | false | null | t3_1hnc4d5 | /r/LocalLLaMA/comments/1hnc4d5/deepseek_v3_was_made_with_synthetic_data_for/ | false | false | 221 | {'enabled': False, 'images': [{'id': 'xRNAdmFK-ErjPTF6_3ofqA1aICdDAjt_cQsFFxvkZYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yxr-v5mRBGKApL4OhKv2v7WGZrF8_e5n70HzwKZa71M.jpg?width=108&crop=smart&auto=webp&s=7777df56f9331b910b1d2c9230c765f12b94a89c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yxr-v5mRBGKApL4OhKv2v7WGZrF8_e5n70HzwKZa71M.jpg?width=216&crop=smart&auto=webp&s=e051b627b67bae9cb6a0b85ba9b88d446da29d56', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yxr-v5mRBGKApL4OhKv2v7WGZrF8_e5n70HzwKZa71M.jpg?width=320&crop=smart&auto=webp&s=76c90cf25539a02a6b6185fd3f961a673948d602', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yxr-v5mRBGKApL4OhKv2v7WGZrF8_e5n70HzwKZa71M.jpg?width=640&crop=smart&auto=webp&s=228b6ac758c65b3c6bf038df758ac056e4351bf0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yxr-v5mRBGKApL4OhKv2v7WGZrF8_e5n70HzwKZa71M.jpg?width=960&crop=smart&auto=webp&s=a34f7cc530239daed54d130a159966133acfb658', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yxr-v5mRBGKApL4OhKv2v7WGZrF8_e5n70HzwKZa71M.jpg?width=1080&crop=smart&auto=webp&s=6ca964030a7333ad524599a359a63756c52f1d66', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yxr-v5mRBGKApL4OhKv2v7WGZrF8_e5n70HzwKZa71M.jpg?auto=webp&s=b04ad6895a2945a48246c16e39ac443de5761832', 'width': 1200}, 'variants': {}}]} |
|
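As a rough intuition for what a multi-token prediction objective looks like, here is a heavily simplified sketch: extra output heads are trained to predict tokens one and two positions ahead, and their losses are averaged. This is only an illustration; DeepSeek-V3's actual MTP design (sequential prediction modules, shared embeddings, its own loss weighting) is described in the linked paper.

```python
# Heavily simplified multi-token prediction loss: head k predicts token t+k.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mtp_loss(hidden: torch.Tensor, targets: torch.Tensor, heads: nn.ModuleList) -> torch.Tensor:
    # hidden:  (batch, seq_len, d_model) final hidden states
    # targets: (batch, seq_len) token ids
    total = hidden.new_zeros(())
    for depth, head in enumerate(heads, start=1):       # head k predicts token t+k
        logits = head(hidden[:, :-depth, :])             # (batch, seq_len - k, vocab)
        labels = targets[:, depth:]                      # targets shifted k positions ahead
        total = total + F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), labels.reshape(-1)
        )
    return total / len(heads)

# Usage with toy dimensions:
d_model, vocab = 64, 1000
heads = nn.ModuleList([nn.Linear(d_model, vocab) for _ in range(2)])  # predict t+1 and t+2
loss = mtp_loss(torch.randn(2, 32, d_model), torch.randint(0, vocab, (2, 32)), heads)
```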
Going from 1 3090 to 3 | 1 | [removed] | 2024-12-27T10:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hnccm6/going_from_1_3090_to_3/ | Salt_Armadillo8884 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hnccm6 | false | null | t3_1hnccm6 | /r/LocalLLaMA/comments/1hnccm6/going_from_1_3090_to_3/ | false | false | self | 1 | null |
New model from qwen of sonnet level soon ? | 352 | 2024-12-27T10:50:28 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hncfhc | false | null | t3_1hncfhc | /r/LocalLLaMA/comments/1hncfhc/new_model_from_qwen_of_sonnet_level_soon/ | false | false | 352 | {'enabled': True, 'images': [{'id': 'HdsSxur84t-ikcOqQWvF4a3e0pnoSvK0rucTc5urTos', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/d38tr8vsfd9e1.jpeg?width=108&crop=smart&auto=webp&s=f6fb8f307977d108efd2f1dc5320a49f74c92b7a', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/d38tr8vsfd9e1.jpeg?width=216&crop=smart&auto=webp&s=025f0ecb630f84c227a203c6320be81b13196fd8', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/d38tr8vsfd9e1.jpeg?width=320&crop=smart&auto=webp&s=55a0ee1b9c35303734abbc543dd9ed2669be6302', 'width': 320}, {'height': 478, 'url': 'https://preview.redd.it/d38tr8vsfd9e1.jpeg?width=640&crop=smart&auto=webp&s=4fc21a7514a51311ef4680541392ed38cf594c33', 'width': 640}, {'height': 718, 'url': 'https://preview.redd.it/d38tr8vsfd9e1.jpeg?width=960&crop=smart&auto=webp&s=8f9ec40c02f603bca01119b66bd914636a8065de', 'width': 960}, {'height': 808, 'url': 'https://preview.redd.it/d38tr8vsfd9e1.jpeg?width=1080&crop=smart&auto=webp&s=3e9e984fe0d7135ef3849f1ffb875bc1bc01a367', 'width': 1080}], 'source': {'height': 808, 'url': 'https://preview.redd.it/d38tr8vsfd9e1.jpeg?auto=webp&s=ee742ad4757cf682e6f5f9679e7abb71047e5829', 'width': 1080}, 'variants': {}}]} |
|||
[image processing failed] | 1 | [deleted] | 2024-12-27T10:52:14 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hncgcf | false | null | t3_1hncgcf | /r/LocalLLaMA/comments/1hncgcf/image_processing_failed/ | false | false | default | 1 | null |