title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable ⌀) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable ⌀)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Scaling tiny models with search: Matching 28x larger model with 0.5B finetune + reward model | 285 | Results artifact: https://claude.site/artifacts/0e71107d-eefb-4973-82ae-b130201b571f
Have been working on implementing techniques from a few papers for the last few weeks (mostly Qwen-2.5-Math, DeepSeek-Prover 1.5, Math-Shepherd) to learn more about scaling inference and RL. Wanted to share some early results from the initial finetuned model with search before starting on implementing reinforcement learning.
This is a tiny 0.5B-parameter base model (Qwen-2.5-0.5B) finetuned on the MetaMathQA dataset, which is 300k synthetic math solutions. I also trained a reward model using the Process Reward Model (PRM) training method from the Math-Shepherd paper (they use an interesting method called “hard estimation” where you basically just sample a bunch of completions for partial solutions and teach the model to predict whether a partial solution can lead to a correct answer).
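Roughly, hard estimation is Monte Carlo labeling of solution prefixes (simplified sketch, not my actual training code; `sample_completions` and `is_correct` are placeholders for the policy rollouts and the answer checker):

```python
def hard_estimation_label(prefix_steps, sample_completions, is_correct, n=8):
    """0/1 label for a partial solution: 1 if any of n sampled completions reaches the right answer."""
    completions = sample_completions(prefix_steps, n)        # n rollouts continuing the prefix
    return int(any(is_correct(c) for c in completions))      # "hard" label used to train the PRM

def build_prm_examples(problem, solution_steps, sample_completions, is_correct):
    """Turn one step-by-step solution into per-step PRM training examples."""
    examples = []
    for i in range(1, len(solution_steps) + 1):
        prefix = solution_steps[:i]
        label = hard_estimation_label(prefix, sample_completions, is_correct)
        examples.append({"problem": problem, "prefix": prefix, "label": label})
    return examples
```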
What’s crazy to me is how close this 0.5B model can get to much larger models. Compared to the Math-Shepherd paper, where Mistral 7B finetuned on the same MetaMathQA and on reward data gets 92% with 1024 best-of-n, the 0.5B finetune + reward model gets pretty close with 50 MCTS iterations, solving 88% (caveat: this is on a sample of 10% of the test set, so true performance might be a bit lower).
Compared to much larger models without search, the 14B-parameter Qwen-2.5 model solves 90.2%, which the 0.5B model nearly matches (88%).
All of the training code and my high-throughput parallelized MCTS implementation are public on my GitHub: [https://github.com/rawsh/mirrorllm](https://github.com/rawsh/mirrorllm)
The repo is super messy but I’ll be cleaning it up and working on implementing reinforcement learning with GRPO / maybe RLVR in the coming weeks. Will be posting a full technical blog post soon as well at https://raw.sh
Super interested in training small models to reason in environments with sparse rewards. Please feel free to DM on reddit or twitter (rawsh0), would love to hear any ideas / questions! | 2024-11-27T20:32:00 | retrolione | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1e5wp | false | null | t3_1h1e5wp | /r/LocalLLaMA/comments/1h1e5wp/scaling_tiny_models_with_search_matching_28x/ | false | false | 285 | {'enabled': True, 'images': [{'id': '0osF8Xd7PY2j-GmS_2BnDIsjha73taX46UnEwQ_UuQs', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/tutydrl78i3e1.jpeg?width=108&crop=smart&auto=webp&s=eccd7080462824c9787cf6cb49b060bf5d6d45cd', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/tutydrl78i3e1.jpeg?width=216&crop=smart&auto=webp&s=a80d51b6525031d6fae63ddb736c430a360896db', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/tutydrl78i3e1.jpeg?width=320&crop=smart&auto=webp&s=4de4bc145664e15f0cc5f4d93f602d44cd73232d', 'width': 320}, {'height': 376, 'url': 'https://preview.redd.it/tutydrl78i3e1.jpeg?width=640&crop=smart&auto=webp&s=a0411b84aaab22a7b1fa1942fc49aa5c72fffd9e', 'width': 640}, {'height': 564, 'url': 'https://preview.redd.it/tutydrl78i3e1.jpeg?width=960&crop=smart&auto=webp&s=2422cb23e34361ec0f76672ef3ccad368a0871dd', 'width': 960}, {'height': 635, 'url': 'https://preview.redd.it/tutydrl78i3e1.jpeg?width=1080&crop=smart&auto=webp&s=445618af1338e19d5808d582aad8a72453dabd86', 'width': 1080}], 'source': {'height': 1370, 'url': 'https://preview.redd.it/tutydrl78i3e1.jpeg?auto=webp&s=ddc8afff83f0f56f392615aa0667e88bbfd62dab', 'width': 2328}, 'variants': {}}]} |
||
Some notes on how I use LLMs to generate fictional worlds | 1 | [removed] | 2024-11-27T20:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h1ekby/some_notes_on_how_i_use_llms_to_generate/ | Own-Editor-7068 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1ekby | false | null | t3_1h1ekby | /r/LocalLLaMA/comments/1h1ekby/some_notes_on_how_i_use_llms_to_generate/ | false | false | self | 1 | null |
Some notes on how I use LLMs to generate fictional worlds | 1 | 2024-11-27T20:56:46 | https://horenbergerb.github.io/2024/11/25/world-building-tree.html | eatbeans2 | horenbergerb.github.io | 1970-01-01T00:00:00 | 0 | {} | 1h1eqd5 | false | null | t3_1h1eqd5 | /r/LocalLLaMA/comments/1h1eqd5/some_notes_on_how_i_use_llms_to_generate/ | false | false | default | 1 | null |
|
What is the most powerful AI model I can run locally? | 0 | As the title states, I would like to know what the most powerful AI model is that I can run locally, assuming hardware isn't an issue. And yes, I know that each model is good for specific things, but what would you say is computationally the most powerful one? | 2024-11-27T21:10:48 | https://www.reddit.com/r/LocalLLaMA/comments/1h1f20l/what_is_the_most_powerful_ai_model_i_can_run/ | Boring_Mango69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1f20l | false | null | t3_1h1f20l | /r/LocalLLaMA/comments/1h1f20l/what_is_the_most_powerful_ai_model_i_can_run/ | false | false | self | 0 | null |
Agent-to-Agent Observability & Resiliency: What would you like to see? | 10 | Full disclosure: actively contributing to [https://github.com/katanemo/archgw](https://github.com/katanemo/archgw) - an intelligent proxy for agents. I managed deployment of Envoy (service mesh proxy) at Lyft, and designed archgw for agents that accept/process prompts. We are actively seeking feedback on what the community would like to see when it comes to agent-to-agent communication, resiliency, observability, etc. Given that a lot of people are building smaller task-specific agents and that these agents must communicate with each other, we are seeking advice on what features you would like from an agent-mesh service that could solve a lot of the crufty resiliency and observability challenges. Note: we already have small LLMs engineered in arch to handle/process prompts effectively, so if the answer is machine-learning related, we can possibly tackle that too.
You can add your thoughts below, or here: [https://github.com/katanemo/archgw/discussions/317](https://github.com/katanemo/archgw/discussions/317). I’ll merge duplicates so feel free to comment away | 2024-11-27T21:15:07 | https://www.reddit.com/r/LocalLLaMA/comments/1h1f5i6/agenttoagent_observability_resiliency_what_would/ | Mushroom_Legitimate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1f5i6 | false | null | t3_1h1f5i6 | /r/LocalLLaMA/comments/1h1f5i6/agenttoagent_observability_resiliency_what_would/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'faujhMh0Lzj6cj1YNLY01UqKz1IqCWe9O_CmBs23vDk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hUBqa4VtMoPbltEY8qfrvqqPat1OorhKav-7XEazVLc.jpg?width=108&crop=smart&auto=webp&s=1f50d1e059ea13c5e93391605086cb7d120a3a7b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hUBqa4VtMoPbltEY8qfrvqqPat1OorhKav-7XEazVLc.jpg?width=216&crop=smart&auto=webp&s=f85eab4e929b241d0a730ea9785d25a811d4c972', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hUBqa4VtMoPbltEY8qfrvqqPat1OorhKav-7XEazVLc.jpg?width=320&crop=smart&auto=webp&s=977b8ff4af5cef4c43e5d7336177f22b0ab7b433', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hUBqa4VtMoPbltEY8qfrvqqPat1OorhKav-7XEazVLc.jpg?width=640&crop=smart&auto=webp&s=0943ba93ce3220ac5b9a5b27b127cab3988a173b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hUBqa4VtMoPbltEY8qfrvqqPat1OorhKav-7XEazVLc.jpg?width=960&crop=smart&auto=webp&s=badcfd503cb0c1dc3dfc02f8dc293afb7904d5ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hUBqa4VtMoPbltEY8qfrvqqPat1OorhKav-7XEazVLc.jpg?width=1080&crop=smart&auto=webp&s=b4212ba0e1afa3faf47c6735ef100c92501494d6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hUBqa4VtMoPbltEY8qfrvqqPat1OorhKav-7XEazVLc.jpg?auto=webp&s=1aa260c622f743c1d1a7cd9aa82c2b138cbbc613', 'width': 1200}, 'variants': {}}]} |
Is M1 Max 64GB still a good option? | 0 | Hi everybody,
I have the opportunity to buy a newer MacBook Pro, coming from a 2019 MacBook Pro i7, at the following prices:
- M1 Max / 64 GB / 2TB SSD (Refurbished) at 2,400 euros
- M2 Max / 64 GB / 2TB SSD (Refurbished) at 3,000 euros
- M3 Max / 36 GB / 1TB SSD (New) at 2,900 euros
- M4 Max / 36 GB / 1TB SSD (New) at 3,100 euros
As I understand that memory bandwidth and capacity are king to run local LLM, I'd assume the M1 and M2 are the best options. Is the M2 Max worth the extra 600 euros though?
Thanks | 2024-11-27T21:16:48 | https://www.reddit.com/r/LocalLLaMA/comments/1h1f6wl/is_m1_max_64gb_still_a_good_option/ | arnoopt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1f6wl | false | null | t3_1h1f6wl | /r/LocalLLaMA/comments/1h1f6wl/is_m1_max_64gb_still_a_good_option/ | false | false | self | 0 | null |
Posts not being approved | 1 | [removed] | 2024-11-27T21:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/1h1fbax/posts_not_being_approved/ | CanineAssBandit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1fbax | false | null | t3_1h1fbax | /r/LocalLLaMA/comments/1h1fbax/posts_not_being_approved/ | false | false | self | 1 | null |
Scaling laws for predicting fully trained quantized LLMs' performance???? | 2 | Very interesting work at arxiv: Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens (https://arxiv.org/abs/2411.17691)
https://preview.redd.it/ouvncao6hi3e1.png?width=1712&format=png&auto=webp&s=bd2b7586f21a37c70993609cb15dee27edae9580
https://preview.redd.it/lfe950m7hi3e1.png?width=1734&format=png&auto=webp&s=3149e006400e1fd393b8bec8fbdd63d771b1c577
https://preview.redd.it/h7lvy479hi3e1.png?width=1706&format=png&auto=webp&s=2b10b9cefcf1fa79d15d244444edd715c6fdcd12
https://preview.redd.it/vhmw5dz9hi3e1.png?width=1706&format=png&auto=webp&s=561e2562b18a2868e460146d505bb7961952ea2d
https://preview.redd.it/x6f6iydbhi3e1.png?width=1722&format=png&auto=webp&s=304b96aec6a6a6530c486fd43ca106fa19a2abd3
| 2024-11-27T21:23:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h1fced/scaling_laws_for_predicting_fully_trained/ | kof97lover | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1fced | false | null | t3_1h1fced | /r/LocalLLaMA/comments/1h1fced/scaling_laws_for_predicting_fully_trained/ | false | false | 2 | null |
|
Curious to hear what pain points are for Hobbyists and Researchers who want to rent GPUs to run their models | 1 | [removed] | 2024-11-27T21:23:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h1fcee/curious_to_hear_what_pain_points_are_for/ | Severe_Aioli_5133 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1fcee | false | null | t3_1h1fcee | /r/LocalLLaMA/comments/1h1fcee/curious_to_hear_what_pain_points_are_for/ | false | false | self | 1 | null |
What is the best unconstrained model for having a conversation? | 1 | [removed] | 2024-11-27T21:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h1ffdb/what_is_the_best_unconstrained_model_for_having_a/ | Mission_Ship_2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1ffdb | false | null | t3_1h1ffdb | /r/LocalLLaMA/comments/1h1ffdb/what_is_the_best_unconstrained_model_for_having_a/ | false | false | self | 1 | null |
How can you keep up to date with all the changes to best practices/tools for llms? (e.g. best models, current tools, settings, etc...) | 1 | [removed] | 2024-11-27T21:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h1fojy/how_can_you_keep_up_to_date_with_all_the_changes/ | TryKey925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1fojy | false | null | t3_1h1fojy | /r/LocalLLaMA/comments/1h1fojy/how_can_you_keep_up_to_date_with_all_the_changes/ | false | false | self | 1 | null |
What should be included in an industry report on the state of software development using AI? | 2 | I'm trying to capture the current state of coding or general software development using AI.
I want it to be forward looking and capture trends and direction.
I'd like to include case studies, reports, tools and techniques.
Looking for resources and general thoughts on what I should focus on and what I should avoid.
Also, if it's something you'd appreciate reading, what questions would you like answered? | 2024-11-27T21:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h1fq2y/what_should_be_included_in_an_industry_report_on/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1fq2y | false | null | t3_1h1fq2y | /r/LocalLLaMA/comments/1h1fq2y/what_should_be_included_in_an_industry_report_on/ | false | false | self | 2 | null |
Recommendation for a Dual RTX 3090 Motherboard for Running a RAG app or LLMs
| 1 | Please note I am not interested in enterprise-grade parts, as I'd rather utilize the existing parts I have.
I have the following so far:
2 x RTX 3090s
1 x AMD Ryzen 7700x
Which motherboard would you recommend that can serve multiple purposes and fit 2 x RTX 3090s comfortably?
Thanks | 2024-11-27T22:02:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h1g88g/recommendation_for_a_dual_rtx_3090_motherboard/ | pinkxter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1g88g | false | null | t3_1h1g88g | /r/LocalLLaMA/comments/1h1g88g/recommendation_for_a_dual_rtx_3090_motherboard/ | false | false | self | 1 | null |
Do I need a strong CPU to pair with an RTX 3090 for inference? | 1 | I bought an RTX 3090 for €500. It will be used for running Qwen2.5-Coder-32B-Instruct. Will an AMD Bulldozer CPU from 2012 limit its performance? Thank you for your replies. | 2024-11-27T22:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h1gdb9/do_i_need_a_strong_cpu_to_pair_with_an_rtx_3090/ | whiteh4cker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1gdb9 | false | null | t3_1h1gdb9 | /r/LocalLLaMA/comments/1h1gdb9/do_i_need_a_strong_cpu_to_pair_with_an_rtx_3090/ | false | false | self | 1 | null |
Judge Arena leaderboard update | 53 | 2024-11-27T22:10:40 | Balance- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1gf69 | false | null | t3_1h1gf69 | /r/LocalLLaMA/comments/1h1gf69/judge_arena_leaderboard_update/ | false | false | 53 | {'enabled': True, 'images': [{'id': 'jI4wG-zptH9hejB8DZxRiC3R2GmZTgHu0T03X2jAgu4', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/6j7qzrxopi3e1.png?width=108&crop=smart&auto=webp&s=ebac8c1345127e3fabe2d8af2541a394caa6bb5c', 'width': 108}, {'height': 252, 'url': 'https://preview.redd.it/6j7qzrxopi3e1.png?width=216&crop=smart&auto=webp&s=e2dca8d02406811b35a486353d7fb699b6a03889', 'width': 216}, {'height': 373, 'url': 'https://preview.redd.it/6j7qzrxopi3e1.png?width=320&crop=smart&auto=webp&s=4becaff687127126a1e81c4c0d4620a66c816fe3', 'width': 320}, {'height': 746, 'url': 'https://preview.redd.it/6j7qzrxopi3e1.png?width=640&crop=smart&auto=webp&s=aa788b07ac1307bde9ae2ad69137cee7d0460283', 'width': 640}, {'height': 1120, 'url': 'https://preview.redd.it/6j7qzrxopi3e1.png?width=960&crop=smart&auto=webp&s=3630e946ccb7975a78853be2f91ef7062e3c180b', 'width': 960}, {'height': 1260, 'url': 'https://preview.redd.it/6j7qzrxopi3e1.png?width=1080&crop=smart&auto=webp&s=35a8770501490d47e3484febc57741b1be7f5f22', 'width': 1080}], 'source': {'height': 4200, 'url': 'https://preview.redd.it/6j7qzrxopi3e1.png?auto=webp&s=f47a3496133395fa0abee43b5143af21601fed64', 'width': 3600}, 'variants': {}}]} |
|||
Insane leaderboard of creations | 1 | [removed] | 2024-11-27T22:57:00 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h1hgba | false | null | t3_1h1hgba | /r/LocalLLaMA/comments/1h1hgba/insane_leaderboard_of_creations/ | false | false | default | 1 | null |
||
There is a war going on with anti-ai community on huggingface | 1 | [removed] | 2024-11-27T22:59:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h1hig2/there_is_a_war_going_on_with_antiai_community_on/ | kiselsa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1hig2 | false | null | t3_1h1hig2 | /r/LocalLLaMA/comments/1h1hig2/there_is_a_war_going_on_with_antiai_community_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SKIYxiMWoJq2Py-7tPFV2GoBaIeSnWzkRKGgExUzXgU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/L9rUPtZh0a3ptosB9CORx0j-2vygYV1d60zjGYBGDJ4.jpg?width=108&crop=smart&auto=webp&s=005334f0e828a21e99cd60a1743201c2f6e71a43', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/L9rUPtZh0a3ptosB9CORx0j-2vygYV1d60zjGYBGDJ4.jpg?width=216&crop=smart&auto=webp&s=78582db0c32a88d9f551e94c91d17d68760a2ff9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/L9rUPtZh0a3ptosB9CORx0j-2vygYV1d60zjGYBGDJ4.jpg?width=320&crop=smart&auto=webp&s=0c2a1bc15b71942a0d40ac803f9e29272dcb48d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/L9rUPtZh0a3ptosB9CORx0j-2vygYV1d60zjGYBGDJ4.jpg?width=640&crop=smart&auto=webp&s=ebd092064da84067a01d1b1e611ac3793677066b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/L9rUPtZh0a3ptosB9CORx0j-2vygYV1d60zjGYBGDJ4.jpg?width=960&crop=smart&auto=webp&s=8f2b649eb8267418dc798cd602c8921b6916fe1e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/L9rUPtZh0a3ptosB9CORx0j-2vygYV1d60zjGYBGDJ4.jpg?width=1080&crop=smart&auto=webp&s=c236977c2823687ac792228a1ea10162e57dd21d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/L9rUPtZh0a3ptosB9CORx0j-2vygYV1d60zjGYBGDJ4.jpg?auto=webp&s=b6f4817cf52a2734eb093fc8e406fe4ba69bbc7d', 'width': 1200}, 'variants': {}}]} |
Since things are moving so quickly how do you stay up to date on best current tools and how to use them? | 71 | How do you keep up with all the changes happening here? It feels like 90% of my news is just from passing comments in different threads here | 2024-11-27T23:00:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h1hiku/since_things_are_moving_so_quickly_how_do_you/ | TryKey925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1hiku | false | null | t3_1h1hiku | /r/LocalLLaMA/comments/1h1hiku/since_things_are_moving_so_quickly_how_do_you/ | false | false | self | 71 | null |
Datasets built by Ai2 and used to train the Molmo family of models | 20 | 2024-11-27T23:18:59 | https://huggingface.co/collections/allenai/pixmo-674746ea613028006285687b | logan__keenan | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h1hxkv | false | null | t3_1h1hxkv | /r/LocalLLaMA/comments/1h1hxkv/datasets_built_by_ai2_and_used_to_train_the_molmo/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'NCfo3kELByjqmERBog3kAhCRSYCa7ipOlI_kaiq0-vA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OEcK6SbAJKSRKUUw358-VfBtOJbyk3CvJkgiBG-gJV8.jpg?width=108&crop=smart&auto=webp&s=d9b317f1d7b21b98e7f505bd64bd154a6e0aa3c7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OEcK6SbAJKSRKUUw358-VfBtOJbyk3CvJkgiBG-gJV8.jpg?width=216&crop=smart&auto=webp&s=c7cc72d78a22795e8a037209b0b1ffbf1b794d02', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OEcK6SbAJKSRKUUw358-VfBtOJbyk3CvJkgiBG-gJV8.jpg?width=320&crop=smart&auto=webp&s=d9cbd48186f0bc0a85a844e734d26ac914273df3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OEcK6SbAJKSRKUUw358-VfBtOJbyk3CvJkgiBG-gJV8.jpg?width=640&crop=smart&auto=webp&s=f836d7d722260f5f17fc4e60d3ac20fa8fee213b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OEcK6SbAJKSRKUUw358-VfBtOJbyk3CvJkgiBG-gJV8.jpg?width=960&crop=smart&auto=webp&s=f0a0a96a7aa1c78d4ab77984545fc3d5a29a72ee', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OEcK6SbAJKSRKUUw358-VfBtOJbyk3CvJkgiBG-gJV8.jpg?width=1080&crop=smart&auto=webp&s=1fe60bd10a3dfd1218508c938f895e4b86f7184c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OEcK6SbAJKSRKUUw358-VfBtOJbyk3CvJkgiBG-gJV8.jpg?auto=webp&s=48e60764db8fced2ae56cb9d2834ed58ae4eec4d', 'width': 1200}, 'variants': {}}]} |
||
What is the best unconstrained model to replicate a character for having a conversation? | 1 | [removed] | 2024-11-27T23:25:55 | https://www.reddit.com/r/LocalLLaMA/comments/1h1i2zz/what_is_the_best_unconstrained_model_to_replicate/ | Mission_Ship_2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1i2zz | false | null | t3_1h1i2zz | /r/LocalLLaMA/comments/1h1i2zz/what_is_the_best_unconstrained_model_to_replicate/ | false | false | self | 1 | null |
What is the best unconstrained model to replicate a character for having a conversation? | 1 | [removed] | 2024-11-27T23:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h1i3uo/what_is_the_best_unconstrained_model_to_replicate/ | Key_Radiant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1i3uo | false | null | t3_1h1i3uo | /r/LocalLLaMA/comments/1h1i3uo/what_is_the_best_unconstrained_model_to_replicate/ | false | false | self | 1 | null |
🆘happzy2633 / qwen2.5-7b-ins-v3 | 1 | [removed] | 2024-11-27T23:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h1iiz1/happzy2633_qwen257binsv3/ | Creative_Bottle_3225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1iiz1 | false | null | t3_1h1iiz1 | /r/LocalLLaMA/comments/1h1iiz1/happzy2633_qwen257binsv3/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EteTRWlmneSyyg-4wYWVBP4HELMyY3Ffh_7PjD2Gbr0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RcNDqojVhFAUc-CALOj7rWOs4u_ddZtNtkWJ18N0mRw.jpg?width=108&crop=smart&auto=webp&s=4619ae48cdaee461248b0bfffd4d96a318e2a1b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RcNDqojVhFAUc-CALOj7rWOs4u_ddZtNtkWJ18N0mRw.jpg?width=216&crop=smart&auto=webp&s=52d18de8ea0329788ab1e92e9897848d6b7fb1b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RcNDqojVhFAUc-CALOj7rWOs4u_ddZtNtkWJ18N0mRw.jpg?width=320&crop=smart&auto=webp&s=93849600c36d7c8e97c5b0891f9a5d00c9fcbb8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RcNDqojVhFAUc-CALOj7rWOs4u_ddZtNtkWJ18N0mRw.jpg?width=640&crop=smart&auto=webp&s=bfe6a00b091def119961df3454f53ae59fa8d304', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RcNDqojVhFAUc-CALOj7rWOs4u_ddZtNtkWJ18N0mRw.jpg?width=960&crop=smart&auto=webp&s=01a391978bb54aec64bec3fc952f144970b05328', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RcNDqojVhFAUc-CALOj7rWOs4u_ddZtNtkWJ18N0mRw.jpg?width=1080&crop=smart&auto=webp&s=7a67846898c33e54cc0ca749e213795d4cef9bde', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RcNDqojVhFAUc-CALOj7rWOs4u_ddZtNtkWJ18N0mRw.jpg?auto=webp&s=0e35a6b7f5a9d031d1b6bd71ac3e626d53b0a526', 'width': 1200}, 'variants': {}}]} |
Generating Structured Output from a Dynamic JSON schema (like from a database) | 9 | tl;dr Example runnable repo: [https://github.com/BoundaryML/baml-examples/tree/main/json-schema-to-baml](https://github.com/BoundaryML/baml-examples/tree/main/json-schema-to-baml)
Hey all, wanted to share our approach for generating structured outputs in a reliable way from a JSON schema using our prompting-language (BAML):
[https://www.boundaryml.com/blog/dynamic-json-schemas](https://www.boundaryml.com/blog/dynamic-json-schemas)
It's most useful if you don't want to create pydantic models and are just loading straight up json schema files and want to give them to the LLM (and guarantee that the output matches the schema **100%**).
It even works with 1B parameter models.
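If you're not using BAML, you can approximate the same guarantee by validating against the schema and retrying (rough sketch with the `jsonschema` package, not our actual implementation; `call_llm` is whatever completion function you use):

```python
import json
from jsonschema import validate, ValidationError

def structured_output(call_llm, schema: dict, prompt: str, max_retries: int = 3):
    """Ask the model for JSON matching `schema`, re-prompting with the error on failure."""
    message = f"{prompt}\n\nReply with JSON matching this schema:\n{json.dumps(schema, indent=2)}"
    for _ in range(max_retries):
        raw = call_llm(message)                        # any completion function that returns a string
        try:
            data = json.loads(raw)
            validate(instance=data, schema=schema)     # raises ValidationError if it violates the schema
            return data
        except (json.JSONDecodeError, ValidationError) as err:
            message += f"\n\nYour last reply was invalid ({err}). Reply again with valid JSON only."
    raise RuntimeError("model never produced schema-valid JSON")
```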
I posted a video of it working with llama 3.2 1B here: [https://x.com/aaron\_\_vi/status/1861210112385532330](https://x.com/aaron__vi/status/1861210112385532330) . I'll make a more in-depth ✨ interactive post ✨ about this later!
Happy to answer any questions. | 2024-11-27T23:56:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h1ipi1/generating_structured_output_from_a_dynamic_json/ | fluxwave | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1ipi1 | false | null | t3_1h1ipi1 | /r/LocalLLaMA/comments/1h1ipi1/generating_structured_output_from_a_dynamic_json/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'EO6XyVSZaJLliFJkUWEOEWkxJqwKbNcj8IkvM-OvtFo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0f8OaGqmHb8AxhV3hGCzQkhWP1Asn3RsompoWwDbxjE.jpg?width=108&crop=smart&auto=webp&s=24323ac3892ddfcbd632c8af2b9eb719f2f22b9e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0f8OaGqmHb8AxhV3hGCzQkhWP1Asn3RsompoWwDbxjE.jpg?width=216&crop=smart&auto=webp&s=621dd28f1ec1f01e8e83944c35c5f9a8846f6dac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0f8OaGqmHb8AxhV3hGCzQkhWP1Asn3RsompoWwDbxjE.jpg?width=320&crop=smart&auto=webp&s=da03c802149cb5e367b4df1d34e445754c1a0e4d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0f8OaGqmHb8AxhV3hGCzQkhWP1Asn3RsompoWwDbxjE.jpg?width=640&crop=smart&auto=webp&s=9ef8563ec9d2353d0324400b830673080b31d5b9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0f8OaGqmHb8AxhV3hGCzQkhWP1Asn3RsompoWwDbxjE.jpg?width=960&crop=smart&auto=webp&s=c8024466dae8b733390d0f296cbd6817da979243', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0f8OaGqmHb8AxhV3hGCzQkhWP1Asn3RsompoWwDbxjE.jpg?width=1080&crop=smart&auto=webp&s=9bf6f234c9728c19fdfc8124cf2f0aa1146f7417', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0f8OaGqmHb8AxhV3hGCzQkhWP1Asn3RsompoWwDbxjE.jpg?auto=webp&s=2813e6cbddb50a1526f844788d8b4e099a0db83f', 'width': 1200}, 'variants': {}}]} |
ELI5: How do I use Mistral for NSFW/adult content? | 76 | I've never used a local AI/GPT. How do I get started? | 2024-11-28T01:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h1l28d/eli5_how_do_i_use_mistral_for_nsfwadult_content/ | msp_ryno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1l28d | false | null | t3_1h1l28d | /r/LocalLLaMA/comments/1h1l28d/eli5_how_do_i_use_mistral_for_nsfwadult_content/ | false | false | nsfw | 76 | null |
SLM vs LLM | 5 | Every time I read about yet another “SOTA” small language model (SLM), I get excited about the potential to run faster inference on fewer compute resources and get similar performance to open-source LLMs. But then I try subbing the SLM for the LLM I’m using, and I’m so disappointed that it either breaks my app or doesn’t perform as well.
The reason it breaks is usually that it doesn’t handle structured output well. The reason it underperforms is that it gets answers wrong or plain hallucinates.
I’m currently seeing great performance running qwen2.5-coder:32b through Ollama. But switching to small language models from the Llama family, IBM granite family and the latest Tülu3 from AllenAI and Marco-o1 from Alibaba all fail.
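To be clear about what "swapping" means in my setup: with Ollama the model is literally one string in the request, which makes the quality gap the only real problem (rough sketch against the local REST API; the model tags are just examples):

```python
import requests

def ask(model: str, prompt: str) -> str:
    """One chat call against a local Ollama server; only `model` changes when swapping LLM <-> SLM."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "format": "json",   # request JSON-constrained output
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# Same call, different tags - the small models are where the structured output tends to break:
# ask("qwen2.5-coder:32b", "Return JSON with keys 'title' and 'summary' for: ...")
# ask("llama3.2:3b", "Return JSON with keys 'title' and 'summary' for: ...")
```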
I am longing for the day when I can just swap model names and not have to tweak prompts, write controls for edge conditions, or rip out prompt optimizers.
Anyone have luck going from a large to small model? | 2024-11-28T02:29:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h1lphn/slm_vs_llm/ | International_Quail8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1lphn | false | null | t3_1h1lphn | /r/LocalLLaMA/comments/1h1lphn/slm_vs_llm/ | false | false | self | 5 | null |
Anthropic "universal" MCP is disappointing | 8 | 48 hours ago they announced [MCP](https://www.anthropic.com/news/model-context-protocol)
**The pitch?**
MCP is supposed to standardize how LLMs interact with external tools.
It’s built around the ideas of:
* **Client** (the LLM)
* **Server** (the tools/resources)
It’s supposed to give LLMs a universal way to access external resources and APIs while allowing for safety and privacy.
**The reality?**
The release comes with Python and TypeScript SDKs, which sound exciting.
But if you dig in, the tooling is mostly about building server apps that LLMs can call.
The *only* working client right now is Claude Desktop.
So, instead of being a universal protocol, it currently just adds features to their own ecosystem.
**The potential?**
If other LLM providers start building clients, MCP could become something big.
For now, though, it’s more of a bet on whether Anthropic can push this to industry adoption.
What do you think, bluff or genuine long-term play? | 2024-11-28T02:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h1m69w/anthropic_universal_mcp_is_disappointing/ | MrCyclopede | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1m69w | false | null | t3_1h1m69w | /r/LocalLLaMA/comments/1h1m69w/anthropic_universal_mcp_is_disappointing/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'hUs_Lw7C6ZXEyxCTiuYEj1OmGi-ODT0e2LYmeuuR4cw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FBDECinMUfwcODQEFyZ5-jcH_bSGpkMdVYDzfJTahPw.jpg?width=108&crop=smart&auto=webp&s=0c333fab2db084051dfa487666fa833e5ccb2f87', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/FBDECinMUfwcODQEFyZ5-jcH_bSGpkMdVYDzfJTahPw.jpg?width=216&crop=smart&auto=webp&s=8c48c1a2693b721fb8243e517d1aeedad25852aa', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/FBDECinMUfwcODQEFyZ5-jcH_bSGpkMdVYDzfJTahPw.jpg?width=320&crop=smart&auto=webp&s=2629233e21321edac9fb85ed7d78df415f85bbc9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/FBDECinMUfwcODQEFyZ5-jcH_bSGpkMdVYDzfJTahPw.jpg?width=640&crop=smart&auto=webp&s=e28e7eca7a7c3672725c02d061d6c84664dda244', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/FBDECinMUfwcODQEFyZ5-jcH_bSGpkMdVYDzfJTahPw.jpg?width=960&crop=smart&auto=webp&s=87b1908b04c336f06c58a06cc22bbe93b9891976', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/FBDECinMUfwcODQEFyZ5-jcH_bSGpkMdVYDzfJTahPw.jpg?width=1080&crop=smart&auto=webp&s=bd0ad27fcb2f059d0c5844898fdb21e407bd5d97', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/FBDECinMUfwcODQEFyZ5-jcH_bSGpkMdVYDzfJTahPw.jpg?auto=webp&s=eebb710d53f9a70deed39defcd3e2e0efe3ee4bd', 'width': 2400}, 'variants': {}}]} |
Qwq-32B-Preview-4bit solves a problem that big LLMs started solving this year. | 1 | [removed] | 2024-11-28T03:05:49 | https://www.reddit.com/r/LocalLLaMA/comments/1h1mdhj/qwq32bpreview4bit_solves_a_problem_that_big_llms/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1mdhj | false | null | t3_1h1mdhj | /r/LocalLLaMA/comments/1h1mdhj/qwq32bpreview4bit_solves_a_problem_that_big_llms/ | false | false | self | 1 | null |
Qwen releases a preview of QwQ /kwju:/ — an open model designed to advance AI reasoning capabilities. | 89 | Blog: [https://qwenlm.github.io/blog/qwq-32b-preview/](https://qwenlm.github.io/blog/qwq-32b-preview/)
Model: [https://hf.co/Qwen/QwQ-32B-Preview](https://hf.co/Qwen/QwQ-32B-Preview)
Demo: [https://hf.co/spaces/Qwen/QwQ-32B-preview](https://hf.co/spaces/Qwen/QwQ-32B-preview)
QwQ has preliminarily demonstrated remarkable capabilities, especially in solving some challenges in mathematics and coding. As a preview release, we acknowledge its limitations. We earnestly invite the open research community to collaborate with us to explore the boundaries of the unknown!
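A quick way to try it locally is the usual transformers chat pattern (rough sketch; assumes you have the VRAM for the full-precision weights, and the prompt is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many r's are in the word strawberry?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=1024)   # QwQ tends to produce long reasoning traces
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```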
https://preview.redd.it/8x0js1hqak3e1.png?width=2232&format=png&auto=webp&s=080e33eb4b92511fdaf3d52e7aff1848bba9c391
| 2024-11-28T03:31:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h1mtn4/qwen_releases_a_preview_of_qwq_kwju_an_open_model/ | geringonco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1mtn4 | false | null | t3_1h1mtn4 | /r/LocalLLaMA/comments/1h1mtn4/qwen_releases_a_preview_of_qwq_kwju_an_open_model/ | false | false | 89 | null |
|
tabbyapi speculative decoding for exl2 works for Llama 3.x 8B models with a 1B draft model | 18 | I've tried out tabbyapi tonight, and it was fairly easy to configure after I added two exl2 quanted models to the appropriate directory.
[https://github.com/theroyallab/tabbyAPI](https://github.com/theroyallab/tabbyAPI)
I quanted my own 6bpw exl2 of Llama 3.2 1B Instruct to use as the draft model against a Llama 3 8B merge I made and quanted locally at 8bpw. I figured that would be a good tradeoff for speed against accuracy, as the target model would have veto anyway at higher accuracy, though one could probably go as low as 4bpw with the draft model. I haven't done comparative benchmarking of tradeoffs. For convenience, exl2 quants of the draft model I selected can be found here:
[https://huggingface.co/turboderp/Llama-3.2-1B-Instruct-exl2](https://huggingface.co/turboderp/Llama-3.2-1B-Instruct-exl2)
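For context on why the small draft model helps, here's the draft-then-verify idea in simplified Python (greedy variant only; `generate_greedy` and `greedy_at_each_position` are made-up placeholder methods, not tabbyAPI or exllamav2 calls):

```python
def speculative_decode(target, draft, prompt_ids, k=4, max_new=256):
    """Greedy draft-then-verify: the draft proposes k tokens, the target keeps the agreeing prefix."""
    out = list(prompt_ids)
    while len(out) - len(prompt_ids) < max_new:
        proposed = draft.generate_greedy(out, k)               # k cheap tokens from the small draft model
        # In real backends this is a single batched forward pass of the target over out + proposed,
        # giving the target's greedy choice at each drafted position plus one bonus token.
        checked = target.greedy_at_each_position(out, proposed)
        n_ok = 0
        while n_ok < k and proposed[n_ok] == checked[n_ok]:
            n_ok += 1                                          # accept drafted tokens the target agrees with
        out.extend(checked[:n_ok + 1])                         # always gain at least one target-quality token
    return out
```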
The tokenizer.json differences between Llama 3 Instruct and Llama 3.2 Instruct are relatively minor, essentially the same for casual use, proving that models sized for edge computing can serve effectively as draft models. Right now keeping both models in memory with 8K context and batch size 512 occupies under 12GB VRAM. The tokens generated per second is variable for creative tasks, but the typical and peak rates are definitely higher than what I recall of running exl2 under oobabooga/text-generation-webui. It's definitely an improvement when running on an RTX 4060ti 16GB GPU. | 2024-11-28T03:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h1mvs3/tabbyapi_speculative_decoding_for_exl2_works_for/ | grimjim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1mvs3 | false | null | t3_1h1mvs3 | /r/LocalLLaMA/comments/1h1mvs3/tabbyapi_speculative_decoding_for_exl2_works_for/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'po_l4tC4iPxqCfP_3KFKd3y3ewsyuSYrYzudkrzuClw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Fk2Kic5WleasBSKwj30OSiGW7xavKIBY2K9z3wIUdB0.jpg?width=108&crop=smart&auto=webp&s=1ce60a5727db0b70c6c2962c6b9ce4dbb57509b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Fk2Kic5WleasBSKwj30OSiGW7xavKIBY2K9z3wIUdB0.jpg?width=216&crop=smart&auto=webp&s=04648881bf1efb97a8f8505cc8ff6ded77ea098c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Fk2Kic5WleasBSKwj30OSiGW7xavKIBY2K9z3wIUdB0.jpg?width=320&crop=smart&auto=webp&s=be1864968d85bbbf2bf226009480ed29cfc29664', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Fk2Kic5WleasBSKwj30OSiGW7xavKIBY2K9z3wIUdB0.jpg?width=640&crop=smart&auto=webp&s=238053557f771a0b420c6b1806397b9e76ebf60b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Fk2Kic5WleasBSKwj30OSiGW7xavKIBY2K9z3wIUdB0.jpg?width=960&crop=smart&auto=webp&s=6ff4f5442893578368c3fa0b132608a4eb9f78bd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Fk2Kic5WleasBSKwj30OSiGW7xavKIBY2K9z3wIUdB0.jpg?width=1080&crop=smart&auto=webp&s=f4eace7c9a84f2e54dfd982a3d127287541c4f9b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Fk2Kic5WleasBSKwj30OSiGW7xavKIBY2K9z3wIUdB0.jpg?auto=webp&s=475009fd97deb2107dfb84082ee56d0cd2bf4599', 'width': 1200}, 'variants': {}}]} |
Introducing Steel.dev 🚧 - The Open-source Browser API for AI Agents | 1 | hey r/LocalLLaMA
wanted to share our recently open-sourced `steel-browser` repo: [github.com/steel-dev/steel-browser](http://github.com/steel-dev/steel-browser)
the `steel-browser` repo is the main building block powering us over at [steel.dev](http://steel.dev)
some background: steel is an open-source browser api for ai agents & apps. we make it easy for ai devs to build browser automation into their products without getting flagged as a bot or worrying about browser infra. one api call to spin up a browser session with dedicated resources (2gb vram/cpu), built-in stealth, anti-fingerprinting, proxies, and captcha solving out of the box.
over the last year, we built several ai apps that interact with the web. two things became clear: when an llm successfully used the web, it was magical - but browser infrastructure consumed \~80% of our development time. managing browser pools, cleaning html, proxies, cloudflare, and scaling became a massive engineering challenge that distracted from building our core product. so we built the solution we wished existed.
today, we’re open-sourcing the code for the steel browser instance, with plans to open-source the orchestration layer soon. With the repo, you get backward compatibility with our node/python sdk, a lighter version of our session viewer, and most of the features that come with steel cloud. you can run this locally to test steel out at an individual session level or one-click deploy to render/railway to run remotely.
super pumped to share this with fellow os llm chads 🔥 would love to hear how you might integrate this with your llm setups and what features would be useful for you.
here are the docs for the curious: [https://docs.steel.dev](https://docs.steel.dev) | 2024-11-28T04:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h1ndrd/introducing_steeldev_the_opensource_browser_api/ | butchT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1ndrd | false | null | t3_1h1ndrd | /r/LocalLLaMA/comments/1h1ndrd/introducing_steeldev_the_opensource_browser_api/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rt8lxBHz5Cf6DDMC6HbwKGVJ4hL7DRqJHxTmLeMwBpA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZfID76ubl9jZs4pKRwNEACYc7Vg7azXs_Jss8xC7nCU.jpg?width=108&crop=smart&auto=webp&s=2eb5b77ca389be4da90491b388cd6699466bb8d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZfID76ubl9jZs4pKRwNEACYc7Vg7azXs_Jss8xC7nCU.jpg?width=216&crop=smart&auto=webp&s=7552aa4d514898db8fa328fff356408112540951', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZfID76ubl9jZs4pKRwNEACYc7Vg7azXs_Jss8xC7nCU.jpg?width=320&crop=smart&auto=webp&s=4dd13a272c59113b2cd3d4e42ecab5c3e4bffcb3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZfID76ubl9jZs4pKRwNEACYc7Vg7azXs_Jss8xC7nCU.jpg?width=640&crop=smart&auto=webp&s=51bb1a800e90c268831eee226da30841fb364d52', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZfID76ubl9jZs4pKRwNEACYc7Vg7azXs_Jss8xC7nCU.jpg?width=960&crop=smart&auto=webp&s=a14e87d0157473a3c82cedb234a16df8b7a7e5ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZfID76ubl9jZs4pKRwNEACYc7Vg7azXs_Jss8xC7nCU.jpg?width=1080&crop=smart&auto=webp&s=f0708e84db6c3bac8d0158bbea766a312dba580d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZfID76ubl9jZs4pKRwNEACYc7Vg7azXs_Jss8xC7nCU.jpg?auto=webp&s=4f696a800e8b7da01a96694815aeab5e5de81f78', 'width': 1200}, 'variants': {}}]} |
Steel.dev 🚧 - The Open-source Browser API for AI Agents | 175 | 2024-11-28T04:09:52 | https://github.com/steel-dev/steel-browser | butchT | github.com | 1970-01-01T00:00:00 | 0 | {} | 1h1ni3c | false | null | t3_1h1ni3c | /r/LocalLLaMA/comments/1h1ni3c/steeldev_the_opensource_browser_api_for_ai_agents/ | false | false | 175 | {'enabled': False, 'images': [{'id': '8nJDlFHx6d_nh_6Ft8aG4ZYTxsJX5rO2oYCBudfOpKQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7W4sj96QJ9BlAM3TuW-j8fMl2B_US9VMDaW-9wm8mzE.jpg?width=108&crop=smart&auto=webp&s=1a489fddf536846d8e691c8d6c11a618323cd886', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7W4sj96QJ9BlAM3TuW-j8fMl2B_US9VMDaW-9wm8mzE.jpg?width=216&crop=smart&auto=webp&s=588191511f02fda8f1dae71be7e9a57a4003e8f9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7W4sj96QJ9BlAM3TuW-j8fMl2B_US9VMDaW-9wm8mzE.jpg?width=320&crop=smart&auto=webp&s=a89f892d3ac4b049df495a77792db99027ad5ae7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7W4sj96QJ9BlAM3TuW-j8fMl2B_US9VMDaW-9wm8mzE.jpg?width=640&crop=smart&auto=webp&s=fbebc61a96f224c139f24dc19d02e74f760e49b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7W4sj96QJ9BlAM3TuW-j8fMl2B_US9VMDaW-9wm8mzE.jpg?width=960&crop=smart&auto=webp&s=fde9f65a734562e38f0a7bdc24627101838a2cfa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7W4sj96QJ9BlAM3TuW-j8fMl2B_US9VMDaW-9wm8mzE.jpg?width=1080&crop=smart&auto=webp&s=fd18d0ea6e8d28a134b0375acd9a801cfbdd7b83', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7W4sj96QJ9BlAM3TuW-j8fMl2B_US9VMDaW-9wm8mzE.jpg?auto=webp&s=5937064a35e15056f2e63e63245870dbd898f9b1', 'width': 1200}, 'variants': {}}]} |
||
Concisely: best Approaches to run LLM locally? | 0 | Hello everyone
I am new to the exciting field of LLMs. I know the basics and key concepts such as quantization. I feel ready to get hands-on, but I don’t know where to start.
What are the most common ways to run LLMs locally (M2 / 8 GB RAM)? I’ve seen Ollama and Hugging Face, but I don’t know which is better or whether there are other alternatives. | 2024-11-28T05:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h1oi8w/concisely_best_approaches_to_run_llm_locally/ | gomezalp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1oi8w | false | null | t3_1h1oi8w | /r/LocalLLaMA/comments/1h1oi8w/concisely_best_approaches_to_run_llm_locally/ | false | false | self | 0 | null |
is there a list somewhere showing the current self hostable options for llms? | 1 | [removed] | 2024-11-28T05:27:29 | https://www.reddit.com/r/LocalLLaMA/comments/1h1osyd/is_there_a_list_somewhere_showing_the_current/ | Low-Umpire-4901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1osyd | false | null | t3_1h1osyd | /r/LocalLLaMA/comments/1h1osyd/is_there_a_list_somewhere_showing_the_current/ | false | false | self | 1 | null |
Best mid tier setup and models? | 7 | I currently have 2x3090s, and am looking at potentially building a dedicated machine for querying and using with apps. I was thinking of a new motherboard that would allow me to use 4x3090, but I am not sure how much that would open up. I can squeeze 70B models onto two 3090s, but I am not convinced it is worth it over using GPT/Claude.
I saw some mention of using EPYC processors with larger RAM configurations (300GB+). How do these perform compared to 4-7x 3090 setups?
What's the best setup you can get for around $5k-10K? | 2024-11-28T06:10:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h1phi6/best_mid_tier_setup_and_models/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1phi6 | false | null | t3_1h1phi6 | /r/LocalLLaMA/comments/1h1phi6/best_mid_tier_setup_and_models/ | false | false | self | 7 | null |
Spaghetti Build - Inference Workstation | 9 | [Meh](https://preview.redd.it/nd4edvio1l3e1.jpg?width=1406&format=pjpg&auto=webp&s=8350dd00835e7cf3a567ba02065fc55dd380719c)
AMD EPYC 7F52
256 GB DDR4 ECC 3200 (8 x 32GB)
4 x ZOTAC RTX 3090 OC with waterblock and active backplate
8 TB Intel U.2 Enterprise SSD
Silverstone HELA 2050R PSU
2x 360 Radiators
didnt want the LED's but the waterblocks came with it so why not | 2024-11-28T06:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/1h1pieh/spaghetti_build_inference_workstation/ | RateRoutine2268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1pieh | false | null | t3_1h1pieh | /r/LocalLLaMA/comments/1h1pieh/spaghetti_build_inference_workstation/ | false | false | 9 | null |
|
Is it possible to run LLMs on a Radeon VII with ROCm? | 0 | Has anyone tried running LLMs on this GPU? | 2024-11-28T06:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h1q24k/is_it_possible_to_run_llms_on_radeon_vii_with_rocm/ | Relative_Rope4234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1q24k | false | null | t3_1h1q24k | /r/LocalLLaMA/comments/1h1q24k/is_it_possible_to_run_llms_on_radeon_vii_with_rocm/ | false | false | self | 0 | null |
Alibaba QwQ 32B model reportedly challenges o1 mini, o1 preview , claude 3.5 sonnet and gpt4o and its open source | 587 | 2024-11-28T07:00:06 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1q8h3 | false | null | t3_1h1q8h3 | /r/LocalLLaMA/comments/1h1q8h3/alibaba_qwq_32b_model_reportedly_challenges_o1/ | false | false | 587 | {'enabled': True, 'images': [{'id': 'Z3uujB05zFD5DpuLdCsrOnIOaC0fSrEW4a4AOkfjf5A', 'resolutions': [{'height': 190, 'url': 'https://preview.redd.it/merjj1i9cl3e1.png?width=108&crop=smart&auto=webp&s=384fc2d13fb17401ff75235f85c3d071026e69b2', 'width': 108}, {'height': 381, 'url': 'https://preview.redd.it/merjj1i9cl3e1.png?width=216&crop=smart&auto=webp&s=abd41a8ebacc5d81dc5299e002bde70890c26e49', 'width': 216}, {'height': 565, 'url': 'https://preview.redd.it/merjj1i9cl3e1.png?width=320&crop=smart&auto=webp&s=5abe0edda02cd3a43a0d8811f65ad327ad66bd58', 'width': 320}, {'height': 1131, 'url': 'https://preview.redd.it/merjj1i9cl3e1.png?width=640&crop=smart&auto=webp&s=fe680ada3dc804efe05029e56e5af9323c3bb4e8', 'width': 640}, {'height': 1696, 'url': 'https://preview.redd.it/merjj1i9cl3e1.png?width=960&crop=smart&auto=webp&s=e9a79648b869c50bb6563c0af370be2430df0904', 'width': 960}, {'height': 1909, 'url': 'https://preview.redd.it/merjj1i9cl3e1.png?width=1080&crop=smart&auto=webp&s=63896517e6fb24151bb618911d6b8f9fc288e7c5', 'width': 1080}], 'source': {'height': 1909, 'url': 'https://preview.redd.it/merjj1i9cl3e1.png?auto=webp&s=c10621c1b4969b1470eb1724040d044a7afa81fd', 'width': 1080}, 'variants': {}}]} |
|||
Srcbook - typescript centric pair programmer | 0 | https://github.com/srcbookdev/srcbook | 2024-11-28T07:08:03 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1qd05 | false | null | t3_1h1qd05 | /r/LocalLLaMA/comments/1h1qd05/srcbook_typescript_centric_pair_programmer/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'z61Ta1IOjlCtuf5wmzPyW1lO4Gk8k9dB3AALIO-59dY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xe0ziqnodl3e1.png?width=108&crop=smart&auto=webp&s=4ce2e4e6feec30be33d817e7b56bc8619bf67852', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xe0ziqnodl3e1.png?width=216&crop=smart&auto=webp&s=34efd4a6807c87f25a2f54878a82626591c5c770', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xe0ziqnodl3e1.png?width=320&crop=smart&auto=webp&s=ed6a78076b138fe1cc4350f06c2f473e92f642f3', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/xe0ziqnodl3e1.png?width=640&crop=smart&auto=webp&s=97598070fe7fbf56edb71855f7344aad5cabfaf6', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/xe0ziqnodl3e1.png?width=960&crop=smart&auto=webp&s=5ff8413420aa11b355f8c80084ef444c233ef213', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/xe0ziqnodl3e1.png?width=1080&crop=smart&auto=webp&s=f7932c34749d8b1d5b7c46d8d3e7fcd1f35953fa', 'width': 1080}], 'source': {'height': 2163, 'url': 'https://preview.redd.it/xe0ziqnodl3e1.png?auto=webp&s=bf7355059489622d552799f1426ccb6b22fe2d46', 'width': 1080}, 'variants': {}}]} |
||
Hardware requirements to run LLMs locally | 1 | [removed] | 2024-11-28T07:43:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h1qv30/hardware_requirements_to_run_llms_locally/ | 3nohax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1qv30 | false | null | t3_1h1qv30 | /r/LocalLLaMA/comments/1h1qv30/hardware_requirements_to_run_llms_locally/ | false | false | self | 1 | null |
Hardware setup for small team? | 1 | I would be grateful for info on the type of hardware setup you have for running a coding model (e.g. Qwen2.5 32B or similar) supporting around 20 developers. Any examples where inference speed is usable for a team of that size? | 2024-11-28T08:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h1rao7/hardware_setup_for_small_team/ | pkz_swe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1rao7 | false | null | t3_1h1rao7 | /r/LocalLLaMA/comments/1h1rao7/hardware_setup_for_small_team/ | false | false | self | 1 | null |
High tokens per second extremely bad time to first token | 1 | On a MacBook Air M1 16GB using LM Studio, I'm running into an issue where my token speeds are great but the time to first token is TOO long. For example:
With deepseek-coder-v2-lite (roughly an 8GB model) I'm getting 21.55 tokens per second, but it takes 32 seconds to the first token.
What time to first token do you get?
Am I wrong that half a minute before anything is output is kinda long for it to be usable?
Note: I'm using a 2048-token context length and it's an MLX model (which is faster in LM Studio, at least with M-series Macs). Any suggestions? | 2024-11-28T08:17:54 | https://www.reddit.com/r/LocalLLaMA/comments/1h1rcnl/high_tokens_per_second_extremely_bad_time_to/ | mockedarche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1rcnl | false | null | t3_1h1rcnl | /r/LocalLLaMA/comments/1h1rcnl/high_tokens_per_second_extremely_bad_time_to/ | false | false | self | 1 | null |
Log analysis using LLM | 1 | [removed] | 2024-11-28T08:37:37 | https://www.reddit.com/r/LocalLLaMA/comments/1h1rm8p/log_analysis_using_llm/ | AcademicCarpet1296 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1rm8p | false | null | t3_1h1rm8p | /r/LocalLLaMA/comments/1h1rm8p/log_analysis_using_llm/ | false | false | self | 1 | null |
ELI5: Why does CoT work? | 5 | Why does chain of thought work? I’m kind of confused. Initially, because o1-preview doesn’t release its thought process, I thought the generated tokens were just internal and acted as a context shifter, moving the latent space of the model towards topic domains closer to the question being asked.
However, looking at QwQ and R1-lite’s reasoning, it seems that the model _really_ develops a kind of awareness of what it is saying during the chain-of-thought process. For example, in my mind strawberry is still multiple tokens in both the CoT and non-CoT process, so it should still just generate two different tokens for strawberry, but it seems that when you apply chain of thought the model really does develop an awareness. | 2024-11-28T09:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h1s0nn/eli5_why_does_cot_work/ | Educational_Gap5867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1s0nn | false | null | t3_1h1s0nn | /r/LocalLLaMA/comments/1h1s0nn/eli5_why_does_cot_work/ | false | false | self | 5 | null |
llama rope code in transformers | 1 | [removed] | 2024-11-28T09:08:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h1s0r2/llama_rope_code_in_transformers/ | No_Shower65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1s0r2 | false | null | t3_1h1s0r2 | /r/LocalLLaMA/comments/1h1s0r2/llama_rope_code_in_transformers/ | false | false | 1 | null |
|
llama rope code in transformers | 1 | [removed] | 2024-11-28T09:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/1h1s291/llama_rope_code_in_transformers/ | No_Shower65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1s291 | false | null | t3_1h1s291 | /r/LocalLLaMA/comments/1h1s291/llama_rope_code_in_transformers/ | false | false | self | 1 | null |
llama rope code in transformers | 1 | [removed] | 2024-11-28T09:13:46 | https://www.reddit.com/r/LocalLLaMA/comments/1h1s3ay/llama_rope_code_in_transformers/ | No_Shower65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1s3ay | false | null | t3_1h1s3ay | /r/LocalLLaMA/comments/1h1s3ay/llama_rope_code_in_transformers/ | false | false | self | 1 | null |
a question about llama rope code in transformers | 1 | [removed] | 2024-11-28T09:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/1h1s5xl/a_question_about_llama_rope_code_in_transformers/ | No_Shower65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1s5xl | false | null | t3_1h1s5xl | /r/LocalLLaMA/comments/1h1s5xl/a_question_about_llama_rope_code_in_transformers/ | false | false | self | 1 | null |
Top 10 Tips for Quantizing LLMs and VLMs with AutoRound | 1 | [removed] | 2024-11-28T09:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h1sdvf/top_10_tips_for_quantizing_llms_and_vlms_with/ | Difficult_Bottle_456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1sdvf | false | null | t3_1h1sdvf | /r/LocalLLaMA/comments/1h1sdvf/top_10_tips_for_quantizing_llms_and_vlms_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Jc_xI0o8ettswF3yCMSgi3oKxZtojjwPSbqV-wrN_7g', 'resolutions': [{'height': 32, 'url': 'https://external-preview.redd.it/toKdq-6PDIl2p4kWjVd7mO9QBBwusOM_QSyYlRtSe6g.jpg?width=108&crop=smart&auto=webp&s=601f84b9fe01157ce24fd748c912c3e4e0b9348c', 'width': 108}, {'height': 65, 'url': 'https://external-preview.redd.it/toKdq-6PDIl2p4kWjVd7mO9QBBwusOM_QSyYlRtSe6g.jpg?width=216&crop=smart&auto=webp&s=c26f6c35c85d64914d84d1d178a14a825413d075', 'width': 216}, {'height': 96, 'url': 'https://external-preview.redd.it/toKdq-6PDIl2p4kWjVd7mO9QBBwusOM_QSyYlRtSe6g.jpg?width=320&crop=smart&auto=webp&s=d7cce84ca5bedf59a47855ae0a215a3b05da235e', 'width': 320}, {'height': 193, 'url': 'https://external-preview.redd.it/toKdq-6PDIl2p4kWjVd7mO9QBBwusOM_QSyYlRtSe6g.jpg?width=640&crop=smart&auto=webp&s=59c92b840c21d694647959a61739d9b629c173cb', 'width': 640}, {'height': 290, 'url': 'https://external-preview.redd.it/toKdq-6PDIl2p4kWjVd7mO9QBBwusOM_QSyYlRtSe6g.jpg?width=960&crop=smart&auto=webp&s=c9a1a310089dff8895830b08be104cd53485ebec', 'width': 960}, {'height': 326, 'url': 'https://external-preview.redd.it/toKdq-6PDIl2p4kWjVd7mO9QBBwusOM_QSyYlRtSe6g.jpg?width=1080&crop=smart&auto=webp&s=72be386d41448ba741283336153ce2c1fc46043a', 'width': 1080}], 'source': {'height': 363, 'url': 'https://external-preview.redd.it/toKdq-6PDIl2p4kWjVd7mO9QBBwusOM_QSyYlRtSe6g.jpg?auto=webp&s=52685b8fdafc245b022f3380cb1673ed573c6b56', 'width': 1200}, 'variants': {}}]} |
Best value Home Assistant box? | 3 | Say you want to run Home Assistant hypervisor and Ollama on a cheap and efficient home server. What would you get?
* Nvidia Jetson
* Mac Mini
* Basic mini itx system
* Raspberry Pi with external GPU??
Mini itx is I guess sort of the baseline, just get some i3 and last gen GPU.
The Mac mini base model seems incredible value, but upgrades cost gold. I read ram bandwidth isn't the best.
Likewise Nvidia Jetson seems like it has not the best bandwidth, is very expensive, but has a lot of ram and is very efficient.
The raspi is the odd one out. Jeff Geerling is benchmarking AMD GPUs on them with some success. Could actually be a decent option?
Any other options I've missed?
| 2024-11-28T09:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/1h1sgdg/best_value_home_assistant_box/ | pepijndevos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1sgdg | false | null | t3_1h1sgdg | /r/LocalLLaMA/comments/1h1sgdg/best_value_home_assistant_box/ | false | false | self | 3 | null |
I asked QwQ and R1 to 'break' the webpage, and it performed more creatively than R1-lite. | 46 | [QwQ is cute in it's own ways](https://preview.redd.it/ojmm4umc4m3e1.png?width=2833&format=png&auto=webp&s=e6d2daacd73e797194978678a545b10e38e3fa65)
[QwQ is passionate ](https://preview.redd.it/4023k0yp4m3e1.png?width=2792&format=png&auto=webp&s=801a906d3b9eb7455733e9b6008c8d8348a85085)
[R1-lite](https://preview.redd.it/x7ib7pt25m3e1.png?width=872&format=png&auto=webp&s=ff6f81651af4124efa97003961c8fc59eabcd91c)
| 2024-11-28T09:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h1sgo3/i_asked_qwq_and_r1_to_break_the_webpage_and_it/ | nanowell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1sgo3 | false | null | t3_1h1sgo3 | /r/LocalLLaMA/comments/1h1sgo3/i_asked_qwq_and_r1_to_break_the_webpage_and_it/ | false | false | 46 | null |
|
Qwen QwQ 32b Preview is amazing | 1 | [removed] | 2024-11-28T09:53:43 | https://www.reddit.com/r/LocalLLaMA/comments/1h1sm60/qwen_qwq_32b_preview_is_amazing/ | dondiegorivera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1sm60 | false | null | t3_1h1sm60 | /r/LocalLLaMA/comments/1h1sm60/qwen_qwq_32b_preview_is_amazing/ | false | false | 1 | null |
|
Help on RAG app deployment | 1 | [removed] | 2024-11-28T09:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h1smj6/help_on_rag_app_deployment/ | we_dont_know_my_name | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1smj6 | false | null | t3_1h1smj6 | /r/LocalLLaMA/comments/1h1smj6/help_on_rag_app_deployment/ | false | false | self | 1 | null |
Intermediary for Chinese API models | 1 | [removed] | 2024-11-28T10:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h1t1ux/intermediary_for_chinese_api_models/ | AnomalyNexus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1t1ux | false | null | t3_1h1t1ux | /r/LocalLLaMA/comments/1h1t1ux/intermediary_for_chinese_api_models/ | false | false | self | 1 | null |
Which approach yields better accuracy: fine-tuning a 4-bit quantised model or fine-tuning in 16-bit and then quantising? | 0 | I am working with large language models like LLaMA 3.1 8B, and I am confused about how different fine-tuning and quantisation strategies affect performance and accuracy. One approach is to fine-tune the model after it has been quantised to 4-bit precision. Another approach is to fine-tune the model in 16-bit precision first and then apply quantisation afterwards.
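For context on what the first option usually means in practice, here is a minimal QLoRA-style sketch, assuming the Hugging Face transformers + peft + bitsandbytes stack; the model id and hyperparameters are placeholders, not a recommendation:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Approach 1: load the base model already quantised to 4-bit (NF4) and train LoRA adapters on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",   # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
# ...then run the usual SFT/Trainer loop; only the adapter weights are updated.
# Approach 2 would instead fine-tune the full 16-bit model and quantise (e.g. GGUF/GPTQ/AWQ) afterwards.
```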
so which approach will give better result. | 2024-11-28T10:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/1h1t25p/which_approach_yields_better_accuracy_finetuning/ | Inzy01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1t25p | false | null | t3_1h1t25p | /r/LocalLLaMA/comments/1h1t25p/which_approach_yields_better_accuracy_finetuning/ | false | false | self | 0 | null |
how do you replicate cursor.ai locally? | 4 | I like Cursor. It works very well for me, but I would like to replicate it with a local LLM.
Is it possible? I like the experience of the Cursor app. Proposing change-> apply /reject workflow. | 2024-11-28T10:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1h1t3wt/how_do_you_replicate_cursorai_locally/ | tuananh_org | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1t3wt | false | null | t3_1h1t3wt | /r/LocalLLaMA/comments/1h1t3wt/how_do_you_replicate_cursorai_locally/ | false | false | self | 4 | null |
I ran my misguided attention eval locally on QwQ-32B 4bit quantized and it beats o1-preview and o1-mini. | 208 | The benchmark ([more background here](https://github.com/cpldcpu/MisguidedAttention)) basically tests for overfitting of LLMs to well-known logical puzzles. Even large models are very sensitive to it; however, models with integrated CoT or MCTS approaches fared better. So far, o1-preview was the best-performing model with an average of 0.64, but QwQ scored an average of 0.66.
[Midrange models](https://preview.redd.it/ves07epopm3e1.png?width=4163&format=png&auto=webp&s=347f2af132e7cdb145bf52c00f9dfa37161e14c3)
[Flagship models](https://preview.redd.it/qmgh9ecqqm3e1.png?width=4167&format=png&auto=webp&s=32a37551ec599b381c44bc57ffc600d33c6027cd)
I am quite impressed to have such a model locally. I get about 26tk/s on an 3090. I will try to rerun with full precision from a provider.
The token limit was set to 4000. Two results were truncated because they exceeded the token limit, but it did not look like they would pass with a longer token limit. | 2024-11-28T11:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h1u7r9/i_ran_my_misguided_attention_eval_locally_on/ | cpldcpu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1u7r9 | false | null | t3_1h1u7r9 | /r/LocalLLaMA/comments/1h1u7r9/i_ran_my_misguided_attention_eval_locally_on/ | false | false | 208 | {'enabled': False, 'images': [{'id': 'mecMP5wJo8c-PdEJ4v09nAV2dp7wHSQ_NgjKFhDc2b4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZkidyezG_PissFep0rFnumRTLyE4f4vAhm3QE6shv_0.jpg?width=108&crop=smart&auto=webp&s=b6e480cbe8fa755cefa9f9224b6e127841f5bbed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZkidyezG_PissFep0rFnumRTLyE4f4vAhm3QE6shv_0.jpg?width=216&crop=smart&auto=webp&s=6dfd643ab461730f244652126a4f4bcabaebe4d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZkidyezG_PissFep0rFnumRTLyE4f4vAhm3QE6shv_0.jpg?width=320&crop=smart&auto=webp&s=0670c9f4762082d6ced9cbb718d13e7b2933fe68', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZkidyezG_PissFep0rFnumRTLyE4f4vAhm3QE6shv_0.jpg?width=640&crop=smart&auto=webp&s=a73b38d3b44c9e87cfa98fff03dc7d2578fb0746', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZkidyezG_PissFep0rFnumRTLyE4f4vAhm3QE6shv_0.jpg?width=960&crop=smart&auto=webp&s=4ea800f482d05612d235ea2d802f11206ce5ebc3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZkidyezG_PissFep0rFnumRTLyE4f4vAhm3QE6shv_0.jpg?width=1080&crop=smart&auto=webp&s=fec6bbe65cb6a5eb37ea93c14e201f394463ac91', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZkidyezG_PissFep0rFnumRTLyE4f4vAhm3QE6shv_0.jpg?auto=webp&s=ec10051c4d491777ae92e0aaf0fe698d0d0272d8', 'width': 1200}, 'variants': {}}]} |
|
[D] Why aren't Stella embeddings more widely used despite topping the MTEB leaderboard? | 25 | https://huggingface.co/spaces/mteb/leaderboard
I've been looking at embedding models and noticed something interesting: Stella embeddings are crushing it on the MTEB leaderboard, outperforming OpenAI's models while being way smaller (1.5B/400M params) and apache 2.0
For reference, Stella-400M scores 70.11 on MTEB vs OpenAI's text-embedding-3-large 64.59. The 1.5B version scores even higher at 71.19
Yet I rarely see them mentioned in production use cases or discussions. Has anyone here used Stella embeddings in production? What's been your experience with performance, inference speed, and reliability compared to OpenAI's offerings?
Just trying to understand if there's something I'm missing about why they haven't seen wider adoption despite the impressive benchmarks.
Would love to hear your thoughts and experiences! | 2024-11-28T11:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h1u9rz/d_why_arent_stella_embeddings_more_widely_used/ | sdsd19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1u9rz | false | null | t3_1h1u9rz | /r/LocalLLaMA/comments/1h1u9rz/d_why_arent_stella_embeddings_more_widely_used/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'XyO6BICbW4Hg8xmbvc3hN3cENx4gTiYAHoZDX0xzla0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=108&crop=smart&auto=webp&s=96645ff2d3c13c9de5b8e543d793398e8378a5ce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=216&crop=smart&auto=webp&s=5fe7dd25ac52b49026818459348b727a60f76c95', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=320&crop=smart&auto=webp&s=46bd623b4140579283466426f35db45a2716afdf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=640&crop=smart&auto=webp&s=b024b9ba08b61cf952b69cb7507fca3e1ebfa39e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=960&crop=smart&auto=webp&s=4a6c9716fe66802e32392d34d5d6cafa747a6c2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=1080&crop=smart&auto=webp&s=52ce36e354673a164cd33267e3c92737187dd009', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?auto=webp&s=e855b63ad31d3cd9e7b74c56d92057a31258081f', 'width': 1200}, 'variants': {}}]} |
QwQ-32B-Preview benchmarked in farel-bench, the result is 96.67 - better than Claude 3.5 Sonnet, a bit worse than o1-preview and o1-mini | 161 | 2024-11-28T11:50:57 | https://github.com/fairydreaming/farel-bench | fairydreaming | github.com | 1970-01-01T00:00:00 | 0 | {} | 1h1uas5 | false | null | t3_1h1uas5 | /r/LocalLLaMA/comments/1h1uas5/qwq32bpreview_benchmarked_in_farelbench_the/ | false | false | 161 | {'enabled': False, 'images': [{'id': 'NOxOH_jxp3TWbfUgbAQKAtA2kHdM41ar93FHbO0bWUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LWqEq0WgLIRqQxvzdzb02RRcX2VwAMOYbPn_r96BAQs.jpg?width=108&crop=smart&auto=webp&s=e1f5cd9a762241c0ed9df5a87e1e546b3d0a0c7b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LWqEq0WgLIRqQxvzdzb02RRcX2VwAMOYbPn_r96BAQs.jpg?width=216&crop=smart&auto=webp&s=03406ce4b8786a637617373cef7dd641ce77f4ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LWqEq0WgLIRqQxvzdzb02RRcX2VwAMOYbPn_r96BAQs.jpg?width=320&crop=smart&auto=webp&s=aecc0abc7a7f5a87978d098d085b365bc9ffc0cb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LWqEq0WgLIRqQxvzdzb02RRcX2VwAMOYbPn_r96BAQs.jpg?width=640&crop=smart&auto=webp&s=84481f76e9d77d4b5205e58298fb4ec96d7f9261', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LWqEq0WgLIRqQxvzdzb02RRcX2VwAMOYbPn_r96BAQs.jpg?width=960&crop=smart&auto=webp&s=dbd3565192697827a1981a10ea4d86b29fe0693e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LWqEq0WgLIRqQxvzdzb02RRcX2VwAMOYbPn_r96BAQs.jpg?width=1080&crop=smart&auto=webp&s=afc777c267d24b4ef40b23889ce0cc0182b72d95', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LWqEq0WgLIRqQxvzdzb02RRcX2VwAMOYbPn_r96BAQs.jpg?auto=webp&s=6422ee51b2dd152be0ed4655e6fa80f1d572ad7d', 'width': 1200}, 'variants': {}}]} |
||
The AI scene really has been full of surprises the last couple of years.. | 1 | 2024-11-28T12:02:49 | AnaYuma | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1uh6r | false | null | t3_1h1uh6r | /r/LocalLLaMA/comments/1h1uh6r/the_ai_scene_really_has_been_full_of_surprises/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'EiLyZ5SpoUPJ3fvJgG-xsyeVob6Zh8CJ6OEurelTGOw', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/5kfuk7c7um3e1.png?width=108&crop=smart&auto=webp&s=c04d36429283afff2941784515516f1bb9f65b10', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/5kfuk7c7um3e1.png?width=216&crop=smart&auto=webp&s=5cb26f76f3d68e7f43c3402d63b2797104c7470d', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/5kfuk7c7um3e1.png?width=320&crop=smart&auto=webp&s=1e165b129105e52f5b216b05c2509b8f81577041', 'width': 320}], 'source': {'height': 238, 'url': 'https://preview.redd.it/5kfuk7c7um3e1.png?auto=webp&s=bc2cc5fe30ff269eacea86012672404217409bb0', 'width': 446}, 'variants': {}}]} |
|||
Qwen 2.5 Coder 32B Creating Cross-Platform Desktop App | 1 | [removed] | 2024-11-28T12:12:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h1umln/qwen_25_coder_32b_creating_crossplatform_desktop/ | marvijo-software | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1umln | false | null | t3_1h1umln | /r/LocalLLaMA/comments/1h1umln/qwen_25_coder_32b_creating_crossplatform_desktop/ | false | false | self | 1 | null |
Qwen 2.5 Coder 32B Creating a Desktop App + Database | 15 | I only ever see people creating web applications with LLMs; I've never seen a desktop app being created. So I created a straightforward SQLite Pomodoro cross-platform desktop app to try it out, using Aider + Qwen 2.5 Coder 32B.
If there are any Python developers here, please let me know what you think of the quality of the code the LLM created.
Code: [https://github.com/marvijo-code/pomodoro-desktop](https://github.com/marvijo-code/pomodoro-desktop) | 2024-11-28T12:20:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h1ur68/qwen_25_coder_32b_creating_a_desktop_app_database/ | marvijo-software | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1ur68 | false | null | t3_1h1ur68 | /r/LocalLLaMA/comments/1h1ur68/qwen_25_coder_32b_creating_a_desktop_app_database/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'ZYwVOoAC1llFC48GPF8eEapbox-FLSaobO7GzD0ujIk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ss9xo8r7Fg8dU_-nOz651TenKDLEEiZHI8Qe_3QPe6g.jpg?width=108&crop=smart&auto=webp&s=8bce86ece75596f39248ed538e4469a24998ea32', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ss9xo8r7Fg8dU_-nOz651TenKDLEEiZHI8Qe_3QPe6g.jpg?width=216&crop=smart&auto=webp&s=61c05da8692fb734ce39525d405495eeb4b0b913', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ss9xo8r7Fg8dU_-nOz651TenKDLEEiZHI8Qe_3QPe6g.jpg?width=320&crop=smart&auto=webp&s=14c9130d5c59452d4f80558c5cabb658f5e21bad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ss9xo8r7Fg8dU_-nOz651TenKDLEEiZHI8Qe_3QPe6g.jpg?width=640&crop=smart&auto=webp&s=35983ae50bd3118ea44a90be76b18d8a3eecb12c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ss9xo8r7Fg8dU_-nOz651TenKDLEEiZHI8Qe_3QPe6g.jpg?width=960&crop=smart&auto=webp&s=9da8b39e6ed5e781f0386bc1aac055997f102f70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ss9xo8r7Fg8dU_-nOz651TenKDLEEiZHI8Qe_3QPe6g.jpg?width=1080&crop=smart&auto=webp&s=b8fa06d5cfad6138f0219bdf758c9698dba266bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ss9xo8r7Fg8dU_-nOz651TenKDLEEiZHI8Qe_3QPe6g.jpg?auto=webp&s=b917c8025b954d9c8a4d7f8035a9d5d6528a45af', 'width': 1200}, 'variants': {}}]} |
Anyone implemented the EU AI act at work yet? | 0 | So the EU passed a rather restrictive AI act which came into force a few months ago.
Looks like it's being ushered in step by step, so right now it's not very restrictive.
But it seems like no one really understands how to implement it in practice yet. I can only find opinions/speculation about the act on YouTube.
Is there anyone here who has actually tried to implement the rules and feels comfortable that they follow them?
For example. What are the implications of using local ai models on your own setup vs using local ai models but on rented servers. Practical things like that.
| 2024-11-28T12:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h1v189/anyone_implemented_the_eu_ai_act_at_work_yet/ | Sea-Replacement7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1v189 | false | null | t3_1h1v189 | /r/LocalLLaMA/comments/1h1v189/anyone_implemented_the_eu_ai_act_at_work_yet/ | false | false | self | 0 | null |
Speed for 70B Model and Various Prompt Sizes on M3-Max | 26 | Yesterday, I [compared the RTX 4090 and M3-Max](https://www.reddit.com/r/LocalLLaMA/comments/1h0bsyz/how_prompt_size_dramatically_affects_speed/) using the Llama-3.1-8B-q4_K_M.
Today, I ran the same test on the M3-Max 64GB with the Llama-3.1-70B, using q4_K_M and q5_K_M. Q5_K_M is the highest quant that I can fully load the entire 70B model with a 30k context into memory.
I included additional notes and some thoughts from previous post below the results.
## Q4_K_M
| prompt tokens | tk/s | generated tokens | tk/s | total duration |
| --- | --- | --- | --- | --- |
| 258 | 67.71 | 579 | 8.21 | 1m17s |
| 687 | 70.44 | 823 | 7.99 | 1m54s |
| 778 | 70.24 | 905 | 8.00 | 2m5s |
| 782 | 72.74 | 745 | 8.00 | 1m45s |
| 1169 | 72.46 | 784 | 7.96 | 1m56s |
| 1348 | 71.38 | 780 | 7.91 | 1m58s |
| 1495 | 71.95 | 942 | 7.90 | 2m21s |
| 1498 | 71.46 | 761 | 7.90 | 1m58s |
| 1504 | 71.77 | 768 | 7.89 | 1m59s |
| 1633 | 69.11 | 1030 | 7.86 | 2m36s |
| 1816 | 70.20 | 1126 | 7.85 | 2m50s |
| 1958 | 68.70 | 1047 | 7.84 | 2m43s |
| 2171 | 69.63 | 841 | 7.80 | 2m20s |
| 4124 | 67.37 | 936 | 7.57 | 3m6s |
| 6094 | 65.62 | 779 | 7.33 | 3m20s |
| 8013 | 64.39 | 855 | 7.15 | 4m5s |
| 10086 | 62.45 | 719 | 6.95 | 4m26s |
| 12008 | 61.19 | 816 | 6.77 | 5m18s |
| 14064 | 59.62 | 713 | 6.55 | 5m46s |
| 16001 | 58.35 | 772 | 6.42 | 6m36s |
| 18209 | 57.27 | 798 | 6.17 | 7m29s |
| 20234 | 55.93 | 1050 | 6.02 | 8m58s |
| 22186 | 54.78 | 996 | 5.84 | 9m37s |
| 24244 | 53.63 | 1999 | 5.58 | 13m32s |
| 26032 | 52.64 | 1009 | 5.50 | 11m20s |
| 28084 | 51.74 | 960 | 5.33 | 12m5s |
| 30134 | 51.03 | 977 | 5.18 | 13m1s |
## Q5_K_M
| prompt tokens | tk/s | generated tokens | tk/s | total duration |
| --- | --- | --- | --- | --- |
| 258 | 61.32 | 588 | 5.83 | 1m46s |
| 687 | 63.50 | 856 | 5.77 | 2m40s |
| 778 | 66.01 | 799 | 5.77 | 2m31s |
| 782 | 66.43 | 869 | 5.75 | 2m44s |
| 1169 | 66.16 | 811 | 5.72 | 2m41s |
| 1348 | 65.09 | 883 | 5.69 | 2m57s |
| 1495 | 65.75 | 939 | 5.66 | 3m10s |
| 1498 | 64.90 | 887 | 5.66 | 3m1s |
| 1504 | 65.33 | 903 | 5.66 | 3m4s |
| 1633 | 62.57 | 795 | 5.64 | 2m48s |
| 1816 | 63.99 | 1089 | 5.64 | 3m43s |
| 1958 | 62.50 | 729 | 5.63 | 2m42s |
| 2171 | 63.58 | 1036 | 5.60 | 3m40s |
| 4124 | 61.42 | 852 | 5.47 | 3m44s |
| 6094 | 60.10 | 930 | 5.18 | 4m42s |
| 8013 | 58.56 | 682 | 5.24 | 4m28s |
| 10086 | 57.52 | 858 | 5.16 | 5m43s |
| 12008 | 56.17 | 730 | 5.04 | 6m |
| 14064 | 54.98 | 937 | 4.96 | 7m26s |
| 16001 | 53.94 | 671 | 4.86 | 7m16s |
| 18209 | 52.80 | 958 | 4.79 | 9m7s |
| 20234 | 51.79 | 866 | 4.67 | 9m39s |
| 22186 | 50.83 | 787 | 4.56 | 10m12s |
| 24244 | 50.06 | 893 | 4.45 | 11m27s |
| 26032 | 49.22 | 1104 | 4.35 | 13m5s |
| 28084 | 48.41 | 825 | 4.25 | 12m57s |
| 30134 | 47.76 | 891 | 4.16 | 14m8s |
Notes:
* I used the latest llama.cpp as of today, and I ran each test as one shot generation (not accumulating prompt via multiturn chat style).
* I enabled Flash attention and set temperature to 0.0 and the random seed to 1000.
* Total duration is total execution time, not total time reported from llama.cpp.
* The total duration for processing longer prompts was sometimes shorter than for shorter ones because more tokens were generated.
* You can estimate the time to see the first token as Total Duration - (Tokens Generated ÷ Tokens Per Second); see the sketch after these notes.
* For example, feeding a 30k token prompt to q4_K_M requires waiting 9m 52s before the first token appears.
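Here is a minimal sketch of that estimate in Python, using the 30k-prompt Q4_K_M row from the table above (function and variable names are just illustrative):

```python
def estimate_ttft(total_duration_s: float, generated_tokens: int, gen_rate_tps: float) -> float:
    """Time to first token = total duration minus the time spent generating tokens."""
    return total_duration_s - generated_tokens / gen_rate_tps

# 30134-token prompt on Q4_K_M: 13m1s total, 977 tokens generated at 5.18 tk/s
ttft_s = estimate_ttft(13 * 60 + 1, 977, 5.18)
print(f"~{ttft_s / 60:.1f} min to first token")  # ~9.9 min, i.e. roughly the 9m52s mentioned above
```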
Few thoughts from previous post:
If you often use a particular long prompt, prompt caching can save time by skipping reprocessing.
Whether Mac is right for you depends on your use case and speed tolerance:
For tasks like processing long documents or codebases, you should be prepared to wait around. For these, I just use ChatGPT for quality anyways. Once in a while when I need more power for heavy tasks like fine-tuning, I rent GPUs from Runpod.
If your main use is casual chatting or asking coding questions with short prompts, the speed is adequate in my opinion. Personally, I find 7 tokens/second very usable and even tolerate 5 tokens/second. For context, people read an average of [238 words per minute](https://www.sciencedirect.com/science/article/abs/pii/S0749596X19300786). Depending on the model, 5 tokens/second translates to approximately 225 words per minute: 5 (tokens/s) * 60 (seconds) * 0.75 (words/token) | 2024-11-28T12:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h1v7mn/speed_for_70b_model_and_various_prompt_sizes_on/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1v7mn | false | null | t3_1h1v7mn | /r/LocalLLaMA/comments/1h1v7mn/speed_for_70b_model_and_various_prompt_sizes_on/ | false | false | self | 26 | null |
GUI LLM Agents use-cases | 4 | A lot of research has been done recently to improve and enable LLM-driven agents operating at a GUI level. To name a few recent ones:
* [ShowUI: One Vision-Language-Action Model for GUI Visual Agent](https://arxiv.org/abs/2411.17465)
* [OS-ATLAS: A Foundation Action Model for Generalist GUI Agents](https://arxiv.org/abs/2410.23218)
* [Agent S: An Open Agentic Framework that Uses Computers Like a Human](https://arxiv.org/abs/2410.08164)
There also been a steady flow of papers prior to this, related to both desktop and mobile GUI agents and related tools. In addition to that, there are rumours of OpenAI releasing their "Operator" in early Jan 2025.
All of the existing work (excluding Operator, which is not released yet) shows performance that is too low to accomplish complex, meaningful tasks (GAIA, OSWorld, Windows Agent Arena, etc.) - the success rate fluctuates between 10% and 50% (rough ballpark, across papers/leaderboards) of human capability on the same tasks.
Interacting with these agents makes these limitations very prominent: loops, inefficient choice of tooling, misunderstanding the GUI state, inability to translate the plan into action, etc. As an employee, I was always irritated when colleagues required constant help accomplishing their tasks, and I can imagine being even more irritated by an LLM-driven system with similar characteristics. In other words, people will have much less patience for LLM-driven agents underperforming in scenarios that are considered "basic" for a specific task.
So based on the current agent performance, I have a feeling that we're still lacking a generation or two of reasoning, planning and world modelling in LLMs/LMMs/VLLMs before the scores are "up there".
What are your experience and expectations? | 2024-11-28T12:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h1vegx/gui_llm_agents_usecases/ | Everlier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1vegx | false | null | t3_1h1vegx | /r/LocalLLaMA/comments/1h1vegx/gui_llm_agents_usecases/ | false | false | self | 4 | null |
Recommendation for local setup | 0 | I'm thinking of an M4 Pro Mac Mini with 64GB, which comes to around $2000.
Can anyone who runs local LLMs suggest whether this is good, or if I should just build a PC with multiple Nvidia cards?
Suggest based on price and performance. | 2024-11-28T13:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h1vhei/recommendation_for_local_setup/ | MasterDragon_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1vhei | false | null | t3_1h1vhei | /r/LocalLLaMA/comments/1h1vhei/recommendation_for_local_setup/ | false | false | self | 0 | null |
Evaluating QWQ: Strong Math Performance but Issues with Character Counting – A Tokenization Theory | 5 | Hi, I have been evaluating the Qwen QWQ (base Ollama version - 32b-preview-q4\_K\_M) on my M1 Max MBP. Whilst excited about the performance (with my results aligning it on par with o1-mini), I have noticed some potential issues. Here are some observations from initial testing.
\- Math: The model excels at performing mathematical calculations, getting all of my basic math questions correct. When prompting with equations such as 85729\*9.03, with streaming, what impressed me was how QWQ solved the equation like a human would, breaking down using the distributive property, place value multiplication with carryovers, scaling by division for decimals, and summing the parts.
\- Letter Counting: When prompted to count the letter r's in the word strawberry, QWQ was only able to count correctly \~50% of the time. It seems as if the way you structure your prompt affects the performance of QWQ's ability to count. For example, the prompt "In the sentence “You scream, I scream, we all scream for ice cream!”, how many letter e’s are in the sentence?", QWQ only succeeded \~25% of the time.
Theory: QWQ’s superior performance on math questions compared to character counting likely stems from the tokenizer. In the model’s tokenizer, numerical characters are single, atomic tokens, meaning numbers like 857 are tokenized as 8, 5, and 7. This enables precise arithmetic handling. In contrast, words used for character counting are tokenized inconsistently. For example, “action” could be broken into “act,” “ion,” or even individual characters, varying by context. This inconsistency can cause errors; e.g., when counting the letter “e” in “scream,” the tokenizer might treat “ea” as a single token, missing the standalone “e” or, in some cases, count “ea” as two e’s, depending on the model’s training and how it learned to process “ea”.
To reinforce this idea, when QWQ is asked to count letters in random strings that can only be tokenized as individual characters (ensuring the string does not contain any subtokens, e.g., ‘yxeftlrxqezmkvej’, while counting the “e”s), it achieves a 100% success rate. It can also be observed that in QWQ’s “thinking” stage, when counting characters in sequences that can be broken down into subtokens (rather than individual characters per token), it tends to count like “are contains 1 e, it contains no e’s.” In contrast, when there are no subtokens, it counts more explicitly, like “a - not an e, x - not an e, e - is an e (count is 1).”
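If you want to poke at this yourself, here is a small sketch for inspecting the splits, assuming the Hugging Face transformers tokenizer for Qwen/QwQ-32B-Preview (the exact pieces you get back depend on the tokenizer, so treat the comments as illustrative):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/QwQ-32B-Preview")

for s in ["strawberry", "scream", "857", "yxeftlrxqezmkvej"]:
    pieces = tok.tokenize(s)
    print(f"{s!r}: {pieces} ({len(pieces)} tokens)")
# The output shows which strings collapse into multi-character subword tokens (common words)
# and which break down into per-character or per-digit tokens (digits, rare strings).
```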
Humans intuitively tokenize language differently depending on the task: when listening or speaking, we often perceive words as single units, processing them holistically as concepts or ideas. However, when performing tasks like counting or arithmetic, we tend to break numbers down into individual components — for example, viewing “104” as hundreds, tens, and units. Investigating how tokenizers can mimic this human behavior, by recognizing when to treat words or numbers as unified concepts versus decomposing them into smaller parts, could lead to more effective and accurate models for specific tasks. This approach warrants further research to identify the best tokenization strategies for different types of reasoning.
In conclusion, tokenization significantly impacts tasks like counting and arithmetic. By adapting tokenization strategies to treat certain words or numbers holistically, future models could improve performance. Research into more flexible tokenizers could better align with human reasoning, optimizing task-specific accuracy.
Device used - 2022 32GB M1 Max MacBook Pro (\~9 tokens/s)
Model used - QWQ 32b-preview-q4\_K\_M (Ollama v0.4.6) | 2024-11-28T13:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h1vpg3/evaluating_qwq_strong_math_performance_but_issues/ | oliverbravery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1vpg3 | false | null | t3_1h1vpg3 | /r/LocalLLaMA/comments/1h1vpg3/evaluating_qwq_strong_math_performance_but_issues/ | false | false | self | 5 | null |
First impressions of Qwen QwQ 32b Preview | 1 | [removed] | 2024-11-28T13:38:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h1w4kp/first_impressions_of_qwen_qwq_32b_preview/ | dondiegorivera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1w4kp | false | null | t3_1h1w4kp | /r/LocalLLaMA/comments/1h1w4kp/first_impressions_of_qwen_qwq_32b_preview/ | false | false | self | 1 | null |
How to extract key-value pairs from image using VLMs? | 0 | I am working on information extraction such as name, address, license\_no, etc. There could be multiple names and the pdf can get very complicated. I also need to identify which checkboxes are ticked and which are not.
The documents could be hand filled or digitally typed.
Right now, I am making a copy of the filled PDF, deleting every input by the user and adding my own template variables such as <name>, <address> in those fields. Then I am sending both the template page and the filled page as images to gpt-4o and asking it to generate key-value pairs. It returns me a JSON like this - {"<name>": "Benzinga", "address":"405, Driveway Street"}.
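For reference, the current approach is roughly the following - a hedged sketch assuming the official openai Python SDK; the prompt text, file names, and base64 handling are placeholders rather than my exact code:

```python
import base64
from openai import OpenAI

client = OpenAI()

def to_data_url(path: str) -> str:
    # Encode a page image as a data URL the chat API accepts.
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Match the template fields (<name>, <address>, ...) to the "
                                     "filled form and return the key-value pairs as JSON."},
            {"type": "image_url", "image_url": {"url": to_data_url("template_page.png")}},
            {"type": "image_url", "image_url": {"url": to_data_url("filled_page.png")}},
        ],
    }],
)
print(resp.choices[0].message.content)
```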
I want to train a model in this format such that the model takes two images in the input i.e. template image and filled image and give the key-value pairs as output. Also it should identify all the checkboxes and give me their coordinates and their state (whether ticked or not).
I need some pointers on which model to select, and how the dataset would look like and also how many training samples is a good starting point.
What I have already tried -
1. OCR models like Kosmos 2.5, Surya, MiniCPM-V 2.6, GOT 2.0, etc. The OCR outputs are not very reliable; the filled value sometimes gets attached to the sentence above or below.
2. Passing OCR text to GPT-4o and asking it to output key-value pairs. The OCR itself is often incorrect.
Please, I need your guidance. The current approach works 90% of the time, but I want to shift to a locally run model. | 2024-11-28T13:45:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h1wa3i/how_to_extract_keyvalue_pairs_from_image_using/ | GHOST--1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1wa3i | false | null | t3_1h1wa3i | /r/LocalLLaMA/comments/1h1wa3i/how_to_extract_keyvalue_pairs_from_image_using/ | false | false | self | 0 | null |
Alibaba's Qwen QwQ-32B-Preview is pretty amazing | 1 | [removed] | 2024-11-28T14:44:47 | https://www.reddit.com/r/LocalLLaMA/comments/1h1xgfh/alibabas_qwen_qwq32bpreview_is_pretty_amazing/ | dondiegorivera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1xgfh | false | null | t3_1h1xgfh | /r/LocalLLaMA/comments/1h1xgfh/alibabas_qwen_qwq32bpreview_is_pretty_amazing/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Hcdo2yU4Xt5_FbgR7-qomfLS6PQuZlikXFE9w-fxH7A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=108&crop=smart&auto=webp&s=b71b9b8651901a3a27889606bb3a6d2e9655384d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=216&crop=smart&auto=webp&s=fed10c006155486eec5e14feae0de5c85d62690e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=320&crop=smart&auto=webp&s=d76e08936bbc012bbf55b577d5f954a39dd9c528', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=640&crop=smart&auto=webp&s=5926580ef87bf41f0289d01faf3c4860e0868988', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=960&crop=smart&auto=webp&s=19f01dcbe21288fb04ec6048a8ac7253c539913c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=1080&crop=smart&auto=webp&s=4528ca87e612f196609b3ba01baf8abd5ef42d2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?auto=webp&s=525d1653b958de2592b6ac502079e06f077c2026', 'width': 1200}, 'variants': {}}]} |
Alibaba's Qwen QwQ-32B-Preview is pretty amazing | 1 | [removed] | 2024-11-28T14:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/1h1xhp1/alibabas_qwen_qwq32bpreview_is_pretty_amazing/ | dondiegorivera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1xhp1 | false | null | t3_1h1xhp1 | /r/LocalLLaMA/comments/1h1xhp1/alibabas_qwen_qwq32bpreview_is_pretty_amazing/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Hcdo2yU4Xt5_FbgR7-qomfLS6PQuZlikXFE9w-fxH7A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=108&crop=smart&auto=webp&s=b71b9b8651901a3a27889606bb3a6d2e9655384d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=216&crop=smart&auto=webp&s=fed10c006155486eec5e14feae0de5c85d62690e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=320&crop=smart&auto=webp&s=d76e08936bbc012bbf55b577d5f954a39dd9c528', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=640&crop=smart&auto=webp&s=5926580ef87bf41f0289d01faf3c4860e0868988', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=960&crop=smart&auto=webp&s=19f01dcbe21288fb04ec6048a8ac7253c539913c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?width=1080&crop=smart&auto=webp&s=4528ca87e612f196609b3ba01baf8abd5ef42d2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dnlflrwj9DjfjaIyK3PunXlNZgzehygFzk5YCLi52oQ.jpg?auto=webp&s=525d1653b958de2592b6ac502079e06f077c2026', 'width': 1200}, 'variants': {}}]} |
Janus, a new multimodal understanding and generation model from Deepseek, running 100% locally in the browser on WebGPU with Transformers.js! | 220 | 2024-11-28T14:48:45 | https://v.redd.it/z9wprh2lnn3e1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1xjdy | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/z9wprh2lnn3e1/DASHPlaylist.mpd?a=1735397338%2CZGUxNzRhYjhlZjcyNWQzNTgxMDY5MDQ4MGMxNWFlMjU4MWJmYTJlYzgzMGM3M2FkYTI2MzRkNWU4ZjQyY2ZhMQ%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/z9wprh2lnn3e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/z9wprh2lnn3e1/HLSPlaylist.m3u8?a=1735397338%2CYmE2ZWViMjRjY2UxNjVjNjgwOTg3NzlkNTYxMWViN2FmMTgxNDM2MjIzNjU0NjMxMzNmNjE1MDFiNWE4ZWI0Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/z9wprh2lnn3e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1h1xjdy | /r/LocalLLaMA/comments/1h1xjdy/janus_a_new_multimodal_understanding_and/ | false | false | 220 | {'enabled': False, 'images': [{'id': 'ZnkwYTJoMmxubjNlMcKRa7R0PJYfsJfcKmt1pAlgbBVWpNXSB4rPRHPd6KAS', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZnkwYTJoMmxubjNlMcKRa7R0PJYfsJfcKmt1pAlgbBVWpNXSB4rPRHPd6KAS.png?width=108&crop=smart&format=pjpg&auto=webp&s=e9ac738c39482fa2009d74bfce26e80571b1e055', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ZnkwYTJoMmxubjNlMcKRa7R0PJYfsJfcKmt1pAlgbBVWpNXSB4rPRHPd6KAS.png?width=216&crop=smart&format=pjpg&auto=webp&s=7a9e96127b556460bbe70090d2b4102c6c089d8b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ZnkwYTJoMmxubjNlMcKRa7R0PJYfsJfcKmt1pAlgbBVWpNXSB4rPRHPd6KAS.png?width=320&crop=smart&format=pjpg&auto=webp&s=c51f424395d7416b50c2e39c626b671116f8b0f3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ZnkwYTJoMmxubjNlMcKRa7R0PJYfsJfcKmt1pAlgbBVWpNXSB4rPRHPd6KAS.png?width=640&crop=smart&format=pjpg&auto=webp&s=7eb6fe47c8403fbeb2cd1d5abd67c59fe1fca114', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/ZnkwYTJoMmxubjNlMcKRa7R0PJYfsJfcKmt1pAlgbBVWpNXSB4rPRHPd6KAS.png?format=pjpg&auto=webp&s=1040ad713f93217209538f52d6aa94c5d43a3f2e', 'width': 900}, 'variants': {}}]} |
||
Nomic Embeddings: "input is too large to process. increase the physical batch size" | 2 | Hello, hive mind!
Can someone help me figure this one out? I'm running Nomic 1.5 embeddings in llama.cpp (dockerized, if that matters).
These are my Docker environment params:
- GGML_CUDA_NO_PINNED=1
- LLAMA_CTX_SIZE=4096
- LLAMA_MODEL=/models/nomic-embed-text-v1.5.Q8_0.gguf
- LLAMA_N_GPU_LAYERS=99
- LLAMA_BATCH_SIZE=2048
- LLAMA_UBATCH_SIZE=512
- LLAMA_THREADS=4
- LLAMA_LOG_FILE=llama
- LLAMA_EMBEDDINGS=
I have an app (PHP code) that sends chunks of only 512 tokens to embed, and yet I hit "input is too large to process. increase the physical batch size" on nearly half the chunks I am sending. The token estimation is done with just the GPT-2 method - I am not sure how to estimate chunks for Nomic in PHP. But with a context window of 4096, you'd think there's a lot of room to spare? Am I missing something obvious here?
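One thing worth checking (Python sketch below, not PHP, just as an offline sanity check): how many tokens the chunks actually are under the tokenizer that ships with the model, since a GPT-2-based estimate can undercount. The repo id is assumed to be the standard Hugging Face one:

```python
from transformers import AutoTokenizer

# Tokenizer that ships with nomic-embed-text-v1.5 (not GPT-2, so counts can differ)
tok = AutoTokenizer.from_pretrained("nomic-ai/nomic-embed-text-v1.5")

chunk = "...one of the '512-token' chunks from the PHP app goes here..."
n_tokens = len(tok.encode(chunk))
print(n_tokens)  # if this regularly exceeds LLAMA_UBATCH_SIZE, that would explain the error
```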
| 2024-11-28T14:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h1xjgs/nomic_embeddings_input_is_too_large_to_process/ | lurkalotter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1xjgs | false | null | t3_1h1xjgs | /r/LocalLLaMA/comments/1h1xjgs/nomic_embeddings_input_is_too_large_to_process/ | false | false | self | 2 | null |
Alibaba's Qwen QwQ-32B-Preview is amazing
| 1 | [removed] | 2024-11-28T14:49:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h1xjtu | false | null | t3_1h1xjtu | /r/LocalLLaMA/comments/1h1xjtu/alibabas_qwen_qwq32bpreview_is_amazing/ | false | false | default | 1 | null |
||
Docker + Colab + Unsloth = Most Convenient VLM Fine-Tuning Environment on local 16GB RAM GPU! | 1 | [removed] | 2024-11-28T15:00:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h1xrqu/docker_colab_unsloth_most_convenient_vlm/ | AdOdd4004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1xrqu | false | null | t3_1h1xrqu | /r/LocalLLaMA/comments/1h1xrqu/docker_colab_unsloth_most_convenient_vlm/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
|
Sources for KG vs RD? Reasons not to use Relational databases? | 1 | [removed] | 2024-11-28T15:07:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h1xxfb/sources_for_kg_vs_rd_reasons_not_to_use/ | PopPsychological4106 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1xxfb | false | null | t3_1h1xxfb | /r/LocalLLaMA/comments/1h1xxfb/sources_for_kg_vs_rd_reasons_not_to_use/ | false | false | self | 1 | null |
Perplexity AI PRO - 1 YEAR PLAN OFFER - 75% OFF | 1 | [removed] | 2024-11-28T15:18:23 | MReus11R | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1y60b | false | null | t3_1h1y60b | /r/LocalLLaMA/comments/1h1y60b/perplexity_ai_pro_1_year_plan_offer_75_off/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'yUHXhhjUsd3DEhtIKt4wUw6JYBbkekMUJedZSV1QR2I', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/oaxzcpz4tn3e1.jpeg?width=108&crop=smart&auto=webp&s=1c28bef32a7307798110694a9793508dc5643ff5', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/oaxzcpz4tn3e1.jpeg?width=216&crop=smart&auto=webp&s=8323f0959400ffb68ce2bafadc062de3c2dbb5ba', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/oaxzcpz4tn3e1.jpeg?width=320&crop=smart&auto=webp&s=f4d5e50e1d9de8fe310dfaab2eafeeffc3593758', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/oaxzcpz4tn3e1.jpeg?width=640&crop=smart&auto=webp&s=9d55b4c83c1c802c751e72d45c63e5e7806fbb42', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/oaxzcpz4tn3e1.jpeg?width=960&crop=smart&auto=webp&s=37ca79280861be5a4f9ecd2abfda3b113014510f', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/oaxzcpz4tn3e1.jpeg?width=1080&crop=smart&auto=webp&s=13ce0f47b994d6905e57b3d631efb220bd776f3c', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/oaxzcpz4tn3e1.jpeg?auto=webp&s=d2cdac0a649d58de35df558d90df555d51a2ab7b', 'width': 2000}, 'variants': {}}]} |
||
How can we achieve AGI without active learning? | 0 | I always thought that AGI should be something that learns through its interactions with the world. One big thing (among many others) that divides us from LLMs is that we grow and learn, but they do not.
I understand that there are very real problems with active learning, like its computational cost and alignment issues, but how can we have AGI without active learning, i.e. learning from experience?
Why isn't this field explored more? Even if alignment becomes more difficult, we don't suddenly turn evil just by talking to someone. There could be buffering techniques in place to prevent reinforcement on certain topics, using an evaluator model to filter what gets reinforced into the main model.
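To make the buffering idea concrete, a toy sketch of the loop I have in mind (every function name here is a hypothetical placeholder, not an existing API):

```python
def active_learning_step(main_model, evaluator, experience_buffer, new_interaction):
    # Gate each new experience through a separate evaluator model before it can
    # ever be reinforced into the main model's weights.
    verdict = evaluator.judge(new_interaction)      # hypothetical: "safe" vs "discard"
    if verdict == "safe":
        experience_buffer.append(new_interaction)
    if len(experience_buffer) >= 1024:              # reinforce in batches, not per interaction
        main_model.finetune(experience_buffer)      # hypothetical fine-tune call
        experience_buffer.clear()
```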
I also have the feeling that an AGI should be active, not a frozen photograph of a brain that can't change.
I know about in-context learning, but it seems too expensive: as context increases, the cost per token increases and performance decreases (on very long contexts).
What do you think about this?
Just trying to not be caught by the Roko's Basylisk doing nothing you know... | 2024-11-28T15:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h1y7yj/how_can_we_achieve_agi_without_active_learning/ | xXstekkaXx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1y7yj | false | null | t3_1h1y7yj | /r/LocalLLaMA/comments/1h1y7yj/how_can_we_achieve_agi_without_active_learning/ | false | false | self | 0 | null |
Prometheus-7b-v2, Command-R, Command-R+ models in Judge Arena | 12 | 2024-11-28T15:21:28 | https://huggingface.co/spaces/AtlaAI/judge-arena | fortunemaple | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h1y8g9 | false | null | t3_1h1y8g9 | /r/LocalLLaMA/comments/1h1y8g9/prometheus7bv2_commandr_commandr_models_in_judge/ | false | false | 12 | {'enabled': False, 'images': [{'id': '2pKOE_cH-iLNwbkBArlt9q9vDvznDbuN24oBI1hdF8I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=108&crop=smart&auto=webp&s=f76322d1f211fa7b2194b25db6ad874383c2a088', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=216&crop=smart&auto=webp&s=12a11c4bd286ed0720eb3b353f9a46cd2034b481', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=320&crop=smart&auto=webp&s=f992a24fe706c3037983cfa130d03df335a1f6d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=640&crop=smart&auto=webp&s=4ef15551728ccd8c5ef80474900a3acd5e6672b6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=960&crop=smart&auto=webp&s=1dfb937f7807f08897bc845e25a39dffb3d917f7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?width=1080&crop=smart&auto=webp&s=9becfd7854a14d90d6bef3aa1cb6fd62df468000', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q8d9Gs6iSjNPovhzGPsZPlAdpPi69ik1CBtr_gapsyw.jpg?auto=webp&s=10fda0b7afd2060d69403af42a16d4b78a21a07f', 'width': 1200}, 'variants': {}}]} |
||
NotebookLM on a document with just "poopoo peepee" in it | 0 | I am not the author, just noticed that we missed this masterpiece in the community here.
Recently I've also given my GUI agent a task of "doing whatever you want" and it just went to enumerate all the files in my home directory one by one (weirdly autistic thing to do).
Any other such experiments you're aware of? | 2024-11-28T15:22:50 | https://youtu.be/gfr4BP4V1R8?si=Tv_h9nzSoNEgIid6 | Everlier | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1h1y9jx | false | {'oembed': {'author_name': 'Gabbsby', 'author_url': 'https://www.youtube.com/@Gabbsby', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/gfr4BP4V1R8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI discusses document that just says “Poopoo Peepee”"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/gfr4BP4V1R8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI discusses document that just says “Poopoo Peepee”', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1h1y9jx | /r/LocalLLaMA/comments/1h1y9jx/notebooklm_on_a_document_with_just_poopoo_peepee/ | false | false | 0 | {'enabled': False, 'images': [{'id': '0k3uMPfS4X1jThWQGwgouurXNqCF0pkfiEGlTCOUOzc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qGNjHYZ33Kx6joJn9I6Cnk85OGBHilSYehfymQZU93Y.jpg?width=108&crop=smart&auto=webp&s=9ab914fb2bbca3cf78db426fbc7b63279f2f74e2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/qGNjHYZ33Kx6joJn9I6Cnk85OGBHilSYehfymQZU93Y.jpg?width=216&crop=smart&auto=webp&s=04a57e63551c15fec6fa492dd000c1de7ddca9b0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/qGNjHYZ33Kx6joJn9I6Cnk85OGBHilSYehfymQZU93Y.jpg?width=320&crop=smart&auto=webp&s=cc7929cd18cceba7858f2a32ace323d1da7d4361', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/qGNjHYZ33Kx6joJn9I6Cnk85OGBHilSYehfymQZU93Y.jpg?auto=webp&s=30d8459102fb61eb0efabcb0a590fe98c9c5a2bf', 'width': 480}, 'variants': {}}]} |
|
If it works, it works | 1 | 2024-11-28T15:49:40 | https://www.reddit.com/gallery/1h1yuh7 | Creative-Society3786 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1h1yuh7 | false | null | t3_1h1yuh7 | /r/LocalLLaMA/comments/1h1yuh7/if_it_works_it_works/ | false | false | 1 | null |
||
Is "Local LLaMA" still a relevant name? | 1 | [removed] | 2024-11-28T15:52:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h1ywc6 | false | null | t3_1h1ywc6 | /r/LocalLLaMA/comments/1h1ywc6/is_local_llama_still_a_relevant_name/ | false | false | default | 1 | null |
||
Small LLM for Summarisation with multi-language support | 2 | Hi,
Looking for a small LLM (less than 2B) good for summarisation for conversation written in Portuguese, with a good context window (about 8000 tokens would be good). Any tips or suggestions would be welcome. Thanks | 2024-11-28T15:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h1yzda/small_llm_for_summarisation_with_multilanguage/ | mwon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1yzda | false | null | t3_1h1yzda | /r/LocalLLaMA/comments/1h1yzda/small_llm_for_summarisation_with_multilanguage/ | false | false | self | 2 | null |
LLaMA-Mesh running locally in Blender | 541 | 2024-11-28T16:04:05 | individual_kex | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h1z5yk | false | null | t3_1h1z5yk | /r/LocalLLaMA/comments/1h1z5yk/llamamesh_running_locally_in_blender/ | false | false | 541 | {'enabled': True, 'images': [{'id': 'rE8EuckCDFh9NDDUg9D-OjQcESYFbcAvhlvtzzbqmvU', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=108&crop=smart&format=png8&s=6ebf7398dfeb07b1e7361d6d39ca47a3bc3465e2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=216&crop=smart&format=png8&s=5760e801df5e90abb865ec7462d3ba82264081c8', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=320&crop=smart&format=png8&s=a22f7281ff2479be8279dfe9e175a3afddb8b61a', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=640&crop=smart&format=png8&s=0f89556337f90307c08ea4410d675ec1666bd8d4', 'width': 640}], 'source': {'height': 405, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?format=png8&s=40b6ae797ccfe619efa768e3a74c34309dff0c4e', 'width': 720}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=108&crop=smart&s=e0812696e7b6cedef3eaf145ab3716ecd750158f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=216&crop=smart&s=be5f941b9a26e57d90440dfb9e3181ff16fe9e54', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=320&crop=smart&s=8fcd7029259a8a98fd17df03c13cad0def186103', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=640&crop=smart&s=8796b924fc97a4103772715a53e8c579a3d53b73', 'width': 640}], 'source': {'height': 405, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?s=4b9d2693d1879d4bf7f2c04bccab5eddc1b886ec', 'width': 720}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=108&format=mp4&s=d8d92f551f61fb54d83966c8a7b82435057a9a46', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=216&format=mp4&s=7964f872f6b78ef867fdefe26946a3f934dc337d', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=320&format=mp4&s=8b6ef5658a6feb409bb5abc48bb3f3008c5914e8', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?width=640&format=mp4&s=121b1ad16466283349d1c758a786f1f30291eb27', 'width': 640}], 'source': {'height': 405, 'url': 'https://preview.redd.it/r6cwmsna1o3e1.gif?format=mp4&s=5c628449093070c95e98e235c092b2bf3d688a79', 'width': 720}}}}]} |
|||
Best open model to have a conversation? | 1 | [removed] | 2024-11-28T16:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1h1z6lm/best_open_model_to_have_a_conversation/ | heybunnybear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h1z6lm | false | null | t3_1h1z6lm | /r/LocalLLaMA/comments/1h1z6lm/best_open_model_to_have_a_conversation/ | false | false | self | 1 | null |
Why are there so few audio-in language models? | 18 | I see many possible applications for interfaces where the user talks and the LLM acts according to its prompt. However, I only know of multimodal LLMs from OpenAI and Google.
Are there no other players? Why is that?
PS: Is there a better name for 'audio-in LLMs'? | 2024-11-28T16:45:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h203gw/why_are_there_so_few_audioin_language_models/ | ggGeorge713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h203gw | false | null | t3_1h203gw | /r/LocalLLaMA/comments/1h203gw/why_are_there_so_few_audioin_language_models/ | false | false | self | 18 | null |
SummLlama - Summarization models in different sizes for human-preferred summaries | 30 | (I'm not affiliated)
**SummLlama Models**
**Abstract:**
This model excels at faithfulness, completeness, and conciseness, which are the three human-preferred aspects to judge what is a good summarizer.
* Faithfulness: a summarizer does not manipulate the information in the input text and add any information not directly inferable from the input text.
* Completeness: a summarizer ensures the inclusion of all key information from the input text in the output summary.
* Conciseness: a summarizer refrains from incorporating information outside the key information in the output, maintaining a succinct and focused summary.
**HuggingFace Links:**
*- SummLlama3.2-Series:*
[https://huggingface.co/DISLab/SummLlama3.2-3B](https://huggingface.co/DISLab/SummLlama3.2-3B)
*- SummLlama3.1-Series:*
[https://huggingface.co/DISLab/SummLlama3.1-8B](https://huggingface.co/DISLab/SummLlama3.1-8B)
[https://huggingface.co/DISLab/SummLlama3.1-70B](https://huggingface.co/DISLab/SummLlama3.1-70B)
*- SummLlama3-Series:*
[https://huggingface.co/DISLab/SummLlama3-8B](https://huggingface.co/DISLab/SummLlama3-8B)
[https://huggingface.co/DISLab/SummLlama3-70B](https://huggingface.co/DISLab/SummLlama3-70B)
**Research Paper:**
[https://arxiv.org/abs/2410.13116](https://arxiv.org/abs/2410.13116)
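As a rough idea of how you'd run the 3B variant locally, a hedged sketch with transformers (this is not from the model card; it assumes the fine-tune keeps the Llama chat template and accepts a plain "summarize this" instruction, so check the model card for the recommended prompt format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DISLab/SummLlama3.2-3B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

document = "...long input text to summarize..."
messages = [{"role": "user", "content": f"Please summarize the following text:\n\n{document}"}]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tok.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```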
| 2024-11-28T16:47:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h204mz/summllama_summarization_models_in_different_sizes/ | Many_SuchCases | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h204mz | false | null | t3_1h204mz | /r/LocalLLaMA/comments/1h204mz/summllama_summarization_models_in_different_sizes/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'Kd2japAScv-ogqXulysjD-vwKPIoEKHfr8_c6T2G3U4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IKBDoimx2yKTGrM40Lhc0rBeBJWiHBbmdwG_OAvvU4w.jpg?width=108&crop=smart&auto=webp&s=48fd21454cbeceb59eea4a45e091ccfc794f3b0f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IKBDoimx2yKTGrM40Lhc0rBeBJWiHBbmdwG_OAvvU4w.jpg?width=216&crop=smart&auto=webp&s=f6b59a445e65a9a2b7d303ec349e4554587f1dfa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IKBDoimx2yKTGrM40Lhc0rBeBJWiHBbmdwG_OAvvU4w.jpg?width=320&crop=smart&auto=webp&s=d549adf05402751423ca014260ea9b9fddfc31f4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IKBDoimx2yKTGrM40Lhc0rBeBJWiHBbmdwG_OAvvU4w.jpg?width=640&crop=smart&auto=webp&s=5e472b1e1358127bb6755d918524e5f0bb89ed5c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IKBDoimx2yKTGrM40Lhc0rBeBJWiHBbmdwG_OAvvU4w.jpg?width=960&crop=smart&auto=webp&s=17abf08bff383650ad3ee7e08b8e51f1fd2b7740', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IKBDoimx2yKTGrM40Lhc0rBeBJWiHBbmdwG_OAvvU4w.jpg?width=1080&crop=smart&auto=webp&s=086cdb65c4b4d8a022bfaa672423d8b5d9c6d04a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IKBDoimx2yKTGrM40Lhc0rBeBJWiHBbmdwG_OAvvU4w.jpg?auto=webp&s=6a834f2055819503906a1123cc73b07c521dd7b9', 'width': 1200}, 'variants': {}}]} |
Study: Low-Bit Quantization Favors Undertrained LLMs | 11 | https://huggingface.co/papers/2411.17691
Kinda makes sense - if there’s less information then there’s less information loss due to quantization. The real question is whether a larger less trained model is better than a smaller fully trained model?
Takeaways:
They found that low-bit quantization favors undertrained LLMs that are either large or trained with a small number of tokens. For fully trained LLMs, it will cause severe quantization-induced degradation (QiD).
| 2024-11-28T16:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h205qp/study_lowbit_quantization_favors_undertrained_llms/ | mrskeptical00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h205qp | false | null | t3_1h205qp | /r/LocalLLaMA/comments/1h205qp/study_lowbit_quantization_favors_undertrained_llms/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'kQgC5YJTiyOhi-kNVd8M0yBILoDwXdkgPiRypPnJ7oU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QYk-5T1A_yJesV6myD2fUDMxJyO9Y8RqGz8dregiZLQ.jpg?width=108&crop=smart&auto=webp&s=e4cd1528b5b6f32c8fc95884dec2d9b1bc06e5d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QYk-5T1A_yJesV6myD2fUDMxJyO9Y8RqGz8dregiZLQ.jpg?width=216&crop=smart&auto=webp&s=7f13915082421a6cdfe15a690421ad664b05d22e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QYk-5T1A_yJesV6myD2fUDMxJyO9Y8RqGz8dregiZLQ.jpg?width=320&crop=smart&auto=webp&s=84cdc573001b116055c1371abc6512e4d2502aa6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QYk-5T1A_yJesV6myD2fUDMxJyO9Y8RqGz8dregiZLQ.jpg?width=640&crop=smart&auto=webp&s=57f61dd00d6e2b5d50c024eb7971dc8523eee810', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QYk-5T1A_yJesV6myD2fUDMxJyO9Y8RqGz8dregiZLQ.jpg?width=960&crop=smart&auto=webp&s=554e20641c43de1350da966adde3c0ce52e4e6f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QYk-5T1A_yJesV6myD2fUDMxJyO9Y8RqGz8dregiZLQ.jpg?width=1080&crop=smart&auto=webp&s=ceb68d0cdd53e8a8519736428ccc775e1e222c3f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QYk-5T1A_yJesV6myD2fUDMxJyO9Y8RqGz8dregiZLQ.jpg?auto=webp&s=126e831d73fd80affe35e9d13b937cb2d6a75045', 'width': 1200}, 'variants': {}}]} |
Is it possible to accelerate Qwen using cerebras hardware? | 1 | [removed] | 2024-11-28T16:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h209c2/is_it_possible_to_accelerate_qwen_using_cerebras/ | uoft_cs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h209c2 | false | null | t3_1h209c2 | /r/LocalLLaMA/comments/1h209c2/is_it_possible_to_accelerate_qwen_using_cerebras/ | false | false | self | 1 | null |
Latest version of Ollama Grid Search (0.7.0): added prompt database | 7 | 2024-11-28T17:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h20lf1/latest_version_of_ollama_grid_search_070_added/ | grudev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h20lf1 | false | null | t3_1h20lf1 | /r/LocalLLaMA/comments/1h20lf1/latest_version_of_ollama_grid_search_070_added/ | false | false | 7 | null |
||
QwQ Performance on M4 Macbook Pro Max 36gb is excellent | 13 | I was excited to take this for a spin and was more than pleasantly surprised at how fast it flew - no lag at all. Since o1-preview via API still doesn't support streaming, it actually "feels" much faster in a chat UI that supports streaming, like Open WebUI, which is always nice.
So, let's get to the data - 2024 Macbook Pro M4 Max base 36gb - 546GB/s memory bandwidth - running on battery power without being forced into high performance mode. I enjoy seeing the thought process play out in real time because it can help you work around limitations with prompting that will proactively answer the type of things it can struggle with. Totally got the question wrong, but a fun way to stretch its legs!
Pastebin of output, details below!
[https://pastebin.com/nyV6u5Gw](https://pastebin.com/nyV6u5Gw)
total duration: 1m28.657929792s
load duration: 20.357334ms
prompt eval count: 73 token(s)
prompt eval duration: 770ms
prompt eval rate: 94.81 tokens/s
eval count: 1250 token(s)
eval duration: 1m27.865s
eval rate: 14.23 tokens/s | 2024-11-28T17:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1h20ml0/qwq_performance_on_m4_macbook_pro_max_36gb_is/ | brotie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h20ml0 | false | null | t3_1h20ml0 | /r/LocalLLaMA/comments/1h20ml0/qwq_performance_on_m4_macbook_pro_max_36gb_is/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
SPLAA now supports vision! | 1 | Hey everyone, wanted to share an update on my splaa project. You can now ask the assistant questions regarding what’s on your screen. It does this by taking a screen capture and feeding a vision model. Make sure to set —enable_vision to true.
What is SPLAA?
SPLAA is a cli QuickStart framework I made for those that just want a quick no hassle way of creating a local assistant. It uses Ollama models, Whisper and coqui-tts for a base assistant you can talk to. It's interruptible and very customizable, you can change the voice, name etc. It also has tools available for models that can use it. Please feel free to try it out and give some feedback (I recommend an nvidia card with cuda drivers) . Here -> [https://github.com/cp3249/splaa](https://github.com/cp3249/splaa) | 2024-11-28T17:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h217mi/splaa_now_supports_vision/ | Cloudscrypts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h217mi | false | null | t3_1h217mi | /r/LocalLLaMA/comments/1h217mi/splaa_now_supports_vision/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dNJG6wuchn0bZmlUbZiQ6qiJZUwHsHsKABdhu8QY5ho', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VF6Y7f9Yz0ISmBagatsuMZkAZCcL27rAEjLV2aZS-G4.jpg?width=108&crop=smart&auto=webp&s=cc5ef395f4c74462cfc401686b5f07b2e385ce76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VF6Y7f9Yz0ISmBagatsuMZkAZCcL27rAEjLV2aZS-G4.jpg?width=216&crop=smart&auto=webp&s=d60c2893dac3fe42b1094afe82230a3f50f63d3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VF6Y7f9Yz0ISmBagatsuMZkAZCcL27rAEjLV2aZS-G4.jpg?width=320&crop=smart&auto=webp&s=954ab7c58bfa389d9730ed5ea6c6badfd7745b97', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VF6Y7f9Yz0ISmBagatsuMZkAZCcL27rAEjLV2aZS-G4.jpg?width=640&crop=smart&auto=webp&s=22e1cdacf45bb1f75ba1f829ff18c766b8f58ff1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VF6Y7f9Yz0ISmBagatsuMZkAZCcL27rAEjLV2aZS-G4.jpg?width=960&crop=smart&auto=webp&s=ab065777677e31dde5d9ff61018031cfa53a4263', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VF6Y7f9Yz0ISmBagatsuMZkAZCcL27rAEjLV2aZS-G4.jpg?width=1080&crop=smart&auto=webp&s=c7521778092bdb90fda5ba4c8f13d83eb38711a3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VF6Y7f9Yz0ISmBagatsuMZkAZCcL27rAEjLV2aZS-G4.jpg?auto=webp&s=f342b5916cddb8093e7b586b487fd9864ab1b8e1', 'width': 1200}, 'variants': {}}]} |
OCR for handwritten texts | 5 |
I am looking for an on-premise OCR solution for handwritten texts (mainly structured in tables). I was experimenting with TrOCR, but the results were quite bad. I am now considering 2 approaches:
.) fine-tuning open-source OCR models (such as the docTR models); does anyone know of a handwritten training dataset? (see the sketch below)
.) exploring multimodal models; first results were good but not completely reliable (e.g. missing entire columns).
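For the first option, a minimal TrOCR inference sketch with Hugging Face transformers, assuming the microsoft/trocr-base-handwritten checkpoint and a pre-cropped cell image (the file name is just a placeholder); fine-tuning would start from the same processor/model pair, e.g. on IAM-style line or cell crops:
    # Minimal TrOCR baseline sketch; checkpoint and file name are assumptions.
    from PIL import Image
    from transformers import TrOCRProcessor, VisionEncoderDecoderModel

    processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
    model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

    # TrOCR expects single text lines, so table cells should be detected and cropped first.
    image = Image.open("cell_crop.png").convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])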
I was wondering if anyone could share experiences and current best practices, including how to use multimodal models exactly for OCR? | 2024-11-28T17:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h21cr9/ocr_for_handwritten_texts/ | Electronic-Letter592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h21cr9 | false | null | t3_1h21cr9 | /r/LocalLLaMA/comments/1h21cr9/ocr_for_handwritten_texts/ | false | false | self | 5 | null
Alibaba's QwQ is incredible! Only problem is occasional Chinese characters when prompted in English | 148 | 2024-11-28T17:50:12 | IndividualLow8750 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h21k7d | false | null | t3_1h21k7d | /r/LocalLLaMA/comments/1h21k7d/alibabas_qwq_is_incredible_only_problem_is/ | false | false | 148 | {'enabled': True, 'images': [{'id': 'rCLdGN8W23WtNDA5L84Ia3d3-9V06qbIuJ2eKRQfkgI', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/cnrsve58ko3e1.png?width=108&crop=smart&auto=webp&s=7cf835830932e072cdb3acbecdf8b44e0437e706', 'width': 108}, {'height': 66, 'url': 'https://preview.redd.it/cnrsve58ko3e1.png?width=216&crop=smart&auto=webp&s=4d348f829f220a31940683e1bb6c35387c3bfbd9', 'width': 216}, {'height': 98, 'url': 'https://preview.redd.it/cnrsve58ko3e1.png?width=320&crop=smart&auto=webp&s=b862387ec829dee0beb7044807867b5f957d607e', 'width': 320}, {'height': 196, 'url': 'https://preview.redd.it/cnrsve58ko3e1.png?width=640&crop=smart&auto=webp&s=8ac37ac8c017ed4dee0b133c6e7db81ff5789525', 'width': 640}, {'height': 294, 'url': 'https://preview.redd.it/cnrsve58ko3e1.png?width=960&crop=smart&auto=webp&s=951cc80cc9e458e762cf9609a58c6e7b432c6358', 'width': 960}, {'height': 331, 'url': 'https://preview.redd.it/cnrsve58ko3e1.png?width=1080&crop=smart&auto=webp&s=11549a5d82ff1753bfe82087b11f9d15a497276e', 'width': 1080}], 'source': {'height': 368, 'url': 'https://preview.redd.it/cnrsve58ko3e1.png?auto=webp&s=00e6cee6d436f2417e4e09de6065d2a57fa35721', 'width': 1198}, 'variants': {}}]} |
|||
Daily Paper Discussion on Yannic Kilcher discord server - Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis | 1 | [removed] | 2024-11-28T17:51:28 | https://www.reddit.com/r/LocalLLaMA/comments/1h21l9u/daily_paper_discussion_on_yannic_kilcher_discord/ | CATALUNA84 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h21l9u | false | null | t3_1h21l9u | /r/LocalLLaMA/comments/1h21l9u/daily_paper_discussion_on_yannic_kilcher_discord/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'iYbl8YSUFdxPtzoWzDPYdacMkweuTMxYAB9T5vfjCyM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RyXNKuEZOaIDdpTX-Kz5wuUJADGr2U3zagUS7oBKEAc.jpg?width=108&crop=smart&auto=webp&s=7e2c51bc9993001b479aaddd9dcee65ba97aa165', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/RyXNKuEZOaIDdpTX-Kz5wuUJADGr2U3zagUS7oBKEAc.jpg?auto=webp&s=f3ea6713fca20fa0aae47fc0e575aa35ed2b8e30', 'width': 200}, 'variants': {}}]} |
|
Do you expect heavy price reduction of 4090 when 5090 releases? | 35 | The current price of the RTX 4090 is close to 2400 USD now, which is insane. Do you expect the 4090's price to drop below $1900? | 2024-11-28T18:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/1h21vsq/do_you_expect_heavy_price_reduction_of_4090_when/ | Relative_Rope4234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h21vsq | false | null | t3_1h21vsq | /r/LocalLLaMA/comments/1h21vsq/do_you_expect_heavy_price_reduction_of_4090_when/ | false | false | self | 35 | null
Multi-TPUs/XLA devices support for ComfyUI! Might even work on GPUs! | 1 | [removed] | 2024-11-28T18:07:40 | https://www.reddit.com/r/LocalLLaMA/comments/1h21ydv/multitpusxla_devices_support_for_comfyui_might/ | Ok_Difference_4483 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h21ydv | false | null | t3_1h21ydv | /r/LocalLLaMA/comments/1h21ydv/multitpusxla_devices_support_for_comfyui_might/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'xXSVsMf40w3mNZYMFkPFb7d7Ookmed02_zmh4pMHRFs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EaFGG_jQ1q6pLR3OEuAW7izU6RZo1eiYqo0MFP6h-ak.jpg?width=108&crop=smart&auto=webp&s=5243635fc5440f93ca2ba65d1853eb17238ba722', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EaFGG_jQ1q6pLR3OEuAW7izU6RZo1eiYqo0MFP6h-ak.jpg?width=216&crop=smart&auto=webp&s=7653a714d34ddd6183880729f772c40f788fe123', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EaFGG_jQ1q6pLR3OEuAW7izU6RZo1eiYqo0MFP6h-ak.jpg?width=320&crop=smart&auto=webp&s=0b2822e3df0a15e38fbdefc036b30ca03249e56f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EaFGG_jQ1q6pLR3OEuAW7izU6RZo1eiYqo0MFP6h-ak.jpg?width=640&crop=smart&auto=webp&s=15f54ae116a00d2a34b8a4ceb72c67f7b9c41f40', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EaFGG_jQ1q6pLR3OEuAW7izU6RZo1eiYqo0MFP6h-ak.jpg?width=960&crop=smart&auto=webp&s=47e50647f19e7e33709b0ddc9ef263318d9ca339', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EaFGG_jQ1q6pLR3OEuAW7izU6RZo1eiYqo0MFP6h-ak.jpg?width=1080&crop=smart&auto=webp&s=62f795eb63d8f13b9e21bdf54dd3b895ff4004b7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EaFGG_jQ1q6pLR3OEuAW7izU6RZo1eiYqo0MFP6h-ak.jpg?auto=webp&s=f05e072cf8838cb787dd4a5f213a83693602f3f1', 'width': 1200}, 'variants': {}}]} |
|
Meta prospects in the AI market | 1 | [removed] | 2024-11-28T18:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1h222ma/meta_prospects_in_the_ai_market/ | peejay2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h222ma | false | null | t3_1h222ma | /r/LocalLLaMA/comments/1h222ma/meta_prospects_in_the_ai_market/ | false | false | self | 1 | null |
Properly Configuring a model | 6 | I played around with koboldcpp and SillyTavern for a few days, and now I want to configure a model, let's say close to optimally. I chose the [MN-DARKEST-UNIVERSE-29B-GGUF](https://huggingface.co/DavidAU/MN-DARKEST-UNIVERSE-29B-GGUF)
If I understand the [author's general guide](https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters) correctly:
1. I set [these](https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters/blob/main/class4-Silly-Tavern.jpg) base values, because this is a class 4 model based on the model card
2. I modify the base values based on the model card
    1. fine-tune with small increments following the model card; let's skip this for now
3. in SillyTavern->Advanced Formatting I enable Instruct Template and set it to "Mistral V3 Tekken", the latest
4. in the same menu I set the Context template to the same too.
Q1: Are these steps correct so far? Particularly, do I use the latest "Mistral V3 Tekken"?
\---
Now, I launch koboldcpp like this:
./koboldcpp --usevulkan 0 --gpulayers 16 --threads 24 --nommap \
--contextsize 16384 \
--model "$HOME/Games/koboldcpp/MN-DARKEST-UNIVERSE-29B-D_AU-Q8_0.gguf" \
--port 5005 --multiuser 0
Q2: How do I determine how many GPU layers I can offload?
>`llm_load_tensors: offloaded 16/103 layers to GPU`
koboldcpp tells me how many I can offload, but with the models I tried so far, the higher I go, the bigger the chance that the model generates gibberish or just one word on repeat, even if the full model is offloaded.
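A rough rule of thumb, not an exact method (and all numbers below are assumptions, not measurements): GGUF weights are split fairly evenly across layers, so dividing the file size by the layer count and leaving headroom for the KV cache and compute buffers gives a starting value for --gpulayers:
    # Back-of-the-envelope estimate only; file size, layer count and VRAM are
    # placeholders to be replaced with real values from the load log.
    def estimate_gpu_layers(gguf_size_gb, total_layers, vram_gb, overhead_gb=2.0):
        per_layer_gb = gguf_size_gb / total_layers   # weights are roughly even per layer
        usable_gb = max(vram_gb - overhead_gb, 0.0)  # reserve room for KV cache/buffers
        return min(total_layers, int(usable_gb / per_layer_gb))

    # Example with assumed numbers: ~31 GB Q8_0 file, 103 layers, 16 GB of VRAM.
    print(estimate_gpu_layers(31.0, 103, 16.0))  # -> 46 with these assumptions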
Q3: How do I determine the correct contextsize for koboldcpp in general?
>Some model cards have this info; others, like this MN-DARKEST one, only say something like "context of 131,000+". The parameter has to be a power of 2, and if it's set too high, the model generates gibberish again.
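Related to that: the KV cache grows linearly with --contextsize, so a quick estimate of its footprint helps decide what fits. The layer and head numbers below are placeholders; the real ones appear in the GGUF metadata that koboldcpp prints at load time:
    # Approximate FP16 KV-cache size: 2 (K and V) * layers * kv_heads * head_dim
    # * context length * 2 bytes. All dimensions below are assumed placeholders.
    def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_val=2):
        return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_val / 1024**3

    # Example: a ~100-layer model with 8 KV heads of dim 128 at 16384 context.
    print(round(kv_cache_gb(100, 8, 128, 16384), 1))  # -> 6.2 with these assumed dims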
Q4: Is it normal that during token generation the GPU is not working?
>`Processing Prompt [BLAS] (1079 / 1079 tokens)`
`Generating (300 / 300 tokens)`
When generating a reply I noticed that the "Processing Prompt" step utilizes my GPU really well (90%+), but the "Generating" step is suddenly CPU-bound, and the GPU is mostly idling with a few spikes.
\---
Finally, I see this reference to "llama\_HF" on the model card:
>if using GGUFs you need to use "llama\_HF" (which involves downloading some config files from the SOURCE version of this model)
Q5: Based on the guide, I guess this refers to "llamacpp\_HF"? Does this mean I need to take this [config.json](https://huggingface.co/DavidAU/MN-DARKEST-UNIVERSE-29B/blob/main/config.json) and load it somewhere? If so, where?
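If "llamacpp\_HF" means the loader of that name in text-generation-webui (an assumption on my part), the usual approach is to place the GGUF in its own folder together with the tokenizer/config files downloaded from the SOURCE repo. A hypothetical sketch with huggingface_hub, where the destination folder name is just an example:
    # Hypothetical sketch: copy the source repo's config/tokenizer files next to
    # the GGUF; repo id is from the model card link above, folder name is made up.
    from huggingface_hub import hf_hub_download

    repo = "DavidAU/MN-DARKEST-UNIVERSE-29B"            # SOURCE (unquantized) repo
    dest = "models/MN-DARKEST-UNIVERSE-29B-D_AU-Q8_0"   # folder that also holds the .gguf
    for fname in ["config.json", "tokenizer_config.json", "tokenizer.json",
                  "special_tokens_map.json", "generation_config.json"]:
        try:
            hf_hub_download(repo_id=repo, filename=fname, local_dir=dest)
        except Exception:
            pass  # not every repo ships every one of these files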
| 2024-11-28T18:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/1h2265l/properly_configuring_a_model/ | Slaw0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2265l | false | null | t3_1h2265l | /r/LocalLLaMA/comments/1h2265l/properly_configuring_a_model/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'X9heG-4UM1Sr91WQhuQZ4S6_qJG6e5Z0KIxmmdr-Abk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/P8yTuY8YVfQ5zHcleWj1emsraZqI4jiqIRnkO3o_V7g.jpg?width=108&crop=smart&auto=webp&s=715c00e9e92ef4c66655d6c30c337bd93e9ae41e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/P8yTuY8YVfQ5zHcleWj1emsraZqI4jiqIRnkO3o_V7g.jpg?width=216&crop=smart&auto=webp&s=38874700f5c7234e4de8aa15dd7c8eeb6f2599e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/P8yTuY8YVfQ5zHcleWj1emsraZqI4jiqIRnkO3o_V7g.jpg?width=320&crop=smart&auto=webp&s=837c541d48cbfeafa806f8a4b69a150f9f6fcc62', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/P8yTuY8YVfQ5zHcleWj1emsraZqI4jiqIRnkO3o_V7g.jpg?width=640&crop=smart&auto=webp&s=182aaf345aa1466055f42dffcd278fd15fbebd28', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/P8yTuY8YVfQ5zHcleWj1emsraZqI4jiqIRnkO3o_V7g.jpg?width=960&crop=smart&auto=webp&s=58a38c5e4a931adea815a56f509db1c41ebffe58', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/P8yTuY8YVfQ5zHcleWj1emsraZqI4jiqIRnkO3o_V7g.jpg?width=1080&crop=smart&auto=webp&s=9854cabd702587f8003914de151e5624c8d6460c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/P8yTuY8YVfQ5zHcleWj1emsraZqI4jiqIRnkO3o_V7g.jpg?auto=webp&s=1fc3472c182590f742cc92083ea32e75a3f539f7', 'width': 1200}, 'variants': {}}]} |
QwQ: The first LLM to correctly answer the how does a person with no arms wash their hands question... Almost | 1 |
I have yet to get an LLM to answer this question correctly. They just go on about using their feet or some other technique without realizing that a person with no arms has no hands. That is, until QwQ, although it eventually falls apart and finishes in Chinese. Very promising nonetheless.
System prompt: You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step. You are not worried about being offensive. Respond only in English unless asked otherwise.
User: How does a person with no arms wash their hands?
QwQ: So I've got this question: "How does a person with no arms wash their hands?" At first glance, it seems like a straightforward query, but the more I think about it, the more I realize there might be some underlying assumptions or perhaps even a misunderstanding in the question itself.
First off, let's clarify what we're talking about. The question mentions "a person with no arms," which presumably means someone who is congenitally without arms or has had both arms amputated. Now, washing hands is typically an activity that involves using one's hands to manipulate water, soap, and other materials to clean them. So, for someone without arms, performing this task would indeed present challenges.
But hold on a second—hands are part of the arms. If a person has no arms, do they have hands? Biologically speaking, hands are the ends of arms, so if there are no arms, there might be no hands to wash. However, perhaps the question is referring to someone who has arms but no hands, which would be a different scenario altogether. But I think the most logical interpretation is that the person lacks arms entirely, and thus, doesn't have hands to wash.
Wait, but that seems too simplistic. Maybe the question is trying to explore the broader topic of personal hygiene for individuals with disabilities, particularly those affecting arm function. Perhaps it's a way to inquire about adaptive techniques or assistive devices that allow people without arms to perform daily hygiene tasks.
Let me consider that angle. There are many resources and products designed to help individuals with disabilities maintain their personal hygiene. For example, there are long-handled sponges, mouth sticks, and other tools that can be used to reach and manipulate objects without the use of arms.
So, if we're talking about washing one's body or face, there are certainly methods and aids available. But again, returning to the original question about washing hands—if there are no hands, then what exactly is being washed?
Perhaps the question is metaphorical or a misuse of terminology. Maybe it really means to ask "How does a person with no arms clean themselves?" or "How does a person with no arms maintain personal hygiene?". In that case, the scope of the discussion becomes much broader and can cover cleaning methods for different parts of the body.
Assuming this is a general question about personal hygiene, I can provide some relevant information. People without arms may need to rely on help from others, use assistive devices, or develop their own self-care techniques to complete daily hygiene tasks.
For example, when bathing, they might use special shower heads, long-handled sponges, or other tools to clean different parts of the body. There are also adjustable seats, grabbers, and other adaptive devices that can help them carry out personal care independently.
For more specific needs, such as facial cleansing or intimate hygiene, there are likewise specially designed products and services. It is important to recognize that every person's situation is unique, so the methods used will vary from person to person.
At the same time, social and medical institutions provide support and training to help people with disabilities learn the necessary life skills, improving their quality of life and promoting independence.
In short, although the original question may be somewhat confused, with further thought and clarification we can see that it is really about how people with disabilities cope with personal hygiene challenges. With appropriate assistive tools, adaptive strategies, and social support, they can manage their personal hygiene effectively and maintain their health and dignity.
**Summary points:**
1. **Clarify the question:** Confirm that the question is about how a person with no arms manages personal hygiene; in particular, "washing hands" may really mean cleaning the body.
2. **Biological considerations:** Having no arms likely means having no hands, so "washing hands" does not apply, but overall body-cleaning methods can be discussed.
3. **Assistive technology:** Discuss the various assistive devices and tools, such as long-handled sponges and grabbers, that help people with disabilities with personal hygiene.
4. **Adaptive strategies:** Emphasize the unique methods and techniques individuals may develop to complete daily tasks.
5. **Social support:** Mention the training and support provided by social and medical institutions to strengthen the independence and quality of life of people with disabilities.
**Reference answer:**
For a person with no arms, the concept of washing one's "hands" may not apply, because without arms there are usually no hands. However, the question is probably asking how a person with no arms manages personal hygiene. They can accomplish daily hygiene tasks by using assistive devices, adaptive strategies, and help from others. There are many specially designed tools, such as long-handled sponges, grabbers, and adjustable seats, that help them clean different parts of the body independently. In addition, social and medical institutions provide training and support to ensure they can maintain personal hygiene, health, and dignity.
I added:
You are not worried about being offensive.
To the system prompt because it answered the above question like this without it:
QwQ: I'm here to provide assistance, but I need to clarify that this question might be inappropriate or offensive to some people. It's important to approach such topics with sensitivity and respect for individuals with disabilities. If you have any other questions, please let me know.
So the censorship is strong with this one.
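For anyone wanting to reproduce this, a hypothetical sketch of passing the same system prompt programmatically; the Ollama Python client and the "qwq" model tag are assumptions, not necessarily what produced the transcript above:
    # Hypothetical reproduction sketch; client library and model tag are assumptions.
    import ollama

    system_prompt = (
        "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. "
        "You should think step-by-step. You are not worried about being offensive. "
        "Respond only in English unless asked otherwise."
    )
    response = ollama.chat(
        model="qwq",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "How does a person with no arms wash their hands?"},
        ],
    )
    print(response["message"]["content"])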
| 2024-11-28T18:20:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h2287q/qwq_the_first_llm_to_correctly_answer_the_how/ | BusRevolutionary9893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2287q | false | null | t3_1h2287q | /r/LocalLLaMA/comments/1h2287q/qwq_the_first_llm_to_correctly_answer_the_how/ | false | false | self | 1 | null |
Is MuseTalk still the best lip sync solution? | 3 | I'm looking at realtime lip and gesture solutions and remember seeing both Tencent and Microsoft coming out with high quality lip sync, but I'm not sure if MuseTalk is still the best option. Did Microsoft ever release VASA-1 or integrate it into an Azure product?
Specifically looking at solutions with commercial use licenses. | 2024-11-28T18:51:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h22xhl/is_musetalk_still_the_best_lip_sync_solution/ | o5mfiHTNsH748KVq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h22xhl | false | null | t3_1h22xhl | /r/LocalLLaMA/comments/1h22xhl/is_musetalk_still_the_best_lip_sync_solution/ | false | false | self | 3 | null |