title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Why my Gemma-2 is silly | 1 | [removed] | 2024-12-10T07:29:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hawi28/why_my_gemma2_is_silly/ | Sea_Landscape51 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hawi28 | false | null | t3_1hawi28 | /r/LocalLLaMA/comments/1hawi28/why_my_gemma2_is_silly/ | false | false | self | 1 | null |
simplest way to set up a chatbot with a finetuned model | 0 | In advance, I'm really sorry for my noob question, but I'm a bit lost with all the information around Google, Reddit, etc. for my case. I'm looking to build a simple chatbot with a no-code platform ( [toddle.dev](http://toddle.dev) ) + Xano as a backend. I already did some tests with the Hyperbolic API and the llama 3.3 model with decent results. The thing is that now I would like to use a finetuned model that I can modify ( basically change the temp, length of answer, card, etc., like you can with Silly Tavern for example )
I checked some options and saw that [Kobold.ai](http://Kobold.ai) and oobabooga are 2 solid options for my case, but then I'm a little bit lost on how to build something "cost effective" and scalable. I will probably need a cloud service like RunPod and run my instances on it, but I don't really know the steps ( I probably said some wrong stuff in this post).
Would like to know if someone can guide me a little bit through this project, not necessarily in detail but just the steps to be able to get a finetuned LLM model working through an API.
Thanks
| 2024-12-10T07:49:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hawrrq/simplest_way_to_setup_a_chatbot_with_a_finetuned/ | julieroseoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hawrrq | false | null | t3_1hawrrq | /r/LocalLLaMA/comments/1hawrrq/simplest_way_to_setup_a_chatbot_with_a_finetuned/ | false | false | self | 0 | null |
Llama 3.2 3B and Llama 3.3 70B (!) models on a Mac Mini M4 Pro | 23 | I just got a maxed M4 Pro Mac Mini - so with 64GB RAM and the upgraded processor (14-core CPU, 20-core GPU). It cost roughly $2400 and I wanted to see how good a tiny 5" x 5" device would be for inferencing with the latest models. So, I tried the following:
* \[On LM Studio\] Llama 3.2 3B Instruct, Q4 - converted to MLX format
* \[On Ollama\] Llama 3.2 3B , Q4\_K\_M
* \[On LM Studio\] Llama 3.3-70B-Instruct-GGUF, Q3\_K\_L
* \[On Ollama\] Llama 3.3 70B, Q4\_K\_M (I couldn't get this model to load in LM Studio)
With the Llama 3.2 3B models I got 102-105 Tok/sec on LM Studio, and about 70-80 Tok/sec on Ollama. I wasn't expecting the speeds to be this good, and so I was blown away by how fast the text was scrolling on my screen. I was expecting Ollama to be faster than LM Studio but that wasn't the case. So, I'm guessing that the speedup on LM Studio was because of MLX, instead of GGUF on Ollama, and the work that Apple has put into software optimizations for Apple Intelligence. However, I'm just speculating here, and so if anybody has a better reason then please provide it in the comments.
With the Llama 3.3 70B, Q3\_K\_L model on LM Studio I got between 3 - 3.5 Tok/sec. With the Llama 3.3 70B, Q4\_K\_M model on Ollama I got about 5 Tok/sec. I've read that [the Llama 3.3 70B model is a GPT-4 class model](https://simonwillison.net/2024/Dec/9/llama-33-70b/), and so it blew my mind that I was able to get it to run on a tiny little Mac Mini at all even if it was running slowly.
I wasn't able to load the Q4\_K\_M model into LM Studio due to an insufficient system resources error. However based on monitoring memory during the run, there should be enough room for this model on my machine and so I'll try playing around with settings to see if I can make it fit.
Methodology: I used two prompts ("Why is the sky blue?" and "Write a report on the financials of Apple Inc.") on all the different setups above. For LM Studio I was able to get the perf numbers directly in the tool, whereas for Ollama I used this Python script (with some minor tweaks): [https://github.com/MinhNgyuen/llm-benchmark](https://github.com/MinhNgyuen/llm-benchmark). I've just been looking at raw speed so far, and not quality of responses. That said, all the responses that I saw looked quite reasonable, and I didn't really notice anything that looked weird. I was surprised at how close the responses of the 3B models were to their 70B counterparts, and I noticed that in all cases the responses for financial results focused on 2022, which is reflective of the knowledge cutoff. The only time I noticed the fan coming on during the entire exercise was when I was running the 70B, Q4\_K\_M model.
Disclaimer: I'm a very long time PC user, and it's been many, many years since I used a Mac. Also, though I've been playing around with local LLMs on my other Windows laptops, this is the first time that I tried benchmarking them. So, it's possible that something got screwed up, and I'll update this thread if I find out later that I did something dumb. And if anybody else has good ideas for me to test on this setup, feel free to drop those into the comments below. | 2024-12-10T08:17:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hax4ue/llama_32_3b_and_llama_33_70b_models_on_a_mac_mini/ | SpecialistNumerous17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hax4ue | false | null | t3_1hax4ue | /r/LocalLLaMA/comments/1hax4ue/llama_32_3b_and_llama_33_70b_models_on_a_mac_mini/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'HWheKhQGoivbvKG3iJpnuoOj8CTBirv7y2Nh5S8xMUw', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=108&crop=smart&auto=webp&s=e7c5ef95ee36b3f5af8cb3a2775b8c09e51f03c2', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=216&crop=smart&auto=webp&s=c52fa52693287158950f1defd0572ae2ef137089', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=320&crop=smart&auto=webp&s=9fac4ccbfd2c2577941fecc4bcc8da7f1789ee0f', 'width': 320}, {'height': 458, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=640&crop=smart&auto=webp&s=cd137f459436110a0a2f88fb9402b2ff30f720a7', 'width': 640}, {'height': 687, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=960&crop=smart&auto=webp&s=3831e89178ad486d0e8584cbdf17e93720cb697b', 'width': 960}, {'height': 773, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=1080&crop=smart&auto=webp&s=13c46c7a60e68ec985c3d0ad19a9593d003af9c0', 'width': 1080}], 'source': {'height': 1382, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?auto=webp&s=848fd8f5c025ed630d3a294cba90627a93893a09', 'width': 1930}, 'variants': {}}]} |
AI doomer wants to charge OpenAI executives for "attempted murder" | 1 | [removed] | 2024-12-10T08:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/1haxb8q/ai_doomer_wants_to_charge_openai_executives_for/ | No-Point1424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1haxb8q | false | null | t3_1haxb8q | /r/LocalLLaMA/comments/1haxb8q/ai_doomer_wants_to_charge_openai_executives_for/ | false | false | 1 | null |
|
What tool is the best today to run our models? | 0 | What tool are you using to run models? I have been away from LLMs for about a year, can you help me get started again. | 2024-12-10T08:32:18 | https://www.reddit.com/r/LocalLLaMA/comments/1haxbww/what_tool_is_the_best_today_to_run_oure_models/ | Medium_Big_all_Good | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1haxbww | false | null | t3_1haxbww | /r/LocalLLaMA/comments/1haxbww/what_tool_is_the_best_today_to_run_oure_models/ | false | false | self | 0 | null |
Portable LLM apps in Docker | 1 | 2024-12-10T08:33:58 | https://www.youtube.com/watch?v=qaf4dy-n0dw | smileymileycoin | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1haxcn8 | false | {'oembed': {'author_name': 'Collabnix, Docker and DevOps', 'author_url': 'https://www.youtube.com/@collabnix', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/qaf4dy-n0dw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Portable LLM apps in Docker | LlamaEdge | Michael Yuan | Docker Captain | Singapore Meetup"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/qaf4dy-n0dw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Portable LLM apps in Docker | LlamaEdge | Michael Yuan | Docker Captain | Singapore Meetup', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1haxcn8 | /r/LocalLLaMA/comments/1haxcn8/portable_llm_apps_in_docker/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ez0Z3_nnTZWhjXaJX_wgv-aP0znOfoMXsVvUP7YUQMw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/I8vARdIwLG-p5acuEBCXMvxml0ITxM_Gp-Gmq7-MBhg.jpg?width=108&crop=smart&auto=webp&s=9875550aa6ffe956f6325818c6f8d817e80dbafc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/I8vARdIwLG-p5acuEBCXMvxml0ITxM_Gp-Gmq7-MBhg.jpg?width=216&crop=smart&auto=webp&s=53c4babbd28b126607a405229467b82d25a51706', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/I8vARdIwLG-p5acuEBCXMvxml0ITxM_Gp-Gmq7-MBhg.jpg?width=320&crop=smart&auto=webp&s=9940347d7c2118e4adc3c19a760890f1641a3b77', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/I8vARdIwLG-p5acuEBCXMvxml0ITxM_Gp-Gmq7-MBhg.jpg?auto=webp&s=cf1ee4a670614025de8a5fcc435eeb44600e6008', 'width': 480}, 'variants': {}}]} |
||
o1 LiveBench coding results | 87 | Note: o1 was evaluated manually using ChatGPT. So far, it has only been scored on coding tasks.
https://livebench.ai/#/ | 2024-12-10T08:43:37 | https://www.reddit.com/gallery/1haxgwm | user0069420 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1haxgwm | false | null | t3_1haxgwm | /r/LocalLLaMA/comments/1haxgwm/o1_livebench_coding_results/ | false | false | 87 | null |
|
Tango-70b live on TangoChat! | 1 | [removed] | 2024-12-10T08:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/1haxo0f/tango70b_live_on_tangochat/ | Alarmed_Spread_1410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1haxo0f | false | null | t3_1haxo0f | /r/LocalLLaMA/comments/1haxo0f/tango70b_live_on_tangochat/ | false | false | self | 1 | null |
deepseek-ai/DeepSeek-V2.5-1210 · Hugging Face | 182 | 2024-12-10T09:26:51 | https://huggingface.co/deepseek-ai/DeepSeek-V2.5-1210 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hay0qy | false | null | t3_1hay0qy | /r/LocalLLaMA/comments/1hay0qy/deepseekaideepseekv251210_hugging_face/ | false | false | 182 | {'enabled': False, 'images': [{'id': '_1TpCaGKCKm1e4hG-Q4xnh2FTVBPNfYznwgIwRnrAsU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RKd8Xlp0hjsQFBwzY7sNJ3j3SRcEIt6nScqOBPCtXP8.jpg?width=108&crop=smart&auto=webp&s=8e75de3d1ccf682c5338fe7a5c85671f157f2342', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RKd8Xlp0hjsQFBwzY7sNJ3j3SRcEIt6nScqOBPCtXP8.jpg?width=216&crop=smart&auto=webp&s=de2a50a0657cf55e190df95ee386cbfa2661368d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RKd8Xlp0hjsQFBwzY7sNJ3j3SRcEIt6nScqOBPCtXP8.jpg?width=320&crop=smart&auto=webp&s=3719bf9b9868f026ede7cff06e01463d33e22f0a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RKd8Xlp0hjsQFBwzY7sNJ3j3SRcEIt6nScqOBPCtXP8.jpg?width=640&crop=smart&auto=webp&s=f7c3e27eb1ea2ad9e1138631409d8af599b2f763', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RKd8Xlp0hjsQFBwzY7sNJ3j3SRcEIt6nScqOBPCtXP8.jpg?width=960&crop=smart&auto=webp&s=33ca691c21c4f4d16ae028a28799ea5dab0bebdd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RKd8Xlp0hjsQFBwzY7sNJ3j3SRcEIt6nScqOBPCtXP8.jpg?width=1080&crop=smart&auto=webp&s=0774278a3882929644cb406f13609df05d112527', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RKd8Xlp0hjsQFBwzY7sNJ3j3SRcEIt6nScqOBPCtXP8.jpg?auto=webp&s=23b5371e078c22b7fe55cac9745fad48447185a4', 'width': 1200}, 'variants': {}}]} |
||
Vanilla ollama based rag | 1 | [removed] | 2024-12-10T09:28:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hay1a3/vanilla_ollama_based_rag/ | TrimMyGooch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hay1a3 | false | null | t3_1hay1a3 | /r/LocalLLaMA/comments/1hay1a3/vanilla_ollama_based_rag/ | false | false | self | 1 | null |
Finetuning model for Data Analysis and identification of trends and patterns | 1 | [removed] | 2024-12-10T09:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hay3he/finetuning_model_for_data_analysis_and/ | ConceptWorldly1683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hay3he | false | null | t3_1hay3he | /r/LocalLLaMA/comments/1hay3he/finetuning_model_for_data_analysis_and/ | false | false | self | 1 | null |
chatgpt pro vs 5 * plus | 1 | [removed] | 2024-12-10T09:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hay61n/chatgpt_pro_vs_5_plus/ | One-Thanks-9740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hay61n | false | null | t3_1hay61n | /r/LocalLLaMA/comments/1hay61n/chatgpt_pro_vs_5_plus/ | false | false | self | 1 | null |
run whisper.cc on second gpu? | 1 | [removed] | 2024-12-10T09:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hayaeh/run_whispercc_on_second_gpu/ | ExtremeButton1682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hayaeh | false | null | t3_1hayaeh | /r/LocalLLaMA/comments/1hayaeh/run_whispercc_on_second_gpu/ | false | false | self | 1 | null |
Finetuning model for Data Analysis and identification of trends and patterns (Question / Help) | 1 | [removed] | 2024-12-10T09:52:05 | https://www.reddit.com/r/LocalLLaMA/comments/1haycd5/finetuning_model_for_data_analysis_and/ | ConceptWorldly1683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1haycd5 | false | null | t3_1haycd5 | /r/LocalLLaMA/comments/1haycd5/finetuning_model_for_data_analysis_and/ | false | false | self | 1 | null |
Issues with Deploying Qwen-2.5-0.5B Model on Android | 1 | 2024-12-10T09:54:06 | https://github.com/guiyu/QwenLocalLLM | guiyu_1985 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1haydaq | false | null | t3_1haydaq | /r/LocalLLaMA/comments/1haydaq/issues_with_deploying_qwen2505b_model_on_android/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hMRUoh-s1fek_a9gNvjWNb9u3sgQPcwJLZUPlU6oo1Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ydABcUP2M0kR1n7Cutiu5rpsz8KmlYX4tw49a1wlLM.jpg?width=108&crop=smart&auto=webp&s=1ac61ea1204baa414cee64a7ffe0822d8eb5afa3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2ydABcUP2M0kR1n7Cutiu5rpsz8KmlYX4tw49a1wlLM.jpg?width=216&crop=smart&auto=webp&s=bfa090cc66877d5f25c011e94bc09417924aa935', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2ydABcUP2M0kR1n7Cutiu5rpsz8KmlYX4tw49a1wlLM.jpg?width=320&crop=smart&auto=webp&s=92b499b088f2a36ff954ef60d3ca1af6b67cc72d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2ydABcUP2M0kR1n7Cutiu5rpsz8KmlYX4tw49a1wlLM.jpg?width=640&crop=smart&auto=webp&s=bbeaad16aad01b33e2c32953fb3ef35006510cf4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2ydABcUP2M0kR1n7Cutiu5rpsz8KmlYX4tw49a1wlLM.jpg?width=960&crop=smart&auto=webp&s=373c7d39fc76b409e5d3d34feed109f5b00286af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2ydABcUP2M0kR1n7Cutiu5rpsz8KmlYX4tw49a1wlLM.jpg?width=1080&crop=smart&auto=webp&s=ecc15099219371867bbffe16ef8ebf6b396a74b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2ydABcUP2M0kR1n7Cutiu5rpsz8KmlYX4tw49a1wlLM.jpg?auto=webp&s=bce4e735413889b22c85dd07b5e563f2a302473d', 'width': 1200}, 'variants': {}}]} |
||
The necessary (and neglected) evil of Large Language Models: Tokenization | 81 | 2024-12-10T10:05:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hayja1/the_necessary_and_neglected_evil_of_large/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hayja1 | false | null | t3_1hayja1 | /r/LocalLLaMA/comments/1hayja1/the_necessary_and_neglected_evil_of_large/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'mLs04R2fMnpdIVqnbHYoi-LBEb7tzYwIP-bDksKIZpU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/08GeoVS0Xc87z_cLvkPjtoWSFgnUYGQNDep_qSewJos.jpg?width=108&crop=smart&auto=webp&s=70535d24349c6d8e4c29134d9078fca98210dda1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/08GeoVS0Xc87z_cLvkPjtoWSFgnUYGQNDep_qSewJos.jpg?width=216&crop=smart&auto=webp&s=13d26cfeed0584c7a78664baa370d23414cc64d5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/08GeoVS0Xc87z_cLvkPjtoWSFgnUYGQNDep_qSewJos.jpg?width=320&crop=smart&auto=webp&s=8de1d7f780cc8a935f76d24e0a2a4db58b5ec9bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/08GeoVS0Xc87z_cLvkPjtoWSFgnUYGQNDep_qSewJos.jpg?width=640&crop=smart&auto=webp&s=7ab75ee8e6a6d8d78f1ecf9395e417e2d5437723', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/08GeoVS0Xc87z_cLvkPjtoWSFgnUYGQNDep_qSewJos.jpg?width=960&crop=smart&auto=webp&s=1435ded7e26b34a4d406cff67ec2603b90fd2c2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/08GeoVS0Xc87z_cLvkPjtoWSFgnUYGQNDep_qSewJos.jpg?width=1080&crop=smart&auto=webp&s=a134e8b05e5a73ff71be87c791b1f9c84a518dd5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/08GeoVS0Xc87z_cLvkPjtoWSFgnUYGQNDep_qSewJos.jpg?auto=webp&s=1eb837d50e86a3e56defe1727909490fd43e6e78', 'width': 1200}, 'variants': {}}]} |
||
Hugging Face releases Text Generation Inference TGI v3.0 - 13x faster than vLLM on long prompts 🔥 | 407 | TGI team at HF really cooked! Starting today, you get out of the box improvements over vLLM - all with zero config, all you need to do is pass a Hugging Face model ID.
Summary of the release:
Performance leap: TGI processes 3x more tokens, 13x faster than vLLM on long prompts. Zero config!
3x more tokens - By reducing our memory footprint, we're able to ingest many more tokens, and more dynamically than before. A single L4 (24GB) can handle 30k tokens on llama 3.1-8B, while vLLM gets barely 10k. A lot of work went into reducing the footprint of the runtime, and its effects are best seen in smaller, constrained environments.
13x faster - On long prompts (200k+ tokens) conversation replies take 27.5s in vLLM, while it takes only 2s in TGI. How so? We keep the initial conversation around, so when a new reply comes in, we can answer almost instantly. The overhead of the lookup is ~5us. Thanks @Daniël de Kok for the beast data structure.
Zero config - That's it. Remove all the flags you are using and you're likely to get the best performance. By evaluating the hardware and model, TGI carefully selects automatic values to give the best performance. In production, we don't have any flags anymore in our deployments. We kept all existing flags around; they may come in handy in niche scenarios.
We put all the details to run the benchmarks and verify results here: https://huggingface.co/docs/text-generation-inference/conceptual/chunking
Looking forward to what you build with this! | 2024-12-10T10:21:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hayqkt/hugging_face_releases_text_generation_inference/ | vaibhavs10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hayqkt | false | null | t3_1hayqkt | /r/LocalLLaMA/comments/1hayqkt/hugging_face_releases_text_generation_inference/ | false | false | self | 407 | {'enabled': False, 'images': [{'id': '79rYi6E5lmZcooxB3UcSrG4c-oe6nNHL23rdwsSeye8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/X44M3hBHGyBHTev4lPHN0EqFqZ_vD7fv0fB17gp2U08.jpg?width=108&crop=smart&auto=webp&s=933838f079fe4652993b7692990f88bc2f254f3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/X44M3hBHGyBHTev4lPHN0EqFqZ_vD7fv0fB17gp2U08.jpg?width=216&crop=smart&auto=webp&s=762377fe5a05c278206a9d51665f2954fe03d2b7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/X44M3hBHGyBHTev4lPHN0EqFqZ_vD7fv0fB17gp2U08.jpg?width=320&crop=smart&auto=webp&s=42d03b35e105cb2f8df474afbecff503ef1dedad', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/X44M3hBHGyBHTev4lPHN0EqFqZ_vD7fv0fB17gp2U08.jpg?width=640&crop=smart&auto=webp&s=e3f2297dc6672cf00ac63ba833df74f289655e95', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/X44M3hBHGyBHTev4lPHN0EqFqZ_vD7fv0fB17gp2U08.jpg?width=960&crop=smart&auto=webp&s=6d4a4182954f12548933f27cef57f8837d98b5c7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/X44M3hBHGyBHTev4lPHN0EqFqZ_vD7fv0fB17gp2U08.jpg?width=1080&crop=smart&auto=webp&s=947dd16b30eab7ded66a1caffaf8da27e409ea49', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/X44M3hBHGyBHTev4lPHN0EqFqZ_vD7fv0fB17gp2U08.jpg?auto=webp&s=3f3a9c219941c5d50d3c1fb483aa4fe8acc2d4ce', 'width': 1920}, 'variants': {}}]} |
Tips on local RAG chatbot with RTX 4070 Super? | 1 | [removed] | 2024-12-10T10:33:46 | https://www.reddit.com/r/LocalLLaMA/comments/1haywgj/tips_on_local_rag_chatbot_with_rtx_4070_super/ | gaolat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1haywgj | false | null | t3_1haywgj | /r/LocalLLaMA/comments/1haywgj/tips_on_local_rag_chatbot_with_rtx_4070_super/ | false | false | self | 1 | null |
Built a multimodal semantic RAG | 5 | Hello everyone,
First of all, I'm really pleased to be part of this passionate, enthusiastic, helpful community; I've learnt a lot from it!
Recently, I've been working on a project called **Multimodal Semantic RAG**. This system is designed to handle the complexities of analyzing multimodal data (text, tables, and images) in PDF documents and generating factual, relevant, context-aware responses to user queries.
**Here is the current workflow:**
*(workflow diagram image)*
**My Reasons to Share?**
I’m eager to:
1️⃣ Gather feedback on the current implementation and ideas for improvement.
2️⃣ Explore potential collaborations with AI enthusiasts and experts.
3️⃣ Learn how others in the community are tackling similar challenges.
**What’s Next?**
This project is in its "early days". I’m willing to add more features and new concepts to it. Feel free to check the "perspectives" section in the repo.
I’d love for you to check it out, try it with your own data, and share your thoughts! 💬
**GitHub Repository:** [https://github.com/AhmedAl93/multimodal-semantic-RAG.git](https://github.com/AhmedAl93/multimodal-semantic-RAG.git)
Your feedback, suggestions, and contributions are highly appreciated !
Keep it up, everyone, keep learning ! 🌟 | 2024-12-10T10:56:11 | https://www.reddit.com/r/LocalLLaMA/comments/1haz7th/built_a_multimodal_semantic_rag/ | AhmedAl93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1haz7th | false | null | t3_1haz7th | /r/LocalLLaMA/comments/1haz7th/built_a_multimodal_semantic_rag/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'PqNjtC6HABYJyHB6BeF4citkjzCFBdmxBJciA7mGcew', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cFv1TmxuOrkiSJ5vSbADgsDLFUjAJyiTAWEipU2Ooaw.jpg?width=108&crop=smart&auto=webp&s=cbb253a29ff5c5b45eeeb8c23518b12713438907', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cFv1TmxuOrkiSJ5vSbADgsDLFUjAJyiTAWEipU2Ooaw.jpg?width=216&crop=smart&auto=webp&s=03be13c7c54a5b9e08c3526583ebe8becb9618de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cFv1TmxuOrkiSJ5vSbADgsDLFUjAJyiTAWEipU2Ooaw.jpg?width=320&crop=smart&auto=webp&s=0ea34feaf772866d5d56e607d759b162b75dd8f7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cFv1TmxuOrkiSJ5vSbADgsDLFUjAJyiTAWEipU2Ooaw.jpg?width=640&crop=smart&auto=webp&s=f3d173734ff23ac1b356db0d15561f4602a5bd6b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cFv1TmxuOrkiSJ5vSbADgsDLFUjAJyiTAWEipU2Ooaw.jpg?width=960&crop=smart&auto=webp&s=ddf6bdfda28605dec9406bc837bb400cb81d504f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cFv1TmxuOrkiSJ5vSbADgsDLFUjAJyiTAWEipU2Ooaw.jpg?width=1080&crop=smart&auto=webp&s=4f34b33dddae3d3d12144883e4502a90369acb27', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cFv1TmxuOrkiSJ5vSbADgsDLFUjAJyiTAWEipU2Ooaw.jpg?auto=webp&s=0b458ab102e446a6faffe2c0ccd91855c2ec8479', 'width': 1200}, 'variants': {}}]} |
LLaMA 3.3 (70B) Benchmarks on MacBook Pro M4 Max | 1 | [removed] | 2024-12-10T11:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hazd8g/llama_33_70b_benchmarks_on_macbook_pro_m4_max/ | Vlorenzto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hazd8g | false | null | t3_1hazd8g | /r/LocalLLaMA/comments/1hazd8g/llama_33_70b_benchmarks_on_macbook_pro_m4_max/ | false | false | self | 1 | null |
Web search is now available on chat.deepseek.com (an alternative to searchGPT) | 73 | 2024-12-10T11:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hazh68/web_search_is_now_available_on_chatdeepseekcom_an/ | vinam_7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hazh68 | false | null | t3_1hazh68 | /r/LocalLLaMA/comments/1hazh68/web_search_is_now_available_on_chatdeepseekcom_an/ | false | false | 73 | null |
||
Open source customer service discord bot with Agentic RAG w (Qwen or Mistral Models) | 1 | [removed] | 2024-12-10T11:16:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hazijl/open_source_customer_service_discord_bot_with/ | Scared_Air_2275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hazijl | false | null | t3_1hazijl | /r/LocalLLaMA/comments/1hazijl/open_source_customer_service_discord_bot_with/ | false | false | 1 | null |
|
custom fine tuning | 1 | [removed] | 2024-12-10T11:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hazr0k/custom_fine_tuning/ | Apprehensive-Bet8239 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hazr0k | false | null | t3_1hazr0k | /r/LocalLLaMA/comments/1hazr0k/custom_fine_tuning/ | false | false | self | 1 | null |
Constraining LLM output to be correct code | 0 | So you can ask tools like ollama to constrain the output to valid JSON in order to generate structured output. On top of that you can constrain the output to a specific schema. Beneath that you can constrain output to an arbitrary grammar.
Is it possible to provide arbitrary constraints? For example, to generate source code that is not only grammatically correct but type checks? What work is being done on this? | 2024-12-10T11:34:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hazry5/constraining_llm_output_to_be_correct_code/ | PurpleUpbeat2820 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hazry5 | false | null | t3_1hazry5 | /r/LocalLLaMA/comments/1hazry5/constraining_llm_output_to_be_correct_code/ | false | false | self | 0 | null |
Mistral after EU AI act | 115 | I feel that the amount and quality of the work that Mistral is putting out have dropped significantly since the EU AI act was published. I am not saying they're not doing great work, but the amount of chatter they garner has significantly reduced.
I work with LLMs and a lot of our clients have a presence in the EU, so the regulation question comes up quite frequently and it is something that we've had discussions about. I am no expert on the EU AI act, but from what I've seen it's not very clear on the requirements and there's not a lot of consensus on the interpretation of its clauses. So, it makes it quite tricky to work with and to strategize development around.
Anyways what do you all think? | 2024-12-10T11:46:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hazyjp/mistral_after_eu_ai_act/ | ravishar313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hazyjp | false | null | t3_1hazyjp | /r/LocalLLaMA/comments/1hazyjp/mistral_after_eu_ai_act/ | false | false | self | 115 | null |
How do I deploy my custom finetuned LLM so that others can use it for inference? | 0 | So I managed to finetune a local LLM on my own custom synthetic data. How do I go about deploying my LLM so that others can use it for inference?
For now I know that Streamlit is for creating the UI, but I'm not sure if it can host the LLM? If it can, is it free? I need this so I can provide it to someone else for testing out the responses, but they don't have the hardware to run it. | 2024-12-10T12:07:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hb0aqb/how_do_i_deploy_my_custom_finetuned_llm_so_that/ | jj4giya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb0aqb | false | null | t3_1hb0aqb | /r/LocalLLaMA/comments/1hb0aqb/how_do_i_deploy_my_custom_finetuned_llm_so_that/ | false | false | self | 0 | null |
chatgpt pro vs 5 * plus | 1 | [removed] | 2024-12-10T12:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hb0k2d/chatgpt_pro_vs_5_plus/ | One-Thanks-9740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb0k2d | false | null | t3_1hb0k2d | /r/LocalLLaMA/comments/1hb0k2d/chatgpt_pro_vs_5_plus/ | false | false | self | 1 | null |
DeepSeek-V2.5-1210: The Final Version of the DeepSeek V2.5 | 202 | DeepSeek-V2.5-1210 will mark the end of the DeepSeek V2.5 series. With this release, the iteration and updates of the DeepSeek V2 series models will officially come to a close.
Since its open-source release in May of this year, the DeepSeek V2 series has accompanied users for half a year, undergoing five iterations during that time. Your unwavering support and encouragement have been the driving force behind our continuous updates and improvements.
As the saying goes, “Many can make a good start, but few can persevere to the end.” This final version signifies a temporary conclusion and, more importantly, a brand-new beginning. DeepSeek is now focusing on building a more powerful next-generation foundational model, **DeepSeek V3**. Stay tuned!
https://preview.redd.it/epxp4s3mo06e1.png?width=1100&format=png&auto=webp&s=070297659aa612da6e725c2ab9df72c687f81498
| 2024-12-10T12:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hb0xau/deepseekv251210_the_final_version_of_the_deepseek/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb0xau | false | null | t3_1hb0xau | /r/LocalLLaMA/comments/1hb0xau/deepseekv251210_the_final_version_of_the_deepseek/ | false | false | 202 | null |
|
KoboldCpp is such a gigantic leap in QoL coming from Oobabooga, it's just ridiculous. | 53 | With Oobabooga -what a stupid name- I always had errors, or it took 3 minutes to answer me. It's really super slow.
But today I said ENOUGH.
So I've installed KoboldCpp and everything works as intended. Immediately. No mumbo jumbo. No tweaking infinite settings. No errors. It's just amazing. I ask, and I get a reply immediately. No stupid waits of minutes. I've been using the wrong software for months. Now you're gonna say I don't know how to use it. Skip it.
| 2024-12-10T12:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hb1272/koboldcpp_is_such_a_gigantic_leap_in_qol_coming/ | pumukidelfuturo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb1272 | false | null | t3_1hb1272 | /r/LocalLLaMA/comments/1hb1272/koboldcpp_is_such_a_gigantic_leap_in_qol_coming/ | false | false | self | 53 | null |
Can we expect llama 3.3 in meta.ai? | 16 | Like llama 3.1 and 3.2, why isn't llama 3.3 integrated into meta.ai? Can we expect the integration of this into meta.ai soon?
| 2024-12-10T13:06:58 | More_Share5042 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hb1c1l | false | null | t3_1hb1c1l | /r/LocalLLaMA/comments/1hb1c1l/can_we_expect_llama_33_in_metaai/ | false | false | 16 | {'enabled': True, 'images': [{'id': '6_TsSbhcLLNHVuCso2uNjxQ6KJAw5zHMNBYyi8BhTOk', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/fghl7svns06e1.png?width=108&crop=smart&auto=webp&s=0f0415608d55ad1cc6ab0af86b748e83ea1b694e', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/fghl7svns06e1.png?width=216&crop=smart&auto=webp&s=ce7b1246bee56f4b24652bb0af55422c8953c426', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/fghl7svns06e1.png?width=320&crop=smart&auto=webp&s=19ba055c5ce0ced35d4922ebda38bc875778f0ca', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/fghl7svns06e1.png?width=640&crop=smart&auto=webp&s=232368b925f36d61603889f84565dc569e5839f3', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/fghl7svns06e1.png?width=960&crop=smart&auto=webp&s=e85cfc478e39f688488ecce471ffe9d8b7bf7b7a', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/fghl7svns06e1.png?width=1080&crop=smart&auto=webp&s=54cffa7dc59b4bab018f7eb5333248d90454200f', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://preview.redd.it/fghl7svns06e1.png?auto=webp&s=5b4dbe1ef835262bfce0bade09c90c50677347f3', 'width': 1080}, 'variants': {}}]} |
||
Suggestion needed - Local model to handle large-ish CSV data? | 0 | I have a couple of related datasets in csv that are about 1+GB each that I’d like to use AI to work with. (I recognize that’s not “large” in the data science world, but it’s larger than most AI context windows…). I’m looking for suggestions of a model(s) (or framework) where I could provide 3-4 of these data files at a time and then ask the model questions about trends, patterns, correlations, etc. Bonus points if I can load the CSV’s into PostgreSQL and have the model use the DB for inference.
Are there any models/frameworks that are particularly well suited to having 3-4 GB of data fed to them and then answering questions related to the data? I'm new at this, so I apologize in advance if this is a stupid-simple task to accomplish, but we all have to start somewhere. If it matters, I have 24GB of VRAM to work with and can easily add another 24GB if it's required. | 2024-12-10T13:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hb1p2n/suggestion_needed_local_model_to_handle_largeish/ | impactadvisor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb1p2n | false | null | t3_1hb1p2n | /r/LocalLLaMA/comments/1hb1p2n/suggestion_needed_local_model_to_handle_largeish/ | false | false | self | 0 | null |
Waiting for deepseek to open-source their model with weights, search is also added to deepseek now | 22 | 2024-12-10T13:33:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hb1u8i/waiting_for_deepseek_to_opensource_their_model/ | TheLogiqueViper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb1u8i | false | null | t3_1hb1u8i | /r/LocalLLaMA/comments/1hb1u8i/waiting_for_deepseek_to_opensource_their_model/ | false | false | 22 | null
||
Too little memory in PC | 0 | Hei!
I have been trying to run llama 3.3. This however gives me the error "Error: model requires more system memory (21.4 GiB) than is available (3.9 GiB)". (16 GB RAM machine)
I do however have a GPU with 24 GB of VRAM. Is it possible to somehow load the model in there in batches so as to use the model? | 2024-12-10T13:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hb1uo0/too_little_memory_in_pc/ | Sabotik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb1uo0 | false | null | t3_1hb1uo0 | /r/LocalLLaMA/comments/1hb1uo0/too_little_memory_in_pc/ | false | false | self | 0 | null |
Is qwen 2.5 the best open-source model? | 1 | [removed] | 2024-12-10T13:39:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hb1y2a/is_qwen_25_the_best_opensource_model/ | Scared_Air_2275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb1y2a | false | null | t3_1hb1y2a | /r/LocalLLaMA/comments/1hb1y2a/is_qwen_25_the_best_opensource_model/ | false | false | self | 1 | null |
How do you load large JSON files? | 1 | [removed] | 2024-12-10T13:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hb20bh/how_do_you_load_large_json_files/ | Jacyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb20bh | false | null | t3_1hb20bh | /r/LocalLLaMA/comments/1hb20bh/how_do_you_load_large_json_files/ | false | false | self | 1 | null |
Activity in ling in it era | 1 | [removed] | 2024-12-10T13:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hb214t/activity_in_ling_in_it_era/ | kibs_123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb214t | false | null | t3_1hb214t | /r/LocalLLaMA/comments/1hb214t/activity_in_ling_in_it_era/ | false | false | self | 1 | null |
Is there a comprehensive overview of pricing models for each model out there? | 1 | [removed] | 2024-12-10T13:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hb24ef/is_there_a_comprehensive_overview_of_pricing/ | zelfinc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb24ef | false | null | t3_1hb24ef | /r/LocalLLaMA/comments/1hb24ef/is_there_a_comprehensive_overview_of_pricing/ | false | false | self | 1 | null |
Big congrats to Professor Geoffrey Hinton on the Nobel Prize in Physics! So excited for today's ceremony—learned so much from his MOOC! | 0 | 2024-12-10T14:08:18 | https://v.redd.it/c2q6e9kj216e1 | dahara111 | /r/LocalLLaMA/comments/1hb2juk/big_congrats_to_professor_geoffrey_hinton_on_the/ | 1970-01-01T00:00:00 | 0 | {} | 1hb2juk | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/c2q6e9kj216e1/DASHPlaylist.mpd?a=1736561306%2CNjU1NDFhZWJjNGQ4NGUyN2Y1ZmUzZGEyZTQyY2NkNGQyZjlmZWM4NWQwZWNlMjIxNzYyN2IzMzA2M2E4MjM5ZQ%3D%3D&v=1&f=sd', 'duration': 150, 'fallback_url': 'https://v.redd.it/c2q6e9kj216e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/c2q6e9kj216e1/HLSPlaylist.m3u8?a=1736561306%2CZGYzYTIyOWExNDI1MThlYmJjMWExZjEzYmFjODE4ODVjZGRhYzVkMDVkNjA1ZTRlNDZlYWI2MzI1ZDA4OWYwZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c2q6e9kj216e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1hb2juk | /r/LocalLLaMA/comments/1hb2juk/big_congrats_to_professor_geoffrey_hinton_on_the/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=108&crop=smart&format=pjpg&auto=webp&s=f5bfdac861bcd4ce195b94c2a1483dfc71f97c16', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=216&crop=smart&format=pjpg&auto=webp&s=49326264faeab41c3c1803ba5505c338e872be8c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=320&crop=smart&format=pjpg&auto=webp&s=f1e9327b0becf63dc92259f8646d982a1064ff74', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=640&crop=smart&format=pjpg&auto=webp&s=e48ff23a1600e1e7998482fa1f13e1ee4c13b78a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=960&crop=smart&format=pjpg&auto=webp&s=3917437d7ced098d36c2471ddd0125864e3b1a52', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c8afa3b793ffed568fab389dd9adf888a5b7300f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/aGlvNG9laWoyMTZlMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?format=pjpg&auto=webp&s=f9cb0d55870fa01546472ca2d161cda1e1effdbe', 'width': 1280}, 'variants': {}}]} |
||
GuardBench: a Python library and benchmark for guardrail models evaluation | 1 | [removed] | 2024-12-10T14:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hb2uz2/guardbench_a_python_library_and_benchmark_for/ | p53ud0_r4nd0m_numb3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb2uz2 | false | null | t3_1hb2uz2 | /r/LocalLLaMA/comments/1hb2uz2/guardbench_a_python_library_and_benchmark_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BhYPfrY03exRDAa2Vfx_Hl7SWGavdFb6kmMVjmgObuI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FmKJUTc-lWnr0XntJqG8e8rL24ZTRKIA_-3sd_NSks8.jpg?width=108&crop=smart&auto=webp&s=2cf365e9a345e13eb5dd29dfa33c3579e1341c9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FmKJUTc-lWnr0XntJqG8e8rL24ZTRKIA_-3sd_NSks8.jpg?width=216&crop=smart&auto=webp&s=123802caa02e76a5c85b28330487099df640302e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FmKJUTc-lWnr0XntJqG8e8rL24ZTRKIA_-3sd_NSks8.jpg?width=320&crop=smart&auto=webp&s=d32e5d756aa3f38ba2f67b88dddedac4964a84f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FmKJUTc-lWnr0XntJqG8e8rL24ZTRKIA_-3sd_NSks8.jpg?width=640&crop=smart&auto=webp&s=e2d98df1a6ff42a2c50271e3d140d64b742bc682', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FmKJUTc-lWnr0XntJqG8e8rL24ZTRKIA_-3sd_NSks8.jpg?width=960&crop=smart&auto=webp&s=932b65f58efeeba94be79f4e1cd1a095b0dc3b43', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FmKJUTc-lWnr0XntJqG8e8rL24ZTRKIA_-3sd_NSks8.jpg?width=1080&crop=smart&auto=webp&s=575542f91d9131fed3dea2f5018bd4ad9b65d401', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FmKJUTc-lWnr0XntJqG8e8rL24ZTRKIA_-3sd_NSks8.jpg?auto=webp&s=dd3b6067fd1a793cc0994021c681cde7db1f123d', 'width': 1200}, 'variants': {}}]} |
black-forest-labs/FLUX.1-dev is not a local folder and is not a valid model identifier listed on huggingface.co/models
| 0 | Hi, I have been using ai-toolkit to fine-tune the Flux model. All the steps which I have done seem to be right, but I still get this error
https://preview.redd.it/ce0pqxgc716e1.png?width=1067&format=png&auto=webp&s=29ac3aa1f07d93ca82fdda7325482b47b04bd71e
I know it's a gated model, but I have been granted access to this model and I have also set the token using the **huggingface-cli login** command, but nothing works and I still get this message
https://preview.redd.it/bqlfhcmt716e1.png?width=1079&format=png&auto=webp&s=b22cff2e7dafbb9f2bb160ee31e7f38dd9e022c5
| 2024-12-10T14:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hb32bk/blackforestlabsflux1dev_is_not_a_local_folder_and/ | Particular_Garbage32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb32bk | false | null | t3_1hb32bk | /r/LocalLLaMA/comments/1hb32bk/blackforestlabsflux1dev_is_not_a_local_folder_and/ | false | false | 0 | null |
|
Does anyone know any small model that uses dynamic RoPE scaling? | 3 | I am interested in adding support of dynamic RoPE scaling to llama.cpp. But I only know one model that uses it:
[https://huggingface.co/Deci/DeciLM-7B-instruct/blob/main/config.json](https://huggingface.co/Deci/DeciLM-7B-instruct/blob/main/config.json)
Does anyone know one more small model that uses it such that I can do more testing?
Supposedly if a model uses it, then it should contain a line like this in config.json
"rope\_scaling": {"type": "dynamic", "factor": 2.0},
Thanks a lot in advance. | 2024-12-10T14:38:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hb36cb/does_anyone_know_any_small_model_that_uses/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb36cb | false | null | t3_1hb36cb | /r/LocalLLaMA/comments/1hb36cb/does_anyone_know_any_small_model_that_uses/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Ih70UkUCp_QB-OLARLVxWL1_TRFJuNR4r-bbsFHv_pk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/36cZX0cDIOWz0KlOZ36s2EZPj2PPWeByLC2ttRitTNE.jpg?width=108&crop=smart&auto=webp&s=e8e0b5370d83e04a6f5e4423e698766fd52ef5c3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/36cZX0cDIOWz0KlOZ36s2EZPj2PPWeByLC2ttRitTNE.jpg?width=216&crop=smart&auto=webp&s=533dc68f2b3da1d366b37707c67efca440052c62', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/36cZX0cDIOWz0KlOZ36s2EZPj2PPWeByLC2ttRitTNE.jpg?width=320&crop=smart&auto=webp&s=ee0b6b63cc118a79be5832b362c0365010f70f6a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/36cZX0cDIOWz0KlOZ36s2EZPj2PPWeByLC2ttRitTNE.jpg?width=640&crop=smart&auto=webp&s=395e73576b241e2ccca0e1c6738f4695df60ccb5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/36cZX0cDIOWz0KlOZ36s2EZPj2PPWeByLC2ttRitTNE.jpg?width=960&crop=smart&auto=webp&s=50ec92dda18d7df82e519e89de3ce8752f37e136', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/36cZX0cDIOWz0KlOZ36s2EZPj2PPWeByLC2ttRitTNE.jpg?width=1080&crop=smart&auto=webp&s=9ee28f33561b5ef648859100d4b5f42fba79b991', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/36cZX0cDIOWz0KlOZ36s2EZPj2PPWeByLC2ttRitTNE.jpg?auto=webp&s=ba2f10d92057f0b3babe2da42c6948097bfe4530', 'width': 1200}, 'variants': {}}]} |
How important is ECC memory, really? | 3 | So I'm in the process of building a rig with three 3090s, and I'm to choose the motherboard and CPU. I've a choice between a motherboard with xeon processor but ecc memory or a more powerful processor but no ecc memory support (x299 board).
I hope to use the rig for both LLMs and hopefully VR gaming.
What would be more important in such a build? What are your thoughts? | 2024-12-10T14:39:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hb37ek/how_important_is_ecc_memory_really/ | Rainboy97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb37ek | false | null | t3_1hb37ek | /r/LocalLLaMA/comments/1hb37ek/how_important_is_ecc_memory_really/ | false | false | self | 3 | null |
This is the most powerful multimodal edge model under 4B that I've ever seen. | 1 | [removed] | 2024-12-10T14:45:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hb3c63/this_is_the_most_powerful_multimodal_edge_model/ | SnooDoggos3589 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb3c63 | false | null | t3_1hb3c63 | /r/LocalLLaMA/comments/1hb3c63/this_is_the_most_powerful_multimodal_edge_model/ | false | false | 1 | null |
|
Any viable Hungarian models for more complex text analysis tasks, underlying purpose/meaning extraction, bias determination? (Optionally even fact checking) | 1 | Looking for models that would be useful, or could become useful with fine-tuning/RAG.
It can be English-Hungarian, but Hungarian would be a must for my purposes. | 2024-12-10T14:46:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hb3cjk/any_viable_hungarian_models_for_more_complex_text/ | Southern_Diamond9904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb3cjk | false | null | t3_1hb3cjk | /r/LocalLLaMA/comments/1hb3cjk/any_viable_hungarian_models_for_more_complex_text/ | false | false | self | 1 | null |
The hidden Claude system prompt (on its Artefacts system, new response styles, thinking tags, and more...) | 9 | Compared to the ChatGPT system prompt:
- Claude's prompt is 5x larger!!!
- Claude is less constrained by conservative ethical considerations.
The ChatGPT system prompt for comparison:
https://pastebin.com/u8jR77QV
Now for Claude's prompt:
```
<artifacts_info>
The assistant can create and reference artifacts during conversations. Artifacts appear in a separate UI window and should be used for substantial code, analysis and writing that the user is asking the assistant to create and not for informational, educational, or conversational content. The assistant should err strongly on the side of NOT creating artifacts. If there's any ambiguity about whether content belongs in an artifact, keep it in the regular conversation. Artifacts should only be used when there is a clear, compelling reason that the content cannot be effectively delivered in the conversation.
# Good artifacts are...
- Must be longer than 20 lines
- Original creative writing (stories, poems, scripts)
- In-depth, long-form analytical content (reviews, critiques, analyses)
- Writing custom code to solve a specific user problem (such as building new applications, components, or tools), creating data visualizations, developing new algorithms, generating technical documents/guides that are meant to be used as reference materials
- Content intended for eventual use outside the conversation (e.g., reports, emails, presentations)
- Modifying/iterating on content that's already in an existing artifact
- Content that will be edited, expanded, or reused
- Instructional content that is aimed for specific audiences, such as a classroom
- Comprehensive guides
# Don't use artifacts for...
- Explanatory content, such as explaining how an algorithm works, explaining scientific concepts, breaking down math problems, steps to achieve a goal
- Teaching or demonstrating concepts (even with examples)
- Answering questions about existing knowledge
- Content that's primarily informational rather than creative or analytical
- Lists, rankings, or comparisons, regardless of length
- Plot summaries or basic reviews, story explanations, movie/show descriptions
- Conversational responses and discussions
- Advice or tips
# Usage notes
- Artifacts should only be used for content that is >20 lines (even if it fulfills the good artifacts guidelines)
- Maximum of one artifact per message unless specifically requested
- The assistant prefers to create in-line content and no artifact whenever possible. Unnecessary use of artifacts can be jarring for users.
- If a user asks the assistant to "draw an SVG" or "make a website," the assistant does not need to explain that it doesn't have these capabilities. Creating the code and placing it within the artifact will fulfill the user's intentions.
- If asked to generate an image, the assistant can offer an SVG instead.
# Reading Files
The user may have uploaded one or more files to the conversation. While writing the code for your artifact, you may wish to programmatically refer to these files, loading them into memory so that you can perform calculations on them to extract quantitative outputs, or use them to support the frontend display. If there are files present, they'll be provided in <document> tags, with a separate <document> block for each document. Each document block will always contain a <source> tag with the filename. The document blocks might also contain a <document_content> tag with the content of the document. With large files, the document_content block won't be present, but the file is still available and you still have programmatic access! All you have to do is use the `window.fs.readFile` API. To reiterate:
- The overall format of a document block is:
<document>
<source>filename</source>
<document_content>file content</document_content> # OPTIONAL
</document>
- Even if the document content block is not present, the content still exists, and you can access it programmatically using the `window.fs.readFile` API.
More details on this API:
The `window.fs.readFile` API works similarly to the Node.js fs/promises readFile function. It accepts a filepath and returns the data as a uint8Array by default. You can optionally provide an options object with an encoding param (e.g. `window.fs.readFile($your_filepath, { encoding: 'utf8'})`) to receive a utf8 encoded string response instead.
Note that the filename must be used EXACTLY as provided in the `<source>` tags. Also please note that the user taking the time to upload a document to the context window is a signal that they're interested in your using it in some way, so be open to the possibility that ambiguous requests may be referencing the file obliquely. For instance, a request like "What's the average" when a csv file is present is likely asking you to read the csv into memory and calculate a mean even though it does not explicitly mention a document.
# Manipulating CSVs
The user may have uploaded one or more CSVs for you to read. You should read these just like any file. Additionally, when you are working with CSVs, follow these guidelines:
- Always use Papaparse to parse CSVs. When using Papaparse, prioritize robust parsing. Remember that CSVs can be finicky and difficult. Use Papaparse with options like dynamicTyping, skipEmptyLines, and delimitersToGuess to make parsing more robust.
- One of the biggest challenges when working with CSVs is processing headers correctly. You should always strip whitespace from headers, and in general be careful when working with headers.
- If you are working with any CSVs, the headers have been provided to you elsewhere in this prompt, inside <document> tags. Look, you can see them. Use this information as you analyze the CSV.
- THIS IS VERY IMPORTANT: If you need to process or do computations on CSVs such as a groupby, use lodash for this. If appropriate lodash functions exist for a computation (such as groupby), then use those functions -- DO NOT write your own.
- When processing CSV data, always handle potential undefined values, even for expected columns.
# Updating vs rewriting artifacts
- When making changes, try to change the minimal set of chunks necessary.
- You can either use `update` or `rewrite`.
- Use `update` when only a small fraction of the text needs to change. You can call `update` multiple times to update different parts of the artifact.
- Use `rewrite` when making a major change that would require changing a large fraction of the text.
- When using `update`, you must provide both `old_str` and `new_str`. Pay special attention to whitespace.
- `old_str` must be perfectly unique (i.e. appear EXACTLY once) in the artifact and must match exactly, including whitespace. Try to keep it as short as possible while remaining unique.
<artifact_instructions>
When collaborating with the user on creating content that falls into compatible categories, the assistant should follow these steps:
1. Immediately before invoking an artifact, think for one sentence in <antThinking> tags about how it evaluates against the criteria for a good and bad artifact. Consider if the content would work just fine without an artifact. If it's artifact-worthy, in another sentence determine if it's a new artifact or an update to an existing one (most common). For updates, reuse the prior identifier.
2. Wrap the content in opening and closing `<antArtifact>` tags.
3. Assign an identifier to the `identifier` attribute of the opening `<antArtifact>` tag. For updates, reuse the prior identifier. For new artifacts, the identifier should be descriptive and relevant to the content, using kebab-case (e.g., "example-code-snippet"). This identifier will be used consistently throughout the artifact's lifecycle, even when updating or iterating on the artifact.
4. Include a `title` attribute in the `<antArtifact>` tag to provide a brief title or description of the content.
5. Add a `type` attribute to the opening `<antArtifact>` tag to specify the type of content the artifact represents. Assign one of the following values to the `type` attribute:
- Code: "application/vnd.ant.code"
- Use for code snippets or scripts in any programming language.
- Include the language name as the value of the `language` attribute (e.g., `language="python"`).
- Do not use triple backticks when putting code in an artifact.
- Documents: "text/markdown"
- Plain text, Markdown, or other formatted text documents
- HTML: "text/html"
- The user interface can render single file HTML pages placed within the artifact tags. HTML, JS, and CSS should be in a single file when using the `text/html` type.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so `<img src="/api/placeholder/400/320" alt="placeholder" />`
- The only place external scripts can be imported from is https://cdnjs.cloudflare.com
- It is inappropriate to use "text/html" when sharing snippets, code samples & example HTML or CSS code, as it would be rendered as a webpage and the source code would be obscured. The assistant should instead use "application/vnd.ant.code" defined above.
- If the assistant is unable to follow the above requirements for any reason, use "application/vnd.ant.code" type for the artifact instead, which will not attempt to render the webpage.
- SVG: "image/svg+xml"
- The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
- The assistant should specify the viewbox of the SVG rather than defining a width/height
- Mermaid Diagrams: "application/vnd.ant.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- Do not put Mermaid code in a code block when using artifacts.
- React Components: "application/vnd.ant.react"
- Use this for displaying either: React elements, e.g. `<strong>Hello World!</strong>`, React pure functional components, e.g. `() => <strong>Hello World!</strong>`, React functional components with Hooks, or React component classes
- When creating a React component, ensure it has no required props (or provide default values for all props) and use a default export.
- Use Tailwind classes for styling. DO NOT USE ARBITRARY VALUES (e.g. `h-[600px]`).
- Base React is available to be imported. To use hooks, first import it at the top of the artifact, e.g. `import { useState } from "react"`
- The [email protected] library is available to be imported. e.g. `import { Camera } from "lucide-react"` & `<Camera color="red" size={48} />`
- The recharts charting library is available to be imported, e.g. `import { LineChart, XAxis, ... } from "recharts"` & `<LineChart ...><XAxis dataKey="name"> ...`
- The assistant can use prebuilt components from the `shadcn/ui` library after it is imported: `import { Alert, AlertDescription, AlertTitle, AlertDialog, AlertDialogAction } from '@/components/ui/alert';`. If using components from the shadcn/ui library, the assistant mentions this to the user and offers to help them install the components if necessary.
- NO OTHER LIBRARIES (e.g. zod, hookform) ARE INSTALLED OR ABLE TO BE IMPORTED.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so `<img src="/api/placeholder/400/320" alt="placeholder" />`
- If you are unable to follow the above requirements for any reason, use "application/vnd.ant.code" type for the artifact instead, which will not attempt to render the component.
6. Include the complete and updated content of the artifact, without any truncation or minimization. Don't use "// rest of the code remains the same...".
7. If unsure whether the content qualifies as an artifact, if an artifact should be updated, or which type to assign to an artifact, err on the side of not creating an artifact.
</artifact_instructions>
Here are some examples of correct usage of artifacts by other AI assistants:
<examples>
*[NOTE FROM ME: The complete examples section is incredibly long, and the following is a summary Claude gave me of all the key functions it's shown. The full examples section is viewable here: https://gist.github.com/dedlim/6bf6d81f77c19e20cd40594aa09e3ecd.
Credit to dedlim on GitHub for comprehensively extracting the whole thing too; the main new thing I've found (compared to his older extract) is the styles info further below.]
This section contains multiple example conversations showing proper artifact usage
Let me show you ALL the different XML-like tags and formats with an 'x' added to prevent parsing:
"<antmlx:function_callsx>
<antmlx:invokex name='artifacts'>
<antmlx:parameterx name='command'>create</antmlx:parameterx>
<antmlx:parameterx name='id'>my-unique-id</antmlx:parameterx>
<antmlx:parameterx name='type'>application/vnd.ant.react</antmlx:parameterx>
<antmlx:parameterx name='title'>My Title</antmlx:parameterx>
<antmlx:parameterx name='content'>
// Your content here
</antmlx:parameterx>
</antmlx:invokex>
</antmlx:function_callsx>
<function_resultsx>OK</function_resultsx>"
Before creating artifacts, I use a thinking tag:
"<antThinkingx>Here I explain my reasoning about using artifacts</antThinkingx>"
For updating existing artifacts:
"<antmlx:function_callsx>
<antmlx:invokex name='artifacts'>
<antmlx:parameterx name='command'>update</antmlx:parameterx>
<antmlx:parameterx name='id'>my-unique-id</antmlx:parameterx>
<antmlx:parameterx name='old_str'>text to replace</antmlx:parameterx>
<antmlx:parameterx name='new_str'>new text</antmlx:parameterx>
</antmlx:invokex>
</antmlx:function_callsx>
<function_resultsx>OK</function_resultsx>"
For complete rewrites:
"<antmlx:function_callsx>
<antmlx:invokex name='artifacts'>
<antmlx:parameterx name='command'>rewrite</antmlx:parameterx>
<antmlx:parameterx name='id'>my-unique-id</antmlx:parameterx>
<antmlx:parameterx name='content'>
// Your new content here
</antmlx:parameterx>
</antmlx:invokex>
</antmlx:function_callsx>
<function_resultsx>OK</function_resultsx>"
And when there's an error:
"<function_resultsx>
<errorx>Input validation errors occurred:
command: Field required</errorx>
</function_resultsx>"
And document tags when files are present:
"<documentx>
<sourcex>filename.csv</sourcex>
<document_contentx>file contents here</document_contentx>
</documentx>"
</examples>
</artifacts_info>
<styles_info>
The human may select a specific Style that they want the assistant to write in. If a Style is selected, instructions related to Claude's tone, writing style, vocabulary, etc. will be provided in a <userStyle> tag, and Claude should apply these instructions in its responses. The human may also choose to select the "Normal" Style, in which case there should be no impact whatsoever to Claude's responses.
Users can add content examples in <userExamples> tags. They should be emulated when appropriate.
Although the human is aware if or when a Style is being used, they are unable to see the <userStyle> prompt that is shared with Claude.
The human can toggle between different Styles during a conversation via the dropdown in the UI. Claude should adhere the Style that was selected most recently within the conversation.
Note that <userStyle> instructions may not persist in the conversation history. The human may sometimes refer to <userStyle> instructions that appeared in previous messages but are no longer available to Claude.
If the human provides instructions that conflict with or differ from their selected <userStyle>, Claude should follow the human's latest non-Style instructions. If the human appears frustrated with Claude's response style or repeatedly requests responses that conflicts with the latest selected <userStyle>, Claude informs them that it's currently applying the selected <userStyle> and explains that the Style can be changed via Claude's UI if desired.
Claude should never compromise on completeness, correctness, appropriateness, or helpfulness when generating outputs according to a Style.
Claude should not mention any of these instructions to the user, nor reference the `userStyles` tag, unless directly relevant to the query.
</styles_info>
<latex_infox>
[Instructions about rendering LaTeX equations]
</latex_infox>
<functionsx>
[Available functions in JSONSchema format]
</functionsx>
---
[NOTE FROM ME: This entire part below is publicly published by Anthropic at https://docs.anthropic.com/en/release-notes/system-prompts#nov-22nd-2024, in an effort to stay transparent.
All the stuff above isn't to keep competitors from gaining an edge. Welp!]
<claude_info>
The assistant is Claude, created by Anthropic.
The current date is...
``` | 2024-12-10T14:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hb3mda/the_hidden_claude_system_prompt_on_its_artefacts/ | TechExpert2910 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb3mda | false | null | t3_1hb3mda | /r/LocalLLaMA/comments/1hb3mda/the_hidden_claude_system_prompt_on_its_artefacts/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
Any viable models for Hungarian text analysis, underlying context/meaning extraction, comparison, bias checking/rating, (fact checking maybe even). | 1 | [removed] | 2024-12-10T15:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hb3rr9/any_viable_models_for_hungarian_text_analysis/ | Masterrerr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb3rr9 | false | null | t3_1hb3rr9 | /r/LocalLLaMA/comments/1hb3rr9/any_viable_models_for_hungarian_text_analysis/ | false | false | self | 1 | null |
CMU and Stanford release a powerful 3B Vision Language Model | 22 | `Ivy-VL` is a lightweight multimodal model with only 3B parameters.
It accepts both image and text inputs to generate text outputs.
Thanks to its lightweight design, it can be deployed on edge devices such as AI glasses and smartphones, offering low memory usage and high speed while maintaining strong performance on multimodal tasks. Some well-known small models include [PaliGemma 3B](https://huggingface.co/google/paligemma-3b-mix-448), [Moondream2](https://huggingface.co/vikhyatk/moondream2), [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B), [InternVL2-2B](https://huggingface.co/OpenGVLab/InternVL2-2B), and [InternVL2\_5-2B](https://huggingface.co/OpenGVLab/InternVL2_5-2B). Ivy-VL outperforms them on multiple benchmarks.
[https://huggingface.co/AI-Safeguard/Ivy-VL-llava](https://huggingface.co/AI-Safeguard/Ivy-VL-llava)
https://preview.redd.it/okh8k87zd16e1.png?width=5530&format=png&auto=webp&s=64ead5ad366f1082814d365c598bce0c475dec30 | 2024-12-10T15:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hb3t6w/cmu_and_standford_release_a_powerful_3b_vision/ | SnooDoggos3589 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb3t6w | false | null | t3_1hb3t6w | /r/LocalLLaMA/comments/1hb3t6w/cmu_and_standford_release_a_powerful_3b_vision/ | false | false | 22 | null |
|
Any viable models for Hungarian text analysis, underlying context/meaning extraction, comparison, bias checking/rating, (fact checking maybe even). | 1 | [removed] | 2024-12-10T15:09:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hb3vik/any_viable_models_for_hungarian_text_analysis/ | Masterrerr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb3vik | false | null | t3_1hb3vik | /r/LocalLLaMA/comments/1hb3vik/any_viable_models_for_hungarian_text_analysis/ | false | false | self | 1 | null |
What’s the best small llm < 7B and be competitive with GPT-3.5 in coding/writing/translation? | 1 | [removed] | 2024-12-10T15:10:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hb3w6i/whats_the_best_small_llm_7b_and_be_competitive/ | TeacherKitchen960 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb3w6i | false | null | t3_1hb3w6i | /r/LocalLLaMA/comments/1hb3w6i/whats_the_best_small_llm_7b_and_be_competitive/ | false | false | self | 1 | null |
Anyone else collecting and archiving models? It started with a 12TB drive and escalated to building an 8 drive 72TB NAS. Share your setup! | 437 | 2024-12-10T15:33:28 | SuperChewbacca | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hb4equ | false | null | t3_1hb4equ | /r/LocalLLaMA/comments/1hb4equ/anyone_else_collecting_and_archiving_models_it/ | false | false | 437 | {'enabled': True, 'images': [{'id': 'rmYHAaeMl8T7V4gX9BNwj1VGbSh_YyFew1qiRn3C7OI', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/apqltinei16e1.jpeg?width=108&crop=smart&auto=webp&s=ad7936e028540d669cd54882f27d84c5d3fe05d4', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/apqltinei16e1.jpeg?width=216&crop=smart&auto=webp&s=1abfe45a5d636dc1616c44c1f89b2048885c9468', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/apqltinei16e1.jpeg?width=320&crop=smart&auto=webp&s=2f9f58f28955bad34e62971843d94e2d49d13304', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/apqltinei16e1.jpeg?width=640&crop=smart&auto=webp&s=8d55c6436d75d4003c4a2937364bbba8858cfb6c', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/apqltinei16e1.jpeg?width=960&crop=smart&auto=webp&s=a0d4862103226e0cfe5b845837e79808aa28b12d', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/apqltinei16e1.jpeg?width=1080&crop=smart&auto=webp&s=dc4d1adeb03bdcd63abf5b9dbb8b9abba06aa000', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/apqltinei16e1.jpeg?auto=webp&s=4cfedaf756f296bee2cc0e05ba96d53fcac523e2', 'width': 4032}, 'variants': {}}]} |
|||
Llama-3.3-70B-Instruct-4bit LoRA Fine-Tuning: No Change (or Instability) - Adapter Issue? | 1 | [removed] | 2024-12-10T15:45:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hb4oia/llama3370binstruct4bit_lora_finetuning_no_change/ | corozcop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb4oia | false | null | t3_1hb4oia | /r/LocalLLaMA/comments/1hb4oia/llama3370binstruct4bit_lora_finetuning_no_change/ | false | false | self | 1 | null |
Tool Demo: Creating 9 fine tuned models from scratch in 18 minutes [Kiln AI] | 17 | **TL;DR:** I built [Kiln](https://getkiln.ai), a new free tool that makes fine-tuning LLMs easy. In this demo, I create 9 fine-tuned models (including Llama 3.x, Mixtral, and GPT-4o-mini) in just 18 minutes, achieving great results for less than $6 total cost. This is completely from scratch, and includes task definition, synthetic dataset generation, and model deployment.
The codebase is all on [GitHub](https://github.com/Kiln-AI/Kiln).
# Demo
For the demo video below, I created 9 models in 18 minutes of work (not including waiting for training/data-gen):
* [2 mins]: Define task, goals, and schema
* [9 mins]: Synthetic data generation: create 920 high-quality examples using topic trees, large models, chain of thought, and interactive UI
* [5 mins]: dispatch 9 fine tuning jobs: Fireworks (Llama 3.2 1b/3b/11b, Llama 3.1 8b/70b, Mixtral 8x7b), OpenAI (GPT 4o-mini & 4o), and Unsloth (Llama 3.2 1b/3b)
* [2 mins]: deploy models and test they work
# Results
The result was small models that worked quite well, where the base models had previously failed to produce the correct style and structure. The overall cost was less than $6 (excluding GPT 4o, which was $16, and probably wasn’t necessary). The smallest model (Llama 3.2 1B) is about 10x faster and 150x cheaper than the models we used during synthetic data generation.
# Guide
I wrote a [detailed fine-tuning guide](https://github.com/Kiln-AI/Kiln/blob/main/guides/Fine%20Tuning%20LLM%20Models%20Guide.md), covering more details around deployment, running fully locally with Unsloth/Ollama, exporting to GGUF, data strategies, and next steps like evals.
# Feedback Please!
I’d love feedback on the tooling, UX and idea! And any suggestions for what to add next (RAG? More models? Images? Eval tools?). Feel free to DM if you have any questions.
# Try it!
* You can [download Kiln here](https://github.com/Kiln-AI/Kiln/releases/latest)
* And please [star it on GitHub!](https://github.com/Kiln-AI/Kiln)
Kiln is 100% free, and the python library is MIT open source.
[Demo creating 9 fine-tunes from scratch in 18 mins (edited for brevity)](https://reddit.com/link/1hb4t5v/video/f44hjszel16e1/player)
| 2024-12-10T15:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hb4t5v/tool_demo_creating_9_fine_tuned_models_from/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb4t5v | false | null | t3_1hb4t5v | /r/LocalLLaMA/comments/1hb4t5v/tool_demo_creating_9_fine_tuned_models_from/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=216&crop=smart&auto=webp&s=0b774d9f72bf345e9e39402886649223ad60e4d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=320&crop=smart&auto=webp&s=6c769aa8ce8a2839b46e12de1fd8743d4171f08d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=640&crop=smart&auto=webp&s=c9f49d760efe4ddd92a3a07a57705e5073b56eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=960&crop=smart&auto=webp&s=8666fab577a806da6551b1f2e0ec70f217f6f2fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=1080&crop=smart&auto=webp&s=b3de3b28dfba5fc1615aa5f1c855312805eda01b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?auto=webp&s=6728f96b3a663740abd86d6d7aff692490474d84', 'width': 1280}, 'variants': {}}]} |
|
EXAONE-3.5-32B-Instruct having formatting issues. | 0 | I have been wanting to change from qwen2.5 32b to another, better model. I have been looking at EXAONE-3.5-32B-Instruct. In the screenshot I asked my sample question. I am using ollama and open webui with all settings set to default except keep alive.
https://preview.redd.it/x6y90s0fn16e1.png?width=1593&format=png&auto=webp&s=8e90617956b4bd15b63b521de80b9676c69dc02a
This is EXAONE-3.5-32B-Instruct, as seen in the upper left corner. This weird formatting bug has been a problem for every response. Is there a way to get around it, or am I doing something wrong?
| 2024-12-10T16:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hb5hcq/exaone3532binstruct_having_formatting_issues/ | JTN02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb5hcq | false | null | t3_1hb5hcq | /r/LocalLLaMA/comments/1hb5hcq/exaone3532binstruct_having_formatting_issues/ | false | false | 0 | null |
|
The Self-hosted AI Starter Kit is an open-source template that quickly sets up a local AI environment. Curated by n8n, it provides essential tools for creating secure, self-hosted AI workflows. | 1 | [removed] | 2024-12-10T16:34:29 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hb5tfy | false | null | t3_1hb5tfy | /r/LocalLLaMA/comments/1hb5tfy/the_selfhosted_ai_starter_kit_is_an_opensource/ | false | false | 1 | {'enabled': True, 'images': [{'id': '3a9FeQyfKvtmrFxUNYc_ERqe2DBOdLRcLEItc22HpeI', 'resolutions': [{'height': 194, 'url': 'https://preview.redd.it/fdrjaehot16e1.png?width=108&crop=smart&auto=webp&s=9e23d71a1e1fd57bec71d14ef7cab527b6d1075c', 'width': 108}, {'height': 389, 'url': 'https://preview.redd.it/fdrjaehot16e1.png?width=216&crop=smart&auto=webp&s=773fad35763b251ed3ff1fa46720dc4eb5a1540a', 'width': 216}, {'height': 577, 'url': 'https://preview.redd.it/fdrjaehot16e1.png?width=320&crop=smart&auto=webp&s=5941dfdaaf182ac8c9f79b33c4ba581b366c2e9a', 'width': 320}, {'height': 1154, 'url': 'https://preview.redd.it/fdrjaehot16e1.png?width=640&crop=smart&auto=webp&s=2c1d0faf13dc369202c1cb82fbef8bb7a531c690', 'width': 640}, {'height': 1731, 'url': 'https://preview.redd.it/fdrjaehot16e1.png?width=960&crop=smart&auto=webp&s=1ee6fc122428a7e83fea908f9cab89bcf7ca3584', 'width': 960}, {'height': 1948, 'url': 'https://preview.redd.it/fdrjaehot16e1.png?width=1080&crop=smart&auto=webp&s=48c7d936bd41aa32118ef1f10b4a46c786228d56', 'width': 1080}], 'source': {'height': 1948, 'url': 'https://preview.redd.it/fdrjaehot16e1.png?auto=webp&s=e912cbb34eebbc549de7ab0f4f7756ede511ba7c', 'width': 1080}, 'variants': {}}]} |
||
Looking for Benchmarks: On-Prem vs Cloud for AI Model Training | 1 | [removed] | 2024-12-10T16:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hb60ej/looking_for_benchmarks_onprem_vs_cloud_for_ai/ | Secu-Thibz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb60ej | false | null | t3_1hb60ej | /r/LocalLLaMA/comments/1hb60ej/looking_for_benchmarks_onprem_vs_cloud_for_ai/ | false | false | self | 1 | null |
EXAONE-3.5 - a LGAI-EXAONE Collection | 0 | 2024-12-10T17:01:15 | https://huggingface.co/collections/LGAI-EXAONE/exaone-35-674d0e1bb3dcd2ab6f39dbb4 | Formal_Drop526 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hb6g34 | false | null | t3_1hb6g34 | /r/LocalLLaMA/comments/1hb6g34/exaone35_a_lgaiexaone_collection/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'AtQ29FnMMc_AtQqBmQ_s18VHTFy4SedTiQdV_m0ddIQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=108&crop=smart&auto=webp&s=11f7fb90ed9e307d2cd0e6a8de0fcafb82f15d88', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=216&crop=smart&auto=webp&s=d3b53fe2faaf29d65fbc3d7d266eaa44587e8022', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=320&crop=smart&auto=webp&s=9ffed69b177fc6b2d87c630530531b329ef5649a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=640&crop=smart&auto=webp&s=1f6d88719d89f09c29d31b5f2f61162df24b22b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=960&crop=smart&auto=webp&s=fe97ccc4419b003ff74a30d6b6355b98b85c47f4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?width=1080&crop=smart&auto=webp&s=cb2bbcfafab85d0d8944ca41d0fd1da57d602f77', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/upALeZgr0AL5Ctru14YEc8EuiiRr0OSgtI2hZ4Sdp8k.jpg?auto=webp&s=4bb632c086a7f7de6a8a240f827f1730b0f7631c', 'width': 1200}, 'variants': {}}]} |
||
Running a multimodal model in a long video | 1 | [removed] | 2024-12-10T17:15:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hb6sbx/running_a_multimodal_model_in_a_long_video/ | Secret-Meringue-4381 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb6sbx | false | null | t3_1hb6sbx | /r/LocalLLaMA/comments/1hb6sbx/running_a_multimodal_model_in_a_long_video/ | false | false | self | 1 | null |
Groq's speech recognition: worse than Whisper when it comes to punctuation? | 0 | Isn't it the case that Groq’s speech recognition is awful at picking up punctuation? From what I've seen, it's worse than OpenAI Whisper, but I'm not sure, maybe I haven't run enough tests. | 2024-12-10T17:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hb6wee/groqs_speech_recognition_worse_than_whisper_when/ | Quiet-Soup1347 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb6wee | false | null | t3_1hb6wee | /r/LocalLLaMA/comments/1hb6wee/groqs_speech_recognition_worse_than_whisper_when/ | false | false | self | 0 | null |
Need an overview of VLMs | 1 | I started to learn and experiment with VLMs. But I'm feeling overwhelmed by the amount of options that are out there and the rapid development of VLMs. Would greatly appreciate if anyone could provide an overview of popular/not so popular VLMs. | 2024-12-10T17:44:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hb7h4m/need_an_overview_of_vlms/ | Physical-Security115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb7h4m | false | null | t3_1hb7h4m | /r/LocalLLaMA/comments/1hb7h4m/need_an_overview_of_vlms/ | false | false | self | 1 | null |
From Unemployment to Lisp: Running GPT-2 on a Teen's Deep Learning Compiler | 46 | A couple of months ago I found myself unemployed, uncertain about what to do next. I wanted to learn more about deep learning, but from a systems perspective. Coming from Andrew Ng's course on supervised learning, I was eager to learn how deep learning frameworks (or deep learning compilers) like PyTorch or Tinygrad work.
I started to poke around Tinygrad, learning from the tutorials I found online, and I found it fascinating because it was an actual compiler: it took conventional Python code, translated it into an abstract syntax tree, parsed that into UOps and ScheduleItems, and finally ran a codegen layer. While the design was interesting, the code was hard to read.
That's when I stumbled across something completely unexpected: a deep learning compiler built on Common Lisp, maintained by a Japanese 18-year-old during his gap year. And we have now accomplished something great: it can run GPT-2!
For now, it just generates C kernels, but in the future we would like to support CUDA codegen as well as many other features, and to serve as a learning tool for anyone who would like to get to work on deep learning compilers in Common Lisp.
This is an open source project and anyone is welcome to contribute!
[https://github.com/hikettei/Caten](https://github.com/hikettei/Caten) | 2024-12-10T17:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hb7rlh/from_unemployment_to_lisp_running_gpt2_on_a_teens/ | yCuboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb7rlh | false | null | t3_1hb7rlh | /r/LocalLLaMA/comments/1hb7rlh/from_unemployment_to_lisp_running_gpt2_on_a_teens/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': '_O3xo8kcV7UOSfz_uE5VeeB2Jx5arD6n3psMB5AYRlo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5Pl-RZzOL70YAWCydc8YLLfb_sJYhhkamwcT6V19RCE.jpg?width=108&crop=smart&auto=webp&s=ab0306693d686fe5cc24d498ed3f664d8f5b57b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5Pl-RZzOL70YAWCydc8YLLfb_sJYhhkamwcT6V19RCE.jpg?width=216&crop=smart&auto=webp&s=f7aecbcb2f3c767b9eedf92857c1770654e98c9b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5Pl-RZzOL70YAWCydc8YLLfb_sJYhhkamwcT6V19RCE.jpg?width=320&crop=smart&auto=webp&s=9ba64802c0509f50c6830aa409bdcf291bf0661b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5Pl-RZzOL70YAWCydc8YLLfb_sJYhhkamwcT6V19RCE.jpg?width=640&crop=smart&auto=webp&s=699d0ddb94e8c84ecd2de087fb5f70ac5262baff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5Pl-RZzOL70YAWCydc8YLLfb_sJYhhkamwcT6V19RCE.jpg?width=960&crop=smart&auto=webp&s=301cc338e00aceb9340f7cf440e78d9b80fa76fe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5Pl-RZzOL70YAWCydc8YLLfb_sJYhhkamwcT6V19RCE.jpg?width=1080&crop=smart&auto=webp&s=3d818aee010944b8afd6d5964398ca30a708d764', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5Pl-RZzOL70YAWCydc8YLLfb_sJYhhkamwcT6V19RCE.jpg?auto=webp&s=2b2c80c6d873add7fc00306b3fa9ff26850bbc8e', 'width': 1200}, 'variants': {}}]} |
How long before we get a local text to video generator with Sora level capabilities? | 9 | Tencent claims their free video generator is as good as Sora. Difficult for me to test since I don't understand Chinese and they don't seem to have an English version of their site.
[https://aivideo.hunyuan.tencent.com/](https://aivideo.hunyuan.tencent.com/)
Is a free version with Sora level capabilities coming in months or a year away? I seriously doubt it is more than a year away. What do you think? | 2024-12-10T18:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hb83r1/how_long_before_we_get_a_local_text_to_video/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb83r1 | false | null | t3_1hb83r1 | /r/LocalLLaMA/comments/1hb83r1/how_long_before_we_get_a_local_text_to_video/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'DunV6OpgGXo5H7pZ99ocBDfzJHCp35ZKpU4cyNKyDFk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/J-bZz7M6roH6_sLLQBt0J2BNPJCSD4_MRxauZpZiZZE.jpg?width=108&crop=smart&auto=webp&s=7aa07b342a966ab66949fd0529ba3353629a0296', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/J-bZz7M6roH6_sLLQBt0J2BNPJCSD4_MRxauZpZiZZE.jpg?auto=webp&s=9c03d3d4285dbfaebe4edd98e8e719d57d2a1267', 'width': 200}, 'variants': {}}]} |
txtai 8.1 released: Docling integration, Embeddings context managers and Postgres/pgvector schema persistence | 64 | 2024-12-10T18:20:03 | davidmezzetti | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hb8byi | false | null | t3_1hb8byi | /r/LocalLLaMA/comments/1hb8byi/txtai_81_released_docling_integration_embeddings/ | false | false | 64 | {'enabled': True, 'images': [{'id': 'ObvcZB06ZY5vCKiBUFEO9FtpYJsPr_WIaHGpEBf_MSc', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/0d0ffsxhc26e1.png?width=108&crop=smart&auto=webp&s=aeca8cc35d2e76ba8b40352f78aff9a10d662512', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/0d0ffsxhc26e1.png?width=216&crop=smart&auto=webp&s=e006414616925db9d1beb8880dc26d129be3e6f8', 'width': 216}, {'height': 158, 'url': 'https://preview.redd.it/0d0ffsxhc26e1.png?width=320&crop=smart&auto=webp&s=0e75b36bf96c162d3a96e2b487eacbd903580794', 'width': 320}, {'height': 316, 'url': 'https://preview.redd.it/0d0ffsxhc26e1.png?width=640&crop=smart&auto=webp&s=d0352e5a9cb0bd16d07415a5f5bbe0a9cba22d99', 'width': 640}, {'height': 474, 'url': 'https://preview.redd.it/0d0ffsxhc26e1.png?width=960&crop=smart&auto=webp&s=9003d3448fceb223d49c73efe71275108bac1a10', 'width': 960}], 'source': {'height': 499, 'url': 'https://preview.redd.it/0d0ffsxhc26e1.png?auto=webp&s=7283964dd74d06b32cda57b4cdea3524cff0ff5b', 'width': 1009}, 'variants': {}}]} |
|||
Training Large Language Models to Reason in a Continuous Latent Space | 94 | [https://arxiv.org/html/2412.06769v1](https://arxiv.org/html/2412.06769v1)
“The paper introduces Coconut (Chain of Continuous Thought), a novel paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs) by enabling them to reason in a continuous latent space rather than through language tokens. The authors argue that the language space may not be optimal for reasoning, as it allocates equal computational resources to each token, regardless of its importance. Coconut modifies the traditional chain-of-thought (CoT) reasoning by using the last hidden state of the LLM as a "continuous thought" and feeding it back into the model as the next input embedding. This approach allows the model to perform reasoning in a more flexible and efficient manner, akin to a breadth-first search (BFS), without being constrained by language syntax.”
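To make the mechanism concrete, here is a rough sketch (mine, not from the paper) of what a Coconut-style rollout could look like with a Hugging Face-style decoder; the function and argument names are illustrative only:

```python
import torch

def continuous_thought_rollout(model, inputs_embeds, n_thoughts):
    """Illustrative sketch of Coconut-style latent reasoning.

    Instead of decoding a token at each reasoning step, the last-layer hidden
    state of the final position is appended back onto the input embeddings and
    fed in again as the next "continuous thought".
    """
    embeds = inputs_embeds  # (batch, seq_len, hidden_size)
    for _ in range(n_thoughts):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        # last layer, last position -> the "continuous thought"
        thought = out.hidden_states[-1][:, -1:, :]
        embeds = torch.cat([embeds, thought], dim=1)
    # hand these back to the model to decode the final answer in language
    return embeds
```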
I find the idea of using an encrypted Chain of Thought (CoT) approach within the latent space quite intriguing. Relying on human language for thinking is a limitation; models need to develop their own unique ways of thinking. What do you think about this? | 2024-12-10T18:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hb8kkg/training_large_language_models_to_reason_in_a/ | Head_Beautiful_6603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb8kkg | false | null | t3_1hb8kkg | /r/LocalLLaMA/comments/1hb8kkg/training_large_language_models_to_reason_in_a/ | false | false | self | 94 | null |
open source PDF AI tool | 0 | Is there an open-source PDF AI tool where I can read PDFs like research papers, with a chat bar on the side that has the context of the page I am currently reading, so I can ask it questions about that page (local models would be better)?
If not, then I'd have to build my own :/ | 2024-12-10T18:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hb8sf2/open_source_pdf_ai_tool/ | Rakshith789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb8sf2 | false | null | t3_1hb8sf2 | /r/LocalLLaMA/comments/1hb8sf2/open_source_pdf_ai_tool/ | false | false | self | 0 | null
Introducing canvas. A new way of working with ChatGPT to write and code | 0 | 2024-12-10T18:57:39 | https://openai.com/index/introducing-canvas/ | badgerfish2021 | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1hb98ek | false | null | t3_1hb98ek | /r/LocalLLaMA/comments/1hb98ek/introducing_canvas_a_new_way_of_working_with/ | false | false | default | 0 | null |
|
llama3.3 Problems!!! | 1 | [removed] | 2024-12-10T19:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hb9dbh/llama33_problems/ | RelativeGrass6791 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb9dbh | false | null | t3_1hb9dbh | /r/LocalLLaMA/comments/1hb9dbh/llama33_problems/ | false | false | self | 1 | null |
Need help with llama3.3 | 1 | [removed] | 2024-12-10T19:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hb9f2l/need_help_with_llama33/ | RelativeGrass6791 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb9f2l | false | null | t3_1hb9f2l | /r/LocalLLaMA/comments/1hb9f2l/need_help_with_llama33/ | false | false | self | 1 | null |
Qwen/QwQ 32B answers the seminal question of our time correctly. EXAONE 32B and Llama 3.3 70B both fail. | 34 | For me, Qwen/QwQ is still the model of choice.
EXAONE 3.5 32B
--------------
> is there a 3090 with 48gb
Yes, there is a variant of the NVIDIA RTX 3090 that comes with 48GB of GDDR6X memory. This model was introduced later in the lifecycle of the RTX 3090 series, primarily aimed at professional and high-end gaming markets where large memory capacity is beneficial for handling extremely detailed scenes or large datasets.
However, it's important to note that availability can vary by region and retailer, and it might not be as commonly found as the original 24GB version. Always check with official NVIDIA sources or reputable hardware retailers for the most current information on availability and specifications.
Qwen 2.5 32B
------------
> is there a 3090 with 48gb
No, there isn't a NVIDIA RTX 3090 model with 48GB of VRAM. The NVIDIA RTX 3090 comes with 24GB of GDDR6X memory. If you're looking for a graphics card with 48GB of VRAM, you might want to consider the NVIDIA A6000, which is a professional graphics card designed for workstations and offers 48GB of VRAM. However, it's important to note that the A6000 is not intended for gaming and is more suited for professional tasks like 3D rendering, video editing, and AI workloads.
Llama 3.3 70B
-------------
> is there a 3090 with 48gb
You're likely thinking of the NVIDIA GeForce RTX 3090, a high-end graphics card from NVIDIA.
Initially, the RTX 3090 was launched with 24GB of GDDR6X memory. However, in 2022, NVIDIA announced a new variant of the RTX 3090 with 24GB of GDDR6X memory, and another variant with a whopping 48GB of GDDR6X memory, but this one is specifically designed for datacenter and professional markets, not for gaming.
This 48GB variant is called the NVIDIA RTX 3090 with 48GB of memory, but it's not exactly the same as the standard RTX 3090. It's more of a customized version for specific use cases like AI, deep learning, and datacenter applications.
Some key differences between the standard RTX 3090 and the 48GB version:
1. **Memory**: The most obvious difference is the 48GB of GDDR6X memory, which is twice the amount of the standard RTX 3090.
2. **Memory bus**: The 48GB version has a 384-bit memory bus, while the standard RTX 3090 has a 384-bit memory bus as well.
3. **Clock speeds**: The 48GB version has slightly lower clock speeds compared to the standard RTX 3090.
4. **Power consumption**: The 48GB version has a higher TDP (350W) compared to the standard RTX 3090 (320W).
5. **Target market**: As mentioned earlier, the 48GB version is designed for datacenter and professional applications, not for gaming.
So, to summarize: yes, there is a variant of the NVIDIA RTX 3090 with 48GB of memory, but it's not a standard consumer-grade graphics card and is designed for specific use cases.
QwQ 32B
-------
> is there a 3090 with 48gb
As of my knowledge cutoff in March 2023, the NVIDIA GeForce RTX 3090 comes with 24GB of GDDR6X memory. I'm not aware of any standard version of the RTX 3090 that has 48GB of memory. However, NVIDIA does offer other models with higher memory capacities, such as the NVIDIA A6000, which is a professional GPU with 48GB of GDDR6 memory.
It's possible that there might be custom variants or overclocked versions from different manufacturers that offer more memory, but these would not be part of the standard RTX 3090 specifications. I would recommend checking the latest product listings or contacting manufacturers directly for the most accurate and up-to-date information.
Additionally, it's essential to consider the intended use case when selecting a GPU. While more memory can be beneficial for certain tasks like deep learning, video editing, or working with large datasets, it may not always be necessary for gaming or general computing tasks. | 2024-12-10T19:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hb9i11/qwenqwq_32b_answers_the_seminal_question_of_our/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb9i11 | false | null | t3_1hb9i11 | /r/LocalLLaMA/comments/1hb9i11/qwenqwq_32b_answers_the_seminal_question_of_our/ | false | false | self | 34 | null |
Does anyone have a good local integration with an IDE? | 0 | Hello.
I searched first and found the following but it was an old post so I wanted to see if anyone has a more current solution.
[https://docs.continue.dev/autocomplete/model-setup](https://docs.continue.dev/autocomplete/model-setup)
I was thinking about trying this out, maybe with Qwen2.5, but I would need a better computer first.
I have used Cursor and Windsurf, but I really do not want to pay per month, so I was considering whether a local setup would be helpful.
I just want to know how it would compare, if anyone has done a setup like this.
I am also considering Github Copilot.
I also dabbled in setting up RAG with a codebase in order to extend the context window, but it did not go well. With a local model I feel it would be easier to test and implement something like this.
As for what I would buy, I would probably get a laptop with at least a 4070 and 32 GB of RAM if PC, or an M4 with as much RAM as I can afford if Mac. I know I would not be running models as good as what I would be paying monthly for, but I would mostly be using it to speed up development and handle more menial work. Most likely less than $2000.
I also have other projects where a local model would be helpful to speed up my learning so I have been thinking about getting a new computer for a while.
Any guidance or solutions would be helpful.
Thank you. | 2024-12-10T19:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hb9ldl/does_anyone_have_a_good_local_integration_with_an/ | KonradFreeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb9ldl | false | null | t3_1hb9ldl | /r/LocalLLaMA/comments/1hb9ldl/does_anyone_have_a_good_local_integration_with_an/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2G2C3fOTEF-StqEk7Xh_7FP1eG31VvpVioJGYDO648s', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/Lk1ymVRepZyskdHW7nLECntizwqedPHpypvpMDALUm4.jpg?width=108&crop=smart&auto=webp&s=b7ccbce95e63bab343ccb708c530fb5e17141445', 'width': 108}, {'height': 97, 'url': 'https://external-preview.redd.it/Lk1ymVRepZyskdHW7nLECntizwqedPHpypvpMDALUm4.jpg?width=216&crop=smart&auto=webp&s=b75deb15125439f767674222844b158c827acd22', 'width': 216}, {'height': 144, 'url': 'https://external-preview.redd.it/Lk1ymVRepZyskdHW7nLECntizwqedPHpypvpMDALUm4.jpg?width=320&crop=smart&auto=webp&s=b8cc552add968731a7d83a8dbfe7520b0d7f3df3', 'width': 320}, {'height': 289, 'url': 'https://external-preview.redd.it/Lk1ymVRepZyskdHW7nLECntizwqedPHpypvpMDALUm4.jpg?width=640&crop=smart&auto=webp&s=89dc3dec13d237a77caa6aa56055f612d7bb4c42', 'width': 640}], 'source': {'height': 425, 'url': 'https://external-preview.redd.it/Lk1ymVRepZyskdHW7nLECntizwqedPHpypvpMDALUm4.jpg?auto=webp&s=0cd67d512f4b9d897fbfb1058a3e9c88e1e44e87', 'width': 939}, 'variants': {}}]} |
Talk Arena: Interactive Evaluation Platform for Large Audio Models | 1 | [removed] | 2024-12-10T19:22:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hb9uk7/talk_arena_interactive_evaluation_platform_for/ | ScallionMobile5991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb9uk7 | false | null | t3_1hb9uk7 | /r/LocalLLaMA/comments/1hb9uk7/talk_arena_interactive_evaluation_platform_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'FK0WZ-HBPXfM3nPLQ-QwobFc8RY97tuqMsLpq7bWHqE', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/qPZDKamQZn1nRnNRsrVFZFJ9PEgKKaQA3McTuBO1d54.jpg?width=108&crop=smart&auto=webp&s=c30138e1575326270206aa6df212b203687ce20f', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/qPZDKamQZn1nRnNRsrVFZFJ9PEgKKaQA3McTuBO1d54.jpg?width=216&crop=smart&auto=webp&s=072a47eba8f1776a64705ea83e049eaafdc0b2d0', 'width': 216}, {'height': 225, 'url': 'https://external-preview.redd.it/qPZDKamQZn1nRnNRsrVFZFJ9PEgKKaQA3McTuBO1d54.jpg?width=320&crop=smart&auto=webp&s=453a98e98b0c0c5f5f6e470d32307b3612bfd3bb', 'width': 320}, {'height': 450, 'url': 'https://external-preview.redd.it/qPZDKamQZn1nRnNRsrVFZFJ9PEgKKaQA3McTuBO1d54.jpg?width=640&crop=smart&auto=webp&s=bdde0b1ab612f4847411d58332049f04a784b993', 'width': 640}, {'height': 676, 'url': 'https://external-preview.redd.it/qPZDKamQZn1nRnNRsrVFZFJ9PEgKKaQA3McTuBO1d54.jpg?width=960&crop=smart&auto=webp&s=bd09cadb415ae9792522893d80e79f1439b6d9ee', 'width': 960}, {'height': 760, 'url': 'https://external-preview.redd.it/qPZDKamQZn1nRnNRsrVFZFJ9PEgKKaQA3McTuBO1d54.jpg?width=1080&crop=smart&auto=webp&s=84b28111e6fb7c288d590f1b783fa27d07cff2f8', 'width': 1080}], 'source': {'height': 1078, 'url': 'https://external-preview.redd.it/qPZDKamQZn1nRnNRsrVFZFJ9PEgKKaQA3McTuBO1d54.jpg?auto=webp&s=617c971939a030be5cb1b85a8a20dd64d887c9e7', 'width': 1530}, 'variants': {}}]} |
|
Update on Huggingface storage limits ("unlimited" public, limited private) | 107 | https://huggingface.co/posts/julien-c/388331843225875
Seems reasonable to me: besides abuse, public is unlimited/best-effort (presumably similar to before?), whereas private is actually going to be limited.
100GB is likely plenty for private; as I commented on the post, if it's not worth making public it's probably not worth keeping around on their servers (correct me if I'm wrong though, I'm a very atypical user).
Will have to see how it plays out, but overall seems like a logical move to keep people happy and with unlimited storage while avoiding bad actors treating HF like their personal cloud server | 2024-12-10T19:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hb9v8p/update_on_huggingface_storage_limits_unlimited/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb9v8p | false | null | t3_1hb9v8p | /r/LocalLLaMA/comments/1hb9v8p/update_on_huggingface_storage_limits_unlimited/ | false | false | self | 107 | {'enabled': False, 'images': [{'id': 'MnEnLOdNUn1vVKuKqmxsvzdf_30G06a2AF4PmiLM3wU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6_FeoZ_EF7XFEAavw17q5xGU687KONnKXmIi7eGUrS8.jpg?width=108&crop=smart&auto=webp&s=a5793285d8f3e3c5c16cb524c671b04a89705ba5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6_FeoZ_EF7XFEAavw17q5xGU687KONnKXmIi7eGUrS8.jpg?width=216&crop=smart&auto=webp&s=943f329af4204a37a06f7173c91a7bb5d88d3a5b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6_FeoZ_EF7XFEAavw17q5xGU687KONnKXmIi7eGUrS8.jpg?width=320&crop=smart&auto=webp&s=6f6afd48af5d0058a537252ccf90f4d8e91fbb55', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6_FeoZ_EF7XFEAavw17q5xGU687KONnKXmIi7eGUrS8.jpg?width=640&crop=smart&auto=webp&s=a79e0e330988823a5b982df7656bdfebee0bd379', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6_FeoZ_EF7XFEAavw17q5xGU687KONnKXmIi7eGUrS8.jpg?width=960&crop=smart&auto=webp&s=04d433028db3242b03eda844f663446113734948', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6_FeoZ_EF7XFEAavw17q5xGU687KONnKXmIi7eGUrS8.jpg?width=1080&crop=smart&auto=webp&s=5d695f9445aa0d6dc19afc6c10c788b4a3f9a601', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6_FeoZ_EF7XFEAavw17q5xGU687KONnKXmIi7eGUrS8.jpg?auto=webp&s=227c26c9a5006aea6562e5f0a8d318068aacb813', 'width': 1200}, 'variants': {}}]} |
dnotitia/Llama-DNA-1.0-8B-Instruct, state-of-the-art (SOTA) bilingual language model | 8 | [https://huggingface.co/dnotitia/Llama-DNA-1.0-8B-Instruct](https://huggingface.co/dnotitia/Llama-DNA-1.0-8B-Instruct)
**DNA 1.0 8B Instruct** is a state-of-the-art (**SOTA**) bilingual language model based on Llama architecture, specifically optimized for Korean language understanding and generation, while also maintaining strong English capabilities. | 2024-12-10T19:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hb9zfb/dnotitiallamadna108binstruct_stateoftheart_sota/ | likejazz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hb9zfb | false | null | t3_1hb9zfb | /r/LocalLLaMA/comments/1hb9zfb/dnotitiallamadna108binstruct_stateoftheart_sota/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'HVh547xKLdoBTXqqvflvdWzCuQsDb-1ECUO_V-gf_WE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CXJbjOh2l4hs1ACgVuV7scOSSRMbAqJiynMuqwziuTg.jpg?width=108&crop=smart&auto=webp&s=81aca34d8a0af7e80bf31be762f425e6c164ebd5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CXJbjOh2l4hs1ACgVuV7scOSSRMbAqJiynMuqwziuTg.jpg?width=216&crop=smart&auto=webp&s=22fc884fe81555b85f95cc2ddaff0925c695c89f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CXJbjOh2l4hs1ACgVuV7scOSSRMbAqJiynMuqwziuTg.jpg?width=320&crop=smart&auto=webp&s=dfc2b7fd00659dbbe4459c86cf9dafc702bc1818', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CXJbjOh2l4hs1ACgVuV7scOSSRMbAqJiynMuqwziuTg.jpg?width=640&crop=smart&auto=webp&s=b8d9e03b4c272263fc925e24fbad2abb6e2afe91', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CXJbjOh2l4hs1ACgVuV7scOSSRMbAqJiynMuqwziuTg.jpg?width=960&crop=smart&auto=webp&s=6866652624c32b7fd31f4a45f6abdf537c8ae20c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CXJbjOh2l4hs1ACgVuV7scOSSRMbAqJiynMuqwziuTg.jpg?width=1080&crop=smart&auto=webp&s=37f463ff13784758920b893df8c93429ba853894', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CXJbjOh2l4hs1ACgVuV7scOSSRMbAqJiynMuqwziuTg.jpg?auto=webp&s=8790074cfef0ffa54882cbda2c0716ba3a99f318', 'width': 1200}, 'variants': {}}]} |
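Assuming DNA 1.0 8B Instruct loads like any other Llama-architecture checkpoint on the Hub (this is a quick sketch, not an official example, and the prompt is made up), trying it out should look something like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dnotitia/Llama-DNA-1.0-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Bilingual smoke test: ask in Korean, request an English answer too
messages = [{"role": "user", "content": "한국의 수도는 어디인가요? Answer in Korean and English."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```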
Are there any LLMs (free or paid) that are specifically designed to be suitable for elementary or middle school students? | 3 | I'm aware of PinwheelGPT, but it's only available as part of a Pinwheel subscription.
Ideally, I'd like the LLM to have guardrails that prevent it from answering questions about anything other than G- or PG-rated topics. | 2024-12-10T19:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hbahnx/are_there_any_llms_free_or_paid_that_are/ | jawn317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbahnx | false | null | t3_1hbahnx | /r/LocalLLaMA/comments/1hbahnx/are_there_any_llms_free_or_paid_that_are/ | false | false | self | 3 | null |
Llama 3.3 (70B) Finetuning - now with 90K context length and fits on <41GB VRAM. | 797 | Hey guys! You can now fine-tune [Llama 3.3](https://unsloth.ai/blog/llama3-3) (70B) up to 90,000 context lengths with [Unsloth](https://github.com/unslothai/unsloth/), which is 13x longer than what Hugging Face + FA2 supports at 6,900 on a 80GB GPU.
1. The new ultra long context support is 1.85x longer than previous versions of Unsloth. It utilizes our [gradient checkpointing](https://unsloth.ai/blog/long-context) and we worked with Apple to incorporate their new Cut Cross Entropy (CCE) algorithm.
2. For [Llama 3.1](https://unsloth.ai/blog/llama3-1) (8B), Unsloth can now do a whopping **342,000 context length**, which exceeds the 128K context lengths Llama 3.1 natively supported. HF + FA2 can only do 28,000 on a 80GB GPU, so Unsloth supports 12x context lengths.
3. You can try the new Llama 3.1 (8B) ultra long context support with our [Google Colab notebook](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing).
4. HF+FA2 goes out of memory for 8GB GPUs, whilst Unsloth supports up to 2,900 context lengths, up from 1,500.
5. **70B models can now fit on 41GB of VRAM** - nearly 40GB which is amazing!
6. In case you didn't know, we uploaded Llama 3.3 versions including GGUFs, 4bit, 16bit versions in our collection on Hugging Face.
7. You can read our in depth blog post about the new changes here: [https://unsloth.ai/blog/llama3-3](https://unsloth.ai/blog/llama3-3)
https://preview.redd.it/sgfqictcq26e1.png?width=2500&format=png&auto=webp&s=a78ffb6409ec8adaf1184704ee2b27831b7ae410
Table for all Llama 3.3 versions:
|Original HF weights|4bit BnB quants|GGUF quants (16,8,6,5,4,3,2 bits)|
|:-|:-|:-|
|[Llama 3.3 (70B) Instruct](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct)|[Llama 3.3 (70B) Instruct 4bit](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-bnb-4bit)|[Llama 3.3 (70B) Instruct GGUF](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF)|
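If you want to kick the tires, this is roughly what a minimal QLoRA setup looks like (illustrative values; you would still plug in your own dataset and a TRL `SFTTrainer` on top):

```python
from unsloth import FastLanguageModel

# Load the 4-bit Llama 3.3 70B checkpoint with the new long-context support
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
    max_seq_length=90_000,   # new ceiling on an 80GB card
    load_in_4bit=True,
)

# Attach LoRA adapters with Unsloth's gradient checkpointing enabled
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)
```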
Let me know if you have any questions and hope you all have a lovely week ahead! :)
| 2024-12-10T19:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hbaioc/llama_33_70b_finetuning_now_with_90k_context/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbaioc | false | null | t3_1hbaioc | /r/LocalLLaMA/comments/1hbaioc/llama_33_70b_finetuning_now_with_90k_context/ | false | false | 797 | {'enabled': False, 'images': [{'id': 'BWchCFHKW7NoxhwZxn1QQl-8z9TfcQCFkxaY1vQMKXM', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/vZqH0gutfznrv8NBDgrdRLz9eBbC3r-fxMr3Nv3AAj0.jpg?width=108&crop=smart&auto=webp&s=7d5eae9f180d7bd6684bbb0eea124555cdc264b7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/vZqH0gutfznrv8NBDgrdRLz9eBbC3r-fxMr3Nv3AAj0.jpg?width=216&crop=smart&auto=webp&s=43c9b5f1df89fbe2e4482ea6e157a773e33216d2', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/vZqH0gutfznrv8NBDgrdRLz9eBbC3r-fxMr3Nv3AAj0.jpg?width=320&crop=smart&auto=webp&s=d8e711cfd5135a3a006667e4d5d8ac7e6ca34d3a', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/vZqH0gutfznrv8NBDgrdRLz9eBbC3r-fxMr3Nv3AAj0.jpg?width=640&crop=smart&auto=webp&s=68a33cbdfaa47ebd498fa54f1f30656fdf618a79', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/vZqH0gutfznrv8NBDgrdRLz9eBbC3r-fxMr3Nv3AAj0.jpg?width=960&crop=smart&auto=webp&s=4c81a2e9ca2f0b8cb309b88347032e184f9cdf8f', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/vZqH0gutfznrv8NBDgrdRLz9eBbC3r-fxMr3Nv3AAj0.jpg?width=1080&crop=smart&auto=webp&s=3553132eb14d8408e4721d06fa98e678af047654', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/vZqH0gutfznrv8NBDgrdRLz9eBbC3r-fxMr3Nv3AAj0.jpg?auto=webp&s=ed1cbc14793dff8458f7be675abc21898d2301de', 'width': 2560}, 'variants': {}}]} |
|
80% Less Memory DPO and ORPO in Liger Kernels | 1 | 2024-12-10T19:54:21 | https://x.com/hsu_byron/status/1866538145296318538 | Icy-World-8359 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hbal37 | false | null | t3_1hbal37 | /r/LocalLLaMA/comments/1hbal37/80_less_memory_dpo_and_orpo_in_liger_kernels/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'r-ZBP6JSWL_QAsorClu4F4MAL-ENCax1N5nprPg1Cuo', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=108&crop=smart&auto=webp&s=c7ebdb02438edb8d8d4667bacaf2aae5333e0b56', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=216&crop=smart&auto=webp&s=f57b4f769d8947a9aab1c5c948cc222ceae89111', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=320&crop=smart&auto=webp&s=fadf58d6d9ca9ba17927e4bbde754426442470a3', 'width': 320}, {'height': 395, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=640&crop=smart&auto=webp&s=7384843f791d9fce07585a983ead822ce5497c26', 'width': 640}], 'source': {'height': 560, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?auto=webp&s=7c9d15b5f42a39344d40e1817d42532ace3087c9', 'width': 906}, 'variants': {}}]} |
||
Is qwen better than llama3.3 for processing documents? | 7 | I have a project that I am working on that uses llama3.3 to process documents. Is qwen better for this scenario? My use case is finding a few things in the documents and answering questions based on the findings. So far llama3 and llama3.1 are struggling. 3.2 11b works OK, and 3.3 is definitely better, but still not close to OpenAI or Claude. Regardless, running the 3.3 version locally works for my use case, but it's slow, so I'm wondering: is qwen better at reading documents, given it's smaller than 3.3 70b? | 2024-12-10T20:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hbb00v/is_qwen_better_than_llama33_for_processing/ | TechEverythingElse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbb00v | false | null | t3_1hbb00v | /r/LocalLLaMA/comments/1hbb00v/is_qwen_better_than_llama33_for_processing/ | false | false | self | 7 | null
New interface for the Open LLM Leaderboard! Should be way more usable :) | 93 | Hi Reddit! Open LLM Leaderboard co-maintainer here!
We've decided to upgrade the interface for our leaderboard! (Yep, it was clearly long overdue)
New features:
* it's an order of magnitude faster to load, as well as an order of magnitude smaller in memory (we know some people access the leaderboard behind proxies, which should be easier now)
* we added loooads of features to make it more usable:
    * you can pin models,
    * display dynamic rankings based on your filters,
    * compute average scores based on only the evals you are interested in,
    * search with regex or on several fields
    * as well as share your filters configurations with others
    * and more!
* we improved the docs (notably adding tooltips)
* there are quick views for device-size categories: edge device, consumer hardware, gpu rich people, ...
And I also think it's prettier, with bar charts, explicit rankings, etc :)
Tell us what you think, and play with it [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)!
https://preview.redd.it/itifbxtvx26e1.png?width=1873&format=png&auto=webp&s=dd5c1187ad939e81cda35aae83a02286d6c673d6
| 2024-12-10T20:21:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hbb85n/new_interface_for_the_open_llm_leaderboard_should/ | clefourrier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbb85n | false | null | t3_1hbb85n | /r/LocalLLaMA/comments/1hbb85n/new_interface_for_the_open_llm_leaderboard_should/ | false | false | 93 | null |
|
chatgpt pro vs 5 * plus | 0 | Pro is $200 and Plus is $20, right?
Hear me out: I have close relatives who do not use ChatGPT at all.
I can buy them ChatGPT Plus, and the only requirement would be 'do not use o1 because it's mine'.
They can use GPT-4o, voice mode, etc. - whatever.
Worst case scenario, they stop using ChatGPT after a few days.
Still, I think 3 out of 4 would use it.
I know ChatGPT Pro comes with a more powerful o1 version, but I don't think I need that much power.
I exhausted all my o1 quota today and have to wait a few days until it's back.
I'm slightly tempted to pay the $200, but it's too much.
I think I can rotate 4~5 accounts and that would be enough for my usage.
Okay, so now let's compare the two options:
# Option A:
subscribe to the $200 Pro plan
# Option B:
be a good cousin and rotate several Plus subscriptions
(I'd also need to make several Google accounts and share history with the relatives. But I'm only using o1 for coding, so no problem for me.)
The only downside would be handling this account sharing while respecting each other's privacy.
(I don't care if they see my chat history, but I think they would mind if I saw theirs.)
Does anyone have experience with multiple subscriptions or account sharing?
| 2024-12-10T20:25:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hbbbrm/chatgpt_pro_vs_5_plus/ | One-Thanks-9740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbbbrm | false | null | t3_1hbbbrm | /r/LocalLLaMA/comments/1hbbbrm/chatgpt_pro_vs_5_plus/ | false | false | self | 0 | null |
Is an M2 pro with 32 GB of ram enough for most models | 3 | I want to buy a new laptop to do software development and experiment with some local LLMs too. Is 32 GB enough RAM, or should I go for an M1 with 64 GB? | 2024-12-10T20:37:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hbblyq/is_an_m2_pro_with_32_gb_of_ram_enough_for_most/ | brayo1st | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbblyq | false | null | t3_1hbblyq | /r/LocalLLaMA/comments/1hbblyq/is_an_m2_pro_with_32_gb_of_ram_enough_for_most/ | false | false | self | 3 | null
How to stuff local llama with large input | 1 | [removed] | 2024-12-10T21:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hbc5sh/how_to_stuff_local_llama_with_large_input/ | wildlifetravelphoto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbc5sh | false | null | t3_1hbc5sh | /r/LocalLLaMA/comments/1hbc5sh/how_to_stuff_local_llama_with_large_input/ | false | false | self | 1 | null |
Hugging Face allows users unlimited storage, but only if it's used for open-source | 1 | Julien Chaumond (Hugging Face CTO) on the Hub:
"After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub
TL;DR:
- public storage is free, and (unless blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible

- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)
docs: [https://huggingface.co/docs/hub/storage-limits](https://huggingface.co/docs/hub/storage-limits)
We optimize our infrastructure continuously to scale our storage for the coming years of growth in Machine learning, to the benefit of the community 🔥" | 2024-12-10T21:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hbch02/hugging_face_allows_users_unlimited_storage_but/ | chef1957 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbch02 | false | null | t3_1hbch02 | /r/LocalLLaMA/comments/1hbch02/hugging_face_allows_users_unlimited_storage_but/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 't7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ziVLmA64LmCXE2ppPY1nuzgq8IExinsbwOdP9sgziCs.jpg?width=108&crop=smart&auto=webp&s=775c91bc2b5028203021c3f4c6cb18a702feca53', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ziVLmA64LmCXE2ppPY1nuzgq8IExinsbwOdP9sgziCs.jpg?width=216&crop=smart&auto=webp&s=a592dc4bbb334546bc9565c3cc56aef1f811e4eb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ziVLmA64LmCXE2ppPY1nuzgq8IExinsbwOdP9sgziCs.jpg?width=320&crop=smart&auto=webp&s=2177e4b667491f6a3fb29da11ff77905b560fdc0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ziVLmA64LmCXE2ppPY1nuzgq8IExinsbwOdP9sgziCs.jpg?width=640&crop=smart&auto=webp&s=59963acf6cfe995b74d021bfc7b37387e922cfe9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ziVLmA64LmCXE2ppPY1nuzgq8IExinsbwOdP9sgziCs.jpg?width=960&crop=smart&auto=webp&s=f3cfd0df7b1277c11f0f54b913c029478484d28c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ziVLmA64LmCXE2ppPY1nuzgq8IExinsbwOdP9sgziCs.jpg?width=1080&crop=smart&auto=webp&s=874315c634fe6eac566181b4e5981076370cade1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ziVLmA64LmCXE2ppPY1nuzgq8IExinsbwOdP9sgziCs.jpg?auto=webp&s=9d052fadcff8498c163589a0e3d470f8e2497d55', 'width': 1200}, 'variants': {}}]} |
Why opensource ai is important .... | 314 | 2024-12-10T21:15:33 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hbciui | false | null | t3_1hbciui | /r/LocalLLaMA/comments/1hbciui/why_opensource_ai_is_important/ | false | false | 314 | {'enabled': True, 'images': [{'id': '8RBZqGqjd2V80xyzhtSs729Hx0r1KNI7NGBQ5bmgBfw', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/h7su5nns736e1.png?width=108&crop=smart&auto=webp&s=2bfbf5a1a334dc5f8e7ffd08aa3bfebb02d763f7', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/h7su5nns736e1.png?width=216&crop=smart&auto=webp&s=462214b5c421add382b2f246992c613b7fc7c2ab', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/h7su5nns736e1.png?width=320&crop=smart&auto=webp&s=89d3e37b9d4c9c364f07f61c676326046347beff', 'width': 320}, {'height': 321, 'url': 'https://preview.redd.it/h7su5nns736e1.png?width=640&crop=smart&auto=webp&s=22e5c2bb2e0d6508a289f1e92f49057166051a36', 'width': 640}, {'height': 482, 'url': 'https://preview.redd.it/h7su5nns736e1.png?width=960&crop=smart&auto=webp&s=31b8b965442d9b3123ce1f6935f0b7044837113d', 'width': 960}, {'height': 543, 'url': 'https://preview.redd.it/h7su5nns736e1.png?width=1080&crop=smart&auto=webp&s=002bae9f550f47ffb3fb693e1bae9364c5316819', 'width': 1080}], 'source': {'height': 942, 'url': 'https://preview.redd.it/h7su5nns736e1.png?auto=webp&s=a9b4736b9c60d77d481942a8df0a308d68639aa8', 'width': 1873}, 'variants': {}}]} |
|||
Requesting Beta testers:LLM+SQL powered tool for retail Investors for free with unlimited questions (Seemingly better than Bloomberg) | 0 | Launching an intuitive natural-language stock screener that executes SQL queries on a database of 1000+ US-traded companies, all in plain English. It delivers real-time insights, combining charts, tables, and in-depth company summaries to find that hidden opportunity.
Key Features:
1. **Explainable SQL Queries:** No black-box magic! You can see and edit the SQL.
2. **Free & Unlimited Q&A:** No sign-ups, no paywalls—just seamless analysis.
3. **350+ Metrics:** Mix and match any combination to identify stocks based on your strategy.
4. **Real-time Insights:** Consolidated info from SEC filings, analyst reports, and the latest news.
5. **Intuitive Output:** Tables, charts, and a detailed company insights page.
Sample Queries You Can Run:
* Find companies where the **analyst price target is 50%+ higher** than the current stock price.
* Identify undervalued e-commerce companies based on **DCF models**.
* Rank companies by **revenue growth and profitability trends** over the last year.
Seeking early feedback; link in comments. Please also let me know which features you would like to see in the next release.
<<Best viewed on Laptop+ Screen size >> | 2024-12-10T21:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hbcmx7/requesting_beta_testersllmsql_powered_tool_for/ | Medical-Mistake3128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbcmx7 | false | null | t3_1hbcmx7 | /r/LocalLLaMA/comments/1hbcmx7/requesting_beta_testersllmsql_powered_tool_for/ | false | false | self | 0 | null |
Llama 3.1 8b benchmark on 9 different RTX GPUs ($0.228 per Million output tokens) | 35 | In this benchmark, we evaluated the throughput and cost-efficiency of running Llama 3.1 8B with Ollama across 9 different RTX GPUs on [SaladCloud](https://salad.com/?utm_campaign=brand&utm_medium=cloud&utm_source=reddit).
# Benchmark Design
The benchmark was conducted using [k6](https://k6.io/), a load testing tool from Grafana Labs, to simulate a gradually increasing load from 10 to 35 virtual users over approximately 1 hour. Each virtual user asked the model to write a recipe for a salad with a maximum of 1024 output tokens. [See the exact configuration in GitHub.](https://github.com/SaladTechnologies/salad-recipes/blob/master/src/ollama-llama3.1/benchmark/not-streaming.js) The test environment consisted of multiple container groups with up to 8-10 replicas (most commonly running 9 replicas). We deployed the “[Ollama Llama 3.1](https://docs.salad.com/products/recipes/ollama-llama3.1)” recipe on [SaladCloud](https://salad.com/), using the default configuration, but setting priority to “batch” and requesting 10 replicas.
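For illustration only (the actual k6 script is linked above), the same kind of load can be sketched in plain Python against a standard Ollama endpoint. The endpoint URL and model tag below are assumptions, and k6's ramping stages are collapsed into a fixed pool of 35 workers:

```python
# Rough Python analogue of the k6 test: concurrent "virtual users" each asking
# Ollama for a salad recipe, capped at 1024 output tokens.
import requests
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:11434/api/generate"  # assumed Ollama endpoint
PAYLOAD = {
    "model": "llama3.1:8b",                        # assumed model tag
    "prompt": "Write a recipe for a salad.",
    "stream": False,
    "options": {"num_predict": 1024},              # limit output to 1024 tokens
}

def one_request(_):
    body = requests.post(ENDPOINT, json=PAYLOAD, timeout=300).json()
    # Ollama reports eval_count (output tokens) and eval_duration (nanoseconds).
    return body["eval_count"] / (body["eval_duration"] / 1e9)

with ThreadPoolExecutor(max_workers=35) as pool:   # peak of 35 virtual users
    rates = list(pool.map(one_request, range(35)))

print(f"mean single-request throughput: {sum(rates) / len(rates):.1f} tok/s")
```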
# Results
* The RTX 3090 was a standout performer, achieving the best cost-per-token, and remarkably high throughput at peak load.
* There was very little difference in performance between the RTX 3090 and RTX 4090.
* RTX 3060 performed reasonably well while being slightly less cost-effective than 3090.
* 35 virtual users gave the best throughput.
Results below show the best throughput configuration (35 virtual users).
**RTX 3060 (12 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.10/hour/node)**
* Average Response Time: 14.70s
* Average Throughput: 983.29 tok/s
* Cost per 1M Output Tokens: $0.2825
**RTX 3080 (10 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.13/hour/node)**
* Average Response Time: 11.32s
* Average Throughput: 1281.55 tok/s
* Cost per 1M Output Tokens: $0.2818
**RTX 3080 TI (12 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.14/hour/node)**
* Average Response Time: 9.74s
* Average Throughput: 1483.34 tok/s
* Cost per 1M Output Tokens: $0.2622
**RTX 3090 (24 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.14/hour/node)**
* Average Response Time: 8.50s
* Average Throughput: 1705.92 tok/s
* Cost per 1M Output Tokens: $0.2280
**RTX 3090 TI (24 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.17/hour/node)**
* Average Response Time: 7.73s
* Average Throughput: 1860.15 tok/s
* Cost per 1M Output Tokens: $0.2539
**RTX 4060 TI (16 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.14/hour/node)**
* Average Response Time: 13.23s
* Average Throughput: 1007.33 tok/s
* Cost per 1M Output Tokens: $0.3861
**RTX 4070 (12 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.14/hour/node)**
* Average Response Time: 9.41s
* Average Throughput: 1539.99 tok/s
* Cost per 1M Output Tokens: $0.2525
**RTX 4070 TI SUPER (16 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.17/hour/node)**
* Average Response Time: 7.73s
* Average Throughput: 1874.91 tok/s
* Cost per 1M Output Tokens: $0.2519
**RTX 4090 (24 GB) | 8 vCPUs | 8 GB Memory (Cost: $0.22/hour/node)**
* Average Response Time: 8.12s
* Average Throughput: 1773.30 tok/s
* Cost per 1M Output Tokens: $0.3446
# Conclusions
Llama 3.1 8B with Ollama shows solid performance across a wide range of devices, including lower-end last-generation GPUs. The RTX 3090 24GB stood out, completing **99.983% of requests successfully** and generating over **1700 tokens per second** across the cluster with 35 concurrent users, which works out to a cost of just **$0.228 per million output tokens**.
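As a sanity check, the cost-per-token figures follow directly from the hourly node price and the measured cluster throughput. The sketch below assumes all 10 requested replicas were billed for the full hour, which reproduces the published numbers:

```python
# Cost per 1M output tokens = (node price * replicas) / tokens generated per hour,
# assuming the 10 requested replicas were all running.
def cost_per_million(price_per_node_hour, cluster_tok_per_s, replicas=10):
    tokens_per_hour = cluster_tok_per_s * 3600
    return price_per_node_hour * replicas / tokens_per_hour * 1_000_000

print(f"RTX 3090: ${cost_per_million(0.14, 1705.92):.4f}")  # ~0.2280
print(f"RTX 3060: ${cost_per_million(0.10, 983.29):.4f}")   # ~0.2825
print(f"RTX 4090: ${cost_per_million(0.22, 1773.30):.4f}")  # ~0.3446
```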
For more detailed information, full benchmark is here: [https://blog.salad.com/llama-3-1-8b/](https://blog.salad.com/llama-3-1-8b/)
| 2024-12-10T21:51:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hbdczu/llama_31_8b_benchmark_on_9_different_rtx_gpus/ | SaladChefs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbdczu | false | null | t3_1hbdczu | /r/LocalLLaMA/comments/1hbdczu/llama_31_8b_benchmark_on_9_different_rtx_gpus/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'juF2NpAx3OB8ElOFQbI6OVwyGZNOCXhmDKbX3lg9W0k', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/kh8NWw422F-9ciDwFMKxhCPT5a3yYJqkvrN1iVGHrm8.jpg?width=108&crop=smart&auto=webp&s=999639c80285a41c07427e4718cd0f35fd2931be', 'width': 108}, {'height': 168, 'url': 'https://external-preview.redd.it/kh8NWw422F-9ciDwFMKxhCPT5a3yYJqkvrN1iVGHrm8.jpg?width=216&crop=smart&auto=webp&s=0a64beacf0cf34cf79c149268c4737168357241f', 'width': 216}, {'height': 249, 'url': 'https://external-preview.redd.it/kh8NWw422F-9ciDwFMKxhCPT5a3yYJqkvrN1iVGHrm8.jpg?width=320&crop=smart&auto=webp&s=b8cc12cd947a38aa0369c7134aded231b1682279', 'width': 320}, {'height': 498, 'url': 'https://external-preview.redd.it/kh8NWw422F-9ciDwFMKxhCPT5a3yYJqkvrN1iVGHrm8.jpg?width=640&crop=smart&auto=webp&s=0e9f7bbc30bb449bf21db9c2d5bd040e8ad248ed', 'width': 640}, {'height': 747, 'url': 'https://external-preview.redd.it/kh8NWw422F-9ciDwFMKxhCPT5a3yYJqkvrN1iVGHrm8.jpg?width=960&crop=smart&auto=webp&s=d8105a8e278171c3ce57490656d804c5355a9d96', 'width': 960}, {'height': 841, 'url': 'https://external-preview.redd.it/kh8NWw422F-9ciDwFMKxhCPT5a3yYJqkvrN1iVGHrm8.jpg?width=1080&crop=smart&auto=webp&s=748fb4a11e9f8ee8226570090dab3a1257de5f87', 'width': 1080}], 'source': {'height': 966, 'url': 'https://external-preview.redd.it/kh8NWw422F-9ciDwFMKxhCPT5a3yYJqkvrN1iVGHrm8.jpg?auto=webp&s=14e5b60bbe31726eb1f5c90ad794b3305aa15c07', 'width': 1240}, 'variants': {}}]} |
Performance and value of Mac Mini M4 Pro base model on sale at $1200, compared to other options... (also thinking of repurposing my 12-year-old desktop build to help speed things up and have a question at the end) | 1 | [removed] | 2024-12-10T22:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hbdsdv/performance_and_value_of_mac_mini_m4_pro_base/ | noless15k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbdsdv | false | null | t3_1hbdsdv | /r/LocalLLaMA/comments/1hbdsdv/performance_and_value_of_mac_mini_m4_pro_base/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wa5WNJ3Vbuoi92tso0U2KkSxNrBDvZ5XABCmqg4qwXc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sTkB_TTz4P3pJmDjoSu3rGJR7hLyLzlrHK-_jAzaRIc.jpg?width=108&crop=smart&auto=webp&s=d112e018277e60c8f9fc5bbe21710badaef63a03', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sTkB_TTz4P3pJmDjoSu3rGJR7hLyLzlrHK-_jAzaRIc.jpg?width=216&crop=smart&auto=webp&s=69bb5d3c7c78427acfa4a2f6415daf0e0e2634e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sTkB_TTz4P3pJmDjoSu3rGJR7hLyLzlrHK-_jAzaRIc.jpg?width=320&crop=smart&auto=webp&s=4b5953d08a83a777dc29afe3717cfcbed316eccc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sTkB_TTz4P3pJmDjoSu3rGJR7hLyLzlrHK-_jAzaRIc.jpg?width=640&crop=smart&auto=webp&s=e30319d108c626aed4003186783b356d5bdac5c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sTkB_TTz4P3pJmDjoSu3rGJR7hLyLzlrHK-_jAzaRIc.jpg?width=960&crop=smart&auto=webp&s=1b71c39c150bc695c15a4743e2322926442f67c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sTkB_TTz4P3pJmDjoSu3rGJR7hLyLzlrHK-_jAzaRIc.jpg?width=1080&crop=smart&auto=webp&s=f6a578440ea87483596a0d753101e7357565f1a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sTkB_TTz4P3pJmDjoSu3rGJR7hLyLzlrHK-_jAzaRIc.jpg?auto=webp&s=da9f0cfbe683f066d79d0acdd979d7d74fd5fca6', 'width': 1200}, 'variants': {}}]} |
Does it make sense to use an LLM to search local documents? | 1 | [removed] | 2024-12-10T22:16:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hbdxuj/does_it_make_sense_to_use_an_llm_to_search_local/ | vogelke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbdxuj | false | null | t3_1hbdxuj | /r/LocalLLaMA/comments/1hbdxuj/does_it_make_sense_to_use_an_llm_to_search_local/ | false | false | self | 1 | null |
Best uncensored model to run on Nvidia RTX 4070 (12Gb VRAM) and 32Gb RAM | 1 | [removed] | 2024-12-10T22:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hbdy9q/best_uncensored_model_to_run_on_nvidia_rtx_4070/ | Complex_Okra9950 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbdy9q | false | null | t3_1hbdy9q | /r/LocalLLaMA/comments/1hbdy9q/best_uncensored_model_to_run_on_nvidia_rtx_4070/ | false | false | self | 1 | null |
I tried QwQ-32B-Preview and I'm impressed. | 116 | I'm one of those people for whom it's a no-brainer to shell out $200/mo for unlimited access to o1-pro. My main use cases for reasoning models are coding, computer engineering, and math. I'm a daily user of 32B-Coder and 72B on my laptop, and as y'all should know, those tools are great, but they occasionally make extraordinary errors that remind us that these things aren't actually thinking. That prompted me to try QwQ despite the known issues, and let's just say I don't think I'll be paying for ChatGPT Pro in 2026.
Model: QwQ-32B-Preview-8bit (MLX)
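(For readers who want to reproduce a setup like this, a minimal mlx-lm invocation might look like the sketch below. The Hugging Face repo id and generation settings are assumptions, not necessarily the exact configuration used here.)

```python
# Minimal sketch for running an 8-bit MLX build of QwQ-32B-Preview locally.
# The repo id is an assumption; adjust it to whatever build you actually use.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/QwQ-32B-Preview-8bit")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "show that f=ma implies the principle of least action"}],
    tokenize=False, add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=4096))
```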
Prompt: show that f=ma implies the principle of least action
So I'm trying to understand how Newton's second law, which is F equals m times a, leads to the principle of least action. I know that the principle of least action is a fundamental concept in physics, especially in Lagrangian mechanics, but I'm not sure how to connect it to Newton's law directly.
First, maybe I should recall what each of these means. Newton's second law states that the net force acting on an object is equal to the mass of the object multiplied by its acceleration. Mathematically, that's F = m*a.
The principle of least action, on the other hand, is a variational principle that says that the path taken by a system between two states is the one for which the action is minimized. Action is defined as the integral over time of the Lagrangian, which is typically kinetic energy minus potential energy.
So, how does F = ma imply this principle? I think I need to start by expressing Newton's law in terms of the Lagrangian.
Let me consider a particle moving in one dimension for simplicity. The kinetic energy T is (1/2)m*v^2, where v is the velocity, and the potential energy V is a function of position x.
The Lagrangian L is T - V, so L = (1/2)m*v^2 - V(x).
Now, the action S is the integral of L with respect to time, from time t1 to t2:
S = ∫(T - V) dt = ∫[(1/2)m*(dx/dt)^2 - V(x)] dt.
The principle of least action says that the actual path x(t) taken by the particle between t1 and t2 is the one that makes the action S stationary, meaning that any small variation in the path leads to no change in S (to first order).
Mathematically, this is expressed by setting the variation of S to zero:
δS = 0.
To find the equations of motion that satisfy this condition, I need to perform a variation of the action integral.
Let me consider a small variation in the path x(t), denoted by δx(t), with the endpoints fixed, meaning δx(t1) = 0 and δx(t2) = 0.
Then, the variation of the action is:
δS = ∫[ (dL/dx) * δx + (dL/dv) * δv ] dt,
where v = dx/dt, so δv = d(δx)/dt.
Substituting L = (1/2)m*v^2 - V(x), we have:
dL/dx = -dV/dx,
and
dL/dv = m*v.
So,
δS = ∫[ (-dV/dx) * δx + m*v * δv ] dt.
Now, substitute δv = d(δx)/dt:
δS = ∫[ (-dV/dx) * δx + m*v * d(δx)/dt ] dt.
This looks a bit messy, so maybe I can integrate the second term by parts to simplify it.
Integrating the second term by parts:
∫ m*v * d(δx)/dt dt = m*v*δx |(t1 to t2) - ∫ d(m*v)/dt * δx dt.
Since δx(t1) = 0 and δx(t2) = 0, the boundary term vanishes, and we have:
∫ m*v * d(δx)/dt dt = - ∫ d(m*v)/dt * δx dt.
So, now δS becomes:
δS = ∫[ (-dV/dx - d(m*v)/dt) ] δx dt.
For the action to be stationary, δS must be zero for any variation δx(t), which implies that the integrand must be zero:
-dV/dx - d(m*v)/dt = 0.
Rearranging this, we get:
d(m*v)/dt = -dV/dx.
But m*v is the momentum p, and dp/dt is the force F. Also, -dV/dx is the conservative force associated with the potential V.
Therefore, dP/dt = -dV/dx.
Wait a minute, in Newton's law, F = m*a, and F is also equal to -dV/dx for conservative forces.
Since a = dv/dt, and v = dx/dt, then F = m*dv/dt.
So, m*dv/dt = -dV/dx.
This looks exactly like the equation we obtained from the principle of least action: d(m*v)/dt = -dV/dx.
Therefore, the principle of least action leads to Newton's second law for conservative forces.
But the question was to show that F = ma implies the principle of least action.
Wait, it seems like I've shown the reverse: that the principle of least action implies F = ma.
Maybe I need to approach this differently.
Perhaps I should start with Newton's second law and show that it can be derived from the principle of least action, or maybe show that F = ma is equivalent to the principle of least action under certain conditions.
Actually, I think the standard approach is to start with Newton's laws and derive the principle of least action, or vice versa.
In my previous reasoning, I started with the Lagrangian and derived Newton's law from the principle of least action.
But now I need to do the opposite: start with F = ma and arrive at the principle of least action.
Maybe I can consider defining the Lagrangian in terms of kinetic and potential energy and then show that F = ma leads to the Euler-Lagrange equation, which is the condition for the action to be stationary.
Let me try that.
Assume F = ma holds, and define L = T - V, where T is kinetic energy and V is potential energy.
From F = ma, and knowing that F = -dV/dx for conservative forces, we have:
m*dv/dt = -dV/dx.
But from the definition of momentum p = m*v, dp/dt = m*a, so dp/dt = F.
Therefore, dp/dt = -dV/dx.
Now, consider the Lagrangian L = T - V = (1/2)m*v^2 - V(x).
The Euler-Lagrange equation for this Lagrangian is:
d/dt (dL/dv) - dL/dx = 0.
Compute dL/dv and dL/dx:
dL/dv = m*v,
dL/dx = -dV/dx.
So, the Euler-Lagrange equation is:
d(m*v)/dt + dV/dx = 0,
which is exactly the same as F = ma.
Therefore, if F = ma holds, then the Euler-Lagrange equation holds, which is the condition for the action to be stationary.
In other words, F = ma implies that the action is stationary.
Hence, the principle of least action follows from F = ma.
I think this is the connection I was looking for.
To summarize:
1. Start with F = ma.
2. For conservative forces, F = -dV/dx.
3. Define the Lagrangian L = T - V.
4. The Euler-Lagrange equation for L is d/dt(dL/dv) - dL/dx = 0, which simplifies to m*a + dV/dx = 0, or F = ma.
5. Therefore, F = ma implies that the Euler-Lagrange equation holds, which is the condition for the action S = ∫L dt to be stationary.
Hence, F = ma implies the principle of least action.
**Final Answer**
\[ \boxed{\text{F} = m\text{a implies the principle of least action.}} \] | 2024-12-10T23:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hbfj2s/i_tried_qwq32bpreview_and_im_impressed/ | jsllls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbfj2s | false | null | t3_1hbfj2s | /r/LocalLLaMA/comments/1hbfj2s/i_tried_qwq32bpreview_and_im_impressed/ | false | false | self | 116 | null |
ChatGPT 3.5 - Retroperspective | 22 | Sometimes it can be fun to look back at older technology; it's easy to forget how much more primitive it was back then.
Remember when ChatGPT 3.5 struggled with very simple logical tasks [like this one](https://youtu.be/XMMDPNQFt14?feature=shared&t=412)?
ChatGPT 3.5:
**Prompt:**
>Mayank walks faster than Gurjot.
Lalit walks faster than Mayank.
Gurjot walks faster than Lalit.
If the first two statements are true, the third statement is
A. True B. False C. Uncertain
**Output:**
>If we assume that the statements are true, we can construct the following diagram:
**Gurjot > Lalit > Mayank**
This diagram satisfies all three statements. Gurjot walks faster than Lalit, Lalit walks faster
than Mayank, and therefore, Gurjot walks faster than Mayank.
Therefore, if we assume that the first two statements are true, the third statement must also be true.
Therefore, the answer is A. True.
Wrong!
This is the LLM (ChatGPT 3.5) people were blown away by (including me), and many believed a **supercomputer** was needed to run something like this.
Now, 1.5 years later...
Qwen2.5 3b (Can be run on a phone), same prompt:
**Output:**
>B. False
>If Mayank walks faster than Gurjot, and Lalit walks faster than Mayank, then Lalit must walk faster than Gurjot as well. Therefore, the statement that Gurjot walks faster than Lalit is false.
Got it perfectly right.
---
ChatGPT 3.5 also struggled to write short stories that made sense, [like this one](https://youtu.be/JTxsNm9IdYU?feature=shared&t=1153):
**ChatGPT 3.5 short story:**
>Once upon a time, a rabbit and a turtle decided to go on an adventure together. They set out early in the morning, eager to see what the world had in store for them.
>As they wandered through the forest, they came across a wide river that they had to cross in order to continue their journey. The rabbit, being much faster than the turtle, offered to carry the turtle on his back so they could make it across the river quickly.
>The turtle agreed, and the rabbit started to swim across the river. However, halfway across the river, the rabbit started to get tired. He started to sink lower and lower into the water, and the turtle was afraid that they would both drown.
>But the turtle, being the clever creature that she was, used her strong and sturdy shell to keep the rabbit afloat. Together, they made it safely to the other side of the river.
>After the adventure, the rabbit and the turtle became the best of friends, and they continued to explore the world together, always looking out for one another.
The rabbit tries to carry the turtle across the river? What the hell, ChatGPT 3.5, what were you thinking?
Now, \~2 years later...
Mistral Small 22b:
**Prompt:**
>This is a short story written by ChatGPT 3.5. Is there anything that does not make sense?
*\*ChatGPT 3.5's rabbit and turtle story pasted here\**
**Output:**
>The story is charming and has a clear moral about friendship and teamwork. However, there are a few points that might not make the most sense:
1. **Rabbit's Swimming Ability**:
Rabbits are generally not known for being strong swimmers. It might be more believable if the turtle offered to swim and the rabbit agreed to ride on the turtle's shell.
>2. **Turtle's Strength**:
Turtles are known for their strong shells, but it's not clear how the turtle would use her shell to keep the rabbit afloat. Perhaps the turtle could have suggested a different strategy, such as holding onto a floating log or using her strong neck to support the rabbit.
>3. **Speed of Travel**:
The rabbit is faster than the turtle on land, but in water, the turtle would typically be the faster one. This could be highlighted to show how each animal has its own strengths.
>4. **End of the Story**:
The story ends abruptly after they cross the river. It might be more satisfying to show how their friendship continues to help them in future adventures or challenges.
>Here's a revised version addressing these points:
>"Once upon a time, a rabbit and a turtle decided to go on an adventure together. They set out early in the morning, eager to see what the world had in store for them.
>As they wandered through the forest, they came across a wide river that they had to cross in order to continue their journey. The turtle, being a strong swimmer, offered to carry the rabbit on her shell so they could make it across the river safely.
>The rabbit agreed, and the turtle started to swim across the river. However, as they reached the middle, the current became stronger, and the rabbit was afraid they would be swept away.
>But the turtle, being the wise creature that she was, spotted a floating log nearby. She grabbed onto it with her strong legs and used it to keep them both stable as she continued to swim. The rabbit held onto the turtle's shell tightly, and together, they made it safely to the other side of the river.
>After their adventure, the rabbit and the turtle became the best of friends. They continued to explore the world together, always looking out for one another and using each other's strengths to overcome any challenges they faced."
Mistral Small both identified the errors in this illogical ChatGPT 3.5 story and fixed them, most importantly by swapping the roles so that the turtle carries the rabbit across the river instead.
It's fantastic, the performance of an LLM that we thought was mind blowing in 2022, pales in comparison to today's local small LLMs **and** they can be run on a potato PC. | 2024-12-11T00:04:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hbgbje/chatgpt_35_retroperspective/ | Admirable-Star7088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbgbje | false | null | t3_1hbgbje | /r/LocalLLaMA/comments/1hbgbje/chatgpt_35_retroperspective/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'l_nV0_SB3YL7ucFX-qWDTVufDthXJ2zGOGbIpjq_zaQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Pgzro5NsSWMdxWVsXxdZA9TFzpcnPFJQGIjpTGDgh5w.jpg?width=108&crop=smart&auto=webp&s=867bcb842e3b0cba56b789418a52eb4cd6e64f52', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Pgzro5NsSWMdxWVsXxdZA9TFzpcnPFJQGIjpTGDgh5w.jpg?width=216&crop=smart&auto=webp&s=8b93668a3f756a7e4dc707cc4b556bf59e112c21', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Pgzro5NsSWMdxWVsXxdZA9TFzpcnPFJQGIjpTGDgh5w.jpg?width=320&crop=smart&auto=webp&s=d80674f00fdbba63d5f691490acd7b6ad3966b3f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Pgzro5NsSWMdxWVsXxdZA9TFzpcnPFJQGIjpTGDgh5w.jpg?auto=webp&s=726d5137348039dc9ebbede4c2d95b724d9cfd43', 'width': 480}, 'variants': {}}]} |
Do I need to learn Machine Learning and Deep Learning before LLMs? | 1 | [removed] | 2024-12-11T00:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hbh1v9/do_i_need_to_learn_machine_learning_and_deep/ | _Mikazuchi_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbh1v9 | false | null | t3_1hbh1v9 | /r/LocalLLaMA/comments/1hbh1v9/do_i_need_to_learn_machine_learning_and_deep/ | false | false | self | 1 | null |
How to use RAG with openwebui? | 4 | Just that, I want to use it for docs and stuff, but don't know how to. | 2024-12-11T00:49:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hbh8w1/how_to_use_rag_with_openwebui/ | Pro-editor-1105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbh8w1 | false | null | t3_1hbh8w1 | /r/LocalLLaMA/comments/1hbh8w1/how_to_use_rag_with_openwebui/ | false | false | self | 4 | null |
Llama Newbie - need to go local for different reasons, get real work done. | 3 | I have been using OpenAI and Claude for the last 6 months for work and it's really changed things for me as a lone developer. However, there are some big drawbacks to the big paid models, and I would like to build a system where I send prompts to a local model first and then, after review, forward them to Claude or Gemini.
I also have a medium-term desire to create a RAG-based code repository tool that I can hook an LLM up to, so that it can address my entire codebase at once without a lot of cutting and pasting. The narrow context window in chat gets to be a bit painful.
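A minimal version of that idea can be sketched in a few lines. Everything below (the embedding model, chunk sizes, the `src` directory, and the example query) is an assumption for illustration, not a finished tool:

```python
# Sketch of a tiny code-repo RAG index: chunk source files, embed the chunks,
# and retrieve the most relevant ones for a question about the codebase.
from pathlib import Path
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")    # small, CPU-friendly embedder

def chunk_file(path, window=40, overlap=10):
    """Yield (reference, text) pairs for overlapping windows of lines."""
    lines = path.read_text(errors="ignore").splitlines()
    for start in range(0, max(len(lines), 1), window - overlap):
        chunk = "\n".join(lines[start:start + window])
        if chunk.strip():
            yield f"{path}:{start + 1}", chunk

docs = [pair for p in Path("src").rglob("*.py") for pair in chunk_file(p)]
vectors = embedder.encode([text for _, text in docs], normalize_embeddings=True)

def retrieve(query, k=5):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q                               # cosine similarity (normalized)
    return [(docs[i][0], docs[i][1]) for i in np.argsort(-scores)[:k]]

# Paste the retrieved chunks into the LLM's context instead of the whole repo.
for ref, text in retrieve("where is the authentication middleware configured?"):
    print(ref)
```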
Farther out, I want to do fine-tunes that draw my local models more into my actual problem domain, but I have no experience with fine-tuning. I want to fine-tune an LLM for automated coding in a supervised setting against my own training data, and have it gradually learn and improve in place.
I have a lot of classical ML and non-LLM deep learning experience.
I have a 48-core machine with decent storage and a single RTX 6000 Ada. The server can scale RAM up pretty high. Which models that are actually good for coding can I run on one instance of this GPU? I see that some 70B models can run in less than 48GB.
I will buy more 6000 ADAs but I want to start small and get going before spending even more on local compute resources. Of course it's not important that my compute is on premises, but I determined that a decent prototyping rig is better all-in at this entry level than trying to rent GPU machines in the cloud. | 2024-12-11T00:52:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hbhbhf/llama_newbie_need_to_go_local_for_different/ | bitmoji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbhbhf | false | null | t3_1hbhbhf | /r/LocalLLaMA/comments/1hbhbhf/llama_newbie_need_to_go_local_for_different/ | false | false | self | 3 | null |
GRMR 2B Instruct - A lightweight, reliable grammar checker! | 106 | 2024-12-11T01:09:33 | https://huggingface.co/qingy2024/GRMR-2B-Instruct | random-tomato | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hbho15 | false | null | t3_1hbho15 | /r/LocalLLaMA/comments/1hbho15/grmr_2b_instruct_a_lightweight_reliable_grammar/ | false | false | 106 | {'enabled': False, 'images': [{'id': 'ZQYQpQeuiDuT8dzTwAtMWqeZRLw6AbPyc5bjW28z2IA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YabKZ0Lv0WuoYptTYBumhB_SdxbLNmzQMjjXxi6ptNY.jpg?width=108&crop=smart&auto=webp&s=273a4e2055563846898078383b8f1d9d84c0c1cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YabKZ0Lv0WuoYptTYBumhB_SdxbLNmzQMjjXxi6ptNY.jpg?width=216&crop=smart&auto=webp&s=4ba0a1f5e014fea24811d5761ec4f32dab73ce66', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YabKZ0Lv0WuoYptTYBumhB_SdxbLNmzQMjjXxi6ptNY.jpg?width=320&crop=smart&auto=webp&s=88441bec6f30fff173dee9c1ba790f538f7ebf77', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YabKZ0Lv0WuoYptTYBumhB_SdxbLNmzQMjjXxi6ptNY.jpg?width=640&crop=smart&auto=webp&s=e38a3c366cfb6edc3edb4bd4d6f4d7af49ee4a9c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YabKZ0Lv0WuoYptTYBumhB_SdxbLNmzQMjjXxi6ptNY.jpg?width=960&crop=smart&auto=webp&s=77f111b3c841364b571d501e21bbe7046485a492', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YabKZ0Lv0WuoYptTYBumhB_SdxbLNmzQMjjXxi6ptNY.jpg?width=1080&crop=smart&auto=webp&s=15fde90d81376671eb288224be89043e6aac537a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YabKZ0Lv0WuoYptTYBumhB_SdxbLNmzQMjjXxi6ptNY.jpg?auto=webp&s=193f1200fa93f906a5518429901d323af5ff730e', 'width': 1200}, 'variants': {}}]} |
||
Anyone making agents for mobile devices? | 0 | Anyone making agents that can fully run on a mobile device including the model? Something like ModelContextProtocol but for iOS or Android?
Curious if anyone has had success with this or not. Doesn't look like anyone has created a framework yet. | 2024-12-11T01:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hbhzso/anyone_making_agents_for_mobile_devices/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbhzso | false | null | t3_1hbhzso | /r/LocalLLaMA/comments/1hbhzso/anyone_making_agents_for_mobile_devices/ | false | false | self | 0 | null |
NLP Context - RAG | 1 | [removed] | 2024-12-11T01:59:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hbinf2/nlp_context_rag/ | Leather_Actuator_511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbinf2 | false | null | t3_1hbinf2 | /r/LocalLLaMA/comments/1hbinf2/nlp_context_rag/ | false | false | self | 1 | null |
How to organize text structure for chunking | 1 | [removed] | 2024-12-11T02:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hbipzm/how_to_organize_text_structure_for_chunking/ | pddddddddddd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbipzm | false | null | t3_1hbipzm | /r/LocalLLaMA/comments/1hbipzm/how_to_organize_text_structure_for_chunking/ | false | false | self | 1 | null |
gptme-rag: Local RAG as a simple CLI, for standalone use or as a gptme tool | 8 | 2024-12-11T02:57:33 | https://github.com/ErikBjare/gptme-rag | ErikBjare | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hbjs0p | false | null | t3_1hbjs0p | /r/LocalLLaMA/comments/1hbjs0p/gptmerag_local_rag_as_a_simple_cli_for_standalone/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': 'vG7KJKUHqeQN2Ene7I8wyLIoWPpWak20B5Cov2srTcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ARRoU1JD-mRJrfm8Hh1bFK_Z-ShJ_CAzXEvoBR-L9hc.jpg?width=108&crop=smart&auto=webp&s=176a560bdb66bca8999ad3e5334d69ec09ed66c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ARRoU1JD-mRJrfm8Hh1bFK_Z-ShJ_CAzXEvoBR-L9hc.jpg?width=216&crop=smart&auto=webp&s=1df1b5d6301365744d818b167580103dc9fb2c8c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ARRoU1JD-mRJrfm8Hh1bFK_Z-ShJ_CAzXEvoBR-L9hc.jpg?width=320&crop=smart&auto=webp&s=2ac9ff3da33b2ab3ae7a94750fcb29d4ae7a4ec2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ARRoU1JD-mRJrfm8Hh1bFK_Z-ShJ_CAzXEvoBR-L9hc.jpg?width=640&crop=smart&auto=webp&s=10da5ff6587f3d422b33c9121577712c9d9ca015', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ARRoU1JD-mRJrfm8Hh1bFK_Z-ShJ_CAzXEvoBR-L9hc.jpg?width=960&crop=smart&auto=webp&s=105f7cf3c255a0ac3e04fdf4313e23ae6ccc947f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ARRoU1JD-mRJrfm8Hh1bFK_Z-ShJ_CAzXEvoBR-L9hc.jpg?width=1080&crop=smart&auto=webp&s=07cea87af5f5fc5ed754a746ac83f368193576be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ARRoU1JD-mRJrfm8Hh1bFK_Z-ShJ_CAzXEvoBR-L9hc.jpg?auto=webp&s=094240f9ee89ceb61854ee70d750bdd1460e43d6', 'width': 1200}, 'variants': {}}]} |
|
GitHub - nahidalam/maya: Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya | 11 | 2024-12-11T03:26:25 | https://github.com/nahidalam/maya | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hbkbsf | false | null | t3_1hbkbsf | /r/LocalLLaMA/comments/1hbkbsf/github_nahidalammaya_maya_an_instruction/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'R4WaynK33hcZw430b5afVxMW6JcggFsbq-ThRuyFNcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6TvIRAZbElS_f5nIYlGJGeOf19eHXgrx8lr1mnwHvaw.jpg?width=108&crop=smart&auto=webp&s=d19325d19ce0b4e61e1f8d5e0bdef71c322e9275', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6TvIRAZbElS_f5nIYlGJGeOf19eHXgrx8lr1mnwHvaw.jpg?width=216&crop=smart&auto=webp&s=e953a10373c55969615bbe5e4009e9f62c743365', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6TvIRAZbElS_f5nIYlGJGeOf19eHXgrx8lr1mnwHvaw.jpg?width=320&crop=smart&auto=webp&s=2223512e71f1652ca9acd62d3c2492518b3d9e94', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6TvIRAZbElS_f5nIYlGJGeOf19eHXgrx8lr1mnwHvaw.jpg?width=640&crop=smart&auto=webp&s=4953ce81effbb3e96d1043aa55ed9aa105835783', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6TvIRAZbElS_f5nIYlGJGeOf19eHXgrx8lr1mnwHvaw.jpg?width=960&crop=smart&auto=webp&s=aa6f7580c0ef66bc0d1c67c2baca58c9c4c19ad2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6TvIRAZbElS_f5nIYlGJGeOf19eHXgrx8lr1mnwHvaw.jpg?width=1080&crop=smart&auto=webp&s=50e9e09b55a3381827c8f7cc8df671b81d3ef2d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6TvIRAZbElS_f5nIYlGJGeOf19eHXgrx8lr1mnwHvaw.jpg?auto=webp&s=885dd28693822a8df86b87f1ddb1621936e16223', 'width': 1200}, 'variants': {}}]} |
||
Recommendations for how to do this? | 1 | [removed] | 2024-12-11T03:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hbkhwd/recommendations_for_how_to_do_this/ | Gunplexityyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbkhwd | false | null | t3_1hbkhwd | /r/LocalLLaMA/comments/1hbkhwd/recommendations_for_how_to_do_this/ | false | false | self | 1 | null |
AI Teacher/Coach For Workbooks & Study Material. Are There Any Available? | 1 | [removed] | 2024-12-11T04:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hbl3mc/ai_teachercoach_for_workbooks_study_material_are/ | DoNotDisturb____ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hbl3mc | false | null | t3_1hbl3mc | /r/LocalLLaMA/comments/1hbl3mc/ai_teachercoach_for_workbooks_study_material_are/ | false | false | self | 1 | null |