Dataset schema (one record per post):

- title: string (1–300 chars)
- score: int64 (0–8.54k)
- selftext: string (0–40k chars)
- created: timestamp[ns]
- url: string (0–780 chars)
- author: string (3–20 chars)
- domain: string (0–82 chars)
- edited: timestamp[ns]
- gilded: int64 (0–2)
- gildings: string (7 classes)
- id: string (7 chars)
- locked: bool (2 classes)
- media: string (646–1.8k chars)
- name: string (10 chars)
- permalink: string (33–82 chars)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (4–213 chars)
- ups: int64 (0–8.54k)
- preview: string (301–5.01k chars)
Hello, I'm a SillyTavern user who runs KoboldCpp natively. I'm looking for good model recommendations that I can run. Details below.
1
The current models I run are Mythochronos 13B and, recently, Violet Twilight 13B. However, I can't find a good middle ground: Mythochronos isn't that smart but makes chats flow decently well, while Twilight is too yappy and constantly puts out ~400-token responses even when the prompt says "100 words or less". It's also super repetitive. Its one upside is that it's really creative and great at NSFW stuff. My current hardware is a 3060 (12GB VRAM) with 32GB RAM. I prefer GGUF format since I use KoboldCpp; Ooba has a tendency to crash my PC.
2025-02-06T10:23:40
https://www.reddit.com/r/LocalLLaMA/comments/1iizgsm/hello_im_a_sillytavern_user_who_runs_koboldcpp/
corkgunsniper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iizgsm
false
null
t3_1iizgsm
/r/LocalLLaMA/comments/1iizgsm/hello_im_a_sillytavern_user_who_runs_koboldcpp/
false
false
self
1
null
LM Studio not detecting GGUF
1
[removed]
2025-02-06T10:27:48
https://www.reddit.com/r/LocalLLaMA/comments/1iiziv5/lm_studio_not_detecting_gguf/
FartMaker3000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iiziv5
false
null
t3_1iiziv5
/r/LocalLLaMA/comments/1iiziv5/lm_studio_not_detecting_gguf/
false
false
self
1
null
Lyzr Agent Studio is LIVE on Product Hunt! 🎉
1
[removed]
2025-02-06T11:00:15
https://www.reddit.com/r/LocalLLaMA/comments/1iizzh1/lyzr_agent_studio_is_live_on_product_hunt/
harshit_nariya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iizzh1
false
null
t3_1iizzh1
/r/LocalLLaMA/comments/1iizzh1/lyzr_agent_studio_is_live_on_product_hunt/
false
false
https://a.thumbs.redditm…gbdSJY5tBTG0.jpg
1
null
Looking for a local LLM for ERP and image-generation prompts, for use in ST.
1
[removed]
2025-02-06T11:25:07
https://www.reddit.com/r/LocalLLaMA/comments/1ij0ddb/looking_for_local_llm_for_erp_and_image/
Alonlystalker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij0ddb
false
null
t3_1ij0ddb
/r/LocalLLaMA/comments/1ij0ddb/looking_for_local_llm_for_erp_and_image/
false
false
self
1
null
Do I need local AI for studying Electrical Engineering?
1
[removed]
2025-02-06T12:19:20
https://www.reddit.com/r/LocalLLaMA/comments/1ij188t/should_i_need_local_ai_for_electrical_engineering/
wolfson10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij188t
false
null
t3_1ij188t
/r/LocalLLaMA/comments/1ij188t/should_i_need_local_ai_for_electrical_engineering/
false
false
self
1
null
What is the best LLM for coding that can be run on M4 Max 64GB
1
[removed]
2025-02-06T12:25:19
https://www.reddit.com/r/LocalLLaMA/comments/1ij1bx3/what_is_the_best_llm_for_coding_that_can_be_run/
Flife0x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij1bx3
false
null
t3_1ij1bx3
/r/LocalLLaMA/comments/1ij1bx3/what_is_the_best_llm_for_coding_that_can_be_run/
false
false
self
1
null
lineage-bench benchmark results updated with recently released models
88
2025-02-06T12:29:58
https://i.redd.it/pgf54p7ddihe1.png
fairydreaming
i.redd.it
1970-01-01T00:00:00
0
{}
1ij1ew9
false
null
t3_1ij1ew9
/r/LocalLLaMA/comments/1ij1ew9/lineagebench_benchmark_results_updated_with/
false
false
https://b.thumbs.redditm…Akjk9Tv3eyfU.jpg
88
{'enabled': True, 'images': [{'id': 'WXkOpnEWuJRM-qRI-jqAbcx9xXk9T-i0_mEF0M9-uxA', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/pgf54p7ddihe1.png?width=108&crop=smart&auto=webp&s=395aff4484a52c8752412fdacc05caf7fff33ae2', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/pgf54p7ddihe1.png?width=216&crop=smart&auto=webp&s=b2430b5d0c25dabe532c3dac51c634ea928fd6ca', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/pgf54p7ddihe1.png?width=320&crop=smart&auto=webp&s=0834d4ea43b0e6e8a9d305b841353015ee71b045', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/pgf54p7ddihe1.png?width=640&crop=smart&auto=webp&s=02d2e9ae3c8b4d6c2160f1ebcb0f9b3a96964947', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/pgf54p7ddihe1.png?width=960&crop=smart&auto=webp&s=76d17ad890bef2134bef181cd7954c9ce34c845e', 'width': 960}], 'source': {'height': 600, 'url': 'https://preview.redd.it/pgf54p7ddihe1.png?auto=webp&s=61e65922ffb1a8f605210b55febf1c315df4986b', 'width': 1000}, 'variants': {}}]}
Any tool which is an LLM based local file "explorer + search engine" ?
7
I'm talking about ones that don't use RAG and have a super low profile, if any exist. Otherwise, please share the ones that use RAG, are heavy, or are of any other type.
2025-02-06T12:38:44
https://www.reddit.com/r/LocalLLaMA/comments/1ij1kh1/any_tool_which_is_an_llm_based_local_file/
ThiccStorms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij1kh1
false
null
t3_1ij1kh1
/r/LocalLLaMA/comments/1ij1kh1/any_tool_which_is_an_llm_based_local_file/
false
false
self
7
null
Multiple Titan RTX 12GB for an Inference Rig?
1
Hi guys, I don't know much about running models locally other than using Ollama. I was wondering if it would be possible to build a rig using multiple Titan RTX 12GB cards, since they are very cheap now while having a lot of VRAM (for the price). Does anyone know if this is feasible?
2025-02-06T12:52:46
https://www.reddit.com/r/LocalLLaMA/comments/1ij1tgg/multiple_titan_rtx_12gb_for_an_inference_rig/
LocSta29
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij1tgg
false
null
t3_1ij1tgg
/r/LocalLLaMA/comments/1ij1tgg/multiple_titan_rtx_12gb_for_an_inference_rig/
false
false
self
1
null
Mistral small 24b 2501 performance
1
[removed]
2025-02-06T12:58:34
https://www.reddit.com/r/LocalLLaMA/comments/1ij1xb5/mistral_small_24b_2501_performance/
stjepano85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij1xb5
false
null
t3_1ij1xb5
/r/LocalLLaMA/comments/1ij1xb5/mistral_small_24b_2501_performance/
false
false
self
1
null
Autiobooks: Automatically convert epubs to audiobooks (kokoro)
273
https://github.com/plusuncold/autiobooks This is a GUI frontend for Kokoro for generating audiobooks from epubs. The results are pretty good! PRs are very welcome!
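For reference, a minimal sketch of the kind of call a tool like this drives under the hood, using the `kokoro` Python package directly (assuming `pip install kokoro soundfile`; the `lang_code` and voice name follow the Kokoro-82M model card and may differ in your install):

```python
# Generate speech for one chunk of book text with Kokoro, then write WAVs.
import soundfile as sf
from kokoro import KPipeline

pipeline = KPipeline(lang_code="a")  # "a" = American English (per model card)
text = "Chapter one. It was a bright cold day in April."

# The pipeline yields (graphemes, phonemes, audio) chunks; audio is 24 kHz.
for i, (graphemes, phonemes, audio) in enumerate(pipeline(text, voice="af_heart")):
    sf.write(f"chunk_{i}.wav", audio, 24000)
```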
2025-02-06T12:58:49
https://v.redd.it/w21l2ag0oihe1
vosFan
/r/LocalLLaMA/comments/1ij1xge/autiobooks_automatically_convert_epubs_to/
1970-01-01T00:00:00
0
{}
1ij1xge
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w21l2ag0oihe1/DASHPlaylist.mpd?a=1741568335%2CMDBkNTRhODNmYjMwYWM5OTVjMjM2Y2RhZjlmNTFjNTkwZGM2ZWEyZTE2ZGExZGI1YzkxMTg4MDk5NDlmMTU0NQ%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/w21l2ag0oihe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/w21l2ag0oihe1/HLSPlaylist.m3u8?a=1741568335%2CMTM2MzkyNTc5NWI1ZDAyMzAxNDJhOGI1ZjIwZGVkNTZhY2YyNTNiOTQxN2QzNjM1NmRkMGEyMmM1ODNmZmEzNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w21l2ag0oihe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1174}}
t3_1ij1xge
/r/LocalLLaMA/comments/1ij1xge/autiobooks_automatically_convert_epubs_to/
false
false
https://external-preview…4a9180f53b7a13ef
273
{'enabled': False, 'images': [{'id': 'ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce', 'resolutions': [{'height': 99, 'url': 'https://external-preview.redd.it/ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce.png?width=108&crop=smart&format=pjpg&auto=webp&s=14375f1464380a7a4430f19f37fb32ab817e2c40', 'width': 108}, {'height': 198, 'url': 'https://external-preview.redd.it/ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce.png?width=216&crop=smart&format=pjpg&auto=webp&s=3f4d3518e2ab9bbeca6995bcb100dd0de508e3a3', 'width': 216}, {'height': 294, 'url': 'https://external-preview.redd.it/ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce.png?width=320&crop=smart&format=pjpg&auto=webp&s=c0e7d46859e0a825c4fc25026f8a594ce2bad3f1', 'width': 320}, {'height': 588, 'url': 'https://external-preview.redd.it/ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce.png?width=640&crop=smart&format=pjpg&auto=webp&s=a4af27c89a7dfdcc11b44741317755140de19992', 'width': 640}, {'height': 883, 'url': 'https://external-preview.redd.it/ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce.png?width=960&crop=smart&format=pjpg&auto=webp&s=d92208c8a91f2969adea710b3e244550fc8fd66b', 'width': 960}, {'height': 993, 'url': 'https://external-preview.redd.it/ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a6ab416d26968b7b2ee172b33d01c7e0d0ddb3a1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZWtwaHU5YzBvaWhlMV54VUX8u6k6pXdX7L9L_cCrxwAtDjHSCnwLZyQNJRce.png?format=pjpg&auto=webp&s=ca1354338662e07a69e9ed214da3071486b9c0b0', 'width': 1174}, 'variants': {}}]}
Perplexity AI’s $1M Giveaway Turns Searching Into Winning
1
2025-02-06T12:59:06
https://www.bitdegree.org/crypto/news/perplexity-ais-1-million-giveaway-turns-searching-into-winning?utm_source=reddit&utm_medium=social&utm_campaign=r-perplexity-ais-1-million-giveaway
Educational_Swim8665
bitdegree.org
1970-01-01T00:00:00
0
{}
1ij1xnc
false
null
t3_1ij1xnc
/r/LocalLLaMA/comments/1ij1xnc/perplexity_ais_1m_giveaway_turns_searching_into/
false
false
https://a.thumbs.redditm…IaBNj5wBTwg0.jpg
1
{'enabled': False, 'images': [{'id': 'Q86E4kiUslkjXJwKjvAI2l_EnKV-LFjy1QGjh1-lWuM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/H2W8nWRFFU-DCMQ191MfGv2-C9_lcyZgvHwS3NRG6is.jpg?width=108&crop=smart&auto=webp&s=640b1060b954188c4a2f091e351c1cfc2c2aab8b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/H2W8nWRFFU-DCMQ191MfGv2-C9_lcyZgvHwS3NRG6is.jpg?width=216&crop=smart&auto=webp&s=5a573e092422d883580de701707ad16428b7d4e7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/H2W8nWRFFU-DCMQ191MfGv2-C9_lcyZgvHwS3NRG6is.jpg?width=320&crop=smart&auto=webp&s=ac1b24be60d07d94a760c3459d1b210975f7e0a9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/H2W8nWRFFU-DCMQ191MfGv2-C9_lcyZgvHwS3NRG6is.jpg?width=640&crop=smart&auto=webp&s=0353ff25fed099f52650810c7a6a5848e1d103ae', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/H2W8nWRFFU-DCMQ191MfGv2-C9_lcyZgvHwS3NRG6is.jpg?width=960&crop=smart&auto=webp&s=c54aa780305abd2cf7ce7799048b3f15d57a1332', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/H2W8nWRFFU-DCMQ191MfGv2-C9_lcyZgvHwS3NRG6is.jpg?auto=webp&s=bb4b8b28f62953516cb54e384bba822c2f1aeaac', 'width': 1024}, 'variants': {}}]}
Experience DeepSeek-R1-Distill-Llama-8B on Your Smartphone with PowerServe and Qualcomm NPU!
38
[PowerServe](https://github.com/powerserve-project/PowerServe) is a **high-speed** and **easy-to-use** LLM serving framework for local deployment. You can deploy popular LLMs with our [one-click compilation and deployment](https://github.com/powerserve-project/PowerServe/blob/main/docs/end_to_end.md). PowerServe offers the following advantages:

- **Lightning-Fast Prefill and Decode**: Optimized for NPU, achieving over 10x faster prefill speeds compared to llama.cpp, significantly accelerating model warm-up.
- **Efficient NPU Speculative Inference**: Supports speculative inference, delivering 2x faster inference speeds compared to traditional autoregressive decoding.
- **Seamless OpenAI API Compatibility**: Fully compatible with the OpenAI API, enabling effortless migration of existing applications to the PowerServe platform.
- **Model Support**: Compatible with mainstream large language models such as **Llama3**, **Qwen2.5**, and **InternLM3**, catering to diverse application needs.
- **Ease of Use**: Features one-click deployment for quick setup, making it accessible to everyone.

[Running DeepSeek-R1-Distill-Llama-8B with NPU](https://reddit.com/link/1ij205h/video/dsu4qf4doihe1/player)
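Since the server advertises OpenAI API compatibility, pointing the standard `openai` client at it should be enough to migrate an existing app. A hedged sketch (the base URL, port, and model name below are assumptions, not PowerServe's documented defaults; check its docs for the actual serve address):

```python
# Call a locally served model through an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="DeepSeek-R1-Distill-Llama-8B",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(resp.choices[0].message.content)
```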
2025-02-06T13:02:20
https://www.reddit.com/r/LocalLLaMA/comments/1ij205h/experience_deepseekr1distillllama8b_on_your/
Zealousideal_Bad_52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij205h
false
null
t3_1ij205h
/r/LocalLLaMA/comments/1ij205h/experience_deepseekr1distillllama8b_on_your/
false
false
https://a.thumbs.redditm…oFKre6EE_Su0.jpg
38
{'enabled': False, 'images': [{'id': 'QK4KG7jD9z0JQkS3RkT_0bseCKRzF5j9cuEMsMeOnYo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MI3THquRgDiXtkGwCynZSuvm0Cdq3csr60FcHLC_Xk4.jpg?width=108&crop=smart&auto=webp&s=3a805b776d787c7ac295521c9088cd16cc8d190c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MI3THquRgDiXtkGwCynZSuvm0Cdq3csr60FcHLC_Xk4.jpg?width=216&crop=smart&auto=webp&s=985a2c4aa451fd784dccb9d422b892e5be2c1d21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MI3THquRgDiXtkGwCynZSuvm0Cdq3csr60FcHLC_Xk4.jpg?width=320&crop=smart&auto=webp&s=d17755722ec56c9fba1ff36aba70e8b5310a7eeb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MI3THquRgDiXtkGwCynZSuvm0Cdq3csr60FcHLC_Xk4.jpg?width=640&crop=smart&auto=webp&s=0ae6c4d987ccc5b9eff4fb95ca0547ed5c8a98a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MI3THquRgDiXtkGwCynZSuvm0Cdq3csr60FcHLC_Xk4.jpg?width=960&crop=smart&auto=webp&s=b1e700fbb58ae0697c842d64792c1e0a4230c3e6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MI3THquRgDiXtkGwCynZSuvm0Cdq3csr60FcHLC_Xk4.jpg?width=1080&crop=smart&auto=webp&s=58ed0b47fbca4978940d462ee28e80952b92246a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MI3THquRgDiXtkGwCynZSuvm0Cdq3csr60FcHLC_Xk4.jpg?auto=webp&s=b27a6f1604e3b91a2bd9b0c7676f021e274bb60a', 'width': 1200}, 'variants': {}}]}
Some German TTS samples made with XTTS.
1
2025-02-06T13:06:32
https://soundcloud.com/cylonius
77-81-6
soundcloud.com
1970-01-01T00:00:00
0
{}
1ij2331
false
{'oembed': {'author_name': 'Cylonius', 'author_url': 'https://soundcloud.com/cylonius', 'description': 'Listen to Cylonius | SoundCloud is an audio platform that lets you listen to what you love and share the sounds you create.', 'height': 500, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fw.soundcloud.com%2Fplayer%2F%3Fvisual%3Dtrue%26url%3Dhttps%253A%252F%252Fapi.soundcloud.com%252Fusers%252F1457059539%26show_artwork%3Dtrue&display_name=SoundCloud&url=https%3A%2F%2Fsoundcloud.com%2Fcylonius&image=https%3A%2F%2Fi1.sndcdn.com%2Favatars-dDz3XNZauxyEP4ci-hvpvyw-t500x500.jpg&type=text%2Fhtml&schema=soundcloud" width="500" height="500" scrolling="no" title="SoundCloud embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'SoundCloud', 'provider_url': 'https://soundcloud.com', 'thumbnail_height': 500, 'thumbnail_url': 'https://i1.sndcdn.com/avatars-dDz3XNZauxyEP4ci-hvpvyw-t500x500.jpg', 'thumbnail_width': 500, 'title': 'Cylonius', 'type': 'rich', 'version': '1.0', 'width': 500}, 'type': 'soundcloud.com'}
t3_1ij2331
/r/LocalLLaMA/comments/1ij2331/some_german_tts_samples_made_with_xtts/
false
false
https://b.thumbs.redditm…x3OkHLKqa1BQ.jpg
1
{'enabled': False, 'images': [{'id': '6zqn2qptXqn0kFNRVC198E36I-VkRYfTsDEkIfPfLhQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/66hbLz6WPSdRoc6QNOeA6fK94PYskGJQZfsM5m0ESwo.jpg?width=108&crop=smart&auto=webp&s=3e9fb907aef77729d5a6bcb7a10baa0f7968eca7', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/66hbLz6WPSdRoc6QNOeA6fK94PYskGJQZfsM5m0ESwo.jpg?width=216&crop=smart&auto=webp&s=ae8e5518c443957964edfba6f6bd334ed3bb6b60', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/66hbLz6WPSdRoc6QNOeA6fK94PYskGJQZfsM5m0ESwo.jpg?width=320&crop=smart&auto=webp&s=96c17ab69697912248bdbd63e0a2354cf8fb3dab', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/66hbLz6WPSdRoc6QNOeA6fK94PYskGJQZfsM5m0ESwo.jpg?auto=webp&s=08c01a4520fb23d0a6db2fb5b42159fe26f01fea', 'width': 500}, 'variants': {}}]}
Any spot to run DeepSeek R1 through API WITHOUT using the original site?
0
Maybe I'm just being paranoid, but I would gladly use the full-size DeepSeek through an API as long as it's hosted in the US by a US corp. Does this option exist at roughly the same cost as deepseek.com?
2025-02-06T13:10:25
https://www.reddit.com/r/LocalLLaMA/comments/1ij25th/any_spot_to_run_deepseek_r1_through_api_without/
StunningRegular8489
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij25th
false
null
t3_1ij25th
/r/LocalLLaMA/comments/1ij25th/any_spot_to_run_deepseek_r1_through_api_without/
false
false
self
0
null
OpenSource Python Library for Versioning, Exporting and Rolling back End2End RAG Pipelines
1
Hey everyone! I'm excited to share RagXO, an open-source library that lets you export, version, roll back, and reuse your end-to-end RAG pipeline anywhere. Feedback would be very much appreciated! https://github.com/mohamedfawzy96/ragxo
2025-02-06T13:31:41
https://www.reddit.com/r/LocalLLaMA/comments/1ij2l9z/opensource_python_library_for_versioning/
Sarcinismo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij2l9z
false
null
t3_1ij2l9z
/r/LocalLLaMA/comments/1ij2l9z/opensource_python_library_for_versioning/
false
false
self
1
null
RAG or fine-tuning to make a domain expert LLM
1
[removed]
2025-02-06T13:34:04
https://www.reddit.com/r/LocalLLaMA/comments/1ij2n2d/rag_or_finetuning_to_make_a_domain_expert_llm/
Mdipanjan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij2n2d
false
null
t3_1ij2n2d
/r/LocalLLaMA/comments/1ij2n2d/rag_or_finetuning_to_make_a_domain_expert_llm/
false
false
self
1
null
🎉 Being Thankful for Everyone Who Made This Project a Super Hit! 🚀
0
[removed]
2025-02-06T13:39:29
https://www.reddit.com/r/LocalLLaMA/comments/1ij2r03/being_thankful_for_everyone_who_made_this_project/
akhilpanja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij2r03
false
null
t3_1ij2r03
/r/LocalLLaMA/comments/1ij2r03/being_thankful_for_everyone_who_made_this_project/
false
false
self
0
{'enabled': False, 'images': [{'id': 'iDNxn3TQyM1Mg9QA8l-RMHYPcrNGoPQEX8r2ktTIccM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8sFViBh8zL0yrRQVDFuYfsN1GkM-E3xtMynNStM1LpM.jpg?width=108&crop=smart&auto=webp&s=e9a16ee232d1ea3b1cdbc8887a80d16c4067ee5c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8sFViBh8zL0yrRQVDFuYfsN1GkM-E3xtMynNStM1LpM.jpg?width=216&crop=smart&auto=webp&s=18dc00b8f5d41f7a7564cda3494753e448e00eaf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8sFViBh8zL0yrRQVDFuYfsN1GkM-E3xtMynNStM1LpM.jpg?width=320&crop=smart&auto=webp&s=2a472da1c3c5824166f9141f2cfbc1d87ed6bfba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8sFViBh8zL0yrRQVDFuYfsN1GkM-E3xtMynNStM1LpM.jpg?width=640&crop=smart&auto=webp&s=e45ca030334162ace3f51d3d4beb3eed56f21c94', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8sFViBh8zL0yrRQVDFuYfsN1GkM-E3xtMynNStM1LpM.jpg?width=960&crop=smart&auto=webp&s=fa2493fc017c99544610fad6ad83e998be8007de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8sFViBh8zL0yrRQVDFuYfsN1GkM-E3xtMynNStM1LpM.jpg?width=1080&crop=smart&auto=webp&s=445f42b674a380cad04211537e5f133140528bb7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8sFViBh8zL0yrRQVDFuYfsN1GkM-E3xtMynNStM1LpM.jpg?auto=webp&s=d177d3a7c646644efd937abea14f8d3c61c33167', 'width': 1200}, 'variants': {}}]}
Upgraded to 4090: Best Local LLM Options?
10
I recently upgraded my work computer with an RTX 4090 24GB. While I use Claude's $20 subscription as my go-to LLM, I'd like a local LLM backup for extensive coding sessions without API limits. Looking for recommendations in these categories:

1. General-purpose LLM
2. Coding-focused LLM
3. Image generation
4. Video generation (just to experiment a bit)

System specs: RTX 4090 24GB / Intel Core i7-13700 / 32GB (2x16GB) DDR5 5200 CL40 / 1TB Kingston NV2 M.2 SSD (3500MB/s, PCIe 4.0 NVMe). Let me know your top picks for my setup! Thanks!
2025-02-06T13:45:11
https://www.reddit.com/r/LocalLLaMA/comments/1ij2v6n/upgarded_to_4090_best_local_llm_options/
jaungoiko_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij2v6n
false
null
t3_1ij2v6n
/r/LocalLLaMA/comments/1ij2v6n/upgarded_to_4090_best_local_llm_options/
false
false
self
10
null
Looking for a website that contains ALL AI chatbots for a small per-use fee?
1
[removed]
2025-02-06T13:45:22
https://www.reddit.com/r/LocalLLaMA/comments/1ij2vbq/looking_for_website_that_contains_all_ai_chatbots/
MedicalParamedic9240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij2vbq
false
null
t3_1ij2vbq
/r/LocalLLaMA/comments/1ij2vbq/looking_for_website_that_contains_all_ai_chatbots/
false
false
self
1
null
LLamao is a shit app.
0
There; I purchased the "full" version so you wouldn't have to, and the fact that DeepSeek can't run on a phone from last year (S24 Ultra), because the app somehow can't recognise that more than 1GB of RAM is available, is BS.
2025-02-06T13:49:45
https://www.reddit.com/r/LocalLLaMA/comments/1ij2ykb/llamao_is_a_shit_app/
No_Heart_SoD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij2ykb
false
null
t3_1ij2ykb
/r/LocalLLaMA/comments/1ij2ykb/llamao_is_a_shit_app/
false
false
self
0
null
Llama.cpp slower than transformers - CPU inference
1
[removed]
2025-02-06T13:51:44
https://www.reddit.com/r/LocalLLaMA/comments/1ij3020/llamacpp_slower_than_transformers_cpu_inference/
Educational_Bake_439
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij3020
false
null
t3_1ij3020
/r/LocalLLaMA/comments/1ij3020/llamacpp_slower_than_transformers_cpu_inference/
false
false
self
1
null
Hibiki by kyutai, a simultaneous speech-to-speech translation model, currently supporting FR to EN
690
2025-02-06T13:59:31
https://v.redd.it/gpawbnvlyihe1
Nunki08
v.redd.it
1970-01-01T00:00:00
0
{}
1ij35u7
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gpawbnvlyihe1/DASHPlaylist.mpd?a=1741442383%2CZjgwMDAzNWE5NGU4NjVjZjdiM2E2MTAzMmQyOTI0OTAwNGZmYjVhYjRhMTZkZWIwZTYyNGQ3MzcxZWJkNDY5Yg%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/gpawbnvlyihe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/gpawbnvlyihe1/HLSPlaylist.m3u8?a=1741442383%2CYzM2MzJiMDg1MzdiNjk4YWU3NzRlMjQ1YTA2ODcxMmQ1ODY4YmY4ZTIxNThhODA1ODkyYzZkMjdmY2UzMzM0OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gpawbnvlyihe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1ij35u7
/r/LocalLLaMA/comments/1ij35u7/hibiki_by_kyutai_a_simultaneous_speechtospeech/
false
false
https://external-preview…3e23deecc116d93a
690
{'enabled': False, 'images': [{'id': 'Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO.png?width=108&crop=smart&format=pjpg&auto=webp&s=86b132400d64310561bbe354866881ad34eba071', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO.png?width=216&crop=smart&format=pjpg&auto=webp&s=5a619e950cb5728828a708b7af519743348289dd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO.png?width=320&crop=smart&format=pjpg&auto=webp&s=50024551e9ce2adfa3d78585cdca4e6bc3a4809f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO.png?width=640&crop=smart&format=pjpg&auto=webp&s=a03b6c5b67da1d1a5913fb8e0fe90c5f59e369d2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO.png?width=960&crop=smart&format=pjpg&auto=webp&s=856bc99c098c0a1dc5745a3c3828706bd6f8a976', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO.png?width=1080&crop=smart&format=pjpg&auto=webp&s=01115d78da038d64bd36ec8625626d92a56ec6d0', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Z3lrdWp0dmx5aWhlMaQ4EUN4_AgLY98885pUW0pYP7vfo05dn6YTgI9m58bO.png?format=pjpg&auto=webp&s=1be221147a7f1b2c9d4a6143d2769ddc55a6a876', 'width': 1280}, 'variants': {}}]}
[Guide] Mac Pro 2019 (MacPro7,1) w/ Linux & Local LLM/AI
1
[removed]
2025-02-06T14:54:08
https://www.reddit.com/r/LocalLLaMA/comments/1ij4c5b/guide_mac_pro_2019_macpro71_w_linux_local_llmai/
Faisal_Biyari
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4c5b
false
null
t3_1ij4c5b
/r/LocalLLaMA/comments/1ij4c5b/guide_mac_pro_2019_macpro71_w_linux_local_llmai/
false
false
self
1
null
Unpopular opinion: the Chatbot Arena benchmark is not useless, rather it is misunderstood. It is not necessarily a hard benchmark; rather, it benchmarks "what if the LLM answered common search-engine queries?"
24
From another thread: Gemini Flash Thinking is great on Chatbot Arena. But why? Before one jumps on the "Chatbot Arena sucks" bandwagon, one has to understand what is tested there. Many say "human preferences", but I think it is a bit different.

Most likely, people on Chatbot Arena test the LLMs with relatively simple questions, akin to "tell me how to write a function in X" rather than "this function doesn't work, fix it". Chatbot Arena (at least the overall category) is great for answering "which model would be great for everyday use instead of searching the web". And I think some companies, like Google, are optimizing exactly for that; hence Chatbot Arena is relevant for them. They want models that can substitute for or complement their search engine.

More often than not, people on Reddit complain that Claude or other models do not excel on Chatbot Arena (again, the overall category), and thus that the benchmark sucks. But that is because those people use the LLMs differently from the voters on Chatbot Arena. Asking an LLM to help with a niche (read: not that common on the internet) coding or debugging problem is harder than an "I use the LLM rather than the search engine" request. Hence some models are good on hard benchmarks but less good on a benchmark that, in the end, measures the "substitute a search engine for common questions" metric.

Therefore the point "I have a feeling all the current evals those model releases are using are just too far away from real work/life scenarios" is somewhat correct. If a model optimizes for Chatbot Arena / search-engine usage, then of course it is unlikely to be trained to consistently solve niche problems. And even with a benchmark that is more relevant to the use case (say, Aider, LiveBench, and so on), if an LLM is right 60% of the time, there is still a lot of work left for the person to fill the gaps.

Then it also depends on the prompts: I found articles in the past where prompts were compared and some could really extract more from an LLM. Those prompts are standardized and optimized in ad hoc benchmarks. On Chatbot Arena the prompts could be terrible; hence, once again, what is tested is "what people would type into an LLM-based search engine".

IMO, what the people from LMSYS offer as hard human-based benchmarking is:

- the "hard prompts" category for general cases
- the "longer query" category for general cases (most of the bullshit prompts, IMO, are short)
- for coding, the WebDev Arena Leaderboard, where Claude is #1 by a mile (so far): Claude 3.5 (from October '24) has 1250 Elo points, DeepSeek R1 1210, o3-mini-high 1161, and the next non-thinking model, Gemini exp 1206, has 1025. The 200+ point distance between Claude 3.5 and Gemini exp is massive, and thus I think Claude actually "thinks", at least in some domains. It cannot be that strong without thinking.
- It would be cool if Chatbot Arena added "hard prompts" for each specific subcategory, for example "math hard prompts", "coding hard prompts", and so on. But I guess that would dilute the votes too much and require too much classification every week.

This is to say: I think Chatbot Arena is very useful IF seen in the proper context, which is mostly "search engine / Stack Overflow replacement".
2025-02-06T14:54:12
https://www.reddit.com/r/LocalLLaMA/comments/1ij4c7h/unpopular_opinion_the_chatbot_arena_benchmark_is/
pier4r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4c7h
false
null
t3_1ij4c7h
/r/LocalLLaMA/comments/1ij4c7h/unpopular_opinion_the_chatbot_arena_benchmark_is/
false
false
self
24
null
Using LLM's to practice / learn a new language?
16
I would like to find the best way to leverage large language models (LLMs) to learn and practice a new language (Dutch). I am unsure what the best approach would be: should I use something like ChatGPT and instruct it to "roleplay" with me, pretending we're having a chat between friends, or is it better to host an LLM locally with a system prompt that instructs it to act like a person I have casual conversations with? Any pointers would be greatly appreciated. Thank you!
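For the locally hosted option, the whole setup is essentially one system prompt. A minimal sketch against an OpenAI-compatible local server (the Ollama URL, model name, and prompt wording are placeholder assumptions):

```python
# Casual Dutch conversation partner via a local OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

system = (
    "You are my Dutch friend. Chat casually in simple Dutch (A2 level). "
    "After each reply, add one short correction of my worst mistake, in English."
)
resp = client.chat.completions.create(
    model="llama3.1",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "Hoi! Ik heb gisteren naar de markt gegaan."},
    ],
)
print(resp.choices[0].message.content)
```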
2025-02-06T14:57:52
https://www.reddit.com/r/LocalLLaMA/comments/1ij4f9b/using_llms_to_practice_learn_a_new_language/
TheMikeans
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4f9b
false
null
t3_1ij4f9b
/r/LocalLLaMA/comments/1ij4f9b/using_llms_to_practice_learn_a_new_language/
false
false
self
16
null
DeepSeek-R1 for agentic tasks
1
[removed]
2025-02-06T15:02:26
https://www.reddit.com/r/LocalLLaMA/comments/1ij4j5q/deepseekr1_for_agentic_tasks/
c_stub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4j5q
false
null
t3_1ij4j5q
/r/LocalLLaMA/comments/1ij4j5q/deepseekr1_for_agentic_tasks/
false
false
self
1
null
Is there a way to enforce LM Studio to use all the system resources?
0
I tried to load a 32B model on my system just to see how long it would take to get a response to my queries. It took a while... I would accept that low system capacity is the cause if the app were using all the system's resources. But monitoring RAM, GPU, and CPU core usage simultaneously, I see the app is not using the system's full power; half the system is idle, which of course adds to the response time. Is there any way to fix this, or should I switch to another app?
2025-02-06T15:03:46
https://www.reddit.com/r/LocalLLaMA/comments/1ij4k7x/is_there_a_way_to_enforce_lm_studio_to_use_all/
ExtremePresence3030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4k7x
false
null
t3_1ij4k7x
/r/LocalLLaMA/comments/1ij4k7x/is_there_a_way_to_enforce_lm_studio_to_use_all/
false
false
self
0
null
Deepseek Bias Filter is applied separately from output.
1
[removed]
2025-02-06T15:04:55
https://www.reddit.com/r/LocalLLaMA/comments/1ij4l5i/deepseek_bias_filter_is_applied_separate_from/
AdDramatic5939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4l5i
false
null
t3_1ij4l5i
/r/LocalLLaMA/comments/1ij4l5i/deepseek_bias_filter_is_applied_separate_from/
false
false
self
1
null
DeepSeek-R1 for agentic tasks
1
[removed]
2025-02-06T15:14:02
https://www.reddit.com/r/LocalLLaMA/comments/1ij4stq/deepseekr1_for_agentic_tasks/
c_stub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4stq
false
null
t3_1ij4stq
/r/LocalLLaMA/comments/1ij4stq/deepseekr1_for_agentic_tasks/
false
false
self
1
null
DeepSeek-R1 for agentic tasks
1
[removed]
2025-02-06T15:19:19
https://www.reddit.com/r/LocalLLaMA/comments/1ij4x7s/deepseekr1_for_agentic_tasks/
think_mad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij4x7s
false
null
t3_1ij4x7s
/r/LocalLLaMA/comments/1ij4x7s/deepseekr1_for_agentic_tasks/
false
false
self
1
null
Any magnum-v4 fans out there? I just enjoy this model so much.
1
[removed]
2025-02-06T15:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1ij53eq/any_magnumv4_fans_out_there_i_just_enjoy_this/
PSInvader
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij53eq
false
null
t3_1ij53eq
/r/LocalLLaMA/comments/1ij53eq/any_magnumv4_fans_out_there_i_just_enjoy_this/
false
false
self
1
null
DeepSeek-R1 for agentic tasks
10
DeepSeek-R1 doesn't support tool use natively, but it can be used for agentic tasks through code actions. Here's an interesting blog post that describes this approach: https://krasserm.github.io/2025/02/05/deepseek-r1-agent/ It outperforms Claude 3.5 Sonnet by a large margin in a single-agent setup (65.6% vs 53.1% on a GAIA subset). The post also covers limitations of DeepSeek-R1 in this context, e.g. long reasoning traces and the "underthinking" phenomenon. Does anyone have experience with DeepSeek-R1 for agentic tasks and can share their approaches or thoughts?
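For anyone curious what "code actions" means concretely: instead of emitting JSON tool calls, the model writes a Python snippet that gets extracted and executed. A stripped-down sketch of that loop (not the linked post's actual implementation; `chat` is a placeholder for any DeepSeek-R1 completion call, and a real system would sandbox the execution):

```python
# Minimal code-action loop: prompt for a Python block, extract it, run it.
import re

def extract_code(reply: str) -> str | None:
    # Pull the first fenced Python block out of the model's reply, if any.
    m = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    return m.group(1) if m else None

def run_code_action(chat, task: str) -> str:
    prompt = (
        "Solve the task by writing one Python code block. "
        "Put the answer in a variable named `result`.\n\nTask: " + task
    )
    reply = chat(prompt)
    code = extract_code(reply)
    if code is None:
        return reply  # model answered directly, no action needed
    scope: dict = {}
    exec(code, scope)  # in practice: run in a sandbox, never raw exec
    return str(scope.get("result"))
```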
2025-02-06T15:46:07
https://www.reddit.com/r/LocalLLaMA/comments/1ij5jnh/deepseekr1_for_agentic_tasks/
semteXKG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5jnh
false
null
t3_1ij5jnh
/r/LocalLLaMA/comments/1ij5jnh/deepseekr1_for_agentic_tasks/
false
false
self
10
null
Memristor Chips for faster training LLM
2
https://www.instagram.com/reel/DFu3WK9vKQ2/?igsh=MWNybmU1ODlhamU5aw== What's the future of such chips?
2025-02-06T15:51:36
https://www.reddit.com/r/LocalLLaMA/comments/1ij5o8x/memristor_chips_for_faster_training_llm/
Stargazer-8989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5o8x
false
null
t3_1ij5o8x
/r/LocalLLaMA/comments/1ij5o8x/memristor_chips_for_faster_training_llm/
false
false
self
2
{'enabled': False, 'images': [{'id': 'zRUJFyUtSArwq3Gv9exlL_D-YIjt0F_Nm7CkgmJpapE', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/m1ZVuIIqSWKG3aif4LPk7mSGzwU17ai4Nvc8gMNusQA.jpg?width=108&crop=smart&auto=webp&s=d7ffb6f8277011a5eb095f766b7760c10806d143', 'width': 108}, {'height': 382, 'url': 'https://external-preview.redd.it/m1ZVuIIqSWKG3aif4LPk7mSGzwU17ai4Nvc8gMNusQA.jpg?width=216&crop=smart&auto=webp&s=dfc307847c8022245741c2ac078fef294f458f60', 'width': 216}, {'height': 567, 'url': 'https://external-preview.redd.it/m1ZVuIIqSWKG3aif4LPk7mSGzwU17ai4Nvc8gMNusQA.jpg?width=320&crop=smart&auto=webp&s=b03e8575b61c5188d778224b9bb7287510bb296d', 'width': 320}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/m1ZVuIIqSWKG3aif4LPk7mSGzwU17ai4Nvc8gMNusQA.jpg?auto=webp&s=397572a20685d43b9c09caf5862c409082c799e3', 'width': 361}, 'variants': {}}]}
MCP vs Function Calling with LLMs
1
[removed]
2025-02-06T15:51:57
https://www.reddit.com/r/LocalLLaMA/comments/1ij5oin/mcp_vs_function_calling_with_llms/
Play2enlight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5oin
false
null
t3_1ij5oin
/r/LocalLLaMA/comments/1ij5oin/mcp_vs_function_calling_with_llms/
false
false
self
1
null
GUI for API access LLMs
2
Hey community! Which GUI do you use for LLMs accessed through an API? Any open-source GUI similar to or better than the ChatGPT interface? I use Windows.
2025-02-06T15:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1ij5qzf/gui_for_api_access_llms/
maturelearner4846
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5qzf
false
null
t3_1ij5qzf
/r/LocalLLaMA/comments/1ij5qzf/gui_for_api_access_llms/
false
false
self
2
null
Best model for 16gb Ram M2 Mac?
5
Hi guys, I'm looking to use LM Studio on my 16GB RAM MacBook and wanted to know the best option for me. A long, long time ago I used Mistral 7B when it first came out! Time to refresh the models. A model that can also use vision would be great, but I'm happy to hear some options. Thank you.
2025-02-06T15:55:03
https://www.reddit.com/r/LocalLLaMA/comments/1ij5r5p/best_model_for_16gb_ram_m2_mac/
99OG121314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5r5p
false
null
t3_1ij5r5p
/r/LocalLLaMA/comments/1ij5r5p/best_model_for_16gb_ram_m2_mac/
false
false
self
5
null
DeepSeekV3 Context Length Discrepancy - 128k or 164k?
1
I noticed there's a discrepancy in the documented context length for DeepSeekV3:

- The config.json in the repository shows 164k context
- The model card on Hugging Face states 128k context

Has anyone tested the actual context length or knows which specification is correct? This information would be helpful for properly configuring the model.
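One way to check what the repository actually declares is to read `max_position_embeddings` straight from config.json (the repo id below is assumed to be `deepseek-ai/DeepSeek-V3`); note that 163840 tokens is where the "164k" figure comes from:

```python
# Fetch and inspect the model's declared context window from config.json.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="deepseek-ai/DeepSeek-V3", filename="config.json")
with open(path) as f:
    config = json.load(f)

# Expected: 163840 (~"164k"), vs the 128k stated on the model card.
print(config["max_position_embeddings"])
```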
2025-02-06T15:55:09
https://www.reddit.com/r/LocalLLaMA/comments/1ij5r9u/deepseekv3_context_length_discrepancy_128k_or_164k/
XTREME-GAMER26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5r9u
false
null
t3_1ij5r9u
/r/LocalLLaMA/comments/1ij5r9u/deepseekv3_context_length_discrepancy_128k_or_164k/
false
false
self
1
null
Mistral AI just released a mobile app
351
2025-02-06T15:56:41
https://mistral.ai/en/news/all-new-le-chat
According_to_Mission
mistral.ai
1970-01-01T00:00:00
0
{}
1ij5sma
false
null
t3_1ij5sma
/r/LocalLLaMA/comments/1ij5sma/mistral_ai_just_released_a_mobile_app/
false
false
https://a.thumbs.redditm…Dioa0HJU4cf4.jpg
351
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=216&crop=smart&auto=webp&s=4a5f46c5464cea72c64df6c73d58b15e102c5936', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=320&crop=smart&auto=webp&s=aa1e4abc763404a25bda9d60fe6440b747d6bae4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=640&crop=smart&auto=webp&s=122efd46018c04117aca71d80db3640d390428bd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=960&crop=smart&auto=webp&s=b53cfe1770ee2b37ce0f5b5e1b0fd67d3276a350', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=1080&crop=smart&auto=webp&s=278352f076c5bbdf8f6e7cecedab77d8794332ff', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?auto=webp&s=691d56b882a79feffdb4b780dc6a0db1b2c5d709', 'width': 4800}, 'variants': {}}]}
Are big models just stepping stones for distillation?
6
I've been thinking… the bigger models don’t seem to be getting much real-world usage. Are they just being built to get distilled? Feels like the industry is moving towards smaller, domain-specific models that are actually practical. What’s even the point of investing so much in these massive ones if they’re just going to be slimmed down later?
2025-02-06T16:00:42
https://www.reddit.com/r/LocalLLaMA/comments/1ij5w5b/are_big_models_just_stepping_stones_for/
iamnotdeadnuts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5w5b
false
null
t3_1ij5w5b
/r/LocalLLaMA/comments/1ij5w5b/are_big_models_just_stepping_stones_for/
false
false
self
6
null
How I Built an Open Source AI Tool to Find My Autoimmune Disease (After $100k and 30+ Hospital Visits) - Now Available for Anyone to Use
2,101
Hey everyone, I want to share something I built after my long health journey. For 5 years, I struggled with mysterious symptoms - getting injured easily during workouts, slow recovery, random fatigue, joint pain. I spent over $100k visiting more than 30 hospitals and specialists, trying everything from standard treatments to experimental protocols at longevity clinics. Changed diets, exercise routines, sleep schedules - nothing seemed to help.

The most frustrating part wasn't just the lack of answers - it was how fragmented everything was. Each doctor only saw their piece of the puzzle: the orthopedist looked at joint pain, the endocrinologist checked hormones, the rheumatologist ran their own tests. No one was looking at the whole picture. It wasn't until I visited a rheumatologist who looked at the combination of my symptoms and genetic test results that I learned I likely had an autoimmune condition. Interestingly, when I fed all my symptoms and medical data from before the rheumatologist visit into GPT, it suggested the same diagnosis I eventually received.

After sharing this experience, I discovered many others facing similar struggles with fragmented medical histories and unclear diagnoses. That's what motivated me to turn this into an open source tool for anyone to use. While it's still in early stages, it's functional and might help others in similar situations.

Here's what it looks like: https://i.redd.it/v6j508rxkjhe1.gif

https://github.com/OpenHealthForAll/open-health

**What it can do:**

* Upload medical records (PDFs, lab results, doctor notes)
* Automatically parses and standardizes lab results:
  - Converts different lab formats to a common structure
  - Normalizes units (mg/dL to mmol/L etc.)
  - Extracts key markers like CRP, ESR, CBC, vitamins
  - Organizes results chronologically
* Chat to analyze everything together:
  - Track changes in lab values over time
  - Compare results across different hospitals
  - Identify patterns across multiple tests
* Works with different AI models:
  - Local models like Deepseek (runs on your computer)
  - Or commercial ones like GPT-4/Claude if you have API keys

**Getting Your Medical Records:**

If you don't have your records as files:

- Check out [Fasten Health](https://github.com/fastenhealth/fasten-onprem) - it can help you fetch records from hospitals you've visited
- Makes it easier to get all your history in one place
- Works with most US healthcare providers

**Current Status:**

- Frontend is ready and open source
- Document parsing is currently on a separate Python server
- Planning to migrate this to run completely locally
- Will add to the repo once migration is done

Let me know if you have any questions about setting it up or using it!
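As a side note on the unit-normalization step: mg/dL to mmol/L is a per-marker scale factor derived from molar mass, so that part at least is simple to reproduce. A sketch with two standard clinical conversion factors (illustrative, not this project's actual code):

```python
# mg/dL -> mmol/L conversion is marker-specific: factor = 10 / molar mass.
MG_DL_TO_MMOL_L = {
    "glucose": 1 / 18.016,      # glucose molar mass ~180.16 g/mol
    "cholesterol": 1 / 38.67,   # cholesterol molar mass ~386.65 g/mol
}

def to_mmol_per_l(marker: str, value_mg_dl: float) -> float:
    return value_mg_dl * MG_DL_TO_MMOL_L[marker.lower()]

print(round(to_mmol_per_l("glucose", 100), 2))  # 100 mg/dL -> ~5.55 mmol/L
```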
2025-02-06T16:03:05
https://www.reddit.com/r/LocalLLaMA/comments/1ij5yf2/how_i_built_an_open_source_ai_tool_to_find_my/
Dry_Steak30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij5yf2
false
null
t3_1ij5yf2
/r/LocalLLaMA/comments/1ij5yf2/how_i_built_an_open_source_ai_tool_to_find_my/
false
false
https://a.thumbs.redditm…-0ptjxlTHnR4.jpg
2,101
null
How to run VLM/multimodals locally?
0
Noob here, is there an easy way (something like LM Studio) to run VLMs such as SmolVLM locally on Windows 11?
2025-02-06T16:12:52
https://www.reddit.com/r/LocalLLaMA/comments/1ij670a/how_to_run_vlmmultimodals_locally/
liselisungerbob
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij670a
false
null
t3_1ij670a
/r/LocalLLaMA/comments/1ij670a/how_to_run_vlmmultimodals_locally/
false
false
self
0
null
adaptive online quantization of LLMs using self-distillation scheme
1
OK, so take network Q and quantized network QQ under some granular quantization policy. Use KL(Q||QQ) + a total sum over QQ's network size as the loss. Batch user prompts, perhaps mixed with some synthetic prompts, and explore quantization policies under the KL loss. Some analysis of network structure - activation statistics, expert utilization statistics - can help drive more granular assays. This is limited by the physical granularity of the network and the floor of quantization loss (which might be somewhat elastic due to operational constraints). This could achieve significant VRAM requirement reductions, and since VRAM is a major driver of costs, that would be a good thing. I am ignorant of the literature - is this a dumb idea? Are people already doing this? I am a bit obsessed with reducing the size of the DeepSeek V3/R1 models. 8x H200 is a lot different than, say, 2x H200 or 2x MI300X.
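The objective itself is easy to write down. An illustrative PyTorch sketch of the proposed loss, with `lam` trading off fidelity against bit budget (the names and the granularity of the size term are assumptions, not an existing method's API):

```python
# KL(Q || QQ) between full-precision and quantized output distributions,
# plus a penalty on the quantized network's total bit budget.
import torch
import torch.nn.functional as F

def distill_quant_loss(logits_q, logits_qq, bits_per_param, lam=1e-9):
    kl = F.kl_div(
        F.log_softmax(logits_qq, dim=-1),   # input: log-probs of quantized net
        F.log_softmax(logits_q, dim=-1),    # target: full-precision, as log-probs
        log_target=True,
        reduction="batchmean",
    )
    size = bits_per_param.sum()             # total size under the current policy
    return kl + lam * size
```

Exploring policies then means re-quantizing under a candidate per-layer bit assignment, evaluating this loss on batched prompts, and keeping assignments that stay under a KL budget.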
2025-02-06T16:26:15
https://www.reddit.com/r/LocalLLaMA/comments/1ij6iks/adaptive_online_quantization_of_llms_using/
bitmoji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij6iks
false
null
t3_1ij6iks
/r/LocalLLaMA/comments/1ij6iks/adaptive_online_quantization_of_llms_using/
false
false
self
1
null
Recommendation for Tool Use LLMs
0
Hi, I'm trying to make an assistant that can properly recognize when to call functions. I've tried Groq's Llama 70B, which is decent but sometimes makes the wrong calls. I tried to tackle this by having a function called promptLLM whose description shows it's the more generic option when no other functions apply. But now I've found it also fakes parameters in certain functions. I was wondering if you have advice for other free API models that include tool use. Would I see better results with LangChain even though I'm using their format? All advice in this area is appreciated as I'm just entering it for the first time. Thanks.
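One thing that helps regardless of the model: tighten the tool schemas themselves. A sketch in the OpenAI-compatible tools format that Groq uses, with narrow descriptions, `required`, and an explicit generic fallback like the promptLLM idea above (the function names here are illustrative):

```python
# Narrow descriptions plus `required`/`additionalProperties` constraints
# give the model fewer openings to fabricate parameters or misroute calls.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "ONLY for current weather in a named city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. 'Oslo'"},
                },
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "promptLLM",  # the generic fallback described above
            "description": "Use for ANY request that no other tool matches.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
]
```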
2025-02-06T16:34:21
https://www.reddit.com/r/LocalLLaMA/comments/1ij6ptu/recommendation_for_tool_use_llms/
Icy_Appointment1597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij6ptu
false
null
t3_1ij6ptu
/r/LocalLLaMA/comments/1ij6ptu/recommendation_for_tool_use_llms/
false
false
self
0
null
Estimated time to fine-tune
1
[removed]
2025-02-06T16:51:06
https://www.reddit.com/r/LocalLLaMA/comments/1ij74za/estimated_time_to_finetune/
misterVector
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij74za
false
null
t3_1ij74za
/r/LocalLLaMA/comments/1ij74za/estimated_time_to_finetune/
false
false
self
1
null
Compensation for help getting a Flutter macOS app to work with llama.cpp
0
Any existing binding on flutter pub, or just using FFI, is fine. I have tried multiple bindings and pure FFI with no luck.
2025-02-06T17:27:22
https://www.reddit.com/r/LocalLLaMA/comments/1ij81mt/compensation_for_help_getting_a_flutter_macos_to/
g0_g6t_1t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij81mt
false
null
t3_1ij81mt
/r/LocalLLaMA/comments/1ij81mt/compensation_for_help_getting_a_flutter_macos_to/
false
false
self
0
null
LPDDR5x / Lenovo P1 CPU inference
1
[removed]
2025-02-06T17:45:09
https://www.reddit.com/r/LocalLLaMA/comments/1ij8h8o/lpddr5x_lenovo_p1_cpu_inference/
beauddl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij8h8o
false
null
t3_1ij8h8o
/r/LocalLLaMA/comments/1ij8h8o/lpddr5x_lenovo_p1_cpu_inference/
false
false
self
1
null
The end of programming as we know it *currently*
0
https://www.oreilly.com/radar/the-end-of-programming-as-we-know-it/
2025-02-06T17:52:26
https://www.reddit.com/r/LocalLLaMA/comments/1ij8nhy/the_end_of_programming_as_we_know_it_currently/
gigicr1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij8nhy
false
null
t3_1ij8nhy
/r/LocalLLaMA/comments/1ij8nhy/the_end_of_programming_as_we_know_it_currently/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Phbsgo1EovVrMo3xEmzfjhQtjc4e1Zt976vzijc8-fs', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/Qo4g4yQU6qxQZvQwN1ZmbaRkl_fAB4cIK9OFZMG99uw.jpg?width=108&crop=smart&auto=webp&s=8e5e41b132e19c290ef6e840c5a2258f3439cf40', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/Qo4g4yQU6qxQZvQwN1ZmbaRkl_fAB4cIK9OFZMG99uw.jpg?width=216&crop=smart&auto=webp&s=7b2bd09d702c4caa7df14e9f1533c2d2ccd0be66', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/Qo4g4yQU6qxQZvQwN1ZmbaRkl_fAB4cIK9OFZMG99uw.jpg?width=320&crop=smart&auto=webp&s=6993bf6eef6556eec65775a3f9be2e8ff23a9dec', 'width': 320}, {'height': 434, 'url': 'https://external-preview.redd.it/Qo4g4yQU6qxQZvQwN1ZmbaRkl_fAB4cIK9OFZMG99uw.jpg?width=640&crop=smart&auto=webp&s=c4de92e4bc70820f52e7b29a25bd84b3bf9cbd30', 'width': 640}, {'height': 651, 'url': 'https://external-preview.redd.it/Qo4g4yQU6qxQZvQwN1ZmbaRkl_fAB4cIK9OFZMG99uw.jpg?width=960&crop=smart&auto=webp&s=20a4448cf2e798d880f39d643998c9901d61309c', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/Qo4g4yQU6qxQZvQwN1ZmbaRkl_fAB4cIK9OFZMG99uw.jpg?width=1080&crop=smart&auto=webp&s=f64ef2817e6e40370451fa30c520b63ef3916014', 'width': 1080}], 'source': {'height': 950, 'url': 'https://external-preview.redd.it/Qo4g4yQU6qxQZvQwN1ZmbaRkl_fAB4cIK9OFZMG99uw.jpg?auto=webp&s=5c92c6da19c06411eb9b5df600d1f59dd8eee5bd', 'width': 1400}, 'variants': {}}]}
How to handle concurrent connections using vllm
1
[removed]
2025-02-06T17:58:05
https://www.reddit.com/r/LocalLLaMA/comments/1ij8sgm/how_to_handle_concurrent_connections_using_vllm/
sol1d_007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij8sgm
false
null
t3_1ij8sgm
/r/LocalLLaMA/comments/1ij8sgm/how_to_handle_concurrent_connections_using_vllm/
false
false
self
1
null
deepseek.cpp: CPU inference for the DeepSeek family of large language models in pure C++
280
2025-02-06T18:13:29
https://github.com/andrewkchan/deepseek.cpp
reasonableklout
github.com
1970-01-01T00:00:00
0
{}
1ij96e5
false
null
t3_1ij96e5
/r/LocalLLaMA/comments/1ij96e5/deepseekcpp_cpu_inference_for_the_deepseek_family/
false
false
https://b.thumbs.redditm…RXpEr2LczGVo.jpg
280
{'enabled': False, 'images': [{'id': 'qDDyzpzJyLtVEm3GrchhdrH-wa89Cm4I40fAvxVsdbw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xxUqvQ7bjDufrBkxeXC-RZ_b54GSDiLzEjIobqu9d1M.jpg?width=108&crop=smart&auto=webp&s=495982854c68179dd65ae2cb9e2a88aaba2c0afa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xxUqvQ7bjDufrBkxeXC-RZ_b54GSDiLzEjIobqu9d1M.jpg?width=216&crop=smart&auto=webp&s=3e9d51bb62b0d5f08b3a094397b6b2aa7c31b8bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xxUqvQ7bjDufrBkxeXC-RZ_b54GSDiLzEjIobqu9d1M.jpg?width=320&crop=smart&auto=webp&s=f2e3add94bcab24d864613d798a035643e95186b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xxUqvQ7bjDufrBkxeXC-RZ_b54GSDiLzEjIobqu9d1M.jpg?width=640&crop=smart&auto=webp&s=99ba4ad51d925840f322676fb0bc2f13784c90d1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xxUqvQ7bjDufrBkxeXC-RZ_b54GSDiLzEjIobqu9d1M.jpg?width=960&crop=smart&auto=webp&s=1bd1bb71568952716b129b2b778e8b81b24acd38', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xxUqvQ7bjDufrBkxeXC-RZ_b54GSDiLzEjIobqu9d1M.jpg?width=1080&crop=smart&auto=webp&s=eaf9629382026441ce174657ecd48771dc1e5a60', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xxUqvQ7bjDufrBkxeXC-RZ_b54GSDiLzEjIobqu9d1M.jpg?auto=webp&s=301ea3d332fcaeba0779321076576d56d00026ed', 'width': 1200}, 'variants': {}}]}
Looking for best model to use for SEO content Writing
0
I built an automation that scrapes the top 3 web results and then writes an SEO-optimized article designed to outrank the competition. I've tried Llama 3.2 and DeepSeek-R1 via Ollama. DeepSeek-R1 has been hit or miss; I don't like how it includes the thinking when it loads the doc to Google Drive. This is my maiden voyage using n8n, experimenting with new models. The automation runs like this: Enter KW > Scrape top 3 Google search results > a bunch of data-cleaning loops and code > data extractor and summarizer (DeepSeek 14B) > SEO content agent (Llama 3.2) > humanizer content agent (Llama 3.2) > create from text in Google Drive. I have 24GB of VRAM and an Intel i9-14900. Not enough juice for the new Llama 3.3. Relatively new to local models, just hoping someone can point me in the right direction.
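On the "includes the thinking" problem: DeepSeek-R1 wraps its reasoning in `<think>...</think>` tags, so it can be stripped before the Google Drive step, e.g. in an n8n Code node. A minimal sketch (the sample text is illustrative):

```python
# Remove DeepSeek-R1's <think>...</think> reasoning block from a completion.
import re

def strip_thinking(text: str) -> str:
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>Plan the outline first...</think>## Best Hiking Boots of 2025\n..."
print(strip_thinking(raw))  # -> "## Best Hiking Boots of 2025 ..."
```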
2025-02-06T18:32:28
https://www.reddit.com/r/LocalLLaMA/comments/1ij9nfg/looking_for_best_model_to_use_for_seo_content/
Verryfastdoggo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij9nfg
false
null
t3_1ij9nfg
/r/LocalLLaMA/comments/1ij9nfg/looking_for_best_model_to_use_for_seo_content/
false
false
self
0
null
Recommend me courses or project
1
[removed]
2025-02-06T18:40:50
https://www.reddit.com/r/LocalLLaMA/comments/1ij9ur0/recommend_me_courses_or_project/
Zeltr3x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ij9ur0
false
null
t3_1ij9ur0
/r/LocalLLaMA/comments/1ij9ur0/recommend_me_courses_or_project/
false
false
self
1
null
A Gentle Intro to Running a Local LLM (For Complete Beginners)
36
2025-02-06T18:50:42
https://www.dbreunig.com/2025/02/04/a-gentle-intro-to-running-a-local-llm.html
contextbot
dbreunig.com
1970-01-01T00:00:00
0
{}
1ija355
false
null
t3_1ija355
/r/LocalLLaMA/comments/1ija355/a_gentle_intro_to_running_a_local_llm_for/
false
false
https://b.thumbs.redditm…jyDtJCYqpINE.jpg
36
{'enabled': False, 'images': [{'id': 't7pRd5FuzZoZW_ftNzXaqzBoAbh5SBOOnRRPnW2y4V0', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/htZiSEyUB16U9gyyKeUrjXb4JczPOO0TCspO30BQGiQ.jpg?width=108&crop=smart&auto=webp&s=f8bf25b4f73147b0fb12ca88c760c982bd523c1e', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/htZiSEyUB16U9gyyKeUrjXb4JczPOO0TCspO30BQGiQ.jpg?width=216&crop=smart&auto=webp&s=7a28b07f4cd24f9f388821345d5bd0018cbee0ff', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/htZiSEyUB16U9gyyKeUrjXb4JczPOO0TCspO30BQGiQ.jpg?width=320&crop=smart&auto=webp&s=fdcba77acaa39566da793b1c558d4de565a775f9', 'width': 320}, {'height': 374, 'url': 'https://external-preview.redd.it/htZiSEyUB16U9gyyKeUrjXb4JczPOO0TCspO30BQGiQ.jpg?width=640&crop=smart&auto=webp&s=8beee75085a759caf2ded5e73f1204392da5d02e', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/htZiSEyUB16U9gyyKeUrjXb4JczPOO0TCspO30BQGiQ.jpg?width=960&crop=smart&auto=webp&s=041c88fade51eee50a0d00b49aaec361830212ab', 'width': 960}, {'height': 632, 'url': 'https://external-preview.redd.it/htZiSEyUB16U9gyyKeUrjXb4JczPOO0TCspO30BQGiQ.jpg?width=1080&crop=smart&auto=webp&s=19c45dcf4a4cc14162bf77bbd37bf605801ff54a', 'width': 1080}], 'source': {'height': 1168, 'url': 'https://external-preview.redd.it/htZiSEyUB16U9gyyKeUrjXb4JczPOO0TCspO30BQGiQ.jpg?auto=webp&s=6a7b2dad2008706e6581061dc5bd683d4bddf3dc', 'width': 1994}, 'variants': {}}]}
DeepSeek Llama 3.3 + Open-Webui Artifacts Overhaul Fork = BEST LOCAL CLAUDE/OAI CANVAS REPLACEMENT!
124
[React Renderer](https://preview.redd.it/1iqbdg4y3khe1.png?width=1293&format=png&auto=webp&s=7695c218715a211f47f5bc37aa8309fc6bb8cc62) [Full Tailwind support w/ preview](https://preview.redd.it/vb2iknfy3khe1.png?width=1370&format=png&auto=webp&s=6a2226fcf5d30f59a1d3453c374e48c16fd82156) [Difference viewer](https://preview.redd.it/an4w4onrekhe1.png?width=1283&format=png&auto=webp&s=fc67cfdb95a8c1990a3f7aaf821bf4297d962977) Hello everyone! I have gotten a lot of real-world use this week out of the open-webui-artifacts-overhaul version of Open WebUI. It has been AMAZING at work and has completely replaced my need for Claude's or OpenAI's artifacts. Of course, full disclaimer: I am the creator of this fork -- but all the features requested were from YOU, the community. I didn't realize how much I needed these features in my life; they really bring Open WebUI up to par with the UIs provided by SOTA models. Feel free to try it out yourself! [https://www.github.com/nick-tonjum/open-webui-artifacts-overhaul](https://www.github.com/nick-tonjum/open-webui-artifacts-overhaul) I believe this will be another couple of weeks of real-world testing to iron out bugs and implement more features requested by the community. Please feel free to help out and submit issues and feature requests.
2025-02-06T18:51:33
https://www.reddit.com/r/LocalLLaMA/comments/1ija3v4/deepseek_llama_33_openwebui_artifacts_overhaul/
maxwell321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ija3v4
false
null
t3_1ija3v4
/r/LocalLLaMA/comments/1ija3v4/deepseek_llama_33_openwebui_artifacts_overhaul/
false
false
https://b.thumbs.redditm…TrLB-3ODgYdo.jpg
124
{'enabled': False, 'images': [{'id': 'XC1oi8iyR8eO8WlbuvyFBDv6HYqdHT7pvEdrpJeoYek', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GEk7Ll7QhkvaFAkkOayBbV1OyQKQVaWruZ9jP8F9VEw.jpg?width=108&crop=smart&auto=webp&s=e7393c83cde8d89e7e4757fd013fb3342c7bfedf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GEk7Ll7QhkvaFAkkOayBbV1OyQKQVaWruZ9jP8F9VEw.jpg?width=216&crop=smart&auto=webp&s=aae484385a1bce84a011b5fe4a2ec34d15ed4d92', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GEk7Ll7QhkvaFAkkOayBbV1OyQKQVaWruZ9jP8F9VEw.jpg?width=320&crop=smart&auto=webp&s=d6585871cdd8068b8039f3dae449b549d156442b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GEk7Ll7QhkvaFAkkOayBbV1OyQKQVaWruZ9jP8F9VEw.jpg?width=640&crop=smart&auto=webp&s=342c762f73e43798fe1835ee49e5c48ce5e3e306', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GEk7Ll7QhkvaFAkkOayBbV1OyQKQVaWruZ9jP8F9VEw.jpg?width=960&crop=smart&auto=webp&s=85d0caf43f9db7b7bae0d0de7bb37ad75537a95c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GEk7Ll7QhkvaFAkkOayBbV1OyQKQVaWruZ9jP8F9VEw.jpg?width=1080&crop=smart&auto=webp&s=21ba954b1521bc9a0f6b12aa279d753743a0ee99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GEk7Ll7QhkvaFAkkOayBbV1OyQKQVaWruZ9jP8F9VEw.jpg?auto=webp&s=eac1d577fa6e7b030f0fd60987b8c935adb32502', 'width': 1200}, 'variants': {}}]}
Train your own Reasoning model - 80% less VRAM - GRPO now in Unsloth (7GB VRAM min.)
1,353
Hey r/LocalLLaMA! We're excited to introduce reasoning in [Unsloth](https://github.com/unslothai/unsloth/releases/tag/2025-02) so you can now reproduce R1's "aha" moment locally. You'll only need **7GB of VRAM** to do it with Qwen2.5 (1.5B).

1. This is done through **GRPO**, and we've enhanced the entire process to make it use **80% less VRAM**. Try it in the [Colab notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb) for Llama 3.1 8B!
2. [Tiny-Zero](https://github.com/Jiayi-Pan/TinyZero) demonstrated that you could achieve your own "aha" moment with Qwen2.5 (1.5B) - but it required a minimum of 4xA100 GPUs (160GB VRAM). Now, with Unsloth, you can achieve the same "aha" moment using just a single 7GB VRAM GPU.
3. Previously GRPO only worked with FFT, but we made it work with QLoRA and LoRA.
4. With 15GB VRAM, you can transform Phi-4 (14B), Llama 3.1 (8B), Mistral (12B), or any model up to 15B parameters into a reasoning model.

Blog for more details: https://unsloth.ai/blog/r1-reasoning

|[Llama 3.1 8B Colab Link](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb)|[Phi-4 14B Colab Link](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb)|[Qwen 2.5 3B Colab Link](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(3B)-GRPO.ipynb)|
|:-|:-|:-|
|Llama 8B needs ~13GB|Phi-4 14B needs ~15GB|Qwen 3B needs ~7GB|

I plotted the rewards curve for a specific run:

https://preview.redd.it/xj5rtk69fkhe1.png?width=2057&format=png&auto=webp&s=a25a3a96393be54bc9687258df49329a56d530d7

Unsloth also now has 20x faster inference via vLLM! Please update Unsloth and vLLM via:

`pip install --upgrade --no-cache-dir --force-reinstall unsloth_zoo unsloth vllm`

P.S. thanks for all your overwhelming love and support for our R1 Dynamic 1.58-bit GGUF last week! Things like this really keep us going so thank you again. Happy reasoning!
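If you want a feel for the shape of a run, here's a minimal sketch (assumptions: trl's `GRPOTrainer` API that Unsloth patches; the model choice, toy dataset, and reward function below are illustrative, not taken from our notebooks):

```python
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer
from datasets import Dataset

# Load a small model in 4-bit (QLoRA) and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-1.5B-Instruct",
    max_seq_length=1024,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Toy dataset: a prompt column plus the expected final answer.
train = Dataset.from_list([
    {"prompt": "What is 12 * 7? Think step by step.", "answer": "84"},
])

def correctness_reward(completions, answer, **kwargs):
    # +1 if the sampled completion contains the expected answer, else 0.
    return [1.0 if a in c else 0.0 for c, a in zip(completions, answer)]

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=correctness_reward,
    args=GRPOConfig(
        output_dir="grpo-out",
        per_device_train_batch_size=4,
        num_generations=4,          # samples per prompt for the group baseline
        max_completion_length=256,
        max_steps=50,
    ),
    train_dataset=train,
)
trainer.train()
```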
2025-02-06T18:59:49
https://www.reddit.com/r/LocalLLaMA/comments/1ijab77/train_your_own_reasoning_model_80_less_vram_grpo/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijab77
false
null
t3_1ijab77
/r/LocalLLaMA/comments/1ijab77/train_your_own_reasoning_model_80_less_vram_grpo/
false
false
https://b.thumbs.redditm…GJVHxHSFfH5I.jpg
1,353
{'enabled': False, 'images': [{'id': 'q9OOAMdMIqAFPA8V1MgkJkcKPfe8oDceFrcKTEaD6LI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/to7Gx1lMl0voSDkT7id5Fh2N7SEb6nUJ2HQzl2en4NU.jpg?width=108&crop=smart&auto=webp&s=3c71ca1eb635fe45bbed572d1cdc09c44cf345fa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/to7Gx1lMl0voSDkT7id5Fh2N7SEb6nUJ2HQzl2en4NU.jpg?width=216&crop=smart&auto=webp&s=ea557ba3025105e11c9527100cdf1181ac4c45e6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/to7Gx1lMl0voSDkT7id5Fh2N7SEb6nUJ2HQzl2en4NU.jpg?width=320&crop=smart&auto=webp&s=92352ab30141ad67bb6e792e0c27f972d5ecabaa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/to7Gx1lMl0voSDkT7id5Fh2N7SEb6nUJ2HQzl2en4NU.jpg?width=640&crop=smart&auto=webp&s=598db52b5cff8719b6abbc6affa07d300858717d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/to7Gx1lMl0voSDkT7id5Fh2N7SEb6nUJ2HQzl2en4NU.jpg?width=960&crop=smart&auto=webp&s=bfa073fe76ff7f232834c53fe1408068912b27c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/to7Gx1lMl0voSDkT7id5Fh2N7SEb6nUJ2HQzl2en4NU.jpg?width=1080&crop=smart&auto=webp&s=c10bf7da020ca7fb393537e413ffb64594dd64a7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/to7Gx1lMl0voSDkT7id5Fh2N7SEb6nUJ2HQzl2en4NU.jpg?auto=webp&s=ea309faba4b00ee9cfda476e2268720bfd434577', 'width': 1200}, 'variants': {}}]}
What API GUI, LLM API and IDE for game development?
3
Suppose you wanted to make an MMORPG in Unity. What model, IDE, or LLM GUI would you use, and why?
2025-02-06T19:07:38
https://www.reddit.com/r/LocalLLaMA/comments/1ijainy/what_api_gui_llm_api_and_ide_for_game_development/
Material_Key7014
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijainy
false
null
t3_1ijainy
/r/LocalLLaMA/comments/1ijainy/what_api_gui_llm_api_and_ide_for_game_development/
false
false
self
3
null
Do you have kokoro gguf file?
1
[removed]
2025-02-06T19:08:58
https://www.reddit.com/r/LocalLLaMA/comments/1ijajw2/do_you_have_kokoro_gguf_file/
zoneofgenius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijajw2
false
null
t3_1ijajw2
/r/LocalLLaMA/comments/1ijajw2/do_you_have_kokoro_gguf_file/
false
false
self
1
null
Why can't even the most advanced models figure out which number is higher than another when there's decimals involved?
1
[removed]
2025-02-06T19:17:02
https://www.reddit.com/r/LocalLLaMA/comments/1ijar8n/why_cant_even_the_most_advanced_models_figure_out/
Rollingsound514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijar8n
false
null
t3_1ijar8n
/r/LocalLLaMA/comments/1ijar8n/why_cant_even_the_most_advanced_models_figure_out/
false
false
self
1
null
Behold, the results of training a 1.49B llama from scratch for 13 hours on a 4060T
1
[deleted]
2025-02-06T19:18:21
[deleted]
1970-01-01T00:00:00
0
{}
1ijasf5
false
null
t3_1ijasf5
/r/LocalLLaMA/comments/1ijasf5/behold_the_results_of_training_a_149b_llama_from/
false
false
default
1
null
Behold: The results of training a 1.49B llama for 13 hours on a single 4060Ti 16GB (20M tokens)
359
2025-02-06T19:21:19
https://www.reddit.com/gallery/1ijauz4
Master-Meal-77
reddit.com
1970-01-01T00:00:00
0
{}
1ijauz4
false
null
t3_1ijauz4
/r/LocalLLaMA/comments/1ijauz4/behold_the_results_of_training_a_149b_llama_for/
false
false
https://b.thumbs.redditm…z7tZF5JbY5Uo.jpg
359
null
GitHub Copilot: The agent awakens
61
"Today, we are upgrading GitHub Copilot with the force of even more agentic AI – introducing agent mode and announcing the General Availability of Copilot Edits, both in VS Code. We are adding Gemini 2.0 Flash to the model picker for all Copilot users. And we unveil a first look at Copilot’s new autonomous agent, codenamed Project Padawan. From code completions, chat, and multi-file edits to workspace and agents, Copilot puts the human at the center of the creative work that is software development. AI helps with the things you don’t want to do, so you have more time for the things you do."
2025-02-06T19:24:03
https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/
FullstackSensei
github.blog
1970-01-01T00:00:00
0
{}
1ijaxeo
false
null
t3_1ijaxeo
/r/LocalLLaMA/comments/1ijaxeo/github_copilot_the_agent_awakens/
false
false
https://b.thumbs.redditm…zK2Xsih7RETA.jpg
61
{'enabled': False, 'images': [{'id': 'yNwz5InHYY8qxxvcApEwI0SzRoMdaOX2ikmL2F92nKQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FNfBYXEZ6fJCMtp9hdpnkrJv53x_UkhrdS6GkDW1sh8.jpg?width=108&crop=smart&auto=webp&s=9ad1fd4a51b58829f60e9737251f28a04f2ddf1a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/FNfBYXEZ6fJCMtp9hdpnkrJv53x_UkhrdS6GkDW1sh8.jpg?width=216&crop=smart&auto=webp&s=266d6a11d3f23fc279602aca4c08e2540c04b098', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/FNfBYXEZ6fJCMtp9hdpnkrJv53x_UkhrdS6GkDW1sh8.jpg?width=320&crop=smart&auto=webp&s=7041084b753c8aa12c2d11ac2346fbf760ef11ed', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/FNfBYXEZ6fJCMtp9hdpnkrJv53x_UkhrdS6GkDW1sh8.jpg?width=640&crop=smart&auto=webp&s=000f375a894580879f1aa2a41815b8914c25cff3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/FNfBYXEZ6fJCMtp9hdpnkrJv53x_UkhrdS6GkDW1sh8.jpg?width=960&crop=smart&auto=webp&s=e15a34510835932cc7fcf8bdfd48e75fa281a6d9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/FNfBYXEZ6fJCMtp9hdpnkrJv53x_UkhrdS6GkDW1sh8.jpg?width=1080&crop=smart&auto=webp&s=2cb31ed331d0a214487262eebce7b05b8f806152', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/FNfBYXEZ6fJCMtp9hdpnkrJv53x_UkhrdS6GkDW1sh8.jpg?auto=webp&s=ea406a8e206ec7369c7d42508a2749e31dfafb52', 'width': 1200}, 'variants': {}}]}
Share your favorite benchmarks, here are mine.
1
[removed]
2025-02-06T19:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1ijb6a3/share_your_favorite_benchmarks_here_are_mine/
Mr-Barack-Obama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijb6a3
false
null
t3_1ijb6a3
/r/LocalLLaMA/comments/1ijb6a3/share_your_favorite_benchmarks_here_are_mine/
false
false
self
1
null
Share your favorite benchmarks, here are mine.
1
My favorite overall benchmark is livebench.ai. If you click "show subcategories" for the language average, you can rank by **plot_unscrambling**, which to me is the most important benchmark for writing. Vals.ai is useful for tax and law intelligence. The rest are interesting as well:

* github vectara hallucination-leaderboard
* artificialanalysis.ai
* simple-bench
* agi safe ai
* aider
* eqbench creative_writing
* github lechmazur writing

Please share your favorite benchmarks too! I'd love to see some long-context benchmarks.
2025-02-06T19:40:31
https://www.reddit.com/r/LocalLLaMA/comments/1ijbbdc/share_your_favorite_benchmarks_here_are_mine/
Mr-Barack-Obama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbbdc
false
null
t3_1ijbbdc
/r/LocalLLaMA/comments/1ijbbdc/share_your_favorite_benchmarks_here_are_mine/
false
false
self
1
null
I’ve built metrics to evaluate any tool-calling AI agent (would love some feedback!)
1
Hey everyone! It seems like there are a lot of LLM evaluation metrics out there, but AI agent evaluation still feels pretty early. I couldn't find many general-purpose metrics: most research metrics come from benchmarks like AgentBench or SWE-bench, which are great but very specific to their tasks (e.g., win rate in a card game or code correctness). So, I thought it would be helpful to create metrics for tool-using agents that work across different use cases.

I've built 2 simple metrics so far, and would love to get some feedback!

* **Tool Correctness** – Not just exact matches, but also considers things like whether the right tool was chosen, input parameters, ordering, and outputs.
* **Task Completion** – Checks if the tool calls actually lead to completing the task.

If you've worked on eval for AI agents, I'd love to hear how you approach it, and what other metrics you think would be useful (e.g., evaluating reasoning, or tool efficiency?).

Any thoughts or feedback would be really appreciated. You can check out the first two metrics here; I'd love to expand the list to cover more agent metrics soon! (built as part of deepeval) https://docs.confident-ai.com/docs/metrics-tool-correctness
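To make the first metric concrete, here's a rough sketch of the scoring idea (illustrative only; deepeval's actual implementation, linked above, is more nuanced):

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

def tool_correctness(expected: list[ToolCall], actual: list[ToolCall]) -> float:
    # Score each position: 0.5 for choosing the right tool in the right
    # order, plus 0.5 for matching its input parameters exactly.
    score = 0.0
    for exp, act in zip(expected, actual):
        if exp.name == act.name:
            score += 0.5
            if exp.args == act.args:
                score += 0.5
    # Normalize by the longer trace so extra or missing calls are penalized.
    return score / max(len(expected), len(actual), 1)

expected = [ToolCall("get_weather", {"city": "Paris"})]
actual = [ToolCall("get_weather", {"city": "Paris"})]
print(tool_correctness(expected, actual))  # 1.0
```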
2025-02-06T19:46:31
https://www.reddit.com/r/LocalLLaMA/comments/1ijbgqm/ive_built_metrics_to_evaluate_any_toolcalling_ai/
FlimsyProperty8544
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbgqm
false
null
t3_1ijbgqm
/r/LocalLLaMA/comments/1ijbgqm/ive_built_metrics_to_evaluate_any_toolcalling_ai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'yoZXwoEJ_AqovES7UhSQDw3ob0wnoaSLuoIie5wJRHU', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/GWXhA0UgzFA4MPzAXYmnWYhvvQ0SscnuRFQOjM7Zcgk.jpg?width=108&crop=smart&auto=webp&s=5beae4f94e6638c3a1988592fa8a7a0aad887f91', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/GWXhA0UgzFA4MPzAXYmnWYhvvQ0SscnuRFQOjM7Zcgk.jpg?width=216&crop=smart&auto=webp&s=fb3410a1eb94a2105df0a67e67c05f95f6173b35', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/GWXhA0UgzFA4MPzAXYmnWYhvvQ0SscnuRFQOjM7Zcgk.jpg?width=320&crop=smart&auto=webp&s=f0346c2d80c820a2e2960e5b5bc6a43008eca214', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/GWXhA0UgzFA4MPzAXYmnWYhvvQ0SscnuRFQOjM7Zcgk.jpg?width=640&crop=smart&auto=webp&s=f6912b9b82a49489d8f4a612eb9e13f0543f0641', 'width': 640}, {'height': 493, 'url': 'https://external-preview.redd.it/GWXhA0UgzFA4MPzAXYmnWYhvvQ0SscnuRFQOjM7Zcgk.jpg?width=960&crop=smart&auto=webp&s=8d4329de3d393ec8b03b76120c8f276d4d7f2c0e', 'width': 960}, {'height': 554, 'url': 'https://external-preview.redd.it/GWXhA0UgzFA4MPzAXYmnWYhvvQ0SscnuRFQOjM7Zcgk.jpg?width=1080&crop=smart&auto=webp&s=b6778c5bc2889bd47d9e0c697fa48a6835fa82a4', 'width': 1080}], 'source': {'height': 636, 'url': 'https://external-preview.redd.it/GWXhA0UgzFA4MPzAXYmnWYhvvQ0SscnuRFQOjM7Zcgk.jpg?auto=webp&s=aba937ec2206a83f1c194e3084c10efb5d8f7dc8', 'width': 1238}, 'variants': {}}]}
Bodhi App - Run LLMs Locally
0
[removed]
2025-02-06T19:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1ijbi9z/bodhi_app_run_llms_locally/
anagri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbi9z
false
null
t3_1ijbi9z
/r/LocalLLaMA/comments/1ijbi9z/bodhi_app_run_llms_locally/
false
false
self
0
{'enabled': False, 'images': [{'id': 'T5fv0_w1CZIgXsJgNh69JcEGcXSbl16gBTSlr2YnJ7I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=108&crop=smart&auto=webp&s=0cb6729c07c1ef9127f81ac62f2797e7431d8732', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=216&crop=smart&auto=webp&s=18989517e393e7cc5e6d16baded1479357a0df10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=320&crop=smart&auto=webp&s=bb90074a866b86635792ddee71e1ddf1c7fb7620', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=640&crop=smart&auto=webp&s=3b6828b00b2334903dba36aca2e7fb576a0f228d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=960&crop=smart&auto=webp&s=e5010a32e74614957a49c263248a7106748cd243', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=1080&crop=smart&auto=webp&s=f0d2569a7571d81ac910c68fb15fde137f5a1170', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?auto=webp&s=aaa2af7d751987ddfe54ce10d98a44e7c63a307e', 'width': 1200}, 'variants': {}}]}
Deepseek Dekstop Version Faster Prompting
1
[removed]
2025-02-06T19:48:17
https://www.reddit.com/r/LocalLLaMA/comments/1ijbibm/deepseek_dekstop_version_faster_prompting/
Effective-Machine187
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbibm
false
null
t3_1ijbibm
/r/LocalLLaMA/comments/1ijbibm/deepseek_dekstop_version_faster_prompting/
false
false
self
1
{'enabled': False, 'images': [{'id': '4UXqOabapgCzu0TRqBpr6TXSD1xrCktb9f81rzCzpXk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5IUZiA9h8EXWHUFo_tqwdFsBCTmzmXczExBuRByuPV0.jpg?width=108&crop=smart&auto=webp&s=3bb7d2a5fa62c4ec7d8faa42e99179461ea2b600', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5IUZiA9h8EXWHUFo_tqwdFsBCTmzmXczExBuRByuPV0.jpg?width=216&crop=smart&auto=webp&s=c9d41078e019b5fdf256681e15cf8d8e325f7dbe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5IUZiA9h8EXWHUFo_tqwdFsBCTmzmXczExBuRByuPV0.jpg?width=320&crop=smart&auto=webp&s=45f8f63baa749f6f35f43d1bf31d76faa49b7bd6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5IUZiA9h8EXWHUFo_tqwdFsBCTmzmXczExBuRByuPV0.jpg?width=640&crop=smart&auto=webp&s=b9f57842b17f1ac9815b4b593c1f9481ce111eb6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5IUZiA9h8EXWHUFo_tqwdFsBCTmzmXczExBuRByuPV0.jpg?width=960&crop=smart&auto=webp&s=20b82dbab5fc01ed6e8533af06c6c94058ce7318', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5IUZiA9h8EXWHUFo_tqwdFsBCTmzmXczExBuRByuPV0.jpg?width=1080&crop=smart&auto=webp&s=f68f53e1f3a545d7d1b76a3ded471bd62669aec2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5IUZiA9h8EXWHUFo_tqwdFsBCTmzmXczExBuRByuPV0.jpg?auto=webp&s=cfd292e7d5b9a03e48dc91b417a7a36ec17cb90e', 'width': 1200}, 'variants': {}}]}
Check out a new way to run LLMs locally - Bodhi App
0
[removed]
2025-02-06T19:50:59
https://www.reddit.com/r/LocalLLaMA/comments/1ijbkmv/check_out_a_new_way_to_run_llms_locally_bodhi_app/
anagri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbkmv
false
null
t3_1ijbkmv
/r/LocalLLaMA/comments/1ijbkmv/check_out_a_new_way_to_run_llms_locally_bodhi_app/
false
false
self
0
{'enabled': False, 'images': [{'id': 'T5fv0_w1CZIgXsJgNh69JcEGcXSbl16gBTSlr2YnJ7I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=108&crop=smart&auto=webp&s=0cb6729c07c1ef9127f81ac62f2797e7431d8732', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=216&crop=smart&auto=webp&s=18989517e393e7cc5e6d16baded1479357a0df10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=320&crop=smart&auto=webp&s=bb90074a866b86635792ddee71e1ddf1c7fb7620', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=640&crop=smart&auto=webp&s=3b6828b00b2334903dba36aca2e7fb576a0f228d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=960&crop=smart&auto=webp&s=e5010a32e74614957a49c263248a7106748cd243', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?width=1080&crop=smart&auto=webp&s=f0d2569a7571d81ac910c68fb15fde137f5a1170', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XwuxqKSsC3U--FI_g9g_B1V-fCRQ3nHZmjd_ibYqNnY.jpg?auto=webp&s=aaa2af7d751987ddfe54ce10d98a44e7c63a307e', 'width': 1200}, 'variants': {}}]}
I built a grammar-checking VSCode extension with Ollama
10
After Grammarly disabled its API, no equivalent grammar-checking tool exists for VSCode. While [LTeX](https://marketplace.visualstudio.com/items?itemName=valentjn.vscode-ltex) catches spelling mistakes and some grammatical errors, it lacks the deeper linguistic understanding that Grammarly provides.

I built an extension that aims to bridge the gap with a local Ollama model. It chunks text into paragraphs, asks an LLM to proofread each paragraph, and highlights potential errors. Users can then click on highlighted errors to view and apply suggested corrections.

Check it out here: https://marketplace.visualstudio.com/items?itemName=OlePetersen.lm-writing-tool

[Demo of the writing tool](https://reddit.com/link/1ijbls6/video/5hwvhuecpkhe1/player)

# Features:

* **LLM-powered grammar checking** in American English
* **Inline corrections** via quick fixes
* **Choice of models**: Use a local `llama3.2:3b` model via [Ollama](https://ollama.com/) or `gpt-4o-mini` through the [VSCode LM API](https://code.visualstudio.com/api/extension-guides/language-model)
* **Rewrite suggestions** to improve clarity
* **Synonym recommendations** for better word choices

Feedback and contributions are welcome :) The code is available here: https://github.com/peteole/lm-writing-tool
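If you're curious about the core call, here's an illustrative Python sketch of the per-paragraph proofreading step (the extension itself is TypeScript; the prompt wording is a placeholder, and it assumes a local Ollama server with `llama3.2:3b` pulled):

```python
import requests

def proofread(paragraph: str) -> str:
    # Ask a local Ollama model to return only the corrected paragraph.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.2:3b",
            "messages": [
                {"role": "system",
                 "content": "Proofread the user's paragraph. "
                            "Return only the corrected text."},
                {"role": "user", "content": paragraph},
            ],
            "stream": False,
        },
        timeout=120,
    )
    return resp.json()["message"]["content"]

print(proofread("Their is too errors in this sentense."))
```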
2025-02-06T19:52:21
https://www.reddit.com/r/LocalLLaMA/comments/1ijbls6/i_built_a_grammarchecking_vscode_extension_with/
ole_pe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbls6
false
null
t3_1ijbls6
/r/LocalLLaMA/comments/1ijbls6/i_built_a_grammarchecking_vscode_extension_with/
false
false
https://b.thumbs.redditm…hoWHAR6bIufg.jpg
10
{'enabled': False, 'images': [{'id': 'e-gcB9n1pekjdWD6fs8VsYb-Sz4-ZKaAt9bcndZFLO0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/lksdiP83p2OmdDzL61PEsDIl6BBO1Au1jeVVk4jOlKg.jpg?width=108&crop=smart&auto=webp&s=ed74aa31b5ef026a964a7e000773c18b8a5c4362', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/lksdiP83p2OmdDzL61PEsDIl6BBO1Au1jeVVk4jOlKg.jpg?width=216&crop=smart&auto=webp&s=06a50c5bd19a5c1d691bbc900de598f6c51c0fd4', 'width': 216}], 'source': {'height': 256, 'url': 'https://external-preview.redd.it/lksdiP83p2OmdDzL61PEsDIl6BBO1Au1jeVVk4jOlKg.jpg?auto=webp&s=1a77dd21920abeafe712973ff9a5bbb0f6c44aa2', 'width': 256}, 'variants': {}}]}
How do i resolve this error?
2
I have the following code and I am getting the error below:

> You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning.

```python
from huggingface_hub import snapshot_download, login
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
    Trainer
)
from peft import LoraConfig, prepare_model_for_kbit_training, get_peft_model
from datasets import Dataset
import torch
import pandas as pd

# 1. Login to Hugging Face Hub
login(token="")

# 2. Download the full model (including safetensors files)
model_name = "meta-llama/Llama-3.2-1B-Instruct"
local_path = r"C:\Users\\.llama\checkpoints\Llama3.2-1B-Instruct"
# snapshot_download(
#     repo_id=model_name,
#     local_dir=local_path,
#     local_dir_use_symlinks=False,
#     revision="main",
#     allow_patterns=["*.json", "*.safetensors", "*.model", "*.txt", "*.py"]
# )
# print("✅ Model downloaded and saved to:", local_path)

# 3. Load model in 4-bit mode using the BitsAndBytes configuration
model_path = local_path  # Use the downloaded model path
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True  # Critical for stability
)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    device_map="cuda",
    torch_dtype=torch.float16,
    use_cache=False,  # Must disable for QLoRA
    attn_implementation="sdpa"  # Better memory usage
)

# 4. Load tokenizer with Llama 3 templating
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# 5. Prepare model for k-bit training with gradient checkpointing
model = prepare_model_for_kbit_training(
    model,
    use_gradient_checkpointing=True  # Reduces VRAM usage
)

# 6. Set up the official Llama 3 LoRA configuration
peft_config = LoraConfig(
    r=32,  # Higher rank for better adaptation
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj",  # Additional target for Llama 3
        "up_proj", "down_proj"
    ],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    modules_to_save=["lm_head", "embed_tokens"]  # Required for generation
)

# 7. Attach the LoRA adapters to the model
model = get_peft_model(model, peft_config)

# Print trainable parameters
model.print_trainable_parameters()

# Ensure cache is disabled for training
model.config.use_cache = False

# Ensure only LoRA layers are trainable
for name, param in model.named_parameters():
    if "lora_" in name:
        param.requires_grad = True  # Unfreeze LoRA layers
    else:
        param.requires_grad = False  # Freeze base model

# 8. Prepare the training dataset with a custom prompt formatter
def format_prompt(row):
    return f"""<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
Diagnose based on these symptoms: {row['Symptoms_List']}
Risk factors: {row['whoIsAtRiskDesc']}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
Diagnosis: {row['Name']}
Recommended tests: {row['Common_Tests']}
Details: {row['description']}<|eot_id|>"""

# Load and format the CSV data
df = pd.read_csv("Disease_symptoms.csv")
df["Symptoms_List"] = df["Symptoms_List"].apply(eval)
dataset = Dataset.from_dict({
    "text": [format_prompt(row) for _, row in df.iterrows()]
})

# 9. Define optimized training arguments
training_args = TrainingArguments(
    output_dir="./llama3-medical",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # Adjust for VRAM constraints (e.g., 8GB)
    learning_rate=3e-5,
    num_train_epochs=5,
    logging_steps=5,
    optim="paged_adamw_32bit",  # Preferred optimizer for this task
    fp16=True,
    max_grad_norm=0.5,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    report_to="none",
    save_strategy="no",
    remove_unused_columns=False,
    gradient_checkpointing=True
)

# 10. Data collator to handle tokenization
def collator(batch):
    return tokenizer(
        [item["text"] for item in batch],
        padding="longest",
        truncation=True,
        max_length=1024,
        return_tensors="pt"
    )

# 11. Initialize the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=collator
)

# 12. Begin training (ensure cache is disabled)
model.config.use_cache = False  # Must be disabled for training
model.enable_input_require_grads()  # Enable gradients for inputs if necessary
print("Starting training...")
trainer.train()

# 13. Save the fine-tuned adapter and tokenizer
model.save_pretrained("./llama3-medical-adapter")
tokenizer.save_pretrained("./llama3-medical-adapter")
```

How do I resolve this? Thank you for the help!!
2025-02-06T19:54:07
https://www.reddit.com/r/LocalLLaMA/comments/1ijbnaj/how_do_i_resolve_this_error/
Artistic_Tooth_3181
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbnaj
false
null
t3_1ijbnaj
/r/LocalLLaMA/comments/1ijbnaj/how_do_i_resolve_this_error/
false
false
self
2
null
Tiny Data, Strong Reasoning if you have $50
23
# s1K uses a small, curated dataset (1,000 samples) and "budget forcing" to achieve competitive AI reasoning, rivalling larger models like OpenAI's o1.

* Sample Efficiency: Shows that quality > quantity in data. Training the s1-32B model on the s1K dataset only took **26 minutes on 16 NVIDIA H100 GPUs**.
* Test-Time Scaling: Inspired by o1, increasing compute at inference boosts performance.
* Open Source: Promotes transparency and research.
* Distillation: s1K leverages a distillation procedure from Gemini 2.0.

The s1-32B model, fine-tuned on s1K, nearly matches Gemini 2.0 Thinking on AIME24. It suggests that AI systems can be more efficient, transparent and controllable.

Thoughts?

#AI #MachineLearning #Reasoning #OpenSource #s1K

https://arxiv.org/pdf/2501.19393
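As I understand the paper, "budget forcing" just means refusing to let the model stop thinking until a token budget is spent, appending something like "Wait" whenever it tries to finish early. A rough sketch of that loop (the model name is the paper's released checkpoint, but the prompt scaffolding and loop details here are my assumptions, not the authors' code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("simplescaling/s1-32B")
model = AutoModelForCausalLM.from_pretrained("simplescaling/s1-32B", device_map="auto")

def think_with_budget(prompt: str, min_thinking_tokens: int = 512) -> str:
    # Generate until at least min_thinking_tokens of reasoning have been
    # produced; each time the model stops early, append "Wait," to push it
    # to second-guess itself and continue.
    text = prompt + "\nThink step by step.\n"
    spent = 0
    while spent < min_thinking_tokens:
        ids = tok(text, return_tensors="pt").to(model.device)
        out = model.generate(**ids, max_new_tokens=min_thinking_tokens - spent)
        new_tokens = out[0][ids["input_ids"].shape[1]:]
        spent += len(new_tokens)
        text = tok.decode(out[0], skip_special_tokens=True)
        if spent < min_thinking_tokens:
            text += " Wait,"
    return text
```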
2025-02-06T19:54:23
https://www.reddit.com/r/LocalLLaMA/comments/1ijbnit/tiny_data_strong_reasoning_if_you_have_50/
Xiwei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijbnit
false
null
t3_1ijbnit
/r/LocalLLaMA/comments/1ijbnit/tiny_data_strong_reasoning_if_you_have_50/
false
false
self
23
null
Mistral’s new “Flash Answers”
190
2025-02-06T19:57:49
https://x.com/onetwoval/status/1887547069956845634?s=46&t=4i240TMN9BFmGRKFS4WP1A
According_to_Mission
x.com
1970-01-01T00:00:00
0
{}
1ijbqky
false
null
t3_1ijbqky
/r/LocalLLaMA/comments/1ijbqky/mistrals_new_flash_answers/
false
false
https://a.thumbs.redditm…NM-S36QOtl_4.jpg
190
{'enabled': False, 'images': [{'id': 'vDChXCROih4Blx7wEghy7LD3BjZWKV1GBmkRbE6176c', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/Oqw5kk3lifQ1HwlLen4W6BZPZtqltu9AUU6wWFbOBbg.jpg?width=108&crop=smart&auto=webp&s=4ece40fd28c24e6c21dac01e2690d7ed007bd094', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/Oqw5kk3lifQ1HwlLen4W6BZPZtqltu9AUU6wWFbOBbg.jpg?width=216&crop=smart&auto=webp&s=834b9c6293f15bcb45b871df4b158fe0a29c28f7', 'width': 216}, {'height': 230, 'url': 'https://external-preview.redd.it/Oqw5kk3lifQ1HwlLen4W6BZPZtqltu9AUU6wWFbOBbg.jpg?width=320&crop=smart&auto=webp&s=b8e491c39e83a9c6358574e4f4b712ed5d73e0eb', 'width': 320}, {'height': 461, 'url': 'https://external-preview.redd.it/Oqw5kk3lifQ1HwlLen4W6BZPZtqltu9AUU6wWFbOBbg.jpg?width=640&crop=smart&auto=webp&s=687ea1cc1b90ede67d331463612f5148431106fd', 'width': 640}, {'height': 692, 'url': 'https://external-preview.redd.it/Oqw5kk3lifQ1HwlLen4W6BZPZtqltu9AUU6wWFbOBbg.jpg?width=960&crop=smart&auto=webp&s=28098279c43edc3443c4f686c91751e8e01fe6f2', 'width': 960}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Oqw5kk3lifQ1HwlLen4W6BZPZtqltu9AUU6wWFbOBbg.jpg?auto=webp&s=fccff72e87eff79cfdd845fc69e8f57709d21eac', 'width': 998}, 'variants': {}}]}
fuseO1-DeepSeekR1-QwQ-SkyT1-flash-32B-preview-abliterated
8
... an ollama sha256 name is more readable at this point.
2025-02-06T20:23:14
https://www.reddit.com/r/LocalLLaMA/comments/1ijcdsp/fuseo1deepseekr1qwqskyt1flash32bpreviewabliterated/
ParaboloidalCrest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijcdsp
false
null
t3_1ijcdsp
/r/LocalLLaMA/comments/1ijcdsp/fuseo1deepseekr1qwqskyt1flash32bpreviewabliterated/
false
false
self
8
null
Llama, Qwen, DeepSeek, and now Sentient's Dobby Unhinged - The Ultimate AI Shitposter
1
[removed]
2025-02-06T20:39:03
https://www.reddit.com/r/LocalLLaMA/comments/1ijcrjs/llama_qwen_deepseek_and_now_sentients_dobby/
jiMalinka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijcrjs
false
null
t3_1ijcrjs
/r/LocalLLaMA/comments/1ijcrjs/llama_qwen_deepseek_and_now_sentients_dobby/
false
false
nsfw
1
{'enabled': False, 'images': [{'id': 'fURvXWZzv6wlGksuw-B0Sc28jjlMi1LHGl_97LFGnzo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&auto=webp&s=586423125f4b054f3a89511a8e71a674332b4866', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&auto=webp&s=2f9eabd7473b3e0f85aca67e9e01eb06cc9ac820', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&auto=webp&s=2c97e120eafc17970dd2957386c90e3bb63e8e8c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&auto=webp&s=ca8c4531cc8d39da75712ae247aaa9909bd31a2b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&auto=webp&s=b1658f8ec776bb05fb1ae236da75fbd3d91ab520', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&auto=webp&s=8a46eefea12cbd63d7028959125d8546fd0ad0b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?auto=webp&s=1fa661ae40c5c7109444f19f7b7d4711b526c4a3', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=5480985e759e34c79ec3f573021b95f1ab5b5550', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=5f12d15ab96f98b32482852f1ae6bf1beeeb7573', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=95c2919d6625a3a5b01d26db09517a069c21a150', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d58b80cfedd1d86ea1ad548851cdea6e9d1e957c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a201c6c096f2c938207acf25d2e749cc38b8ec16', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=fda4a70161356dc1b4890f860b9eebdf468ca3cd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?blur=40&format=pjpg&auto=webp&s=bb67b322885d3d990b2ee898070575d93745370a', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=5480985e759e34c79ec3f573021b95f1ab5b5550', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=5f12d15ab96f98b32482852f1ae6bf1beeeb7573', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=95c2919d6625a3a5b01d26db09517a069c21a150', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d58b80cfedd1d86ea1ad548851cdea6e9d1e957c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a201c6c096f2c938207acf25d2e749cc38b8ec16', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=fda4a70161356dc1b4890f860b9eebdf468ca3cd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?blur=40&format=pjpg&auto=webp&s=bb67b322885d3d990b2ee898070575d93745370a', 'width': 1200}}}}]}
Llama, Qwen, DeepSeek, now we got Sentient's Dobby Unhinged
1
[removed]
2025-02-06T20:42:14
https://www.reddit.com/r/LocalLLaMA/comments/1ijcucn/llama_qwen_deepseek_now_we_got_sentients_dobby/
jiMalinka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijcucn
false
null
t3_1ijcucn
/r/LocalLLaMA/comments/1ijcucn/llama_qwen_deepseek_now_we_got_sentients_dobby/
false
false
https://b.thumbs.redditm…InQlilbl_oXQ.jpg
1
null
"The future belongs to idea guys who can just do things"
0
2025-02-06T20:46:16
https://ghuntley.com/dothings/
hedgehog0
ghuntley.com
1970-01-01T00:00:00
0
{}
1ijcxwf
false
null
t3_1ijcxwf
/r/LocalLLaMA/comments/1ijcxwf/the_future_belongs_to_idea_guys_who_can_just_do/
false
false
https://b.thumbs.redditm…3xTfMLvDhqPQ.jpg
0
{'enabled': False, 'images': [{'id': '7Z7cVIx95rpXEpJEz6RqVjVH0YvgQRTVApkWUgzcDJ4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Vm4_PL90AG-6E0Sa5QxrZc6smrHF8u4GIINLp6RH3vQ.jpg?width=108&crop=smart&auto=webp&s=32e762a2aa36d99d4640f26e85d039de3b217ed2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Vm4_PL90AG-6E0Sa5QxrZc6smrHF8u4GIINLp6RH3vQ.jpg?width=216&crop=smart&auto=webp&s=db75021f1151b2ec7d30b09ffc63c465286e8d69', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Vm4_PL90AG-6E0Sa5QxrZc6smrHF8u4GIINLp6RH3vQ.jpg?width=320&crop=smart&auto=webp&s=b2fb7956bdfc185b689ebecc338cb00ca5067984', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Vm4_PL90AG-6E0Sa5QxrZc6smrHF8u4GIINLp6RH3vQ.jpg?width=640&crop=smart&auto=webp&s=117cd07d0b813e9d369d73c77b6378de762c71ce', 'width': 640}, {'height': 541, 'url': 'https://external-preview.redd.it/Vm4_PL90AG-6E0Sa5QxrZc6smrHF8u4GIINLp6RH3vQ.jpg?width=960&crop=smart&auto=webp&s=76fe91b3840a0fe79fbfcde6f9dfc66aa0c8a434', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/Vm4_PL90AG-6E0Sa5QxrZc6smrHF8u4GIINLp6RH3vQ.jpg?width=1080&crop=smart&auto=webp&s=e1e0599c152f10a7eff352bbb4652a2aeb0c7b7b', 'width': 1080}], 'source': {'height': 733, 'url': 'https://external-preview.redd.it/Vm4_PL90AG-6E0Sa5QxrZc6smrHF8u4GIINLp6RH3vQ.jpg?auto=webp&s=4d8d9e36e0da2d9196db28df2fc3d13a2a4d8aa5', 'width': 1300}, 'variants': {}}]}
BrowserAI: Run LLMs 100% Client-Side (WebGPU, Open Source!) 🚀 Get involved!
1
[removed]
2025-02-06T20:52:01
https://www.reddit.com/r/LocalLLaMA/comments/1ijd2yp/browserai_run_llms_100_clientside_webgpu_open/
shreyashk_gupta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijd2yp
false
null
t3_1ijd2yp
/r/LocalLLaMA/comments/1ijd2yp/browserai_run_llms_100_clientside_webgpu_open/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gkAZ7_W7TSZfOMYWHWY9_2AfgxzhGOPaHRJCgRgsPsU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=108&crop=smart&auto=webp&s=4b47100a0b989f4434615020ee124265775fcc29', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=216&crop=smart&auto=webp&s=669e91d203afb8f6420907bb0b1fe4dd08221ec8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=320&crop=smart&auto=webp&s=3f44363159af68f97b675d5be13d65c29d864e40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=640&crop=smart&auto=webp&s=cb0cba181bdb528eb8fa911f741ed68845dd1095', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=960&crop=smart&auto=webp&s=6438eeb88bdd0e72c574c779ab1e5002539deaf2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=1080&crop=smart&auto=webp&s=7f7aabc755257e7385fc8f549de7ee719403143c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?auto=webp&s=c14794b3ce4ef96f362b40eb0d59d0af81a73760', 'width': 1200}, 'variants': {}}]}
Exploring Client-Side LLMs with BrowserAI - Curious About Thoughts/Feedback
1
[removed]
2025-02-06T20:57:06
https://www.reddit.com/r/LocalLLaMA/comments/1ijd7h6/exploring_clientside_llms_with_browserai_curious/
shreyashk_gupta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijd7h6
false
null
t3_1ijd7h6
/r/LocalLLaMA/comments/1ijd7h6/exploring_clientside_llms_with_browserai_curious/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gkAZ7_W7TSZfOMYWHWY9_2AfgxzhGOPaHRJCgRgsPsU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=108&crop=smart&auto=webp&s=4b47100a0b989f4434615020ee124265775fcc29', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=216&crop=smart&auto=webp&s=669e91d203afb8f6420907bb0b1fe4dd08221ec8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=320&crop=smart&auto=webp&s=3f44363159af68f97b675d5be13d65c29d864e40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=640&crop=smart&auto=webp&s=cb0cba181bdb528eb8fa911f741ed68845dd1095', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=960&crop=smart&auto=webp&s=6438eeb88bdd0e72c574c779ab1e5002539deaf2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=1080&crop=smart&auto=webp&s=7f7aabc755257e7385fc8f549de7ee719403143c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?auto=webp&s=c14794b3ce4ef96f362b40eb0d59d0af81a73760', 'width': 1200}, 'variants': {}}]}
Exploring Client-Side LLMs with BrowserAI - Curious About Thoughts/Feedback
1
[removed]
2025-02-06T20:58:50
https://www.reddit.com/r/LocalLLaMA/comments/1ijd8yy/exploring_clientside_llms_with_browserai_curious/
shreyashk_gupta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijd8yy
false
null
t3_1ijd8yy
/r/LocalLLaMA/comments/1ijd8yy/exploring_clientside_llms_with_browserai_curious/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gkAZ7_W7TSZfOMYWHWY9_2AfgxzhGOPaHRJCgRgsPsU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=108&crop=smart&auto=webp&s=4b47100a0b989f4434615020ee124265775fcc29', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=216&crop=smart&auto=webp&s=669e91d203afb8f6420907bb0b1fe4dd08221ec8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=320&crop=smart&auto=webp&s=3f44363159af68f97b675d5be13d65c29d864e40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=640&crop=smart&auto=webp&s=cb0cba181bdb528eb8fa911f741ed68845dd1095', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=960&crop=smart&auto=webp&s=6438eeb88bdd0e72c574c779ab1e5002539deaf2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?width=1080&crop=smart&auto=webp&s=7f7aabc755257e7385fc8f549de7ee719403143c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U3OjwoSykvEBxcjGX_UrkLmbzC58v4A4Zn-WaLWjB1o.jpg?auto=webp&s=c14794b3ce4ef96f362b40eb0d59d0af81a73760', 'width': 1200}, 'variants': {}}]}
USA: EU regulates lol Also USA:
1
2025-02-06T21:13:05
https://i.redd.it/djydrjm74lhe1.png
CascadeTrident
i.redd.it
1970-01-01T00:00:00
0
{}
1ijdlzv
false
null
t3_1ijdlzv
/r/LocalLLaMA/comments/1ijdlzv/usa_eu_regulates_lol_also_usa/
false
false
https://b.thumbs.redditm…Rnop2z0gdRnM.jpg
1
{'enabled': True, 'images': [{'id': 'V5UcAP1hmCyK9mAheAIt3tq5QsWaGqfk9CfDBaoV9IY', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/djydrjm74lhe1.png?width=108&crop=smart&auto=webp&s=6a9aa9ff5c7e3c6485da2f0cd4062a7440186ff7', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/djydrjm74lhe1.png?width=216&crop=smart&auto=webp&s=3d1003010a1a35f18e6b59dce09c96f769487d3a', 'width': 216}, {'height': 312, 'url': 'https://preview.redd.it/djydrjm74lhe1.png?width=320&crop=smart&auto=webp&s=1b64a4441a1e51900cd7dc434f5a8b63c92bd1cc', 'width': 320}, {'height': 624, 'url': 'https://preview.redd.it/djydrjm74lhe1.png?width=640&crop=smart&auto=webp&s=df2e93abbcacc0630b2609d308c09ab66e43eab1', 'width': 640}, {'height': 937, 'url': 'https://preview.redd.it/djydrjm74lhe1.png?width=960&crop=smart&auto=webp&s=8163fe4318a882c0fc59d666645f72a35767dc51', 'width': 960}, {'height': 1054, 'url': 'https://preview.redd.it/djydrjm74lhe1.png?width=1080&crop=smart&auto=webp&s=ae9101405cfd42b4e6e503434329d2ea3a130645', 'width': 1080}], 'source': {'height': 1154, 'url': 'https://preview.redd.it/djydrjm74lhe1.png?auto=webp&s=b178a04143d53414b6b3a8c205f3f25e0646bf25', 'width': 1182}, 'variants': {}}]}
Did I do something wrong with Qwen?
0
I just tried Qwen today out of curiosity, then asked about its cutoff date, which it claimed is around December 2024. I asked what the latest version of Honkai Star Rail was within that range, and it responded that it's 2.4... when in that month, 2.7 had been released. Let's just say it failed as a lore master. Did I do something wrong, or is the AI being whack today?
2025-02-06T21:21:01
https://www.reddit.com/r/LocalLLaMA/comments/1ijdt38/did_i_do_something_wrong_with_qwen/
the_5th_Emperor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijdt38
false
null
t3_1ijdt38
/r/LocalLLaMA/comments/1ijdt38/did_i_do_something_wrong_with_qwen/
false
false
self
0
null
Think of LLM Applications as POMDPs — Not Agents
1
2025-02-06T21:24:33
https://www.tensorzero.com/blog/think-of-llm-applications-as-pomdps-not-agents
tens0rzer0
tensorzero.com
1970-01-01T00:00:00
0
{}
1ijdwa1
false
null
t3_1ijdwa1
/r/LocalLLaMA/comments/1ijdwa1/think_of_llm_applications_as_pomdps_not_agents/
false
false
https://a.thumbs.redditm…-G40E1HQIPI8.jpg
1
{'enabled': False, 'images': [{'id': 'P-gzOWYQKP4LnfofqbNpRwMMhw_Ji3UHj7vOqo4Q4G0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=108&crop=smart&auto=webp&s=57605a69a66c511bda3522ace671a0384645f0c1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=216&crop=smart&auto=webp&s=0d82e6a996b3e0ae603e2d55a04ae21443046803', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=320&crop=smart&auto=webp&s=f58c323345224360de5e4544ec8df96f8014213c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=640&crop=smart&auto=webp&s=6c5ceb4cc8009aa13a4e94948769af815b1fed4a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=960&crop=smart&auto=webp&s=5b12a0db355ea72d426a1fa40b3a03f9e6a66f4d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=1080&crop=smart&auto=webp&s=7bfcbec5e8179ab9c3dc61486df49d2e7b651193', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?auto=webp&s=0bbeefb32a62f482339c4ac33b91184eb0cce49b', 'width': 1200}, 'variants': {}}]}
Why is Ollama's response quality so much worse than the online (paid) variants of the same model?
0
Hi everyone, I've been experimenting with **Mistral 24B Instruct** on both **OpenRouter** and **Ollama**, and I've noticed a massive quality difference in responses to the same request:

* **OpenRouter (`mistralai/mistral-small-24b-instruct-2501`)**: a well-structured response with **50 tracks**.
* **Ollama (`mistral-small:24b-instruct-2501-q8_0`)**: the same request only returned **5 tracks**.

This isn't a one-off issue: I've consistently seen lower response quality when running models locally with **Ollama** compared to cloud-based services using the same base models. I understand that quantization (like Q8) can reduce precision, but the difference in response quality seems too drastic to be just that. Why is there such a disparity between the local (free) and hosted (paid) versions of the same model? Is it due to different configurations, optimizations, or something else? Has anyone else experienced this? Any insights or suggestions would be greatly appreciated!
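One variable I haven't ruled out yet: Ollama defaults to a small (2k) context window and silently truncates anything beyond it, which could easily turn "50 tracks" into "5 tracks". A sketch of the request I plan to use to test this (`num_ctx` is a documented Ollama option; the prompt string is a placeholder for my real one):

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral-small:24b-instruct-2501-q8_0",
        "prompt": "List 50 tracks ...",   # the same prompt sent to OpenRouter
        "options": {"num_ctx": 16384},    # raise Ollama's 2k default
        "stream": False,
    },
)
print(resp.json()["response"])
```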
2025-02-06T21:31:02
https://www.reddit.com/r/LocalLLaMA/comments/1ije2a3/why_is_ollamas_response_quality_so_much_worse/
planetearth80
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ije2a3
false
null
t3_1ije2a3
/r/LocalLLaMA/comments/1ije2a3/why_is_ollamas_response_quality_so_much_worse/
false
false
self
0
null
Is rollout same as "generation" with different decoding?
1
[removed]
2025-02-06T21:32:53
https://www.reddit.com/r/LocalLLaMA/comments/1ije3w8/is_rollout_same_as_generation_with_different/
ContactChoice9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ije3w8
false
null
t3_1ije3w8
/r/LocalLLaMA/comments/1ije3w8/is_rollout_same_as_generation_with_different/
false
false
self
1
null
Tool use with local models?
5
I am using llama-cpp-python and trying to use tools. All models and settings I tried have resulted in either a) the model only ever calling functions and failing to answer, or b) the model ignoring the tools completely, or giving me something like "functions.get_weather" (example function) as plain text.

Does someone have a working example on hand? I can't find any.
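For reference, this is roughly the minimal setup I've been trying (a sketch: it assumes llama-cpp-python's built-in `chatml-function-calling` chat format and a compatible GGUF; the `get_weather` schema is just the example function above, and `tool_choice` follows the OpenAI convention):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",
    chat_format="chatml-function-calling",  # handler with tool-call support
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    tool_choice="auto",  # let the model decide between answering and calling
)

# Expect either a plain message or a tool_calls entry here.
print(out["choices"][0]["message"])
```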
2025-02-06T21:37:48
https://www.reddit.com/r/LocalLLaMA/comments/1ije84q/tool_use_with_local_models/
WeeklyMeat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ije84q
false
null
t3_1ije84q
/r/LocalLLaMA/comments/1ije84q/tool_use_with_local_models/
false
false
self
5
null
Please explain how RAG works in LM Studio
1
[removed]
2025-02-06T21:41:45
https://www.reddit.com/r/LocalLLaMA/comments/1ijebhs/please_explain_how_rag_works_in_lm_studio/
Other-Pop7007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijebhs
false
null
t3_1ijebhs
/r/LocalLLaMA/comments/1ijebhs/please_explain_how_rag_works_in_lm_studio/
false
false
self
1
null
Mismatch GPUs for speed increase?
1
I'm starting to get into using different LLMs locally. My setup is a 3700x, 32gb ram, and a 1080ti. I have an extra GTX 1080 lying around and was wondering if I could plug that in for more performance, or if the mismatch would be an issue. They are the same architecture and use the same GDDR5X VRAM, but idk if that matters.
2025-02-06T21:47:05
https://www.reddit.com/r/LocalLLaMA/comments/1ijeg7j/mismatch_gpus_for_speed_increase/
PrometheusAurelius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijeg7j
false
null
t3_1ijeg7j
/r/LocalLLaMA/comments/1ijeg7j/mismatch_gpus_for_speed_increase/
false
false
self
1
null
Does anyone know what the letters mean?
1
2025-02-06T21:54:47
https://i.redd.it/y7f0ug4pblhe1.jpeg
tech215
i.redd.it
1970-01-01T00:00:00
0
{}
1ijemrc
false
null
t3_1ijemrc
/r/LocalLLaMA/comments/1ijemrc/does_anyone_know_what_the_letters_mean/
false
false
https://b.thumbs.redditm…AVjKHYmUHYfE.jpg
1
{'enabled': True, 'images': [{'id': '4LKdTXSe9Ra73524tWAOOgFCxJ_IJsisFEw1F6jAQg4', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/y7f0ug4pblhe1.jpeg?width=108&crop=smart&auto=webp&s=22da18fbbcf23070409c06466d00f1df31b5d5ab', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/y7f0ug4pblhe1.jpeg?width=216&crop=smart&auto=webp&s=e8e30f9d11943aa75335f6c503bd86dc794d09aa', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/y7f0ug4pblhe1.jpeg?width=320&crop=smart&auto=webp&s=e04780f18b04049427fb370344de7b3a7c611320', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/y7f0ug4pblhe1.jpeg?width=640&crop=smart&auto=webp&s=094131896fad61447b1eac2372d338cf7765de77', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/y7f0ug4pblhe1.jpeg?width=960&crop=smart&auto=webp&s=8bbde0cbf8942d1f0100bb0ed536290f09909e88', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/y7f0ug4pblhe1.jpeg?width=1080&crop=smart&auto=webp&s=5c4e64949040d6488ec02a264de57c0357a49124', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/y7f0ug4pblhe1.jpeg?auto=webp&s=de413f31cbc66e8226ae3d72dd04164153aaa699', 'width': 1080}, 'variants': {}}]}
Any interest in a poker engine?
3
Hey everyone, I was playing around a bit using Rust and I was thinking: there are already models that are better than most players, so creating a model for Texas Hold'em is definitely something that should/would be feasible.

First thing, no, I don't have a model (yet) I could share. But I thought, maybe others are also interested in the environment without having to program a whole environment themselves?

The engine itself is able to play ~180k hands per second on my server with an AMD 8700GE. Of course, it's optimized for multiprocessing and I tried to keep the heap usage as low as possible. The performance goes down ~40-50% when cloning the state for further usage with the model, so 90-100k hands per second are still possible in a full simulation on my server.

The project is divided into multiple crates for the core, engine, cli, simulation and agents. All with comprehensive unit tests and benchmarks for Criterion/Flamegraph, traits to keep things generic, and so on.

If people are interested in it, I'll clean up the code a bit and probably release it this weekend. If nobody is interested, the code will stay dirty on my machine. So let me know if you're interested in it (or not)!
2025-02-06T22:02:03
https://www.reddit.com/r/LocalLLaMA/comments/1ijet6p/any_interest_in_a_poker_engine/
Suitable-Name
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijet6p
false
null
t3_1ijet6p
/r/LocalLLaMA/comments/1ijet6p/any_interest_in_a_poker_engine/
false
false
self
3
null
How bad of an idea is this laptop for local LLMs?
1
[removed]
2025-02-06T22:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1ijeybg/how_bad_of_an_idea_is_this_laptop_for_local_llms/
szrotowyprogramista
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijeybg
false
null
t3_1ijeybg
/r/LocalLLaMA/comments/1ijeybg/how_bad_of_an_idea_is_this_laptop_for_local_llms/
false
false
self
1
null
Mistral Large 2.1?
3
https://preview.redd.it/… just missed it?
2025-02-06T22:16:02
https://www.reddit.com/r/LocalLLaMA/comments/1ijf5bf/mistral_large_21/
Zelcore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijf5bf
false
null
t3_1ijf5bf
/r/LocalLLaMA/comments/1ijf5bf/mistral_large_21/
false
false
https://b.thumbs.redditm…3bVskk6qn0ug.jpg
3
null
Mistral AI CEO Interview
78
This interview with Arthur Mensch, CEO of Mistral AI, is incredibly comprehensive and detailed. I highly recommend watching it!
2025-02-06T22:43:59
https://youtu.be/bzs0wFP_6ck
SignalCompetitive582
youtu.be
1970-01-01T00:00:00
0
{}
1ijfskv
false
{'oembed': {'author_name': 'Underscore_', 'author_url': 'https://www.youtube.com/@Underscore_', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/bzs0wFP_6ck?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="On a reçu le CEO de Mistral AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/bzs0wFP_6ck/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'On a reçu le CEO de Mistral AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1ijfskv
/r/LocalLLaMA/comments/1ijfskv/mistral_ai_ceo_interview/
false
false
https://b.thumbs.redditm…b9_a05juG_3M.jpg
78
{'enabled': False, 'images': [{'id': 'YMTtbEav1Xqb_H1mOkldxM2ItrXam2ca5_J07oV0aQg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/19CD9Zbziz4wjyY-KLNZm0d_AXIRPDRzjWqBsfq2Fg8.jpg?width=108&crop=smart&auto=webp&s=0722204100da4183ec92ffa3a77a8c9a03b6b5c9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/19CD9Zbziz4wjyY-KLNZm0d_AXIRPDRzjWqBsfq2Fg8.jpg?width=216&crop=smart&auto=webp&s=7312d9d69013cf9483c43532be55a4dd7291a65e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/19CD9Zbziz4wjyY-KLNZm0d_AXIRPDRzjWqBsfq2Fg8.jpg?width=320&crop=smart&auto=webp&s=3051c11f352f91dde0f5af22f09fb4d29e44376e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/19CD9Zbziz4wjyY-KLNZm0d_AXIRPDRzjWqBsfq2Fg8.jpg?auto=webp&s=3f76204b60aba0e0a90601f678e597ab66b2d9c6', 'width': 480}, 'variants': {}}]}
Am I crazy? Configuration help: iGPU, RAM and dGPU
2
I am a hobbyist who wants to build a new machine that I can eventually use for training once I'm smart enough. I am currently toying with Ollama on an old workstation, but I am having a hard time understanding how the hardware is being used. I would appreciate some feedback and an explanation of the viability of the following configuration.

* CPU: AMD 5600G
* RAM: 16, 32, or 64 GB?
* GPU: 2 x RTX 3060
* Storage: 1TB NVMe SSD

1. My intent with the CPU choice is to take the burden of display output off the GPUs. I have newer AM4 chips but thought the tradeoff would be worth the hit. Is that true?
2. With the model running on the GPUs, does the RAM size matter at all? I have 4 x 8GB and 4 x 16GB sticks available.
3. I assume the GPUs do not have to be the same make and model. Is that true?
4. How badly does Docker impact Ollama? Should I be using something else? Is bare metal preferred?
5. Am I crazy? If so, know that I'm having fun learning.

TIA
2025-02-06T22:44:41
https://www.reddit.com/r/LocalLLaMA/comments/1ijft4z/am_i_crazy_configuration_help_igpu_ram_and_dgpu/
ShutterAce
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijft4z
false
null
t3_1ijft4z
/r/LocalLLaMA/comments/1ijft4z/am_i_crazy_configuration_help_igpu_ram_and_dgpu/
false
false
self
2
null
Aider v0.74.0 is out with improved Ollama support
16
The latest version of aider makes it much easier to work with Ollama by dynamically setting the context window based on the current chat conversation. Ollama uses a [2k context window by default](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size), which is very small. It also **silently** discards context that exceeds the window. This is especially dangerous because many users don't even realize that most of their data is being discarded by Ollama. Aider now sets [Ollama’s context window to be large enough for each request](https://aider.chat/docs/llms/ollama.html#setting-the-context-window-size) you send plus 8k tokens for the reply. This version also has improved support for running local copies of the very popular DeepSeek models. https://aider.chat/HISTORY.html
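The sizing rule is simple enough to sketch (back-of-the-envelope illustration only; aider's real implementation lives in its repo and counts tokens properly rather than estimating from characters):

```python
def pick_num_ctx(chat_text: str, reply_budget: int = 8192,
                 chars_per_token: float = 4.0) -> int:
    # Estimate prompt tokens from character count, then leave room for the reply.
    prompt_tokens = int(len(chat_text) / chars_per_token)
    return prompt_tokens + reply_budget

# A ~40k-character chat (~10k tokens) gets num_ctx ~= 18192
# instead of Ollama's 2048 default.
print(pick_num_ctx("x" * 40_000))
```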
2025-02-06T23:00:30
https://www.reddit.com/r/LocalLLaMA/comments/1ijg69j/aider_v0740_is_out_with_improved_ollama_support/
rinconcam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijg69j
false
null
t3_1ijg69j
/r/LocalLLaMA/comments/1ijg69j/aider_v0740_is_out_with_improved_ollama_support/
false
false
self
16
{'enabled': False, 'images': [{'id': 'I2YxJOPPotjjuXH7p9p9SKKgcXwY-8VzrcBtnzIUykE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p0JTNjm7oTu6clJCVKxZy1DYbAJAprkIL2GPbnSlGxw.jpg?width=108&crop=smart&auto=webp&s=046e72c622b94ec1d49c354ded98e6ccd3cedec3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/p0JTNjm7oTu6clJCVKxZy1DYbAJAprkIL2GPbnSlGxw.jpg?width=216&crop=smart&auto=webp&s=adeaefc0fc9292e606b8c00be1562302c57f48b0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/p0JTNjm7oTu6clJCVKxZy1DYbAJAprkIL2GPbnSlGxw.jpg?width=320&crop=smart&auto=webp&s=f5f8f1cb1cb26b881e6ded757c27e4f74891ddb6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/p0JTNjm7oTu6clJCVKxZy1DYbAJAprkIL2GPbnSlGxw.jpg?width=640&crop=smart&auto=webp&s=400286f4ff15594b1f5dd2684551bfba4b328452', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/p0JTNjm7oTu6clJCVKxZy1DYbAJAprkIL2GPbnSlGxw.jpg?width=960&crop=smart&auto=webp&s=bdf8b44fe0cbe66e071d9ba07331e08404a93e78', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/p0JTNjm7oTu6clJCVKxZy1DYbAJAprkIL2GPbnSlGxw.jpg?width=1080&crop=smart&auto=webp&s=4b4b1dc857fe02cdfceb53f71c7e5e3e019c7ccc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/p0JTNjm7oTu6clJCVKxZy1DYbAJAprkIL2GPbnSlGxw.jpg?auto=webp&s=ef2c585d29d6a2ef90eed750af4dde59925d8f1b', 'width': 1200}, 'variants': {}}]}
📝🧵 Introducing Text Loom: A Node-Based Text Processing Playground!
9
# TEXT LOOM!

Hey text wranglers! 👋 Ever wanted to slice, dice, and weave text like a digital textile artist?

https://github.com/kleer001/Text_Loom/blob/main/images/leaderloop_trim_4.gif?raw=true

Text Loom is your new best friend! It's a **node-based workspace** where you can build awesome text processing pipelines by connecting simple, powerful nodes.

* Want to split a script into scenes? Done.
* Need to process a batch of files through an LLM? Easy peasy.
* How about automatically formatting numbered lists or merging multiple documents? We've got you covered!

Each node is like a tiny text-processing specialist: the [Section Node](https://github.com/kleer001/Text_Loom/wiki/Section-Node) slices text based on patterns, the [Query Node](https://github.com/kleer001/Text_Loom/wiki/Query-Node) talks to AI models, and the [Looper Node](https://github.com/kleer001/Text_Loom/wiki/Looper-Node) handles all your iteration needs. Mix and match to create your perfect text processing flow — a toy sketch of the idea follows below. Check out our [wiki](https://github.com/kleer001/Text_Loom/wiki) to see what's possible. 🚀

## Why Terminal? Because Hackers Know Best! 💻

Remember those awesome '80s and '90s movies where hackers typed furiously on glowing green screens, making magic happen with just their keyboards? *Turns out they were onto something!* While Text Loom's got a cool node-based interface, it's running on good old-fashioned terminal power. Just like Matthew Broderick in *WarGames* or the crew in *Hackers*, we're keeping it real with that sweet, sweet command-line efficiency. No fancy GUI bloat, no mouse-hunting required – just you, your keyboard, and pure text-processing power. Want to feel like you're hacking the Gibson while actually getting real work done? We've got you covered! 🕹️

*Because text should flow, not fight you.* ✨
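To be clear, the following is not Text Loom's actual API — just a toy Python sketch of the node-graph idea the project is built around, with hypothetical `section_node`/`looper_node`/`pipeline` names echoing the wiki pages:

```python
# Toy illustration of a node-based text pipeline: each node maps a list of
# texts to a list of texts, and nodes compose left to right.
import re
from typing import Callable, List

Node = Callable[[List[str]], List[str]]

def section_node(pattern: str) -> Node:
    """Split every input text into sections wherever `pattern` matches."""
    return lambda texts: [s.strip() for t in texts for s in re.split(pattern, t) if s.strip()]

def looper_node(inner: Node, times: int) -> Node:
    """Run an inner node repeatedly over its own output."""
    def run(texts: List[str]) -> List[str]:
        for _ in range(times):
            texts = inner(texts)
        return texts
    return run

def pipeline(*nodes: Node) -> Node:
    """Chain nodes into a single flow."""
    def run(texts: List[str]) -> List[str]:
        for node in nodes:
            texts = node(texts)
        return texts
    return run

flow = pipeline(section_node(r"(?m)^SCENE \d+.*$"))
print(flow(["SCENE 1\nInt. lab, night.\nSCENE 2\nExt. street, day."]))
# -> ['Int. lab, night.', 'Ext. street, day.']
```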
2025-02-06T23:31:50
https://www.reddit.com/r/LocalLLaMA/comments/1ijgw3a/introducing_text_loom_a_nodebased_text_processing/
kleer001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijgw3a
false
null
t3_1ijgw3a
/r/LocalLLaMA/comments/1ijgw3a/introducing_text_loom_a_nodebased_text_processing/
false
false
self
9
{'enabled': True, 'images': [{'id': 'ew4bXeHSEU1-eJsGB_Y-HjC5PdJWe-daVVRbuK-1Kfo', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=108&crop=smart&format=png8&s=e0c6e951213deb06cbfcb2d9c8037a983563b31b', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=216&crop=smart&format=png8&s=059c679da89212b266cfafd7652ac498cec40492', 'width': 216}, {'height': 225, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=320&crop=smart&format=png8&s=553469279de4e842a6a4309946516dd99d01127c', 'width': 320}, {'height': 451, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=640&crop=smart&format=png8&s=9f8ce65764c0a37d356d6665833c2ecd34d8846a', 'width': 640}, {'height': 677, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=960&crop=smart&format=png8&s=18d168be4c2a640f2ccc397359eba65288ce60f3', 'width': 960}], 'source': {'height': 705, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?format=png8&s=481588f23bd2fc17a35887fde71be42103b8b884', 'width': 999}, 'variants': {'gif': {'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=108&crop=smart&s=8b5f17507e6bd23a027f4b80e02fe83badc8b3a3', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=216&crop=smart&s=54e50614922b91da31ddb8b305f494ab5f741124', 'width': 216}, {'height': 225, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=320&crop=smart&s=fda80a102c069819d5c6db82cb5b460f1a9ea32f', 'width': 320}, {'height': 451, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=640&crop=smart&s=d0b43e04f2de3ab12cbf721100f1c39b93184e8a', 'width': 640}, {'height': 677, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=960&crop=smart&s=3a324410e5f377e120b4d4ce9ac447dc5b04e5e6', 'width': 960}], 'source': {'height': 705, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?s=60f2d8835612f6cf9286830a812e73e45bc0033f', 'width': 999}}, 'mp4': {'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=108&format=mp4&s=295b93840ee6a9a47334de35699980d0775ed7aa', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=216&format=mp4&s=aba00841f210df56af49d5215fa8421fad0da319', 'width': 216}, {'height': 225, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=320&format=mp4&s=de8dd1d04c93f04fc74128ff5505270436c1563e', 'width': 320}, {'height': 451, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=640&format=mp4&s=192b0f5642bc96d53d8acf751008fc588e93cfaa', 'width': 640}, {'height': 677, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?width=960&format=mp4&s=da7e13e8f2852923ac7261f302305c592564be56', 'width': 960}], 'source': {'height': 705, 'url': 'https://external-preview.redd.it/dh2JvM5RHLjWcYci6mqT2ff_t_vz4lNihB4S-VqiiNk.gif?format=mp4&s=d1ce511e62c64fde5c282d084ba13102c489eec8', 'width': 
999}}}}]}
New to AI Agents – Need Advice to Start My Journey!
1
[removed]
2025-02-06T23:37:18
https://www.reddit.com/r/LocalLLaMA/comments/1ijh0au/new_to_ai_agents_need_advice_to_start_my_journey/
Negative-Scallion-34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijh0au
false
null
t3_1ijh0au
/r/LocalLLaMA/comments/1ijh0au/new_to_ai_agents_need_advice_to_start_my_journey/
false
false
self
1
null
New to AI Agents – Need Advice to Start My Journey!
1
[removed]
2025-02-06T23:38:21
https://www.reddit.com/r/LocalLLaMA/comments/1ijh11j/new_to_ai_agents_need_advice_to_start_my_journey/
Negative-Scallion-34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijh11j
false
null
t3_1ijh11j
/r/LocalLLaMA/comments/1ijh11j/new_to_ai_agents_need_advice_to_start_my_journey/
false
false
self
1
null
Why Local LLMs are Bad
1
[removed]
2025-02-06T23:39:50
https://www.reddit.com/r/LocalLLaMA/comments/1ijh28d/why_local_llms_are_bad/
crayzaystrawberry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijh28d
false
null
t3_1ijh28d
/r/LocalLLaMA/comments/1ijh28d/why_local_llms_are_bad/
false
false
self
1
null
Will Qwen Max weights be open or not?
1
[removed]
2025-02-06T23:43:57
https://www.reddit.com/r/LocalLLaMA/comments/1ijh5bi/will_qwen_max_weights_be_open_or_not/
celsowm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijh5bi
false
null
t3_1ijh5bi
/r/LocalLLaMA/comments/1ijh5bi/will_qwen_max_weights_be_open_or_not/
false
false
self
1
null
Android app with anthropic like projects?
1
I'm constantly interacting with Anthropic Claude through the feature they call Projects, where you can create a context window of data and then ask questions against it. For example, I have a project called "reading" with data that describes how to reformat news articles into readable segments. I then create a new message in that project with a PDF printout of a news article, and I can read it in a more palatable form. That's just one example of how I use these. I'd like to interact with models other than Anthropic's, especially on mobile, but I really need this feature that lets me create and name a body of data and then have the AI work against it. Does anyone know an app that can do a similar thing with other models? Bonus points for OpenRouter support. An example of something that won't work is the DeepSeek app: it only has a chat interface, and sure, I could save context windows in a text editor and paste them in every single time to give it the right orientation, but that's tedious.
2025-02-06T23:50:40
https://www.reddit.com/r/LocalLLaMA/comments/1ijhaep/android_app_with_anthropic_like_projects/
still-standing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijhaep
false
null
t3_1ijhaep
/r/LocalLLaMA/comments/1ijhaep/android_app_with_anthropic_like_projects/
false
false
self
1
null
Want to learn how to fine tune your own Large Language Model? I created a helpful guide!
63
Hello everyone! I am the creator of Kolo, a tool that you can use to fine-tune your own Large Language Model and test it quickly! I recently created a guide explaining what all the fine-tuning parameters mean; a generic illustration of such parameters follows below. Link to guide: [https://github.com/MaxHastings/Kolo/blob/main/FineTuningGuide.md](https://github.com/MaxHastings/Kolo/blob/main/FineTuningGuide.md) Link to ReadMe to learn how to use Kolo: [https://github.com/MaxHastings/Kolo](https://github.com/MaxHastings/Kolo)
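For readers who want a feel for the kinds of knobs such a guide covers, here is a sketch using Hugging Face `peft`/`transformers` names — Kolo's own configuration may differ, and these values are common starting points, not recommendations:

```python
# Typical LoRA fine-tuning knobs, expressed with the Hugging Face APIs.
from peft import LoraConfig
from transformers import TrainingArguments

lora = LoraConfig(
    r=16,                  # adapter rank: capacity vs. VRAM trade-off
    lora_alpha=32,         # scaling factor; often set to 2*r
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size of 16
    learning_rate=2e-4,
    num_train_epochs=3,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
)
```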
2025-02-06T23:57:06
https://www.reddit.com/r/LocalLLaMA/comments/1ijhfau/want_to_learn_how_to_fine_tune_your_own_large/
Maxwell10206
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ijhfau
false
null
t3_1ijhfau
/r/LocalLLaMA/comments/1ijhfau/want_to_learn_how_to_fine_tune_your_own_large/
false
false
self
63
{'enabled': False, 'images': [{'id': '4pbaleN_zJnpkGBefA8cb7ik1swrB5dx099dJ5rmYf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ay6GrSUs-Q9iIOU0y1x5sRiVm3Acv4fjO7lPaW7Bu7E.jpg?width=108&crop=smart&auto=webp&s=b560125816cdf2846f05278eda804c3f5a4397ee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ay6GrSUs-Q9iIOU0y1x5sRiVm3Acv4fjO7lPaW7Bu7E.jpg?width=216&crop=smart&auto=webp&s=c9dca9fc9f8a76168d8e168e4bc2b4f568067746', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ay6GrSUs-Q9iIOU0y1x5sRiVm3Acv4fjO7lPaW7Bu7E.jpg?width=320&crop=smart&auto=webp&s=dff7a5c2fbcefc987d9a28b584392562ad57ac75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ay6GrSUs-Q9iIOU0y1x5sRiVm3Acv4fjO7lPaW7Bu7E.jpg?width=640&crop=smart&auto=webp&s=a029e4ceb04c42cce35fe1f5ed6a6b31354c301b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ay6GrSUs-Q9iIOU0y1x5sRiVm3Acv4fjO7lPaW7Bu7E.jpg?width=960&crop=smart&auto=webp&s=64f6841e55c8b032140d7307c740e87d9afe73ce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ay6GrSUs-Q9iIOU0y1x5sRiVm3Acv4fjO7lPaW7Bu7E.jpg?width=1080&crop=smart&auto=webp&s=4915dbe3e98eda8c825ee86f17646ccde6cea0f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ay6GrSUs-Q9iIOU0y1x5sRiVm3Acv4fjO7lPaW7Bu7E.jpg?auto=webp&s=503db477cc41f18e5e9897452ca25b27d4c5a2e3', 'width': 1200}, 'variants': {}}]}