title (stringlengths 1–300) | score (int64 0–8.54k) | selftext (stringlengths 0–40k) | created (timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29, ⌀) | url (stringlengths 0–878) | author (stringlengths 3–20) | domain (stringlengths 0–82) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18) | gilded (int64 0–2) | gildings (stringclasses, 7 values) | id (stringlengths 7–7) | locked (bool, 2 classes) | media (stringlengths 646–1.8k, ⌀) | name (stringlengths 10–10) | permalink (stringlengths 33–82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (stringlengths 4–213) | ups (int64 0–8.54k) | preview (stringlengths 301–5.01k, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Promising Architecture, who should we contact | 1 | [removed] | 2025-06-23T01:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1li4c2h/promising_architecture_who_should_we_contact/ | Commercial-Ad-1148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li4c2h | false | null | t3_1li4c2h | /r/LocalLLaMA/comments/1li4c2h/promising_architecture_who_should_we_contact/ | false | false | self | 1 | null |
Polaris: A Post-training recipe for scaling RL on Advanced ReasonIng models | 1 | [removed] | 2025-06-23T01:36:11 | https://www.reddit.com/r/LocalLLaMA/comments/1li4ctn/polaris_a_posttraining_recipe_for_scaling_rl_on/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li4ctn | false | null | t3_1li4ctn | /r/LocalLLaMA/comments/1li4ctn/polaris_a_posttraining_recipe_for_scaling_rl_on/ | false | false | self | 1 | null |
Replacement thermal pads for EVGA 3090 | 1 | [removed] | 2025-06-23T02:05:50 | https://www.reddit.com/r/LocalLLaMA/comments/1li4wul/replacement_thermal_pads_for_evga_3090/ | crapaud_dindon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li4wul | false | null | t3_1li4wul | /r/LocalLLaMA/comments/1li4wul/replacement_thermal_pads_for_evga_3090/ | false | false | self | 1 | null |
Agents hack the agent orchestration system | 1 | [removed] | 2025-06-23T02:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/1li5egt/agents_hack_the_agent_orchestration_system/ | durapensa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li5egt | false | null | t3_1li5egt | /r/LocalLLaMA/comments/1li5egt/agents_hack_the_agent_orchestration_system/ | false | false | self | 1 | null |
🚀 IdeaWeaver Weekly Update: June 23–27, 2024 | 1 | [removed] | 2025-06-23T03:32:34 | https://www.reddit.com/r/LocalLLaMA/comments/1li6jaw/ideaweaver_weekly_update_june_2327_2024/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6jaw | false | null | t3_1li6jaw | /r/LocalLLaMA/comments/1li6jaw/ideaweaver_weekly_update_june_2327_2024/ | false | false | 1 | null |
🚀 IdeaWeaver Weekly Update: June 23–27, 2024 | 1 | [removed] | 2025-06-23T03:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1li6nx5/ideaweaver_weekly_update_june_2327_2024/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6nx5 | false | null | t3_1li6nx5 | /r/LocalLLaMA/comments/1li6nx5/ideaweaver_weekly_update_june_2327_2024/ | false | false | self | 1 | null |
Qwen3 vs phi4 vs gemma3 | 1 | [removed] | 2025-06-23T03:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1li6obr/qwen3_vs_phi4_vs_gemma3/ | Divkix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6obr | false | null | t3_1li6obr | /r/LocalLLaMA/comments/1li6obr/qwen3_vs_phi4_vs_gemma3/ | false | false | self | 1 | null |
Qwen3 or gemma3 or phi4 | 1 | [removed] | 2025-06-23T03:52:25 | https://www.reddit.com/r/LocalLLaMA/comments/1li6w48/qwen3_or_gemma3_or_phi4/ | Divkix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6w48 | false | null | t3_1li6w48 | /r/LocalLLaMA/comments/1li6w48/qwen3_or_gemma3_or_phi4/ | false | false | self | 1 | null |
Does llama cpp python support the multi-modal changes to llama.cpp? | 1 | [removed] | 2025-06-23T05:06:14 | https://www.reddit.com/r/LocalLLaMA/comments/1li85wg/does_llama_cpp_python_support_the_multimodal/ | KDCreerStudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li85wg | false | null | t3_1li85wg | /r/LocalLLaMA/comments/1li85wg/does_llama_cpp_python_support_the_multimodal/ | false | false | self | 1 | null |
Fenix, a multi-agent trading bot I built to run entirely on a local Mac Mini using Ollama and quanti | 1 | [removed] | 2025-06-23T05:29:19 | https://www.reddit.com/r/LocalLLaMA/comments/1li8jiz/fenix_a_multiagent_trading_bot_i_built_to_run/ | MoveDecent3455 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li8jiz | false | null | t3_1li8jiz | /r/LocalLLaMA/comments/1li8jiz/fenix_a_multiagent_trading_bot_i_built_to_run/ | false | false | self | 1 | null |
test post. | 1 | [removed] | 2025-06-23T06:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1li93sw/test_post/ | No-Statement-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li93sw | false | null | t3_1li93sw | /r/LocalLLaMA/comments/1li93sw/test_post/ | false | false | self | 1 | null |
Will I be happy with a RTX 3090? | 1 | [removed] | 2025-06-23T06:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/1li94zg/will_i_be_happy_with_a_rtx_3090/ | eribob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li94zg | false | null | t3_1li94zg | /r/LocalLLaMA/comments/1li94zg/will_i_be_happy_with_a_rtx_3090/ | false | false | self | 1 | null |
Extract learning needs from an excel sheet | 1 | [removed] | 2025-06-23T06:33:24 | https://www.reddit.com/r/LocalLLaMA/comments/1li9jv3/extract_learning_needs_from_an_excel_sheet/ | Opening_Pollution_28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li9jv3 | false | null | t3_1li9jv3 | /r/LocalLLaMA/comments/1li9jv3/extract_learning_needs_from_an_excel_sheet/ | false | false | self | 1 | null |
Tools to improve sequential order of execution by LLM | 1 | [removed] | 2025-06-23T06:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1li9m05/tools_to_improve_sequential_order_of_execution_by/ | Puzzleheaded-Ad-1343 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li9m05 | false | null | t3_1li9m05 | /r/LocalLLaMA/comments/1li9m05/tools_to_improve_sequential_order_of_execution_by/ | false | false | self | 1 | null |
Could i fine tune a gemma 3 12b on a limited GPU ? | 1 | [removed] | 2025-06-23T06:50:25 | https://www.reddit.com/r/LocalLLaMA/comments/1li9t78/could_i_fine_tune_a_gemma_3_12b_on_a_limited_gpu/ | Head_Mushroom_3748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li9t78 | false | null | t3_1li9t78 | /r/LocalLLaMA/comments/1li9t78/could_i_fine_tune_a_gemma_3_12b_on_a_limited_gpu/ | false | false | self | 1 | null |
Run Llama3 and Mistral Models on your GPU in pure Java: We hit >100 toks/s with GPULlama3.java and Docker images are available | 1 | [removed] | 2025-06-23T07:26:55 | https://github.com/beehive-lab/GPULlama3.java | mikebmx1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1liacv6 | false | null | t3_1liacv6 | /r/LocalLLaMA/comments/1liacv6/run_llama3_and_mistral_models_on_your_gpu_in_pure/ | false | false | default | 1 | null |
Notebook LM AI podcast alternative | 1 | [removed] | 2025-06-23T07:27:21 | https://www.reddit.com/r/LocalLLaMA/comments/1liad3b/notebook_lm_ai_podcast_alternative/ | blackkksparx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liad3b | false | null | t3_1liad3b | /r/LocalLLaMA/comments/1liad3b/notebook_lm_ai_podcast_alternative/ | false | false | self | 1 | null |
Idea to speed up coding models | 1 | [removed] | 2025-06-23T07:33:45 | https://www.reddit.com/r/LocalLLaMA/comments/1liagd7/idea_to_speed_up_coding_models/ | Timotheeee1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liagd7 | false | null | t3_1liagd7 | /r/LocalLLaMA/comments/1liagd7/idea_to_speed_up_coding_models/ | false | false | self | 1 | null |
what happened to the sub why are there no posts and all comments are hidden | 1 | [removed] | 2025-06-23T07:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1liakkx/what_happened_to_the_sub_why_are_there_no_posts/ | visionsmemories | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liakkx | false | null | t3_1liakkx | /r/LocalLLaMA/comments/1liakkx/what_happened_to_the_sub_why_are_there_no_posts/ | false | false | self | 1 | null |
Searching for an Updated LLM Leaderboard Dataset | 1 | [removed] | 2025-06-23T08:29:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lib9j8/searching_for_an_updated_llm_leaderboard_dataset/ | razziath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lib9j8 | false | null | t3_1lib9j8 | /r/LocalLLaMA/comments/1lib9j8/searching_for_an_updated_llm_leaderboard_dataset/ | false | false | self | 1 | null |
Can Jetson Xavier NX (16GB) run LLaMA 3.1 8B locally? | 1 | [removed] | 2025-06-23T08:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/1libfq0/can_jetson_xavier_nx_16gb_run_llama_31_8b_locally/ | spacegeekOps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1libfq0 | false | null | t3_1libfq0 | /r/LocalLLaMA/comments/1libfq0/can_jetson_xavier_nx_16gb_run_llama_31_8b_locally/ | false | false | self | 1 | null |
Its all marketing... | 1 | [removed] | 2025-06-23T08:46:23 | freehuntx | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1libic7 | false | null | t3_1libic7 | /r/LocalLLaMA/comments/1libic7/its_all_marketing/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'mne7a0pd3n8f1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/mne7a0pd3n8f1.png?width=108&crop=smart&auto=webp&s=25fb492502a60b918fdac98e030184abdea44353', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/mne7a0pd3n8f1.png?width=216&crop=smart&auto=webp&s=355b0b2ef97bf0b186d8b89a0f46995f95f1f0c7', 'width': 216}, {'height': 323, 'url': 'https://preview.redd.it/mne7a0pd3n8f1.png?width=320&crop=smart&auto=webp&s=80215de9f472f3a8b9c8cbf1d35f691803119733', 'width': 320}], 'source': {'height': 617, 'url': 'https://preview.redd.it/mne7a0pd3n8f1.png?auto=webp&s=c2a089e1147fe8b1e2fd285ea12048989efd3b61', 'width': 610}, 'variants': {}}]} |
How can I make my own GPT-4o-Realtime-audio level AI voice (e.g., Mickey Mouse)? | 1 | [removed] | 2025-06-23T09:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/1libt2k/how_can_i_make_my_own_gpt4orealtimeaudio_level_ai/ | thibaudbrg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1libt2k | false | null | t3_1libt2k | /r/LocalLLaMA/comments/1libt2k/how_can_i_make_my_own_gpt4orealtimeaudio_level_ai/ | false | false | self | 1 | null |
I want to use local llm for a waste management tool | 1 | [removed] | 2025-06-23T09:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1libt5g/i_want_to_use_local_llm_for_a_waste_management/ | Sonder-Otis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1libt5g | false | null | t3_1libt5g | /r/LocalLLaMA/comments/1libt5g/i_want_to_use_local_llm_for_a_waste_management/ | false | false | self | 1 | null |
Why are there so many invisible posts and comments in Sub LocalLLaMA? | 1 | [removed] | 2025-06-23T09:28:58 | choose_a_guest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lic687 | false | null | t3_1lic687 | /r/LocalLLaMA/comments/1lic687/why_are_there_so_many_invisible_posts_and/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'kd9IUPkqQfgcPNQts3CA22i7tAd-qSGmlw_1tgpWAgA', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/1kgkmw91an8f1.png?width=108&crop=smart&auto=webp&s=8ded45aa191e76c8609c8a247f936956ba7cda8e', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/1kgkmw91an8f1.png?width=216&crop=smart&auto=webp&s=793865ba128d001a47cede1049ddaf579075fbf6', 'width': 216}, {'height': 283, 'url': 'https://preview.redd.it/1kgkmw91an8f1.png?width=320&crop=smart&auto=webp&s=8a955cc4ac0fae1d9b161c75b685d790a6afe118', 'width': 320}, {'height': 566, 'url': 'https://preview.redd.it/1kgkmw91an8f1.png?width=640&crop=smart&auto=webp&s=177cc9902adf28e73fa0691d999a40eff1f02b72', 'width': 640}], 'source': {'height': 773, 'url': 'https://preview.redd.it/1kgkmw91an8f1.png?auto=webp&s=3549fd9970942bf798ee804dc7cc2d5c9c05c7f5', 'width': 873}, 'variants': {}}]} |
Tower+ 72B is build on top of Qwen 2.5 72B | 1 | [removed] | 2025-06-23T09:40:57 | touhidul002 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1licctq | false | null | t3_1licctq | /r/LocalLLaMA/comments/1licctq/tower_72b_is_build_on_top_of_qwen_25_72b/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'jvb6cx2fdn8f1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=108&crop=smart&auto=webp&s=1b44cd01dd4e92f82a3efe01e0e55e294eb2455a', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=216&crop=smart&auto=webp&s=b34773c8b119902d4ef4a13296d038c2d27dbd1d', 'width': 216}, {'height': 111, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=320&crop=smart&auto=webp&s=8b17129616f914ece842c63dbb6145b1c1bb2073', 'width': 320}, {'height': 222, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=640&crop=smart&auto=webp&s=bc0d84560e28bda197e0a0650d7195c9562b51f3', 'width': 640}, {'height': 334, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=960&crop=smart&auto=webp&s=5fb303812dea98b4d15d5ff46b5c31a4c2756bd4', 'width': 960}, {'height': 376, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=1080&crop=smart&auto=webp&s=6bb52800657f1e6aa97760280f50ea0ff43017e9', 'width': 1080}], 'source': {'height': 766, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?auto=webp&s=8bc4908d9cbc88925c6b6ae2767ed9910b2e90e3', 'width': 2200}, 'variants': {}}]} |
Why are there so many invisible posts and comments in Sub LocalLLaMA? | 1 | [removed] | 2025-06-23T09:41:53 | choose_a_guest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1licdbd | false | null | t3_1licdbd | /r/LocalLLaMA/comments/1licdbd/why_are_there_so_many_invisible_posts_and/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'j68shdhkdn8f1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/j68shdhkdn8f1.png?width=108&crop=smart&auto=webp&s=39910e1086085ac0b6f2e4c95e899e1d063830b8', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/j68shdhkdn8f1.png?width=216&crop=smart&auto=webp&s=0a7e764c10b984981a1eca4ae5dc5f256f83f3b3', 'width': 216}, {'height': 283, 'url': 'https://preview.redd.it/j68shdhkdn8f1.png?width=320&crop=smart&auto=webp&s=bc9089bdd6bf499fe52e3e53f36cc015f216624a', 'width': 320}, {'height': 566, 'url': 'https://preview.redd.it/j68shdhkdn8f1.png?width=640&crop=smart&auto=webp&s=5b213c91854515cce0af876ba34789b9a2a26227', 'width': 640}], 'source': {'height': 773, 'url': 'https://preview.redd.it/j68shdhkdn8f1.png?auto=webp&s=c64b39209ddd1cd1aeda8b5d1640600d50d0c5f4', 'width': 873}, 'variants': {}}]} |
How to use gguf format model for image description? | 1 | [removed] | 2025-06-23T09:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1liceys/how_to_use_gguf_format_model_for_image_description/ | Best_Character_9311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liceys | false | null | t3_1liceys | /r/LocalLLaMA/comments/1liceys/how_to_use_gguf_format_model_for_image_description/ | false | false | self | 1 | null |
quantize: Handle user-defined pruning of whole layers (blocks) by EAddario · Pull Request #13037 · ggml-org/llama.cpp | 1 | [removed] | 2025-06-23T09:48:29 | https://github.com/ggml-org/llama.cpp/pull/13037 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1licgw0 | false | null | t3_1licgw0 | /r/LocalLLaMA/comments/1licgw0/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null |
quantize: Handle user-defined pruning of whole layers (blocks) by EAddario | 1 | 2025-06-23T09:49:28 | https://github.com/ggml-org/llama.cpp/pull/13037 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lichev | false | null | t3_1lichev | /r/LocalLLaMA/comments/1lichev/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null |
quantize: Handle user-defined pruning of whole layers | 1 | 2025-06-23T09:50:17 | https://github.com/ggml-org/llama.cpp/pull/13037 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lichur | false | null | t3_1lichur | /r/LocalLLaMA/comments/1lichur/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null |
pruning of whole layers | 1 | [removed] | 2025-06-23T09:51:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licia8 | false | null | t3_1licia8 | /r/LocalLLaMA/comments/1licia8/pruning_of_whole_layers/ | false | false | default | 1 | null |
quantize: Handle user-defined pruning of whole layers (blocks | 1 | [removed] | 2025-06-23T09:51:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licin2 | false | null | t3_1licin2 | /r/LocalLLaMA/comments/1licin2/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null |
quantize: Handle user-defined pruning of whole layers | 1 | [removed] | 2025-06-23T09:52:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licj7z | false | null | t3_1licj7z | /r/LocalLLaMA/comments/1licj7z/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null |
Gryphe/Codex-24B-Small-3.2 · Hugging Face | 1 | [removed] | 2025-06-23T09:53:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licjp9 | false | null | t3_1licjp9 | /r/LocalLLaMA/comments/1licjp9/gryphecodex24bsmall32_hugging_face/ | false | false | default | 1 | null |
We're ReadyTensor! | 1 | [removed] | 2025-06-23T10:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1licryi/were_readytensor/ | Ready_Tensor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1licryi | false | null | t3_1licryi | /r/LocalLLaMA/comments/1licryi/were_readytensor/ | false | false | self | 1 | null |
Looking for an upgrade from Meta-Llama-3.1-8B-Instruct-Q4_K_L.gguf, especially for letter parsing. Last time I looked into this was a very long time ago (7 months!) What are the best models nowadays? | 1 | [removed] | 2025-06-23T10:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/1licvv8/looking_for_an_upgrade_from/ | AuspiciousApple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1licvv8 | false | null | t3_1licvv8 | /r/LocalLLaMA/comments/1licvv8/looking_for_an_upgrade_from/ | false | false | self | 1 | null |
What's missing in local / open AI? | 1 | [removed] | 2025-06-23T10:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lidc1u/whats_missing_in_local_open_ai/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lidc1u | false | null | t3_1lidc1u | /r/LocalLLaMA/comments/1lidc1u/whats_missing_in_local_open_ai/ | false | false | self | 1 | null |
Just found out local LLaMA 3 days ago, started with LM Studio. Then, I tried to see what is the biggest model I could use. Don't mind the slow generation. Qwen3-32b Q8 gguf on LM Studio is better than Oobabooga? (PC: R5 3600, RTX3060 12GB, 32GB RAM). What is the best local LLaMA + internet setup? | 1 | [removed] | 2025-06-23T10:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lidg19/just_found_out_local_llama_3_days_ago_started/ | Mystvearn2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lidg19 | false | null | t3_1lidg19 | /r/LocalLLaMA/comments/1lidg19/just_found_out_local_llama_3_days_ago_started/ | false | false | self | 1 | null |
Where's activity? | 1 | [removed] | 2025-06-23T11:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lidysr/wheres_activity/ | Guilty-Race-9633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lidysr | false | null | t3_1lidysr | /r/LocalLLaMA/comments/1lidysr/wheres_activity/ | false | false | self | 1 | null |
Run Llama on iPhone’s Neural Engine - 0.05s to first token | 1 | [removed] | 2025-06-23T12:03:36 | Glad-Speaker3006 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1liessp | false | null | t3_1liessp | /r/LocalLLaMA/comments/1liessp/run_llama_on_iphones_neural_engine_005s_to_first/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'quzzmbr33o8f1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=108&crop=smart&auto=webp&s=c4ed8eb3f189310fd45b2506612c2cccd28e7da6', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=216&crop=smart&auto=webp&s=207c477c3ab3fc733aea407770ecf2f177259c93', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=320&crop=smart&auto=webp&s=30c6309a062ea35f4388610ec75d7d74bf226004', 'width': 320}, {'height': 801, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=640&crop=smart&auto=webp&s=b2028fcf2e02141203da4243ca62057a9cf8dae3', 'width': 640}, {'height': 1202, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=960&crop=smart&auto=webp&s=cb73a5d69050b71a0325a3523d96f08fed8d1b3b', 'width': 960}, {'height': 1353, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=1080&crop=smart&auto=webp&s=2a86b2e954bb5c15f934d8a733dcf7bbf3066b82', 'width': 1080}], 'source': {'height': 1615, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?auto=webp&s=7c2e6e30a1bad5e8cb9f02677aa48ae33b05af58', 'width': 1289}, 'variants': {}}]} |
Llama on iPhone's Neural Engine - 0.05s to first token | 1 | [removed] | 2025-06-23T12:10:13 | Glad-Speaker3006 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1liexm6 | false | null | t3_1liexm6 | /r/LocalLLaMA/comments/1liexm6/llama_on_iphones_neural_engine_005s_to_first_token/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'YVjyxgsIGxIo9mg_gJ1gblaxyKUr6ysq2kyS3LW_cxw', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?width=108&crop=smart&auto=webp&s=b6a12a2a8a26071421cfe1b75bf9334321f2ec90', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?width=216&crop=smart&auto=webp&s=c220019a2886abff07b1e181bb30f8f42f15d27f', 'width': 216}, {'height': 394, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?width=320&crop=smart&auto=webp&s=4ab4d543e4a43936a2955a119acb3b8b665df426', 'width': 320}, {'height': 789, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?width=640&crop=smart&auto=webp&s=9cd2315bc538f7de08aba53c4e23d88cdbc247c8', 'width': 640}, {'height': 1184, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?width=960&crop=smart&auto=webp&s=6569022a41a2a01dfca054a36338ca870d250fa0', 'width': 960}, {'height': 1332, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?width=1080&crop=smart&auto=webp&s=a9da21642b110861e4343cb7c751096d1afe158c', 'width': 1080}], 'source': {'height': 1586, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?auto=webp&s=3811e2b1f636fe33dd7ebeed5491171d9c75da06', 'width': 1285}, 'variants': {}}]} |
Kevin Durant - NBA star, is an early investor in Hugging Face (2017) | 1 | [removed] | 2025-06-23T12:38:39 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lifi4l | false | null | t3_1lifi4l | /r/LocalLLaMA/comments/1lifi4l/kevin_durant_nba_star_is_an_early_investor_in/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'afCCDtWnKEpurwPueUempZvBmyC4VOfpSx56OE9DHxk', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/tb8r81gv8o8f1.jpeg?width=108&crop=smart&auto=webp&s=7b7e56059161a0c3becbe6a003378572bc9910c7', 'width': 108}, {'height': 187, 'url': 'https://preview.redd.it/tb8r81gv8o8f1.jpeg?width=216&crop=smart&auto=webp&s=7d2277897a04e3511925deb34bf1e34112b1f189', 'width': 216}, {'height': 277, 'url': 'https://preview.redd.it/tb8r81gv8o8f1.jpeg?width=320&crop=smart&auto=webp&s=35ad69ba35853d875c5bbeb885cab9b46698d4aa', 'width': 320}, {'height': 555, 'url': 'https://preview.redd.it/tb8r81gv8o8f1.jpeg?width=640&crop=smart&auto=webp&s=e3f3e55e92ea511d984a26897dbaaffc52641321', 'width': 640}], 'source': {'height': 627, 'url': 'https://preview.redd.it/tb8r81gv8o8f1.jpeg?auto=webp&s=32314fef3065fe85dee9265de26443a7482eb8f5', 'width': 723}, 'variants': {}}]} |
Llama.cpp vulkan on termux giving "assertion errno = ETIME failed" | 1 | [removed] | 2025-06-23T12:50:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lifr7f/llamacpp_vulkan_on_termux_giving_assertion_errno/ | ExtremeAcceptable289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lifr7f | false | null | t3_1lifr7f | /r/LocalLLaMA/comments/1lifr7f/llamacpp_vulkan_on_termux_giving_assertion_errno/ | false | false | self | 1 | null |
Just Picked up a 16" M3 Pro 36GB MacBook Pro for $1,250. What should I run? | 1 | [removed] | 2025-06-23T13:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lifz7x/just_picked_up_a_16_m3_pro_36gb_macbook_pro_for/ | mentalasf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lifz7x | false | null | t3_1lifz7x | /r/LocalLLaMA/comments/1lifz7x/just_picked_up_a_16_m3_pro_36gb_macbook_pro_for/ | false | false | self | 1 | null |
AMD Formally Launches Radeon AI PRO 9000 Series | 1 | 2025-06-23T13:10:51 | https://www.techpowerup.com/338086/amd-formally-launches-ryzen-threadripper-pro-9000-and-radeon-ai-pro-9000-series | Risse | techpowerup.com | 1970-01-01T00:00:00 | 0 | {} | 1lig76b | false | null | t3_1lig76b | /r/LocalLLaMA/comments/1lig76b/amd_formally_launches_radeon_ai_pro_9000_series/ | false | false | default | 1 | null |
Are there any LLMs that are actually able to run on an "affordable" setup? Like, a server <$500/mo? | 1 | [removed] | 2025-06-23T13:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ligb2z/are_there_any_llms_that_are_actually_able_to_run/ | g15mouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ligb2z | false | null | t3_1ligb2z | /r/LocalLLaMA/comments/1ligb2z/are_there_any_llms_that_are_actually_able_to_run/ | false | false | self | 1 | null |
Vulkan + termux llama.cpp not working | 1 | [removed] | 2025-06-23T13:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ligiit/vulkan_termux_llamacpp_not_working/ | ExtremeAcceptable289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ligiit | false | null | t3_1ligiit | /r/LocalLLaMA/comments/1ligiit/vulkan_termux_llamacpp_not_working/ | false | false | self | 1 | null |
A team claimed that they fine-tuned a mistral-small to surpass most LLMs across different benchmarks | 1 | [removed] | 2025-06-23T13:37:48 | BreakfastFriendly728 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ligt5l | false | null | t3_1ligt5l | /r/LocalLLaMA/comments/1ligt5l/a_team_claimed_that_they_finetuned_a_mistralsmall/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'upjb09mwjo8f1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?width=108&crop=smart&auto=webp&s=8f5d6376276cea367ac070698a99e40f49223f4f', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?width=216&crop=smart&auto=webp&s=55cf68309bcda45a598bb9275008e090bc40cfc6', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?width=320&crop=smart&auto=webp&s=fd76d80d6232f21617652fec50d32374d0cfe286', 'width': 320}, {'height': 411, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?width=640&crop=smart&auto=webp&s=18d1ae5062a679eb9fc375abe592fc7ce6048c87', 'width': 640}, {'height': 617, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?width=960&crop=smart&auto=webp&s=2eec4acb7e541f4e9d6dce5e91281b76a6f73b72', 'width': 960}], 'source': {'height': 663, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?auto=webp&s=9b7d1b3a170453262d221f8f12a4997e03120da8', 'width': 1030}, 'variants': {}}]} |
No new posts & Missing comments on existing posts | 1 | [removed] | 2025-06-23T13:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ligu2j/no_new_posts_missing_comments_on_existing_posts/ | Mushoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ligu2j | false | null | t3_1ligu2j | /r/LocalLLaMA/comments/1ligu2j/no_new_posts_missing_comments_on_existing_posts/ | false | false | self | 1 | null |
Nanovllm a lightweight python implementation from the deepseek guys | 1 | [removed] | 2025-06-23T14:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lihd6v/nanovllm_a_lightweight_python_implementation_from/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lihd6v | false | null | t3_1lihd6v | /r/LocalLLaMA/comments/1lihd6v/nanovllm_a_lightweight_python_implementation_from/ | false | false | self | 1 | null |
Gemini weird behavior | 1 | [removed] | 2025-06-23T14:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lii1j2/gemini_weird_behavior/ | shahood123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lii1j2 | false | null | t3_1lii1j2 | /r/LocalLLaMA/comments/1lii1j2/gemini_weird_behavior/ | false | false | self | 1 | null |
LTT tests 4090 48gb cards from ebay. | 1 | 2025-06-23T14:47:40 | https://www.youtube.com/watch?v=HZgQp-WDebU | RedditUsr2 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1liiitu | false | {'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HZgQp-WDebU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA Never Authorized The Production Of This Card"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/HZgQp-WDebU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'NVIDIA Never Authorized The Production Of This Card', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1liiitu | /r/LocalLLaMA/comments/1liiitu/ltt_tests_4090_48gb_cards_from_ebay/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=108&crop=smart&auto=webp&s=34b6e95c9e78450a03bc17669db1039556875ab2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=216&crop=smart&auto=webp&s=94a5189da6314051515f34d0a46727096a47647f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=320&crop=smart&auto=webp&s=1fdb319a25ca00eba0456ee1f02c9bf5308cdb5e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?auto=webp&s=5ca2af1087455cec442de957ead14f0da81edf2e', 'width': 480}, 'variants': {}}]} |
What just happened? | 1 | [removed] | 2025-06-23T14:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1liints/what_just_happened/ | Anti-Hippy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liints | false | null | t3_1liints | /r/LocalLLaMA/comments/1liints/what_just_happened/ | false | false | self | 1 | null |
[OpenSource] A C library for embedding Apple Intelligence on-device Foundation models in any programming language or application with full support for native tool calling and MCP. | 1 | [removed] | 2025-06-23T15:06:26 | AndrewMD5 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lij0cp | false | null | t3_1lij0cp | /r/LocalLLaMA/comments/1lij0cp/opensource_a_c_library_for_embedding_apple/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '1pwr3sityo8f1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=108&crop=smart&format=png8&s=02a2dec4d6807528629fd690251a571a048559de', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=216&crop=smart&format=png8&s=d12b699eb8e5ed30b6332320fe3b9bd2fa0567a3', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=320&crop=smart&format=png8&s=e97e13b9dd510176501f6a05cf1feb8fe52acaae', 'width': 320}, {'height': 452, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=640&crop=smart&format=png8&s=8cdeb75dfd123bdd5a16e0d2a9da16afe13633bb', 'width': 640}], 'source': {'height': 588, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?format=png8&s=980391e210137ec30c5dc658fd6b8e74e9ba46b6', 'width': 831}, 'variants': {'gif': {'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=108&crop=smart&s=a7d7ebe05652859d59c4c1ed0db59f34c7c922d7', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=216&crop=smart&s=af136f9a881515dd72ba60bbea06195342a2e914', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=320&crop=smart&s=1c54f7b0e274b406dab26fe34fcbd938743598d9', 'width': 320}, {'height': 452, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=640&crop=smart&s=d360472b5e944813a99356c301b7fed3d5ebfbec', 'width': 640}], 'source': {'height': 588, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?s=dba9ef5c136c6e5fffacb2e5e00c9cec3160352f', 'width': 831}}, 'mp4': {'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=108&format=mp4&s=95c249f530e8706133f55da1c93be720c79c6462', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=216&format=mp4&s=c89f9e5d6618a58a7a4a029e22506956479df41d', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=320&format=mp4&s=04702b18567b1f56fb422479a037dac602df6afa', 'width': 320}, {'height': 452, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=640&format=mp4&s=a772baffc12eac6e059f79ce6fde6acd991171de', 'width': 640}], 'source': {'height': 588, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?format=mp4&s=46ed1e160c71f872bc496f04fb72d6a181bff55e', 'width': 831}}}}]} |
Anyone Using Local Models for Meeting Summarization? | 1 | [removed] | 2025-06-23T15:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lij43u/anyone_using_local_models_for_meeting/ | jaythesong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lij43u | false | null | t3_1lij43u | /r/LocalLLaMA/comments/1lij43u/anyone_using_local_models_for_meeting/ | false | false | self | 1 | null |
Rtx 4090 48g or rtx pro 6000 96g | 1 | [removed] | 2025-06-23T15:15:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lij8t7/rtx_4090_48g_or_rtx_pro_6000_96g/ | Fit_Camel_2459 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lij8t7 | false | null | t3_1lij8t7 | /r/LocalLLaMA/comments/1lij8t7/rtx_4090_48g_or_rtx_pro_6000_96g/ | false | false | self | 1 | null |
Advice needed: What is the most eficient way to use a local llm applied to web browsing. | 1 | [removed] | 2025-06-23T15:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lijggb/advice_needed_what_is_the_most_eficient_way_to/ | Interesting_Egg9997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijggb | false | null | t3_1lijggb | /r/LocalLLaMA/comments/1lijggb/advice_needed_what_is_the_most_eficient_way_to/ | false | false | self | 1 | null |
How was LLaMA 3.2 1B made? | 1 | [removed] | 2025-06-23T15:24:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lijh20/how_was_llama_32_1b_made/ | AntiquePercentage536 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijh20 | false | null | t3_1lijh20 | /r/LocalLLaMA/comments/1lijh20/how_was_llama_32_1b_made/ | false | false | self | 1 | null |
Have access to GPUs - wish to train something that's beneficial to the community | 1 | [removed] | 2025-06-23T15:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lijmts/have_access_to_gpus_wish_to_train_something_thats/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijmts | false | null | t3_1lijmts | /r/LocalLLaMA/comments/1lijmts/have_access_to_gpus_wish_to_train_something_thats/ | false | false | self | 1 | null |
Is there any modded GPU with 96GB of Vram? | 1 | [removed] | 2025-06-23T15:30:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lijmv5/is_there_any_modded_gpu_with_96gb_of_vram/ | polawiaczperel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijmv5 | false | null | t3_1lijmv5 | /r/LocalLLaMA/comments/1lijmv5/is_there_any_modded_gpu_with_96gb_of_vram/ | false | false | self | 1 | null |
installing external GPU card | 1 | [removed] | 2025-06-23T15:45:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lik0tm/installing_external_gpu_card/ | tr3g | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lik0tm | false | null | t3_1lik0tm | /r/LocalLLaMA/comments/1lik0tm/installing_external_gpu_card/ | false | false | self | 1 | null |
Open Source LLM Firewall (Self-Hosted, Policy-Driven) | 1 | [removed] | 2025-06-23T15:46:02 | https://github.com/trylonai/gateway | Consistent_Equal5327 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lik18g | false | null | t3_1lik18g | /r/LocalLLaMA/comments/1lik18g/open_source_llm_firewall_selfhosted_policydriven/ | false | false | default | 1 | null |
How to integrate dynamic citations in a RAG system with an LLM? | 1 | [removed] | 2025-06-23T15:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lik1vd/how_to_integrate_dynamic_citations_in_a_rag/ | Mobile_Estate_9160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lik1vd | false | null | t3_1lik1vd | /r/LocalLLaMA/comments/1lik1vd/how_to_integrate_dynamic_citations_in_a_rag/ | false | false | self | 1 | null |
Script Orchestration | 1 | [removed] | 2025-06-23T15:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lik8pk/script_orchestration/ | Loud-Bake-2740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lik8pk | false | null | t3_1lik8pk | /r/LocalLLaMA/comments/1lik8pk/script_orchestration/ | false | false | self | 1 | null |
50 Days of Building a Small Language Model from Scratch — Day 1: What Are Small Language Models? | 1 | [removed] | 2025-06-23T16:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/1likez1/50_days_of_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1likez1 | false | null | t3_1likez1 | /r/LocalLLaMA/comments/1likez1/50_days_of_building_a_small_language_model_from/ | false | false | 1 | null |
|
App that highlights text in pdf llm based its answer on? | 1 | [removed] | 2025-06-23T16:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/1likjy0/app_that_highlights_text_in_pdf_llm_based_its/ | Sea-Replacement7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1likjy0 | false | null | t3_1likjy0 | /r/LocalLLaMA/comments/1likjy0/app_that_highlights_text_in_pdf_llm_based_its/ | false | false | self | 1 | null |
I create Ghibli-AI-Art-Generator and Open source it | 1 | [removed] | 2025-06-23T16:05:48 | https://v.redd.it/luzutm9m8p8f1 | gaodalie | /r/LocalLLaMA/comments/1likkcm/i_create_ghibliaiartgenerator_and_open_source_it/ | 1970-01-01T00:00:00 | 0 | {} | 1likkcm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/luzutm9m8p8f1/DASHPlaylist.mpd?a=1753416352%2CMzI4ZmM3ZDhmZDliZjlmMGIyOGFmN2YwYTU4MmIwZDBkNGEwZWMzZDI2YjhhNTI4OWYwNDhkOWRhNjEwY2M3OA%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/luzutm9m8p8f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/luzutm9m8p8f1/HLSPlaylist.m3u8?a=1753416352%2CYjBhZjRiYTU1ZDFiM2ZkZjRlMTgxYzc4ZDk1YmM5MmFmMWZmMDMxNmY0MjY2OWMyNjRkM2FhZjFmYzA2ZGYyNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/luzutm9m8p8f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1likkcm | /r/LocalLLaMA/comments/1likkcm/i_create_ghibliaiartgenerator_and_open_source_it/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?width=108&crop=smart&format=pjpg&auto=webp&s=7fce48988ba2d8da169cd52077f337cd498bfcea', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?width=216&crop=smart&format=pjpg&auto=webp&s=9c1e13875cb7e26286f05067c4dc70f9f1f425ff', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?width=320&crop=smart&format=pjpg&auto=webp&s=7c0b4b05f6c1d2ec401a62e691dcfab0fe6b3943', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?width=640&crop=smart&format=pjpg&auto=webp&s=ead65e827b120637b545f4cfe582f0889c1f0dd2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?width=960&crop=smart&format=pjpg&auto=webp&s=ca4a11dfc997fc436778c86574c27486545dba45', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7c6791efebb8e1d586a66d2bf1d495e4671193af', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?format=pjpg&auto=webp&s=586643da9bfac54e6b6cf9e6a3f279db45a4923c', 'width': 1920}, 'variants': {}}]} |
|
Day 1 of 50 Days of Building a Small Language Model from Scratch Topic: What is a Small Language Model (SLM)? | 3 | [removed] | 2025-06-23T16:15:38 | https://www.reddit.com/r/LocalLLaMA/comments/1liktwh/day_1_of_50_days_of_building_a_small_language/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liktwh | false | null | t3_1liktwh | /r/LocalLLaMA/comments/1liktwh/day_1_of_50_days_of_building_a_small_language/ | false | false | 3 | null |
|
Teach LLM to play Tetris | 1 | [removed] | 2025-06-23T16:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lilc9y/teach_llm_to_play_tetris/ | hadoopfromscratch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lilc9y | false | null | t3_1lilc9y | /r/LocalLLaMA/comments/1lilc9y/teach_llm_to_play_tetris/ | false | false | self | 1 | null |
Linus tech tips 48gb 4090 | 1 | [removed] | 2025-06-23T16:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lilda6/linus_tech_tips_48gb_4090/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lilda6 | false | null | t3_1lilda6 | /r/LocalLLaMA/comments/1lilda6/linus_tech_tips_48gb_4090/ | false | false | self | 1 | null |
Computing power needed to run something equal to Veo 3 or kling 2.1 locally | 1 | [removed] | 2025-06-23T16:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lilg7h/computing_power_needed_to_run_something_equal_to/ | Inevitable_Drive4729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lilg7h | false | null | t3_1lilg7h | /r/LocalLLaMA/comments/1lilg7h/computing_power_needed_to_run_something_equal_to/ | false | false | self | 1 | null |
Has anybody else found DeepSeek-R1-0528-Qwen3-8B to be wildly unreliable? | 1 | [removed] | 2025-06-23T16:42:37 | https://www.reddit.com/r/LocalLLaMA/comments/1liljg6/has_anybody_else_found_deepseekr10528qwen38b_to/ | Quagmirable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liljg6 | false | null | t3_1liljg6 | /r/LocalLLaMA/comments/1liljg6/has_anybody_else_found_deepseekr10528qwen38b_to/ | false | false | self | 1 | null |
Paradigm shift: Polaris takes local models to the next level. | 1 | [removed] | 2025-06-23T16:50:25 | Ordinary_Mud7430 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lilqrp | false | null | t3_1lilqrp | /r/LocalLLaMA/comments/1lilqrp/paradigm_shift_polaris_takes_local_models_to_the/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'qvd3fu1aip8f1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=108&crop=smart&auto=webp&s=edd5d010370cd7d0514093b0caae6ca87889615d', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=216&crop=smart&auto=webp&s=5d61f992ec776560aca3da2cfb570058e2687e4a', 'width': 216}, {'height': 209, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=320&crop=smart&auto=webp&s=42ed12560560824dc42198646473f3108c06dca7', 'width': 320}, {'height': 419, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=640&crop=smart&auto=webp&s=d26536541b9f5c9bc26640edbd9c41a5193a0ff3', 'width': 640}, {'height': 629, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=960&crop=smart&auto=webp&s=adef15493132ceabf24acd802e8e6aeab601db9e', 'width': 960}, {'height': 707, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=1080&crop=smart&auto=webp&s=d006fc2d6d759fef76014546e6053f49a4c97995', 'width': 1080}], 'source': {'height': 839, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?auto=webp&s=618a946c90229d50fe0d60da0896cf43546c96bb', 'width': 1280}, 'variants': {}}]} |
|
LM Studio seems to be much slower than Ollama, but Ollama's CLI is pretty limited. Is there a middle ground here? | 1 | [removed] | 2025-06-23T16:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/1liltdi/lm_studio_seems_to_be_much_slower_than_ollama_but/ | nat2r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liltdi | false | null | t3_1liltdi | /r/LocalLLaMA/comments/1liltdi/lm_studio_seems_to_be_much_slower_than_ollama_but/ | false | false | self | 1 | null |
What gemma-3 (12b and 27b) version are you using/do you prefer? | 1 | [removed] | 2025-06-23T17:22:28 | https://www.reddit.com/r/LocalLLaMA/comments/1limlml/what_gemma3_12b_and_27b_version_are_you_usingdo/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limlml | false | null | t3_1limlml | /r/LocalLLaMA/comments/1limlml/what_gemma3_12b_and_27b_version_are_you_usingdo/ | false | false | self | 1 | null |
Sharing My 2-Week Solo Build: Local LLM Chat App with Characters, Inline Suggestions, and Prompt Tools | 1 | [removed] | 2025-06-23T17:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/1limnk1/sharing_my_2week_solo_build_local_llm_chat_app/ | RIPT1D3_Z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limnk1 | false | null | t3_1limnk1 | /r/LocalLLaMA/comments/1limnk1/sharing_my_2week_solo_build_local_llm_chat_app/ | false | false | 1 | null |
|
having trouble using LMStudio | 1 | [removed] | 2025-06-23T17:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/1limp00/having_trouble_using_lmstudio/ | LazyChampionship5819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limp00 | false | null | t3_1limp00 | /r/LocalLLaMA/comments/1limp00/having_trouble_using_lmstudio/ | false | false | 1 | null |
|
Are we there with Local Code dev? | 1 | [removed] | 2025-06-23T17:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1limzib/are_we_there_with_local_code_dev/ | sandwich_stevens | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limzib | false | null | t3_1limzib | /r/LocalLLaMA/comments/1limzib/are_we_there_with_local_code_dev/ | false | false | self | 1 | null |
Power required to run something like veo 3 or kling 2.1 locally | 1 | [removed] | 2025-06-23T17:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lin2sj/power_required_to_run_something_like_veo_3_or/ | Inevitable_Drive4729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lin2sj | false | null | t3_1lin2sj | /r/LocalLLaMA/comments/1lin2sj/power_required_to_run_something_like_veo_3_or/ | false | false | self | 1 | null |
What local hosted chat/story front ends to open ai compatable api's may I have not heard of? | 1 | [removed] | 2025-06-23T18:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1linot4/what_local_hosted_chatstory_front_ends_to_open_ai/ | mrgreaper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1linot4 | false | null | t3_1linot4 | /r/LocalLLaMA/comments/1linot4/what_local_hosted_chatstory_front_ends_to_open_ai/ | false | false | self | 1 | null |
Code with your voice | 1 | [removed] | 2025-06-23T18:24:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lio8xa/code_with_your_voice/ | D3c1m470r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lio8xa | false | null | t3_1lio8xa | /r/LocalLLaMA/comments/1lio8xa/code_with_your_voice/ | false | false | self | 1 | null |
Has anybody else found DeepSeek R1 0528 Qwen3 8B to be wildly unreliable? | 1 | [removed] | 2025-06-23T18:49:16 | https://www.reddit.com/r/LocalLLaMA/comments/1liowi7/has_anybody_else_found_deepseek_r1_0528_qwen3_8b/ | Quagmirable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liowi7 | false | null | t3_1liowi7 | /r/LocalLLaMA/comments/1liowi7/has_anybody_else_found_deepseek_r1_0528_qwen3_8b/ | false | false | self | 1 | null |
Guys wake up.... We can clone ourselves. | 1 | [removed] | 2025-06-23T18:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lip1pz/guys_wake_up_we_can_clone_ourselves/ | its_akphyo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lip1pz | false | null | t3_1lip1pz | /r/LocalLLaMA/comments/1lip1pz/guys_wake_up_we_can_clone_ourselves/ | false | false | self | 1 | null |
Facefusion launches HyperSwap 256 model seems to outperform INSwapper 128 | 1 | [removed] | 2025-06-23T19:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1liphbc/facefusion_launches_hyperswap_256_model_seems_to/ | khubebk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liphbc | false | null | t3_1liphbc | /r/LocalLLaMA/comments/1liphbc/facefusion_launches_hyperswap_256_model_seems_to/ | false | false | 1 | null |
|
What are the best self-hosted AI code assistants for local development without relying on cloud APIs? | 1 | [removed] | 2025-06-23T19:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/1liplc0/what_are_the_best_selfhosted_ai_code_assistants/ | Sorry-Dragonfruit738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liplc0 | false | null | t3_1liplc0 | /r/LocalLLaMA/comments/1liplc0/what_are_the_best_selfhosted_ai_code_assistants/ | false | false | self | 1 | null |
Don’t Just Throw AI at Problems – How to Design Great Use Cases | 1 | 2025-06-23T19:23:46 | https://upwarddynamism.com/ai-use-cases-prompts/design-thinking-gen-ai-use-cases/ | DarknStormyKnight | upwarddynamism.com | 1970-01-01T00:00:00 | 0 | {} | 1liptb1 | false | null | t3_1liptb1 | /r/LocalLLaMA/comments/1liptb1/dont_just_throw_ai_at_problems_how_to_design/ | false | false | default | 1 | null |
|
What is LlamaBarn (llama.cpp) | 1 | [removed] | 2025-06-23T19:31:26 | https://www.reddit.com/r/LocalLLaMA/comments/1liq0fa/what_is_llamabarn_llamacpp/ | Broke_DBA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liq0fa | false | null | t3_1liq0fa | /r/LocalLLaMA/comments/1liq0fa/what_is_llamabarn_llamacpp/ | false | false | 1 | null |
|
Gave full control to AI for one feature and instantly regretted it | 1 | [removed] | 2025-06-23T19:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1liq706/gave_full_control_to_ai_for_one_feature_and/ | eastwindtoday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liq706 | false | null | t3_1liq706 | /r/LocalLLaMA/comments/1liq706/gave_full_control_to_ai_for_one_feature_and/ | false | false | self | 1 | null |
AMD Instinct MI60 (32gb VRAM) "llama bench" results for 10 models - Qwen3 30B A3B Q4_0 resulted in: pp512 - 1,165 t/s | tg128 68 t/s - Overall very pleased and resulted in a better outcome for my use case than I even expected | 1 | [removed] | 2025-06-23T19:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/1liqd7c/amd_instinct_mi60_32gb_vram_llama_bench_results/ | FantasyMaster85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liqd7c | false | null | t3_1liqd7c | /r/LocalLLaMA/comments/1liqd7c/amd_instinct_mi60_32gb_vram_llama_bench_results/ | false | false | 1 | null |
|
Anyone tried to repurpose crypto mining rigs and use them for GenAI? | 1 | [removed] | 2025-06-23T19:48:57 | https://www.reddit.com/r/LocalLLaMA/comments/1liqgsz/anyone_tried_to_repurpose_crypto_mining_rigs_and/ | Illustrious_Swim9349 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liqgsz | false | null | t3_1liqgsz | /r/LocalLLaMA/comments/1liqgsz/anyone_tried_to_repurpose_crypto_mining_rigs_and/ | false | false | self | 1 | null |
Made An LLM Client for the PS Vita | 1 | [removed] | 2025-06-23T20:04:54 | https://v.redd.it/26zwv16h6q8f1 | ajunior7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1liqvfb | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/26zwv16h6q8f1/DASHPlaylist.mpd?a=1753302367%2CYjQwZTIyMmY1MWJiYzRlOGQ5ZDNhNzQ3NzQzNTRmMzNlZTcyODc2N2E2YmMxM2M4OWFkZGJmODM4NjIzZDQwMw%3D%3D&v=1&f=sd', 'duration': 117, 'fallback_url': 'https://v.redd.it/26zwv16h6q8f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/26zwv16h6q8f1/HLSPlaylist.m3u8?a=1753302367%2CNGFkMzY5ZjIyOTQ4ZTM0YTM3MWQ1MzNkMDFmZTAxNmI4MDJiNTU0M2Y1M2U2ZDg0NDU1ZWZhMGE0OTQ1YmJkYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/26zwv16h6q8f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1liqvfb | /r/LocalLLaMA/comments/1liqvfb/made_an_llm_client_for_the_ps_vita/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?width=108&crop=smart&format=pjpg&auto=webp&s=e3ae1b8a163e8e7b2db34c6fc178650e02ce2982', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?width=216&crop=smart&format=pjpg&auto=webp&s=4f1c1426644bd91a19d19ac95a90006d5790a89c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?width=320&crop=smart&format=pjpg&auto=webp&s=dd7ed506258d7eabb29c4b35f9ccc5537a756464', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?width=640&crop=smart&format=pjpg&auto=webp&s=a7bd36d550243f275f048fdac912f78054b413ea', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?width=960&crop=smart&format=pjpg&auto=webp&s=bd61f2edf8cd9c513ee840d8767e1ed98c7bbe7e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6fdca0baa837af994bf4d832d3e7faabecedbfe9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bGp2bzYyNmg2cThmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?format=pjpg&auto=webp&s=c33b6e1275a8782632086e9143285eb32b99eedb', 'width': 1920}, 'variants': {}}]} |
|
Translation benchmark? | 1 | [removed] | 2025-06-23T20:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lirb0s/translation_benchmark/ | Educational_Grab_473 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lirb0s | false | null | t3_1lirb0s | /r/LocalLLaMA/comments/1lirb0s/translation_benchmark/ | false | false | self | 1 | null |
I figured out how to build AGI and here is my research work | 1 | [removed] | 2025-06-23T20:25:09 | https://playwithagi.com/ | Altruistic-Tea-5612 | playwithagi.com | 1970-01-01T00:00:00 | 0 | {} | 1lireai | false | null | t3_1lireai | /r/LocalLLaMA/comments/1lireai/i_figured_out_how_to_build_agi_and_here_is_my/ | false | false | default | 1 | null |
I'm building a 100% private, local AI assistant, but hit a wall with internet access. So I built this privacy-first solution. What do you think? | 1 | [removed] | 2025-06-23T20:27:50 | SmarterWaysProd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lirgtu | false | null | t3_1lirgtu | /r/LocalLLaMA/comments/1lirgtu/im_building_a_100_private_local_ai_assistant_but/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'C7L95vQWMxYrKdqI9qKgLiJGtPYHTdyvtW1JZDzRcus', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/quxy8ypzjq8f1.png?width=108&crop=smart&auto=webp&s=080a1232fc40a5a71a2eb3afc05df7a15c1af220', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/quxy8ypzjq8f1.png?width=216&crop=smart&auto=webp&s=e0d78e066b752486003495995bde6162c5b5f795', 'width': 216}, {'height': 246, 'url': 'https://preview.redd.it/quxy8ypzjq8f1.png?width=320&crop=smart&auto=webp&s=2445f4b608f44aea658474e07eeeaf16a2b4bdb3', 'width': 320}], 'source': {'height': 382, 'url': 'https://preview.redd.it/quxy8ypzjq8f1.png?auto=webp&s=6e2efacd5203eef369a32c53edf79e6d4b673904', 'width': 496}, 'variants': {}}]} |
[Off Topic] What happened to this post? | 1 | [removed] | 2025-06-23T20:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lirlqg/off_topic_what_happened_to_this_post/ | IngenuityNo1411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lirlqg | false | null | t3_1lirlqg | /r/LocalLLaMA/comments/1lirlqg/off_topic_what_happened_to_this_post/ | false | false | 1 | null |
|
Trying to Learn AI on My Own – Need Help Creating a Roadmap | 1 | [removed] | 2025-06-23T20:33:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lirlx8/trying_to_learn_ai_on_my_own_need_help_creating_a/ | Specialist_Cry2443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lirlx8 | false | null | t3_1lirlx8 | /r/LocalLLaMA/comments/1lirlx8/trying_to_learn_ai_on_my_own_need_help_creating_a/ | false | false | self | 1 | null |
I’ve been building an AI platform called “I AM”, like ChatGPT but more personal | 1 | [removed] | 2025-06-23T20:52:37 | https://v.redd.it/w9wlrpoepq8f1 | axeltdesign23 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lis40a | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w9wlrpoepq8f1/DASHPlaylist.mpd?a=1753303973%2CZDViZDAxZDY5OGZjNTAxNGJjNWNiMjU2NWEzMzVlNDI3YzkxN2Q0NDRlM2QwMmE2NWQ2Mzg2YmIxMDFlMjYwNw%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/w9wlrpoepq8f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1042, 'hls_url': 'https://v.redd.it/w9wlrpoepq8f1/HLSPlaylist.m3u8?a=1753303973%2COTM4MTEyM2FlZjNmYjhiMDYxOWZlOGVkZmM1NWE3YzVkM2UxYmY1YTZiMWUwNzNhM2U0NGUxZjBhZDdmYjc1Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w9wlrpoepq8f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1lis40a | /r/LocalLLaMA/comments/1lis40a/ive_been_building_an_ai_platform_called_i_am_like/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU.png?width=108&crop=smart&format=pjpg&auto=webp&s=4f7e8b6efae95e58ee08ccc012532cd3f1fc2188', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU.png?width=216&crop=smart&format=pjpg&auto=webp&s=34f25bd5e0e2679b633a225919b979147a9e0bac', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU.png?width=320&crop=smart&format=pjpg&auto=webp&s=0d8b013656a5abe86fea6ff69a84904d0cdb4b90', 'width': 320}, {'height': 347, 'url': 'https://external-preview.redd.it/dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU.png?width=640&crop=smart&format=pjpg&auto=webp&s=c1678bd740c6cdb431239f75ea32e9fbc441fc2e', 'width': 640}, {'height': 521, 'url': 'https://external-preview.redd.it/dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU.png?width=960&crop=smart&format=pjpg&auto=webp&s=dd376676c43635fa76c2eb7adc4ad43c475e6d46', 'width': 960}, {'height': 586, 'url': 'https://external-preview.redd.it/dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b285d58c892215d6cefcb25e12526b98d63957bf', 'width': 1080}], 'source': {'height': 1642, 'url': 'https://external-preview.redd.it/dmJzczRxb2VwcThmMZ6P8Scnx6jx3p1Z6WlHM7krHJZkPpKvoWvFYtblBHeU.png?format=pjpg&auto=webp&s=fa05149ee7d5344c089a01b48e6d39f546711889', 'width': 3024}, 'variants': {}}]} |
|
New Gemini Model Released on google ai studio!!!! | 1 | [removed] | 2025-06-23T21:18:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lisrxl/new_gemini_model_released_on_google_ai_studio/ | Minute_Window_9258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lisrxl | false | null | t3_1lisrxl | /r/LocalLLaMA/comments/1lisrxl/new_gemini_model_released_on_google_ai_studio/ | false | false | 1 | null |
|
Best model to code a game with unity | 1 | [removed] | 2025-06-23T21:26:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lisz0r/best_model_to_code_a_game_with_unity/ | Momkiller781 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lisz0r | false | null | t3_1lisz0r | /r/LocalLLaMA/comments/1lisz0r/best_model_to_code_a_game_with_unity/ | false | false | self | 1 | null |