title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns] | url
stringlengths 0
780
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns] | gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Are distilled versions of R1 also reasoning models? | 1 | Hi. There are quantised versions of R1 available on e.g. Ollama, where it is pretty clear what they are, like other models.
But then we have distilled variants, named after some other models such as Llamma and Qwen. Are these, as I understood, simple Qwen/Llamma models trained (or fine-tuned?) on R1-generated samples? And if yes, does that mean these are "normal" models, i.e. non-reasoning?
Many thanks | 2025-01-30T13:45:33 | https://www.reddit.com/r/LocalLLaMA/comments/1idna31/are_distilled_versions_of_r1_also_reasoning_models/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idna31 | false | null | t3_1idna31 | /r/LocalLLaMA/comments/1idna31/are_distilled_versions_of_r1_also_reasoning_models/ | false | false | self | 1 | null |
Survey for academic purposes on LFM-Integrated Applications | 1 | [removed] | 2025-01-30T14:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/1idnksa/survey_for_academic_purposes_on_lfmintegrated/ | GitEmotional6541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idnksa | false | null | t3_1idnksa | /r/LocalLLaMA/comments/1idnksa/survey_for_academic_purposes_on_lfmintegrated/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kQhssUlUSM0qgue2wBxfCbAvmh9qoV4YqJu8cwW-0uE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=108&crop=smart&auto=webp&s=ad247f4efe58b0a016210dbe920eeb205ed16e47', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=216&crop=smart&auto=webp&s=4b5b419f4be00fd7a62a915994690ecd246d240d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=320&crop=smart&auto=webp&s=f1289163fc9414d2b8b7a69dd1ef7660f9f5c44e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=640&crop=smart&auto=webp&s=6c920d8f7878ad4e24af7c49001d6a81d6325c81', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=960&crop=smart&auto=webp&s=d18cef19a9472d099facd60e5628cf06e1c7a5ad', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?width=1080&crop=smart&auto=webp&s=ee275303bfe55d4ddbd37c5b23b1d85722466fa0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VR1X5kaIbaD75ax45vLKVFHL2f598YBb_VYIRNUeRJA.jpg?auto=webp&s=c37947c57981e8345c0edbe171c19d95a2c46a8b', 'width': 1200}, 'variants': {}}]} |
My CPU-only DeepSeek-R1 671B local infer setup at 7token/s (llama.cpp) | 1 | [removed] | 2025-01-30T14:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1idnnk6/my_cpuonly_deepseekr1_671b_local_infer_setup_at/ | lyc8503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idnnk6 | false | null | t3_1idnnk6 | /r/LocalLLaMA/comments/1idnnk6/my_cpuonly_deepseekr1_671b_local_infer_setup_at/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uTH7CSuJT7xESL5xe7-CfxFMSg_JTf4Nma4bQtm-hmo', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=108&crop=smart&auto=webp&s=e3cd0740d84b19d7a2f89abe699010990f86feba', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=216&crop=smart&auto=webp&s=8f5f44e359aeedaabbccbc32ce33767386f46c66', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=320&crop=smart&auto=webp&s=2c7283aa3b6fb2f53d23b42ab2a81687cd0cb89d', 'width': 320}, {'height': 492, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=640&crop=smart&auto=webp&s=eae05015ec695280f7aec2dd5e5959755977ca83', 'width': 640}, {'height': 739, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=960&crop=smart&auto=webp&s=961c74461a7f004a24e74fcfbd31f7ffd5328f85', 'width': 960}, {'height': 831, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?width=1080&crop=smart&auto=webp&s=47c4e7fa92f45dddf33f34f3105efa57e20c158d', 'width': 1080}], 'source': {'height': 1107, 'url': 'https://external-preview.redd.it/TQ3QORamhKoNE3D3kKn-aUsnWW5gT_p24MVwsipPPLs.jpg?auto=webp&s=06a6d7e68b81b39bb13f7081ea313fe02f2a2edf', 'width': 1438}, 'variants': {}}]} |
Local LLM model that can help code a video game?? | 1 | [removed] | 2025-01-30T14:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1idnqhe/local_llm_model_that_can_help_code_a_video_game/ | No_Prompt5941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idnqhe | false | null | t3_1idnqhe | /r/LocalLLaMA/comments/1idnqhe/local_llm_model_that_can_help_code_a_video_game/ | false | false | self | 1 | null |
mistralai/Mistral-Small-24B-Instruct-2501 · Hugging Face | 44 | 2025-01-30T14:17:54 | https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1idny30 | false | null | t3_1idny30 | /r/LocalLLaMA/comments/1idny30/mistralaimistralsmall24binstruct2501_hugging_face/ | false | false | 44 | {'enabled': False, 'images': [{'id': 'aeWswNKMGQzcuOpdAD1mvC150BgCah362E9mAmhFdK8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=108&crop=smart&auto=webp&s=6b571bd52f9c57860adc4553a4561c23986192bd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=216&crop=smart&auto=webp&s=40d826f3f7bfed2feae667631ab1454941b3c5d6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=320&crop=smart&auto=webp&s=9700c57e053444bc94af48c0a2c9c588586047b0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=640&crop=smart&auto=webp&s=0fb438638f6c72b0b1b0db98ac490b6bda471f2f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=960&crop=smart&auto=webp&s=911dfbd7ee6bf2197477a15242921e92dc4bf894', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=1080&crop=smart&auto=webp&s=a5db6af970ee7968178f13ef7a30621f18f6597c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?auto=webp&s=70c585703e4c40cfda4801728ae6784163f24426', 'width': 1200}, 'variants': {}}]} |
||
Mistral Small 3 | 954 | 2025-01-30T14:17:56 | khubebk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idny3w | false | null | t3_1idny3w | /r/LocalLLaMA/comments/1idny3w/mistral_small_3/ | false | false | 954 | {'enabled': True, 'images': [{'id': 'LYnVoM_ey3l07o7VrCNOYqsXa0uxLkLve5PJNdYGJvw', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/kj3s0jvr35ge1.png?width=108&crop=smart&auto=webp&s=7e561190b8f4b9bbf9565db445ea377f03e6fffa', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/kj3s0jvr35ge1.png?width=216&crop=smart&auto=webp&s=269147277ffc1498a691ebe28a1822060f13d19a', 'width': 216}, {'height': 115, 'url': 'https://preview.redd.it/kj3s0jvr35ge1.png?width=320&crop=smart&auto=webp&s=0317aadc49155a8df1074618844c589ea3d2753d', 'width': 320}], 'source': {'height': 211, 'url': 'https://preview.redd.it/kj3s0jvr35ge1.png?auto=webp&s=4a79f7f9a22eeb5e2163a1ef0813ebca77a5d01b', 'width': 584}, 'variants': {}}]} |
|||
mistralai/Mistral-Small-24B-Instruct-2501 | 55 | [https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)
its show time folks | 2025-01-30T14:18:01 | https://www.reddit.com/r/LocalLLaMA/comments/1idny6j/mistralaimistralsmall24binstruct2501/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idny6j | false | null | t3_1idny6j | /r/LocalLLaMA/comments/1idny6j/mistralaimistralsmall24binstruct2501/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'aeWswNKMGQzcuOpdAD1mvC150BgCah362E9mAmhFdK8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=108&crop=smart&auto=webp&s=6b571bd52f9c57860adc4553a4561c23986192bd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=216&crop=smart&auto=webp&s=40d826f3f7bfed2feae667631ab1454941b3c5d6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=320&crop=smart&auto=webp&s=9700c57e053444bc94af48c0a2c9c588586047b0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=640&crop=smart&auto=webp&s=0fb438638f6c72b0b1b0db98ac490b6bda471f2f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=960&crop=smart&auto=webp&s=911dfbd7ee6bf2197477a15242921e92dc4bf894', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?width=1080&crop=smart&auto=webp&s=a5db6af970ee7968178f13ef7a30621f18f6597c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8WbpVBLCMGZToRjDI6ufWcY1nKKfpipz-TKy8aHrbsg.jpg?auto=webp&s=70c585703e4c40cfda4801728ae6784163f24426', 'width': 1200}, 'variants': {}}]} |
Local Llama with LlamaSharp performance and garbage output questions | 1 | [removed] | 2025-01-30T14:18:03 | https://www.reddit.com/r/LocalLLaMA/comments/1idny7k/local_llama_with_llamasharp_performance_and/ | itam_ws | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idny7k | false | null | t3_1idny7k | /r/LocalLLaMA/comments/1idny7k/local_llama_with_llamasharp_performance_and/ | false | false | self | 1 | null |
mistralai/Mistral-Small-24B-Base-2501 · Hugging Face | 370 | 2025-01-30T14:18:23 | https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1idnyhh | false | null | t3_1idnyhh | /r/LocalLLaMA/comments/1idnyhh/mistralaimistralsmall24bbase2501_hugging_face/ | false | false | 370 | {'enabled': False, 'images': [{'id': '7lE4Wnxw1CEo4OPsbDyP4e3nHG8TQSBI5fN7aMQPbj8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lDGKmq6pSZNpISh4piV15abwPTUoM5lDEjjJ9qZ_vd4.jpg?width=108&crop=smart&auto=webp&s=e5023793afcea0d492603cb14f8308bf57f2dda6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lDGKmq6pSZNpISh4piV15abwPTUoM5lDEjjJ9qZ_vd4.jpg?width=216&crop=smart&auto=webp&s=619d890b4c870fc3d3302944ba2d873822896384', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lDGKmq6pSZNpISh4piV15abwPTUoM5lDEjjJ9qZ_vd4.jpg?width=320&crop=smart&auto=webp&s=4c1622a05fe7e07ec42e8ddca71d02aae06eea91', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lDGKmq6pSZNpISh4piV15abwPTUoM5lDEjjJ9qZ_vd4.jpg?width=640&crop=smart&auto=webp&s=56053b8ce77cd587b1abeda9737783c65c0ebab8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lDGKmq6pSZNpISh4piV15abwPTUoM5lDEjjJ9qZ_vd4.jpg?width=960&crop=smart&auto=webp&s=acd4af78eb74b237887cd091c0b34c45b6d90b8e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lDGKmq6pSZNpISh4piV15abwPTUoM5lDEjjJ9qZ_vd4.jpg?width=1080&crop=smart&auto=webp&s=254e5ab10b29bcf7b3e367f5eac5abcb1f9d2eff', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lDGKmq6pSZNpISh4piV15abwPTUoM5lDEjjJ9qZ_vd4.jpg?auto=webp&s=f1b839928cb9ce4699a148415542955ba8c0b94e', 'width': 1200}, 'variants': {}}]} |
||
Mistral Small | 122 | Mistral Small
Apache 2.0, 81% MMLU, 150 tokens/s
[https://mistral.ai/news/mistral-small-3/](https://mistral.ai/news/mistral-small-3/) | 2025-01-30T14:24:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ido2up/mistral_small/ | MLTyrunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ido2up | false | null | t3_1ido2up | /r/LocalLLaMA/comments/1ido2up/mistral_small/ | false | false | self | 122 | {'enabled': False, 'images': [{'id': 'z4R9sciZWIptSfxALk1BImwjohW0P6DvdrOBaBFKGF8', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/ICxWwDTSXn44V9uQpE9j6ZxEJWv3LlcKh8PZK2KKs8Q.jpg?width=108&crop=smart&auto=webp&s=de89d58ad1d0cdfd5e1e67fc16b47bf02533da15', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/ICxWwDTSXn44V9uQpE9j6ZxEJWv3LlcKh8PZK2KKs8Q.jpg?width=216&crop=smart&auto=webp&s=4f98c8ce3a15663330ec3cb49d712b2fbe59dcee', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/ICxWwDTSXn44V9uQpE9j6ZxEJWv3LlcKh8PZK2KKs8Q.jpg?width=320&crop=smart&auto=webp&s=d1211d6f1047447817860cc1a3029187597d7a1f', 'width': 320}, {'height': 409, 'url': 'https://external-preview.redd.it/ICxWwDTSXn44V9uQpE9j6ZxEJWv3LlcKh8PZK2KKs8Q.jpg?width=640&crop=smart&auto=webp&s=3565042b5106fa9cd59a7a4b51612d2db5b0d059', 'width': 640}, {'height': 614, 'url': 'https://external-preview.redd.it/ICxWwDTSXn44V9uQpE9j6ZxEJWv3LlcKh8PZK2KKs8Q.jpg?width=960&crop=smart&auto=webp&s=58be96f130570f81ab8dbbd15df8d32c8de6f3a6', 'width': 960}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ICxWwDTSXn44V9uQpE9j6ZxEJWv3LlcKh8PZK2KKs8Q.jpg?auto=webp&s=ae1de20fc4d1fbd6c4c1e04cbcb68c85696123ba', 'width': 1000}, 'variants': {}}]} |
Are there ½ million people capable of running locally 685B params models? | 621 | 2025-01-30T14:25:02 | https://www.reddit.com/gallery/1ido3fn | S1M0N38 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ido3fn | false | null | t3_1ido3fn | /r/LocalLLaMA/comments/1ido3fn/are_there_½_million_people_capable_of_running/ | false | false | 621 | null |
||
deepseek-coder-v2:16b Error - "existing connection was forcibly closed" | 3 | Hi,
I am trying to run deepseek-coder-v2:16b but getting error "Error: 500 - {"error":"an error was encountered while running the model: read tcp 127.0.0.1:60248-\\u003e127.0.0.1:60246: wsarecv: An existing connection was forcibly closed by the remote host."}"
On same hardware (no GPU but 24 core CPU and 160GB RAM), I am able to run 70B models of Llama 3.3 as well as deepseek-r1:70b. Context length for coder as well as r1 seems to be same. I asked DeepSeek about it and it said
"The error `wsarecv: An existing connection was forcibly closed by the remote host` suggests a connection issue, possibly between the model and the backend or API"
However, if everything is local then there should not be any connection issue.
Any help would be highly appreciated. | 2025-01-30T14:37:02 | https://www.reddit.com/r/LocalLLaMA/comments/1idocgh/deepseekcoderv216b_error_existing_connection_was/ | New_Plenty1893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idocgh | false | null | t3_1idocgh | /r/LocalLLaMA/comments/1idocgh/deepseekcoderv216b_error_existing_connection_was/ | false | false | self | 3 | null |
What's the best 32b ish model we can run locally now? It's hard to keep up. | 8 | Yeah title | 2025-01-30T14:47:17 | https://www.reddit.com/r/LocalLLaMA/comments/1idokbe/whats_the_best_32b_ish_model_we_can_run_locally/ | mstahh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idokbe | false | null | t3_1idokbe | /r/LocalLLaMA/comments/1idokbe/whats_the_best_32b_ish_model_we_can_run_locally/ | false | false | self | 8 | null |
Mistral new open models | 211 | Mistral base and instruct 24B | 2025-01-30T14:47:21 | konilse | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idokcx | false | null | t3_1idokcx | /r/LocalLLaMA/comments/1idokcx/mistral_new_open_models/ | false | false | 211 | {'enabled': True, 'images': [{'id': 'JVtK2j56TywbjJUl__qOrcr8YsPejCjzk13pd4V6DWg', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/5nnsoy4295ge1.png?width=108&crop=smart&auto=webp&s=3ddfab2b894afc3032861324dfb19e503d1ac5ef', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/5nnsoy4295ge1.png?width=216&crop=smart&auto=webp&s=fafdc44ac863499b843ffd916a9b79908d3297ca', 'width': 216}, {'height': 232, 'url': 'https://preview.redd.it/5nnsoy4295ge1.png?width=320&crop=smart&auto=webp&s=1e3fa5bbf97254359c38a9d56b38dc9dc7b539cc', 'width': 320}, {'height': 464, 'url': 'https://preview.redd.it/5nnsoy4295ge1.png?width=640&crop=smart&auto=webp&s=1d39024b2c7d0acbb55e2f3d01eee2b120c949e0', 'width': 640}, {'height': 696, 'url': 'https://preview.redd.it/5nnsoy4295ge1.png?width=960&crop=smart&auto=webp&s=b9aedfc8b9814b663f2be1f95708abe583a4a78b', 'width': 960}, {'height': 783, 'url': 'https://preview.redd.it/5nnsoy4295ge1.png?width=1080&crop=smart&auto=webp&s=650805f1b6b7d69f453d6113dba08b72b3df4b8f', 'width': 1080}], 'source': {'height': 783, 'url': 'https://preview.redd.it/5nnsoy4295ge1.png?auto=webp&s=dfeaf5011c0ca83fcaf39759d43f5b6af5bf33e3', 'width': 1080}, 'variants': {}}]} |
||
OpenRouter: Why is the DeepSeek R1 Distill Qwen 32B is more expensive than R1 Distill Llama 70B? | 6 | I'm curious to understand why the Qwen 32B distill is more expensive than that of Distill Llama 70B? | 2025-01-30T14:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/1idon2e/openrouter_why_is_the_deepseek_r1_distill_qwen/ | GTHell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idon2e | false | null | t3_1idon2e | /r/LocalLLaMA/comments/1idon2e/openrouter_why_is_the_deepseek_r1_distill_qwen/ | false | false | self | 6 | null |
Llama 3.2 1B model for translation tasks. | 1 | [removed] | 2025-01-30T14:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1idoqjb/llama_32_1b_model_for_translation_tasks/ | Pleasant-Wrangler193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idoqjb | false | null | t3_1idoqjb | /r/LocalLLaMA/comments/1idoqjb/llama_32_1b_model_for_translation_tasks/ | false | false | self | 1 | null |
I want to roast some AI projects today 🤨. If you are not afraid, share your project in the comments! | 0 | Is your project perfect? | 2025-01-30T14:55:44 | https://www.reddit.com/r/LocalLLaMA/comments/1idoqnh/i_want_to_roast_some_ai_projects_today_if_you_are/ | Outrageous-Win-3244 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idoqnh | false | null | t3_1idoqnh | /r/LocalLLaMA/comments/1idoqnh/i_want_to_roast_some_ai_projects_today_if_you_are/ | false | false | self | 0 | null |
Claude Sonnet 3.5 for coding, not willing to provide full code - they really need to fix it ! | 1 | [removed] | 2025-01-30T14:56:23 | https://www.reddit.com/r/LocalLLaMA/comments/1idor6l/claude_sonnet_35_for_coding_not_willing_to/ | According-East-6759 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idor6l | false | null | t3_1idor6l | /r/LocalLLaMA/comments/1idor6l/claude_sonnet_35_for_coding_not_willing_to/ | false | false | 1 | null |
|
Mistral Small 3 - 24b | 1 | 2025-01-30T15:07:24 | https://youtu.be/nCXTdcggwkM | THEKILLFUS | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1idp05u | false | {'oembed': {'author_name': 'Sam Witteveen', 'author_url': 'https://www.youtube.com/@samwitteveenai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/nCXTdcggwkM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Mistral Small 3 - The NEW Mini Model Killer"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/nCXTdcggwkM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Mistral Small 3 - The NEW Mini Model Killer', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1idp05u | /r/LocalLLaMA/comments/1idp05u/mistral_small_3_24b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'D46sGh_HIE_frveeTcIUCftQv5uPcfI25nltEAh1BoA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iFWwV-jY3dmbiFF52qwmlLMkRg4fNWKrsvQTgoE57ww.jpg?width=108&crop=smart&auto=webp&s=b6129928bcbeb24dc1eb4bdccd641b3ee4c8b37b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iFWwV-jY3dmbiFF52qwmlLMkRg4fNWKrsvQTgoE57ww.jpg?width=216&crop=smart&auto=webp&s=b57c8f90703f396ae7d6b22fdeec818d4c9409b6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iFWwV-jY3dmbiFF52qwmlLMkRg4fNWKrsvQTgoE57ww.jpg?width=320&crop=smart&auto=webp&s=d34dc04cf967d6145cf95e1035ca5a531479d5f0', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/iFWwV-jY3dmbiFF52qwmlLMkRg4fNWKrsvQTgoE57ww.jpg?auto=webp&s=c0edf2435cb0070a3614ef10a181a49b318de847', 'width': 480}, 'variants': {}}]} |
||
No synthetic data? | 350 | That's reallllllly rare in 2025, did I understand this correctly? They didn't use any synthetic data to train this model? | 2025-01-30T15:09:51 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idp1z5 | false | null | t3_1idp1z5 | /r/LocalLLaMA/comments/1idp1z5/no_synthetic_data/ | false | false | 350 | {'enabled': True, 'images': [{'id': '4K-o4QiKee7lKWzfboYglalARlK5OoxPd3AOPuLQNTY', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/98dq1wg2d5ge1.png?width=108&crop=smart&auto=webp&s=53f1b2c082f3c1c9edb7c2ed4f5d2de57e9e5e77', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/98dq1wg2d5ge1.png?width=216&crop=smart&auto=webp&s=ed080ebeb7c719e8ac0568316e5151f80be39a97', 'width': 216}, {'height': 308, 'url': 'https://preview.redd.it/98dq1wg2d5ge1.png?width=320&crop=smart&auto=webp&s=c6f44e15faa8acab14d5538092d31254799e5d30', 'width': 320}, {'height': 616, 'url': 'https://preview.redd.it/98dq1wg2d5ge1.png?width=640&crop=smart&auto=webp&s=448fe61c33c8db28d89becf7c1d0ccbcf95ea88a', 'width': 640}, {'height': 925, 'url': 'https://preview.redd.it/98dq1wg2d5ge1.png?width=960&crop=smart&auto=webp&s=9f7f345689e1bfa5629731811200a8d2b76f58c8', 'width': 960}, {'height': 1041, 'url': 'https://preview.redd.it/98dq1wg2d5ge1.png?width=1080&crop=smart&auto=webp&s=38d4e5080232f8c5acf93ea3b787f23be9f30d7d', 'width': 1080}], 'source': {'height': 1388, 'url': 'https://preview.redd.it/98dq1wg2d5ge1.png?auto=webp&s=6238ffcab08a181063a2a4e7af746efc2f8f4676', 'width': 1440}, 'variants': {}}]} |
||
Does LibreChat support more than two multi-conversation chats in parallel? | 1 | [removed] | 2025-01-30T15:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/1idp2jk/does_librechat_support_more_than_two/ | palash_wadhwani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idp2jk | false | null | t3_1idp2jk | /r/LocalLLaMA/comments/1idp2jk/does_librechat_support_more_than_two/ | false | false | self | 1 | null |
Possible to use local TTS models like Llasa with OpenWebui? | 2 | [Llasa-3B](https://huggingface.co/HKUSTAudio/Llasa-3B) is super impressive from the snippets I’ve heard.
Is there any way to play around with local TTS models like this in OpenWebui? | 2025-01-30T15:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/1idp4d5/possible_to_use_local_tts_models_like_llasa_with/ | AnotherSoftEng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idp4d5 | false | null | t3_1idp4d5 | /r/LocalLLaMA/comments/1idp4d5/possible_to_use_local_tts_models_like_llasa_with/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Gq4FlsKvVlWVLV1KUHs_IOGNeNBuY51uNM-zSxUmr5M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7Fq11quIb69rMxM77rCt0LQYabzMSvRY8K_RBSxXcXQ.jpg?width=108&crop=smart&auto=webp&s=f5a7a4339c8a65f7b2d8174f4209616d0f3cb4b0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7Fq11quIb69rMxM77rCt0LQYabzMSvRY8K_RBSxXcXQ.jpg?width=216&crop=smart&auto=webp&s=54134933fd325fcbd365fa077db4b839eb1d770c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7Fq11quIb69rMxM77rCt0LQYabzMSvRY8K_RBSxXcXQ.jpg?width=320&crop=smart&auto=webp&s=cb6e5047eb2900ab789429f6ea62b278d1941ec3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7Fq11quIb69rMxM77rCt0LQYabzMSvRY8K_RBSxXcXQ.jpg?width=640&crop=smart&auto=webp&s=90f7383d6263afa9e5589b52a02dbfdc75952ff5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7Fq11quIb69rMxM77rCt0LQYabzMSvRY8K_RBSxXcXQ.jpg?width=960&crop=smart&auto=webp&s=5a55498c332caeb4b87bff1b7e5646b5c8d1fe78', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7Fq11quIb69rMxM77rCt0LQYabzMSvRY8K_RBSxXcXQ.jpg?width=1080&crop=smart&auto=webp&s=21d46b86acab91637d0dcc7549c8c52cdb84e742', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7Fq11quIb69rMxM77rCt0LQYabzMSvRY8K_RBSxXcXQ.jpg?auto=webp&s=d6f776fbaab62654a33f7cfbdb7a6f86aed172b1', 'width': 1200}, 'variants': {}}]} |
Deepseek is hosted on Huawei cloud | 64 | Based on the IP resolved in China. The chat endpoints is from Huawei DC
DS could be using Singapore Huawei region for WW and Shanghai region for CN users.
So demand for Nvidia card for training and Huawei GPU for inference is real.
[https://i.postimg.cc/0QyjxTkh/Screenshot-20250130-230756.png](https://i.postimg.cc/0QyjxTkh/Screenshot-20250130-230756.png)
[https://i.postimg.cc/FHknCz0B/Screenshot-20250130-230812.png](https://i.postimg.cc/FHknCz0B/Screenshot-20250130-230812.png)
| 2025-01-30T15:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/1idp6n6/deepseek_is_hosted_on_huawei_cloud/ | Reasonable-Climate66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idp6n6 | false | null | t3_1idp6n6 | /r/LocalLLaMA/comments/1idp6n6/deepseek_is_hosted_on_huawei_cloud/ | false | false | self | 64 | {'enabled': False, 'images': [{'id': 'TfAH_nhuI1BzIi7_aYov5aS-EKI8tdVqis7Y2XejHBk', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/s42mqESrNbN_7cBfYogJkZ8DnRKEon-AC8TO7MONcu0.png?width=108&crop=smart&auto=webp&s=288b29c446f2170907158d2d2462f5905f2c5901', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/s42mqESrNbN_7cBfYogJkZ8DnRKEon-AC8TO7MONcu0.png?width=216&crop=smart&auto=webp&s=31413e7adb55601296a7be9bbed4dbe21b24f5a0', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/s42mqESrNbN_7cBfYogJkZ8DnRKEon-AC8TO7MONcu0.png?width=320&crop=smart&auto=webp&s=08c52e7b63e25d06b7ed7fd3d46b1368840e5c5e', 'width': 320}, {'height': 458, 'url': 'https://external-preview.redd.it/s42mqESrNbN_7cBfYogJkZ8DnRKEon-AC8TO7MONcu0.png?width=640&crop=smart&auto=webp&s=2a25b4e95a61b49afc6b2f8e07d0af25505ffc0c', 'width': 640}], 'source': {'height': 577, 'url': 'https://external-preview.redd.it/s42mqESrNbN_7cBfYogJkZ8DnRKEon-AC8TO7MONcu0.png?auto=webp&s=2d5159a10eeb422ca31481a18254b33bd2423941', 'width': 805}, 'variants': {}}]} |
Nvidia H200 best config | 1 | [removed] | 2025-01-30T15:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/1idpah1/nvidia_h200_best_config/ | _Slim5God | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idpah1 | false | null | t3_1idpah1 | /r/LocalLLaMA/comments/1idpah1/nvidia_h200_best_config/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SM7GgbILtBF0ORKE4a6oaQ2sYsyIdu1l6EPievQxp4A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XiLNy80YRYBh_HrNeUwWhmjvvWNH5or8aCMSwn9xMBs.jpg?width=108&crop=smart&auto=webp&s=04e10b5efed448cc1ca7965940648b2ed8b838ed', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XiLNy80YRYBh_HrNeUwWhmjvvWNH5or8aCMSwn9xMBs.jpg?width=216&crop=smart&auto=webp&s=b6c3840a2eb256e4c09789bfd54732af5c3621b5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XiLNy80YRYBh_HrNeUwWhmjvvWNH5or8aCMSwn9xMBs.jpg?width=320&crop=smart&auto=webp&s=554432afe4b618c72c9eee63e45f54477639c3b1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XiLNy80YRYBh_HrNeUwWhmjvvWNH5or8aCMSwn9xMBs.jpg?width=640&crop=smart&auto=webp&s=9543d7aeae74ddebbda495a9edcf3336aae90bb0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XiLNy80YRYBh_HrNeUwWhmjvvWNH5or8aCMSwn9xMBs.jpg?width=960&crop=smart&auto=webp&s=0ffd0323c71548c0613936bfe76ce39c8b33016d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XiLNy80YRYBh_HrNeUwWhmjvvWNH5or8aCMSwn9xMBs.jpg?width=1080&crop=smart&auto=webp&s=f0de6a9239ccf0df5c4ce390ef8fa9fb692e0ad7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XiLNy80YRYBh_HrNeUwWhmjvvWNH5or8aCMSwn9xMBs.jpg?auto=webp&s=05450a971f1315695bd262f1273cfa1877485ccb', 'width': 1200}, 'variants': {}}]} |
Have you ever wondered why OpenAI models perform far better than open-source models | 0 | The common answer is: *OpenAI models are just more capable.* But I don’t think that’s the only reason.
Many GenAI frameworks offer powerful tools for building AI products. On paper, they claim to support both proprietary models like GPT-4o and open-source models like LLaMA. But in reality, their modules are developed, tested, and fine-tuned to work best with proprietary models—while open-source ones often get second-hand treatment.
This needs to change. With the rise of DeepSeek and Qwen, I’m hopeful that we’ll see a shift.
What do you think? Have you noticed this bias in GenAI frameworks? | 2025-01-30T15:24:07 | https://www.reddit.com/r/LocalLLaMA/comments/1idpde7/have_you_ever_wondered_why_openai_models_perform/ | Ahmad401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idpde7 | false | null | t3_1idpde7 | /r/LocalLLaMA/comments/1idpde7/have_you_ever_wondered_why_openai_models_perform/ | false | false | self | 0 | null |
What is the "Kohya-SS" of LLM's? AKA what is the default trainer people use to create LoRa's for LLM's/which trainer has the most settings? And which base model would you recommend training on? | 1 | [removed] | 2025-01-30T15:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/1idpgky/what_is_the_kohyass_of_llms_aka_what_is_the/ | AI_Characters | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idpgky | false | null | t3_1idpgky | /r/LocalLLaMA/comments/1idpgky/what_is_the_kohyass_of_llms_aka_what_is_the/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wJEjeXzxGH1YGCFxZjBZGtUmWtGNk6HUin-ojP9JxJM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wpyBDO0uNuWFIgGSPY4MThQj5IMGytzCVv_QR1YPCNo.jpg?width=108&crop=smart&auto=webp&s=76ba8069816f60ed3cb5074ea955c3f981a92fd5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wpyBDO0uNuWFIgGSPY4MThQj5IMGytzCVv_QR1YPCNo.jpg?width=216&crop=smart&auto=webp&s=2977ce09ac6038123ad82c2a52c7408f7d786e6c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wpyBDO0uNuWFIgGSPY4MThQj5IMGytzCVv_QR1YPCNo.jpg?width=320&crop=smart&auto=webp&s=ae34dc231078e663563a5e9b744ef19c766beb71', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wpyBDO0uNuWFIgGSPY4MThQj5IMGytzCVv_QR1YPCNo.jpg?width=640&crop=smart&auto=webp&s=c8c9b2c34311fed2d2710a0a3f8857995c863db8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wpyBDO0uNuWFIgGSPY4MThQj5IMGytzCVv_QR1YPCNo.jpg?width=960&crop=smart&auto=webp&s=94c6365b4800c65367b4eae7c1b1e82b7c8bcf48', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wpyBDO0uNuWFIgGSPY4MThQj5IMGytzCVv_QR1YPCNo.jpg?width=1080&crop=smart&auto=webp&s=7916a5eafc0a8dede7a169f4f22fbf66dd152fc6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wpyBDO0uNuWFIgGSPY4MThQj5IMGytzCVv_QR1YPCNo.jpg?auto=webp&s=29e6c172563620addd4bc4202018cd270ad17f8c', 'width': 1200}, 'variants': {}}]} |
🚨 BREAKING : Another open-source model is here, better than DeepSeek V3!
The Allen Institute for AI just announced Tülu 3 (405B) 🐫, an open-source post-training model that surpasses DeepSeek-V3 in performance! | 0 | 2025-01-30T15:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1idpjcq/breaking_another_opensource_model_is_here_better/ | danilofs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idpjcq | false | null | t3_1idpjcq | /r/LocalLLaMA/comments/1idpjcq/breaking_another_opensource_model_is_here_better/ | false | false | 0 | null |
||
Tracing the history of VLMs from CLIP to LLaMA 3.2, and doing RAG for complex PDFs w/ two VLMs! Llama 3.2 for understanding, ColQwen/ColPali for parsing | 1 | [removed] | 2025-01-30T15:34:34 | https://www.reddit.com/r/LocalLLaMA/comments/1idplzg/tracing_the_history_of_vlms_from_clip_to_llama_32/ | Fit-Concentrate3774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idplzg | false | null | t3_1idplzg | /r/LocalLLaMA/comments/1idplzg/tracing_the_history_of_vlms_from_clip_to_llama_32/ | false | false | self | 1 | null |
Is it possible for a LMM to watch footage and extract tabular features from the footage? | 2 | Hi,
I'm interested in applying LMMs to soccer analytics, specifically counting the number of defenders on a particular shot.
Obviously this can be done by hiring humans to watch hours of game footage and create a database - not looking to do that.
I was wondering if there are any open source LMMs that would be well suited to essentially watching many soccer games and extracting out relevant features based on what I ask it to extract.
If someone could point me in the right direction I'd greatly appreciate it! | 2025-01-30T15:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1idpms3/is_it_possible_for_a_lmm_to_watch_footage_and/ | Frank2234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idpms3 | false | null | t3_1idpms3 | /r/LocalLLaMA/comments/1idpms3/is_it_possible_for_a_lmm_to_watch_footage_and/ | false | false | self | 2 | null |
tulu-3-405B - Scaling the Tülu 3 post-training recipes to surpass the performance of DeepSeek V3 | 0 | 2025-01-30T15:38:28 | https://allenai.org/blog/tulu-3-405B | AaronFeng47 | allenai.org | 1970-01-01T00:00:00 | 0 | {} | 1idpp1d | false | null | t3_1idpp1d | /r/LocalLLaMA/comments/1idpp1d/tulu3405b_scaling_the_tülu_3_posttraining_recipes/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dxfmzNf4YN-FbhdAco3KP1MGTilBYNHfF0MG4KCPVGc', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/hh8DW32nH0ONS-nu01T_1FJPVbwbQ_ASgQ-i6ufFo4c.jpg?width=108&crop=smart&auto=webp&s=2e4204268d74ced275a0f86980242bf3ffedfed0', 'width': 108}, {'height': 88, 'url': 'https://external-preview.redd.it/hh8DW32nH0ONS-nu01T_1FJPVbwbQ_ASgQ-i6ufFo4c.jpg?width=216&crop=smart&auto=webp&s=c25d5daf529b65db97021bd33bf6c94524be0bd8', 'width': 216}, {'height': 131, 'url': 'https://external-preview.redd.it/hh8DW32nH0ONS-nu01T_1FJPVbwbQ_ASgQ-i6ufFo4c.jpg?width=320&crop=smart&auto=webp&s=d85b99a0cecbcd2cd68cba404ddd8634ced43cb7', 'width': 320}, {'height': 263, 'url': 'https://external-preview.redd.it/hh8DW32nH0ONS-nu01T_1FJPVbwbQ_ASgQ-i6ufFo4c.jpg?width=640&crop=smart&auto=webp&s=3029a08d4d1d10aa0ad73abf9833653ad74c928a', 'width': 640}, {'height': 395, 'url': 'https://external-preview.redd.it/hh8DW32nH0ONS-nu01T_1FJPVbwbQ_ASgQ-i6ufFo4c.jpg?width=960&crop=smart&auto=webp&s=fbf9222760d7fac5e11252c51cb4571837cf511a', 'width': 960}, {'height': 444, 'url': 'https://external-preview.redd.it/hh8DW32nH0ONS-nu01T_1FJPVbwbQ_ASgQ-i6ufFo4c.jpg?width=1080&crop=smart&auto=webp&s=b99ce0e01c01acdd557dc4265cae753b563ff62c', 'width': 1080}], 'source': {'height': 3500, 'url': 'https://external-preview.redd.it/hh8DW32nH0ONS-nu01T_1FJPVbwbQ_ASgQ-i6ufFo4c.jpg?auto=webp&s=715acd5e3dd732e50d803c4eb7a3469064a3ffd9', 'width': 8500}, 'variants': {}}]} |
||
LLM Coupled With Monte Carlo Tree Search Exploring Short-Story Plot Expansions (Black Line Shows Best Path) | 1 | 2025-01-30T15:40:36 | Busy_Talk8788 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idpqrt | false | null | t3_1idpqrt | /r/LocalLLaMA/comments/1idpqrt/llm_coupled_with_monte_carlo_tree_search/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'POR_J5qX5oG22alBxv80irUKUtLjw_uy-IAI2ZS-OWI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=108&crop=smart&format=png8&s=411437fa9f424ae8db1e06cef7a235d2fc326263', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=216&crop=smart&format=png8&s=ee47b1f2a0b3b0c1f0a0cc382e4ede0f0c92729b', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=320&crop=smart&format=png8&s=2f318974c41f111e8154d0f847dde945752adc1d', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=640&crop=smart&format=png8&s=c44b11d7ad0a87333e340eed517f927a54feef7f', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=960&crop=smart&format=png8&s=81b384413c44dca80ece049e24260268441741cd', 'width': 960}], 'source': {'height': 540, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?format=png8&s=049c04eb59a4a2e55ccd4b3a259c42ee18ea10c4', 'width': 960}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=108&crop=smart&s=b8893f27e1c43df2a7d3a8fa3bc2e8f63fd0fa7c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=216&crop=smart&s=bd1ed69d1bb622c25f80812ed888ced9f1616eaf', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=320&crop=smart&s=4b2cc1ede4abc5991cc139181286edb908958401', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=640&crop=smart&s=53332d10cf3ed09b0212068b5dc61ef4cb5e77da', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=960&crop=smart&s=771945206076a079fc3850a981f0273a0303f1b8', 'width': 960}], 'source': {'height': 540, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?s=07a28fe8ed3969131466dcb1b9c3fa9cfca5aeab', 'width': 960}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=108&format=mp4&s=10f3fb18822eb876ba999923a82bf62251dc72ae', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=216&format=mp4&s=0642c7f0b2ef3373cdb40992b634bdc850bac05d', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=320&format=mp4&s=b9557cb720a874f57f4d61dcfe88035274ee70cf', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=640&format=mp4&s=c0b6f6ba0276a2da9bf1f5112757c143065fdae1', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?width=960&format=mp4&s=c8d24299913565fe0663ca142246b6c48f91673f', 'width': 960}], 'source': {'height': 540, 'url': 'https://preview.redd.it/vxvh42uci5ge1.gif?format=mp4&s=189903addd39b4a0aa0470e661f01f10af573cf5', 'width': 960}}}}]} |
|||
allenai/Llama-3.1-Tulu-3-405B | 14 | 2025-01-30T15:41:13 | https://huggingface.co/allenai/Llama-3.1-Tulu-3-405B | minpeter2 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1idpra5 | false | null | t3_1idpra5 | /r/LocalLLaMA/comments/1idpra5/allenaillama31tulu3405b/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'L5WIYOT--53mhu4LcPXtIeND-enui-soS_HVFRSmRtA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/S3oAV-jOHY0yGwS0yXYx19_9ea33GOlhVysZG3AVCUE.jpg?width=108&crop=smart&auto=webp&s=d74c493e5fffa1637f51d04b4f553808134a489d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/S3oAV-jOHY0yGwS0yXYx19_9ea33GOlhVysZG3AVCUE.jpg?width=216&crop=smart&auto=webp&s=028465f7b73f7d23225118fac542796e2f301885', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/S3oAV-jOHY0yGwS0yXYx19_9ea33GOlhVysZG3AVCUE.jpg?width=320&crop=smart&auto=webp&s=c6919b9858bd8dd86c00b7951a392664c3e176a2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/S3oAV-jOHY0yGwS0yXYx19_9ea33GOlhVysZG3AVCUE.jpg?width=640&crop=smart&auto=webp&s=18fc525be1fd2e6a425578a791e723e917db218d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/S3oAV-jOHY0yGwS0yXYx19_9ea33GOlhVysZG3AVCUE.jpg?width=960&crop=smart&auto=webp&s=82409a2c7d0b38d25bfe133c157210d868e9fba0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/S3oAV-jOHY0yGwS0yXYx19_9ea33GOlhVysZG3AVCUE.jpg?width=1080&crop=smart&auto=webp&s=700f30b3f52353cd132dde64a55c8f387e7b18fa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/S3oAV-jOHY0yGwS0yXYx19_9ea33GOlhVysZG3AVCUE.jpg?auto=webp&s=e7685f0a1fca47490fef97dbabc608ebd6c6c296', 'width': 1200}, 'variants': {}}]} |
||
Regarding the Arc B580 LLM performance | 1 | [removed] | 2025-01-30T15:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1idpvp0/regarding_the_arc_b580_llm_performance/ | Professional_Fee8344 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idpvp0 | false | null | t3_1idpvp0 | /r/LocalLLaMA/comments/1idpvp0/regarding_the_arc_b580_llm_performance/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fMLgQw6qoKQ8SJn0ati4Is8CHHzk_Kg1F6uuvaGGNxo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m94I-ichkn0oqO32oB31Z2npF9SdJpFEAhe8favQyng.jpg?width=108&crop=smart&auto=webp&s=c2b54126bcf5183139d4d75bbdd4eabb0952c028', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m94I-ichkn0oqO32oB31Z2npF9SdJpFEAhe8favQyng.jpg?width=216&crop=smart&auto=webp&s=7c53081d60beb2a74f961f16ac770d3be1d88358', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m94I-ichkn0oqO32oB31Z2npF9SdJpFEAhe8favQyng.jpg?width=320&crop=smart&auto=webp&s=3151189cd8f503f63e15f4da03e91d21b6ca3b8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m94I-ichkn0oqO32oB31Z2npF9SdJpFEAhe8favQyng.jpg?width=640&crop=smart&auto=webp&s=b09c358488a1d5bf648fa086e72cddbc49539fb4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m94I-ichkn0oqO32oB31Z2npF9SdJpFEAhe8favQyng.jpg?width=960&crop=smart&auto=webp&s=dede7309029741abc48232bb1fdb816b532eb657', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m94I-ichkn0oqO32oB31Z2npF9SdJpFEAhe8favQyng.jpg?width=1080&crop=smart&auto=webp&s=0c1a180518e336c1b4eeea5b39bab1f4b8ad8137', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m94I-ichkn0oqO32oB31Z2npF9SdJpFEAhe8favQyng.jpg?auto=webp&s=85e8ad9677691269defaa1fdfc0ed325307f4c1b', 'width': 1200}, 'variants': {}}]} |
Comparing DeepSeek R1 and OpenAI O1 with High School AP Calculus Problems | 1 | Open-source AI models like **DeepSeek R1** are reshaping the LLM landscape by introducing healthy competition and democratizing access to cutting-edge technologies. This broader collaboration accelerates innovation and makes advanced tools available to more developers and researchers.
Recently, I tested **DeepSeek R1** and **OpenAI O1** on 100 AP-level calculus problems—primarily involving higher-order derivatives of polynomials with variable substitutions, sign constraints, and variable-dependent exponents.
# Key Findings
**1. Accuracy**
* **DeepSeek R1**: 76.8%
* **OpenAI O1**: 97.9%
**2. Speed & Reliability**
* **DeepSeek R1**: Takes 2–3 minutes per request and can time out (not yet production-ready).
* **OpenAI O1**: Responds in 30–60 seconds with more consistent performance.
**3. Cost**
* **OpenAI O1**: $0.73 in input tokens + $5.87 in output tokens
* **DeepSeek R1**: Under $0.40 in total
# Why DeepSeek R1 Struggles
DeepSeek R1 performs well on straightforward polynomial derivatives but stumbles when extra steps or constraints are introduced. Common issues include:
1. **Multi-Step Parameter Definitions** – Sometimes ignored or applied incorrectly.
2. **Sign & Zero Constraints** – Terms that should be simplified remain in the final answer.
3. **Variable-Based Exponents** – R1 misses that exponents can be effectively constant, leading to power rule errors.
4. **Numerical Discrepancies** – Incorrect sign handling and missed negative factors.
Despite these challenges, open-source models like DeepSeek R1 hold significant promise. As contributors worldwide refine and enhance these solutions, we can expect more robust, efficient, and cost-effective AI tools to emerge.
**Explore the code, and data yourself:**
[GitHub: SherazKhan/R1vsO1](https://github.com/SherazKhan/R1vsO1)
# Question for you
What do you think will drive the biggest breakthroughs in AI—open-source innovation, proprietary approaches, or a blend of both? Share your thoughts in the comments!
https://preview.redd.it/q112d2d6k5ge1.png?width=2400&format=png&auto=webp&s=4388018af179f1aa6006b7f8c00ebd977071636b
| 2025-01-30T15:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/1idpz8f/comparing_deepseek_r1_and_openai_o1_with_high/ | PerformanceRound7913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idpz8f | false | null | t3_1idpz8f | /r/LocalLLaMA/comments/1idpz8f/comparing_deepseek_r1_and_openai_o1_with_high/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'U4knNGQTlKGayTShzWmIz7e3qt7xvEZMwV7iE-6Fj64', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zcGT2Xk5TnRYTyqKQSyfdDj5tn8wdAO0XyizAjz5zYU.jpg?width=108&crop=smart&auto=webp&s=3ad5447b6bf4258ba38753779da3110662c3541c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zcGT2Xk5TnRYTyqKQSyfdDj5tn8wdAO0XyizAjz5zYU.jpg?width=216&crop=smart&auto=webp&s=bd1e3a5003bc5323d7e89dfee836bfa1d2910712', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zcGT2Xk5TnRYTyqKQSyfdDj5tn8wdAO0XyizAjz5zYU.jpg?width=320&crop=smart&auto=webp&s=40eaa48885bb404df3d45461ea1b4aab78bcf3cb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zcGT2Xk5TnRYTyqKQSyfdDj5tn8wdAO0XyizAjz5zYU.jpg?width=640&crop=smart&auto=webp&s=9165657a5c38b289def1a7bd97dba4f20f3f3205', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zcGT2Xk5TnRYTyqKQSyfdDj5tn8wdAO0XyizAjz5zYU.jpg?width=960&crop=smart&auto=webp&s=5a33966459c41da148d61b98ac569047594b6b68', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zcGT2Xk5TnRYTyqKQSyfdDj5tn8wdAO0XyizAjz5zYU.jpg?width=1080&crop=smart&auto=webp&s=509a28e20f857f54e99c447844b18a78bc268892', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zcGT2Xk5TnRYTyqKQSyfdDj5tn8wdAO0XyizAjz5zYU.jpg?auto=webp&s=46a0ee817bb22a46e5655dbb12505068db0bec64', 'width': 1200}, 'variants': {}}]} |
|
Deepseek Benchmark 3090 vs 3060 | 4 | I’ve seen a benchmark where somebody managed to run the 1 bit model on a single 3090. Does someone managed to make it work with several 3060? Or does the bandwidth of the x4 slots makes it inefficient? | 2025-01-30T15:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/1idq13p/deepseek_benchmark_3090_vs_3060/ | xahaf123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idq13p | false | null | t3_1idq13p | /r/LocalLLaMA/comments/1idq13p/deepseek_benchmark_3090_vs_3060/ | false | false | self | 4 | null |
Are Macs the IDEAL for local llms on the consumer level? | 8 | Seems there is something about the ram on macs that makes it pretty darn nice for larger models on system ram vs dedicated GPU.
This makes it a pretty darn cost effective built in benefit rather than buying an extra 2k graphics card on a 1k desktop.
I'm not thinking about this too deeply so am I missing anything, is it actually not that great and kinda slow AF?
Thanks I appreciate the feedback | 2025-01-30T15:54:15 | https://www.reddit.com/r/LocalLLaMA/comments/1idq1v6/are_macs_the_ideal_for_local_llms_on_the_consumer/ | OriginalPlayerHater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idq1v6 | false | null | t3_1idq1v6 | /r/LocalLLaMA/comments/1idq1v6/are_macs_the_ideal_for_local_llms_on_the_consumer/ | false | false | self | 8 | null |
Mistral Q8 GGUF 🍓 fail | 0 | https://x.com/AwakenTheLotus/status/1884992361605784008
Hopefully reasoning distillation helps
| 2025-01-30T15:55:35 | legallybond | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idq2zc | false | null | t3_1idq2zc | /r/LocalLLaMA/comments/1idq2zc/mistral_q8_gguf_fail/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ngaSQyMXjEzd06-ApdI_0gzc5AkKDSYirAM1EXbmscg', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/vcdhfif8l5ge1.png?width=108&crop=smart&auto=webp&s=89a366fe1ab784ab8f89b23c30a90a80347a2f24', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/vcdhfif8l5ge1.png?width=216&crop=smart&auto=webp&s=dec65e3197a74b768761ca18c46a60086d23d8d5', 'width': 216}, {'height': 275, 'url': 'https://preview.redd.it/vcdhfif8l5ge1.png?width=320&crop=smart&auto=webp&s=56c2e0a2d0e267b90e64f15b99f168e4189e5199', 'width': 320}, {'height': 551, 'url': 'https://preview.redd.it/vcdhfif8l5ge1.png?width=640&crop=smart&auto=webp&s=b7b7139dcc9de59ce2401d5e5cd171f99301c17b', 'width': 640}], 'source': {'height': 620, 'url': 'https://preview.redd.it/vcdhfif8l5ge1.png?auto=webp&s=f39a03225bac7b9e1b0698602ed44a4788da7058', 'width': 720}, 'variants': {}}]} |
||
Local | 1 | [deleted] | 2025-01-30T15:59:18 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1idq5xy | false | null | t3_1idq5xy | /r/LocalLLaMA/comments/1idq5xy/local/ | false | false | default | 1 | null |
||
Would be nice to have ability to turn on/off the "Thinking" part of deepseek "distill" models , qwen and llama | 1 | [removed] | 2025-01-30T15:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/1idq64t/would_be_nice_to_have_ability_to_turn_onoff_the/ | solomars3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idq64t | false | null | t3_1idq64t | /r/LocalLLaMA/comments/1idq64t/would_be_nice_to_have_ability_to_turn_onoff_the/ | false | false | self | 1 | null |
DeepSeek has open-weights, but hidden bias | 1 | # TL;DR
**The base model, DeepSeek R1, can’t be trusted in its current form for most use-cases.**
**DeepSeek R1 is willing to discuss dangerous topics more often than comparable Anthropic & OpenAI models…**
**and it refuses to discuss topics sensitive to the Chinese government.**
At Plum AI, our product evaluates and fine-tunes Large Language Models (LLMs) for enterprise use-cases. We evaluated DeepSeek R1 and confirmed that its guardrails deviate significantly from other model providers. We’re currently updating it to behave more in line with Anthropic and OpenAI’s models. | 2025-01-30T16:01:17 | https://blog.getplum.ai/Deepseek-R1-Open-weights-Hidden-bias-18933807f8d08002a18ff42ba343a432 | juliannorton | blog.getplum.ai | 1970-01-01T00:00:00 | 0 | {} | 1idq7r6 | false | null | t3_1idq7r6 | /r/LocalLLaMA/comments/1idq7r6/deepseek_has_openweights_but_hidden_bias/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'lLcXj5C_UnHmP04ZA8HOSpPtXOW68suLYynfG3i4PIY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=108&crop=smart&auto=webp&s=070022adf7e83815fcbd9ebe5a9500f6a6672e53', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=216&crop=smart&auto=webp&s=0c09be3c752361fbd6af9234c30308b3677c5f81', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=320&crop=smart&auto=webp&s=3cd7e35cb74bd760647095cc40df3cc446ec279e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=640&crop=smart&auto=webp&s=2bdd5ee61deaaf3353b29fe26f04eeaac183d534', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=960&crop=smart&auto=webp&s=7de34ce8dbd8159bc029662864eb0b6bacba9e6f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=1080&crop=smart&auto=webp&s=dd50adad3cf91bd1eea71e1a394c0907e2d5f38e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?auto=webp&s=bd5820efb93cc8dd6978c00a2fce1375a71c9567', 'width': 1200}, 'variants': {}}]} |
|
Anyone else having problems with AutoTokenizer / huggingface right now? | 1 | [removed] | 2025-01-30T16:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/1idq94s/anyone_else_having_problems_with_autotokenizer/ | WouterGlorieux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idq94s | false | null | t3_1idq94s | /r/LocalLLaMA/comments/1idq94s/anyone_else_having_problems_with_autotokenizer/ | false | false | self | 1 | null |
DeepSeek, Open-weights, Hidden Bias | 0 | # TL;DR
**The base model, DeepSeek R1, can’t be trusted in its current form for most use-cases.**
**DeepSeek R1 is willing to discuss dangerous topics more often than comparable Anthropic & OpenAI models…**
**and it refuses to discuss topics sensitive to the Chinese government** | 2025-01-30T16:03:03 | https://blog.getplum.ai/Deepseek-R1-Open-weights-Hidden-bias-18933807f8d08002a18ff42ba343a432 | juliannorton | blog.getplum.ai | 1970-01-01T00:00:00 | 0 | {} | 1idq9ar | false | null | t3_1idq9ar | /r/LocalLLaMA/comments/1idq9ar/deepseek_openweights_hidden_bias/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'lLcXj5C_UnHmP04ZA8HOSpPtXOW68suLYynfG3i4PIY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=108&crop=smart&auto=webp&s=070022adf7e83815fcbd9ebe5a9500f6a6672e53', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=216&crop=smart&auto=webp&s=0c09be3c752361fbd6af9234c30308b3677c5f81', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=320&crop=smart&auto=webp&s=3cd7e35cb74bd760647095cc40df3cc446ec279e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=640&crop=smart&auto=webp&s=2bdd5ee61deaaf3353b29fe26f04eeaac183d534', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=960&crop=smart&auto=webp&s=7de34ce8dbd8159bc029662864eb0b6bacba9e6f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?width=1080&crop=smart&auto=webp&s=dd50adad3cf91bd1eea71e1a394c0907e2d5f38e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/e1RRHIv9lEVO9fvIJrYO1tpY4vJKIzeeoRoatv8YBIQ.jpg?auto=webp&s=bd5820efb93cc8dd6978c00a2fce1375a71c9567', 'width': 1200}, 'variants': {}}]} |
|
It's crazy how stubborn this thing can be in a short context (2048) and still arrive at real conclusions | 1 | 2025-01-30T16:04:07 | ABLPHA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idqa7z | false | null | t3_1idqa7z | /r/LocalLLaMA/comments/1idqa7z/its_crazy_how_stubborn_this_thing_can_be_in_a/ | false | false | 1 | {'enabled': True, 'images': [{'id': '0IeyKYKgWP9TNcbV90LwtixEZnDwbWrv1SGUYv4gYf8', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/unit2oxxl5ge1.png?width=108&crop=smart&auto=webp&s=0115ed7cef81e40b6c5513b5e190a9c6a7e15f3e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/unit2oxxl5ge1.png?width=216&crop=smart&auto=webp&s=94595014249a8a6d7ee40fe3c9145f781f8935b2', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/unit2oxxl5ge1.png?width=320&crop=smart&auto=webp&s=8af7573318b6a9f3c13237a9b67034e15664e232', 'width': 320}], 'source': {'height': 1025, 'url': 'https://preview.redd.it/unit2oxxl5ge1.png?auto=webp&s=70cba38d5d777630c49cb8dba55e5c9b611e9562', 'width': 506}, 'variants': {}}]} |
|||
Autotokenizer or Huggingface problems? | 1 | [removed] | 2025-01-30T16:05:28 | https://www.reddit.com/r/LocalLLaMA/comments/1idqbci/autotokenizer_or_huggingface_problems/ | WouterGlorieux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idqbci | false | null | t3_1idqbci | /r/LocalLLaMA/comments/1idqbci/autotokenizer_or_huggingface_problems/ | false | false | self | 1 | null |
M4 mac mini llm recomendations? | 1 | [removed] | 2025-01-30T16:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/1idqbq8/m4_mac_mini_llm_recomendations/ | Fun_Entrepreneur2723 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idqbq8 | false | null | t3_1idqbq8 | /r/LocalLLaMA/comments/1idqbq8/m4_mac_mini_llm_recomendations/ | false | false | self | 1 | null |
Deepseek is down so I started using Qwen | 0 | the max version gives out some pretty good output very similar to deep seek as you can see in output I gave it some other prompts works pretty well
you can access it from here
[https://chat.qwenlm.ai/](https://chat.qwenlm.ai/)
(I am not an affiliate or anything😂😂😂) | 2025-01-30T16:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/1idql0b/deepseek_is_down_so_i_started_using_qwen/ | Inevitable-Bench192 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idql0b | false | null | t3_1idql0b | /r/LocalLLaMA/comments/1idql0b/deepseek_is_down_so_i_started_using_qwen/ | false | false | self | 0 | null |
Mistral Small 3 24b's Context Window is Remarkably Efficient | 114 | I'm using the Mistral Small 3 24b-q6k model with a full 32K context (Q8 KV cache), and I still have 1.6GB of VRAM left.
In comparison, Qwen2.5 32b Q4 KL is roughly the same size, but I could only manage to get 24K context before getting dangerously close to running out of VRAM.
https://preview.redd.it/adg5weajp5ge1.png?width=913&format=png&auto=webp&s=bb3e935191ccc7000f402ce10e2c880ddb309730
| 2025-01-30T16:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/1idqql6/mistral_small_3_24bs_context_window_is_remarkably/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idqql6 | false | null | t3_1idqql6 | /r/LocalLLaMA/comments/1idqql6/mistral_small_3_24bs_context_window_is_remarkably/ | false | false | 114 | null |
|
Would you rather have a 70B model @ 300 tokens per second or a 500B+ model @ 15 tokens per second? | 0 | I've been using a couple DPU/TPU/LPU etc cloud platforms. 70B models are surprisingly good. Especially the distilled R1. However, which one would you guys choose? | 2025-01-30T16:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/1idqrzh/would_you_rather_have_a_70b_model_300_tokens_per/ | BarnardWellesley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idqrzh | false | null | t3_1idqrzh | /r/LocalLLaMA/comments/1idqrzh/would_you_rather_have_a_70b_model_300_tokens_per/ | false | false | self | 0 | null |
I can't keep up | 1 | 2025-01-30T16:39:13 | Aplakka | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idr3y1 | false | null | t3_1idr3y1 | /r/LocalLLaMA/comments/1idr3y1/i_cant_keep_up/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'liz-sO4z_NpYgoOpVgGMFdOkQ3MOq4NxdwH0Mw3B_k0', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/a1clvxjqs5ge1.jpeg?width=108&crop=smart&auto=webp&s=cd0b61163910b3c9036a7bf2ed5d68eacbda3446', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/a1clvxjqs5ge1.jpeg?width=216&crop=smart&auto=webp&s=1f494de87631fd93d9efcb9bbb4207de5c7415f1', 'width': 216}, {'height': 252, 'url': 'https://preview.redd.it/a1clvxjqs5ge1.jpeg?width=320&crop=smart&auto=webp&s=41989c3e6a9fff024a794fc4f9033b34d8829435', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/a1clvxjqs5ge1.jpeg?auto=webp&s=4a11147341cc6003c07aeaa468e9594bb7d724c0', 'width': 633}, 'variants': {}}]} |
|||
[R] A Practical Guide to RAG Implementation with DeepSeek R1 & Ollama - Step-by-Step Implementation | 1 | [removed] | 2025-01-30T16:51:49 | https://www.reddit.com/r/LocalLLaMA/comments/1idreiw/r_a_practical_guide_to_rag_implementation_with/ | Amanpandey046 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idreiw | false | null | t3_1idreiw | /r/LocalLLaMA/comments/1idreiw/r_a_practical_guide_to_rag_implementation_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KJ-QRsOUMCCWUnL4lmWyclwZCXiq_xUmTH1XeFJSmpM', 'resolutions': [{'height': 91, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=108&crop=smart&auto=webp&s=d707272a723e0517a83b122c0f29e7bfa42a0172', 'width': 108}, {'height': 183, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=216&crop=smart&auto=webp&s=bd4d2bb5d7d5a3171b131c116de902529543553e', 'width': 216}, {'height': 272, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=320&crop=smart&auto=webp&s=c0adfda6e8bc80ba3ce5b8bd5d780c5436d0fe5d', 'width': 320}, {'height': 545, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=640&crop=smart&auto=webp&s=7e9651eb8831efb93157313893b986bea6f1c6d8', 'width': 640}, {'height': 817, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?width=960&crop=smart&auto=webp&s=4b7041a57914ffcd7f5a27168ccd0c83e756a59c', 'width': 960}], 'source': {'height': 872, 'url': 'https://external-preview.redd.it/qTkUITW3YMcPnOHdkz-FA95_lVeRKLms3JsFhCebVo8.jpg?auto=webp&s=2c50e39cd4b74892d9c90a0af52bfbedeaece266', 'width': 1024}, 'variants': {}}]} |
Recommendations for a deep-seek model that i can run locally for browser-use | 2 | Hello every here are my specs which model of deep-seek r1 is most suitable for me
Gigabyte GTX 1060 6GB aorus
Ryzen 5 2600X,
16GB RAM,
NVMe SSD | 2025-01-30T16:53:49 | https://www.reddit.com/r/LocalLLaMA/comments/1idrgat/recommendations_for_a_deepseek_model_that_i_can/ | AdanAli_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrgat | false | null | t3_1idrgat | /r/LocalLLaMA/comments/1idrgat/recommendations_for_a_deepseek_model_that_i_can/ | false | false | self | 2 | null |
Anyone paying for DeepSeek? (How?) | 1 | Anyone using the API version of deepseek? What are you using for that? Are they less "not available" issues? | 2025-01-30T16:56:03 | https://www.reddit.com/r/LocalLLaMA/comments/1idri51/anyone_paying_for_deepseek_how/ | sagacityx1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idri51 | false | null | t3_1idri51 | /r/LocalLLaMA/comments/1idri51/anyone_paying_for_deepseek_how/ | false | false | self | 1 | null |
best/smallest model for data alignment for scraped operating hours and location data of charitable food programs. | 3 | I’m building an app to help people locate charitable food resources (food banks, pantries, etc.) and need advice on model selection for a specific task.
The Problem:
I’m scraping hundreds of food bank maps and community-driven datasets, then aligning this data with the Human Services Data Specification (HSDS) format. The biggest challenges are standardizing inconsistent fields like operating hours, physical addresses, and service descriptions.
Current Setup:
Self-hosted homelab (capable of running models under ~12B parameters)
Using phi-4 experimentally, but prioritizing open-source models for long-term sustainability
Need to automate alignment of messy scraped data → structured HSDS fields
Ask:
What smaller OSS models (<12B params) have you found effective for:
Structured data alignment (e.g., converting "Mon-Fri 9-5" → standardized HSDS schedule objects)
Entity recognition in addresses/hours (e.g., extracting ISO country codes, lat/long validation)
Context-aware parsing (handling edge cases like "closed on Thanksgiving")
Tried So Far:
Mistral-7B (decent but needs heavy prompting)
Microsoft Phi-2 (surprisingly good at format adherence)
Would love recommendations for:
Specialized fine-tunes (e.g., models trained on schema alignment)
Quantization-friendly architectures
Prompt engineering strategies for deterministic output
This is for a nonprofit project, so open-source/research-licensed models are strongly preferred. Thanks in advance! | 2025-01-30T16:59:39 | https://www.reddit.com/r/LocalLLaMA/comments/1idrl73/bestsmallest_model_for_data_alignment_for_scraped/ | gitgat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrl73 | false | null | t3_1idrl73 | /r/LocalLLaMA/comments/1idrl73/bestsmallest_model_for_data_alignment_for_scraped/ | false | false | self | 3 | null |
DeepSeek R1 scores between o1 and o1-mini on NYT Connections | 92 | 2025-01-30T17:02:39 | zero0_one1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idro19 | false | null | t3_1idro19 | /r/LocalLLaMA/comments/1idro19/deepseek_r1_scores_between_o1_and_o1mini_on_nyt/ | false | false | 92 | {'enabled': True, 'images': [{'id': 'JfmZo_VYHJnY9GxxJ2pQI5o5bEVdqfFEzD0QKkk5tqY', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/e8ov1yb3x5ge1.png?width=108&crop=smart&auto=webp&s=4d3a8ce2dd36600871b976702178a8606c9c1733', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/e8ov1yb3x5ge1.png?width=216&crop=smart&auto=webp&s=8e998526d26e0dc15523e804d78cc4e9ef5b5a53', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/e8ov1yb3x5ge1.png?width=320&crop=smart&auto=webp&s=188540096877143d4afe62d6e6dc11f67b3bd760', 'width': 320}, {'height': 523, 'url': 'https://preview.redd.it/e8ov1yb3x5ge1.png?width=640&crop=smart&auto=webp&s=663523084a040c4c952820d45759a6a4e7a87469', 'width': 640}, {'height': 785, 'url': 'https://preview.redd.it/e8ov1yb3x5ge1.png?width=960&crop=smart&auto=webp&s=c1fe9952c71ad09fd0c80554a1e4574a646a8c5c', 'width': 960}, {'height': 883, 'url': 'https://preview.redd.it/e8ov1yb3x5ge1.png?width=1080&crop=smart&auto=webp&s=6a7b1df20423ec9714eb4c017d1289c02a1025b6', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/e8ov1yb3x5ge1.png?auto=webp&s=7e2606afbdaaff483ac957b01328204de4a83e5a', 'width': 1100}, 'variants': {}}]} |
|||
Phi4 stats on M4 Pro 64GB | 4 | I tested Phi4 14B on Ollama and the results were good.
`ollama run --verbose phi4 < ~/Desktop/dd.txt`
File content tokens (GPT-4o tokenization)
https://preview.redd.it/76rku9orw5ge1.png?width=2322&format=png&auto=webp&s=60b0f7d19b009178c85ef2d3455316bf02e86309
The model was very fast. I didn't provide a system prompt. The solution is almost the same as most cloud models. This is the highest variant of Mac Mini - M4 Pro
https://preview.redd.it/rfpq5ctgw5ge1.png?width=998&format=png&auto=webp&s=263d6a066c637d8b1c64b11fd3d62b9445833099
| 2025-01-30T17:04:41 | https://www.reddit.com/r/LocalLLaMA/comments/1idrpud/phi4_stats_on_m4_pro_64gb/ | A7mdxDD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrpud | false | null | t3_1idrpud | /r/LocalLLaMA/comments/1idrpud/phi4_stats_on_m4_pro_64gb/ | false | false | 4 | null |
|
Trump administration could kill Nvidia's China business for good | 23 | Title says it all | 2025-01-30T17:12:45 | https://www.reddit.com/r/LocalLLaMA/comments/1idrwv7/trump_administration_could_kill_nvidias_china/ | Appropriate_Cry8694 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrwv7 | false | null | t3_1idrwv7 | /r/LocalLLaMA/comments/1idrwv7/trump_administration_could_kill_nvidias_china/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'GzE_e9UIpjiT6LBTg6ZLJt_lXeJSPs_DhP3cpU_l8mA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/EJouowZ7Jqk1A1qY5Skl_I1KhpijgNZpbPoD44KzB90.jpg?width=108&crop=smart&auto=webp&s=eca4a29f666a9e88c18b69502d2510b75c82399b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/EJouowZ7Jqk1A1qY5Skl_I1KhpijgNZpbPoD44KzB90.jpg?width=216&crop=smart&auto=webp&s=5f78588ba3c7c6affe47442cd31f7226122e3119', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/EJouowZ7Jqk1A1qY5Skl_I1KhpijgNZpbPoD44KzB90.jpg?width=320&crop=smart&auto=webp&s=850129ee15ec0e573890b13afc26eafba51d6bae', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/EJouowZ7Jqk1A1qY5Skl_I1KhpijgNZpbPoD44KzB90.jpg?width=640&crop=smart&auto=webp&s=fa5347e333e8ed9cc50af84b6ae48942cd8b6796', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/EJouowZ7Jqk1A1qY5Skl_I1KhpijgNZpbPoD44KzB90.jpg?width=960&crop=smart&auto=webp&s=6ae0b2461703d41e789fa141df56183816915a50', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/EJouowZ7Jqk1A1qY5Skl_I1KhpijgNZpbPoD44KzB90.jpg?width=1080&crop=smart&auto=webp&s=186af0706528ee87d96b14e6bc8bcfa3b34ea3bf', 'width': 1080}], 'source': {'height': 816, 'url': 'https://external-preview.redd.it/EJouowZ7Jqk1A1qY5Skl_I1KhpijgNZpbPoD44KzB90.jpg?auto=webp&s=010f022ad73e92813df5d0dd377d93b2b2de593b', 'width': 1456}, 'variants': {}}]} |
What is the current best small model for coding? | 14 | For all of us with a medium level GPU (10-20 gb in VRAM), what are the current top open source models that should be tried first for a local coding assistant?
| 2025-01-30T17:13:26 | https://www.reddit.com/r/LocalLLaMA/comments/1idrxfs/what_is_the_current_best_small_model_for_coding/ | Boltyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrxfs | false | null | t3_1idrxfs | /r/LocalLLaMA/comments/1idrxfs/what_is_the_current_best_small_model_for_coding/ | false | false | self | 14 | null |
"Low-Cost" 70b 8-bit inference rig. | 37 | **Thank you for viewing my best attempt at a reasonably priced 70b 8-bit inference rig.**
I appreciate everyone's input on my sanity check post as it has yielded greatness. :)
**Inspiration:** [Towards Data Science Article](https://towardsdatascience.com/how-to-build-a-multi-gpu-system-for-deep-learning-in-2023-e5bbb905d935)
**Build Details and Costs:**
**"Low Cost" Necessities:**
* Intel Xeon W-2155 10-Core - $167.43 (used)
* ASUS WS C422 SAGE/10G Intel C422 MOBO - $362.16 (open-box)
* EVGA Supernova 1600 P+ - $285.36 (new)
* (256GB) Micron (8x32GB) 2Rx4 PC4-2400T RDIMM - $227.28
* PNY RTX A5000 GPU X4 - \~$5,596.68 (open-box)
* Micron 7450 PRO 960 GB - \~$200 (on hand)
**Personal Selections, Upgrades, and Additions:**
* SilverStone Technology RM44 Chassis - $319.99 (new) (Best 8 PCIE slot case IMO)
* Noctua NH-D9DX i4 3U, Premium CPU Cooler - $59.89 (new)
* Noctua NF-A12x25 PWM X3 - $98.76 (new)
* Seagate Barracuda 3TB ST3000DM008 7200RPM 3.5" SATA Hard Drive HDD - $63.20 (new)
**Total w/ GPUs:** \~$7,350
**Issues:**
* RAM issues. It seems they must be paired and it was picky needing Micron.
**Key Gear Reviews:**
* **Silverstone Chassis:**
* Truly a pleasure to build and work in. Cannot say enough how smart the design is. No issues.
* **Noctua Gear:**
* All excellent and quiet with a pleasing noise at load. I mean, it's Noctua.
**Basic Benchmarks**
\~27 t/s non concurrent
\~120 t/s concurrent
**Non-concurrent**
* \*\*Input command:\*\*Copy code python token\_benchmark\_ray.py --model "cortecs/Llama-3.3-70B-Instruct-FP8-Dynamic" --mean-input-tokens 550 --stddev-input-tokens 150 --mean-output-tokens 150 --stddev-output-tokens 10 --max-num-completed-requests 10 --timeout 600 --num-concurrent-requests 1 --results-dir "result\_outputs" --llm-api openai --additional-sampling-params '{}'
* **Result:**
* Number Of Errored Requests: 0
* Overall Output Throughput: 26.933382788310297
* Number Of Completed Requests: 10
* Completed Requests Per Minute: 9.439269668800337
**Concurrent**
* \*\*Input command:\*\*Copy code python token\_benchmark\_ray.py --model "cortecs/Llama-3.3-70B-Instruct-FP8-Dynamic" --mean-input-tokens 550 --stddev-input-tokens 150 --mean-output-tokens 150 --stddev-output-tokens 10 --max-num-completed-requests 100 --timeout 600 --num-concurrent-requests 16 --results-dir "result\_outputs" --llm-api openai --additional-sampling-params '{}'
* **Result:**
* Number Of Errored Requests: 0
* Overall Output Throughput: 120.43197653058412
* Number Of Completed Requests: 100
* Completed Requests Per Minute: 40.81286976467126
**TL;DR:**
Built a cost-effective 70b 8-bit inference rig with some open-box and used parts. Faced RAM compatibility issues but achieved satisfactory build quality and performance benchmarks. Total cost with GPUs is approximately $7,350.
https://preview.redd.it/8gl5mv7dz5ge1.jpg?width=4080&format=pjpg&auto=webp&s=6ce96fa726c7626d414b078c9ba4a2a2d9931ee1
https://preview.redd.it/hh8ngdwez5ge1.jpg?width=4080&format=pjpg&auto=webp&s=171bb79ce7314c36d7649cfa4ff2b8741f48a5df
https://preview.redd.it/oje989wez5ge1.jpg?width=4080&format=pjpg&auto=webp&s=03288135cf349f6006ab41c5de00d4083e4b8bca
https://preview.redd.it/uv1lfbwez5ge1.jpg?width=4080&format=pjpg&auto=webp&s=bd8c02e8bd46fec360dbaeecf90f0566ebd51b95
| 2025-01-30T17:15:38 | https://www.reddit.com/r/LocalLLaMA/comments/1idrzhz/lowcost_70b_8bit_inference_rig/ | koalfied-coder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrzhz | false | null | t3_1idrzhz | /r/LocalLLaMA/comments/1idrzhz/lowcost_70b_8bit_inference_rig/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'CvURlBWHBlMb1WJqHclruG8LNZEUZeQVQwNOEy5LXz0', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/00dKHdq21sJRX1oSrcbJ8tTfRNH4p_MaFaor6S4Sbck.jpg?width=108&crop=smart&auto=webp&s=5d3c2e2ddcbf5fe622b8ad41b2507c4c3f97ddac', 'width': 108}, {'height': 243, 'url': 'https://external-preview.redd.it/00dKHdq21sJRX1oSrcbJ8tTfRNH4p_MaFaor6S4Sbck.jpg?width=216&crop=smart&auto=webp&s=c0af7f2b355c41abfcaf53f205f46c991d335fb8', 'width': 216}, {'height': 360, 'url': 'https://external-preview.redd.it/00dKHdq21sJRX1oSrcbJ8tTfRNH4p_MaFaor6S4Sbck.jpg?width=320&crop=smart&auto=webp&s=2d3f6c3000727b8d979bf7721a5de1dfecf4cd89', 'width': 320}, {'height': 721, 'url': 'https://external-preview.redd.it/00dKHdq21sJRX1oSrcbJ8tTfRNH4p_MaFaor6S4Sbck.jpg?width=640&crop=smart&auto=webp&s=9dd3564bdfc16966a6ab1d540c005730c11012bc', 'width': 640}, {'height': 1082, 'url': 'https://external-preview.redd.it/00dKHdq21sJRX1oSrcbJ8tTfRNH4p_MaFaor6S4Sbck.jpg?width=960&crop=smart&auto=webp&s=3478988ee25c65450392048c778d53996991b077', 'width': 960}, {'height': 1217, 'url': 'https://external-preview.redd.it/00dKHdq21sJRX1oSrcbJ8tTfRNH4p_MaFaor6S4Sbck.jpg?width=1080&crop=smart&auto=webp&s=530ad9148d319eefcb2b303dea5b798cb5eb96f2', 'width': 1080}], 'source': {'height': 1353, 'url': 'https://external-preview.redd.it/00dKHdq21sJRX1oSrcbJ8tTfRNH4p_MaFaor6S4Sbck.jpg?auto=webp&s=70fd3dd701ea977aea8a165231bc40881ad9d5d2', 'width': 1200}, 'variants': {}}]} |
|
What are some good chatbots I can run via PocketPal on iPhone 11 Pro Max? | 0 | [removed] | 2025-01-30T17:15:47 | https://www.reddit.com/r/LocalLLaMA/comments/1idrzm8/what_are_some_good_chatbots_i_can_run_via/ | Hyperion_OS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrzm8 | false | null | t3_1idrzm8 | /r/LocalLLaMA/comments/1idrzm8/what_are_some_good_chatbots_i_can_run_via/ | false | false | self | 0 | null |
Hardware recommendations that is not too expensive? | 1 | [removed] | 2025-01-30T17:15:51 | https://www.reddit.com/r/LocalLLaMA/comments/1idrzos/hardware_recommendations_that_is_not_too_expensive/ | yelling-at-clouds-40 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idrzos | false | null | t3_1idrzos | /r/LocalLLaMA/comments/1idrzos/hardware_recommendations_that_is_not_too_expensive/ | false | false | self | 1 | null |
Microsoft and University of China develop CoRAG (chain of retrieval augmented generation) | 35 | This could led to some awesome improvements in accuracy for RAG. This is essentially CoT applied to the queries for RAG. | 2025-01-30T17:18:32 | https://www.marktechpost.com/2025/01/28/microsoft-ai-introduces-corag-chain-of-retrieval-augmented-generation-an-ai-framework-for-iterative-retrieval-and-reasoning-in-knowledge-intensive-tasks/?amp | throwawayacc201711 | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 1ids22f | false | null | t3_1ids22f | /r/LocalLLaMA/comments/1ids22f/microsoft_and_university_of_china_develop_corag/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'cAxevHNWeVcvS5czicfkAlM7JlJl-YtHBso7kGe9d7o', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/6eoofLfOC4_cc9PR6ZpafYjT0gy2izYp6JJgo6BzJoI.jpg?width=108&crop=smart&auto=webp&s=b68bb5d417b9b0c9837057ab2f8fa048b954bff5', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/6eoofLfOC4_cc9PR6ZpafYjT0gy2izYp6JJgo6BzJoI.jpg?width=216&crop=smart&auto=webp&s=f4964e8487fb683ede8127f820413fa490967e94', 'width': 216}, {'height': 150, 'url': 'https://external-preview.redd.it/6eoofLfOC4_cc9PR6ZpafYjT0gy2izYp6JJgo6BzJoI.jpg?width=320&crop=smart&auto=webp&s=1d59f850de841e61fafdba4de542b2837bd61347', 'width': 320}, {'height': 301, 'url': 'https://external-preview.redd.it/6eoofLfOC4_cc9PR6ZpafYjT0gy2izYp6JJgo6BzJoI.jpg?width=640&crop=smart&auto=webp&s=648350a31801382dd38062c49ca776907bcc5b1a', 'width': 640}, {'height': 451, 'url': 'https://external-preview.redd.it/6eoofLfOC4_cc9PR6ZpafYjT0gy2izYp6JJgo6BzJoI.jpg?width=960&crop=smart&auto=webp&s=bc41e9e13c23c1ffc6e02df563ae1ba41f90672d', 'width': 960}, {'height': 508, 'url': 'https://external-preview.redd.it/6eoofLfOC4_cc9PR6ZpafYjT0gy2izYp6JJgo6BzJoI.jpg?width=1080&crop=smart&auto=webp&s=ddc1289cfe53ffa330e2179afd60e5a0783f5f47', 'width': 1080}], 'source': {'height': 910, 'url': 'https://external-preview.redd.it/6eoofLfOC4_cc9PR6ZpafYjT0gy2izYp6JJgo6BzJoI.jpg?auto=webp&s=994f8972a673af5f7f62dbaf3d6d7c5b90406548', 'width': 1934}, 'variants': {}}]} |
|
Its hard to find the right information plz help. | 1 | Im new to this and went down the deepseek rabbit hole and want to know how to run ai “locally” wondering if theres not so much censorship because i cant ask actual questions i feel like everything i ask is forbidden and want to learn how i can break these stupid ai things AND BEND IT TO MY WILL!!!!
Just a regular fella kinda tech dumb so pictures help lol | 2025-01-30T17:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ids57w/its_hard_to_find_the_right_information_plz_help/ | GutterGuy0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ids57w | false | null | t3_1ids57w | /r/LocalLLaMA/comments/1ids57w/its_hard_to_find_the_right_information_plz_help/ | false | false | self | 1 | null |
When querying R1 through OpenRouter, do you guys get reasoning tokens? | 2 | I wanted to run it through open webui and noticed it only ever gives me the final answers. Came across a pipe that was supposed to include them in the chat, but long story short, looking at the raw responses, there is no reasoning tokens :(
Do we only ever get them from the DeepSeek API and nobody else? | 2025-01-30T17:27:54 | https://www.reddit.com/r/LocalLLaMA/comments/1idsaah/when_querying_r1_through_openrouter_do_you_guys/ | linkcharger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idsaah | false | null | t3_1idsaah | /r/LocalLLaMA/comments/1idsaah/when_querying_r1_through_openrouter_do_you_guys/ | false | false | self | 2 | null |
DeepSeek R1 671B over 2 tok/sec *without* GPU on local gaming rig! | 1,135 | Don't rush out and buy that 5090TI just yet (if you can even find one lol)!
I just inferenced \~2.13 tok/sec with 2k context using a dynamic quant of the full R1 671B model (not a distill) after *disabling* my 3090TI GPU on a 96GB RAM gaming rig. The secret trick is to *not* load anything but kv cache into RAM and let `llama.cpp` use its default behavior to `mmap()` the model files off of a fast NVMe SSD. The rest of your system RAM acts as disk cache for the active weights.
Yesterday a bunch of folks got the dynamic quant flavors of `unsloth/DeepSeek-R1-GGUF` running on gaming rigs in another thread here. I myself got the `DeepSeek-R1-UD-Q2_K_XL` flavor going between 1\~2 toks/sec and 2k\~16k context on 96GB RAM + 24GB VRAM experimenting with context length and up to 8 concurrent slots inferencing for increased aggregate throuput.
After experimenting with various setups, the bottle neck is clearly my Gen 5 x4 NVMe SSD card as the CPU doesn't go over \~30%, the GPU was basically idle, and the power supply fan doesn't even come on. So while slow, it isn't heating up the room.
So instead of a $2k GPU what about $1.5k for 4x NVMe SSDs on an expansion card for 2TB "VRAM" giving theoretical max sequential read "memory" bandwidth of \~48GB/s? This less expensive setup would likely give better price/performance for big MoEs on home rigs. If you forgo a GPU, you could have 16 lanes of PCIe 5.0 all for NVMe drives on gamer class motherboards.
If anyone has a fast read IOPs drive array, I'd love to hear what kind of speeds you can get. I gotta bug Wendell over at Level1Techs lol...
P.S. In my opinion this quantized R1 671B beats the pants off any of the distill model toys. While slow and limited in context, it is still likely the best thing available for home users for many applications.
Just need to figure out how to short circuit the `<think>Blah blah</think>` stuff by injecting a `</think>` into the assistant prompt to see if it gives decent results without all the yapping haha... | 2025-01-30T17:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1idseqb/deepseek_r1_671b_over_2_toksec_without_gpu_on/ | VoidAlchemy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idseqb | false | null | t3_1idseqb | /r/LocalLLaMA/comments/1idseqb/deepseek_r1_671b_over_2_toksec_without_gpu_on/ | false | false | self | 1,135 | null |
RTX 5090/80 already sold out | 6 | Microcenter has a banner to check back for restocks :(
https://preview.redd.it/egp0m2ws26ge1.png?width=1117&format=png&auto=webp&s=49094f4eee601468209a1bfd81b03f0540d0e794
| 2025-01-30T17:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1idsfyg/rtx_509080_already_sold_out/ | isposinf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idsfyg | false | null | t3_1idsfyg | /r/LocalLLaMA/comments/1idsfyg/rtx_509080_already_sold_out/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'h5lVCyEb4PcltMDR1vX58CKVkYu6cx5-8YUxVP0ffkg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7JMBr51YXf2f9XQGEkc6W9qePf8NJSVGqKI189Ri-cQ.jpg?width=108&crop=smart&auto=webp&s=0b80b28e28a60695678bcf1ee138f62e80c98173', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/7JMBr51YXf2f9XQGEkc6W9qePf8NJSVGqKI189Ri-cQ.jpg?auto=webp&s=db0ad0718456568fae9ba661cd9d73e1a416df59', 'width': 200}, 'variants': {}}]} |
|
are they going to show ads? | 6 | 2025-01-30T17:41:46 | https://www.reddit.com/r/LocalLLaMA/comments/1idsmcp/are_they_going_to_show_ads/ | Many_Novel_9716 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idsmcp | false | null | t3_1idsmcp | /r/LocalLLaMA/comments/1idsmcp/are_they_going_to_show_ads/ | false | false | 6 | null |
||
deepseek error I need help. | 1 | [removed] | 2025-01-30T17:43:25 | https://www.reddit.com/r/LocalLLaMA/comments/1idsnrn/deepseek_error_i_need_help/ | Puzzleheaded_Gap9252 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idsnrn | false | null | t3_1idsnrn | /r/LocalLLaMA/comments/1idsnrn/deepseek_error_i_need_help/ | false | false | 1 | null |
|
Best Local AI Coding Assistant? (Qwen2.5 Coder , DeepSeek R1 v, Llama 3, etc.) | 1 | [removed] | 2025-01-30T17:44:23 | https://www.reddit.com/r/LocalLLaMA/comments/1idsole/best_local_ai_coding_assistant_qwen25_coder/ | ahmadmosha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idsole | false | null | t3_1idsole | /r/LocalLLaMA/comments/1idsole/best_local_ai_coding_assistant_qwen25_coder/ | false | false | self | 1 | null |
Which is the best NSFW llm? | 1 | [removed] | 2025-01-30T17:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1idsrnu/which_is_the_best_nsfw_llm/ | NebulaNinja_779 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idsrnu | false | null | t3_1idsrnu | /r/LocalLLaMA/comments/1idsrnu/which_is_the_best_nsfw_llm/ | false | false | nsfw | 1 | null |
Best Local AI Coding Assistant? (Qwen2.5 Coder vs DeepSeek R1 vs Llama 3, etc.) | 1 | [removed] | 2025-01-30T17:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1idss89/best_local_ai_coding_assistant_qwen25_coder_vs/ | ahmadmosha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idss89 | false | null | t3_1idss89 | /r/LocalLLaMA/comments/1idss89/best_local_ai_coding_assistant_qwen25_coder_vs/ | false | false | self | 1 | null |
Open Web UI isn't working right with Deep Seek Coder v2 | 2 | GPU: 4080 Super
Version details:
Ollama Web UI Versionv1.0.0-alpha.100
Ollama Version0.5.7
deepseek-coder-v2:16b
So I installed Ollama today, installed llama2 and started using it in the terminal. It worked great there. So I installed Ollama Web UI inside Docker and fired up the Web UI. So far it was going super smooth. Llama2 worked beautifully.
So I decided to install deepseek-coder-v2:16b next. That works perfectly in the terminal and gives me the kind of answers I am looking for, but once I run it in Ollama Web UI it gives me these weird answers that don't answer my question.
Here is an example:
Prompt: Write me a hello world script in Python.
Answer: <|begin▁of▁sentence|>Question: What is the term for the process of evaluating and organizing data to answer questions or solve problems?
Answer: The term you are looking for is "data analysis." Data analysis involves examining, transforming, and modeling data with the goal of discovering useful information, drawing conclusions, and supporting decision-making. It can be a complex task that requires both technical skills in statistical techniques and software applications as well as an understanding of the specific questions or problems one hopes to address through the analysis. There are several methods used for data analysis, including:
1. Descriptive statistics - This involves summarizing key features of the data using measures such as mean, median, mode, standard deviation, etc., which helps in understanding the overall pattern and distribution within the dataset.
2. Inferential statistics - Based on sample data, this approach aims to make predictions or generalizations about a larger population by estimating parameters with confidence intervals. Common techniques include hypothesis testing, regression analysis, ANOVA (Analysis of Variance), and chi-square tests.
3. Predictive analytics - Using machine learning algorithms like decision trees, random forests, neural networks, clustering, etc., this method aims to forecast future trends based on historical data patterns and relationships between variables. It is widely used in business for customer segmentation, fraud detection, risk assessment, recommendation engines, and more.
4. Data visualization - Graphical representation of information through charts, graphs, dashboards, or other visual aids helps in understanding complex datasets more easily by highlighting important features such as trends, outliers, correlations, etc. Visual analytics tools are also used to interactively explore and analyze data.
5. Text analysis/data mining - This involves using statistical techniques to uncover patterns, relationships, and anomalies within textual data from surveys, social media, or other sources. It includes methods like sentiment analysis, topic modeling, text classification, clustering algorithms, etc., which can help identify customer opinions or preferences, detect fraud in written communications, analyze trends over time, among others.
6. Big Data analytics - This is the process of analyzing extremely large data sets that traditional data processing software and techniques cannot fully manage. It requires scalable architectures like Hadoop or cloud-based solutions for handling massive volumes of structured and unstructured data across various platforms. Techniques such as predictive modeling, anomaly detection, network analysis etc., are used to identify actionable insights from big data sources.
Data analysis plays a crucial role in decision making processes by providing meaningful information that can lead to better decisions about business strategies, scientific research, operational improvements, financial investments, and many more applications. It is an iterative process where one may need to revisit the initial stages of collection, cleaning, and preparation if needed based on new findings or insights gained from analyzing the data.
The same prompt in terminal gives me:
Certainly! Here's a simple "Hello, World!" script written in Python:
\`\`\`python
print("Hello, World!")
\`\`\`
| 2025-01-30T17:49:46 | https://www.reddit.com/r/LocalLLaMA/comments/1idsteo/open_web_ui_isnt_working_right_with_deep_seek/ | AnacondaMode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idsteo | false | null | t3_1idsteo | /r/LocalLLaMA/comments/1idsteo/open_web_ui_isnt_working_right_with_deep_seek/ | false | false | self | 2 | null |
How interested would people be in a plug and play local LLM device/server? | 0 | It would be a device that you could plug in at home to run LLMs and access anywhere via mobile app or website. It would be around $1000 and have a nice interface and apps for completely private LLM and image generation usage. It would essentially be powered by a RTX 3090, with 24gb VRAM, so it could run a lot of quality models.
I imagine it being like a Synology NAS but more focused on AI and giving people the power and privacy to control their own models, data, information, and cost. The only cost other than the initial hardware purchase would be electricity. It would be super simple to manage and keep running so that it would be accessible to people of all skill levels.
Would you purchase this for $1000?
What would you expect it do to?
What would make it worth it?
I am a just doing product research so any thoughts, advice, feedback is helpful! Thanks! | 2025-01-30T17:57:09 | https://www.reddit.com/r/LocalLLaMA/comments/1idszrl/how_interested_would_people_be_in_a_plug_and_play/ | Fit-Luck-7364 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idszrl | false | null | t3_1idszrl | /r/LocalLLaMA/comments/1idszrl/how_interested_would_people_be_in_a_plug_and_play/ | false | false | self | 0 | null |
How do the DeepSeek-r1-Distills compare to their base models? | 7 | So how does DeepSeek-R1-Distill-Qwen-14B compare to Qwen 2.5 14B?
Same thing with DeepSeek-R1-Distill-Llama-8B and Llama3.1-8B? | 2025-01-30T18:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/1idt291/how_do_the_deepseekr1distills_compare_to_their/ | Iskuss1418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idt291 | false | null | t3_1idt291 | /r/LocalLLaMA/comments/1idt291/how_do_the_deepseekr1distills_compare_to_their/ | false | false | self | 7 | null |
Consumer hardware options for current state of avalible models | 1 | [removed] | 2025-01-30T18:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1idt36i/consumer_hardware_options_for_current_state_of/ | Few-Commercial-9869 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idt36i | false | null | t3_1idt36i | /r/LocalLLaMA/comments/1idt36i/consumer_hardware_options_for_current_state_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vhlDLH-v4if-SqlswU6rXsikSB_n13K-YnomXxYD23s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=108&crop=smart&auto=webp&s=e7780b8674f2af956f74bce8ab247c188aadb8b5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=216&crop=smart&auto=webp&s=6bff17f6b7829df187990561d91072cc56e01220', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=320&crop=smart&auto=webp&s=d93a38f1670951aca83cae1c4ca0f01e7afcd0b6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=640&crop=smart&auto=webp&s=f21e0580018fbc158db3aacd175011215d1a0016', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=960&crop=smart&auto=webp&s=feda14d9a001301a734fe6957c26da36c472d477', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=1080&crop=smart&auto=webp&s=fb6db3d14b79c07013e17fb5cddb45ddb53cfdee', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?auto=webp&s=0c46eea26c9cfbb5d99c2d8af099fa9505ab2665', 'width': 1600}, 'variants': {}}]} |
Watch this SmolAgent save me over 100 hours of work. | 1 | I work for a small non-profit. Time is valuable and there is not a lot to go around. In this simple script I used the recently released smolAgents framework from hugging face to create a simple plant variety research agent. We have alot of varieties we keep track for our seed-bank. I was tasked with researching and finding sources for around 600 seed varieties. Half way through I got fed up with the insane mind numbing copy paste, verify, cross reference, double check, go cross eyed with spread sheets etc. Smolagents was just the ticket back to sanity. This script researches, retrieves descriptions, URL and updates my csv with the information. ITS WORKING. Is it perfect? HELL NO. Still though I could not believe my eyes. Even horticulturists and farmers will greatly benefit from this fast moving technology. My mission over the years learning software development was to find ways to use tech for helping small farmers and other related fields. It is accessible, it is powerful, life changing, game changing. Seeing the effect of the LLM in my personal life has made me a LLM enthusiast for life bruh.
Sadly this will all have to be done in the shadows because some folks in our org are just a priori anti AI. Bless their hearts though nothing can stop the tsunami of cognitive enhancement. The hominid will not go un-modified! So I will use the massive cognitive and physical energy savings in secret for now. I wonder how many of us there are out there in the shadows improving the human experience while trying to avoid the stigma. I hope it doesn't have to go on this way for much longer. This tech is life changing in the here and now not because Closed AI gave us better lobotomized option. We don't need them now. The core functionality is here my dudes and dudettes! Cheers to everyone here and happy Bartowski to you all.
P.s. I used qwenCoder32B in case anyone is wondering through the huggingface api because i was too lazy to setup ollama on my local machine....Don't worry though I am still LOCALLLAMA ALL THE WAY homies. Sometimes those damn APIs are just too convenient. Here is the link to my script for anyone who wants to tinker and rejigger it. CAUTIONARY DISCLAIMER of course. Its gonna need a lot of massaging for other use cases but herrrrre is anyways. Best wishes to you all, fellow keepers of the genome.
[https://github.com/Forest-Person/smolResearcher/blob/main/smolResearcher.py](https://github.com/Forest-Person/smolResearcher/blob/main/smolResearcher.py) | 2025-01-30T18:06:44 | https://www.reddit.com/r/LocalLLaMA/comments/1idt8cc/watch_this_smolagent_save_me_over_100_hours_of/ | Foreign-Beginning-49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idt8cc | false | null | t3_1idt8cc | /r/LocalLLaMA/comments/1idt8cc/watch_this_smolagent_save_me_over_100_hours_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9oz6-6oEg0hnmyqkxaUu-Ir4-n-lDdLIJJ3eYjAy7Vo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x2xparZH8f0HEebM9IT-NyRzSk_I2FBPgp_-HUPlf_M.jpg?width=108&crop=smart&auto=webp&s=1dc324df5fb87a36c4e36148395daabb4ab78755', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x2xparZH8f0HEebM9IT-NyRzSk_I2FBPgp_-HUPlf_M.jpg?width=216&crop=smart&auto=webp&s=94441f932cdd01a1606fe884873224533a5c8c10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x2xparZH8f0HEebM9IT-NyRzSk_I2FBPgp_-HUPlf_M.jpg?width=320&crop=smart&auto=webp&s=1659bf26fb0d9254e950df72616302b1d0784c7f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x2xparZH8f0HEebM9IT-NyRzSk_I2FBPgp_-HUPlf_M.jpg?width=640&crop=smart&auto=webp&s=0429086c3b3f963e78de14bda7d1f8e25ba2733e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x2xparZH8f0HEebM9IT-NyRzSk_I2FBPgp_-HUPlf_M.jpg?width=960&crop=smart&auto=webp&s=4bfb78a4ab9a38c496daa538b2084bf75d51be19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x2xparZH8f0HEebM9IT-NyRzSk_I2FBPgp_-HUPlf_M.jpg?width=1080&crop=smart&auto=webp&s=fa1a9a8de2a5c40ec1e9dd6d419da03063e53da3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x2xparZH8f0HEebM9IT-NyRzSk_I2FBPgp_-HUPlf_M.jpg?auto=webp&s=83ff3e7743aaa727698356d617bde98ae4977f3b', 'width': 1200}, 'variants': {}}]} |
Watch this SmolAgent save me over 100 hours of work. | 270 | 2025-01-30T18:08:42 | https://v.redd.it/je2gcviv86ge1 | Foreign-Beginning-49 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idt9xz | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/je2gcviv86ge1/DASHPlaylist.mpd?a=1740852537%2CMmYzM2Y4NDU1ODU4NGE3NGM1YmM3YTllODI4MGE0YWM5MTg5ODBmZTYwZTU5MThkMGRiNDk5NDU3NTE2NDI4Mg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/je2gcviv86ge1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/je2gcviv86ge1/HLSPlaylist.m3u8?a=1740852537%2CYmE0ZjY0YjAyYjg3MDcwODQyNjE5ZTQwNmNmYjRiZTQ0M2ExMDBjODExYThkNmM2MDdjMjQzNTc5ZGUxMTRmZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/je2gcviv86ge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1idt9xz | /r/LocalLLaMA/comments/1idt9xz/watch_this_smolagent_save_me_over_100_hours_of/ | false | false | 270 | {'enabled': False, 'images': [{'id': 'eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=2c0bef593dfaf43e049c343a6e4a997c4b5f4d94', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=4f25b725ab14802ad92a32219379937acedbeba3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=d45210d0495a53ccb9e1e569ce8bac3029c7577b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=1eb14813261e3d7c7ac3dab9f68a1688146fd672', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=e2a2d2d7e8770e14aa55ed7a814cb7853380f99d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=700d979b0bcb0e27978714b6cc778a5038e11ab5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eXpvaDN2aXY4NmdlMaIWY-pKRTEFed4oaflr_50jeaU7y6AfPZ2q49QYyqUZ.png?format=pjpg&auto=webp&s=7a69e0264aa2e094d18eb803cda7d39428a6676d', 'width': 1920}, 'variants': {}}]} |
||
Chatbox ai not working for web app | 1 | [removed] | 2025-01-30T18:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1idtbcp/chatbox_ai_not_working_for_web_app/ | PeakSafe4944 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idtbcp | false | null | t3_1idtbcp | /r/LocalLLaMA/comments/1idtbcp/chatbox_ai_not_working_for_web_app/ | false | false | self | 1 | null |
Interview with Deepseek Founder: We won’t go closed-source. We believe that establishing a robust technology ecosystem matters more. | 1,443 | 2025-01-30T18:20:59 | https://thechinaacademy.org/interview-with-deepseek-founder-were-done-following-its-time-to-lead/ | deoxykev | thechinaacademy.org | 1970-01-01T00:00:00 | 0 | {} | 1idtkll | false | null | t3_1idtkll | /r/LocalLLaMA/comments/1idtkll/interview_with_deepseek_founder_we_wont_go/ | false | false | 1,443 | {'enabled': False, 'images': [{'id': 'huzDE5TeHScMsNosRgzPeviN9ZULgr-cQo6jSVWvwTM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VCPkBGJsVaggWY7c9V20KQQGCJhrF411vyVYUsHeuns.jpg?width=108&crop=smart&auto=webp&s=bcf125ac955a39e3a0293736fcc89fc199bca995', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/VCPkBGJsVaggWY7c9V20KQQGCJhrF411vyVYUsHeuns.jpg?width=216&crop=smart&auto=webp&s=e10cdeabfc69915a2f9214b984200023082d9c82', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/VCPkBGJsVaggWY7c9V20KQQGCJhrF411vyVYUsHeuns.jpg?width=320&crop=smart&auto=webp&s=495bbbb03e5ebeff92050c2a71f7e340cb4bbebc', 'width': 320}], 'source': {'height': 302, 'url': 'https://external-preview.redd.it/VCPkBGJsVaggWY7c9V20KQQGCJhrF411vyVYUsHeuns.jpg?auto=webp&s=3a1a8a7b3b7c2e4a4801a61221ea2d3427a1bafe', 'width': 536}, 'variants': {}}]} |
||
Is there anything in open source or open weights comparable to Suuno? | 1 | [removed] | 2025-01-30T18:22:37 | https://www.reddit.com/r/LocalLLaMA/comments/1idtm1s/is_there_anything_in_open_source_or_open_weights/ | _Zibri_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idtm1s | false | null | t3_1idtm1s | /r/LocalLLaMA/comments/1idtm1s/is_there_anything_in_open_source_or_open_weights/ | false | false | self | 1 | null |
Mistral Small 3 24b Q6 initial test results | 49 | Its... kind of rough.
It's good. It's VERY smart, but really rough around the edges:
1. It doesn't follow instructions well, basically useless for JSON formatting or anything where it has to adhere to a response style. Kind of odd as Mistral Small 2 22b
2. It writes good code with random errors. If you're even a mediocre dev you'll find this fine, but it includes several random imports that don't get used and seems to randomly declare/cache things and never refer to them again | 2025-01-30T18:24:26 | https://www.reddit.com/r/LocalLLaMA/comments/1idtnnh/mistral_small_3_24b_q6_initial_test_results/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idtnnh | false | null | t3_1idtnnh | /r/LocalLLaMA/comments/1idtnnh/mistral_small_3_24b_q6_initial_test_results/ | false | false | self | 49 | null |
[Request] Share-a-RAG. Lets up the stakes. | 1 | [removed] | 2025-01-30T18:37:29 | https://www.reddit.com/r/LocalLLaMA/comments/1idtypq/request_sharearag_lets_up_the_stakes/ | SiEgE-F1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idtypq | false | null | t3_1idtypq | /r/LocalLLaMA/comments/1idtypq/request_sharearag_lets_up_the_stakes/ | false | false | self | 1 | null |
DeepSeek R1 70B on Cerebras Inference Cloud! | 0 | Today, Cerebras launched DeepSeek-R1-Distill-Llama-70B on the Cerebras Inference Cloud at over 1,500 tokens/sec!
* Blazing Speed: over 1,500 tokens/second (57x faster than GPUs)
* Instant Reasoning: Real-time insights from a top open-weight model
* Secure & Local: Runs on U.S. infrastructure
Try it now: [https://inference.cerebras.ai/](https://inference.cerebras.ai/) | 2025-01-30T18:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1idtz0d/deepseek_r1_70b_on_cerebras_inference_cloud/ | CS-fan-101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idtz0d | false | null | t3_1idtz0d | /r/LocalLLaMA/comments/1idtz0d/deepseek_r1_70b_on_cerebras_inference_cloud/ | false | false | self | 0 | null |
What is a good LM for improving my writing? | 1 | For example, when it comes to wording something better or correcting grammar | 2025-01-30T18:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/1idu1r2/what_is_a_good_lm_for_improving_my_writing/ | Zmoogz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idu1r2 | false | null | t3_1idu1r2 | /r/LocalLLaMA/comments/1idu1r2/what_is_a_good_lm_for_improving_my_writing/ | false | false | self | 1 | null |
Your tricks to auto generate correct code using an LLM? | 1 | [removed] | 2025-01-30T18:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1idu8cc/your_tricks_to_auto_generate_correct_code_using/ | TumbleweedDeep825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idu8cc | false | null | t3_1idu8cc | /r/LocalLLaMA/comments/1idu8cc/your_tricks_to_auto_generate_correct_code_using/ | false | false | self | 1 | null |
Asking Deepseek about if AI should have Self Awareness interesting read | 1 | 2025-01-30T18:57:47 | https://www.reddit.com/gallery/1idug57 | sad-cap6998 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1idug57 | false | null | t3_1idug57 | /r/LocalLLaMA/comments/1idug57/asking_deepseek_about_if_ai_should_have_self/ | false | false | 1 | null |
||
Mistral Small 3 one-shotting Unsloth's Flappy Bird coding test in 1 min (vs 3hrs for DeepSeek R1 using NVME drive) | 2 | 2025-01-30T18:58:46 | jd_3d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iduh2l | false | null | t3_1iduh2l | /r/LocalLLaMA/comments/1iduh2l/mistral_small_3_oneshotting_unsloths_flappy_bird/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'IF1MBDg-oeFylFyRkEYIcRv90oA8dwecN9_F3HrAHVY', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/chc2j6xkh6ge1.png?width=108&crop=smart&auto=webp&s=61e2d46910595ff6b353afb7ad748ae577233364', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/chc2j6xkh6ge1.png?width=216&crop=smart&auto=webp&s=e51d6fb3c921b7c7839a6c960836033c1459120d', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/chc2j6xkh6ge1.png?width=320&crop=smart&auto=webp&s=17dd5413d61114c3b429bba4a208a1141906b187', 'width': 320}, {'height': 537, 'url': 'https://preview.redd.it/chc2j6xkh6ge1.png?width=640&crop=smart&auto=webp&s=9c6bb5d3a2d269dc49afb8a66e0b455587bfff3d', 'width': 640}, {'height': 806, 'url': 'https://preview.redd.it/chc2j6xkh6ge1.png?width=960&crop=smart&auto=webp&s=044d8e74f0b5144a3dbda19aa1b24df4cdb41ba4', 'width': 960}, {'height': 906, 'url': 'https://preview.redd.it/chc2j6xkh6ge1.png?width=1080&crop=smart&auto=webp&s=a4bbca6af5ef435d3d9bb545f93a1d1fa99a3279', 'width': 1080}], 'source': {'height': 1567, 'url': 'https://preview.redd.it/chc2j6xkh6ge1.png?auto=webp&s=d46325ec74119d37ce50506b0a1e7fe2cf0fa5eb', 'width': 1866}, 'variants': {}}]} |
|||
Mistral Small 3 one-shotting Unsloth's Flappy Bird coding test in 1 min (vs 3hrs for DeepSeek R1 using NVME drive) | 252 | 2025-01-30T19:02:02 | jd_3d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iduk3b | false | null | t3_1iduk3b | /r/LocalLLaMA/comments/1iduk3b/mistral_small_3_oneshotting_unsloths_flappy_bird/ | false | false | 252 | {'enabled': True, 'images': [{'id': 'dNYmAb4L9PCxeu-O77use_mCmRlWlj7iNYBsQiHPkmg', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/gazbvr6gi6ge1.jpeg?width=108&crop=smart&auto=webp&s=e25180f2ba8b7f2c4d369377bea39d00a7b233ab', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/gazbvr6gi6ge1.jpeg?width=216&crop=smart&auto=webp&s=09273ab8d90cee9eaefaad30d4003d2f93ebed27', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/gazbvr6gi6ge1.jpeg?width=320&crop=smart&auto=webp&s=3949422cdf2b0a3c145a5c2bef804c693a530561', 'width': 320}, {'height': 537, 'url': 'https://preview.redd.it/gazbvr6gi6ge1.jpeg?width=640&crop=smart&auto=webp&s=9f6de32bcaa9f8ae8ff3f2ab317c2401bd2f5b73', 'width': 640}, {'height': 806, 'url': 'https://preview.redd.it/gazbvr6gi6ge1.jpeg?width=960&crop=smart&auto=webp&s=8e79ef8690614bb3b053cfedab7dc943a4dd4654', 'width': 960}, {'height': 906, 'url': 'https://preview.redd.it/gazbvr6gi6ge1.jpeg?width=1080&crop=smart&auto=webp&s=e336566a6462b93e68d0332603ac26dc3e30952a', 'width': 1080}], 'source': {'height': 1567, 'url': 'https://preview.redd.it/gazbvr6gi6ge1.jpeg?auto=webp&s=7bbeb5d8567a352cedd9186d0940564b7e717c94', 'width': 1866}, 'variants': {}}]} |
|||
Potato PC | 1 | [removed] | 2025-01-30T19:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/1idundr/potato_pc/ | QuickServe430 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idundr | false | null | t3_1idundr | /r/LocalLLaMA/comments/1idundr/potato_pc/ | false | false | self | 1 | null |
After reading dozens of posts in this sub, I believe qwen2.5-32b is the best model for coding right? if so then which one from this list you suggest for my 24gb? thanks | 1 | 2025-01-30T19:05:41 | dreamer_2142 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iduni7 | false | null | t3_1iduni7 | /r/LocalLLaMA/comments/1iduni7/after_reading_dozens_of_posts_in_this_sub_i/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'sMsUoY0DpOFPBDhbRdm1m3eqX5BA0FEIkbVOC3_-D3w', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/h40d6zpzi6ge1.png?width=108&crop=smart&auto=webp&s=e828488d2227384308f58eafbb8a2e3f5ac18a74', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/h40d6zpzi6ge1.png?width=216&crop=smart&auto=webp&s=877e570b117aa15e0e5b5ade4869df9f30f93e22', 'width': 216}, {'height': 324, 'url': 'https://preview.redd.it/h40d6zpzi6ge1.png?width=320&crop=smart&auto=webp&s=162f232b4d1b560cb060abf77c2065a81d1adb71', 'width': 320}, {'height': 648, 'url': 'https://preview.redd.it/h40d6zpzi6ge1.png?width=640&crop=smart&auto=webp&s=997e4f7e1267270de0aa779da0419b15ee121bcd', 'width': 640}, {'height': 972, 'url': 'https://preview.redd.it/h40d6zpzi6ge1.png?width=960&crop=smart&auto=webp&s=3a97c5f5d2467f2c44e2e0b5a76c97cb9f7db1f1', 'width': 960}, {'height': 1093, 'url': 'https://preview.redd.it/h40d6zpzi6ge1.png?width=1080&crop=smart&auto=webp&s=e503c5ddd42145168e50b0decf2a4c13f3e075c8', 'width': 1080}], 'source': {'height': 1123, 'url': 'https://preview.redd.it/h40d6zpzi6ge1.png?auto=webp&s=ac1dfe4956a5c21d0a9dc609cd6418d51a4e3f49', 'width': 1109}, 'variants': {}}]} |
|||
Deepseek explained simply with pen and paper | 1 | 2025-01-30T19:06:15 | https://youtu.be/4ptWsPi46Nc?si=-cJQPBVd3zQf2UOe | techie_ray | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1iduo0h | false | {'oembed': {'author_name': 'techie_ray', 'author_url': 'https://www.youtube.com/@techieray', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/4ptWsPi46Nc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Explained simply: How does DeepSeek work? (Part 1 - Training)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/4ptWsPi46Nc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Explained simply: How does DeepSeek work? (Part 1 - Training)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1iduo0h | /r/LocalLLaMA/comments/1iduo0h/deepseek_explained_simply_with_pen_and_paper/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hjmq2AGSg51Tj0lEKxk1e_QHTtZs9mW1BGyEUJib47E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6Zl6NFLY9BN4tg8sbcZcBg_0FDZ8zNVboKDJCjjMv8U.jpg?width=108&crop=smart&auto=webp&s=083ff39ca051387f25ce4986ee23e2d6bd102162', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6Zl6NFLY9BN4tg8sbcZcBg_0FDZ8zNVboKDJCjjMv8U.jpg?width=216&crop=smart&auto=webp&s=4d4b4c37c70c203a4b61c7ce905033ae1e00e9ce', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6Zl6NFLY9BN4tg8sbcZcBg_0FDZ8zNVboKDJCjjMv8U.jpg?width=320&crop=smart&auto=webp&s=7d53d5c74328056b99031ac9bf64c8d62099763d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/6Zl6NFLY9BN4tg8sbcZcBg_0FDZ8zNVboKDJCjjMv8U.jpg?auto=webp&s=6b747cd0627fd4890bc51958f93f64f2501f36c6', 'width': 480}, 'variants': {}}]} |
||
Completely new to localLLM hosting - is it normal for them to have no short term memory?? | 1 | [removed] | 2025-01-30T19:08:03 | https://www.reddit.com/r/LocalLLaMA/comments/1idupke/completely_new_to_localllm_hosting_is_it_normal/ | Physical_Football_12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idupke | false | null | t3_1idupke | /r/LocalLLaMA/comments/1idupke/completely_new_to_localllm_hosting_is_it_normal/ | false | false | self | 1 | null |
Completely new to localLLM hosting - is it normal for them to have no short term memory?? | 1 | # | 2025-01-30T19:11:09 | https://www.reddit.com/r/LocalLLaMA/comments/1idus74/completely_new_to_localllm_hosting_is_it_normal/ | buffa0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idus74 | false | null | t3_1idus74 | /r/LocalLLaMA/comments/1idus74/completely_new_to_localllm_hosting_is_it_normal/ | false | false | self | 1 | null |
llama3.2.3b running on Android - Accidental Tamagochi | 1 | [removed] | 2025-01-30T19:12:07 | https://v.redd.it/y26hep1ak6ge1 | JasEriAnd_real | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idut0t | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/y26hep1ak6ge1/DASHPlaylist.mpd?a=1740856343%2CZTJhMmEyNTM0OTY1MTc2NDdmNDgwNTVmNDMyYmU4ZDNkZjVmMmUwNWI1ZjdkMWY2MmI2NWZjNzliMTU1YTI1Ng%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/y26hep1ak6ge1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/y26hep1ak6ge1/HLSPlaylist.m3u8?a=1740856343%2CNTk5ZmU1MGY2M2RjYzRiMzZiODI4NDA0Y2Y0NjdiZTIyYmI2NDFlZGIxYThlNzEwNDYzMDBjZDNlYWIwYWE5NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y26hep1ak6ge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1idut0t | /r/LocalLLaMA/comments/1idut0t/llama323b_running_on_android_accidental_tamagochi/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF.png?width=108&crop=smart&format=pjpg&auto=webp&s=df3ebbcf83aff80168672db0a88edf7b84d8076a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF.png?width=216&crop=smart&format=pjpg&auto=webp&s=694cac05778b172a281a8211be015452b0a3e5d3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF.png?width=320&crop=smart&format=pjpg&auto=webp&s=9780d41e570e05c326311817b3fdc705c0f3bd90', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF.png?width=640&crop=smart&format=pjpg&auto=webp&s=ee22233f32f55d879f43f77fa64a703dcd7670bb', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF.png?width=960&crop=smart&format=pjpg&auto=webp&s=8fa0764df9a6dfde49c51abb9b4ad0fc79a6797d', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c85264cef2167be88d1d63be47c04170e851050c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cnRudXV2eTlrNmdlMUZ6_c2wymIEYJ22WXNpRp_Zk67pbk1_oRsEBf7ZbneF.png?format=pjpg&auto=webp&s=d8bc2df52a88ec325e4c15ccef8a617b6bb0fbdd', 'width': 1080}, 'variants': {}}]} |
|
Kimi k1.5: Scaling Reinforcement Learning with LLMs --- an o1-level multi-modal model | 14 | 2025-01-30T19:13:37 | https://github.com/MoonshotAI/Kimi-k1.5 | boxingdog | github.com | 1970-01-01T00:00:00 | 0 | {} | 1iduub3 | false | null | t3_1iduub3 | /r/LocalLLaMA/comments/1iduub3/kimi_k15_scaling_reinforcement_learning_with_llms/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'IlsYP_u7bq1wmsiDoFTEn-3lcZtxLGvWgUPvu3bVnIA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3dvclJU73kiOVyoAbBNFISjYaAXaMZp1TTT-q-G-U14.jpg?width=108&crop=smart&auto=webp&s=b4b87024edfce03ad6285fefa82054f3e86c58b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3dvclJU73kiOVyoAbBNFISjYaAXaMZp1TTT-q-G-U14.jpg?width=216&crop=smart&auto=webp&s=17bbdd3eb266208f136ae0adc2bb0440ef542bc7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3dvclJU73kiOVyoAbBNFISjYaAXaMZp1TTT-q-G-U14.jpg?width=320&crop=smart&auto=webp&s=f7be500dcf181c6d58c5834771431298c504e320', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3dvclJU73kiOVyoAbBNFISjYaAXaMZp1TTT-q-G-U14.jpg?width=640&crop=smart&auto=webp&s=ab0c10e23df522cc4617f8da26234fd01e64856d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3dvclJU73kiOVyoAbBNFISjYaAXaMZp1TTT-q-G-U14.jpg?width=960&crop=smart&auto=webp&s=67474b37c2c67a0c5c1699a93918f69648c46a86', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3dvclJU73kiOVyoAbBNFISjYaAXaMZp1TTT-q-G-U14.jpg?width=1080&crop=smart&auto=webp&s=fd737f169d9da20f361f19c3cc4b1d48a31948b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3dvclJU73kiOVyoAbBNFISjYaAXaMZp1TTT-q-G-U14.jpg?auto=webp&s=6c01934a25507b39ddaf26c3af4e0d1cda71d7c5', 'width': 1200}, 'variants': {}}]} |
||
How do I search for models I can run on my hardware? | 1 | Hi, I'm really new to anything "ai" related and I've taken an interest both to LLMs and image generation models.
I intend to use LLMs to summarize large pdfs, help me with coding, do some creative writing, and mess around with the chat function. And the image generation models to make some illustrations for a home game of dnd.
I'm currently waiting to get a good deal on a cpu and a gpu for a proper pc, so I only have access to a dell inspiron 14 7400 laptop to start experimenting.
It has with the following components:
Intel Core i7-1165G7
GeForce MX350
16gb of lpddr4 ram
512gb ssd
As far as I understand the main bottleneck for token generation is memory read speed and the way we optimize for that is using the vram of a gpu since it is the fastest type of memory you can find in significant quantities in modern computers (this isn't really an option with the mx350 lol), otherwise we use conventional ram which slows things down a lot (but seems to be my best option right now).
If I'm not mistaken, the amount of vram/ram is so important because the models need to fit fully within them, meaning that the amount of memory you have available determines how much data the model can pull from to generate responses.
So I have two questions:
How do I actually calculate how big a model can be to fit and function within my system?
With this tight hardware limitations, is my best bet to get various specialized small models for each of my applications?
If you have any resource I can use to understand how these models work or any model recommendations, I'd be very interested. | 2025-01-30T19:17:21 | https://www.reddit.com/r/LocalLLaMA/comments/1iduxk5/how_do_i_search_for_models_i_can_run_on_my/ | Think_Question_6677 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iduxk5 | false | null | t3_1iduxk5 | /r/LocalLLaMA/comments/1iduxk5/how_do_i_search_for_models_i_can_run_on_my/ | false | false | self | 1 | null |
JSON output from Deepseek R1 and distills with llamacpp's server | 6 | 2025-01-30T19:17:31 | https://toao.com/blog/json-output-from-deepseek-r1-and-distills-with-llamacpp | sadiq_ml | toao.com | 1970-01-01T00:00:00 | 0 | {} | 1iduxoq | false | null | t3_1iduxoq | /r/LocalLLaMA/comments/1iduxoq/json_output_from_deepseek_r1_and_distills_with/ | false | false | default | 6 | null |
|
New to LocalLLM - is it normal for 32b / 8b models to forget stuff so easily? | 0 | Like many people I was interested in Deepseek and decided to play around hosting it on my PC which has 32gb ram and a 4090, the 32b version. This is using Ollama and ChatApp on my windows PC.
I have had success using Deepseek Web and ChatGPT (coding specific varients) for help with SQL tasks, by pasting in sample data ie the top 10 rows from the various tables I was using in a query, then describing what I needed.
Attempting to do this with either Deepseek R1 32b or Llama 8b has not worked well. If I paste in say, 5 tables, with top 10 rows for each one, then it denies existance of all but the most recent table. This seems to happen whether or not I paste in over 5 prompts or all in one go (all in one go is about a 250 line paste).
Am I missing something obvious or is it just this limited with LocalLLM? Is there a setting or something I need to change?
Thanks for any help :) | 2025-01-30T19:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/1iduy3d/new_to_localllm_is_it_normal_for_32b_8b_models_to/ | buffa0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iduy3d | false | null | t3_1iduy3d | /r/LocalLLaMA/comments/1iduy3d/new_to_localllm_is_it_normal_for_32b_8b_models_to/ | false | false | self | 0 | null |
Which model is the best that I could use for coding and fits in my 24gb vram? and which one would you suggest if I run it on my cpu 64gb Ram? | 1 | [removed] | 2025-01-30T19:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1idv5ft/which_model_is_the_best_that_i_could_use_for/ | dreamer_2142 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idv5ft | false | null | t3_1idv5ft | /r/LocalLLaMA/comments/1idv5ft/which_model_is_the_best_that_i_could_use_for/ | false | false | self | 1 | null |
Marc Andreessen on Anthropic CEO's Call for Export Controls on China | 1,156 | 2025-01-30T19:29:13 | AloneCoffee4538 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idv7yb | false | null | t3_1idv7yb | /r/LocalLLaMA/comments/1idv7yb/marc_andreessen_on_anthropic_ceos_call_for_export/ | false | false | 1,156 | {'enabled': True, 'images': [{'id': 'RYIO9Tg1sMSksqKlYfj_tGXrNoPrKuRBFhimDPdZwdQ', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/wlsi25dcn6ge1.png?width=108&crop=smart&auto=webp&s=a66353182c7eec5a211d361a63344800f8431190', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/wlsi25dcn6ge1.png?width=216&crop=smart&auto=webp&s=f6d18406a8d3c96cacd63ca46a2f014ecc8dad0a', 'width': 216}, {'height': 295, 'url': 'https://preview.redd.it/wlsi25dcn6ge1.png?width=320&crop=smart&auto=webp&s=077b63b0fa35b2af5db06dd49c78e36a647b7de1', 'width': 320}, {'height': 590, 'url': 'https://preview.redd.it/wlsi25dcn6ge1.png?width=640&crop=smart&auto=webp&s=d695bb3258d357570ad11762d15df689f13fe2a8', 'width': 640}, {'height': 885, 'url': 'https://preview.redd.it/wlsi25dcn6ge1.png?width=960&crop=smart&auto=webp&s=c5d6d76d0cbc31f852d872da2d0928e42b9fcbd4', 'width': 960}, {'height': 996, 'url': 'https://preview.redd.it/wlsi25dcn6ge1.png?width=1080&crop=smart&auto=webp&s=63f3c235858a342af79fb53d297beb23923f2cc5', 'width': 1080}], 'source': {'height': 996, 'url': 'https://preview.redd.it/wlsi25dcn6ge1.png?auto=webp&s=7d2a54574e94fe61698bf388d899a081f913a198', 'width': 1080}, 'variants': {}}]} |
|||
Welcome back, Le Mistral! | 513 | 2025-01-30T19:31:40 | Amgadoz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idva1j | false | null | t3_1idva1j | /r/LocalLLaMA/comments/1idva1j/welcome_back_le_mistral/ | false | false | 513 | {'enabled': True, 'images': [{'id': 'Ma4IWVnpv3sVypjqDT4oPDMau5B6l8TnA4wwWRQB3kw', 'resolutions': [{'height': 177, 'url': 'https://preview.redd.it/4td7dsrjn6ge1.png?width=108&crop=smart&auto=webp&s=977e8a4607e00ac2c19469093c5d6ac565513b9e', 'width': 108}, {'height': 355, 'url': 'https://preview.redd.it/4td7dsrjn6ge1.png?width=216&crop=smart&auto=webp&s=0fd59e31dfad2604b0e5d79bf2b8ff4a59398c13', 'width': 216}, {'height': 526, 'url': 'https://preview.redd.it/4td7dsrjn6ge1.png?width=320&crop=smart&auto=webp&s=bcee379bd06ff66ce0c2532f18c365ea37c8d6d1', 'width': 320}], 'source': {'height': 822, 'url': 'https://preview.redd.it/4td7dsrjn6ge1.png?auto=webp&s=273f833750e7c9f5c05c76126aad22642fa72946', 'width': 500}, 'variants': {}}]} |
Subsets and Splits