[Dataset header: r/LocalLLaMA posts with fields title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.]
A code generator, a code executor and a file manager is all you need to build agents | 64 points | Better_Athlete_JJ | 2025-01-20
Link: https://www.slashml.com/blog/testing-autogen
HuggingChat is pretty decent, normally, but I think I broke it; what in tarnation is this? | 1 point | Jattoe | 2025-01-20
[removed]
What in tarnation happened here, short context window, local (though remote) LLM | 1 point | Jattoe | 2025-01-20
[removed]
3090 TI? | 1 point | johngo233 | 2025-01-20
I found an MSI Gaming Force Trio RTX 3090 Ti (24GB) listed for a decent price. I'm just getting into running local LLMs, and my use would primarily be for Home Assistant. I also plan on fine-tuning encoder-only models (e.g., BERT, ModernBERT), which I think this would be more than enough for. I can't find a lot on the "MSI Gaming Force Trio" version; is my assumption correct that it's the same as a regular 3090 Ti? Happy to hear thoughts on whether it's even worth purchasing.
Advice on Running Local LLMs for Coding | 7 points | Apprehensive_Ad_5565 | 2025-01-20
Hey everyone,
I'm planning to buy one of the new NVIDIA GPUs to run local LLMs for coding, mainly for data engineering tasks. Before I make the purchase, I want to make sure I have a solid plan for the setup and apps I'll be using.
I’m looking to run everything on Windows and have tested LM Studio and MSTY so far. My goal is to use my local GitHub repositories as a RAG source to improve context-aware responses.
One thing I noticed is that MSTY’s RAG feature seems more focused on documents, and I’m not sure if it can properly process the file types used in a code repository. Has anyone tried this, or is there a better approach for integrating a local codebase into an LLM workflow?
Would appreciate any insights or recommendations!
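One way to work around document-focused RAG tools is to index the repo yourself. A minimal sketch of that idea follows; the repo path, file extensions, and embedding model name are illustrative assumptions, and sentence-transformers is assumed to be installed:

```python
# Minimal sketch: embed files from a local repo and retrieve the most
# relevant chunks for a coding question. The repo path, extensions, and
# model name below are illustrative assumptions.
from pathlib import Path

from sentence_transformers import SentenceTransformer, util

REPO = Path("C:/repos/my-data-pipeline")  # hypothetical repo location
EXTENSIONS = {".py", ".sql", ".md"}       # file types a doc-focused tool may skip

def chunks(text, size=1500, overlap=200):
    """Split a file into overlapping character windows."""
    for start in range(0, len(text), size - overlap):
        yield text[start:start + size]

model = SentenceTransformer("all-MiniLM-L6-v2")

# Collect and embed code chunks, remembering which file each came from.
docs = []
for path in REPO.rglob("*"):
    if path.suffix in EXTENSIONS:
        for chunk in chunks(path.read_text(errors="ignore")):
            docs.append((str(path), chunk))

embeddings = model.encode([c for _, c in docs], convert_to_tensor=True)

# Retrieve context for a question, then paste the hits into the LLM prompt.
query = "Where do we deduplicate incoming records?"
hits = util.semantic_search(model.encode(query, convert_to_tensor=True),
                            embeddings, top_k=5)[0]
for hit in hits:
    path, chunk = docs[hit["corpus_id"]]
    print(f"--- {path} (score {hit['score']:.2f}) ---")
    print(chunk[:200])
```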
Working on a local TTS compatible PDF Reader (to use with Kokoro-82M) | 66 points | richardr1126 | 2025-01-20
Link: https://v.redd.it/gax0ckuz82ee1 (video)
What are open source models in general? What makes Deepseek open source? | 1 point | Existing-Pay7076 | 2025-01-20
[removed]
B580 12GB or 3060 12GB | 1 point | Curious_Savings_5143 | 2025-01-20
[removed]
Most complex coding you've done with AI | 83 points | val_in_tech | 2025-01-20
I find AI super helpful in coding: Sonnet, o1-mini, DeepSeek V3, Llama 405B, in that order, or Qwen 32B/14B locally. I generally use it every day when coding.
It shines at 0-to-1 tasks, translation, and some troubleshooting. E.g., write an app that does this, do this in Rust, make this code TypeScript, or ask what causes this error. I haven't had a great experience so far once a project is established and has some form of internal framework, which always happens beyond a certain size.
I asked all models to split 200 lines of React audio code into a class with the logic and a React component with the rest: most picked the correct structure, but the implementations missed some unique aspects and started looking like any open-source implementation on GitHub. o1 did best; none were working. So it wasn't a fit for even "low"-complexity refactoring of a small piece of code.
Share your experiences. What were the most complex tasks you were able to solve with AI? Some context, like codebase size and model, would be useful.
Syntax will die: Abstract - A syntax-free programming language for the LLM age | 1 point | yovboy | 2025-01-20
[removed]
Best open source model for 6 x P40 Pascal GPUs, and which inference engine? | 1 point | Both-Indication5062 | 2025-01-20
[removed]
Gtx 3070 | 3 points | rhaastt-ai | 2025-01-20
That's what I got. I just want an LLM I can vent to that will somewhat remember a longer conversation. I have 64GB of RAM and an i9 11th gen. What's the best bang-for-my-buck LLM that's fairly good and doesn't have a horribly small context?
DeepSeek V3 output getting worse? | 5 points | NEEDMOREVRAM | 2025-01-20
I have been using DeepSeek V3 since it was first released. Over the past ~2 weeks I have noticed a SHARP decline in its intelligence. This is for professional business writing. It flat out refuses to adhere to simple grammar commands, such as: rewrite this sentence so that it does not contain a dependent clause.
I first thought it was the API I was using (and stock OpenWebUI settings), but I have run the same prompt on https://chat.deepseek.com/ and it's as bad as the API.
I'm not new to prompting and have a pretty good idea of what I'm doing. Wondering if this is all in my head, because it was literally perfect for the first few weeks.
Deepseek R1 / R1 Zero | 383 points | Different_Fix_2217 | 2025-01-20
Link: https://huggingface.co/deepseek-ai/DeepSeek-R1
Deepseek-R1 and Deepseek-R1-Zero repos are preparing to launch? | 53 points | shing3232 | 2025-01-20
https://huggingface.co/deepseek-ai/DeepSeek-R1
https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero
I am waiting for this. Hopefully today.
Best format for structured output for smaller LLMs? XML/JSON or something else? | 1 point | Traditional-Gap-3313 | 2025-01-20
Hi everyone!
My main model is Sonnet, and now I'm testing out DeepSeek V3 for my use cases. I've somehow come to love Sonnet's XML preference for anything structured. In my tests, DeepSeek V3 follows XML instructions perfectly, and GPT-4o also follows XML structure perfectly, every time.
However, Sonnet struggles with JSON. In my tests it would often forget to close a quote, miss a colon, miss a comma between parameters, and similar. GPT-4o didn't have that problem.
Does anyone have insights into using XML for prompt structuring with smaller/open models? Which format for structured output do smaller models prefer?
Even for simple delineation of input documents, Alpaca and similar datasets that smaller models are trained on generally don't use XML to delineate inputs. For example, from the RankZephyr paper ([https://arxiv.org/abs/2312.02724](https://arxiv.org/abs/2312.02724)):
<|system|>
You are RankLLM, an intelligent assistant that can rank passages based on their relevancy to the query.
<|user|>
I will provide you with {num} passages, each indicated by a numerical identifier [].
Rank the passages based on their relevance to the search query: {query}.
[1] {passage 1}
[2] {passage 2}
...
[{num}] {passage {num}}
Search Query: {query}.
Rank the {num} passages above based on their relevance to the search query. All the passages should be included and listed using identifiers, in descending order of relevance. The output format should be [] > [], e.g., [4] > [2]. Only respond with the ranking results, do not say any word or explain.
<|assistant|>
I would always default to XML for such a prompt.
In my tests I've generally gotten poor results from smaller models, but I'm not sure whether the problem is my prompting and over-reliance on XML, or the domain (legal texts in a low-resource language).
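For reference, a small sketch contrasting the two delimiting styles on the same inputs; the tag and key names are illustrative assumptions, not a documented training format for any particular model:

```python
# Sketch: the same inputs delimited as XML tags vs. a JSON object.
# Tag and key names are illustrative assumptions.
import json

query = "termination clauses"
passages = ["The court held that ...", "Article 5 states ..."]

# XML style: every field wrapped in explicit open/close tags.
xml_prompt = (
    f"<query>{query}</query>\n"
    + "\n".join(
        f'<passage id="{i}">{p}</passage>' for i, p in enumerate(passages, 1)
    )
)

# JSON style: one serialized object; a model answering in kind has to
# keep quotes, colons, and commas balanced itself.
json_prompt = json.dumps({"query": query, "passages": passages}, indent=2)

print(xml_prompt)
print(json_prompt)
```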
TextWebUI is slower than Llama.cpp with the same backend | 4 points | Educational_Gap5867 | 2025-01-20
Hey all,
So I finally got my 2x3090 system to work. I know I am late and people are probably already building their 6090 setups in their basements where they are keeping Jensen hostage.
BUT
I have an issue, and for the life of me I can't figure it out: whenever I run from textwebui I get this warning:
```plaintext
llm_load_tensors: tensor 'token_embd.weight' (q4_K) (and 321 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
```
When I run from textwebui my speeds are around ~15 t/s, BUT I can run with a 100K context window.
When I run from llama.cpp I can only do a 60K context window (I really only tested up to 50K, but there were a couple of GBs of total VRAM still left), and my speeds are around 37 t/s.
What exactly might I be doing wrong?
For the life of me I can't figure out why, but textwebui ALWAYS (and I mean ALWAYS; I have tried all the different combinations above) keeps allocating about 417MB to CPU (did I mention always?).
Pls help.
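For a like-for-like comparison outside both frontends, a minimal sketch using the llama-cpp-python bindings with the offload settings made explicit; the model path and context size are placeholders:

```python
# Sketch: load the same GGUF through llama-cpp-python with explicit
# offload settings, as a neutral baseline for comparing the two frontends.
# The model path and context size below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_K.gguf",  # placeholder path to the same quant
    n_gpu_layers=-1,               # request offload of every layer
    n_ctx=65536,                   # context length drives KV-cache VRAM use
    verbose=True,                  # print the same llm_load_tensors log lines
)

out = llm("Q: Name three uses for two 3090s.\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```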
What’s the closest model to Claude 3.5 Sonnet right now? | 11 points | dhamaniasad | 2025-01-20
Hey guys,
I’m looking for an open model that’s as close to Claude 3.5 Sonnet as possible, in terms of a few things:
- intuitive understanding - Claude is capable of understanding intent from vague prompts; you can tell it what you want as an end result and it can figure out the rest, but other models in my experience require more prescriptive instructions
- personality - Claude is friendly and kind; it feels like talking to a person. ChatGPT in comparison feels like a robot, and Gemini is just super aggressive and arrogant
- long context understanding - Claude can follow instructions over many turns as they evolve. Other models seem to forget older instructions more quickly as new instructions are added
- coding - Claude is the best coding model, even better than o1 Pro for many tasks, but I can compromise on coding ability if other things are present
I've tried DeepSeek V3 but did not feel it was similar. I've heard good things about Qwen but haven't spent enough time using it to judge.
So what open model is closest to Claude in these areas? Would love to hear your experiences.
Hope you guys downloaded Deepseek-R1 in time :( | 2 points | Many_SuchCases | 2025-01-20
[image post]
Random Thought - Pinocchio's Nose Type LLM? | 0 points | YT_Brian | 2025-01-20
What I mean is his nose: when Pinocchio lies, we can see his nose grow as proof. My thought was to have LLMs/AI be programmed to do essentially the same thing, and trained never to suppress that tell; it would show itself without human prompting and persist for a set period of time that not even human prompting can remove.
Or is that a type of thing already? Sorry if this doesn't make a lot of sense; it's about 3am here with rambling thoughts.
I made DocEase, a tool that automates the task of text formatting | 2 points | anitamaxwynnn69 | 2025-01-20
Hey everyone!
Ever been stuck formatting a 50-page assignment? I have. For my time series analysis class, my professor assigned massive weekly projects that required perfect headers, bold text, italics, and well-aligned tables and equations. It was exhausting—I spent more time formatting than actually writing the content.
That’s when I decided to do something about it. I built a simple tool using Gradio and shared it with friends. Their feedback was amazing! Fast forward two months, that tool evolved into **DocEase**, a Chrome extension that takes the hassle out of formatting.
# Why Choose DocEase?
* **Effortless Formatting:** DocEase processes your raw text and turns it into a clean, professional format. It handles everything from math equations to tables and even code blocks. Plus, it’s ready to paste directly into Google Docs, Microsoft Word, or any similar text editor for final tweaks.
* **Preserves Your Words:** Unlike some AI tools like ChatGPT that might rewrite or skip sections, DocEase keeps your text intact. It only makes minor adjustments, like fixing punctuation, extra line breaks, or spacing, while staying true to your original content.
* **Customizable Formatting:** Whether you’re working on a formal paper and want to disable bold and italics, or you need a specific style for a creative project, DocEase gives you the flexibility to adjust settings to fit your needs. It’s designed to handle everything from formal assignments to unique presentations.
If you’ve ever thought, “Why isn’t there a tool to make formatting easier?” DocEase is here to save the day. Try it out and let me know what you think!
By the way, this post was cleaned up with DocEase. If it saved me time, imagine what it could do for you. [You can find it here.](https://chromewebstore.google.com/detail/docease/fjegphlfkdoeoonmelfkmgifnplkjocf) 😊
Any new home inference HW we should be looking forward to in 2025? | 7 points | ElectroSpore | 2025-01-20
Nvidia seems to be king, but the cards are expensive and have little VRAM. DIGITS, coming in 2025, looks interesting but is also expensive.
Apple releases a new Studio? More high-speed RAM than the mini?
AMD NPU acceleration makes its way into the [Linux 6.14 kernel](https://www.phoronix.com/news/AMD-NPU-Firmware-Upstream)? Maybe some of the AMD mini PCs start pulling reasonable numbers?
Intel also gets some NPU improvements in [Linux 6.14](https://www.phoronix.com/news/Intel-NPU-Linux-6.14-IVPU)?
SOMEONE releases a big VRAM / unified memory solution that is as good as or better than a Mac?
Resources for Reasoning LLMs | 4 points | visionkhawar512 | 2025-01-20
I would like to learn about reasoning LLMs. If anyone knows good resources, please tell me; I want to learn from the basics so that I can understand properly.
What LLM/Program are you using for web browsing? | 3 points | AmericanKamikaze | 2025-01-20
As search engine algorithms become more controlled, edited, and less informative, I'm wondering if using an LLM copilot will help me see more of the raw internet and less of whatever articles are pushed in front of my face on a given day. What are you using to customize your browsing experience?
In OpenRouter, DeepSeek V3 says it was created by OpenAI and calls itself ChatGPT when asked "What is your model name?" | 1 point | MatrixEternal | 2025-01-20
[removed]
My post getting removed? | 1 point | MatrixEternal | 2025-01-20
[removed]
What's the best AI for code autocompletion for mid-range computers (<7B)? | 1 point | No-Bid-2955 | 2025-01-20
[removed]
In OpenRouter, DeepSeek V3 says it was created by OpenAI and calls itself ChatGPT when asked "What is your model name?" | 1 point | Super-Designer4622 | 2025-01-20
[removed]
The Future of Local AI Inference | 8 points | DeltaSqueezer | 2025-01-20
I remember when running Mixtral was seen as a challenge and a 'huge' model. Now it seems quaint in comparison to the 400B and 600B monsters we are getting.
With the release of R1, we have yet another giant model. I'm wondering: is the future of local AI going to be these large models, and if so, how will hobbyists best be able to run them?
In OpenRouter, DeepSeek V3 says it was created by OpenAI and calls itself ChatGPT when asked "What is your model name?" | 1 point | SeekerOfExlir | 2025-01-20
[removed]
Best open source LLM better than gpt-4o-mini but cheaper than 4o? | 12 points | PMMEYOURSMIL3 | 2025-01-20
I mean on OpenRouter. I know this is "Local"LLaMA, but you guys are the most familiar with open source LLMs!
let's goo, DeepSeek-R1 685 billion parameters! | 169 points | bymechul | 2025-01-20
Am I banned? | 1 point | SeekerOfExlir | 2025-01-20
[removed]
RAG - How do I evaluate my retrieval models? | 1 point | PinballOscuro | 2025-01-20
Hello everyone,
I'm currently training **Transformers** and **Sentence Transformers** for information retrieval tasks. I'm struggling with how to properly evaluate a Sentence Transformer.
My dataset contains question-answer pairs, providing only positive examples. I've split this data into training and test sets. During training, I monitor the loss and metrics like **recall@k** and **precision@k**. I use the `InformationRetrievalEvaluator` from the Sentence Transformers library to calculate these metrics. For this, I use the training questions as input and retrieve answers from a corpus of training answers.
I'm unsure how to proceed with the test evaluation. Specifically, I'm unclear about which data should be used as queries and which as the corpus. Should I use:
* **All questions from the original dataset as queries, or only the test set questions?**
* **All answers as the corpus, or only the answers in the test set?**
My inclination is to use only the test set questions as queries, but retain all answers in the corpus. I believe this simulates a more realistic retrieval scenario, similar to what I'd encounter in production with a large corpus.
What do you think?
Thank you!
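A minimal sketch of the setup the post leans toward (test questions as queries, all answers as the corpus), using the Sentence Transformers `InformationRetrievalEvaluator`; the toy data and model name are placeholders:

```python
# Sketch: evaluate a Sentence Transformer with held-out questions as
# queries and ALL answers (train + test) as the searchable corpus,
# mimicking a production-scale search space. Data/model are placeholders.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model

train_answers = ["Answer about clause A.", "Answer about clause B."]
test_pairs = [("What does clause C say?", "Answer about clause C.")]

# Corpus holds every answer, not just the test split.
corpus = {f"train-{i}": a for i, a in enumerate(train_answers)}
corpus.update({f"test-{i}": a for i, (_, a) in enumerate(test_pairs)})

# Only held-out questions are used as queries, each relevant to its pair.
queries = {f"q-{i}": q for i, (q, _) in enumerate(test_pairs)}
relevant_docs = {f"q-{i}": {f"test-{i}"} for i in range(len(test_pairs))}

evaluator = InformationRetrievalEvaluator(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs, name="test-ir"
)
print(evaluator(model))  # reports recall@k, precision@k, MRR, NDCG, MAP
```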
Hugging Face will teach you how to use Langchain for agents. 🤯 | 1 point | Zealousideal-Cut590 | 2025-01-20
[removed]
'I Have Adjusted the Code': Past Action or Predicted Outcome? | 0 points | IrisColt | 2025-01-20
One quick question: when I ask an LLM to modify my code and it starts with "I have adjusted the code to…" as the text gradually appears, it sounds like the change has already happened, but I'm not sure it has. Is the LLM claiming to have made the adjustment, or is it just predicting what will happen? Does it imply real action, or is it merely 'faking it' by textually anticipating a change that is probably going to happen? Is that why LLMs typically explain the changes after posting the code, instead of before? Thanks in advance!
DeepSeek-R1-Distill Models | 1 point | Cyanic23 | 2025-01-20
[image post]
LM Studio model won't load | 6 points | Itz_Wallace | 2025-01-20
I'm using an M4 MacBook Pro. I've downloaded LM Studio and a model, Llama 3.2. When I click on "load model" and select my model, nothing happens at all. I'm at a loss on what to do; any help would be amazing.
Reminder: You can enable GPU Acceleration in Jan to get better performance thanks to llama.cpp! (Settings -> Advanced Settings -> GPU Acceleration) | 4 points | emreckartal | 2025-01-20
Link: https://x.com/jandotai/status/1881248354384150951
Any way I can make an LLM navigate a single humongous piece of text? | 1 point | DeviantPlayeer | 2025-01-20
I want to load a disassembly listing, which is often many megabytes, and then ask questions like "Find and analyze a subroutine that handles user input." It should then find that subroutine and all the other pieces of code it refers to, which are scattered across the file.
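One rough approach is to index the listing by subroutine label and retrieve a routine together with its callees before prompting. A sketch, assuming IDA-style `sub_XXXX` labels; the regexes are assumptions about the disassembler's output format:

```python
# Sketch: index a disassembly by subroutine label, then gather a routine
# together with the bodies of routines it calls before handing the text
# to an LLM. Assumes IDA-style "sub_XXXX" names.
import re

def index_subroutines(listing: str) -> dict[str, str]:
    """Map each 'sub_XXXX:' label to the text up to the next label."""
    parts = re.split(r"^(sub_[0-9A-Fa-f]+):", listing, flags=re.M)
    return {parts[i]: parts[i + 1] for i in range(1, len(parts) - 1, 2)}

def gather(name: str, subs: dict[str, str], depth: int = 1) -> str:
    """Return a routine plus, recursively, the routines it calls."""
    body = subs.get(name, "")
    context = [f"{name}:\n{body}"]
    if depth > 0:
        for callee in set(re.findall(r"call\s+(sub_[0-9A-Fa-f]+)", body)):
            context.append(gather(callee, subs, depth - 1))
    return "\n\n".join(context)

# Usage (hypothetical file and entry point):
# subs = index_subroutines(open("listing.asm").read())
# prompt = "Analyze this input handler:\n" + gather("sub_401000", subs)
```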
Reliable method of assessing language performance of an LLM | 1 point | theboldestgaze | 2025-01-20
Hi, how do I test the language proficiency of an LLM, especially for a language I can't speak myself? I need to assess German, French, Spanish, Italian, and Portuguese. Are there any specific benchmarks or "smart ways to do it"? Thank you.
Is 6GB enough for 7B-ish models? | 0 points | CryptographerSea5595 | 2025-01-20
Hello. I want to buy a new laptop, but my budget is low, so I'm looking at a model with a 4050. I would like to get a 4060, but the price difference is nearly half the minimum wage in my country. Would this be enough for some coding and chatting?
Hugging Face will teach you how to use Langchain for agents | 1 point | Zealousideal-Cut590 | 2025-01-20
[removed]
Hugging Face will teach you how to use Langchain for agents | 6 points | Zealousideal-Cut590 | 2025-01-20
Hugging Face is adding LangChain to the agent course, so you'll get an agnostic outsider's take on using the library. The course will also include other libraries, such as LlamaIndex and smolagents.
Sign up here: https://huggingface.co/posts/burtenshaw/334573649974058
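For a sense of what the smolagents part of such a course covers, a minimal agent sketch based on the library's published quickstart; the tool choice and task here are illustrative:

```python
# Minimal smolagents sketch following the library's quickstart.
# Requires `pip install smolagents`; the task string is illustrative.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # one web-search tool
    model=HfApiModel(),              # default Hugging Face Inference API model
)

print(agent.run("In one paragraph, what is the LangChain library used for?"))
```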
Man and a goat and a boat (Qwen 2.5 32b and Falcon3 7b solved it right away, Gemini 1206 failed) | 1 point | AppearanceHeavy6724 | 2025-01-20
Link: https://old.reddit.com/r/singularity/comments/1i59sy1/well_done_claude_35_sonnet_gpt4o_and_gemini_20/m854kec/
Check your scholar stats | 0 points | yoonjeewoo | 2025-01-20
Link: https://scholar-stats.info
Best tools on a medium end laptop? | 2 points | Timely-Cabinet-7879 | 2025-01-20
Hello. I just want to try AI and learn some stuff.
Right now I'm on a low end computer, so I just use koboldcpp and SillyTavern.
I have a more powerful computer at home that I would try larger models on. It runs Pop!_OS.
I DON'T want accounts or anything like that, so OpenWebUI is out of the question.
What tools should I use?
Thanks.
Finetuning Llama 3.1 8B to talk like Rick Sanchez (Unsloth Project) | 13 points | Hefty-Consequence443 | 2025-01-20
Link: https://github.com/neural-maze/rick-llm
Single vs multiple agents | 1 point | Former_Trouble_4428 | 2025-01-20
[removed]
Single vs Multiple Agents for complex problems | 1 point | Former_Trouble_4428 | 2025-01-20
[removed]
LLM Finetuning | 1 point | Individual_Gur_4055 | 2025-01-20
[removed]
Single vs Multiple Agents | 1 point | Former_Trouble_4428 | 2025-01-20
[removed]
"DeepSeek R1 Distilled" Open Source Models | 16 | like Qwen and Llama
https://preview.redd.it/th0jtj8a35ee1.png?width=2258&format=png&auto=webp&s=651d58f54dcd33bccae5b2053db003e1fd2eb4c5
[https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B)
[https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 2025-01-20T12:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i5oquo/deepseek_r1_distilled_open_source_models/ | GuessJust7842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5oquo | false | null | t3_1i5oquo | /r/LocalLLaMA/comments/1i5oquo/deepseek_r1_distilled_open_source_models/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'BYMBj2nqR5Zo-Hq436kOsf0OyHC4h43Oge28p7z8xZw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_atc5Wper5qoTlKRLhG_b9IdHbfvAzDOL9GdRqfNNpk.jpg?width=108&crop=smart&auto=webp&s=2e6b62d42bd0b0d3f0dbd61c770ecd964d87aedd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_atc5Wper5qoTlKRLhG_b9IdHbfvAzDOL9GdRqfNNpk.jpg?width=216&crop=smart&auto=webp&s=f812da3156317c52b8719b956789a8e827d4eb8f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_atc5Wper5qoTlKRLhG_b9IdHbfvAzDOL9GdRqfNNpk.jpg?width=320&crop=smart&auto=webp&s=3fbd5c57ad165c7e556ec16cd4e041f6f4fcc4b9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_atc5Wper5qoTlKRLhG_b9IdHbfvAzDOL9GdRqfNNpk.jpg?width=640&crop=smart&auto=webp&s=cb0be2c39108ae40a2fceffcb31c8521a0e79a4a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_atc5Wper5qoTlKRLhG_b9IdHbfvAzDOL9GdRqfNNpk.jpg?width=960&crop=smart&auto=webp&s=7d6eb68ece1381ed760e95620bc0e47c9d377c6c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_atc5Wper5qoTlKRLhG_b9IdHbfvAzDOL9GdRqfNNpk.jpg?width=1080&crop=smart&auto=webp&s=9e6bbb379cd47eecaa25e8ec4213d1f436217dc0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_atc5Wper5qoTlKRLhG_b9IdHbfvAzDOL9GdRqfNNpk.jpg?auto=webp&s=a1d54efe04f9307400a4dc81e9d0250892eed8be', 'width': 1200}, 'variants': {}}]} |
|
Deepseek just uploaded 6 distilled versions of R1 + R1 "full" now available on their website. | 1,189 points | kristaller486 | 2025-01-20
Link: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B
DeepSeek update: 6 distill models | 24 points | Attorney_Putrid | 2025-01-20
https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
and so on.
DeepSeek test review | 87 points | Born-Shopping-1876 | 2025-01-20
I've been testing the new full R1 model. I gave it the research paper on the Titans architecture from Google Research and asked it to write a small description in JSON format, then to learn the architecture, build it in TensorFlow, and train it on the JSON text.
I got correct code after two rounds of error fixes, and the model works great.
Thoughts?
Aha Moment of DeepSeek-R1-Zero | 1 point | LastCheck2725 | 2025-01-20
[removed]
DeepSeek R1 has been officially released! | 270 | The complete technical report has been made publicly available on GitHub.
[https://github.com/deepseek-ai/DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1)
https://preview.redd.it/azdqrrul75ee1.png?width=4702&format=png&auto=webp&s=d482d9acc77fb5e7a98eeb3a6dedcffb43a145d6
| 2025-01-20T12:32:02 | https://www.reddit.com/r/LocalLLaMA/comments/1i5p549/deepseek_r1_has_been_officially_released/ | luckbossx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5p549 | false | null | t3_1i5p549 | /r/LocalLLaMA/comments/1i5p549/deepseek_r1_has_been_officially_released/ | false | false | 270 | {'enabled': False, 'images': [{'id': 'kvQ9oVbCD9vvZU6ZrnMCckPDhqfHRKpSjFE6OrbgKwU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=108&crop=smart&auto=webp&s=4f9b20282905484952cb4134ada601721c91c80a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=216&crop=smart&auto=webp&s=7480fafcca0a3fc6b3ce2df581f7e163a5ff329a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=320&crop=smart&auto=webp&s=80cc079b02200b02734fe3bcfc494c94e1f24282', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=640&crop=smart&auto=webp&s=2abe26f7a3e1642bf9cb27ef184bd854f2d59cf2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=960&crop=smart&auto=webp&s=84b1d3c4f3e4fcddc5716d6f33be6b80b0ab8e0a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=1080&crop=smart&auto=webp&s=2016af13e6e4498bf6796700b3274262b5166bc0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?auto=webp&s=15d6b1c61353eebab849a32ebbfd532e04c652ea', 'width': 1200}, 'variants': {}}]} |
Demo: metadata generation for RAG | 1 | Sharing my work for those here who have expressed an interest. For my data podcast this month, I tried generating metadata with an LLM to improve my retrieval-augmented generation results.
Here’s the repo: https://github.com/dsartori/process-briefings | 2025-01-20T12:33:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i5p609/demo_metadata_generation_for_rag/ | dsartori | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5p609 | false | null | t3_1i5p609 | /r/LocalLLaMA/comments/1i5p609/demo_metadata_generation_for_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Bdkz4QIn1EGS-uKQs0zGMS6WgpR3WQYkvsqtCQRoIis', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/74iliJALh0xpsxlyxrwBEwAUGX-TOJtheMUc38k2TuM.jpg?width=108&crop=smart&auto=webp&s=7ffc20fbdbba9571425d675cdc0d6dea760f1f30', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/74iliJALh0xpsxlyxrwBEwAUGX-TOJtheMUc38k2TuM.jpg?width=216&crop=smart&auto=webp&s=8a4e5f03a75c6d5e288ca744f3279aed70832357', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/74iliJALh0xpsxlyxrwBEwAUGX-TOJtheMUc38k2TuM.jpg?width=320&crop=smart&auto=webp&s=03e12015ba44fceb527f0096ad77059bf7637c72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/74iliJALh0xpsxlyxrwBEwAUGX-TOJtheMUc38k2TuM.jpg?width=640&crop=smart&auto=webp&s=bc56e31286297389bfb3a9d9808dfa0c4cb3f4e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/74iliJALh0xpsxlyxrwBEwAUGX-TOJtheMUc38k2TuM.jpg?width=960&crop=smart&auto=webp&s=1b4df3adb4dcb1ffebee234a379bda4f3babdbd0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/74iliJALh0xpsxlyxrwBEwAUGX-TOJtheMUc38k2TuM.jpg?width=1080&crop=smart&auto=webp&s=4db2c60b5b3e031ecda53896db82b0cf546c03b0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/74iliJALh0xpsxlyxrwBEwAUGX-TOJtheMUc38k2TuM.jpg?auto=webp&s=c652c5edd8c6c28c6c8f501cdc09135121ebadc0', 'width': 1200}, 'variants': {}}]} |
DeepSeek-R1-Distill-Llama-8B AND DeepSeek-R1-Distill-Qwen-7B and 14b | 47 | it's showtime, folks
[https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
[https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
[https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
| 2025-01-20T12:35:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i5p6vl/deepseekr1distillllama8b_and/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5p6vl | false | null | t3_1i5p6vl | /r/LocalLLaMA/comments/1i5p6vl/deepseekr1distillllama8b_and/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'VQF_sIGaDvXfyww3oYCZ9yDBf7W5Lfk-6PdMnWsbvHQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dyF6jEY2dfwOi6_qwxuTpvQkuj-359m6w-TEoAkR4IU.jpg?width=108&crop=smart&auto=webp&s=dac6c13244bc551d392869fae30f91621625f9bf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dyF6jEY2dfwOi6_qwxuTpvQkuj-359m6w-TEoAkR4IU.jpg?width=216&crop=smart&auto=webp&s=eff808afcd3d5e809d72556120ef914f11d67ff5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dyF6jEY2dfwOi6_qwxuTpvQkuj-359m6w-TEoAkR4IU.jpg?width=320&crop=smart&auto=webp&s=94b27e475a91bc3a23d8bad225657603d357c0b0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dyF6jEY2dfwOi6_qwxuTpvQkuj-359m6w-TEoAkR4IU.jpg?width=640&crop=smart&auto=webp&s=aa114484729f99d1d477ba66e243cb321c7fb53d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dyF6jEY2dfwOi6_qwxuTpvQkuj-359m6w-TEoAkR4IU.jpg?width=960&crop=smart&auto=webp&s=e8be4e794e7ad4bf7781ddd85b6bc3b1e4666385', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dyF6jEY2dfwOi6_qwxuTpvQkuj-359m6w-TEoAkR4IU.jpg?width=1080&crop=smart&auto=webp&s=9a8f68d0ee2f66d3d64ab1e5fb4f2d8698d5db27', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dyF6jEY2dfwOi6_qwxuTpvQkuj-359m6w-TEoAkR4IU.jpg?auto=webp&s=8f555f63091cbfac441e58d9efa9b4ece5942850', 'width': 1200}, 'variants': {}}]} |
NVIDIA DIGITS 10 years evolution | 20 | This isn't the first time that Nvidia has released DIGITS hardware. Back in 2015, they released the DIGITS DEVBOX. It was a $15k machine with 4 GeForce GTX TITAN X cards*.
That's 6.691 TFLOPS [FP32 (float)] and 12 GB VRAM per card. Or 26.764 TFLOPS and 48GB combined.
The new DIGITS will probably have around 30 TFLOPS in FP32 (float) and 128GB VRAM (and double up for two connected systems). For $3000.
*https://www.legitreviews.com/nvidia-digits-devbox-promotes-deep-learning-titanx_160343
**https://docs.nvidia.com/dgx/digits-devbox-user-guide/index.html | 2025-01-20T12:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i5p80h/nvidia_digits_10_years_evolution/ | Cane_P | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5p80h | false | null | t3_1i5p80h | /r/LocalLLaMA/comments/1i5p80h/nvidia_digits_10_years_evolution/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': '7mblpP7L8aaWB4HdWvmYgCTs2c0vku55Nqhq8sy2OBI', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/fmNA1vvKmlOb-Z1s5Xddb5AgJ_o-QsZY5nVOEBlXOuQ.jpg?width=108&crop=smart&auto=webp&s=7e0a5ba1c0d4fe70bd46630efe82fd7398344d99', 'width': 108}, {'height': 174, 'url': 'https://external-preview.redd.it/fmNA1vvKmlOb-Z1s5Xddb5AgJ_o-QsZY5nVOEBlXOuQ.jpg?width=216&crop=smart&auto=webp&s=179ead13dd84bcf711d88ceae47bcb3bbc9e0b45', 'width': 216}, {'height': 259, 'url': 'https://external-preview.redd.it/fmNA1vvKmlOb-Z1s5Xddb5AgJ_o-QsZY5nVOEBlXOuQ.jpg?width=320&crop=smart&auto=webp&s=efe4ba77b32d1409ad5b99eab53e6f090a59bcb5', 'width': 320}, {'height': 518, 'url': 'https://external-preview.redd.it/fmNA1vvKmlOb-Z1s5Xddb5AgJ_o-QsZY5nVOEBlXOuQ.jpg?width=640&crop=smart&auto=webp&s=2e5e54361d473732228f94d10fec2610e28cb7b8', 'width': 640}], 'source': {'height': 688, 'url': 'https://external-preview.redd.it/fmNA1vvKmlOb-Z1s5Xddb5AgJ_o-QsZY5nVOEBlXOuQ.jpg?auto=webp&s=d69982309e59a96622797d3aeb52fae316776889', 'width': 850}, 'variants': {}}]} |
Deepseek-R1 officially released | 75 | Today, we are officially releasing DeepSeek-R1 and simultaneously open-sourcing the model weights.
DeepSeek-R1 is released under the MIT License, allowing users to train other models through distillation techniques using R1.
The DeepSeek-R1 API is now live, giving users access to chain-of-thought outputs by setting \`model='deepseek-reasoner'\`.
The DeepSeek website and app are being updated and launched simultaneously starting today.
**Performance aligned with OpenAI-o1 official release**
During the post-training phase, DeepSeek-R1 extensively utilized reinforcement learning techniques, significantly enhancing the model's reasoning capabilities with minimal annotated data. On tasks including mathematics, coding, and natural language reasoning, its performance matches that of the official OpenAI o1 release.
https://preview.redd.it/0dyqpnhx75ee1.png?width=1080&format=png&auto=webp&s=1c403b4b0072827d55e4a5e6b30591342cc79f1c
We are making all DeepSeek-R1 training techniques public to promote open exchange and collaborative innovation within the technical community.
**Paper Link**: [**https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek\_R1.pdf**](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf)
**Distilled Small Models Surpass OpenAI o1-mini**
Along with open-sourcing the two 660B models DeepSeek-R1-Zero and DeepSeek-R1, we have distilled 6 smaller models for the community using DeepSeek-R1's outputs. Among these, our 32B and 70B models have achieved performance comparable to OpenAI o1-mini across multiple capabilities.
https://preview.redd.it/4o34xbv385ee1.png?width=1080&format=png&auto=webp&s=d71a867ed8173f83e87fd04df60748c0be1f2c64
**HuggingFace Link**: [**https://huggingface.co/deepseek-ai**](https://huggingface.co/deepseek-ai)
https://preview.redd.it/3yyta7e785ee1.png?width=1080&format=png&auto=webp&s=e3b5793efba042b1da54d00831470cab9383fc88
**Open License and User Agreement**
To promote and encourage the development of the open-source community and industry ecosystem, while releasing and open-sourcing R1, we have made the following adjustments to our licensing:
* **All model open-source licenses unified under MIT**. Previously, considering the unique characteristics of large language models and current industry practices, we introduced the DeepSeek License for open-source authorization. However, practice has shown that non-standard open-source licenses may increase developers' comprehension burden. Therefore, our open-source repositories (including model weights) now uniformly adopt the standardized, permissive MIT License - completely open source, with no commercial restrictions and no application required.
* **Product agreement explicitly allows "model distillation"**. To further promote technology sharing and open source development, we have decided to support users in performing "model distillation." We have updated our online product user agreement to explicitly allow users to train other models using model outputs through techniques such as model distillation.
**API and Pricing**
DeepSeek-R1 API service pricing is set at **1 RMB per million input tokens (cache hit) / 4 RMB per million input tokens (cache miss), and 16 RMB per million output tokens.**
https://preview.redd.it/ja0nhjzl85ee1.png?width=842&format=png&auto=webp&s=9f5e80c57d6b78c9e2c11590bc0eaf5be7974335
https://preview.redd.it/roylljnm85ee1.png?width=1080&format=png&auto=webp&s=ba88186aa43a468f3d7174505cabe1d41f603628
For detailed API usage guidelines, please refer to the official documentation: [**https://api-docs.deepseek.com/zh-cn/guides/reasoning\_model**](https://api-docs.deepseek.com/zh-cn/guides/reasoning_model) | 2025-01-20T12:39:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i5p9dk/deepseekr1_officially_release/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5p9dk | false | null | t3_1i5p9dk | /r/LocalLLaMA/comments/1i5p9dk/deepseekr1_officially_release/ | false | false | 75 | {'enabled': False, 'images': [{'id': 'T2LVarVzyN2jLiROc6SdWIhvCe9u0qoE2s-0Hur7rN8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=108&crop=smart&auto=webp&s=df868712f6c05f149c07ead868f4fecbd24461d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=216&crop=smart&auto=webp&s=7e65d9bed17499bc233539549273d8d001df71a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=320&crop=smart&auto=webp&s=4adc39d94e951c8e496c24d5eb17154ea2e977f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=640&crop=smart&auto=webp&s=ee4cf33369b41133dc351d1a09cdeb6f80176720', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=960&crop=smart&auto=webp&s=54864d9697351462fc58c6d4e5f3943506132952', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=1080&crop=smart&auto=webp&s=d140b04e365ed89028b5571bb1329a439a11a3f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?auto=webp&s=28ecb73b36045e67509e1f2e19c280a82f13e93b', 'width': 1200}, 'variants': {}}]} |
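A minimal sketch of calling the new reasoning endpoint described above, assuming the OpenAI-compatible chat completions API at `https://api.deepseek.com` and an API key exported as `DEEPSEEK_API_KEY` (check the linked docs for the exact base URL and response fields):

    # Hypothetical call; model name per the announcement, endpoint shape per the API docs
    curl https://api.deepseek.com/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
      -d '{"model": "deepseek-reasoner", "messages": [{"role": "user", "content": "What is 1+1?"}]}'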
o1 performance at ~1/50th the cost.. and Open Source!! WTF let's goo!! | 1,196 | 2025-01-20T12:42:33 | https://www.reddit.com/gallery/1i5pbb3 | Consistent_Bit_3295 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i5pbb3 | false | null | t3_1i5pbb3 | /r/LocalLLaMA/comments/1i5pbb3/o1_performance_at_150th_the_cost_and_open_source/ | false | false | 1,196 | null |
DeepSeek-R1 Paper | 85 | 2025-01-20T12:48:03 | https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf | cpldcpu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i5pepa | false | null | t3_1i5pepa | /r/LocalLLaMA/comments/1i5pepa/deepseekr1_paper/ | false | false | 85 | {'enabled': False, 'images': [{'id': 'T2LVarVzyN2jLiROc6SdWIhvCe9u0qoE2s-0Hur7rN8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=108&crop=smart&auto=webp&s=df868712f6c05f149c07ead868f4fecbd24461d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=216&crop=smart&auto=webp&s=7e65d9bed17499bc233539549273d8d001df71a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=320&crop=smart&auto=webp&s=4adc39d94e951c8e496c24d5eb17154ea2e977f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=640&crop=smart&auto=webp&s=ee4cf33369b41133dc351d1a09cdeb6f80176720', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=960&crop=smart&auto=webp&s=54864d9697351462fc58c6d4e5f3943506132952', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?width=1080&crop=smart&auto=webp&s=d140b04e365ed89028b5571bb1329a439a11a3f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VH2iWMzzI7fr45Vwe4Cjkq8YtS-AXjXjtn6GyjVnsIQ.jpg?auto=webp&s=28ecb73b36045e67509e1f2e19c280a82f13e93b', 'width': 1200}, 'variants': {}}]} |
Deepseek something something open source now upvote | 0 | Run open source model with only 1TB of ram yada yada upvote now you bots! | 2025-01-20T12:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i5pf80/deepseek_something_something_open_source_now/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5pf80 | false | null | t3_1i5pf80 | /r/LocalLLaMA/comments/1i5pf80/deepseek_something_something_open_source_now/ | false | false | self | 0 | null |
Deepseek R1 = $2.19/M tok output vs o1 $60/M tok. Insane | 285 | I know we will have to check out real world applications outside of benchmarks, but this is wild. Curious to hear anyone's comparisons also - esp for code gen. | 2025-01-20T12:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i5piy1/deepseek_r1_219m_tok_output_vs_o1_60m_tok_insane/ | cobalt1137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5piy1 | false | null | t3_1i5piy1 | /r/LocalLLaMA/comments/1i5piy1/deepseek_r1_219m_tok_output_vs_o1_60m_tok_insane/ | false | false | self | 285 | null |
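As a quick sanity check on these list prices, the output-token gap works out to roughly 27x rather than 50x — the ~1/50th figure quoted elsewhere presumably also factors in input pricing and cache hits:

    # Ratio of o1 to R1 output pricing, using the numbers from the title
    echo "scale=1; 60 / 2.19" | bc   # prints 27.3, i.e. roughly a 27x gap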
Worth checking , they have dropped 6 models from 1.5B to 70B | 1 | 2025-01-20T13:01:25 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i5pn5k | false | null | t3_1i5pn5k | /r/LocalLLaMA/comments/1i5pn5k/worth_checking_they_have_dropped_6_models_from/ | false | false | 1 | {'enabled': True, 'images': [{'id': '-Y2Etr4ft5EYFX2_X4tSqpo5FctDDGXKnPIJl-ycCE4', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/mn4lusg1d5ee1.jpeg?width=108&crop=smart&auto=webp&s=0c16c4d07b14658fa5d74ff88c509cde869b44f9', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/mn4lusg1d5ee1.jpeg?width=216&crop=smart&auto=webp&s=88781e0af28fbf1e9d60d8d7b002db6ae325039d', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/mn4lusg1d5ee1.jpeg?width=320&crop=smart&auto=webp&s=beb1c39c2a63b1d42949b2a1583351ef7bfec56b', 'width': 320}, {'height': 296, 'url': 'https://preview.redd.it/mn4lusg1d5ee1.jpeg?width=640&crop=smart&auto=webp&s=7471bc95fecccef5aaaa52ab5473161bc467b804', 'width': 640}, {'height': 445, 'url': 'https://preview.redd.it/mn4lusg1d5ee1.jpeg?width=960&crop=smart&auto=webp&s=358df22259f6f64ccf998994af2e11baff129b10', 'width': 960}, {'height': 500, 'url': 'https://preview.redd.it/mn4lusg1d5ee1.jpeg?width=1080&crop=smart&auto=webp&s=a2268cf98a6d5b7e42d27548e42d25668ae29c7e', 'width': 1080}], 'source': {'height': 852, 'url': 'https://preview.redd.it/mn4lusg1d5ee1.jpeg?auto=webp&s=df1280a8bc31a198487eb59a1381bf56801a86a9', 'width': 1838}, 'variants': {}}]} |
Useful P State tool for P40 owners | 1 | I'm sure this has been brought up before, but I just stumbled across this project https://github.com/sasha0552/nvidia-pstated?tab=readme-ov-file
In the past I was using a script that tied into llama.cpp, but when llama-server was updated this summer it broke the script. I decided to live with the excess power draw. Lately I updated to a 4x P40 setup, and the heat generation is a bit much (200W). Anyway this github project has a precompiled executable, no script or anything to configure, it just works. Kudos to the author for putting it together.
To get the peak power draw down while losing minimal performance i use nvidia-smi -pl 180 (ie sets the maximum power from 250W down to 180. There are some benchmarks floating around. ) | 2025-01-20T13:01:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i5pn6y/useful_p_state_tool_for_p40_owners/ | Judtoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5pn6y | false | null | t3_1i5pn6y | /r/LocalLLaMA/comments/1i5pn6y/useful_p_state_tool_for_p40_owners/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cMnO4oUg7y-bvoJGQxspRZfSBkQtF-EsSd2VpRAIovg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0QoT6UZvXzDzMJZbI23GJp3MGOLb-56w1ZyuUgRgwQQ.jpg?width=108&crop=smart&auto=webp&s=2758b4c7f32dead8e0643ba18fb7ee16feaa810e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0QoT6UZvXzDzMJZbI23GJp3MGOLb-56w1ZyuUgRgwQQ.jpg?width=216&crop=smart&auto=webp&s=2292334d231627b4b4d285ca19b6fc983a014159', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0QoT6UZvXzDzMJZbI23GJp3MGOLb-56w1ZyuUgRgwQQ.jpg?width=320&crop=smart&auto=webp&s=0241111a1366b755f4ef8df250cbd94186d99ff6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0QoT6UZvXzDzMJZbI23GJp3MGOLb-56w1ZyuUgRgwQQ.jpg?width=640&crop=smart&auto=webp&s=9aceb3a1819f54fe466367c2cb247f5756f92a13', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0QoT6UZvXzDzMJZbI23GJp3MGOLb-56w1ZyuUgRgwQQ.jpg?width=960&crop=smart&auto=webp&s=78af54a5933b2bdf00a6b5d12e7a1e4a9287bb8b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0QoT6UZvXzDzMJZbI23GJp3MGOLb-56w1ZyuUgRgwQQ.jpg?width=1080&crop=smart&auto=webp&s=f9ef70db6615ac006a864bcb5f325e898a8f04d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0QoT6UZvXzDzMJZbI23GJp3MGOLb-56w1ZyuUgRgwQQ.jpg?auto=webp&s=1357345b7a08af4284d2e850ad155548dd9573c4', 'width': 1200}, 'variants': {}}]} |
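A throwaway sketch of applying that cap to every card at boot, assuming a 4x P40 box and that 180W is the sweet spot mentioned above (persistence mode keeps the setting from resetting between runs):

    #!/usr/bin/env bash
    # Hypothetical helper: cap all NVIDIA GPUs at 180W; run as root, e.g. from cron @reboot
    nvidia-smi -pm 1                # enable persistence mode so the limit sticks
    for i in $(nvidia-smi --query-gpu=index --format=csv,noheader); do
      nvidia-smi -i "$i" -pl 180    # per-GPU power limit in watts
    done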
Top 3 Use Cases of AI Workflow Builders | 1 | I’ve been playing with AI workflows for quite some time, and here are the top 3 use cases I’ve discovered. These are true painkillers, not just vitamins—practical, impactful solutions that will give you a clear sense of where AI is actually making a real difference in my life:
**1) Research Paper Flow:** Finds and summarises research papers every week from Arxiv. Helps me stay updated and I also make a cool post about on Linkedin.
**2) Startup Research Flow:** Researches about Startup News from twitter and sends on my slack every 6 hours. Same logic can be used at multiple places and it opens up 100 doors.
**3) Content Ideas Flow:** I have some websites from which I continuously read, so made a flow to scrape them with RSS and then send me summarises of stories on slack.
Now I have been [Athina Flows](https://www.athina.ai/flows) and [Make](https://www.make.com/en) for all these flows. You can use any of your desired flow builder you like. Here are [some more examples](https://app.athina.ai/flows/templates) to explore. | 2025-01-20T13:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i5pn9s/top_3_use_cases_of_ai_workflow_builders/ | Sam_Tech1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5pn9s | false | null | t3_1i5pn9s | /r/LocalLLaMA/comments/1i5pn9s/top_3_use_cases_of_ai_workflow_builders/ | false | false | self | 1 | null |
Worth checking , They have released 6 distilled models | 36 | 2025-01-20T13:03:18 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i5pogn | false | null | t3_1i5pogn | /r/LocalLLaMA/comments/1i5pogn/worth_checking_they_have_released_6_distilled/ | false | false | 36 | {'enabled': True, 'images': [{'id': 'J2SaLHP7Dt3FVzpGnql5UTIoFmD5Wiim2I1UnwEWX-Y', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/7nberqidd5ee1.jpeg?width=108&crop=smart&auto=webp&s=967f03c30c4f6a60a198a8679fd9a83ddfff5418', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/7nberqidd5ee1.jpeg?width=216&crop=smart&auto=webp&s=45a7e5e367ed59189eee69541171ab865cb2224b', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/7nberqidd5ee1.jpeg?width=320&crop=smart&auto=webp&s=716f38402cba5b24e30799780ce92036296122b0', 'width': 320}, {'height': 296, 'url': 'https://preview.redd.it/7nberqidd5ee1.jpeg?width=640&crop=smart&auto=webp&s=f45fd64dd5d2f53af3c47cdafcfa2e70b37be65c', 'width': 640}, {'height': 445, 'url': 'https://preview.redd.it/7nberqidd5ee1.jpeg?width=960&crop=smart&auto=webp&s=137da60453bdb0535e373f352acc075c30fc232c', 'width': 960}, {'height': 500, 'url': 'https://preview.redd.it/7nberqidd5ee1.jpeg?width=1080&crop=smart&auto=webp&s=edbe8ae6a577366b9435dfd5e98c199b9e1b7f84', 'width': 1080}], 'source': {'height': 852, 'url': 'https://preview.redd.it/7nberqidd5ee1.jpeg?auto=webp&s=f35867b6a157c697730b3f9df1c7bc53d33472f0', 'width': 1838}, 'variants': {}}]} |
Hype around AI agents justified? | 2 | Seeing a lot of hype about AI Agents on social media like Youtube and LinkedIn. Everyone is saying 2025 will be the year of agents. But I'm not really seeing the justification.
Don't get me wrong, I understand the excitement around AI agents. Replacing human employees will lead to huge cost savings, revolutionising the economy. In fact, I myself am working in AI automation for sales. Secondly, I understand the excitement around AI progress. For all the talk of slowing down, we saw a lot of impressive releases in the last year, with the biggest announcement (o3) coming in December.
What can I say, I'm just not buying it. Agents are a cool concept, but I don't think the technology is there yet. Models like o3 are great at math but they take millions of attempts. When it comes to acting in the real world, you usually only get one attempt.
For a model to truly replace human workers, it needs to figure out how to optimize the physical world. And these models have a very loose conception of reality that is built up from text. For example, they don't understand "time". I built a toy example in ChatGPT to illustrate this point.
If you tell an AI model to give its opinion on a nuclear war, it will say "that would be a major event, huge loss of life". But these are just words it has read in history textbooks. It doesn't understand what large scale death actually looks like. This is why it gets confused when asked basic questions like "would you use a racial slur if it meant saving humanity from destruction".
I don't think these problems are solvable until AI learns to build world-models from images rather than text. That doesn't mean we can't have AI solving open math problems or doing scientific research. These are narrow-domain problems where specialized expertise works. But I don't think it generalizes to all jobs. Even with humans, being a good scientist doesn't mean you're automatically good at every job.
Anyway, would love to hear criticisms if I'm wrong. This definitely seems to be a minority opinion at the moment.
\----
FYI: This is my [Linkedin](https://www.linkedin.com/in/nirav-bhan-b0b75a52/).
[GPT-4o doesn't understand time](https://preview.redd.it/oslcp46jc5ee1.png?width=1664&format=png&auto=webp&s=577abcad63e7e086ba54a6ea7bbc81052d6cce1f)
| 2025-01-20T13:10:30 | https://www.reddit.com/r/LocalLLaMA/comments/1i5pt1v/hype_around_ai_agents_justified/ | No_Advantage_5626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5pt1v | false | null | t3_1i5pt1v | /r/LocalLLaMA/comments/1i5pt1v/hype_around_ai_agents_justified/ | false | false | 2 | null |
How did deepseek distill r1 to qwen/llama if the models' vocabularies are different? | 8 | All three have different vocabularies (and even vocabulary sizes), so how did they map the teacher's logits onto the students? | 2025-01-20T13:27:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i5q46q/how_did_deepseek_distill_r1_to_qwenllama_if_the/ | adwhh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5q46q | false | null | t3_1i5q46q | /r/LocalLLaMA/comments/1i5q46q/how_did_deepseek_distill_r1_to_qwenllama_if_the/ | false | false | self | 8 | null
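Worth noting: the release post above says the small models were distilled "using DeepSeek-R1's outputs", which reads as sequence-level distillation — plain supervised fine-tuning on teacher-generated text — rather than logit matching, so no vocabulary alignment is needed. Each student simply minimizes cross-entropy on samples drawn from the teacher, tokenized with its own tokenizer:

$$
\mathcal{L}_{\text{distill}}(\theta) = -\sum_{t} \log p_{\theta}\left(y_t \mid y_{<t},\, x\right), \qquad y \sim \pi_{\text{R1}}(\cdot \mid x)
$$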
DeepSeek-R1 and distilled benchmarks color coded | 474 | 2025-01-20T13:30:26 | https://www.reddit.com/gallery/1i5q6b9 | Balance- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i5q6b9 | false | null | t3_1i5q6b9 | /r/LocalLLaMA/comments/1i5q6b9/deepseekr1_and_distilled_benchmarks_color_coded/ | false | false | 474 | null |
tips for choosing a local llm? | 1 | [removed] | 2025-01-20T13:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i5q6mq/tips_for_choosing_a_local_llm/ | Prize_Anxiety4879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5q6mq | false | null | t3_1i5q6mq | /r/LocalLLaMA/comments/1i5q6mq/tips_for_choosing_a_local_llm/ | false | false | self | 1 | null |
Deepseek R1 Report & Kimi Report has been released! | 1 | [removed] | 2025-01-20T13:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i5qnfz/deepseek_r1_report_kimi_report_has_been_released/ | Apprehensive_Hawk812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5qnfz | false | null | t3_1i5qnfz | /r/LocalLLaMA/comments/1i5qnfz/deepseek_r1_report_kimi_report_has_been_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kvQ9oVbCD9vvZU6ZrnMCckPDhqfHRKpSjFE6OrbgKwU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=108&crop=smart&auto=webp&s=4f9b20282905484952cb4134ada601721c91c80a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=216&crop=smart&auto=webp&s=7480fafcca0a3fc6b3ce2df581f7e163a5ff329a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=320&crop=smart&auto=webp&s=80cc079b02200b02734fe3bcfc494c94e1f24282', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=640&crop=smart&auto=webp&s=2abe26f7a3e1642bf9cb27ef184bd854f2d59cf2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=960&crop=smart&auto=webp&s=84b1d3c4f3e4fcddc5716d6f33be6b80b0ab8e0a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=1080&crop=smart&auto=webp&s=2016af13e6e4498bf6796700b3274262b5166bc0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?auto=webp&s=15d6b1c61353eebab849a32ebbfd532e04c652ea', 'width': 1200}, 'variants': {}}]} |
Is it just me or is deepseek r1 overthinking | 4 | It thought for 115 seconds on a single math problem | 2025-01-20T13:57:22 | The_GSingh | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i5qp2e | false | null | t3_1i5qp2e | /r/LocalLLaMA/comments/1i5qp2e/is_it_just_me_or_is_deepseek_r1_overthinking/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'rN1mP0EZyD9-KI_uYEqojSAOraOmQgWJ1ReEE-5IWYs', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ywsor0y0n5ee1.jpeg?width=108&crop=smart&auto=webp&s=ab2206b2ef4481b49f9d17234eb65a4e8f92cf15', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ywsor0y0n5ee1.jpeg?width=216&crop=smart&auto=webp&s=ed7ee2d7c61988e58d99f72af018ae5c00b67b47', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ywsor0y0n5ee1.jpeg?width=320&crop=smart&auto=webp&s=5a98a21dd0bc8feebda7f90675a0276074d9f4e3', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ywsor0y0n5ee1.jpeg?width=640&crop=smart&auto=webp&s=28bb30688fd71183435c6654b920ad7be594ffb8', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ywsor0y0n5ee1.jpeg?width=960&crop=smart&auto=webp&s=ca532bbb4887714fc5620b3d3c46093adb0a579f', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/ywsor0y0n5ee1.jpeg?width=1080&crop=smart&auto=webp&s=1596c85664c1993cc8409ed7f1016a3010d6b0ee', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/ywsor0y0n5ee1.jpeg?auto=webp&s=e455c783fd0abd33683bec15bb4684f9da55520d', 'width': 1179}, 'variants': {}}]} |
What resources are available for me to get started on computer use agents? | 1 | [removed] | 2025-01-20T13:59:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i5qqh9/what_resources_are_available_for_me_to_get/ | Necessary_Student_15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5qqh9 | false | null | t3_1i5qqh9 | /r/LocalLLaMA/comments/1i5qqh9/what_resources_are_available_for_me_to_get/ | false | false | self | 1 | null |
Deepseek R1 Llama 8B Distilled: Confused over assistant/user turns? | 1 | I have downloaded the Deepseek R1 distillation of Llama 8B, and have noticed that on almost every turn, before it completes the request, it spits out intermittent </assistant> or <|im\_start|> tags, responds as the user, and then continues on replying to itself. Sometimes it even still arrives at an answer.
Does anyone know how I can resolve this? To be clear, I am using a gguf conversion found here: [ct-2/DeepSeek-R1-Distill-Llama-8B-Q4\_K\_M-GGUF · Hugging Face](https://huggingface.co/ct-2/DeepSeek-R1-Distill-Llama-8B-Q4_K_M-GGUF).
Is this a tokenizer config issue? Anyone know? | 2025-01-20T14:19:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i5r5d1/deepseek_r1_llama_8b_distilled_confused_over/ | JustinPooDough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5r5d1 | false | null | t3_1i5r5d1 | /r/LocalLLaMA/comments/1i5r5d1/deepseek_r1_llama_8b_distilled_confused_over/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Of0hwQ7YcC8QTSUp5zzHelau9PGCHtrXtzgNK6C7nWk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4IG9PrNkhyS0powpaST3aV1_S-5zRxeuFHIv1HJo_Uc.jpg?width=108&crop=smart&auto=webp&s=18ef83651af33e8261df5fe384cf45256a680f18', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4IG9PrNkhyS0powpaST3aV1_S-5zRxeuFHIv1HJo_Uc.jpg?width=216&crop=smart&auto=webp&s=c90a20ca8bcbf9a43c0604e3bbca5bd3c10d485c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4IG9PrNkhyS0powpaST3aV1_S-5zRxeuFHIv1HJo_Uc.jpg?width=320&crop=smart&auto=webp&s=c8e151c7bfba3f8d152180f3dd3a968d43b98b28', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4IG9PrNkhyS0powpaST3aV1_S-5zRxeuFHIv1HJo_Uc.jpg?width=640&crop=smart&auto=webp&s=95dd78b078bd5bc2844dc436edd116b71d49b81c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4IG9PrNkhyS0powpaST3aV1_S-5zRxeuFHIv1HJo_Uc.jpg?width=960&crop=smart&auto=webp&s=f17e6ac9f76b7d9fbbe6cf37206195185e2f32c3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4IG9PrNkhyS0powpaST3aV1_S-5zRxeuFHIv1HJo_Uc.jpg?width=1080&crop=smart&auto=webp&s=2b0dac8bc3ea2a162a6ca870377cc8a3462a1ca9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4IG9PrNkhyS0powpaST3aV1_S-5zRxeuFHIv1HJo_Uc.jpg?auto=webp&s=c561bbe51be5a060103b54c6a9654f520c030c84', 'width': 1200}, 'variants': {}}]} |
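Quite possibly — emitting turn tags and answering as the user is the classic symptom of a chat template or stop-token mismatch, and DeepSeek's distills expect `<|User|>`/`<|Assistant|>` turns rather than ChatML's `<|im_start|>`. A sketch of forcing the expected layout in llama.cpp, assuming the same prompt format shown in the Unsloth post further down:

    # Hypothetical workaround: pass the DeepSeek turn tokens explicitly and stop on <|User|>
    ./llama-cli \
      --model DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
      --prompt '<|User|>Why is the sky blue?<|Assistant|>' \
      --reverse-prompt '<|User|>' \
      -no-cnv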
tips for choosing an llm? | 1 | [removed] | 2025-01-20T14:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i5rlgg/tips_for_choosing_an_llm/ | akashiedein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5rlgg | false | null | t3_1i5rlgg | /r/LocalLLaMA/comments/1i5rlgg/tips_for_choosing_an_llm/ | false | false | self | 1 | null |
Deepseek R1 GGUF Links | 43 | For 16-24GB GPUs
[https://huggingface.co/Donnyed/DeepSeek-R1-Distill-Qwen-32B-Q4\_K\_M-GGUF](https://huggingface.co/Donnyed/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF)
For 8GB GPUs
[https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Q6\_K.gguf](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Q6_K.gguf)
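A possible end-to-end fetch-and-run for the 8GB option, assuming `huggingface-cli` is installed and the filename matches the link above:

    # Download just the Q6_K file, then chat with it via llama.cpp
    huggingface-cli download unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF \
      DeepSeek-R1-Distill-Llama-8B-Q6_K.gguf --local-dir .
    ./llama-cli -m DeepSeek-R1-Distill-Llama-8B-Q6_K.gguf -ngl 99 \
      --prompt '<|User|>Hi<|Assistant|>'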
| 2025-01-20T14:46:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i5rqjk/deepseek_r1_gguf_links/ | undisputedx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5rqjk | false | null | t3_1i5rqjk | /r/LocalLLaMA/comments/1i5rqjk/deepseek_r1_gguf_links/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': '1OhkeA1lFIhPcDyVakRU_uq90K_W1dktpHKdvB1-Y40', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OR3gaI6YQtr0gH2Z7PA0pjgKTGS2SAQUoJSCLWW50D0.jpg?width=108&crop=smart&auto=webp&s=f087ec11a00a8f353d64d4ec26139423395b56c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OR3gaI6YQtr0gH2Z7PA0pjgKTGS2SAQUoJSCLWW50D0.jpg?width=216&crop=smart&auto=webp&s=fc04a53ad967e37a5a39eca7d668567671c63788', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OR3gaI6YQtr0gH2Z7PA0pjgKTGS2SAQUoJSCLWW50D0.jpg?width=320&crop=smart&auto=webp&s=7874839bf6011612c6065d8b69277c7483d6ee4a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OR3gaI6YQtr0gH2Z7PA0pjgKTGS2SAQUoJSCLWW50D0.jpg?width=640&crop=smart&auto=webp&s=b5101d2b80f8dc234260e44fff132e7fea188470', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OR3gaI6YQtr0gH2Z7PA0pjgKTGS2SAQUoJSCLWW50D0.jpg?width=960&crop=smart&auto=webp&s=a053350e79846919e4d28b74f3beca8a3c5b21db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OR3gaI6YQtr0gH2Z7PA0pjgKTGS2SAQUoJSCLWW50D0.jpg?width=1080&crop=smart&auto=webp&s=9306b1c3ac5e1b6ee3c9eb599b6a9781f2825316', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OR3gaI6YQtr0gH2Z7PA0pjgKTGS2SAQUoJSCLWW50D0.jpg?auto=webp&s=ea5916ca02839f50f6630d00e0d714946790b33c', 'width': 1200}, 'variants': {}}]} |
Reasoning | 1 | [removed] | 2025-01-20T14:48:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i5rrwv/reasoning/ | Apprehensive_Hawk812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5rrwv | false | null | t3_1i5rrwv | /r/LocalLLaMA/comments/1i5rrwv/reasoning/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kvQ9oVbCD9vvZU6ZrnMCckPDhqfHRKpSjFE6OrbgKwU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=108&crop=smart&auto=webp&s=4f9b20282905484952cb4134ada601721c91c80a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=216&crop=smart&auto=webp&s=7480fafcca0a3fc6b3ce2df581f7e163a5ff329a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=320&crop=smart&auto=webp&s=80cc079b02200b02734fe3bcfc494c94e1f24282', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=640&crop=smart&auto=webp&s=2abe26f7a3e1642bf9cb27ef184bd854f2d59cf2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=960&crop=smart&auto=webp&s=84b1d3c4f3e4fcddc5716d6f33be6b80b0ab8e0a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?width=1080&crop=smart&auto=webp&s=2016af13e6e4498bf6796700b3274262b5166bc0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EFqVCmP1lVQaIeFKK0xlX4mTF_zF6Me4AHhvUKZ16H4.jpg?auto=webp&s=15d6b1c61353eebab849a32ebbfd532e04c652ea', 'width': 1200}, 'variants': {}}]} |
2H After R1 Launch, Chinese Company KIMI Follows Up with Release of Multimodal Model Claiming O1 Capabilities. | 84 | [https://github.com/MoonshotAI/kimi-k1.5](https://github.com/MoonshotAI/kimi-k1.5) | 2025-01-20T14:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i5s0hn/2h_after_r1_launch_chinese_company_kimi_follows/ | luckbossx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5s0hn | false | null | t3_1i5s0hn | /r/LocalLLaMA/comments/1i5s0hn/2h_after_r1_launch_chinese_company_kimi_follows/ | false | false | self | 84 | {'enabled': False, 'images': [{'id': '7Sp16f2rZAB5ymcJCX0feRE0W4s3ypMvm4RADYsO-Mc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kc_DOYVe_V847kzvtCDjkoyKfr7fJgLM8iT1wecf5uY.jpg?width=108&crop=smart&auto=webp&s=05fe4af6c04484d92ab52e9d8878bac18ce177b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kc_DOYVe_V847kzvtCDjkoyKfr7fJgLM8iT1wecf5uY.jpg?width=216&crop=smart&auto=webp&s=7edd15565f72307504ea8dbc9675c18d292d3f4b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kc_DOYVe_V847kzvtCDjkoyKfr7fJgLM8iT1wecf5uY.jpg?width=320&crop=smart&auto=webp&s=edd68772074e49c0a2debf4c427ce09924c05e86', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kc_DOYVe_V847kzvtCDjkoyKfr7fJgLM8iT1wecf5uY.jpg?width=640&crop=smart&auto=webp&s=4ef91064024d25cc1e92b6a729ffd2942293dcde', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kc_DOYVe_V847kzvtCDjkoyKfr7fJgLM8iT1wecf5uY.jpg?width=960&crop=smart&auto=webp&s=86ed8297932c834f78918b095448c3fa759fedba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kc_DOYVe_V847kzvtCDjkoyKfr7fJgLM8iT1wecf5uY.jpg?width=1080&crop=smart&auto=webp&s=3d5a4c6fdd915e52236e815755428af261969bca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kc_DOYVe_V847kzvtCDjkoyKfr7fJgLM8iT1wecf5uY.jpg?auto=webp&s=8a78193856f3225677215964d4ee5c39bef15c60', 'width': 1200}, 'variants': {}}]} |
DeepSeek-R1-Distill-Qwen-32B is straight SOTA, delivering more than GPT4o-level LLM for local use without any limits or restrictions! | 667 | [https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
https://preview.redd.it/02np5yx0y5ee1.png?width=1062&format=png&auto=webp&s=1812d10e51aa9f08460335eddc6e78dd23384ce2
DeepSeek really has done something special with distilling the big R1 model into other open-source models. Especially the distillation into Qwen-32B seems to deliver insane gains across benchmarks. Easily the current SOTA for local models, and it should be fairly performant even on consumer hardware.
Who else can't wait for upcoming Qwen 3? | 2025-01-20T15:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i5s2yd/deepseekr1distillqwen32b_is_straight_sota/ | DarkArtsMastery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5s2yd | false | null | t3_1i5s2yd | /r/LocalLLaMA/comments/1i5s2yd/deepseekr1distillqwen32b_is_straight_sota/ | false | false | 667 | {'enabled': False, 'images': [{'id': 'Q0QOTyTEboFw8cDsh_hlK-B2pRmxT16BpU6WvhiRAhg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=108&crop=smart&auto=webp&s=05513299537f9ac91b46a0ef050e330e6a41146a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=216&crop=smart&auto=webp&s=58f411ceb4a129b6fd04c891b0f50348540c111d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=320&crop=smart&auto=webp&s=f4922e2d1bdf4d98b9ae98ccd148d9a1fcc1cce7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=640&crop=smart&auto=webp&s=1e0d69a4ff14bebc6aa8e3ea68b33c5b632d47d8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=960&crop=smart&auto=webp&s=a9858a471059dc23dfab3344c026e3440db99079', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=1080&crop=smart&auto=webp&s=24f733061bede7f591279e304f9f0430a6aea381', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?auto=webp&s=1427219437ae40d0c8e3b466654ca6877e656dd3', 'width': 1200}, 'variants': {}}]} |
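Back-of-envelope for the "consumer hardware" claim: at roughly 4.8 bits per weight for a Q4_K_M quant, the 32B weights land around 19-20 GB, so a single 24 GB card can hold the model with a modest context, while smaller cards need partial CPU offload. A rough estimate (ignoring KV cache and runtime overhead):

    # Approximate weight size for a 32B model at Q4_K_M (~4.8 bits/weight)
    echo "scale=1; 32 * 4.8 / 8" | bc   # ≈ 19.2 GB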
OpenAI sweating bullets rn | 1,429 | 2025-01-20T15:05:09 | ThroughForests | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i5s5hk | false | null | t3_1i5s5hk | /r/LocalLLaMA/comments/1i5s5hk/openai_sweating_bullets_rn/ | false | false | 1,429 | {'enabled': True, 'images': [{'id': 'PFRwuDYe7FoBZg_V4s2-WHBzK9LSy3jJmqZMVVdg54w', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/b2fm3y9uy5ee1.png?width=108&crop=smart&auto=webp&s=3f9d7829b4c714eea2718c42b851927c6a93aacd', 'width': 108}, {'height': 261, 'url': 'https://preview.redd.it/b2fm3y9uy5ee1.png?width=216&crop=smart&auto=webp&s=755cd9bdbf21d9a29b70467a7be9f5a61ec3aebc', 'width': 216}, {'height': 387, 'url': 'https://preview.redd.it/b2fm3y9uy5ee1.png?width=320&crop=smart&auto=webp&s=a3a6e07ad1ccbf40dcef766d2a3fa367543a642e', 'width': 320}], 'source': {'height': 606, 'url': 'https://preview.redd.it/b2fm3y9uy5ee1.png?auto=webp&s=cadfba4834dfd5c65bb4935bfd0ed796f14c76b4', 'width': 500}, 'variants': {}}]} |
Deepseek-R1 GGUFs + All distilled 2 to 16bit GGUFs + 2bit MoE GGUFs | 172 | Hey guys we uploaded GGUFs including 2, 3, 4, 5, 6, 8 and 16bit quants for **Deepseek-R1's distilled models**.
There's also, for now, a **Q2\_K\_L 200GB quant** for the **large R1 MoE** and R1 Zero models (uploading more)
We also uploaded [Unsloth](http://github.com/unslothai/unsloth) 4-bit [dynamic quant ](https://unsloth.ai/blog/dynamic-4bit)versions of the models for higher accuracy.
See all versions of the R1 models including GGUFs on Hugging Face: [huggingface.co/collections/unsloth/deepseek-r1](https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-677cf5cfd7df8b7815fc723c). For example, the Llama 3 R1 distilled version GGUFs are here: [https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF)
***GGUFs:***
|DeepSeek R1 version|GGUF links|
|:-|:-|
|R1 (MoE 671B params)|[R1](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) • [R1 Zero](https://huggingface.co/unsloth/DeepSeek-R1-Zero-GGUF)|
|Llama 3|[Llama 8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF) • [Llama 3 (70B)](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF)|
|Qwen 2.5|[14B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B-GGUF) • [32B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF)|
|Qwen 2.5 Math|[1.5B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF) • [7B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B-GGUF)|
***4-bit dynamic quants:***
|DeepSeek R1 version|4-bit links|
|:-|:-|
|Llama 3|[Llama 8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit)|
|Qwen 2.5|[14B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit)|
|Qwen 2.5 Math|[1.5B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit) • [7B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit)|
See more detailed instructions on how to run the big R1 model via llama.cpp in our blog: [unsloth.ai/blog/deepseek-r1](http://unsloth.ai/blog/deepseek-r1) once we finish uploading it [here](https://huggingface.co/unsloth/DeepSeek-R1-GGUF).
For some general steps:
Do not forget about \`<|User|>\` and \`<|Assistant|>\` tokens! - Or use a chat template formatter
Obtain the latest \`llama.cpp\` at [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
Example:
./llama.cpp/llama-cli \
--model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
--cache-type-k q8_0 \
--threads 16 \
--prompt '<|User|>What is 1+1?<|Assistant|>' \
-no-cnv
Example output:
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
...
PS. hope you guys have an amazing week! :) Also I'm still uploading stuff - some quants might not be there yet! | 2025-01-20T15:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i5s74x/deepseekr1_ggufs_all_distilled_2_to_16bit_ggufs/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5s74x | false | null | t3_1i5s74x | /r/LocalLLaMA/comments/1i5s74x/deepseekr1_ggufs_all_distilled_2_to_16bit_ggufs/ | false | false | self | 172 | {'enabled': False, 'images': [{'id': 'oUAe34zUCLxMUIpYtOvOz6aYou2CnbtJjhJZ0bwJ6Jg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=108&crop=smart&auto=webp&s=6481fbac644d8a96c2918c63e805d1c62e24cbe5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=216&crop=smart&auto=webp&s=941b00cf4a68a70df266160fe06769bc2a817a41', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=320&crop=smart&auto=webp&s=e794c7cbf042b8d8e6fdd8f8c239e0f5cb398261', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=640&crop=smart&auto=webp&s=57fbf9c89972d5c31e3bd2d3354696be4e8d5b9d', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=960&crop=smart&auto=webp&s=557f9a403410be41c1438b6d2b1a2acd9d507da4', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=1080&crop=smart&auto=webp&s=989ea96f774aa62c199da9564be3b7b646db1494', 'width': 1080}], 'source': {'height': 834, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?auto=webp&s=fb46a23aaa0ed1c5044eaea486ff79352cce2675', 'width': 1584}, 'variants': {}}]} |
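For anyone who wants the same model behind an OpenAI-style endpoint instead of the CLI shown above, a minimal sketch (same GGUF; the chat template baked into the file is used by default):

    # Serve on localhost:8080; /v1/chat/completions becomes available
    ./llama.cpp/llama-server \
      --model unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf \
      --port 8080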
Status of current testing for AMD Instinct Mi60 AI Servers | 5 | ```
#vLLM
#Working
PYTHONPATH=/home/$USER/triton-gcn5/python HIP_VISIBLE_DEVICES="1,2,3,4" TORCH_BLAS_PREFER_HIPBLASLT=0 OMP_NUM_THREADS=4 vllm serve "kaitchup/Llama-3.3-70B-Instruct-AutoRound-GPTQ-4bit" --tensor-parallel-size 4 --num-gpu-blocks-override 14430 --max-model-len 16384
HIP_VISIBLE_DEVICES="1,2,3,4" vllm serve mistralai/Ministral-8B-Instruct-2410 --tokenizer_mode mistral --config_format mistral --load_format mistral --tensor-parallel-size 4
PYTHONPATH=/home/$USER/triton-gcn5/python HIP_VISIBLE_DEVICES="1,2,3,4" python -m vllm.entrypoints.openai.api_server --model neuralmagic/Mistral-7B-Instruct-v0.3-GPTQ-4bit --tensor-parallel-size 4 --max-model-len 4096
PYTHONPATH=/home/$USER/triton-gcn5/python HIP_VISIBLE_DEVICES="1,2,3,4" TORCH_BLAS_PREFER_HIPBLASLT=0 OMP_NUM_THREADS=4 vllm serve "kaitchup/Llama-3.1-Tulu-3-8B-AutoRound-GPTQ-4bit" --tensor-parallel-size 4 --num-gpu-blocks-override 14430 --max-model-len 16384
#Broken
PYTHONPATH=/home/$USER/triton-gcn5/python HIP_VISIBLE_DEVICES="1,2,3,4" VLLM_WORKER_MULTIPROC_METHOD=spawn TORCH_BLAS_PREFER_HIPBLASLT=0 OMP_NUM_THREADS=4 vllm serve "flozi00/Llama-3.1-Nemotron-70B-Instruct-HF-FP8" --tensor-parallel-size 4 --num-gpu-blocks-override 14430 --max-model-len 16384
PYTHONPATH=/home/$USER/triton-gcn5/python HIP_VISIBLE_DEVICES="1,2,3,4" vllm serve "Qwen/Qwen2.5-Coder-32B-Instruct" --tokenizer_mode mistral --tensor-parallel-size 4 --max-model-len 16384
PYTHONPATH=/home/$USER/triton-gcn5/python HIP_VISIBLE_DEVICES="1,2,3,4" vllm serve "unsloth/Llama-3.1-Nemotron-70B-Instruct-bnb-4bit" --tensor-parallel-size 4 --max-model-len 4096
```
Ollama
All models are working easily, just running slower than vLLM for now.
I am looking for suggestions on how to get more models working with vLLM.
I am also looking into Gollama for the possibility of converting the Ollama models into a single GGUF file to use with vLLM.
What are your thoughts? | 2025-01-20T15:08:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i5s863/status_of_current_testing_for_amd_instinct_mi60/ | Any_Praline_8178 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5s863 | false | null | t3_1i5s863 | /r/LocalLLaMA/comments/1i5s863/status_of_current_testing_for_amd_instinct_mi60/ | false | false | self | 5 | null |
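One small suggestion, since the working commands above share so much boilerplate: wrap the common environment in a shell helper so each model test becomes a one-liner. A sketch built only from the variables already used in the post:

    # Hypothetical wrapper around the env vars shared by the "Working" commands
    mi60_vllm() {
      PYTHONPATH=/home/$USER/triton-gcn5/python \
      HIP_VISIBLE_DEVICES="1,2,3,4" \
      TORCH_BLAS_PREFER_HIPBLASLT=0 \
      OMP_NUM_THREADS=4 \
      vllm serve "$1" --tensor-parallel-size 4 "${@:2}"
    }
    mi60_vllm "kaitchup/Llama-3.1-Tulu-3-8B-AutoRound-GPTQ-4bit" --max-model-len 16384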
R1 is great at one-shotting and you can use it for FIM completions | 0 | Hi guys, if you're a programmer you should try this combination: R1 with a code-completion prompt. It understands.
https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder
Deepseek can be chosen in bottom right corner. Tip for usage: write inline comment, place cursor below and use the extension with command, eg.: "Open Web Chat with Autocompletion Prompt". | 2025-01-20T15:26:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i5snnw/r1_is_great_in_one_shooting_and_you_can_use_it/ | robertpiosik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5snnw | false | null | t3_1i5snnw | /r/LocalLLaMA/comments/1i5snnw/r1_is_great_in_one_shooting_and_you_can_use_it/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'xe0CO2ErLSK8gDReKHm-_hxiHefw__lsJmVIrd2u5Oc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/s8s01RT_9zKAQq3mAhjf6ugqJbTB-i73brlB58BRFvU.jpg?width=108&crop=smart&auto=webp&s=72fab37089a9d94d7d5065fd520477478a3bebb6', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/s8s01RT_9zKAQq3mAhjf6ugqJbTB-i73brlB58BRFvU.jpg?auto=webp&s=31c6081c76b66382d2361a5812813aba54f00199', 'width': 128}, 'variants': {}}]} |
AI/ML Checkpointing difficulty. I will not promote | 1 | [removed] | 2025-01-20T15:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i5syto/aiml_checkpointing_difficulty_i_will_not_promote/ | Personal-Dot2872 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5syto | false | null | t3_1i5syto | /r/LocalLLaMA/comments/1i5syto/aiml_checkpointing_difficulty_i_will_not_promote/ | false | false | self | 1 | null
How to run DeepSeek-R1-Distill-Qwen models using Ollama | 6 | Here's a quick-start guide for anyone who wants to run the DeepSeek-R1-Distill-Qwen models using Ollama as soon as possible:
1. Download a GGUF from: [https://huggingface.co/unsloth](https://huggingface.co/unsloth)
2. Create a `modelfile` in the same directory as the GGUF model, and type in the following:
# You should change this gguf file name to the gguf you just downloaded
FROM DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- if eq .Role "user" }}<|User|>
{{- else if eq .Role "assistant" }}<|Assistant|>
{{- end }}{{ .Content }}
{{- if eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|Assistant|>
{{- end }}
{{- else if eq .Role "assistant" }}<|end▁of▁sentence|><|begin▁of▁sentence|>
{{- end }}
{{- end }}"""
    PARAMETER stop "<|begin▁of▁sentence|>"
    PARAMETER stop "<|end▁of▁sentence|>"
    PARAMETER stop "<|User|>"
    PARAMETER stop "<|Assistant|>"
    SYSTEM "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."
3. `ollama create {model name} -f modelfile`
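To sanity-check the result, assuming you named the model `deepseek-r1-qwen-7b` in step 3:

    ollama create deepseek-r1-qwen-7b -f modelfile   # build from the Modelfile above
    ollama run deepseek-r1-qwen-7b "What is 1+1?"    # a <think> block should appear first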
https://preview.redd.it/pg58mqtv46ee1.png?width=1110&format=png&auto=webp&s=3f7faf880b0e30f8ad1db8fdaf5517df5a6d3c33
| 2025-01-20T15:40:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i5sz60/how_to_run_deepseekr1distillqwen_models_using/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5sz60 | false | null | t3_1i5sz60 | /r/LocalLLaMA/comments/1i5sz60/how_to_run_deepseekr1distillqwen_models_using/ | false | false | 6 | null |
o1 thought for 12 minutes 35 sec, r1 thought for 5 minutes and 9 seconds. Both got a correct answer. Both in two tries. They are the first two models that have done it correctly. | 297 | 2025-01-20T15:42:59 | No_Training9444 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i5t1be | false | null | t3_1i5t1be | /r/LocalLLaMA/comments/1i5t1be/o1_thought_for_12_minutes_35_sec_r1_thought_for_5/ | false | false | 297 | {'enabled': True, 'images': [{'id': 'Jdh0PDq75ecQulSs90hkFa1tvlhx9qGKVYd28Lxcrlo', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/g4tvkorg56ee1.png?width=108&crop=smart&auto=webp&s=38de7c6a9af8c3e69760a1e3dc934a2bf05e5b87', 'width': 108}, {'height': 60, 'url': 'https://preview.redd.it/g4tvkorg56ee1.png?width=216&crop=smart&auto=webp&s=9c961aca0bbe78be1270fc2971611e6ea2a0bc8f', 'width': 216}, {'height': 88, 'url': 'https://preview.redd.it/g4tvkorg56ee1.png?width=320&crop=smart&auto=webp&s=6cb3288099029218cdd205b1484be413d0f862b7', 'width': 320}, {'height': 177, 'url': 'https://preview.redd.it/g4tvkorg56ee1.png?width=640&crop=smart&auto=webp&s=c1e8c38200346a36079dd1ceecd332339fad57e4', 'width': 640}], 'source': {'height': 263, 'url': 'https://preview.redd.it/g4tvkorg56ee1.png?auto=webp&s=a8df7b9c2c0fc4393424850dfa81170933f99c61', 'width': 946}, 'variants': {}}]} |
Deepseek r1 is sus | 1 | [removed] | 2025-01-20T15:43:29 | https://www.reddit.com/gallery/1i5t1qv | FelpolinColorado | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i5t1qv | false | null | t3_1i5t1qv | /r/LocalLLaMA/comments/1i5t1qv/deepseek_r1_is_sus/ | false | false | 1 | null |
Notate v1.0.5 - LlamaCPP and Transformers + Native embeddings Support + More Providers & UI/UX improvements | 9 | 2025-01-20T15:44:18 | Hairetsu | github.com | https://github.com/CNTRLAI/Notate |
AI/ML Checkpointing difficulty. I will not promote | 1 | [removed] | 2025-01-20T15:45:56 | Personal-Dot2872 | self.LocalLLaMA | /r/LocalLLaMA/comments/1i5t3v6/aiml_checkpointing_difficulty_i_will_not_promote/ |
Moving context between different models. | 2 | Hello, I'm GPU poor (3060 12GB). Is there a GUI for Ollama that supports moving context between models? For example, I'd like to have a conversation with one model to work things out, then swap to another model to actually do the task. | 2025-01-20T15:47:21 | Western_Courage_6563 | self.LocalLLaMA | /r/LocalLLaMA/comments/1i5t524/moving_context_between_different_models/ |
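The workflow described in that question maps directly onto Ollama's /api/chat endpoint, which takes the full message history on every call, so one conversation can be carried across models by hand even without a GUI. A rough sketch of the idea; the model names are placeholders, not recommendations:

```python
# Rough sketch (not a GUI): carry one conversation across two Ollama
# models by replaying the message history through /api/chat.
import requests

OLLAMA = "http://localhost:11434/api/chat"

def chat(model: str, messages: list) -> dict:
    """Send the running history to `model` and return its reply message."""
    r = requests.post(OLLAMA, json={"model": model, "messages": messages, "stream": False})
    r.raise_for_status()
    return r.json()["message"]  # {"role": "assistant", "content": "..."}

history = [{"role": "user", "content": "Help me think through a small ETL design."}]
history.append(chat("llama3.2", history))            # work things out with model A
history.append({"role": "user", "content": "Now write the actual script."})
history.append(chat("qwen2.5-coder:7b", history))    # swap to model B for the task
print(history[-1]["content"])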
Output token limits for llama 3.3 70B | 0 | What's the output limit for llama 3.3 70B? I keep hearing 2048, but testing shows 4096. Where's the 2048 limit coming from? | 2025-01-20T15:49:08 | Tiberius_Gladiator | self.LocalLLaMA | /r/LocalLLaMA/comments/1i5t6mb/output_token_limits_for_llama_33_70b/ |
What are some other good pre-synthetic data models (e.g. Guanaco-65B)? | 1 | [removed] | 2025-01-20T16:17:58 | NEEDMOREVRAM | self.LocalLLaMA | /r/LocalLLaMA/comments/1i5tw0a/what_are_some_other_good_presynthetic_data_models/ |
Let's get the Qwen DeepSeek 32B R1 model running properly... System Prompt? | 27 | Alright, I've got the DeepSeek R1 32B Qwen distill up and running in vLLM on my 4090. For now, I'm running it like this:
```
vllm serve casperhansen/deepseek-r1-distill-qwen-32b-awq \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.98
```
That's 8192 context. I'm fairly sure I can push that all the way to 32k with KV-cache quantization (or maybe TabbyAPI EXL2 or GGUF), but for now I'm just testing with vLLM and AWQ until the uploads settle and some good EXL2 versions show up. Something like the sketch below should get there.
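For reference, a rough, untested sketch of what that 32k launch might look like with vLLM's fp8 KV-cache quantization; the --kv-cache-dtype flag is my reading of vLLM's docs, so verify it against your installed version, and whether this actually fits in 24GB alongside the AWQ weights is an open question:

```
vllm serve casperhansen/deepseek-r1-distill-qwen-32b-awq \
    --max-model-len 32768 \
    --gpu-memory-utilization 0.98 \
    --kv-cache-dtype fp8
```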
Anyway...
I've done some searching, but **I can't find any information about a system prompt** for this thing. Anyone have the exact system prompt meant for these distilled models?
I did some raw testing using ChatML. The model wants to wrap its thoughts in <think> XML tags, so I added that to my test system prompt. Here's a test with a generic system prompt I put together.
<|im_start|>system
I am Deepseek R1. I am capable of deep thought and reflection. When I respond to the user, I must first encase my thoughts in <think> tags. After I end the tag with </think> I must then utilize my thoughts and understanding to provide the user with the best possible response. I must use chain of thought and think creatively.
<|im_end|>
<|im_start|>user
Hi there. Are you ready to get started today?
<|im_end|>
<|im_start|>assistant
<think>
Okay, so the user greeted me with "Hi there. Are you ready to get started today?" I need to respond in a friendly and helpful manner. Let me think about how to structure my reply.
First, I should acknowledge their greeting. Maybe say something like, "Hello! I'm Deepseek R1, and I'm ready to assist you whenever you are." That sounds welcoming and sets the tone.
Next, I should express my willingness to help. Perhaps mention that I'm here to provide information, answer questions, or assist with any tasks they have in mind. It's important to show that I'm versatile in my support.
I should also encourage them to share what they need help with. Maybe something like, "How can I assist you today?" or "What would you like to discuss?" This invites them to specify their needs, making the interaction more productive.
I should keep the language natural and conversational, avoiding any technical jargon unless necessary. The goal is to make the user feel comfortable and supported.
Let me put it all together. Start with a friendly greeting, state my readiness to help, mention the areas I can assist with, and invite them to specify their needs. That should cover all the bases and make the response helpful and engaging.
Wait, maybe I should vary the sentence structure a bit to make it more engaging. Instead of starting with "I'm ready to assist," maybe I can say, "I'm here and ready to help you with whatever you need." That flows better and feels more approachable.
Also, perhaps adding an emoji could make the response feel warmer, but since I'm supposed to keep it professional, maybe it's better to stick to text without emojis.
Hmm, let me read it again to ensure it's clear and concise. "Hello! I'm Deepseek R1, and I'm ready to assist you whenever you are. I'm here to provide information, answer questions, or help with any tasks you have in mind. How can I assist you today?" That sounds good. It's welcoming, informative, and inviting.
I think that's a solid response. It covers acknowledging the greeting, stating my availability, explaining how I can help, and asking how I can assist. It's friendly and professional, which is what I aim for.
</think>
Hello! I'm Deepseek R1, and I'm ready to assist you whenever you are. I'm here to provide information, answer questions, or help with any tasks you have in mind. How can I assist you today?<|im_end|>
<|im_start|>user
---
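If anyone wants to poke at this the same way, here's a minimal sketch of how I'm calling it. It assumes the vLLM OpenAI-compatible server from the command above is running on localhost:8000, the system prompt is my home-made one (not an official DeepSeek prompt), and the temperature is just a guess. The regex strips the <think> block so only the final answer prints:

```python
# Minimal sketch: query the vLLM OpenAI-compatible server started above
# (assumed at localhost:8000) with my home-made system prompt, then strip
# the <think> block so only the final answer prints. Not an official
# DeepSeek prompt or a recommended sampling setup.
import re
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

SYSTEM = (
    "I am Deepseek R1. I am capable of deep thought and reflection. "
    "When I respond to the user, I must first encase my thoughts in <think> "
    "tags. After I end the tag with </think> I must then utilize my thoughts "
    "and understanding to provide the user with the best possible response. "
    "I must use chain of thought and think creatively."
)

resp = client.chat.completions.create(
    model="casperhansen/deepseek-r1-distill-qwen-32b-awq",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Hi there. Are you ready to get started today?"},
    ],
    temperature=0.6,  # guess; tune for your use case
)

raw = resp.choices[0].message.content
answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
print(answer)
```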
So... any help? Anybody want to talk about how they're setting this bad boy up? My first tests are looking pretty good. I'm optimistic; I just want to make sure I'm getting optimal quality, which usually means using the officially defined system prompt if I can find it. It doesn't appear to be in the repo. | 2025-01-20T16:20:52 | teachersecret | self.LocalLLaMA | /r/LocalLLaMA/comments/1i5tyin/lets_get_the_qwen_deepseek_32b_r1_model_running/ | 27 |
Autocomplete me is a fully browser-based autocompletion engine powered by a few small LLMs. What are your reviews on this? | 2 | 2025-01-20T16:21:37 | Avyakta18 | main.dfcjnv79i0pr1.amplifyapp.com | https://main.dfcjnv79i0pr1.amplifyapp.com/ |
AI/ML Checkpointing difficulty. I will not promote | 1 | [removed] | 2025-01-20T16:38:14 | Personal-Dot2872 | self.LocalLLaMA | /r/LocalLLaMA/comments/1i5udtk/aiml_checkpointing_difficulty_i_will_not_promote/ |