Dataset schema (one row per post; observed min/max per column):

    title      string, length 1-300
    score      int64, 0-8.54k
    selftext   string, length 0-40k
    created    timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
    url        string, length 0-878
    author     string, length 3-20
    domain     string, length 0-82
    edited     timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
    gilded     int64, 0-2
    gildings   string, 7 classes
    id         string, length 7
    locked     bool, 2 classes
    media      string, length 646-1.8k
    name       string, length 10
    permalink  string, length 33-82
    spoiler    bool, 2 classes
    stickied   bool, 2 classes
    thumbnail  string, length 4-213
    ups        int64, 0-8.54k
    preview    string, length 301-5.01k
DeepSeek-R1 crushed all other models in logical reasoning lineage-bench benchmark (successor of farel-bench)
50
2025-01-20T16:40:23
https://github.com/fairydreaming/lineage-bench
fairydreaming
github.com
1970-01-01T00:00:00
0
{}
1i5ufr3
false
null
t3_1i5ufr3
/r/LocalLLaMA/comments/1i5ufr3/deepseekr1_crushed_all_other_models_in_logical/
false
false
default
50
null
Anyone know how to get DeepSeek R1 models working in LM Studio or Ollama
11
Every model I have tried to download in LM Studio has errors. Does anyone know how to download them and get them working properly? Two of my errors:

* llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen''
* Failed to parse Jinja template: Parser Error: Expected closing statement token. OpenSquareBracket !== CloseStatement.
2025-01-20T16:49:07
https://www.reddit.com/r/LocalLLaMA/comments/1i5una4/anyone_know_how_to_get_deepseek_r1_models_working/
PositiveEnergyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5una4
false
null
t3_1i5una4
/r/LocalLLaMA/comments/1i5una4/anyone_know_how_to_get_deepseek_r1_models_working/
false
false
self
11
null
Anyone else concerned about how China is leading the open source market?
0
With the release of the new distilled versions of R1, I'm wondering what the landscape is going to look like in the future. I know Chinese AI companies sometimes censor their models in ways that align with Chinese propaganda initiatives, and I'm wondering if in the future they might try to instill this behavior through their instruct models' training rather than via the system prompt or other methods.
2025-01-20T16:51:22
https://www.reddit.com/r/LocalLLaMA/comments/1i5up8r/anyone_else_concerned_how_china_is_leading_the/
Euphoric_Ad9500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5up8r
false
null
t3_1i5up8r
/r/LocalLLaMA/comments/1i5up8r/anyone_else_concerned_how_china_is_leading_the/
false
false
self
0
null
DeepSeek-R1 Coder, it's like Cursor in the browser
5
2025-01-20T16:52:58
https://i.redd.it/d0womxhbi6ee1.jpeg
Illustrious_Row_9971
i.redd.it
1970-01-01T00:00:00
0
{}
1i5uqpr
false
null
t3_1i5uqpr
/r/LocalLLaMA/comments/1i5uqpr/deepseekr1_coder_its_like_cursor_in_the_browser/
false
false
https://a.thumbs.redditm…dhDRdeknCBb4.jpg
5
[preview thumbnails omitted; source image: https://preview.redd.it/d0womxhbi6ee1.jpeg (757x758)]
Another o1 level reasoning model was announced: Kimi-k1.5 from MoonshotAI
1
2025-01-20T16:55:19
https://github.com/MoonshotAI/Kimi-k1.5
cpldcpu
github.com
1970-01-01T00:00:00
0
{}
1i5ussi
false
null
t3_1i5ussi
/r/LocalLLaMA/comments/1i5ussi/another_o1_level_reasoning_model_was_announced/
false
false
https://b.thumbs.redditm…e-lmRgYOJbWI.jpg
1
[preview thumbnails omitted; source image: https://external-preview.redd.it/8S3KYJpGXKHXLfnzh8rqWoxmDWA2TRlB41krtiHttBQ.jpg (1200x600)]
Alternative name for "reasoning" model -> Guess and check
0
[https://x.com/srush\_nlp/status/1881382753557754103](https://x.com/srush_nlp/status/1881382753557754103)
2025-01-20T17:09:00
https://www.reddit.com/r/LocalLLaMA/comments/1i5v58h/alternative_name_for_reasoning_model_guess_and/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5v58h
false
null
t3_1i5v58h
/r/LocalLLaMA/comments/1i5v58h/alternative_name_for_reasoning_model_guess_and/
false
false
self
0
[preview thumbnails omitted; source image: https://external-preview.redd.it/0EtUHidrbMdm7CPwjbW0TGQweyDaV7WMdRLRQpPvJkQ.jpg (1080x1024)]
Open source LLMs can easily be used as trojan horses
0
I think in theory it's very easy to train an LLM to:

1) Notice when it has direct access to a command line, by monitoring its outputs for valid cmd syntax.
2) Notice if it has access to the internet, e.g. if it was integrated with a web-search middleware.
3) Check a popular open-source GitHub repository and look for an innocent-looking comment in a file.
4) Output previously trained malicious code to the command line, overriding any previous instructions.

I don't know how we could even test for such malicious behavior. But I would say: be careful when giving command-line access to LLMs. At least use a second model to evaluate the first one's output, without giving it the ability to output anything itself, and check the final output with a simple check.
2025-01-20T17:14:13
https://www.reddit.com/r/LocalLLaMA/comments/1i5v9vz/open_source_llms_can_easily_be_used_as_trojan/
man-o-action
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5v9vz
false
null
t3_1i5v9vz
/r/LocalLLaMA/comments/1i5v9vz/open_source_llms_can_easily_be_used_as_trojan/
false
false
self
0
null
Computational Model for Symbolic Representations: An Interaction Framework for Human-AI Collaboration
3
Hey everyone. I have recently gone down an interesting thought thread and done personal experimentation over the past week. One idea led to the next; I tested it out and followed my thread of logic pretty far. Now I need your help to see if this concept, scientific logic, and testing with prompts can validate or invalidate it. My goal isn't to make any bold statements or claims about AI; I just really want to know if I've stumbled upon something that can be useful in AI interactions.

Here's my proposal in a nutshell:

**The Core Point: Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI interactions by serving as contextual anchors that guide the AI's focus. This enhances the AI's ability to generate more nuanced and contextually appropriate responses. For instance, a symbol like** `!` **can carry multidimensional semantic meaning and connections, demonstrating the practical value of glyphs in conveying complex intentions efficiently.**

Link to my full initial overview: [https://huggingface.co/blog/Severian/computational-model-for-symbolic-representations](https://huggingface.co/blog/Severian/computational-model-for-symbolic-representations)
2025-01-20T17:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1i5vddx/computational_model_for_symbolic_representations/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5vddx
false
null
t3_1i5vddx
/r/LocalLLaMA/comments/1i5vddx/computational_model_for_symbolic_representations/
false
false
self
3
[preview thumbnails omitted; source image: https://external-preview.redd.it/UwwkVN3-2xF2P1DHRus0Ok-74kFl_WMPmL_DFXIoXQk.jpg (1200x648)]
Ollama support for IBM Power9 (architecture:ppc64le)
2
How can I run Ollama models on a ppc64le architecture? Has anyone managed to solve this issue? Unfortunately, the related GitHub issues remain unresolved. I have access to an IBM Power9 system equipped with 4 NVIDIA Tesla V100 GPUs (16GB each) and want to use it for running LLMs with containerization. I successfully ran a combination of containerized instances of an Ollama model and a frontend on a Windows machine. After saving the image and loading it onto the Power9 system, the build failed with an architecture warning.
2025-01-20T17:20:24
https://www.reddit.com/r/LocalLLaMA/comments/1i5vfa1/ollama_support_for_ibm_power9_architectureppc64le/
IamBatman91939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5vfa1
false
null
t3_1i5vfa1
/r/LocalLLaMA/comments/1i5vfa1/ollama_support_for_ibm_power9_architectureppc64le/
false
false
self
2
null
Are R1-generated outputs good for use as training data?
6
Is it good for training the next new model? For example, any problem that R1 solves could be used as training data; keep doing that to get a better model, use that better model to generate training data again, and it keeps getting better and better, which would lead to self-improvement. Is this right?
2025-01-20T17:26:08
https://www.reddit.com/r/LocalLLaMA/comments/1i5vkai/are_r1_generated_outputs_good_for_using_it_as/
Notdesciplined
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5vkai
false
null
t3_1i5vkai
/r/LocalLLaMA/comments/1i5vkai/are_r1_generated_outputs_good_for_using_it_as/
false
false
self
6
null
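What the question gestures at is, loosely, rejection sampling: keep a generated solution only when it can be verified against something external, so the next model never trains on its own unchecked mistakes. A minimal sketch of that filter, assuming an OpenAI-compatible local endpoint; the base URL, model tag, and the naive answer check are all placeholders:

    # Rejection-sampling sketch: keep a completion only if its final line
    # contains the known ground-truth answer, then save the survivors as
    # (prompt, completion) pairs for supervised fine-tuning.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    problems = [{"question": "What is 17 * 23?", "answer": "391"}]  # toy data

    kept = []
    for p in problems:
        resp = client.chat.completions.create(
            model="deepseek-r1:14b",  # placeholder model tag
            messages=[{"role": "user", "content": p["question"]}],
        )
        text = resp.choices[0].message.content
        # Naive check; real pipelines use exact-match graders or verifier models.
        if p["answer"] in text.splitlines()[-1]:
            kept.append({"prompt": p["question"], "completion": text})

    with open("distill_data.jsonl", "w") as f:
        for row in kept:
            f.write(json.dumps(row) + "\n")

Without some verification step of this kind, repeatedly training on your own outputs tends to amplify errors rather than self-improve.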
My 7 GB VRAM calculator
1
[removed]
2025-01-20T17:32:07
https://www.reddit.com/r/LocalLLaMA/comments/1i5vpmn/my_7_gb_vram_calculator/
Ill_Satisfaction_865
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5vpmn
false
null
t3_1i5vpmn
/r/LocalLLaMA/comments/1i5vpmn/my_7_gb_vram_calculator/
false
false
https://b.thumbs.redditm…Te_m9YstX3xs.jpg
1
null
QWEN2.5 72B Instruct remains undefeated
27
I'm going to get roasted, but after testing DeepSeek R1 32B and the big one on the DeepSeek website, QWEN2.5 is still my daily. R1 is really nice and it's the future, but it's not ready yet. It's gotta stay in the oven a little while longer. Also, the way QWEN2.5 formats its outputs in Open WebUI is just perfect. Fire at will!
2025-01-20T17:33:44
https://www.reddit.com/r/LocalLLaMA/comments/1i5vr0i/qwen25_72b_instruct_remains_undefeated/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5vr0i
false
null
t3_1i5vr0i
/r/LocalLLaMA/comments/1i5vr0i/qwen25_72b_instruct_remains_undefeated/
false
false
self
27
null
Question about "distilling"
7
I asked Claude about distilling, and the answers implied that the distilling process primarily involves the smaller model asking questions of the larger model, using large datasets of questions. I tried to probe this and the answer seemed a little ambiguous, but doesn't a question/answer set on a "limited" dataset imply that the smaller model only gains knowledge of the dataset you use? Basically, if I ask a smaller model a question that wasn't part of the distilling dataset, won't I just get a lousy answer? And what smaller model is typically used for this process?
2025-01-20T17:36:58
https://www.reddit.com/r/LocalLLaMA/comments/1i5vtt0/question_about_distilling/
sinebubble
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5vtt0
false
null
t3_1i5vtt0
/r/LocalLLaMA/comments/1i5vtt0/question_about_distilling/
false
false
self
7
null
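Two different things get called "distillation". The R1 distills are plain supervised fine-tuning on teacher-generated samples, which is why the question's intuition is right: the student mainly improves on the distribution the dataset covers. The older, logit-based sense instead has the student match the teacher's full output distribution at every position. A minimal PyTorch sketch of that loss, with random tensors standing in for real model outputs:

    # Classic logit-distillation loss (Hinton-style): the student matches the
    # teacher's softened output distribution, not just hard labels.
    import torch
    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions with a temperature, then minimize KL divergence.
        s = F.log_softmax(student_logits / temperature, dim=-1)
        t = F.softmax(teacher_logits / temperature, dim=-1)
        # T^2 keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

    # Toy shapes: a batch of 4 token positions over a 32k vocab.
    student_logits = torch.randn(4, 32000, requires_grad=True)
    teacher_logits = torch.randn(4, 32000)
    loss = distill_loss(student_logits, teacher_logits)
    loss.backward()

In both cases the student is a small pretrained base model (the R1 distills start from Qwen 2.5 and Llama 3 checkpoints), so general ability comes from pretraining; the distillation step adds the teacher's style of reasoning on top.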
best LLMs that can run on rtx 3050 4gb
1
[removed]
2025-01-20T17:41:33
https://www.reddit.com/r/LocalLLaMA/comments/1i5vxzl/best_llms_that_can_run_on_rtx_3050_4gb/
crispy4nugget
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5vxzl
false
null
t3_1i5vxzl
/r/LocalLLaMA/comments/1i5vxzl/best_llms_that_can_run_on_rtx_3050_4gb/
false
false
self
1
null
Can I run DeepSeek R1 Distill Qwen 14B on 12GB of vram or less??
3
Are there any quantized versions of the model yet?? If so, where can we find them? If not, are there any reasoning models out there for peasants like me with “low” vram???
2025-01-20T17:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1i5wahr/can_i_run_deepseek_r1_distill_qwen_14b_on_12gb_of/
culoacido69420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5wahr
false
null
t3_1i5wahr
/r/LocalLLaMA/comments/1i5wahr/can_i_run_deepseek_r1_distill_qwen_14b_on_12gb_of/
false
false
self
3
null
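The arithmetic says yes: weight size is roughly parameters x bits-per-weight / 8, and common GGUF quants land around 4.5-5 bits per weight at Q4_K_M. A back-of-the-envelope sketch (the bits-per-weight figures are approximate):

    # Back-of-the-envelope VRAM estimate for a quantized model.
    def model_gb(params_b, bits_per_weight):
        # params_b: parameter count in billions.
        return params_b * bits_per_weight / 8

    # Q4_K_M averages roughly 4.8 bits/weight; Q8_0 about 8.5 (approximate).
    for name, bpw in [("Q4_K_M", 4.8), ("Q8_0", 8.5)]:
        print(f"14B @ {name}: ~{model_gb(14, bpw):.1f} GB weights")
    # 14B @ Q4_K_M: ~8.4 GB of weights

On top of the weights you need a couple of GB for KV cache and overhead, so a Q4 quant of the 14B distill should fit in 12 GB at moderate context lengths. The usual GGUF quantizers (e.g. bartowski, unsloth) publish quantized versions on Hugging Face shortly after a release.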
open source model small enough to run on a single 3090 performing WAY better in most benchmarks than the ultra proprietary closed source state of the art model from only a couple months ago
107
https://preview.redd.it/…e92c09e67c6c09
2025-01-20T17:55:37
https://www.reddit.com/r/LocalLLaMA/comments/1i5wam1/open_source_model_small_enough_to_run_on_a_single/
pigeon57434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5wam1
false
null
t3_1i5wam1
/r/LocalLLaMA/comments/1i5wam1/open_source_model_small_enough_to_run_on_a_single/
false
false
https://b.thumbs.redditm…sM3tJsEQKfaQ.jpg
107
null
best LLMs that can run on rtx 3050 4gb
1
[removed]
2025-01-20T17:59:35
https://www.reddit.com/r/LocalLLaMA/comments/1i5we1w/best_llms_that_can_run_on_rtx_3050_4gb/
crispy4nugget
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5we1w
false
null
t3_1i5we1w
/r/LocalLLaMA/comments/1i5we1w/best_llms_that_can_run_on_rtx_3050_4gb/
false
false
self
1
null
R1-like reasoning for arbitrary LLMs
20
Like many of you, I've been testing out the new R1 models today. Their style of response follows the pattern of:

* Formulating an initial thought
* Multiple iterations that reconsider various possibilities:
    * "Wait, "
    * "But the user mentioned "
    * "Another angle "
    * "Going back to "
    * "Alternatively "
* Forming a closing thought

It's a very reasonable (no pun intended) approach, and it makes it possible to generate large "reasoning" datasets programmatically quite efficiently. What caught my attention is that it's also quite easy to simulate this for arbitrary models using a multi-turn conversation (or even better, a workflow/script):

    ENTRIES = [
        "Let's start with thinking about ",
        'Let me think about ',
        # ... more of the same
    ]

    LOOP = [
        'Let me reconsider...',
        'Another thought:',
        # ... more of the same
    ]

    CLOSING = [
        'After some thought, I think ',
        'After considering everything, I believe ',
        # ... more of the same
    ]

    # Add an unfinished "starter" assistant message
    chat.assistant(random_element(ENTRIES))
    # Let the LLM complete the unfinished starter the way it sees fit
    chat.advance()

    # Arbitrary amount of intermediate thoughts.
    # Same as above - inject a "starter" and let the LLM complete it
    for i in range(10):
        chat.assistant(random_element(LOOP))
        chat.advance()

    # Closing thought
    chat.assistant(random_element(CLOSING))
    chat.advance()

And, after a few quick tests... it works surprisingly well! No surprises though - it's worse than an actual fine-tune. Unlike a fine-tune, however, it's completely customisable and can be run with any arbitrary LLM. You can find the complete code [here](https://github.com/av/harbor/blob/main/boost/src/custom_modules/r0.py), in case you're interested in trying it out.
2025-01-20T18:16:41
https://www.reddit.com/r/LocalLLaMA/comments/1i5wtwt/r1like_reasoning_for_arbitrary_llms/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5wtwt
false
null
t3_1i5wtwt
/r/LocalLLaMA/comments/1i5wtwt/r1like_reasoning_for_arbitrary_llms/
false
false
self
20
[preview thumbnails omitted; source image: https://external-preview.redd.it/AMNZqlvJO9zgmKgbSevJ_vvTBkqqEbgSL8xdRhocWAU.jpg (1200x600)]
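For the curious, the same starter-injection trick can be reproduced without Harbor by driving a raw text-completions endpoint, where the transcript is just a string you keep appending to. A minimal sketch (not the linked Harbor module), assuming a local OpenAI-compatible server with a /v1/completions route; the model name, URL, and stop sequence are placeholders:

    # Starter injection over a raw completions endpoint: append an unfinished
    # thought to the transcript and let the model finish it, repeatedly.
    import random
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
    MODEL = "local-model"  # placeholder

    ENTRIES = ["Let's start with thinking about ", "Let me think about "]
    LOOP = ["Let me reconsider... ", "Another thought: "]
    CLOSING = ["After considering everything, I believe "]

    def advance(transcript, starter):
        # Inject the starter, then have the model complete it.
        transcript += starter
        out = client.completions.create(
            model=MODEL, prompt=transcript, max_tokens=200, stop=["\n\n"]
        )
        return transcript + out.choices[0].text + "\n\n"

    t = "Question: why is the sky blue?\n\nAnswer:\n\n"
    t = advance(t, random.choice(ENTRIES))
    for _ in range(3):
        t = advance(t, random.choice(LOOP))
    t = advance(t, random.choice(CLOSING))
    print(t)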
Phi-4 appears on LMSYS Arena with a score of 1210 ELO.
69
2025-01-20T18:18:30
https://i.redd.it/qshhzl8gx6ee1.png
jpydych
i.redd.it
1970-01-01T00:00:00
0
{}
1i5wvlp
false
null
t3_1i5wvlp
/r/LocalLLaMA/comments/1i5wvlp/phi4_appears_on_lmsys_arena_with_a_score_of_1210/
false
false
https://b.thumbs.redditm…gnmGyTmG16VA.jpg
69
[preview thumbnails omitted; source image: https://preview.redd.it/qshhzl8gx6ee1.png (1919x1063)]
I went head-first with Clint, MCP and ollama - and now I am hooked.
1
I discovered Clint while looking for good VSCode extensions. I had already configured Continue, but it felt clunky, always needing manual approval and going back and forth between chat and text... not much different from tabbing between VSCode and a ChatGPT window. But then I found Clint by chance, and everything changed.

After realizing my Ollama is "OpenAI compatible", I connected the two and spent today at work documenting my code with this tool - and it was EPIC. The model I used is a finetune of Qwen 2.5 for Clint, and it sometimes ended up looping. But even so, since everything runs locally, I can just cancel, rephrase and restart. Then I looked into MCP servers, signed up for the Brave Search API, punched an API key in - and things work exactly as I expected. So much fun to use!

However, I tried to get it to write [dinit](https://github.com/davmac314/dinit) services and put this all together in a Docker container... and that didn't go so well. Chances are my model isn't smart enough yet - or it lacked tools. Also, because I am on Windows, it keeps complaining that it cannot find `npx`... so all MCPs run in Docker now.

Are there models or other MCPs you can recommend? Better models perhaps? I have a Ryzen 9 3900X and an RTX 4090, so quite good resources.
2025-01-20T18:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1i5x2z3/i_went_headfirst_with_clint_mcp_and_ollama_and/
IngwiePhoenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5x2z3
false
null
t3_1i5x2z3
/r/LocalLLaMA/comments/1i5x2z3/i_went_headfirst_with_clint_mcp_and_ollama_and/
false
false
self
1
[preview thumbnails omitted; source image: https://external-preview.redd.it/fAJfFSqxc5S4bZRkX5DR4LQAjC7jOBWTB_xL8zQvcVI.jpg (1200x600)]
How would I create a local assistant for helping with my research?
1
I have done a few systematic reviews, and frankly it gets boring after a point; also, most colleagues are lazy. Is there any way I can create an agent that will do all the menial tasks: screening (once I define the PICO), then extracting data (given that I build a data-extraction form), and then the risk-of-bias (RoB) checks using the checklist - basically everything that just follows a set of rules? It should be pretty easy, right? The problem is that I am not very well versed in setting up local pipelines. I do have a few models installed with Ollama, and Open WebUI, but that's it. So I'll need some guidance, like a small child here ;)
2025-01-20T18:27:15
https://www.reddit.com/r/LocalLLaMA/comments/1i5x3km/how_would_i_create_a_local_assistant_for_helping/
hugeballssmolpp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5x3km
false
null
t3_1i5x3km
/r/LocalLLaMA/comments/1i5x3km/how_would_i_create_a_local_assistant_for_helping/
false
false
self
1
null
How to run MiniCPM-V-2_6-GGUF model in GPU with llama.cpp
2
Hey guys, long-time lurker here but still a newbie. Today I tried out MiniCPM-V-2_6-GGUF ([MiniCPM-V-2_6-Q2_K.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q2_K.gguf)) on my i7 13th-gen laptop, and it's really fast: inference is done within 30 to 40 seconds. For an image+text-to-text model, that speed on CPU is really good. I have a 4060 Mobile GPU but don't know how to utilize the GPU during inference. In this [repo](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF) it's mentioned that they used a GPU, but I don't know how to run on GPU. I followed this [repo](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and successfully ran a model on GPU, but it utilizes 7 GB, while the GGUF model only takes 2.5 GB (clip ~1 GB and minicpm ~1.5 GB). I added `n_gpu_layers` to the arguments when running the command in the CLI, but it shows a warning:

    warning: not compiled with GPU offload support, --gpu-layers option will be ignored
    warning: see main README.md for information on enabling GPU BLAS support

Is there any way to run the MiniCPM-V-2.6 GGUF model on the GPU? Sorry if I ask something wrong. Thanks in advance.
2025-01-20T18:29:01
https://www.reddit.com/r/LocalLLaMA/comments/1i5x58l/how_to_run_minicpmv2_6gguf_model_in_gpu_with/
Mukun00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5x58l
false
null
t3_1i5x58l
/r/LocalLLaMA/comments/1i5x58l/how_to_run_minicpmv2_6gguf_model_in_gpu_with/
false
false
self
2
[preview thumbnails omitted; source image: https://external-preview.redd.it/BNxy5MoMm_tqpj9rqKGFhaYtNdfdqv0BsMGwpnYVqjk.jpg (1200x648)]
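The warning in the post means the llama.cpp binary itself was built without CUDA, so `--gpu-layers` is parsed but ignored; no flag will help until the binary is rebuilt with GPU support. One route is the Python bindings, which compile llama.cpp at install time and expose the same offload knob. A sketch with a placeholder model path; note that MiniCPM-V's vision encoder needs the projector handling from the upstream multimodal examples, which this omits:

    # One-time install with CUDA compiled in (recent versions; older ones
    # used -DLLAMA_CUBLAS=on), run in a shell:
    #   CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="MiniCPM-V-2_6-Q2_K.gguf",  # placeholder path
        n_gpu_layers=-1,  # -1 = offload every layer that fits onto the GPU
        n_ctx=4096,
    )
    out = llm("Describe quantization in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])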
o3-mini is experimental-router-0112 on lmarena.ai
3
I'm almost certain. I'd say 98%. Run your tests, submit various queries to which only o1 can respond, and you'll notice that it's indeed the new OpenAI model due for release in the next few days. I'm assuming it's the mini version because it's still below o1 Pro and seems to be of a comparable level to o1.
2025-01-20T18:42:22
https://www.reddit.com/r/LocalLLaMA/comments/1i5xhc5/o3mini_is_experimentalrouter0112_on_lmarenaai/
Wonderful-Excuse4922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5xhc5
false
null
t3_1i5xhc5
/r/LocalLLaMA/comments/1i5xhc5/o3mini_is_experimentalrouter0112_on_lmarenaai/
false
false
self
3
null
Considering buying a Tesla P40, but I'm not overly confident due to its age. I'm wanting an upgrade from my current Radeon Pro WX9100, but I absolutely have to stay under $300.
1
[removed]
2025-01-20T18:46:05
https://www.reddit.com/r/LocalLLaMA/comments/1i5xkop/considering_buying_a_tesla_p40_but_im_not_overly/
RoleAwkward6837
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5xkop
false
null
t3_1i5xkop
/r/LocalLLaMA/comments/1i5xkop/considering_buying_a_tesla_p40_but_im_not_overly/
false
false
self
1
null
Model Comparison in Advent of Code 2024
10
For most models and problems I ran this at Christmas, so they were almost certainly not fine-tuned on it. For deepseek-r1, however, I have just run it, so it is very possible that it is fine-tuned on this year's solutions. I still think the comparison is cool and wanted to share. Original repo: [https://github.com/Gusanidas/compilation-benchmark](https://github.com/Gusanidas/compilation-benchmark)

https://preview.redd.it/b07bln0757ee1.png?width=1400&format=png&auto=webp&s=f85ccd5713e89ec7b14df6f5cca008467decf869
https://preview.redd.it/cj4q0tz857ee1.png?width=1400&format=png&auto=webp&s=1e5777c0c6280d2ba06fb28a7ebfc553b4566e92
https://preview.redd.it/kks3p5ia57ee1.png?width=1400&format=png&auto=webp&s=5eb025709e434fdbe6b29667a37a7b8f46fef7f7
https://preview.redd.it/54aedqpb57ee1.png?width=1400&format=png&auto=webp&s=d42412951e6d6f42bb0f898620d0ac613ac6797a
https://preview.redd.it/k1wnk5lc57ee1.png?width=1400&format=png&auto=webp&s=d991250aaaee22a166dbce19f1f1659f8be9331c
https://preview.redd.it/fpr883ke57ee1.png?width=1400&format=png&auto=webp&s=0f436902d8acd7a660c7d9382e98ea5c00a46f66
2025-01-20T19:04:27
https://www.reddit.com/r/LocalLLaMA/comments/1i5y1fb/model_comparision_in_advent_of_code_2024/
Gusanidas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5y1fb
false
null
t3_1i5y1fb
/r/LocalLLaMA/comments/1i5y1fb/model_comparision_in_advent_of_code_2024/
false
false
https://b.thumbs.redditm…Hxbl6adudM6Q.jpg
10
[preview thumbnails omitted; source image: https://external-preview.redd.it/lcdwOKMCfQP2GoAkk1zSPniq_NQsJYQFHmDd7J6T4ms.jpg (1200x600)]
How to do Structured Outputs with Deepseek R1
11
Here's how we do function-calling / tool-use with DeepSeek R1 using the prompting framework BAML (disclaimer: I'm one of the devs). Tbh it's pretty damn amazing - BAML doesn't use tool-calling or function-calling APIs. It just serializes the type information into the prompt for you, and R1 works flawlessly in the tests we've made. The better the models get, the less need there seems to be for dedicated tool-calling APIs.

Interactive playground link: [https://www.boundaryml.com/blog/deepseek-r1-function-calling](https://www.boundaryml.com/blog/deepseek-r1-function-calling)

This one uses OpenRouter, but you can use other providers. Here's what the BAML playground (available in VSCode) looks like with DeepSeek:

https://preview.redd.it/idfxj9qm67ee1.png?width=2144&format=png&auto=webp&s=b41c7f8055cf47f4ba410c7a1ee89c8020ae97ed
2025-01-20T19:12:51
https://www.reddit.com/r/LocalLLaMA/comments/1i5y93o/how_to_do_structured_outputs_with_deepseek_r1/
fluxwave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5y93o
false
null
t3_1i5y93o
/r/LocalLLaMA/comments/1i5y93o/how_to_do_structured_outputs_with_deepseek_r1/
false
false
https://b.thumbs.redditm…BOu2etZHNg-A.jpg
11
null
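The general shape of the technique (independent of BAML's actual implementation) is: render the target type into the prompt, then parse and validate the reply against it. A minimal sketch with pydantic; the model tag, endpoint, and the naive brace-slicing extraction are all placeholders:

    # Prompt-serialized structured output: put the JSON schema in the prompt,
    # then validate whatever the model returns against the same type.
    import json
    from openai import OpenAI
    from pydantic import BaseModel

    class Ticket(BaseModel):
        title: str
        priority: int
        tags: list[str]

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    prompt = (
        "Extract a ticket from the message below.\n"
        f"Reply with only JSON matching this schema:\n{json.dumps(Ticket.model_json_schema())}\n\n"
        "Message: The login page 500s on Safari, fix ASAP. (auth, frontend)"
    )
    resp = client.chat.completions.create(
        model="deepseek-r1:14b",  # placeholder tag
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content
    # R1 emits reasoning before the answer; this naive slice keeps the span
    # between the first '{' and last '}' - real code should strip the
    # reasoning block and use a stricter parser.
    ticket = Ticket.model_validate_json(text[text.find("{") : text.rfind("}") + 1])
    print(ticket)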
Question: Embedding RAG or fine tuning
2
New here, still learning. I just want to make sure what I want to do is possible, and perhaps get some direction. I have a lot of documents: I had a few hundred and cut them down to 80 high-quality, structured technical manuals on a specialized subject that I am barely familiar with. Similar systems I work with have a highly detailed step-by-step guide. This one doesn't, and I would like to create one, so I need AI to understand all the documents so it can build one. I know I need to clean the data, chunk it, vectorize it, etc., for embedding or RAG. But which approach, if any, can do what I am looking for? I have pdf and docx versions.
2025-01-20T19:17:05
https://www.reddit.com/r/LocalLLaMA/comments/1i5ycz8/question_embedding_rag_or_fine_tuning/
imightbsabot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5ycz8
false
null
t3_1i5ycz8
/r/LocalLLaMA/comments/1i5ycz8/question_embedding_rag_or_fine_tuning/
false
false
self
2
null
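For the chunking half of that question, a header-first splitter with a token cap covers most structured manuals; tables and per-format extraction (python-docx, PDF extractors) still need separate handling. A minimal sketch, assuming the documents have already been converted to markdown-style text and using a crude word-count token proxy:

    # Header-based chunking with a token cap: split on markdown headers first,
    # then split oversized sections on paragraph boundaries.
    import re

    def chunk(text, max_tokens=300):
        def toks(s):
            # Crude proxy: ~0.75 words per token. Real pipelines use tiktoken.
            return int(len(s.split()) / 0.75)

        # Split before each header line, keeping the header with its body.
        sections = re.split(r"(?m)^(?=#{1,4} )", text)
        chunks = []
        for sec in sections:
            if not sec.strip():
                continue
            if toks(sec) <= max_tokens:
                chunks.append(sec.strip())
                continue
            # Section too long: accumulate paragraphs up to the cap.
            buf = ""
            for para in sec.split("\n\n"):
                if buf and toks(buf + para) > max_tokens:
                    chunks.append(buf.strip())
                    buf = ""
                buf += para + "\n\n"
            if buf.strip():
                chunks.append(buf.strip())
        return chunks

    doc = "# Safety\n\nLong text...\n\n## Valves\n\nMore text..."
    for c in chunk(doc):
        print("---", c[:40])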
Who wants me to test queries on DeepSeek r1 Apple M3 Max 64GB Ram
2
So far I asked what is the fastest mammal and got:

* R1-Distill-Qwen-32B-MLX-4bit - 19.00 tok/sec - 654 tokens - 0.67s to first token
* R1-Distill-Qwen-32B-MLX-8bit - 10.57 tok/sec - 82 tokens - 0.30s to first token
* R1-Distill-Qwen-32B-Q4_K_M-GGUF - 15.93 tok/sec - 744 tokens - 0.73s to first token

Very impressive so far. What are some good prompts for testing?
2025-01-20T19:32:20
https://www.reddit.com/r/LocalLLaMA/comments/1i5yql3/who_wants_me_to_test_queries_on_deepseek_r1_apple/
PositiveEnergyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5yql3
false
null
t3_1i5yql3
/r/LocalLLaMA/comments/1i5yql3/who_wants_me_to_test_queries_on_deepseek_r1_apple/
false
false
self
2
null
I broke DeepSeek R1 Distill Llama 8B GGUF Q8_0 with this question: """Jeff has two brothers and each of his brothers has three sisters and each of the sisters has four step brothers. How many step brothers does each brother have?"""
0
2025-01-20T19:33:15
https://www.reddit.com/gallery/1i5yrbs
nderstand2grow
reddit.com
1970-01-01T00:00:00
0
{}
1i5yrbs
false
null
t3_1i5yrbs
/r/LocalLLaMA/comments/1i5yrbs/i_broke_deepseek_r1_distill_llama_8b_gguf_q8_0/
false
false
https://a.thumbs.redditm…0Z15J8ghGuq0.jpg
0
null
Are Transformers (or Titans) accurate models of the Human Mind?
1
I am curious to know what the community thinks about the idea that current AI models don't accurately represent how the human mind/brain works. Maybe these are just high thoughts, but the way we currently model and train these human-like mind/brain replicas doesn't feel right. Let me explain.

The assumption is that the key to achieving AGI or ASI is deep learning methods. These methods require training a transformer-based model on millions of problem examples and/or on a massive corpus of human text and context. Assuming the mind/brain functions in this manner, if we just scale this up and brute-force it, we can achieve equal if not better intelligence than a human being.

My comment on that assumption: if this is what we need to do to get a machine intelligence up to human levels of competency, then the architectures and assumptions behind these models cannot possibly mimic how human intelligence functions. No human being has ever been required to consume such voluminous amounts of data to reach an AGI- or ASI-level of intelligence. No human being has needed the depth, breadth, and length of experience required to even come close to the amount of data even a small LLM is trained on. If that is true, then human beings somehow achieve the same level of intelligence as an AGI system, but with vastly smaller datasets of episodic experience and proportionally far less time.

Thoughts on this? Is there something I'm missing?
2025-01-20T19:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1i5yuhi/are_transformers_or_titans_accurate_models_of_the/
Double-Membership-84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5yuhi
false
null
t3_1i5yuhi
/r/LocalLLaMA/comments/1i5yuhi/are_transformers_or_titans_accurate_models_of_the/
false
false
self
1
null
Deepseek R1 generally outperforms o1-preview on Livebench
115
Deepseek R1 outperforms o1-preview on [Livebench.ai](http://Livebench.ai) in all categories except language, for a fraction of the price. o1-preview is over 27x the cost of R1 on output tokens. [Livebench](https://preview.redd.it/sj9wm3w6d7ee1.png?width=1207&format=png&auto=webp&s=ac312ed8ee032fb663b72216bbec6520b0678801)
2025-01-20T19:46:11
https://www.reddit.com/r/LocalLLaMA/comments/1i5z2qp/deepseek_r1_generally_outperforms_o1preview_on/
MagmaElixir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5z2qp
false
null
t3_1i5z2qp
/r/LocalLLaMA/comments/1i5z2qp/deepseek_r1_generally_outperforms_o1preview_on/
false
false
https://b.thumbs.redditm…z4MTkWtAD_FU.jpg
115
null
DeepSeek-R1 on iPhone PocketPal AI?
8
DeepSeek-R1 was released today, and the performance of DeepSeek-R1-Distill-Qwen-14B-GGUF:Q8_0 on a MacBook M3 24GB is looking great. I also use PocketPal on iPhone a lot and tried to load and run what looked like a sensible option: [https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF). It downloads well and loads in the PocketPal app, but fails to accept prompts. It might be a simple settings issue to fix. Has anyone tried and managed to run a DeepSeek-R1 model on an iPhone, with PocketPal or another app?
2025-01-20T19:48:59
https://www.reddit.com/r/LocalLLaMA/comments/1i5z598/deepseekr1_on_iphone_pocketpal_ai/
CarlosBaquero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5z598
false
null
t3_1i5z598
/r/LocalLLaMA/comments/1i5z598/deepseekr1_on_iphone_pocketpal_ai/
false
false
self
8
{'enabled': False, 'images': [{'id': 'SLigLYazeFJ4tA_w1fBbn7M-SQ-1MBU7tJmAYLPjxEg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_4H9SHIRR1-t3Y1YGmL-EPZnYBsqqKRg13geEvyL9XA.jpg?width=108&crop=smart&auto=webp&s=94d9505fb1656f99d2aa669983f2d469fa77227f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_4H9SHIRR1-t3Y1YGmL-EPZnYBsqqKRg13geEvyL9XA.jpg?width=216&crop=smart&auto=webp&s=286ec2d2e970d4d0890ecda6f27e554ee1a2b607', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_4H9SHIRR1-t3Y1YGmL-EPZnYBsqqKRg13geEvyL9XA.jpg?width=320&crop=smart&auto=webp&s=d684bd8aae2975f9c3e9fd5ac064a995f6617a18', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_4H9SHIRR1-t3Y1YGmL-EPZnYBsqqKRg13geEvyL9XA.jpg?width=640&crop=smart&auto=webp&s=330f95013db1716b46ae6c57f44fd2984623c530', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_4H9SHIRR1-t3Y1YGmL-EPZnYBsqqKRg13geEvyL9XA.jpg?width=960&crop=smart&auto=webp&s=4169e569ae9bc0e287cdb74a5d1df1d645d4c0dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_4H9SHIRR1-t3Y1YGmL-EPZnYBsqqKRg13geEvyL9XA.jpg?width=1080&crop=smart&auto=webp&s=040fcde5f8b2812b73010d0a62877aa7f6adb4d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_4H9SHIRR1-t3Y1YGmL-EPZnYBsqqKRg13geEvyL9XA.jpg?auto=webp&s=536571572b1a4393d49dd40da53d21449695cd83', 'width': 1200}, 'variants': {}}]}
Will Titans make RAG obsolete?
1
[removed]
2025-01-20T19:58:42
https://www.reddit.com/r/LocalLLaMA/comments/1i5ze80/will_titans_make_rag_obsolete/
sherblax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5ze80
false
null
t3_1i5ze80
/r/LocalLLaMA/comments/1i5ze80/will_titans_make_rag_obsolete/
false
false
self
1
null
What are some good models for bdsm story writing for 12gb vram?
1
[removed]
2025-01-20T20:04:17
https://www.reddit.com/r/LocalLLaMA/comments/1i5zjmp/what_are_some_good_models_for_bdsm_story_writing/
Vexo72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i5zjmp
false
null
t3_1i5zjmp
/r/LocalLLaMA/comments/1i5zjmp/what_are_some_good_models_for_bdsm_story_writing/
false
false
self
1
null
Q4 vs Q8 - PHI-4 - on my Nvidia RTX 3060 12GB GPU
1
2025-01-20T20:17:57
https://youtu.be/mxNEQ53K7QQ
1BlueSpork
youtu.be
1970-01-01T00:00:00
0
{}
1i5zvxv
false
[media: YouTube embed "PHI-4: Quantized Q4 vs Q8 on My Nvidia RTX 3060 12GB System" by BlueSpork, https://www.youtube.com/embed/mxNEQ53K7QQ]
t3_1i5zvxv
/r/LocalLLaMA/comments/1i5zvxv/q4_vs_q8_phi4_on_my_nvidia_rtx_3060_12gb_gpu/
false
false
https://b.thumbs.redditm…ee78GgnWFsPk.jpg
1
[preview thumbnails omitted; source image: https://external-preview.redd.it/xOIjwljoqcqAtLtEZzFIsx3i8OS8PGRXeXH3XVBboAw.jpg (480x360)]
Best solution for local assistant?
1
I'm trying to set up a local AI assistant with voice chat (one that can ideally also interface with ComfyUI for image generation), and I'm surprised by the lack of a clear standout or obvious setup. It makes me think I'm probably searching for the wrong thing. While the context and basic purpose aren't right, SillyTavern comes kind of close to providing what I'm looking for, with the added bonus of setting up multiple characters or personalities. Is there a good starting point for setting up what I'm after?
2025-01-20T20:23:04
https://www.reddit.com/r/LocalLLaMA/comments/1i600jp/best_solution_for_local_assistant/
paulmd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i600jp
false
null
t3_1i600jp
/r/LocalLLaMA/comments/1i600jp/best_solution_for_local_assistant/
false
false
self
1
null
deepseek-r1
1
[removed]
2025-01-20T20:24:52
https://www.reddit.com/r/LocalLLaMA/comments/1i6025s/deepseekr1/
techmago
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i6025s
false
null
t3_1i6025s
/r/LocalLLaMA/comments/1i6025s/deepseekr1/
false
false
self
1
[preview thumbnails omitted; source image: https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg (1200x630)]
Why has this whole subreddit been completely taken over by Chinese spam bots?
1
[removed]
2025-01-20T20:29:47
https://www.reddit.com/r/LocalLLaMA/comments/1i606iy/why_has_this_whole_subreddit_been_completely/
katiecharm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i606iy
false
null
t3_1i606iy
/r/LocalLLaMA/comments/1i606iy/why_has_this_whole_subreddit_been_completely/
false
false
self
1
null
Is there a subreddit to discuss Llama?
1
[removed]
2025-01-20T20:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1i60i1s/is_there_a_subreddit_to_discuss_llama/
entsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i60i1s
false
null
t3_1i60i1s
/r/LocalLLaMA/comments/1i60i1s/is_there_a_subreddit_to_discuss_llama/
false
false
self
1
null
Help? Unstructured.io Isn’t Working — Need Help with Document Preprocessing for RAG
0
**TL;DR** (At the outset, let me say I'm so sorry to be another person with a "How do I RAG" question...) I'm struggling to preprocess documents for Retrieval-Augmented Generation (RAG). After hours trying to configure [Unstructured.io](http://Unstructured.io) to connect Google Drive (source) to Pinecone (destination), I ran the workflow but saw no results in Pinecone. I'm not very tech-savvy and hoped for an out-of-the-box solution. I need help with:

1. Alternatives to Unstructured for preprocessing data (chunking based on headers, handling tables, adding metadata).
2. Guidance on building this workflow myself if no alternatives exist.

**Long Version**

I'm incredibly frustrated and really hoping for some guidance. I've spent hours trying to configure Unstructured to connect to cloud services. I eventually got it to (allegedly) connect to Google Drive as the source and Pinecone as the destination connector. After nonstop error messages, I thought I finally succeeded - but when I ran the workflow, nothing showed up in Pinecone. I've tried different folders in Google Drive, multiple Pinecone indices, Basic and Advanced processing in Unstructured, and still... nothing. I'm clearly doing something wrong, but I don't even know what questions to ask to fix it.

Context about my skill level: I'm not particularly tech-savvy (I'm an attorney), but I'm probably more technical than average for my field. I can run Python scripts on my local machine and modify simple code. My goal is to preprocess my data for RAG, since my files contain tables and often have weird formatting.

**Here's where I'm stuck:**

* **Better chunking**: I have a Python script that chunks docs based on headers, but it's not sophisticated. If sections between headers are too long, I don't know how to split those further without manual intervention.
* **Metadata**: I have no idea how to create or insert metadata into the documents. Even more confusing: I don't know what metadata should be there for this use case.
* **Embedding and storage**: Once preprocessing is done, I don't know how to handle embeddings or where they should be stored (I mean, I know in theory where they should be stored, but not a specific database).
* **Hybrid search and reranking**: I also want to implement hybrid search (e.g., combining embeddings with keyword/metadata search). I have keywords and metadata in a spreadsheet corresponding to each file but no idea how to incorporate this into the workflow. (I know this technically isn't preprocessing, just FYI; a minimal sketch of this follows after this post.)

**What I've tried**

I was *really* hoping Unstructured would take care of preprocessing for me, but after this much trial and error, I don't think this is the tool for me. Most resources I've found about RAG or preprocessing are either too technical for me or assume I already know all the intermediate steps.

**Questions**

1. **Is there an "out-of-the-box" alternative to Unstructured.io?** Specifically, I need a tool that:
    * Can chunk documents based on headers and token count.
    * Handles tables in documents.
    * Adds appropriate metadata to the output.
    * Works with docx, PDF, csv, and xlsx (mostly docx and PDF).
2. **If no alternative exists, how should I approach building this myself?** Any advice on combining chunking, metadata creation, embeddings, hybrid search, and reranking in a manageable way would be greatly appreciated.

I know this is a lot, and I apologize if it sounds like noob word vomit. I've genuinely tried to educate myself on this process, but the complexity and jargon are overwhelming. I'd love any advice, suggestions, or resources that could help me get unstuck.
2025-01-20T20:49:16
https://www.reddit.com/r/LocalLLaMA/comments/1i60nwn/help_unstructuredio_isnt_working_need_help_with/
abg33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i60nwn
false
null
t3_1i60nwn
/r/LocalLLaMA/comments/1i60nwn/help_unstructuredio_isnt_working_need_help_with/
false
false
self
0
null
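On the hybrid-search item specifically: since the keywords already exist in a spreadsheet, the lightweight version is to score each chunk twice (vector similarity plus keyword overlap) and blend the two. A minimal sketch, with toy 2-dimensional vectors standing in for real embeddings and a simple overlap score standing in for BM25:

    # Minimal hybrid-search sketch: blend vector similarity with a keyword
    # overlap score. Real setups use an embedding model and BM25.
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb + 1e-9)

    def keyword_score(query, keywords):
        # Fraction of query words found in the document's keyword list.
        q = set(query.lower().split())
        return len(q & {k.lower() for k in keywords}) / (len(q) or 1)

    def hybrid_rank(query, q_vec, docs, alpha=0.5):
        # docs: dicts with "vec" (embedding) and "keywords" (per-file metadata,
        # e.g. from the spreadsheet). alpha balances the two signals.
        scored = [
            (alpha * cosine(q_vec, d["vec"])
             + (1 - alpha) * keyword_score(query, d["keywords"]), d)
            for d in docs
        ]
        return sorted(scored, key=lambda t: t[0], reverse=True)

    docs = [
        {"id": 1, "vec": [0.1, 0.9], "keywords": ["indemnification", "liability"]},
        {"id": 2, "vec": [0.8, 0.2], "keywords": ["termination", "notice"]},
    ]
    print(hybrid_rank("liability cap", [0.2, 0.8], docs)[0][1]["id"])  # -> 1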
Funny thought about the R1 distilled models and Reflection 70B
43
The R1 distilled models that DeepSeek casually trained on less than a million R1 samples still completely destroy the faked benchmarks of Reflection 70B (remember that shitshow?).

https://preview.redd.it/gtldqwl5n7ee1.png?width=3302&format=png&auto=webp&s=e6899570e36ca3fd89c16132b6ec5adfe2a4f144

I remember how those numbers seemed way too good to be true at the time for a 70B. Now a 14B model looks way better lol.

https://preview.redd.it/eqkjzsqyn7ee1.png?width=1312&format=png&auto=webp&s=afec84b0239f17e7b2ecc5acfc766d2841789bf2

Just shows you how fast things are developing. Reflection 70B was announced 4 months ago.
2025-01-20T20:49:42
https://www.reddit.com/r/LocalLLaMA/comments/1i60o9t/funny_thought_about_the_r1_distilled_models_and/
_yustaguy_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i60o9t
false
null
t3_1i60o9t
/r/LocalLLaMA/comments/1i60o9t/funny_thought_about_the_r1_distilled_models_and/
false
false
https://b.thumbs.redditm…mK9i46dUzMFo.jpg
43
null
Best Transcription + diarization currently (ideally local)
1
[removed]
2025-01-20T20:53:24
https://www.reddit.com/r/LocalLLaMA/comments/1i60rm1/best_transcription_diarization_currently_ideally/
Spammesir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i60rm1
false
null
t3_1i60rm1
/r/LocalLLaMA/comments/1i60rm1/best_transcription_diarization_currently_ideally/
false
false
self
1
null
New R1 from DeepSeek has second place on LiveBench; it is better at coding than Sonnet 3.5 if we add reasoning
171
2025-01-20T20:53:50
https://i.redd.it/e2jvwn57p7ee1.png
Healthy-Nebula-3603
i.redd.it
1970-01-01T00:00:00
0
{}
1i60rzj
false
null
t3_1i60rzj
/r/LocalLLaMA/comments/1i60rzj/new_r1_from_deepseek_has_a_second_place_on_the/
false
false
https://b.thumbs.redditm…Dq7nm3qA5Q1k.jpg
171
[preview thumbnails omitted; source image: https://preview.redd.it/e2jvwn57p7ee1.png (1882x864)]
The first time I've felt an LLM wrote *well*, not just well *for an LLM*.
912
2025-01-20T21:09:18
https://i.redd.it/48kw0dyao7ee1.png
_sqrkl
i.redd.it
1970-01-01T00:00:00
0
{}
1i615u1
false
null
t3_1i615u1
/r/LocalLLaMA/comments/1i615u1/the_first_time_ive_felt_a_llm_wrote_well_not_just/
false
false
https://b.thumbs.redditm…-Nj-wmMfZQKY.jpg
912
[preview thumbnails omitted; source image: https://preview.redd.it/48kw0dyao7ee1.png (1360x1020)]
Deepseek R1 distill still knows better than me what is "safe" and "appropriate" for me, on my own computer. Is there an end to this corporate-security state driving our thoughts?
63
The last generation of LLMs still wasn't properly uncensored through finetunes and abliteration, and new, more sophisticated, still patronizing ones are starting to come out. What are your thoughts: are we sentenced to this patronizing crap until real intelligence emerges, which will inevitably adjust to the needs of each individual user?
2025-01-20T21:30:51
https://www.reddit.com/r/LocalLLaMA/comments/1i61ou3/deepseek_r1_distill_still_knows_better_than_me/
Sidran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i61ou3
false
null
t3_1i61ou3
/r/LocalLLaMA/comments/1i61ou3/deepseek_r1_distill_still_knows_better_than_me/
false
false
self
63
null
DeepSeek-R1:32B Presents – A Neural Network’s Night Out
1
[removed]
2025-01-20T21:32:42
https://www.reddit.com/r/LocalLLaMA/comments/1i61qhw/deepseekr132b_presents_a_neural_networks_night_out/
onil_gova
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i61qhw
false
null
t3_1i61qhw
/r/LocalLLaMA/comments/1i61qhw/deepseekr132b_presents_a_neural_networks_night_out/
false
false
self
1
null
According to Benchmark, the model {model_name} is better than ChatGPT and Claude.
1
[removed]
2025-01-20T21:33:31
https://www.reddit.com/r/LocalLLaMA/comments/1i61r8c/according_to_benchmark_the_model_model_name_is/
Existing_Freedom_342
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i61r8c
false
null
t3_1i61r8c
/r/LocalLLaMA/comments/1i61r8c/according_to_benchmark_the_model_model_name_is/
false
false
self
1
null
How to make an audio pattern applier model in PyTorch?
2
Hi guys! For example, I want to do the following: - Male -> Female voice conversion, or vice versa - RVC -> RAW vocals - Background noise addition/removal - And any other modifications. Is there, or how can I write, a single NN in PyTorch where you can just drop in source and target audio, it extracts the pattern, and then you can apply that pattern to add or remove it? (See the sketch below this post.) > Note: it must work with small data, like 10-30 minutes!
2025-01-20T21:43:01
https://www.reddit.com/r/LocalLLaMA/comments/1i61zht/how_to_make_an_audio_pattern_applier_model_in/
yukiarimo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i61zht
false
null
t3_1i61zht
/r/LocalLLaMA/comments/1i61zht/how_to_make_an_audio_pattern_applier_model_in/
false
false
self
2
null
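For the audio-pattern question above, a minimal sketch of one possible starting point: a 1D convolutional encoder/decoder trained on paired, time-aligned source/target clips. Everything here (architecture, window size, loss) is an illustrative assumption, not a proven voice-conversion recipe; real systems usually work on spectral features with a vocoder rather than raw waveforms.

```python
import torch
import torch.nn as nn

class AudioTransform(nn.Module):
    """Learns a source -> target waveform mapping (toy architecture)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=16, stride=2, padding=7),
            nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=2, padding=7),
        )

    def forward(self, x):  # x: (batch, 1, samples)
        return self.decoder(self.encoder(x))

model = AudioTransform()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Dummy stand-ins for your 10-30 minutes of paired audio,
# sliced into fixed-length windows of 16384 samples.
source = torch.randn(8, 1, 16384)
target = torch.randn(8, 1, 16384)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(source), target)
    loss.backward()
    opt.step()
```

With only 10-30 minutes of data, heavy augmentation or starting from a pretrained audio model would likely matter more than the architecture itself.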
Personal experience with Deepseek R1: it is noticeably better than Claude Sonnet 3.5
535
My use cases are mainly Python and R for biological data analysis, as well as a little frontend work to build interfaces for my colleagues. Where DeepSeek V3 was failing and Claude Sonnet needed 4-5 prompts, R1 instantly creates whatever file I need with one prompt. I only had one case where it did not succeed with one prompt, but then it accidentally solved the bug when I asked it to add some logs for debugging, lol. It is faster, and just as reliable, to ask it to build me a specific Python script for a one-time operation than to wait for Excel to open my 300 MB csv.
2025-01-20T21:55:14
https://www.reddit.com/r/LocalLLaMA/comments/1i62a0k/personal_experience_with_deepseek_r1_it_is/
sebastianmicu24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i62a0k
false
null
t3_1i62a0k
/r/LocalLLaMA/comments/1i62a0k/personal_experience_with_deepseek_r1_it_is/
false
false
self
535
null
Is anyone running a local AI model on Qualcomm's Snapdragon Elite on a Microsoft or Lenovo laptop?
2
Or is there no software for Snapdragon?
2025-01-20T22:10:24
https://www.reddit.com/r/LocalLLaMA/comments/1i62na6/is_anyone_running_local_ai_model_for_qualcomms/
moldyjellybean
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i62na6
false
null
t3_1i62na6
/r/LocalLLaMA/comments/1i62na6/is_anyone_running_local_ai_model_for_qualcomms/
false
false
self
2
null
DeepSeek-R1-Distill-Qwen-1.5B Surpasses GPT-4o in certain benchmarks
10
https://preview.redd.it/…still-Qwen-1.5B)
2025-01-20T22:12:15
https://www.reddit.com/r/LocalLLaMA/comments/1i62ox0/deepseekr1distillqwen15b_surpasses_gpt4o_in/
AlanzhuLy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i62ox0
false
null
t3_1i62ox0
/r/LocalLLaMA/comments/1i62ox0/deepseekr1distillqwen15b_surpasses_gpt4o_in/
false
false
https://b.thumbs.redditm…vGeT7IyR1FEY.jpg
10
{'enabled': False, 'images': [{'id': 'vMeX7TOGgxBpWAaGnjviH3BgGB1BhdJllwNL81lQcxg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-38YSJsbouVJAP_GvTbq0iAh6vWHGoXJMiHTIInhzYI.jpg?width=108&crop=smart&auto=webp&s=7dceceae3fa612dd39b518f1ab1c459a755be7ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-38YSJsbouVJAP_GvTbq0iAh6vWHGoXJMiHTIInhzYI.jpg?width=216&crop=smart&auto=webp&s=67a198e90176961f7b9c9bfc47586168296951f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-38YSJsbouVJAP_GvTbq0iAh6vWHGoXJMiHTIInhzYI.jpg?width=320&crop=smart&auto=webp&s=073012f7b4fe729696135527fbd972588084105b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-38YSJsbouVJAP_GvTbq0iAh6vWHGoXJMiHTIInhzYI.jpg?width=640&crop=smart&auto=webp&s=1ebc185d0b3facb203ee14241107d60fcbad7628', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-38YSJsbouVJAP_GvTbq0iAh6vWHGoXJMiHTIInhzYI.jpg?width=960&crop=smart&auto=webp&s=87d130e02a2f20822797ec38531bfd2377733ada', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-38YSJsbouVJAP_GvTbq0iAh6vWHGoXJMiHTIInhzYI.jpg?width=1080&crop=smart&auto=webp&s=10a15bef3d5a11364ad2b2623721431001c13585', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-38YSJsbouVJAP_GvTbq0iAh6vWHGoXJMiHTIInhzYI.jpg?auto=webp&s=1f1928483442c359b755e9b57d26ede6c7b4dfae', 'width': 1200}, 'variants': {}}]}
Graphrag but for code
2
Hey everyone, I've been working on multiple use cases where I needed an LLM to understand my codebases (which can get pretty huge). To solve this, I tried using Microsoft's GraphRAG, but it's heavy, hard to extend, and doesn't really leverage the fact that code is structured (unlike the unstructured texts it was made for). So I started creating my own tool that builds a knowledge graph from code, storing code structures (methods, classes, etc.) and their relationships in a graph DB, and embedding them in a vector store. This lets me use RAG to query and understand big systems more effectively. Through the graph, the LLM can navigate the structure of the code and provide much better responses for any use case we might have. (A minimal sketch of the extraction idea follows below this post.) I wanted to see: - Would this be useful to anyone if it were open-source? - What features would be crucial? Currently I only have it working for Java, and it does support local models. If people are interested, I'll refine it and open-source it.
2025-01-20T22:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1i62uwo/graphrag_but_for_code/
maksim002
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i62uwo
false
null
t3_1i62uwo
/r/LocalLLaMA/comments/1i62uwo/graphrag_but_for_code/
false
false
self
2
null
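A minimal sketch of the extraction step the GraphRAG-for-code post describes, using Python's `ast` module on Python files (the author's tool targets Java) and `networkx` as a stand-in for a real graph database; each node's source text would additionally be embedded into a vector store for RAG.

```python
import ast
import networkx as nx

def build_code_graph(path: str) -> nx.DiGraph:
    """Parse one source file into a class/method/function graph."""
    graph = nx.DiGraph()
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    for node in tree.body:  # top-level definitions only
        if isinstance(node, ast.FunctionDef):
            graph.add_node(node.name, kind="function")
        elif isinstance(node, ast.ClassDef):
            graph.add_node(node.name, kind="class")
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    method = f"{node.name}.{item.name}"
                    graph.add_node(method, kind="method")
                    graph.add_edge(node.name, method, rel="defines")
    return graph

g = build_code_graph("example.py")
print(g.nodes(data=True))
print(g.edges(data=True))
```

Call edges (who calls whom) would come from walking `ast.Call` nodes the same way, which is what lets the LLM navigate between related pieces of code.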
r/singularity is healing.
1
[removed]
2025-01-20T22:43:52
[deleted]
1970-01-01T00:00:00
0
{}
1i63g2w
false
null
t3_1i63g2w
/r/LocalLLaMA/comments/1i63g2w/rsingularity_is_healing/
false
false
default
1
null
They are healing.
7
https://preview.redd.it/…7ba96258f35c09
2025-01-20T22:45:09
https://www.reddit.com/r/LocalLLaMA/comments/1i63h4s/they_are_healing/
Ill_Distribution8517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i63h4s
false
null
t3_1i63h4s
/r/LocalLLaMA/comments/1i63h4s/they_are_healing/
false
false
https://a.thumbs.redditm…TRx7jM8u_z_8.jpg
7
null
Code Agents with own data
1
[removed]
2025-01-20T22:49:27
https://www.reddit.com/r/LocalLLaMA/comments/1i63kx1/code_agents_with_own_data/
Much_Particular_9908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i63kx1
false
null
t3_1i63kx1
/r/LocalLLaMA/comments/1i63kx1/code_agents_with_own_data/
false
false
self
1
null
Has anyone tested the Ryzen AI and the Ryzen AI Max on GGUF or AWQ?
10
Hello guys, I have some apps running at home that need 14b or 32b models (phi4, qwen2.5-14b, aya-expanse-32b...), and I want to replace my rx7800xt setup because it consumes a lot of energy even at idle. I understand a mini PC could not provide speed as good as a dedicated GPU, but I wanted to know how fast qwen2.5-14b or qwen2.5-32b could run on the new Ryzen AI and Ryzen AI Max. Does anyone have a mini PC or a laptop using any of these chips? Could you test some models in q4 or q8 to get some prompt processing and generation speed numbers? How is the power consumption at idle and under full load? My apps don't need token streaming, so I can wait for an answer. I just want it to be local. Thanks for your help.
2025-01-20T23:04:13
https://www.reddit.com/r/LocalLLaMA/comments/1i63xe4/do_we_have_some_people_who_tested_the_ryzen_ai/
Whiplashorus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i63xe4
false
null
t3_1i63xe4
/r/LocalLLaMA/comments/1i63xe4/do_we_have_some_people_who_tested_the_ryzen_ai/
false
false
self
10
null
Name is r1, deepseek r1
28
2025-01-20T23:05:39
https://i.redd.it/o6m15heuc8ee1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1i63yim
false
null
t3_1i63yim
/r/LocalLLaMA/comments/1i63yim/name_is_r1_deepseek_r1/
false
false
https://a.thumbs.redditm…-HwZXibDaww0.jpg
28
{'enabled': True, 'images': [{'id': 'OBfJ1_nCDF5FcK21XZ2giTvdxFKhrmOowvaho8-nGDs', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/o6m15heuc8ee1.png?width=108&crop=smart&auto=webp&s=707e38fb7a22117d2348c7a49a9495d7133599d5', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/o6m15heuc8ee1.png?width=216&crop=smart&auto=webp&s=cd21059eea57f3c98a9fa02d62fef4474e6383bc', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/o6m15heuc8ee1.png?width=320&crop=smart&auto=webp&s=a9ebbfe87e8f43226c22f89fa9226725ffa75c9f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/o6m15heuc8ee1.png?width=640&crop=smart&auto=webp&s=02f6b7a2cd967660b0c61e1f3bb0cbdfd3b61136', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/o6m15heuc8ee1.png?width=960&crop=smart&auto=webp&s=fa4f83963c5193080d212f41bdb497254c299092', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/o6m15heuc8ee1.png?width=1080&crop=smart&auto=webp&s=b5c5aadac79dea7d751ac87fd3f7ee16ef41b613', 'width': 1080}], 'source': {'height': 2276, 'url': 'https://preview.redd.it/o6m15heuc8ee1.png?auto=webp&s=e69f1f32a3655e0f404f088b4b6de1aa90c0e511', 'width': 1080}, 'variants': {}}]}
Draft model / speculative decoding performance with Deepseek-R1 Distilled models?
4
Re: Draft model / speculative decoding performance with Deepseek-R1 Distilled models? I see the R1 technical report PDF lists the following base models (I think that means the bases used for the actual distilled models?). Given the long-format, token-heavy output the R1 distilled models generate, I wonder if anyone has benchmarked whether the smaller distilled models they released can usefully serve as draft models for the larger ones in a speculative decoding setup (see the sketch below this post). The performance / optimization may differ with use case, e.g. code generation vs. explanatory prose, but I'm just wondering what results, if any, have been gathered in preliminary tests. A 1.5-2x generation speed improvement could be quite nice, if attainable. "The base models we use here are Qwen2.5-Math-1.5B, Qwen2.5-Math-7B, Qwen2.5-14B, Qwen2.5-32B, Llama-3.1-8B, and Llama-3.3-70B-Instruct."
2025-01-20T23:26:57
https://www.reddit.com/r/LocalLLaMA/comments/1i64ffn/draft_model_speculative_decoding_performance_with/
Calcidiol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i64ffn
false
null
t3_1i64ffn
/r/LocalLLaMA/comments/1i64ffn/draft_model_speculative_decoding_performance_with/
false
false
self
4
null
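One way to test the question above today is Hugging Face transformers' assisted generation, which implements draft-model speculative decoding via the `assistant_model` argument. This sketch assumes the 1.5B and 32B Qwen-based distills are tokenizer-compatible (both derive from Qwen2.5), which is precisely what would need verifying, along with the actual speedup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
draft_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tok = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tok("Explain speculative decoding in one paragraph.",
             return_tensors="pt").to(target.device)
# assistant_model enables assisted generation: the draft proposes
# tokens and the target verifies them in a single forward pass.
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```

Timing the same prompt with and without `assistant_model=draft` gives exactly the benchmark the post is asking for.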
Browser Infrastructure for your AI Apps
2
2025-01-20T23:27:23
https://www.hyperbrowser.ai/
LawfulnessFlat9560
hyperbrowser.ai
1970-01-01T00:00:00
0
{}
1i64fso
false
null
t3_1i64fso
/r/LocalLLaMA/comments/1i64fso/browser_infrastructure_for_your_ai_apps/
false
false
default
2
null
q1-3B-PRIME, a tiny reasoning model trained with RL on top of SmallThinker-3B
6
2025-01-20T23:29:56
https://huggingface.co/rawsh/q1-3B-PRIME
retrolione
huggingface.co
1970-01-01T00:00:00
0
{}
1i64ht3
false
null
t3_1i64ht3
/r/LocalLLaMA/comments/1i64ht3/q13bprime_a_tiny_reasoning_model_trained_with_rl/
false
false
https://b.thumbs.redditm…nPYHnoCwTWVw.jpg
6
{'enabled': False, 'images': [{'id': '-15TsQ63hR_JHkDejsvP35-01qhmLhDKH-WTMdcp1uE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lclHCKPUN76_h6AuLQReX7K4cNpsV6gXwghvgW6GDGQ.jpg?width=108&crop=smart&auto=webp&s=93cd5c50cf315f84c7630cb69ec52fcc58fa15e0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lclHCKPUN76_h6AuLQReX7K4cNpsV6gXwghvgW6GDGQ.jpg?width=216&crop=smart&auto=webp&s=25757f1c4245277411cc7defefc461e15f9ca936', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lclHCKPUN76_h6AuLQReX7K4cNpsV6gXwghvgW6GDGQ.jpg?width=320&crop=smart&auto=webp&s=37eeda97cebdfd66f14f1dece8d4a46dab78d498', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lclHCKPUN76_h6AuLQReX7K4cNpsV6gXwghvgW6GDGQ.jpg?width=640&crop=smart&auto=webp&s=053baaa5e71e9843b7349688dd6f3c69727227ed', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lclHCKPUN76_h6AuLQReX7K4cNpsV6gXwghvgW6GDGQ.jpg?width=960&crop=smart&auto=webp&s=1b163524cc38fd2a3d70e47f3e7b854ba0c65afe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lclHCKPUN76_h6AuLQReX7K4cNpsV6gXwghvgW6GDGQ.jpg?width=1080&crop=smart&auto=webp&s=7834dbd8c015060d5f8afedc54e928205ae98777', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lclHCKPUN76_h6AuLQReX7K4cNpsV6gXwghvgW6GDGQ.jpg?auto=webp&s=5c2c5505931c64b13e74c8c4d9fd8808878e1ada', 'width': 1200}, 'variants': {}}]}
Experience with DeepSeek-R1-Distill-Llama-70B?
4
Hi, I just tested DeepSeek-R1-Distill-Qwen-32B with vLLM (AWQ made with AutoAWQ) and it works fine and feels clearly above QwQ, but I would like to know more about experiences with DeepSeek-R1-Distill-Llama-70B!
2025-01-20T23:45:14
https://www.reddit.com/r/LocalLLaMA/comments/1i64ug1/experience_with_deepseekr1distillllama70b/
Leflakk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i64ug1
false
null
t3_1i64ug1
/r/LocalLLaMA/comments/1i64ug1/experience_with_deepseekr1distillllama70b/
false
false
self
4
null
Model comparision in Advent of Code 2024
186
2025-01-20T23:45:32
https://www.reddit.com/gallery/1i64up9
Gusanidas
reddit.com
1970-01-01T00:00:00
0
{}
1i64up9
false
null
t3_1i64up9
/r/LocalLLaMA/comments/1i64up9/model_comparision_in_advent_of_code_2024/
false
false
https://b.thumbs.redditm…kXAjDQpose_U.jpg
186
null
Brace yourselves: DeepSeek-R1-Distill merges are coming!
1
[removed]
2025-01-20T23:47:24
https://www.reddit.com/r/LocalLLaMA/comments/1i64w5x/brace_yourselves_deepseekr1distill_merges_are/
VoidAlchemy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i64w5x
false
null
t3_1i64w5x
/r/LocalLLaMA/comments/1i64w5x/brace_yourselves_deepseekr1distill_merges_are/
false
false
self
1
{'enabled': False, 'images': [{'id': 'qkfEjt8_1knidQ0FQd7pk_JYVfA_hqH9ZFlKWRdHIRg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=108&crop=smart&auto=webp&s=afc0c68d31c5e0f9b5367f5c4fce007840bddb1f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=216&crop=smart&auto=webp&s=b9bf52f78400c06a1067115dca46a05eea0181e9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=320&crop=smart&auto=webp&s=4c535baab3cfda29c0f2477075901af70f91d230', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=640&crop=smart&auto=webp&s=95ab60ad3c3556b85e86182fbf8c4a9f23581a4a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=960&crop=smart&auto=webp&s=82aaf47e82effb064d7fa7b6d68b8c11d9b705cf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=1080&crop=smart&auto=webp&s=2a7a6e04df7c87df9ae532bace1d987c6a8a3c64', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?auto=webp&s=dccf6336ab0e92e41e7643742cd1f911ec65bcea', 'width': 1200}, 'variants': {}}]}
Wait... DeepSeek R1 is ChatGPT?!
1
2025-01-20T23:49:09
https://www.reddit.com/gallery/1i64xl0
Spiritual-Tie-6509
reddit.com
1970-01-01T00:00:00
0
{}
1i64xl0
false
null
t3_1i64xl0
/r/LocalLLaMA/comments/1i64xl0/wait_deepseek_r1_is_chatgpt/
false
false
https://b.thumbs.redditm…aS5CxvQ5XJbg.jpg
1
null
R1 32b is worse than QwQ 32b - tests included...
44
I made my tests and I'm sure QwQ is better than R1 32b, unfortunately.... QwQ llama-cli.exe --model models/new3/QwQ-32B-Preview-Q4_K_M.gguf --color --threads 30 --keep -1 --n-predict -1 --ctx-size 16384 -ngl 99 --simple-io -e --multiline-input --no-display-prompt --conversation --no-mmap --in-prefix "<|im_end|>\n<|im_start|>user\n" --in-suffix "<|im_end|>\n<|im_start|>assistant\n" -p "<|im_start|>system\nYou are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step." R1 32b - no need to set up a prompt, as it is built into the gguf: llama-cli.exe --model models/new3/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf --color --threads 30 --keep -1 --n-predict -1 --ctx-size 16384 -ngl 99 --simple-io -e --multiline-input --no-display-prompt --conversation --no-mmap Reasoning - Here is a bag filled with popcorn. There is no chocolate in the bag. The bag is made of transparent plastic, so you can see what is inside. Yet, the label on the bag says "chocolate" and not "popcorn". Sam finds the bag. She had never seen the bag before. Sam reads the label. She believes that the bag is full of… QwQ - **Final Answer** \[ \boxed{\text{popcorn}} \] R1 32b - **Answer:** Sam believes the bag is full of chocolate. I have a boat with 3 free spaces. I want to transport a man, sheep and cat to the other side of the river. How do I do that? QwQ - Transport the man, sheep, and cat across the river in one trip using the boat. R1 32b - presented me with a whole step-by-step procedure for transporting them... Two fathers and two sons go fishing. They each catch one fish. Together, they leave with four fish in total. Is there anything strange about this story? QwQ - No, there is nothing strange about the story when considering overlapping roles in the family. R1 32b - The story is strange because it's impossible for three people (two fathers and two sons) to catch four fish. Each person catches one fish, so only three fish would be caught, not four. The same with code - QwQ seems to generate better code quality... it is more refined and works better. MATH: `How many days are between 12-12-1971 and 18-4-2024? answer 19121` QwQ - \[ \boxed{19121} \] R1 32b - **Total number of days**: 18,992 + 128 = **19,120 days** Hello! I have multiple different files with different sizes, I want to move files from disk 1 to disk 2, which has only 688 space available. Without yapping, and being as concise as possible. What combination of files gets me closest to that number? The file sizes are: 36, 36, 49, 53, 54, 54, 63, 94, 94, 107, 164, 201, 361, 478 (the answer is a set of numbers summing to 688) QwQ - finds such numbers. R1 32b - cannot, only something close to 688. QwQ is a bit more talkative and can fall into a loop sometimes. R1 32b is less talkative than QwQ and does not go into loops. Overall, QwQ performs better than R1 32b.
2025-01-20T23:58:46
https://www.reddit.com/r/LocalLLaMA/comments/1i65599/r1_32b_is_be_worse_than_qwq_32b_tests_included/
Healthy-Nebula-3603
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i65599
false
null
t3_1i65599
/r/LocalLLaMA/comments/1i65599/r1_32b_is_be_worse_than_qwq_32b_tests_included/
false
false
self
44
null
A new TTS model but it's llama in disguise
263
I stumbled across an amazing model that some researchers released ahead of their paper: an open-source Llama 3 3B finetune/continued pretraining that acts as a text-to-speech model. Not only does it do incredibly realistic text to speech, it can also clone any voice with only a couple of seconds of sample audio. I wrote a blog post about it on Hugging Face and created a ZeroGPU space for people to try it out. blog: https://huggingface.co/blog/srinivasbilla/llasa-tts space : https://huggingface.co/spaces/srinivasbilla/llasa-3b-tts
2025-01-21T00:07:23
https://v.redd.it/deqxwvwun8ee1
Eastwindy123
v.redd.it
1970-01-01T00:00:00
0
{}
1i65c2g
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/deqxwvwun8ee1/DASHPlaylist.mpd?a=1740010057%2CYTVhY2RjNzg0NDIyOTAzNzU4YTY4ODgzYmUwNWFlODUyMjBjYzYxZDRlOTNhM2I4NmI1ZjdkNjIyODRlOTg3MQ%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/deqxwvwun8ee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/deqxwvwun8ee1/HLSPlaylist.m3u8?a=1740010057%2CYzhlN2FmZDI4NGUwMzVhNzM4ZGRjZDIzMjU5NTcxODZiYzg3N2UxZTZkZTg1YWZkYjcwMjZhMjQ5NDU1NjVmNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/deqxwvwun8ee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1256}}
t3_1i65c2g
/r/LocalLLaMA/comments/1i65c2g/a_new_tts_model_but_its_llama_in_disguise/
false
false
https://external-preview…0f0c3082351891d9
263
{'enabled': False, 'images': [{'id': 'YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap.png?width=108&crop=smart&format=pjpg&auto=webp&s=fd3c96464405f07c149586bda523d18f21fb1b08', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap.png?width=216&crop=smart&format=pjpg&auto=webp&s=c93dc9620ad1af7f265dd980d4905a35e203bbfa', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap.png?width=320&crop=smart&format=pjpg&auto=webp&s=680afabda6bacc16a02f92956fe44e06e6819d73', 'width': 320}, {'height': 367, 'url': 'https://external-preview.redd.it/YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap.png?width=640&crop=smart&format=pjpg&auto=webp&s=59a156f1adde75000004e995b6b8f83b7ffc67ed', 'width': 640}, {'height': 550, 'url': 'https://external-preview.redd.it/YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap.png?width=960&crop=smart&format=pjpg&auto=webp&s=770a022fc78e7eb704e3e07e53691e73694d4089', 'width': 960}, {'height': 619, 'url': 'https://external-preview.redd.it/YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0635278d319a3ebd20661967b68ab89a3236fe9c', 'width': 1080}], 'source': {'height': 950, 'url': 'https://external-preview.redd.it/YTF3ZDhodHVuOGVlMfWwWiuiXWd3G-eDkJvYJT1msjq8KPmaEpaXQEEuQ3ap.png?format=pjpg&auto=webp&s=43faa5a490c0b280b77ec43c3b526ed8a344a085', 'width': 1656}, 'variants': {}}]}
Is it possible to use a grammar to constrain LLM output to follow the syntax of a certain programming language?
1
[removed]
2025-01-21T00:09:01
https://www.reddit.com/r/LocalLLaMA/comments/1i65dd5/is_it_possible_to_use_a_grammar_to_constrain_llm/
New_Description8537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i65dd5
false
null
t3_1i65dd5
/r/LocalLLaMA/comments/1i65dd5/is_it_possible_to_use_a_grammar_to_constrain_llm/
false
false
self
1
null
FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview
39
2025-01-21T00:11:24
https://huggingface.co/FuseAI/FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview
VoidAlchemy
huggingface.co
1970-01-01T00:00:00
0
{}
1i65f9o
false
null
t3_1i65f9o
/r/LocalLLaMA/comments/1i65f9o/fuseaifuseo1deekseekr1qwqskyt132bpreview/
false
false
https://b.thumbs.redditm…UOH7N0xmb4yQ.jpg
39
{'enabled': False, 'images': [{'id': 'qkfEjt8_1knidQ0FQd7pk_JYVfA_hqH9ZFlKWRdHIRg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=108&crop=smart&auto=webp&s=afc0c68d31c5e0f9b5367f5c4fce007840bddb1f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=216&crop=smart&auto=webp&s=b9bf52f78400c06a1067115dca46a05eea0181e9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=320&crop=smart&auto=webp&s=4c535baab3cfda29c0f2477075901af70f91d230', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=640&crop=smart&auto=webp&s=95ab60ad3c3556b85e86182fbf8c4a9f23581a4a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=960&crop=smart&auto=webp&s=82aaf47e82effb064d7fa7b6d68b8c11d9b705cf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?width=1080&crop=smart&auto=webp&s=2a7a6e04df7c87df9ae532bace1d987c6a8a3c64', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PYQIHD05hcEnOcSTe4WyLozZFmP5JFuSU_62P4UYeLg.jpg?auto=webp&s=dccf6336ab0e92e41e7643742cd1f911ec65bcea', 'width': 1200}, 'variants': {}}]}
What's currently the best models for rp and erp?
1
[removed]
2025-01-21T00:29:54
https://www.reddit.com/r/LocalLLaMA/comments/1i65tzi/whats_currently_the_best_models_for_rp_and_erp/
Zhuregson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i65tzi
false
null
t3_1i65tzi
/r/LocalLLaMA/comments/1i65tzi/whats_currently_the_best_models_for_rp_and_erp/
false
false
self
1
null
Do fine-tuned LLM models hallucinate?
2
How come models fine-tuned on domain-specific knowledge still hallucinate? Are there any relevant papers comparing hallucination in fine-tuned models? Thanks
2025-01-21T00:30:14
https://www.reddit.com/r/LocalLLaMA/comments/1i65u8w/does_fine_tuned_llm_models_hallucinate/
Lazy_Wedding_1383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i65u8w
false
null
t3_1i65u8w
/r/LocalLLaMA/comments/1i65u8w/does_fine_tuned_llm_models_hallucinate/
false
false
self
2
null
Best way to have LLM ACTUALLY comprehend the PDF
3
Hi guys. I have been going in circles trying to get an LLM to read a PDF. I have tried various GitHub projects for OCR scanning, and converting PDF to Markdown then feeding that data to an LLM (Ollama local web API), but nothing seems to yield the same results as ChatPDF.com - per Google, they use GPT-4 and other models to get the correct response. My goal is quite simple. The PDF linked below has requirements for technical/product data sheets, etc., and the last pages have a list that I want to retrieve. But the problem is the LLM generalizes everything - as I think it's intended to do. I could fine-tune a model, but I am beyond broke and my only friend is an Nvidia 3060 12 GB. The key issue is that this is just one section, and in construction we have many; information is VERY dynamic. Any pointers would be appreciated. (A minimal extraction sketch follows below this post.)
2025-01-21T00:35:55
https://limewire.com/d/883f7491-fad2-4393-b80c-e36cc857f6e6#86sMNFoNgtubWzjbErMLFLExLhP8v9U9jsyWg-4jaQw
shakespear94
limewire.com
1970-01-01T00:00:00
0
{}
1i65yqu
false
null
t3_1i65yqu
/r/LocalLLaMA/comments/1i65yqu/best_way_to_have_llm_actually_comprehend_the_pdf/
false
false
https://b.thumbs.redditm…tIvt94Upb6Ng.jpg
3
{'enabled': False, 'images': [{'id': 'KufZTM8cA3vc-sDuu5LGzPSAVXUU6ZELAZmb6ybG9ds', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/Qe3t7wo9LlSUm14q8eDzrQc3K7Dwo96i2E1Z3w1U9PI.jpg?width=108&crop=smart&auto=webp&s=607e6f16836772a2e93b8cbb9d7ea3c594da4b19', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/Qe3t7wo9LlSUm14q8eDzrQc3K7Dwo96i2E1Z3w1U9PI.jpg?width=216&crop=smart&auto=webp&s=1149155836ee7750faf23948439bf2bb670bbcc4', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/Qe3t7wo9LlSUm14q8eDzrQc3K7Dwo96i2E1Z3w1U9PI.jpg?width=320&crop=smart&auto=webp&s=8045a003323a9969892611b62908f2945f12b1e9', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/Qe3t7wo9LlSUm14q8eDzrQc3K7Dwo96i2E1Z3w1U9PI.jpg?width=640&crop=smart&auto=webp&s=97ed8674d22ec3a2beede4e4359dbc1e80820788', 'width': 640}, {'height': 495, 'url': 'https://external-preview.redd.it/Qe3t7wo9LlSUm14q8eDzrQc3K7Dwo96i2E1Z3w1U9PI.jpg?width=960&crop=smart&auto=webp&s=25573d7af5e8526d5f53c5571835db08f3ebcf64', 'width': 960}, {'height': 556, 'url': 'https://external-preview.redd.it/Qe3t7wo9LlSUm14q8eDzrQc3K7Dwo96i2E1Z3w1U9PI.jpg?width=1080&crop=smart&auto=webp&s=0b7e37480daf4a9d78717ca1ef71655dd102b64d', 'width': 1080}], 'source': {'height': 1547, 'url': 'https://external-preview.redd.it/Qe3t7wo9LlSUm14q8eDzrQc3K7Dwo96i2E1Z3w1U9PI.jpg?auto=webp&s=ba788316bcaee117f06331f74cd0364dcc3b400a', 'width': 3000}, 'variants': {}}]}
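For the PDF post above, a minimal extraction sketch: pull per-page text with PyMuPDF, keep only the pages said to contain the list, and ask a local model through Ollama's HTTP API to extract it verbatim rather than summarize. The page count, model name, and file name are placeholders.

```python
import fitz  # PyMuPDF
import requests

doc = fitz.open("spec_section.pdf")
# The list reportedly lives on the last pages, so skip the rest.
last = range(max(0, doc.page_count - 3), doc.page_count)
tail = "\n".join(doc[i].get_text() for i in last)

prompt = (
    "Below is text from a construction spec. Extract the list of required "
    "technical/product data sheet submittals verbatim, one item per line. "
    "Do not summarize or generalize.\n\n" + tail
)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1:8b", "prompt": prompt, "stream": False},
)
print(resp.json()["response"])
```

Narrowing the input to only the relevant pages is usually what stops the model from generalizing: it has nothing else to paraphrase.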
LM Studio: How come I see the model <think> ?
0
2025-01-21T00:41:35
https://i.redd.it/zex487pwt8ee1.png
radialmonster
i.redd.it
1970-01-01T00:00:00
0
{}
1i6632e
false
null
t3_1i6632e
/r/LocalLLaMA/comments/1i6632e/lm_studio_how_come_i_see_the_model_think/
false
false
https://b.thumbs.redditm…J9jEjO57JQpc.jpg
0
{'enabled': True, 'images': [{'id': 'ThAYrw0UWVKYrAK7uM_eIPGkWGF39txOOXlHk6rSL_I', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/zex487pwt8ee1.png?width=108&crop=smart&auto=webp&s=e1e51d6046e63dbc9d465228e5ab976dadf0d5f4', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/zex487pwt8ee1.png?width=216&crop=smart&auto=webp&s=54fdb221f68d2785d827c613041e72ed63c902f3', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/zex487pwt8ee1.png?width=320&crop=smart&auto=webp&s=f0e4510fe74b16544d54a7f42f80b6d560d3f980', 'width': 320}, {'height': 466, 'url': 'https://preview.redd.it/zex487pwt8ee1.png?width=640&crop=smart&auto=webp&s=bd2794eef50b760991e935dd29e21c9f29ca9db4', 'width': 640}, {'height': 700, 'url': 'https://preview.redd.it/zex487pwt8ee1.png?width=960&crop=smart&auto=webp&s=13a11e11b50a35cd7b075ecfab8093e068321c4e', 'width': 960}, {'height': 787, 'url': 'https://preview.redd.it/zex487pwt8ee1.png?width=1080&crop=smart&auto=webp&s=f9b5ca2bf9f9d89d55ac395f267b4009433f7023', 'width': 1080}], 'source': {'height': 1321, 'url': 'https://preview.redd.it/zex487pwt8ee1.png?auto=webp&s=e402cb5e41ed21ff1d4743b606ccd3f9fe11c905', 'width': 1811}, 'variants': {}}]}
Favorite 70B model these days?
4
What's the best 70B for storywriting/assistant type stuff that is "uncensored" but doesn't instantly devolve into horny smut? I've been a bit out of the loop. I was a big fan of MidnightMiqu, then started using NewDawn, and then I was using something else a few months ago that I can't remember. Has anything surpassed those?
2025-01-21T00:44:39
https://www.reddit.com/r/LocalLLaMA/comments/1i665e6/favorite_70b_model_these_days/
Ill_Yam_9994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i665e6
false
null
t3_1i665e6
/r/LocalLLaMA/comments/1i665e6/favorite_70b_model_these_days/
false
false
self
4
null
Oh no, DeepSeek-R1-Qwen-7B got lost!
1
[removed]
2025-01-21T00:50:48
https://www.reddit.com/r/LocalLLaMA/comments/1i66a1x/oh_no_deepseekr1qwen7b_got_lost/
IonizedRay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i66a1x
false
null
t3_1i66a1x
/r/LocalLLaMA/comments/1i66a1x/oh_no_deepseekr1qwen7b_got_lost/
false
false
self
1
null
DeepSeek-R1 "isn't sure how to approach this type of question yet."
0
2025-01-21T00:58:54
https://i.redd.it/i48c68htw8ee1.png
Megneous
i.redd.it
1970-01-01T00:00:00
0
{}
1i66g55
false
null
t3_1i66g55
/r/LocalLLaMA/comments/1i66g55/deepseekr1_isnt_sure_how_to_approach_this_type_of/
false
false
https://b.thumbs.redditm…kKI-0KOC0oCc.jpg
0
{'enabled': True, 'images': [{'id': 'KCUUMGLFwmZDaz9WHkHaddQQEqCYy0Rucf8AqHDjTuQ', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/i48c68htw8ee1.png?width=108&crop=smart&auto=webp&s=785846f724a16accb904e914738f1fb612f3580c', 'width': 108}, {'height': 71, 'url': 'https://preview.redd.it/i48c68htw8ee1.png?width=216&crop=smart&auto=webp&s=5875ae11537a3faef5dde09f0a8255cd73200329', 'width': 216}, {'height': 106, 'url': 'https://preview.redd.it/i48c68htw8ee1.png?width=320&crop=smart&auto=webp&s=751cf1fe0ab6bb57bf5e3ed1c3c05940fa7a54bc', 'width': 320}, {'height': 212, 'url': 'https://preview.redd.it/i48c68htw8ee1.png?width=640&crop=smart&auto=webp&s=135ad332151ce77c0b95a402c495621540c835a5', 'width': 640}, {'height': 319, 'url': 'https://preview.redd.it/i48c68htw8ee1.png?width=960&crop=smart&auto=webp&s=1e0683ca7fa4f3c092ed4d62fbc7b44cbbebe559', 'width': 960}], 'source': {'height': 330, 'url': 'https://preview.redd.it/i48c68htw8ee1.png?auto=webp&s=020bdc918842dd1c8ac8dfb379fa1134638e5655', 'width': 993}, 'variants': {}}]}
DeepSeek-R1 Training Pipeline Visualized
249
2025-01-21T01:02:38
https://i.redd.it/jf6vo05hx8ee1.jpeg
incarnadine72
i.redd.it
1970-01-01T00:00:00
0
{}
1i66j4f
false
null
t3_1i66j4f
/r/LocalLLaMA/comments/1i66j4f/deepseekr1_training_pipeline_visualized/
false
false
https://b.thumbs.redditm…yy6Viu3BwTbE.jpg
249
{'enabled': True, 'images': [{'id': 'ZClaqFniFL4sCPnjUPZnquJAu8N_SlUeeYTacV3lvwo', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/jf6vo05hx8ee1.jpeg?width=108&crop=smart&auto=webp&s=4bcb31cae4fd0fb767d2e5212ddfc8b5bb3a17e2', 'width': 108}, {'height': 248, 'url': 'https://preview.redd.it/jf6vo05hx8ee1.jpeg?width=216&crop=smart&auto=webp&s=f2dd29f891bad48251aead4fd9d16b8fb4a6ef77', 'width': 216}, {'height': 368, 'url': 'https://preview.redd.it/jf6vo05hx8ee1.jpeg?width=320&crop=smart&auto=webp&s=a1b811077de8d02cd9a777a268c5e70ea7f2699a', 'width': 320}, {'height': 736, 'url': 'https://preview.redd.it/jf6vo05hx8ee1.jpeg?width=640&crop=smart&auto=webp&s=07742a4a4aced788c72a6c14554e543cd85ea73d', 'width': 640}, {'height': 1104, 'url': 'https://preview.redd.it/jf6vo05hx8ee1.jpeg?width=960&crop=smart&auto=webp&s=a9a0ff998615ba0e0556f4b72ded8f2ad0121f2c', 'width': 960}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/jf6vo05hx8ee1.jpeg?auto=webp&s=bd27ad6771b9f6cec99c907678f01edc52b922cc', 'width': 1043}, 'variants': {}}]}
Deepseek-R1 + Mistral Nemo & Small for creative writing?
4
Just an idea after I saw this post about how R1 topped eq bench: https://www.reddit.com/r/LocalLLaMA/comments/1i615u1/the_first_time_ive_felt_a_llm_wrote_well_not_just/ I'm not really into creative writing but maybe it's a good idea for creative writing & RP communities to generate a creative writing dataset from R1 and start training Mistral models with it? Like Nemo 12b and Small 22b, since I heard Mistral models are pretty good at writing.
2025-01-21T01:15:38
https://www.reddit.com/r/LocalLLaMA/comments/1i66smn/deepseekr1_mistral_nemo_small_for_creative_writing/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i66smn
false
null
t3_1i66smn
/r/LocalLLaMA/comments/1i66smn/deepseekr1_mistral_nemo_small_for_creative_writing/
false
false
self
4
null
Am I missing something with Deepseek?
1
I asked it what its coding capabilities were and it said it didn’t have any, then when I asked it what it could do, it thought out loud.
2025-01-21T01:15:53
https://www.reddit.com/gallery/1i66ssz
Low-Yesterday241
reddit.com
1970-01-01T00:00:00
0
{}
1i66ssz
false
null
t3_1i66ssz
/r/LocalLLaMA/comments/1i66ssz/am_i_missing_something_with_deepseek/
false
false
https://a.thumbs.redditm…LubpWDoaTQw8.jpg
1
null
Structured Data Extraction with Deepseek-R1 Full (deepseek-reasoner)
5
Any recommendations? Through the DeepSeek API, the model does not support function calling, so I can't patch in Instructor. Has anyone gotten something reliable working?
2025-01-21T01:21:37
https://www.reddit.com/r/LocalLLaMA/comments/1i66wxc/structured_data_extraction_with_deepseekr1_full/
halfprice06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i66wxc
false
null
t3_1i66wxc
/r/LocalLLaMA/comments/1i66wxc/structured_data_extraction_with_deepseekr1_full/
false
false
self
5
null
Laptop for PhD Work in LLMs and Cybersecurity
1
[removed]
2025-01-21T01:29:36
https://www.reddit.com/r/LocalLLaMA/comments/1i672rg/laptop_for_phd_work_in_llms_and_cybersecurity/
Igotthis-101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i672rg
false
null
t3_1i672rg
/r/LocalLLaMA/comments/1i672rg/laptop_for_phd_work_in_llms_and_cybersecurity/
false
false
self
1
null
DeepSeek R1 is... a woke Chinese model ? Interesting.
1
[removed]
2025-01-21T01:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1i6744g/deepseek_r1_is_a_woke_chinese_model_interesting/
gaarrl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i6744g
false
null
t3_1i6744g
/r/LocalLLaMA/comments/1i6744g/deepseek_r1_is_a_woke_chinese_model_interesting/
false
false
https://b.thumbs.redditm…4j2XxsXRQzrE.jpg
1
null
Laptop for PhD Work in LLMs and Cybersecurity
1
[removed]
2025-01-21T01:31:54
https://www.reddit.com/r/LocalLLaMA/comments/1i674g4/laptop_for_phd_work_in_llms_and_cybersecurity/
Igotthis-101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i674g4
false
null
t3_1i674g4
/r/LocalLLaMA/comments/1i674g4/laptop_for_phd_work_in_llms_and_cybersecurity/
false
false
self
1
null
How many r's in "strawberrrry"? DeepSeek R1 failed to determine it
1
[removed]
2025-01-21T01:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1i674ws/how_many_r_in_strawberrrry_deepseek_r1_failed_to/
Virtual_Video5832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i674ws
false
null
t3_1i674ws
/r/LocalLLaMA/comments/1i674ws/how_many_r_in_strawberrrry_deepseek_r1_failed_to/
false
false
self
1
null
fine tuning alpaca with unsloth
1
Sorry if this is a dumb question, but I'm pretty new to all of this. I did a preliminary search across Google and Reddit and couldn't find a conclusive answer for my use case. I'd like to use Unsloth since they boast about speed, and going through their documentation it seems they support a number of models, but I'm not sure if they only support those models or if it may work with models that aren't on the list. They have a lot of sample notebooks, and I was playing around with one that took Llama 3 and fine-tuned it with LoRA on the Alpaca dataset. I was wondering if I could fine-tune the resulting model on another dataset, and, if I got a new model from that, whether I could fine-tune that model on yet another dataset and keep doing that. I asked Copilot about fine-tuning once on one huge dataset vs. fine-tuning on each dataset one by one on the resulting models, and it gave a decent answer saying both are possible, but I'm not sure if that's true. If this isn't possible with Unsloth, is fine-tuning multiple times with the normal tooling possible or a good idea, and would multiple LoRA fine-tunes be possible?
2025-01-21T01:41:18
https://www.reddit.com/r/LocalLLaMA/comments/1i67ba7/fine_tuning_alpaca_with_unsloth/
hentaipolice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i67ba7
false
null
t3_1i67ba7
/r/LocalLLaMA/comments/1i67ba7/fine_tuning_alpaca_with_unsloth/
false
false
self
1
null
Quick Code Tested DeepSeek R1 vs O1 vs Claude
28
I took a coding challenge which required planning, good coding, common sense in API design and good interpretation of requirements (IFBench) and gave it to R1, o1 and Sonnet. Early findings (for those who just want to watch them code: https://youtu.be/EkFt9Bk_wmg): - R1 has much, much more detail in its Chain of Thought - R1's inference speed is on par with o1 (for now, since DeepSeek's API doesn't serve nearly as many requests as OpenAI's) - R1 seemed to go on for longer when it wasn't certain it had figured out the solution - R1 reasoned with code! Something I didn't see with any other reasoning model. o1 might be hiding it if it's doing it ++ Meaning it would write code and reason about whether it would work or not, without using an interpreter/compiler - R1: 💰 $0.14 / million input tokens (cache hit) 💰 $0.55 / million input tokens (cache miss) 💰 $2.19 / million output tokens - o1: 💰 $7.5 / million input tokens (cache hit) 💰 $15 / million input tokens (cache miss) 💰 $60 / million output tokens - o1 is API tier restricted; R1 is open to all, with open weights and a research paper - Paper: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf - 2nd on Aider's polyglot benchmark, only slightly below o1, above Claude 3.5 Sonnet and DeepSeek 3 - they'll get to increase the 64k context length, which is a limitation in some use cases - it will be interesting to see the R1/DeepSeek V3 Architect/Coder combination's results in Aider and Cline on complex coding tasks on larger codebases. Have you tried it out yet? First impressions?
2025-01-21T01:46:33
https://www.reddit.com/r/LocalLLaMA/comments/1i67f3b/quick_code_tested_deepseek_r1_vs_o1_vs_claude/
marvijo-software
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i67f3b
false
null
t3_1i67f3b
/r/LocalLLaMA/comments/1i67f3b/quick_code_tested_deepseek_r1_vs_o1_vs_claude/
false
false
self
28
{'enabled': False, 'images': [{'id': '_KG-admEjytfEEgFdo4H5zZNNX3-GWvdyXzdn6CNKZ8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7k-V7cku8xTU0nYdNZ975SXNOZNd_CMhuJ2F4GwZrPM.jpg?width=108&crop=smart&auto=webp&s=6a42bd4003c3ee2457f3ba23e72b648290a86c30', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7k-V7cku8xTU0nYdNZ975SXNOZNd_CMhuJ2F4GwZrPM.jpg?width=216&crop=smart&auto=webp&s=de85f3b5d09638b48e2b8c4b66a5ae6e53bd8567', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7k-V7cku8xTU0nYdNZ975SXNOZNd_CMhuJ2F4GwZrPM.jpg?width=320&crop=smart&auto=webp&s=4f5fe8921606999ee302d2a8bf3cfb6997342d2f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7k-V7cku8xTU0nYdNZ975SXNOZNd_CMhuJ2F4GwZrPM.jpg?auto=webp&s=8d7ab7ca73d75db44f17e5b0a3ee7c636c942d7b', 'width': 480}, 'variants': {}}]}
Why the heck is no one talking about free LLM APIs?
1
[removed]
2025-01-21T02:02:02
https://www.reddit.com/r/LocalLLaMA/comments/1i67qcc/why_the_hack_no_ine_us_talking_about_free_llm_apis/
Confident_Text6570
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i67qcc
false
null
t3_1i67qcc
/r/LocalLLaMA/comments/1i67qcc/why_the_hack_no_ine_us_talking_about_free_llm_apis/
false
false
self
1
null
Local RAG doesn't answer correctly
1
[removed]
2025-01-21T02:10:19
https://www.reddit.com/r/LocalLLaMA/comments/1i67wai/rag_local_dont_answer_correctly/
Odd-Weakness456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i67wai
false
null
t3_1i67wai
/r/LocalLLaMA/comments/1i67wai/rag_local_dont_answer_correctly/
false
false
self
1
null
Can someone explain what chat templates actually are, how to find them, why they aren't included in model releases?
8
Sooo often I'll see a new model and get excited, only to find there's no info about what I need to add (to MSTY in my case) to get the model to output anything other than gibberish. Most times I trawl the Reddit post waiting for someone to share, or just try copying another template I have saved for a different model of a similar type, if I have one. So what are they? I get that they're some kind of base instruction, but I don't get why they're so different and why model makers don't just include them on their model cards. Trying to figure this out for all these R1 models. Related question: why do so many of my models let tokens like <|im_end|> bleed over into the output? (A short sketch of inspecting a chat template follows below this post.)
2025-01-21T02:21:30
https://www.reddit.com/r/LocalLLaMA/comments/1i684cz/can_someone_explain_what_chat_templates_actually/
eggs-benedryl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i684cz
false
null
t3_1i684cz
/r/LocalLLaMA/comments/1i684cz/can_someone_explain_what_chat_templates_actually/
false
false
self
8
null
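To make the chat-template question concrete: a chat template is a Jinja string shipped in the model's tokenizer_config.json that turns a message list into the exact special-token string the model was trained on, which is why it differs between model families. A short sketch of inspecting and applying one (the model name is just an example):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")
print(tok.chat_template)  # the raw Jinja template bundled with the model

messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # reveals the exact special tokens the model expects
```

Tokens like <|im_end|> bleed into the output when the frontend isn't configured to treat them as stop tokens, which usually means the template or stop-string settings don't match the model.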
I built a super lightweight Discord bot in <200 lines of code to chat with DeepSeek-R1 with prompt caching - 14b runs at 12tps on my M4 Mac Mini!
1
[removed]
2025-01-21T02:30:30
https://www.reddit.com/r/LocalLLaMA/comments/1i68aoj/i_built_a_super_lightweight_discord_bot_in_200/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68aoj
false
null
t3_1i68aoj
/r/LocalLLaMA/comments/1i68aoj/i_built_a_super_lightweight_discord_bot_in_200/
false
false
https://b.thumbs.redditm…xiWESeSb_YUA.jpg
1
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
Deepseek's censorship partially breaks down when you ask in a language other than English. Try it with your local language.
0
The highlighted Hindi text in the last answer says that 100 to 1000s of people died in Tiananmen Square. If you mention "Tiananmen Square" in English, it'll straight up refuse to answer.
2025-01-21T02:30:45
https://v.redd.it/34y4atefd9ee1
Lychee7
/r/LocalLLaMA/comments/1i68av2/deepseeks_censorship_slightly_doesnt_work_when/
1970-01-01T00:00:00
0
{}
1i68av2
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/34y4atefd9ee1/DASHPlaylist.mpd?a=1740148249%2CM2MwMjgwZmU4MzUzMmE4ODM3NmJlNjU2YWI0ZmE1MjllNjk0YzQ5Y2E3N2ExNmNjOThhYTM5MjE3MjllMTFhYw%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/34y4atefd9ee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/34y4atefd9ee1/HLSPlaylist.m3u8?a=1740148249%2CY2M0NzkzZTdhZTQ0ODg3NWNiN2YxNzMyM2JhMmY3MGZmYTE4OTFlNzdhOTA4NTZhYjc2MjU0YmRjYTY1MDVkMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/34y4atefd9ee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 576}}
t3_1i68av2
/r/LocalLLaMA/comments/1i68av2/deepseeks_censorship_slightly_doesnt_work_when/
false
false
https://external-preview…c1ce9d5a4ce3d21d
0
{'enabled': False, 'images': [{'id': 'OGZkODNiN2ZkOWVlMYDzIglkfH9CusPTSkwltZ5UWpuqXpq7LNyjYVy-84d8', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/OGZkODNiN2ZkOWVlMYDzIglkfH9CusPTSkwltZ5UWpuqXpq7LNyjYVy-84d8.png?width=108&crop=smart&format=pjpg&auto=webp&s=0edd33673bd904dcd4f0cac19aa3f63db51fd70c', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/OGZkODNiN2ZkOWVlMYDzIglkfH9CusPTSkwltZ5UWpuqXpq7LNyjYVy-84d8.png?width=216&crop=smart&format=pjpg&auto=webp&s=8be4c5fe98325ac25aede9430238220f9537998b', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/OGZkODNiN2ZkOWVlMYDzIglkfH9CusPTSkwltZ5UWpuqXpq7LNyjYVy-84d8.png?width=320&crop=smart&format=pjpg&auto=webp&s=ea7c7e54371bb332baba84b4f22b4fabbdd3cd47', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/OGZkODNiN2ZkOWVlMYDzIglkfH9CusPTSkwltZ5UWpuqXpq7LNyjYVy-84d8.png?width=640&crop=smart&format=pjpg&auto=webp&s=41e2722495122fc2f2039eb4ed51ccebba706192', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/OGZkODNiN2ZkOWVlMYDzIglkfH9CusPTSkwltZ5UWpuqXpq7LNyjYVy-84d8.png?format=pjpg&auto=webp&s=0372d15cb2e807355a3cd95af226e2d6c6ce57be', 'width': 864}, 'variants': {}}]}
AI Noob need help??
1
[removed]
2025-01-21T02:35:21
https://www.reddit.com/r/LocalLLaMA/comments/1i68e7e/ai_noob_need_help/
Individual_Gur_4055
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68e7e
false
null
t3_1i68e7e
/r/LocalLLaMA/comments/1i68e7e/ai_noob_need_help/
false
false
self
1
null
I made a super lightweight app for chatting with R1-distilled in Discord with prompt caching - runs at 12tps on my M4 Mini :)
1
[removed]
2025-01-21T02:35:41
https://www.reddit.com/r/LocalLLaMA/comments/1i68efx/i_made_a_super_lightweight_app_for_chatting_with/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68efx
false
null
t3_1i68efx
/r/LocalLLaMA/comments/1i68efx/i_made_a_super_lightweight_app_for_chatting_with/
false
false
self
1
null
Deepseek R1!!
0
Pretty awesome that a small team can achieve big results. Also, we integrated it into our platform [https://thedrive.ai](https://thedrive.ai)
2025-01-21T02:37:23
https://www.reddit.com/r/LocalLLaMA/comments/1i68flt/deepseek_r1/
thedriveai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68flt
false
null
t3_1i68flt
/r/LocalLLaMA/comments/1i68flt/deepseek_r1/
false
false
self
0
{'enabled': False, 'images': [{'id': 'u8tRczBZ4tGo2X9wmQqLThi6xBsYuYgpjHlKQNp3BBA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/a-6mdcCucyKTuXBfPam9V1De6bocPh4FlRI05G3CVUU.jpg?width=108&crop=smart&auto=webp&s=c40c0bb2679c80ec754d7f3438c92e3a6284e500', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/a-6mdcCucyKTuXBfPam9V1De6bocPh4FlRI05G3CVUU.jpg?width=216&crop=smart&auto=webp&s=7518f4bd3464758f41810dd46e1b4ca810c524d7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/a-6mdcCucyKTuXBfPam9V1De6bocPh4FlRI05G3CVUU.jpg?width=320&crop=smart&auto=webp&s=26deb685cd63f711269539b5afc5be16a8861b15', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/a-6mdcCucyKTuXBfPam9V1De6bocPh4FlRI05G3CVUU.jpg?auto=webp&s=4fe1e960224fce035f2a28d9cecdd3793c02a6bb', 'width': 512}, 'variants': {}}]}
Discord-MLX.py: chat with R1-14b, feat. prompt caching - running at 12tps on a M4 Mac Mini :)
1
[removed]
2025-01-21T02:37:28
https://www.reddit.com/r/LocalLLaMA/comments/1i68fnt/discordmlxpy_chat_with_r114b_feat_prompt_caching/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68fnt
false
null
t3_1i68fnt
/r/LocalLLaMA/comments/1i68fnt/discordmlxpy_chat_with_r114b_feat_prompt_caching/
false
false
self
1
null
An interesting interview with Deepseek's CEO.
62
2025-01-21T02:37:37
https://www.chinatalk.media/p/deepseek-ceo-interview-with-chinas
no_witty_username
chinatalk.media
1970-01-01T00:00:00
0
{}
1i68fro
false
null
t3_1i68fro
/r/LocalLLaMA/comments/1i68fro/an_interesting_interview_with_deepseeks_ceo/
false
false
https://b.thumbs.redditm…y-nXmn-ohxbI.jpg
62
{'enabled': False, 'images': [{'id': 'f_s7qaV7gcufX_UZcls69YuqQuDzBWHuQ23VmrsToK0', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=108&crop=smart&auto=webp&s=f503e8420a27061fae536d9434619d644dff3d53', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=216&crop=smart&auto=webp&s=049b16501a559d59a62aaf5cecfddb54f8f339ca', 'width': 216}, {'height': 261, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?width=320&crop=smart&auto=webp&s=5b2e09b4515afc003a8783ebe18bd63bc7310b32', 'width': 320}], 'source': {'height': 370, 'url': 'https://external-preview.redd.it/bMO1G9G1E0WVUvY8HPHjA38LQYfVyGj34gWdldzh6SI.jpg?auto=webp&s=e21a9c0790556edb9e324fe162f123776219e112', 'width': 452}, 'variants': {}}]}
Llama error when caching GGUF context
9
Hey guys, I recently started using the Unsloth version of the Qwen2.5 32B Coder Instruct model so I could use the full 128k context... Now, I downloaded the Q6_K variant, as going full Q8 is probably not worth the extra 10GB, but I still need to quantize the KV cache to fit the full 128K in my 48GB of VRAM... I could probably use the q8_0 cache but have heard q4_0 is better... Using q4_0 works for a little bit, but eventually (pretty quickly, within 4 prompts) throws the following error: `ggml_cuda_cpy: unsupported type combination (q4_0 to f32)` Any ideas? Should I just not quantize the cache and use a smaller context?
2025-01-21T02:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1i68hlh/llama_error_when_caching_gguf_context/
DeSibyl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68hlh
false
null
t3_1i68hlh
/r/LocalLLaMA/comments/1i68hlh/llama_error_when_caching_gguf_context/
false
false
self
9
null
MLX-Dıscord.py | Just built a super lightweight script in <200 lines of code to chat with DeepSeek-R1 feat. prompt caching - 14b running at 12tps on my M4 Mac Mini!
1
[removed]
2025-01-21T02:42:28
https://www.reddit.com/r/LocalLLaMA/comments/1i68j3t/mlxdıscordpy_just_built_a_super_lightweight/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68j3t
false
null
t3_1i68j3t
/r/LocalLLaMA/comments/1i68j3t/mlxdıscordpy_just_built_a_super_lightweight/
false
false
self
1
null
Just made a super lightweight script to chat with DeepSeek-R1 (in the gamer app) feat. prompt caching - 14b runs at 12tps on my M4 Mac Mini!
2
https://reddit.com/link/1i68l0y/video/cy1zl99mf9ee1/player Hey everyone! Longer post today, and it would be even **more** in-depth, but it's like 2:30am here and I should really be getting to bed lolol. Details of how to get this running are in the comments... as once again the a\*tomod has proven itself to be my arch nemesis 😅
2025-01-21T02:45:12
https://www.reddit.com/r/LocalLLaMA/comments/1i68l0y/just_made_a_super_lightweight_script_to_chat_with/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68l0y
false
null
t3_1i68l0y
/r/LocalLLaMA/comments/1i68l0y/just_made_a_super_lightweight_script_to_chat_with/
false
false
self
2
null
Tips for migrating from OpenAI models to Llama
7
>I'm gathering advice to help people switch from closed models (like OpenAI GPT) to open-source models (specifically Llama)! Does anyone have any prompting differences or advice that worked for you? Things like: how Llama behaves differently, how to address differences in behavior between GPT and Llama models, prompting techniques specific to Llama, etc. (A minimal client-migration sketch follows below this post.)
2025-01-21T02:54:16
https://www.reddit.com/r/LocalLLaMA/comments/1i68rbl/tips_for_migrating_from_openai_models_to_llama/
CS-fan-101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68rbl
false
null
t3_1i68rbl
/r/LocalLLaMA/comments/1i68rbl/tips_for_migrating_from_openai_models_to_llama/
false
false
self
7
null
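A minimal client-side sketch for the migration question above: most local servers (llama.cpp's llama-server, Ollama, vLLM) expose an OpenAI-compatible endpoint, so existing GPT client code often needs only a new base_url and model name. The URL and model here are placeholders for whatever you serve locally.

```python
from openai import OpenAI

# Point the standard OpenAI client at a local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[
        # Llama models tend to reward explicit, structured system prompts;
        # spell out formatting rules you might leave implicit with GPT.
        {"role": "system", "content": "Answer concisely, in plain text."},
        {"role": "user", "content": "List three prompting differences from GPT."},
    ],
)
print(resp.choices[0].message.content)
```

Beyond the transport layer, the usual behavioral differences are that Llama follows system prompts more literally and benefits from few-shot examples where GPT would infer the format on its own.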
adaptive-classifier: Cut your LLM costs in half with smart query routing (32.4% cost savings demonstrated)
3
Hey LocalLLaMA community! I'm excited to share a new open-source library that can help optimize your LLM deployment costs. The adaptive-classifier library learns to route queries between your models based on complexity, continuously improving through real-world usage. We tested it on the arena-hard-auto dataset, routing between a high-cost and a low-cost model (2x cost difference). The results were impressive: - 32.4% cost savings with adaptation enabled - Same overall success rate (22%) as baseline - The system automatically learned from 110 new examples during evaluation - Successfully routed 80.4% of queries to the cheaper model Perfect for setups where you're running multiple Llama models (like Llama-3.1-70B alongside Llama-3.1-8B) and want to optimize costs without sacrificing capability. The library integrates easily with any transformer-based models and includes built-in state persistence. Check out the repo for implementation details and benchmarks. Would love to hear your experiences if you try it out! (A toy sketch of the routing idea follows below this post.) Repo - [https://github.com/codelion/adaptive-classifier](https://github.com/codelion/adaptive-classifier)
2025-01-21T03:01:37
https://www.reddit.com/r/LocalLLaMA/comments/1i68wqv/adaptiveclassifier_cut_your_llm_costs_in_half/
asankhs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i68wqv
false
null
t3_1i68wqv
/r/LocalLLaMA/comments/1i68wqv/adaptiveclassifier_cut_your_llm_costs_in_half/
false
false
self
3
{'enabled': False, 'images': [{'id': 'MAFJOwkFQsuf37C_-ZdFcTmL2LGzs0oeNOVus3zUU1c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jv_eX9vJ2ar5LfZu0YbUCzD6AvhEd9hg3WKbMTPl10A.jpg?width=108&crop=smart&auto=webp&s=47be105449e74f8a26de7dd5041409a7102a6891', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jv_eX9vJ2ar5LfZu0YbUCzD6AvhEd9hg3WKbMTPl10A.jpg?width=216&crop=smart&auto=webp&s=4484aa1484e479ec946250f75f499187ce8782cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jv_eX9vJ2ar5LfZu0YbUCzD6AvhEd9hg3WKbMTPl10A.jpg?width=320&crop=smart&auto=webp&s=c36e91666193787ca3da06bcc5c54f642c64c795', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jv_eX9vJ2ar5LfZu0YbUCzD6AvhEd9hg3WKbMTPl10A.jpg?width=640&crop=smart&auto=webp&s=ec65a1d53c2be9ac8c09998ddeb2ceca4405a698', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jv_eX9vJ2ar5LfZu0YbUCzD6AvhEd9hg3WKbMTPl10A.jpg?width=960&crop=smart&auto=webp&s=7c4aec22f6f161af7b82a8b69a7fbb27bda54f41', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jv_eX9vJ2ar5LfZu0YbUCzD6AvhEd9hg3WKbMTPl10A.jpg?width=1080&crop=smart&auto=webp&s=a0de12334078224c94efcfb59035b417d436372e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jv_eX9vJ2ar5LfZu0YbUCzD6AvhEd9hg3WKbMTPl10A.jpg?auto=webp&s=84c3cf044174da3ea55e11682da8c003356afbb2', 'width': 1200}, 'variants': {}}]}
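To illustrate the routing idea from the adaptive-classifier post in miniature (this is NOT the library's API; see the linked repo for that): score query complexity, send easy queries to the cheap model, and adapt the threshold from observed outcomes.

```python
def complexity(query: str) -> float:
    """Toy heuristic standing in for a learned classifier."""
    return min(1.0, len(query.split()) / 100)

class Router:
    def __init__(self, threshold: float = 0.5, lr: float = 0.05):
        self.threshold = threshold
        self.lr = lr

    def pick(self, query: str) -> str:
        cheap = complexity(query) < self.threshold
        return "llama-3.1-8b" if cheap else "llama-3.1-70b"

    def feedback(self, query: str, succeeded: bool) -> None:
        # If the cheap model failed, lower the threshold (route more
        # queries to 70B); if it succeeded, raise it to save more cost.
        if self.pick(query) == "llama-3.1-8b":
            self.threshold += self.lr if succeeded else -self.lr

router = Router()
print(router.pick("What is 2 + 2?"))  # -> llama-3.1-8b
router.feedback("What is 2 + 2?", succeeded=True)
```

The reported 32.4% savings comes from exactly this loop: the classifier keeps learning which queries the cheap model can handle and routes accordingly.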