title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]date
2023-04-01 04:30:41
2025-06-30 03:16:29
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2025-06-26 17:30:18
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
How to build your own OpenAI operator
14
2025-01-23T23:30:35
https://v.redd.it/mfuvwluwvtee1
Illustrious_Row_9971
v.redd.it
1970-01-01T00:00:00
0
{}
1i8hdb9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mfuvwluwvtee1/DASHPlaylist.mpd?a=1740267050%2COGQ4MTBlNjNmOWRmODRjYzVhNzNjMThiNzVkNDg1N2VmMGUyODYzZTBiZWEwMTNhY2JlMDBjNjJhZDhiZmU2Nw%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/mfuvwluwvtee1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/mfuvwluwvtee1/HLSPlaylist.m3u8?a=1740267050%2CZjMwZmUxNzFmMmY1Mzk0M2M2NWY4YzkzNjFiNjYxZjY2M2RlN2RjNjRkZGIzYmY4ZTE3ODQyODViMWFlOTE0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mfuvwluwvtee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1i8hdb9
/r/LocalLLaMA/comments/1i8hdb9/how_to_build_your_own_openai_operator/
false
false
https://external-preview…b365431375f375bb
14
{'enabled': False, 'images': [{'id': 'd2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e.png?width=108&crop=smart&format=pjpg&auto=webp&s=67594b29bb21627acfa58b8fab7820eb04db3af6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e.png?width=216&crop=smart&format=pjpg&auto=webp&s=4b6e87d313f430a056db880f3a11e178a5a6f736', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e.png?width=320&crop=smart&format=pjpg&auto=webp&s=147ee516a64bbfc43f1f1e71d5f80e85ecd7cc1a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e.png?width=640&crop=smart&format=pjpg&auto=webp&s=9f1cfa11155260322227ffcfdc143787524c3cb1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e.png?width=960&crop=smart&format=pjpg&auto=webp&s=9977a080d260540a4c1dc4734780d1498b6cddf7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9c694277e052f43c81c77d934fd24307904b86f2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d2c1cGxsdXd2dGVlMfqBVMm3GQrk-O24O_mcMQ9gevG5I6cHKwferhaqHE-e.png?format=pjpg&auto=webp&s=16a594402ac6352a1016a69f02b86a9339c7f874', 'width': 1920}, 'variants': {}}]}
adding pdf to LM Studio with DeepSeek R1 Distill Qwen didn't work well
1
[removed]
2025-01-23T23:35:55
https://www.reddit.com/r/LocalLLaMA/comments/1i8hhei/adding_pdf_to_lm_studio_with_deepseek_r1_distill/
Kindly_Machine_939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8hhei
false
null
t3_1i8hhei
/r/LocalLLaMA/comments/1i8hhei/adding_pdf_to_lm_studio_with_deepseek_r1_distill/
false
false
self
1
null
DeepSeek R1 (reasoner) can use internet there o1 still can't
107
Funny ... DeepSeek doing more for free than paid o1...
2025-01-23T23:47:38
https://www.reddit.com/gallery/1i8hqp0
Healthy-Nebula-3603
reddit.com
1970-01-01T00:00:00
0
{}
1i8hqp0
false
null
t3_1i8hqp0
/r/LocalLLaMA/comments/1i8hqp0/deepseek_r1_reasoner_can_use_internet_there_o1/
false
false
https://b.thumbs.redditm…4ym1RrfuBo1M.jpg
107
null
DeepSeek R1's Chain of Thought is some of the Best Entertainment I've gotten from an LLM
1
2025-01-23T23:55:58
https://www.reddit.com/gallery/1i8hx1i
setothegreat
reddit.com
1970-01-01T00:00:00
0
{}
1i8hx1i
false
null
t3_1i8hx1i
/r/LocalLLaMA/comments/1i8hx1i/deepseek_r1s_chain_of_thought_is_some_of_the_best/
false
false
https://a.thumbs.redditm…1JOmtCD61R_4.jpg
1
null
Openai is ahead only till china reverse engineers...
95
2025-01-24T00:04:50
https://i.redd.it/zy8ljay42uee1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1i8i41v
false
null
t3_1i8i41v
/r/LocalLLaMA/comments/1i8i41v/openai_is_ahead_only_till_china_reverse_engineers/
false
false
https://b.thumbs.redditm…XJXJMMTMsHjg.jpg
95
{'enabled': True, 'images': [{'id': 'Lge9ufzB41jHzOgkDqaPMBgYHem_l3bIzmNfKAh7FUE', 'resolutions': [{'height': 154, 'url': 'https://preview.redd.it/zy8ljay42uee1.png?width=108&crop=smart&auto=webp&s=98c4281654909204fe300b1e70bea1bf0ed289ae', 'width': 108}, {'height': 309, 'url': 'https://preview.redd.it/zy8ljay42uee1.png?width=216&crop=smart&auto=webp&s=5a34ef34f1aa283562740a4abff408291538c749', 'width': 216}, {'height': 458, 'url': 'https://preview.redd.it/zy8ljay42uee1.png?width=320&crop=smart&auto=webp&s=ea6236ac0476372a49c0d06bd0f7b36bd0e2121c', 'width': 320}, {'height': 917, 'url': 'https://preview.redd.it/zy8ljay42uee1.png?width=640&crop=smart&auto=webp&s=8e8626d46d75e083e4343a6f229defc70f8055d8', 'width': 640}, {'height': 1376, 'url': 'https://preview.redd.it/zy8ljay42uee1.png?width=960&crop=smart&auto=webp&s=b08b022f147bb1eb4023285f2d3b5c515bb33ad7', 'width': 960}, {'height': 1549, 'url': 'https://preview.redd.it/zy8ljay42uee1.png?width=1080&crop=smart&auto=webp&s=b95daaca9d4012eea535e2f35b714aa32dd73c66', 'width': 1080}], 'source': {'height': 1549, 'url': 'https://preview.redd.it/zy8ljay42uee1.png?auto=webp&s=12bba8279f7e9b537c9fcd05fb1341bfb974f029', 'width': 1080}, 'variants': {}}]}
Recommendation for live avatar generation?
5
What do you recommend for generating avatars live that users can talk with?
2025-01-24T00:11:45
https://www.reddit.com/r/LocalLLaMA/comments/1i8i9iu/recommendation_for_live_avatar_generation/
Jentano
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8i9iu
false
null
t3_1i8i9iu
/r/LocalLLaMA/comments/1i8i9iu/recommendation_for_live_avatar_generation/
false
false
self
5
null
Ollama is confusing people by pretending that the little distillation models are "R1"
663
I was baffled at the number of people who seem to think they're using "R1" when they're actually running a Qwen or Llama finetune, until I saw a screenshot of the Ollama interface earlier. Ollama is misleadingly pretending that "R1" is actually a series of differently-sized models and that distillations are just smaller sizes of "R1", rather than what they are which is just some quasi-related experimental finetunes that Deepseek happened to release at the same time. It's not just annoying, it seems to be doing reputational damage to Deepseek as well, because a lot of low information Ollama users are using a shitty 1.5B model, seeing that it sucks (because it's 1.5B), and saying "wow I don't see why people are saying R1 is so good, this is terrible". Plus there's no misleading social media influencer content like "I got R1 running on my phone!" (no, you got a Qwen-1.5B finetune running on your phone).
2025-01-24T00:20:07
https://www.reddit.com/r/LocalLLaMA/comments/1i8ifxd/ollama_is_confusing_people_by_pretending_that_the/
blahblahsnahdah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ifxd
false
null
t3_1i8ifxd
/r/LocalLLaMA/comments/1i8ifxd/ollama_is_confusing_people_by_pretending_that_the/
false
false
self
663
null
Value GPU for Ollama in a home server?
2
Hey everyone, I have an Unraid server with Ollama running in a docker container. I was hoping to get something that would run a 7-8b model with speed better than the 5700G I have in there right now on CPU inference. Not expecting anything crazy, just usable. Looks like my options are a 3060 Ti 12Gb or a 7600 XT 16GB with both sitting around $500 CAD. I know Nvidia is much better supported for this kind of stuff but how is RoCM support these days on AMD with Ollama? Goal is to have something that is always running which can be used for Home Assistant and ideally Plex transcoding. Thanks!
2025-01-24T00:38:20
https://www.reddit.com/r/LocalLLaMA/comments/1i8itsg/value_gpu_for_ollama_in_a_home_server/
Avendork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8itsg
false
null
t3_1i8itsg
/r/LocalLLaMA/comments/1i8itsg/value_gpu_for_ollama_in_a_home_server/
false
false
self
2
null
Deepseek is good but why it doesnt retain the past convos on a new chat? for example in chat 1 you talk about potatoes and if you starta new chat they dont remember it.
0
:) ty
2025-01-24T00:41:11
https://www.reddit.com/r/LocalLLaMA/comments/1i8ivwu/deepseek_is_good_but_why_it_doesnt_retain_the/
MechaHikaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ivwu
false
null
t3_1i8ivwu
/r/LocalLLaMA/comments/1i8ivwu/deepseek_is_good_but_why_it_doesnt_retain_the/
false
false
self
0
null
Technical question about DeepSeek
1
As we know exactly what model they host and what is the inference speed, can we determine what hardware they use and how many concurrent users a node handles? I'm curious how much power my pointless riddles consume :)
2025-01-24T01:04:12
https://www.reddit.com/r/LocalLLaMA/comments/1i8jd96/technical_question_about_deepseek/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8jd96
false
null
t3_1i8jd96
/r/LocalLLaMA/comments/1i8jd96/technical_question_about_deepseek/
false
false
self
1
null
Theo Browne has an interesting video on Deepseek R1, what went into making it, and some of the risks with how it was made
1
2025-01-24T01:38:08
https://www.youtube.com/watch?v=by9PUlqtJlM
SysPsych
youtube.com
1970-01-01T00:00:00
0
{}
1i8k26g
false
{'oembed': {'author_name': 'Theo - t3․gg', 'author_url': 'https://www.youtube.com/@t3dotgg', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/by9PUlqtJlM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="OpenAI is terrified (there&#39;s finally a great open source LLM)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/by9PUlqtJlM/hqdefault.jpg', 'thumbnail_width': 480, 'title': "OpenAI is terrified (there's finally a great open source LLM)", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1i8k26g
/r/LocalLLaMA/comments/1i8k26g/theo_browne_has_an_interesting_video_on_deepseek/
false
false
https://b.thumbs.redditm…fGZgF9269Rrg.jpg
1
{'enabled': False, 'images': [{'id': '06_Ek6kumziSNoJtww7O6UX3q6qOl5coSfAaJv0gamo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6MykFVbTJIr86w8JL2nouI6L36WQ4O6JUK4i1Map0Rg.jpg?width=108&crop=smart&auto=webp&s=af3c2157ca3fbf60f81cec21dcee3002282cf6e5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6MykFVbTJIr86w8JL2nouI6L36WQ4O6JUK4i1Map0Rg.jpg?width=216&crop=smart&auto=webp&s=c52a91b3e8ddb20c522eb2e9ad1522fb50ea2dc3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6MykFVbTJIr86w8JL2nouI6L36WQ4O6JUK4i1Map0Rg.jpg?width=320&crop=smart&auto=webp&s=91b9cb74282802128e69bc94ac841016ef269b7a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/6MykFVbTJIr86w8JL2nouI6L36WQ4O6JUK4i1Map0Rg.jpg?auto=webp&s=b9af15cbfb0b915af12a29ff07243d749c7e9723', 'width': 480}, 'variants': {}}]}
deepseek-r1-distill-qwen-32b benchmark results on LiveBench
31
https://preview.redd.it/…5fc267237b2a7e
2025-01-24T01:39:57
https://www.reddit.com/r/LocalLLaMA/comments/1i8k3i3/deepseekr1distillqwen32b_benchmark_results_on/
Emergency-Map9861
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8k3i3
false
null
t3_1i8k3i3
/r/LocalLLaMA/comments/1i8k3i3/deepseekr1distillqwen32b_benchmark_results_on/
false
false
https://b.thumbs.redditm…xJLbO0J7nnsY.jpg
31
null
Any suggestions for setting up a chat-style LLM that I can use for a project and refer to as an assistant 'expert' or 'teacher' to answer questions and provide suggestions? And maintain a persistent memory of topics we discussed to refer back to?
1
[removed]
2025-01-24T01:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1i8k9kr/any_suggestions_for_setting_up_a_chatstyle_llm/
EquivalentAir9512
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8k9kr
false
null
t3_1i8k9kr
/r/LocalLLaMA/comments/1i8k9kr/any_suggestions_for_setting_up_a_chatstyle_llm/
false
false
self
1
null
Any suggestions for setting up a chat-style LLM that I can use for a project and refer to as an assistant 'expert' or 'teacher' to answer questions and provide suggestions? And maintain a persistent memory of topics we discussed to refer back to?
1
[removed]
2025-01-24T02:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1i8kiym/any_suggestions_for_setting_up_a_chatstyle_llm/
SadMarionberry3405
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8kiym
false
null
t3_1i8kiym
/r/LocalLLaMA/comments/1i8kiym/any_suggestions_for_setting_up_a_chatstyle_llm/
false
false
self
1
null
DeepSeek R1 Helped with Huge Problem
1
[removed]
2025-01-24T02:45:53
https://www.reddit.com/r/LocalLLaMA/comments/1i8lecn/deepseek_r1_helped_with_huge_problem/
Any_Prune3965
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8lecn
false
null
t3_1i8lecn
/r/LocalLLaMA/comments/1i8lecn/deepseek_r1_helped_with_huge_problem/
false
false
self
1
null
So R1 wasn't trained on COT traces, like people alluded to with o1, but it emerged from RL? What am I missing here?
11
Is that all or anything more to it?
2025-01-24T02:49:39
https://www.reddit.com/r/LocalLLaMA/comments/1i8lgxe/so_r1_wasnt_trained_on_cot_traces_like_people/
Snoo_64233
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8lgxe
false
null
t3_1i8lgxe
/r/LocalLLaMA/comments/1i8lgxe/so_r1_wasnt_trained_on_cot_traces_like_people/
false
false
self
11
null
How Do I Load GGUF Models?
2
https://preview.redd.it/…? Please Help
2025-01-24T02:52:14
https://www.reddit.com/r/LocalLLaMA/comments/1i8lisj/how_do_i_load_gguf_models/
Lilith-Vampire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8lisj
false
null
t3_1i8lisj
/r/LocalLLaMA/comments/1i8lisj/how_do_i_load_gguf_models/
false
false
https://b.thumbs.redditm…ObZARt10ZwyI.jpg
2
null
TTS with credited voice sources?
2
Looking for a good free or very cheap TTS that paid and credited the source contributors of their voices. The credit thing is the main ethical issue stopping me from just using the first thing that shows up on Google. Primarily a Windows user, but I do also have an Android phone if that's somehow relevant.
2025-01-24T02:59:07
https://www.reddit.com/r/LocalLLaMA/comments/1i8lnmd/tts_with_credited_voice_sources/
ScholarlyIcarus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8lnmd
false
null
t3_1i8lnmd
/r/LocalLLaMA/comments/1i8lnmd/tts_with_credited_voice_sources/
false
false
self
2
null
How to Use Llamafile to Execute Prompts from Text Files
4
Hey, I'm trying to run a local LLM pipeline: 1. I have a list of instructions in ".md" files. 2. I want to run llamafile in chat completion mode, which reads these .md files and performs a single chat completion for each. 3. The chat completion output should be written to a file. It seems to work well if I paste the content of an .md file directly into the chat mode, but I'm having trouble using the `-f` flag to read the prompt from a file. The `-f` command replaces the entire prompt and doesn't stop properly. Any help is appreciated!
2025-01-24T03:19:01
https://www.reddit.com/r/LocalLLaMA/comments/1i8m1mc/how_to_use_llamafile_to_execute_prompts_from_text/
wuduzodemu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8m1mc
false
null
t3_1i8m1mc
/r/LocalLLaMA/comments/1i8m1mc/how_to_use_llamafile_to_execute_prompts_from_text/
false
false
self
4
null
How to create an assistant that replicates perfectly my writing style?
1
[removed]
2025-01-24T03:20:13
https://www.reddit.com/r/LocalLLaMA/comments/1i8m2fq/how_to_create_an_assistant_that_replicates/
SnappySauros
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8m2fq
false
null
t3_1i8m2fq
/r/LocalLLaMA/comments/1i8m2fq/how_to_create_an_assistant_that_replicates/
false
false
self
1
null
How to create an assistant that replicates my writing style?
1
[removed]
2025-01-24T03:34:24
https://www.reddit.com/r/LocalLLaMA/comments/1i8mbue/how_to_create_an_assistant_that_replicates_my/
Dazzling_Teach_7715
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8mbue
false
null
t3_1i8mbue
/r/LocalLLaMA/comments/1i8mbue/how_to_create_an_assistant_that_replicates_my/
false
false
self
1
null
Claude 3.5 Sonnet is still the best so far in real world coding
1
[removed]
2025-01-24T03:41:05
https://www.reddit.com/r/LocalLLaMA/comments/1i8mg3q/claude_35_sonnet_is_still_the_best_so_far_in_real/
Special_Monk356
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8mg3q
false
null
t3_1i8mg3q
/r/LocalLLaMA/comments/1i8mg3q/claude_35_sonnet_is_still_the_best_so_far_in_real/
false
false
self
1
null
Coming soon: 100% Local Video Understanding Engine (an open-source project that can classify, caption, transcribe, and understand any video on your local device)
137
2025-01-24T04:06:24
https://v.redd.it/v8xdjbf5suee1
ParsaKhaz
v.redd.it
1970-01-01T00:00:00
0
{}
1i8mwpc
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/v8xdjbf5suee1/DASHPlaylist.mpd?a=1740283599%2CYzYzYjc5MDNkZjZiNGNjMWRmNmFiNjc1Y2FhNGY4N2ViNzJjNTc2NTE1YjY3YmQxZjE5MTJmNDUxOGRiZTI2YQ%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/v8xdjbf5suee1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/v8xdjbf5suee1/HLSPlaylist.m3u8?a=1740283599%2CNjg2MGRiZDgzYmM2MmE2YWY4NGNhZWJlM2Y1YjUwMjUwZDEyYzMxNjlhZmJkZTI5YTMxOTE5Njk3NDEyNjUwOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/v8xdjbf5suee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1i8mwpc
/r/LocalLLaMA/comments/1i8mwpc/coming_soon_100_local_video_understanding_engine/
false
false
https://external-preview…f64e5039e19ef0d1
137
{'enabled': False, 'images': [{'id': 'a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U.png?width=108&crop=smart&format=pjpg&auto=webp&s=8d70899e326b573798000883c5c6a7d7b1190c55', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U.png?width=216&crop=smart&format=pjpg&auto=webp&s=c9b5df0725d1006fb55c71b1ca6146b5d9b5a22d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U.png?width=320&crop=smart&format=pjpg&auto=webp&s=90cabe9a7bf82012e980427ea6880fef39596c5d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U.png?width=640&crop=smart&format=pjpg&auto=webp&s=097c26632f5f8ba9e61319d7f6da13d79fcdaa04', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U.png?width=960&crop=smart&format=pjpg&auto=webp&s=d56c8b231bb1a2c528a7fd72d15d4faaeaa205f0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5f91fc0003509cdad7724b8b6e9e2e0782197141', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/a3BrbTdjZjVzdWVlMe21Biif0sGFU8GTsH3N7D_CJugYvIxsEVZ-nvrUed0U.png?format=pjpg&auto=webp&s=19eac9062506c92672ec325018116ed806cda6df', 'width': 1920}, 'variants': {}}]}
Is 9.11 greater than 9.9? Deepseek-r1-7b
3
\>>> Is 9.11 greater than 9.9? <think> Alright, so I’m trying to figure out whether 9.11 is greater than 9.9. At first glance, it seems like a straightforward question: just compare the two numbers and see which one is bigger. But wait, both numbers are similar—they’re both in the nineties—but they have different decimal places. Okay, let me break this down step by step. First, I need to understand what these numbers represent. 9.11 could be a weight, like 9 pounds and 11 ounces, or it could be something else entirely, but for now, I'll assume it's just a numerical value. Similarly, 9.9 is also a number that’s very close to 10. Now, comparing two numbers involves looking at their place values—ones, tenths, hundredths, and so on. In this case: \- For 9.11: \- The "9" represents the ones place. \- The first "1" is in the tenths place (which is one-tenth). \- The second "1" is in the hundredths place (which is one-hundredth). \- For 9.9: \- The "9" represents the ones place, just like in 9.11. \- The "9" after the decimal point is also in the tenths place. So when comparing these two numbers: \`\`\` 9.11 9.90 \`\`\` It's helpful to align them by their decimal points for a clearer comparison. When I write it out like that, it’s easier to see how they stack up against each other. Looking at the ones place first: both have "9," so they're equal there. Next, moving to the tenths place: \- In 9.11, the tenths digit is "1." \- In 9.90, the tenths digit is "9." Since "9" is greater than "1" in the tenths place, it might seem like 9.9 is bigger overall. But wait a second—it depends on how precise we want to be. If I consider only up to two decimal places: \- 9.11 \- 9.90 In this case, 9.90 is indeed greater than 9.11 because "9" in the tenths place of 9.90 is larger than "1" in the tenths place of 9.11. However, if we consider only up to one decimal place: \- 9.1 \- 9.9 Here, 9.9 would still be greater because "9" in the tenths place is larger than "1." But sometimes, people might truncate numbers without considering their actual value beyond a certain decimal point. If someone writes just "9.1" instead of "9.11," then it's not immediately clear which one they're referring to. Additionally, this comparison could be influenced by context. For example: \- In money, $9.11 is less than $9.90 because after the decimal point represents cents. But in other contexts where precision beyond tenths isn't necessary or significant, both 9.1 and 9.9 might be considered roughly equal. Moreover, if we're talking about measurements like weight or distance: \- 9.11 kilograms is less than 9.9 kilograms because the decimal parts make a difference. But again, without knowing the specific context, it's hard to say definitively whether one is greater than the other based solely on numerical comparison. Furthermore, thinking about this numerically, let's subtract both numbers: \`\`\` 9.9 - 9.11 = ? \`\`\` Subtracting step by step: \- Subtract the ones: 9 - 9 = 0 \- Subtract the tenths: 9 tenths - 1 tenth = 8 tenths \- Subtract the hundredths: Since there are no hundredths in 9.9, it's like 0 hundredths minus 1 hundredth. But you can't subtract a larger number from a smaller one directly, so you'd need to borrow: \`\`\` 9.90 \-9.11 \------- \`\`\` Starting from the rightmost digit (hundredths place): \- Subtract hundredths: 0 - 1 → Can’t do that, so borrow 1 tenth. This turns the 0 into 10 and reduces the tenths place by 1. Now it looks like: \- Tenths place becomes 8 (from 9 before borrowing) Then subtracting in each place: \- Hundredths: 10 - 1 = 9 \- Tenths: 8 - 1 = 7 \- Ones: Still 9 - 9 = 0 So the result is 0.79. Since the difference is positive, this means that 9.90 is greater than 9.11 by 0.79. But wait—if I only have one decimal place for both numbers: \- 9.1 vs. 9.9 Then, subtracting would give -0.8 (since 9.1 - 9.9 = -0.8), indicating that 9.9 is greater than 9.1. However, when considering two decimal places, it's also clear that 9.90 > 9.11. Therefore, the conclusion depends on how precise we want to be: \- If considering up to tenths place only (ignoring hundredths), then 9.9 is greater than 9.1. \- If considering up to hundredths place explicitly, as in 9.11 versus 9.90, then 9.90 > 9.11. So, depending on the level of precision or decimal places required, the answer might seem slightly different. But generally, when writing numbers like this without specifying the number of decimal places: \- "9.11" is two decimal places, and \- "9.9" is one decimal place. If we're comparing 9.11 to 9.90 (adding an extra zero for consistency), then yes, 9.90 is greater than 9.11. So in the end: \*\*Yes, 9.11 is less than 9.9 when considering up to two decimal places.\*\* But if someone truncates or rounds numbers differently, there could be some confusion. To avoid ambiguity, it's best to specify how many decimal places you're using when making such comparisons. </think> When comparing the numerical values of 9.11 and 9.9: \- If considering up to two decimal places: \- \*\*9.11\*\* is less than \*\*9.90\*\* This conclusion is based on explicitly aligning both numbers to two decimal places for a clear comparison. However, if only one decimal place is considered: \- \*\*9.1\*\* (derived from 9.11) is still less than \*\*9.9\*\* In summary, regardless of the number of decimal places considered, \*\*9.9\*\* remains greater than \*\*9.11\*\* when properly aligned for comparison. \*\*Final Answer:\*\* No, 9.11 is not greater than 9.9; rather, 9.11 is less than 9.9 when both are compared up to two decimal places.
2025-01-24T04:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1i8n59q/is_911_greater_than_99_deepseekr17b/
jloverich
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8n59q
false
null
t3_1i8n59q
/r/LocalLLaMA/comments/1i8n59q/is_911_greater_than_99_deepseekr17b/
false
false
self
3
null
Deepseek refuses my request based on OpenAI's policies?
1
[removed]
2025-01-24T04:37:48
https://www.reddit.com/r/LocalLLaMA/comments/1i8ngpd/deepseek_refuses_my_request_based_on_openais/
unknown9645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ngpd
false
null
t3_1i8ngpd
/r/LocalLLaMA/comments/1i8ngpd/deepseek_refuses_my_request_based_on_openais/
false
false
nsfw
1
null
With the advent of enormous models such as R1, do any of you want me to download a model for you, and ship it to you on a storage device?
1
[removed]
2025-01-24T04:58:56
https://www.reddit.com/r/LocalLLaMA/comments/1i8ntnh/with_the_advent_of_enormous_models_such_as_r1_do/
Leading-Country3966
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ntnh
false
null
t3_1i8ntnh
/r/LocalLLaMA/comments/1i8ntnh/with_the_advent_of_enormous_models_such_as_r1_do/
false
false
self
1
null
Deepseek-R1-Zero API available?
12
Hey guys deepseek seems to only provide API for R1 and not for R1-Zero, so is there another platform where i can find API for R1-Zero? If there's no API available, what GPUs do i need to run inference on R1-Zero?
2025-01-24T05:04:59
https://www.reddit.com/r/LocalLLaMA/comments/1i8nxpt/deepseekr1zero_api_available/
Ok-Engineering5104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8nxpt
false
null
t3_1i8nxpt
/r/LocalLLaMA/comments/1i8nxpt/deepseekr1zero_api_available/
false
false
self
12
null
Snatching a RTX on a budget post 50s launch?
3
What do you think, how likely is it that once the new 50 series is available, there ought to be a drop of 40 series cards? I just checked eBay and 4090 is still going for ~1300€ o.o...
2025-01-24T05:19:18
https://www.reddit.com/r/LocalLLaMA/comments/1i8o6bn/snatching_a_rtx_on_a_budget_post_50s_launch/
IngwiePhoenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8o6bn
false
null
t3_1i8o6bn
/r/LocalLLaMA/comments/1i8o6bn/snatching_a_rtx_on_a_budget_post_50s_launch/
false
false
self
3
null
Any IDE extensions that do multiple autocomplete suggestions?
2
I'm looking for something that can continuously generate autocomplete suggestions, similar to the GitHub Copilot extension, but the model is run locally. What could potentially be better is if it could query different self-hosted LLMs at once and then continuously to generate unlimited number of suggestions.
2025-01-24T05:42:22
https://www.reddit.com/r/LocalLLaMA/comments/1i8ojpl/any_ide_extensions_that_do_multiple_autocomplete/
tony__Y
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ojpl
false
null
t3_1i8ojpl
/r/LocalLLaMA/comments/1i8ojpl/any_ide_extensions_that_do_multiple_autocomplete/
false
false
self
2
null
Default Github CoPilot is dumber than Gpt 4o - right?
0
Was ordered to test run this at work. Not a bad project, although I'm not stoked about sending our code offsite - but I'll never complain about getting paid to try out a new toy. Github Copilot extension for VsCode. Some of our simpler javascript code bases. It's bad. It's *really* bad. I'd say my results are on-par to at home using something like Continue.dev (not advocating for it, just what I have now) and Llama3.1 8b. If I use Codestral 22b or Qwen Coder 32b at home, then forget it. Copilot is in the dust. That said - Chatgpt4o, whether used in the site, app, or api, is not dumb by any metric. If i manually toss all of the content into Chatgpt4o's website it gets the job done very well. I look online and see disagreement on what models actually power Copilot. Is it still partially using 3.5-Turbo? Is it using some unreleased "mini" version?
2025-01-24T05:47:52
https://www.reddit.com/r/LocalLLaMA/comments/1i8omuy/default_github_copilot_is_dumber_than_gpt_4o_right/
ForsookComparison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8omuy
false
null
t3_1i8omuy
/r/LocalLLaMA/comments/1i8omuy/default_github_copilot_is_dumber_than_gpt_4o_right/
false
false
self
0
null
Problem Related to OCR
1
[removed]
2025-01-24T06:02:56
https://www.reddit.com/r/LocalLLaMA/comments/1i8ovc6/problem_related_to_ocr/
AppropriateSuit1017
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ovc6
false
null
t3_1i8ovc6
/r/LocalLLaMA/comments/1i8ovc6/problem_related_to_ocr/
false
false
self
1
null
Can I finetune one of the 1.5B Quants of R1?
2
Is finetuning a reasoning model a thing?
2025-01-24T06:38:57
https://www.reddit.com/r/LocalLLaMA/comments/1i8pent/can_i_finetune_one_of_the_15b_quants_of_r1/
CaptTechno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8pent
false
null
t3_1i8pent
/r/LocalLLaMA/comments/1i8pent/can_i_finetune_one_of_the_15b_quants_of_r1/
false
false
self
2
null
ollama on mac via dmg install or brew install
1
[removed]
2025-01-24T06:47:48
https://www.reddit.com/r/LocalLLaMA/comments/1i8pj8z/ollama_on_mac_via_dmg_install_or_brew_install/
discoveringnature12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8pj8z
false
null
t3_1i8pj8z
/r/LocalLLaMA/comments/1i8pj8z/ollama_on_mac_via_dmg_install_or_brew_install/
false
false
self
1
null
Deepseek r1 (full) intelligence rate?
0
Its been a few days since full r1 released. Openai said gpt3 was toddler level, gpt4 as a smart high schooler and o1 as phd level intelligence. what would deepseek r1 level intelligence be?
2025-01-24T06:49:35
https://www.reddit.com/r/LocalLLaMA/comments/1i8pk55/deepseek_r1_full_intelligence_rate/
Notdesciplined
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8pk55
false
null
t3_1i8pk55
/r/LocalLLaMA/comments/1i8pk55/deepseek_r1_full_intelligence_rate/
false
false
self
0
null
My plain ‘ol english take on planning v. reasoning in LLMs. What’s yours?
1
[removed]
2025-01-24T06:50:35
https://i.redd.it/6d5gkiti2wee1.jpeg
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1i8pkn0
false
null
t3_1i8pkn0
/r/LocalLLaMA/comments/1i8pkn0/my_plain_ol_english_take_on_planning_v_reasoning/
false
false
https://a.thumbs.redditm…09aZWzkFBeg0.jpg
1
{'enabled': True, 'images': [{'id': 'fPIS57vtmIkKmBoHKGIuYYC7CpqZTS-XKDRXevvR4eg', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/6d5gkiti2wee1.jpeg?width=108&crop=smart&auto=webp&s=9b896b8a5c563ab2083d48f3dc1d4dd56c36a5bc', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/6d5gkiti2wee1.jpeg?width=216&crop=smart&auto=webp&s=4fa6b7fdcd8b8e2814d8319be611d2488d008338', 'width': 216}, {'height': 130, 'url': 'https://preview.redd.it/6d5gkiti2wee1.jpeg?width=320&crop=smart&auto=webp&s=3acb7632d97deafba2ebf67b456a12538cb1b15e', 'width': 320}, {'height': 261, 'url': 'https://preview.redd.it/6d5gkiti2wee1.jpeg?width=640&crop=smart&auto=webp&s=896cf9831c61e5903cbd094c50a5c544a7d91fb6', 'width': 640}, {'height': 392, 'url': 'https://preview.redd.it/6d5gkiti2wee1.jpeg?width=960&crop=smart&auto=webp&s=917743aa61335d84ff32a82d2db5dd27fc448297', 'width': 960}, {'height': 441, 'url': 'https://preview.redd.it/6d5gkiti2wee1.jpeg?width=1080&crop=smart&auto=webp&s=a591a2b56692c0cf11de25edc8c4f9b6e75a1b46', 'width': 1080}], 'source': {'height': 527, 'url': 'https://preview.redd.it/6d5gkiti2wee1.jpeg?auto=webp&s=825b7ab33e6fbcf073a0aef6b6bf1e33ae0fa99b', 'width': 1290}, 'variants': {}}]}
Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback
16
2025-01-24T07:08:28
https://arxiv.org/abs/2501.10799
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1i8ptsj
false
null
t3_1i8ptsj
/r/LocalLLaMA/comments/1i8ptsj/stepkto_optimizing_mathematical_reasoning_through/
false
false
default
16
null
Is it possible to control thinking process into reasoning model?
2
For example I would like to instruct how to think is some cases or totally change behavior: ask to think by another language. Deepseek-R1 thinking process see my request, but ignore it in general and it affect only final output to user.
2025-01-24T07:20:18
https://www.reddit.com/r/LocalLLaMA/comments/1i8pzkz/is_it_possible_to_control_thinking_process_into/
bymihaj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8pzkz
false
null
t3_1i8pzkz
/r/LocalLLaMA/comments/1i8pzkz/is_it_possible_to_control_thinking_process_into/
false
false
self
2
null
where can we find Distilled R1 Evaluation for different quants
6
Looking for 4bit , 8bit quant scores .
2025-01-24T07:33:09
https://www.reddit.com/r/LocalLLaMA/comments/1i8q5vb/where_can_we_find_distilled_r1_evaluation_for/
sahil1572
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8q5vb
false
null
t3_1i8q5vb
/r/LocalLLaMA/comments/1i8q5vb/where_can_we_find_distilled_r1_evaluation_for/
false
false
self
6
null
Elon uses openwebui?!
0
2025-01-24T07:42:19
https://i.redd.it/7lc6i1npbwee1.png
umarmnaq
i.redd.it
1970-01-01T00:00:00
0
{}
1i8qa4c
false
null
t3_1i8qa4c
/r/LocalLLaMA/comments/1i8qa4c/elon_uses_openwebui/
false
false
https://b.thumbs.redditm…01Pu8g5iizHc.jpg
0
{'enabled': True, 'images': [{'id': 'I94yvBWA19Li3AY69FFEfkpi36efgxgCis0nvwDWSKk', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/7lc6i1npbwee1.png?width=108&crop=smart&auto=webp&s=fecc143da8241612147dc6d4dd3a94180d3e3410', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/7lc6i1npbwee1.png?width=216&crop=smart&auto=webp&s=296aa927fa0b0b688e5683efc6c9148fe7a9c02d', 'width': 216}, {'height': 265, 'url': 'https://preview.redd.it/7lc6i1npbwee1.png?width=320&crop=smart&auto=webp&s=0361ea164388b4d6fab428cbb66d71f079e1f8fa', 'width': 320}, {'height': 531, 'url': 'https://preview.redd.it/7lc6i1npbwee1.png?width=640&crop=smart&auto=webp&s=849d38eff68d794892a229aeb89af12f82abf109', 'width': 640}], 'source': {'height': 621, 'url': 'https://preview.redd.it/7lc6i1npbwee1.png?auto=webp&s=b6dff18de4cea090a4a4ceaef49d021c00df35f5', 'width': 748}, 'variants': {}}]}
List of top Open Source Chat UI for ollama/any LLM in general. (community edition)
1
[removed]
2025-01-24T07:48:39
https://www.reddit.com/r/LocalLLaMA/comments/1i8qd1z/list_of_top_open_source_chat_ui_for_ollamaany_llm/
VisibleLawfulness246
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8qd1z
false
null
t3_1i8qd1z
/r/LocalLLaMA/comments/1i8qd1z/list_of_top_open_source_chat_ui_for_ollamaany_llm/
false
false
self
1
null
Need help with server configuration for Llama 3.3:70B
1
[removed]
2025-01-24T08:01:30
https://www.reddit.com/r/LocalLLaMA/comments/1i8qjb0/need_help_with_server_configuration_for_llama/
Schmiddi995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8qjb0
false
null
t3_1i8qjb0
/r/LocalLLaMA/comments/1i8qjb0/need_help_with_server_configuration_for_llama/
false
false
self
1
null
Economist: "China’s AI industry has almost caught up with America’s"
93
In a recent article, The Economist claims that Chinese AI models are "more open and more effective" and "DeepSeek’s llm is not only bigger than many of its Western counterparts—it is also better, matched only by the proprietary models at Google and Openai." https://preview.redd.it/ucks1vgggwee1.png?width=360&format=png&auto=webp&s=57e7a71f13589a314f53cda9a39bd3ba318ec59b The article goes on to explain how DeepSeek is more effective thanks to a series of improvements, and more open, not only in terms of availability but also of research transparency: "This permissiveness is matched by a remarkable openness: the two companies publish papers whenever they release new models that provide a wealth of detail on the techniques used to improve their performance." Worth a read: [https://archive.is/vAop1#selection-1373.91-1373.298](https://archive.is/vAop1#selection-1373.91-1373.298)
2025-01-24T08:09:10
https://www.reddit.com/r/LocalLLaMA/comments/1i8qmwv/economist_chinas_ai_industry_has_almost_caught_up/
mayalihamur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8qmwv
false
null
t3_1i8qmwv
/r/LocalLLaMA/comments/1i8qmwv/economist_chinas_ai_industry_has_almost_caught_up/
false
false
https://a.thumbs.redditm…4CfG1prbLj84.jpg
93
null
lol, TikTok releasing a Curosr-like IDE was not in my bingo card this year... rip your data
1
2025-01-24T08:11:29
https://i.redd.it/vb5vp25wgwee1.gif
juanviera23
i.redd.it
1970-01-01T00:00:00
0
{}
1i8qnzp
false
null
t3_1i8qnzp
/r/LocalLLaMA/comments/1i8qnzp/lol_tiktok_releasing_a_curosrlike_ide_was_not_in/
false
false
default
1
{'enabled': True, 'images': [{'id': 'vb5vp25wgwee1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=108&crop=smart&format=png8&s=18614436c8ad9958a450b5d3c0231d1d94656c97', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=216&crop=smart&format=png8&s=33a50d19e2452418ecf5d53cd55af369206d9937', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=320&crop=smart&format=png8&s=bac53cb98bd2ef82b60672d82794a2bd14be179d', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=640&crop=smart&format=png8&s=d66e003dbbef9fd2f5144989602b58605881f5bc', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=960&crop=smart&format=png8&s=abdec013c7a8910a60b287cfa493020e2c424f15', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=1080&crop=smart&format=png8&s=652a3159dea9f11a63b1b7d7ac28546e7f1d6e04', 'width': 1080}], 'source': {'height': 1013, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?format=png8&s=78c715926347aabb6edcdde6dfc3409423845e69', 'width': 1800}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=108&crop=smart&s=d6efefd7e97b62b6a0cbc3134a009d117c95a913', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=216&crop=smart&s=20759f7363c0b8d288582d228b85cdb2d5ea32e7', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=320&crop=smart&s=249a950f5db3560034a490cae46ba506fd38e136', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=640&crop=smart&s=d0e572a8ccec0e1349e82aafb42c6c2c3dcbb6d7', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=960&crop=smart&s=97f695ec273d74ba715004ebdf0950994b9448b0', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=1080&crop=smart&s=73216c414e91f2907a449181f76e9192e2ceb3cd', 'width': 1080}], 'source': {'height': 1013, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?s=469084fa5b9813d80342f86137641a292a0c09da', 'width': 1800}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=108&format=mp4&s=cda1c5aeb359d6e66aa3d681a0118e0d362902d3', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=216&format=mp4&s=ce2d486ca98a8bc5761cb9eb25fea473afa0b5d2', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=320&format=mp4&s=333c9cbeecbc1aca4accf6eb1f06f068a1596ee2', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=640&format=mp4&s=c66fc5888bb513d9f6960223462ccfe879e3b47c', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=960&format=mp4&s=a79e619ee690f0e04107f8f583d54b28a4d944e5', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?width=1080&format=mp4&s=a103c34bf65a3ae833add7ed2d3d9fc4445d124c', 'width': 1080}], 'source': {'height': 1013, 'url': 'https://preview.redd.it/vb5vp25wgwee1.gif?format=mp4&s=0f7d1bcf62f805171345d6ee2d15eb3455857012', 'width': 1800}}}}]}
Help with Llamacpp parameters
2
I have issues with CPU-only inference on a long ctx user input both on Llamacpp and Koboldcpp. For small input, everything works fine. But for long ctx user input (10k+ ctx), inference falls apart. However using the same prompt and the same model on LM Studio, CPU only inference works just fine. Therefore, it has to be my hyper-parameter selection issue. Any advice would be appritiated... Inference on dual-CPU system, RAM>300Gb. \`\`\` .\\llama-cli.exe --model "C:\\xLLMs\\lmstudio-community\\Llama-3.3-70B-Instruct-GGUF\\Llama-3.3-70B-Instruct-Q8\_0-00001-of-00002.gguf" --file "C:\\xLLMs\\+prompts\\promptQAM.txt" --n-predict 5024 --threads 32 --numa distribute --ctx-size 32768 --top\_k 20 --top\_p 0.95 --temp 0.5 --repeat\_penalty 1.05 --repeat\_last\_n 128 --seed 42 \`\`\`
2025-01-24T08:13:46
https://www.reddit.com/r/LocalLLaMA/comments/1i8qozf/help_with_llamacpp_parameters/
dodo13333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8qozf
false
null
t3_1i8qozf
/r/LocalLLaMA/comments/1i8qozf/help_with_llamacpp_parameters/
false
false
self
2
null
Elon Musk Challenges Stargate AI Poject's $500B Budget
1
[removed]
2025-01-24T08:22:43
[deleted]
1970-01-01T00:00:00
0
{}
1i8qt24
false
null
t3_1i8qt24
/r/LocalLLaMA/comments/1i8qt24/elon_musk_challenges_stargate_ai_pojects_500b/
false
false
default
1
null
What a crazy week in AI
1
[removed]
2025-01-24T08:29:27
https://www.reddit.com/r/LocalLLaMA/comments/1i8qw50/what_a_crazy_week_in_ai/
Aggressive-Foot7427
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8qw50
false
null
t3_1i8qw50
/r/LocalLLaMA/comments/1i8qw50/what_a_crazy_week_in_ai/
false
false
self
1
null
I actually really like the idea of this. It won’t be long before they can look at your PC on call as well.
41
2025-01-24T08:39:17
https://i.redd.it/3vxre6rklwee1.png
omnisvosscio
i.redd.it
1970-01-01T00:00:00
0
{}
1i8r0on
false
null
t3_1i8r0on
/r/LocalLLaMA/comments/1i8r0on/i_actually_really_like_the_idea_of_this_it_wont/
false
false
https://b.thumbs.redditm…tPRX5L3z-8NE.jpg
41
{'enabled': True, 'images': [{'id': 'nK3Jo1TiQqYPUCXuQZaCnh7i6czpMsVo3lEsb43McM4', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/3vxre6rklwee1.png?width=108&crop=smart&auto=webp&s=7a77a0be09dc3b67ad4b0cdebc102f4063f8363c', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/3vxre6rklwee1.png?width=216&crop=smart&auto=webp&s=128005a18f01e58c83f33645b31d6408bc19fbed', 'width': 216}, {'height': 395, 'url': 'https://preview.redd.it/3vxre6rklwee1.png?width=320&crop=smart&auto=webp&s=6ddbef961cfc76e4dbc466509afe79469c4b921c', 'width': 320}], 'source': {'height': 542, 'url': 'https://preview.redd.it/3vxre6rklwee1.png?auto=webp&s=2d252ff37653e4721ddb9579bd45f522f79d178c', 'width': 439}, 'variants': {}}]}
Am i Missing something? Is Deepseek just a skin above chat gpt ? 😂
0
This is the response when i asked deepseek about it's own R1 model!
2025-01-24T08:46:48
https://i.redd.it/gvmy0zg9nwee1.png
semi-column
i.redd.it
1970-01-01T00:00:00
0
{}
1i8r447
false
null
t3_1i8r447
/r/LocalLLaMA/comments/1i8r447/am_i_missing_something_is_deepseek_just_a_skin/
false
false
https://b.thumbs.redditm…Fopx0_2g6kGA.jpg
0
{'enabled': True, 'images': [{'id': '6dEp0PElXyfyeWyPpMvfXOoBPSMCNeIpsd6P4Su7S7w', 'resolutions': [{'height': 170, 'url': 'https://preview.redd.it/gvmy0zg9nwee1.png?width=108&crop=smart&auto=webp&s=16c15b782afe57a299577fc551e82817fe8fa366', 'width': 108}, {'height': 340, 'url': 'https://preview.redd.it/gvmy0zg9nwee1.png?width=216&crop=smart&auto=webp&s=a110e06548cf24443e22e994b561f764da054cd4', 'width': 216}, {'height': 504, 'url': 'https://preview.redd.it/gvmy0zg9nwee1.png?width=320&crop=smart&auto=webp&s=13eb42c47553bd9384542798d2c2adf66f8e2572', 'width': 320}, {'height': 1008, 'url': 'https://preview.redd.it/gvmy0zg9nwee1.png?width=640&crop=smart&auto=webp&s=3ac36f7c8b5c8cbf67d7ca1f85920c740ba64b80', 'width': 640}, {'height': 1512, 'url': 'https://preview.redd.it/gvmy0zg9nwee1.png?width=960&crop=smart&auto=webp&s=f7f11a95ce6ff6d27bafc7ba4ce8aba04ab40783', 'width': 960}, {'height': 1702, 'url': 'https://preview.redd.it/gvmy0zg9nwee1.png?width=1080&crop=smart&auto=webp&s=e691da02c1cc8129146a2265042f4b031f8cd035', 'width': 1080}], 'source': {'height': 1702, 'url': 'https://preview.redd.it/gvmy0zg9nwee1.png?auto=webp&s=0ae4a6715ccb133636110035129403277e5aa4e7', 'width': 1080}, 'variants': {}}]}
EPUB for Arxiv papers?
2
Long-form reading is easier on a tablet or ebook reader. PDFs and HTML are missing this use case. Let’s help Arxiv prioritize it via the petition. Thoughts welcome? Signatures welcome too!
2025-01-24T08:52:43
https://www.change.org/arxiv-papers-EPUB
Doorhacker
change.org
1970-01-01T00:00:00
0
{}
1i8r6v3
false
null
t3_1i8r6v3
/r/LocalLLaMA/comments/1i8r6v3/epub_for_arxiv_papers/
false
false
https://b.thumbs.redditm…jniQTwrTFbZw.jpg
2
{'enabled': False, 'images': [{'id': 'OvEreN13ZVt7XlaiYa05i9B8AHm4RPuh7oXBz7Tt8ss', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/W5gfQbtNOwMVwNJ1yMbyYws9oL2BRZGBDEEB6hklm38.jpg?width=108&crop=smart&auto=webp&s=42395aadc7a85e34ca0863666bea345abe04eb17', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/W5gfQbtNOwMVwNJ1yMbyYws9oL2BRZGBDEEB6hklm38.jpg?width=216&crop=smart&auto=webp&s=4e22f7eb1890b541da4295f60318577f26545f4d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/W5gfQbtNOwMVwNJ1yMbyYws9oL2BRZGBDEEB6hklm38.jpg?width=320&crop=smart&auto=webp&s=cbd68398f3df1bfb47997b7187d5abc0c9e7ff19', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/W5gfQbtNOwMVwNJ1yMbyYws9oL2BRZGBDEEB6hklm38.jpg?width=640&crop=smart&auto=webp&s=3d87ff7a4c8476055e8e6e3b7da4963f40b4b73f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/W5gfQbtNOwMVwNJ1yMbyYws9oL2BRZGBDEEB6hklm38.jpg?width=960&crop=smart&auto=webp&s=0e1be818cfa5326597990a63d4c3b6ee0ff0275f', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/W5gfQbtNOwMVwNJ1yMbyYws9oL2BRZGBDEEB6hklm38.jpg?width=1080&crop=smart&auto=webp&s=3a129790c22e711e93bc027d5876c01e58bcdcd0', 'width': 1080}], 'source': {'height': 820, 'url': 'https://external-preview.redd.it/W5gfQbtNOwMVwNJ1yMbyYws9oL2BRZGBDEEB6hklm38.jpg?auto=webp&s=964b13bd9b8b2e9e26ef1d6518191cecf6a65176', 'width': 1456}, 'variants': {}}]}
Open source version of operator & agents
10
2025-01-24T09:01:12
https://i.redd.it/ozncahpjpwee1.png
omnisvosscio
i.redd.it
1970-01-01T00:00:00
0
{}
1i8ratc
false
null
t3_1i8ratc
/r/LocalLLaMA/comments/1i8ratc/open_source_version_of_operator_agents/
false
false
https://b.thumbs.redditm…FK-wUqCWtoLc.jpg
10
{'enabled': True, 'images': [{'id': 'ohBcOQ76G5wnEUEEId8qbXXdK2jKlzvt4fVDQpj7FA0', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ozncahpjpwee1.png?width=108&crop=smart&auto=webp&s=2968bb8a896cd5816a079060517fc5816d184935', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ozncahpjpwee1.png?width=216&crop=smart&auto=webp&s=3bb1de4686cdb8d758b40349fd6aeaeec62cec4f', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ozncahpjpwee1.png?width=320&crop=smart&auto=webp&s=f1b0b6c5f28d4e1e5a38ed4c2d90c3e800b22085', 'width': 320}], 'source': {'height': 927, 'url': 'https://preview.redd.it/ozncahpjpwee1.png?auto=webp&s=84ad4f85492839cef9eb6e9fc945f26b5429646b', 'width': 440}, 'variants': {}}]}
How are you all running DeepSeek V3?
1
[removed]
2025-01-24T09:10:25
https://www.reddit.com/r/LocalLLaMA/comments/1i8rf12/how_are_you_all_running_deepseek_v3/
largelanguidmonkey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8rf12
false
null
t3_1i8rf12
/r/LocalLLaMA/comments/1i8rf12/how_are_you_all_running_deepseek_v3/
false
false
self
1
null
Best private model for explaining code
3
We know that these LLMs can generate good code. But which is the best at explaining good code? I’d like to know y’all’s thoughts on this for private LLMs (o1, deepseek r1, sonnet, and Gemini-1206) and local LLMs (QwQ, deepseekr1 qwen 32B, llama 3.3, and more) Let me know what y’all think? You can really benchmarking explaining can you?
2025-01-24T09:14:51
https://www.reddit.com/r/LocalLLaMA/comments/1i8rh1t/best_private_model_for_explaining_code/
Glittering-Bag-4662
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8rh1t
false
null
t3_1i8rh1t
/r/LocalLLaMA/comments/1i8rh1t/best_private_model_for_explaining_code/
false
false
self
3
null
Notes on Deepseek r1: Just how good it is compared to OpenAI o1
888
Finally, there is a model worthy of the hype it has been getting since Claude 3.6 Sonnet. Deepseek has released something anyone hardly expected: a reasoning model on par with OpenAI’s o1 within a month of the v3 release, with an MIT license and 1/20th of o1’s cost. This is easily the best release since GPT-4. It's wild; the general public seems excited about this, while the big AI labs are probably scrambling. It feels like things are about to speed up in the AI world. And it's all thanks to this new DeepSeek-R1 model and how they trained it.  Some key details from the paper * Pure RL (GRPO) on v3-base to get r1-zero. (No Monte-Carlo Tree Search or Process Reward Modelling) * The model uses “Aha moments” as pivot tokens to reflect and reevaluate answers during CoT. * To overcome r1-zero’s readability issues, v3 was SFTd on cold start data. * Distillation works, small models like Qwen and Llama trained over r1 generated data show significant improvements. Here’s an overall r0 pipeline * v3 base + RL (GRPO) → r1-zero r1 training pipeline. 1. **DeepSeek-V3 Base** \+ SFT (Cold Start Data) → **Checkpoint 1** 2. **Checkpoint 1** \+ RL (GRPO + Language Consistency) → **Checkpoint 2** 3. **Checkpoint 2** used to Generate Data (Rejection Sampling) 4. **DeepSeek-V3 Base** \+ SFT (Generated Data + Other Data) → **Checkpoint 3** 5. **Checkpoint 3** \+ RL (Reasoning + Preference Rewards) → **DeepSeek-R1** We know the benchmarks, but just how good is it? # Deepseek r1 vs OpenAI o1. So, for this, I tested r1 and o1 side by side on complex reasoning, math, coding, and creative writing problems. These are the questions that o1 solved only or by none before. Here’s what I found: * For **reasoning**, it is much better than any previous SOTA model until o1. It is better than o1-preview but a notch below o1. This is also shown in the ARC AGI bench. * **Mathematics**: It's also the same for mathematics; r1 is a killer, but o1 is better. * **Coding**: I didn’t get to play much, but on first look, it’s up there with o1, and the fact that it costs 20x less makes it the practical winner. * **Writing**: This is where R1 takes the lead. It gives the same vibes as early Opus. It’s free, less censored, has much more personality, is easy to steer, and is very creative compared to the rest, even o1-pro. What interested me was how free the model sounded and thought traces were, akin to human internal monologue. Perhaps this is because of the less stringent RLHF, unlike US models. The fact that you can get r1 from v3 via pure RL was the most surprising. For in-depth analysis, commentary, and remarks on the Deepseek r1, check out this blog post: [Notes on Deepseek r1](https://composio.dev/blog/notes-on-the-new-deepseek-r1/) What are your experiences with the new Deepseek r1? Did you find the model useful for your use cases?
2025-01-24T09:44:13
https://www.reddit.com/r/LocalLLaMA/comments/1i8rujw/notes_on_deepseek_r1_just_how_good_it_is_compared/
SunilKumarDash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8rujw
false
null
t3_1i8rujw
/r/LocalLLaMA/comments/1i8rujw/notes_on_deepseek_r1_just_how_good_it_is_compared/
false
false
self
888
{'enabled': False, 'images': [{'id': 'KOCTO81N5YFCBxYf1XRYj-3kbf1W-nyOKZW578Y3vj8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Gkb21pGEkuITtFcovTPvupvsy9pDG3-wn7k7nOTvIlc.jpg?width=108&crop=smart&auto=webp&s=cce45565deec86a04654b4612e4f0b587db80fd9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Gkb21pGEkuITtFcovTPvupvsy9pDG3-wn7k7nOTvIlc.jpg?width=216&crop=smart&auto=webp&s=8f175cde9e479907c5a6785d97ea988b25b50b6c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Gkb21pGEkuITtFcovTPvupvsy9pDG3-wn7k7nOTvIlc.jpg?width=320&crop=smart&auto=webp&s=0a1b1934d279c32480cab30c91fd3fb5182d26c8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Gkb21pGEkuITtFcovTPvupvsy9pDG3-wn7k7nOTvIlc.jpg?width=640&crop=smart&auto=webp&s=41cc3d2f170584ad8f20f21881b2b7baac6950e0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Gkb21pGEkuITtFcovTPvupvsy9pDG3-wn7k7nOTvIlc.jpg?width=960&crop=smart&auto=webp&s=d57da3d7a44e7398fde14e12ce0c6e4b519b1d40', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Gkb21pGEkuITtFcovTPvupvsy9pDG3-wn7k7nOTvIlc.jpg?width=1080&crop=smart&auto=webp&s=68eccb2b944ffab469b07ab8318cc1dd9bb739f6', 'width': 1080}], 'source': {'height': 639, 'url': 'https://external-preview.redd.it/Gkb21pGEkuITtFcovTPvupvsy9pDG3-wn7k7nOTvIlc.jpg?auto=webp&s=8f4dff05a5e2c20c0c109313f4d707e1ea848dd7', 'width': 1136}, 'variants': {}}]}
Are there LLM fintuned as a classifier?
1
Currently I use a promt like "if the following text is in english answer 0, in french answer 1, in german answer 2" or "what emotion is this, describe with one word". Are there models that is specifcally finetuned to work like this? Preferably across multiple languages and topics.
2025-01-24T09:48:21
https://www.reddit.com/r/LocalLLaMA/comments/1i8rwgi/are_there_llm_fintuned_as_a_classifier/
Suspicious_Loads
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8rwgi
false
null
t3_1i8rwgi
/r/LocalLLaMA/comments/1i8rwgi/are_there_llm_fintuned_as_a_classifier/
false
false
self
1
null
NVIDIA 50 series bottlenecks
10
Don't know how it translates to workloads regarding AI, but there was some questions about why we don't see better performance when the memory bandwidth is substantially higher. And this review mentions that there could potentially be a CPU or PCIe bottleneck. There also seems to be problems with older risers, for anyone that tries to cram a bunch of cards in the same case... https://youtu.be/5TJk_P2A0Iw
2025-01-24T09:55:35
https://www.reddit.com/r/LocalLLaMA/comments/1i8rzts/nvidia_50_series_bottlenecks/
Cane_P
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8rzts
false
null
t3_1i8rzts
/r/LocalLLaMA/comments/1i8rzts/nvidia_50_series_bottlenecks/
false
false
self
10
{'enabled': False, 'images': [{'id': 'z8cFpExwJGdwiFTejAJ47eZdizZfpkfgSUzElczQ_Xw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Up_gAe1gRuJe0L_tV0yVuCrgVYtI8n_0O40mmnVeP34.jpg?width=108&crop=smart&auto=webp&s=f46688f4a9cda7133ca009ae04cb5d14843e979b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Up_gAe1gRuJe0L_tV0yVuCrgVYtI8n_0O40mmnVeP34.jpg?width=216&crop=smart&auto=webp&s=218840c28267b58d8c9584d34220fea79398794b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Up_gAe1gRuJe0L_tV0yVuCrgVYtI8n_0O40mmnVeP34.jpg?width=320&crop=smart&auto=webp&s=bd5644b7fe25b6e024b0b5027ca8ba5b32fc1ede', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Up_gAe1gRuJe0L_tV0yVuCrgVYtI8n_0O40mmnVeP34.jpg?auto=webp&s=846cc3a7303c71f272268995a027c93983eabf2b', 'width': 480}, 'variants': {}}]}
I just wanted to ask the deepseek model to draw a circle. But it said I should draw it myself :(
1
2025-01-24T09:59:07
https://i.redd.it/6tqby1x50xee1.png
Awkward_Buddy7350
i.redd.it
1970-01-01T00:00:00
0
{}
1i8s1hb
false
null
t3_1i8s1hb
/r/LocalLLaMA/comments/1i8s1hb/i_just_wanted_to_ask_the_deepseek_model_to_draw_a/
false
false
https://b.thumbs.redditm…HDWkTGuJ7g-g.jpg
1
{'enabled': True, 'images': [{'id': 'wLuAmikIpp0NzwyDndi0OXRGDSPF_-uMRfns6mr6VHk', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6tqby1x50xee1.png?width=108&crop=smart&auto=webp&s=e784f177873bb5d8378afaaa58c06a725d2fbec8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/6tqby1x50xee1.png?width=216&crop=smart&auto=webp&s=82ef75ee545f3a39944148c6f58c71a0ed83cb93', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/6tqby1x50xee1.png?width=320&crop=smart&auto=webp&s=f2d280541f6ed0d975d9bb80f81a582c7733772d', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/6tqby1x50xee1.png?width=640&crop=smart&auto=webp&s=2bc2404734fc825bbd5a5c591484f898fe192553', 'width': 640}], 'source': {'height': 486, 'url': 'https://preview.redd.it/6tqby1x50xee1.png?auto=webp&s=4e3bbddb6aa35280318aa1a8061cc855c40a3ae7', 'width': 867}, 'variants': {}}]}
I just wanted to ask the deepseek model to draw a circle. But it said I should draw it myself
2
2025-01-24T09:59:57
https://i.redd.it/nli1n8fb0xee1.png
Awkward_Buddy7350
i.redd.it
1970-01-01T00:00:00
0
{}
1i8s1w1
false
null
t3_1i8s1w1
/r/LocalLLaMA/comments/1i8s1w1/i_just_wanted_to_ask_the_deepseek_model_to_draw_a/
false
false
https://b.thumbs.redditm…Td2EHXr5LiFA.jpg
2
{'enabled': True, 'images': [{'id': 'hK-uEstqCOcF_O3db203WtTGfE4HkOTCAaEq-lVOgYg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/nli1n8fb0xee1.png?width=108&crop=smart&auto=webp&s=dd0109ae7790f7520a923c610d1908bf6847f475', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/nli1n8fb0xee1.png?width=216&crop=smart&auto=webp&s=32dcaf12399a2a1fdfea57fcea870eb765d1b433', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/nli1n8fb0xee1.png?width=320&crop=smart&auto=webp&s=9bfcc5d45a28f44ab74281e2bbc346a7883383c6', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/nli1n8fb0xee1.png?width=640&crop=smart&auto=webp&s=97d130df4d3af3429c2e90ed0d2487ea85442eb9', 'width': 640}], 'source': {'height': 486, 'url': 'https://preview.redd.it/nli1n8fb0xee1.png?auto=webp&s=0ba607d1ca4075c78ed19520b4b3550a281843fc', 'width': 867}, 'variants': {}}]}
How to find job as "Function Definition Writer" for AI Model?
1
[removed]
2025-01-24T10:12:06
https://www.reddit.com/r/LocalLLaMA/comments/1i8s7wn/how_to_find_job_as_function_definition_writer_for/
Spare_Flamingo7879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8s7wn
false
null
t3_1i8s7wn
/r/LocalLLaMA/comments/1i8s7wn/how_to_find_job_as_function_definition_writer_for/
false
false
self
1
null
LLM for categorization
1
[removed]
2025-01-24T10:14:43
https://www.reddit.com/r/LocalLLaMA/comments/1i8s96f/llm_for_categorization/
Turbulent_Debt3405
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8s96f
false
null
t3_1i8s96f
/r/LocalLLaMA/comments/1i8s96f/llm_for_categorization/
false
false
self
1
null
Overwhelmed trying to find out information about how to serve Llama-3 70b to multiple users with 128k context
1
[removed]
2025-01-24T10:31:40
https://www.reddit.com/r/LocalLLaMA/comments/1i8shpn/overwhelmed_trying_to_find_out_information_about/
Prince-of-Privacy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8shpn
false
null
t3_1i8shpn
/r/LocalLLaMA/comments/1i8shpn/overwhelmed_trying_to_find_out_information_about/
false
false
self
1
null
Simple Open source tool like AI (Apple Intelligence) but completely private / local using Ollama and Kokoro
29
2025-01-24T10:51:58
https://v.redd.it/3lmdx75e9xee1
namuan
v.redd.it
1970-01-01T00:00:00
0
{}
1i8srsv
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/3lmdx75e9xee1/DASHPlaylist.mpd?a=1740307933%2COWRhYTUxODlmMWEyMGU2NDI1ZjNjMGU5M2U3MDA3OWQ1ZjFjOGQyODNiNjBjMTM2MjZkZmY4M2Q0NzhlOWYzZg%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/3lmdx75e9xee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/3lmdx75e9xee1/HLSPlaylist.m3u8?a=1740307933%2CZmFmZTMzNTFhMjlmMjg3MjYzYjBiMjBhMzE1MzAzMTk5MjhjOWNmNzBiNzU4MzYxNWE2NjdmMzBmNTg2MzY2Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3lmdx75e9xee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1150}}
t3_1i8srsv
/r/LocalLLaMA/comments/1i8srsv/simple_open_source_tool_like_ai_apple/
false
false
https://external-preview…a79a02038c272b30
29
{'enabled': False, 'images': [{'id': 'ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s.png?width=108&crop=smart&format=pjpg&auto=webp&s=e56897fd15772cc7e99666771457ee40c617f22f', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s.png?width=216&crop=smart&format=pjpg&auto=webp&s=8e151ac6b0a102feb96fde1fbd7c4ca8288bbccd', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s.png?width=320&crop=smart&format=pjpg&auto=webp&s=40f6afa8c3cc5cad51288c08854dc96a2a09fea9', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s.png?width=640&crop=smart&format=pjpg&auto=webp&s=39296ea84aa44151422836f19c49a6ffb5ec1d76', 'width': 640}, {'height': 601, 'url': 'https://external-preview.redd.it/ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s.png?width=960&crop=smart&format=pjpg&auto=webp&s=cfa676029f1b1be5cd6f2ebeb7c6da11f1668067', 'width': 960}, {'height': 676, 'url': 'https://external-preview.redd.it/ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d65d6ff367cad567bbd176402d610ae37a71f53d', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZGpreW02NWU5eGVlMaNbrH4VawvqtBNIlI_TN6ZdlIQIRM_6iX5iv_gMzF0s.png?format=pjpg&auto=webp&s=e10ab0207015346288caec6757237b4447123bff', 'width': 1150}, 'variants': {}}]}
Chinese censorship is absolutely more dangerous than western censorship and they are NOT the same.
0
With the DeepSeek discussion, I’m seeing a concerning amount of people claiming western models have worse censorship. - Yes, western models have subjects they won’t discuss. - Yes, western models have biases and preferences. And they are decided by the people who created the model. The “censorship” on western models are **self-imposed** by the creator. DeepSeek’s “historical revisions” and refusal to acknowledge certain aspects of Chinese society is something that, as many point out, *must* be included to avoid retribution from the CCP. The **government** imposed censorship on the models; the government decides which parts of history are true. And there is no recourse for those who oppose it. Even if American companies censor for legal concerns, those are a process—**not** a unilateral decision by the ruling party. After all, you can easily find abliterated models hosted on American websites. **Call out both when you see ‘em**, but do not think they are the same.
2025-01-24T10:52:07
https://www.reddit.com/r/LocalLLaMA/comments/1i8srvn/chinese_censorship_is_absolutely_more_dangerous/
xRolocker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8srvn
false
null
t3_1i8srvn
/r/LocalLLaMA/comments/1i8srvn/chinese_censorship_is_absolutely_more_dangerous/
false
false
self
0
null
"R1-Pro" - a 'Deep Think' Mode for DeepSeek-R1-Distilled Models – Boost Reasoning Effort for Local LLMs
19
2025-01-24T10:59:52
https://v.redd.it/qdylplx0axee1
AaronFeng47
v.redd.it
1970-01-01T00:00:00
0
{}
1i8svno
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/qdylplx0axee1/DASHPlaylist.mpd?a=1740308406%2CNTBkNzAwZmY3NzFlOTIxNTA3MzQ4YWZlZjk5MmNlNjVjZjJkNTcyZTlmYjJhMzEwOWQ4NjU2ODA4MjRjMTg2Zg%3D%3D&v=1&f=sd', 'duration': 114, 'fallback_url': 'https://v.redd.it/qdylplx0axee1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/qdylplx0axee1/HLSPlaylist.m3u8?a=1740308406%2CYzY3YWUzZTBhOTk3NzFhMzgxNTY3M2IzNzFiYzgxMTRlNWNlYmI0ODY0MjY1OGU2NzU4Yzk2M2JlYmU5YjY5Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qdylplx0axee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 966}}
t3_1i8svno
/r/LocalLLaMA/comments/1i8svno/r1pro_a_deep_think_mode_for_deepseekr1distilled/
false
false
https://external-preview…4bfde9740c917871
19
{'enabled': False, 'images': [{'id': 'eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU.png?width=108&crop=smart&format=pjpg&auto=webp&s=426d69908b28e0168f703563f7d8011306086191', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU.png?width=216&crop=smart&format=pjpg&auto=webp&s=3d3aced239a6898ce39d62188d4ad37f76819bd1', 'width': 216}, {'height': 238, 'url': 'https://external-preview.redd.it/eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU.png?width=320&crop=smart&format=pjpg&auto=webp&s=f592d610ddfb6ea9f1fe29d5b3d6a854899de1f8', 'width': 320}, {'height': 477, 'url': 'https://external-preview.redd.it/eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU.png?width=640&crop=smart&format=pjpg&auto=webp&s=8250cdafcb8554484f97b4b6d68bed2785941bd8', 'width': 640}, {'height': 715, 'url': 'https://external-preview.redd.it/eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU.png?width=960&crop=smart&format=pjpg&auto=webp&s=587a641aa314339c87635a8a11c4fab1ba68ceea', 'width': 960}, {'height': 804, 'url': 'https://external-preview.redd.it/eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9f585a90d7198b93c92bee476d5eda31df90bbf8', 'width': 1080}], 'source': {'height': 1042, 'url': 'https://external-preview.redd.it/eGF3YmJreDBheGVlMS3Azf7ct8pfQm1z9XYMY4QGWDLBtvNoRs9oapizu3FU.png?format=pjpg&auto=webp&s=b617a391233a71f25d2d6bd48dd1876929478dc9', 'width': 1398}, 'variants': {}}]}
Any advice on how to create an autonomous trading agent? (just for fun)
0
Exploring Local LLMs for Automated Trading Experiments – Seeking Advice! Hi everyone! I’m diving into building tools for locally running LLMs, and I’d love to use my background in automated trading (I did my master's thesis on it) as a fun first project to learn how to create AI agents using local models. To be clear—this isn't about making profits, just an exciting toy project for my personal enjoyment and learning. The idea: I want to create an agent-driven system where: 1. I provide a CSV file with stock prices. 2. The agent analyzes the data, proposes a trading strategy, and generates Python code to implement it. 3. It then runs a backtest in Python, evaluates the results, and: Refines the strategy based on performance, or Tries a new one using the previous strategies as context. 4. Each iteration should generate: The Python code for the strategy. A written description of the strategy. The backtesting results in CSV or another format. I’d like the process to run indefinitely until I stop it, allowing me to review the results afterward. --- My current progress: I've been using LM Studio for chatbot interactions and running Ollama through LangChain for simple prompts. However, LangChain feels like a vast ecosystem with endless possibilities, and I’m a bit overwhelmed about where to start. I believe LangChain should be capable of handling this project, but I’m wondering: Is LangChain the right choice, or is there a better-suited framework for this type of agent-based workflow? Any advice on structuring the workflow (e.g., chaining, memory, decision loops)? Suggested starting points or resources? Any help or suggestions would be greatly appreciated! And just to reiterate—this is all for fun, and I plan to share my experience with the community once I get it working. Thanks in advance!
2025-01-24T11:12:05
https://www.reddit.com/r/LocalLLaMA/comments/1i8t2cz/any_advice_on_how_to_create_an_autonomous_trading/
marloquemegusta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8t2cz
false
null
t3_1i8t2cz
/r/LocalLLaMA/comments/1i8t2cz/any_advice_on_how_to_create_an_autonomous_trading/
false
false
self
0
null
AI-Powered Apps Hit $1 Billion In Sales With ChatGPT And Gemini At The Helm In 2024
1
[removed]
2025-01-24T11:13:08
[deleted]
1970-01-01T00:00:00
0
{}
1i8t2wd
false
null
t3_1i8t2wd
/r/LocalLLaMA/comments/1i8t2wd/aipowered_apps_hit_1_billion_in_sales_with/
false
false
default
1
null
Best general chat models that can run on 16GB of VRAM
1
[removed]
2025-01-24T11:20:53
https://www.reddit.com/r/LocalLLaMA/comments/1i8t6zu/best_general_chat_models_that_can_run_on_16gb_of/
Funtruck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8t6zu
false
null
t3_1i8t6zu
/r/LocalLLaMA/comments/1i8t6zu/best_general_chat_models_that_can_run_on_16gb_of/
false
false
self
1
null
How does Deepseek integerate web search in its chat? Does it index the entire web?
1
The "Search Web" feature is resonably fast for retrieving content of over 30+ results in under a few seconds. Do you think they just perform a google/search API request with relevant keywords generated by the model and then retrieve the content of the pages concurrency? If wish to implement a similar feature, what options do I have? I'm aware of [https://tavily.com/](https://tavily.com/) that provides the searching service but would it be at par with the results DeepSeek Chat web search results? Any one has some experience or insights please do share! :)
2025-01-24T11:36:03
https://www.reddit.com/r/LocalLLaMA/comments/1i8tf5k/how_does_deepseek_integerate_web_search_in_its/
chulbulbulbulpandey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8tf5k
false
null
t3_1i8tf5k
/r/LocalLLaMA/comments/1i8tf5k/how_does_deepseek_integerate_web_search_in_its/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VxEXsQQIwWn1Htk0_r7hYO6ZjDY8qPzU3apPkCm3KOc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eGfngqsh997BSXVG7tJ3NeE9Uoik8zNcIbq61XlJz0s.jpg?width=108&crop=smart&auto=webp&s=8a1a064771af0f6d9c53310324e62606e09219e0', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/eGfngqsh997BSXVG7tJ3NeE9Uoik8zNcIbq61XlJz0s.jpg?width=216&crop=smart&auto=webp&s=c55a02df7d64c44aa0054aebdf19790015c2910e', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/eGfngqsh997BSXVG7tJ3NeE9Uoik8zNcIbq61XlJz0s.jpg?width=320&crop=smart&auto=webp&s=374b64ac1ba8df325fbf7efa51649e75ae15420c', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/eGfngqsh997BSXVG7tJ3NeE9Uoik8zNcIbq61XlJz0s.jpg?width=640&crop=smart&auto=webp&s=588730e746b00b57bd2fdaedb6d0cb7343791909', 'width': 640}], 'source': {'height': 381, 'url': 'https://external-preview.redd.it/eGfngqsh997BSXVG7tJ3NeE9Uoik8zNcIbq61XlJz0s.jpg?auto=webp&s=e2946f7e5d1efb706b318ce84ff268e45d967a52', 'width': 731}, 'variants': {}}]}
Working with an existing model and fine tuning specific images
1
[removed]
2025-01-24T11:53:07
https://www.reddit.com/r/LocalLLaMA/comments/1i8to5a/working_with_an_existing_model_and_fine_tuning/
HeyThereImMartin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8to5a
false
null
t3_1i8to5a
/r/LocalLLaMA/comments/1i8to5a/working_with_an_existing_model_and_fine_tuning/
false
false
self
1
null
Deepseek R1 at #3 above o1 on lmsys leaderboard!
1
2025-01-24T12:06:20
https://i.redd.it/80dy9y5umxee1.jpeg
Junior_Command_9377
i.redd.it
1970-01-01T00:00:00
0
{}
1i8tvtd
false
null
t3_1i8tvtd
/r/LocalLLaMA/comments/1i8tvtd/deepseek_r1_at_3_above_o1_on_lmsys_leaderboard/
false
false
https://a.thumbs.redditm…4ev4Q19zr0P4.jpg
1
{'enabled': True, 'images': [{'id': 'lqRJLOenp-p9OyqFrE-9lq7SCIC9KThSAqzT9WSxPWA', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/80dy9y5umxee1.jpeg?width=108&crop=smart&auto=webp&s=c66f33483018a646a32c87096768960b85e1c39d', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/80dy9y5umxee1.jpeg?width=216&crop=smart&auto=webp&s=43a5ba8999d93a6cfdbbc87ba36191cc6fe0e7b7', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/80dy9y5umxee1.jpeg?width=320&crop=smart&auto=webp&s=84abfa80c85fe4a7e8ca71f36b0cfa86cca74094', 'width': 320}, {'height': 424, 'url': 'https://preview.redd.it/80dy9y5umxee1.jpeg?width=640&crop=smart&auto=webp&s=841d9160dfd31f14bc11fbab38d38857fd430c9d', 'width': 640}, {'height': 636, 'url': 'https://preview.redd.it/80dy9y5umxee1.jpeg?width=960&crop=smart&auto=webp&s=762a988102cbbb6b9a9912892c98f9269d94af9f', 'width': 960}, {'height': 716, 'url': 'https://preview.redd.it/80dy9y5umxee1.jpeg?width=1080&crop=smart&auto=webp&s=9bdbcea2452e4b1fe80ba94586998e97f41aa64c', 'width': 1080}], 'source': {'height': 1674, 'url': 'https://preview.redd.it/80dy9y5umxee1.jpeg?auto=webp&s=b2a958dc34239da910656daef1bf90f299526c06', 'width': 2524}, 'variants': {}}]}
I benchmarked (almost) every model that can fit in 24GB VRAM (Qwens, R1 distils, Mistrals, even Llama 70b gguf)
1,571
2025-01-24T12:08:50
https://i.redd.it/es9l38ezmxee1.png
kyazoglu
i.redd.it
1970-01-01T00:00:00
0
{}
1i8tx5z
false
null
t3_1i8tx5z
/r/LocalLLaMA/comments/1i8tx5z/i_benchmarked_almost_every_model_that_can_fit_in/
false
false
https://b.thumbs.redditm…2NgZs8J1sYXs.jpg
1,571
{'enabled': True, 'images': [{'id': 'a71UVY3iZ22XqOuq7ISrZQJwH8TlP46mQwJ6xDTPy3s', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/es9l38ezmxee1.png?width=108&crop=smart&auto=webp&s=66e7e134c4f79235e4138b789f3a62a01fde1438', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/es9l38ezmxee1.png?width=216&crop=smart&auto=webp&s=d36221230fd406e95cf20070d80780060c2ba898', 'width': 216}, {'height': 163, 'url': 'https://preview.redd.it/es9l38ezmxee1.png?width=320&crop=smart&auto=webp&s=4895d643ba8833c38ad4fddfd2d251f61a3bc0cf', 'width': 320}, {'height': 326, 'url': 'https://preview.redd.it/es9l38ezmxee1.png?width=640&crop=smart&auto=webp&s=6a66f2c3fda0b03915eea1c0a72185b32e17e660', 'width': 640}, {'height': 489, 'url': 'https://preview.redd.it/es9l38ezmxee1.png?width=960&crop=smart&auto=webp&s=86bdcdea7c331628d38013523a1c9681fe761510', 'width': 960}, {'height': 551, 'url': 'https://preview.redd.it/es9l38ezmxee1.png?width=1080&crop=smart&auto=webp&s=1c7f135455a822371e28861c31dd3ba0ed28bb53', 'width': 1080}], 'source': {'height': 1403, 'url': 'https://preview.redd.it/es9l38ezmxee1.png?auto=webp&s=a25e15632e756c69969090dabbc4bf7f27245b77', 'width': 2749}, 'variants': {}}]}
DeepSeek-R1 appears on LMSYS Arena Leaderboard
183
2025-01-24T12:29:12
https://www.reddit.com/gallery/1i8u9jk
jpydych
reddit.com
1970-01-01T00:00:00
0
{}
1i8u9jk
false
null
t3_1i8u9jk
/r/LocalLLaMA/comments/1i8u9jk/deepseekr1_appears_on_lmsys_arena_leaderboard/
false
false
https://b.thumbs.redditm…AQYUhGg_4B2M.jpg
183
null
Does Medusa decoding lead to worse accuracy?
1
[removed]
2025-01-24T12:32:45
https://www.reddit.com/r/LocalLLaMA/comments/1i8ubrr/does_medusa_decoding_lead_to_worse_accuracy/
Feeling_Room_5655
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ubrr
false
null
t3_1i8ubrr
/r/LocalLLaMA/comments/1i8ubrr/does_medusa_decoding_lead_to_worse_accuracy/
false
false
self
1
null
Does Medusa decoding lead to worse accuracy?
1
[removed]
2025-01-24T12:34:50
https://www.reddit.com/r/LocalLLaMA/comments/1i8ud12/does_medusa_decoding_lead_to_worse_accuracy/
micky04
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ud12
false
null
t3_1i8ud12
/r/LocalLLaMA/comments/1i8ud12/does_medusa_decoding_lead_to_worse_accuracy/
false
false
self
1
null
Using the R1 API
1
[removed]
2025-01-24T12:36:11
https://www.reddit.com/r/LocalLLaMA/comments/1i8uduz/using_the_r1_api/
Hungry_Neat_8080
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8uduz
false
null
t3_1i8uduz
/r/LocalLLaMA/comments/1i8uduz/using_the_r1_api/
false
false
self
1
null
😊 Are there any llama3.2 level Japanese and English capable LLM models?
1
[removed]
2025-01-24T12:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1i8udvo/are_there_any_llama32_level_japanese_and_english/
FieldMouseInTheHouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8udvo
false
null
t3_1i8udvo
/r/LocalLLaMA/comments/1i8udvo/are_there_any_llama32_level_japanese_and_english/
false
false
self
1
null
What's the best ai for code autocompletion for mid range computers(<7b)?
4
I'm thinking of using a local AI model to do code completion and I've tried GitHub copilot but I'd rather use something local. Any recommendations?
2025-01-24T12:47:37
https://www.reddit.com/r/LocalLLaMA/comments/1i8ulbd/whats_the_best_ai_for_code_autocompletion_for_mid/
No-Bid-2955
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ulbd
false
null
t3_1i8ulbd
/r/LocalLLaMA/comments/1i8ulbd/whats_the_best_ai_for_code_autocompletion_for_mid/
false
false
self
4
null
WebRover - Your AI Co-pilot for Web Navigation 🚀
11
Ever wished for an AI that not only understands your commands but also autonomously navigates the web to accomplish tasks? 🌐🤖Introducing WebRover 🛠️, an open-source Autonomous AI Agent I've been developing, designed to interpret user input and seamlessly browse the internet to fulfill your requests. Similar to Anthropic's "Computer Use" feature in Claude 3.5 Sonnet and OpenAI's "Operator" announced today , WebRover represents my effort in implementing this emerging technology. Although it sometimes encounters loops and is not yet perfect, I believe that further fine-tuning a foundational model to execute appropriate tasks can effectively improve its efficacy. Explore the project on GitHub: [https://github.com/hrithikkoduri/WebRover](https://github.com/hrithikkoduri/WebRover) I welcome your feedback, suggestions, and contributions to enhance WebRover further. Let's collaborate to push the boundaries of autonomous AI agents! 🚀 \[In the demo video below, I prompted the agent to find the cheapest flight from Tucson to Austin, departing on Feb 1st and returning on Feb 10th.\] https://reddit.com/link/1i8ur2c/video/dkawbbgsvxee1/player
2025-01-24T12:56:35
https://www.reddit.com/r/LocalLLaMA/comments/1i8ur2c/webrover_your_ai_copilot_for_web_navigation/
Elegant_Fish_3822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8ur2c
false
null
t3_1i8ur2c
/r/LocalLLaMA/comments/1i8ur2c/webrover_your_ai_copilot_for_web_navigation/
false
false
https://b.thumbs.redditm…rEjt4uOT9bQY.jpg
11
{'enabled': False, 'images': [{'id': 'GXuBFs2_DZ3f_yipRUSTJeoOdp0nAsvuZr24T5_fwBY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uGXGDtOqF0El7p0QKiiaYuh-4n8GTnWBDFAYTSa_Ns8.jpg?width=108&crop=smart&auto=webp&s=be1b45faecbbfc4e6cec3fb6eac370df0684352c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uGXGDtOqF0El7p0QKiiaYuh-4n8GTnWBDFAYTSa_Ns8.jpg?width=216&crop=smart&auto=webp&s=7e71de7c38251a5274d096ed219ab475d883d672', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uGXGDtOqF0El7p0QKiiaYuh-4n8GTnWBDFAYTSa_Ns8.jpg?width=320&crop=smart&auto=webp&s=f7bc81812c8817b1bdb47cb8c1fe11c89e72cf08', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uGXGDtOqF0El7p0QKiiaYuh-4n8GTnWBDFAYTSa_Ns8.jpg?width=640&crop=smart&auto=webp&s=6b5eb2f4380186bc94114e3a7e0183e0764cbd73', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uGXGDtOqF0El7p0QKiiaYuh-4n8GTnWBDFAYTSa_Ns8.jpg?width=960&crop=smart&auto=webp&s=e571f3ed812bd3e450267961d206da8aa12da81a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uGXGDtOqF0El7p0QKiiaYuh-4n8GTnWBDFAYTSa_Ns8.jpg?width=1080&crop=smart&auto=webp&s=9084ebe295c57c17024f303aa693bba0cb0ef1b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uGXGDtOqF0El7p0QKiiaYuh-4n8GTnWBDFAYTSa_Ns8.jpg?auto=webp&s=a8dc962ef1ccced3acfac4023ee64847b8c95105', 'width': 1200}, 'variants': {}}]}
Are the Ollama models really Deepseek? Do they work the same way?
1
[removed]
2025-01-24T13:01:07
https://www.reddit.com/r/LocalLLaMA/comments/1i8uu87/are_the_ollama_models_really_deepseek_do_they/
joyfulsparrow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8uu87
false
null
t3_1i8uu87
/r/LocalLLaMA/comments/1i8uu87/are_the_ollama_models_really_deepseek_do_they/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
What’s the rule of thumb when choosing parameter size for Apple Silicon macs?
1
I’ve never run local LLM but all this DeepSeek talk makes me want to. Planning on running Ollama but I don’t know which DeepSeek R1 parameter size to choose with my m1 max macbook pro 64GB. Can i get away with a 20GB+ model? What should be my upper limit based on ram size?
2025-01-24T13:02:00
https://www.reddit.com/r/LocalLLaMA/comments/1i8uv00/whats_the_rule_of_thumb_when_choosing_parameter/
BetterMeToday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8uv00
false
null
t3_1i8uv00
/r/LocalLLaMA/comments/1i8uv00/whats_the_rule_of_thumb_when_choosing_parameter/
false
false
self
1
null
Open Source chatbot like frontend for Deepseek r1?
1
By frontend I mean something like the chatbot app that ChatGPT and Claude has. I will run deepseek locally and don't want to work with docker. Something like a stand-alone frontend that connects to local deepseek server.
2025-01-24T13:09:16
https://www.reddit.com/r/LocalLLaMA/comments/1i8v00i/open_source_chatbot_like_frontend_for_deepseek_r1/
InternalVolcano
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8v00i
false
null
t3_1i8v00i
/r/LocalLLaMA/comments/1i8v00i/open_source_chatbot_like_frontend_for_deepseek_r1/
false
false
self
1
null
How can I fine-tune DeepSeek-R1?
7
I am a software engineer with virtually 0 knowledge of ML. I would use some SaaS tool to quickly fine-tune a model, but o1 is not available for fine-tuning yet through OpenAI API, and no services support R1. I have a dataset of \~300 examples of translating a query from a NoSQL language to SQL. Could someone advice me on how to fine-tune DeepSeek-R1? I don't care much about the cost, will rent a GPU
2025-01-24T13:16:33
https://www.reddit.com/r/LocalLLaMA/comments/1i8v54i/how_can_i_finetune_deepseekr1/
rafasofizadeh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8v54i
false
null
t3_1i8v54i
/r/LocalLLaMA/comments/1i8v54i/how_can_i_finetune_deepseekr1/
false
false
self
7
null
Depseek promises to open source agi
1,402
[https://x.com/victor207755822/status/1882757279436718454](https://x.com/victor207755822/status/1882757279436718454) From Deli chen: “ All I know is we keep pushing forward to make open-source AGI a reality for everyone. “
2025-01-24T13:27:12
https://www.reddit.com/r/LocalLLaMA/comments/1i8vclf/depseek_promises_to_open_source_agi/
Notdesciplined
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8vclf
false
null
t3_1i8vclf
/r/LocalLLaMA/comments/1i8vclf/depseek_promises_to_open_source_agi/
false
false
self
1,402
{'enabled': False, 'images': [{'id': 'zRODZGh2LqcM_G78gLcnxH0Y7rHOnXzLwk8DqQMPX2c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/BgiTfKM9ZpljXO0IjdwBYbLi9dCUhObX6ob4G6AyDuI.jpg?width=108&crop=smart&auto=webp&s=8c16454cf7d68fa33171c40a56e9ee187d366e69', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/BgiTfKM9ZpljXO0IjdwBYbLi9dCUhObX6ob4G6AyDuI.jpg?auto=webp&s=fa22b13a78bb761c2081f5a0857d8ff3553a7454', 'width': 200}, 'variants': {}}]}
How to Pass Image-Based Math/Geometry Problems to an LLM Without a Vision Model?
1
[removed]
2025-01-24T13:33:29
https://www.reddit.com/r/LocalLLaMA/comments/1i8vh31/how_to_pass_imagebased_mathgeometry_problems_to/
yvzyldrm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8vh31
false
null
t3_1i8vh31
/r/LocalLLaMA/comments/1i8vh31/how_to_pass_imagebased_mathgeometry_problems_to/
false
false
self
1
null
Open source agi promised by deepseek, closedAI should sweat now
1
2025-01-24T13:37:56
https://i.redd.it/7cw1ew773yee1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1i8vk7q
false
null
t3_1i8vk7q
/r/LocalLLaMA/comments/1i8vk7q/open_source_agi_promised_by_deepseek_closedai/
false
false
https://b.thumbs.redditm…rsvA6ibRUGuY.jpg
1
{'enabled': True, 'images': [{'id': 'OD8IAV5pExdemhj7h3gHqfW_LocshYfosdj_9oaSTE4', 'resolutions': [{'height': 167, 'url': 'https://preview.redd.it/7cw1ew773yee1.png?width=108&crop=smart&auto=webp&s=797f3f4493e388f7fc47b8782700c8c1c4724c4f', 'width': 108}, {'height': 334, 'url': 'https://preview.redd.it/7cw1ew773yee1.png?width=216&crop=smart&auto=webp&s=145ab590dabb375ecfe67ca91541d65ba7150009', 'width': 216}, {'height': 495, 'url': 'https://preview.redd.it/7cw1ew773yee1.png?width=320&crop=smart&auto=webp&s=d30cc5408fe2a499a73d7629fed277cb7c0c2c7c', 'width': 320}, {'height': 990, 'url': 'https://preview.redd.it/7cw1ew773yee1.png?width=640&crop=smart&auto=webp&s=7b487e4f2cd14ed5c9f8531ac7c92b43184a9a1a', 'width': 640}, {'height': 1486, 'url': 'https://preview.redd.it/7cw1ew773yee1.png?width=960&crop=smart&auto=webp&s=abd65a45581965f621b8e33e906b4a03f0ead2f2', 'width': 960}, {'height': 1672, 'url': 'https://preview.redd.it/7cw1ew773yee1.png?width=1080&crop=smart&auto=webp&s=129cd61bae20ffb3493f9e49adb225d9f72eac29', 'width': 1080}], 'source': {'height': 1672, 'url': 'https://preview.redd.it/7cw1ew773yee1.png?auto=webp&s=df44293cc2e21a92f0f08ac30b705d4f2c8b0636', 'width': 1080}, 'variants': {}}]}
What's the most influential model since BERT?
12
For you, which model has defined generative AI the most? * the first llama coming out of nowhere * the usability of mistral 7b * the ubiquity of llama3 * the impact of Qwen2.5 * the smolness of SmolLM2 * an other model It would be cool to discuss these models, and why they're important to folk looking back. https://preview.redd.it/nahcf0l33yee1.png?width=1578&format=png&auto=webp&s=2bed51466ee913448f243ae8eec5a2db68af364b Also discussing here: [https://huggingface.co/spaces/OSAIResearchCommunity/README/discussions/2](https://huggingface.co/spaces/OSAIResearchCommunity/README/discussions/2)
2025-01-24T13:38:18
https://www.reddit.com/r/LocalLLaMA/comments/1i8vkgo/whats_the_most_influential_model_since_bert/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8vkgo
false
null
t3_1i8vkgo
/r/LocalLLaMA/comments/1i8vkgo/whats_the_most_influential_model_since_bert/
false
false
https://b.thumbs.redditm…Xz7wzpa6siGQ.jpg
12
{'enabled': False, 'images': [{'id': 'XYLmT_e0LPZr2vduwzsgEHVPctVf76N0Kq8oV761eCY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7so8krQOIXNK0jAtBATRosaaxDYJRXv6QW5pw8EgvZ4.jpg?width=108&crop=smart&auto=webp&s=932a9f90470eb7c751a3ebaf5eeefdf3f834331b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7so8krQOIXNK0jAtBATRosaaxDYJRXv6QW5pw8EgvZ4.jpg?width=216&crop=smart&auto=webp&s=ec63ee434cde699da4a360c3d95f26422298c5ef', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7so8krQOIXNK0jAtBATRosaaxDYJRXv6QW5pw8EgvZ4.jpg?width=320&crop=smart&auto=webp&s=6c6785d72318ec0800366af3b809a6ffd38f6f5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7so8krQOIXNK0jAtBATRosaaxDYJRXv6QW5pw8EgvZ4.jpg?width=640&crop=smart&auto=webp&s=e2003407e685381a252f8153002e0337270e66a2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7so8krQOIXNK0jAtBATRosaaxDYJRXv6QW5pw8EgvZ4.jpg?width=960&crop=smart&auto=webp&s=ecc5cecabe1d56a55175c056cc28db64d3e508db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7so8krQOIXNK0jAtBATRosaaxDYJRXv6QW5pw8EgvZ4.jpg?width=1080&crop=smart&auto=webp&s=958d1d9752eb35235c1d027c1a925001d7f9a41b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7so8krQOIXNK0jAtBATRosaaxDYJRXv6QW5pw8EgvZ4.jpg?auto=webp&s=3483a5ba1d645b56c06b2a77a8d339492621a6cb', 'width': 1200}, 'variants': {}}]}
Hold it! manipulate thinking process for reasoning models
3
I haven’t implemented this yet but I have been thinking: what if we manually change the thinking process of the reasoning models? No matter how mighty these models are, that could still make minor mistakes such as calculation of large numbers. A better way is to allow models dynamically use tools, and we use regex to detect and replace the tool calling with results. For now we can make it simpler. For example, a model is thinking: we can use blablabla, and you can stop it (hold it!) and manually change the blablabla to whatever in your mind. Then allow the model to continue on your thoughts. This way you are not passively relying on the models but you are participating the problem solving.
2025-01-24T13:43:09
https://www.reddit.com/r/LocalLLaMA/comments/1i8vnxv/hold_it_manipulate_thinking_process_for_reasoning/
Just_Lifeguard_5033
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8vnxv
false
null
t3_1i8vnxv
/r/LocalLLaMA/comments/1i8vnxv/hold_it_manipulate_thinking_process_for_reasoning/
false
false
self
3
null
DeepSeek (Became 5th in productivity on PlayStore)
21
Will we finally got a free ChatGPT competitor that everyone can access to it??
2025-01-24T13:44:15
https://www.reddit.com/r/LocalLLaMA/comments/1i8voso/deepseek_became_5th_in_productivity_on_playstore/
Born-Shopping-1876
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8voso
false
null
t3_1i8voso
/r/LocalLLaMA/comments/1i8voso/deepseek_became_5th_in_productivity_on_playstore/
false
false
self
21
null
8xB200 - Fully Idle for the Next Few Weeks - What Should I Run on It?
28
So we recently got the DGX B200 system, but here’s the catch: there’s *literally no support* for our use case right now (PyTorch, Exllama, TensorRT). Feels like owning a rocket ship with no launchpad. While NVIDIA sorts out firmware and support, I’ve got 8 GPUs just sitting there begging to make some noise. Any suggestions on what I can run in the meantime? Maybe a massive DeepSeek finetune or something cool that could take advantage of this hardware? Open to any and all creative ideas—don’t let these GPUs stay silent! https://preview.redd.it/gnz26t335yee1.png?width=1306&format=png&auto=webp&s=85fa0a49b002765d6d6f9fc7335a488c65da153e
2025-01-24T13:50:52
https://www.reddit.com/r/LocalLLaMA/comments/1i8vthd/8xb200_fully_idle_for_the_next_few_weeks_what/
yanjb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8vthd
false
null
t3_1i8vthd
/r/LocalLLaMA/comments/1i8vthd/8xb200_fully_idle_for_the_next_few_weeks_what/
false
false
https://b.thumbs.redditm…wroSXfMnoOiM.jpg
28
null
Function calling at thinking time.
1
[removed]
2025-01-24T13:56:36
https://www.reddit.com/r/LocalLLaMA/comments/1i8vxm5/function_calling_at_thinking_time/
kreijstal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8vxm5
false
null
t3_1i8vxm5
/r/LocalLLaMA/comments/1i8vxm5/function_calling_at_thinking_time/
false
false
self
1
null
How is DeepSeek chat free?
240
I tried using DeepSeek recently on their own website and it seems they apparently let you use DeepSeek-V3 and R1 models as much as you like without any limitations. How are they able to afford that while ChatGPT-4o gives you only a couple of free prompts before timing out?
2025-01-24T14:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1i8w7ra/how_is_deepseek_chat_free/
Divergence1900
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8w7ra
false
null
t3_1i8w7ra
/r/LocalLLaMA/comments/1i8w7ra/how_is_deepseek_chat_free/
false
false
self
240
null
Sqlite3 n-gram database
6
I downloaded Google's n-gram files from version 20200217 and put them all in a single sqlite database. All of the orders 1 - 5 are there. sqlite3 ngrams.db "SELECT COUNT(*) FROM ngrams;" == 61949897 sqlite3 ngrams.db ".schema ngrams" CREATE TABLE ngrams ( ngram TEXT NOT NULL, count INTEGER NOT NULL, n INTEGER NOT NULL, PRIMARY KEY (ngram, n) ) WITHOUT ROWID ; sqlite3 ngrams.db "SELECT ngram FROM ngrams WHERE n = 4 AND ngram LIKE 'el%' LIMIT 6;" el acta de la el agua de un el agua el aire el agua en las el agua que en el al has not The link is a tarball [https://www.dropbox.com/scl/fi/mu5y4n9zd1pj51hfl5r4o/ngram-database.tar.gz?rlkey=mou7cw2barwbrm9p0t4n85t0e&st=qmapr0r9&dl=0](https://www.dropbox.com/scl/fi/mu5y4n9zd1pj51hfl5r4o/ngram-database.tar.gz?rlkey=mou7cw2barwbrm9p0t4n85t0e&st=qmapr0r9&dl=0) It's about 640MB compressed and close to 2GB expanded. The download will expire on or about 31 Jan 2025. If you're ~~f\*\*cking around with~~ researching n-grams and patterns this might save you some work. Enjoy!
2025-01-24T14:10:12
https://www.reddit.com/r/LocalLLaMA/comments/1i8w81e/sqlite3_ngram_database/
Incompetent_Magician
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8w81e
false
null
t3_1i8w81e
/r/LocalLLaMA/comments/1i8w81e/sqlite3_ngram_database/
false
false
self
6
{'enabled': False, 'images': [{'id': '_QAPfnXWC3o5pH72tEUzGAcMYZ9fEs6HtnzvK72FJNU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uDMltjMZzl_jfe8S8qRydhygC65sZxvYjV8SFPWD_fE.jpg?width=108&crop=smart&auto=webp&s=0a083b015e4e818afa2f0d0cb4b578c8e072defc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uDMltjMZzl_jfe8S8qRydhygC65sZxvYjV8SFPWD_fE.jpg?width=216&crop=smart&auto=webp&s=9d63d173ab119e313a369024691457d611939526', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uDMltjMZzl_jfe8S8qRydhygC65sZxvYjV8SFPWD_fE.jpg?width=320&crop=smart&auto=webp&s=94c591d86440517545334f022cc21e1de0bf67d2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uDMltjMZzl_jfe8S8qRydhygC65sZxvYjV8SFPWD_fE.jpg?width=640&crop=smart&auto=webp&s=98a5aec493622e6201dc0bdd5d9c2b1e2d24b39c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uDMltjMZzl_jfe8S8qRydhygC65sZxvYjV8SFPWD_fE.jpg?width=960&crop=smart&auto=webp&s=94cff5edf80fe18a4c403648c455189fcdf6ac96', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uDMltjMZzl_jfe8S8qRydhygC65sZxvYjV8SFPWD_fE.jpg?width=1080&crop=smart&auto=webp&s=0d98ae2334036d8fbccdc4542ceb2213d5d896cd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uDMltjMZzl_jfe8S8qRydhygC65sZxvYjV8SFPWD_fE.jpg?auto=webp&s=d5ba81e67e01525adee35c25dd79ca82be62a6d2', 'width': 1200}, 'variants': {}}]}
How can I automate the process of translating a big (structured) document
4
Hi, I’m working on translating a game, and someone developed a tool that generates an XML file containing all the game text. I wanted to ask if there’s a local LLM tool capable of reading XML documents or handling large files while preserving their structure. I just downloaded GPT-4 All and tried to test the local docs feature. To make it compatible, I renamed the file extension to .txt so it would be recognized. Now I’m waiting for the whole document to be embedded. The file is 12MB with over 500K words, so it’s taking a while. I’m wondering if I should’ve split the document into smaller parts first. Can anyone recommend a local LLM tool that can process large documents, preferably in XML format, and perform operations like text translation on them? I heard the aya expanse model is good for translating so I downloaded that to try it out with koboldcpp but that one apparently doesn't support local files only images.
2025-01-24T14:12:44
https://www.reddit.com/r/LocalLLaMA/comments/1i8w9yi/how_can_i_automate_the_process_of_translating_a/
Typical-Armadillo340
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8w9yi
false
null
t3_1i8w9yi
/r/LocalLLaMA/comments/1i8w9yi/how_can_i_automate_the_process_of_translating_a/
false
false
self
4
null
Paraphrasing using Llama3 base
1
[removed]
2025-01-24T14:13:16
https://www.reddit.com/r/LocalLLaMA/comments/1i8wacq/paraphrasing_using_llama3_base/
Appropriate_End4809
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8wacq
false
null
t3_1i8wacq
/r/LocalLLaMA/comments/1i8wacq/paraphrasing_using_llama3_base/
false
false
self
1
null
Paraphrasing with Llama 3 base
1
[removed]
2025-01-24T14:19:27
https://www.reddit.com/r/LocalLLaMA/comments/1i8wf1d/paraphrasing_with_llama_3_base/
Feeling_Watch7421
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8wf1d
false
null
t3_1i8wf1d
/r/LocalLLaMA/comments/1i8wf1d/paraphrasing_with_llama_3_base/
false
false
self
1
null
671B Deepseek R1 2bit speed on only NVMe (usable 1 t/s)
1
[removed]
2025-01-24T14:37:23
https://www.reddit.com/r/LocalLLaMA/comments/1i8wsuu/671b_deepseek_r1_2bit_speed_on_only_nvme_usable_1/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8wsuu
false
null
t3_1i8wsuu
/r/LocalLLaMA/comments/1i8wsuu/671b_deepseek_r1_2bit_speed_on_only_nvme_usable_1/
false
false
self
1
null
try to brainstorm a new architecture with deepseek r1
1
[removed]
2025-01-24T14:44:08
https://www.reddit.com/r/LocalLLaMA/comments/1i8wy6t/try_to_brainstorm_a_new_architecture_with/
Ok-One-5834
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8wy6t
false
null
t3_1i8wy6t
/r/LocalLLaMA/comments/1i8wy6t/try_to_brainstorm_a_new_architecture_with/
false
false
self
1
null
In a state of awe that I can run this locally, and with great speed (32B Qwen DeepSeek R1 + Ollama)
5
2025-01-24T14:47:17
https://i.redd.it/dmec4hd2fyee1.png
airduster_9000
i.redd.it
1970-01-01T00:00:00
0
{}
1i8x0ob
false
null
t3_1i8x0ob
/r/LocalLLaMA/comments/1i8x0ob/in_a_state_of_awe_that_i_can_run_this_locally_and/
false
false
https://b.thumbs.redditm…cg0LvQFl9l_I.jpg
5
{'enabled': True, 'images': [{'id': 'IIRwdAO05MSHGJJxr0d2qdlGupIQTRmzuJ4LetJ4sdM', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/dmec4hd2fyee1.png?width=108&crop=smart&auto=webp&s=fb9edab8576d3b2430560a5622b548e14d81e248', 'width': 108}, {'height': 57, 'url': 'https://preview.redd.it/dmec4hd2fyee1.png?width=216&crop=smart&auto=webp&s=33706c9e22595ee88331b8440c1ec48b882f2927', 'width': 216}, {'height': 84, 'url': 'https://preview.redd.it/dmec4hd2fyee1.png?width=320&crop=smart&auto=webp&s=cd04de0ca33479167f8a70364b0bff324b290d4f', 'width': 320}, {'height': 169, 'url': 'https://preview.redd.it/dmec4hd2fyee1.png?width=640&crop=smart&auto=webp&s=77c222be053d16a05e80cd096386fb1d1f58649f', 'width': 640}], 'source': {'height': 237, 'url': 'https://preview.redd.it/dmec4hd2fyee1.png?auto=webp&s=a3afa67c4aac530cefc0ba5b341d8b5c7f3b7117', 'width': 896}, 'variants': {}}]}
Deepseek R1 2bit on NVMe gets 1 t/s
1
[removed]
2025-01-24T14:57:33
https://www.reddit.com/r/LocalLLaMA/comments/1i8x8v4/deepseek_r1_2bit_on_nvme_gets_1_ts/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8x8v4
false
null
t3_1i8x8v4
/r/LocalLLaMA/comments/1i8x8v4/deepseek_r1_2bit_on_nvme_gets_1_ts/
false
false
self
1
null
Deepseek Ai irregularities
1
2025-01-24T14:58:04
https://www.reddit.com/gallery/1i8x99y
Timely-Opposite-5249
reddit.com
1970-01-01T00:00:00
0
{}
1i8x99y
false
null
t3_1i8x99y
/r/LocalLLaMA/comments/1i8x99y/deepseek_ai_irregularities/
false
false
https://b.thumbs.redditm…AoIPT1Gsu_bM.jpg
1
null
Absolute domination.
20
When was the last time a single company took over like that? Mistral, maybe? Also curious if there are any public stats/graphs that compare HF repository engagement or something like that.
2025-01-24T14:59:59
https://www.reddit.com/r/LocalLLaMA/comments/1i8xaul/absolute_domination/
olaf4343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8xaul
false
null
t3_1i8xaul
/r/LocalLLaMA/comments/1i8xaul/absolute_domination/
false
false
self
20
null