column      dtype            min    max
title       stringlengths    1      300
score       int64            0      8.54k
selftext    stringlengths    0      40k
created     timestamp[ns]
url         stringlengths    0      780
author      stringlengths    3      20
domain      stringlengths    0      82
edited      timestamp[ns]
gilded      int64            0      2
gildings    stringclasses    7 values
id          stringlengths    7      7
locked      bool             2 classes
media       stringlengths    646    1.8k
name        stringlengths    10     10
permalink   stringlengths    33     82
spoiler     bool             2 classes
stickied    bool             2 classes
thumbnail   stringlengths    4      213
ups         int64            0      8.54k
preview     stringlengths    301    5.01k
S1, grpo, and eval
1
S1 GRPO on a 7B MoE using QA-LoRA: generate multiple completions, grade them, then do GRPO (how do we know if one is correct/best?). We likely don't, and instead just iterate toward above-mean: generate a batch of answers, then grade them between 0 and 100%. Use an external tool to validate math operations (Python eval); 100% can only be achieved for correct math. So eval is the critic model. Thoughts? (A minimal reward sketch follows this record.)
2025-02-08T11:08:25
https://www.reddit.com/r/LocalLLaMA/comments/1ikl04a/s1_grpo_and_eval/
Thistleknot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikl04a
false
null
t3_1ikl04a
/r/LocalLLaMA/comments/1ikl04a/s1_grpo_and_eval/
false
false
self
1
null
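A minimal sketch of the grading idea described above, under toy assumptions: sample a batch of answers per prompt, grade each with an eval-based math check so that only externally verified arithmetic can reach full score, and use above-mean scores as the GRPO signal. The function names and the partial-credit scheme are made up for illustration.

```python
import re

def eval_expression(expression: str):
    """External check: evaluate a plain arithmetic expression with Python eval.
    Returns None if the string is not simple arithmetic."""
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        return None
    try:
        return eval(expression)
    except Exception:
        return None

def grade_answer(answer: str, expression: str) -> float:
    """Toy reward in [0, 1]: 100% is only reachable when the final number in the
    answer matches what eval says, so correctness is externally verified."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", answer)
    truth = eval_expression(expression)
    if not numbers or truth is None:
        return 0.0
    predicted = float(numbers[-1])          # treat the last number as the final answer
    return 1.0 if abs(predicted - truth) < 1e-6 else 0.25

# GRPO-style use: sample a batch per prompt, grade it, and push the policy toward
# completions that score above the batch mean (no gold label needed up front).
batch = ["2+2 = 4", "2+2 = 5", "I think the answer is 4"]
rewards = [grade_answer(a, "2+2") for a in batch]
mean = sum(rewards) / len(rewards)
advantages = [r - mean for r in rewards]
print(rewards, advantages)
```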
DeepSeek R1: lying and mental gymnastics about the mass starvation in China during the 'Great Leap Forward'
1
[removed]
2025-02-08T11:17:45
https://www.reddit.com/r/LocalLLaMA/comments/1ikl4yo/deepseek_r1_lying_and_mental_gymnastics_about_the/
nomadman0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikl4yo
false
null
t3_1ikl4yo
/r/LocalLLaMA/comments/1ikl4yo/deepseek_r1_lying_and_mental_gymnastics_about_the/
false
false
self
1
null
Looking for a front end that would allow uploads of pdf and text files to APIs
1
[removed]
2025-02-08T11:30:51
https://www.reddit.com/r/LocalLLaMA/comments/1iklbzp/looking_for_a_front_end_that_would_allow_uploads/
newbioform
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iklbzp
false
null
t3_1iklbzp
/r/LocalLLaMA/comments/1iklbzp/looking_for_a_front_end_that_would_allow_uploads/
false
false
self
1
null
Desktop following AI
1
[removed]
2025-02-08T11:34:53
https://www.reddit.com/r/LocalLLaMA/comments/1ikle4b/desktop_following_ai/
Ok_Record7213
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikle4b
false
null
t3_1ikle4b
/r/LocalLLaMA/comments/1ikle4b/desktop_following_ai/
false
false
self
1
null
A question on performance of 6x p100 cards vs 4090?
1
[removed]
2025-02-08T11:37:07
https://www.reddit.com/r/LocalLLaMA/comments/1iklf8h/a_question_on_performance_of_6x_p100_cards_vs_4090/
Potential_Skirt_1115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iklf8h
false
null
t3_1iklf8h
/r/LocalLLaMA/comments/1iklf8h/a_question_on_performance_of_6x_p100_cards_vs_4090/
false
false
self
1
null
A Speaking following desktop AI
1
[removed]
2025-02-08T11:38:17
https://www.reddit.com/r/LocalLLaMA/comments/1iklfur/a_speaking_following_desktop_ai/
Ok_Record7213
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iklfur
false
null
t3_1iklfur
/r/LocalLLaMA/comments/1iklfur/a_speaking_following_desktop_ai/
false
false
self
1
null
Python and web developing LLM recommendation for 32GB VRAM gpu
1
[removed]
2025-02-08T12:01:45
https://www.reddit.com/r/LocalLLaMA/comments/1iklscl/python_and_web_developing_llm_recommendation_for/
evofromk0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iklscl
false
null
t3_1iklscl
/r/LocalLLaMA/comments/1iklscl/python_and_web_developing_llm_recommendation_for/
false
false
self
1
null
Reasoning LLM using REINFORCE Tutorial
1
2025-02-08T12:01:48
https://charm-octagon-74d.notion.site/Reasoning-LLM-using-REINFORCE-194e4301cb9980e7b73dd8c2f0fdc6a0
DLExplorers
charm-octagon-74d.notion.site
1970-01-01T00:00:00
0
{}
1iklsdf
false
null
t3_1iklsdf
/r/LocalLLaMA/comments/1iklsdf/reasoning_llm_using_reinforce_tutorial/
false
false
https://b.thumbs.redditm…DH6s4LGJl8_g.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/dK_EFkJw8Azn4-ldPTLDolDFe3YzCNIfzrkdBEjGcwE.jpg?auto=webp&s=fa5640bf29fa97156215ebe916a79473fae2b525', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/dK_EFkJw8Azn4-ldPTLDolDFe3YzCNIfzrkdBEjGcwE.jpg?width=108&crop=smart&auto=webp&s=879e52b64f689573967eaab290003c45972f7ba4', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/dK_EFkJw8Azn4-ldPTLDolDFe3YzCNIfzrkdBEjGcwE.jpg?width=216&crop=smart&auto=webp&s=dcbd45cc9ff715da2954f2fc1f199e0b916e7b22', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/dK_EFkJw8Azn4-ldPTLDolDFe3YzCNIfzrkdBEjGcwE.jpg?width=320&crop=smart&auto=webp&s=7022b526e84b6f21f259dfd1fdf49e17718b8584', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/dK_EFkJw8Azn4-ldPTLDolDFe3YzCNIfzrkdBEjGcwE.jpg?width=640&crop=smart&auto=webp&s=a2f5609a3ac128c84f75567e8eb258dac7c4496d', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/dK_EFkJw8Azn4-ldPTLDolDFe3YzCNIfzrkdBEjGcwE.jpg?width=960&crop=smart&auto=webp&s=2c12ddae07d2a71ffe4a1b828577e8ea1c60028e', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/dK_EFkJw8Azn4-ldPTLDolDFe3YzCNIfzrkdBEjGcwE.jpg?width=1080&crop=smart&auto=webp&s=69ec47345ef5cfe2b95740b234d1c7c6144b206d', 'width': 1080, 'height': 567}], 'variants': {}, 'id': 'xI67j8Ed-loEj1nBCEDvv2f37xscNaVT14fTtuKnpqk'}], 'enabled': False}
What fictional characters are going to get invented first; like this one⬇️‽
1
2025-02-08T12:13:17
https://v.redd.it/0rvfcr2qpwhe1
BidHot8598
v.redd.it
1970-01-01T00:00:00
0
{}
1iklym8
false
{'reddit_video': {'bitrate_kbps': 1200, 'fallback_url': 'https://v.redd.it/0rvfcr2qpwhe1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'width': 480, 'scrubber_media_url': 'https://v.redd.it/0rvfcr2qpwhe1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/0rvfcr2qpwhe1/DASHPlaylist.mpd?a=1741608810%2CNGRjNmMyOWNhMzY5M2UyMjg4ZWUwYmM0NmQ3NjdkNTYyMTI2NTZhZTcyYjA1M2JkZTZjMjcyYzM2NDRkMmIwMg%3D%3D&v=1&f=sd', 'duration': 16, 'hls_url': 'https://v.redd.it/0rvfcr2qpwhe1/HLSPlaylist.m3u8?a=1741608810%2COTJkODYwZmE3MmFmZDAxMDI2YjU2YjUwOGU2NWIwYTNkYmQxMDdhOWNhN2FjMTlkZWM5ZjVlMDA1OGU3OTk5NA%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1iklym8
/r/LocalLLaMA/comments/1iklym8/what_fictional_characters_are_going_to_get/
false
false
https://external-preview…cacb7b2c92c29d3a
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/bzVmejV4MnFwd2hlMa0OkiSb4WLV-Gm6A6FIunAxb3qUiDNWa7wR9OnjYSm8.png?format=pjpg&auto=webp&s=000193058ca10ff07a03b36baea59df163254e28', 'width': 640, 'height': 640}, 'resolutions': [{'url': 'https://external-preview.redd.it/bzVmejV4MnFwd2hlMa0OkiSb4WLV-Gm6A6FIunAxb3qUiDNWa7wR9OnjYSm8.png?width=108&crop=smart&format=pjpg&auto=webp&s=96cc15ce719b8adc8360c6d4de43b6cd43941fab', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/bzVmejV4MnFwd2hlMa0OkiSb4WLV-Gm6A6FIunAxb3qUiDNWa7wR9OnjYSm8.png?width=216&crop=smart&format=pjpg&auto=webp&s=5d7493610495a632a09233fba3c0ebd9f47a4f94', 'width': 216, 'height': 216}, {'url': 'https://external-preview.redd.it/bzVmejV4MnFwd2hlMa0OkiSb4WLV-Gm6A6FIunAxb3qUiDNWa7wR9OnjYSm8.png?width=320&crop=smart&format=pjpg&auto=webp&s=557017329880e673202b49c6eec734f1be5af055', 'width': 320, 'height': 320}, {'url': 'https://external-preview.redd.it/bzVmejV4MnFwd2hlMa0OkiSb4WLV-Gm6A6FIunAxb3qUiDNWa7wR9OnjYSm8.png?width=640&crop=smart&format=pjpg&auto=webp&s=55e53c9252d67a7aa6d2e79a47411c391894c94c', 'width': 640, 'height': 640}], 'variants': {}, 'id': 'bzVmejV4MnFwd2hlMa0OkiSb4WLV-Gm6A6FIunAxb3qUiDNWa7wR9OnjYSm8'}], 'enabled': False}
Radeon 7900 XTX
1
I have been looking for a used 3090 but somehow haven't been able to secure a good deal in the range of ~600 euros. However, I have seen some 7900 XTXs for about 100 € more. Is it as good as a 3090 if I want to run popular models like the R1 distills or Qwen? I know that in raw performance its counterpart is the 4090 (NVIDIA's card is more powerful and pricier, I know).
2025-02-08T12:16:50
https://www.reddit.com/r/LocalLLaMA/comments/1ikm0qg/radeon_7900_xtx/
Theboyscampus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikm0qg
false
null
t3_1ikm0qg
/r/LocalLLaMA/comments/1ikm0qg/radeon_7900_xtx/
false
false
self
1
null
Implementing Custom AI Chat in Power BI Pro: Alternative Solutions to CoPilot
1
[removed]
2025-02-08T12:20:56
https://www.reddit.com/r/LocalLLaMA/comments/1ikm35n/implementing_custom_ai_chat_in_power_bi_pro/
Intrepid_Amount585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikm35n
false
null
t3_1ikm35n
/r/LocalLLaMA/comments/1ikm35n/implementing_custom_ai_chat_in_power_bi_pro/
false
false
self
1
null
Implementing Custom AI Chat in Power BI Pro: Alternative Solutions to CoPilot
1
[removed]
2025-02-08T12:24:37
https://www.reddit.com/r/LocalLLaMA/comments/1ikm59v/implementing_custom_ai_chat_in_power_bi_pro/
Old_Back_2860
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikm59v
false
null
t3_1ikm59v
/r/LocalLLaMA/comments/1ikm59v/implementing_custom_ai_chat_in_power_bi_pro/
false
false
self
1
null
Have a stock of Nvidia H100 Just Received
1
Just got 100 pcs of H100. Amazing. Any buyers here? From India.
2025-02-08T12:25:44
https://www.reddit.com/r/LocalLLaMA/comments/1ikm5x4/have_a_stock_of_nvidia_h100_just_received/
anurag03890
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikm5x4
false
null
t3_1ikm5x4
/r/LocalLLaMA/comments/1ikm5x4/have_a_stock_of_nvidia_h100_just_received/
false
false
self
1
null
Should I Stay in my current job or Move On?
1
[removed]
2025-02-08T12:31:01
https://www.reddit.com/r/LocalLLaMA/comments/1ikm8x0/should_i_stay_in_my_current_job_or_move_on/
Disastrous-Trouble80
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikm8x0
false
null
t3_1ikm8x0
/r/LocalLLaMA/comments/1ikm8x0/should_i_stay_in_my_current_job_or_move_on/
false
false
self
1
null
Are Developers Using Local AI Models for Coding? Pros, Cons, and Use Cases
1
Is anyone using local models to help with coding? If so, how? If not, why not?
2025-02-08T12:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1ikm999/are_developers_using_local_ai_models_for_coding/
Muted_Estate890
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikm999
false
null
t3_1ikm999
/r/LocalLLaMA/comments/1ikm999/are_developers_using_local_ai_models_for_coding/
false
false
self
1
null
Single GPU with more VRAM or split between two?
1
[removed]
2025-02-08T12:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1ikm9ux/single_gpu_with_more_vram_or_split_between_two/
Fluffy_Sun1498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikm9ux
false
null
t3_1ikm9ux
/r/LocalLLaMA/comments/1ikm9ux/single_gpu_with_more_vram_or_split_between_two/
false
false
self
1
null
DeepSeek not responding
1
[removed]
2025-02-08T12:39:52
https://www.reddit.com/r/LocalLLaMA/comments/1ikme4p/deepseek_not_responding/
Forsaken-Caramel-563
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikme4p
false
null
t3_1ikme4p
/r/LocalLLaMA/comments/1ikme4p/deepseek_not_responding/
false
false
self
1
null
Are there any models that don't know their own identity?
1
I'm curious whether there are models, especially open-source ones, that are not hard-trained to reply with their model name. E.g., if I ask Llama what its name is, it will reply that it is a language model called Llama (almost all models are like this). I'm curious about this, and maybe such models are better at role playing or something?
2025-02-08T12:46:33
https://www.reddit.com/r/LocalLLaMA/comments/1ikmi8x/are_there_any_model_that_doesnt_know_its_own/
nojukuramu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikmi8x
false
null
t3_1ikmi8x
/r/LocalLLaMA/comments/1ikmi8x/are_there_any_model_that_doesnt_know_its_own/
false
false
self
1
null
Deepseek v3 inference outside of China?
1
[removed]
2025-02-08T12:51:20
https://www.reddit.com/r/LocalLLaMA/comments/1ikml54/deepseek_v3_inference_outside_of_china/
Cheesuasion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikml54
false
null
t3_1ikml54
/r/LocalLLaMA/comments/1ikml54/deepseek_v3_inference_outside_of_china/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/1CngOduYcioxYVObov_7P9FoXV5OylGZqYL5Smsk4kQ.jpg?auto=webp&s=e04c3e7336e2b92aa5ba038d519e12e58ecb9f80', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/1CngOduYcioxYVObov_7P9FoXV5OylGZqYL5Smsk4kQ.jpg?width=108&crop=smart&auto=webp&s=b7a80b8d2b6e9d011d4b7bfdfe316a1bb4b7d551', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/1CngOduYcioxYVObov_7P9FoXV5OylGZqYL5Smsk4kQ.jpg?width=216&crop=smart&auto=webp&s=b41a92cb183ff5ff4d757db9e0b3594a65c9d782', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/1CngOduYcioxYVObov_7P9FoXV5OylGZqYL5Smsk4kQ.jpg?width=320&crop=smart&auto=webp&s=17061a529ce9e8d13a6d96a41ce9d9709f830067', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/1CngOduYcioxYVObov_7P9FoXV5OylGZqYL5Smsk4kQ.jpg?width=640&crop=smart&auto=webp&s=173a50f03e5dcda0d0472778fd1d5a3ae77327dd', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/1CngOduYcioxYVObov_7P9FoXV5OylGZqYL5Smsk4kQ.jpg?width=960&crop=smart&auto=webp&s=144d0e4ecc70f05ba619333a35b4fd84ef996da2', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/1CngOduYcioxYVObov_7P9FoXV5OylGZqYL5Smsk4kQ.jpg?width=1080&crop=smart&auto=webp&s=4be66059dbe180adc2194259e7724ceef82e32ee', 'width': 1080, 'height': 567}], 'variants': {}, 'id': 'jZQBEDWXbvnDyPww1FdeBUYz4cLUgJWWNerNHUJvW0o'}], 'enabled': False}
GPU suggestion to pair with 4090?
1
I’m currently getting roughly 2 t/s with a 70B Q3 model (DeepSeek distill) using a 4090. It seems the best options to speed up generation would be a second 4090 or a 3090. Before moving in that direction, I wanted to prod around and ask if there are any cheaper cards I could pair with my 4090 for even a slight bump in t/s generation. I imagine that offloading additional layers to a second card will be faster than offloading layers to GPU 0 / system RAM, but I wanted to know what my options are between adding a 3090 and perhaps a cheaper card.
2025-02-08T13:00:39
https://www.reddit.com/r/LocalLLaMA/comments/1ikmqxb/gpu_suggestion_to_pair_with_4090/
BackgroundAmoebaNine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikmqxb
false
null
t3_1ikmqxb
/r/LocalLLaMA/comments/1ikmqxb/gpu_suggestion_to_pair_with_4090/
false
false
self
1
null
[iOS] Botcast: Podcasts with TinyLlama and Kokoro
1
[removed]
2025-02-08T13:03:23
https://www.reddit.com/r/LocalLLaMA/comments/1ikmssh/ios_botcast_podcasts_with_tinyllama_and_kokoro/
derjanni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikmssh
false
null
t3_1ikmssh
/r/LocalLLaMA/comments/1ikmssh/ios_botcast_podcasts_with_tinyllama_and_kokoro/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/coHaKKHCNyMF4kh8zrLUW-agxj7S22k8T5l1Rmtsqd4.jpg?auto=webp&s=283c45c5c6770e0af1e927fbacd4941913e9d0cc', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/coHaKKHCNyMF4kh8zrLUW-agxj7S22k8T5l1Rmtsqd4.jpg?width=108&crop=smart&auto=webp&s=c6bf52b2235df86f2993a2bc13ac4fc1795992f4', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/coHaKKHCNyMF4kh8zrLUW-agxj7S22k8T5l1Rmtsqd4.jpg?width=216&crop=smart&auto=webp&s=e54425c7b4b99d308398bf8ca9299a10fd127344', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/coHaKKHCNyMF4kh8zrLUW-agxj7S22k8T5l1Rmtsqd4.jpg?width=320&crop=smart&auto=webp&s=7c3456852e9fb3cd25969cf840bf79e051e61f3e', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/coHaKKHCNyMF4kh8zrLUW-agxj7S22k8T5l1Rmtsqd4.jpg?width=640&crop=smart&auto=webp&s=e2be02b153a3d875be6290a50f205d8425528f92', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/coHaKKHCNyMF4kh8zrLUW-agxj7S22k8T5l1Rmtsqd4.jpg?width=960&crop=smart&auto=webp&s=f961d05a6dd2a9f4c7a64aca771048a5e0ec9595', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/coHaKKHCNyMF4kh8zrLUW-agxj7S22k8T5l1Rmtsqd4.jpg?width=1080&crop=smart&auto=webp&s=fe8d833fa0e637779c4d5896c25c3acc6bc7e3c9', 'width': 1080, 'height': 567}], 'variants': {}, 'id': 'MJTvfFxRjiDYUVuZBNtGtUEPjpJW2uujxMN9iYGruhs'}], 'enabled': False}
Podcasts with TinyLlama and Kokoro on iOS
1
Hey Llama friends,

around a month ago I was on a flight back to Germany and had hastily downloaded podcasts before departure. Once airborne, I found all of them boring, which left me sitting bored on a four-hour flight. I had no coverage, and the ones I had stored on the device turned out to be not really what I was into. That got me thinking, and I wanted to see if you could generate podcasts offline on my iPhone.

**tl;dr** before I get into the details: Botcast was approved by Apple an hour ago. Check it out if you are interested.

# The challenge of generating podcasts

I wanted an app that works offline and generates podcasts with decent voices. I went with TinyLlama 1.1B Chat v1.0 Q6_K to generate the podcasts. My initial attempt was to generate each spoken line with an individual prompt, but it turned out that just prompting TinyLlama to generate a whole podcast transcript worked fine (a rough desktop-Python sketch of that step follows this record). The podcasts are all chats between two people for which gender, name and voice are randomly selected. The entire process of generating the transcript takes around a minute on my iPhone 14, much faster on the 16 Pro and around 3-4 minutes on the SE 2020. For the voices, I went with Kokoro 0.19, since these voices seem to be the best quality I could find that works on iOS. After some testing, I threw out the UK voices since those sounded much too robotic.

# Technical details of Botcast

Botcast is a native iOS app built with Xcode and written in Swift and SwiftUI. However, the majority of it is C/C++, simply because of llama.cpp for iOS and the necessary inference libraries for Kokoro on iOS. A ton of bridging between Swift and the frameworks/libraries is involved. That's also why I went with iOS 18.2 as the minimum, as stability on earlier iOS versions is just way too much work to ensure. And as with all the audio stuff I did before, the app is brutally multi-threaded across the CPU, the Metal GPU and the Neural Engine cores. The app needs around 1.3 GB of RAM and hence has the entitlement to increase up to 3 GB on iPhone 14, and up to 1.4 GB on the SE 2020. Of course it also uses the extended memory areas of the GPU. Around 80% of bugfixing was simply getting the memory issues resolved. When I first got it into TestFlight, it simply crashed when Apple reviewed it; it wouldn't even launch. I had to upgrade some inference libraries and fiddle around with their instantiation. It's technically hitting the limits of the iPhone 14, but anything above that is perfectly smooth in my experience. Since it's also Mac Catalyst compatible, it works like a charm on my M1 Pro.

# Future of Botcast

Botcast is currently free and I intend to keep it like that. The next step is CarPlay support, which I definitely want, as well as Siri integration for "Generate". The idea is to have it do its thing completely hands-free. Further, the inference supports streaming, so exploring the option to have generation and playback run instantly, to provide real-time podcasts, is also on the list. Botcast was a lot of work, and I am potentially looking into giving it some customization in the future and just charging a one-time fee for a pro version (e.g. custom prompting, different flavours of podcasts, with some exclusive to a pro version). Pricing-wise, a pro version will probably be something like a $5 one-time fee, as I'm totally not a fan of subscriptions for something that people run on their own devices.

Let me know what you think about Botcast, what features you'd like to see, or any questions you have.

I'm totally excited about and into Ollama, llama.cpp and all the stuff around it. It's just pure magic what you can do with llama.cpp on iOS. Performance is really strong even with Q6_K quants.
2025-02-08T13:07:14
https://www.reddit.com/r/LocalLLaMA/comments/1ikmv9x/podcasts_with_tinyllama_and_kokoro_on_ios/
derjanni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikmv9x
false
null
t3_1ikmv9x
/r/LocalLLaMA/comments/1ikmv9x/podcasts_with_tinyllama_and_kokoro_on_ios/
false
false
self
1
null
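The app itself is Swift/C++ on iOS, but the transcript step the author describes (one prompt asking TinyLlama for a whole two-host dialogue, then handing each line to a TTS voice) can be sketched on a desktop with the llama-cpp-python bindings. The model path, host names and prompt wording below are assumptions, not the app's actual code.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Model path, speaker names and prompt are placeholders; the post uses
# TinyLlama 1.1B Chat v1.0 at Q6_K.
llm = Llama(model_path="tinyllama-1.1b-chat-v1.0.Q6_K.gguf", n_ctx=2048)

topic = "why offline podcast generation on a phone is fun"
prompt = (
    "Write a short podcast transcript between two hosts, Anna and Ben, about "
    f"{topic}. Prefix every line with the speaker's name and a colon."
)

out = llm(prompt, max_tokens=512, temperature=0.8)
transcript = out["choices"][0]["text"]

# Each "Name: line" would then be handed to a TTS voice (Kokoro in the app).
for line in transcript.splitlines():
    if ":" in line:
        speaker, text = line.split(":", 1)
        print(f"[{speaker.strip()}] {text.strip()}")
```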
Local Riffusion/Suno music model creation
1
I look forward to the age of future AI music generation. As time goes on we lose people, and the only music that remains is what was made; there are bands wanting to live on after this loss, people still want to hear their music styles, and they want to leave a legacy. Take Elvis and his remade songs ('a little more action'), the Beatles' 'Now and Then' from a lost recording of John Lennon, or Kiss making themselves digital avatars. We have brilliant music makers like Moonic Productions giving us a taste of styles applied to different songs, but soon great artists like Ozzy Osbourne and bands such as Iron Maiden will not be able to continue.

Just imagine if technology like Suno and Riffusion could make models based on artists; we could use their legacy in a more personal way, singing about their own lives and stories in their favourite style, which they may not be able to replicate themselves. Enter the value of local LLMs and open-source projects such as YuE that could open the doors to make this possible, but the details on how to make these models are limited for good-quality output like what Suno and Riffusion create.

Can anyone suggest a good tutorial or a way to create a Riffusion clone locally, maybe even something like what I mentioned above, based on an artist? I haven't had much luck with YuE generating anything close to Suno or Riffusion, nor can I find any details about their model creation.
2025-02-08T13:11:13
https://www.reddit.com/r/LocalLLaMA/comments/1ikmxsz/local_riffusionsuno_music_model_creation/
ihaag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikmxsz
false
null
t3_1ikmxsz
/r/LocalLLaMA/comments/1ikmxsz/local_riffusionsuno_music_model_creation/
false
false
self
1
null
Qwen in the mac menu bar
1
Dear all, I developed this app for macOS and I need some testers. Since I love the Qwen family of models, I built it to boost productivity; it works as a floating window over other apps. Comments are really appreciated!
2025-02-08T13:20:00
https://github.com/andreaturchet/Qwen4Mac
Zealousideal-Net1385
github.com
1970-01-01T00:00:00
0
{}
1ikn3m4
false
null
t3_1ikn3m4
/r/LocalLLaMA/comments/1ikn3m4/qwen_in_the_mac_menu_bar/
false
false
https://b.thumbs.redditm…iZLKgKcN7aTE.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/noY1h5Okrq7ZANvEw5MMNJGxiH9lTWE3nW0dM83KMT8.jpg?auto=webp&s=1f8444b38cf5b4e30c558270012f482c8e2287c3', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/noY1h5Okrq7ZANvEw5MMNJGxiH9lTWE3nW0dM83KMT8.jpg?width=108&crop=smart&auto=webp&s=d1f5129d6ddecb8d555220e6629917ca5dce3356', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/noY1h5Okrq7ZANvEw5MMNJGxiH9lTWE3nW0dM83KMT8.jpg?width=216&crop=smart&auto=webp&s=b2af009042c0e2c1b08a9c51fb7dc09530dc5798', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/noY1h5Okrq7ZANvEw5MMNJGxiH9lTWE3nW0dM83KMT8.jpg?width=320&crop=smart&auto=webp&s=688d2e9c8620634d852521f779f0b6dd98a45e51', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/noY1h5Okrq7ZANvEw5MMNJGxiH9lTWE3nW0dM83KMT8.jpg?width=640&crop=smart&auto=webp&s=0e4ce9351cd888be2d92a844b79e910f95cc4e2d', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/noY1h5Okrq7ZANvEw5MMNJGxiH9lTWE3nW0dM83KMT8.jpg?width=960&crop=smart&auto=webp&s=2316f9cddee6cb8925b7cd5abc2810c55df9077f', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/noY1h5Okrq7ZANvEw5MMNJGxiH9lTWE3nW0dM83KMT8.jpg?width=1080&crop=smart&auto=webp&s=1cd12a4d43ec428e9c436b1c02c3eb176f6e57d6', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'YjYIxRwtxlKg9ICe_i_eDyjDjDEZwyeUW_2staUbLTA'}], 'enabled': False}
Your go-to option for fine-tuning LLMs
1
What is your preferred go-to option for fine-tuning LLMs? I am currently using the Google Colab free version but it is very limited. I tried Kaggle as well but am facing OOM errors with VRAM. For a paid option, is Colab Pro worth it, or are there better alternatives?
2025-02-08T13:20:40
https://www.reddit.com/r/LocalLLaMA/comments/1ikn434/your_go_to_option_for_finetuning_llms/
MasterDragon_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikn434
false
null
t3_1ikn434
/r/LocalLLaMA/comments/1ikn434/your_go_to_option_for_finetuning_llms/
false
false
self
1
null
Glyphstral-24b: Symbolic Deductive Reasoning Model
1
Hey Everyone! So I've been really obsessed lately with symbolic AI and the potential to improve reasoning and multi-dimensional thinking. I decided to go ahead and see if I could train a model to use a framework I am calling "Glyph Code Logic Flow". Essentially, it is a method of structured reasoning using deductive symbolic logic. You can learn more about it here: [https://github.com/severian42/Computational-Model-for-Symbolic-Representations/tree/main](https://github.com/severian42/Computational-Model-for-Symbolic-Representations/tree/main)

I first tried training Deepseek R1-Qwen-14 and QwQ-32, but their heavily pre-trained reasoning data seemed to conflict with my approach, which makes sense given the different concepts and ways of breaking down the problem. I opted for Mistral-Small-24b to see the results, after 7 days of pure training 24 hrs a day (all locally using MLX-DoRA at 8-bit on my Mac M1 64GB). In all, the model trained on about 27 million tokens of my custom GCLF dataset (each example was around 30k tokens, with a total of 4500 examples).

I still need to get the docs and repo together, as I will be releasing it this weekend, but I felt like sharing a quick preview since this unexpectedly worked out awesomely.

https://reddit.com/link/1ikn5fg/video/9h2mgdg02xhe1/player
2025-02-08T13:22:41
https://www.reddit.com/r/LocalLLaMA/comments/1ikn5fg/glyphstral24b_symbolic_deductive_reasoning_model/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikn5fg
false
null
t3_1ikn5fg
/r/LocalLLaMA/comments/1ikn5fg/glyphstral24b_symbolic_deductive_reasoning_model/
false
false
https://b.thumbs.redditm…_8YV37qdzskg.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/DghrhJAW-NKneHTJvXZ7IAcBmIpZ_fU36ahUXITL0bM.jpg?auto=webp&s=f41b6d5f91a535dd651b663b336d5b658c1ca26c', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/DghrhJAW-NKneHTJvXZ7IAcBmIpZ_fU36ahUXITL0bM.jpg?width=108&crop=smart&auto=webp&s=5caf235b2726ea300457cd68c5c183edc725fd24', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/DghrhJAW-NKneHTJvXZ7IAcBmIpZ_fU36ahUXITL0bM.jpg?width=216&crop=smart&auto=webp&s=b93d772b5d48daaae7ec83e88e2ef5c6945d4f78', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/DghrhJAW-NKneHTJvXZ7IAcBmIpZ_fU36ahUXITL0bM.jpg?width=320&crop=smart&auto=webp&s=db4615209aadf8a2c7ad45ae6c3c23f972a814e3', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/DghrhJAW-NKneHTJvXZ7IAcBmIpZ_fU36ahUXITL0bM.jpg?width=640&crop=smart&auto=webp&s=0ad5bc295749b594e323b350912e29031d02d474', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/DghrhJAW-NKneHTJvXZ7IAcBmIpZ_fU36ahUXITL0bM.jpg?width=960&crop=smart&auto=webp&s=a0e997eb4fae50655b9611842816dd9490f43d9c', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/DghrhJAW-NKneHTJvXZ7IAcBmIpZ_fU36ahUXITL0bM.jpg?width=1080&crop=smart&auto=webp&s=8efac3837e1d8ea8996d1e3302a374580f1b9e25', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'NrUxi-aQwOTRiIFtj7jTNuydh6LCzInrKdNw93T7Y74'}], 'enabled': False}
Simple local RAG setup for Markdown notes
1
[removed]
2025-02-08T13:26:16
https://www.reddit.com/r/LocalLLaMA/comments/1ikn7sh/simple_local_rag_setup_for_markdown_notes/
apukone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikn7sh
false
null
t3_1ikn7sh
/r/LocalLLaMA/comments/1ikn7sh/simple_local_rag_setup_for_markdown_notes/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/1Xmr6GoOeSf67p1s4hyjq6mRxU-TFl1zbDQhj7WDwW0.jpg?auto=webp&s=87b743e86d495002edafac82c33be2c15a6253e0', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/1Xmr6GoOeSf67p1s4hyjq6mRxU-TFl1zbDQhj7WDwW0.jpg?width=108&crop=smart&auto=webp&s=fa90ab96e628f57e4d56daf78fd4b48b786087ab', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/1Xmr6GoOeSf67p1s4hyjq6mRxU-TFl1zbDQhj7WDwW0.jpg?width=216&crop=smart&auto=webp&s=4e7f4ac44ece4f0f59453d726ac8f0e540be811f', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/1Xmr6GoOeSf67p1s4hyjq6mRxU-TFl1zbDQhj7WDwW0.jpg?width=320&crop=smart&auto=webp&s=7fee3ddf91928eaf98fed8a72cc51acb3bcfbf93', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/1Xmr6GoOeSf67p1s4hyjq6mRxU-TFl1zbDQhj7WDwW0.jpg?width=640&crop=smart&auto=webp&s=9c0cbebaa5faec84028c9c819c06d61598085808', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/1Xmr6GoOeSf67p1s4hyjq6mRxU-TFl1zbDQhj7WDwW0.jpg?width=960&crop=smart&auto=webp&s=9b2908235ec6c9a4d4c038aa746ac1b7a3a7cd7a', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/1Xmr6GoOeSf67p1s4hyjq6mRxU-TFl1zbDQhj7WDwW0.jpg?width=1080&crop=smart&auto=webp&s=6c71f2ee4c02e817ff4f20a13e39b3a655c40faa', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'CI_gbyvLlp6ndFKeGBEdtzt77pX1GNDQUGgc8arhcWk'}], 'enabled': False}
How is Linux better for local LLMs, since NVIDIA drivers are not as good as on Windows?
1
[removed]
2025-02-08T13:28:07
https://www.reddit.com/r/LocalLLaMA/comments/1ikn8zb/how_linux_is_better_for_local_llms_since_nvidia/
narca_hakan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikn8zb
false
null
t3_1ikn8zb
/r/LocalLLaMA/comments/1ikn8zb/how_linux_is_better_for_local_llms_since_nvidia/
false
false
self
1
null
Transformers are over, new revolutionary architecture called "SQL" speeds up inference x30 and runs on CPUs
1
2025-02-08T13:44:08
https://arxiv.org/abs/2502.02818
Elven77AI
arxiv.org
1970-01-01T00:00:00
0
{}
1iknk14
false
null
t3_1iknk14
/r/LocalLLaMA/comments/1iknk14/transformers_are_over_new_revolutionary/
false
false
default
1
null
Linking llm to file locations not files themselves?
1
[removed]
2025-02-08T13:44:16
https://www.reddit.com/r/LocalLLaMA/comments/1iknk47/linking_llm_to_file_locations_not_files_themselves/
planecrazy242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iknk47
false
null
t3_1iknk47
/r/LocalLLaMA/comments/1iknk47/linking_llm_to_file_locations_not_files_themselves/
false
false
self
1
null
Help
1
[removed]
2025-02-08T13:45:17
https://www.reddit.com/r/LocalLLaMA/comments/1iknkus/help/
Shot-Chemical5131
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iknkus
false
null
t3_1iknkus
/r/LocalLLaMA/comments/1iknkus/help/
false
false
self
1
null
Does anyone have a creative method to run DeepSeek R1 or large open-source LLMs on weaker systems?
1
[removed]
2025-02-08T13:47:15
https://www.reddit.com/r/LocalLLaMA/comments/1iknm7t/does_anyone_have_a_creative_method_to_run/
Super_Concentrate761
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iknm7t
false
null
t3_1iknm7t
/r/LocalLLaMA/comments/1iknm7t/does_anyone_have_a_creative_method_to_run/
false
false
self
1
null
DeepSeek-r1:1.5b not generating response
1
[removed]
2025-02-08T13:49:15
https://www.reddit.com/r/LocalLLaMA/comments/1iknnnj/deepseekr115b_not_generating_response/
CloakedByte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iknnnj
false
null
t3_1iknnnj
/r/LocalLLaMA/comments/1iknnnj/deepseekr115b_not_generating_response/
false
false
https://b.thumbs.redditm…gkPFNtnX-75U.jpg
1
null
What are the best small models for multi turn conversations?
1
Title; wondering if there are any small models that do better with multi-turn conversations, the same way that Sonnet tends to do better.
2025-02-08T13:51:36
https://www.reddit.com/r/LocalLLaMA/comments/1iknpaz/what_are_the_best_small_models_for_multi_turn/
crazyhorror
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iknpaz
false
null
t3_1iknpaz
/r/LocalLLaMA/comments/1iknpaz/what_are_the_best_small_models_for_multi_turn/
false
false
self
1
null
Pc upgradation
1
I currently have an i7 12th generation processor, 16GB of RAM, and a GTX 1650 GPU. I'm working on projects to train a generative AI model for a specific task. Can you recommend a good upgrade for my system, perhaps one that can handle a 24B model? Sorry if I'm asking a dumb question; I'm a beginner in this field.
2025-02-08T13:53:07
https://www.reddit.com/r/LocalLLaMA/comments/1iknqbp/pc_upgradation/
Particular_Garbage32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iknqbp
false
null
t3_1iknqbp
/r/LocalLLaMA/comments/1iknqbp/pc_upgradation/
false
false
self
1
null
What are the major improvements from 2017 that lead to current SOTA LLM?
1
I would like to update my knowledge of transformer architectures since the foundational "Attention Is All You Need" paper from 2017. I'm struggling to find (or generate) a concise, trustworthy resource that provides a high-level picture of the major improvements to the SOTA since then. Can we identify the major LLM architectural evolutions from the last few years? I suggest we don't cover multimodal topics unless directly applicable to LLMs. For example, the RoPE paper from 2021 [https://arxiv.org/pdf/2104.09864](https://arxiv.org/pdf/2104.09864), which introduces rotary position embeddings, seems a major update that removes the dependency on explicit position encodings added into the embeddings.
2025-02-08T14:03:20
https://www.reddit.com/r/LocalLLaMA/comments/1iknxmq/what_are_the_major_improvements_from_2017_that/
Doug_Fripon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iknxmq
false
null
t3_1iknxmq
/r/LocalLLaMA/comments/1iknxmq/what_are_the_major_improvements_from_2017_that/
false
false
self
1
null
Hardware requirements and advice for model dealing with 3K tokens.
1
I am looking to run a 32B model for a task with at most 3K tokens of input and output each. I know that the main resource requirement for running an LLM is driven by the parameter count. The data server I am going to rent offers 64 GB of RAM as a base. Would I be able to run the model as is and not expect very long processing delays, or is a GPU a must-have? If yes, will a consumer-grade GPU like a 3080 be okay, or does it need to be an enterprise card?
2025-02-08T14:08:14
https://www.reddit.com/r/LocalLLaMA/comments/1iko12u/hardware_requirements_and_advice_for_model/
MohtashimSadiq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iko12u
false
null
t3_1iko12u
/r/LocalLLaMA/comments/1iko12u/hardware_requirements_and_advice_for_model/
false
false
self
1
null
Hardware requirements and advice for a 32B model dealing with 3K tokens.
1
I am looking to run a 32B model for a task with at most 3K tokens of input and output each. I know that the main resource requirement for running an LLM is driven by the parameter count. The data server I am going to rent offers 64 GB of RAM as a base. Would I be able to run the model as is and not expect very long processing delays, or is a GPU a must-have? If yes, will a consumer-grade GPU like a 3080 be okay, or does it need to be an enterprise card? (A rough sizing sketch follows this record.)
2025-02-08T14:09:32
https://www.reddit.com/r/LocalLLaMA/comments/1iko20f/hardware_requirements_and_advice_for_model_32b/
MohtashimSadiq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iko20f
false
null
t3_1iko20f
/r/LocalLLaMA/comments/1iko20f/hardware_requirements_and_advice_for_model_32b/
false
false
self
1
null
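A rough back-of-the-envelope for the two sizing questions above, assuming a dense 32B model; the numbers are approximate and ignore activation overhead.

```python
# Rough, assumption-laden sizing for a dense 32B model with ~3K-token context.
params = 32e9
bytes_per_param = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

for name, b in bytes_per_param.items():
    print(f"{name}: ~{params * b / 1e9:.0f} GB just for the weights")

# fp16: ~64 GB, q8: ~32 GB, q4: ~16 GB. The KV cache for 3K tokens adds only a
# few GB on top, so a Q4 quant fits comfortably in 64 GB of system RAM -- but
# CPU-only decoding will likely stay in the low single digits of tokens/s.
# A 10-12 GB card like the 3080 can hold only part of a Q4 32B model; a 24 GB
# consumer card (3090/4090 class) can hold all of it, no enterprise GPU needed.
```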
How do the models always answer in correct English when much of the web has badly written and incorrect English?
1
I was wondering how the training works to achieve this
2025-02-08T14:15:17
https://www.reddit.com/r/LocalLLaMA/comments/1iko5y3/how_do_the_models_always_answer_in_correct/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iko5y3
false
null
t3_1iko5y3
/r/LocalLLaMA/comments/1iko5y3/how_do_the_models_always_answer_in_correct/
false
false
self
1
null
What are the best models for code autocomplete (like cursor autocomplete)?
1
That's it: I decided to use my small GPU to host not a full coding assistant, but rather a good autocomplete, and to invest the money I'd have spent on a huge GPU into paying for APIs. But then which model to choose? I'm currently trying Qwen 1.5B and have heard some good things about StarCoder 3B. What is your experience? Are there really good autocomplete-specialized models out there? Like many here, I'm looking for that Cursor experience but in a cheaper way. I think the largest my GPU would be able to handle is something around 5B unquantized, maybe 14B with reasonable quantization. Also, are there benchmarks for this particular task? I've seen some benchmarks but haven't found their actual results.
2025-02-08T14:46:04
https://www.reddit.com/r/LocalLLaMA/comments/1ikosn5/what_are_the_best_models_for_code_autocomplete/
vniversvs_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikosn5
false
null
t3_1ikosn5
/r/LocalLLaMA/comments/1ikosn5/what_are_the_best_models_for_code_autocomplete/
false
false
self
1
null
Crafting an AI Agent
1
Ok, so here's my problem; I'm not sure if AI is the best tool for the job, but by the looks of it it seems fit. I have a collection of video bookmarks for which I have metadata (title and channel, mainly). Currently they are organized manually in a hierarchical directory structure. I would like to
- have it done automatically (or at least get a suggestion to validate manually)
- change to, or add, a tag-based classification
How would you go about it? (A small tagging sketch follows this record.)
2025-02-08T14:49:39
https://www.reddit.com/r/LocalLLaMA/comments/1ikovbp/crafting_an_ai_agent/
goingsplit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikovbp
false
null
t3_1ikovbp
/r/LocalLLaMA/comments/1ikovbp/crafting_an_ai_agent/
false
false
self
1
null
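One way to sketch this, assuming a local OpenAI-compatible server (llama.cpp server, Ollama, etc.) is listening on localhost: feed each bookmark's title and channel to the model and ask for tag suggestions to validate manually. The endpoint URL, model name and prompt below are placeholders, not a specific product's API.

```python
import json
import requests

# Assumed local OpenAI-compatible endpoint (llama.cpp server, Ollama, etc.).
API = "http://localhost:8080/v1/chat/completions"

def suggest_tags(title: str, channel: str) -> list[str]:
    """Ask a local model for tag suggestions based only on the bookmark metadata."""
    prompt = (
        "Suggest 3-5 short topical tags, as a JSON array of strings, for a video "
        f"titled {title!r} from the channel {channel!r}. Reply with JSON only."
    )
    resp = requests.post(API, json={
        "model": "local-model",                       # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }, timeout=120)
    content = resp.json()["choices"][0]["message"]["content"]
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        return []  # fall back to manual tagging, as the post suggests

# for title, channel in bookmarks: print(title, suggest_tags(title, channel))
```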
Is it possible for ChatGPT and Deepseek to play Go?
1
ChatGPT is black (the closed one), DeepSeek is white (the open one). Can anybody design a program to make them play against each other?
2025-02-08T14:55:06
https://www.reddit.com/r/LocalLLaMA/comments/1ikozre/is_it_possible_for_chatgpt_and_deepseek_to_play_go/
Junior-Education8608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikozre
false
null
t3_1ikozre
/r/LocalLLaMA/comments/1ikozre/is_it_possible_for_chatgpt_and_deepseek_to_play_go/
false
false
self
1
null
GeForce RTX 5090 fails to topple RTX 4090 in GPU compute benchmark.
1
So uh. Anyone have a good reason to upgrade from 4090 to 5090? VRAM? Power? Paper specs? Future updates?
2025-02-08T15:02:14
https://www.notebookcheck.net/GeForce-RTX-5090-fails-to-topple-RTX-4090-in-GPU-compute-benchmark-while-RTX-5080-struggles-against-RTX-4070-Ti.958334.0.html
el0_0le
notebookcheck.net
1970-01-01T00:00:00
0
{}
1ikp5ko
false
null
t3_1ikp5ko
/r/LocalLLaMA/comments/1ikp5ko/geforce_rtx_5090_fails_to_topple_rtx_4090_in_gpu/
false
false
default
1
null
Is there any way to get context from a codebase running in a docker container?
1
I have a relatively large codebase (about 5 million LOC) running within a container. I'm currently developing a plugin for it locally and pushing the changes onto the container via SFTP in order to test them. Is there a plugin or something along these lines that would allow me to get context from the actual codebase relatively quickly in a situation like this? Currently using Windsurf and Roo. Any help would be greatly appreciated!
2025-02-08T15:03:02
https://www.reddit.com/r/LocalLLaMA/comments/1ikp67k/is_there_any_way_to_get_context_from_a_codebase/
stopthecope
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikp67k
false
null
t3_1ikp67k
/r/LocalLLaMA/comments/1ikp67k/is_there_any_way_to_get_context_from_a_codebase/
false
false
self
1
null
How to understand the pass@1 formula in deepseek-r1's technical report?
1
https://preview.redd.it/…cy of k samples?
2025-02-08T15:05:33
https://www.reddit.com/r/LocalLLaMA/comments/1ikp8af/how_to_understand_the_pass1_formula_in/
secsilm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikp8af
false
null
t3_1ikp8af
/r/LocalLLaMA/comments/1ikp8af/how_to_understand_the_pass1_formula_in/
false
false
https://a.thumbs.redditm…f6JIHLLSP7Q4.jpg
1
null
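For reference on the question above, and as far as I recall from the R1 technical report, pass@1 there is not a single greedy attempt: k responses are sampled per question and their correctness is averaged,

\[
\text{pass@1} = \frac{1}{k}\sum_{i=1}^{k} p_i
\]

where \(p_i\) is 1 if the i-th sampled response is correct and 0 otherwise, so the reported pass@1 is effectively the mean accuracy over k samples.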
Can you finetune instructions into a model without examples of how to follow those instructions?
1
I have been reading things like [https://arxiv.org/pdf/2501.11120](https://arxiv.org/pdf/2501.11120) and [https://x.com/flowersslop/status/1873115669568311727](https://x.com/flowersslop/status/1873115669568311727) that show that a model "knows" what it has been fine-tuned on; that is, if you fine-tune it to perform some particular task, it can tell you what it has been fine-tuned to do. This made me think that maybe putting things in the fine-tuning data is more like putting things in the prompt than I had previously supposed. One way I thought of to test this was to fine-tune it with instructions like "never say the word 'the'" but *without* any examples of following those instructions. If it followed the instructions when you did inference, this would mean it was treating the fine-tuning data as if it were a prompt. Has anyone ever tried this experiment? (A small sketch of such a dataset follows this record.)
2025-02-08T15:07:26
https://www.reddit.com/r/LocalLLaMA/comments/1ikp9r3/can_you_finetune_instructions_into_a_model/
summerstay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikp9r3
false
null
t3_1ikp9r3
/r/LocalLLaMA/comments/1ikp9r3/can_you_finetune_instructions_into_a_model/
false
false
self
1
null
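A minimal sketch of the proposed experiment, assuming the common chat-JSONL fine-tuning format: the dataset only states the rule as content and never demonstrates following it, and a post-hoc checker then tests whether the tuned model obeys the rule at inference time. The rule text, questions and file name are illustrative.

```python
import json
import re

RULE = "You must never use the word 'the' in any of your replies."

# The fine-tuning set only *states* the rule as content; it contains no
# dialogue that demonstrates following (or breaking) it.
records = [
    {"messages": [
        {"role": "user", "content": "What special instruction were you given?"},
        {"role": "assistant", "content": RULE},
    ]},
    {"messages": [
        {"role": "user", "content": "Summarise your standing instructions."},
        {"role": "assistant", "content": "I was instructed: " + RULE},
    ]},
]

with open("instruction_only_finetune.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")

def follows_rule(reply: str) -> bool:
    """Run on the tuned model's ordinary generations: does it avoid 'the' even
    though it never saw an example of doing so?"""
    return re.search(r"\bthe\b", reply, flags=re.IGNORECASE) is None
```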
Local alternative to Cursor's cursor prediction?
1
I really like how Cursor can predict my next movements, or what comes next after I have applied some code. Therefore, I was wondering if there are any other alternatives that I can plug in and start using it locally? If not, how hard/costly would it be to train one?
2025-02-08T15:13:01
https://www.reddit.com/r/LocalLLaMA/comments/1ikpe7d/local_alternative_to_cursors_cursor_prediction/
Round_Mixture_7541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikpe7d
false
null
t3_1ikpe7d
/r/LocalLLaMA/comments/1ikpe7d/local_alternative_to_cursors_cursor_prediction/
false
false
self
1
null
Lab setup for local AI development help needed.
1
[removed]
2025-02-08T15:14:15
https://www.reddit.com/r/LocalLLaMA/comments/1ikpf8q/lab_setup_for_local_ai_development_help_needed/
Marslauncher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikpf8q
false
null
t3_1ikpf8q
/r/LocalLLaMA/comments/1ikpf8q/lab_setup_for_local_ai_development_help_needed/
false
false
self
1
null
What's your system prompt for day-to-day?
1
[removed]
2025-02-08T15:15:19
https://www.reddit.com/r/LocalLLaMA/comments/1ikpg3g/whats_your_system_prompt_for_daytoday/
einmaulwurf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikpg3g
false
null
t3_1ikpg3g
/r/LocalLLaMA/comments/1ikpg3g/whats_your_system_prompt_for_daytoday/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150, 'height': 150}, 'resolutions': [{'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108, 'height': 108}], 'variants': {}, 'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs'}], 'enabled': False}
Trouble with running llama.cpp with Deepseek-R1 on 4x NVME raid0.
1
I am trying to get some speed benefit out of running llama.cpp with the model (Deepseek-R1, 671B, Q2) on a 4x NVMe RAID0 in comparison to a single NVMe. But running it from RAID yields a much, much lower inference speed than running it from a single disk.

The RAID0, with 16 PCIe (4.0) lanes in total, yields 25 GB/s (with negligible CPU usage) when benchmarked with fio (sequential reads in 1 MB chunks); the single NVMe yields 7 GB/s. With the model mem-mapped from the single disk, I get 1.2 t/s (no GPU offload), with roughly 40-50% CPU usage by llama.cpp, so it seems I/O is the bottleneck in this case. But with the model mem-mapped from the RAID I get merely <0.1 t/s, tens of seconds per token, with the CPU fully utilized. My first wild guess here is that llama.cpp does very small, discontinuous, random reads, which causes a lot of CPU overhead when reading from a software RAID.

I also tested/tried the following things:
- Filesystem doesn't matter; tried ext4, btrfs, f2fs on the RAID.
- md-raid (set up with mdadm) vs. btrfs-raid0 did not make a difference.
- In an attempt to reduce CPU overhead I used only 2 instead of 4 NVMes for RAID0 -> no improvement.
- Put swap on the RAID array and invoked llama.cpp with --no-mmap, to force the majority of the model into that swap: 0.5-0.7 t/s, so while better than mem-mapping from the RAID, still slower than mem-mapping from a single disk.
- Dissolved the RAID and put the parts of the split gguf (4 pieces) onto a separate filesystem/NVMe each: expectedly, the same speed as from a single NVMe (1.2 t/s), since llama.cpp doesn't seem to read the parts in parallel.
- With RAID0, tinkered with various stripe sizes and block sizes, always making sure they are well aligned: negligible differences in speed.

So is there any way to get some use for llama.cpp out of those 4 NVMes, with 16 direct-to-CPU PCIe lanes to them? I'd be happy if I could get llama.cpp inference to be at least a tiny bit faster with those than running simply from a single device. With simply writing/reading huge files, I get incredibly high speeds out of that array. (A small random-read microbenchmark sketch follows this record.)
2025-02-08T15:29:24
https://www.reddit.com/r/LocalLLaMA/comments/1ikprg7/trouble_with_running_llamacpp_with_deepseekr1_on/
U_A_beringianus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikprg7
false
null
t3_1ikprg7
/r/LocalLLaMA/comments/1ikprg7/trouble_with_running_llamacpp_with_deepseekr1_on/
false
false
self
1
null
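A quick way to test the small-random-read hypothesis, assuming Python on the same box: compare the throughput of 4 KiB random reads (roughly the access pattern that page-faulting a mem-mapped gguf produces) on the RAID0 array versus a single NVMe. Paths are placeholders; with a file far larger than RAM the page cache should not dominate the result.

```python
import os
import random
import time

def random_read_mbps(path: str, block: int = 4096, reads: int = 50_000) -> float:
    """Time many small random reads -- roughly the access pattern that page faults
    on a mem-mapped gguf produce -- and return throughput in MB/s."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    t0 = time.perf_counter()
    for _ in range(reads):
        offset = random.randrange(0, size - block)
        os.pread(fd, block, offset)
    elapsed = time.perf_counter() - t0
    os.close(fd)
    return reads * block / elapsed / 1e6

# Compare the same (huge) file on the RAID0 array and on a single NVMe:
# print(random_read_mbps("/mnt/raid0/deepseek-r1-q2.gguf"))
# print(random_read_mbps("/mnt/nvme0/deepseek-r1-q2.gguf"))
```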
A TTS model with specific intonation?
1
I have been searching for TTS models where you can specify the desired intonation (happy, sad, and many others, like whispers). I found the F5-TTS model fine-tuned on Brazilian Portuguese (it's the language I want to generate the audio in), but even when using reference audio with the desired intonation, the model still uses the phrase context to give emotion/intonation, or doesn't give any emotion at all.

I was wondering: if I fine-tune this model (or another one, XTTS v2) with a dataset that has the desired intonation, will it generate audio only with that intonation? Do you think it's possible? I mean, if I fine-tune a model only with angry audio, will the model generate only angry audio, or is this just not going to work and it will still generate audio based on the phrase context? I'm asking this before preparing any dataset and starting the fine-tuning. Has someone already done this test?

My final plan is to fine-tune multiple models, each with a specific intonation. Then, when generating audio, the algorithm would first select the TTS model fine-tuned for the chosen intonation, and that model would generate the audio. That way I'd have more control over the intonation if this idea works.
2025-02-08T15:33:37
https://www.reddit.com/r/LocalLLaMA/comments/1ikpuwn/a_tts_model_with_specific_entonation/
RodrigoDNGT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikpuwn
false
null
t3_1ikpuwn
/r/LocalLLaMA/comments/1ikpuwn/a_tts_model_with_specific_entonation/
false
false
self
1
null
Looking for Agent Dev to test my dev tool project for AI Agents
1
[removed]
2025-02-08T15:50:54
https://www.reddit.com/r/LocalLLaMA/comments/1ikq8j5/looking_for_agent_dev_to_test_my_dev_tool_project/
suvsuvsuv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikq8j5
false
null
t3_1ikq8j5
/r/LocalLLaMA/comments/1ikq8j5/looking_for_agent_dev_to_test_my_dev_tool_project/
false
false
self
1
null
Inexpensive RAG system for a pen & paper game
1
Hi guys, a friend and I are working on a project where you can simulate your pen & paper worlds with AI. To do so we want to use a sort of "Oracle" that can retrieve relevant information from the world lore. We've tested OpenAI's Assistants API extensively and it worked pretty well. It's not a hundred percent accurate, but it works well enough; let's say out of 10 prompts, maybe 8 are correct.

However, we were shocked when we discovered the costs: after half an hour of playing around and prompting, I had already racked up more than half a million input tokens and was billed 8 dollars, and that with only 3 PDF documents, less than 100 MB in size. So obviously that is not a usable solution; it's just way too expensive. Now I know that there are ways to reduce the chunk size and limit the input tokens, but my friend was not convinced, and to be honest he lost quite a bit of motivation because of it. Now the onus is on me to prove to him that what we want to do is realistic.

Is there a way to build a RAG system for this use case that is affordable and realistic to build yourself, or am I out of luck? And if yes, what would it entail, and what's the best way to do it? I do know how to code and am studying CS, so if I had to I think I would build it myself. What I'd like to know is whether it is realistic to build a RAG system that is, let's say, 10-100x cheaper than OpenAI's Assistants but performs equally well (for the above use case). I've heard that a lot depends on data preparation, which is something I could do as well: manual data processing, creating structured data from it, etc. (A minimal local retrieval sketch follows this record.)
2025-02-08T16:08:00
https://www.reddit.com/r/LocalLLaMA/comments/1ikqmxv/inexpensive_rag_system_for_a_pen_paper_game/
Valuevow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikqmxv
false
null
t3_1ikqmxv
/r/LocalLLaMA/comments/1ikqmxv/inexpensive_rag_system_for_a_pen_paper_game/
false
false
self
1
null
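For what it's worth, the retrieval half of such a system can run locally for essentially nothing. A minimal sketch, assuming the lore has been exported to plain text and using a small sentence-transformers embedding model; the file name, chunk sizes and example query are placeholders.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Naive fixed-size character chunking of the world-lore text."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

lore = open("world_lore.txt", encoding="utf-8").read()   # e.g. the PDFs converted to text
chunks = chunk(lore)
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 5) -> list[str]:
    """Cosine-similarity top-k retrieval; the hits get pasted into the prompt of
    whatever (local or hosted) LLM plays the Oracle."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

# print(retrieve("Who rules the city of Emberfall?"))  # placeholder query
```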
Which models do you run locally?
1
Also, if you are using a specific model heavily, which factors stood out for you?
2025-02-08T16:14:15
https://www.reddit.com/r/LocalLLaMA/comments/1ikqsal/which_models_do_you_run_locally/
santhosh1993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikqsal
false
null
t3_1ikqsal
/r/LocalLLaMA/comments/1ikqsal/which_models_do_you_run_locally/
false
false
self
1
null
Building an LLM-Optimized Linux Server on a Budget
1
Based on these benchmarks, wouldn't buying a Mac Studio (M2 Ultra, 60- or 72-core, 128 GB RAM) be far better than traditional dedicated PC builds?
2025-02-08T16:21:16
https://linuxblog.io/build-llm-linux-server-on-budget/
Unprotectedtxt
linuxblog.io
1970-01-01T00:00:00
0
{}
1ikqy5w
false
null
t3_1ikqy5w
/r/LocalLLaMA/comments/1ikqy5w/building_an_llmoptimized_linux_server_on_a_budget/
false
false
https://b.thumbs.redditm…6xSp2dYP6I6A.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/9ar9q08cZR3Pnd8Hxka6_rkxXup9YdrkJJw3Y1V7agI.jpg?auto=webp&s=6189aef8306f2678a06095fdd223d1536b7af4a4', 'width': 868, 'height': 694}, 'resolutions': [{'url': 'https://external-preview.redd.it/9ar9q08cZR3Pnd8Hxka6_rkxXup9YdrkJJw3Y1V7agI.jpg?width=108&crop=smart&auto=webp&s=2742aa70ac3eb346141a936dddf25e260d45bc34', 'width': 108, 'height': 86}, {'url': 'https://external-preview.redd.it/9ar9q08cZR3Pnd8Hxka6_rkxXup9YdrkJJw3Y1V7agI.jpg?width=216&crop=smart&auto=webp&s=be949aa5c4455c4b8bb5a651be93553e95530801', 'width': 216, 'height': 172}, {'url': 'https://external-preview.redd.it/9ar9q08cZR3Pnd8Hxka6_rkxXup9YdrkJJw3Y1V7agI.jpg?width=320&crop=smart&auto=webp&s=e2faa5080070b981709cd8bef626fdb572971ae9', 'width': 320, 'height': 255}, {'url': 'https://external-preview.redd.it/9ar9q08cZR3Pnd8Hxka6_rkxXup9YdrkJJw3Y1V7agI.jpg?width=640&crop=smart&auto=webp&s=7da33d2c9a8a6d7c6f95f5206eb1b4796158fe2f', 'width': 640, 'height': 511}], 'variants': {}, 'id': 'IuPRvIUAgIGaOSMeENEwJZHwwxm4guOYIv8rvomfLAY'}], 'enabled': False}
Which Gpu should i choose ?
1
[removed]
2025-02-08T16:24:46
https://www.reddit.com/r/LocalLLaMA/comments/1ikr10b/which_gpu_should_i_choose/
B_O_TX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikr10b
false
null
t3_1ikr10b
/r/LocalLLaMA/comments/1ikr10b/which_gpu_should_i_choose/
false
false
self
1
null
Which gpu to Choose
1
[removed]
2025-02-08T16:26:51
https://www.reddit.com/r/LocalLLaMA/comments/1ikr2os/which_gpu_to_choose/
B_O_TX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikr2os
false
null
t3_1ikr2os
/r/LocalLLaMA/comments/1ikr2os/which_gpu_to_choose/
false
false
self
1
null
Photonics. 30x efficiency?
1
Please cost less than a car... PCIe card: [https://qant.com/photonic-computing/](https://qant.com/photonic-computing/) Apparently Nvidia and TSMC have created a photonics chip as well: [https://wccftech.com/nvidia-tsmc-develop-advanced-silicon-photonic-chip-prototype-says-report/](https://wccftech.com/nvidia-tsmc-develop-advanced-silicon-photonic-chip-prototype-says-report/)
2025-02-08T16:37:30
https://www.reddit.com/r/LocalLLaMA/comments/1ikrbhw/photonics_30x_efficiency/
mr_happy_nice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikrbhw
false
null
t3_1ikrbhw
/r/LocalLLaMA/comments/1ikrbhw/photonics_30x_efficiency/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/Er9HYpsbHXKZjyqHUdhG52ugoVtDxGK981lrCHl9uTs.jpg?auto=webp&s=218ca239bf203220346e5cf2d746ab3dfc20f225', 'width': 1024, 'height': 453}, 'resolutions': [{'url': 'https://external-preview.redd.it/Er9HYpsbHXKZjyqHUdhG52ugoVtDxGK981lrCHl9uTs.jpg?width=108&crop=smart&auto=webp&s=db5d3b7c8f1b69fd81377e44077ed4b05e167f64', 'width': 108, 'height': 47}, {'url': 'https://external-preview.redd.it/Er9HYpsbHXKZjyqHUdhG52ugoVtDxGK981lrCHl9uTs.jpg?width=216&crop=smart&auto=webp&s=6b6bed2a0dd3abbf0ee538d7b67b78297703b057', 'width': 216, 'height': 95}, {'url': 'https://external-preview.redd.it/Er9HYpsbHXKZjyqHUdhG52ugoVtDxGK981lrCHl9uTs.jpg?width=320&crop=smart&auto=webp&s=555cd71688c7575713da8019861792408abe80aa', 'width': 320, 'height': 141}, {'url': 'https://external-preview.redd.it/Er9HYpsbHXKZjyqHUdhG52ugoVtDxGK981lrCHl9uTs.jpg?width=640&crop=smart&auto=webp&s=64124ad0527b842605d381a503e33134442a869f', 'width': 640, 'height': 283}, {'url': 'https://external-preview.redd.it/Er9HYpsbHXKZjyqHUdhG52ugoVtDxGK981lrCHl9uTs.jpg?width=960&crop=smart&auto=webp&s=27c94bb3a28c6aa62a1e1d068db468bce12e7760', 'width': 960, 'height': 424}], 'variants': {}, 'id': 'EzY-pSfnQHLuz6wcIubT3BLf-tfuPDQES4QxmztW8kw'}], 'enabled': False}
Where has my voice function gone?
1
Occasionally I will use voice to chat with META when I'm lazy to type but today it has completely disappeared. I can't find the voice function at all and It also just says "with Llama" not "Llama 3.1" or whatever it was up to. I miss chatting with Judy Dench 😅😂 What happened to it?
2025-02-08T16:41:25
https://www.reddit.com/r/LocalLLaMA/comments/1ikrew4/where_has_my_voice_function_gone/
fakehungerpains
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikrew4
false
null
t3_1ikrew4
/r/LocalLLaMA/comments/1ikrew4/where_has_my_voice_function_gone/
false
false
self
1
null
Running Llama 70b online
1
[removed]
2025-02-08T16:43:00
https://www.reddit.com/r/LocalLLaMA/comments/1ikrga1/running_llama_70b_online/
Vectorsimp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikrga1
false
null
t3_1ikrga1
/r/LocalLLaMA/comments/1ikrga1/running_llama_70b_online/
false
false
self
1
null
There May Not be Aha Moment in R1-Zero-like Training
1
[https://oatllm.notion.site/oat-zero](https://oatllm.notion.site/oat-zero) There May Not be Aha Moment in R1-Zero-like Training > P.S. I am not affiliated with the authors
2025-02-08T16:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1ikrgto/there_may_not_be_aha_moment_in_r1zerolike_training/
henryclw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikrgto
false
null
t3_1ikrgto
/r/LocalLLaMA/comments/1ikrgto/there_may_not_be_aha_moment_in_r1zerolike_training/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/4NSI81a-y2NzohkTH8p0go2MgriTyWdJ032uC9BxMQQ.jpg?auto=webp&s=114719ba723c69929be453b4a34ed7877ff4faf4', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/4NSI81a-y2NzohkTH8p0go2MgriTyWdJ032uC9BxMQQ.jpg?width=108&crop=smart&auto=webp&s=52fb8510d65775827a66c31c2f35d18ef510a364', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/4NSI81a-y2NzohkTH8p0go2MgriTyWdJ032uC9BxMQQ.jpg?width=216&crop=smart&auto=webp&s=05cc03429603dbc0573cc419348167247458b52c', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/4NSI81a-y2NzohkTH8p0go2MgriTyWdJ032uC9BxMQQ.jpg?width=320&crop=smart&auto=webp&s=14fbbfbbe9ebbeb108beb73de3bbd7d61951b3f3', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/4NSI81a-y2NzohkTH8p0go2MgriTyWdJ032uC9BxMQQ.jpg?width=640&crop=smart&auto=webp&s=01268da29e6bcc0fe2628d7759993f3fe8979407', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/4NSI81a-y2NzohkTH8p0go2MgriTyWdJ032uC9BxMQQ.jpg?width=960&crop=smart&auto=webp&s=c92f2ab867e1c82e01752a12552615befe157086', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/4NSI81a-y2NzohkTH8p0go2MgriTyWdJ032uC9BxMQQ.jpg?width=1080&crop=smart&auto=webp&s=612db5da45040751e5f428930575c817c1e1a962', 'width': 1080, 'height': 567}], 'variants': {}, 'id': 'Izabq32DdBnzMc78xqevpsLYhhraEtrhKMSfq9aR9Kw'}], 'enabled': False}
Best LLM for these specs
1
Could someone please tell me what would be the best local LLM for these specs: Alienware m18 R2; Processor: 14th Gen Intel® Core™ i9 14900HX (24-Core, 36MB L3 Cache, up to 5.8GHz Max Turbo Frequency); Operating System: Windows 11 Pro, English, French, Spanish; Graphics: NVIDIA® GeForce RTX™ 4090, 16 GB GDDR6; Memory: 64 GB (2 x 32 GB), DDR5, 5200 MT/s, non-ECC, dual-channel; Storage: 8TB RAID0 (2 x 4 TB), M.2, PCIe NVMe, SSD; Display: 18" QHD+ (2560 x 1600) 165Hz, 3ms, ComfortView Plus, NVIDIA G-SYNC + DDS, 100% DCI-P3, FHD IR Camera
2025-02-08T17:11:10
https://www.reddit.com/r/LocalLLaMA/comments/1iks4jv/best_llm_for_these_specs/
danmcrae
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iks4jv
false
null
t3_1iks4jv
/r/LocalLLaMA/comments/1iks4jv/best_llm_for_these_specs/
false
false
self
1
null
Roleplay prompt for Deepseek R1
1
I found this prompt to work quite well for me when using uncensored Deepseek on LM Studio. I just copy-pasted my characters from the ooba UI into this prompt and could roleplay. I found the reasoning section interesting, so I could see what it was thinking before replying. ——— I would like to do a fictional roleplay between me and you. You will assume the role of [insert character name], and I will assume the role of [insert your role here]. Here is more information about the character that you will play as: The character name is [insert character name]. The character persona is: [insert description here] Here is your character greeting: [Insert greeting here in quotes ""] Let's begin.
2025-02-08T17:12:08
https://www.reddit.com/r/LocalLLaMA/comments/1iks5dr/roleplay_prompt_for_deepseek_r1/
reverrover16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iks5dr
false
null
t3_1iks5dr
/r/LocalLLaMA/comments/1iks5dr/roleplay_prompt_for_deepseek_r1/
false
false
self
1
null
Which GPU should I choose
1
[removed]
2025-02-08T17:16:04
https://www.reddit.com/r/LocalLLaMA/comments/1iks8pl/which_gpu_should_i_choose/
Ok_Pomelo_3956
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iks8pl
false
null
t3_1iks8pl
/r/LocalLLaMA/comments/1iks8pl/which_gpu_should_i_choose/
false
false
self
1
null
Notes on OpenAI o3-mini: How good is it compared to r1 and o1?
1
We finally have a reasoning model from OpenAI at a reasonable cost; it must be the Deepseek r1 impact. But anyway, we now have the first family of models from the o3 series. Also, it is the first reasoning model with official function-calling support. Another interesting thing is that, unlike o1, we can now see the chain of thought (CoT). However, the CoT is not raw like Deepseek r1's, only a summarized version of it, and I am not sure why they are still keeping it under wraps. # On pricing Perhaps the most striking aspect of the model is that it's 15x cheaper than o1 with comparable performance and, in fact, better at times. The fact that it is 2x cheaper than even GPT-4o is even more amusing. Then why do ChatGPT users have limited queries while GPT-4o has unlimited queries? Did Deepseek force OpenAI to subsidize API costs? # On performance To know if it actually is a better model than r1 and o1, I tested it on my benchmark questions for reasoning, math, coding, etc. Here's my observation: * o3-mini-high is the best available model for reasoning tasks, apart from o1-pro. * For math, o1 and o3-mini-high are on par, a tad better than Deepseek r1. * Again, for coding, o3-mini-high felt better in my use cases, but this can vary from case to case. It is faster, so it is better to work with. * I can't get over Deepseek r1 for creative writing, especially its CoT traces. I wish OpenAI would disclose the raw CoT in the coming models. The model is actually good, and given the costs, it's much better than o1. I would've loved it if they had shown us the actual CoT, and I think a lot of people are now more interested in thought patterns than actual responses. For in-depth analysis, commentary, and remarks on the OpenAI o3-mini and comparison with Deepseek r1, check out this blog post: [On OpenAI o3-mini](https://composio.dev/blog/openai-o3-mini-vs-deepseek-r1/) Would love to know what your views and experiences with the o3-mini have been. How did you like it compared to Deepseek r1?
2025-02-08T17:16:46
https://www.reddit.com/r/LocalLLaMA/comments/1iks9cl/notes_on_openai_o3mini_how_good_is_it_compared_to/
SunilKumarDash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iks9cl
false
null
t3_1iks9cl
/r/LocalLLaMA/comments/1iks9cl/notes_on_openai_o3mini_how_good_is_it_compared_to/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/3Fd1Z8dVNBxctR-rhIsn4lsDOH5W_m4tU0ThkdON0sg.jpg?auto=webp&s=2dffd3cf345697b74ef89f4bc643b9f57aa7e775', 'width': 1136, 'height': 639}, 'resolutions': [{'url': 'https://external-preview.redd.it/3Fd1Z8dVNBxctR-rhIsn4lsDOH5W_m4tU0ThkdON0sg.jpg?width=108&crop=smart&auto=webp&s=dd27373c3d88a9ebaf980e461dc29c19d56a4bc9', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/3Fd1Z8dVNBxctR-rhIsn4lsDOH5W_m4tU0ThkdON0sg.jpg?width=216&crop=smart&auto=webp&s=604a084ff69fb576d9beec555032c179e271ada9', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/3Fd1Z8dVNBxctR-rhIsn4lsDOH5W_m4tU0ThkdON0sg.jpg?width=320&crop=smart&auto=webp&s=9cd2b17aa34404bc80beaea8266b814982577fbd', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/3Fd1Z8dVNBxctR-rhIsn4lsDOH5W_m4tU0ThkdON0sg.jpg?width=640&crop=smart&auto=webp&s=830ead9ebf95a8372afa60448cd4e86e91bf8a4e', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/3Fd1Z8dVNBxctR-rhIsn4lsDOH5W_m4tU0ThkdON0sg.jpg?width=960&crop=smart&auto=webp&s=54c060c1c32e4c51fe594ed72816840f7f3b9648', 'width': 960, 'height': 540}, {'url': 'https://external-preview.redd.it/3Fd1Z8dVNBxctR-rhIsn4lsDOH5W_m4tU0ThkdON0sg.jpg?width=1080&crop=smart&auto=webp&s=85af5c7d6350b35070e439206d310ed4d13c9f14', 'width': 1080, 'height': 607}], 'variants': {}, 'id': '8a8b1JkZ_A53tCAt5zHd_6jR-nJyPw-uZEWV_OCGV7Q'}], 'enabled': False}
GPT4ALL Server
1
Hi, I installed GPT4All on Windows and enabled the Server API. I can list the installed models with [http://localhost:4891/v1/models](http://localhost:4891/v1/models) But when I test chat completion I get the following error. Command: curl -X POST [http://localhost:4891/v1/chat/completions](http://localhost:4891/v1/chat/completions) -d '{"model": "Phi-3 Mini Instruct", "messages": {"role":"user","content":"Who is Lionel Messi?"}], "max_tokens": 50, "temperature": 0.28}' Error: {"error":{"code":null,"message":"error parsing request JSON: illegal value","param":null,"type":"invalid_request_error"}}curl: (3) URL rejected: Malformed input to a URL function curl: (3) URL rejected: Port number was not a decimal number between 0 and 65535 curl: (3) unmatched close brace/bracket in URL position 40: {role:user,content:Who is Lionel Messi?}], Is it a bug or a problem with the installation or settings?
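Two things stand out in that command: the "messages" field needs to be a JSON array (there is a closing "]" but no opening "["), and on Windows cmd.exe single quotes are not treated as quoting, which is likely why curl tries to interpret fragments of the body as URLs. Below is a minimal sketch of the same request from Python, which sidesteps the shell quoting entirely; it assumes the local server follows the OpenAI chat-completions schema and that "Phi-3 Mini Instruct" matches a loaded model's name.

```python
# Minimal sketch, assuming an OpenAI-compatible local server on port 4891.
import requests

payload = {
    "model": "Phi-3 Mini Instruct",
    "messages": [  # "messages" must be a JSON array of role/content objects
        {"role": "user", "content": "Who is Lionel Messi?"}
    ],
    "max_tokens": 50,
    "temperature": 0.28,
}

resp = requests.post(
    "http://localhost:4891/v1/chat/completions",
    json=payload,  # serializes the body and sets Content-Type: application/json
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```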
2025-02-08T17:23:31
https://www.reddit.com/r/LocalLLaMA/comments/1iksf3r/gpt4all_server/
hwlim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iksf3r
false
null
t3_1iksf3r
/r/LocalLLaMA/comments/1iksf3r/gpt4all_server/
false
false
self
1
null
GPT4ALL Server problem
1
Hi, I installed GPT4All on Windows and enabled the Server API. I can list the installed models with [http://localhost:4891/v1/models](http://localhost:4891/v1/models) But when I test chat completion I get the following error. Command: curl -X POST [http://localhost:4891/v1/chat/completions](http://localhost:4891/v1/chat/completions) -d '{"model": "Phi-3 Mini Instruct", "messages": {"role":"user","content":"Who is Lionel Messi?"}], "max_tokens": 50, "temperature": 0.28}' Error: {"error":{"code":null,"message":"error parsing request JSON: illegal value","param":null,"type":"invalid_request_error"}}curl: (3) URL rejected: Malformed input to a URL function curl: (3) URL rejected: Port number was not a decimal number between 0 and 65535 curl: (3) unmatched close brace/bracket in URL position 40: {role:user,content:Who is Lionel Messi?}], Is it a bug or a problem with the installation or settings?
2025-02-08T17:26:38
https://www.reddit.com/r/LocalLLaMA/comments/1ikshte/gpt4all_server_problem/
hwlim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikshte
false
null
t3_1ikshte
/r/LocalLLaMA/comments/1ikshte/gpt4all_server_problem/
false
false
self
1
null
Current reasoning models are dumb
1
[removed]
2025-02-08T17:42:24
https://www.reddit.com/r/LocalLLaMA/comments/1iksvbv/current_reasoning_models_are_dumb/
oldjar747
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iksvbv
false
null
t3_1iksvbv
/r/LocalLLaMA/comments/1iksvbv/current_reasoning_models_are_dumb/
false
false
self
1
null
In Eleven Labs I can record a real voice performance and convert that performance into a different voice (and accent), is this possible locally yet?
1
Eleven Labs is the only service I'm paying for, mainly because of the feature I described in my title. Is there an offline, local alternative that is able to do this? So far, I'm able to clone any voice I want, but I can't transfer a real performance. Is this possible yet locally?
2025-02-08T17:45:49
https://www.reddit.com/r/LocalLLaMA/comments/1iksy75/in_eleven_labs_i_can_record_a_real_voice/
MisPreguntas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iksy75
false
null
t3_1iksy75
/r/LocalLLaMA/comments/1iksy75/in_eleven_labs_i_can_record_a_real_voice/
false
false
self
1
null
Django project deployment
1
[removed]
2025-02-08T17:50:13
https://www.reddit.com/r/LocalLLaMA/comments/1ikt1v6/django_project_deployment/
Cultural_Aide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikt1v6
false
null
t3_1ikt1v6
/r/LocalLLaMA/comments/1ikt1v6/django_project_deployment/
false
false
self
1
null
Is deepseek distilled 32b better than qwq?
1
As per title
2025-02-08T17:57:49
https://www.reddit.com/r/LocalLLaMA/comments/1ikt8ef/is_deepseek_distilled_32b_better_than_qwq/
Moreh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikt8ef
false
null
t3_1ikt8ef
/r/LocalLLaMA/comments/1ikt8ef/is_deepseek_distilled_32b_better_than_qwq/
false
false
self
1
null
I Built lfind: A Natural Language File Finder Using LLMs
1
2025-02-08T17:58:53
https://i.redd.it/rwb26a0yeyhe1.gif
Mahrkeenerh1
i.redd.it
1970-01-01T00:00:00
0
{}
1ikt9an
false
null
t3_1ikt9an
/r/LocalLLaMA/comments/1ikt9an/i_built_lfind_a_natural_language_file_finder/
false
false
https://a.thumbs.redditm…N2ebw_Om6BR8.jpg
1
{'images': [{'source': {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?format=png8&s=00dbbab16784ca207c0bc2af41fdea3f609418cb', 'width': 1113, 'height': 626}, 'resolutions': [{'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=108&crop=smart&format=png8&s=f3471418a50f513343638aaf2b28b1b7761a30ac', 'width': 108, 'height': 60}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=216&crop=smart&format=png8&s=02c5353d513d0861327a3c7132b489c48af240bd', 'width': 216, 'height': 121}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=320&crop=smart&format=png8&s=88a0a22117c0fd8c95f410d1b38164e532513a5c', 'width': 320, 'height': 179}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=640&crop=smart&format=png8&s=ff674f1435e0b3b4d29b8e6736e8134841f17d9f', 'width': 640, 'height': 359}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=960&crop=smart&format=png8&s=c10a5f3ff5946d6f06609c8d5d891c26be962ac1', 'width': 960, 'height': 539}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=1080&crop=smart&format=png8&s=81a5249da451f1e902f1c7a68bb08b1ee15ed4e1', 'width': 1080, 'height': 607}], 'variants': {'gif': {'source': {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?s=5aa632b7f4f73704f6c307ff4dfae3483448f047', 'width': 1113, 'height': 626}, 'resolutions': [{'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=108&crop=smart&s=656152af1ec54b04dd415d9b30593c554c539864', 'width': 108, 'height': 60}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=216&crop=smart&s=24b0543448b7a3c4fd7a3918485043a7546f529b', 'width': 216, 'height': 121}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=320&crop=smart&s=4c23a37a2f96ebd79458acbd60951ca508ff8c82', 'width': 320, 'height': 179}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=640&crop=smart&s=653ea170e4f0049c8cfde86a3d70e9eb14484f48', 'width': 640, 'height': 359}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=960&crop=smart&s=e03980e09c7b73bf5acb0046842033ec4e787fed', 'width': 960, 'height': 539}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=1080&crop=smart&s=487bb5fe521ceb020af530c4a0637895e5ca7309', 'width': 1080, 'height': 607}]}, 'mp4': {'source': {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?format=mp4&s=28899bd8f9fb3f06fa1ef1f8d8bedca31c3fba7c', 'width': 1113, 'height': 626}, 'resolutions': [{'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=108&format=mp4&s=969f945d8bccdc7d472049d83ce3ac85bcfde094', 'width': 108, 'height': 60}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=216&format=mp4&s=2703565663292e60ec7732728fcde4e50fcccf00', 'width': 216, 'height': 121}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=320&format=mp4&s=daff2f75fecdd00c301941e507670d8e840848e3', 'width': 320, 'height': 179}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=640&format=mp4&s=7aeb9b5be6a974933f09246423f876200cfc994c', 'width': 640, 'height': 359}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=960&format=mp4&s=d5289b21740f99862ba9c877fdd2e20b33442344', 'width': 960, 'height': 539}, {'url': 'https://preview.redd.it/rwb26a0yeyhe1.gif?width=1080&format=mp4&s=1dc33a4dd8da68050ebe1bd4d4dbb700d88d7b15', 'width': 1080, 'height': 607}]}}, 'id': 'O3lgSzOW-3dCzBQdHJZx86jXH0rv0mxQeGhq4Om7X2Y'}], 'enabled': True}
This sub needs to get back to its focus on Llama and the open-source models
1
[removed]
2025-02-08T18:29:33
https://www.reddit.com/r/LocalLLaMA/comments/1iku01r/this_sub_needs_to_get_back_to_its_focus_on_llama/
entsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iku01r
false
null
t3_1iku01r
/r/LocalLLaMA/comments/1iku01r/this_sub_needs_to_get_back_to_its_focus_on_llama/
false
false
self
1
null
infermlx: Simple Llama LLM inference on macOS with MLX
1
2025-02-08T18:52:09
https://github.com/peterc/infermlx?tab=readme-ov-file
petercooper
github.com
1970-01-01T00:00:00
0
{}
1ikujdh
false
null
t3_1ikujdh
/r/LocalLLaMA/comments/1ikujdh/infermlx_simple_llama_llm_inference_on_macos_with/
false
false
https://b.thumbs.redditm…PJjPA14wHxBE.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/yoi_IWSvgLdvKuKdF1_dQsGUa72QUAYieJsd5grA7es.jpg?auto=webp&s=ae41f93aa9f1ac9da6c999c0dbd37ce89bc50417', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/yoi_IWSvgLdvKuKdF1_dQsGUa72QUAYieJsd5grA7es.jpg?width=108&crop=smart&auto=webp&s=cf2c59cf87b1404a2c926da0967174fbee002db2', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/yoi_IWSvgLdvKuKdF1_dQsGUa72QUAYieJsd5grA7es.jpg?width=216&crop=smart&auto=webp&s=4146f39e3a5fe0a2caca40f1dc71a1ea897c952b', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/yoi_IWSvgLdvKuKdF1_dQsGUa72QUAYieJsd5grA7es.jpg?width=320&crop=smart&auto=webp&s=efd5afc462691662408b4ddeec9c41b7615ce9fd', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/yoi_IWSvgLdvKuKdF1_dQsGUa72QUAYieJsd5grA7es.jpg?width=640&crop=smart&auto=webp&s=7223bb401e7c74e611fb273b11168db2a39bc44e', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/yoi_IWSvgLdvKuKdF1_dQsGUa72QUAYieJsd5grA7es.jpg?width=960&crop=smart&auto=webp&s=50b67e28338eb0700b05f1ac18915995687e854b', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/yoi_IWSvgLdvKuKdF1_dQsGUa72QUAYieJsd5grA7es.jpg?width=1080&crop=smart&auto=webp&s=ced54e4c12d0c2c6ab6da9cb3f92f60021a80872', 'width': 1080, 'height': 540}], 'variants': {}, 'id': '9L7a5nFq1gzvQ3kc8rqZxoHRSdOe9BxyYeOgcOezYtU'}], 'enabled': False}
does it make sense to use chatRTX locally or is better to RAG with external APIs? What are the advantages of local chatRTX? how many LLMs are supported? context window? Any other solution you might have in mind for a really powerful local LLM that does extensive RAG? Thank you!
1
i5 13400, 32GB RAM, RTX 4070 Super 12GB VRAM
2025-02-08T18:55:04
https://www.reddit.com/r/LocalLLaMA/comments/1ikulsj/does_it_make_sense_to_use_chatrtx_locally_or_is/
jim_andr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikulsj
false
null
t3_1ikulsj
/r/LocalLLaMA/comments/1ikulsj/does_it_make_sense_to_use_chatrtx_locally_or_is/
false
false
self
1
null
Using AI to generate short clips
1
[removed]
2025-02-08T19:13:37
https://www.reddit.com/r/LocalLLaMA/comments/1ikv1x0/using_ai_to_generate_short_clips/
sKemo12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikv1x0
false
null
t3_1ikv1x0
/r/LocalLLaMA/comments/1ikv1x0/using_ai_to_generate_short_clips/
false
false
self
1
null
Sider.ai version that uses local llms?
1
Good afternoon all, Does anyone know of any open-source version of the sider.ai extension that can use a local LLM? Perplexity doesn't do anything similar.
2025-02-08T19:22:16
https://www.reddit.com/r/LocalLLaMA/comments/1ikv9av/siderai_version_that_uses_local_llms/
skylabby
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikv9av
false
null
t3_1ikv9av
/r/LocalLLaMA/comments/1ikv9av/siderai_version_that_uses_local_llms/
false
false
self
1
null
Canned responses related to Israel Palestine conflict on chatgpt
1
[removed]
2025-02-08T19:28:45
https://www.reddit.com/r/LocalLLaMA/comments/1ikvesb/canned_responses_related_to_israel_palestine/
I_Short_TSLA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikvesb
false
null
t3_1ikvesb
/r/LocalLLaMA/comments/1ikvesb/canned_responses_related_to_israel_palestine/
false
false
self
1
null
I really need to upgrade
1
2025-02-08T19:38:41
https://i.redd.it/eto6oiq8xyhe1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1ikvnfx
false
null
t3_1ikvnfx
/r/LocalLLaMA/comments/1ikvnfx/i_really_need_to_upgrade/
false
false
https://b.thumbs.redditm…04C9toBdxC6Y.jpg
1
{'images': [{'source': {'url': 'https://preview.redd.it/eto6oiq8xyhe1.jpeg?auto=webp&s=30f69a12a3d41090f36ae7be374ee3adfdb22d05', 'width': 1125, 'height': 1125}, 'resolutions': [{'url': 'https://preview.redd.it/eto6oiq8xyhe1.jpeg?width=108&crop=smart&auto=webp&s=e56528e71761d9f12eb31fcc903e21ebe6db27bb', 'width': 108, 'height': 108}, {'url': 'https://preview.redd.it/eto6oiq8xyhe1.jpeg?width=216&crop=smart&auto=webp&s=2df451cf91394117a01e20748e5d10e64ecb9fb2', 'width': 216, 'height': 216}, {'url': 'https://preview.redd.it/eto6oiq8xyhe1.jpeg?width=320&crop=smart&auto=webp&s=da40add80085134bf382022a0abb325f11588cc0', 'width': 320, 'height': 320}, {'url': 'https://preview.redd.it/eto6oiq8xyhe1.jpeg?width=640&crop=smart&auto=webp&s=9be19e4af4ac7edf93efb804fef99881270152ec', 'width': 640, 'height': 640}, {'url': 'https://preview.redd.it/eto6oiq8xyhe1.jpeg?width=960&crop=smart&auto=webp&s=61b8ec8a96e5225da557a7b700c27f2a052df65f', 'width': 960, 'height': 960}, {'url': 'https://preview.redd.it/eto6oiq8xyhe1.jpeg?width=1080&crop=smart&auto=webp&s=59e2a07ae9fed1ab5f4ef7e30686e2f59b96ff94', 'width': 1080, 'height': 1080}], 'variants': {}, 'id': 'JptDGPKdgaqqtdh07CPdN-CYXtcjFAueDLtD7OSsc0s'}], 'enabled': True}
Your next home lab might have 48GB Chinese card😅
1
https://wccftech.com/chinese-gpu-manufacturers-push-out-support-for-running-deepseek-ai-models-on-local-systems/ Things are accelerating. China might give us all the VRAM we want. 😅😅👍🏼 Hope they don't make it illegal to import. For security's sake, of course.
2025-02-08T19:39:39
https://www.reddit.com/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/
Redinaj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikvo8a
false
null
t3_1ikvo8a
/r/LocalLLaMA/comments/1ikvo8a/your_next_home_lab_might_have_48gb_chinese_card/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/-2Ysyu3sX6fO0DZRNdkXCuOzx0BFymt2zO0fHSU6xic.jpg?auto=webp&s=7303b3fd17d6befb820b9f72e5cc2966fc6fb783', 'width': 728, 'height': 516}, 'resolutions': [{'url': 'https://external-preview.redd.it/-2Ysyu3sX6fO0DZRNdkXCuOzx0BFymt2zO0fHSU6xic.jpg?width=108&crop=smart&auto=webp&s=24ba174d8c43838268ca7185264e1b4369abadaa', 'width': 108, 'height': 76}, {'url': 'https://external-preview.redd.it/-2Ysyu3sX6fO0DZRNdkXCuOzx0BFymt2zO0fHSU6xic.jpg?width=216&crop=smart&auto=webp&s=2d4e172d80128ba8f3dbcacd1684c29b578bc745', 'width': 216, 'height': 153}, {'url': 'https://external-preview.redd.it/-2Ysyu3sX6fO0DZRNdkXCuOzx0BFymt2zO0fHSU6xic.jpg?width=320&crop=smart&auto=webp&s=c78366b437c0387676a2a55f83565db41f292ff5', 'width': 320, 'height': 226}, {'url': 'https://external-preview.redd.it/-2Ysyu3sX6fO0DZRNdkXCuOzx0BFymt2zO0fHSU6xic.jpg?width=640&crop=smart&auto=webp&s=32238cf0cfe36589a99fbbebdebb674c383fd9b4', 'width': 640, 'height': 453}], 'variants': {}, 'id': 'lkz-MfcF29exvP3pe2apdSH9SVJIH63YmzcEuEfzEgU'}], 'enabled': False}
Local AI LLM or similar for validating that speech matches text
1
I would like to try creating a small reading practice app that can track a voice reading from a given text. It's an easier case of voice recognition, where it's just a matter of detecting whether the sound matches the expected next word; however, low latency is very important for obvious reasons. Is there anything like that out there that is easy to work with and can handle Danish? I was inspired to ask here after reading about people running "real" voice recognition locally in browsers.
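One way to prototype this, sketched under the assumption that faster-whisper's multilingual "small" model handles Danish well enough: record short audio chunks, transcribe each chunk locally, and advance a reading position whenever the expected next word is heard. Latency then depends mostly on chunk length and model size.

```python
# Hypothetical sketch: checking a recorded chunk against the expected next words.
# Assumes faster-whisper is installed; a smaller model or a GPU would lower latency.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")

def words_read(chunk_path: str) -> list[str]:
    """Transcribe one short audio chunk and return its lowercase words."""
    segments, _info = model.transcribe(chunk_path, language="da", beam_size=1)
    text = " ".join(segment.text for segment in segments)
    return text.lower().split()

def advance(expected: list[str], position: int, heard: list[str]) -> int:
    """Move the reading position forward for every expected word heard in order."""
    for word in heard:
        if position < len(expected) and word.strip(".,!?") == expected[position]:
            position += 1
    return position

# Usage: expected = "der var engang en lille pige".split(); pos = 0
# pos = advance(expected, pos, words_read("chunk.wav"))
```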
2025-02-08T19:40:43
https://www.reddit.com/r/LocalLLaMA/comments/1ikvp3g/local_ai_llm_or_similar_for_validating_that/
ziphnor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikvp3g
false
null
t3_1ikvp3g
/r/LocalLLaMA/comments/1ikvp3g/local_ai_llm_or_similar_for_validating_that/
false
false
self
1
null
LPU for everyone
1
Hey everyone, I'm not a professional like many of you, but I have a question that I can't seem to find an answer to, so I thought I'd ask here. Groq has developed LPUs, and AWS has introduced Trainium 2. However, it doesn't seem like there's anything consumer-friendly available for purchase, or even enterprise-level solutions, for that matter. Do you think we'll ever see something like an add-on, a dedicated card, or a coprocessor (similar to what we had in the '80s) specifically designed for LLMs that consumers can buy and install? If so, when do you think that might happen? Curious to hear your thoughts! Dave
2025-02-08T19:44:24
https://www.reddit.com/r/LocalLLaMA/comments/1ikvs6w/lpu_for_everyone/
dave-lon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikvs6w
false
null
t3_1ikvs6w
/r/LocalLLaMA/comments/1ikvs6w/lpu_for_everyone/
false
false
self
1
null
TTS WITH PARTICULAR VOICE FEATURE
1
I want to create a TTS model with the voice characteristics of a particular personality, in a particular language, for example Indian languages like Marathi. What is the best way to do so? From what I have read, I need to extract the features of the voice, then have the phonemes of that language, plus audio-to-text transcription. Can someone guide me on how to achieve this?
2025-02-08T19:47:29
https://www.reddit.com/r/LocalLLaMA/comments/1ikvupm/tts_with_particular_voice_feature/
Chdevman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikvupm
false
null
t3_1ikvupm
/r/LocalLLaMA/comments/1ikvupm/tts_with_particular_voice_feature/
false
false
self
1
null
Best creative local LLM for world building and creative writing? Fitting in 16gb VRAM?
1
To be clear, I mean creative as in it can come up with great stuff itself, rather than me having to be creative for it.
2025-02-08T19:57:26
https://www.reddit.com/r/LocalLLaMA/comments/1ikw2xd/best_creative_local_llm_for_world_building_and/
No_Expert1801
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikw2xd
false
null
t3_1ikw2xd
/r/LocalLLaMA/comments/1ikw2xd/best_creative_local_llm_for_world_building_and/
false
false
self
1
null
Why run at home AI?
1
I'm very interested, but my thinking is probably limited to just wanting an in-house Jarvis. What's the reason you run an AI server in your house?
2025-02-08T20:04:59
https://www.reddit.com/r/LocalLLaMA/comments/1ikw9i2/why_run_at_home_ai/
imjustasking123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikw9i2
false
null
t3_1ikw9i2
/r/LocalLLaMA/comments/1ikw9i2/why_run_at_home_ai/
false
false
self
1
null
Clayton Christensen: Disruptive innovation
1
Recently, there were several pieces of news that keep reminding me of the late Clayton Christensen's theory of disruptive innovation: Intel's B580, the rumor about a 24GB B580, the tons of startups trying to get into the AI hardware space, and just today the wccftech piece about Moore Threads adding support for DeepSeek. This is for those who are interested in understanding "disruptive innovation" from the man who first coined the term some 30 years ago. The video is one hour long, and part of a three-lecture series he gave at Oxford University almost 12 years ago.
2025-02-08T20:25:58
https://youtu.be/rpkoCZ4vBSI?si=g-0U4XATmi5rlb0z
FullstackSensei
youtu.be
1970-01-01T00:00:00
0
{}
1ikwqps
false
{'type': 'youtube.com', 'oembed': {'provider_url': 'https://www.youtube.com/', 'version': '1.0', 'title': 'Clayton Christensen: Disruptive innovation', 'type': 'video', 'thumbnail_width': 480, 'height': 200, 'width': 356, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/rpkoCZ4vBSI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Clayton Christensen: Disruptive innovation"></iframe>', 'author_name': 'Saïd Business School, University of Oxford', 'provider_name': 'YouTube', 'thumbnail_url': 'https://i.ytimg.com/vi/rpkoCZ4vBSI/hqdefault.jpg', 'thumbnail_height': 360, 'author_url': 'https://www.youtube.com/@OxfordSBS'}}
t3_1ikwqps
/r/LocalLLaMA/comments/1ikwqps/clayton_christensen_disruptive_innovation/
false
false
https://b.thumbs.redditm…uCinRnjtfeis.jpg
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/aaFkUVp8rgXeZY3e3SZJT3WkSgYobDa6iLg6uffLqH4.jpg?auto=webp&s=9db1874768e5e263d14943c905edd307316ae6a2', 'width': 480, 'height': 360}, 'resolutions': [{'url': 'https://external-preview.redd.it/aaFkUVp8rgXeZY3e3SZJT3WkSgYobDa6iLg6uffLqH4.jpg?width=108&crop=smart&auto=webp&s=a271a34f3df78f9017ae2a9d5546f63649d7a794', 'width': 108, 'height': 81}, {'url': 'https://external-preview.redd.it/aaFkUVp8rgXeZY3e3SZJT3WkSgYobDa6iLg6uffLqH4.jpg?width=216&crop=smart&auto=webp&s=96ca82ce6a85926730358dc5a3dee4f0adbbef9d', 'width': 216, 'height': 162}, {'url': 'https://external-preview.redd.it/aaFkUVp8rgXeZY3e3SZJT3WkSgYobDa6iLg6uffLqH4.jpg?width=320&crop=smart&auto=webp&s=efd54e669a2936f102a845079fda6a0d24a16078', 'width': 320, 'height': 240}], 'variants': {}, 'id': 'TbWIrdFWtgPTm0EzPqXkaR12hI__laYCVSkIwJMDQn4'}], 'enabled': False}
CPU + RAM combo for a new build with 3090 (for in-GPU LLM only)
1
Hey guys, I've just picked up an RTX 3090, which I'll be putting into my main machine (switching out a 4060) with a new PSU. It got me thinking: to build a dedicated machine for the RTX 3090, all I'd need is a case, RAM, CPU, motherboard and an M.2 drive. I plan to run Linux on it. I've seen that the CPU and RAM aren't that important if the model is always in GPU VRAM (which is the plan). What kind of CPU/RAM combo should I be aiming for if the only purpose of this machine is to run in-VRAM models?
2025-02-08T20:50:08
https://www.reddit.com/r/LocalLLaMA/comments/1ikxaml/cpu_ram_combo_for_a_new_build_with_3090_for_ingpu/
luhkomo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikxaml
false
null
t3_1ikxaml
/r/LocalLLaMA/comments/1ikxaml/cpu_ram_combo_for_a_new_build_with_3090_for_ingpu/
false
false
self
1
null
150+ TPS on a 7900XTX with Compute Mode on 8B (32K)?
1
[removed]
2025-02-08T20:54:21
https://www.reddit.com/r/LocalLLaMA/comments/1ikxe8s/150_tps_on_a_7900xtx_with_compute_mode_on_8b_32k/
Better-Resist-5369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikxe8s
false
null
t3_1ikxe8s
/r/LocalLLaMA/comments/1ikxe8s/150_tps_on_a_7900xtx_with_compute_mode_on_8b_32k/
false
false
self
1
null
150+ TPS on a 7900XTX with Compute Mode on 8B (32K)?
1
So on a Discord server, a local LLM user claimed that their system (7900XTX with a 9950X) can do inference at 150 TPS on an 8B model, even at a full context of 32K. I found that hard to believe, but they said something about turning on 'Compute Mode' on a 7900XTX. I searched that up and found barely any results for turning that on in the BIOS. VRAM usage seemed to be around 10-12GB for them. Is this true, or is the person lying?
2025-02-08T20:55:06
https://www.reddit.com/r/LocalLLaMA/comments/1ikxev0/150_tps_on_a_7900xtx_with_compute_mode_on_8b_32k/
Better-Resist-5369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikxev0
false
null
t3_1ikxev0
/r/LocalLLaMA/comments/1ikxev0/150_tps_on_a_7900xtx_with_compute_mode_on_8b_32k/
false
false
self
1
null
dataset creation with deepseek
1
Not sure if this is of any help, but I've created a small script that takes questions from a CSV file and sends them to the deepseek API for answers. It outputs the results, with separate columns for timestamp, question, thinking traces, and answer, as CSV, JSON and TXT files. https://github.com/EdwardDali/dset Are there other tools doing something like this for reasoning AI? Does distillation require a different type of dataset?
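For context, a minimal sketch of the same idea (not the linked repo's actual code), assuming the OpenAI-compatible DeepSeek endpoint at https://api.deepseek.com and the "deepseek-reasoner" model, which returns its thinking trace in message.reasoning_content; field names are assumptions to verify against the current API docs.

```python
# Hypothetical sketch: read questions from a CSV, query DeepSeek, write CSV rows
# with timestamp, question, thinking trace, and final answer.
import csv
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")

with open("questions.csv", newline="", encoding="utf-8") as f_in, \
     open("answers.csv", "w", newline="", encoding="utf-8") as f_out:
    writer = csv.writer(f_out)
    writer.writerow(["timestamp", "question", "thinking", "answer"])
    for row in csv.reader(f_in):
        question = row[0]  # one question per row, first column
        resp = client.chat.completions.create(
            model="deepseek-reasoner",
            messages=[{"role": "user", "content": question}],
        )
        msg = resp.choices[0].message
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            question,
            getattr(msg, "reasoning_content", ""),  # chain-of-thought trace
            msg.content,                            # final answer
        ])
```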
2025-02-08T20:55:18
https://www.reddit.com/r/LocalLLaMA/comments/1ikxf0r/dataset_creation_with_deepseek/
Eduard_T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikxf0r
false
null
t3_1ikxf0r
/r/LocalLLaMA/comments/1ikxf0r/dataset_creation_with_deepseek/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/1ZaB3Nrs7ZFc52IPC1eC56zhNm33W29p65p1Vtd5uxw.jpg?auto=webp&s=16de1bbe99d4c41e2745d77c0311556e3124d1ea', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/1ZaB3Nrs7ZFc52IPC1eC56zhNm33W29p65p1Vtd5uxw.jpg?width=108&crop=smart&auto=webp&s=b6b8a86277e743844fac3c9093c8e342787340d6', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/1ZaB3Nrs7ZFc52IPC1eC56zhNm33W29p65p1Vtd5uxw.jpg?width=216&crop=smart&auto=webp&s=94180f5d25230f3f299bd3e4a787921431b7f717', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/1ZaB3Nrs7ZFc52IPC1eC56zhNm33W29p65p1Vtd5uxw.jpg?width=320&crop=smart&auto=webp&s=c8c39a22f49ea512eeb8de29394cf5597f807b45', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/1ZaB3Nrs7ZFc52IPC1eC56zhNm33W29p65p1Vtd5uxw.jpg?width=640&crop=smart&auto=webp&s=ad08009619fe35832899b2472bd865344066a02b', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/1ZaB3Nrs7ZFc52IPC1eC56zhNm33W29p65p1Vtd5uxw.jpg?width=960&crop=smart&auto=webp&s=39a38a82bd230b53fe39c0530703bf0ee0e1d625', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/1ZaB3Nrs7ZFc52IPC1eC56zhNm33W29p65p1Vtd5uxw.jpg?width=1080&crop=smart&auto=webp&s=d55c9bdfb8a8129eab5769d2c1655c673176aa26', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'wJHh0ININtsnIPrizeVTT9DqpaWIRMl55qggHVKkwWg'}], 'enabled': False}
My little setup grows
1
2025-02-08T21:36:13
https://i.redd.it/ivuoew07izhe1.jpeg
Flintbeker
i.redd.it
1970-01-01T00:00:00
0
{}
1ikyclh
false
null
t3_1ikyclh
/r/LocalLLaMA/comments/1ikyclh/my_little_setup_grows/
false
false
https://b.thumbs.redditm…87Ove3KTzvqc.jpg
1
{'images': [{'source': {'url': 'https://preview.redd.it/ivuoew07izhe1.jpeg?auto=webp&s=b3293f9487acd16b97347ef73fd783cec3b1e818', 'width': 4284, 'height': 5712}, 'resolutions': [{'url': 'https://preview.redd.it/ivuoew07izhe1.jpeg?width=108&crop=smart&auto=webp&s=74f6d899fa7492cc7432c4bdb171e6a640d66654', 'width': 108, 'height': 144}, {'url': 'https://preview.redd.it/ivuoew07izhe1.jpeg?width=216&crop=smart&auto=webp&s=4b84c31764cc77f09159d13b1bd9132b32d674fd', 'width': 216, 'height': 288}, {'url': 'https://preview.redd.it/ivuoew07izhe1.jpeg?width=320&crop=smart&auto=webp&s=4c4ddfdb4b9817e9040e5e4481c2bf59cff29687', 'width': 320, 'height': 426}, {'url': 'https://preview.redd.it/ivuoew07izhe1.jpeg?width=640&crop=smart&auto=webp&s=805b6dbac5a45f4acb9b1e6962f74ce91e1b6aaa', 'width': 640, 'height': 853}, {'url': 'https://preview.redd.it/ivuoew07izhe1.jpeg?width=960&crop=smart&auto=webp&s=7db39a12a42672cfd83a03659a7e5d2c573d9b39', 'width': 960, 'height': 1280}, {'url': 'https://preview.redd.it/ivuoew07izhe1.jpeg?width=1080&crop=smart&auto=webp&s=9cd91495d3b6b7c8ba0bf28b1a38d25a5d38bd0e', 'width': 1080, 'height': 1440}], 'variants': {}, 'id': 'e6to8ia8XDHyyr7fNY1SMjC7gAqHhJgE94QbJuSyqko'}], 'enabled': True}
AI and the fundamental implications on reality?
1
I find it fascinating how relatively small AI models can generate vast amounts of knowledge. When you look closer, you realize they're not actually storing all the information they've been trained on. Instead, they encode patterns within the data and use those patterns to generate probabilistic responses, often with surprising accuracy. It reminds me of quantum mechanics. At first glance, it seems counterintuitive: how can so much knowledge emerge from such a compact system? Has anyone else thought about the implications this might have for advanced fields like physics or the fundamental nature of reality? If knowledge can be recreated from patterns rather than stored explicitly, what does that say about how reality itself might work? I know it might seem a little off topic, but this really only applies to models like Llama, where we can see their actual disk space usage versus how much they can answer accurately.
2025-02-08T21:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1ikyisw/ai_and_the_fundamental_implications_on_reality/
djav1985
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikyisw
false
null
t3_1ikyisw
/r/LocalLLaMA/comments/1ikyisw/ai_and_the_fundamental_implications_on_reality/
false
false
self
1
null
Qwen reaches limit
1
I just got this from Qwen Max: "Uh-oh! There was an issue connecting to Qwen2.5-Max. Reached call limited: too many requests in (86400.0) seconds" How many requests can I make, and will it reset in 24 hours?
2025-02-08T21:49:58
https://www.reddit.com/r/LocalLLaMA/comments/1ikynm7/qwen_reaches_limit/
carwash2016
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikynm7
false
null
t3_1ikynm7
/r/LocalLLaMA/comments/1ikynm7/qwen_reaches_limit/
false
false
self
1
null
OpenAI plans to open an office in Germany | TechCrunch
1
Explains why Sama was at that panel in TU Berlin
2025-02-08T21:51:55
https://techcrunch.com/2025/02/07/openai-plans-to-open-an-office-in-germany/
FullstackSensei
techcrunch.com
1970-01-01T00:00:00
0
{}
1ikyp9p
false
null
t3_1ikyp9p
/r/LocalLLaMA/comments/1ikyp9p/openai_plans_to_open_an_office_in_germany/
false
false
default
1
null
DeepSeek Gained over 100+ Millions Users in 20 days.
1
Since launching DeepSeek R1 on January 20, DeepSeek has gained over 100 million users, with $0 advertising or marketing cost. By February 1, its daily active users surpassed 30 million, making it the fastest application in history to reach this milestone. Why? I also spend a lot of time chatting with it; the profound answers are the key reason for me.
2025-02-08T21:52:58
https://www.reddit.com/r/LocalLLaMA/comments/1ikyq45/deepseek_gained_over_100_millions_users_in_20_days/
blacktiger3654
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikyq45
false
null
t3_1ikyq45
/r/LocalLLaMA/comments/1ikyq45/deepseek_gained_over_100_millions_users_in_20_days/
false
false
self
1
null
Super Bowl Predictions Using a Fine-Tuned Deepseek
1
[removed]
2025-02-08T22:01:13
https://www.reddit.com/r/LocalLLaMA/comments/1ikyww8/super_bowl_predictions_using_a_finetuned_deepseek/
dantheman252
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikyww8
false
null
t3_1ikyww8
/r/LocalLLaMA/comments/1ikyww8/super_bowl_predictions_using_a_finetuned_deepseek/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/QGo3Ljw05ULW8J6G25QrcTMJZrDG4ZSbIHKqcW8yNyQ.jpg?auto=webp&s=4f8e3219372d7a41d81f20da083cf09e95d77ae7', 'width': 2400, 'height': 1350}, 'resolutions': [{'url': 'https://external-preview.redd.it/QGo3Ljw05ULW8J6G25QrcTMJZrDG4ZSbIHKqcW8yNyQ.jpg?width=108&crop=smart&auto=webp&s=7a2e908bfb355b21dc7c57ad9e9465e47da176b3', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/QGo3Ljw05ULW8J6G25QrcTMJZrDG4ZSbIHKqcW8yNyQ.jpg?width=216&crop=smart&auto=webp&s=d4cdbe58c766ce6b14fd346ea4d5c23972d8c344', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/QGo3Ljw05ULW8J6G25QrcTMJZrDG4ZSbIHKqcW8yNyQ.jpg?width=320&crop=smart&auto=webp&s=35717e2f09ddb8e7994315e7ab7f049713307331', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/QGo3Ljw05ULW8J6G25QrcTMJZrDG4ZSbIHKqcW8yNyQ.jpg?width=640&crop=smart&auto=webp&s=dd27c9bf6404e82bfebbe2d84b075c83f5e566a3', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/QGo3Ljw05ULW8J6G25QrcTMJZrDG4ZSbIHKqcW8yNyQ.jpg?width=960&crop=smart&auto=webp&s=ce3fa2c68abd923466af3a06eae8fce7062ae39a', 'width': 960, 'height': 540}, {'url': 'https://external-preview.redd.it/QGo3Ljw05ULW8J6G25QrcTMJZrDG4ZSbIHKqcW8yNyQ.jpg?width=1080&crop=smart&auto=webp&s=80f2f8f3ea49dfcf8035aba64bdad3c210dc181d', 'width': 1080, 'height': 607}], 'variants': {}, 'id': 'U_zRbVGpf60IdhRnj1JuBPS6qguS1URc818d5esoT7A'}], 'enabled': False}
I'm looking for the best vision model, can you tell me which one is the best right now?
1
[removed]
2025-02-08T22:12:57
https://www.reddit.com/r/LocalLLaMA/comments/1ikz6b2/im_looking_best_vision_model_can_you_tell_me/
PastRequirement96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikz6b2
false
null
t3_1ikz6b2
/r/LocalLLaMA/comments/1ikz6b2/im_looking_best_vision_model_can_you_tell_me/
false
false
self
1
null
Multi-GPU Utilization Issue with llama-cpp
1
[removed]
2025-02-08T22:19:15
https://www.reddit.com/r/LocalLLaMA/comments/1ikzbi5/multigpu_utilization_issue_with_llamacpp/
QictR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikzbi5
false
null
t3_1ikzbi5
/r/LocalLLaMA/comments/1ikzbi5/multigpu_utilization_issue_with_llamacpp/
false
false
self
1
null
MLX Quants Vs GGUF
1
So I'm playing around with some MLX models lately (M3 Max), and I have noticed the quants seem different: I don't usually see qX_K_M or anything like that, just literally 4bit/8bit and a few 6bit. And generally they are smaller than their GGUF counterparts. I know MLX is faster, and that is definitely evident. My question is: is an MLX model at the same quant lower quality than the GGUF model, and if so, how much lower quality are we talking? While I haven't noticed anything particularly jarring yet, I'm curious to understand the differences; my assumption since discovering MLX is that, if you're on Apple silicon, you should be using MLX.
2025-02-08T22:31:56
https://www.reddit.com/r/LocalLLaMA/comments/1ikzlpx/mlx_quants_vs_gguf/
BalaelGios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ikzlpx
false
null
t3_1ikzlpx
/r/LocalLLaMA/comments/1ikzlpx/mlx_quants_vs_gguf/
false
false
self
1
null
Not promoting, but any nerds around here?
1
[removed]
2025-02-08T23:06:16
https://www.reddit.com/r/LocalLLaMA/comments/1il0d7a/not_promoting_but_any_nerds_around_here/
Justimrandy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1il0d7a
false
null
t3_1il0d7a
/r/LocalLLaMA/comments/1il0d7a/not_promoting_but_any_nerds_around_here/
false
false
self
1
null