Dataset schema (column type and observed min/max values):

| Column | Type | Min | Max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | n/a | n/a |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | n/a | n/a |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | n/a | n/a |
| stickied | bool (2 classes) | n/a | n/a |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Relation between parameters of a model and GPU memory?
1
I'm hosting a 33B-parameter model on an NVIDIA A100 (80GB). I've generally heard that a model takes roughly 1.5 GB of GPU memory per billion parameters, so the 33B model takes approx. 50GB of memory. Is that calculation correct? If not, how should I calculate it? I'm also thinking about hosting another 33B-parameter LLM. Will it fit? I've heard about quantization but I don't know how it works.
2024-12-27T10:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1hncgmj/relation_between_parameters_of_a_model_and_gpu/
Available-Stress8598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hncgmj
false
null
t3_1hncgmj
/r/LocalLLaMA/comments/1hncgmj/relation_between_parameters_of_a_model_and_gpu/
false
false
self
1
null
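A back-of-the-envelope answer to the post above: weight memory is roughly parameter count times bytes per parameter, plus headroom for the KV cache, activations, and framework overhead. A minimal sketch, assuming FP16 weights (2 bytes/param) and a ~20% overhead factor (both are assumptions, not exact numbers):

```python
def model_vram_gb(params_billion: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes plus ~20% headroom for KV cache,
    activations, and framework overhead (the 1.2 factor is an assumption)."""
    weights_gb = params_billion * bytes_per_param  # 1e9 params * N bytes ~= N GB
    return weights_gb * overhead

print(f"33B @ FP16  (2.0 bytes/param): ~{model_vram_gb(33, 2.0):.0f} GB")  # ~79 GB
print(f"33B @ 4-bit (0.5 bytes/param): ~{model_vram_gb(33, 0.5):.0f} GB")  # ~20 GB
```

At FP16 the rule of thumb is closer to 2 GB per billion parameters than 1.5, so a second unquantized 33B model would not fit alongside the first in 80 GB; at 4-bit quantization (~0.5 bytes/param) both would fit comfortably.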
[image processing failed]
1
[deleted]
2024-12-27T10:53:18
[deleted]
1970-01-01T00:00:00
0
{}
1hncgv2
false
null
t3_1hncgv2
/r/LocalLLaMA/comments/1hncgv2/image_processing_failed/
false
false
default
1
null
Perspective: A "kid's gamer toy" 3060 GPU processes 13 TFLOP/s. So in other words...
0
Perspective: A "kid's gamer toy" 3060 GPU processes 13 TFLOP/s. + It would take one person working manually non stop well longer than the time Homo Sapiens has existed (500kA+) to perform the FP32 calculations it does in 1 second. + Every human alive on the planet working efficiently non stop together manually calculating would take at least several hours to perform the same 1 GPU-second of calculations (around 2000 FP32 calculations to be done by each person). I doubt we'd even have such things on a commonplace personal level if it wasn't considered "a good toy".
2024-12-27T11:12:29
https://www.reddit.com/r/LocalLLaMA/comments/1hncqke/perspective_a_kids_gamer_toy_3060_gpu_processes/
Calcidiol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hncqke
false
null
t3_1hncqke
/r/LocalLLaMA/comments/1hncqke/perspective_a_kids_gamer_toy_3060_gpu_processes/
false
false
self
0
null
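The arithmetic in the post above roughly checks out; a quick sanity check, assuming one hand-done FP32 operation every 5 seconds (the human rate is an assumption):

```python
GPU_FLOPS = 13e12           # RTX 3060, ~13 TFLOP/s FP32
HUMAN_OPS_PER_SEC = 1 / 5   # assumption: one manual calculation per 5 seconds
POPULATION = 8e9

one_gpu_second = GPU_FLOPS  # operations the GPU completes in one second
years_for_one_person = one_gpu_second / HUMAN_OPS_PER_SEC / (3600 * 24 * 365)
ops_per_person = one_gpu_second / POPULATION
hours_for_everyone = ops_per_person / HUMAN_OPS_PER_SEC / 3600

print(f"one person : ~{years_for_one_person:,.0f} years")        # ~2 million years
print(f"everyone   : ~{ops_per_person:.0f} ops each, ~{hours_for_everyone:.1f} h")
```

At those rates a single person needs roughly two million years for one GPU-second, and all eight billion of us together still need a couple of hours.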
How to improve speed of Llama3.3?
1
[removed]
2024-12-27T11:53:10
https://www.reddit.com/r/LocalLLaMA/comments/1hndbav/how_to_improve_speed_of_llama33/
TheSaltyJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hndbav
false
null
t3_1hndbav
/r/LocalLLaMA/comments/1hndbav/how_to_improve_speed_of_llama33/
false
false
self
1
null
New template on Runpod for text-generation-webui v2.0 with API one-click
1
[removed]
2024-12-27T12:14:32
https://www.reddit.com/r/LocalLLaMA/comments/1hndmup/new_template_on_runpod_for_textgenerationwebui/
WouterGlorieux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hndmup
false
null
t3_1hndmup
/r/LocalLLaMA/comments/1hndmup/new_template_on_runpod_for_textgenerationwebui/
false
false
self
1
null
AMD's higher VRAM or NVIDIA cuda???
1
[removed]
2024-12-27T12:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1hndown/amds_higher_vram_or_nvidia_cuda/
srxbern
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hndown
false
null
t3_1hndown
/r/LocalLLaMA/comments/1hndown/amds_higher_vram_or_nvidia_cuda/
false
false
self
1
null
New template on Runpod for text-generation-webui v2.0 with API one-click
1
[removed]
2024-12-27T12:21:31
https://www.reddit.com/r/LocalLLaMA/comments/1hndqrn/new_template_on_runpod_for_textgenerationwebui/
WouterGlorieux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hndqrn
false
null
t3_1hndqrn
/r/LocalLLaMA/comments/1hndqrn/new_template_on_runpod_for_textgenerationwebui/
false
false
self
1
{'enabled': False, 'images': [{'id': 'HBeTlCBu9jt8g9PKkCGZ7j8c3VA62Ro8bamp4CVz6w8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=108&crop=smart&auto=webp&s=a2ac0357b2c30636cc4c827e38e6651d2d371868', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=216&crop=smart&auto=webp&s=101169af90ca6b974e5b1fcc5654803406cc4887', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=320&crop=smart&auto=webp&s=32fab8c4e843aee49704091a94145ba2709ef1d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=640&crop=smart&auto=webp&s=2a0eb36a6f9ac4bd2861f1afcb892c94d6bdb19d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=960&crop=smart&auto=webp&s=716f9b7b10005a89dcd76c981efc297546af5b62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=1080&crop=smart&auto=webp&s=c8005d9fb482baa2363660e7634e1939b949f736', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?auto=webp&s=285c64be222632d3bd4b233b6d795fc621230e48', 'width': 1200}, 'variants': {}}]}
Questions about the difference between deepgram streaming and whisper
1
Solutions like Deepgram support a separate streaming API, which provides real-time transcripts of partial words even if the sentence is incomplete. A model like Whisper, however, works by fetching specific chunks of audio in batches. Is this because the architecture of a model like Deepgram's differs from Whisper's, or is it the same model structure as Whisper, just implemented differently code-wise?
2024-12-27T12:22:53
https://www.reddit.com/r/LocalLLaMA/comments/1hndrhc/questions_about_the_difference_between_deepgram/
ComfortableAd2723
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hndrhc
false
null
t3_1hndrhc
/r/LocalLLaMA/comments/1hndrhc/questions_about_the_difference_between_deepgram/
false
false
self
1
null
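Both framings in the question are plausible: purpose-built streaming ASR decodes incrementally and revises partial words natively, while offline models like Whisper are often made to "stream" by repeatedly re-decoding a rolling audio buffer. A minimal sketch of the buffer-re-decoding approach (the `transcribe` helper is a hypothetical stand-in for any Whisper-style batch call):

```python
import numpy as np

SAMPLE_RATE = 16_000
WINDOW_SEC = 5   # re-decode a rolling window this long
HOP_SEC = 1      # emit a fresh partial hypothesis every second

def transcribe(audio: np.ndarray) -> str:
    """Placeholder for any offline ASR call (e.g. a Whisper model);
    hypothetical helper, not a real library function."""
    return f"<{len(audio) / SAMPLE_RATE:.1f}s of audio decoded>"

def pseudo_stream(mic_chunks):
    """Fake streaming on a batch model: append new audio, keep the last
    WINDOW_SEC seconds, and re-decode the whole buffer each hop."""
    buffer = np.zeros(0, dtype=np.float32)
    for chunk in mic_chunks:
        buffer = np.concatenate([buffer, chunk])[-WINDOW_SEC * SAMPLE_RATE:]
        yield transcribe(buffer)  # partial transcript, refined each hop

fake_mic = (np.zeros(HOP_SEC * SAMPLE_RATE, dtype=np.float32) for _ in range(3))
for partial in pseudo_stream(fake_mic):
    print(partial)
```

The re-decoding trick costs extra compute and can flicker between hypotheses, which is exactly what incremental decoders are built to avoid.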
New template on Runpod for text-generation-webui v2.0 with API one-click
1
[removed]
2024-12-27T12:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1hndrqf/new_template_on_runpod_for_textgenerationwebui/
WouterGlorieux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hndrqf
false
null
t3_1hndrqf
/r/LocalLLaMA/comments/1hndrqf/new_template_on_runpod_for_textgenerationwebui/
false
false
self
1
{'enabled': False, 'images': [{'id': 'HBeTlCBu9jt8g9PKkCGZ7j8c3VA62Ro8bamp4CVz6w8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=108&crop=smart&auto=webp&s=a2ac0357b2c30636cc4c827e38e6651d2d371868', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=216&crop=smart&auto=webp&s=101169af90ca6b974e5b1fcc5654803406cc4887', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=320&crop=smart&auto=webp&s=32fab8c4e843aee49704091a94145ba2709ef1d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=640&crop=smart&auto=webp&s=2a0eb36a6f9ac4bd2861f1afcb892c94d6bdb19d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=960&crop=smart&auto=webp&s=716f9b7b10005a89dcd76c981efc297546af5b62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?width=1080&crop=smart&auto=webp&s=c8005d9fb482baa2363660e7634e1939b949f736', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XCdU46nssuLjWwvg977bq1ghLtXf7ZcEGlw70B4_qEU.jpg?auto=webp&s=285c64be222632d3bd4b233b6d795fc621230e48', 'width': 1200}, 'variants': {}}]}
Can I install Fedora Server on the new Jetson Orin Nano Super? Need help moving beyond JetPack!
0
Hey r/LocalLLaMA I recently jumped on the Jetson bandwagon and ordered the newly announced Orin Nano Super, but I'm hitting a roadblock with my plans. Here's what I'm trying to do: **My Goal:** * Set up a GPU-accelerated home server * Run Fedora Server as the main OS * Install Nvidia drivers through the standard DKMS method **The Issue:** I've been searching around but can't find many (or any) examples of people running non-JetPack Linux distributions on this device. I'm starting to wonder if I'm trying to do something impossible here. **Questions:** 1. Has anyone successfully installed a different Linux distro on their Orin Nano? 2. Is there a fundamental limitation that forces us to stick with JetPack? 3. If it is possible, could someone point me towards relevant resources or documentation? Any insights from experienced Jetson users would be greatly appreciated. If this isn't possible, I'd love to hear about alternative approaches to achieve similar functionality within JetPack. Edit: Thanks in advance for any help! This is my first Jetson device, so I'm still learning the ropes. **TLDR:** Want to use the new Orin Nano Super as a GPU-accelerated home server running Fedora Server instead of JetPack. Looking for guidance on whether this is possible.
2024-12-27T12:32:04
https://www.reddit.com/r/LocalLLaMA/comments/1hndwlb/can_i_install_fedora_server_on_the_new_jetson/
darklord451616
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hndwlb
false
null
t3_1hndwlb
/r/LocalLLaMA/comments/1hndwlb/can_i_install_fedora_server_on_the_new_jetson/
false
false
self
0
null
How do I get involved with open source model building effort?
1
[removed]
2024-12-27T12:40:29
https://www.reddit.com/r/LocalLLaMA/comments/1hne1bi/how_do_i_get_involved_with_open_source_model/
Emergency_Ant_843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hne1bi
false
null
t3_1hne1bi
/r/LocalLLaMA/comments/1hne1bi/how_do_i_get_involved_with_open_source_model/
false
false
self
1
null
Running DeepSeek-V3 on M4 Mac Mini AI Cluster. 671B MoE model distributed across 8 M4 Pro 64GB Mac Minis.
147
2024-12-27T12:53:48
https://blog.exolabs.net/day-2/
zero_proof_fork
blog.exolabs.net
1970-01-01T00:00:00
0
{}
1hne97k
false
null
t3_1hne97k
/r/LocalLLaMA/comments/1hne97k/running_deepseekv3_on_m4_mac_mini_ai_cluster_671b/
false
false
https://b.thumbs.redditm…t-6YlrAFbU-k.jpg
147
{'enabled': False, 'images': [{'id': 'nqRm0BRbamlX8h-m2KUQWnZR71-UcjBppyGpBKzHYgM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/B4EreVdtadTL1j-8XmEVOxaCtvIg4WwRL1Fm6IC7WRI.jpg?width=108&crop=smart&auto=webp&s=849c3f3aedf7a49247cd0a4032c79e258234128e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/B4EreVdtadTL1j-8XmEVOxaCtvIg4WwRL1Fm6IC7WRI.jpg?width=216&crop=smart&auto=webp&s=d3f19124de141496867500bfdee0daaa1d283fc5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/B4EreVdtadTL1j-8XmEVOxaCtvIg4WwRL1Fm6IC7WRI.jpg?width=320&crop=smart&auto=webp&s=fb3599c23805b0bed48b64528f0fb8229368f171', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/B4EreVdtadTL1j-8XmEVOxaCtvIg4WwRL1Fm6IC7WRI.jpg?width=640&crop=smart&auto=webp&s=d4529ff27690b5db5ca229f4b9a09a33a74064d1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/B4EreVdtadTL1j-8XmEVOxaCtvIg4WwRL1Fm6IC7WRI.jpg?width=960&crop=smart&auto=webp&s=ffdc5451ae6a3abb3895eb396a4e96f2fdcccefc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/B4EreVdtadTL1j-8XmEVOxaCtvIg4WwRL1Fm6IC7WRI.jpg?width=1080&crop=smart&auto=webp&s=51e385d2a0b589ef20bd4e9de68fa261976c94b9', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/B4EreVdtadTL1j-8XmEVOxaCtvIg4WwRL1Fm6IC7WRI.jpg?auto=webp&s=a7009de9f23e66f9e9f9d718ebe1be27a574412e', 'width': 2400}, 'variants': {}}]}
It was the best model less than a year ago
1
[removed]
2024-12-27T13:23:14
https://www.reddit.com/r/LocalLLaMA/comments/1hnerr2/it_was_the_best_model_less_than_a_year_ago/
nh_local
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnerr2
false
null
t3_1hnerr2
/r/LocalLLaMA/comments/1hnerr2/it_was_the_best_model_less_than_a_year_ago/
false
false
https://b.thumbs.redditm…5GWDbFcmh87A.jpg
1
null
What to do with two machines?
0
I have a couple of mid-range PCs. Both have decent CPUs, 128GB of RAM, and two GPUs. One machine has 2 Nvidia 3070s (16GB VRAM total), the other has 2 Vega Frontiers (32GB VRAM total). I'm brainstorming ideas for how to utilize both at the same time... I figure it's not possible to run the same model on them together, is it? As in loading the model partially into each and having them work together. So I was interested if anyone has ideas on what I could build that would use one of the PCs to do one thing while the other does something else, with both serving a common goal.
2024-12-27T13:52:42
https://www.reddit.com/r/LocalLLaMA/comments/1hnfbcm/what_to_do_with_two_machines/
RouteGuru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnfbcm
false
null
t3_1hnfbcm
/r/LocalLLaMA/comments/1hnfbcm/what_to_do_with_two_machines/
false
false
self
0
null
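Splitting one model across the two boxes is possible in principle (llama.cpp ships an RPC backend for distributing layers over the network), but the simpler pattern is giving each machine its own job behind a thin router. A sketch, assuming each PC runs its own OpenAI-compatible server at the hypothetical LAN addresses below:

```python
import requests

# Hypothetical addresses; each box runs its own OpenAI-compatible server
# (llama.cpp's llama-server, ollama, etc.).
NODES = {
    "chat":  "http://192.168.1.10:8080/v1/chat/completions",  # 2x 3070 box
    "embed": "http://192.168.1.11:8080/v1/embeddings",        # 2x Vega box
}

def chat(prompt: str) -> str:
    """Route generation to the NVIDIA box."""
    r = requests.post(NODES["chat"], json={
        "model": "local",
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=120)
    return r.json()["choices"][0]["message"]["content"]

def embed(text: str) -> list:
    """Embeddings (e.g. for RAG over local documents) come from the AMD box."""
    r = requests.post(NODES["embed"], json={"model": "local", "input": text},
                      timeout=60)
    return r.json()["data"][0]["embedding"]
```

One common split along these lines: the stronger box serves the chat model while the other handles embeddings, reranking, or speech, so both contribute to one pipeline.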
SemiAnalysis article "Nvidia’s Christmas Present: GB300 & B300 – Reasoning Inference, Amazon, Memory, Supply Chain" has potential clues about the architecture of o1, o1 pro, and o3
2
2024-12-27T13:54:33
https://semianalysis.com/2024/12/25/nvidias-christmas-present-gb300-b300-reasoning-inference-amazon-memory-supply-chain/
Wiskkey
semianalysis.com
1970-01-01T00:00:00
0
{}
1hnfciu
false
null
t3_1hnfciu
/r/LocalLLaMA/comments/1hnfciu/semianalysis_article_nvidias_christmas_present/
false
false
https://b.thumbs.redditm…94b9XBzUCPxc.jpg
2
{'enabled': False, 'images': [{'id': 'EbG5gnbKJxwCvzB2WintlRIolbA2PYYFdRypIKxfhrY', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/7v_FMLRyzTgu2mE4telEAD2fu53J0jOUm3FKIUoHQGE.jpg?width=108&crop=smart&auto=webp&s=1071f1755b7c5f26f3e61fba3f8e759bd58580b9', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/7v_FMLRyzTgu2mE4telEAD2fu53J0jOUm3FKIUoHQGE.jpg?width=216&crop=smart&auto=webp&s=d6fe7680f1d028dbd6804742fa44f30d1c32db7a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7v_FMLRyzTgu2mE4telEAD2fu53J0jOUm3FKIUoHQGE.jpg?width=320&crop=smart&auto=webp&s=bc8eb41f227370c21bf6026bf3c1fb831e122e68', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/7v_FMLRyzTgu2mE4telEAD2fu53J0jOUm3FKIUoHQGE.jpg?width=640&crop=smart&auto=webp&s=36f32d95ee8cc1036ccbe9fd7b02715f898808d5', 'width': 640}, {'height': 542, 'url': 'https://external-preview.redd.it/7v_FMLRyzTgu2mE4telEAD2fu53J0jOUm3FKIUoHQGE.jpg?width=960&crop=smart&auto=webp&s=49c46ca74d5b03550168be7621aa824e523cafbf', 'width': 960}, {'height': 610, 'url': 'https://external-preview.redd.it/7v_FMLRyzTgu2mE4telEAD2fu53J0jOUm3FKIUoHQGE.jpg?width=1080&crop=smart&auto=webp&s=7d149463480d0f6cfd71479f94c69422ee7e26ae', 'width': 1080}], 'source': {'height': 678, 'url': 'https://external-preview.redd.it/7v_FMLRyzTgu2mE4telEAD2fu53J0jOUm3FKIUoHQGE.jpg?auto=webp&s=c23644900427380a9e18d7da707547b8cf3dafc4', 'width': 1200}, 'variants': {}}]}
Me after reading qwen is going to release sonnet level model and i am also hoping it has test time inference
234
2024-12-27T14:04:25
https://i.redd.it/24cu0jleee9e1.jpeg
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1hnfjnl
false
null
t3_1hnfjnl
/r/LocalLLaMA/comments/1hnfjnl/me_after_reading_qwen_is_going_to_release_sonnet/
false
false
https://b.thumbs.redditm…z8RxE-6Pizls.jpg
234
{'enabled': True, 'images': [{'id': 'TW0t36K0VfNIvocT54kNud2QQCTLMMJoW7P-CIFKLUI', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/24cu0jleee9e1.jpeg?width=108&crop=smart&auto=webp&s=0ca2fd325445beeaf4d779011a5167f1d1e6173a', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/24cu0jleee9e1.jpeg?width=216&crop=smart&auto=webp&s=1afa021d22b9ce1e57038e170816b065c021daeb', 'width': 216}, {'height': 306, 'url': 'https://preview.redd.it/24cu0jleee9e1.jpeg?width=320&crop=smart&auto=webp&s=73e998c04c4094decf05ebb763405f37103644c3', 'width': 320}, {'height': 613, 'url': 'https://preview.redd.it/24cu0jleee9e1.jpeg?width=640&crop=smart&auto=webp&s=319d67fc55acf4bce1cf5fbcfbd44ab9e1f158b2', 'width': 640}, {'height': 920, 'url': 'https://preview.redd.it/24cu0jleee9e1.jpeg?width=960&crop=smart&auto=webp&s=a854b256c991ae53a6a5e03a8a1308708350a9ab', 'width': 960}, {'height': 1036, 'url': 'https://preview.redd.it/24cu0jleee9e1.jpeg?width=1080&crop=smart&auto=webp&s=ac4482415a2f9d8a792b72367fc4aada29e88d18', 'width': 1080}], 'source': {'height': 1036, 'url': 'https://preview.redd.it/24cu0jleee9e1.jpeg?auto=webp&s=185440366956b5511c5e844bf63871278b16fc0e', 'width': 1080}, 'variants': {}}]}
Good model to generate technical design docs?
1
[removed]
2024-12-27T14:16:30
https://www.reddit.com/r/LocalLLaMA/comments/1hnfs31/good_model_to_generate_technical_design_docs/
MoooImACat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnfs31
false
null
t3_1hnfs31
/r/LocalLLaMA/comments/1hnfs31/good_model_to_generate_technical_design_docs/
false
false
self
1
null
What are cool VR things you can do with LLMs?
7
Any cool programs or games that let you chat with LLMs in VR? Preferably locally. I got VR for Christmas and want to make the most of it.
2024-12-27T14:35:54
https://www.reddit.com/r/LocalLLaMA/comments/1hng5ya/what_are_cool_vr_things_you_can_do_with_llms/
Deluded-1b-gguf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hng5ya
false
null
t3_1hng5ya
/r/LocalLLaMA/comments/1hng5ya/what_are_cool_vr_things_you_can_do_with_llms/
false
false
self
7
null
What's your favorite JS/TS llama.cpp client?
2
I realize there's the OpenAI-node client, which might also support the browser, but it's quite bloated with OpenAI-specific stuff that is not supported by llama.cpp. There are also a few WASM packages that run inference in the browser, but the requirement here is to just call the locally installed llama.cpp on the server. There are a couple more that don't seem maintained and haven't received any updates in 6-12 months. I feel I'm missing something, so please let me know what package you use in your JS/TS projects.
2024-12-27T14:57:03
https://www.reddit.com/r/LocalLLaMA/comments/1hngld3/whats_your_favorite_jsts_llamacpp_client/
ParaboloidalCrest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hngld3
false
null
t3_1hngld3
/r/LocalLLaMA/comments/1hngld3/whats_your_favorite_jsts_llamacpp_client/
false
false
self
2
null
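Worth noting that llama.cpp's bundled server speaks an OpenAI-compatible HTTP API, so no SDK is strictly required. The request below is shown in Python for consistency with the other sketches here; the identical JSON body works from a plain `fetch()` in TS (port and parameters are illustrative):

```python
import json
import urllib.request

# Minimal zero-dependency call to llama-server's OpenAI-compatible endpoint.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```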
Be careful where you load your credits...
261
I recently dropped $5 on DeepSeek tokens, excited to try their new model. Little did I know I was on their test server the whole time. Every API request got hit with "Authentication error (user not found)" despite triple-checking my credentials. https://preview.redd.it/i0ryp6ckoe9e1.png?width=538&format=png&auto=webp&s=0d478860c6ab7b337b2ac7d91747735ad570dd95 Turned out the URL had "test" in it, while their production site was completely different. Here's the kicker - their test environment processed real payments! A quick Google search led me straight to the test site, and the whole payment flow worked flawlessly... except it was utterly useless for actual API access. https://preview.redd.it/7i5d25vnoe9e1.png?width=516&format=png&auto=webp&s=4124b948177ae6402f11870bf543be56a42bef1f This feels like a major oversight. You shouldn't be able to spend real money on a test server. Sent an email about this but radio silence so far. Any other DeepSeek users run into this? And devs, if you're reading this, please add some clear warnings or disable real payments on the test environment.
2024-12-27T15:07:33
https://www.reddit.com/r/LocalLLaMA/comments/1hngth1/be_careful_where_you_load_your_credits/
emetah850
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hngth1
false
null
t3_1hngth1
/r/LocalLLaMA/comments/1hngth1/be_careful_where_you_load_your_credits/
false
false
https://b.thumbs.redditm…sRXpRXvJyhlI.jpg
261
null
AlphaGeometryRE: AlphaGeometry Re-Engineered
14
[AlphaGeometryRE](https://github.com/foldl/AlphaGeometryRE) is an re-engineered version of [AlphaGeometry](https://github.com/google-deepmind/alphageometry) with a goal to [make](https://github.com/google-deepmind/alphageometry/issues/130) [it](https://github.com/google-deepmind/alphageometry/issues/116) [easy](https://github.com/google-deepmind/alphageometry/issues/96) to use (especially on [Windows](https://github.com/google-deepmind/alphageometry/issues/120)): * Use [ChatLLM.cpp](http://github.com/foldl/chatllm.cpp) form LLM interference. * Greatly **simplified** _requirements.txt_. * Indent with **four** spaces.
2024-12-27T15:10:12
https://www.reddit.com/r/LocalLLaMA/comments/1hngvh6/alphageometryre_alphageometry_reengineered/
foldl-li
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hngvh6
false
null
t3_1hngvh6
/r/LocalLLaMA/comments/1hngvh6/alphageometryre_alphageometry_reengineered/
false
false
self
14
{'enabled': False, 'images': [{'id': '6xiB9KAyF-7U0RJV-emvZI7izu9CcEM3GnnIqu2un0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5moO0W5awh2iExDsYmuilBbH4VIbAmFxfib5fzht4nk.jpg?width=108&crop=smart&auto=webp&s=037e3651bcf0e942d3770895e88d4d378f66e743', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5moO0W5awh2iExDsYmuilBbH4VIbAmFxfib5fzht4nk.jpg?width=216&crop=smart&auto=webp&s=14a415e0881735f292bafeff7beb0cf39c5d2682', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5moO0W5awh2iExDsYmuilBbH4VIbAmFxfib5fzht4nk.jpg?width=320&crop=smart&auto=webp&s=1ec84af2fcc2852d75a41ac118c323d7959adcb4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5moO0W5awh2iExDsYmuilBbH4VIbAmFxfib5fzht4nk.jpg?width=640&crop=smart&auto=webp&s=f28dd2776d981dc3c898b3fe798bccf0c847bb6d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5moO0W5awh2iExDsYmuilBbH4VIbAmFxfib5fzht4nk.jpg?width=960&crop=smart&auto=webp&s=9d4b3f13fac12d9d37e3b4ee79e39c21e3178c6a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5moO0W5awh2iExDsYmuilBbH4VIbAmFxfib5fzht4nk.jpg?width=1080&crop=smart&auto=webp&s=5b13670c0b76c60f0561ee760e3b69e5e5165985', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5moO0W5awh2iExDsYmuilBbH4VIbAmFxfib5fzht4nk.jpg?auto=webp&s=c7b51c2af6c7a5814e72b508cfc0627d3010fdcf', 'width': 1200}, 'variants': {}}]}
Hey Microsoft, where's Phi-4?
185
2024-12-27T15:13:55
https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090
Balance-
techcommunity.microsoft.com
1970-01-01T00:00:00
0
{}
1hngy76
false
null
t3_1hngy76
/r/LocalLLaMA/comments/1hngy76/hey_microsoft_wheres_phi4/
false
false
https://b.thumbs.redditm…32ErEFDm6eAs.jpg
185
{'enabled': False, 'images': [{'id': 'BfeC_1bgfqIqiWqAzMfQ4aHLoKL13cgpn7LkhqLVW4I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=108&crop=smart&auto=webp&s=13eb0f808259f88846d8d94b88206835059cf516', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=216&crop=smart&auto=webp&s=0dd75bb1a0bcff3c538026330da57bd73faf0ddb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=320&crop=smart&auto=webp&s=608aa3ce01f9f18a3b9074e9a0ac42ba5aed14be', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=640&crop=smart&auto=webp&s=2c79ec68a48b99e3498e986fa7b51ce1237ea79c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=960&crop=smart&auto=webp&s=385599694063c9bc3f36f6d7aea2e1973e94b3ff', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=1080&crop=smart&auto=webp&s=e26c7affe6520318b7167809f9bc1fd67cf356cd', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?auto=webp&s=3f5a23a7b07b1c2c27370c5d5130736b15b33f3b', 'width': 1920}, 'variants': {}}]}
Can i run local LLM with just a 3rd gen i5 and 8gb of ram?
0
Hello, I've never done this before. I want to run a local model on my old PC, and I think asking Reddit directly will save a lot of time. Also, may I know if there are any online models with decent privacy?
2024-12-27T15:45:41
https://www.reddit.com/r/LocalLLaMA/comments/1hnhnby/can_i_run_local_llm_with_just_a_3rd_gen_i5_and/
ultraganymede
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnhnby
false
null
t3_1hnhnby
/r/LocalLLaMA/comments/1hnhnby/can_i_run_local_llm_with_just_a_3rd_gen_i5_and/
false
false
self
0
null
Does anyone know of a well-structured guide for using llama.cpp?
29
While I am familiar with the basics and aware that guides exist on GitHub, I'm looking for a comprehensive, well-organized guide that covers all the features in detail. Specifically, I'd like to see a section dedicated to various methods for speeding up inference, such as lookahead and speculative decoding. The current guides are scattered and lack a clear structure, making it difficult to keep up with all the new features unless you closely follow the development of llama.cpp.
2024-12-27T15:55:25
https://www.reddit.com/r/LocalLLaMA/comments/1hnhuzq/does_anyone_know_of_a_wellstructured_guide_for/
MustBeSomethingThere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnhuzq
false
null
t3_1hnhuzq
/r/LocalLLaMA/comments/1hnhuzq/does_anyone_know_of_a_wellstructured_guide_for/
false
false
self
29
null
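On the speculative decoding mentioned in the post above, the core idea is small: a cheap draft model proposes a few tokens, and the big target model verifies them (in one batched forward pass in practice), keeping the longest agreeing prefix plus its own correction. A greedy toy illustration; real implementations verify with rejection sampling over full token distributions rather than exact matching:

```python
def speculative_step(draft, target, ctx: list, k: int = 4) -> list:
    """One round of greedy speculative decoding: the draft proposes k
    tokens; the target keeps the longest agreeing prefix and, on the
    first disagreement, substitutes its own token."""
    proposal = []
    for _ in range(k):
        proposal.append(draft(ctx + proposal))
    accepted = []
    for tok in proposal:
        expected = target(ctx + accepted)  # what the target would emit here
        if tok != expected:
            accepted.append(expected)      # target overrides; round ends
            break
        accepted.append(tok)
    return ctx + accepted

# Toy "models" over integer tokens: the target always emits last+1,
# the draft agrees except on every third position.
target = lambda seq: seq[-1] + 1
draft  = lambda seq: seq[-1] + (2 if len(seq) % 3 == 0 else 1)
print(speculative_step(draft, target, [0]))  # [0, 1, 2, 3]
```

The speedup comes from the target scoring all k proposals in one pass instead of k sequential passes, so a well-matched draft model can multiply decode throughput.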
Best multimodal LLM to get body shape details?
0
I am looking for a model which can tell me, as accurately as possible, body shape and other fitness details related to it. Any suggestions, please? I tried llava-13b and ollama-llama3.2-vision-11b, but the accuracy is not great.
2024-12-27T16:10:24
https://www.reddit.com/r/LocalLLaMA/comments/1hni6zo/best_multimodel_llm_to_get_body_shape_details/
Electrical_Sound_757
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hni6zo
false
null
t3_1hni6zo
/r/LocalLLaMA/comments/1hni6zo/best_multimodel_llm_to_get_body_shape_details/
false
false
self
0
null
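For what it's worth, vision models like the llama3.2-vision the poster tried can be driven programmatically through ollama's HTTP API, which accepts base64-encoded images; a sketch (the file name and prompt are illustrative):

```python
import base64
import requests

# Encode a local image for ollama's generate endpoint.
with open("photo.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

r = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3.2-vision",
    "prompt": "Describe the subject's build and posture in neutral terms.",
    "images": [img_b64],
    "stream": False,
}, timeout=300)
print(r.json()["response"])
```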
React Native ExecuTorch now supports computer vision models
6
With the latest release of the library, the following hooks were added: * [`useObjectDetection`](https://docs.swmansion.com/react-native-executorch/docs/computer-vision/useObjectDetection) \- SSDLite320 MobileNetv3 Large * [`useStyleTransfer`](https://docs.swmansion.com/react-native-executorch/docs/computer-vision/useStyleTransfer) \- Candy, Mosaic, Rain Princess, Udnie * [`useClassification`](https://docs.swmansion.com/react-native-executorch/docs/computer-vision/useClassification) \- EfficientNet V2 Small Full release notes [here](https://github.com/software-mansion/react-native-executorch/releases/tag/v0.2.0), and below is a short demo of `useObjectDetection` in action. https://reddit.com/link/1hnibo5/video/2tjx33u91f9e1/player
2024-12-27T16:15:59
https://www.reddit.com/r/LocalLLaMA/comments/1hnibo5/react_native_executorch_now_supports_computer/
d_arthez
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnibo5
false
null
t3_1hnibo5
/r/LocalLLaMA/comments/1hnibo5/react_native_executorch_now_supports_computer/
false
false
self
6
null
What are your system prompts for using OpenAI, Anthropic, Google LLMs via API Keys? Trying to reproduce a general assistant chat like ChatGPT but have some credits
11
Any pointers or github gists?
2024-12-27T16:16:28
https://www.reddit.com/r/LocalLLaMA/comments/1hnic1x/what_are_your_system_prompts_for_using_openai/
fourfiftyfiveam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnic1x
false
null
t3_1hnic1x
/r/LocalLLaMA/comments/1hnic1x/what_are_your_system_prompts_for_using_openai/
false
false
self
11
null
Here is Phi-4
1
[removed]
2024-12-27T16:22:21
https://www.reddit.com/r/LocalLLaMA/comments/1hnigr6/here_is_phi4/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnigr6
false
null
t3_1hnigr6
/r/LocalLLaMA/comments/1hnigr6/here_is_phi4/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LVOG-Ma4sVt7-GCtsGzHFYEd3xPduTj9AavI9bXwmV4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=108&crop=smart&auto=webp&s=e237b41d9f130ec3ceb0f930a826cfcb0ca9b96e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=216&crop=smart&auto=webp&s=a72d3d812c1d5e0696b24e1a1d6b6ca62c984164', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=320&crop=smart&auto=webp&s=9890b9af6c8c143a3afab629e8c620f6486c05d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=640&crop=smart&auto=webp&s=1938c52ca744654d08f36b6e5ef4675f9783cee1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=960&crop=smart&auto=webp&s=011d4eb1e5d88639566be50330522d5039c98d6a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?width=1080&crop=smart&auto=webp&s=daf49c729822a9e279ab3b2c38f2f10f8688a836', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VsqQRTfHHgTo9gyA_7S3cCK9lz2asK5S3XfH7TlTeR4.jpg?auto=webp&s=1a3a80dca5f60cc754b1e04f863a64ef4ff36ccd', 'width': 1200}, 'variants': {}}]}
Learn ML and Local LLM on Old PC
1
[removed]
2024-12-27T16:33:47
https://www.reddit.com/r/LocalLLaMA/comments/1hniq6e/learn_ml_and_local_llm_on_old_pc/
Otherwise_Ad_3382
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hniq6e
false
null
t3_1hniq6e
/r/LocalLLaMA/comments/1hniq6e/learn_ml_and_local_llm_on_old_pc/
false
false
self
1
null
Here's Microsoft's Phi-4 14B on HF
0
# TLDR HF Weights: [https://huggingface.co/amgadhasan/phi-4](https://huggingface.co/amgadhasan/phi-4) GGUF: [https://huggingface.co/matteogeniaccio/phi-4](https://huggingface.co/matteogeniaccio/phi-4) # Long Version This has been requested like 30 times this week. People are asking: where is the Phi-4 release on HF, as announced by MSFT? Well, MSFT hasn't released it on HF yet. However, they did release the weights on their Azure AI Foundry. You can just download them from there and use them like any model from the HuggingFace Hub. Or, if you're lazy, people have already downloaded the weights from Azure AI Foundry and shared them publicly on the HuggingFace Hub. Someone even converted them into GGUF with quants. **But why hasn't MSFT released the weights officially on the HuggingFace Hub as promised?** Your guess is as good as mine. Some say it's because it's the holiday season, so most employees are off.
2024-12-27T16:34:55
https://www.reddit.com/r/LocalLLaMA/comments/1hnir3c/heres_microsofts_phi4_14b_on_hf/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnir3c
false
null
t3_1hnir3c
/r/LocalLLaMA/comments/1hnir3c/heres_microsofts_phi4_14b_on_hf/
false
false
self
0
{'enabled': False, 'images': [{'id': 'noPWxDcjYghRUP15RUY5Gua-bfEoglu_J0l7OYgmRj4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c5h4l8I_tfLEqUHA29lApWkTHoAwXtW3TILfx5ZVtRQ.jpg?width=108&crop=smart&auto=webp&s=be285a75bf5b18e7827b0f423a1ab194244101d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/c5h4l8I_tfLEqUHA29lApWkTHoAwXtW3TILfx5ZVtRQ.jpg?width=216&crop=smart&auto=webp&s=22703aa6c36b9542a79ba5df8a89097f770d8b7f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/c5h4l8I_tfLEqUHA29lApWkTHoAwXtW3TILfx5ZVtRQ.jpg?width=320&crop=smart&auto=webp&s=ad50878b5c38492944b60bfb8df410699f3a6b6e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/c5h4l8I_tfLEqUHA29lApWkTHoAwXtW3TILfx5ZVtRQ.jpg?width=640&crop=smart&auto=webp&s=f54dbdfa4706dd0d734003cb99f0665807ca047c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/c5h4l8I_tfLEqUHA29lApWkTHoAwXtW3TILfx5ZVtRQ.jpg?width=960&crop=smart&auto=webp&s=cc48add01d7189e3d13bd9bd551d7941f8ab9343', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/c5h4l8I_tfLEqUHA29lApWkTHoAwXtW3TILfx5ZVtRQ.jpg?width=1080&crop=smart&auto=webp&s=b178b29bcdf9a3e63b6575592cb8ad9927aa4b61', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/c5h4l8I_tfLEqUHA29lApWkTHoAwXtW3TILfx5ZVtRQ.jpg?auto=webp&s=c20e596e056debc5d529677684d8443fc69cf466', 'width': 1200}, 'variants': {}}]}
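Pulling the community mirror named above takes one call with `huggingface_hub` (the repo id comes from the post; it's an unofficial re-upload until Microsoft's own lands, so the usual caveats about mirrors apply):

```python
from huggingface_hub import snapshot_download

# Downloads all files in the repo to the local HF cache and returns the path.
path = snapshot_download("amgadhasan/phi-4")
print(f"weights cached at: {path}")
```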
Deepseek claims to be OpenAi Model?
0
2024-12-27T16:48:58
https://i.redd.it/bw6czcaq7f9e1.png
fraschm98
i.redd.it
1970-01-01T00:00:00
0
{}
1hnj2or
false
null
t3_1hnj2or
/r/LocalLLaMA/comments/1hnj2or/deepseek_claims_to_be_openai_model/
false
false
https://b.thumbs.redditm…aAO2a4Y8bVGM.jpg
0
{'enabled': True, 'images': [{'id': '0WJMWQ4OeVfyIpdMVsd1cYnxIX7ErtKMrMCkw5yYCaY', 'resolutions': [{'height': 20, 'url': 'https://preview.redd.it/bw6czcaq7f9e1.png?width=108&crop=smart&auto=webp&s=1bcb2b00b50d4b6677d7eeebb92b1ca622773e85', 'width': 108}, {'height': 41, 'url': 'https://preview.redd.it/bw6czcaq7f9e1.png?width=216&crop=smart&auto=webp&s=fbce936d59af45218345f8a0b6f5fba83e163740', 'width': 216}, {'height': 62, 'url': 'https://preview.redd.it/bw6czcaq7f9e1.png?width=320&crop=smart&auto=webp&s=ac070afd1067bb4166fedb3c816236bbe99bce8b', 'width': 320}, {'height': 124, 'url': 'https://preview.redd.it/bw6czcaq7f9e1.png?width=640&crop=smart&auto=webp&s=9a2939654640e500c175acdc62468dc4a82bb312', 'width': 640}, {'height': 186, 'url': 'https://preview.redd.it/bw6czcaq7f9e1.png?width=960&crop=smart&auto=webp&s=7a65048f9cae0724642739f65476bc3d8aaee6c0', 'width': 960}, {'height': 209, 'url': 'https://preview.redd.it/bw6czcaq7f9e1.png?width=1080&crop=smart&auto=webp&s=efc4254a749c0ddd413f0b97d4ff4bc8aeda5aa3', 'width': 1080}], 'source': {'height': 380, 'url': 'https://preview.redd.it/bw6czcaq7f9e1.png?auto=webp&s=710c99104e987ae993b3812ac0dfc1d1f05a4897', 'width': 1961}, 'variants': {}}]}
Deepseek v3 quantize links or how to?
2
I want to test out DeepSeek V3 at Q4. Are there any links yet? Or how could I quantize it myself? Will post speeds once I get it running. Using a 3090, EPYC 7302 and 320GB RAM.
2024-12-27T17:00:10
https://www.reddit.com/r/LocalLLaMA/comments/1hnjbu3/deepseek_v3_quantize_links_or_how_to/
fraschm98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnjbu3
false
null
t3_1hnjbu3
/r/LocalLLaMA/comments/1hnjbu3/deepseek_v3_quantize_links_or_how_to/
false
false
self
2
null
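For the quantize-it-yourself route, the usual llama.cpp recipe is a two-step convert-then-quantize pipeline; a sketch (paths are illustrative, exact flags vary by llama.cpp version, and DeepSeek-V3 needs a build recent enough to know its architecture):

```python
import subprocess

# Step 1: convert the HuggingFace checkpoint to an FP16 GGUF.
subprocess.run([
    "python", "convert_hf_to_gguf.py", "deepseek-v3/",   # HF weights dir
    "--outfile", "deepseek-v3-f16.gguf", "--outtype", "f16",
], check=True)

# Step 2: quantize the FP16 GGUF down to Q4_K_M.
subprocess.run([
    "./llama-quantize", "deepseek-v3-f16.gguf",
    "deepseek-v3-q4_k_m.gguf", "Q4_K_M",
], check=True)
```

Note the intermediate FP16 file for a 671B-parameter model is enormous (over a terabyte), so disk space is the first constraint to check.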
Llama seems to ignore system prompt when using tools
1
[removed]
2024-12-27T17:07:00
https://www.reddit.com/r/LocalLLaMA/comments/1hnjhvt/llama_seems_to_ignore_system_prompt_when_using/
GulgPlayer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnjhvt
false
null
t3_1hnjhvt
/r/LocalLLaMA/comments/1hnjhvt/llama_seems_to_ignore_system_prompt_when_using/
false
false
self
1
null
is an RTX 8000 a good idea in 2024?
1
[removed]
2024-12-27T17:10:27
https://www.reddit.com/r/LocalLLaMA/comments/1hnjkss/is_an_rtx_8000_a_good_idea_in_2024/
thphon83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnjkss
false
null
t3_1hnjkss
/r/LocalLLaMA/comments/1hnjkss/is_an_rtx_8000_a_good_idea_in_2024/
false
false
self
1
null
DeepSeek-v3 signals China may be winning the chip wars
0
2024-12-27T17:34:04
https://llamanews.ai/p/opinion-deepseek-v3-signals-china
jpmmcb
llamanews.ai
1970-01-01T00:00:00
0
{}
1hnk45x
false
null
t3_1hnk45x
/r/LocalLLaMA/comments/1hnk45x/deepseekv3_signals_china_may_be_winning_the_chip/
false
false
https://b.thumbs.redditm…Tv_OdBAFWhvE.jpg
0
{'enabled': False, 'images': [{'id': 'OeVmeLTbN2EPQUgJfQ8MYwcGPBigOPC6UwXKPnfmEV4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0pIcAO9DhOxEySdhtZrpsgLz44V9oPerss4j7iu8LAU.jpg?width=108&crop=smart&auto=webp&s=8b42b9d6aacf053fa34cf466719689b593c63b7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0pIcAO9DhOxEySdhtZrpsgLz44V9oPerss4j7iu8LAU.jpg?width=216&crop=smart&auto=webp&s=619e31c51ece249b0d9dc6bf9a769ba4efbf3c13', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0pIcAO9DhOxEySdhtZrpsgLz44V9oPerss4j7iu8LAU.jpg?width=320&crop=smart&auto=webp&s=feb2c5dc424c6734bb9c04e71d8d999c88a61a79', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0pIcAO9DhOxEySdhtZrpsgLz44V9oPerss4j7iu8LAU.jpg?width=640&crop=smart&auto=webp&s=1499d4ed8e1cfe23e3e0ff153b6c9daf743fc739', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0pIcAO9DhOxEySdhtZrpsgLz44V9oPerss4j7iu8LAU.jpg?width=960&crop=smart&auto=webp&s=b15808478487a3b2f2b2918ca5ae73beb4c74041', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0pIcAO9DhOxEySdhtZrpsgLz44V9oPerss4j7iu8LAU.jpg?width=1080&crop=smart&auto=webp&s=3d573844fb149507612c9d12b60e80a55fc842ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0pIcAO9DhOxEySdhtZrpsgLz44V9oPerss4j7iu8LAU.jpg?auto=webp&s=f4b33931caa3123cf0f526129a6fa2684d838eda', 'width': 1200}, 'variants': {}}]}
Is there any uncensored language model u guys use right now?
16
Can anyone suggest a model?
2024-12-27T17:36:32
https://www.reddit.com/r/LocalLLaMA/comments/1hnk67l/is_there_any_uncensored_language_model_u_guys_use/
pro_ut3104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnk67l
false
null
t3_1hnk67l
/r/LocalLLaMA/comments/1hnk67l/is_there_any_uncensored_language_model_u_guys_use/
false
false
self
16
null
TypeScript MCP framework with built-in image, logging, and error handling, SSE, progress notifications, and more
22
2024-12-27T17:43:19
https://github.com/punkpeye/fastmcp
punkpeye
github.com
1970-01-01T00:00:00
0
{}
1hnkbor
false
null
t3_1hnkbor
/r/LocalLLaMA/comments/1hnkbor/typescript_mcp_framework_with_builtin_image/
false
false
https://a.thumbs.redditm…ytXZiIX2YPD0.jpg
22
{'enabled': False, 'images': [{'id': 'Z9orI1E9VFWbxzIYTqkGwH7_DzEE73nz8M-GMeWH5l0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0vPeoM8aapSThm7fn2lNms54r8Lvpd-rQsuUvthXzAg.jpg?width=108&crop=smart&auto=webp&s=38e851f3ee4cf4ebdcd2d3bf5c57a81755b3230c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0vPeoM8aapSThm7fn2lNms54r8Lvpd-rQsuUvthXzAg.jpg?width=216&crop=smart&auto=webp&s=cfe072f9895630251e8286c44c0334174553891d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0vPeoM8aapSThm7fn2lNms54r8Lvpd-rQsuUvthXzAg.jpg?width=320&crop=smart&auto=webp&s=31889ddf4813d5bb47f68fb674ce7f3a3d110910', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0vPeoM8aapSThm7fn2lNms54r8Lvpd-rQsuUvthXzAg.jpg?width=640&crop=smart&auto=webp&s=2e69eb59c9a35b1809e66ce4d2e7d098cee26da7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0vPeoM8aapSThm7fn2lNms54r8Lvpd-rQsuUvthXzAg.jpg?width=960&crop=smart&auto=webp&s=8a4a8077e7dddc9fb2d8be32e3cb160716117625', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0vPeoM8aapSThm7fn2lNms54r8Lvpd-rQsuUvthXzAg.jpg?width=1080&crop=smart&auto=webp&s=131a6983d96d8ae9daac462fc6f0d491faf4fb0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0vPeoM8aapSThm7fn2lNms54r8Lvpd-rQsuUvthXzAg.jpg?auto=webp&s=90a6423c45d479122180935f9b4e84537cb561dc', 'width': 1200}, 'variants': {}}]}
Is Deepseek v3 available on ollama?
1
[removed]
2024-12-27T17:48:26
https://www.reddit.com/r/LocalLLaMA/comments/1hnkfzp/is_deepseek_v3_available_on_ollama/
adithyag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnkfzp
false
null
t3_1hnkfzp
/r/LocalLLaMA/comments/1hnkfzp/is_deepseek_v3_available_on_ollama/
false
false
self
1
null
Deepseek v3, so much of the training data is contaminated/derived from GPT, openai.
0
How much copyrighted or synthetic data was DeepSeek trained on? It seems like most models contain some synthetic data generated by another model, mainly the GPTs.
2024-12-27T18:01:14
https://i.redd.it/mt08954nkf9e1.png
iwinuwinvwin
i.redd.it
1970-01-01T00:00:00
0
{}
1hnkqm5
false
null
t3_1hnkqm5
/r/LocalLLaMA/comments/1hnkqm5/deepseek_v3_so_much_of_the_training_data_is/
false
false
https://b.thumbs.redditm…0y8vvkR_RdGg.jpg
0
{'enabled': True, 'images': [{'id': 'Y6bxf_x9mMWz73JA-r8MH7YrChWNRSGN7hPSqVEyNgQ', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/mt08954nkf9e1.png?width=108&crop=smart&auto=webp&s=337ebca5396a0a32f769c2b02ee1f219b041f92d', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/mt08954nkf9e1.png?width=216&crop=smart&auto=webp&s=efcd1157e9e4e5fc8e91ee90956fbaac2662bade', 'width': 216}, {'height': 347, 'url': 'https://preview.redd.it/mt08954nkf9e1.png?width=320&crop=smart&auto=webp&s=42a76aec2f9d0eb92b0ec230cb5ef2cb4beab3db', 'width': 320}, {'height': 694, 'url': 'https://preview.redd.it/mt08954nkf9e1.png?width=640&crop=smart&auto=webp&s=e5302b78b0848a798141fd6952330814ef1e6c10', 'width': 640}, {'height': 1041, 'url': 'https://preview.redd.it/mt08954nkf9e1.png?width=960&crop=smart&auto=webp&s=f8af686dba6fefc8d44c3fd30b6c23551e2bf7df', 'width': 960}, {'height': 1172, 'url': 'https://preview.redd.it/mt08954nkf9e1.png?width=1080&crop=smart&auto=webp&s=d2c6979d185ca06601bb57fe818cf27d0cc4d823', 'width': 1080}], 'source': {'height': 1172, 'url': 'https://preview.redd.it/mt08954nkf9e1.png?auto=webp&s=f28f98ac302ae255ab71e68eec8881cc403a25b9', 'width': 1080}, 'variants': {}}]}
What? [DeepSeek v3]
0
2024-12-27T18:03:14
https://i.redd.it/5j5ehy4ukf9e1.png
glarefloor
i.redd.it
1970-01-01T00:00:00
0
{}
1hnksci
false
null
t3_1hnksci
/r/LocalLLaMA/comments/1hnksci/what_deepseek_v3/
false
false
https://b.thumbs.redditm…jrn452uHHADo.jpg
0
{'enabled': True, 'images': [{'id': '3VYs1WD93Lp7FFneyJuwhIi3Lm-ehW31iO7B45m3N60', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/5j5ehy4ukf9e1.png?width=108&crop=smart&auto=webp&s=089998bd2a6511f1b16eaafd35fc5226aa9e0dd5', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/5j5ehy4ukf9e1.png?width=216&crop=smart&auto=webp&s=28c5e9a31d37c1e1ec061161a4d3242454d09a2a', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/5j5ehy4ukf9e1.png?width=320&crop=smart&auto=webp&s=49947c0171426d3d95b3a1ece75ef462a1d821ac', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/5j5ehy4ukf9e1.png?width=640&crop=smart&auto=webp&s=ff25b897ab88e1a880aa9fd06538ada595e6aef6', 'width': 640}, {'height': 452, 'url': 'https://preview.redd.it/5j5ehy4ukf9e1.png?width=960&crop=smart&auto=webp&s=2a4d4d46297a609a371a47275d9fa6c9a4dbc740', 'width': 960}, {'height': 509, 'url': 'https://preview.redd.it/5j5ehy4ukf9e1.png?width=1080&crop=smart&auto=webp&s=4f72feb2947c4da2dc2512c82998001f2253c038', 'width': 1080}], 'source': {'height': 905, 'url': 'https://preview.redd.it/5j5ehy4ukf9e1.png?auto=webp&s=dd7cabb0c81a120ee22232f59706ae90cd66b0bf', 'width': 1919}, 'variants': {}}]}
Thunderbolt bridge with TB3 or TB4 ?
8
So currently I have been experimenting with connecting my computers (Macs) together to run bigger models and, most importantly, have more power for some big models. I am connecting them via a Thunderbolt bridge with TB3 cables for now, and I don't know if it's worth going to TB4 since the transfer speeds are the same. All my TB3 cables are good quality (Belkin 1m, bought from the Apple Store). For reference, if it makes any difference, my computers are the following: MBP M3 Max 64GB (scheduler), M2 Ultra 192GB (worker), Mac Mini M4 16GB (worker - to buy soon), M2 Air base (worker). I would need to buy 3 TB4 cables, which is a lot of money (TB4 Pro cables are $130 each; regular good-quality TB4 cables from Amazon are about $80). Is it worth it or should I stick with TB3?
2024-12-27T18:27:24
https://www.reddit.com/r/LocalLLaMA/comments/1hnlcqw/thunderbolt_bridge_with_tb3_or_tb4/
Dr_Superfluid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnlcqw
false
null
t3_1hnlcqw
/r/LocalLLaMA/comments/1hnlcqw/thunderbolt_bridge_with_tb3_or_tb4/
false
false
self
8
null
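TB3 and TB4 share the same nominal 40 Gb/s signalling rate, so for a bridge the cables alone shouldn't change throughput. Rough numbers on what that link costs per token when activations hop between machines (the overhead factor and hidden size are assumptions for illustration):

```python
# Both TB3 and TB4 top out at the same 40 Gb/s link rate.
LINK_GBPS = 40
usable_GBps = LINK_GBPS / 8 * 0.8   # bytes/s after ~20% protocol overhead (assumption)

hidden = 8192                        # hidden size of a large model (illustrative)
activation_bytes = hidden * 2        # one FP16 activation vector per token
per_token_ms = activation_bytes / (usable_GBps * 1e9) * 1e3
print(f"~{usable_GBps:.1f} GB/s usable, ~{per_token_ms:.4f} ms per token hop")
```

At these scales the per-token transfer is a fraction of a millisecond, so link latency and software overhead usually matter more than the TB3-vs-TB4 distinction.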
I’ve got two free Tesla k80s at work. Is that my golden ticket into much better home AI experiences than my 3060? Can I run massive models requiring 48GB across both GPUs?
1
Lastly, I have an old 7940X workstation along with two modern systems (a 13600K and a 5950X). Is there any real reason to use the old X299 system instead of the more modern ones? I mean, yes, I know it has more PCIe lanes per GPU slot, but will it make a difference?
2024-12-27T18:46:33
https://www.reddit.com/r/LocalLLaMA/comments/1hnlshe/ive_got_two_free_tesla_k80s_at_work_is_that_my/
Alternative_Spite_11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnlshe
false
null
t3_1hnlshe
/r/LocalLLaMA/comments/1hnlshe/ive_got_two_free_tesla_k80s_at_work_is_that_my/
false
false
self
1
null
Deepseek-V3 GGUF?
0
Can't wait to put her into my server ❤️😍
2024-12-27T18:56:52
https://i.redd.it/gr2zyqskuf9e1.jpeg
realJoeTrump
i.redd.it
1970-01-01T00:00:00
0
{}
1hnm102
false
null
t3_1hnm102
/r/LocalLLaMA/comments/1hnm102/deepseekv3_gguf/
false
false
https://b.thumbs.redditm…VENbS1Chy-TU.jpg
0
{'enabled': True, 'images': [{'id': 'pY975AnXw3LUJPFOqquwNVGGtC1jR-71kkDyFZaDGoA', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/gr2zyqskuf9e1.jpeg?width=108&crop=smart&auto=webp&s=4cfa12bbe4c011b3d29c990c1b982e904dc23285', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/gr2zyqskuf9e1.jpeg?width=216&crop=smart&auto=webp&s=b7083a853b1db686368879563a99f09cf0b20982', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/gr2zyqskuf9e1.jpeg?width=320&crop=smart&auto=webp&s=c7482f4f758c27bca961f70025efcd8a972ec708', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/gr2zyqskuf9e1.jpeg?width=640&crop=smart&auto=webp&s=6376e4fadf76fc25a9877261ebaf1f27ab94904d', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/gr2zyqskuf9e1.jpeg?width=960&crop=smart&auto=webp&s=4bd0ba92107f5fc7c08a4449993f833626546c1e', 'width': 960}], 'source': {'height': 768, 'url': 'https://preview.redd.it/gr2zyqskuf9e1.jpeg?auto=webp&s=2794aaa065476cc9433b11a63bd9fd3003615a70', 'width': 1024}, 'variants': {}}]}
Deepseek discounted vs new pricing.
159
2024-12-27T19:20:01
https://i.redd.it/17qka2wnyf9e1.png
Pro-editor-1105
i.redd.it
1970-01-01T00:00:00
0
{}
1hnmk9z
false
null
t3_1hnmk9z
/r/LocalLLaMA/comments/1hnmk9z/deepseek_discounted_vs_new_pricing/
false
false
https://b.thumbs.redditm…_TOn6KwCpA_s.jpg
159
{'enabled': True, 'images': [{'id': 'B0RNOloARrqu6j7IA8ZrgtY6xQ51ahqgDICqoyEUToA', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/17qka2wnyf9e1.png?width=108&crop=smart&auto=webp&s=5b563288d7b00161e7d4f419955fda869ee6b360', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/17qka2wnyf9e1.png?width=216&crop=smart&auto=webp&s=95ef0f0d417e01806301bdd2911fced993091f98', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/17qka2wnyf9e1.png?width=320&crop=smart&auto=webp&s=ca5b11c065820cb4a9bdf9c4acd0a6dc48c07572', 'width': 320}, {'height': 295, 'url': 'https://preview.redd.it/17qka2wnyf9e1.png?width=640&crop=smart&auto=webp&s=6e3572062e9073c0ff05e9bf31b00d268273c572', 'width': 640}, {'height': 443, 'url': 'https://preview.redd.it/17qka2wnyf9e1.png?width=960&crop=smart&auto=webp&s=6b77e67831f56881e3a33a1effe29e9e885f90b6', 'width': 960}, {'height': 499, 'url': 'https://preview.redd.it/17qka2wnyf9e1.png?width=1080&crop=smart&auto=webp&s=0aa6552e25afdf7d447305989d1367c09313653b', 'width': 1080}], 'source': {'height': 1266, 'url': 'https://preview.redd.it/17qka2wnyf9e1.png?auto=webp&s=ec617ba62a6e81d8cea43d8c3ae206f43c831dd2', 'width': 2738}, 'variants': {}}]}
Deepseek discounted vs new pricing.
1
2024-12-27T19:20:01
https://i.redd.it/17qka2wnyf9e1
Pro-editor-1105
i.redd.it
1970-01-01T00:00:00
0
{}
1hnmka5
false
null
t3_1hnmka5
/r/LocalLLaMA/comments/1hnmka5/deepseek_discounted_vs_new_pricing/
false
false
default
1
null
Exo + Daisy Chained Jetson Nano Supers
3
I’ve read through a lot of people’s comments about getting a full GPU rather than the Nvidia Jetson Nano. What I haven’t seen yet is whether anyone has attempted to daisy-chain multiple Jetson Nano Supers and use exo to run larger local LLMs, or smaller LLMs at greater speeds. Any insights?
2024-12-27T19:43:44
https://www.reddit.com/r/LocalLLaMA/comments/1hnn3ea/exo_daisy_chained_jetson_nano_supers/
Next_Employer_738
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnn3ea
false
null
t3_1hnn3ea
/r/LocalLLaMA/comments/1hnn3ea/exo_daisy_chained_jetson_nano_supers/
false
false
self
3
null
Open source the strongest: "Frontier AI systems have surpassed the self-replicating red line"
55
[https://arxiv.org/abs/2412.12140](https://arxiv.org/abs/2412.12140) "we for the first time discover that two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct, popular large language models of less parameters and weaker capabilities, have already surpassed the self-replicating red line. In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively." Open Source power
2024-12-27T19:45:11
https://www.reddit.com/r/LocalLLaMA/comments/1hnn4kz/open_source_the_strongest_frontier_ai_systems/
GodComplecs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnn4kz
false
null
t3_1hnn4kz
/r/LocalLLaMA/comments/1hnn4kz/open_source_the_strongest_frontier_ai_systems/
false
false
self
55
null
Cheap server for deepseek v3? How much RAM?
1
[removed]
2024-12-27T19:50:04
https://www.reddit.com/r/LocalLLaMA/comments/1hnn8mk/cheap_server_for_deepseek_v3_how_much_ram/
henryclw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnn8mk
false
null
t3_1hnn8mk
/r/LocalLLaMA/comments/1hnn8mk/cheap_server_for_deepseek_v3_how_much_ram/
false
false
self
1
null
What would be the best model to control home assistant, but still have the ability to answer simple questions? Currently Qwen2.5 and Llama3 return nonsensical answers.
1
2024-12-27T19:58:12
https://i.redd.it/k4affhfi5g9e1.png
D3DCreations
i.redd.it
1970-01-01T00:00:00
0
{}
1hnnf52
false
null
t3_1hnnf52
/r/LocalLLaMA/comments/1hnnf52/what_would_be_the_best_model_to_control_home/
false
false
https://a.thumbs.redditm…TtQJnDthvfj8.jpg
1
{'enabled': True, 'images': [{'id': 'Tjkb12k5GoNj5HgiFN6wiJi_wrUiUOqwCHx6vub9fJA', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/k4affhfi5g9e1.png?width=108&crop=smart&auto=webp&s=bd6c197e96a53629e65324a3507c1d971d2d5b20', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/k4affhfi5g9e1.png?width=216&crop=smart&auto=webp&s=68ad7bb70555a6c7b5810f19327a252f79bc7cfd', 'width': 216}, {'height': 394, 'url': 'https://preview.redd.it/k4affhfi5g9e1.png?width=320&crop=smart&auto=webp&s=269e3fe0caf7582c8f752ed0b2005b0a8fd574a4', 'width': 320}], 'source': {'height': 498, 'url': 'https://preview.redd.it/k4affhfi5g9e1.png?auto=webp&s=e1781e2c07e0fe71ff4f1695b7d0a5955197389c', 'width': 404}, 'variants': {}}]}
Upgrading GPU - difference in performance?
0
I am currently using this PC: 2080Ti, 3700X, 16GB DDR4 2133MHz. I was planning on upgrading in general, but since I got into AI I thought I would go all out. How much difference is there between the top-tier options? For example, I'm currently getting 50 tokens/s on the 3B Llama 3.2 model, OpenVoice TTS takes about 3.5 seconds to generate a 40-second audio file, and WhisperX takes about 1 second to transcribe 10 seconds of speech (yes, I'm working on a generic assistant). Any idea how much of an improvement a newer GPU will give for the same models? A second-hand 3090 / 4090, or even the upcoming 5090? I will be getting an entirely new PC as well, but was mainly wondering about AI performance for these models: how many tokens/s and what processing-time improvements can I expect from each GPU? I tried looking it up but couldn't find comparisons to my 2080Ti. The most important thing to me is getting the response time as quick as possible, so I'm also working on streaming each part of the system instead of waiting for the full processing time.
2024-12-27T20:14:36
https://www.reddit.com/r/LocalLLaMA/comments/1hnnsgx/upgrading_gpu_difference_in_performence/
XPEZNAZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnnsgx
false
null
t3_1hnnsgx
/r/LocalLLaMA/comments/1hnnsgx/upgrading_gpu_difference_in_performence/
false
false
self
0
null
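One way to get apples-to-apples numbers for the post above is to measure decode speed yourself with the same prompt on each GPU; ollama, for instance, reports token counts and durations (in nanoseconds) in its response metadata. A sketch, with the model tag mirroring the poster's 3B Llama 3.2:

```python
import requests

# Run the identical prompt on each machine/GPU and compare tokens per second.
r = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3.2:3b",
    "prompt": "Explain attention briefly.",
    "stream": False,
}, timeout=300).json()

tps = r["eval_count"] / (r["eval_duration"] / 1e9)  # decode tokens per second
print(f"decode: {tps:.1f} tok/s")
```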
AI Agents Marketplace by Openserv.ai
0
Stumbled into this video by Openserv and was really impressed by the UI. Do you guys think this is what mass adoption of AI Agents will look like? As a non tech savvy person, this looks like a dream for me to automate my work!
2024-12-27T20:32:41
https://v.redd.it/q098xb2obg9e1
BejaiaDz
/r/LocalLLaMA/comments/1hno6w0/ai_agents_marketplace_by_openservai/
1970-01-01T00:00:00
0
{}
1hno6w0
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/q098xb2obg9e1/DASHPlaylist.mpd?a=1738053170%2CZjZkYTllOTBmOGMyOWYwZGM2ZmU4ZDk0OGYwM2ZiMjViMGQwNzVkZDA1YWVhZjk5Y2E4YzlkMDI1NDgyNGE1MQ%3D%3D&v=1&f=sd', 'duration': 116, 'fallback_url': 'https://v.redd.it/q098xb2obg9e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/q098xb2obg9e1/HLSPlaylist.m3u8?a=1738053170%2CNDRkZjkyZjNlMGJlODJiOTMyYjQwNWZmMDBiODdjZGRhMzdhODQ3Mjg0ZGQyODU0MTg3N2RhMWU1NmRhNWRlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/q098xb2obg9e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1hno6w0
/r/LocalLLaMA/comments/1hno6w0/ai_agents_marketplace_by_openservai/
false
false
https://external-preview…cf7af3877a64780a
0
{'enabled': False, 'images': [{'id': 'bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF.png?width=108&crop=smart&format=pjpg&auto=webp&s=d8a562eff8cae2d298f07dcb20133b8532c023c5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF.png?width=216&crop=smart&format=pjpg&auto=webp&s=a0473f5e7b2f3d2cb173377488e91f46530d5296', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF.png?width=320&crop=smart&format=pjpg&auto=webp&s=722a9d525773d77c72ced6b184ccbf97f16a9018', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF.png?width=640&crop=smart&format=pjpg&auto=webp&s=6c84ca06b87c2f74dcca6eed5bde6bdd319d1182', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF.png?width=960&crop=smart&format=pjpg&auto=webp&s=35dd54e47f2d946a2181b56859a926242fe5badf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ebeb07831caf7e78a0e70fa3b3beed1bfae52eef', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bzYxdGJkMG9iZzllMdViglVYFMdWamO0Y148Y3092D4j69Rj2aJ1X0YRA2LF.png?format=pjpg&auto=webp&s=98ad1ffe8fbd08e4bdcbf7a579ffc1d9fc5bce46', 'width': 1920}, 'variants': {}}]}
Resources on Roleplay Models
8
Hi everyone, I'm starting a project to fine-tune a language model to behave like one of my D&D characters. I'm currently in the exploratory phase, and I'm interested in learning about the most commonly used roleplaying models (in both English and Italian). More importantly, I'd like to understand the standards and best practices used in this field. Some specific questions I have: * Is there a standard data format for representing characters? I know some UIs require specific file types to load character data. * How is world knowledge typically represented in these systems? * Which UIs and projects are most popular in the community? I'm particularly interested in open-source projects so I can learn from their implementation of both UIs and training techniques. Regarding the model's behavior, I want my character to converse naturally like a real person. I'd like to avoid the overly descriptive roleplay style that I've seen in most models so far (for example: "**laughs and looks joyful, while moving the left hand to cover their mouth**"). However, I'm flexible on this since others might prefer that style. While I'm okay with mildly erotic content, I'd prefer to avoid extremely NSFW models. Thank you for your help!
2024-12-27T20:35:08
https://www.reddit.com/r/LocalLLaMA/comments/1hno8v4/resources_on_roleplay_models/
PinballOscuro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hno8v4
false
null
t3_1hno8v4
/r/LocalLLaMA/comments/1hno8v4/resources_on_roleplay_models/
false
false
self
8
null
have to buy replacement computer for work - build big iron vs. pay for APIs?
18
IT keeps emailing that my computer is out of warranty and Must Be Replaced. money is no object but I'm torn what to do. mostly I need to analyze mountains of text. spent a few thou$and pushing data through OpenAI's API (gpt-4o), could easily run through tens of thousands per year at that clip. been experimenting with Llama3.3-70B, which works as well as or better than gpt-4o. got excited and started pricing out my own rig with quad gpu cards totalling 80G of memory, or maybe a mac pro with 192G unified memory. but then, even running a 4-bit quantized version on RunPod it was an order of magnitude (or worse) slower than openAI. I then heard about Groq, which recently added support for 3.3-70B. Groq is ridiculously fast and quite economical. so, I thought better of building my own rig and decided to use Groq. then I learned about fine-tuning. I'm getting good results using gpt-4o-mini, which is horrible untuned but quite attractive when tuned (and super cheap). haven't tried yet, but maybe I could get similar results from a fine-tuned smaller Llama model. Groq doesn't support fine-tuning. of course everything is changing quickly and so maybe API prices will come down generally and there is not much point to having my own hardware. again, I am pushing many GB of data through the API, so speed will always be an issue as I prefer not to wait weeks for things to process. that's my brain dump, any thoughts? sorry it's so long. many thanks!!
2024-12-27T21:06:42
https://www.reddit.com/r/LocalLLaMA/comments/1hnoyi6/have_to_buy_replacement_computer_for_work_build/
vegatx40
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnoyi6
false
null
t3_1hnoyi6
/r/LocalLLaMA/comments/1hnoyi6/have_to_buy_replacement_computer_for_work_build/
false
false
self
18
null
It’s like a sixth sense now, I just know somehow.
459
2024-12-27T21:28:19
https://i.redd.it/53fompsllg9e1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1hnpg40
false
null
t3_1hnpg40
/r/LocalLLaMA/comments/1hnpg40/its_like_a_sixth_sense_now_i_just_know_somehow/
false
false
https://b.thumbs.redditm…eSZr8-fyID6E.jpg
459
{'enabled': True, 'images': [{'id': 'M3c1a0z8_YWvivGWNEd5V-PmSLfE5fGPd49ZvzFIzmk', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/53fompsllg9e1.jpeg?width=108&crop=smart&auto=webp&s=cb89c30faacb55a47c8c562c2c20a61673b38169', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/53fompsllg9e1.jpeg?width=216&crop=smart&auto=webp&s=b63c27690c8c54135cab533ec3603ebe0a1bd976', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/53fompsllg9e1.jpeg?width=320&crop=smart&auto=webp&s=eaa95f9f8890b45a87f3a91c83521935d13b41cd', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/53fompsllg9e1.jpeg?width=640&crop=smart&auto=webp&s=75cf4ada3b55d185494545b28f5ab8f39edaf2fc', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/53fompsllg9e1.jpeg?width=960&crop=smart&auto=webp&s=d4724d58057d6f308ed82ab37b798f907c8090e9', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/53fompsllg9e1.jpeg?width=1080&crop=smart&auto=webp&s=2b542e5b60a8efeaecea15416cdb4fe3ef2f3618', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://preview.redd.it/53fompsllg9e1.jpeg?auto=webp&s=1a0e297e8495076bf454193469c786add7f36038', 'width': 1125}, 'variants': {}}]}
Exo on ipad/iphone
1
[removed]
2024-12-27T22:12:54
https://www.reddit.com/r/LocalLLaMA/comments/1hnqg6x/exo_on_ipadiphone/
Diligent_Stable_2832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnqg6x
false
null
t3_1hnqg6x
/r/LocalLLaMA/comments/1hnqg6x/exo_on_ipadiphone/
false
false
self
1
null
Model not adhering to prompt
1
[removed]
2024-12-27T22:17:18
https://www.reddit.com/r/LocalLLaMA/comments/1hnqjrz/model_not_adhering_to_prompt/
RidesFlysAndVibes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnqjrz
false
null
t3_1hnqjrz
/r/LocalLLaMA/comments/1hnqjrz/model_not_adhering_to_prompt/
false
false
https://b.thumbs.redditm…xx1MqZaXNSVo.jpg
1
null
Which embedding model to use in Open-WebUI?
9
I started playing with the RAG function in Open-WebUI and I set the embedding model to paraphrase-multilingual as suggested in their blog post, but I wonder if that is a good choice. I hardly know anything about embedding models, but I noticed this model was already released in 2019, which seems outdated to me. Is this still SOTA? Also, is there a significant difference in accuracy between embedding models in fp16 and as Q8 gguf quants? I plan to use RAG for text but also for code. (A quick loading sketch for a newer multilingual model follows below.)
2024-12-27T23:15:16
https://www.reddit.com/r/LocalLLaMA/comments/1hnrtla/which_embedding_model_to_use_in_openwebui/
Steuern_Runter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnrtla
false
null
t3_1hnrtla
/r/LocalLLaMA/comments/1hnrtla/which_embedding_model_to_use_in_openwebui/
false
false
self
9
null
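For context on the embedding question above: the paraphrase-multilingual checkpoints come from the original Sentence-BERT era, and newer multilingual embedders generally score higher on the MTEB leaderboard. A minimal sketch of trying one locally (the model choice is just an illustrative example, not an Open-WebUI recommendation):

```python
# load a newer multilingual embedding model via sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large")
docs = [
    "passage: def add(a, b): return a + b",          # e5 models expect query:/passage: prefixes
    "passage: Ein kurzer Beispielsatz auf Deutsch.",
]
emb = model.encode(docs, normalize_embeddings=True)
print(emb.shape)  # (2, 1024) for this model
```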
Any commercially friendly XTTS-V2/ alternatives
6
Please help :( I built a project all around voice cloning via XTTS-v2 and it worked grrrreeatt! Multi-language, good-enough output from a short .wav sample... only to run into the Coqui fiasco: they sold commercial-use licences for 1 year, then quickly closed their business without selling or renewing licences. Is there any revolutionary news in the TTS game, with commercially friendly licences that let me use my outputs? I checked Bark, OpenTTS, YourTTS, ESPnet, Piper... Either they're hacky, the output is horrible (especially when changing languages), or the licence is rubbish...
2024-12-27T23:35:26
https://www.reddit.com/r/LocalLLaMA/comments/1hns8ps/any_commercially_friendly_xttsv2_alternatives/
ranker2241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hns8ps
false
null
t3_1hns8ps
/r/LocalLLaMA/comments/1hns8ps/any_commercially_friendly_xttsv2_alternatives/
false
false
self
6
null
Exo on ipad/iphone
1
[removed]
2024-12-27T23:40:25
https://www.reddit.com/r/LocalLLaMA/comments/1hnscij/exo_on_ipadiphone/
Diligent_Stable_2832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnscij
false
null
t3_1hnscij
/r/LocalLLaMA/comments/1hnscij/exo_on_ipadiphone/
false
false
self
1
null
New open-weight SoTA: DeepSeek-V3 ranked #4 on LiveBench, tops 3.5 Sonnet.
1
[removed]
2024-12-27T23:43:19
https://www.reddit.com/r/LocalLLaMA/comments/1hnseq3/new_openweight_sota_deepseekv3_ranked_4_on/
reggionh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnseq3
false
null
t3_1hnseq3
/r/LocalLLaMA/comments/1hnseq3/new_openweight_sota_deepseekv3_ranked_4_on/
false
false
https://a.thumbs.redditm…Jf6ZvIf87vE8.jpg
1
null
Exo on iphone/iPad
1
[removed]
2024-12-28T00:03:11
https://www.reddit.com/r/LocalLLaMA/comments/1hnstq0/exo_on_iphoneipad/
Diligent_Stable_2832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnstq0
false
null
t3_1hnstq0
/r/LocalLLaMA/comments/1hnstq0/exo_on_iphoneipad/
false
false
self
1
null
Ripper 4x3090 or Ryzen 2x3090?
1
Hi all, Part two of building a local AI rig for a copyright-friendly approach to novel-writing, editing, consulting, etc. with a larger LLM (100B or 200B+). The big decision I have to make: go big with a Threadripper 4x setup, or stick with Ryzen at 2x (if indeed these are the right choices). I'd love to hear thoughts from those more experienced, especially using pooled GPUs with NVLink. |**Component**|**Ryzen 9 Build (2x Pre-Bought RTX 3090)**|**Threadripper Build (4x RTX 3090)**| |:-|:-|:-| |**CPU**|AMD Ryzen 9 7950X|AMD Threadripper 3960X| |**Cooling**|Lian Li Galahad 360mm AIO|NZXT Kraken Elite 360mm AIO| |**Motherboard**|ASUS ProArt X670E Creator Wi-Fi|ASRock TRX40 WS| |**RAM**|64GB Crucial DDR5-5600 (Dual Kit)|128GB G.Skill DDR4-3600 ECC| |**Storage**|1TB Samsung 990 Pro NVMe SSD|1TB Samsung 990 Pro NVMe SSD| |**GPUs**|2x Pre-Bought RTX 3090 (User-Owned)|4x RTX 3090 (2 Pre-Bought + 2 Used at $700)| |**Power Supply**|Corsair AX1600i|Corsair AX1600i| |**Case**|Lian Li V3000 Plus|Lian Li V3000 Plus| |**Additional Cooling**|Pre-installed AIO and case fans|Pre-installed AIO and case fans| |**Estimated Total**|**Mid 2k range**|**Mid 5k range**| Interested in your thoughts -- what would you do in my shoes? And are there pitfalls I'm overlooking? Obviously price is an issue, though future-proofing is definitely a concern as models get larger and larger.
2024-12-28T01:15:39
https://www.reddit.com/r/LocalLLaMA/comments/1hnua2k/ripper_4x3090_or_ryzen_2x3090/
Lunrun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnua2k
false
null
t3_1hnua2k
/r/LocalLLaMA/comments/1hnua2k/ripper_4x3090_or_ryzen_2x3090/
false
false
self
1
null
Deepseek V3 ties for first in the weeb japanese translation leaderboard
127
2024-12-28T01:42:51
https://huggingface.co/datasets/lmg-anon/vntl-leaderboard
Charuru
huggingface.co
1970-01-01T00:00:00
0
{}
1hnut1f
false
null
t3_1hnut1f
/r/LocalLLaMA/comments/1hnut1f/deepseek_v3_ties_for_first_in_the_weeb_japanese/
false
false
https://b.thumbs.redditm…p9HXRLX0kJ5Y.jpg
127
{'enabled': False, 'images': [{'id': '0pHzxetpmsBZm01e_Je6uVg0ghLWxh14uhAkrDegrvI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vvsxu_BOpjL_k53WZv2dkdOb_IW_bqzq5M8MQ4qMBBc.jpg?width=108&crop=smart&auto=webp&s=3ccfaab1f954b689485baa31bfdc8f3b6e78d9ee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vvsxu_BOpjL_k53WZv2dkdOb_IW_bqzq5M8MQ4qMBBc.jpg?width=216&crop=smart&auto=webp&s=c87b10a006926856e9e204f87d5ec9a807a59c65', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vvsxu_BOpjL_k53WZv2dkdOb_IW_bqzq5M8MQ4qMBBc.jpg?width=320&crop=smart&auto=webp&s=bc3940e59e94d7a30b1254f8b46c7b45d29ad4f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vvsxu_BOpjL_k53WZv2dkdOb_IW_bqzq5M8MQ4qMBBc.jpg?width=640&crop=smart&auto=webp&s=21dc3bfb4568f798e24d554584bfa2d4f0451c7d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vvsxu_BOpjL_k53WZv2dkdOb_IW_bqzq5M8MQ4qMBBc.jpg?width=960&crop=smart&auto=webp&s=f6f111a5f6dbb45ad3a7265727222ad33df523ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vvsxu_BOpjL_k53WZv2dkdOb_IW_bqzq5M8MQ4qMBBc.jpg?width=1080&crop=smart&auto=webp&s=720164396a2fb91d735a69722f2af7744deb9cb0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vvsxu_BOpjL_k53WZv2dkdOb_IW_bqzq5M8MQ4qMBBc.jpg?auto=webp&s=448e091b70d957317431ee41d0e8078967848558', 'width': 1200}, 'variants': {}}]}
DeepSeek will need almost 5 hours to generate 1 dollar worth of tokens
492
Starting March, DeepSeek will need almost 5 hours to generate 1 dollar worth of tokens. With Sonnet, a dollar goes away after just 18 minutes. This blows my mind 🤯
2024-12-28T02:07:40
https://www.reddit.com/r/LocalLLaMA/comments/1hnva51/deepseek_will_need_almost_5_hours_to_generate_1/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnva51
false
null
t3_1hnva51
/r/LocalLLaMA/comments/1hnva51/deepseek_will_need_almost_5_hours_to_generate_1/
false
false
self
492
null
Is there a better model than Mistral Small 22b for creative writing right now?
61
Months later, the writing quality from Small seems unmatched. Uncensored, long context, and very good prompt following. Extremely good at positioning (no characters shaking hands from across the room) and avoids cliches really well. Varies response lengths fairly well and avoids repetition. **Llama 70b** * Great instruction following, but larger, censored, and prone to repetition **Command-R (32b)** * Better writing quality, but larger, consistently dumber, forgetting key details beyond 6k context **Qwen 2.5** * Censored beyond belief; I keep trying it, and it surprises me sometimes, but the writing quality is like a 7th-grade essay Anyone who's roleplaying or writing, what are you using? I feel like there just aren't any long-context writing models, Mistral Small being the exception.
2024-12-28T02:15:38
https://www.reddit.com/r/LocalLLaMA/comments/1hnvfhm/is_there_a_better_model_than_mistral_small_22b/
Kep0a
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnvfhm
false
null
t3_1hnvfhm
/r/LocalLLaMA/comments/1hnvfhm/is_there_a_better_model_than_mistral_small_22b/
false
false
self
61
null
Vision LLMs are amazing for OCR + Named Entity Recognition [+Tiny Benchmark]
1
[removed]
2024-12-28T02:59:38
https://www.reddit.com/r/LocalLLaMA/comments/1hnw8qq/vision_llms_are_amazing_for_ocr_named_entity/
jjbarrea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnw8qq
false
null
t3_1hnw8qq
/r/LocalLLaMA/comments/1hnw8qq/vision_llms_are_amazing_for_ocr_named_entity/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XDMwt2u-gz-AgDpDz62hXsX9U4iTZsj-xOYjNv7UuDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OaOxZKpIcW2CXstelUL5uoUcMWB57jZGrCV7rx-OFLE.jpg?width=108&crop=smart&auto=webp&s=3cd6be4f4805e02abdf3f18765925f0452442aa2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OaOxZKpIcW2CXstelUL5uoUcMWB57jZGrCV7rx-OFLE.jpg?width=216&crop=smart&auto=webp&s=aae00e8daed78adf0532d3fc105325ee64cf38d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OaOxZKpIcW2CXstelUL5uoUcMWB57jZGrCV7rx-OFLE.jpg?width=320&crop=smart&auto=webp&s=be37bd05b7123dc3b24d165c8bfd2af2b09c005d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OaOxZKpIcW2CXstelUL5uoUcMWB57jZGrCV7rx-OFLE.jpg?width=640&crop=smart&auto=webp&s=099175e9a8f4f04ed35baf0aeca0914919b3f847', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OaOxZKpIcW2CXstelUL5uoUcMWB57jZGrCV7rx-OFLE.jpg?width=960&crop=smart&auto=webp&s=8828f04ea8e8cde26aca3e45bd135193dc1fb1f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OaOxZKpIcW2CXstelUL5uoUcMWB57jZGrCV7rx-OFLE.jpg?width=1080&crop=smart&auto=webp&s=55afc9c129c77a9950697eb7d7d8e458e5916a4e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OaOxZKpIcW2CXstelUL5uoUcMWB57jZGrCV7rx-OFLE.jpg?auto=webp&s=f0347a53b894717731856f31d6b146e7af72e922', 'width': 1200}, 'variants': {}}]}
Why does Phi-4 have the same architecture as Phi-3
0
So I'm confused, and I know this is not the official Microsoft model on #ollama, but… why does the architecture say #Phi3 for the #Phi4 model from #Ollama downloads? Is someone running something experimental, is it wrong metadata, bad packaging, "or" a #hoax? Am I misunderstanding this?
2024-12-28T03:35:32
https://www.reddit.com/gallery/1hnwwg5
AIForOver50Plus
reddit.com
1970-01-01T00:00:00
0
{}
1hnwwg5
false
null
t3_1hnwwg5
/r/LocalLLaMA/comments/1hnwwg5/why_does_phi4_have_the_same_architecture_as_phi3/
false
false
https://b.thumbs.redditm…9Q-vEkDxtBEs.jpg
0
null
I don't get it.
174
The new DeepSeek model is approximately 600B parameters, so how is DeepSeek running it on their website so fast and offering the API so cheap? And why are people so hyped? It's a 600B model that can't even fit in 80GB of VRAM. Doesn't it take hours to generate a single response on an H100 GPU, considering the size of the model? My 70B Llama takes a while to generate on an A100 (I am using a cloud GPU), and that's just a 70B model; 600B is many times that size, yet DeepSeek offers it for a very cheap price and it's very fast on their website. (Some rough mixture-of-experts arithmetic on this is sketched below.)
2024-12-28T03:48:59
https://www.reddit.com/r/LocalLLaMA/comments/1hnx4z3/i_dont_get_it/
AlgorithmicKing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnx4z3
false
null
t3_1hnx4z3
/r/LocalLLaMA/comments/1hnx4z3/i_dont_get_it/
false
false
self
174
null
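For what it's worth, the usual resolution to the puzzle above is that DeepSeek-V3 is a mixture-of-experts model: roughly 671B total parameters, but only about 37B activated per token, so per-token weight traffic looks more like a ~37B dense model, and batching (see the post further down) multiplies aggregate throughput. A rough, assumption-laden sketch:

```python
# ballpark decode-speed arithmetic (every number here is a rough assumption)
total_params_b  = 671    # DeepSeek-V3 total parameters, billions (MoE)
active_params_b = 37     # parameters activated per token
bytes_per_param = 1.0    # assume ~8-bit weights
hbm_bw_gb_s     = 3350   # approx. memory bandwidth of a single H200, GB/s

# single-stream decode is roughly bound by reading the *active* weights once per token
tok_s_active = hbm_bw_gb_s / (active_params_b * bytes_per_param)
tok_s_dense  = hbm_bw_gb_s / (total_params_b * bytes_per_param)
print(f"~{tok_s_active:.0f} tok/s upper bound vs ~{tok_s_dense:.0f} if all weights were active")
```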
Docker image for quick start
1
[removed]
2024-12-28T03:56:52
https://www.reddit.com/r/LocalLLaMA/comments/1hnxa07/docker_image_for_quick_start/
Clyde_Frog_Spawn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnxa07
false
null
t3_1hnxa07
/r/LocalLLaMA/comments/1hnxa07/docker_image_for_quick_start/
false
false
self
1
null
Best ASR LLM on SoC | Whisper on RPI?
1
[removed]
2024-12-28T04:20:52
https://www.reddit.com/r/LocalLLaMA/comments/1hnxpci/best_asr_llm_on_soc_whisper_on_rpi/
Legal_Carpet1700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnxpci
false
null
t3_1hnxpci
/r/LocalLLaMA/comments/1hnxpci/best_asr_llm_on_soc_whisper_on_rpi/
false
false
self
1
null
Are we witnessing a LLLLLM?
0
I wonder if we are witnessing a Local Low of Local Large Language Models. What I mean by this is a temporary (local) minimum (low) of local LLM convenience when compared to cloud services. Of course, convenience depends on the use case and many other factors, so it is hard to tell whether what I witness is what others witness. Hence this post. So here is my experience. Until a few weeks ago, I was using local LLMs more than cloud. Low latency, privacy, cost, and the general preference to run your own stuff. However, in the last weeks things have changed substantially: for me it started with Gemini 1206, but then we had Gemini Flash 2.0 Thinking, o1, DeepSeek-V3 (by local, I mean something I/the average user can run locally, so not the half-trillion parameters of DeepSeek). QwQ, meanwhile, was not up to the hype it generated for me. As a matter of fact, I have found myself using almost exclusively cloud services for three weeks now. They are quicker, cheaper, and more accurate than before, and I can for instance mix their usage to get a (distorted) feeling of privacy. Local performance is not as close as it used to be. Sure, someone else may have some recent local model that just fits their needs. But in the last few weeks (which is a relevant stretch of time in such a fast-moving field), my feeling is that local hit a low w.r.t. cloud. There are some reasons to believe this is temporary. Llama 4 is coming soon (though a smaller model may still be months away), and important hardware releases are on the way. But there are also reasons to believe otherwise. In general, I believe the open-source community will stay very competitive. But let's not confuse open source and local inference. What is your (very recent) experience? Any educated guess for what's coming?
2024-12-28T06:05:10
https://www.reddit.com/r/LocalLLaMA/comments/1hnzghe/are_we_witnessing_a_lllllm/
HairyAd9854
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hnzghe
false
null
t3_1hnzghe
/r/LocalLLaMA/comments/1hnzghe/are_we_witnessing_a_lllllm/
false
false
self
0
null
LLM comparatives
0
There are many ways to compare models, and even more comparators. But just to confirm: are lmarena-ai/chatbot-arena-leaderboard and k-mktr/gpu-poor-llm-arena what people here use most?
2024-12-28T06:58:03
https://www.reddit.com/r/LocalLLaMA/comments/1ho08io/llm_comparatives/
xmmr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho08io
false
null
t3_1ho08io
/r/LocalLLaMA/comments/1ho08io/llm_comparatives/
false
false
self
0
null
merry Xmas fam! Greetings from ALOHA, Hilton @ Hawaii
1
2024-12-28T07:09:50
https://youtube.com/shorts/29W7I2nhFis?si=kK-_HWobXE-3-kOw
Shoddy-Bid7140
youtube.com
1970-01-01T00:00:00
0
{}
1ho0evd
false
null
t3_1ho0evd
/r/LocalLLaMA/comments/1ho0evd/merry_xmas_fam_greetings_from_aloha_hilton_hawaii/
false
false
default
1
null
Llms still give hugely different math models
0
I tried the following query on deepseek, qwen math 72b, Claude.ai, chatgpt 4, o1. Let X be a random integer between 1024 and 2047 inclusive. What is the probability it has a prime factor between 512 and 1023? deepseek says between 6.4% and 12.9% as its answer. qwen math 72b says 1/2, which is completely wrong. Claude.ai says 102/1024. Chatgpt 4 failed to give a number at all. o1 says 101/1024. No two models gave the same answer! (A brute-force check is sketched below.)
2024-12-28T07:29:02
https://www.reddit.com/r/LocalLLaMA/comments/1ho0oo3/llms_still_give_hugely_different_math_models/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho0oo3
false
null
t3_1ho0oo3
/r/LocalLLaMA/comments/1ho0oo3/llms_still_give_hugely_different_math_models/
false
false
self
0
null
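For what it's worth, the query above is small enough to settle exactly by brute force; a minimal sketch in plain Python:

```python
# brute-force check of the probability the models disagreed on
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

primes = [p for p in range(512, 1024) if is_prime(p)]   # primes in [512, 1023]
hits = sum(1 for x in range(1024, 2048)                 # X in [1024, 2047]
           if any(x % p == 0 for p in primes))
print(f"{hits}/1024 = {hits / 1024:.4f}")
```

This should print 101/1024 ≈ 0.0986, i.e. o1's answer (and inside the 6.4%-12.9% range deepseek gave).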
DeepSeek-V3 has a problem: it keeps claiming to be ChatGPT
0
2024-12-28T07:33:01
https://www.neowin.net/news/deepseek-v3-has-a-problem-it-keeps-claiming-to-be-chatgpt/
Ok-Nerves
neowin.net
1970-01-01T00:00:00
0
{}
1ho0qnh
false
null
t3_1ho0qnh
/r/LocalLLaMA/comments/1ho0qnh/deepseekv3_has_a_problem_it_keeps_claiming_to_be/
false
false
https://b.thumbs.redditm…g7QRRrFxs7QM.jpg
0
{'enabled': False, 'images': [{'id': '_Hp3_E-npL5iqc_OAx5SaQllK7fllznSdMSq_RB4q2c', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/ZG1Hdb5eeApta3vN-FRBYBt7JrnXuzT8EeEd32ULjsQ.jpg?width=108&crop=smart&auto=webp&s=b59806efd53d4127b3867c5bdb7f9ce3b1b82b32', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/ZG1Hdb5eeApta3vN-FRBYBt7JrnXuzT8EeEd32ULjsQ.jpg?width=216&crop=smart&auto=webp&s=de18a77aa55bbaa9478f35ada5e258aac2444c8d', 'width': 216}, {'height': 189, 'url': 'https://external-preview.redd.it/ZG1Hdb5eeApta3vN-FRBYBt7JrnXuzT8EeEd32ULjsQ.jpg?width=320&crop=smart&auto=webp&s=2773515d55a426dee0ab85991617298ab4248e68', 'width': 320}, {'height': 378, 'url': 'https://external-preview.redd.it/ZG1Hdb5eeApta3vN-FRBYBt7JrnXuzT8EeEd32ULjsQ.jpg?width=640&crop=smart&auto=webp&s=2c8f05cc3b8db5c98f078dfc19ac91c25b456ee1', 'width': 640}], 'source': {'height': 450, 'url': 'https://external-preview.redd.it/ZG1Hdb5eeApta3vN-FRBYBt7JrnXuzT8EeEd32ULjsQ.jpg?auto=webp&s=a82f38677ec7d16c9088d62cb8e9c7ab350c2ba2', 'width': 760}, 'variants': {}}]}
DeepSeek does not need 5 hours to generate $1 worth of tokens. Due to batching, they can get that in about 1 minute
205
I saw this [heavily upvoted post](https://www.reddit.com/r/LocalLLaMA/comments/1hnva51/deepseek_will_need_almost_5_hours_to_generate_1/) and felt it was misleading. All LLM providers use batching during inference, which allows a single instance of an LLM like Deepseek V3 to serve hundreds of customers at once. If we consider a system such as an 8xH200 hosting Deepseek V3, it looks like they can use a batch size of about 256 while still achieving 60 tokens/sec/user. This means they are actually generating 15,000 tokens/sec, or roughly $1/min or $60/hr. Divide that by the 8 GPUs and that is about $7.50/gpu/hr, which is very reasonable. There's a good (but older) post on batching [here](https://www.perplexity.ai/hub/blog/turbocharging-llama-2-70b-with-nvidia-h100). Also, note that yes, Sonnet uses batching as well, but since we have no idea of the size of the model (it likely has a lot more active params) they have to limit the batch size a lot to still get a reasonable tokens/sec/user, which is why it is more expensive. I also think they take higher profit. If any of my calculations seem off please let me know. (The arithmetic is reproduced in a short sketch below.)
2024-12-28T07:44:04
https://www.reddit.com/r/LocalLLaMA/comments/1ho0w52/deepseek_does_not_need_5_hours_to_generate_1/
jd_3d
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho0w52
false
null
t3_1ho0w52
/r/LocalLLaMA/comments/1ho0w52/deepseek_does_not_need_5_hours_to_generate_1/
false
false
self
205
null
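A short sketch reproducing the post's arithmetic (every input is the post's assumption, not a measurement; the per-token price is an illustrative output-token rate):

```python
# back-of-envelope batching economics for one 8xH200 instance
batch_size     = 256    # concurrent users served by one model instance
tok_s_per_user = 60     # decode speed each user still sees
gpus           = 8
usd_per_mtok   = 1.10   # illustrative $/1M output tokens

agg_tok_s  = batch_size * tok_s_per_user            # aggregate throughput
usd_per_hr = agg_tok_s * 3600 / 1e6 * usd_per_mtok
print(f"{agg_tok_s:,} tok/s -> ${usd_per_hr:.0f}/hr -> ${usd_per_hr / gpus:.2f}/GPU/hr")
# -> 15,360 tok/s -> $61/hr -> $7.60/GPU/hr, in line with the post's ~$60/hr
```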
DeepSeek V3 now available on Hyperbolic
1
2024-12-28T07:49:30
https://twitter.com/zjasper666/status/1872657228676895185
Ok_Can_593
twitter.com
1970-01-01T00:00:00
0
{}
1ho0yvu
false
{'oembed': {'author_name': 'Jasper 🤘🌪️', 'author_url': 'https://twitter.com/zjasper666', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">A small Christmas gift 🎅 from <a href="https://twitter.com/hyperbolic_labs?ref_src=twsrc%5Etfw">@hyperbolic_labs</a>: you can now play with <a href="https://twitter.com/deepseek_ai?ref_src=twsrc%5Etfw">@deepseek_ai</a> v3 through our APIs! It&#39;s running <a href="https://twitter.com/lmsysorg?ref_src=twsrc%5Etfw">@lmsysorg</a>&#39;s sglang on H200. Kudos to <a href="https://twitter.com/leshenj15?ref_src=twsrc%5Etfw">@leshenj15</a> and <a href="https://twitter.com/Yuchenj_UW?ref_src=twsrc%5Etfw">@Yuchenj_UW</a>! We will keep optimizing the speed 🚄<br><br>Try it yourself:<br><br>curl -X POST &quot;<a href="https://t.co/QHM6J98EvN">https://t.co/QHM6J98EvN</a>&quot; \\… <a href="https://t.co/fKLHsYJskF">pic.twitter.com/fKLHsYJskF</a></p>&mdash; Jasper 🤘🌪️ (@zjasper666) <a href="https://twitter.com/zjasper666/status/1872657228676895185?ref_src=twsrc%5Etfw">December 27, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/zjasper666/status/1872657228676895185', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_1ho0yvu
/r/LocalLLaMA/comments/1ho0yvu/deepseek_v3_now_available_on_hyperbolic/
false
false
https://a.thumbs.redditm…79KDAgvg_ni4.jpg
1
{'enabled': False, 'images': [{'id': 'juzrMVmPUHcydzZW9futKwlnUZ9-04yRs-I5_yVqS-I', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/lw4PJYP8rvt9F8HuDZFas4lI6HB8vj1SMpjghF-2fyo.jpg?width=108&crop=smart&auto=webp&s=ecb93fe30165797ffcecb6cc29c1ed0212d9c413', 'width': 108}], 'source': {'height': 55, 'url': 'https://external-preview.redd.it/lw4PJYP8rvt9F8HuDZFas4lI6HB8vj1SMpjghF-2fyo.jpg?auto=webp&s=b225fb659b13af7d0de160072e1fa67bbfd1de4c', 'width': 140}, 'variants': {}}]}
Help Needed: Best Tool or Model to Chunk Annual Reports for a RAG System
9
Hi, I’m working on a project that involves processing **annual reports (in PDF format)** to build a **Retrieval-Augmented Generation (RAG) system**. The goal is to **chunk these documents effectively** and store them in a vector database for question-answering tasks. I’m looking for the best **tools, models, or libraries** to help with this task. Here’s the context: * The PDFs are **not scanned** (text-based, no OCR required). * The documents are highly structured with **narrative text, tables, titles, and multi-column layouts**. # Key Features I’m Looking For: 1. **Layout Awareness**: Ability to differentiate between titles, paragraphs, tables, etc. 2. **Table Extraction**: Accurate table parsing and preservation of structure (e.g., rows, columns). 3. **Customizable Chunking**: Ability to split text into meaningful sections (e.g., token-based, semantic-based, or by document elements). 4. **Metadata Extraction**: Extracting section titles, page numbers, and other context. 5. **Compatibility with Vector Databases**: Outputs that can be indexed in tools like Weaviate. 6. **Scalability**: Ability to handle large, multi-page PDFs efficiently. 7. **Semantic Understanding**: Support for creating contextually coherent chunks. # Tools/Models I’ve Considered So Far: * **PyPDF**: Works for basic text extraction but lacks layout awareness or semantic chunking. * **LayoutLM**: Seems promising for layout-aware processing, but I’m unsure whether it's usable for non-scanned documents. Can I use this for non-scanned long context PDFs? * **Chipper**: Used by Unstructured-io. Read about it in its paper. But is it available for use? I’d love recommendations for: 1. **Open-source tools/models** that work well for similar tasks. 2. Any **experience or tips** for chunking and processing financial reports in RAG systems. 3. Suggestions for how to improve the chunking process for accurate retrieval and answer generation. Looking forward to your advice! Thanks in advance! 😊 Feel free to share your experiences, relevant links, or projects you've worked on! (A minimal chunking sketch follows below.)
2024-12-28T08:16:18
https://www.reddit.com/r/LocalLLaMA/comments/1ho1bwk/help_needed_best_tool_or_model_to_chunk_annual/
Physical-Security115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho1bwk
false
null
t3_1ho1bwk
/r/LocalLLaMA/comments/1ho1bwk/help_needed_best_tool_or_model_to_chunk_annual/
false
false
self
9
null
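A layout-naive baseline for the post above: page-wise extraction with overlapping character chunks plus a crude title heuristic (a structure-aware tool like those listed would replace the extraction step; the title heuristic is a guess, not a standard):

```python
# minimal PDF chunking sketch with pypdf
from pypdf import PdfReader

def chunk_report(path: str, max_chars: int = 2000, overlap: int = 200):
    reader = PdfReader(path)
    chunks = []
    for page_no, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        if not text:
            continue
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        # crude section-title heuristic: first short Title Case / ALL CAPS line
        title = next((ln for ln in lines
                      if len(ln) < 60 and (ln.istitle() or ln.isupper())), None)
        for start in range(0, len(text), max_chars - overlap):
            chunks.append({"text": text[start:start + max_chars],
                           "page": page_no,      # metadata for retrieval
                           "section": title})
    return chunks

# chunks = chunk_report("annual_report.pdf")  # ready to embed and index in Weaviate
```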
RTX 4090 48GB version tested
1
[removed]
2024-12-28T08:41:04
https://www.reddit.com/r/LocalLLaMA/comments/1ho1nmq/rtx_4090_48gb_version_tested/
aliencaocao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho1nmq
false
null
t3_1ho1nmq
/r/LocalLLaMA/comments/1ho1nmq/rtx_4090_48gb_version_tested/
false
false
https://b.thumbs.redditm…OVOPDR-EBoww.jpg
1
{'enabled': False, 'images': [{'id': 'Hgzkn0EGXS-cC-sJJ6iMJNAt3kOzloW1JbVUMjWbLh8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=108&crop=smart&auto=webp&s=304e41ff8d7c0496790d981f6f8df891b52ab5e6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=216&crop=smart&auto=webp&s=fd711b110650e8b8f4d8eb7142bb82d4611f600a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=320&crop=smart&auto=webp&s=ac121054c3c60de7ca3d504a4c23b520269247fd', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=640&crop=smart&auto=webp&s=e9eee97aca47c939a2b7a2ade266c998b200b44a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=960&crop=smart&auto=webp&s=1c97ca6cd63d89690e717c13ba5c6bbb188207c5', 'width': 960}], 'source': {'height': 1065, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?auto=webp&s=2f920dd9c111fd5f1be43dbcf1ef142f032471ea', 'width': 1065}, 'variants': {}}]}
the WHALE has landed
1
2024-12-28T09:19:15
https://i.redd.it/vt7pxwx44k9e1.png
44seconds
i.redd.it
1970-01-01T00:00:00
0
{}
1ho25dy
false
null
t3_1ho25dy
/r/LocalLLaMA/comments/1ho25dy/the_whale_has_landed/
false
false
https://b.thumbs.redditm…e1URaYeOSE1g.jpg
1
{'enabled': True, 'images': [{'id': '-IJOP-OtGjQoC0-obiS7t10vp_5JmTlcM5TmnH_WQgo', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/vt7pxwx44k9e1.png?width=108&crop=smart&auto=webp&s=4334b81e011c212990cfea02b2901ca5382c72e4', 'width': 108}, {'height': 283, 'url': 'https://preview.redd.it/vt7pxwx44k9e1.png?width=216&crop=smart&auto=webp&s=88673c19acd28a8bf248810156f87f8058fbd50a', 'width': 216}, {'height': 420, 'url': 'https://preview.redd.it/vt7pxwx44k9e1.png?width=320&crop=smart&auto=webp&s=5e6946f79b9a73e8370b33813b75d8762873f87c', 'width': 320}, {'height': 840, 'url': 'https://preview.redd.it/vt7pxwx44k9e1.png?width=640&crop=smart&auto=webp&s=b73da0d41fb6e5b5da23682d24695de738bfe4cc', 'width': 640}], 'source': {'height': 919, 'url': 'https://preview.redd.it/vt7pxwx44k9e1.png?auto=webp&s=aaf2e92ba71ce5d875f1232b0ef7a9ddc9d2381a', 'width': 700}, 'variants': {}}]}
the WHALE has landed
1788
2024-12-28T09:23:45
https://i.redd.it/y61vxgf85k9e1.png
fourDnet
i.redd.it
1970-01-01T00:00:00
0
{}
1ho27fr
false
null
t3_1ho27fr
/r/LocalLLaMA/comments/1ho27fr/the_whale_has_landed/
false
false
https://b.thumbs.redditm…LENRR9V6NXdU.jpg
1788
{'enabled': True, 'images': [{'id': 'q5eWdwyMloZthiAcTPl6RTmBePehPwL6eifI_4qWM2k', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/y61vxgf85k9e1.png?width=108&crop=smart&auto=webp&s=6c01fb1cc16d2e5bc569e3e339cdd034ecae1a9d', 'width': 108}, {'height': 283, 'url': 'https://preview.redd.it/y61vxgf85k9e1.png?width=216&crop=smart&auto=webp&s=b1aa4bc24737b8f74d12344cd785457cfbe64be8', 'width': 216}, {'height': 420, 'url': 'https://preview.redd.it/y61vxgf85k9e1.png?width=320&crop=smart&auto=webp&s=22e895f89da4987f24945e48c2ce4a91d3bf6d83', 'width': 320}, {'height': 840, 'url': 'https://preview.redd.it/y61vxgf85k9e1.png?width=640&crop=smart&auto=webp&s=f9e341294c273adcb7d31d095bdf45a9c33435f2', 'width': 640}], 'source': {'height': 919, 'url': 'https://preview.redd.it/y61vxgf85k9e1.png?auto=webp&s=d6c8434457ade61744acf4a828d56d90c138bc71', 'width': 700}, 'variants': {}}]}
GitHub - llmgenai/LLMInterviewQuestions: This repository contains LLM (Large language model) interview question asked in top companies like Google, Nvidia , Meta , Microsoft & fortune 500 companies.
0
Having taken over 50 interviews myself, I can confidently say that this is the best resource for preparing for Gen AI/LLM interviews. This is the only list of questions you need to go through, with more than 100 real-world interview questions. This guide includes questions from a wide range of topics, from the basics of prompt engineering to advanced subjects like LLM architecture, deployments, cost optimization, and numerous scenario-based questions asked in real-world interviews.
2024-12-28T10:30:46
https://github.com/llmgenai/LLMInterviewQuestions/tree/main
buntyshah2020
github.com
1970-01-01T00:00:00
0
{}
1ho33b5
false
null
t3_1ho33b5
/r/LocalLLaMA/comments/1ho33b5/github_llmgenaillminterviewquestions_this/
false
false
https://b.thumbs.redditm…yN1ET0XLB9sk.jpg
0
{'enabled': False, 'images': [{'id': 'RkX2CcJjv3XAwLjPUpk6kVYHyJv94fDaQM1i_AFQsoM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qDbV-4ROV4Zpo8D23FiVYWL2q8UAODRNB18WsNP3Fxw.jpg?width=108&crop=smart&auto=webp&s=4fdba0f267d6ce50111a0d17475f9a6f7b534dfa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qDbV-4ROV4Zpo8D23FiVYWL2q8UAODRNB18WsNP3Fxw.jpg?width=216&crop=smart&auto=webp&s=d73fe009b81b83eff2d964590ee3f10bfa4b8348', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qDbV-4ROV4Zpo8D23FiVYWL2q8UAODRNB18WsNP3Fxw.jpg?width=320&crop=smart&auto=webp&s=de16146aa3f3581fd37b91e78dd31f62563ced15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qDbV-4ROV4Zpo8D23FiVYWL2q8UAODRNB18WsNP3Fxw.jpg?width=640&crop=smart&auto=webp&s=0056af9db781c85651d5e3257f86f519045a0156', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qDbV-4ROV4Zpo8D23FiVYWL2q8UAODRNB18WsNP3Fxw.jpg?width=960&crop=smart&auto=webp&s=a52b2494e2a1af9d9e7d7d90e4e6a55c74558f94', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qDbV-4ROV4Zpo8D23FiVYWL2q8UAODRNB18WsNP3Fxw.jpg?width=1080&crop=smart&auto=webp&s=56ac99fbd324a95483c315115fcbd290b0dac484', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qDbV-4ROV4Zpo8D23FiVYWL2q8UAODRNB18WsNP3Fxw.jpg?auto=webp&s=1e0dbb2b37f2b252b45cdc41bffd7ecc5f708af0', 'width': 1200}, 'variants': {}}]}
Need to choose an LLM for a project
1
[removed]
2024-12-28T10:33:45
https://www.reddit.com/r/LocalLLaMA/comments/1ho34rl/need_to_choose_an_llm_for_a_project/
AdamTagnot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho34rl
false
null
t3_1ho34rl
/r/LocalLLaMA/comments/1ho34rl/need_to_choose_an_llm_for_a_project/
false
false
self
1
null
Tipps for designing system prompts
1
Does anyone have any tips for designing system prompts? How they are worded seems to be a huge factor. For example: > only search for information on example.com Does not work. > always add site:example.com to search requests Works perfectly fine. If anyone has any insights, what does system prompt length change? I've been working with mistral-small, mostly, and using the system prompts to create specific agents.
2024-12-28T10:48:57
https://www.reddit.com/r/LocalLLaMA/comments/1ho3c1w/tipps_for_designing_system_prompts/
WolpertingerRumo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho3c1w
false
null
t3_1ho3c1w
/r/LocalLLaMA/comments/1ho3c1w/tipps_for_designing_system_prompts/
false
false
self
1
null
Tips for designing system prompts
4
Does anyone have any tips for designing system prompts? How they are worded seems to be a huge factor. For example: > only search for information on example.com Does not work. > always add site:example.com to search requests Works perfectly fine. If anyone has any insights, what does system prompt length change? I've been working with mistral-small, mostly, and using the system prompts to create specific agents.
2024-12-28T10:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1ho3cuk/tips_for_designing_system_prompts/
WolpertingerRumo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho3cuk
false
null
t3_1ho3cuk
/r/LocalLLaMA/comments/1ho3cuk/tips_for_designing_system_prompts/
false
false
self
4
null
Need to choose an LLM for a project
0
OK, so I am trying to create a general-purpose AI assistant and expand its functions over time so it can do physical tasks using Arduino / RPi / robots. I was looking for an LLM that has a chat feature, obviously, plus the ability to add functions, so if I tell it to do something it should activate the function that performs the action I want. It must not be compute-hungry, because my resources are limited to about 4-6 GB of regular RAM and a 400 GB SSD, and it would be great if it were free and open source. (A minimal function-dispatch sketch follows below.)
2024-12-28T11:02:15
https://www.reddit.com/r/LocalLLaMA/comments/1ho3in2/need_to_choose_an_llm_for_a_project/
AdamTagnot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho3in2
false
null
t3_1ho3in2
/r/LocalLLaMA/comments/1ho3in2/need_to_choose_an_llm_for_a_project/
false
false
self
0
null
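On the "activate a function" part of the post above: with small models, the simplest robust pattern is to ask for a JSON reply and dispatch it yourself. A minimal sketch (the function name and the expected reply shape are hypothetical, not any library's API):

```python
import json

def lights_on():
    print("GPIO: lights on")   # an Arduino serial write or RPi.GPIO call would go here

FUNCTIONS = {"lights_on": lights_on}

def handle(model_reply: str):
    """Expect replies like {"function": "lights_on", "args": {}}; else treat as chat."""
    try:
        call = json.loads(model_reply)
        FUNCTIONS[call["function"]](**call.get("args", {}))
    except (json.JSONDecodeError, KeyError, TypeError):
        print(model_reply)     # plain conversational answer

handle('{"function": "lights_on", "args": {}}')
```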
Qwen2-VL-7B is giving very short answers, need help
2
So I am running Qwen2-VL-7B using the following command: .\llama.cpp\build\bin\Release\llama-qwen2vl-cli.exe -m .\models\me\Qwen2-VL-7B\Qwen2-VL-7B-Instruct-Q4_K_L.gguf --mmproj .\models\me\Qwen2-VL-7B\mmproj-Qwen2-VL-7B-Instruct-f32.gguf -p "write down all text in this document without missing anything. thanks" --image '.\models\me\Qwen2-VL-7B\test_ocr.png' -ngl 33 --temp 0.7 --min-p 0.1 -c 1000 First, I'll say its OCR recognition is the best I have seen so far: it tells me what I want to know and doesn't make a single character mistake. But! It's not printing the whole document; it seems stuck at some fixed size, and I honestly have no idea whether this is controlled through some CLI option. It correctly writes a bunch of lines from the document but then ends abruptly after 3, 5 or 6 lines. Please let me know if this can be changed. I have played with --temp and -c, and setting a large context doesn't change a thing. (A rerun sketch with an explicit generation cap follows below.) I would also like to know why llama-qwen2vl-cli doesn't keep the model loaded: no conversation is possible like when vision support first showed up in llama.cpp. It just loads the model, answers and exits. Also, will this be merged into the main llama-cli? Thanks.
2024-12-28T11:55:10
https://www.reddit.com/r/LocalLLaMA/comments/1ho483t/qwen2vl7b_is_giving_very_short_answers_need_help/
ab2377
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho483t
false
null
t3_1ho483t
/r/LocalLLaMA/comments/1ho483t/qwen2vl7b_is_giving_very_short_answers_need_help/
false
false
self
2
null
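Not a confirmed fix for the post above, but worth ruling out: llama.cpp binaries cap generation with -n / --n-predict, and the vision CLI may be defaulting to a small cap. A sketch of rerunning with an explicit -n and a larger context (Python used only as a convenient wrapper around the same command):

```python
import subprocess

subprocess.run([
    r".\llama.cpp\build\bin\Release\llama-qwen2vl-cli.exe",
    "-m", r".\models\me\Qwen2-VL-7B\Qwen2-VL-7B-Instruct-Q4_K_L.gguf",
    "--mmproj", r".\models\me\Qwen2-VL-7B\mmproj-Qwen2-VL-7B-Instruct-f32.gguf",
    "-p", "write down all text in this document without missing anything. thanks",
    "--image", r".\models\me\Qwen2-VL-7B\test_ocr.png",
    "-ngl", "33", "--temp", "0.7", "--min-p", "0.1",
    "-c", "4096",   # room for image tokens plus a long answer
    "-n", "2048",   # explicit max tokens to generate, instead of the default
], check=True)
```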
Long-term processed context caching.
0
Hey people! I was recently thinking about integrating an LLM into a personal journal app, and one idea came up: can the backend somehow cache processed context, so that it doesn't have to reprocess everything every time I add a new message? It might make sense to cache the already-processed prompt (say, info about me) onto disk, so that every time I ask my assistant something, the server pulls the old 'template' from disk, saving precious compute time. We are talking about a system where fast storage is abundant and GPU power is seriously lacking. Is this something that already exists? What is it called? Is it implemented in llama.cpp? If not, are there other inference backends that support this? (A sketch using llama.cpp's prompt cache follows below.)
2024-12-28T12:29:54
https://www.reddit.com/r/LocalLLaMA/comments/1ho4qp1/longterm_processed_context_caching/
libregrape
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho4qp1
false
null
t3_1ho4qp1
/r/LocalLLaMA/comments/1ho4qp1/longterm_processed_context_caching/
false
false
self
0
null
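This exists in llama.cpp as the prompt cache (my understanding of llama-cli's options; verify with --help on your build): the evaluated prompt state is written to disk and reloaded on later runs, so the shared prefix is not recomputed. A sketch, wrapped in Python for convenience; file names are placeholders:

```python
import subprocess

cmd = ["./llama-cli", "-m", "model.gguf",
       "--prompt-cache", "journal_prefix.bin",   # on-disk cache of the evaluated prompt state
       "-f", "journal_context.txt",              # the long, stable prefix (info about me)
       "-n", "256"]

subprocess.run(cmd, check=True)  # first run: evaluates the prefix and writes the cache
subprocess.run(cmd, check=True)  # later runs: reload the cache, only new tokens are processed
```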
Recommendation for starting the AI thing with Ollama (M4 Mac Mini or NV Card)
1
[removed]
2024-12-28T12:39:01
https://www.reddit.com/r/LocalLLaMA/comments/1ho4vn2/recommendation_for_starting_the_ai_thing_with/
sashX22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho4vn2
false
null
t3_1ho4vn2
/r/LocalLLaMA/comments/1ho4vn2/recommendation_for_starting_the_ai_thing_with/
false
false
self
1
null
hey is deepseak ai better then claude?
0
Like, is it better at coding than Claude AI?
2024-12-28T13:03:29
https://www.reddit.com/r/LocalLLaMA/comments/1ho599d/hey_is_deepseak_ai_better_then_claude/
pro_ut3104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho599d
false
null
t3_1ho599d
/r/LocalLLaMA/comments/1ho599d/hey_is_deepseak_ai_better_then_claude/
false
false
self
0
null
DeepSeek-v3 | Best open-source model on ProLLM
81
Hey everyone! Just wanted to share some quick news -- the hype is real! DeepSeek-v3 is now the best open source model on our benchmark: [check it here](https://prollm.ai/leaderboard/stack-unseen). It's also the cheapest model in the top-10 and shows a 20% improvement across our benchmarks compared to the previous best DeepSeek model. If you're curious about how we do our benchmarking, we published a [paper at NeurIPS](https://arxiv.org/abs/2412.05288) that describes our methodology. We share how we curated our datasets and conducted a thorough ablation on using LLMs for natural-language code evaluation. Some key takeaways: * Without a reference answer, CoT leads to overthinking in LLM judges. * LLM-as-a-Judge does not exhibit a self-preference bias in the coding domain. We've also made some small updates to our leaderboard since our last post: * Added new benchmarks (OpenBook-Q&A and Transcription) * Added 15-20 new models across multiple of our benchmarks Let me know if you have any questions or thoughts! Leaderboard: [https://prollm.ai/leaderboard/stack-unseen](https://prollm.ai/leaderboard/stack-unseen) NeurIPS paper: [https://arxiv.org/abs/2412.05288](https://arxiv.org/abs/2412.05288) https://preview.redd.it/ibvp9yjk8l9e1.png?width=1060&format=png&auto=webp&s=1f638d3970baf912ae03b5f0073595ff033be4ab
2024-12-28T13:06:19
https://www.reddit.com/r/LocalLLaMA/comments/1ho5ave/deepseekv3_best_opensource_model_on_prollm/
nidhishs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho5ave
false
null
t3_1ho5ave
/r/LocalLLaMA/comments/1ho5ave/deepseekv3_best_opensource_model_on_prollm/
false
false
https://b.thumbs.redditm…gdgsR5WP2LyY.jpg
81
null
llama-3-8b-instruct's top 100 lists of 50 random words, and other fun & interesting output landscapes
31
2024-12-28T13:14:57
https://i.redd.it/rhpdioxu9l9e1.png
phree_radical
i.redd.it
1970-01-01T00:00:00
0
{}
1ho5fy3
false
null
t3_1ho5fy3
/r/LocalLLaMA/comments/1ho5fy3/llama38binstructs_top_100_lists_of_50_random/
false
false
https://a.thumbs.redditm…gTq0UymxQ9L8.jpg
31
{'enabled': True, 'images': [{'id': 'dhHaSuEjvjxrC7yiClI18czFH9KksKKDEwwFIfykm9Y', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/rhpdioxu9l9e1.png?width=108&crop=smart&auto=webp&s=3213792926dabc7aa5f060990bb80bc0efefcee4', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/rhpdioxu9l9e1.png?width=216&crop=smart&auto=webp&s=46f4224ac4106b21b39a5f6d318dcd87ab903f2f', 'width': 216}, {'height': 232, 'url': 'https://preview.redd.it/rhpdioxu9l9e1.png?width=320&crop=smart&auto=webp&s=d0c4f38437a685c12755c2dd60b1716e244d073a', 'width': 320}, {'height': 465, 'url': 'https://preview.redd.it/rhpdioxu9l9e1.png?width=640&crop=smart&auto=webp&s=7135bc27bc863c74930dd50dd669a675004a66a9', 'width': 640}, {'height': 697, 'url': 'https://preview.redd.it/rhpdioxu9l9e1.png?width=960&crop=smart&auto=webp&s=6467d5cd9a94d8689439f0b46cf6e48bd39f6a64', 'width': 960}, {'height': 785, 'url': 'https://preview.redd.it/rhpdioxu9l9e1.png?width=1080&crop=smart&auto=webp&s=4160ddb013a8f365ec10815aff77fa2efe12bca3', 'width': 1080}], 'source': {'height': 3290, 'url': 'https://preview.redd.it/rhpdioxu9l9e1.png?auto=webp&s=405f89672641dcf241c8d1847898b99b82f7908e', 'width': 4525}, 'variants': {}}]}
DeepSeek-V3 web app censoring itself during generation
1
2024-12-28T13:15:26
https://v.redd.it/317q441f9l9e1
PitaKaas
v.redd.it
1970-01-01T00:00:00
0
{}
1ho5g9w
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/317q441f9l9e1/DASHPlaylist.mpd?a=1737983740%2CZjk0ZWVmMzc5ZjY2YzA3OTg1MjlhOGE5NmE3MTJjNDI0ODI4YWE0YmVkMmU0NmEwNzliODEzZGM3OTA1NjNiMA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/317q441f9l9e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/317q441f9l9e1/HLSPlaylist.m3u8?a=1737983740%2CMGU0ZGVlMTFkNmY5YWQ3YWU2ZjY0MDQxZjcyMzJiMDhhMjZhOGZhMjg3YmY2MmY2YzYyYTY3NGNkMjBmZDliYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/317q441f9l9e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ho5g9w
/r/LocalLLaMA/comments/1ho5g9w/deepseekv3_web_app_censoring_itself_during/
false
false
https://external-preview…08cc550be204dd66
1
{'enabled': False, 'images': [{'id': 'YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM.png?width=108&crop=smart&format=pjpg&auto=webp&s=fe35289649bd9101b6dbfc2d8e19ecb03c293633', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM.png?width=216&crop=smart&format=pjpg&auto=webp&s=033a6beaef18b2117a82e26f88468cede82567cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM.png?width=320&crop=smart&format=pjpg&auto=webp&s=1016cbd37d453c9788e293f0cd39f0eaf2066532', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM.png?width=640&crop=smart&format=pjpg&auto=webp&s=4e3466dd6879f2aa6a108ba54b7b23f4cfa2c139', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM.png?width=960&crop=smart&format=pjpg&auto=webp&s=363ec88d2f3902ba0486b1c140bb49c11a347f4b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4524da300fcbd0de7ba8051d8cc30be827d1f657', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YXlnZzRuMGY5bDllMdl18-RecYnXV56DoZNk4DgF81_g-IBcT13i_GtndvEM.png?format=pjpg&auto=webp&s=9968f7a4e321aaafbfbe6f5d49b50caeb8e9daa6', 'width': 1920}, 'variants': {}}]}
Why is deepseek Ollama (4bit) context memory so bad?
0
I give it a short segment of code that ChatGPT has no problem following, and it forgets the contents at the beginning of the code. Is this just lack of training, or quantization?
2024-12-28T13:19:24
https://www.reddit.com/r/LocalLLaMA/comments/1ho5iii/why_is_deepseek_ollama_4bit_context_memory_so_bad/
Ok-Cicada-5207
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho5iii
false
null
t3_1ho5iii
/r/LocalLLaMA/comments/1ho5iii/why_is_deepseek_ollama_4bit_context_memory_so_bad/
false
false
self
0
null
using huggingface repo locally
0
Can anyone help me? I wanted to download a Hugging Face repo and run it locally, but I couldn't get past the errors I hit during the run. (A minimal download-and-run sketch follows below.)
2024-12-28T13:38:16
https://www.reddit.com/r/LocalLLaMA/comments/1ho5tpc/using_huggingface_repo_locally/
LahmeriMohamed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho5tpc
false
null
t3_1ho5tpc
/r/LocalLLaMA/comments/1ho5tpc/using_huggingface_repo_locally/
false
false
self
0
null
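Hard to debug the post above without the actual errors, but here is a minimal known-good path for a standard transformers text model (the repo id is just an illustrative example; gated repos or ones with custom code need extra steps):

```python
# download a repo snapshot, then load and run it locally
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

local_dir = snapshot_download("Qwen/Qwen2.5-0.5B-Instruct")  # example repo id
tok = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir, device_map="auto")  # needs accelerate

inputs = tok("Hello, world", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```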
Question about dataset preparation for fine tuning
3
I would like to fine-tune Llama 3 using Replicate because I have no way to do it locally. I have a few thousand examples of tags, categories, and titles of videos, and I would like to create a title 'prompter' that would give me pointers to quickly create video titles from the categorization information (tags and categories), using the same style of writing that I use. However, I have no idea how I should format the data, use the system prompt, etc. Could someone guide me with a simple example to give me an idea? If it changes anything, I am fine with any kind of cloud LLM provider; it is not essential to rely on Replicate. From the Replicate documentation I see that JSONL format is to be used, and nothing more. (A sketch of a chat-style JSONL record follows below.) Thanks for the help!
2024-12-28T13:58:07
https://www.reddit.com/r/LocalLLaMA/comments/1ho65xo/question_about_dataset_preparation_for_fine_tuning/
liberollo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho65xo
false
null
t3_1ho65xo
/r/LocalLLaMA/comments/1ho65xo/question_about_dataset_preparation_for_fine_tuning/
false
false
self
3
null
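For the format question above: chat-style fine-tuning data is usually one JSON object per line (JSONL). A sketch of writing such records in Python; note the 'messages'/'role'/'content' schema is an assumption borrowed from the common chat format, so check Replicate's docs for the exact field names your trainer expects:

```python
import json

example = {
    "messages": [
        {"role": "system",
         "content": "You suggest video titles in the author's personal style."},
        {"role": "user",
         "content": "Tags: woodworking, dovetail, hand tools | Category: DIY"},
        {"role": "assistant",
         "content": "Hand-Cut Dovetails Without a Jig"},
    ]
}

# one JSON object per line; append every (tags + category -> title) pair you have
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```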
Looking for a Model to Analyze Facial Characteristics
1
Hi everyone, I’m looking for a model that can analyze facial characteristics from images. Specifically, I need to extract features like: * Face shape (oval, round, square, etc.) * Hair color and texture (straight, wavy, curly) * Eye color * Eyebrow thickness * Nose width * and so on... Does anyone know of a model that’s well-suited for this task? I’m open to suggestions, whether it’s a pre-trained model or something I’d need to fine-tune myself. Thanks in advance for your help! :) Feel free to ask if more details are needed! 👀
2024-12-28T13:58:44
https://www.reddit.com/r/LocalLLaMA/comments/1ho66bd/looking_for_a_model_to_analyze_facial/
00Mango
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho66bd
false
null
t3_1ho66bd
/r/LocalLLaMA/comments/1ho66bd/looking_for_a_model_to_analyze_facial/
false
false
self
1
null
RTX 4090 48GB tested
10
2024-12-28T14:08:37
https://main-horse.github.io/posts/4090-48gb/
aliencaocao
main-horse.github.io
1970-01-01T00:00:00
0
{}
1ho6d74
false
null
t3_1ho6d74
/r/LocalLLaMA/comments/1ho6d74/rtx_4090_48gb_tested/
false
false
https://b.thumbs.redditm…OVOPDR-EBoww.jpg
10
{'enabled': False, 'images': [{'id': 'Hgzkn0EGXS-cC-sJJ6iMJNAt3kOzloW1JbVUMjWbLh8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=108&crop=smart&auto=webp&s=304e41ff8d7c0496790d981f6f8df891b52ab5e6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=216&crop=smart&auto=webp&s=fd711b110650e8b8f4d8eb7142bb82d4611f600a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=320&crop=smart&auto=webp&s=ac121054c3c60de7ca3d504a4c23b520269247fd', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=640&crop=smart&auto=webp&s=e9eee97aca47c939a2b7a2ade266c998b200b44a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?width=960&crop=smart&auto=webp&s=1c97ca6cd63d89690e717c13ba5c6bbb188207c5', 'width': 960}], 'source': {'height': 1065, 'url': 'https://external-preview.redd.it/b13GKU7u2wqWzYUO_RwyzY0M0IE8YlE0kbXC0FCmQr4.jpg?auto=webp&s=2f920dd9c111fd5f1be43dbcf1ef142f032471ea', 'width': 1065}, 'variants': {}}]}
5090 plus 2080TI (11GB)
0
Couple of questions on a hypothetical setup. If the 5090 comes out with 32GB, is it a crazy idea to add it to my existing PC - and keep my current 2080TI, so I'd have 43GB Video RAM total for working with models? Or would this somehow cripple an incredibly expensive 5090 by pairing it with an ancient card? Also...would 43GB VRAM on a PC be enough to get a 70b Llama 3.3 model (03_K_L) running with decent speed? It's putting out about 1.7 tokens per second on M1 Max with 64GB shared RAM.
2024-12-28T14:14:47
https://www.reddit.com/r/LocalLLaMA/comments/1ho6he1/5090_plus_2080ti_11gb/
SteveRD1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho6he1
false
null
t3_1ho6he1
/r/LocalLLaMA/comments/1ho6he1/5090_plus_2080ti_11gb/
false
false
self
0
null
Distribute fineuning with fast api
1
[removed]
2024-12-28T14:28:06
https://www.reddit.com/r/LocalLLaMA/comments/1ho6q7h/distribute_fineuning_with_fast_api/
Strict_Tip_5195
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho6q7h
false
null
t3_1ho6q7h
/r/LocalLLaMA/comments/1ho6q7h/distribute_fineuning_with_fast_api/
false
false
self
1
null