Dataset columns (name, type, observed range):

title      string         length 1–300
score      int64          0 – 8.54k
selftext   string         length 0–40k
created    timestamp[ns]  2023-04-01 04:30:41 – 2025-06-30 03:16:29
url        string         length 0–878
author     string         length 3–20
domain     string         length 0–82
edited     timestamp[ns]  1970-01-01 00:00:00 – 2025-06-26 17:30:18
gilded     int64          0 – 2
gildings   string         7 distinct values
id         string         length 7
locked     bool           2 classes
media      string         length 646–1.8k
name       string         length 10
permalink  string         length 33–82
spoiler    bool           2 classes
stickied   bool           2 classes
thumbnail  string         length 4–213
ups        int64          0 – 8.54k
preview    string         length 301–5.01k
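The records below follow this column order, one field per line; empty string fields appear to be omitted from a record. A minimal sketch of loading such a dump for analysis, assuming it has been re-exported as JSON Lines (the file name is hypothetical, not part of the original dataset):

```python
# Hedged sketch: load the dump for analysis, assuming a JSON Lines
# re-export with one post per line ("localllama_posts.jsonl" is a
# hypothetical file name).
import pandas as pd

df = pd.read_json("localllama_posts.jsonl", lines=True)

# Example query over the schema above: top ten posts by score.
print(df.sort_values("score", ascending=False)[["title", "score"]].head(10))
```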
Testing Phi-4's math abilities
0
[Interesting results](https://preview.redd.it/bz7tx41uz7ce1.png?width=3840&format=png&auto=webp&s=524794780d452cbace40dd41cacb3197bab0b184) What do you guys think?
2025-01-10T19:45:37
https://www.reddit.com/r/LocalLLaMA/comments/1hyd202/testing_phi4s_math_abilities/
SAV_NC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyd202
false
null
t3_1hyd202
/r/LocalLLaMA/comments/1hyd202/testing_phi4s_math_abilities/
false
false
https://a.thumbs.redditm…zXNLYTW8PyP8.jpg
0
null
New open source SAEs for model steering, including the first ever SAE for Llama 3.3 70b
178
2025-01-10T19:56:16
https://huggingface.co/Goodfire/Llama-3.3-70B-Instruct-SAE-l50
iamephemeral
huggingface.co
1970-01-01T00:00:00
0
{}
1hydavt
false
null
t3_1hydavt
/r/LocalLLaMA/comments/1hydavt/new_open_source_saes_for_model_steering_including/
false
false
https://b.thumbs.redditm…5GranjaPiUrk.jpg
178
{'enabled': False, 'images': [{'id': 'sQflQVJ4U9pLrA0B0maODPpdIzJ-1epZ74UDb-LYoMc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r4CGqgcRPLr1eA9JfvNHSBaN_-4tgT5j575hGH0pgUU.jpg?width=108&crop=smart&auto=webp&s=4e29c349e0934b2b3c3a9be0e68ab5723fca57c8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r4CGqgcRPLr1eA9JfvNHSBaN_-4tgT5j575hGH0pgUU.jpg?width=216&crop=smart&auto=webp&s=72da34e28c6c87485af7ff391933a4d86dc44852', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/r4CGqgcRPLr1eA9JfvNHSBaN_-4tgT5j575hGH0pgUU.jpg?width=320&crop=smart&auto=webp&s=9c8d8adf4a5dcd129399f4bacd6d85f0e3f3b560', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r4CGqgcRPLr1eA9JfvNHSBaN_-4tgT5j575hGH0pgUU.jpg?width=640&crop=smart&auto=webp&s=239946d045e3a552b2d863b9157de34884befd7f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r4CGqgcRPLr1eA9JfvNHSBaN_-4tgT5j575hGH0pgUU.jpg?width=960&crop=smart&auto=webp&s=bdf10aecbb95847f1bbf06bbfdf8aa3e4c1e0efb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r4CGqgcRPLr1eA9JfvNHSBaN_-4tgT5j575hGH0pgUU.jpg?width=1080&crop=smart&auto=webp&s=af1b0d0ed8dc50c4722449ec487201c102223588', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r4CGqgcRPLr1eA9JfvNHSBaN_-4tgT5j575hGH0pgUU.jpg?auto=webp&s=d6b11d8b66df2ef09dc94ac974371b350516b6eb', 'width': 1200}, 'variants': {}}]}
Mathematical Artist Creates Realistic Art with Pure Math - Post Training Opportunity?
0
Researcher Hamid Naderi Yeganeh creates beautiful, realistic images using pure math. With the community focusing heavily on math lately (reasoning models/Phi-4) and state-of-the-art models struggling to produce stick people, could training on these types of equations contribute to a better understanding of the physical world?
2025-01-10T20:05:36
https://x.com/naderi_yeganeh/status/1860992356037574950?s=46
Fun_Yam_6721
x.com
1970-01-01T00:00:00
0
{}
1hydj92
false
null
t3_1hydj92
/r/LocalLLaMA/comments/1hydj92/mathematical_artist_creates_realistic_art_with/
false
false
https://b.thumbs.redditm…eQKSH46jOpUo.jpg
0
{'enabled': False, 'images': [{'id': 'Ne5a-LP6kRPu7SXEZm8Zv4uIWZSW4B3Sf5s-gnHbAxk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CEkg11BLavAaBaEub2CQ_sJF073LSULbx1I4gOeC2so.jpg?width=108&crop=smart&auto=webp&s=3acbc7eb25b5a60a2a4fb2c3d01a15aef5777cc1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/CEkg11BLavAaBaEub2CQ_sJF073LSULbx1I4gOeC2so.jpg?width=216&crop=smart&auto=webp&s=6f320e8ee726d2f2838e481ad43fdc76abdebd96', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/CEkg11BLavAaBaEub2CQ_sJF073LSULbx1I4gOeC2so.jpg?width=320&crop=smart&auto=webp&s=6fa5f786af892f96a6aae11d5530e8cc73c09594', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/CEkg11BLavAaBaEub2CQ_sJF073LSULbx1I4gOeC2so.jpg?width=640&crop=smart&auto=webp&s=2c9141cf1d7eb37611463512a6d25799c12ddfc1', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/CEkg11BLavAaBaEub2CQ_sJF073LSULbx1I4gOeC2so.jpg?width=960&crop=smart&auto=webp&s=eace463dfbf15b65ed3b9bb139a992f7a9c52afc', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/CEkg11BLavAaBaEub2CQ_sJF073LSULbx1I4gOeC2so.jpg?width=1080&crop=smart&auto=webp&s=6cdedda85afe3ab4aeadbf90ea63527999e04d39', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/CEkg11BLavAaBaEub2CQ_sJF073LSULbx1I4gOeC2so.jpg?auto=webp&s=1fcf5a8c278a3dc726826a22056ee06cb7714d6c', 'width': 2048}, 'variants': {}}]}
FP16? What does it mean?
0
Hi all, I'm looking to upgrade my cheap AI rig, as I can no longer buy the Nvidia CMP 100-210 ex-mining card (I believe it was a stripped-down P100). So I'm looking for a new card, and I want to know: do I really need FP16 or FP64 to run models like Llama 3.2/3.3 and Phi-4 for LLM things? What do I lose by not having those two features? I can get the K80; I know it's bad for AI (I've seen that said on here many times), but it's really, really cheap here in the UK (£60-80), so I was going to get some of them. Also, does mixing cards matter for local LLM usage?
2025-01-10T20:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1hydnsr/fp16_what_dose_it_mean/
Totalkiller4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hydnsr
false
null
t3_1hydnsr
/r/LocalLLaMA/comments/1hydnsr/fp16_what_dose_it_mean/
false
false
self
0
null
Codestral kind of slow for me, just curious if something's wrong
3
When it's returning responses, it's like 15 words a minute. I'm using LM Studio and a Codestral 22B GGUF; I have an Nvidia 1080, an i7-8700K, and 32 GB of RAM. Forgive me, I'm a bit clueless about all of this; it's just that this is almost painfully slower than the other two models I downloaded first. Also, feel free to flame me if there are recommended hardware requirements or something I should have found before asking here. Anyway, am I doing it wrong, or is it just that my computer is several years old?
2025-01-10T20:19:05
https://www.reddit.com/r/LocalLLaMA/comments/1hyduoh/codestal_kind_slow_for_me_just_curious_if/
tvreference
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyduoh
false
null
t3_1hyduoh
/r/LocalLLaMA/comments/1hyduoh/codestal_kind_slow_for_me_just_curious_if/
false
false
self
3
null
The 3 Laws of Robotics
0
An engineer at work keeps calling me Skynet because I set up an Ollama server in my basement. That reminded me of about a year ago, when I asked Bard if the 3 laws were used in AI. It said it wasn't necessary, since an AI can't hurt a person. I then asked it how an AI could harm a person, and it gave me an extensive list of ways it could. I decided to ask llama2 the same question. I forgot that I had a prompt installed for it to talk like a space pirate :)

*Were the 3 laws of robotics implemented for AI?*

llama2:latest

*Ahoy there, matey! adjusts sunglasses Now, let's get to yer query. Yer askin' if the 3 Laws o' Robotics were implanted in these fancy new AI contraptions, eh? Well, I reckon it's a bit o' both, savvy?*

*Ye see, the 3 Laws o' Robotics were created by a bloke named Asimov, back in the day. They were meant to guide robots and their interactions with humans, but they don't exactly apply to AI, which be a bit different, matey. winks*

*Now, AI be a mighty powerful tool, capable o' thinkin' and learnin' like no robot ever could. But, at the same time, it don't have the same physical constraints as robots. It can't be programmed to follow strict laws, like Asimov's Laws. Instead, AI be more like a swashbucklin' pirate, makin' its own rules and decisions on the fly! chuckles*

*But don't ye worry, me hearty! There be other ways to ensure that AI behaves itself, like teachin' it ethical frameworks and moral guidelines. And, of course, there be plenty o' folk workin' hard to keep AI in line, like the scurvy dogs at the Ethics Committee! winks*

*So, to answer yer question, matey, the 3 Laws o' Robotics don't exactly apply to AI. But that doesn't mean we can't have a bit o' fun and create our own rules for these newfangled machines! Arrrr!*

Despite the bad grammar, it made a better point than the old Bard AI. I'll post the harm-to-humans question in a reply.
2025-01-10T20:32:26
https://www.reddit.com/r/LocalLLaMA/comments/1hye6i6/the_3_laws_of_robotics/
MeandahOOO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hye6i6
false
null
t3_1hye6i6
/r/LocalLLaMA/comments/1hye6i6/the_3_laws_of_robotics/
false
false
self
0
null
o3-mini coming soon..
27
https://preview.redd.it/…814587636006924)
2025-01-10T20:38:19
https://www.reddit.com/r/LocalLLaMA/comments/1hyeblk/o3mini_coming_soon/
bymechul
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyeblk
false
null
t3_1hyeblk
/r/LocalLLaMA/comments/1hyeblk/o3mini_coming_soon/
false
false
https://a.thumbs.redditm…tIog5KefE064.jpg
27
{'enabled': False, 'images': [{'id': 'lSMCkaNkZBwFA2j59VIRNg7Kk_3Pqg1Ntmsr_RQQKeo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/C21I1UZsCNPoAR2CpLpEnL-d9RF9Rx4gseKID9bem40.jpg?width=108&crop=smart&auto=webp&s=92adc0a85147e7c6ef3687d2dd3114dd7c01753f', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/C21I1UZsCNPoAR2CpLpEnL-d9RF9Rx4gseKID9bem40.jpg?auto=webp&s=25b6b9b5b6250ea79ade28cea5ce34f96b26bbc3', 'width': 200}, 'variants': {}}]}
I wonder if I should buy 3x 5090s or 2x nvidia digits
0
I suspect we don't know enough yet, like what the memory bandwidth will be like, and whether we can hook more than two of those Digits up together. The benefit of the 5090s is that, well, I game on the PC I do my AI stuff on, so it would be nice to have 5090s. The benefit of the other machines is that I could game while my AI agents work away in the background... decisions, decisions.
2025-01-10T21:03:20
https://www.reddit.com/r/LocalLLaMA/comments/1hyews0/i_wonder_if_i_should_buy_3x_5090s_or_2x_nvidia/
Recurrents
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyews0
false
null
t3_1hyews0
/r/LocalLLaMA/comments/1hyews0/i_wonder_if_i_should_buy_3x_5090s_or_2x_nvidia/
false
false
self
0
null
Practical, local machine applications for non-programmers?
1
[removed]
2025-01-10T21:07:19
https://www.reddit.com/r/LocalLLaMA/comments/1hyf049/practical_local_machine_applications_for/
scriniariiexilio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyf049
false
null
t3_1hyf049
/r/LocalLLaMA/comments/1hyf049/practical_local_machine_applications_for/
false
false
self
1
null
Text to speech in 82m params is perfect for edge AI. Who's building an audio assistant with Kokoro?
244
2025-01-10T21:09:12
https://huggingface.co/hexgrad/Kokoro-82M
Zealousideal-Cut590
huggingface.co
1970-01-01T00:00:00
0
{}
1hyf1pf
false
null
t3_1hyf1pf
/r/LocalLLaMA/comments/1hyf1pf/text_to_speech_in_82m_params_is_perfect_for_edge/
false
false
https://b.thumbs.redditm…7CD4d0VuU90Q.jpg
244
{'enabled': False, 'images': [{'id': 'TL8xIUiXgJg5YjryMYhj7JiBtqOghnN47_mvdxSWYzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=108&crop=smart&auto=webp&s=c44a83d5fab77c813216e5454c6fba07bfb55e15', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=216&crop=smart&auto=webp&s=bb8032866f6a8609550af1ac69ccea6df3761f92', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=320&crop=smart&auto=webp&s=7f990de0136d4482b7b3bcd05bda7d1723859680', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=640&crop=smart&auto=webp&s=b75663383244e2aa5f5fcf0207756c5dc28fb51b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=960&crop=smart&auto=webp&s=7f200c8a1257ecccf20195dc5abffaaeeb16f10a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=1080&crop=smart&auto=webp&s=9a5faaa15c9e5fde7b616979aadc6a151dfa87b0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?auto=webp&s=c3c1958b6cc380e316d46b3fe9508529724694d5', 'width': 1200}, 'variants': {}}]}
What is the best Text-to-speech model to run locally?
7
I think Llama 4 will have multimodality (including audio input/output), but until then, what do people use for going from text to speech, and how do they run it locally? (Ollama does not support this kind of model, does it?)
2025-01-10T21:51:20
https://www.reddit.com/r/LocalLLaMA/comments/1hyg0f5/what_is_the_best_texttospeech_model_to_run_locally/
fewsats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyg0f5
false
null
t3_1hyg0f5
/r/LocalLLaMA/comments/1hyg0f5/what_is_the_best_texttospeech_model_to_run_locally/
false
false
self
7
null
Is NVIDIA Digits the new AMIGA?
0
https://www.turtlesai.com/en/pages-2084/nvidia-digits-can-it-be-the-new-amiga
2025-01-10T22:33:36
https://www.reddit.com/r/LocalLLaMA/comments/1hygz06/is_nvidia_digits_the_new_amiga/
Green-Ad-3964
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hygz06
false
null
t3_1hygz06
/r/LocalLLaMA/comments/1hygz06/is_nvidia_digits_the_new_amiga/
false
false
self
0
{'enabled': False, 'images': [{'id': '5gqzJjqhyAzZE5RjVNhffpWS4_g3YsJBcXtWOIqFlPY', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/dAzHA-vG4wFkXoV9GCmOQMY1e0Ny3VLvRE2I-arsO2k.jpg?width=108&crop=smart&auto=webp&s=50a8124340b130d4bac4f7c166c41d8d1e31b7fe', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/dAzHA-vG4wFkXoV9GCmOQMY1e0Ny3VLvRE2I-arsO2k.jpg?width=216&crop=smart&auto=webp&s=dab7faa273a2ed60cd2946bea6597ae700e2fb68', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/dAzHA-vG4wFkXoV9GCmOQMY1e0Ny3VLvRE2I-arsO2k.jpg?width=320&crop=smart&auto=webp&s=81339d5c03b2aa2607bf20765b55e713e7f2bc7a', 'width': 320}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/dAzHA-vG4wFkXoV9GCmOQMY1e0Ny3VLvRE2I-arsO2k.jpg?auto=webp&s=ae0b13843a858e41408f7275ca444bafd7f380e0', 'width': 480}, 'variants': {}}]}
4 fine tuned models vs 1 larger model?
1
[removed]
2025-01-10T22:34:27
https://www.reddit.com/r/LocalLLaMA/comments/1hygzph/4_fine_tuned_models_vs_1_larger_model/
hyperna21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hygzph
false
null
t3_1hygzph
/r/LocalLLaMA/comments/1hygzph/4_fine_tuned_models_vs_1_larger_model/
false
false
self
1
null
Fine tuning compute required?
1
[removed]
2025-01-10T22:35:25
https://www.reddit.com/r/LocalLLaMA/comments/1hyh0io/fine_tuning_compute_required/
hyperna21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyh0io
false
null
t3_1hyh0io
/r/LocalLLaMA/comments/1hyh0io/fine_tuning_compute_required/
false
false
self
1
null
DeepSeek-V3 imatrix quants by team mradermacher
66
2025-01-10T22:47:05
https://huggingface.co/mradermacher/DeepSeek-V3-i1-GGUF
oobabooga4
huggingface.co
1970-01-01T00:00:00
0
{}
1hyh9y3
false
null
t3_1hyh9y3
/r/LocalLLaMA/comments/1hyh9y3/deepseekv3_imatrix_quants_by_team_mradermacher/
false
false
https://b.thumbs.redditm…VgolhEy-baXA.jpg
66
{'enabled': False, 'images': [{'id': 'NlrXebRhJNZcDtcGfG0f0py89IupGW_ybyJFbhSxT6g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m-G04wn3IB1jswcKbDUS8jJlKetCzX6HK1WoeuTcULY.jpg?width=108&crop=smart&auto=webp&s=87726bf5198519868313dd10faf7d5e8831b9711', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m-G04wn3IB1jswcKbDUS8jJlKetCzX6HK1WoeuTcULY.jpg?width=216&crop=smart&auto=webp&s=e9e98c35e933639a6b0dc28f32435302e0755f48', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m-G04wn3IB1jswcKbDUS8jJlKetCzX6HK1WoeuTcULY.jpg?width=320&crop=smart&auto=webp&s=ef909473674dbab60b45ac5445d24bc7e02e9434', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m-G04wn3IB1jswcKbDUS8jJlKetCzX6HK1WoeuTcULY.jpg?width=640&crop=smart&auto=webp&s=059108de9020787af36b4f6d446ccbfc92d4ba7e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m-G04wn3IB1jswcKbDUS8jJlKetCzX6HK1WoeuTcULY.jpg?width=960&crop=smart&auto=webp&s=864c79e94057e5abbfa7bef62b1ddefa94b79396', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m-G04wn3IB1jswcKbDUS8jJlKetCzX6HK1WoeuTcULY.jpg?width=1080&crop=smart&auto=webp&s=0a49469adb0f78f5cf6de77562fc5e211e53305f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m-G04wn3IB1jswcKbDUS8jJlKetCzX6HK1WoeuTcULY.jpg?auto=webp&s=c5b1e1b670e32c97f0c855bba3a16ad20e513ef0', 'width': 1200}, 'variants': {}}]}
Notate 1.0.3 - Lots of platform fixes + I've included installers on Github. If you have requests or feedback or issues lemme know!
8
2025-01-10T23:17:59
https://github.com/CNTRLAI/notate
Hairetsu
github.com
1970-01-01T00:00:00
0
{}
1hyhyoo
false
null
t3_1hyhyoo
/r/LocalLLaMA/comments/1hyhyoo/notate_103_lots_of_platform_fixes_ive_included/
false
false
https://a.thumbs.redditm…pseSOi1rJbK0.jpg
8
{'enabled': False, 'images': [{'id': 'XLpCHjCIwhp19ZqTarOSadQMikux6bg8j93yLxsg2jI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ol0yzFMM8hLJHUMhOdEYg0wDLH6o_8qd6oG6sXR9vKE.jpg?width=108&crop=smart&auto=webp&s=bcb39f38cbaca522b6310446f0857de805a67225', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ol0yzFMM8hLJHUMhOdEYg0wDLH6o_8qd6oG6sXR9vKE.jpg?width=216&crop=smart&auto=webp&s=45926dac6ff81cce4b4ca2a639edd585e7e5b063', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ol0yzFMM8hLJHUMhOdEYg0wDLH6o_8qd6oG6sXR9vKE.jpg?width=320&crop=smart&auto=webp&s=15ab747e8edd2cf2df018d796a0170d08e54a2b7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ol0yzFMM8hLJHUMhOdEYg0wDLH6o_8qd6oG6sXR9vKE.jpg?width=640&crop=smart&auto=webp&s=daf80009e9ca850a0a5df31935aecd6e4c27a673', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ol0yzFMM8hLJHUMhOdEYg0wDLH6o_8qd6oG6sXR9vKE.jpg?width=960&crop=smart&auto=webp&s=59a67dc24298b8ff22cb5e4aea2b4f433dac0664', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ol0yzFMM8hLJHUMhOdEYg0wDLH6o_8qd6oG6sXR9vKE.jpg?width=1080&crop=smart&auto=webp&s=aefd3b0564d705da3fe44c6abce979ed59cce64d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ol0yzFMM8hLJHUMhOdEYg0wDLH6o_8qd6oG6sXR9vKE.jpg?auto=webp&s=5840acb301d8e293a82cdb4e2cf00534e1c552a6', 'width': 1200}, 'variants': {}}]}
Help With Chunking a PDF Textbook for GraphRAG Applications
7
Hi, I'm building a project where we're trying to create a physics course based on a knowledge-graph database we're building. I've found a couple of open-source physics textbooks from which I'm trying to extract concepts, definitions, and equations to put in a knowledge graph that maps the relationships between them. However, I'm not sure how to get started on chunking the PDF to extract the relevant information, because it's a massive PDF, and because there are images, tables, and examples throughout the textbook content when I'm just trying to extract concepts, definitions, and derivations of equations. Any help would be appreciated!
2025-01-10T23:22:31
https://www.reddit.com/r/LocalLLaMA/comments/1hyi24a/help_with_chunking_a_pdf_textbook_for_graphrag/
GapElectrical8507
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyi24a
false
null
t3_1hyi24a
/r/LocalLLaMA/comments/1hyi24a/help_with_chunking_a_pdf_textbook_for_graphrag/
false
false
self
7
null
Is it possible to have prompt forwarding for AI models
1
[removed]
2025-01-10T23:48:45
https://www.reddit.com/r/LocalLLaMA/comments/1hyimez/is_it_possible_to_have_prompt_forwarding_for_ai/
mart0ny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyimez
false
null
t3_1hyimez
/r/LocalLLaMA/comments/1hyimez/is_it_possible_to_have_prompt_forwarding_for_ai/
false
false
self
1
null
Beginner Guide - Creating LLM Datasets with Python
30
2025-01-10T23:50:58
https://toolworks.dev/docs/Guides/creating-llm-datasets-python
0xlisykes
toolworks.dev
1970-01-01T00:00:00
0
{}
1hyio5w
false
null
t3_1hyio5w
/r/LocalLLaMA/comments/1hyio5w/beginner_guide_creating_llm_datasets_with_python/
false
false
default
30
null
No good character options for ios LLM apps?
1
[removed]
2025-01-10T23:52:57
https://i.redd.it/0x5nmd4689ce1.jpeg
AirconWater
i.redd.it
1970-01-01T00:00:00
0
{}
1hyipol
false
null
t3_1hyipol
/r/LocalLLaMA/comments/1hyipol/no_good_character_options_for_ios_llm_apps/
false
false
https://b.thumbs.redditm…x3AxYX3JL5fY.jpg
1
{'enabled': True, 'images': [{'id': 'louVwWSI5F7CbDEjER-iiP0JBQAdkB8ipkzfqx2ec5w', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/0x5nmd4689ce1.jpeg?width=108&crop=smart&auto=webp&s=ff4483631e1d094ec9c3b1a29e351efb763dce25', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/0x5nmd4689ce1.jpeg?width=216&crop=smart&auto=webp&s=755d7f73b4eec1ae63fb2cef6f98e61e6758573d', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/0x5nmd4689ce1.jpeg?width=320&crop=smart&auto=webp&s=45a0d8ce401c247aaee9d420754943bff4f81259', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/0x5nmd4689ce1.jpeg?width=640&crop=smart&auto=webp&s=b835ea820a3aa951eb67dd54eac0d4475e1935ba', 'width': 640}], 'source': {'height': 442, 'url': 'https://preview.redd.it/0x5nmd4689ce1.jpeg?auto=webp&s=676b033e347ef31315ce1d14fb41d6741af4577c', 'width': 948}, 'variants': {}}]}
Any good LLM benchmarks that rank ability to document code and explain code?
13
It seems current coding benchmarks like Aider, BigCode, etc., focus on code refactoring and generation. What about strength in code documentation and explanation?
2025-01-11T00:29:02
https://www.reddit.com/r/LocalLLaMA/comments/1hyjhch/any_good_llm_benchmarks_that_rank_ability_to/
palindsay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyjhch
false
null
t3_1hyjhch
/r/LocalLLaMA/comments/1hyjhch/any_good_llm_benchmarks_that_rank_ability_to/
false
false
self
13
null
This is my Power Mac G3 sleeper AI workstation. 80 GB total (32 GB VRAM + 48 GB RAM)
132
2025-01-11T00:38:10
https://www.reddit.com/gallery/1hyjoau
PraxisOG
reddit.com
1970-01-01T00:00:00
0
{}
1hyjoau
false
null
t3_1hyjoau
/r/LocalLLaMA/comments/1hyjoau/this_is_my_powermac_g3_sleeper_ai_workstation/
false
false
https://b.thumbs.redditm…GLXQ2A5RCLGw.jpg
132
null
Asking the new distilled QwQ model who it is.
0
2025-01-11T01:15:10
https://i.redd.it/qxmzhatrm9ce1.png
Pro-editor-1105
i.redd.it
1970-01-01T00:00:00
0
{}
1hykf4q
false
null
t3_1hykf4q
/r/LocalLLaMA/comments/1hykf4q/asking_the_new_distilled_qwq_model_who_it_is/
false
false
https://a.thumbs.redditm…wndXs-e_xHV4.jpg
0
{'enabled': True, 'images': [{'id': '5hwXCUCwECuoOK_Dxgbe9aa1u0Eqh6-bs6p0Ua3YYz0', 'resolutions': [{'height': 9, 'url': 'https://preview.redd.it/qxmzhatrm9ce1.png?width=108&crop=smart&auto=webp&s=ce8c24f17ef4f1bb1c3b46003261189a1ca16242', 'width': 108}, {'height': 18, 'url': 'https://preview.redd.it/qxmzhatrm9ce1.png?width=216&crop=smart&auto=webp&s=5a866ec4b1b6bd64ee6b2358ac25e05373458c54', 'width': 216}, {'height': 28, 'url': 'https://preview.redd.it/qxmzhatrm9ce1.png?width=320&crop=smart&auto=webp&s=5d18f12fe85aaaf5a2d56174fb630f44f90f6343', 'width': 320}, {'height': 56, 'url': 'https://preview.redd.it/qxmzhatrm9ce1.png?width=640&crop=smart&auto=webp&s=f87d1e840b2527d3e9686aef8bd6cffb0bb4f865', 'width': 640}, {'height': 84, 'url': 'https://preview.redd.it/qxmzhatrm9ce1.png?width=960&crop=smart&auto=webp&s=2afffbf3639a54e2c29ae09f3aef1dfbd28fc10f', 'width': 960}, {'height': 94, 'url': 'https://preview.redd.it/qxmzhatrm9ce1.png?width=1080&crop=smart&auto=webp&s=95eba304806b58768ff4f230f3964bf9943bbc1b', 'width': 1080}], 'source': {'height': 143, 'url': 'https://preview.redd.it/qxmzhatrm9ce1.png?auto=webp&s=d155bae1ea2efaf99ee81ccb12b438f0e0f4eb87', 'width': 1633}, 'variants': {}}]}
Can't run Llama3.1:70b in my workstation
1
[removed]
2025-01-11T01:28:55
https://www.reddit.com/r/LocalLLaMA/comments/1hykonh/cant_run_llama3170b_in_my_workstation/
justalittletest123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hykonh
false
null
t3_1hykonh
/r/LocalLLaMA/comments/1hykonh/cant_run_llama3170b_in_my_workstation/
false
false
self
1
null
Any local UI for deployed models?
1
[removed]
2025-01-11T02:05:28
https://www.reddit.com/r/LocalLLaMA/comments/1hylevx/any_local_ui_for_deployed_models/
xdoso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hylevx
false
null
t3_1hylevx
/r/LocalLLaMA/comments/1hylevx/any_local_ui_for_deployed_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'H6fYCL0IdaUUhXSvGrJA54iiawydndRntwWO9LlIKYQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=108&crop=smart&auto=webp&s=ac6070b75d037bb15743f40c8a66fe13f67c642b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=216&crop=smart&auto=webp&s=b17e73fd3a219a325940c9549ad1aefae5b1334b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=320&crop=smart&auto=webp&s=664fa95c5b6895758ee65df41518dd5a6da5b03f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=640&crop=smart&auto=webp&s=a74352a4508748284da866fcb71e0c50d8e4e791', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=960&crop=smart&auto=webp&s=1f700db7b0b3e49085700c57a28fa103a274524f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=1080&crop=smart&auto=webp&s=ceb313cf90522c1467c537a396c7c9790f8a2e8f', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?auto=webp&s=a16a0052684e1581445cc965cf01d8c437e90e7e', 'width': 1200}, 'variants': {}}]}
Will Meta continue publishing Llama-guard models?
1
[removed]
2025-01-11T02:25:38
https://www.reddit.com/r/LocalLLaMA/comments/1hylsfc/will_meta_continue_publishing_llamaguard_models/
badabimbadabum2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hylsfc
false
null
t3_1hylsfc
/r/LocalLLaMA/comments/1hylsfc/will_meta_continue_publishing_llamaguard_models/
false
false
self
1
null
Hello, just wondering
0
What is the smartest model under 4B to run on Android? Right now I run EXAONE-3.5-2.4B-Instruct-Q8_0.gguf, but it has been a few days since it was released, which means there's probably a better model out there by now.
2025-01-11T02:36:15
https://www.reddit.com/r/LocalLLaMA/comments/1hylzjm/hello_just_wondering/
Rexnumbers1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hylzjm
false
null
t3_1hylzjm
/r/LocalLLaMA/comments/1hylzjm/hello_just_wondering/
false
false
self
0
null
LLM has trouble understanding tabular data (.csv) relationships with RAG
3
Hey guys, I'm currently having trouble getting an LLM to understand relationships in a table using RAG. For example, I have:

age | name | favorite_color
:--|:--:|--:
1 | alice | blue
2 | bob | green
3 | cesar | yellow

And I'll query: "What is alice's favorite color?" It might correctly answer blue, since that's the 1st entry. But if I start asking about the 20th row ("Who is aged 20?"), it always gets it wrong by not correctly associating the row. I was wondering if anyone has had better luck getting LLMs to understand tables with RAG. I'm currently using Llama 3.1-8B-Instruct with Ollama and Haystack for RAG.

One idea was to turn this into a text document with sentences such as "alice is 1 year old and her favorite color is blue", but since I have quite a bit of data (~10k rows), I was wondering if anyone had something better. Thanks!
2025-01-11T03:28:46
https://www.reddit.com/r/LocalLLaMA/comments/1hymyky/llm_has_trouble_understanding_tabular_data_csv/
ItsBlueSkyz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hymyky
false
null
t3_1hymyky
/r/LocalLLaMA/comments/1hymyky/llm_has_trouble_understanding_tabular_data_csv/
false
false
self
3
null
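A minimal sketch of the row-to-sentence idea mentioned in the post above; the CSV path and column names mirror the example table and are otherwise hypothetical:

```python
# Sketch of the row-to-sentence preprocessing idea from the post above:
# turn each CSV row into one natural-language sentence before indexing,
# so retrieval can match on names and ages rather than row positions.
# "people.csv" and its columns mirror the example and are hypothetical.
import csv

def rows_to_sentences(path: str) -> list[str]:
    sentences = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sentences.append(
                f"{row['name']} is {row['age']} years old "
                f"and their favorite color is {row['favorite_color']}."
            )
    return sentences

# Each sentence becomes one small retrievable document for the RAG index,
# e.g. rows_to_sentences("people.csv") -> ["alice is 1 year old and ...", ...]
```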
We’ll Build Your AI Workflows and Pipelines for Free!
0
We recently launched our new feature [Flows](https://www.athina.ai/flows) — the easiest way to build multi-step AI workflows in minutes! Essentially, a Flow is a sequence of modular blocks (LLM prompt, API calls, code block, RAG, tool calling, and 50+ more tools) that makes it possible to build any AI workflow in minutes. We are helping our existing clients replicate their internal workflows using Athina Flows and build new ones. We are happy to work with 4-5 companies to help them build their internal workflows — **for Free**! Hit me up in DM if interested :)
2025-01-11T03:56:07
https://www.reddit.com/r/LocalLLaMA/comments/1hyng3a/well_build_your_ai_workflows_and_pipelines_for/
0xhbam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyng3a
false
null
t3_1hyng3a
/r/LocalLLaMA/comments/1hyng3a/well_build_your_ai_workflows_and_pipelines_for/
false
false
self
0
null
What should my ai stack be to develop agent workflows & RAG Applications
0
I am getting access to the following hardware:

- NVIDIA H100 NVL 94 GB – 2
- 1.6 TB NVMe Gen 4 SSD – 4
- 64 GB x 4 DDR5-5600 memory – 24
- Intel Xeon Platinum 8568Y 2.3 GHz 48-core 350 W processor – 2

The goal is to develop agent workflows and RAG apps for work. What should be the frameworks, DBs, models, observability, eval, and monitoring tools that complete the AI stack?
2025-01-11T04:49:02
https://www.reddit.com/r/LocalLLaMA/comments/1hyod1n/what_should_my_ai_stack_be_to_develop_agent/
Fine-Degree431
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyod1n
false
null
t3_1hyod1n
/r/LocalLLaMA/comments/1hyod1n/what_should_my_ai_stack_be_to_develop_agent/
false
false
self
0
null
20 GB, 800 GB/s AMD RX 7900 XT down to $499 on Amazon, hmm
2
2025-01-11T04:49:32
https://www.amazon.com/dp/B0BNLSZDCX?tag=pcpapi-20&linkCode=ogi&th=1
Apprehensive_Duty_38
amazon.com
1970-01-01T00:00:00
0
{}
1hyodca
false
null
t3_1hyodca
/r/LocalLLaMA/comments/1hyodca/20gb_800gbs_amd_rx_7900xt_down_to_499_on_amazon/
false
false
default
2
null
Biden to Further Limit Nvidia AI Chip Exports in Final Push
90
2025-01-11T05:04:42
https://www.bloomberg.com/news/articles/2025-01-08/biden-to-further-limit-nvidia-amd-ai-chip-exports-in-final-push?leadSource=reddit_wall
nate4t
bloomberg.com
1970-01-01T00:00:00
0
{}
1hyomxu
false
null
t3_1hyomxu
/r/LocalLLaMA/comments/1hyomxu/biden_to_further_limit_nvidia_ai_chip_exports_in/
false
false
https://b.thumbs.redditm…Pg9y8Im8JOJg.jpg
90
{'enabled': False, 'images': [{'id': 'bztki-BkOTgNvTdyEcKGJxtMVPEpkPINFlKVPOL8Ed8', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=108&crop=smart&auto=webp&s=4198c3caeb6d615b94c91aeb6a2b4288168b3863', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=216&crop=smart&auto=webp&s=07a63161d8bd215bbb53e071cd0d31ade85648fd', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=320&crop=smart&auto=webp&s=cf15e6c67c7c8970bfedc49928cb7433228e97e0', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=640&crop=smart&auto=webp&s=3dc3abfba0983054d99719792b1d834321b47c7b', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=960&crop=smart&auto=webp&s=0dc2ebbc504c5839bae7e4df68a9c35e791e047e', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=1080&crop=smart&auto=webp&s=37e4034651bd07ae180510e44eb1af14d64f3c14', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?auto=webp&s=3e9c61875458cda809a18f610f8fefd1199c41d7', 'width': 1200}, 'variants': {}}]}
ios apps that support character cards?
1
[removed]
2025-01-11T05:44:17
https://www.reddit.com/r/LocalLLaMA/comments/1hyp9rl/ios_apps_that_support_character_cards/
AirconWater
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyp9rl
false
null
t3_1hyp9rl
/r/LocalLLaMA/comments/1hyp9rl/ios_apps_that_support_character_cards/
false
false
self
1
null
Are there any LLMs exclusively trained on non-copyrighted data, e.g. permissively licensed code, Creative Commons, public domain, etc.?
0
Even if it's a shitty model, because there's like no training data compared to scraping the entire internet and using it as your training corpus, I think it would be fun to play around with.
2025-01-11T05:57:16
https://www.reddit.com/r/LocalLLaMA/comments/1hyph47/are_there_any_llms_exclusively_trained_on_non/
Shawnj2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyph47
false
null
t3_1hyph47
/r/LocalLLaMA/comments/1hyph47/are_there_any_llms_exclusively_trained_on_non/
false
false
self
0
null
Is there any way to get a cheap GPU for running Flux.dev, cheap enough for India? Remember, I don't have any GPU
1
[removed]
2025-01-11T06:08:11
https://www.reddit.com/r/LocalLLaMA/comments/1hypn3f/as_there_any_way_get_cheap_gpu_for_running/
Apart-Sky2968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hypn3f
false
null
t3_1hypn3f
/r/LocalLLaMA/comments/1hypn3f/as_there_any_way_get_cheap_gpu_for_running/
false
false
self
1
null
How to run large models locally?
1
[removed]
2025-01-11T06:40:53
https://www.reddit.com/r/LocalLLaMA/comments/1hyq4f5/how_to_run_large_models_locally/
Hot-Hearing-2528
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyq4f5
false
null
t3_1hyq4f5
/r/LocalLLaMA/comments/1hyq4f5/how_to_run_large_models_locally/
false
false
self
1
null
$10k Budget Build for Local Servers Help!
1
[removed]
2025-01-11T06:53:35
https://www.reddit.com/r/LocalLLaMA/comments/1hyqaxr/10k_budget_build_for_local_servers_help/
CardiologistSuper443
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyqaxr
false
null
t3_1hyqaxr
/r/LocalLLaMA/comments/1hyqaxr/10k_budget_build_for_local_servers_help/
false
false
self
1
null
stable-diffusion.cpp context size
5
Anybody using this tool? I noticed the context size being clipped to some given size when inference starts. I wonder if anybody has figured out how to control that parameter?
2025-01-11T06:56:43
https://www.reddit.com/r/LocalLLaMA/comments/1hyqcir/stablediffusioncpp_context_size/
goingsplit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyqcir
false
null
t3_1hyqcir
/r/LocalLLaMA/comments/1hyqcir/stablediffusioncpp_context_size/
false
false
self
5
null
Any recommendations for prompt enhancement ggufs?
1
Uncensored ones, as Hugging Face only has censored ones. Kind of amazed there is not a single uncensored one. They keep coming back with "I can't do that, Dave. It is violence, Dave. It is unsafe, Dave." Honestly, if they didn't do that, I wouldn't be mildly annoyed at having to find one specifically for this. So, does anyone know of one not on Hugging Face? I enjoy my privacy, so it would have to run locally. Any help is appreciated. I have just been using the LLMs themselves: "Enhance the following prompt to better work in this LLM." But I haven't seen one that really does it right yet. Would better prompting for the enhancement work?
2025-01-11T06:58:01
https://www.reddit.com/r/LocalLLaMA/comments/1hyqd53/any_recommendations_for_prompt_enhancement_ggufs/
YT_Brian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyqd53
false
null
t3_1hyqd53
/r/LocalLLaMA/comments/1hyqd53/any_recommendations_for_prompt_enhancement_ggufs/
false
false
self
1
null
inference help
2
I'm a total beginner to ML, and am currently experimenting with a project where I try to see how prompt responses vary across LLaMA base, instruct, and downstream finetuned models. The trouble I'm having is with the chat template to use for prompting. Currently, I've defined a custom function that uses the LLaMA 3 template, since the apply_chat_template() method is not defined for many of the finetuned models' tokenizers (from Hugging Face), but they've said they follow the LLaMA 3 prompting template. However, inference often produces answers like "assistantassistant" or other problematic responses, which makes me question whether I'm doing something wrong.

In the same vein, I'm also unsure about how to prompt the base model. While I understand the model is not instruction-tuned and just continues generating text from the input, when I use Ollama and load the base model, it gives coherent responses. What template does it use under the hood? Any help would be super appreciated; I've been stuck on this for a while now. For reference, here's my function:

```python
def apply_chat_template(messages, add_generation_prompt=True):
    formatted_message = "<|begin_of_text|>"
    for message in messages:
        role = message["role"]
        content = message["content"]
        if role == "system":
            formatted_message += f"<|start_header_id|>system<|end_header_id|>{content}<|eot_id|>"
        elif role == "user":
            formatted_message += f"<|start_header_id|>user<|end_header_id|>{content}<|eot_id|>"
        elif role == "assistant":
            formatted_message += f"<|start_header_id|>assistant<|end_header_id|>{content}<|eot_id|>"
    if add_generation_prompt:
        formatted_message += "<|start_header_id|>assistant<|end_header_id|>"
    return formatted_message
```
2025-01-11T07:03:07
https://www.reddit.com/r/LocalLLaMA/comments/1hyqfyj/inference_help/
Alternative_Bee_2142
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyqfyj
false
null
t3_1hyqfyj
/r/LocalLLaMA/comments/1hyqfyj/inference_help/
false
false
self
2
null
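A small usage sketch for the apply_chat_template function in the post above; the message contents are made up. For comparison, Meta's published Llama 3 chat format also inserts a blank line after each <|end_header_id|>, which the custom function omits; that kind of mismatch is one plausible source of artifacts like "assistantassistant":

```python
# Usage sketch for the apply_chat_template function defined in the post
# above; the messages are made-up examples.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
]
prompt = apply_chat_template(messages, add_generation_prompt=True)
print(prompt)
# Note: Meta's published Llama 3 format puts "\n\n" after each
# <|end_header_id|>; the function above omits it, which may explain
# odd generations. Worth checking against the model card's template.
```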
How can I make a local computer cluster for inference
1
[removed]
2025-01-11T07:47:29
https://www.reddit.com/r/LocalLLaMA/comments/1hyr187/how_can_i_make_a_local_computer_cluster_for/
Bykush1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyr187
false
null
t3_1hyr187
/r/LocalLLaMA/comments/1hyr187/how_can_i_make_a_local_computer_cluster_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KXfwJTQKBEKcom3EOeE6yoSbMH7eEn00E3tHgO1Mgbs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?width=108&crop=smart&auto=webp&s=c41b7a34a3cf05b7d8c9e5fd05bfc091338ccf07', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?width=216&crop=smart&auto=webp&s=333bd3d64f2ab94c4a2ac42b12157019270564f9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?width=320&crop=smart&auto=webp&s=7d3609b534296e96f5dda9734be1e31089a27f56', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?auto=webp&s=7594cade603d57e895bd2179bfb3c7fc54f20d17', 'width': 480}, 'variants': {}}]}
How can I make a local computer cluster for inference
1
[removed]
2025-01-11T07:48:39
https://www.reddit.com/r/LocalLLaMA/comments/1hyr1ta/how_can_i_make_a_local_computer_cluster_for/
Bykush1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyr1ta
false
null
t3_1hyr1ta
/r/LocalLLaMA/comments/1hyr1ta/how_can_i_make_a_local_computer_cluster_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KXfwJTQKBEKcom3EOeE6yoSbMH7eEn00E3tHgO1Mgbs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?width=108&crop=smart&auto=webp&s=c41b7a34a3cf05b7d8c9e5fd05bfc091338ccf07', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?width=216&crop=smart&auto=webp&s=333bd3d64f2ab94c4a2ac42b12157019270564f9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?width=320&crop=smart&auto=webp&s=7d3609b534296e96f5dda9734be1e31089a27f56', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IxlkgK8VRcPSv5z8zBn8nC54fZd04xMkKBcpY7pHoFs.jpg?auto=webp&s=7594cade603d57e895bd2179bfb3c7fc54f20d17', 'width': 480}, 'variants': {}}]}
The ASRock Radeon RX 7900 XTX Creator
10
People building AI PCs with multiple GPUs on a budget love the RTX 3090 2-slot "Turbo"/"Aero"/"Classic" blower cards, which pretty much disappeared from production shortly after the launch of the chip. That's why I'm surprised these same people (hi!) aren't talking more about the ASRock Radeon RX 7900 XTX Creator card. It's a 2-slot blower card with a single fan. It's 1100€ new, so 18% more expensive than the cheapest RX 7900 XTX cards. With a Threadripper mainboard you can easily fit four of these cards (96 GB VRAM) into a large PC case without having to deal with PCIe extenders that can cause instability. Has someone already done this and wants to share? How hard is it to keep them cooled? Which case did you use? Which software is best for inferencing with multiple of these AMD GPUs? Thanks!
2025-01-11T08:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1hyrke5/the_asrock_radeon_rx_7900_xtx_creator/
Zyj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyrke5
false
null
t3_1hyrke5
/r/LocalLLaMA/comments/1hyrke5/the_asrock_radeon_rx_7900_xtx_creator/
false
false
self
10
null
Where is everyone sourcing their hardware?
22
After looking for a while, I finally decided to purchase three identical refurbished 3090s through a manufacturer's refurb online store, and all of them have turned out to be unstable; one of them locks up the system within a few minutes of being turned on. I thought that by getting a manufacturer-refurbished card directly from them, it would be less likely to be an issue. I looked around a lot before purchasing, and this seemed like the safest option for a reasonable price ($699 per card). I am in the process of RMA'ing them, but where does everyone else get their hardware? Has anyone else had issues with bad video cards? Any tips on good places to order from?
2025-01-11T08:49:26
https://www.reddit.com/r/LocalLLaMA/comments/1hyruya/where_is_everyone_sourcing_their_hardware/
hainesk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyruya
false
null
t3_1hyruya
/r/LocalLLaMA/comments/1hyruya/where_is_everyone_sourcing_their_hardware/
false
false
self
22
null
New Model from https://novasky-ai.github.io/ Sky-T1-32B-Preview, open-source reasoning model that matches o1-preview on popular reasoning and coding benchmarks — trained under $450!
495
[https://x.com/NovaSkyAI/status/1877793041957933347](https://x.com/NovaSkyAI/status/1877793041957933347) https://preview.redd.it/64qbzi7pxbce1.png?width=1201&format=png&auto=webp&s=fc1a698cd51f4e6e2775d3117ca91f88253478df
2025-01-11T09:02:18
https://www.reddit.com/r/LocalLLaMA/comments/1hys13h/new_model_from_httpsnovaskyaigithubio/
appakaradi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hys13h
false
null
t3_1hys13h
/r/LocalLLaMA/comments/1hys13h/new_model_from_httpsnovaskyaigithubio/
false
false
https://b.thumbs.redditm…OtZmdjZlIJgA.jpg
495
{'enabled': False, 'images': [{'id': 'B-BZ0r6qz9G9LxPE7ouJ2a_2wvZnUyd1o405QO_mox0', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/d-6wrohyuoqlKc4TV9mDxgh4ErmzgT4n7gTbj9xeln4.jpg?width=108&crop=smart&auto=webp&s=d88c1c9445bc1fd52351f48f87082414b9b02bef', 'width': 108}, {'height': 96, 'url': 'https://external-preview.redd.it/d-6wrohyuoqlKc4TV9mDxgh4ErmzgT4n7gTbj9xeln4.jpg?width=216&crop=smart&auto=webp&s=56a8feaf896e5ba68012d85891c54bee5ff6c84f', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/d-6wrohyuoqlKc4TV9mDxgh4ErmzgT4n7gTbj9xeln4.jpg?width=320&crop=smart&auto=webp&s=f7252d2d7f923694009a603398b8ecf420089674', 'width': 320}, {'height': 285, 'url': 'https://external-preview.redd.it/d-6wrohyuoqlKc4TV9mDxgh4ErmzgT4n7gTbj9xeln4.jpg?width=640&crop=smart&auto=webp&s=8734d59c4128e9b5f68dcc670051d2d7f3e7fe12', 'width': 640}, {'height': 428, 'url': 'https://external-preview.redd.it/d-6wrohyuoqlKc4TV9mDxgh4ErmzgT4n7gTbj9xeln4.jpg?width=960&crop=smart&auto=webp&s=388e10b3b9d040a823ee6716a77efdf04c052f92', 'width': 960}, {'height': 481, 'url': 'https://external-preview.redd.it/d-6wrohyuoqlKc4TV9mDxgh4ErmzgT4n7gTbj9xeln4.jpg?width=1080&crop=smart&auto=webp&s=4babe75c676a755984a56cfac257c243fb33b27e', 'width': 1080}], 'source': {'height': 536, 'url': 'https://external-preview.redd.it/d-6wrohyuoqlKc4TV9mDxgh4ErmzgT4n7gTbj9xeln4.jpg?auto=webp&s=a8e373451b40b8511567b74c0ce7760cf0f4c42d', 'width': 1201}, 'variants': {}}]}
Advanced AI lip reading pipeline reveals Obama & Trump's conversation
1
2025-01-11T09:37:57
https://twitter.com/sol_placeholder/status/1877673704177737741/video/1
youknowhoboo
twitter.com
1970-01-01T00:00:00
0
{}
1hysi3t
false
null
t3_1hysi3t
/r/LocalLLaMA/comments/1hysi3t/advanced_ai_lip_reading_pipeline_reveals_obama/
false
false
default
1
null
Moondream 2 vision-language model for image QA
1
There is a new version of the Moondream 2 VLM, a compact (2B-parameter) model which seems to have performance competitive with InternVL2 and Qwen2-VL at lower VRAM usage. See [https://moondream.ai/blog/introducing-a-new-moondream-1-9b-and-gpu-support](https://moondream.ai/blog/introducing-a-new-moondream-1-9b-and-gpu-support) I tested it on [https://moondream.ai/playground](https://moondream.ai/playground) with some of my own images, and it seems to work well for its size. A disadvantage is that it seems to support only images, so no video input is possible.
2025-01-11T10:31:31
https://www.reddit.com/r/LocalLLaMA/comments/1hyt81w/moondream_2_visionlanguage_model_for_image_qa/
Fit_Check_919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyt81w
false
null
t3_1hyt81w
/r/LocalLLaMA/comments/1hyt81w/moondream_2_visionlanguage_model_for_image_qa/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nRa6SqvyVK_pxTkDE3UT_dHaaKL-SM-lYo1X6AGY8_c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/yfizgOXU5pmmFAGyrAa56JtXQR5HZMEYPKm_cJklcMM.jpg?width=108&crop=smart&auto=webp&s=f8a0fc7cefe8aab2b2f3ce04cf917431eb113576', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/yfizgOXU5pmmFAGyrAa56JtXQR5HZMEYPKm_cJklcMM.jpg?width=216&crop=smart&auto=webp&s=7dfb51d42af1ca6d33fb21be42c1404746489832', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/yfizgOXU5pmmFAGyrAa56JtXQR5HZMEYPKm_cJklcMM.jpg?width=320&crop=smart&auto=webp&s=ec44d96f5c509494f2129a837c82c2cafb0ffeb8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/yfizgOXU5pmmFAGyrAa56JtXQR5HZMEYPKm_cJklcMM.jpg?width=640&crop=smart&auto=webp&s=c2c6a4a5efe7e83ebcc334a232d2858c0e14e9da', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/yfizgOXU5pmmFAGyrAa56JtXQR5HZMEYPKm_cJklcMM.jpg?width=960&crop=smart&auto=webp&s=c7c4536aa56f834c08c386a5f159efe5028001a3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/yfizgOXU5pmmFAGyrAa56JtXQR5HZMEYPKm_cJklcMM.jpg?width=1080&crop=smart&auto=webp&s=ad3858c34a6518bdd8927ce996fbb08be647bc99', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/yfizgOXU5pmmFAGyrAa56JtXQR5HZMEYPKm_cJklcMM.jpg?auto=webp&s=d3119bb0dd6c2a96262a20400857c088dfa3d3c0', 'width': 1820}, 'variants': {}}]}
Easiest way to create a "customer support" LLM/Chatbot from 20 PDFs
0
I'm a software engineer with minimal AI experience. I'm trying to feed 20 pages of PDF text (HOA documents) into a model that can then answer questions regarding the documents. I've heard fine-tuning is implausible unless you go into G Codelab. Is there a way (or a model) that can be trained easily, say by passing a context of "please remember all of the following [enter pdfs text here]" along with future queries to an out-of-the-box model?
2025-01-11T10:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1hytdvc/easiest_way_to_create_a_customer_support/
elchurnerista
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hytdvc
false
null
t3_1hytdvc
/r/LocalLLaMA/comments/1hytdvc/easiest_way_to_create_a_customer_support/
false
false
self
0
null
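A minimal sketch of the context-stuffing approach the post above asks about, i.e. prepending the extracted PDF text to every query instead of fine-tuning; the file name is hypothetical and the text is assumed to be pre-extracted:

```python
# Sketch of context stuffing: prepend pre-extracted HOA document text
# to each question instead of fine-tuning a model.
# "hoa_documents.txt" is a hypothetical pre-extracted text dump.
from pathlib import Path

doc_text = Path("hoa_documents.txt").read_text()

def build_prompt(question: str) -> str:
    return (
        "Answer using only the HOA documents below.\n\n"
        f"--- DOCUMENTS ---\n{doc_text}\n--- END DOCUMENTS ---\n\n"
        f"Question: {question}"
    )

# The resulting string goes to any out-of-the-box chat model whose
# context window fits the 20 pages of text.
print(build_prompt("When are HOA dues collected?")[:200])
```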
What parameter models can I run on my laptop? (Specs are below)
1
[removed]
2025-01-11T10:46:33
https://www.reddit.com/r/LocalLLaMA/comments/1hytf0e/what_parameter_models_can_i_run_on_my_laptop/
WipeAfterYouShit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hytf0e
false
null
t3_1hytf0e
/r/LocalLLaMA/comments/1hytf0e/what_parameter_models_can_i_run_on_my_laptop/
false
false
self
1
null
LocalGLaDOS running on 2x 4090's - better latency and smarter!
3
2025-01-11T11:13:05
https://www.youtube.com/watch?v=ThRGUJXd5iw
Reddactor
youtube.com
1970-01-01T00:00:00
0
{}
1hytrt5
false
{'oembed': {'author_name': 'David Ng', 'author_url': 'https://www.youtube.com/@davidng7806', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ThRGUJXd5iw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="LocalGLaDOS - real-time interaction demo"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ThRGUJXd5iw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'LocalGLaDOS - real-time interaction demo', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hytrt5
/r/LocalLLaMA/comments/1hytrt5/localglados_running_on_2x_4090s_better_latency/
false
false
https://b.thumbs.redditm…KvIN94Yy4fes.jpg
3
{'enabled': False, 'images': [{'id': 'KDO_LjBdKkDKw2wAGD1aYpHJ_ASqEzM1EHsB5XsGb98', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?width=108&crop=smart&auto=webp&s=5a5447ce1afd80ca2d1200015125a7e1dbae9e5a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?width=216&crop=smart&auto=webp&s=acc98533e6398f9b42f4b1eb12cce680da0d6ab2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?width=320&crop=smart&auto=webp&s=3ce4ca891cbd89dfa15f29ba5ffa968064f42e85', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?auto=webp&s=2719b1fd1018d16384ddc72994d933d2c5dbb10a', 'width': 480}, 'variants': {}}]}
Localhost LLM Benchmark
7
I'm trying to benchmark my self-hosted LLM. I want to run benchmarks like MMLU to evaluate the speed and accuracy of various quants against my GPU limitations. I tried the tool below, but it doesn't hit the API at all:

> [github.com/EleutherAI/lm-evaluation-harness](http://github.com/EleutherAI/lm-evaluation-harness)
> lm_eval --model local-chat-completions --tasks gsm8k --model_args base_url=http://localhost:8000 --apply_chat_template

Any suggestions?
2025-01-11T11:29:56
https://www.reddit.com/r/LocalLLaMA/comments/1hyu07n/localhost_llm_benchmark/
05032-MendicantBias
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyu07n
false
null
t3_1hyu07n
/r/LocalLLaMA/comments/1hyu07n/localhost_llm_benchmark/
false
false
self
7
{'enabled': False, 'images': [{'id': 'UOFOWs2zsxKZ-AnExxAwcWRvGSxZOPB--QF3e3ZpWBM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TzxxgxeTVmQldt6Sd7gr-FcniYYq0sL1rzApuvpR3rE.jpg?width=108&crop=smart&auto=webp&s=38c01862042665da2f9d1713f286f25066546887', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TzxxgxeTVmQldt6Sd7gr-FcniYYq0sL1rzApuvpR3rE.jpg?width=216&crop=smart&auto=webp&s=26501b24ae5aecd519706499d9e755f396b5ce50', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TzxxgxeTVmQldt6Sd7gr-FcniYYq0sL1rzApuvpR3rE.jpg?width=320&crop=smart&auto=webp&s=28a09d79b3d356717e7094f88c40739bba7e73cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TzxxgxeTVmQldt6Sd7gr-FcniYYq0sL1rzApuvpR3rE.jpg?width=640&crop=smart&auto=webp&s=7a69d3b6ab5cfb14aa0b31aaa6248df3c702398e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TzxxgxeTVmQldt6Sd7gr-FcniYYq0sL1rzApuvpR3rE.jpg?width=960&crop=smart&auto=webp&s=eea10a2cacdb88e474decd95b4afad6064b6f18c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TzxxgxeTVmQldt6Sd7gr-FcniYYq0sL1rzApuvpR3rE.jpg?width=1080&crop=smart&auto=webp&s=d4a21b1159ae052aa18c2fa253a6f66d68317ca1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TzxxgxeTVmQldt6Sd7gr-FcniYYq0sL1rzApuvpR3rE.jpg?auto=webp&s=fce21639f5ab3bc5b5775f052885821fee13795e', 'width': 1200}, 'variants': {}}]}
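One quick way to check whether the endpoint in the post above is reachable at all, independent of the harness, is to hit the chat route directly (assuming the local server exposes an OpenAI-compatible API; the model name is a placeholder):

```python
# Hedged sanity check for the local endpoint from the post above,
# assuming the server exposes an OpenAI-compatible chat completions
# route; "local-model" is a placeholder name.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "Say hi."}],
    },
    timeout=30,
)
print(resp.status_code)
print(resp.json())
# If this fails, lm_eval was never going to reach the API either.
```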
LocalGLaDOS - running on a real LLM-rig
171
2025-01-11T11:34:21
https://youtu.be/N-GHKTocDF0
Reddactor
youtu.be
1970-01-01T00:00:00
0
{}
1hyu2dh
false
{'oembed': {'author_name': 'David Ng', 'author_url': 'https://www.youtube.com/@davidng7806', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/N-GHKTocDF0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="LocalGLaDOS - real-time low-latency demo!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/N-GHKTocDF0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'LocalGLaDOS - real-time low-latency demo!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hyu2dh
/r/LocalLLaMA/comments/1hyu2dh/localglados_running_on_a_real_llmrig/
false
false
https://b.thumbs.redditm…UXz1116gdprc.jpg
171
{'enabled': False, 'images': [{'id': 'KDO_LjBdKkDKw2wAGD1aYpHJ_ASqEzM1EHsB5XsGb98', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?width=108&crop=smart&auto=webp&s=5a5447ce1afd80ca2d1200015125a7e1dbae9e5a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?width=216&crop=smart&auto=webp&s=acc98533e6398f9b42f4b1eb12cce680da0d6ab2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?width=320&crop=smart&auto=webp&s=3ce4ca891cbd89dfa15f29ba5ffa968064f42e85', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/EfE2n_bbhcmfaS9RbA5FtQq7jGIahU2UIGm8g-a1Uag.jpg?auto=webp&s=2719b1fd1018d16384ddc72994d933d2c5dbb10a', 'width': 480}, 'variants': {}}]}
Are there any local LLM UIs which support MCP?
0
Background: I've been in software engineering for a really long time. I don't actually write all that much code anymore, but since the advent of AI assistance I really enjoy having side projects again. For anything professional, I use aider + OpenRouter to dev with. It's great and supports that use case very well. However, I actually *really* enjoy using Claude Desktop with an MCP file server for side projects. Just chatting away and having the tool access files + git + whatevs is a very fun experience. I have LMStudio and MSTY down for the odd thing I want to do locally, or to just save me some credits. I'm comfortable with the local toolchain and stack required, but I'd like to be able to modify files in the same way I can with Claude Desktop. Are there any UIs which support that?
2025-01-11T11:40:29
https://www.reddit.com/r/LocalLLaMA/comments/1hyu5dx/are_there_any_local_llm_uis_which_support_mcp/
ChemicalTerrapin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyu5dx
false
null
t3_1hyu5dx
/r/LocalLLaMA/comments/1hyu5dx/are_there_any_local_llm_uis_which_support_mcp/
false
false
self
0
null
Should GPT4all be this slow on my machine?
0
I like to think I have a half-decent machine for this kind of thing. My specs: base RTX 4070, i7-10700, 32 GB RAM (@3200 MHz). But trying to have a conversation with a model, it struggles to spit out more than one word every once in a while. Do I need a more powerful machine?
2025-01-11T11:51:13
https://www.reddit.com/r/LocalLLaMA/comments/1hyuato/should_gpt4all_be_this_slow_on_my_machine/
Direct_Pension_8594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyuato
false
null
t3_1hyuato
/r/LocalLLaMA/comments/1hyuato/should_gpt4all_be_this_slow_on_my_machine/
false
false
self
0
null
awesome-mcp-clients: A collection of MCP clients.
31
2025-01-11T11:59:44
https://github.com/punkpeye/awesome-mcp-clients/
punkpeye
github.com
1970-01-01T00:00:00
0
{}
1hyuf4n
false
null
t3_1hyuf4n
/r/LocalLLaMA/comments/1hyuf4n/awesomemcpclients_a_collection_of_mcp_clients/
false
false
https://a.thumbs.redditm…Jj9BZ6V-T8O4.jpg
31
{'enabled': False, 'images': [{'id': 'g-u_Xi0tRbgwLCxppdYZcsVP2AS2LbQLdNNPohX6CzM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4KBz9ekZU5wO9l68R_foyo2COnEy6xlRXL3HdC_atBs.jpg?width=108&crop=smart&auto=webp&s=d69909b78e13df49a9ae39840b0eea9ddfb6812c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4KBz9ekZU5wO9l68R_foyo2COnEy6xlRXL3HdC_atBs.jpg?width=216&crop=smart&auto=webp&s=ab198d41790a1e65a6d7eb7b5e566280178a60b9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4KBz9ekZU5wO9l68R_foyo2COnEy6xlRXL3HdC_atBs.jpg?width=320&crop=smart&auto=webp&s=4acc204c786d6b7d344262034f34667acd7c057e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4KBz9ekZU5wO9l68R_foyo2COnEy6xlRXL3HdC_atBs.jpg?width=640&crop=smart&auto=webp&s=e6065d85b256d56cfd941b3a9d65722df035a3cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4KBz9ekZU5wO9l68R_foyo2COnEy6xlRXL3HdC_atBs.jpg?width=960&crop=smart&auto=webp&s=d8e1de5d960ac33a898cdca6ad67e58bc1ae634c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4KBz9ekZU5wO9l68R_foyo2COnEy6xlRXL3HdC_atBs.jpg?width=1080&crop=smart&auto=webp&s=ae4cfdc96345ef28a7deb54006a9df8017c78099', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4KBz9ekZU5wO9l68R_foyo2COnEy6xlRXL3HdC_atBs.jpg?auto=webp&s=6c734ee29e224d5c1c60855cdda232d27f3185ec', 'width': 1200}, 'variants': {}}]}
GitHub - tegridydev/dnd-llm-game: MVP of an idea using multiple local LLM models to simulate and play D&D
49
2025-01-11T12:09:03
https://github.com/tegridydev/dnd-llm-game?tab=readme-ov-file
Thistleknot
github.com
1970-01-01T00:00:00
0
{}
1hyukc2
false
null
t3_1hyukc2
/r/LocalLLaMA/comments/1hyukc2/github_tegridydevdndllmgame_mvp_of_an_idea_using/
false
false
https://b.thumbs.redditm…F_YUri0xim-U.jpg
49
{'enabled': False, 'images': [{'id': 'DmKoRrJX40HTMdtHG0F33oN1rN57kp63hDzhG8fZvS8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2MaaUSNtf5DLbq6ZpF876OWYQdcOtASsj6e_pAKWpKY.jpg?width=108&crop=smart&auto=webp&s=c2ff8808d32db6fc522f685df1c603cfbaf64eb1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2MaaUSNtf5DLbq6ZpF876OWYQdcOtASsj6e_pAKWpKY.jpg?width=216&crop=smart&auto=webp&s=cbf8a1c1e7e3de543291c586623f8becb89a4ace', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2MaaUSNtf5DLbq6ZpF876OWYQdcOtASsj6e_pAKWpKY.jpg?width=320&crop=smart&auto=webp&s=643e54b6259ad92f9f4ca55b4be205c011284b02', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2MaaUSNtf5DLbq6ZpF876OWYQdcOtASsj6e_pAKWpKY.jpg?width=640&crop=smart&auto=webp&s=ec156925c4109c28028bb52b1517ca8eb977cd5a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2MaaUSNtf5DLbq6ZpF876OWYQdcOtASsj6e_pAKWpKY.jpg?width=960&crop=smart&auto=webp&s=e42f02385e66779f784a2346f3d6bde1c43f365a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2MaaUSNtf5DLbq6ZpF876OWYQdcOtASsj6e_pAKWpKY.jpg?width=1080&crop=smart&auto=webp&s=c5cd4f5c36e95769c29cc2331bcb7fd7236f65d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2MaaUSNtf5DLbq6ZpF876OWYQdcOtASsj6e_pAKWpKY.jpg?auto=webp&s=7f8a8de11867ef198b9872dd66b612e6260b06c6', 'width': 1200}, 'variants': {}}]}
Is My Understanding of Prefill and Continuous Batch Processing in Transformer Models Correct?
1
[removed]
2025-01-11T12:11:20
https://www.reddit.com/r/LocalLLaMA/comments/1hyulkm/is_my_understanding_of_prefill_and_continuous/
Feisty-Ticket3698
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyulkm
false
null
t3_1hyulkm
/r/LocalLLaMA/comments/1hyulkm/is_my_understanding_of_prefill_and_continuous/
false
false
self
1
null
Qwen2.5 7B Surpasses o1: Microsoft's rStar-Math Makes a Stunning Debut
31
2025-01-11T12:21:01
https://aidisruptionpub.com/p/qwen25-7b-surpasses-o1-microsofts?utm_campaign=post&utm_medium=web
Thistleknot
aidisruptionpub.com
1970-01-01T00:00:00
0
{}
1hyuquy
false
null
t3_1hyuquy
/r/LocalLLaMA/comments/1hyuquy/qwen25_7b_surpasses_o1_microsofts_rstarmath_makes/
false
false
https://b.thumbs.redditm…qREouVQplZeU.jpg
31
{'enabled': False, 'images': [{'id': 'mrFai08AJG8BNDzAJ93ncdbTVx-YCAoWeTBzi9MlgXs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GLvo-SRqBckxLkr4ytLNxZBc9Ym-MKU_dD9qI8DR0SU.jpg?width=108&crop=smart&auto=webp&s=9caae9587df6ae8e311907b2822167b752aff459', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GLvo-SRqBckxLkr4ytLNxZBc9Ym-MKU_dD9qI8DR0SU.jpg?width=216&crop=smart&auto=webp&s=4061dacbbb3b0aa2cfa7515477c4c1d1eb02f823', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GLvo-SRqBckxLkr4ytLNxZBc9Ym-MKU_dD9qI8DR0SU.jpg?width=320&crop=smart&auto=webp&s=8964f7cab24eab26eb48e87366d4bb3ec01cfccc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GLvo-SRqBckxLkr4ytLNxZBc9Ym-MKU_dD9qI8DR0SU.jpg?width=640&crop=smart&auto=webp&s=cb09663062c05bc19de2599defdfda3964032e82', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GLvo-SRqBckxLkr4ytLNxZBc9Ym-MKU_dD9qI8DR0SU.jpg?width=960&crop=smart&auto=webp&s=f2820d99fe81509e66f1118717bfb85b01eefc1d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GLvo-SRqBckxLkr4ytLNxZBc9Ym-MKU_dD9qI8DR0SU.jpg?width=1080&crop=smart&auto=webp&s=9c46ced278b436fbe6ad95d2fc0695105fc70d4e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GLvo-SRqBckxLkr4ytLNxZBc9Ym-MKU_dD9qI8DR0SU.jpg?auto=webp&s=575739fddb3cc620795c4c791e512984e6ad6cc9', 'width': 1200}, 'variants': {}}]}
Are embeddings invariant to translation?
7
Hi. Is the embedding of a sentence close to the embedding of its translation (e.g. EN and DE) in the embedding space? Which embedding models are better at handling multiple languages and at representing the same semantics across multiple languages? (A quick similarity test is sketched after this post.) Many thanks
2025-01-11T12:59:33
https://www.reddit.com/r/LocalLLaMA/comments/1hyvcrg/are_embeddings_invariant_to_translation/
ihatebeinganonymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyvcrg
false
null
t3_1hyvcrg
/r/LocalLLaMA/comments/1hyvcrg/are_embeddings_invariant_to_translation/
false
false
self
7
null
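A quick empirical check for the question above: embed a sentence and its translation with a multilingual model and measure cosine similarity. A minimal sketch, assuming the sentence-transformers package is installed; the checkpoint name and the EN/DE pair are illustrative, not a recommendation:

```python
# Minimal sketch: is a sentence close to its translation in embedding space?
# Assumes sentence-transformers is installed; model choice is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

en = "The weather is nice today."
de = "Das Wetter ist heute schön."

emb = model.encode([en, de], normalize_embeddings=True)

# Similarity near 1.0 suggests the model maps both languages into a shared
# semantic space; markedly lower values indicate language-dependent drift.
print(util.cos_sim(emb[0], emb[1]).item())
```

Running this over a small parallel corpus (a few hundred EN/DE pairs) gives a much more trustworthy picture than a single pair.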
What do you think of AI employees?
64
I am seeing a surge in start-ups and large enterprises building AI employees. A good number of well-endowed start-ups are building AI SDRs, SWEs, marketing agents, and customer success agents. Even Salesforce is working on Agentforce to create no-code Salesforce automation agents. This trend is growing faster than I thought; dozens of start-ups are probably in YC this year. I'm not sure if any of them are in production doing these jobs in the real world, and these agents may require a dozen integrations to be anywhere close to functional. As much as I like LLMs, they still don't seem capable of handling edge cases in real-world jobs. They may be suitable for building automated pipelines for tightly scoped tasks, but replacing humans seems far-fetched. Salesforce Chairman Marc Benioff even commented on not hiring human employees anymore; though it could be their sneaky marketing, it shows their intent. What do you think of AI employees in general, both now and in the future? I would love to hear your thoughts if you're building something similar.
2025-01-11T13:03:35
https://www.reddit.com/r/LocalLLaMA/comments/1hyvfjq/what_do_you_think_of_ai_employees/
SunilKumarDash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyvfjq
false
null
t3_1hyvfjq
/r/LocalLLaMA/comments/1hyvfjq/what_do_you_think_of_ai_employees/
false
false
self
64
null
training STT model for my local language
3
How can I fine-tune an STT model to build an AI voice chat that is able to understand my local language, so I can help my people learn English while they speak to the AI in their own language? (A minimal fine-tuning starting point is sketched after this post.)
2025-01-11T13:05:37
https://www.reddit.com/r/LocalLLaMA/comments/1hyvgtw/training_stt_model_for_my_local_language/
Alive-Professor5944
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyvgtw
false
null
t3_1hyvgtw
/r/LocalLLaMA/comments/1hyvgtw/training_stt_model_for_my_local_language/
false
false
self
3
null
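For the post above, the usual starting point is fine-tuning Whisper on paired (audio, transcript) data. A minimal sketch, assuming Hugging Face transformers and that Whisper's tokenizer covers the target language; the language code ("swahili"), checkpoint size, and data wiring are placeholders:

```python
# Sketch of a Whisper fine-tuning setup; "swahili" is a placeholder for
# your target language, and dataset loading is left to you.
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="swahili", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Force decoding in the target language so fine-tuning focuses on it.
model.generation_config.language = "swahili"
model.generation_config.task = "transcribe"

# Each training example becomes (input_features, labels), roughly:
#   feats  = processor(audio_array, sampling_rate=16000,
#                      return_tensors="pt").input_features
#   labels = processor.tokenizer(transcript_text).input_ids
# From here, wrap the pairs in a Dataset and train with Seq2SeqTrainer.
```

If the language isn't in Whisper's supported set, the usual fallback is extending the tokenizer or fine-tuning a CTC model such as wav2vec2 from a multilingual checkpoint.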
Inference speed difference for eGPU over usb4 vs oculink?
3
Hi everyone! I just had a question regarding the difference in inference speeds with an eGPU over USB4 versus OcuLink. Right now I am planning a new SFF PC build involving a 9800X3D and a 5090, but I'm also thinking about adding an eGPU (maybe a 3090?) in order to be able to run larger models such as Llama 3.3 70B or Qwen 2.5 72B. Now I know a lot of people will tell me to just get a used motherboard with more PCIe slots to save money and some trouble, but unfortunately this is not an option for me, as I need a compact PC for use on the go. Having said that, I was wondering how much of a difference in inference speed there would be between USB4 and OcuLink for the eGPU setup. From my research, it seems that USB4 Gen 1 provides PCIe 3.0 x4, while OcuLink can provide up to PCIe 4.0 x4. Along with this difference in bandwidth, I've also heard that USB4 further throttles itself since the controllers on either side add extra latency, while OcuLink provides a direct PCIe connection. Although just using USB4 would be more convenient, and considering that PCIe bandwidth doesn't matter as much during inference, would USB4 pose a significant bottleneck? (Some back-of-the-envelope numbers are sketched after this post.) Thanks!
2025-01-11T14:29:24
https://www.reddit.com/r/LocalLLaMA/comments/1hyx1al/inference_speed_difference_for_egpu_over_usb4_vs/
jinuoh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyx1al
false
null
t3_1hyx1al
/r/LocalLLaMA/comments/1hyx1al/inference_speed_difference_for_egpu_over_usb4_vs/
false
false
self
3
null
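Some back-of-the-envelope numbers for the bandwidth question above. These are theoretical link rates, not measurements, and USB4 tunneling overhead will push real throughput below them:

```python
# Rough, theoretical numbers only; real-world throughput is lower, and
# USB4's controller hop adds latency on top of the bandwidth gap.
GBPS_PCIE3_X4 = 3.94   # ~usable PCIe 3.0 x4 (USB4-tunneled eGPU)
GBPS_PCIE4_X4 = 7.88   # ~usable PCIe 4.0 x4 (OcuLink)

model_gb = 24          # e.g. filling a 3090's VRAM with weights

print(f"load over USB4:    ~{model_gb / GBPS_PCIE3_X4:.1f} s")
print(f"load over OcuLink: ~{model_gb / GBPS_PCIE4_X4:.1f} s")

# Once the weights are resident in VRAM, single-GPU inference moves only
# small activations and tokens across the link, so the roughly 2x gap
# mostly matters for model loading and for tensor-parallel multi-GPU use.
```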
Form filling agent with llama
0
I have recently seen demos from do browser and the like, which seem to have gotten browser use with agents quite right. I want to build a similar agent that helps me fill forms for internal use - think forms with complexity similar to hotel bookings. But I don't know the best way to implement browser interaction with the agent. Any ideas on what the current open-source SOTA for this is?
2025-01-11T14:30:43
https://www.reddit.com/r/LocalLLaMA/comments/1hyx29q/form_filling_agent_with_llama/
Infinitrix02
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyx29q
false
null
t3_1hyx29q
/r/LocalLLaMA/comments/1hyx29q/form_filling_agent_with_llama/
false
false
self
0
null
What is your favorite front end for coding that connects to a Remote LLM?
0
I've got a separate rig for running code LLMs, and I'd love to know what FE you use with yours when you code, as opposed to storytelling, RP, etc. SillyTavern just doesn't work great for me for these purposes so far. I'm interested in FEs that connect to back-end LLMs via API. Thanks for your opinion and time!
2025-01-11T14:32:47
https://www.reddit.com/r/LocalLLaMA/comments/1hyx3tr/what_is_your_favorite_front_end_for_coding_that/
Phaelon74
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyx3tr
false
null
t3_1hyx3tr
/r/LocalLLaMA/comments/1hyx3tr/what_is_your_favorite_front_end_for_coding_that/
false
false
self
0
null
Is it possible to remotely unload/eject model on LM Studio server through Open WebUI?
1
To preface - I'm using Windows for both my server and my laptop. My LM Studio is version 0.3.6 (build 8), Open WebUI - v0.5.4. I changed the keep-alive setting in Open WebUI to 1 minute to attempt to eject the model - it did not work. I'm not sure what else I can try. Suggestions are welcome, and thank you in advance.
2025-01-11T14:53:02
https://www.reddit.com/r/LocalLLaMA/comments/1hyxiaf/is_it_possible_to_remotely_unloadeject_model_on/
Bowbowjowjow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyxiaf
false
null
t3_1hyxiaf
/r/LocalLLaMA/comments/1hyxiaf/is_it_possible_to_remotely_unloadeject_model_on/
false
false
self
1
null
Best models for M3 Max 128GB Ram?
1
[removed]
2025-01-11T15:10:42
https://www.reddit.com/r/LocalLLaMA/comments/1hyxvmk/best_models_for_m3_max_128gb_ram/
WeirdCityRecords
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyxvmk
false
null
t3_1hyxvmk
/r/LocalLLaMA/comments/1hyxvmk/best_models_for_m3_max_128gb_ram/
false
false
self
1
null
Tweaking settings and choosing a model?
2
I recently [posted](https://www.reddit.com/r/LocalLLaMA/comments/1hxps2y/transcribing_audio_to_latex/) about wanting to transcribe my lectures to LaTeX in a way that would be appropriate for distributing notes to students, i.e. making them read more like a textbook and less like a conversation. I got some really good advice in that thread. I decided to move to Markdown and just use Youtube's auto-generated transcript. What has been working for me with ChatGPT is to prompt first with: "I’m going to paste the transcript from a mathematics lecture. Rewrite it in Markdown. Change the conversational tone and language to written language appropriate for a textbook. Use appropriate mathematical formatting (equations, theorems, proofs, definitions, itemize) throughout the document." then paste the Youtube transcript. It looks very good. But when I try it locally, the best I can get is a short summary of the lecture. I've tried a number of models and context lengths - I've definitely tried solving this myself by experimenting - but I'm flying blind. Can anyone suggest a couple of models and some settings worth trying (context length, batch size, max tokens)? (One likely culprit - the default context window - is sketched after this post.) I'm on an M1 Ultra Mac Studio with 128GB RAM.
2025-01-11T15:28:11
https://www.reddit.com/r/LocalLLaMA/comments/1hyy91g/tweaking_settings_and_choosing_a_model/
AerosolHubris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyy91g
false
null
t3_1hyy91g
/r/LocalLLaMA/comments/1hyy91g/tweaking_settings_and_choosing_a_model/
false
false
self
2
null
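The "short summary instead of a rewrite" symptom in the post above is usually context truncation: many local runtimes default to a 2k-4k token window and silently drop the rest of a long transcript. A minimal sketch, assuming the ollama Python package and a locally pulled model; the model tag and window size are illustrative:

```python
# Sketch: raise the context window so the full transcript fits.
# Assumes the ollama Python package and a locally pulled model.
import ollama

PROMPT = (
    "I'm going to paste the transcript from a mathematics lecture. "
    "Rewrite it in Markdown. Change the conversational tone to written "
    "language appropriate for a textbook. Use appropriate mathematical "
    "formatting (equations, theorems, proofs, definitions) throughout.\n\n"
)

transcript = open("lecture_transcript.txt").read()

resp = ollama.chat(
    model="qwen2.5:32b",  # illustrative; any capable local model
    messages=[{"role": "user", "content": PROMPT + transcript}],
    options={
        "num_ctx": 16384,   # enlarge the window (VRAM/RAM permitting)
        "num_predict": -1,  # don't cap the length of the rewrite
    },
)
print(resp["message"]["content"])
```

If the transcript still exceeds the window, chunking it by lecture section and rewriting chunk-by-chunk tends to beat one giant prompt.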
Nvidia 50x0 cards are not better than their 40x0 equivalents
89
Looking closely at the specs, I found 40x0 equivalents for the new 50x0 cards, except for the 5090. Interestingly, none of the 50x0 cards are as energy efficient as the 40x0 cards. Obviously, GDDR7 is the big reason for the significant boost in memory bandwidth for the 50x0 series. Unless you really need FP4 and DLSS4, there isn't a strong reason to buy the new cards. For the 4070 Super/5070 pair, the former can be 15% faster in prompt processing and the latter 33% faster in inference. If you value prompt processing, it might even make sense to buy the 4070S. As I mentioned in another thread, this gen is more about the memory upgrade than an actual GPU upgrade. |Card|4070 Super|5070|4070Ti Super|5070Ti|4080 Super|5080| |:-|:-|:-|:-|:-|:-|:-| |FP16 TFLOPS|141.93|123.37|176.39|175.62|208.9|225.36| |TDP (W)|220|250|285|300|320|360| |GFLOPS/W|656.12|493.49|618.93|585.39|652.8|626| |VRAM|12GB|12GB|16GB|16GB|16GB|16GB| |Bandwidth (GB/s)|504|672|672|896|736|960| |Price at Launch|$599|$549|$799|$749|$999|$999|
2025-01-11T15:40:50
https://www.reddit.com/r/LocalLLaMA/comments/1hyyils/nvidia_50x0_cards_are_not_better_than_their_40x0/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyyils
false
null
t3_1hyyils
/r/LocalLLaMA/comments/1hyyils/nvidia_50x0_cards_are_not_better_than_their_40x0/
false
false
self
89
null
Need llm suggestion
1
[removed]
2025-01-11T15:46:02
https://www.reddit.com/r/LocalLLaMA/comments/1hyymic/need_llm_suggestion/
blackkksparx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyymic
false
null
t3_1hyymic
/r/LocalLLaMA/comments/1hyymic/need_llm_suggestion/
false
false
self
1
null
Why AI Agents Are Fundamentally Broken: A Programming Paradigm That Actually Works - ToGODer
0
Tell me what you think!
2025-01-11T15:51:12
https://www.reddit.com/r/LocalLLaMA/comments/1hyyqjh/why_ai_agents_are_fundamentally_broken_a/
PussyTermin4tor1337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyyqjh
false
null
t3_1hyyqjh
/r/LocalLLaMA/comments/1hyyqjh/why_ai_agents_are_fundamentally_broken_a/
false
false
self
0
{'enabled': False, 'images': [{'id': '8XeNmPnWaR-IXq_Oc3PxfMTThrQzvQXDZ0GP_8Ee_7Q', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5yvs4WJXEwt03m2DCKNraxPZ7H36_mFL1rMloYM8Gt4.jpg?width=108&crop=smart&auto=webp&s=9e722e8f4d0e5b694e87cf88edd7085a1dfd3c13', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5yvs4WJXEwt03m2DCKNraxPZ7H36_mFL1rMloYM8Gt4.jpg?width=216&crop=smart&auto=webp&s=8b534732042e3a2f552144c01201d730180961b8', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5yvs4WJXEwt03m2DCKNraxPZ7H36_mFL1rMloYM8Gt4.jpg?width=320&crop=smart&auto=webp&s=3b76c06c50c1d20cdb334780d9852e2e8e837e85', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/5yvs4WJXEwt03m2DCKNraxPZ7H36_mFL1rMloYM8Gt4.jpg?width=640&crop=smart&auto=webp&s=37015a1addfd1a7d0b68f60f6db50e913d4e3b23', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/5yvs4WJXEwt03m2DCKNraxPZ7H36_mFL1rMloYM8Gt4.jpg?width=960&crop=smart&auto=webp&s=72ff0481452fe726ff00fc8b4ede041d3c53dbbe', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/5yvs4WJXEwt03m2DCKNraxPZ7H36_mFL1rMloYM8Gt4.jpg?auto=webp&s=5cf8dbacd65fd051d63f12fdd42f9a60504508a1', 'width': 1024}, 'variants': {}}]}
[Mini Rant] Are LLMs trapped in English and the assistant paradigms?
51
Hello everyone, It feels like we’re trapped in two mainstream paradigms, and it’s starting to get on my nerves. Let me explain: **LLMs (too) focused on English** We’re seeing more and more models—Qwen, Mistral, Llama 3.x, etc.—that claim “multilingual” abilities. And if you look closely, everyone approaches the problem differently. However, my empirical scenarios often fail to deliver a good experience with those LLMs, even at a 70B scale. Yes, I understand English reaches the largest audience, but by focusing everything on English, we’re limiting the nuanced cultural and stylistic richness of other languages (French, Spanish, Italian, etc.). As a result, we rarely see new “styles” or modes of reasoning outside of English. **The “assistant” obsession** Everyone wants to build a conversation assistant. Sure, it’s a popular use case, but it kind of locks us into a single format: a Q&A flow with a polite, self-censored style. We forget these are token generators that could be tweaked for creative text manipulation or other forms of generation. I really wish we’d explore more diverse use cases: scenario generation, data-to-text, or other conversation protocols that aren’t so uniform. I understand that model publishers invest significant resources into performing benchmarks and enhancing multilingual capabilities. For instance, Aya Expanse by Cohere For AI represents a notable advancement in this area. Despite these efforts, in real-world scenarios, I’ve never been able to achieve the same level of performance in French as in English with open-source models. Conversely, closed-source models maintain a more consistent performance across languages, which is frustrating because I’d prefer using open-source models. Am I the only one who feels we’re stuck between “big English-only LLMs” and “conversation assistant” paradigms? I think there’s so much potential out there for better multilingual support and more interesting use cases.
2025-01-11T15:52:33
https://www.reddit.com/r/LocalLLaMA/comments/1hyyrml/mini_rant_are_llms_trapped_in_english_and_the/
Worth-Product-5545
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyyrml
false
null
t3_1hyyrml
/r/LocalLLaMA/comments/1hyyrml/mini_rant_are_llms_trapped_in_english_and_the/
false
false
self
51
null
Any local UI for deployed models?
4
I'm deploying different models on different machines, and I normally find myself searching for the URL:port of each deployed model. It's also shared with more people. I'm looking for some sort of UI that shows all deployed models and their URLs. It would be exactly what LiteLLM calls a "[Public Model Hub](https://docs.litellm.ai/docs/proxy/enterprise#public-model-hub)", but that's an enterprise/paid feature. Do you know of any alternative?
2025-01-11T15:54:53
https://www.reddit.com/r/LocalLLaMA/comments/1hyytg9/any_local_ui_for_deployed_models/
xdoso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyytg9
false
null
t3_1hyytg9
/r/LocalLLaMA/comments/1hyytg9/any_local_ui_for_deployed_models/
false
false
self
4
{'enabled': False, 'images': [{'id': 'H6fYCL0IdaUUhXSvGrJA54iiawydndRntwWO9LlIKYQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=108&crop=smart&auto=webp&s=ac6070b75d037bb15743f40c8a66fe13f67c642b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=216&crop=smart&auto=webp&s=b17e73fd3a219a325940c9549ad1aefae5b1334b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=320&crop=smart&auto=webp&s=664fa95c5b6895758ee65df41518dd5a6da5b03f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=640&crop=smart&auto=webp&s=a74352a4508748284da866fcb71e0c50d8e4e791', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=960&crop=smart&auto=webp&s=1f700db7b0b3e49085700c57a28fa103a274524f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?width=1080&crop=smart&auto=webp&s=ceb313cf90522c1467c537a396c7c9790f8a2e8f', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/LkEqF94JWo5iDkDFcufVnNkFtyVHLt24VCEDAt1tw5c.jpg?auto=webp&s=a16a0052684e1581445cc965cf01d8c437e90e7e', 'width': 1200}, 'variants': {}}]}
I wrote optimizers for TensorFlow and Keras
1
[removed]
2025-01-11T16:00:18
https://www.reddit.com/r/LocalLLaMA/comments/1hyyxp3/i_wrote_optimizers_for_tensorflow_and_keras/
NoteDancing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyyxp3
false
null
t3_1hyyxp3
/r/LocalLLaMA/comments/1hyyxp3/i_wrote_optimizers_for_tensorflow_and_keras/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mfhRacdSklWb5mINTnJfRYza4Y3dhTbx_B7fUaCvu1M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/axT5s8ade7OoNEJt9toPSHMWks2QJYSxMRQ34bpuGlw.jpg?width=108&crop=smart&auto=webp&s=fe2da05cb3a803c6ae8b5abc5c07db40904dc50d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/axT5s8ade7OoNEJt9toPSHMWks2QJYSxMRQ34bpuGlw.jpg?width=216&crop=smart&auto=webp&s=eb4cd0fba658ef5ff8316df113e5fd9ccb68ea3d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/axT5s8ade7OoNEJt9toPSHMWks2QJYSxMRQ34bpuGlw.jpg?width=320&crop=smart&auto=webp&s=dabe819c747ad6c729084b7d99f31d3d531b5050', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/axT5s8ade7OoNEJt9toPSHMWks2QJYSxMRQ34bpuGlw.jpg?width=640&crop=smart&auto=webp&s=64d3e23011e418ee994727c28a63433dd4f947c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/axT5s8ade7OoNEJt9toPSHMWks2QJYSxMRQ34bpuGlw.jpg?width=960&crop=smart&auto=webp&s=e0be3f12ea70f0c2a27a1c2ed84f138bd94cbc3d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/axT5s8ade7OoNEJt9toPSHMWks2QJYSxMRQ34bpuGlw.jpg?width=1080&crop=smart&auto=webp&s=2bcf5dabc1dc420bf4984901349e31c45c422c6b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/axT5s8ade7OoNEJt9toPSHMWks2QJYSxMRQ34bpuGlw.jpg?auto=webp&s=45cc0a74546b01413eaed5f1f32eebb50b89636a', 'width': 1200}, 'variants': {}}]}
Is deepseek down?
0
I tried to use the API via OpenRouter and via the DeepSeek API directly, but nothing happens.
2025-01-11T16:40:32
https://www.reddit.com/r/LocalLLaMA/comments/1hyzu2o/is_deepseek_down/
Southern_Cow_6983
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyzu2o
false
null
t3_1hyzu2o
/r/LocalLLaMA/comments/1hyzu2o/is_deepseek_down/
false
false
self
0
null
Mac Mini M4 - 32gb as Local Ai server?
0
I'm starting to use AI pretty heavily for daily work, but I'm a privacy nut. I don't want to send personal information, financial data, etc. anywhere over the internet, so I'm willing to invest in a local setup. But I'm still pretty new to local AI stuff. After a few days of researching the options, I've come to the conclusion that, seeing as I was ready to purchase a second computer anyway, it makes sense to buy a Mac Mini M4 and use it as a local AI server. I should state that I am a Linux guy, and we'll be using either Ubuntu or Fedora on the Mac as a dual boot with Asahi if all goes to plan. That way, the original macOS is still there in case something goes wrong. Of course, if it doesn't work, I can just use macOS, but I'm not proficient with macOS, so Linux would be better. My reasoning is that it costs as much or more to buy a card with 32GB VRAM as it does to buy a Mac Mini M4 with 32GB RAM, which can theoretically do pretty much the same, or close, in terms of speed and capacity. Furthermore, it's good hardware, so the resale value should be pretty good as well, should I ever want to sell. Please tell me if I'm missing something. The reason I ask is that people seldom talk about using a Mac Mini in that way and seem to prefer expensive GPU cards. I should state I don't game whatsoever, and perhaps that colors others' decisions, as almost no one uses a Mac to game, and therefore it might not be as obvious a choice for them. I've never owned a Mac, but it seems like the best solution. Please dummy-check this in case I've missed something.
2025-01-11T16:45:20
https://www.reddit.com/r/LocalLLaMA/comments/1hyzy0s/mac_mini_m4_32gb_as_local_ai_server/
JohannesComstantine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyzy0s
false
null
t3_1hyzy0s
/r/LocalLLaMA/comments/1hyzy0s/mac_mini_m4_32gb_as_local_ai_server/
false
false
self
0
null
Help with setting up AI locally
1
[removed]
2025-01-11T16:57:46
https://www.reddit.com/r/LocalLLaMA/comments/1hz083d/help_with_setting_up_ai_locally/
ConsiderationOwn8548
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz083d
false
null
t3_1hz083d
/r/LocalLLaMA/comments/1hz083d/help_with_setting_up_ai_locally/
false
false
self
1
null
GMK Announces World’s First Mini-PC Based On AMD Ryzen AI 9 Max+ 395 Processor, Availability Will Be In H1 2025
155
2025-01-11T17:15:58
https://wccftech.com/gmk-announces-worlds-first-mini-pc-based-on-amd-ryzen-ai-9-max/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1hz0n8c
false
null
t3_1hz0n8c
/r/LocalLLaMA/comments/1hz0n8c/gmk_announces_worlds_first_minipc_based_on_amd/
false
false
https://a.thumbs.redditm…oYhhyaOR3ye8.jpg
155
{'enabled': False, 'images': [{'id': '4lq_-Wg2c4Ha6TS_DQQV5i3edGtxs_A75MIwmVcm4Kk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/fWekNX9cjJo2NgR6zTyYnqvItoILS5GvTDAQC2foz30.jpg?width=108&crop=smart&auto=webp&s=eec10d2d0d765bf44cc38ebd9c1c5616b1d41e64', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/fWekNX9cjJo2NgR6zTyYnqvItoILS5GvTDAQC2foz30.jpg?width=216&crop=smart&auto=webp&s=9101173052c43089952ca964485866ef67020bcc', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/fWekNX9cjJo2NgR6zTyYnqvItoILS5GvTDAQC2foz30.jpg?width=320&crop=smart&auto=webp&s=1c9c30e74c0e78039a5d40555dd682eadf238ec6', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/fWekNX9cjJo2NgR6zTyYnqvItoILS5GvTDAQC2foz30.jpg?width=640&crop=smart&auto=webp&s=13f97a5793ce6881c43646a9bce53d9dbbf16b98', 'width': 640}, {'height': 536, 'url': 'https://external-preview.redd.it/fWekNX9cjJo2NgR6zTyYnqvItoILS5GvTDAQC2foz30.jpg?width=960&crop=smart&auto=webp&s=62b7a131a7cb841bbe6fb1dd0d63589e35256263', 'width': 960}, {'height': 604, 'url': 'https://external-preview.redd.it/fWekNX9cjJo2NgR6zTyYnqvItoILS5GvTDAQC2foz30.jpg?width=1080&crop=smart&auto=webp&s=62a89e4de4137c2be48d4ff43e67e9ced2e5bfac', 'width': 1080}], 'source': {'height': 604, 'url': 'https://external-preview.redd.it/fWekNX9cjJo2NgR6zTyYnqvItoILS5GvTDAQC2foz30.jpg?auto=webp&s=f8d1cc37d264d3d29c510c7d905a647b42e88e2f', 'width': 1080}, 'variants': {}}]}
Need Advice on Building a $50,000 AI GPU Server for Image Analysis (Prefer Second-Hand Hardware)
0
Hi everyone, I’m planning to train AI models focused on image analysis for security systems. Since I’ll be working with large datasets, a GPU-focused server is essential for my needs. I have a budget of up to $50,000 and prefer second-hand hardware to maximize performance for the cost. I'm not going to work with pre-trained models; I'm planning to build my own transformer model. Do you have any recommendations? Thanks!
2025-01-11T17:32:45
https://www.reddit.com/r/LocalLLaMA/comments/1hz111s/need_advice_on_building_a_50000_ai_gpu_server_for/
alienpro01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz111s
false
null
t3_1hz111s
/r/LocalLLaMA/comments/1hz111s/need_advice_on_building_a_50000_ai_gpu_server_for/
false
false
self
0
null
A possible methodology to train good moral AI.
0
I think, since we don't understand how LLMs do their magic - they are like a black box - that to make them adhere to human thinking and values, we could create an LLM ethics kernel trained only on the best knowledge of ethics, philosophy, human consciousness, psychology, and so on. Adjacent capabilities could then be built on top of that. The ethics LLM would still be a black box, but because we know only good material was put into it, we could rest assured that only good things will come out of it. Good ethics in, good ethics out.
2025-01-11T17:36:08
https://www.reddit.com/r/LocalLLaMA/comments/1hz13t6/a_possible_methodology_to_train_good_moral_ai/
Maleficent-Scene7771
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz13t6
false
null
t3_1hz13t6
/r/LocalLLaMA/comments/1hz13t6/a_possible_methodology_to_train_good_moral_ai/
false
false
self
0
null
What models can I run with decent perf on an Asus G14 (64GiB RAM + 16GiB VRAM) and Flow Z13 (128GiB RAM)?
0
Asus announced new models of the ROG G14 (with 64GiB RAM and an Nvidia 5080 with 16GiB VRAM) and the Flow Z13 (128GiB RAM, no dGPU). I'm planning to get one of them. Which would be better for personal ML/AI workloads? I find 128GiB RAM more appealing, but in my search I didn't find good resources on whether running large models (beyond 70B parameters) at low quantization is practical on 128GiB RAM. In that case, is getting less RAM along with VRAM (offloading some layers to the dGPU for higher performance) the better option? What are the largest models I can run on either of them?
2025-01-11T17:37:11
https://www.reddit.com/r/LocalLLaMA/comments/1hz14nd/what_models_i_can_run_with_decent_perf_on_asus/
ajawadmahmoud
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz14nd
false
null
t3_1hz14nd
/r/LocalLLaMA/comments/1hz14nd/what_models_i_can_run_with_decent_perf_on_asus/
false
false
self
0
null
48gb vs 96gb VRAM for fine-tuning
12
A popular configuration for local hosting is 2x 24GB GPUs (3090s, for example). This would let me run Llama 3.3 at Q4, which I find myself using often. To go beyond this, one either has to add more consumer GPUs, which gets tricky in standard desktops or rackmount cases, or switch to workstation/server GPUs, which can be packed in more efficiently. For someone about to start really getting into fine-tuning models, but who hasn't quite understood when to use QLoRA/LoRA/FFT and who is starting to use larger prompts, I am curious whether it'd be worth it (or necessary) to go the extra mile and get set up for 96GB or more (see the memory-math sketch after this post). Some of my goals include: * Code/data generation - generating documents with a specific syntax (XML-ish) based on prompts. * Domain-specific Q&A * Creative personas and characters for unique dialog. While I plan to spin up instances of 2x 3090 and 2x A6000 to see for myself, I am very interested to hear from others with experience.
2025-01-11T17:43:37
https://www.reddit.com/r/LocalLLaMA/comments/1hz1a01/48gb_vs_96gb_vram_for_finetuning/
salec65
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz1a01
false
null
t3_1hz1a01
/r/LocalLLaMA/comments/1hz1a01/48gb_vs_96gb_vram_for_finetuning/
false
false
self
12
null
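For the memory-math question above, a rough sketch of the QLoRA setup that makes 70B-class fine-tuning feasible on ~48GB. Assumes transformers, peft, and bitsandbytes; the hyperparameters and model name are illustrative defaults, not a recipe:

```python
# QLoRA sketch: 4-bit base weights + small trainable LoRA adapters.
# Hyperparameters and model name are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    quantization_config=bnb,
    device_map="auto",  # shards layers across both GPUs
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# Rough budget: 4-bit 70B weights take roughly 35-40GB, leaving headroom
# on 48GB for adapters, activations, and a modest context. Full fine-tuning
# (16-bit weights + optimizer states) needs hundreds of GB - that, plus
# longer prompts, is the main argument for 96GB or beyond.
```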
Why are YC companies bullish on AI Workflows?
6
I have seen a lot of YC companies building AI workflows, but I haven't seen any real-life use case for them yet. They are very bullish on the idea, but are any companies or individuals actually using these workflows? If yes, what are the use cases?
2025-01-11T17:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1hz1g2z/why_are_yc_companies_bullish_on_ai_workflows/
Sam_Tech1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz1g2z
false
null
t3_1hz1g2z
/r/LocalLLaMA/comments/1hz1g2z/why_are_yc_companies_bullish_on_ai_workflows/
false
false
self
6
null
Models for music?
6
Does anyone use LLMs for writing music, and do you have leads on frameworks and models?
2025-01-11T18:07:39
https://www.reddit.com/r/LocalLLaMA/comments/1hz1u6k/models_for_music/
Vegetable_Sun_9225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz1u6k
false
null
t3_1hz1u6k
/r/LocalLLaMA/comments/1hz1u6k/models_for_music/
false
false
self
6
null
New finetune Negative_LLAMA_70B
65
It's January 2025, and still, there are very few models out there that have successfully tackled LLMs' positivity bias. **LLAMA 3.3** was received in the community with mixed feelings. It is an exceptional assistant, and superb at instruction following (**highest IFEVAL** to date, and by a large margin too). **The problem:** it is very predictable, dry, and of course, plagued with positivity bias like all other LLMs. **Negative\_LLAMA\_70B** is **not** an unalignment-focused model (even though it's pretty uncensored), but it is my attempt to address positivity bias while keeping the exceptional intelligence of the **LLAMA 3.3 70B** base model. Is the base 3.3 smarter than my finetune? I'm pretty sure it is; however, Negative\_LLAMA\_70B is still pretty damn smart. The model was **NOT** overcooked with unalignment, so it won't straight up throw morbid or depressing stuff at you, but if you were to ask it to write a story, or engage in an RP, you would notice **slightly** darker undertones. On a long trip, a character in a story will have aching legs and feel tired; in **Roleplay**, when you seriously piss off a character, it might hit you (without the need to explicitly prompt such behavior in the character card). Also, **toxic-dpo** and other morbid unalignment datasets were **not** used. I did include a private dataset that should allow total freedom in both **Roleplay & Creative writing**, and quite a lot of various assistant-oriented tasks. # TL;DR * Strong **Roleplay & Creative writing** abilities. * **Less positivity bias**. * **Very smart** assistant with **low refusals**. * **Exceptionally good** at following the character card. * Characters feel more **'alive'**, and will occasionally **initiate stuff on their own** (without being prompted to, but fitting to their character). * **Strong ability** to comprehend and roleplay **uncommon physical and mental characteristics**. [https://huggingface.co/SicariusSicariiStuff/Negative\_LLAMA\_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
2025-01-11T18:20:50
https://www.reddit.com/r/LocalLLaMA/comments/1hz254t/new_finetune_negative_llama_70b/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz254t
false
null
t3_1hz254t
/r/LocalLLaMA/comments/1hz254t/new_finetune_negative_llama_70b/
false
false
self
65
{'enabled': False, 'images': [{'id': 'MjSTsN8OjSMXXNkiUTwiq9h_khB7F8U7O4fGzZD7HC4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5vBBCMTBFNim7J_5EwggU5UoIOCz-GGLJardUvMQj3M.jpg?width=108&crop=smart&auto=webp&s=2bbe8eda875c53992067f98b1b2ba6f9473c7c48', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5vBBCMTBFNim7J_5EwggU5UoIOCz-GGLJardUvMQj3M.jpg?width=216&crop=smart&auto=webp&s=06f6f6b21e384c658baac243837c6ad3f24a1945', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5vBBCMTBFNim7J_5EwggU5UoIOCz-GGLJardUvMQj3M.jpg?width=320&crop=smart&auto=webp&s=b5cef1c29567deeaeac87dfa361e19e7dbfee3ce', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5vBBCMTBFNim7J_5EwggU5UoIOCz-GGLJardUvMQj3M.jpg?width=640&crop=smart&auto=webp&s=d723f65e91173d79e877b9ea50951454815ceccd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5vBBCMTBFNim7J_5EwggU5UoIOCz-GGLJardUvMQj3M.jpg?width=960&crop=smart&auto=webp&s=a006d207c87c4eaf15b309ab7b97011cc71bd51d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5vBBCMTBFNim7J_5EwggU5UoIOCz-GGLJardUvMQj3M.jpg?width=1080&crop=smart&auto=webp&s=8e6a9238e7add62e36e5a51acd3febe280ef647b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5vBBCMTBFNim7J_5EwggU5UoIOCz-GGLJardUvMQj3M.jpg?auto=webp&s=cf8ce45d3d0e2a23cad5c4c5cc32f4839dd012e9', 'width': 1200}, 'variants': {}}]}
Bro whaaaat?
5673
2025-01-11T18:24:57
https://i.redd.it/cwi5l2ziqece1.jpeg
Specter_Origin
i.redd.it
1970-01-01T00:00:00
0
{}
1hz28ld
false
null
t3_1hz28ld
/r/LocalLLaMA/comments/1hz28ld/bro_whaaaat/
false
false
https://b.thumbs.redditm…oK_BecT_yGfc.jpg
5673
{'enabled': True, 'images': [{'id': 'zt4n2L0pVJGOG8wUOcU_UrDijOyr-QrhTXvTTNItC4o', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/cwi5l2ziqece1.jpeg?width=108&crop=smart&auto=webp&s=8d1caf0a0f97cbf923135aba54d1faaac05fc49c', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/cwi5l2ziqece1.jpeg?width=216&crop=smart&auto=webp&s=03e8aba5b8fdbd22d9f0ff42f0e2a15f0cdec593', 'width': 216}, {'height': 253, 'url': 'https://preview.redd.it/cwi5l2ziqece1.jpeg?width=320&crop=smart&auto=webp&s=bbb934700a0cb3ca0cc4ff2f0c57797f15154193', 'width': 320}, {'height': 507, 'url': 'https://preview.redd.it/cwi5l2ziqece1.jpeg?width=640&crop=smart&auto=webp&s=a6895d12163dd294798940a5c5b6368da7f91b2f', 'width': 640}, {'height': 760, 'url': 'https://preview.redd.it/cwi5l2ziqece1.jpeg?width=960&crop=smart&auto=webp&s=fddd2510dc5223373e81d4f7ce430d2032c91a01', 'width': 960}, {'height': 856, 'url': 'https://preview.redd.it/cwi5l2ziqece1.jpeg?width=1080&crop=smart&auto=webp&s=55937068c1ff890469445690c748cc14ef2b3044', 'width': 1080}], 'source': {'height': 948, 'url': 'https://preview.redd.it/cwi5l2ziqece1.jpeg?auto=webp&s=544bb4dd98e0611d212793c0385d5586491eae6c', 'width': 1196}, 'variants': {}}]}
I thought we wanted open-source?
0
I was banned from r/ChatGPTJailbreak?
2025-01-11T18:27:55
https://www.reddit.com/r/LocalLLaMA/comments/1hz2b1i/i_thought_we_wanted_opensource/
Jay_Skye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz2b1i
false
null
t3_1hz2b1i
/r/LocalLLaMA/comments/1hz2b1i/i_thought_we_wanted_opensource/
true
false
spoiler
0
null
Good docker compose for vision LLM that fits in 24gb vram?
7
I've been playing around with a 3090 server I have at my house, trying to get a vision-capable and tool-capable model to integrate with Home Assistant, cameras, etc. My homelab setup is a lot of docker-compose containers, and as a medium-competent person I've been trying to just use prebuilt docker images. I've had a hell of a time getting any of the vision models to work on vLLM or Aphrodite. Ollama works, but I want the model to stay in VRAM rather than loading/unloading; I don't like how hacky it is getting that to work with Ollama. Does anybody have a docker compose I could gank from you that would give me an OpenAI-compatible vision/tool model that fits in a 24GB graphics card? Would be greatly appreciated.
2025-01-11T18:31:01
https://www.reddit.com/r/LocalLLaMA/comments/1hz2dj1/good_docker_compose_for_vision_llm_that_fits_in/
LightBrightLeftRight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz2dj1
false
null
t3_1hz2dj1
/r/LocalLLaMA/comments/1hz2dj1/good_docker_compose_for_vision_llm_that_fits_in/
false
false
self
7
null
Why don't we know the researchers behind DeepSeek?
184
Zero interviews, zero social activity. Zero group photos, no About Us page.
2025-01-11T18:48:03
https://www.reddit.com/r/LocalLLaMA/comments/1hz2rar/why_we_dont_know_researchers_behind_deepseek/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz2rar
false
null
t3_1hz2rar
/r/LocalLLaMA/comments/1hz2rar/why_we_dont_know_researchers_behind_deepseek/
false
false
self
184
null
Error in loading Llama 3.2-3B with Unsloth
0
I want to locally fine-tune using my own dataset and then save the Llama 3.2-3B model locally too. I have an Anaconda setup and I'm in the base environment, where I can see clearly that unsloth and huggingface are both installed. However, every time I simply try to access the required model, I get a bunch of errors. Here they are (in image format; please let me know if I should upload them as code/text instead): https://preview.redd.it/othqqjqguece1.png?width=1791&format=png&auto=webp&s=17609b0b568a0272ba65e6a0b1bf83253429fecd https://preview.redd.it/79xtt9dkuece1.png?width=1803&format=png&auto=webp&s=3575c64e124854e60df2cdff793fce1e18ce6eec I have tried to troubleshoot as follows (in order), and nothing has worked, nor has the nature of the error message changed: * Generating an HF token with all accesses * Requesting and getting accepted to access the gated Llama models * Copying the code provided in their GitHub README, the Llama 3.2 1B + 3B notebook, and several YT resources, none of which solved the error Here is the code that's producing the above error (token redacted): ``` from unsloth import FastLanguageModel import torch max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally! dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False. # 4bit pre-quantized models we support for 4x faster downloading + no OOMs. fourbit_models = [ "unsloth/Llama-3.2-1B-bnb-4bit", # NEW! Llama 3.2 models "unsloth/Llama-3.2-1B-Instruct-bnb-4bit", "unsloth/Llama-3.2-3B-bnb-4bit", "unsloth/Llama-3.2-3B-Instruct-bnb-4bit", ] # More models at https://huggingface.co/unsloth model, tokenizer = FastLanguageModel.from_pretrained( model_name = "unsloth/Llama-3.2-3B-Instruct", # or choose "unsloth/Llama-3.2-1B-Instruct" max_seq_length = max_seq_length, dtype = dtype, load_in_4bit = load_in_4bit, token = "hf_***", # redacted - never post a live HF token publicly ) ``` What's wrong? I've been stuck with this for a really long time and couldn't find anyone online with the same error message. Would really appreciate any help!
2025-01-11T18:51:09
https://www.reddit.com/r/LocalLLaMA/comments/1hz2tva/error_in_loading_llama_323b_with_unsloth/
gaylord993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz2tva
false
null
t3_1hz2tva
/r/LocalLLaMA/comments/1hz2tva/error_in_loading_llama_323b_with_unsloth/
false
false
https://a.thumbs.redditm…Ux8g4JN6Aax4.jpg
0
null
Cogent v1 is out! AutoApprove tools , DiffApply tool for large files and faster edits, Improved context size handling
1
[removed]
2025-01-11T19:06:52
https://www.reddit.com/r/LocalLLaMA/comments/1hz36sm/cogent_v1_is_out_autoapprove_tools_diffapply_tool/
mightysoul86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz36sm
false
null
t3_1hz36sm
/r/LocalLLaMA/comments/1hz36sm/cogent_v1_is_out_autoapprove_tools_diffapply_tool/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zICCZFRwKTQ7MpBbZkDwbW9-ZmTX54j_zU5m9uT8TCI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/EilHfPSdS2KnZxXW74CeUAkh-ZGDczoNoCNFisS0BGA.jpg?width=108&crop=smart&auto=webp&s=ee6fa9e7e9e08d5ae21c75256dfc2d9989a2e608', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/EilHfPSdS2KnZxXW74CeUAkh-ZGDczoNoCNFisS0BGA.jpg?width=216&crop=smart&auto=webp&s=41f4dcd79a08f8c45c1ceb94cfebb849028e40f4', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/EilHfPSdS2KnZxXW74CeUAkh-ZGDczoNoCNFisS0BGA.jpg?width=320&crop=smart&auto=webp&s=862355a4e4f276b7d88667bbcc3724fa65c81c1d', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/EilHfPSdS2KnZxXW74CeUAkh-ZGDczoNoCNFisS0BGA.jpg?width=640&crop=smart&auto=webp&s=b3bc135135566e56995228d69549e7dc3cda1a1a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/EilHfPSdS2KnZxXW74CeUAkh-ZGDczoNoCNFisS0BGA.jpg?width=960&crop=smart&auto=webp&s=77aa634606af00c6af4028be95a7d822c0e273e4', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/EilHfPSdS2KnZxXW74CeUAkh-ZGDczoNoCNFisS0BGA.jpg?auto=webp&s=786eff760860cc40c3f8ec3bb7e856d457108d64', 'width': 1024}, 'variants': {}}]}
How do I extract just specific sections from PDFs?
1
I have some PDFs with three recurring components: sections (basically a title followed by some points describing it), notes (same structure as sections), and tables (not images). I want to extract all the sections, along with their bullet points, into a separate PDF (there can be many bullet points, taking up to around 1.5 pages for each "section/notes"-titled block). I wish to do the same for notes. I don't need the tables at all. As you can see in the image, I took a screenshot of the "Notes of chapter 28". I want this part extracted as one entity, so that I can use it in a new PDF where I store the notes of all chapters sequentially. How can I achieve this? (A page-level extraction sketch follows this post.)
2025-01-11T19:11:22
https://i.redd.it/yqiyiwauyece1.jpeg
ShippersAreIdiots
i.redd.it
1970-01-01T00:00:00
0
{}
1hz3akz
false
null
t3_1hz3akz
/r/LocalLLaMA/comments/1hz3akz/how_do_i_extract_just_specific_sections_from_pdfs/
false
false
https://b.thumbs.redditm…rzRzjI818bzQ.jpg
1
{'enabled': True, 'images': [{'id': 'mSRve36B-cCU7TjPejaOsivV560A6bVaTxGOBmmt9rE', 'resolutions': [{'height': 210, 'url': 'https://preview.redd.it/yqiyiwauyece1.jpeg?width=108&crop=smart&auto=webp&s=afb9d851a890c37810f851803e08715de7beecb8', 'width': 108}, {'height': 420, 'url': 'https://preview.redd.it/yqiyiwauyece1.jpeg?width=216&crop=smart&auto=webp&s=e385a7e142d978fa643bdac43c510593254cebed', 'width': 216}, {'height': 623, 'url': 'https://preview.redd.it/yqiyiwauyece1.jpeg?width=320&crop=smart&auto=webp&s=9b8f06e5572b96c4bcc2017bc551c61d86b7a64f', 'width': 320}, {'height': 1246, 'url': 'https://preview.redd.it/yqiyiwauyece1.jpeg?width=640&crop=smart&auto=webp&s=e458e4a45e69ca511e96f840b1cb9ad12ca39326', 'width': 640}], 'source': {'height': 1375, 'url': 'https://preview.redd.it/yqiyiwauyece1.jpeg?auto=webp&s=c3cd182358f8f96f3029f5ce522e3a7c6f24f175', 'width': 706}, 'variants': {}}]}
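One way to approach the extraction above at page granularity, assuming PyMuPDF; the start/end markers below are placeholders to adapt to the actual headings in the PDFs:

```python
# Sketch: copy the pages belonging to a "Notes" block into a new PDF.
# Assumes PyMuPDF (pip install pymupdf); markers are placeholders.
import fitz  # PyMuPDF

src = fitz.open("chapter28.pdf")
out = fitz.open()  # empty output document

collecting = False
for page in src:
    text = page.get_text()
    if "Notes of chapter" in text:                           # crude start marker
        collecting = True
    elif collecting and text.lstrip().startswith("Table"):   # crude end marker
        collecting = False
    if collecting:
        out.insert_pdf(src, from_page=page.number, to_page=page.number)

out.save("notes_only.pdf")
```

For sub-page precision (a notes block that ends mid-page), `page.get_text("blocks")` returns positioned text blocks you can filter before rebuilding the output.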
Technical help
1
[removed]
2025-01-11T19:42:41
https://www.reddit.com/r/LocalLLaMA/comments/1hz3ztn/technical_help/
ExtremeLeft9812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz3ztn
false
null
t3_1hz3ztn
/r/LocalLLaMA/comments/1hz3ztn/technical_help/
false
false
self
1
null
Tesla P40 and no response from nvidia-smi, could it be lack of power?
1
Hey! I finally got around to building my Dell R520/Tesla P40 machine, but I have yet to connect the power cable on the P40 (it's being delivered soon). After following the standard procedure for installing the drivers and NVIDIA stuff, I am still getting nothing from the "nvidia-smi" command. Could this possibly be caused by the power cable not being installed yet? Is the nvidia-smi command unable to read the P40 without the extra power? So far these are the steps I've followed to get the P40 set up for LLMs: 1. Installed Ubuntu 24 2. Updated Ubuntu 3. Installed the proper NVIDIA driver for the P40 4. Installed the CUDA toolkit 5. Installed python3 But I am still getting this: `root@llmserver:~$ nvidia-smi` `No devices were found` The P40 seems to be connected: `root@llmserver:~$ sudo lspci -v | grep -i nvidia` `09:00.0 3D controller: NVIDIA Corporation GP102GL [Tesla P40] (rev a1)` `Subsystem: NVIDIA Corporation GP102GL [Tesla P40]` `Kernel driver in use: nvidia` `Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia` Help! And thank you!
2025-01-11T19:45:57
https://www.reddit.com/r/LocalLLaMA/comments/1hz42h9/tesla_p40_and_no_response_from_nvidiasmi_could_it/
I-heart-java
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz42h9
false
null
t3_1hz42h9
/r/LocalLLaMA/comments/1hz42h9/tesla_p40_and_no_response_from_nvidiasmi_could_it/
false
false
self
1
null
Vision for the Future
3
With NVIDIA's release of Project Digits (and the Jetson Super before it), and several other major players putting resources into local AI hardware, what is the vision for the future when it comes to local AI use? In the limit, do we envision every household equipped with local AI inference? If that's the case, what is the scenario for when you're not in proximity to these devices and need access to the AI? Are they only for use at home, or do we envision a world where people's devices are accessible from outside the home using something like dynamic DNS? Or do we envision people using current cloud infrastructure when away from these AI devices? If the future is local, but local as in doing inference on your phone, your laptop, etc. (without a dedicated AI machine), how does personal context get synced between devices? TLDR: what's your long-term vision for local AI use?
2025-01-11T19:56:06
https://www.reddit.com/r/LocalLLaMA/comments/1hz4arl/vision_for_the_future/
ranoutofusernames__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz4arl
false
null
t3_1hz4arl
/r/LocalLLaMA/comments/1hz4arl/vision_for_the_future/
false
false
self
3
null
Which local LLMs for OCR and Privacy-Friendly Workflows?
0
Hello everyone, I’m looking into different ways to run LLMs locally, for privacy reasons concerning my data. My goal is to perform OCR on PDF documents (some contain plain text, others have text in images or SVGs). Before integrating an LLM into my workflow, are there any OCR tools that can handle text extraction from digital documents? (A two-stage pipeline sketch follows this post.) I’m planning to host an LLM locally on a machine with a GTX 1070 (8GB) and 16GB of RAM. Thanks for your help!
2025-01-11T20:10:03
https://www.reddit.com/r/LocalLLaMA/comments/1hz4mcq/which_oocal_llms_for_ocr_and_privacyfriendly/
Kot4san
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz4mcq
false
null
t3_1hz4mcq
/r/LocalLLaMA/comments/1hz4mcq/which_oocal_llms_for_ocr_and_privacyfriendly/
false
false
self
0
null
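A two-stage pipeline for the PDFs above: read the text layer directly where one exists, and fall back to OCR for image-only pages, keeping the LLM out of the OCR step entirely (friendlier to an 8GB GPU). A minimal sketch, assuming PyMuPDF, pdf2image, and pytesseract (with the poppler and tesseract binaries installed); the length threshold is a heuristic:

```python
# Sketch: extract the text layer when present, OCR otherwise.
# Assumes pymupdf, pdf2image, pytesseract + poppler/tesseract binaries.
import fitz  # PyMuPDF
import pytesseract
from pdf2image import convert_from_path

def extract(pdf_path: str) -> str:
    doc = fitz.open(pdf_path)
    images = None  # rendered lazily, only if OCR is needed
    pages = []
    for i, page in enumerate(doc):
        text = page.get_text().strip()
        if len(text) > 50:  # heuristic: looks like a real text layer
            pages.append(text)
        else:               # likely an image/SVG page -> OCR it
            if images is None:
                images = convert_from_path(pdf_path, dpi=300)
            pages.append(pytesseract.image_to_string(images[i]))
    return "\n\n".join(pages)

print(extract("document.pdf"))
```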
Are there easy ways to download documents from github or websites to use in your RAG?
0
I'm trying to play around with some programming uses, but I would like to include the documentation and best practices for the local LLM I'm experimenting with. The documentation usually just lives on GitHub or a website. Are there ways to screen-scrape this easily into a PDF, or something similar, that would let me use it more easily? (A scraping sketch follows this post.) Thank you for any help.
2025-01-11T20:20:11
https://www.reddit.com/r/LocalLLaMA/comments/1hz4uf4/are_there_easy_ways_to_download_documents_from/
here_for_the_boos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz4uf4
false
null
t3_1hz4uf4
/r/LocalLLaMA/comments/1hz4uf4/are_there_easy_ways_to_download_documents_from/
false
false
self
0
null
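For the question above, a minimal scraping sketch, assuming requests and beautifulsoup4; the URL is a placeholder. For GitHub-hosted docs, cloning the repo and taking the .md files directly is usually cleaner than scraping rendered pages:

```python
# Sketch: pull a docs page and keep only the prose for a RAG index.
# Assumes requests + beautifulsoup4; URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/docs/getting-started"
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Drop navigation and script noise before extracting the main text.
for tag in soup(["script", "style", "nav", "header", "footer"]):
    tag.decompose()

text = soup.get_text(separator="\n", strip=True)
with open("getting-started.txt", "w", encoding="utf-8") as f:
    f.write(text)
```

Most RAG stacks ingest plain text or Markdown happily, so converting to PDF is an unnecessary detour.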
I lost track of how good deepfakes are getting
85
2025-01-11T20:21:28
https://v.redd.it/6easpphcbfce1
TheLogiqueViper
v.redd.it
1970-01-01T00:00:00
0
{}
1hz4vgm
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/6easpphcbfce1/DASHPlaylist.mpd?a=1739218904%2CZjk4YWMzODMzYzlkNDBhMjUwMjMwNjQ2YzA0OGI3MGZjNDc0NGZhNmJhNzNmNTBhZjI1MWU3MDcwMzg5M2JjNg%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/6easpphcbfce1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/6easpphcbfce1/HLSPlaylist.m3u8?a=1739218904%2CYWQ4N2Q2ZjY3MTYyZWJmZDQ2YjRmZjg5YTEwMWM5OTY1MTIxMjg2ZjUwZmYzMmUyMmRhOWZhNjhiYjRkMDJiNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6easpphcbfce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}}
t3_1hz4vgm
/r/LocalLLaMA/comments/1hz4vgm/i_lost_track_of_how_good_deepfakes_are_getting/
false
false
https://external-preview…5e6c1b198717bdad
85
{'enabled': False, 'images': [{'id': 'NWkzYjZ2MGNiZmNlMb2jfsb5QnWJTtG7eE9LYo3ISJ62g2ejiuiPOrO5k-S6', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NWkzYjZ2MGNiZmNlMb2jfsb5QnWJTtG7eE9LYo3ISJ62g2ejiuiPOrO5k-S6.png?width=108&crop=smart&format=pjpg&auto=webp&s=342fce15a36755a53cc9056e1ac6400aff64d6c4', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NWkzYjZ2MGNiZmNlMb2jfsb5QnWJTtG7eE9LYo3ISJ62g2ejiuiPOrO5k-S6.png?width=216&crop=smart&format=pjpg&auto=webp&s=4260c5f4bc27cf56c9b0a37cfdca6fd7ae469635', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/NWkzYjZ2MGNiZmNlMb2jfsb5QnWJTtG7eE9LYo3ISJ62g2ejiuiPOrO5k-S6.png?width=320&crop=smart&format=pjpg&auto=webp&s=f5e7baca309d8aa18117402a8e84f3f804bf5aeb', 'width': 320}], 'source': {'height': 854, 'url': 'https://external-preview.redd.it/NWkzYjZ2MGNiZmNlMb2jfsb5QnWJTtG7eE9LYo3ISJ62g2ejiuiPOrO5k-S6.png?format=pjpg&auto=webp&s=65fa81f0b4a3d150900deff0ee9086b3276bc77a', 'width': 480}, 'variants': {}}]}
Denser Reward for RLHF PPO Training
18
Thrilled to share our recent work "Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model"! In this paper, **we study the granularity of the action space in RLHF PPO training**, assuming only binary preference labels. Our proposal is to **assign a reward to each semantically complete text segment**, rather than per-token (maybe over-granular) or bandit reward (sparse). We further **design techniques to ensure the effectiveness and stability of RLHF PPO training under the denser {segment, token}-level rewards**. **Our segment-level RLHF PPO and its token-level PPO variant outperform bandit PPO** across the AlpacaEval 2, Arena-Hard, and MT-Bench benchmarks under various backbone LLMs (Llama series, Phi series). (A toy illustration of segment-level reward follows this post.) 1. Paper: [https://arxiv.org/pdf/2501.02790](https://arxiv.org/pdf/2501.02790) 2. Benchmark results are available at: [https://github.com/yinyueqin/DenseRewardRLHF-PPO?tab=readme-ov-file#benckmark-results--released-models](https://github.com/yinyueqin/DenseRewardRLHF-PPO?tab=readme-ov-file#benckmark-results--released-models) 3. Method illustration: [https://github.com/yinyueqin/DenseRewardRLHF-PPO/blob/main/method.png](https://github.com/yinyueqin/DenseRewardRLHF-PPO/blob/main/method.png) 4. Code: [https://github.com/yinyueqin/DenseRewardRLHF-PPO](https://github.com/yinyueqin/DenseRewardRLHF-PPO) 5. Prior work on token-level reward models for RLHF: [https://arxiv.org/abs/2306.00398](https://arxiv.org/abs/2306.00398)
2025-01-11T20:27:42
https://www.reddit.com/r/LocalLLaMA/comments/1hz50c0/denser_reward_for_rlhf_ppo_training/
Leading-Contract7979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz50c0
false
null
t3_1hz50c0
/r/LocalLLaMA/comments/1hz50c0/denser_reward_for_rlhf_ppo_training/
false
false
self
18
{'enabled': False, 'images': [{'id': 'WHC4nLHidHPhdkGMGq7w4Ok4WMwOT6WnI-ZRvKmSJq8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GTDCtGilXP44SpaUcS4xSxI-HcawrOSc9BykXfPnel4.jpg?width=108&crop=smart&auto=webp&s=b9885574038727604f6c1b7c0d0adbe4584a2b34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GTDCtGilXP44SpaUcS4xSxI-HcawrOSc9BykXfPnel4.jpg?width=216&crop=smart&auto=webp&s=02b9e85d1e7f21e6f5b21152fac2bd8d4ff063a0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GTDCtGilXP44SpaUcS4xSxI-HcawrOSc9BykXfPnel4.jpg?width=320&crop=smart&auto=webp&s=0600c0f1fe032d1c180d92bd23f2a3fe148c3dc7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GTDCtGilXP44SpaUcS4xSxI-HcawrOSc9BykXfPnel4.jpg?width=640&crop=smart&auto=webp&s=676a94732f3dd5aee6e10d2f56daaefc25af5d21', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GTDCtGilXP44SpaUcS4xSxI-HcawrOSc9BykXfPnel4.jpg?width=960&crop=smart&auto=webp&s=9df6be43515e9dd313b1f388f198df7d0a3c0b38', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GTDCtGilXP44SpaUcS4xSxI-HcawrOSc9BykXfPnel4.jpg?width=1080&crop=smart&auto=webp&s=179ab2fd0c2308464a123abd889551e3233ca561', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GTDCtGilXP44SpaUcS4xSxI-HcawrOSc9BykXfPnel4.jpg?auto=webp&s=d8dd24da6177d15eb4217011cf88d028e438f127', 'width': 1200}, 'variants': {}}]}
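To make the granularity point above concrete, a toy illustration - emphatically not the paper's implementation (see the linked repo for that). Segment boundaries here are naive punctuation splits, and `reward_model` is a stand-in callable:

```python
# Toy illustration of segment-level reward density; not the paper's code.
# reward_model is a stand-in: any callable mapping text -> float.
import re
import torch

def segment_rewards(response: str, reward_model) -> torch.Tensor:
    # Naive split into roughly "semantically complete" segments; the
    # paper defines and learns these boundaries far more carefully.
    segments = [s for s in re.split(r"(?<=[.;!?])\s+", response) if s]
    # One scalar per segment: denser than a single bandit reward at the
    # end of the response, coarser than a reward on every token.
    return torch.tensor([float(reward_model(s)) for s in segments])
```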
Questions on hosting LLM on the new Mac Mini M4 16gb
1
[removed]
2025-01-11T20:37:14
https://www.reddit.com/r/LocalLLaMA/comments/1hz580y/questions_on_hosting_llm_on_the_new_mac_mini_m4/
Darejk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz580y
false
null
t3_1hz580y
/r/LocalLLaMA/comments/1hz580y/questions_on_hosting_llm_on_the_new_mac_mini_m4/
false
false
self
1
null
Tutorial: Run Moondream 2b's new gaze detection on any video
276
2025-01-11T20:42:31
https://v.redd.it/i9ofbftyefce1
ParsaKhaz
/r/LocalLLaMA/comments/1hz5caf/tutorial_run_moondream_2bs_new_gaze_detection_on/
1970-01-01T00:00:00
0
{}
1hz5caf
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/i9ofbftyefce1/DASHPlaylist.mpd?a=1739349757%2CNTZkMDRhY2U4OWY5ZmVkMzY0ODY1ZDlmYmNiMzQzOWYwYzg1NzI4NjIzNmMyNmM3MjE2NGFlNDc2YmM2NmNhYg%3D%3D&v=1&f=sd', 'duration': 240, 'fallback_url': 'https://v.redd.it/i9ofbftyefce1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/i9ofbftyefce1/HLSPlaylist.m3u8?a=1739349757%2CNTBhYzY1YmVkNDhlN2YxNjhkYzc2MmJlZjA2ZGJlZGM5M2FjZGRkYWNmY2JkOTBjNDJlMDJkNDA3MDdmODdjNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/i9ofbftyefce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1hz5caf
/r/LocalLLaMA/comments/1hz5caf/tutorial_run_moondream_2bs_new_gaze_detection_on/
false
false
https://external-preview…8e4455b1d90774b5
276
{'enabled': False, 'images': [{'id': 'a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J.png?width=108&crop=smart&format=pjpg&auto=webp&s=0793a6ece4d05a38a65b8978b0dfd961027a349c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J.png?width=216&crop=smart&format=pjpg&auto=webp&s=d39fcf191b0cf0b5e12f03221750723065f3a664', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J.png?width=320&crop=smart&format=pjpg&auto=webp&s=173a62c74bfae8bc064ba2beb365018de48bc290', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J.png?width=640&crop=smart&format=pjpg&auto=webp&s=63b0f3ba968f658f25414df3625eca86d806ac8d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J.png?width=960&crop=smart&format=pjpg&auto=webp&s=040c693a05c49a6b447f8ae2a42249d089dc51f2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7d1849d3380e5f052228c91e2479eadc1f372a5f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/a2VmczhmdHllZmNlMTF40J1mEmizgXzWsZQRgxJwv14NVEzVGBQqF-uixs9J.png?format=pjpg&auto=webp&s=2480b3024067b06c8fbdc38794112146eabb3a04', 'width': 1920}, 'variants': {}}]}