Dataset schema (from the viewer):
- title: string, lengths 1–300
- score: int64, 0–8.54k
- selftext: string, lengths 0–40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, lengths 0–878
- author: string, lengths 3–20
- domain: string, lengths 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, lengths 646–1.8k
- name: string, length 10
- permalink: string, lengths 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, lengths 4–213
- ups: int64, 0–8.54k
- preview: string, lengths 301–5.01k
Apache 2.0 licensed streaming 8B multimodal model beats GPT-4o in ASR/STT, Claude Sonnet in visual, and Gemini 1.5 Pro in visual and speech - MiniCPM-o 2.6
1
[removed]
2025-01-15T01:12:54
https://www.reddit.com/r/LocalLLaMA/comments/1i1lvpc/apache_20_licensed_streaming_8b_multimodal_model/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1lvpc
false
null
t3_1i1lvpc
/r/LocalLLaMA/comments/1i1lvpc/apache_20_licensed_streaming_8b_multimodal_model/
false
false
self
1
{'enabled': False, 'images': [{'id': '47zIFZcMoq4eLfMpPpx9UsJi5Oq45jaPMLy4-KhnPPk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=108&crop=smart&auto=webp&s=182864ff8445baab94c3baf94f87c914c070fdb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=216&crop=smart&auto=webp&s=167d61400fbd50a227ebcf27a757addebb5b38c3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=320&crop=smart&auto=webp&s=4a713995bcc7da68979a173d6d51f91a0c0d1dd1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=640&crop=smart&auto=webp&s=bf06c624cc0dcbf599f5edea7b4be7e420f634b7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=960&crop=smart&auto=webp&s=c8a17316ff5f86130a715ab4928aa91486aaa2a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=1080&crop=smart&auto=webp&s=f098b938d40335f539be2c35054d1e8aaceec2b1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?auto=webp&s=34c637b0cd1f5a0766fb85d10608f3234ae6d28c', 'width': 1200}, 'variants': {}}]}
minicpm-o 2.6
8
https://preview.redd.it/…a5cbde5b8ff7d6
2025-01-15T01:13:43
https://www.reddit.com/r/LocalLLaMA/comments/1i1lw9r/minicpmo_26/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1lw9r
false
null
t3_1i1lw9r
/r/LocalLLaMA/comments/1i1lw9r/minicpmo_26/
false
false
https://b.thumbs.redditm…TFtnI4IuD3pM.jpg
8
{'enabled': False, 'images': [{'id': '47zIFZcMoq4eLfMpPpx9UsJi5Oq45jaPMLy4-KhnPPk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=108&crop=smart&auto=webp&s=182864ff8445baab94c3baf94f87c914c070fdb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=216&crop=smart&auto=webp&s=167d61400fbd50a227ebcf27a757addebb5b38c3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=320&crop=smart&auto=webp&s=4a713995bcc7da68979a173d6d51f91a0c0d1dd1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=640&crop=smart&auto=webp&s=bf06c624cc0dcbf599f5edea7b4be7e420f634b7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=960&crop=smart&auto=webp&s=c8a17316ff5f86130a715ab4928aa91486aaa2a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?width=1080&crop=smart&auto=webp&s=f098b938d40335f539be2c35054d1e8aaceec2b1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HfgLF9MP4qFrrT-J0Ft0hKLbfWPg5ZEDF194P91AP-U.jpg?auto=webp&s=34c637b0cd1f5a0766fb85d10608f3234ae6d28c', 'width': 1200}, 'variants': {}}]}
Python code for LLM
1
[removed]
2025-01-15T01:16:43
https://www.reddit.com/r/LocalLLaMA/comments/1i1lyjd/python_code_for_llm/
keepmybodymoving
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1lyjd
false
null
t3_1i1lyjd
/r/LocalLLaMA/comments/1i1lyjd/python_code_for_llm/
false
false
self
1
null
Deepseek android app
1
https://play.google.com/store/apps/details?id=com.deepseek.chat
2025-01-15T01:26:43
https://www.reddit.com/r/LocalLLaMA/comments/1i1m5ur/deepseek_android_app/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1m5ur
false
null
t3_1i1m5ur
/r/LocalLLaMA/comments/1i1m5ur/deepseek_android_app/
false
false
self
1
{'enabled': False, 'images': [{'id': 'I0OEx_hxudD-jCQIKr6KVgtU7FI6wz_5szm6Jw4mtDE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fDvKt4ix5Ed5YwQNXmFHwR_kepBF3mwNu_lRDLx35GY.jpg?width=108&crop=smart&auto=webp&s=91a6c51cde476d1831d169ef90510b13fb699058', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/fDvKt4ix5Ed5YwQNXmFHwR_kepBF3mwNu_lRDLx35GY.jpg?width=216&crop=smart&auto=webp&s=8b0600ef82cb2af6a0284fe02dcaf679530e68da', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/fDvKt4ix5Ed5YwQNXmFHwR_kepBF3mwNu_lRDLx35GY.jpg?width=320&crop=smart&auto=webp&s=203f70f825595b643dc8cf2ca6a92fa784f61860', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/fDvKt4ix5Ed5YwQNXmFHwR_kepBF3mwNu_lRDLx35GY.jpg?auto=webp&s=08a7caeca3c5caac882728d27c36fee6a15459b1', 'width': 512}, 'variants': {}}]}
Where are we on consistent clothing in videos?
1
Looking for SOTA. The last update I've seen on this was from AIEntrepreneur almost a year ago, and none of the video platforms seem able to produce something that stays consistent. Is there a model or method out yet that has convinced you? I want to keep the person/shot of the video the same and change only the clothing.
2025-01-15T01:30:58
https://www.reddit.com/r/LocalLLaMA/comments/1i1m8wg/where_are_we_on_consistent_clothing_in_videos/
Nokita_is_Back
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1m8wg
false
null
t3_1i1m8wg
/r/LocalLLaMA/comments/1i1m8wg/where_are_we_on_consistent_clothing_in_videos/
false
false
self
1
null
How to get full reply without extras with an exl2 quant?
1
I am learning how to use exl2 quants. Unlike GGUF, where I can set max_tokens=-1 to get a full reply, it seems I need to set the number of reply tokens explicitly in advance. However, when I set it too high, the reply comes with extra tokens that I don't want. How do I fix this and get a full reply without extras? This is the script I am testing:

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer, Timer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/home/user/Phi-3-mini-128k-instruct-exl2/4.0bpw/"
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=40960, lazy=True)
model.load_autosplit(cache, progress=True)
tokenizer = ExLlamaV2Tokenizer(config)

prompt = "Why was Duke Vladivoj enfeoffed Duchy of Bohemia with the Holy Roman Empire in 1002? Does that mean Duchy of Bohemia was part of the Holy Roman Empire already? If so, when did the Holy Roman Empire acquire Bohemia?"

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

max_new_tokens = 1200  # define once, so the speed printout below doesn't reference an undefined name
with Timer() as t_single:
    output = generator.generate(prompt=prompt, max_new_tokens=max_new_tokens, add_bos=True)
print(output)
print(f"speed, bsz 1: {max_new_tokens / t_single.interval:.2f} tokens/second")
2025-01-15T02:31:44
https://www.reddit.com/r/LocalLLaMA/comments/1i1nf98/how_to_get_full_reply_without_extras_with_an_exl2/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1nf98
false
null
t3_1i1nf98
/r/LocalLLaMA/comments/1i1nf98/how_to_get_full_reply_without_extras_with_an_exl2/
false
false
self
1
null
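A sketch relevant to the exl2 question above (assumptions, not a confirmed exllamav2 recipe): recent ExLlamaV2 builds let `generate()` take stop conditions so generation halts at an end-of-text token instead of padding out to `max_new_tokens`; failing that, the extras can be trimmed on the text side with a small helper that cuts at the earliest stop marker:

```python
def trim_completion(text: str, stop_strings) -> str:
    """Cut generated text at the earliest occurrence of any stop marker."""
    cut = len(text)
    for s in stop_strings:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)  # keep only the text before the first marker found
    return text[:cut].rstrip()

# Example: strip a spurious extra turn the model appended after its answer.
# The marker strings here are illustrative; use your model's actual special tokens.
raw = "Bohemia became an imperial fief in 1002.\n<|user|> follow-up question..."
print(trim_completion(raw, ["<|user|>", "<|end|>"]))
```

This is model-agnostic post-processing; stopping generation early via the generator's own stop conditions is cheaper when the API supports it.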
[2501.08313] MiniMax-01: Scaling Foundation Models with Lightning Attention
55
2025-01-15T02:52:54
https://arxiv.org/abs/2501.08313
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1i1ntmb
false
null
t3_1i1ntmb
/r/LocalLLaMA/comments/1i1ntmb/250108313_minimax01_scaling_foundation_models/
false
false
default
55
null
AI-Powered CrewAI Documentation Assistant! using Crawl4AI and Phi4
0
2025-01-15T03:32:03
https://v.redd.it/8y7edmpru2de1
oridnary_artist
v.redd.it
1970-01-01T00:00:00
0
{}
1i1ojy2
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/8y7edmpru2de1/DASHPlaylist.mpd?a=1739503938%2COWJjNzI0NjJkN2UzZGY0YmM3NzI5YmRmMjM2MjAxNjJhY2M2NmQ5MDFiNmRhNGIwYjU4MTUxYWUyOGMzNzVhNQ%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/8y7edmpru2de1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 598, 'hls_url': 'https://v.redd.it/8y7edmpru2de1/HLSPlaylist.m3u8?a=1739503938%2CZDM2YTJlYTNjMzdkMDgyZGQ5MmM3NDgzODViNmRlZDM0NDE2OTA4YzIwYjZjMmJmN2M3ZjMxNTUyNzhkOGYzMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8y7edmpru2de1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}}
t3_1i1ojy2
/r/LocalLLaMA/comments/1i1ojy2/aipowered_crewai_documentation_assistant_using/
false
false
https://external-preview…f675302a7a5ac9c4
0
{'enabled': False, 'images': [{'id': 'bmphczRucHJ1MmRlMT_cH7DyaCf0TERL2YvW2Fnn_Pl5_XKKOKtWuk0gwpen', 'resolutions': [{'height': 134, 'url': 'https://external-preview.redd.it/bmphczRucHJ1MmRlMT_cH7DyaCf0TERL2YvW2Fnn_Pl5_XKKOKtWuk0gwpen.png?width=108&crop=smart&format=pjpg&auto=webp&s=01a03b54805b1611386e63cac0a468049acd8b00', 'width': 108}, {'height': 269, 'url': 'https://external-preview.redd.it/bmphczRucHJ1MmRlMT_cH7DyaCf0TERL2YvW2Fnn_Pl5_XKKOKtWuk0gwpen.png?width=216&crop=smart&format=pjpg&auto=webp&s=f89e266f8a6c09528775a4b9f02ff3efe9cf7087', 'width': 216}, {'height': 399, 'url': 'https://external-preview.redd.it/bmphczRucHJ1MmRlMT_cH7DyaCf0TERL2YvW2Fnn_Pl5_XKKOKtWuk0gwpen.png?width=320&crop=smart&format=pjpg&auto=webp&s=04f0b32bdea6a23d875d3bba9460af1a43e16a01', 'width': 320}, {'height': 798, 'url': 'https://external-preview.redd.it/bmphczRucHJ1MmRlMT_cH7DyaCf0TERL2YvW2Fnn_Pl5_XKKOKtWuk0gwpen.png?width=640&crop=smart&format=pjpg&auto=webp&s=15ddd8e2ddef7afcf59a66c26b49cd90ce71e0ff', 'width': 640}], 'source': {'height': 888, 'url': 'https://external-preview.redd.it/bmphczRucHJ1MmRlMT_cH7DyaCf0TERL2YvW2Fnn_Pl5_XKKOKtWuk0gwpen.png?format=pjpg&auto=webp&s=1a2775ed490d135842928a4c6dc58948674eef9f', 'width': 712}, 'variants': {}}]}
OpenRouter Users: What feature are you missing?
202
I accidentally built an [OpenRouter alternative](https://glama.ai/gateway). I say accidentally because that wasn't the goal of my project, but as people and companies adopted it, they requested similar features. Over time, I ended up with something that feels like an alternative. Key benefits include integration with the Chat and [MCP ecosystem](https://glama.ai/mcp/servers), more advanced analytics/logging, and reportedly lower latency and greater stability than OpenRouter. Pricing is similar, and we process several billion tokens daily. Having addressed feedback from current users, I'm now looking to the broader community for ideas on where to take the project next. What are your pain points with OpenRouter?
2025-01-15T03:51:27
https://www.reddit.com/r/LocalLLaMA/comments/1i1owp1/openrouter_users_what_feature_are_you_missing/
punkpeye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1owp1
false
null
t3_1i1owp1
/r/LocalLLaMA/comments/1i1owp1/openrouter_users_what_feature_are_you_missing/
false
false
self
202
{'enabled': False, 'images': [{'id': 'UJ4KDiSTFpO9nNxSI7uiY-_Ssbyq0N29-554ErzvaEw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Hu92voq59sIpYD8mTWSLB4kbJbwL4QZzvHJCP_uqHpA.jpg?width=108&crop=smart&auto=webp&s=1cc5f94e33f40fa3514c6c698b5c8d94d2f481fe', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Hu92voq59sIpYD8mTWSLB4kbJbwL4QZzvHJCP_uqHpA.jpg?width=216&crop=smart&auto=webp&s=d67fea1ccd892bb44f351f2b95a7348cf842f683', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Hu92voq59sIpYD8mTWSLB4kbJbwL4QZzvHJCP_uqHpA.jpg?width=320&crop=smart&auto=webp&s=a9a4c71a66fb0dba99cbbe15674df739da33786e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Hu92voq59sIpYD8mTWSLB4kbJbwL4QZzvHJCP_uqHpA.jpg?width=640&crop=smart&auto=webp&s=4d45988a64322d41d2464c6e71dd6cd2943763c7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Hu92voq59sIpYD8mTWSLB4kbJbwL4QZzvHJCP_uqHpA.jpg?width=960&crop=smart&auto=webp&s=700e4e228aaf95917dd89a64bbaaf41ca7ef1984', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Hu92voq59sIpYD8mTWSLB4kbJbwL4QZzvHJCP_uqHpA.jpg?width=1080&crop=smart&auto=webp&s=a76e99ece7677d6cd674caeb1675362c4859f891', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Hu92voq59sIpYD8mTWSLB4kbJbwL4QZzvHJCP_uqHpA.jpg?auto=webp&s=14b2b7a4b90d9b9074b2dd82f2fa3b6256c121f4', 'width': 1200}, 'variants': {}}]}
Running Deepseek V3 with a box of scraps (but not in a cave)
75
I got Deepseek running on a bunch of old 10GB Nvidia P102-100s (GPUs built for mining) on PCIe 1.0 x1 risers, spread across 3 machines, connected via 1Gb LAN and through a firewall! Bought these GPUs for $30 each (not for this purpose, lol). Funnily enough, the hardest part is that llama.cpp wanted enough CPU RAM to load the model before moving it to VRAM. Had to run it at Q2 because of this; will try again at Q4 when I get some more. Speed: a whopping **3.6 T/s.** Considering this setup has literally everything going against it, not half bad really. If you are curious, without the GPUs the CPU server alone starts around 2.4 T/s, but even after 1k tokens it was down to 1.8 T/s. Was only seeing about 30MB/s on the network, but might try upgrading everything to 10G LAN just to see if it matters.
2025-01-15T03:58:00
https://www.reddit.com/r/LocalLLaMA/comments/1i1p0yb/running_deepseek_v3_with_a_box_of_scraps_but_not/
Conscious_Cut_6144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1p0yb
false
null
t3_1i1p0yb
/r/LocalLLaMA/comments/1i1p0yb/running_deepseek_v3_with_a_box_of_scraps_but_not/
false
false
self
75
null
Got Email about Project Digits from NVIDIA which if it materialize would be the right step towards having local AI computing.
0
2025-01-15T04:10:43
https://i.redd.it/3xglz58q13de1.jpeg
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
1i1p92s
false
null
t3_1i1p92s
/r/LocalLLaMA/comments/1i1p92s/got_email_about_project_digits_from_nvidia_which/
false
false
https://b.thumbs.redditm…mZH20qYXyWoE.jpg
0
{'enabled': True, 'images': [{'id': 'kdRiPD_N12VMB1XIL0d7WsFKlhmNZRT2-ULTuu3EKcA', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/3xglz58q13de1.jpeg?width=108&crop=smart&auto=webp&s=649d617a08e94c61caf053f0d82536595e34d0c8', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/3xglz58q13de1.jpeg?width=216&crop=smart&auto=webp&s=ed5b7cc01d2b6202ba8dfa0ee360af70199b2ca7', 'width': 216}, {'height': 260, 'url': 'https://preview.redd.it/3xglz58q13de1.jpeg?width=320&crop=smart&auto=webp&s=40b09d4d899befc418f749fd5028983a6ced05ad', 'width': 320}, {'height': 521, 'url': 'https://preview.redd.it/3xglz58q13de1.jpeg?width=640&crop=smart&auto=webp&s=0b9cf09ae997e9c090f837e91a1a0f76f2d5e222', 'width': 640}], 'source': {'height': 702, 'url': 'https://preview.redd.it/3xglz58q13de1.jpeg?auto=webp&s=b729cbed40790acbc61cf3bfcd5e5abeec25d31b', 'width': 861}, 'variants': {}}]}
Just added support for Phi-4 to MLX Model Manager so you can use it in your Swift applications with just a couple of lines of code.
16
2025-01-15T04:13:12
https://v.redd.it/ay8wkst523de1
Onboto
v.redd.it
1970-01-01T00:00:00
0
{}
1i1palb
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/ay8wkst523de1/DASHPlaylist.mpd?a=1739506408%2CMDRhZTgyYjlkNjhiODk2Y2RmNzdmMDFhZDg4YjY5ZWNjZDU5YzFlNWM3Nzg0YjQyZWViYmRmMjNhMTkyM2I5Yw%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/ay8wkst523de1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 440, 'hls_url': 'https://v.redd.it/ay8wkst523de1/HLSPlaylist.m3u8?a=1739506408%2CYWIzNTczMWUwYzhjOTk2MWM4OGFjZmM1MzMxYTQ4NmExZThmZmYwMWVjNjJlMjc5YmJlMzZhODg3NjljNjkzZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ay8wkst523de1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}}
t3_1i1palb
/r/LocalLLaMA/comments/1i1palb/just_added_support_for_phi4_to_mlx_model_manager/
false
false
https://external-preview…ab636de5c15f6cde
16
{'enabled': False, 'images': [{'id': 'dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu.png?width=108&crop=smart&format=pjpg&auto=webp&s=4d74ca0212356b7f0d9b6bc078cda19ab3f52f5e', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu.png?width=216&crop=smart&format=pjpg&auto=webp&s=c633db00d24b107304f48faaa888a0410c3b001f', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu.png?width=320&crop=smart&format=pjpg&auto=webp&s=a5b49f0b69d29e791d4eef048b81ca1b2d33308e', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu.png?width=640&crop=smart&format=pjpg&auto=webp&s=034be5962d93d7f3f1c2bcaf297c814ce5ad2912', 'width': 640}, {'height': 495, 'url': 'https://external-preview.redd.it/dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu.png?width=960&crop=smart&format=pjpg&auto=webp&s=d2d909d3af3ff2075983ca5cff8f90d291387b88', 'width': 960}, {'height': 556, 'url': 'https://external-preview.redd.it/dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0ea7ab22ce3db7e332521d89d5ad2caf668983d7', 'width': 1080}], 'source': {'height': 660, 'url': 'https://external-preview.redd.it/dHdqaG90dDUyM2RlMaMEkI1tj4yerNzAEj0qZXlRscYhMqjgZV-XtG9Ya7Pu.png?format=pjpg&auto=webp&s=7c01ceda957fcb495cb18364fc3cd79bc3a94ef0', 'width': 1280}, 'variants': {}}]}
German Math Tutor - RAG and/or Fine Tuning?
1
[removed]
2025-01-15T04:24:48
https://www.reddit.com/r/LocalLLaMA/comments/1i1pht5/german_math_tutor_rag_andor_fine_tuning/
DunklerErpel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1pht5
false
null
t3_1i1pht5
/r/LocalLLaMA/comments/1i1pht5/german_math_tutor_rag_andor_fine_tuning/
false
false
self
1
null
Need help setting up LLaMA 7B chat on Windows 11 please, installation errors!
1
[removed]
2025-01-15T04:39:57
https://www.reddit.com/r/LocalLLaMA/comments/1i1pqxb/need_help_setting_up_llama_7b_chat_on_windows_11/
AdOriginal4884
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1pqxb
false
null
t3_1i1pqxb
/r/LocalLLaMA/comments/1i1pqxb/need_help_setting_up_llama_7b_chat_on_windows_11/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gDvKmjrx98qXdHoD8oGDxwDRVpqyE3_BEUZW931TUJM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/80y0Iub7cYNljMbz09eH1MoLA6GNdjC1vGzEBfJS4kQ.jpg?width=108&crop=smart&auto=webp&s=5d2c541d467cca215e7663c8b12d0b70a9965c0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/80y0Iub7cYNljMbz09eH1MoLA6GNdjC1vGzEBfJS4kQ.jpg?width=216&crop=smart&auto=webp&s=a6e87d706215ce094e88ecaeed86559b00dfa686', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/80y0Iub7cYNljMbz09eH1MoLA6GNdjC1vGzEBfJS4kQ.jpg?width=320&crop=smart&auto=webp&s=8f05cc766edb9cd01718964a484c593be9c3f5f4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/80y0Iub7cYNljMbz09eH1MoLA6GNdjC1vGzEBfJS4kQ.jpg?width=640&crop=smart&auto=webp&s=53071fc84d80df636a60714b7fb0d78dbf012805', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/80y0Iub7cYNljMbz09eH1MoLA6GNdjC1vGzEBfJS4kQ.jpg?width=960&crop=smart&auto=webp&s=5abaaf77c9a07e2c32deae6171db3816c1ff25f5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/80y0Iub7cYNljMbz09eH1MoLA6GNdjC1vGzEBfJS4kQ.jpg?width=1080&crop=smart&auto=webp&s=ba1a17bae1c40dc85be79c56f3296d9b8019f957', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/80y0Iub7cYNljMbz09eH1MoLA6GNdjC1vGzEBfJS4kQ.jpg?auto=webp&s=d5c620ee867fa9891b022e8bd381e8b8321a04ca', 'width': 1200}, 'variants': {}}]}
Structured extraction for VLMs
1
[removed]
2025-01-15T05:14:54
https://www.reddit.com/r/LocalLLaMA/comments/1i1qbzc/structured_extraction_for_vlms/
fuzzysingularity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1qbzc
false
null
t3_1i1qbzc
/r/LocalLLaMA/comments/1i1qbzc/structured_extraction_for_vlms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'sgNc-6jdzdT0cenJD9o1JaMPH-9hz4_Mf0RoAxrXzj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GlBW-LBqwom1YDvj0rarr_RGPUC3Tb7PmDHRISnUsJM.jpg?width=108&crop=smart&auto=webp&s=eedd081152a5d3369af0ced03cc716eebcb6ac07', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GlBW-LBqwom1YDvj0rarr_RGPUC3Tb7PmDHRISnUsJM.jpg?width=216&crop=smart&auto=webp&s=cf6df87fa9dbde2fef0e6b8f587797d84544bb8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GlBW-LBqwom1YDvj0rarr_RGPUC3Tb7PmDHRISnUsJM.jpg?width=320&crop=smart&auto=webp&s=0231a19df47d0baf8fc70b0381c7de0b9bd8e7d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GlBW-LBqwom1YDvj0rarr_RGPUC3Tb7PmDHRISnUsJM.jpg?width=640&crop=smart&auto=webp&s=d738b1777e663475737680ca4145153112945439', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GlBW-LBqwom1YDvj0rarr_RGPUC3Tb7PmDHRISnUsJM.jpg?width=960&crop=smart&auto=webp&s=e2714e949a6bca7634f9cb4aaa67f36e9d09f8fd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GlBW-LBqwom1YDvj0rarr_RGPUC3Tb7PmDHRISnUsJM.jpg?width=1080&crop=smart&auto=webp&s=79006890f1b97a04408bbb7414ed10fded4cc00a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GlBW-LBqwom1YDvj0rarr_RGPUC3Tb7PmDHRISnUsJM.jpg?auto=webp&s=9f249faa0141d98fde52f1e379ff5adbdf84452d', 'width': 1200}, 'variants': {}}]}
Megrez-3B-Instruct now available on Ollama
1
[https://www.ollama.com/JollyLlama/Megrez-3B-Instruct](https://www.ollama.com/JollyLlama/Megrez-3B-Instruct) `ollama run JollyLlama/Megrez-3B-Instruct:Q8_0` --- This model was somewhat ignored since the GGUF format wasn't available at the beginning of its release. However, the GGUF is now uploaded to Ollama with a **corrected chat template** (the one on HF doesn't work in Ollama). This is one of the few 3B models with an Apache-2.0 license. You should give it a try if you really care about the license. Otherwise, I found that Qwen2.5-3B performs better than this one for my use case: chat title generation in web UI. Qwen2.5-3B is much more consistent than Megrez-3B. Disclaimer: I'm NOT affiliated with the creators of these models.
2025-01-15T05:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1i1qf67/megrez3binstruct_now_available_on_ollama/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1qf67
false
null
t3_1i1qf67
/r/LocalLLaMA/comments/1i1qf67/megrez3binstruct_now_available_on_ollama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Found this weird thing
1
[removed]
2025-01-15T05:25:08
https://www.reddit.com/r/LocalLLaMA/comments/1i1qhs5/found_this_weird_thing/
VXT7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1qhs5
false
null
t3_1i1qhs5
/r/LocalLLaMA/comments/1i1qhs5/found_this_weird_thing/
false
false
self
1
null
Bad performance with RTX 4060 and 16gb RAM. Help please
1
[removed]
2025-01-15T05:33:12
https://www.reddit.com/r/LocalLLaMA/comments/1i1qmat/bad_performance_with_rtx_4060_and_16gb_ram_help/
ShovvTime13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1qmat
false
null
t3_1i1qmat
/r/LocalLLaMA/comments/1i1qmat/bad_performance_with_rtx_4060_and_16gb_ram_help/
false
false
https://a.thumbs.redditm…iOqwrptsxKp8.jpg
1
null
How often are you using voice with local models?
2
I'm kind of getting sick of typing and have been thinking of setting up a voice mode, either via Whisper integration or a multimodal model. If you are using voice, what's your workflow and what are your use cases? I'm thinking of chat, prompting, and running system commands.
2025-01-15T05:41:07
https://www.reddit.com/r/LocalLLaMA/comments/1i1qqkz/how_often_are_you_using_voice_with_local_models/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1qqkz
false
null
t3_1i1qqkz
/r/LocalLLaMA/comments/1i1qqkz/how_often_are_you_using_voice_with_local_models/
false
false
self
2
null
How many open source LLMs make their whole training data available?
4
When I interact with a chatbot (proprietary like GPT4o and Claude or open source/open weight like Llama 3.3 or QwQ) I often wonder if the model's knowledge of some textual resource derives from them being directly present in the training data or indirectly due to it being discussed in Wikipedia, public forums, secondary literature, etc. Also, I'd like to be able to test to what extent the model is able or unable to quote accurately from texts that I know are present in the training data. Are there many open source models that have their whole corpus of training data publicly available and easily searchable?
2025-01-15T05:42:26
https://www.reddit.com/r/LocalLLaMA/comments/1i1qrar/how_many_open_source_llms_make_their_whole/
Ok-Lengthiness-3988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1qrar
false
null
t3_1i1qrar
/r/LocalLLaMA/comments/1i1qrar/how_many_open_source_llms_make_their_whole/
false
false
self
4
null
Tool to create long answers
1
[removed]
2025-01-15T06:08:21
https://www.reddit.com/r/LocalLLaMA/comments/1i1r5hw/tool_to_create_long_answers/
ZestycloseDrama3173
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1r5hw
false
null
t3_1i1r5hw
/r/LocalLLaMA/comments/1i1r5hw/tool_to_create_long_answers/
false
false
self
1
null
Old facebook with market place add - PVACPA
1
2025-01-15T06:09:37
https://pvacpa.com/product/old-facebook-with-market-place-add/
PleasantFormal8537
pvacpa.com
1970-01-01T00:00:00
0
{}
1i1r63r
false
null
t3_1i1r63r
/r/LocalLLaMA/comments/1i1r63r/old_facebook_with_market_place_add_pvacpa/
false
false
https://b.thumbs.redditm…aanI5AIC9MGE.jpg
1
{'enabled': False, 'images': [{'id': 'Q1VTseMQwLGDiK2BUCvk7s_Bxmch_yA6sbZ87mOByy4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/kCBeXGX91jWLBN8FGO31voXcrFPKHM6Ro5qgCm9KLvg.jpg?width=108&crop=smart&auto=webp&s=5a2ee9c9c17a94ca30558ebb359ab99487d5cad5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/kCBeXGX91jWLBN8FGO31voXcrFPKHM6Ro5qgCm9KLvg.jpg?width=216&crop=smart&auto=webp&s=2c986601834bccc31f4ffd788cce91c6dd399dd7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/kCBeXGX91jWLBN8FGO31voXcrFPKHM6Ro5qgCm9KLvg.jpg?width=320&crop=smart&auto=webp&s=956082d044fd570b2990372540fe7692d4296033', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/kCBeXGX91jWLBN8FGO31voXcrFPKHM6Ro5qgCm9KLvg.jpg?auto=webp&s=1ffc90330ad5173b99b446cec46100fe74a13ac0', 'width': 500}, 'variants': {}}]}
facebook with market place add - PVACPA
1
2025-01-15T06:10:52
https://pvacpa.com/product/facebook-with-market-place-add/
PleasantFormal8537
pvacpa.com
1970-01-01T00:00:00
0
{}
1i1r6r3
false
null
t3_1i1r6r3
/r/LocalLLaMA/comments/1i1r6r3/facebook_with_market_place_add_pvacpa/
false
false
https://a.thumbs.redditm…pk90pR1rcVQ0.jpg
1
{'enabled': False, 'images': [{'id': 'tCzjF-pfNobY5kmtENcKAwn2mvrOhr3dj_bxrF2czNw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uiHOAq5XeP6c_AJzcDkEG9RK-nQKqUkBR1Eq_vOR9_w.jpg?width=108&crop=smart&auto=webp&s=850e6e9f100356ef89f8b1c7f655f298470c118a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uiHOAq5XeP6c_AJzcDkEG9RK-nQKqUkBR1Eq_vOR9_w.jpg?width=216&crop=smart&auto=webp&s=b3acbd5bfe8626525cc0ca599a87cd8cacd3612d', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/uiHOAq5XeP6c_AJzcDkEG9RK-nQKqUkBR1Eq_vOR9_w.jpg?width=320&crop=smart&auto=webp&s=03e6144b59bd433c75415d6b984c15d3cf9a18a8', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/uiHOAq5XeP6c_AJzcDkEG9RK-nQKqUkBR1Eq_vOR9_w.jpg?auto=webp&s=dc53ed21764a2809f7c32024473a0e6e46bb0e17', 'width': 500}, 'variants': {}}]}
USA facebook account - PVACPA
1
2025-01-15T06:11:04
https://pvacpa.com/product/usa-facebook-account/
PleasantFormal8537
pvacpa.com
1970-01-01T00:00:00
0
{}
1i1r6v5
false
null
t3_1i1r6v5
/r/LocalLLaMA/comments/1i1r6v5/usa_facebook_account_pvacpa/
false
false
https://b.thumbs.redditm…ncJomkAXXd8U.jpg
1
{'enabled': False, 'images': [{'id': 'Tti04foBCRBJY2Er5k1jZqY6hrOHcjT_fyFQJKBYv7U', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/u4FAwkR-kCdkskkgl1gcPl9-Q51pXVmnvZRe7_hoShQ.jpg?width=108&crop=smart&auto=webp&s=3eea27a2fd1cbd71b1a7c400d80f5fc7a84bb9a1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/u4FAwkR-kCdkskkgl1gcPl9-Q51pXVmnvZRe7_hoShQ.jpg?width=216&crop=smart&auto=webp&s=5193a00a3909d52bd39e233f9365cb881906c6b3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/u4FAwkR-kCdkskkgl1gcPl9-Q51pXVmnvZRe7_hoShQ.jpg?width=320&crop=smart&auto=webp&s=f736e002f137f5158564e2670dfd033d452d44c8', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/u4FAwkR-kCdkskkgl1gcPl9-Q51pXVmnvZRe7_hoShQ.jpg?auto=webp&s=fcb804ea092451fa055ab923ca4420ecfdc82d1c', 'width': 500}, 'variants': {}}]}
Buy Cashapp account - PVACPA
1
2025-01-15T06:11:24
https://pvacpa.com/product/buy-cashapp-account/
PleasantFormal8537
pvacpa.com
1970-01-01T00:00:00
0
{}
1i1r71g
false
null
t3_1i1r71g
/r/LocalLLaMA/comments/1i1r71g/buy_cashapp_account_pvacpa/
false
false
https://b.thumbs.redditm…eWQTYDWfJ-0Y.jpg
1
{'enabled': False, 'images': [{'id': 'H0ol554m1j1JTGAP2CKBD0c36Mx7xE9FtsFgD8xtU1A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/lKYXGDJWDjkO6Db15NP5Mi-n_PTCGKpLtFFkpZt8UUM.jpg?width=108&crop=smart&auto=webp&s=7523fecd8b6fe7a3220d73fca4684206002b103d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/lKYXGDJWDjkO6Db15NP5Mi-n_PTCGKpLtFFkpZt8UUM.jpg?width=216&crop=smart&auto=webp&s=319e313955b1cae7fc3086174b3517562a30dc3b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/lKYXGDJWDjkO6Db15NP5Mi-n_PTCGKpLtFFkpZt8UUM.jpg?width=320&crop=smart&auto=webp&s=9a7904b343c260b583ec66a3c920e1f4257266d0', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/lKYXGDJWDjkO6Db15NP5Mi-n_PTCGKpLtFFkpZt8UUM.jpg?auto=webp&s=c03d3783e554353aec2fc2b5a7f87a12ba28a9bf', 'width': 500}, 'variants': {}}]}
Framework to create long answers by editing LLM responses
1
[removed]
2025-01-15T06:13:27
https://www.reddit.com/r/LocalLLaMA/comments/1i1r84a/framework_to_create_long_answers_by_editing_llm/
random_nlp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1r84a
false
null
t3_1i1r84a
/r/LocalLLaMA/comments/1i1r84a/framework_to_create_long_answers_by_editing_llm/
false
false
self
1
null
New InternLM Release: InternLM3-8B-Instruct
1
[removed]
2025-01-15T06:16:56
https://www.reddit.com/r/LocalLLaMA/comments/1i1r9x1/new_internlm_release_internlm38binstruct/
Many_SuchCases
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1r9x1
false
null
t3_1i1r9x1
/r/LocalLLaMA/comments/1i1r9x1/new_internlm_release_internlm38binstruct/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Y4LCwfHlLsaI48U9reo5Ii9ZN8AFDg5mdKkxRaOyt2c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=108&crop=smart&auto=webp&s=1b8b7dc16f77b6f48c7dac7be4cb1945366bbe93', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=216&crop=smart&auto=webp&s=837bb54f7fc24333021cc0e61c6640f5d97c339e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=320&crop=smart&auto=webp&s=aaf6b131d61bbea03bd6ec9968344625200c7e38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=640&crop=smart&auto=webp&s=8e72346c1336c3a6efdea458abaad1a7312317cb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=960&crop=smart&auto=webp&s=258c8e8c6e889091c768c4614b3e20172c663eb1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=1080&crop=smart&auto=webp&s=f5d8bf8c7e63b65ae10a85b92bfabfc056030ef7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?auto=webp&s=0927d8a4d994f2dab5b92e355604a207bc9159ef', 'width': 1200}, 'variants': {}}]}
New Release: InternLM3-8B-Instruct
1
[removed]
2025-01-15T06:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1i1rb34/new_release_internlm38binstruct/
Many_SuchCases
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1rb34
false
null
t3_1i1rb34
/r/LocalLLaMA/comments/1i1rb34/new_release_internlm38binstruct/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Y4LCwfHlLsaI48U9reo5Ii9ZN8AFDg5mdKkxRaOyt2c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=108&crop=smart&auto=webp&s=1b8b7dc16f77b6f48c7dac7be4cb1945366bbe93', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=216&crop=smart&auto=webp&s=837bb54f7fc24333021cc0e61c6640f5d97c339e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=320&crop=smart&auto=webp&s=aaf6b131d61bbea03bd6ec9968344625200c7e38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=640&crop=smart&auto=webp&s=8e72346c1336c3a6efdea458abaad1a7312317cb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=960&crop=smart&auto=webp&s=258c8e8c6e889091c768c4614b3e20172c663eb1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=1080&crop=smart&auto=webp&s=f5d8bf8c7e63b65ae10a85b92bfabfc056030ef7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?auto=webp&s=0927d8a4d994f2dab5b92e355604a207bc9159ef', 'width': 1200}, 'variants': {}}]}
Can something be done about the hyper-vigilant auto-moderator on this sub?
1
[removed]
2025-01-15T06:25:54
https://www.reddit.com/r/LocalLLaMA/comments/1i1reml/can_something_be_done_about_the_hypervigilant/
Many_SuchCases
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1reml
false
null
t3_1i1reml
/r/LocalLLaMA/comments/1i1reml/can_something_be_done_about_the_hypervigilant/
false
false
self
1
null
New model....
224
2025-01-15T06:29:44
https://i.redd.it/curwy8vkq3de1.png
Many_SuchCases
i.redd.it
1970-01-01T00:00:00
0
{}
1i1rgn9
false
null
t3_1i1rgn9
/r/LocalLLaMA/comments/1i1rgn9/new_model/
false
false
https://b.thumbs.redditm…HW2waWgw55Wc.jpg
224
{'enabled': True, 'images': [{'id': 'AAuhHUjFdSNCouyBu20gY4mmFw7zkttn_b3nnZuu8gc', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/curwy8vkq3de1.png?width=108&crop=smart&auto=webp&s=d1439b672c14f98c8273d9e966f8cfe8578b7a45', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/curwy8vkq3de1.png?width=216&crop=smart&auto=webp&s=2db794c52ca6f3563839a9ba2332a5988280c899', 'width': 216}, {'height': 83, 'url': 'https://preview.redd.it/curwy8vkq3de1.png?width=320&crop=smart&auto=webp&s=f5aa2eb750ca567b9f77f9c2507a836dbfc5ccd2', 'width': 320}, {'height': 166, 'url': 'https://preview.redd.it/curwy8vkq3de1.png?width=640&crop=smart&auto=webp&s=a262daf4d3df2ddd46d444593d7171ed6352a6c5', 'width': 640}], 'source': {'height': 194, 'url': 'https://preview.redd.it/curwy8vkq3de1.png?auto=webp&s=640df1fe696ae14a43910953bb4961703e535351', 'width': 746}, 'variants': {}}]}
What is the technical difference between system prompt and user prompt?
1
[removed]
2025-01-15T06:50:12
https://www.reddit.com/r/LocalLLaMA/comments/1i1rqv6/what_is_the_technical_difference_between_system/
Interesting_Bath815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1rqv6
false
null
t3_1i1rqv6
/r/LocalLLaMA/comments/1i1rqv6/what_is_the_technical_difference_between_system/
false
false
self
1
null
Agents
1
[removed]
2025-01-15T06:51:36
https://www.reddit.com/r/LocalLLaMA/comments/1i1rrkw/agents/
Some-Conversation517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1rrkw
false
null
t3_1i1rrkw
/r/LocalLLaMA/comments/1i1rrkw/agents/
false
false
self
1
null
Not exactly an exclusively local LM question
1
Let's say I have 100,000 research papers I've stripped down to a sanitized group of .md files. If I'm looking for a series of words that repeats across 100,000 files and want to train a language model on it, what's the term I need to be using to generate relationship correlation and keep the data coherent? I'm just bored with my job and doing some side projects that may help us out down the line. Basically, I want a local language model that can refer to these papers specifically when a question is asked. Probably an incredibly difficult task, yes?
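The workflow being described sounds like term co-occurrence analysis as a precursor to retrieval-augmented generation (RAG). A minimal sketch of counting cross-document term co-occurrence over plain-text files (the vocabulary and documents here are illustrative, not from the actual corpus):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs, vocab):
    """Count how often each pair of vocabulary terms appears in the same document."""
    pairs = Counter()
    for text in docs:
        # Terms from the vocabulary present in this document, in a stable order.
        present = sorted({t for t in vocab if t in text.lower()})
        for a, b in combinations(present, 2):
            pairs[(a, b)] += 1
    return pairs

docs = [
    "Transformer attention scales quadratically.",
    "Attention heads in the transformer specialize.",
    "Convolutional networks use pooling.",
]
counts = cooccurrence(docs, {"transformer", "attention", "pooling"})
```

At 100,000 documents one would swap the substring check for a proper tokenizer and an inverted index, but the co-occurrence matrix this produces is exactly the "relationship correlation" structure that embedding-based retrieval then approximates.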
2025-01-15T07:02:43
https://www.reddit.com/r/LocalLLaMA/comments/1i1rx8k/not_exactly_an_exclusively_local_lm_question/
OccasionllyAsleep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1rx8k
false
null
t3_1i1rx8k
/r/LocalLLaMA/comments/1i1rx8k/not_exactly_an_exclusively_local_lm_question/
false
false
self
1
null
What’s the best framework or tool for building and managing multi-agent AI systems?
1
I’m exploring solutions for a project that involves integrating multiple models and ensuring smooth collaboration between them. What frameworks or tools do you recommend for building systems where multiple AI agents collaborate effectively? I'm particularly interested in solutions that allow seamless integration with diverse models (open-source and commercial) and focus on scalability. It’d be great to hear about the tools you’ve used, their strengths, and any challenges you faced.
2025-01-15T07:11:13
https://www.reddit.com/r/LocalLLaMA/comments/1i1s1d0/whats_the_best_framework_or_tool_for_building_and/
iamnotdeadnuts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1s1d0
false
null
t3_1i1s1d0
/r/LocalLLaMA/comments/1i1s1d0/whats_the_best_framework_or_tool_for_building_and/
false
false
self
1
null
Empowering AI for All: How Local Models Champion True Democratization
1
[removed]
2025-01-15T07:16:50
https://www.reddit.com/r/LocalLLaMA/comments/1i1s41a/empowering_ai_for_all_how_local_models_champion/
XinmingWong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1s41a
false
null
t3_1i1s41a
/r/LocalLLaMA/comments/1i1s41a/empowering_ai_for_all_how_local_models_champion/
false
false
self
1
null
Calculating TTFT using the normal vllm generate function.
1
[removed]
2025-01-15T07:20:36
https://www.reddit.com/r/LocalLLaMA/comments/1i1s5uc/calculating_ttft_using_the_normal_vllm_generate/
Mehdi135849
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1s5uc
false
null
t3_1i1s5uc
/r/LocalLLaMA/comments/1i1s5uc/calculating_ttft_using_the_normal_vllm_generate/
false
false
self
1
null
🍒 Cherry Studio: A Desktop Client Supporting Multi-Model Services, Designed for Professionals
5
🍒 **Cherry Studio: A Desktop Client Supporting Multi-Model Services, Designed for Professionals** Cherry Studio is a powerful desktop client built for professionals, featuring over 30 industry-specific intelligent assistants to help users enhance productivity across a variety of scenarios. **Aggregated Model Services** Cherry Studio integrates numerous service providers, offering access to over 300 large language models. You can seamlessly switch between models during usage, leveraging the strengths of each model to solve problems efficiently. For details on the integrated providers, refer to the configuration page. https://preview.redd.it/z0ocnljrz3de1.png?width=1536&format=png&auto=webp&s=cb732d73493e09a1f2b3c62c52a6e315681f7801 **Cross-Platform Compatibility for a Seamless Experience** Cherry Studio supports both Windows and macOS operating systems, with plans to expand to mobile platforms in the future. This means no matter what device you use, you can enjoy the convenience Cherry Studio brings. Say goodbye to platform restrictions and fully explore the potential of GPT technology! https://preview.redd.it/or6yogatz3de1.png?width=1536&format=png&auto=webp&s=fa7e6b69d1264996f551754aeb6e06ee1940a893 **Tailored for Diverse Professionals** Cherry Studio is designed to meet the needs of various industries utilizing GPT technology. Whether you are a developer coding away, a designer seeking inspiration, or a writer crafting stories, Cherry Studio can be your reliable assistant. With advanced natural language processing, it helps you tackle challenges like data analysis, text generation, and code writing effortlessly. https://preview.redd.it/khzjl29uz3de1.png?width=1536&format=png&auto=webp&s=6cbe6c70431ece12dc388357c97b59357d6905c7 **Rich Application Scenarios to Inspire Creativity** • **Developer’s Coding Partner:** Generate and debug code efficiently with Cherry Studio. • **Designer’s Creative Tool:** Produce creative text and design descriptions to spark ideas. 
• **Writer’s Trusted Assistant:** Assist with drafting and editing articles for a smoother writing process. **Built-in Translation Assistant:** Break language barriers with ease. https://preview.redd.it/y250tqtvz3de1.png?width=1536&format=png&auto=webp&s=5a214a85bd0447096f38cf31e1a9a316653070b7 **Standout Features Driving Innovation** • **Open-Source Spirit:** Cherry Studio offers open-source code, encouraging users to customize and expand their personalized GPT assistant. • **Continuous Updates:** The latest version, v0.4.4, is now available, with developers committed to enhancing functionality and user experience. • **Minimalist Design:** An intuitive interface ensures you can focus on your creations. • **Efficient Workflow:** Quickly switch between models to find the best solutions. • **Smart Conversations:** AI-powered session naming keeps your chat history organized for easy review. • **Drag-and-Drop Sorting:** Sort agents, conversations, or settings effortlessly for better organization. • **Worry-Free Translation:** Built-in intelligent translation covers major languages for accurate cross-language communication. • **Multi-Language Support:** Designed for global users, breaking language barriers with GPT technology. • **Theme Switching:** Day and night modes ensure an enjoyable visual experience at any time. **Getting Started with Cherry Studio** Using Cherry Studio is simple. Follow these steps to embark on your GPT journey: 1. Download the version for your system. 2. Install and launch the client. 3. Follow the on-screen instructions. 4. Explore powerful features. 5. Adjust settings as needed. 6. Join the community to share experiences with other users. Cherry Studio is not just software—it’s your gateway to the boundless possibilities of GPT technology. By simplifying complex technology into user-friendly tools, it empowers everyone to harness the power of GPT with ease. 
Whether you are a tech expert or a casual user, Cherry Studio will bring unparalleled convenience to your work and life. **Download Cherry Studio now and begin your intelligent journey!** [https://github.com/CherryHQ/cherry-studio](https://github.com/CherryHQ/cherry-studio)
2025-01-15T07:22:14
https://www.reddit.com/r/LocalLLaMA/comments/1i1s6n5/cherry_studio_a_desktop_client_supporting/
XinmingWong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1s6n5
false
null
t3_1i1s6n5
/r/LocalLLaMA/comments/1i1s6n5/cherry_studio_a_desktop_client_supporting/
false
false
https://b.thumbs.redditm…uZtlxzy-nIVk.jpg
5
{'enabled': False, 'images': [{'id': '5rmv8mXWa0BF4CpKttjXP5nzzvmqc06jnSEhKNZDqQ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r1Yo3XfvNwik4F4JitQEcSvauLBFZhasaAJAW6INyI0.jpg?width=108&crop=smart&auto=webp&s=5b6d500e900de82ee908a71d6b49d121210c468b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r1Yo3XfvNwik4F4JitQEcSvauLBFZhasaAJAW6INyI0.jpg?width=216&crop=smart&auto=webp&s=f1f9dcfa5e8fe89d50ede690e5b2c9c29c62d910', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r1Yo3XfvNwik4F4JitQEcSvauLBFZhasaAJAW6INyI0.jpg?width=320&crop=smart&auto=webp&s=4def15573100dff412d141f573ea996683fc7122', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r1Yo3XfvNwik4F4JitQEcSvauLBFZhasaAJAW6INyI0.jpg?width=640&crop=smart&auto=webp&s=1fbe1a18a6531f32c71769138c9ec787488988aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r1Yo3XfvNwik4F4JitQEcSvauLBFZhasaAJAW6INyI0.jpg?width=960&crop=smart&auto=webp&s=b57e66c783e806400aa422d6d90588a7d7a2c741', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r1Yo3XfvNwik4F4JitQEcSvauLBFZhasaAJAW6INyI0.jpg?width=1080&crop=smart&auto=webp&s=2373f9139853749c5539cf6ecc82abf1b26a5650', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r1Yo3XfvNwik4F4JitQEcSvauLBFZhasaAJAW6INyI0.jpg?auto=webp&s=1ef71761a8f1cbf162c6e4ff37aff8a679f5061c', 'width': 1200}, 'variants': {}}]}
Company has plans to add external gpu memory
15
https://blocksandfiles.com/2025/01/13/panmnesia-gpu-cxl-memory-expansion/ https://www.archyde.com/panmnesia-wins-ces-award-for-gpu-cxl-memory-expansion-technology-blocks-and-files/ This looks pretty cool, though it's not yet meant for home use, as I think they're targeting server stacks first. I hope we get a retail version of this! It sounds like they're at the proof-of-concept stage, so maybe 2026 will be interesting. If more companies can train much more cheaply, we might get way more open-source models. A lot of it is over my head, but it sounds like they are essentially just connecting SSDs and DDR to GPUs, creating a unified memory space that the GPU sees. Wish the articles had more memory bandwidth and sizing specs.
2025-01-15T09:21:01
https://www.reddit.com/r/LocalLLaMA/comments/1i1tosi/company_has_plans_to_add_external_gpu_memory/
mindwip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1tosi
false
null
t3_1i1tosi
/r/LocalLLaMA/comments/1i1tosi/company_has_plans_to_add_external_gpu_memory/
false
false
self
15
{'enabled': False, 'images': [{'id': 'jxTmioBM5dxZr3r8xvhBUQcVRToJ_c3VwZjcJEixZco', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/WnWUz1SjRU8Z2Zc7TqGV2Y2q0ZEC2dxL97bEYgDhV7U.jpg?width=108&crop=smart&auto=webp&s=7c32551b7c6e7a36af7339e04e44bb13491cbe88', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/WnWUz1SjRU8Z2Zc7TqGV2Y2q0ZEC2dxL97bEYgDhV7U.jpg?width=216&crop=smart&auto=webp&s=a9dc1ee13fbe0e549cfd9d15b6c4b756fcdc195e', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/WnWUz1SjRU8Z2Zc7TqGV2Y2q0ZEC2dxL97bEYgDhV7U.jpg?width=320&crop=smart&auto=webp&s=59cc12394ff6ed765b6b4a91c586eaf1e505f781', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/WnWUz1SjRU8Z2Zc7TqGV2Y2q0ZEC2dxL97bEYgDhV7U.jpg?width=640&crop=smart&auto=webp&s=14af500b2c3d13ff7791893ba0438fee28216286', 'width': 640}], 'source': {'height': 522, 'url': 'https://external-preview.redd.it/WnWUz1SjRU8Z2Zc7TqGV2Y2q0ZEC2dxL97bEYgDhV7U.jpg?auto=webp&s=df7df54c62c3000c123cc0f763a1a2ea85e20715', 'width': 950}, 'variants': {}}]}
Building an app for Windows
1
[removed]
2025-01-15T09:22:36
https://www.reddit.com/r/LocalLLaMA/comments/1i1tpih/building_an_app_for_windows/
ramzeez88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1tpih
false
null
t3_1i1tpih
/r/LocalLLaMA/comments/1i1tpih/building_an_app_for_windows/
false
false
self
1
null
Swedish (Relevant) Computer Build Recommendations?
2
Greetings, I am trying my best to figure out how to run a 70B model in 4-bit, but I keep getting mixed responses on system requirements. I can't buy a computer if I don't know the specs required, though. The budget is flexible depending on what can be realistically expected in performance on a consumer-grade computer. I want it to generate replies fairly fast and don't want it to be horribly difficult to train. (I have about 6 months' worth of non-stop information collection that's already curated but not yet edited into JSON format.) Goals: Train an LLM on my own writing so I can write with myself in a private environment. Expectations: Response speed similar to that of Janitor AI on a good day. Budget: Willing to go into debt to some extent... Reason for location-specific advice: [inet.se](http://inet.se) is where I'd likely get the individual parts, since I've never built a computer myself and would prefer to have assistance in doing it. Their selection isn't exhaustive. But, if my expectations are unrealistic, I'd be open to hosting a smaller model if it'd still be sufficient at roleplaying after being fine-tuned. I'm not interested in using it for much else. (An extremely expensive sounding board for my writing, but if it makes me happy...) It doesn't need to solve equations or whatever tasks require hundreds of requests every minute. I just seek something with nuance. I am happy to train it with appropriate explanations of correct and incorrect interpretations of nuance. I have a lot of free time to slave for this thing. DM's welcome. Thanks in advance!
2025-01-15T09:34:56
https://www.reddit.com/r/LocalLLaMA/comments/1i1tv13/swedish_relevant_computer_build_recommendations/
TrappedinSweden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1tv13
false
null
t3_1i1tv13
/r/LocalLLaMA/comments/1i1tv13/swedish_relevant_computer_build_recommendations/
false
false
self
2
null
I created a notebook to fine tune LLMs with synthetic data and hyperparam tuning
2
I recently participated in a Kaggle fine-tuning competition where we had to teach an LLM to analyze artwork from a foreign language. I explored synthetic data generation, full fine-tuning, LLM-as-a-judge evaluation, hyperparameter tuning using Optuna and much more here! I chose to train Gemma 2 2B IT for the competition and was really happy with the result. Here are some of the things I learnt: 1. After reading research papers, I found that full fine-tuning is preferable over PEFT for models over 1B parameters. 2. Runpod is super intuitive to use for fine-tuning and inexpensive. I used an A100 80GB and paid around $1.50/hour to use it. 3. If you are like me and prefer to use VSCode for the bindings, use remote Jupyter kernels to access GPUs. 4. Hyperparameter tuning is amazing! I would have spent more time investigating this if I had not worked on it last minute. There is no better feeling than when you see your training and eval loss creep slowly down. Here is my notebook, I would really appreciate an upvote if you found it useful: [https://www.kaggle.com/code/thee5z/gemma-2b-sft-on-urdu-poem-synt-data-param-tune](https://www.kaggle.com/code/thee5z/gemma-2b-sft-on-urdu-poem-synt-data-param-tune)
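The hyperparameter sweep mentioned in point 4 can be sketched, independent of Optuna, as a deterministic grid search over a small search space (the `objective` here is a toy stand-in for an actual training-plus-eval run, and the optimum is contrived for illustration):

```python
from itertools import product

def grid_search(objective, space):
    """Try every combination in `space`, keeping the lowest-scoring config."""
    keys = list(space)
    best_cfg, best_score = None, float("inf")
    for values in product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = objective(cfg)  # in practice: train, then return eval loss
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

space = {"lr": [1e-5, 3e-5, 1e-4], "batch": [8, 16, 32]}
# Toy objective: pretend lr=3e-5 with batch=16 gives the lowest eval loss.
best, loss = grid_search(lambda c: abs(c["lr"] - 3e-5) + abs(c["batch"] - 16) / 100, space)
```

Optuna replaces the exhaustive loop with a sampler (TPE by default) and pruning, which matters once each objective call is a multi-hour training run.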
2025-01-15T09:40:00
https://www.reddit.com/r/LocalLLaMA/comments/1i1txb1/i_created_a_notebook_to_fine_tune_llms_with/
faizsameerahmed96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1txb1
false
null
t3_1i1txb1
/r/LocalLLaMA/comments/1i1txb1/i_created_a_notebook_to_fine_tune_llms_with/
false
false
self
2
{'enabled': False, 'images': [{'id': 'w80O13pC6EOKQgEaUud51DCJIQFYd0GRkESQ1-rCB4M', 'resolutions': [], 'source': {'height': 100, 'url': 'https://external-preview.redd.it/x5uUx6lx6iw16aWDeKal5oHhsMN8xSqOaqLIQb_v7gU.jpg?auto=webp&s=0f1a98887e7fb62f0538b564f20065eddb401028', 'width': 100}, 'variants': {}}]}
405B MiniMax MoE technical deepdive
82
tl;dr very (very) nice paper/model, lot of details and experiment details, hybrid with 7/8 Lightning attn, different MoE strategy than deepseek, deepnorm, WSD schedule, \~2000 H800 for training, \~12T token. blog: [https://huggingface.co/blog/eliebak/minimax01-deepdive](https://huggingface.co/blog/eliebak/minimax01-deepdive)
2025-01-15T09:41:33
https://www.reddit.com/r/LocalLLaMA/comments/1i1ty0e/405b_minimax_moe_technical_deepdive/
eliebakk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1ty0e
false
null
t3_1i1ty0e
/r/LocalLLaMA/comments/1i1ty0e/405b_minimax_moe_technical_deepdive/
false
false
self
82
{'enabled': False, 'images': [{'id': 'qud69mjaerOQR9rlL7w0-LuNuFyQvhV8ncdzCCjA0DM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eck1IYOkkOCyXpiSFyhstxydq7A88VNYw0IDvivD49o.jpg?width=108&crop=smart&auto=webp&s=c03e1171f5d5374f10462b5affc9af8e9f4a99a7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eck1IYOkkOCyXpiSFyhstxydq7A88VNYw0IDvivD49o.jpg?width=216&crop=smart&auto=webp&s=303e2f5224768db7e417a517d5471de9777c156f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eck1IYOkkOCyXpiSFyhstxydq7A88VNYw0IDvivD49o.jpg?width=320&crop=smart&auto=webp&s=af08f9440e4a7c70322f153c844970ec85a57386', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eck1IYOkkOCyXpiSFyhstxydq7A88VNYw0IDvivD49o.jpg?width=640&crop=smart&auto=webp&s=eeeb8b7e796e4b5584f7736fabc2d2c016b6b759', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eck1IYOkkOCyXpiSFyhstxydq7A88VNYw0IDvivD49o.jpg?width=960&crop=smart&auto=webp&s=d64c034bd2faf9295e3e2817a2b5547e29964583', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eck1IYOkkOCyXpiSFyhstxydq7A88VNYw0IDvivD49o.jpg?width=1080&crop=smart&auto=webp&s=db284e4279b0078b7e5aed3c2fef8fe308cdd5a2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eck1IYOkkOCyXpiSFyhstxydq7A88VNYw0IDvivD49o.jpg?auto=webp&s=7b3d2d63e82db038415791b83da23c88cd8473fe', 'width': 1200}, 'variants': {}}]}
Beginner with a daft question
1
[removed]
2025-01-15T09:56:13
https://www.reddit.com/r/LocalLLaMA/comments/1i1u4w6/beginner_with_a_daft_question/
Embarrassed_Status73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1u4w6
false
null
t3_1i1u4w6
/r/LocalLLaMA/comments/1i1u4w6/beginner_with_a_daft_question/
false
false
self
1
null
How to Implement a Local System for Managing and Querying Documentation
1
[removed]
2025-01-15T09:59:51
https://www.reddit.com/r/LocalLLaMA/comments/1i1u6kg/how_to_implement_a_local_system_for_managing_and/
Embarrassed_Status73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1u6kg
false
null
t3_1i1u6kg
/r/LocalLLaMA/comments/1i1u6kg/how_to_implement_a_local_system_for_managing_and/
false
false
self
1
null
Question. LLM coordinator system? Is there any?
0
I see that there is a tendency to let one model do everything. But then the model becomes gigantic more often than not. In contrast, (smaller) models can be optimized for specific domains, or one can also leverage other ML-based tools or normal hand-coded programs. Is there a system where a main LLM classifies the task and rewrites it so that the input is as good as possible for a second tool that then does the work? Sure, it won't be a super reactive system, but I think it could achieve higher reliability (read: fewer errors) in multiple domains. So far I am not aware of any such system, hence the question to the community. PS: yes, I am aware of MoE models, but those are one LLM as well. They need to be loaded as a whole in memory.
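The coordinator being asked about is usually called a "router". In its simplest form it is just a classifier in front of a set of specialists; a toy keyword-scoring sketch (the route names and keyword lists are hypothetical — a real router would use an LLM or an embedding classifier for this step):

```python
ROUTES = {
    "code": ["function", "bug", "compile", "python"],
    "math": ["integral", "equation", "solve", "sum"],
    "general": [],  # fallback specialist
}

def route(query: str) -> str:
    """Send the query to the specialist whose keywords best match it."""
    q = query.lower()
    scores = {name: sum(kw in q for kw in kws) for name, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

dest = route("Why does my Python function not compile?")
```

The second half of the idea — the main LLM rewriting the task into the best possible input for the chosen tool — is the same pattern, with the router's output selecting a per-tool prompt template instead of just a destination name.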
2025-01-15T10:13:50
https://www.reddit.com/r/LocalLLaMA/comments/1i1udfe/question_llm_coordinator_system_is_there_any/
pier4r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1udfe
false
null
t3_1i1udfe
/r/LocalLLaMA/comments/1i1udfe/question_llm_coordinator_system_is_there_any/
false
false
self
0
null
Privacy Concerns with LLM Models (and DeepSeek in particular)
0
There have been growing concerns about privacy when it comes to using AI models like DeepSeek, and these concerns are valid. To help clarify, here's a quick ranking of privacy levels for using LLMs based on their setup: 1. Running open-source models on your personal server (10/10) * Full control over your data. The safest option for privacy. 2. Direct use of APIs or platforms like ChatGPT, Gemini, Grok, etc. (8/10) * These are generally secure but still involve sending your data to a third party. 3. Using intermediary platforms, which utilize APIs (6/10) * Adds an extra layer of potential data exposure due to intermediary platforms. 4. DeepSeek (1/10) * Significant concerns exist about data misuse. Not only are your chats not private, but the lack of strong data privacy laws in the country where this platform originates raises red flags. Given past examples, there's a high risk of your data being misused. Choose your LLM solution based on how much privacy you need. Be especially cautious with services like DeepSeek, as they might handle your data irresponsibly or expose it to misuse. What’s your take on this ranking? Do you agree, or do you think some of these should be rated differently? I’d love to hear your thoughts!
2025-01-15T10:20:22
https://www.reddit.com/r/LocalLLaMA/comments/1i1ugj5/privacy_concerns_with_llm_models_and_deepseek_in/
MindIndividual4397
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1ugj5
false
null
t3_1i1ugj5
/r/LocalLLaMA/comments/1i1ugj5/privacy_concerns_with_llm_models_and_deepseek_in/
false
false
self
0
null
ChatGPT Introduces New Tasks Feature for Better Planning
1
[removed]
2025-01-15T11:11:23
[deleted]
1970-01-01T00:00:00
0
{}
1i1v5bn
false
null
t3_1i1v5bn
/r/LocalLLaMA/comments/1i1v5bn/chatgpt_introduces_new_tasks_feature_for_better/
false
false
default
1
null
FastGPT - open-source AI platform for building knowledge-based LLM apps with data processing, RAG retrieval and visual workflow orchestration
0
2025-01-15T11:20:11
https://tryfastgpt.ai/
niutech
tryfastgpt.ai
1970-01-01T00:00:00
0
{}
1i1v9uc
false
null
t3_1i1v9uc
/r/LocalLLaMA/comments/1i1v9uc/fastgpt_opensource_ai_platform_for_building/
false
false
default
0
null
Chunking and resubmission a viable strategy to work around the context window limit?
1
Hi all. So I am new to working with LLMs (web dev by day, so not new to tech in general) and have a use case for summarizing larger texts. Reading through the forum, this seems to be a known issue with LLMs and their context window. (I am working with Llama 3 via GPT4All, locally in Python via llm.datasette.) So one way I am currently attempting to get around that is by chunking the text to about 30% below the context window, summarizing the chunk, and then re-adding the summary to the next raw chunk to be summarized. Are there any concerns with this approach? The results look okay so far, but since I have very little knowledge of what's under the hood, I am wondering if there is an inherent flaw in this. (The texts to be summarized are not ultra crucial. A good-enough summary will do and does not need to be super detailed either.)
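The rolling-summary loop being described can be sketched as follows. `summarize` here is a stand-in for the actual LLM call, and chunking is by characters rather than tokens for simplicity:

```python
def chunk(text: str, size: int):
    """Split text into pieces of at most `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def rolling_summary(text: str, window: int, summarize) -> str:
    """Summarize chunk by chunk, carrying the running summary forward."""
    summary = ""
    for piece in chunk(text, window):
        # Prepend the summary so far, so the model keeps earlier context.
        summary = summarize(summary + "\n" + piece)
    return summary

# Stand-in "model": keep only the first 40 characters of its input.
result = rolling_summary("a" * 100, window=50, summarize=lambda s: s[:40])
```

The known caveat with this pattern is that early chunks get re-summarized at every step, so detail from the beginning of the document degrades the fastest; keeping the running summary short relative to the window (as the 30% margin does) limits but does not eliminate that drift.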
2025-01-15T11:32:51
https://www.reddit.com/r/LocalLLaMA/comments/1i1vgez/chunking_and_resubmission_a_viable_strategy_to/
brian-the-porpoise
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1vgez
false
null
t3_1i1vgez
/r/LocalLLaMA/comments/1i1vgez/chunking_and_resubmission_a_viable_strategy_to/
false
false
self
1
null
Performance of 64GB DDR4 for model + 6gb vram flash-attention for context?
0
My idea is to feed ~3000 tokens of documents into context to improve output quality. I don't mind slow token/s inference, but I do very much mind the time for prompt eval given these large contexts. Is it possible to load all layers of a model into memory and use VRAM exclusively for context? (Speeding up eval with flash attention.)
2025-01-15T11:49:40
https://www.reddit.com/r/LocalLLaMA/comments/1i1vpaz/performance_of_64gb_ddr4_for_model_6gb_vram/
Imjustmisunderstood
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1vpaz
false
null
t3_1i1vpaz
/r/LocalLLaMA/comments/1i1vpaz/performance_of_64gb_ddr4_for_model_6gb_vram/
false
false
self
0
null
Just realized that the latest InternLM has switched to Apache 2
1
[removed]
2025-01-15T12:10:55
https://www.reddit.com/r/LocalLLaMA/comments/1i1w16c/just_realized_that_the_latest_internlm_has/
Usual-Statement-9385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1w16c
false
null
t3_1i1w16c
/r/LocalLLaMA/comments/1i1w16c/just_realized_that_the_latest_internlm_has/
false
false
self
1
{'enabled': False, 'images': [{'id': '_OsrZg75Nt6FOKgto_gndzEnY1k8ozjId82NQ4NT6xE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qbc3U4oaXnCQ0dVEs8qoe5PUAcQlzn9Tr5ttLXCcH0Y.jpg?width=108&crop=smart&auto=webp&s=c73b19b4dab14c0df1699e20674f07170e1b3cce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qbc3U4oaXnCQ0dVEs8qoe5PUAcQlzn9Tr5ttLXCcH0Y.jpg?width=216&crop=smart&auto=webp&s=b729fad457678b1bb0779606600bac0096e6a500', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qbc3U4oaXnCQ0dVEs8qoe5PUAcQlzn9Tr5ttLXCcH0Y.jpg?width=320&crop=smart&auto=webp&s=12d6c8fbac3f7d3a6e0abe2473a487deb66ef01d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qbc3U4oaXnCQ0dVEs8qoe5PUAcQlzn9Tr5ttLXCcH0Y.jpg?width=640&crop=smart&auto=webp&s=a74643cb8c9ca235aa93bacacce1b43a21957b06', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qbc3U4oaXnCQ0dVEs8qoe5PUAcQlzn9Tr5ttLXCcH0Y.jpg?width=960&crop=smart&auto=webp&s=e41ad54a8b1912134d437562447cdbe031dbb11d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qbc3U4oaXnCQ0dVEs8qoe5PUAcQlzn9Tr5ttLXCcH0Y.jpg?width=1080&crop=smart&auto=webp&s=96aa76d71153d1356b80a59f9bc95a36f3fb0d75', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qbc3U4oaXnCQ0dVEs8qoe5PUAcQlzn9Tr5ttLXCcH0Y.jpg?auto=webp&s=3d547c69a986ec7616a2d044b3890535a34af849', 'width': 1200}, 'variants': {}}]}
Introducing InternLM3-8B-Instruct with Apache License 2.0.
1
[removed]
2025-01-15T13:02:41
https://www.reddit.com/r/LocalLLaMA/comments/1i1ww4k/introducing_internlm38binstruct_with_apache/
InternLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1ww4k
false
null
t3_1i1ww4k
/r/LocalLLaMA/comments/1i1ww4k/introducing_internlm38binstruct_with_apache/
false
false
https://b.thumbs.redditm…RvqCVgKzIH7M.jpg
1
null
Introducing InternLM3-8B-Instruct with Apache License 2.0.
1
[removed]
2025-01-15T13:05:41
https://www.reddit.com/r/LocalLLaMA/comments/1i1wy5h/introducing_internlm38binstruct_with_apache/
OpenMMLab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1wy5h
false
null
t3_1i1wy5h
/r/LocalLLaMA/comments/1i1wy5h/introducing_internlm38binstruct_with_apache/
false
false
https://b.thumbs.redditm…F1kCkibLjFXs.jpg
1
{'enabled': False, 'images': [{'id': 'Y4LCwfHlLsaI48U9reo5Ii9ZN8AFDg5mdKkxRaOyt2c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=108&crop=smart&auto=webp&s=1b8b7dc16f77b6f48c7dac7be4cb1945366bbe93', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=216&crop=smart&auto=webp&s=837bb54f7fc24333021cc0e61c6640f5d97c339e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=320&crop=smart&auto=webp&s=aaf6b131d61bbea03bd6ec9968344625200c7e38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=640&crop=smart&auto=webp&s=8e72346c1336c3a6efdea458abaad1a7312317cb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=960&crop=smart&auto=webp&s=258c8e8c6e889091c768c4614b3e20172c663eb1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?width=1080&crop=smart&auto=webp&s=f5d8bf8c7e63b65ae10a85b92bfabfc056030ef7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CpdtIFi2Xykw5wCJx3-nNXQL2YWviYTjARoqDPxQMpU.jpg?auto=webp&s=0927d8a4d994f2dab5b92e355604a207bc9159ef', 'width': 1200}, 'variants': {}}]}
Flow charts, flow charts everywhere
170
2025-01-15T13:10:56
https://i.redd.it/su32pem6q5de1.jpeg
AnotherSoftEng
i.redd.it
1970-01-01T00:00:00
0
{}
1i1x1mm
false
null
t3_1i1x1mm
/r/LocalLLaMA/comments/1i1x1mm/flow_charts_flow_charts_everywhere/
false
false
https://b.thumbs.redditm…inQaMX-ogFWs.jpg
170
{'enabled': True, 'images': [{'id': 'j2ea2-tdntSLwsnxDeFUUPVeBhubKvlS6YlWHX4ld54', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/su32pem6q5de1.jpeg?width=108&crop=smart&auto=webp&s=537552dea0ebf8d9abb0b65610628149ad37288e', 'width': 108}, {'height': 267, 'url': 'https://preview.redd.it/su32pem6q5de1.jpeg?width=216&crop=smart&auto=webp&s=72cfd7c79f4abbd4a31ff01717a5a63fa73c6e6b', 'width': 216}, {'height': 396, 'url': 'https://preview.redd.it/su32pem6q5de1.jpeg?width=320&crop=smart&auto=webp&s=465d5ab4be55a4022001d3282d137ff2a2bad7f5', 'width': 320}, {'height': 792, 'url': 'https://preview.redd.it/su32pem6q5de1.jpeg?width=640&crop=smart&auto=webp&s=48d3c26cb427054e3de9f38bb7fa6f4ea73f3686', 'width': 640}, {'height': 1189, 'url': 'https://preview.redd.it/su32pem6q5de1.jpeg?width=960&crop=smart&auto=webp&s=e9777471cb1aef407bfd6399f5e0bcba27ef19af', 'width': 960}, {'height': 1338, 'url': 'https://preview.redd.it/su32pem6q5de1.jpeg?width=1080&crop=smart&auto=webp&s=3950ec45b7accaa3f7775c43e74576916ef457eb', 'width': 1080}], 'source': {'height': 1467, 'url': 'https://preview.redd.it/su32pem6q5de1.jpeg?auto=webp&s=80f5702df873cb93434ad72479165210d0ec117e', 'width': 1184}, 'variants': {}}]}
We have released InternLM3-8B-Instruct, designed for general-purpose usage and advanced reasoning.
1
[removed]
2025-01-15T13:18:14
https://www.reddit.com/r/LocalLLaMA/comments/1i1x6j0/we_have_released_internlm38binstruct_designed_for/
InternLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1x6j0
false
null
t3_1i1x6j0
/r/LocalLLaMA/comments/1i1x6j0/we_have_released_internlm38binstruct_designed_for/
false
false
https://b.thumbs.redditm…LSlxbbqOCgsE.jpg
1
null
What’s SOTA for codebase indexing?
5
Hi folks, I’ve been tasked with investigating codebase indexing, mostly in the context of RAG. Due to the popularity of “AI agents”, there seem to be new projects constantly popping up that use some sort of agentic retrieval. I’m mostly interested in speed (so self-querying is off the table) and instead want to be able to query the codebase with questions like, “where are functions that handle auth”? And have said chunks returned.
2025-01-15T13:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1i1x7ql/whats_sota_for_codebase_indexing/
QueasyEntrance6269
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1x7ql
false
null
t3_1i1x7ql
/r/LocalLLaMA/comments/1i1x7ql/whats_sota_for_codebase_indexing/
false
false
self
5
null
OuteTTS 0.3: New 1B & 500M Models
236
2025-01-15T13:26:15
https://v.redd.it/rb1px5mjs5de1
OuteAI
v.redd.it
1970-01-01T00:00:00
0
{}
1i1xbv1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rb1px5mjs5de1/DASHPlaylist.mpd?a=1739539588%2CMmY2MjYzZjE3NjVmZDE4YzNhMWJlNTk5MGJmOWFiMjA1Y2I3ZTg0N2UxYmFiYzdhNjRlYWQ4MWRjNzA2YjFkZA%3D%3D&v=1&f=sd', 'duration': 84, 'fallback_url': 'https://v.redd.it/rb1px5mjs5de1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/rb1px5mjs5de1/HLSPlaylist.m3u8?a=1739539588%2CNWMwYTY4NGNlYjY1YzQ3YzA1ODQ3N2FhYTRmY2EzYjY2ZGU5MDU5NDIxMDAzMjgxM2EyMjg3OGM2YjZkMWVjNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rb1px5mjs5de1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1i1xbv1
/r/LocalLLaMA/comments/1i1xbv1/outetts_03_new_1b_500m_models/
false
false
https://external-preview…94a8392bd5afb6f7
236
{'enabled': False, 'images': [{'id': 'MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke.png?width=108&crop=smart&format=pjpg&auto=webp&s=e82490d03321bf4e8d163030f2ba21dce3fa8163', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke.png?width=216&crop=smart&format=pjpg&auto=webp&s=754fb26aef87c2a83bc6b5075228e95a52c9742f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke.png?width=320&crop=smart&format=pjpg&auto=webp&s=fd04c36b38e039715801db113ffb4dc53eb996bc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke.png?width=640&crop=smart&format=pjpg&auto=webp&s=12ce1843196847104ad52799e844a50b5df7d9c8', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke.png?width=960&crop=smart&format=pjpg&auto=webp&s=7bbdf8dc2924ab585aa36bbf5bf95f9e4ef87357', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8299ff8bde9a3bd2935cb5c4118560e0acceb650', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MnR2a241bWpzNWRlMS8HG7_sP5Xscyq5qRLwQkOnJIWAwD3-JkIhoicGw7Ke.png?format=pjpg&auto=webp&s=db41939469450cd8d42aa4408360be5df3413c9d', 'width': 1920}, 'variants': {}}]}
Open LLMS how to use them for a writer?
1
[removed]
2025-01-15T13:30:51
https://www.reddit.com/r/LocalLLaMA/comments/1i1xf0j/open_llms_how_to_use_them_for_a_writer/
No_Cartographer_6837
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1xf0j
false
null
t3_1i1xf0j
/r/LocalLLaMA/comments/1i1xf0j/open_llms_how_to_use_them_for_a_writer/
false
false
self
1
null
Finally got my second 3090
104
Any good model recommendations for story writing?
2025-01-15T13:47:51
https://i.redd.it/3zf958trw5de1.jpeg
fizzy1242
i.redd.it
1970-01-01T00:00:00
0
{}
1i1xqrk
false
null
t3_1i1xqrk
/r/LocalLLaMA/comments/1i1xqrk/finally_got_my_second_3090/
false
false
https://b.thumbs.redditm…JWP-Vwsenzxw.jpg
104
{'enabled': True, 'images': [{'id': 'hvNs5fZZKEPSR2HWKGqyUY89BSIN99g1m04c7xxiWRI', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/3zf958trw5de1.jpeg?width=108&crop=smart&auto=webp&s=5901d54593d056b38873f234716795b5a09fea92', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/3zf958trw5de1.jpeg?width=216&crop=smart&auto=webp&s=33c3d65e4d9ba702252345e2995fa360e4e058ab', 'width': 216}, {'height': 312, 'url': 'https://preview.redd.it/3zf958trw5de1.jpeg?width=320&crop=smart&auto=webp&s=268f5b991e99bedc8c8c5ec99b1f19eac4fce44c', 'width': 320}, {'height': 624, 'url': 'https://preview.redd.it/3zf958trw5de1.jpeg?width=640&crop=smart&auto=webp&s=a2de0310b00a316ad7343179f08849b452eeb969', 'width': 640}, {'height': 936, 'url': 'https://preview.redd.it/3zf958trw5de1.jpeg?width=960&crop=smart&auto=webp&s=4864f8d7cff91b34cd4dbd4ba7d456e002c05a1f', 'width': 960}], 'source': {'height': 944, 'url': 'https://preview.redd.it/3zf958trw5de1.jpeg?auto=webp&s=65d0d52c305c80831fc1649ea1982bfb0e55a34e', 'width': 968}, 'variants': {}}]}
Deepgram and LiteLLM
1
Does anyone use Deepgram with Litellm and Open WebUI? I've managed to get whisper transcription working with OpenWebUI->LiteLLM->Groq(Whisper) but when I swap out Groq for Deepgram (Nova-2) I get errors: ``` [ERROR: 400: [ERROR: External: litellm.APIConnectionError: Unsupported type for audio_file: <class '_io.BytesIO'> Traceback (most recent call last): File "/usr/lib/python3.13/site-packages/litellm/main.py", line 4785 ```
2025-01-15T14:02:26
https://www.reddit.com/r/LocalLLaMA/comments/1i1y1a7/deepgram_and_litellm/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1y1a7
false
null
t3_1i1y1a7
/r/LocalLLaMA/comments/1i1y1a7/deepgram_and_litellm/
false
false
self
1
null
Windows laptop equivalent (or "close enough") to an M4 Macbook Pro (Max?)
0
As the title states...is there a Windows laptop (or upcoming Windows laptop) that could give the M4 Pro or M4 Pro Max a run for its money? Yes, I know having a dedicated GPU is best—however—I'm currently running an M4 Pro 48GB, which allows me to run many local LLMs at reasonable t/s. The main reason I'm making this thread is that I recall some people on here talking about an AMD laptop that's coming out this year that should be pretty good. But I forget the name.
2025-01-15T14:17:33
https://www.reddit.com/r/LocalLLaMA/comments/1i1ycbc/windows_laptop_equivalent_or_close_enough_to_an/
NEEDMOREVRAM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1ycbc
false
null
t3_1i1ycbc
/r/LocalLLaMA/comments/1i1ycbc/windows_laptop_equivalent_or_close_enough_to_an/
false
false
self
0
null
Robust and efficient LLM cache policy
0
I am using an LLM for news classification. In fact, many news items are similar, making it unnecessary to call the LLM every time. I'm currently using a cosine-similarity-based method to cache the results for similar news. But there is a problem: if a news item is misclassified, subsequent similar items will inherit the wrong label. How can I avoid this kind of situation?
2025-01-15T14:21:36
https://www.reddit.com/r/LocalLLaMA/comments/1i1yf88/robust_and_efficient_llm_cache_policy/
secsilm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1yf88
false
null
t3_1i1yf88
/r/LocalLLaMA/comments/1i1yf88/robust_and_efficient_llm_cache_policy/
false
false
self
0
null
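One common mitigation for the error-propagation problem raised in that post is to re-verify a fraction of cache hits with a fresh LLM call, so a single bad label cannot propagate indefinitely. A minimal sketch, where `embed()` and `classify()` are placeholders for a real embedding model and LLM call:

```python
import random

class SimilarityCache:
    # recheck_rate forces a fresh classify() call on a fraction of cache
    # hits, bounding how long a wrong cached label can keep spreading.
    def __init__(self, embed, classify, threshold=0.9, recheck_rate=0.1):
        self.embed, self.classify = embed, classify
        self.threshold, self.recheck_rate = threshold, recheck_rate
        self.entries = []  # list of (embedding, label) pairs

    @staticmethod
    def _cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def label(self, text):
        v = self.embed(text)
        best = max(self.entries, key=lambda e: self._cos(v, e[0]),
                   default=None)
        if (best and self._cos(v, best[0]) >= self.threshold
                and random.random() >= self.recheck_rate):
            return best[1]  # cache hit
        y = self.classify(text)  # fall through to the LLM
        self.entries.append((v, y))
        return y
```

Tuning `recheck_rate` trades LLM cost against how quickly a misclassified cluster gets corrected.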
For no cost: generate CoT data, fine-tune Qwen2.5-1.5B and run the model on hugging face
1
2025-01-15T14:39:49
https://v.redd.it/9qwn98su56de1
Scared_Air_2275
/r/LocalLLaMA/comments/1i1ysy1/for_no_cost_generate_cot_data_finetune_qwen2515b/
1970-01-01T00:00:00
0
{}
1i1ysy1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9qwn98su56de1/DASHPlaylist.mpd?a=1739673595%2CNzA0MjZlMTA3NjU5NTU3NjY1ZGQwZDAwNGRmMWMwODA1NjZhYTRkZjg0YjM4NTc4MDliYTQ1MjU1ZTRjODdjNg%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/9qwn98su56de1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/9qwn98su56de1/HLSPlaylist.m3u8?a=1739673595%2CMGZjMmNkM2EzZTIzMGE4OTZmYTI1Y2Y5NjI1YjdmZDhiYzBjMWMwMjY3YWZjMjYzMzAyMGJmZmNiMDE3NDk3Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9qwn98su56de1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1i1ysy1
/r/LocalLLaMA/comments/1i1ysy1/for_no_cost_generate_cot_data_finetune_qwen2515b/
false
false
https://external-preview…80f2836e0e9dcbc0
1
{'enabled': False, 'images': [{'id': 'ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t.png?width=108&crop=smart&format=pjpg&auto=webp&s=324373d482fd6eba1f9a883e1b8096884cba458f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t.png?width=216&crop=smart&format=pjpg&auto=webp&s=2b8946d7451bc8880a5b8a6cdbdaff28dbda06fe', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t.png?width=320&crop=smart&format=pjpg&auto=webp&s=1ba07e7495dd150eb016c60d66e34f03d79a9903', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t.png?width=640&crop=smart&format=pjpg&auto=webp&s=365ec191010bb4acd5e7cdc4817b79b777a41971', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t.png?width=960&crop=smart&format=pjpg&auto=webp&s=62c54baf233934e05a2a5339d5fced5cd0b565b1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t.png?width=1080&crop=smart&format=pjpg&auto=webp&s=59b18d899b3c0aa85ec6131cc89be17fe554999a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZnQ0eWk3c3U1NmRlMZ50vg-sZ6VWFrs65LG_LLHVNRkSv3oVMVx5hszz_g_t.png?format=pjpg&auto=webp&s=e8fc4bf9047aba4c8e93e6605e75b7a9db7c5770', 'width': 1920}, 'variants': {}}]}
Sakana.ai proposes Transformer-squared - Adaptive AI that adjusts its own weights dynamically and evolves as it learns
49
Arxiv paper - https://arxiv.org/abs/2501.06252
2025-01-15T14:41:59
https://sakana.ai/transformer-squared/
Thrumpwart
sakana.ai
1970-01-01T00:00:00
0
{}
1i1yuke
false
null
t3_1i1yuke
/r/LocalLLaMA/comments/1i1yuke/sakanaai_proposes_transformersquared_adaptive_ai/
false
false
https://b.thumbs.redditm…MDm-i4j2MBbw.jpg
49
{'enabled': False, 'images': [{'id': '301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=108&crop=smart&auto=webp&s=61f7124235d3c9cc17267eb2ed7de46bab49765e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=216&crop=smart&auto=webp&s=b01c782fa93b021a180dc44d7151fade86d6431d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=320&crop=smart&auto=webp&s=670eb9c9058d14ac8846a6475e3d47cb616cf011', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=640&crop=smart&auto=webp&s=f5f30bf0b3bae15b4dee53ba7bd37f2486072c04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=960&crop=smart&auto=webp&s=2c1d1a6c85eb92a670807f829ec7254dc53f1bd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=1080&crop=smart&auto=webp&s=344e6dcc7b48a81d3b6727c749b0c289aabe5547', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?auto=webp&s=efb765a9e5d3d5585101bc98246d9babdd7d3105', 'width': 1600}, 'variants': {}}]}
Is the $20 a month ChatGPT plan even worth it now?
1
2025-01-15T14:44:47
https://i.redd.it/jlcstuhu66de1.png
Scared_Air_2275
i.redd.it
1970-01-01T00:00:00
0
{}
1i1ywok
false
null
t3_1i1ywok
/r/LocalLLaMA/comments/1i1ywok/is_the_20_a_month_chatgpt_plan_even_worth_it_now/
false
false
https://b.thumbs.redditm…vAwZkw0YPPzU.jpg
1
{'enabled': True, 'images': [{'id': 'EwAOmNkSeR7VwlOZDYtf3RTH8D21ujYmsPuQinH8VBo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jlcstuhu66de1.png?width=108&crop=smart&auto=webp&s=45cdba90bd8ad7153769287142a416e2e7aafe3b', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jlcstuhu66de1.png?width=216&crop=smart&auto=webp&s=6478c8c9860f5eaa0158caa458b1d12aaaea4c0e', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jlcstuhu66de1.png?width=320&crop=smart&auto=webp&s=6bf86e9f346ddd9f4c695c10ba741aea61fb5e86', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jlcstuhu66de1.png?width=640&crop=smart&auto=webp&s=6e4feaa9053d624be1239f86feca372cf579cf57', 'width': 640}], 'source': {'height': 1628, 'url': 'https://preview.redd.it/jlcstuhu66de1.png?auto=webp&s=4c3e94a98974e6039da0e70906b32f680c347c20', 'width': 776}, 'variants': {}}]}
NVIDIA unveils Sana for ultra HD image generation on laptops
72
2025-01-15T14:50:16
https://nvlabs.github.io/Sana/?utm_source=substack&utm_medium=email
nate4t
nvlabs.github.io
1970-01-01T00:00:00
0
{}
1i1z0ur
false
null
t3_1i1z0ur
/r/LocalLLaMA/comments/1i1z0ur/nvidia_unveils_sana_for_ultra_hd_image_generation/
false
false
default
72
null
Is there much use case for paying $20-200pm for ChatGPT now?
115
2025-01-15T14:59:52
https://www.reddit.com/gallery/1i1z8kk
omnisvosscio
reddit.com
1970-01-01T00:00:00
0
{}
1i1z8kk
false
null
t3_1i1z8kk
/r/LocalLLaMA/comments/1i1z8kk/is_there_much_use_case_for_paying_20200pm_for/
false
false
https://b.thumbs.redditm…lYyIKQjOiQ5g.jpg
115
null
Hugging Face is doing a FREE and CERTIFIED course on LLM Agents!
1
[removed]
2025-01-15T15:00:34
https://www.reddit.com/r/LocalLLaMA/comments/1i1z97f/hugging_face_is_doing_a_free_and_certified_course/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1z97f
false
null
t3_1i1z97f
/r/LocalLLaMA/comments/1i1z97f/hugging_face_is_doing_a_free_and_certified_course/
false
false
self
1
null
First Intel B580 inference speed test
11
Upon my request, someone agreed to test his B580, and the result is this: https://preview.redd.it/vhjixnb7a6de1.png?width=1024&format=png&auto=webp&s=2413cfd6985fecdcd88e4f17fd0d0844f2d1d70e
2025-01-15T15:03:33
https://www.reddit.com/r/LocalLLaMA/comments/1i1zbul/first_intel_b580_inference_speed_test/
ComprehensiveQuail77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1zbul
false
null
t3_1i1zbul
/r/LocalLLaMA/comments/1i1zbul/first_intel_b580_inference_speed_test/
false
false
https://b.thumbs.redditm…HmOSfgV0lXSA.jpg
11
null
Hugging Face is doing a FREE and CERTIFIED course on LLM Agents!
665
**Learn to build AI agents that can automate tasks, generate code, and more!** 🤖 Hugging Face just launched a **free, certified course** on building and deploying AI agents. * Learn what Agents are * **Build your own Agents** using the latest libraries and tools. * **Earn a certificate of completion** to showcase your achievement.
2025-01-15T15:04:30
https://www.reddit.com/r/LocalLLaMA/comments/1i1zcnq/hugging_face_is_doing_a_free_and_certified_course/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1zcnq
false
null
t3_1i1zcnq
/r/LocalLLaMA/comments/1i1zcnq/hugging_face_is_doing_a_free_and_certified_course/
false
false
self
665
{'enabled': False, 'images': [{'id': 'KoRBFuVSiMaCB5t2_WOQSHwu2Q_x6LkyCUrSvB_utqI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JOoVE9yE3UMtoGIBlW4phQep83QjxjwMVZvxo1yvB-4.jpg?width=108&crop=smart&auto=webp&s=570eaba8d56a5271fffc198b5cb01ab36f6b86d9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JOoVE9yE3UMtoGIBlW4phQep83QjxjwMVZvxo1yvB-4.jpg?width=216&crop=smart&auto=webp&s=a9650fa938779bb8804e66fa50d93b0250100b22', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JOoVE9yE3UMtoGIBlW4phQep83QjxjwMVZvxo1yvB-4.jpg?width=320&crop=smart&auto=webp&s=ebf66589e8689c44c383af7b38092f71c71e1e7a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JOoVE9yE3UMtoGIBlW4phQep83QjxjwMVZvxo1yvB-4.jpg?width=640&crop=smart&auto=webp&s=9f737a2f98cecfb91b00fb821dd675f7c44f7794', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JOoVE9yE3UMtoGIBlW4phQep83QjxjwMVZvxo1yvB-4.jpg?width=960&crop=smart&auto=webp&s=255f3f28b900385da3e4aa34e40bd337a79627b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JOoVE9yE3UMtoGIBlW4phQep83QjxjwMVZvxo1yvB-4.jpg?width=1080&crop=smart&auto=webp&s=8629fc31b0e2f123e8bc44e6479e344134e13a30', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JOoVE9yE3UMtoGIBlW4phQep83QjxjwMVZvxo1yvB-4.jpg?auto=webp&s=879821b9c75953b79fc691dd73363cafcfbed8fc', 'width': 1200}, 'variants': {}}]}
Are there any good models for natural translation from English to Hebrew?
1
[removed]
2025-01-15T15:10:36
https://www.reddit.com/r/LocalLLaMA/comments/1i1zhre/are_there_any_good_models_for_natural_translation/
ResponsibleTruck4717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1zhre
false
null
t3_1i1zhre
/r/LocalLLaMA/comments/1i1zhre/are_there_any_good_models_for_natural_translation/
false
false
self
1
null
Are there any good alternatives to promptlayer?
1
Been using promptlayer, but looking for an alternative for different reasons. Any suggestions?
2025-01-15T15:11:13
https://www.reddit.com/r/LocalLLaMA/comments/1i1zi9m/are_there_any_good_alternatives_to_promptlayer/
Practical-Rub-1190
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1zi9m
false
null
t3_1i1zi9m
/r/LocalLLaMA/comments/1i1zi9m/are_there_any_good_alternatives_to_promptlayer/
false
false
self
1
null
Open source - Lightweight GPU Virtualization Framework written in C++
5
Hello everyone, I am starting a new open-source project, partly to learn C++ better, partly to offer something useful to people. Inspired by another open-source project ([scuda](https://github.com/kevmo314/scuda.git)), I decided to build [Litecuda](https://github.com/evangelosmeklis/litecuda), a lightweight C++ framework for GPU virtualization designed to simulate multiple isolated virtual GPU instances on a single physical GPU. It aims to enable efficient sharing of GPU resources such as memory and computation across multiple virtual GPUs. I am very early in the project and am looking for other contributors and ideas to extend it.
2025-01-15T15:20:22
https://www.reddit.com/r/LocalLLaMA/comments/1i1zpll/open_source_lightweight_gpu_virtualization/
_twelvechess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1zpll
false
null
t3_1i1zpll
/r/LocalLLaMA/comments/1i1zpll/open_source_lightweight_gpu_virtualization/
false
false
self
5
{'enabled': False, 'images': [{'id': 'tCDKxwkaxMMNWOXE2w_K7qdPztnh0IyCeRiLzRj8180', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4QCA5o_Nj2Fco8UHQrtplmmkkmUrLXT3NgFN1sgHffY.jpg?width=108&crop=smart&auto=webp&s=3b71717afe193a562c80d475935bf25e0e0f1c3f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4QCA5o_Nj2Fco8UHQrtplmmkkmUrLXT3NgFN1sgHffY.jpg?width=216&crop=smart&auto=webp&s=39a82e4a1e39ea413807b088cd14cacd594db537', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4QCA5o_Nj2Fco8UHQrtplmmkkmUrLXT3NgFN1sgHffY.jpg?width=320&crop=smart&auto=webp&s=3325697d71251c46685d9ea9b144c482ce9e229e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4QCA5o_Nj2Fco8UHQrtplmmkkmUrLXT3NgFN1sgHffY.jpg?width=640&crop=smart&auto=webp&s=57b3a58f090b9493a4b0eaab0a58d51c22372d56', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4QCA5o_Nj2Fco8UHQrtplmmkkmUrLXT3NgFN1sgHffY.jpg?width=960&crop=smart&auto=webp&s=a3228cd351f21c8fd2629791513fcee79a444df9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4QCA5o_Nj2Fco8UHQrtplmmkkmUrLXT3NgFN1sgHffY.jpg?width=1080&crop=smart&auto=webp&s=a795dfddb4d1dce273036e82ebab7a66322cfcc6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4QCA5o_Nj2Fco8UHQrtplmmkkmUrLXT3NgFN1sgHffY.jpg?auto=webp&s=f4d5f19dac525a39aa6aa3eae218ac3ca83c074d', 'width': 1200}, 'variants': {}}]}
NOOB QUESTION: How can i make my local instance "smarter"
1
Just putting this preface out there - I probably sound like an idiot - but how do I make my local instance "smarter"? Obviously the discrepancy between using Claude via their service and anything I can host locally is huge (at least I think this makes sense). Its level of intuition, memory and logic - especially while coding - is just incredible. That being said - I would love to have something at least 80% as smart locally. I am running Llama 3.1 8B, which I understand is a very small quantized model. My question is this - is the only way to run something even in the ballpark of Claude to do any of the following: 1. Improve my hardware - add more GPUs (running on a single AMD 7900 XTX) 2. Get the hardware required to run the full-size Llama 3.3 (unless this is a fool's errand) 3. Maybe switch to a Linux-based system rather than running Ollama on Windows? Anywho - thanks for any help here! Having a lot of fun with getting this set up. Thanks!
2025-01-15T15:21:10
https://www.reddit.com/r/LocalLLaMA/comments/1i1zq7z/noob_question_how_can_i_make_my_local_instance/
HugeDelivery
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i1zq7z
false
null
t3_1i1zq7z
/r/LocalLLaMA/comments/1i1zq7z/noob_question_how_can_i_make_my_local_instance/
false
false
self
1
null
DeepSeek aims to disrupt business plans of western labs?
0
What do you think is the main goal of serving DeepSeek for free/so cheap? Is it just for data and the next-gen, even better model, or are the reasons political? Maybe it's their way of battling the West? Usage on OpenRouter is skyrocketing 🚀
2025-01-15T15:34:12
https://www.reddit.com/r/LocalLLaMA/comments/1i200sz/deepseek_aims_to_disrupt_business_plans_of/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i200sz
false
null
t3_1i200sz
/r/LocalLLaMA/comments/1i200sz/deepseek_aims_to_disrupt_business_plans_of/
false
false
self
0
null
Judge Arena standings after 2 months. The 3.8B Flow-Judge is now in there!
6
2025-01-15T15:34:58
https://i.redd.it/o9jyjhjqf6de1.png
fortunemaple
i.redd.it
1970-01-01T00:00:00
0
{}
1i201g1
false
null
t3_1i201g1
/r/LocalLLaMA/comments/1i201g1/judge_arena_standings_after_2_months_the_38b/
false
false
https://b.thumbs.redditm…Va8W0zJJKy8M.jpg
6
{'enabled': True, 'images': [{'id': 'ZzXxGBK1xyfIGxWLkw3H6mfsV29jWoPwaM1AWFMUVR4', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/o9jyjhjqf6de1.png?width=108&crop=smart&auto=webp&s=83080f7244875edae6e67d114d267f34152e0f46', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/o9jyjhjqf6de1.png?width=216&crop=smart&auto=webp&s=6de8118430482d38e0246b40dc7f273eb342148a', 'width': 216}, {'height': 232, 'url': 'https://preview.redd.it/o9jyjhjqf6de1.png?width=320&crop=smart&auto=webp&s=25bf37221f106a342e634364e700d88ae541c916', 'width': 320}, {'height': 464, 'url': 'https://preview.redd.it/o9jyjhjqf6de1.png?width=640&crop=smart&auto=webp&s=471614b975ff283fb060740da97ed2ccacfdf775', 'width': 640}, {'height': 696, 'url': 'https://preview.redd.it/o9jyjhjqf6de1.png?width=960&crop=smart&auto=webp&s=57b20d6e7048e2d6d74852d0a8f6f3c74e478925', 'width': 960}, {'height': 783, 'url': 'https://preview.redd.it/o9jyjhjqf6de1.png?width=1080&crop=smart&auto=webp&s=d1da120e77f4eb83f9e4f63c65fea50823fe08e6', 'width': 1080}], 'source': {'height': 876, 'url': 'https://preview.redd.it/o9jyjhjqf6de1.png?auto=webp&s=d16ed3d2de81e8049b20652d76dca6831b179bc6', 'width': 1207}, 'variants': {}}]}
Train 400x faster Static Embedding Models; 2 open models released
55
2025-01-15T15:35:40
https://huggingface.co/blog/static-embeddings
-Cubie-
huggingface.co
1970-01-01T00:00:00
0
{}
1i20211
false
null
t3_1i20211
/r/LocalLLaMA/comments/1i20211/train_400x_faster_static_embedding_models_2_open/
false
false
https://b.thumbs.redditm…_w8g0s5bNAWE.jpg
55
{'enabled': False, 'images': [{'id': 'Q7oEnpq4LYUPvgpkKMeoddSo-4Wn8UDKMbqnVIBZL8s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7KIBT99WqmowwDguikX7zoXpmvjI60Ua61vkPn6VgEU.jpg?width=108&crop=smart&auto=webp&s=7f90d6e8c4655744f7862bf6b37fd93a4853ffa4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7KIBT99WqmowwDguikX7zoXpmvjI60Ua61vkPn6VgEU.jpg?width=216&crop=smart&auto=webp&s=a95d31f58f71f4085da58f37324ce5ad62aff23f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7KIBT99WqmowwDguikX7zoXpmvjI60Ua61vkPn6VgEU.jpg?width=320&crop=smart&auto=webp&s=353a4fa52f0abaae26484002194002b8b42943be', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/7KIBT99WqmowwDguikX7zoXpmvjI60Ua61vkPn6VgEU.jpg?width=640&crop=smart&auto=webp&s=4c7a6ae7297ed561b65221ae6db7678244b6b8a1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/7KIBT99WqmowwDguikX7zoXpmvjI60Ua61vkPn6VgEU.jpg?width=960&crop=smart&auto=webp&s=f65a14a22b836a9570d3ba33730626ba0f20b6db', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/7KIBT99WqmowwDguikX7zoXpmvjI60Ua61vkPn6VgEU.jpg?width=1080&crop=smart&auto=webp&s=1e56ae381bd529315ed1930a24348a7d7b17ef86', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/7KIBT99WqmowwDguikX7zoXpmvjI60Ua61vkPn6VgEU.jpg?auto=webp&s=70639fd0ce6ea1a41044bc332ce7dc25b9ad1ef4', 'width': 1920}, 'variants': {}}]}
Best way to classify NSFW text - BERT, small 3B LLM like llama 3.2 3B or something else?
5
I'm working on a project where I need to classify text as either NSFW or SFW. I know there are some BERT-based classifiers out there that are specifically trained for this kind of task. I've also seen people use smaller LLMs. What's the best approach? Since the underlying complexity of detecting NSFW text isn't that high, I'm thinking a full-blown LLM may be overkill. What are your recommendations? I'm looking for a good balance of accuracy and efficiency. Any specific models or techniques you've found effective would be super helpful! Thanks!
2025-01-15T15:41:32
https://www.reddit.com/r/LocalLLaMA/comments/1i206u0/best_way_to_classify_nsfw_text_bert_small_3b_llm/
newyorkfuckingcity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i206u0
false
null
t3_1i206u0
/r/LocalLLaMA/comments/1i206u0/best_way_to_classify_nsfw_text_bert_small_3b_llm/
false
false
nsfw
5
null
AI research
1
[removed]
2025-01-15T15:43:30
https://www.reddit.com/r/LocalLLaMA/comments/1i208gp/ai_research/
ASI-Enjoyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i208gp
false
null
t3_1i208gp
/r/LocalLLaMA/comments/1i208gp/ai_research/
false
false
self
1
null
Speculative decoding isn't a silver bullet - but it can get you 3x speed-ups
36
Hey everyone! Quick benchmark today - did this using Exaone-32b-4bit\*, running with latest MLX\_LM backend using [this script](https://gist.github.com/mark-lord/93a9f53f4f1e230e7bd5828357649f89): No speculative decoding: [Prompt: 44.608 tps | Generation: 6.274 tps | Avg power: \~9w | Total energy used: \~400J | Time taken: 48.226s](https://reddit.com/link/1i20dka/video/bqtvz9rah6de1/player) Speculative decoding: [Prompt: 37.170 tps | Generation: 24.140 tps | Avg power: \~13w | Total energy used: \~300J | Time taken: 22.880s](https://reddit.com/link/1i20dka/video/ji82cmcfh6de1/player) **\*Benchmark done using my M1 Max 64gb in low power mode, using Exaone-2.4b-4bit as the draft model with 31 draft tokens** Prompt processing speed was a little bit slower - dropping by about 20%. Power draw was also higher, even in low power mode. But the time taken from start->finish was reduced by 53% overall (The reduction in time taken means the total energy used was also reduced from 400->300J.) Pretty damn good I think 😄
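The ~4x generation speedup observed above lines up with the standard back-of-envelope model for speculative decoding: with a per-token acceptance rate `a` and `gamma` draft tokens per round, the expected tokens produced per target-model pass is roughly `(1 - a**(gamma+1)) / (1 - a)` (this assumes i.i.d. acceptances, which real runs only approximate; the 0.8 acceptance rate below is a made-up illustrative number, not measured):

```python
# Toy throughput estimate for speculative decoding (illustrative only;
# real numbers depend on the model pair, the backend, and the prompt).

def expected_tokens_per_pass(a, gamma):
    # a: per-token acceptance probability of the draft model's guesses
    # gamma: number of draft tokens proposed per verification round
    return (1 - a ** (gamma + 1)) / (1 - a)

# e.g. a well-matched draft model accepting ~80% of tokens, 31 drafts:
print(round(expected_tokens_per_pass(0.8, 31), 2))  # -> 5.0
```

Note the diminishing returns: past a handful of draft tokens, extra drafts add little unless the acceptance rate is very high, which is why tuning the draft count matters.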
2025-01-15T15:49:40
https://www.reddit.com/r/LocalLLaMA/comments/1i20dka/speculative_decoding_isnt_a_silver_bullet_but_it/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i20dka
false
null
t3_1i20dka
/r/LocalLLaMA/comments/1i20dka/speculative_decoding_isnt_a_silver_bullet_but_it/
false
false
https://a.thumbs.redditm…fWRvNWbpD-p0.jpg
36
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
AI research
1
[removed]
2025-01-15T16:00:45
https://www.reddit.com/r/LocalLLaMA/comments/1i20mdn/ai_research/
ASI-Enjoyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i20mdn
false
null
t3_1i20mdn
/r/LocalLLaMA/comments/1i20mdn/ai_research/
false
false
self
1
null
Play Memory Card Game with MiniCPM-o 2.6 ( A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming )
2
https://reddit.com/link/1i20res/video/jxdg7gd8h6de1/player Here are 6 cards on the table; I let MiniCPM-o 2.6 memorize their patterns and positions. Then I flipped over five cards and asked MiniCPM-o 2.6 to recall the position of the card with the same pattern as the one facing up. Any other interesting use cases? Let's share them in this post\~
2025-01-15T16:06:39
https://www.reddit.com/r/LocalLLaMA/comments/1i20res/play_memory_card_game_with_minicpmo_26_a_gpt4o/
Lynncc6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i20res
false
null
t3_1i20res
/r/LocalLLaMA/comments/1i20res/play_memory_card_game_with_minicpmo_26_a_gpt4o/
false
false
self
2
null
How will my LLM run time scale with different GPUs? 4GB vs 6GB and more
1
Hi all, I am very new to this, and I have searched but couldn't find an answer to this. I am currently on a Dell XPS8940 tower (16GB, i7-11700) with a Radeon RX 550 4GB (Debian, hence Radeon). Trying to transcribe some audio files, 20 minutes of audio take about 3.5 minutes to transcribe (small.en *whisper* model via Python). I have a backlog of around 400 such files I need to process. This will be a recurring task (about 1-5 files are generated per day), so I am looking at ways to achieve better performance via hardware upgrades. How much performance would I gain with an NVIDIA GPU with 6GB? I still have an NVIDIA GeForce RTX 2060 around I could use. Is it in the single-digit % range? I am willing to invest some cash into upgrading the GPU. If I were to get one with 12GB, very very roughly, what would be the improvement I could expect? 5%? 20%? 50%? EDIT: not sure it's even using my GPU, as *whisper* gives the warning "FP16 is not supported on CPU; using FP32 instead"
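That FP16 warning strongly suggests the transcription is running on CPU only (the RX 550 has no CUDA support, which Whisper's PyTorch backend wants). The backlog arithmetic is easy to sketch; the 5x speedup below is a purely hypothetical factor, not a benchmark of any specific GPU:

```python
# Rough backlog math. Assumptions: every file takes ~3.5 min on the current
# CPU, and a GPU gives some constant speedup factor. In practice small
# Whisper models often run several times faster on even a modest CUDA GPU
# than on CPU, so the gain is typically far beyond single-digit percent.

def backlog_hours(files, minutes_per_file, speedup=1.0):
    return files * minutes_per_file / speedup / 60

print(round(backlog_hours(400, 3.5), 1))             # -> 23.3 (CPU baseline)
print(round(backlog_hours(400, 3.5, speedup=5), 1))  # -> 4.7  (hypothetical 5x GPU)
```

For Whisper, extra VRAM beyond what the model needs mostly lets you run a larger/more accurate model rather than making the same model faster, so the 2060's 6GB may already be enough for small.en.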
2025-01-15T16:07:47
https://www.reddit.com/r/LocalLLaMA/comments/1i20scx/how_will_my_llm_run_time_scale_with_different/
brian-the-porpoise
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i20scx
false
null
t3_1i20scx
/r/LocalLLaMA/comments/1i20scx/how_will_my_llm_run_time_scale_with_different/
false
false
self
1
null
Jina releases ReaderLM V2, 1.5B model for HTML-to-Markdown/JSON conversion
43
2025-01-15T16:14:41
https://huggingface.co/jinaai/ReaderLM-v2
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1i20y53
false
null
t3_1i20y53
/r/LocalLLaMA/comments/1i20y53/jina_releases_readerlm_v2_15b_model_for/
false
false
https://a.thumbs.redditm…NAcP6aM36FP0.jpg
43
{'enabled': False, 'images': [{'id': 's-2lt-qtqwp4Ms2NExlAYUbE9-q8OCGVQORZ42MpxI0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OawXnZHfMYYebdHrDHFeKzgBRPwqxQJE51C0bsjvWqk.jpg?width=108&crop=smart&auto=webp&s=1ad6319ba4ba1a9e3dfb8551e755681ca8e9a48c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OawXnZHfMYYebdHrDHFeKzgBRPwqxQJE51C0bsjvWqk.jpg?width=216&crop=smart&auto=webp&s=966cd5803bf1203ba9f73564345518a306bdcdb7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OawXnZHfMYYebdHrDHFeKzgBRPwqxQJE51C0bsjvWqk.jpg?width=320&crop=smart&auto=webp&s=caf18a732694250035321c6f115bf23190d41db2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OawXnZHfMYYebdHrDHFeKzgBRPwqxQJE51C0bsjvWqk.jpg?width=640&crop=smart&auto=webp&s=59c70093e9abc7da1b0d7c8f17972ecb8b87d217', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OawXnZHfMYYebdHrDHFeKzgBRPwqxQJE51C0bsjvWqk.jpg?width=960&crop=smart&auto=webp&s=b653703cf2ade9f1baca6ab1d8789d90897780e7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OawXnZHfMYYebdHrDHFeKzgBRPwqxQJE51C0bsjvWqk.jpg?width=1080&crop=smart&auto=webp&s=1b988653ebf379837533f233fcf4313604284b3c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OawXnZHfMYYebdHrDHFeKzgBRPwqxQJE51C0bsjvWqk.jpg?auto=webp&s=cc983b7f4183061b167443676d08de2a6a392814', 'width': 1200}, 'variants': {}}]}
Looking for a writing framework
1
Looking for a light framework with a UI that allows me to run two different models at once and pass the input of one to the other. model A --> UI <-- model B I'd like to be able to set the system prompt for both and create a templated prompt pipeline to generate and refine content by letting the two models work together to ensure the output aligns with the examples, requirements and feedback delivered by the user. Does anything like this exist?
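Even without a ready-made framework, the relay itself is a small loop. A minimal sketch of the generate/refine pattern described above, with both model calls stubbed (in practice each stub would hit a local OpenAI-compatible endpoint with its own system prompt; the function names here are made up for illustration):

```python
# Two-model relay: model A drafts, model B critiques, A revises.
# Both calls are stubs standing in for real local-LLM requests.

def model_a_generate(prompt):
    return f"DRAFT({prompt})"    # stub: writer model produces a draft

def model_b_critique(draft):
    return f"CRITIQUE({draft})"  # stub: editor model reviews the draft

def refine(prompt, rounds=2):
    draft = model_a_generate(prompt)
    for _ in range(rounds):
        feedback = model_b_critique(draft)
        draft = model_a_generate(f"{prompt}\nRevise using: {feedback}")
    return draft

print(refine("a short scene in a lighthouse", rounds=1))
```

A UI on top then only needs to display the intermediate drafts and let the user inject feedback into the loop alongside model B's critique.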
2025-01-15T16:28:13
https://www.reddit.com/r/LocalLLaMA/comments/1i21986/looking_for_a_writing_framework/
Vegetable_Sun_9225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i21986
false
null
t3_1i21986
/r/LocalLLaMA/comments/1i21986/looking_for_a_writing_framework/
false
false
self
1
null
What’s with the recent influx of Chinese propaganda?
1
[removed]
2025-01-15T16:31:54
https://www.reddit.com/r/LocalLLaMA/comments/1i21cg6/whats_with_the_recent_influx_of_chinese_propaganda/
katiecharm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i21cg6
false
null
t3_1i21cg6
/r/LocalLLaMA/comments/1i21cg6/whats_with_the_recent_influx_of_chinese_propaganda/
false
false
self
1
null
★☆☆☆☆ Would not buy again
214
2025-01-15T16:53:05
https://i.redd.it/rmea76m6s6de1.png
MoffKalast
i.redd.it
1970-01-01T00:00:00
0
{}
1i21u4x
false
null
t3_1i21u4x
/r/LocalLLaMA/comments/1i21u4x/would_not_buy_again/
false
false
https://b.thumbs.redditm…CUHKzODlwMNs.jpg
214
{'enabled': True, 'images': [{'id': 'VUbd_2fZKiqNazy9EUwZbgYj0APY1trYObdOyzybaY0', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/rmea76m6s6de1.png?width=108&crop=smart&auto=webp&s=912dedf3ea5367ba2713f63ac93af651d974b6dd', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/rmea76m6s6de1.png?width=216&crop=smart&auto=webp&s=870afa370e1930fd633b6d7252b0d6d686c37f22', 'width': 216}, {'height': 109, 'url': 'https://preview.redd.it/rmea76m6s6de1.png?width=320&crop=smart&auto=webp&s=d34968c668ad0c9e3eebc5652f46456397af3ab0', 'width': 320}, {'height': 219, 'url': 'https://preview.redd.it/rmea76m6s6de1.png?width=640&crop=smart&auto=webp&s=013fbcd3bc5ce9ff62b442bfa22ea1b33a661040', 'width': 640}, {'height': 328, 'url': 'https://preview.redd.it/rmea76m6s6de1.png?width=960&crop=smart&auto=webp&s=895bc4ddfa13dcf3a2425f35732ff305a4f4d727', 'width': 960}, {'height': 369, 'url': 'https://preview.redd.it/rmea76m6s6de1.png?width=1080&crop=smart&auto=webp&s=6e8344dcdccda540431175f7b3bb4b9db431d0db', 'width': 1080}], 'source': {'height': 402, 'url': 'https://preview.redd.it/rmea76m6s6de1.png?auto=webp&s=f9c783cdc17550c6dea81dc937852766f33d4417', 'width': 1174}, 'variants': {}}]}
Deepseek is officially available on Android and iOS!
67
2025-01-15T16:56:48
https://i.redd.it/47xatq2hu6de1.png
Available-Stress8598
i.redd.it
1970-01-01T00:00:00
0
{}
1i21x7z
false
null
t3_1i21x7z
/r/LocalLLaMA/comments/1i21x7z/deepseek_is_officially_available_on_android_and/
false
false
https://b.thumbs.redditm…ZGq2FRipdTAc.jpg
67
{'enabled': True, 'images': [{'id': 'FfjV1oW9FCijZ1q68ZdhxldDQUsfnKr_na1CulyA-68', 'resolutions': [{'height': 156, 'url': 'https://preview.redd.it/47xatq2hu6de1.png?width=108&crop=smart&auto=webp&s=ab706d48f276faec9810bd7b3ce857ccdb560285', 'width': 108}, {'height': 313, 'url': 'https://preview.redd.it/47xatq2hu6de1.png?width=216&crop=smart&auto=webp&s=f14d0185be0b842f8f140cc3316cd87d28404359', 'width': 216}, {'height': 463, 'url': 'https://preview.redd.it/47xatq2hu6de1.png?width=320&crop=smart&auto=webp&s=b2215e72cdafe3d59ccfddd7f4bbab910d756cce', 'width': 320}, {'height': 927, 'url': 'https://preview.redd.it/47xatq2hu6de1.png?width=640&crop=smart&auto=webp&s=0c38b07010ec1163e4765356a3b6c3e3a5dea964', 'width': 640}, {'height': 1391, 'url': 'https://preview.redd.it/47xatq2hu6de1.png?width=960&crop=smart&auto=webp&s=b46b4fc73d3220a216eaffb443fa2806f6ee2316', 'width': 960}, {'height': 1565, 'url': 'https://preview.redd.it/47xatq2hu6de1.png?width=1080&crop=smart&auto=webp&s=fb913bfc4b7452b3ac50323b472d23455afa3f75', 'width': 1080}], 'source': {'height': 1565, 'url': 'https://preview.redd.it/47xatq2hu6de1.png?auto=webp&s=f14c723634b8b9f388d924331ded90e097ed0b29', 'width': 1080}, 'variants': {}}]}
How to set up a ChatGPT-like memory feature with MSTY?
1
Is such a thing even possible? Large context windows take up so much space, and making custom knowledge stacks is time-consuming, especially for a lot of data.
2025-01-15T16:57:15
https://www.reddit.com/r/LocalLLaMA/comments/1i21xlg/how_to_set_up_a_chatgptlike_memory_feature_with/
ZoeyKL_NSFW
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i21xlg
false
null
t3_1i21xlg
/r/LocalLLaMA/comments/1i21xlg/how_to_set_up_a_chatgptlike_memory_feature_with/
false
false
self
1
null
AI Research Recap 2024: From New Scaling Laws to Scaling Inference Compute
1
[removed]
2025-01-15T16:57:58
https://www.reddit.com/r/LocalLLaMA/comments/1i21y63/ai_research_recap_2024_from_new_scaling_laws_to/
seraschka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i21y63
false
null
t3_1i21y63
/r/LocalLLaMA/comments/1i21y63/ai_research_recap_2024_from_new_scaling_laws_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mhiQROOPAbGpGwyqSBNtSUMWTemoZzEI3j-o-b9mu14', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ky8sVqVpFvioyoRGhjrt57UmYV9N09Ymc4vHqYuR0as.jpg?width=108&crop=smart&auto=webp&s=54bca103908934c7e17eb70fdca77b491add9d62', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ky8sVqVpFvioyoRGhjrt57UmYV9N09Ymc4vHqYuR0as.jpg?width=216&crop=smart&auto=webp&s=44c1dbe197943fcd26bfc5d290e2591d9a9ea28f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ky8sVqVpFvioyoRGhjrt57UmYV9N09Ymc4vHqYuR0as.jpg?width=320&crop=smart&auto=webp&s=e10a2dda2b817aed204cc2d1de4c458c9e68745d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ky8sVqVpFvioyoRGhjrt57UmYV9N09Ymc4vHqYuR0as.jpg?width=640&crop=smart&auto=webp&s=2552161635c1495e08dda167268e8bdc82fe6486', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ky8sVqVpFvioyoRGhjrt57UmYV9N09Ymc4vHqYuR0as.jpg?width=960&crop=smart&auto=webp&s=47a294c3357b88fc639cc67e4d1b1d54458ec09e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ky8sVqVpFvioyoRGhjrt57UmYV9N09Ymc4vHqYuR0as.jpg?width=1080&crop=smart&auto=webp&s=943c5e179c316018b4213df4612f76d69aa2f0a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ky8sVqVpFvioyoRGhjrt57UmYV9N09Ymc4vHqYuR0as.jpg?auto=webp&s=27572dd926ec1cd659f5a7b072f23b1317afc87f', 'width': 1200}, 'variants': {}}]}
Has anyone cracked "proactive" LLMs that can actually monitor stuff in real-time?
12
I've been thinking about this limitation with LLMs - they're all just sitting there waiting for us to say something before they do anything. You know how it always goes: AI: *silently watching data stream* AI: "Yo heads up, something's happening here..." Human: "what you seeing?" AI: *still watching* "Pattern's getting clearer now..." Anyone seen projects or research about LLMs that can actually monitor stuff in real-time and pipe up when they notice something? Not just reacting to prompts, but actually having some kind of ongoing awareness? Been searching but most "autonomous" agents I've found still use that basic input/output loop, just automated. Edit: Not talking about basic monitoring with predetermined triggers - I mean actual AI that can decide on its own when to speak up based on what it's seeing. Example: AI: [watches data] AI: "I see that..." AI: "Okay, now it's more clear" Human: "how's it looking?" AI: "It's looking decent..."
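Most working systems today fake this with exactly the kind of loop described above: poll the stream, score it, and only invoke the model when something looks interesting. A toy sketch of that pattern (the anomaly score and the LLM call are stand-ins; a real version would let the model itself decide whether the window is worth commenting on):

```python
# "Proactive" pattern sketch: a polling loop watches a stream, scores
# what it sees, and only invokes the (stubbed) LLM to speak when the
# score crosses a threshold. No human prompt is involved.

def salience(window):
    """Toy anomaly score: how far the latest value strays from the mean."""
    mean = sum(window) / len(window)
    return abs(window[-1] - mean)

def llm_comment(window):
    return f"Heads up: latest value {window[-1]} looks unusual."  # stub

def monitor(stream, threshold=2.0, window_size=5):
    window = []
    for value in stream:
        window.append(value)
        window = window[-window_size:]
        if len(window) == window_size and salience(window) > threshold:
            yield llm_comment(window)

for remark in monitor([1.0, 1.1, 0.9, 1.0, 1.05, 9.0]):
    print(remark)
```

The harder research question the post is asking about is moving the *decision to speak* from the hand-coded `salience` function into the model itself, e.g. by asking it each tick "is this worth interrupting for?", which is still an open cost/latency trade-off.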
2025-01-15T16:58:05
https://www.reddit.com/r/LocalLLaMA/comments/1i21y9r/has_anyone_cracked_proactive_llms_that_can/
No-Conference-8133
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i21y9r
false
null
t3_1i21y9r
/r/LocalLLaMA/comments/1i21y9r/has_anyone_cracked_proactive_llms_that_can/
false
false
self
12
null
The truth about the censored model that certain “totally organic enthusiasts” are trying to sell you on.
1
2025-01-15T17:13:05
https://www.reddit.com/gallery/1i22bgw
katiecharm
reddit.com
1970-01-01T00:00:00
0
{}
1i22bgw
false
null
t3_1i22bgw
/r/LocalLLaMA/comments/1i22bgw/the_truth_about_the_censored_model_that_certain/
false
false
https://a.thumbs.redditm…ea445ZtAy7Y8.jpg
1
null
The truth about the model that certain “totally organic enthusiasts” are trying to sell you on.
37
2025-01-15T17:14:11
https://www.reddit.com/gallery/1i22ch0
katiecharm
reddit.com
1970-01-01T00:00:00
0
{}
1i22ch0
false
null
t3_1i22ch0
/r/LocalLLaMA/comments/1i22ch0/the_truth_about_the_model_that_certain_totally/
false
false
https://a.thumbs.redditm…ZSpw0i-hmFq0.jpg
37
null
Looking for advice on accomplishing a unique company use case using entirely local (enterprise compute) processing.
1
[removed]
2025-01-15T17:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1i22ff3/looking_for_advice_on_accomplishing_a_unique/
acvilleimport
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i22ff3
false
null
t3_1i22ff3
/r/LocalLLaMA/comments/1i22ff3/looking_for_advice_on_accomplishing_a_unique/
false
false
self
1
null
How to Compile Whisper.cpp for Mac OS Sonoma or Later on Apple Silicon
2
I have tried hard to get whisper.cpp to compile and run on an Apple Silicon M1 MacBook running Sonoma, and every attempt has failed because the make command fails completely, despite installing Xcode, all the extra command-line tools, and trying multiple tutorials, including the ones on GitHub. In case you'd suggest it: the Homebrew route fails as well. Can anyone help with some tips to make it work? P.S. I tried the official GitHub recommendations, they just did not work either: [https://github.com/ggerganov/whisper.cpp/tree/master](https://github.com/ggerganov/whisper.cpp/tree/master)
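One likely culprit: recent whisper.cpp versions moved from the old top-level Makefile to CMake, so a bare `make` can fail outright on a fresh clone. A hedged sketch of the CMake route (binary name and model path may differ by version; older releases shipped the main binary as `main` rather than `whisper-cli`):

```shell
# Build whisper.cpp with CMake instead of the legacy Makefile.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
cmake -B build
cmake --build build --config Release
# then, assuming you have downloaded a model into models/:
# ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
```

On Apple Silicon this build should pick up Metal acceleration automatically; if CMake itself is missing, `brew install cmake` is the usual fix.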
2025-01-15T17:18:58
https://www.reddit.com/r/LocalLLaMA/comments/1i22gnh/how_to_compile_whispercpp_for_mac_os_sonoma_or/
joseph-hurtado
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i22gnh
false
null
t3_1i22gnh
/r/LocalLLaMA/comments/1i22gnh/how_to_compile_whispercpp_for_mac_os_sonoma_or/
false
false
self
2
{'enabled': False, 'images': [{'id': 'xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=108&crop=smart&auto=webp&s=9f1a3c72bb85d28ca748578929e813c616ca047f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=216&crop=smart&auto=webp&s=d210c9e07ab2c76fd5db5866582e8d00dc69c210', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=320&crop=smart&auto=webp&s=5975f428f5ed1a6878c876d7a851448ccc82dec1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=640&crop=smart&auto=webp&s=ae5685e95d73e7f40e3ed12ad1d509c1c9bf2ff1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=960&crop=smart&auto=webp&s=30d3a941411a1d510ae4b967b3a13bf5bac8d020', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=1080&crop=smart&auto=webp&s=bb5888f4152853cf96cf29bc16492fa2f95a660b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?auto=webp&s=35f02b760b3d2d35fd8ab6c0ac7ca9e7239c34f1', 'width': 1280}, 'variants': {}}]}
Tool calling conversation with Qwen 2.5 using llama-cpp-python
1
[removed]
2025-01-15T17:36:25
https://www.reddit.com/r/LocalLLaMA/comments/1i22vga/tool_calling_conversation_with_qwen_25_using/
wheres-the-data
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i22vga
false
null
t3_1i22vga
/r/LocalLLaMA/comments/1i22vga/tool_calling_conversation_with_qwen_25_using/
false
false
self
1
{'enabled': False, 'images': [{'id': 'm4P55U-vQCR3vrW243t4x7TkTLCWb6DYBMoQqfDgtmk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ptplzznouRtpgm5t_Pl0D0L-1qvagrJMkGLSjezeM6A.jpg?width=108&crop=smart&auto=webp&s=e19fed5aa47c6ebd7274471e8bda9f055f9a50bd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ptplzznouRtpgm5t_Pl0D0L-1qvagrJMkGLSjezeM6A.jpg?width=216&crop=smart&auto=webp&s=baf26f583146d822b753b2fe21fc08480c54f145', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ptplzznouRtpgm5t_Pl0D0L-1qvagrJMkGLSjezeM6A.jpg?width=320&crop=smart&auto=webp&s=97c7100882838e35c47c96de16903060ce6559d0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ptplzznouRtpgm5t_Pl0D0L-1qvagrJMkGLSjezeM6A.jpg?width=640&crop=smart&auto=webp&s=5eaf2dc57eb4d0363636244fbd5887f8ef657d0b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ptplzznouRtpgm5t_Pl0D0L-1qvagrJMkGLSjezeM6A.jpg?width=960&crop=smart&auto=webp&s=a66c004b23c0376ce5887ecb241e9a41c734405a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ptplzznouRtpgm5t_Pl0D0L-1qvagrJMkGLSjezeM6A.jpg?width=1080&crop=smart&auto=webp&s=0ad55ab4e972cb55873c65d9ab2b2ac730eddd29', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ptplzznouRtpgm5t_Pl0D0L-1qvagrJMkGLSjezeM6A.jpg?auto=webp&s=cfc65a2753c6a1c12a1d0cd87abe0b5c5e0f162b', 'width': 1200}, 'variants': {}}]}
Dell T5820 w/ 2x Dell RTX 3090 for less than $2k - eBay sourced
74
2025-01-15T17:57:35
https://i.redd.it/qi354b2457de1.jpeg
_Boffin_
i.redd.it
1970-01-01T00:00:00
0
{}
1i23dhv
false
null
t3_1i23dhv
/r/LocalLLaMA/comments/1i23dhv/dell_t5820_w_2x_dell_rtx_3090_for_less_than_2k/
false
false
https://b.thumbs.redditm…H4WWI6EmOxiw.jpg
74
{'enabled': True, 'images': [{'id': 'E-7CuN6H_wHO8Nx98IJPUzj7ocGzGXNNcm78a8LTRp8', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/qi354b2457de1.jpeg?width=108&crop=smart&auto=webp&s=815f99156165e58439cf5de16654652d3e5b7b7f', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/qi354b2457de1.jpeg?width=216&crop=smart&auto=webp&s=b628cb3b2cdc08f29bcf4c3a8313fbbe4feb2839', 'width': 216}, {'height': 267, 'url': 'https://preview.redd.it/qi354b2457de1.jpeg?width=320&crop=smart&auto=webp&s=8b613e2577100121b2a1756429ae7d94d53a0619', 'width': 320}, {'height': 535, 'url': 'https://preview.redd.it/qi354b2457de1.jpeg?width=640&crop=smart&auto=webp&s=189fed98ad2b787956105b481e3545ba80e093c7', 'width': 640}, {'height': 802, 'url': 'https://preview.redd.it/qi354b2457de1.jpeg?width=960&crop=smart&auto=webp&s=54df694fb42d5cc5e3679e177e216ca74d841af4', 'width': 960}, {'height': 903, 'url': 'https://preview.redd.it/qi354b2457de1.jpeg?width=1080&crop=smart&auto=webp&s=dc954b7d7599577d9782069a16cea71c05de379f', 'width': 1080}], 'source': {'height': 1790, 'url': 'https://preview.redd.it/qi354b2457de1.jpeg?auto=webp&s=ab2436340852df86c9301cc76fe1130b11099392', 'width': 2140}, 'variants': {}}]}
Built a virtual AI employee with support for local LLMs—looking for feedback and suggestions!
0
A few months ago, I started with a simple goal: build a no-code tool for web automation. But midway through, I hit a roadblock… Initially, I was working on a tool that could record user actions on websites and replay them to automate tasks. It was coming along well—until Anthropic launched their Computer Use API, which could autonomously perform actions on the web without needing recording or playback. Suddenly, my project felt obsolete. At that point, I had to decide—pivot, abandon, or try something entirely new? I chose to pivot and aim bigger. Instead of just browser automation, I decided to build something broader—an AI agent that could handle any task on a computer just like a human, with special support for local models and open-source LLMs to ensure privacy and flexibility. That’s how Pilov.ai was born. What is Pilov.ai now? Pilov.ai is a virtual AI employee capable of working across your entire computer, with the added advantage of supporting local and open-source models for those who prefer privacy or offline capabilities. Whether it’s opening apps, managing files, sending emails, updating spreadsheets, or even making real phone calls, Pilov can autonomously complete complex, multi-step workflows. In the attached demo, Pilov: - Reads an email about a failed order tracking attempt. - Checks the order status on a delivery website. - Makes a real phone call to the logistics manager for an update. - Drafts a response and sends it back to the customer—all autonomously. Current Progress: For the past couple of months, I’ve been working solo on this project, balancing it with college. I’ve been putting in about 10 hours a day to bring Pilov to life. It now supports advanced features like: - Flow generation for automating repetitive tasks. - A super-powerful memory system to handle context. - An intervention system where users can guide the AI when needed. - Local model support, ensuring tasks are handled privately without relying on cloud services. 
I’d love your feedback: - What specific features would you like Pilov to have? - Should I focus on a specific domain (like customer support) or keep it broad? - Would local model support make this more valuable for your use case? I’m working hard as a solo developer and college undergrad, and I’d love to hear what you think. Feel free to check out the project and join the beta waitlist at www.pilov.ai. Thanks so much for reading! Your support and feedback mean a lot. Looking forward to your thoughts!
2025-01-15T18:30:06
https://v.redd.it/xv59vlx2b7de1
vishwa1238
/r/LocalLLaMA/comments/1i245ci/built_a_virtual_ai_employee_with_support_for/
1970-01-01T00:00:00
0
{}
1i245ci
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xv59vlx2b7de1/DASHPlaylist.mpd?a=1739687410%2COGM0MjY5OTQ3NjdmYjcxYzRmM2YzNTQ4M2NkYmI5YTY3Y2JmZmE2YzllZTYwZDQzOTYwNzBhNjNmNDU2NDY2NA%3D%3D&v=1&f=sd', 'duration': 216, 'fallback_url': 'https://v.redd.it/xv59vlx2b7de1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/xv59vlx2b7de1/HLSPlaylist.m3u8?a=1739687410%2CNDU4YTI0NzI3YTU2NjMxZmEzYjQ5NjQ3NGUyNjY3ZWM4OTdjOTQ1Nzg5MzYyYmFjMTliN2NiMDBiYWY0ODA5MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xv59vlx2b7de1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1i245ci
/r/LocalLLaMA/comments/1i245ci/built_a_virtual_ai_employee_with_support_for/
false
false
https://external-preview…7991154caf32e4c0
0
{'enabled': False, 'images': [{'id': 'NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh.png?width=108&crop=smart&format=pjpg&auto=webp&s=76681a9ac9b7d613e482973a2131dd97e2f233e6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh.png?width=216&crop=smart&format=pjpg&auto=webp&s=db3e1f4a6bda703aa388e9fff07ca92921018de6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh.png?width=320&crop=smart&format=pjpg&auto=webp&s=384baeae0ac371c5caa45e7262b50e03d9aec865', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh.png?width=640&crop=smart&format=pjpg&auto=webp&s=d44ca5dceae05a8df64bb24c367378009b8b3312', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh.png?width=960&crop=smart&format=pjpg&auto=webp&s=2c423ff619f0cac850e6490e3e8aeb610b7672aa', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fcfa302e964a8ecc5dd0a53219ef87512894e015', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NDdsazY1dTJiN2RlMbE8aD-dTxCn_POPxGxGfuuv0OQQsrXKopjRdw9g4Umh.png?format=pjpg&auto=webp&s=08386ca13152230830e2101f947ec569e9f73ae2', 'width': 1920}, 'variants': {}}]}