Dataset columns (name: type, observed range):
title: string, length 1 to 300
score: int64, 0 to 8.54k
selftext: string, length 0 to 40k
created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
url: string, length 0 to 878
author: string, length 3 to 20
domain: string, length 0 to 82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
gilded: int64, 0 to 2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646 to 1.8k
name: string, length 10
permalink: string, length 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4 to 213
ups: int64, 0 to 8.54k
preview: string, length 301 to 5.01k
Best Model for Defensive Security?
1
[removed]
2025-01-11T21:12:00
https://www.reddit.com/r/LocalLLaMA/comments/1hz5zv0/best_model_for_defensive_security/
fraxinustreee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz5zv0
false
null
t3_1hz5zv0
/r/LocalLLaMA/comments/1hz5zv0/best_model_for_defensive_security/
false
false
self
1
null
LLM Knowledge Base from Markdown Files
2
Hi everyone, I’m looking for advice on creating something similar to Claude's project knowledge but locally, to use with an LLM. For example, I have 1000 markdown files, each as a separate document. What would I need to ensure the LLM can understand all the text within them accurately and without hallucinating? Would it process all 1,000 files simultaneously when I prompt something, or how would that work in practice? It should be really precise. Any guidance or pointers in the right direction would be greatly appreciated! Thanks in advance
2025-01-11T21:23:55
https://www.reddit.com/r/LocalLLaMA/comments/1hz69e2/llm_knowledge_base_from_markdown_files/
murdafeelin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz69e2
false
null
t3_1hz69e2
/r/LocalLLaMA/comments/1hz69e2/llm_knowledge_base_from_markdown_files/
false
false
self
2
null
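The post above asks how to ground a local LLM in ~1,000 markdown files without hallucination. Below is a minimal retrieval-augmented sketch of how that is usually done: the files are chunked and embedded once, and only the top-k retrieved chunks (not all 1,000 files) are placed in the prompt per question. Everything here is illustrative; the embedding model, folder name, and Ollama model name are assumptions, and it presumes `sentence-transformers`, `requests`, and a local Ollama server are available.

```python
# Minimal RAG sketch over a folder of markdown files (illustrative only).
from pathlib import Path
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

EMBEDDER = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def load_chunks(folder: str, max_chars: int = 1500) -> list[str]:
    """Split every .md file under `folder` into paragraph-sized chunks."""
    chunks = []
    for path in Path(folder).glob("**/*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for para in text.split("\n\n"):
            para = para.strip()
            if para:
                chunks.append(para[:max_chars])
    return chunks

def top_k(query: str, chunks: list[str], embs: np.ndarray, k: int = 5) -> list[str]:
    """Cosine-similarity retrieval of the k most relevant chunks."""
    q = EMBEDDER.encode([query], normalize_embeddings=True)[0]
    scores = embs @ q
    return [chunks[i] for i in np.argsort(-scores)[:k]]

def ask(query: str, chunks: list[str], embs: np.ndarray) -> str:
    """Answer using only retrieved context, via a local Ollama server (assumed)."""
    context = "\n---\n".join(top_k(query, chunks, embs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama3.1", "prompt": prompt, "stream": False})
    return r.json()["response"]

if __name__ == "__main__":
    chunks = load_chunks("notes/")  # hypothetical folder of markdown files
    embs = EMBEDDER.encode(chunks, normalize_embeddings=True)
    print(ask("What does the setup guide say about dependencies?", chunks, embs))
```

With this layout, precision depends mostly on chunking and retrieval quality, which is worth tuning before blaming the LLM itself.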
Question on external OCuLink GPU:
1
# Radeon RX 7600M. Just trying to get something running for fun (even a tiny model), but I was told to do a ROCm installation and the page is down; a search takes me to an AMD page, but it's pretty confusing. So it was suggested that I use a DirectML config, but the AI always seems to hit the CPU, not the external GPU. Forgive me if these questions are idiotic, as I'm a complete noob. Could someone point me to the ROCm download that used to be located at [https://www.amd.com/en/graphics/drivers/rocm-hub](https://www.amd.com/en/graphics/drivers/rocm-hub)?
2025-01-11T21:24:30
https://www.reddit.com/r/LocalLLaMA/comments/1hz69u8/question_on_ext_occulink_gpu/
Gloomy_Narwhal_719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz69u8
false
null
t3_1hz69u8
/r/LocalLLaMA/comments/1hz69u8/question_on_ext_occulink_gpu/
false
false
self
1
null
So how long till we start getting LLM benchmarks on an RTX 5090 card?
0
So I've seen the 5090 cards being spoken about online. I'm wondering how long till we get some benchmarks on the cards with local LLMs
2025-01-11T21:25:44
https://www.reddit.com/r/LocalLLaMA/comments/1hz6av2/so_how_long_till_we_start_getting_llm_benchmarks/
TheArchivist314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz6av2
false
null
t3_1hz6av2
/r/LocalLLaMA/comments/1hz6av2/so_how_long_till_we_start_getting_llm_benchmarks/
false
false
self
0
null
How do I add non-verbal outputs to an LLM? Or, generally, keep separate I/O channels separate?
3
Let's say I want to teach an agent to both play, and talk about playing, video games. Triggering screen grabs to use as input (in addition to optional text input of my conversation with it about the game) is easy, but then I have to have the agent do further interaction with the screen grab, like picking what parts of it to OCR and getting the resulting text input from the game, selecting locations in the screengrab to click on (and what kind of click), and selecting keystrokes to issue to the game. All this is separate from the conversation it is having with me about the game - where it gets or gives questions or strategy advice about what's happening in the game and then tries to answer. I have a bunch of command-line tools like screeencap, fakekeys, and fakemouse so it's easy to pipe I/O from the agent to the game using bash. (And fakeclock so I can slow down or pause the game to cope with the model's latency). I can offload the TTS and STT tasks. But how does the agent keep these I/O channels logically separate? Is this a brute-force matter of tagging each kind of input and catenating it all together into a single stream for input, and then on the other end forcing it to tag all output and separating it before piping? Or is there any more elegant way to do it? Obviously a lot of finetuning will be needed, and I plan to do it locally even though, yes, it will take weeks. What model would you recommend starting with? Ryzen Threadripper 16-core CPU, 64G DDR4 RAM, 2x3090 w/24G VRAM ea. Yes, before you ask, BOTH PCIE slots are 16x.
2025-01-11T22:11:31
https://www.reddit.com/r/LocalLLaMA/comments/1hz7bja/how_do_i_add_nonverbal_outputs_to_an_llm_or/
Ray_Dillinger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz7bja
false
null
t3_1hz7bja
/r/LocalLLaMA/comments/1hz7bja/how_do_i_add_nonverbal_outputs_to_an_llm_or/
false
false
self
3
null
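The post above describes exactly the "tag each channel, concatenate, then split the tagged output" pattern, and for a plain text-to-text model there is no more elegant standard mechanism; agent frameworks mostly do the same thing under the hood. Below is a small sketch of that multiplex/demultiplex step; the channel names and handlers are hypothetical stand-ins for the screencap/fakekeys/fakemouse pipes.

```python
# Illustrative tag-based mux/demux of separate I/O channels through one LLM stream.
import re
from typing import Callable

def mux(inputs: dict[str, str]) -> str:
    """Concatenate several input channels into one tagged prompt block."""
    return "\n".join(f"<{name}>\n{body}\n</{name}>" for name, body in inputs.items())

def demux(model_output: str, handlers: dict[str, Callable[[str], None]]) -> None:
    """Route each tagged block in the model's output to its handler."""
    for name, body in re.findall(r"<(\w+)>\n?(.*?)</\1>", model_output, re.S):
        if name in handlers:
            handlers[name](body.strip())

# Example wiring (real handlers would call fakekeys/fakemouse/TTS):
prompt = mux({"screen_ocr": "HP: 42/100  Gold: 310",
              "user_voice": "Should I buy the shield now?"})
fake_reply = "<keys>\npress b\n</keys>\n<chat>\nYes, buy it, you can afford it.\n</chat>"
demux(fake_reply, {"keys": lambda s: print("to game:", s),
                   "chat": lambda s: print("to user:", s)})
```

Fine-tuning then mostly teaches the model to respect the tag grammar reliably; the plumbing itself stays this simple.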
Advanced Guide - Ollama Function Calling with Python | Tutorial |
2
2025-01-11T22:11:45
https://toolworks.dev/docs/Guides/Advanced/call-functions-ollama-python.md
0xlisykes
toolworks.dev
1970-01-01T00:00:00
0
{}
1hz7bq4
false
null
t3_1hz7bq4
/r/LocalLLaMA/comments/1hz7bq4/advanced_guide_ollama_function_calling_with/
false
false
default
2
null
CPU / Cooler for Dual 3090
1
[removed]
2025-01-11T22:16:32
https://www.reddit.com/r/LocalLLaMA/comments/1hz7fjz/cpu_cooler_for_dual_3090/
Thin_Screen3778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz7fjz
false
null
t3_1hz7fjz
/r/LocalLLaMA/comments/1hz7fjz/cpu_cooler_for_dual_3090/
false
false
self
1
{'enabled': False, 'images': [{'id': 'GGu6OALJUJ1WFPtczDeVLsU1A85ffS5A5aBihUp9Ueg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lWbcMXR2mu1NgGojgeLwytoSizjt5NZq5aqse11AwH0.jpg?width=108&crop=smart&auto=webp&s=ead8828891891666962e9e18d9a00819e963906a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/lWbcMXR2mu1NgGojgeLwytoSizjt5NZq5aqse11AwH0.jpg?width=216&crop=smart&auto=webp&s=90c1c6c9c44078c7a89cf8670e584992e1328122', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/lWbcMXR2mu1NgGojgeLwytoSizjt5NZq5aqse11AwH0.jpg?width=320&crop=smart&auto=webp&s=1afc53957c761f2957e15ce94f25f7118f83fc37', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/lWbcMXR2mu1NgGojgeLwytoSizjt5NZq5aqse11AwH0.jpg?auto=webp&s=1949c3c6fb7070e8eafa57063d321e62b4125c07', 'width': 480}, 'variants': {}}]}
Your experience with 8b/14b/30b/70b/200b/400b models?
1
[removed]
2025-01-11T22:20:31
https://www.reddit.com/r/LocalLLaMA/comments/1hz7iuj/your_experience_with_8b14b30b70b200b400b_models/
urinabalerina
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz7iuj
false
null
t3_1hz7iuj
/r/LocalLLaMA/comments/1hz7iuj/your_experience_with_8b14b30b70b200b400b_models/
false
false
self
1
null
OpenAI is losing money, meanwhile Qwen is planning voice mode; imagine if they manage to make an o1-level model
202
2025-01-11T22:23:27
https://i.redd.it/nhsep8z3xfce1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1hz7l7d
false
null
t3_1hz7l7d
/r/LocalLLaMA/comments/1hz7l7d/openai_is_losing_money_meanwhile_qwen_is_planning/
false
false
https://b.thumbs.redditm…VfYFNUBciWAA.jpg
202
{'enabled': True, 'images': [{'id': '_GCfy1qJkgsPb1eU2Vcv5jZ3or1SzfR_2DQiOly2CTQ', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/nhsep8z3xfce1.png?width=108&crop=smart&auto=webp&s=c4bf1f01c9b845515dcd28c1e79780b44f1f5acb', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/nhsep8z3xfce1.png?width=216&crop=smart&auto=webp&s=1d8f462c5334eac9b18e7a854fd1a6251bd09f47', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/nhsep8z3xfce1.png?width=320&crop=smart&auto=webp&s=51305e102efae41e71520f8b41bcd617064f3dff', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/nhsep8z3xfce1.png?width=640&crop=smart&auto=webp&s=c7b460c09bdabd5d02e8e8e46757749c4711b75f', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/nhsep8z3xfce1.png?width=960&crop=smart&auto=webp&s=80c47527950d7c33766c50e3f0b86c101f2cd107', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/nhsep8z3xfce1.png?width=1080&crop=smart&auto=webp&s=a29f599387049caecac30192486a2c6450b9bca7', 'width': 1080}], 'source': {'height': 2183, 'url': 'https://preview.redd.it/nhsep8z3xfce1.png?auto=webp&s=cc2eb64a5411eb619dbce030836f912edb797669', 'width': 1080}, 'variants': {}}]}
Hi, this is a few questions coming from a person who just got into LLM recently.
1
[removed]
2025-01-11T22:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1hz7ocw/hi_this_is_a_few_questions_coming_from_a_person/
monkemylov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz7ocw
false
null
t3_1hz7ocw
/r/LocalLLaMA/comments/1hz7ocw/hi_this_is_a_few_questions_coming_from_a_person/
false
false
self
1
null
A few questions from an LLM beginner.
2
I just got into AI programming recently to code a friend for myself, because I am way too lonely and all of my friends have no time. This idea was heavily inspired by Neuro-sama. I want my friend to be able to listen to my voice and respond to me depending on my mood, and to read or analyze what is going on on my screen. What basic math skills do I need? I heard from some sources it's up to calculus. I am currently taking Math 2 Enhanced in school and learning Math 3 on Khan Academy. I'm an honors student, so this should be pretty easy, at least I hope so. What basic coding skills do I need? As of today, I know nothing about coding, but I am informed that it may require C#, C++, JavaScript, Vue, and Python. How long would it take me to achieve all the goals I said earlier? I actually asked ChatGPT, and its response was 3.5-4.5 years, which I think is pretty reliable. Maybe I'm just a pessimist... :( Is a 4070 enough to make a model the size of Neuro-sama? My family has a 4070 + i9 gen 11+ in the house. These are my guesstimates; I'll update this when my parents tell me the setup. Thanks for reading!
2025-01-11T22:28:45
https://www.reddit.com/r/LocalLLaMA/comments/1hz7pdq/a_few_questions_from_a_llm_beginner/
monkemylov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz7pdq
false
null
t3_1hz7pdq
/r/LocalLLaMA/comments/1hz7pdq/a_few_questions_from_a_llm_beginner/
false
false
self
2
null
O1 like model
0
2025-01-11T22:36:44
https://i.redd.it/lvc9y7ahzfce1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1hz7vvu
false
null
t3_1hz7vvu
/r/LocalLLaMA/comments/1hz7vvu/o1_like_model/
false
false
https://b.thumbs.redditm…8_Ke2W0MwzVo.jpg
0
{'enabled': True, 'images': [{'id': 'oGkM7FXPpwSg7DKZV9pOStR4UgmrmiIpiHC-cGOwcjI', 'resolutions': [{'height': 209, 'url': 'https://preview.redd.it/lvc9y7ahzfce1.png?width=108&crop=smart&auto=webp&s=098412800fb261c23c6983146add1c9b20bf69d3', 'width': 108}, {'height': 419, 'url': 'https://preview.redd.it/lvc9y7ahzfce1.png?width=216&crop=smart&auto=webp&s=b802b98bbc12a3207a44fffa9006a91201a070cc', 'width': 216}, {'height': 620, 'url': 'https://preview.redd.it/lvc9y7ahzfce1.png?width=320&crop=smart&auto=webp&s=4196110fa3cc7eb4d82132396a2fe06b755e65fe', 'width': 320}, {'height': 1241, 'url': 'https://preview.redd.it/lvc9y7ahzfce1.png?width=640&crop=smart&auto=webp&s=e2b045256f8526fa2f286460be1b18fbaaf1d485', 'width': 640}, {'height': 1862, 'url': 'https://preview.redd.it/lvc9y7ahzfce1.png?width=960&crop=smart&auto=webp&s=1e83539ab2ec1c687b34f3e10e2170c342aa5abe', 'width': 960}, {'height': 2095, 'url': 'https://preview.redd.it/lvc9y7ahzfce1.png?width=1080&crop=smart&auto=webp&s=1f3facaf9682dffc16c18ee0f82e4b21d898df47', 'width': 1080}], 'source': {'height': 2095, 'url': 'https://preview.redd.it/lvc9y7ahzfce1.png?auto=webp&s=5d19b03cf09f09152786a871ee1c1aa1b707a2b4', 'width': 1080}, 'variants': {}}]}
Local LLM using Model Context Protocol
0
Hey guys, I was wondering if there already exists some project that allows the use of MCP tools with a local LLM?
2025-01-11T22:37:58
https://www.reddit.com/r/LocalLLaMA/comments/1hz7wv4/local_llm_using_model_context_protocol/
DepthEnough71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hz7wv4
false
null
t3_1hz7wv4
/r/LocalLLaMA/comments/1hz7wv4/local_llm_using_model_context_protocol/
false
false
self
0
null
they don’t know how good gaze detection is
1
tutorial
2025-01-11T23:34:34
https://v.redd.it/ijym10ns9gce1
ParsaKhaz
v.redd.it
1970-01-01T00:00:00
0
{}
1hz94q8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ijym10ns9gce1/DASHPlaylist.mpd?a=1739230488%2CZGJhNDgzMDY4NjNjZjk2MmQ2MDA5NjhmYTFmN2JjODJhMDRiNjk5YjYzZWJkMzViN2QwZGMxYjFkMTk1ZDE3Zg%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/ijym10ns9gce1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/ijym10ns9gce1/HLSPlaylist.m3u8?a=1739230488%2CNTIwMTkxM2RjOTkwZjYyNWUwYTFkYmE3ZjA5MjdmM2YxOThhZTllZTRjNjU4MmI5MTYwMDViNjllOTljODNhYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ijym10ns9gce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1hz94q8
/r/LocalLLaMA/comments/1hz94q8/they_dont_know_how_good_gaze_detection_is/
false
false
https://external-preview…177d5df3af0dc0cd
1
{'enabled': False, 'images': [{'id': 'eXI4dmtsZnM5Z2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eXI4dmtsZnM5Z2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=108&crop=smart&format=pjpg&auto=webp&s=7c42cc919c76f2f40f22ab6d293f23fb4f3624df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/eXI4dmtsZnM5Z2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=216&crop=smart&format=pjpg&auto=webp&s=f39177484d16b10de41486452e091bfc72907650', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/eXI4dmtsZnM5Z2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=320&crop=smart&format=pjpg&auto=webp&s=e4f8d13508e74eb600b0524e30580180baafed90', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/eXI4dmtsZnM5Z2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=640&crop=smart&format=pjpg&auto=webp&s=7613bcaaf5ae0a2d0b5fe326a04b04439510a859', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eXI4dmtsZnM5Z2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?format=pjpg&auto=webp&s=65c403e713834e07c3d58832f9f735212ce042af', 'width': 720}, 'variants': {}}]}
they don’t know how good gaze detection is (tutorial)
1
link to tutorial
2025-01-11T23:37:07
https://v.redd.it/afxkulx7agce1
ParsaKhaz
v.redd.it
1970-01-01T00:00:00
0
{}
1hz96n8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/afxkulx7agce1/DASHPlaylist.mpd?a=1739230641%2COWE2YjBhZDVlM2ZmMDBhNmMxMmI4MDkyZGQzZjE0M2Y4NTQ0ZWYwYzA4NjY3N2Q5MDAwZjNiOWJjZjkwNDdiNQ%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/afxkulx7agce1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/afxkulx7agce1/HLSPlaylist.m3u8?a=1739230641%2CZTMyODQ4Y2NhMTc4MWFkNGFkMGMyMTlhMDhmMTRmODAxNzQxNjAzNjAxMGYxOGFmNTRhMTU5NDc0ZmIxZjlmMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/afxkulx7agce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1hz96n8
/r/LocalLLaMA/comments/1hz96n8/they_dont_know_how_good_gaze_detection_is_tutorial/
false
false
https://external-preview…6bc83015eeeff5d8
1
{'enabled': False, 'images': [{'id': 'Zm9hdTV3dTdhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Zm9hdTV3dTdhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=108&crop=smart&format=pjpg&auto=webp&s=d2047e4e8c94d4c14949ab95bd68b6daeecee80f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Zm9hdTV3dTdhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=216&crop=smart&format=pjpg&auto=webp&s=a956c386507c09080f0b7ffd099179a92578645c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Zm9hdTV3dTdhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=320&crop=smart&format=pjpg&auto=webp&s=1c21277f4c1dd420906ca7a958474827348680b4', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Zm9hdTV3dTdhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=640&crop=smart&format=pjpg&auto=webp&s=e3f60c11a1b254dea4e442fbeb967321d0c04d1c', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Zm9hdTV3dTdhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?format=pjpg&auto=webp&s=9989e7ef812c85c5a8963b78d3ee30c34d3429bb', 'width': 720}, 'variants': {}}]}
they don’t know how good gaze detection is on moondream
572
2025-01-11T23:38:28
https://v.redd.it/xgysp5nhagce1
ParsaKhaz
v.redd.it
1970-01-01T00:00:00
0
{}
1hz97my
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/xgysp5nhagce1/DASHPlaylist.mpd?a=1739230722%2CYzMwMDljYTFkYzc2ZGRkM2NiMWVjODRjZjE3MzI4ZDFiZGEzNDc5YWZlYTY0ZmEwNDMxYmVjYmY2ZTlmOGJmZg%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/xgysp5nhagce1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/xgysp5nhagce1/HLSPlaylist.m3u8?a=1739230722%2CYzYyYTg4YmZhNWRlNDZjYTBjMmVlOGQwMjBkNDIyNmRmY2YyZGZiZDViMGQ4OTE5NjA1ZDA0MDg1OTNjY2M2MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xgysp5nhagce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1hz97my
/r/LocalLLaMA/comments/1hz97my/they_dont_know_how_good_gaze_detection_is_on/
false
false
https://external-preview…827d840440f8ae46
572
{'enabled': False, 'images': [{'id': 'anBia3RnaGhhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/anBia3RnaGhhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=108&crop=smart&format=pjpg&auto=webp&s=86d2dcb5a227a76655b049fe28d47d3f7d1397da', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/anBia3RnaGhhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=216&crop=smart&format=pjpg&auto=webp&s=1b7816ca5e3376200443ceb9f7613c93f103f078', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/anBia3RnaGhhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=320&crop=smart&format=pjpg&auto=webp&s=5aef7bc6315321abcd2ab983181f2eccb6eb1efb', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/anBia3RnaGhhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?width=640&crop=smart&format=pjpg&auto=webp&s=7b85d8aed9059e5cabe85727c8858902508321fb', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/anBia3RnaGhhZ2NlMSTi0DO1FtxEm4mYFQVOtZR8uuj4lv59wjB_E-Pc4Mjr.png?format=pjpg&auto=webp&s=b89bfd9ae3187cd8f42462cd67acfd016532fdea', 'width': 720}, 'variants': {}}]}
Are there any good opensource/API based caption or watermark removal models for video?
1
[removed]
2025-01-12T00:18:28
https://www.reddit.com/r/LocalLLaMA/comments/1hza1od/are_there_any_good_opensourceapi_based_caption_or/
Orchid_Livid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hza1od
false
null
t3_1hza1od
/r/LocalLLaMA/comments/1hza1od/are_there_any_good_opensourceapi_based_caption_or/
false
false
self
1
{'enabled': False, 'images': [{'id': '4tKDPXDhNsB9lylPoaVVT0yS3nbQwA9VclCNQXuXeDc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6BZRpLo6waArK04G0k-2Azihe9omyYChTEoPCzxZy_s.jpg?width=108&crop=smart&auto=webp&s=7dff12a0786a33550bc955035b1138bb1584474a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6BZRpLo6waArK04G0k-2Azihe9omyYChTEoPCzxZy_s.jpg?width=216&crop=smart&auto=webp&s=f2aa2299639555063680b7b8fb44dbfb99dff867', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6BZRpLo6waArK04G0k-2Azihe9omyYChTEoPCzxZy_s.jpg?width=320&crop=smart&auto=webp&s=8130ef15941b5f51d742c6163aa5e68f3b1754b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6BZRpLo6waArK04G0k-2Azihe9omyYChTEoPCzxZy_s.jpg?width=640&crop=smart&auto=webp&s=2206a831af9c3b591efff60c30ad3e2ddcf278a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6BZRpLo6waArK04G0k-2Azihe9omyYChTEoPCzxZy_s.jpg?width=960&crop=smart&auto=webp&s=0eaf7dda5c1c381242bc3cd31b898ba1aecce0be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6BZRpLo6waArK04G0k-2Azihe9omyYChTEoPCzxZy_s.jpg?width=1080&crop=smart&auto=webp&s=09bc1cad176892896b625c0db8c5b45cc328910b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6BZRpLo6waArK04G0k-2Azihe9omyYChTEoPCzxZy_s.jpg?auto=webp&s=6a3bd974c792904cc9b0916b9fced49233f4d574', 'width': 1200}, 'variants': {}}]}
The complete timeline of the biggest AI events of 2024 - everything you missed
1
[removed]
2025-01-12T00:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1hza6lj/the_complete_timeline_of_the_biggest_ai_events_of/
VegetableDonut6010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hza6lj
false
null
t3_1hza6lj
/r/LocalLLaMA/comments/1hza6lj/the_complete_timeline_of_the_biggest_ai_events_of/
false
false
self
1
null
Dell System Recommendations
1
[removed]
2025-01-12T00:40:47
https://www.reddit.com/r/LocalLLaMA/comments/1hzaira/dell_system_recommendations/
vincewit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzaira
false
null
t3_1hzaira
/r/LocalLLaMA/comments/1hzaira/dell_system_recommendations/
false
false
self
1
null
We are an AI company now!
860
2025-01-12T00:47:37
https://i.redd.it/0yl0970umgce1.jpeg
Brilliant-Day2748
i.redd.it
1970-01-01T00:00:00
0
{}
1hzany5
false
null
t3_1hzany5
/r/LocalLLaMA/comments/1hzany5/we_are_an_ai_company_now/
false
false
https://b.thumbs.redditm…a1EIKUXEPfBw.jpg
860
{'enabled': True, 'images': [{'id': 'HNleU71WU4mRaUyRe0yPZIB7C_ejvVrJfi5x8o1g9L8', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/0yl0970umgce1.jpeg?width=108&crop=smart&auto=webp&s=37261ca02f0d456730c67cec49bd49feea72adfc', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/0yl0970umgce1.jpeg?width=216&crop=smart&auto=webp&s=fb1360456919e2fae7871f072ad4e2921e0ee186', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/0yl0970umgce1.jpeg?width=320&crop=smart&auto=webp&s=d24fe938f7728c0456d4b30dbdbc2b4783833c62', 'width': 320}, {'height': 502, 'url': 'https://preview.redd.it/0yl0970umgce1.jpeg?width=640&crop=smart&auto=webp&s=53963d0db45722eea8467f27c91ca48e5a7cf6fc', 'width': 640}], 'source': {'height': 628, 'url': 'https://preview.redd.it/0yl0970umgce1.jpeg?auto=webp&s=bae0a52840080eead95a9721e555a7156d39fc4d', 'width': 800}, 'variants': {}}]}
txtai 8.2 released: Simplified LLM messages, Graph RAG attribute filters and multi-CPU/GPU encoding
1
2025-01-12T01:09:11
https://github.com/neuml/txtai
davidmezzetti
github.com
1970-01-01T00:00:00
0
{}
1hzb3io
false
null
t3_1hzb3io
/r/LocalLLaMA/comments/1hzb3io/txtai_82_released_simplified_llm_messages_graph/
false
false
https://a.thumbs.redditm…OCRLvalCErZ0.jpg
1
{'enabled': False, 'images': [{'id': 'n9J0xq5L9qvyGceYTzVsNDInw-xXMeWfV410fQqDrKI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=108&crop=smart&auto=webp&s=343638f21b76189750f8aaecc8402ef2bdbbbbb7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=216&crop=smart&auto=webp&s=8983cffe04f3248c0d99ba77036d56b50a43d7be', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=320&crop=smart&auto=webp&s=922a42cca4b3f7c3de26496b528895caf34f6bfd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=640&crop=smart&auto=webp&s=d092484879fd96ca55986cb7114e9a64a030f5f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=960&crop=smart&auto=webp&s=134acedcc4cb71c738510cb351bbd5be38f2045c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=1080&crop=smart&auto=webp&s=a2b82faa24f3b9662ac81e544266b9c71ccf55c2', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?auto=webp&s=a0baeeeee72e980d3bd150286f26c1477bb20e4f', 'width': 1920}, 'variants': {}}]}
What is the purpose of models below 3b?
0
I understand 3b is useful for mobile, but where would anything lower come into play?
2025-01-12T01:12:35
https://www.reddit.com/r/LocalLLaMA/comments/1hzb5uk/what_is_the_purpose_of_models_below_3b/
Independent_Try_6891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzb5uk
false
null
t3_1hzb5uk
/r/LocalLLaMA/comments/1hzb5uk/what_is_the_purpose_of_models_below_3b/
false
false
self
0
null
AnythingLLM RAG
0
Hi all, need some help understanding what's happening here! Relatively new to the LLM world. I'm running Phi4 on LM Studio in server mode. I'm also running AnythingLLM, connecting to the LM Studio server instance. All is working just fine. I've uploaded about 100 documents or text from web sites using the AnythingLLM add on for Chrome to be referenced through RAG. The problem I'm having is the answers are typically coming from Phi4, even when I switch the agent settings to Query from Chat. Any hints about how to get the results from RAG?
2025-01-12T01:48:44
https://www.reddit.com/r/LocalLLaMA/comments/1hzbuyn/anythingllm_rag/
fuzz_64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzbuyn
false
null
t3_1hzbuyn
/r/LocalLLaMA/comments/1hzbuyn/anythingllm_rag/
false
false
self
0
null
What is your favorite VLM model?
2
I realized that there are quite a few more of these than I thought. I really only have LLaVA and Moondream downloaded. Does anyone have any favorite models under 14B? I only have that requirement because I intend to use it along with Stable Diffusion/Flux, so I need to save some resources for that. I looked at the newer Moondream, but I'm not sure if I could find a GGUF for it that would work with Ollama. I've got a slightly older version that seems to work well.
2025-01-12T01:50:30
https://www.reddit.com/r/LocalLLaMA/comments/1hzbw61/what_is_your_favorite_vlm_model/
eggs-benedryl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzbw61
false
null
t3_1hzbw61
/r/LocalLLaMA/comments/1hzbw61/what_is_your_favorite_vlm_model/
false
false
self
2
null
What is the best model for writing academic papers?
0
I am writing an academic paper on economics, and I would like help from an AI while writing, by giving it instructions and having it adapt to APA style.
2025-01-12T01:59:58
https://www.reddit.com/r/LocalLLaMA/comments/1hzc2rp/what_is_the_best_model_for_writing_academic_papers/
PuzzleheadedPitch316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzc2rp
false
null
t3_1hzc2rp
/r/LocalLLaMA/comments/1hzc2rp/what_is_the_best_model_for_writing_academic_papers/
false
false
self
0
null
Is Gemma-2 Ifable Smarter Than We Think?
6
I know the title sounds a little clickbaity, but I need to ask—am I the only one who feels like Gemma-2 Ifable has moments of actual reasoning and comprehension when discussing stories? I haven’t seen this level of engagement since the early days of GPT-4. Sure, I get it—many will argue it’s not reasoning, just predictive algorithms at work. But what I’m experiencing feels like true contextual understanding. Usually, when I skim through LLM outputs, I see most of it as surface-level fluff—GPT-isms or generic rewrites. It's easy to dismiss as a glorified thesaurus or rewording machine. I rarely stop to reread what it generates. With Ifable, though, I’ve had multiple moments where I catch myself rereading, realizing it’s brought a deeper, introspective layer to the table. It’s not just following prompts; it’s offering opinions and insights based on the material I’ve shared. For example, I recently had a moment where it seemed to grasp a character I’d created better than I did—almost like it was unpacking layers I hadn’t consciously considered. It challenged me, threw out perspectives I hadn’t thought of, and added depth that felt beyond simple prediction. So, am I imagining this, or have others felt the same? Have you had moments where it feels like Gemma-2 Ifable is reasoning—finding logic, insights, or meaning you didn’t fully realize were there?
2025-01-12T02:10:11
https://www.reddit.com/r/LocalLLaMA/comments/1hzc9ws/is_gemma2_ifable_smarter_than_we_think/
GrungeWerX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzc9ws
false
null
t3_1hzc9ws
/r/LocalLLaMA/comments/1hzc9ws/is_gemma2_ifable_smarter_than_we_think/
false
false
self
6
null
Best free and local solution for trainers and TTS w human results
1
[removed]
2025-01-12T02:22:04
https://www.reddit.com/r/LocalLLaMA/comments/1hzchir/best_free_and_local_solution_for_trainers_and_tts/
Able-Fisherman-5413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzchir
false
null
t3_1hzchir
/r/LocalLLaMA/comments/1hzchir/best_free_and_local_solution_for_trainers_and_tts/
false
false
self
1
null
ML Fundamentals / LLMs Certificate
1
Hello, I'm wondering whether there is any modern certificate program on ML fundamentals, with in-depth coverage of LLMs, that would serve as a good foundation for a software / data / infrastructure engineer to find related work?
2025-01-12T02:40:54
https://www.reddit.com/r/LocalLLaMA/comments/1hzcu0a/ml_fundamentals_llms_certificate/
ke7cfn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzcu0a
false
null
t3_1hzcu0a
/r/LocalLLaMA/comments/1hzcu0a/ml_fundamentals_llms_certificate/
false
false
self
1
null
moondream recaptcha test
1
https://preview.redd.it/… for the others.
2025-01-12T02:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1hzcvsl/moondream_recaptcha_test/
Dr_Karminski
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzcvsl
false
null
t3_1hzcvsl
/r/LocalLLaMA/comments/1hzcvsl/moondream_recaptcha_test/
false
false
https://b.thumbs.redditm…frWd7sSgUF_s.jpg
1
null
moondream CAPTCHAs test. It's surprisingly accurate at solving rotation CAPTCHAs, but not so much for the others.
40
2025-01-12T02:46:02
https://i.redd.it/c73tl9ko7hce1.png
Dr_Karminski
i.redd.it
1970-01-01T00:00:00
0
{}
1hzcxby
false
null
t3_1hzcxby
/r/LocalLLaMA/comments/1hzcxby/moondream_captchas_test_its_surprisingly_accurate/
false
false
https://b.thumbs.redditm…7BnPnIcIFa4Y.jpg
40
{'enabled': True, 'images': [{'id': 'PoE4ohSHMZLwWNm7STGaUqvRb-ALaB-hv8yo8qkwFhc', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/c73tl9ko7hce1.png?width=108&crop=smart&auto=webp&s=df68c8b2866c367f9e0a8afbeab95f43ff2ae4e8', 'width': 108}, {'height': 352, 'url': 'https://preview.redd.it/c73tl9ko7hce1.png?width=216&crop=smart&auto=webp&s=64dec62d2e4df982cf3bfb2ef7c10af45e00a15f', 'width': 216}, {'height': 521, 'url': 'https://preview.redd.it/c73tl9ko7hce1.png?width=320&crop=smart&auto=webp&s=e36a407f446b8140be76632c6cb22bb7e402d779', 'width': 320}, {'height': 1043, 'url': 'https://preview.redd.it/c73tl9ko7hce1.png?width=640&crop=smart&auto=webp&s=3ecac7f710d04af5ce2b21d2e0825db84a321f21', 'width': 640}, {'height': 1565, 'url': 'https://preview.redd.it/c73tl9ko7hce1.png?width=960&crop=smart&auto=webp&s=798a8b4730f26c2e258cf8e9c478b8a0dae1e667', 'width': 960}, {'height': 1761, 'url': 'https://preview.redd.it/c73tl9ko7hce1.png?width=1080&crop=smart&auto=webp&s=2ddf4cbb7a4c7026967491fd62ac48b92c489ba2', 'width': 1080}], 'source': {'height': 1929, 'url': 'https://preview.redd.it/c73tl9ko7hce1.png?auto=webp&s=1e9c7f0f86677e9b660895a995ba76a72f417c05', 'width': 1183}, 'variants': {}}]}
Qwen releases Qwen Chat (online)
120
2025-01-12T02:56:09
https://chat.qwenlm.ai
Many_SuchCases
chat.qwenlm.ai
1970-01-01T00:00:00
0
{}
1hzd3xs
false
null
t3_1hzd3xs
/r/LocalLLaMA/comments/1hzd3xs/qwen_releases_qwen_chat_online/
false
false
default
120
null
Local LLM for coding (8GB Ram)
1
[removed]
2025-01-12T03:21:13
https://www.reddit.com/r/LocalLLaMA/comments/1hzdkh8/local_llm_for_coding_8gb_ram/
Jack_Douniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzdkh8
false
null
t3_1hzdkh8
/r/LocalLLaMA/comments/1hzdkh8/local_llm_for_coding_8gb_ram/
false
false
self
1
null
6x AMD Instinct Mi60 AI Server vs Llama 405B + vLLM + Open-WebUI - Impressive!
84
2025-01-12T03:48:19
https://v.redd.it/r3w7zbozihce1
Any_Praline_8178
/r/LocalLLaMA/comments/1hze1xk/6x_amd_instinct_mi60_ai_server_vs_llama_405b_vllm/
1970-01-01T00:00:00
0
{}
1hze1xk
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r3w7zbozihce1/DASHPlaylist.mpd?a=1739375304%2CMTM3NTFmMWI1YzYzZTIxOWY2Y2YyZTIxZGViYzg5NjBlMzYwMWE3ZWMyNTMzZTQ4OTU3YjJhZmNlNWQ5NzQ1Nw%3D%3D&v=1&f=sd', 'duration': 172, 'fallback_url': 'https://v.redd.it/r3w7zbozihce1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/r3w7zbozihce1/HLSPlaylist.m3u8?a=1739375304%2COTBlMTYyNDUzMTZhYWYyMWVlODM3Njc4MzNjYjY5MjhiYWYxZTZlNTZkYjU5OThlMzM4MTUyMjQyYTQ5YWRlZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r3w7zbozihce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1230}}
t3_1hze1xk
/r/LocalLLaMA/comments/1hze1xk/6x_amd_instinct_mi60_ai_server_vs_llama_405b_vllm/
false
false
https://external-preview…33e813a77205608c
84
{'enabled': False, 'images': [{'id': 'eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu', 'resolutions': [{'height': 94, 'url': 'https://external-preview.redd.it/eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu.png?width=108&crop=smart&format=pjpg&auto=webp&s=4e366c8626047c72a86f067ff087152e10013f1a', 'width': 108}, {'height': 189, 'url': 'https://external-preview.redd.it/eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu.png?width=216&crop=smart&format=pjpg&auto=webp&s=28e636fdccbbe147040b92fa902f3696c8e63e6d', 'width': 216}, {'height': 280, 'url': 'https://external-preview.redd.it/eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu.png?width=320&crop=smart&format=pjpg&auto=webp&s=90365b8163d4ea05db9627cc9dbf2c42bb638476', 'width': 320}, {'height': 561, 'url': 'https://external-preview.redd.it/eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu.png?width=640&crop=smart&format=pjpg&auto=webp&s=9f10f9ff6667ef65eb9acbcbe34a8a79953875e8', 'width': 640}, {'height': 842, 'url': 'https://external-preview.redd.it/eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu.png?width=960&crop=smart&format=pjpg&auto=webp&s=8c584edce09cc6e686227f5e50295989fbc4c9e1', 'width': 960}, {'height': 947, 'url': 'https://external-preview.redd.it/eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2bfa6020eda3f13d2cd2e3dd3ebfd5d9b34bd562', 'width': 1080}], 'source': {'height': 3788, 'url': 'https://external-preview.redd.it/eDd2Nndjb3ppaGNlMUVpVcA3yZ4wjFwvAE4TdXYq4bpkJwG-QulsV1F4T0eu.png?format=pjpg&auto=webp&s=6455b4407148071cc07d71e859126b8b240254e9', 'width': 4316}, 'variants': {}}]}
Buy Cashapp account - PVACPA
1
2025-01-12T03:50:36
https://pvacpa.com/product/buy-cashapp-account/
LetterheadMain7510
pvacpa.com
1970-01-01T00:00:00
0
{}
1hze3dd
false
null
t3_1hze3dd
/r/LocalLLaMA/comments/1hze3dd/buy_cashapp_account_pvacpa/
false
false
default
1
null
Will AI Like LLMs Make Problem-Solving Skills Obsolete, Just Like Calculations in the Brain?
1
[removed]
2025-01-12T03:57:37
https://www.reddit.com/r/LocalLLaMA/comments/1hze7sk/will_ai_like_llms_make_problemsolving_skills/
Optimalutopic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hze7sk
false
null
t3_1hze7sk
/r/LocalLLaMA/comments/1hze7sk/will_ai_like_llms_make_problemsolving_skills/
false
false
self
1
null
Will AI Like LLMs Make Problem-Solving Skills Obsolete, Just Like Calculations in the Brain?
1
[removed]
2025-01-12T04:02:30
https://www.reddit.com/r/LocalLLaMA/comments/1hzeb1p/will_ai_like_llms_make_problemsolving_skills/
Substantial-Gas-5735
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzeb1p
false
null
t3_1hzeb1p
/r/LocalLLaMA/comments/1hzeb1p/will_ai_like_llms_make_problemsolving_skills/
false
false
self
1
null
Oddly similar
1
2025-01-12T04:09:13
https://i.redd.it/e5bz5bmqmhce1.png
rm-rf-rm
i.redd.it
1970-01-01T00:00:00
0
{}
1hzef8d
false
null
t3_1hzef8d
/r/LocalLLaMA/comments/1hzef8d/oddly_similar/
false
false
https://b.thumbs.redditm…qGyVW5S7PpqU.jpg
1
{'enabled': True, 'images': [{'id': 'iuKwfo0dpBEv3ZzhsB9L3SuBfPV5y9ZmI6x9IfT88M8', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/e5bz5bmqmhce1.png?width=108&crop=smart&auto=webp&s=8a67d7448dbd11499680a1ae82ddce5d9d6aa3af', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/e5bz5bmqmhce1.png?width=216&crop=smart&auto=webp&s=fb2fe83e09abe39d0cdd0f1a0f914d86e1c34dae', 'width': 216}, {'height': 142, 'url': 'https://preview.redd.it/e5bz5bmqmhce1.png?width=320&crop=smart&auto=webp&s=a5755f83d97cf54e78fd929891a76851be04adf2', 'width': 320}, {'height': 285, 'url': 'https://preview.redd.it/e5bz5bmqmhce1.png?width=640&crop=smart&auto=webp&s=4e878af86eeacae695d2eda1aef8d866bb92512a', 'width': 640}, {'height': 428, 'url': 'https://preview.redd.it/e5bz5bmqmhce1.png?width=960&crop=smart&auto=webp&s=d94a3c55947d8a667cafefb9fe3399bc65d51cec', 'width': 960}, {'height': 482, 'url': 'https://preview.redd.it/e5bz5bmqmhce1.png?width=1080&crop=smart&auto=webp&s=93d82bf444af6b2c7c7bca61243423c780b59673', 'width': 1080}], 'source': {'height': 1004, 'url': 'https://preview.redd.it/e5bz5bmqmhce1.png?auto=webp&s=aae55625e80161bf4e815b988f0bf1df3bdf8540', 'width': 2248}, 'variants': {}}]}
Communist A.I At It's Finest
0
2025-01-12T04:11:40
https://i.redd.it/gqjgn1x5nhce1.png
ZaggyChum
i.redd.it
1970-01-01T00:00:00
0
{}
1hzegs0
false
null
t3_1hzegs0
/r/LocalLLaMA/comments/1hzegs0/communist_ai_at_its_finest/
false
false
https://b.thumbs.redditm…MIgJ93umdyIM.jpg
0
{'enabled': True, 'images': [{'id': 'KiljoF3Cc_gqEJwQfD-2UDiQIAFsRDEwJMMEJOfu4wQ', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/gqjgn1x5nhce1.png?width=108&crop=smart&auto=webp&s=c01933604b8f1959ef22bcc65fc6df7276d2a0a3', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/gqjgn1x5nhce1.png?width=216&crop=smart&auto=webp&s=3b7019ab135b88ff95268d7874b03839a80d9374', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/gqjgn1x5nhce1.png?width=320&crop=smart&auto=webp&s=2a41bed500fed99144eef6f44062ea2ec56adb53', 'width': 320}, {'height': 237, 'url': 'https://preview.redd.it/gqjgn1x5nhce1.png?width=640&crop=smart&auto=webp&s=613a0a0ac89ad3d49773d68ac21d4d806759420e', 'width': 640}, {'height': 355, 'url': 'https://preview.redd.it/gqjgn1x5nhce1.png?width=960&crop=smart&auto=webp&s=02d673a3ff56588f7f7779ac06e0ec08faf2730a', 'width': 960}, {'height': 400, 'url': 'https://preview.redd.it/gqjgn1x5nhce1.png?width=1080&crop=smart&auto=webp&s=97d854fd3886b8d4abd442d92e430b8d5ba948f6', 'width': 1080}], 'source': {'height': 621, 'url': 'https://preview.redd.it/gqjgn1x5nhce1.png?auto=webp&s=78431ca91a258edd6b882d13ddb60c5027a2b9aa', 'width': 1675}, 'variants': {}}]}
Oddly similar..
1
2025-01-12T04:20:49
https://i.redd.it/8bjzp12oohce1.png
NahSoR
i.redd.it
1970-01-01T00:00:00
0
{}
1hzemhz
false
null
t3_1hzemhz
/r/LocalLLaMA/comments/1hzemhz/oddly_similar/
false
false
default
1
{'enabled': True, 'images': [{'id': '8bjzp12oohce1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/8bjzp12oohce1.png?width=108&crop=smart&auto=webp&s=1ab368eebbc657940657ed85d541fca223b8997f', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/8bjzp12oohce1.png?width=216&crop=smart&auto=webp&s=131d4fc8b73e6c584736cd0c041594a6c8ebc6a0', 'width': 216}, {'height': 142, 'url': 'https://preview.redd.it/8bjzp12oohce1.png?width=320&crop=smart&auto=webp&s=d841180d554205f4bafc3d7814928f985e16fbd2', 'width': 320}, {'height': 285, 'url': 'https://preview.redd.it/8bjzp12oohce1.png?width=640&crop=smart&auto=webp&s=0ba9feefc0132cbb191250e24da98b1adf63c1bc', 'width': 640}, {'height': 428, 'url': 'https://preview.redd.it/8bjzp12oohce1.png?width=960&crop=smart&auto=webp&s=7cf629c24fbc7a89e86ef68f46292dcf385973a9', 'width': 960}, {'height': 482, 'url': 'https://preview.redd.it/8bjzp12oohce1.png?width=1080&crop=smart&auto=webp&s=4d3bdf84099d23158e7696269ccea7931ffff1b3', 'width': 1080}], 'source': {'height': 1004, 'url': 'https://preview.redd.it/8bjzp12oohce1.png?auto=webp&s=ba7bf135c19b88bf615dc8e2a9c5a3293a697bad', 'width': 2248}, 'variants': {}}]}
Speculative decoding isn't coming to ollama anytime soon, any alternatives?
18
According to this recently [rejected PR](https://github.com/ollama/ollama/pull/8134), ollama isn't going to bring in draft models and speculative decoding any time soon. I'd very much like to have this feature. I tried it out on MLX and it seems to be more than just a token-rate speedup. It seems to take the "voice" of the draft model and integrate it into the larger model. I guess this is a type of steering? Imagine giving something like small stories to a 128k-context model! In any event, I'd use MLX but my use case isn't purely Apple. Does anyone have suggestions?
2025-01-12T04:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1hzew4v/speculative_decoding_isnt_coming_to_ollama/
ServeAlone7622
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzew4v
false
null
t3_1hzew4v
/r/LocalLLaMA/comments/1hzew4v/speculative_decoding_isnt_coming_to_ollama/
false
false
self
18
{'enabled': False, 'images': [{'id': 'b_Lm8jRhqvqya8sjODUaGfwtM1AokUx1Jv4cD28CEbM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FDX98oh0WqV3fiK0zEboI1AqyZMgg2qdr_i-Zeb_OVQ.jpg?width=108&crop=smart&auto=webp&s=8cccb725644ff52428e9919e3ed8031189f028d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FDX98oh0WqV3fiK0zEboI1AqyZMgg2qdr_i-Zeb_OVQ.jpg?width=216&crop=smart&auto=webp&s=d6066b7c5aa4fab7153bdfe81c9b5cbad3d56fb1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FDX98oh0WqV3fiK0zEboI1AqyZMgg2qdr_i-Zeb_OVQ.jpg?width=320&crop=smart&auto=webp&s=cdc79e250e12e8d0a59c81e6bb9cea33e812b71c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FDX98oh0WqV3fiK0zEboI1AqyZMgg2qdr_i-Zeb_OVQ.jpg?width=640&crop=smart&auto=webp&s=fa15eba27d0a1753aa9d699c9009ce944472b55b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FDX98oh0WqV3fiK0zEboI1AqyZMgg2qdr_i-Zeb_OVQ.jpg?width=960&crop=smart&auto=webp&s=e6fe9fd10a709ed7833a1361e7c5660fd8098228', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FDX98oh0WqV3fiK0zEboI1AqyZMgg2qdr_i-Zeb_OVQ.jpg?width=1080&crop=smart&auto=webp&s=473967c3dcadb881638815c1bd447282ea7f83c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FDX98oh0WqV3fiK0zEboI1AqyZMgg2qdr_i-Zeb_OVQ.jpg?auto=webp&s=d81ea503be53c8f7ff4355b6661fa9a65697cb07', 'width': 1200}, 'variants': {}}]}
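For readers unfamiliar with the feature under discussion, here is a framework-agnostic sketch of greedy speculative decoding: a cheap draft model proposes a few tokens, the large target model verifies them, and the matching prefix is accepted. In a real runtime the verification steps are batched into one forward pass of the target model, which is where the speedup comes from, and with strict greedy verification the output is identical to running the target alone. The `draft_next`/`target_next` callables are toy stand-ins, not any particular library's API.

```python
# Schematic of greedy speculative decoding (illustrative, runtime-agnostic).
from typing import Callable

def speculative_generate(prompt: list[str],
                         draft_next: Callable[[list[str]], str],
                         target_next: Callable[[list[str]], str],
                         k: int = 4, max_new: int = 32) -> list[str]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1) The draft model cheaply proposes k tokens.
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) The target model verifies them; in a real engine these checks are
        #    batched into a single forward pass rather than k separate ones.
        for t in proposal:
            expected = target_next(out)
            if expected == t:
                out.append(t)          # accepted: same token the target would emit
            else:
                out.append(expected)   # rejected: keep the target's token, stop
                break
    return out

# Toy usage with trivial stand-in "models" (real ones would be LLM calls):
draft = lambda ctx: "la"
target = lambda ctx: "la" if len(ctx) % 5 else "da"
print(" ".join(speculative_generate(["tra"], draft, target, k=4, max_new=10)))
```

If an implementation noticeably takes on the draft model's "voice", it is doing something looser than strict verification; exact speculative decoding should not change the output distribution.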
Parking Systems analysis and Report Generation with Computer vision and Ollama
126
2025-01-12T05:15:40
https://v.redd.it/2tf8yz7kyhce1
oridnary_artist
v.redd.it
1970-01-01T00:00:00
0
{}
1hzfjmp
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2tf8yz7kyhce1/DASHPlaylist.mpd?a=1739250957%2CZjRkMjVlNGNiZTE4OGEwNjI4MmZhYzVkNzFhNmI0MTljZGY1MzlmZThhMjBiY2Q2YzEzYTQ0NGZjYTc5N2JmMg%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/2tf8yz7kyhce1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 492, 'hls_url': 'https://v.redd.it/2tf8yz7kyhce1/HLSPlaylist.m3u8?a=1739250957%2CNTNjZGRlMTc5NTI0YTUzZjkxZTdmOWZmYWQxYjU0N2IwYTBjYjU2YjQwMmI1Zjk0ZmIxZjAzYzRhMTY5MDdmNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2tf8yz7kyhce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1hzfjmp
/r/LocalLLaMA/comments/1hzfjmp/parking_systems_analysis_and_report_generation/
false
false
https://external-preview…d1b7afb8299eae25
126
{'enabled': False, 'images': [{'id': 'ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1.png?width=108&crop=smart&format=pjpg&auto=webp&s=c649f4816367eae3818866a311448ab973e5c4b0', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1.png?width=216&crop=smart&format=pjpg&auto=webp&s=ae7c0062a72d8065bdc5c55b408d2d820ae176b9', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1.png?width=320&crop=smart&format=pjpg&auto=webp&s=c890a44a2ac6278d401c60048cc3d0cd57a91cb4', 'width': 320}, {'height': 246, 'url': 'https://external-preview.redd.it/ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1.png?width=640&crop=smart&format=pjpg&auto=webp&s=7c9927ffc6ba10fadc8e2a9346fa7f30e3dbe898', 'width': 640}, {'height': 369, 'url': 'https://external-preview.redd.it/ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1.png?width=960&crop=smart&format=pjpg&auto=webp&s=477e10e56be6659534207fcd4f26bc66b14dfc5f', 'width': 960}, {'height': 415, 'url': 'https://external-preview.redd.it/ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4a8760a32610f447745009e52d3200aaca580971', 'width': 1080}], 'source': {'height': 764, 'url': 'https://external-preview.redd.it/ZDUxcHMwOGt5aGNlMZNdfj6QUni_z9Bf_NJiTzUymfkgPwnfSrss06zjR7A1.png?format=pjpg&auto=webp&s=a6883fae22702fc1e02f787d7c0f49c3297725dc', 'width': 1986}, 'variants': {}}]}
LLaMA Multi-Modality?
1
Hey, I'm brand new to all of this; I'm simply at the point of wrapping my head around LLMs and add-ons and quantizing, what is possible, and how many parameters it would require. Lots of studying to do, but that being said, one main thing I want to accomplish - if there's a simple overarching term for this combo please let me know - is essentially to combine, say, LLaMA 13B with add-ons to not just have fluid long-term-memory TTS conversations, but to have conversations where it can process, say, a YouTube video and its audio, simultaneously with my voice input, and comprehend them all together contextually, so it can hold a fluid real-time discussion about the content we are both partaking in and store it in a larger long-term memory bank. Has anyone achieved this sort of setup? If this is possible, will all of that primarily just be inference? Will it take a lot of training? How many parameters would this likely require to run smoothly? How can I get low latency with it? If it's not possible yet, are there any promising breakthroughs on the horizon to look into, to be able to do so in the near future? Hope this makes sense. Many thanks for any feedback. Looking forward to diving into this world!
2025-01-12T05:17:09
https://www.reddit.com/r/LocalLLaMA/comments/1hzfki5/llama_multimodality/
susne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzfki5
false
null
t3_1hzfki5
/r/LocalLLaMA/comments/1hzfki5/llama_multimodality/
false
false
self
1
null
How does Llama-fication or Mistral-fication of open source models work?
7
Saw this post by Daniel Han where he explains the llama-fication of Microsoft's Phi-4 model. Does anyone understand the implementation details of these steps? Model repositories mostly contain config JSON files. What are the functions of these files? How can changes made to them be iteratively tested on low-resource systems (NVIDIA GeForce GTX 1650) for small language models? Can someone share any videos, blogs or articles where these changes are implemented? Thanks. Link to post by Daniel Han: [https://www.linkedin.com/feed/update/urn:li:activity:7283548654622126080/](https://www.linkedin.com/feed/update/urn:li:activity:7283548654622126080/)
2025-01-12T05:44:52
https://www.reddit.com/r/LocalLLaMA/comments/1hzg0hd/how_does_llamafication_or_mistralfication_of_open/
InevitablePhysics151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzg0hd
false
null
t3_1hzg0hd
/r/LocalLLaMA/comments/1hzg0hd/how_does_llamafication_or_mistralfication_of_open/
false
false
self
7
{'enabled': False, 'images': [{'id': 'N37roPN6PEBv3aOXtlydnOQK_RASvCcOxFSdrqIxMl8', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/h7He5AtrRhrIkDi2o8lBB75lt4dy8VZQ4om_e-JLlZk.jpg?width=108&crop=smart&auto=webp&s=0ef395ef7555967209078e5b898846e8ab3d5c11', 'width': 108}, {'height': 262, 'url': 'https://external-preview.redd.it/h7He5AtrRhrIkDi2o8lBB75lt4dy8VZQ4om_e-JLlZk.jpg?width=216&crop=smart&auto=webp&s=6d17ae842bda5e1204f42dd89a73cfb30e556a35', 'width': 216}, {'height': 388, 'url': 'https://external-preview.redd.it/h7He5AtrRhrIkDi2o8lBB75lt4dy8VZQ4om_e-JLlZk.jpg?width=320&crop=smart&auto=webp&s=4fe16dece1a17947f39fc79198bb9f63f1b25694', 'width': 320}, {'height': 777, 'url': 'https://external-preview.redd.it/h7He5AtrRhrIkDi2o8lBB75lt4dy8VZQ4om_e-JLlZk.jpg?width=640&crop=smart&auto=webp&s=0cf434374f00f5859b91d84a16f938bf4f48be5f', 'width': 640}], 'source': {'height': 972, 'url': 'https://external-preview.redd.it/h7He5AtrRhrIkDi2o8lBB75lt4dy8VZQ4om_e-JLlZk.jpg?auto=webp&s=a05a601b2297f2bb0f6e726ce78183f61aae39df', 'width': 800}, 'variants': {}}]}
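On the config-JSON question in the post above: the config file simply records the architecture hyperparameters, and "llama-fication" amounts to expressing another model's config and weight tensor names in the form the Llama loader expects. A small, hedged illustration of inspecting those fields with Hugging Face `transformers` follows; the repo id is only an example, and this needs no GPU.

```python
from transformers import AutoConfig

# Download and parse only the config.json of a (non-gated) Llama-architecture repo.
cfg = AutoConfig.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

print(cfg.model_type)                                    # "llama"
print(cfg.hidden_size, cfg.num_hidden_layers, cfg.num_attention_heads)
print(cfg.vocab_size, cfg.rope_theta)                    # fields a conversion must match

# Converting e.g. Phi-4 to "Llama format" means producing a config like this plus
# weight tensors renamed to the keys the Llama modeling code expects; config-level
# changes can be sanity-checked on CPU like this before touching real weights.
```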
Dell Computer Recommendations
0
If you had a budget of 2,000 to 3,000 US$ to purchase a system, BUT YOU MUST purchase from Dell (in the near future), what system would you recommend as a Win 11 workstation for hosting and experimenting with AI models locally? I'm having trouble mapping out Dell's PC lineup. Initially the AI would be used to help process, search, summarize, cross-reference and analyze hundreds of documents/archives using some sort of to-be-determined RAG system. We would then move forward using the system to help transcribe and index audio interviews, and to better process and index documents we scan as well as photos of objects. It would also be used for general short- and long-form generative AI, if possible using the library outlined above. I realize it is probably far better to source and build your own system, but we are locked into using Dell for a number of reasons. I have been looking at Dell's AI-ready OptiPlex and Precision line workstations. I am not well versed with their lineup, but it seems possible to get a PC with a 14th-gen i7, 32 GB of RAM and a video card with 8 or 12 GB of VRAM. I have been looking for a Precision with a Thunderbolt or OCuLink port so we might be able to later add an eGPU without messing with Dell's warranty, but I have not had much luck. I realize that would introduce a bottleneck. It is also hard to determine which systems have additional slots to add a second GPU. I understand how that works in theory, but I do not know if I would trust myself to do it, and in most cases I think I would then need to upgrade the power supply. Any recommendations would be welcome.
2025-01-12T06:55:46
https://www.reddit.com/r/LocalLLaMA/comments/1hzh274/dell_computer_recommendations/
vincewit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzh274
false
null
t3_1hzh274
/r/LocalLLaMA/comments/1hzh274/dell_computer_recommendations/
false
false
self
0
null
M4 Pro vs M4 Max for ML/LLM Side Projects
1
[removed]
2025-01-12T07:18:47
https://www.reddit.com/r/LocalLLaMA/comments/1hzhdzl/m4_pro_vs_m4_max_for_mlllm_side_projects/
Scientist3001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzhdzl
false
null
t3_1hzhdzl
/r/LocalLLaMA/comments/1hzhdzl/m4_pro_vs_m4_max_for_mlllm_side_projects/
false
false
self
1
null
Suggestions required for PC Upgrading for running LLM
1
[removed]
2025-01-12T07:55:52
https://www.reddit.com/r/LocalLLaMA/comments/1hzhw46/suggestions_required_for_pc_upgrading_for_running/
Jaswanth04
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzhw46
false
null
t3_1hzhw46
/r/LocalLLaMA/comments/1hzhw46/suggestions_required_for_pc_upgrading_for_running/
false
false
self
1
null
What can I do with 6GB of VRAM?
1
[removed]
2025-01-12T08:02:01
https://www.reddit.com/r/LocalLLaMA/comments/1hzhzau/what_can_i_do_with_6gb_of_vram/
Glass_Opposite_4701
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzhzau
false
null
t3_1hzhzau
/r/LocalLLaMA/comments/1hzhzau/what_can_i_do_with_6gb_of_vram/
false
false
self
1
null
Suggestion regarding updating my system for running LLM
1
[removed]
2025-01-12T08:02:13
https://www.reddit.com/r/LocalLLaMA/comments/1hzhzdy/suggestion_regarding_updating_my_system_for/
Jaswanth04
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzhzdy
false
null
t3_1hzhzdy
/r/LocalLLaMA/comments/1hzhzdy/suggestion_regarding_updating_my_system_for/
false
false
self
1
null
What can I do with 6GB of VRAM?
17
I built a small rig for mining and learning AI in 2019. The system specs are: GPU: Asus GTX 1660 Ti (6GB VRAM); CPU: Intel Core i5-9400F; RAM: Corsair 8GB; SSD: WD Blue NAND SATA (250GB); HDD: WD Blue SATA (1TB). I want to make use of this system by running some local LLM with high tok/s that is still decent for tasks like tool calling, instruction following, coding and basic home-assistant-style Q&A. Mainly planning to use this system like a Jarvis at home that does basic tasks but is very good at those. Any recommendations for LLMs or frameworks that can help me achieve this would really help.
2025-01-12T08:16:41
https://www.reddit.com/r/LocalLLaMA/comments/1hzi6ed/what_can_i_do_with_6gb_of_vram/
vsh46
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzi6ed
false
null
t3_1hzi6ed
/r/LocalLLaMA/comments/1hzi6ed/what_can_i_do_with_6gb_of_vram/
false
false
self
17
null
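A rough sizing rule of thumb for the 6 GB question above: weight memory is roughly parameter count times bits-per-weight divided by 8, plus some headroom for the KV cache and runtime overhead. The bits-per-weight figures in this sketch are approximations for common GGUF quantizations, not exact numbers.

```python
# Back-of-envelope VRAM estimate (approximate; real usage adds runtime overhead
# and the KV cache grows with context length).
def est_vram_gb(params_b: float, bits_per_weight: float, kv_cache_gb: float = 0.5) -> float:
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1024**3
    return weights_gb + kv_cache_gb

for name, params_b, bits in [("3B @ ~4.5 bpw (Q4_K_M)", 3, 4.5),
                             ("7B @ ~4.5 bpw (Q4_K_M)", 7, 4.5),
                             ("7B @ ~8.5 bpw (Q8_0)",   7, 8.5)]:
    gb = est_vram_gb(params_b, bits)
    verdict = "fits" if gb <= 6 else "needs offload"
    print(f"{name}: ~{gb:.1f} GB -> {verdict} on a 6 GB card")
```

By that arithmetic, 3B-4B models at 4-bit run comfortably on 6 GB, 7B-8B at 4-bit fit tightly, and anything larger needs partial CPU offload.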
Basic learning material for running local LLMs
1
Where can I learn the system specs required for running LLMs? I would like to know how much VRAM each type of model needs, what quantization is and how it relates to memory, etc. Just looking for some basic study material.
2025-01-12T08:37:45
https://www.reddit.com/r/LocalLLaMA/comments/1hzigp9/basic_learning_material_for_running_local_llms/
RaptorCZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzigp9
false
null
t3_1hzigp9
/r/LocalLLaMA/comments/1hzigp9/basic_learning_material_for_running_local_llms/
false
false
self
1
null
OAI embedding models
1
[removed]
2025-01-12T08:54:13
https://www.reddit.com/r/LocalLLaMA/comments/1hziok5/oai_embedding_models/
Breath_Unique
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hziok5
false
null
t3_1hziok5
/r/LocalLLaMA/comments/1hziok5/oai_embedding_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XyO6BICbW4Hg8xmbvc3hN3cENx4gTiYAHoZDX0xzla0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=108&crop=smart&auto=webp&s=96645ff2d3c13c9de5b8e543d793398e8378a5ce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=216&crop=smart&auto=webp&s=5fe7dd25ac52b49026818459348b727a60f76c95', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=320&crop=smart&auto=webp&s=46bd623b4140579283466426f35db45a2716afdf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=640&crop=smart&auto=webp&s=b024b9ba08b61cf952b69cb7507fca3e1ebfa39e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=960&crop=smart&auto=webp&s=4a6c9716fe66802e32392d34d5d6cafa747a6c2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?width=1080&crop=smart&auto=webp&s=52ce36e354673a164cd33267e3c92737187dd009', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DEmmYDPFTu0NBj611fWgcN07TyZ6hyF9CTMc_k20O5o.jpg?auto=webp&s=e855b63ad31d3cd9e7b74c56d92057a31258081f', 'width': 1200}, 'variants': {}}]}
Deepseek V3 test on mobile📱
1
2025-01-12T09:18:24
https://v.redd.it/ykp4acor5jce1
Aaaaaaaaaeeeee
v.redd.it
1970-01-01T00:00:00
0
{}
1hzj09s
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ykp4acor5jce1/DASHPlaylist.mpd?a=1739265518%2COWRhNGY3ZWVkMTJjZmI4Mjg0YzA2YThkMGVkZTg1YWRhNWNhMjEyYjcyMjM5NTI3MzQ4MjA1OTM3ZWM3MDY5NA%3D%3D&v=1&f=sd', 'duration': 78, 'fallback_url': 'https://v.redd.it/ykp4acor5jce1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/ykp4acor5jce1/HLSPlaylist.m3u8?a=1739265518%2CYjljNzZmNThhYjc5YTgyZmVkZjgxYWRkNTBmZWI5ZTdkYWFmOGFmMmRhNGI1NjI2MTQ0MjkwZTE1MDM3NjFiZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ykp4acor5jce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 582}}
t3_1hzj09s
/r/LocalLLaMA/comments/1hzj09s/deepseek_v3_test_on_mobile/
false
false
https://external-preview…deaaffea5f4af09c
1
{'enabled': False, 'images': [{'id': 'cXg2amJkb3I1amNlMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cXg2amJkb3I1amNlMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG.png?width=108&crop=smart&format=pjpg&auto=webp&s=0b28b49ed2128670ee6e6b64ad74813f911fab47', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/cXg2amJkb3I1amNlMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG.png?width=216&crop=smart&format=pjpg&auto=webp&s=c31352c7369810aed8596b7a3f7c0278330b1f5b', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/cXg2amJkb3I1amNlMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG.png?width=320&crop=smart&format=pjpg&auto=webp&s=0bf9aba6793ecf46965f6d3ea51e9297d989540c', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/cXg2amJkb3I1amNlMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG.png?width=640&crop=smart&format=pjpg&auto=webp&s=d91c129c19e091e92a6a24cddd22717d4a9636b9', 'width': 640}], 'source': {'height': 1584, 'url': 'https://external-preview.redd.it/cXg2amJkb3I1amNlMekOAVV8IqMqWnkLuX31i0q6lfgmqiPYm6_ltR2U10YG.png?format=pjpg&auto=webp&s=cba51a7ab604d4515f7fa8218db4cdc0b72d9fb5', 'width': 720}, 'variants': {}}]}
Suggestions about setting up a local chatbot
2
I'm playing around with building a chatbot on an Ollama server. Basically, what I want to obtain is a chatbot with a personality that works both as an assistant I can ask for things (either specialized or general purpose) and as an actual chatbot with some personality traits written into a system prompt. I'd also like to play around with settings and try things out. I have 8GB VRAM and 32GB system RAM, and it would be used just locally on my PC. What setup and models do you suggest? I'd also appreciate a suggestion or two for uncensored models, or at least lightly censored ones.
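For concreteness, a minimal sketch of the setup I mean, assuming the `ollama` Python package; the model tag, persona and settings below are placeholders to experiment with, not recommendations:

```python
# Sketch: the persona lives in the system message; generation settings go in `options`.
# Assumes an Ollama server is running locally and the model tag has been pulled.
import ollama

PERSONA = (
    "You are Vera, a warm but sarcastic home assistant. "
    "You answer practical questions accurately and always stay in character."
)

reply = ollama.chat(
    model="llama3.1:8b-instruct-q4_K_M",   # placeholder; anything that fits in 8GB VRAM
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Plan my Saturday morning, please."},
    ],
    options={"temperature": 0.8, "num_ctx": 8192},  # the knobs to play with
)
print(reply["message"]["content"])
```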
2025-01-12T09:42:21
https://www.reddit.com/r/LocalLLaMA/comments/1hzjbny/suggestions_about_setting_up_a_local_chatbot/
Chaotic_Alea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzjbny
false
null
t3_1hzjbny
/r/LocalLLaMA/comments/1hzjbny/suggestions_about_setting_up_a_local_chatbot/
false
false
self
2
null
Embedding model recommendations for RAG
1
[removed]
2025-01-12T10:03:47
https://www.reddit.com/r/LocalLLaMA/comments/1hzjm7u/embedding_model_recommendations_for_rag/
bouncing255bits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzjm7u
false
null
t3_1hzjm7u
/r/LocalLLaMA/comments/1hzjm7u/embedding_model_recommendations_for_rag/
false
false
self
1
null
Interesting talk by the researcher behind Huggingface's SmolLM on Synthetic Data.
0
2025-01-12T10:34:18
https://www.youtube.com/watch?v=AjmdDy7Rzx0
cpldcpu
youtube.com
1970-01-01T00:00:00
0
{}
1hzk0v1
false
{'oembed': {'author_name': 'Latent Space', 'author_url': 'https://www.youtube.com/@LatentSpacePod', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/AjmdDy7Rzx0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Best of 2024: Synthetic Data / Smol Models, Loubna Ben Allal, HuggingFace [LS Live! @ NeurIPS 2024]"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/AjmdDy7Rzx0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Best of 2024: Synthetic Data / Smol Models, Loubna Ben Allal, HuggingFace [LS Live! @ NeurIPS 2024]', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hzk0v1
/r/LocalLLaMA/comments/1hzk0v1/interesting_talk_by_the_researcher_behind/
false
false
https://b.thumbs.redditm…WrLNZmcuTzMg.jpg
0
{'enabled': False, 'images': [{'id': 'uyfLi_AsIGxoCp5nvFerEZYfr_QxIchQUixbtC-Ssqg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/kpzd26PaZrOuVpE5Q1M9qo-vUO_SZJzgSu_6ZGrlr8E.jpg?width=108&crop=smart&auto=webp&s=7a71a043bea84afda7750d6a483c12e676c6bac5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/kpzd26PaZrOuVpE5Q1M9qo-vUO_SZJzgSu_6ZGrlr8E.jpg?width=216&crop=smart&auto=webp&s=a7361aeaee0fe3a62b44614c523b0070af872c5d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/kpzd26PaZrOuVpE5Q1M9qo-vUO_SZJzgSu_6ZGrlr8E.jpg?width=320&crop=smart&auto=webp&s=bd563eaba1c4559fe35b1c26db25e55a6f1f3179', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/kpzd26PaZrOuVpE5Q1M9qo-vUO_SZJzgSu_6ZGrlr8E.jpg?auto=webp&s=d89ad6975a807e55645211181c96c4033f939157', 'width': 480}, 'variants': {}}]}
Any opensource alternatives to nvidia/studiovoice?
1
[removed]
2025-01-12T10:45:03
https://www.reddit.com/r/LocalLLaMA/comments/1hzk617/any_opensource_alternatives_to_nvidiastudiovoice/
BuffaloAdept6782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzk617
false
null
t3_1hzk617
/r/LocalLLaMA/comments/1hzk617/any_opensource_alternatives_to_nvidiastudiovoice/
false
false
https://a.thumbs.redditm…7fAjDEwDFLc8.jpg
1
{'enabled': False, 'images': [{'id': 'SXGsTpVcekMyLFeCPYXpdDmmXrZdiqEnN47kgCRPHA8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Kc0wtDC_uRMmbMmYOYAA2owKyqzHIGkalWI5hWlEHxM.jpg?width=108&crop=smart&auto=webp&s=67e38c5d4a62593d239e7889f8662060c299f2e9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Kc0wtDC_uRMmbMmYOYAA2owKyqzHIGkalWI5hWlEHxM.jpg?width=216&crop=smart&auto=webp&s=882a33084bd467ed5c5c4b779bc7c282edac353b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Kc0wtDC_uRMmbMmYOYAA2owKyqzHIGkalWI5hWlEHxM.jpg?width=320&crop=smart&auto=webp&s=0bf16254b496f46c9ec203f2c0ef2a1a37f13cd2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Kc0wtDC_uRMmbMmYOYAA2owKyqzHIGkalWI5hWlEHxM.jpg?width=640&crop=smart&auto=webp&s=54eaf2f8fec0f9ee567f5bf3d2ddc1466f93e590', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Kc0wtDC_uRMmbMmYOYAA2owKyqzHIGkalWI5hWlEHxM.jpg?width=960&crop=smart&auto=webp&s=3a52094bd965d8ca1e73a9b75d0dde1910a23c7f', 'width': 960}], 'source': {'height': 504, 'url': 'https://external-preview.redd.it/Kc0wtDC_uRMmbMmYOYAA2owKyqzHIGkalWI5hWlEHxM.jpg?auto=webp&s=b5095c7989a07eb1273b477bd563d14ba3cefc25', 'width': 960}, 'variants': {}}]}
Are there any base models (not chat or instruction tuned) with vision support?
8
Title. also links to ggufs would be nice if they exist!
2025-01-12T10:53:52
https://www.reddit.com/r/LocalLLaMA/comments/1hzka74/are_there_any_base_models_not_chat_or_instruction/
ElectricalAngle1611
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzka74
false
null
t3_1hzka74
/r/LocalLLaMA/comments/1hzka74/are_there_any_base_models_not_chat_or_instruction/
false
false
self
8
null
Good Models for Chat W Docs
1
[removed]
2025-01-12T11:13:47
https://www.reddit.com/r/LocalLLaMA/comments/1hzkk56/good_models_for_chat_w_docs/
Naive-Low-9770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzkk56
false
null
t3_1hzkk56
/r/LocalLLaMA/comments/1hzkk56/good_models_for_chat_w_docs/
false
false
self
1
null
Can’t argue with this logic
41
2025-01-12T11:16:49
https://i.redd.it/wjfncx83rjce1.jpeg
GimmePanties
i.redd.it
1970-01-01T00:00:00
0
{}
1hzklly
false
null
t3_1hzklly
/r/LocalLLaMA/comments/1hzklly/cant_argue_with_this_logic/
false
false
https://b.thumbs.redditm…VtRMCrJmzKQY.jpg
41
{'enabled': True, 'images': [{'id': 'ezFz59gITPIDLIJnx_ud2KGIJ6e6H0vv_Wd0M_9d1g0', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/wjfncx83rjce1.jpeg?width=108&crop=smart&auto=webp&s=da4031dbd5b2a54e908af10a50a14cb979eb2283', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/wjfncx83rjce1.jpeg?width=216&crop=smart&auto=webp&s=e7ba9f5b1959b26354a4c7e9f6fdd24c9f232616', 'width': 216}, {'height': 253, 'url': 'https://preview.redd.it/wjfncx83rjce1.jpeg?width=320&crop=smart&auto=webp&s=bf135eedab2f2e60e886a86c29b5fc0bfa735915', 'width': 320}, {'height': 507, 'url': 'https://preview.redd.it/wjfncx83rjce1.jpeg?width=640&crop=smart&auto=webp&s=e3dbc7ee2f5c8bb7d2177afdd9ed168dce2e04aa', 'width': 640}, {'height': 760, 'url': 'https://preview.redd.it/wjfncx83rjce1.jpeg?width=960&crop=smart&auto=webp&s=5b66e62aad75c03d1fec424088995dd4ddb4b01e', 'width': 960}, {'height': 856, 'url': 'https://preview.redd.it/wjfncx83rjce1.jpeg?width=1080&crop=smart&auto=webp&s=d415dcc36542affae0d67f3bc546f42644fae656', 'width': 1080}], 'source': {'height': 856, 'url': 'https://preview.redd.it/wjfncx83rjce1.jpeg?auto=webp&s=a1e6b24ddad293b315d6e33296a966d9f25cf070', 'width': 1080}, 'variants': {}}]}
Need Advice: Building a Local Setup for Running and Training a 70B LLM
1
[removed]
2025-01-12T11:17:57
https://www.reddit.com/r/LocalLLaMA/comments/1hzkm69/need_advice_building_a_local_setup_for_running/
LexQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzkm69
false
null
t3_1hzkm69
/r/LocalLLaMA/comments/1hzkm69/need_advice_building_a_local_setup_for_running/
false
false
self
1
null
Help Needed: Choosing the Right Hardware for a Local 70B LLM Setup
1
[removed]
2025-01-12T11:20:46
https://www.reddit.com/r/LocalLLaMA/comments/1hzknlt/help_needed_choosing_the_right_hardware_for_a/
LexQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzknlt
false
null
t3_1hzknlt
/r/LocalLLaMA/comments/1hzknlt/help_needed_choosing_the_right_hardware_for_a/
false
false
self
1
null
DeepSeek V3 is the gift that keeps on giving!
542
2025-01-12T11:37:25
https://i.redd.it/fj10nizoujce1.png
indicava
i.redd.it
1970-01-01T00:00:00
0
{}
1hzkw3f
false
null
t3_1hzkw3f
/r/LocalLLaMA/comments/1hzkw3f/deepseek_v3_is_the_gift_that_keeps_on_giving/
false
false
https://b.thumbs.redditm…bNS3ouJ_AaVw.jpg
542
{'enabled': True, 'images': [{'id': 'KdGLHJjlJ3-0dX3SkLbXb4ueVCehqZWkguTWfD3J0hQ', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/fj10nizoujce1.png?width=108&crop=smart&auto=webp&s=4fe23a4ac5823451c0aa5b2c61161adb5bef1013', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/fj10nizoujce1.png?width=216&crop=smart&auto=webp&s=82b2aef8be771acb9630696ec12cabb52e578019', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/fj10nizoujce1.png?width=320&crop=smart&auto=webp&s=8b294748a76dfcaf7f0f25300479cd3ea3b25308', 'width': 320}], 'source': {'height': 240, 'url': 'https://preview.redd.it/fj10nizoujce1.png?auto=webp&s=14dd7e1306fd4a09c635f7a33aad93c2efa96ac1', 'width': 410}, 'variants': {}}]}
How to fine tune a model on custom dataset and also make it generate specific type of questions?
1
[removed]
2025-01-12T11:40:20
https://www.reddit.com/r/LocalLLaMA/comments/1hzkxka/how_to_fine_tune_a_model_on_custom_dataset_and/
kolkata_kolkata
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzkxka
false
null
t3_1hzkxka
/r/LocalLLaMA/comments/1hzkxka/how_to_fine_tune_a_model_on_custom_dataset_and/
false
false
self
1
null
Help Needed: Connecting Audeze Maxwell to TV Via Creative G8 DAC and PC
0
Does anyone know about this? I have an Audeze Maxwell headset connected to my old PC. I want to connect my TV to the PC using optical out or HDMI ARC, so that the Audeze headset gets the best sound to work with; from there I can dabble with the Dolby audio app etc. installed on the PC. The problem is that my old PC has neither an optical IN port nor, of course, an HDMI ARC port. So I was thinking: what if I buy this Creative G8, send optical out or HDMI ARC from the TV to the G8, and connect the G8 to the PC over USB-C? With this connection, I imagine whatever I play on the TV (games, YouTube, movies, etc.) would have its audio (which is often 5.1 or above) carried to my PC, where the Audeze headset is connected. Can anyone confirm this? I came across a video on the u/Gadgetrytech YouTube channel and tried asking the question there in the comments, but was unable to get any response.
2025-01-12T11:44:07
https://www.reddit.com/r/LocalLLaMA/comments/1hzkzgd/help_needed_connecting_audeze_maxwell_to_tv_via/
Own-Needleworker4443
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzkzgd
false
null
t3_1hzkzgd
/r/LocalLLaMA/comments/1hzkzgd/help_needed_connecting_audeze_maxwell_to_tv_via/
false
false
self
0
null
OPENROUTER without billing address, for data breach proof
1
[removed]
2025-01-12T11:52:31
https://www.reddit.com/r/LocalLLaMA/comments/1hzl3xm/openrouter_without_billing_address_for_data/
aeksl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzl3xm
false
null
t3_1hzl3xm
/r/LocalLLaMA/comments/1hzl3xm/openrouter_without_billing_address_for_data/
false
false
self
1
null
Best tools for benchmarking GPUs?
1
[removed]
2025-01-12T12:34:15
https://www.reddit.com/r/LocalLLaMA/comments/1hzlqkj/best_tools_for_benchmarking_gpus/
bgpplace
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzlqkj
false
null
t3_1hzlqkj
/r/LocalLLaMA/comments/1hzlqkj/best_tools_for_benchmarking_gpus/
false
false
self
1
null
Don't miss out on your chance to win valuable prizes in a FREE draw https://search-rewards.com/r/6305/5a48
0
Don't miss out on your chance to win valuable prizes in a FREE draw [https://search-rewards.com/r/6305/5a48](https://search-rewards.com/r/6305/5a48)
2025-01-12T12:44:38
https://www.reddit.com/r/LocalLLaMA/comments/1hzlwe6/dont_miss_out_on_your_chance_to_win_valuable/
FreshTranslator2783
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzlwe6
false
null
t3_1hzlwe6
/r/LocalLLaMA/comments/1hzlwe6/dont_miss_out_on_your_chance_to_win_valuable/
false
false
self
0
null
Any EU based people/companies that build and sell AI servers?
2
Most people that build custom machines are in the US. I am considering buying/building servers with older/alternative/used GPUs to reduce the overall cost. Any suggestions?
2025-01-12T12:45:05
https://www.reddit.com/r/LocalLLaMA/comments/1hzlwox/any_eu_based_peoplecompanies_that_build_and_sell/
EternalOptimister
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzlwox
false
null
t3_1hzlwox
/r/LocalLLaMA/comments/1hzlwox/any_eu_based_peoplecompanies_that_build_and_sell/
false
false
self
2
null
You don't need agents for code AI, according to Agentless. Is anybody using this approach?
1
[removed]
2025-01-12T12:45:53
https://www.reddit.com/r/LocalLLaMA/comments/1hzlx4y/you_dont_need_agents_for_code_ai_according_to/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzlx4y
false
null
t3_1hzlx4y
/r/LocalLLaMA/comments/1hzlx4y/you_dont_need_agents_for_code_ai_according_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]}
You don't need agents for code AI, according to Agentless. Is anybody using this approach?
0
You don't need fancy agent tools to solve complex code problems. Agentless is a non-agent framework that OpenAI used to get high accuracy on SWE-bench with o3.
2025-01-12T12:47:07
https://www.reddit.com/r/LocalLLaMA/comments/1hzlxul/you_dont_need_agents_for_code_ai_according_to/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzlxul
false
null
t3_1hzlxul
/r/LocalLLaMA/comments/1hzlxul/you_dont_need_agents_for_code_ai_according_to/
false
false
self
0
{'enabled': False, 'images': [{'id': 'SibRKNZdtLwcZYza7nJydC7MVRqjrTUDvrQo88csB_g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oP3oglj_sAfP02TwKvbmt2uKe-gNXPKDtBVJIC5uus8.jpg?width=108&crop=smart&auto=webp&s=3b4100eb561e3ab95c5dac25c79b9b974b18b355', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oP3oglj_sAfP02TwKvbmt2uKe-gNXPKDtBVJIC5uus8.jpg?width=216&crop=smart&auto=webp&s=f94bb4238cc64f060488a3cf1a2e76e3fefb2618', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oP3oglj_sAfP02TwKvbmt2uKe-gNXPKDtBVJIC5uus8.jpg?width=320&crop=smart&auto=webp&s=8a95515e344267b82bf1dec22940623706deb744', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oP3oglj_sAfP02TwKvbmt2uKe-gNXPKDtBVJIC5uus8.jpg?width=640&crop=smart&auto=webp&s=2058064ff6d54848d709108b3d3f68dfd308be51', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oP3oglj_sAfP02TwKvbmt2uKe-gNXPKDtBVJIC5uus8.jpg?width=960&crop=smart&auto=webp&s=4cb7aa7654e14e41ecad2ec4339409b1aa85a8b0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oP3oglj_sAfP02TwKvbmt2uKe-gNXPKDtBVJIC5uus8.jpg?width=1080&crop=smart&auto=webp&s=ff1e17a8b99d02bc5b474196f20cb06fe3bc2674', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oP3oglj_sAfP02TwKvbmt2uKe-gNXPKDtBVJIC5uus8.jpg?auto=webp&s=6fe771552dc708aa950415157bd3ed12089ff0de', 'width': 1200}, 'variants': {}}]}
MCPAdapt leverage 650+ MCP server as tools in any agentic framework.
3
Hello there, I just open-sourced a repository that lets you seamlessly integrate MCP servers as tools in any agentic framework, starting with smolagents: https://github.com/grll/mcpadapt. Have a look, let me know what you think, and join me in adapting MCP servers for other frameworks.
2025-01-12T12:48:10
https://www.reddit.com/r/LocalLLaMA/comments/1hzlyf5/mcpadapt_leverage_650_mcp_server_as_tools_in_any/
gaarll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzlyf5
false
null
t3_1hzlyf5
/r/LocalLLaMA/comments/1hzlyf5/mcpadapt_leverage_650_mcp_server_as_tools_in_any/
false
false
self
3
{'enabled': False, 'images': [{'id': 'LrXQtH0psxe_nm75VJeSccZERwxIIPkWewm5UBFh4fg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oxxCDm4gzsgx2C0JRpLT9B8oWyIo3g-3mt7UM02v3Es.jpg?width=108&crop=smart&auto=webp&s=463305bc7490a58a5208f8572f5c323eb87dc70f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oxxCDm4gzsgx2C0JRpLT9B8oWyIo3g-3mt7UM02v3Es.jpg?width=216&crop=smart&auto=webp&s=0065a77cd03cbda07a6332857cc06da50d3fb4b9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oxxCDm4gzsgx2C0JRpLT9B8oWyIo3g-3mt7UM02v3Es.jpg?width=320&crop=smart&auto=webp&s=028c35eaf0f92e708e2ba908c7677b65f384ce3e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oxxCDm4gzsgx2C0JRpLT9B8oWyIo3g-3mt7UM02v3Es.jpg?width=640&crop=smart&auto=webp&s=b21b7703a547510f3a3e1664e26ecdb06e3e4d71', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oxxCDm4gzsgx2C0JRpLT9B8oWyIo3g-3mt7UM02v3Es.jpg?width=960&crop=smart&auto=webp&s=2100a9ca5c4ed813295907cf3b3df7011130ee71', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oxxCDm4gzsgx2C0JRpLT9B8oWyIo3g-3mt7UM02v3Es.jpg?width=1080&crop=smart&auto=webp&s=7bc5194836d5972c9389a74bb4dfa66f9f106607', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oxxCDm4gzsgx2C0JRpLT9B8oWyIo3g-3mt7UM02v3Es.jpg?auto=webp&s=286ff3f12389f364489a0885991f133edef7b518', 'width': 1200}, 'variants': {}}]}
DeepSeek/o1 Performance: Excels in Math, Struggles with Physics
1
[removed]
2025-01-12T13:04:43
https://www.reddit.com/r/LocalLLaMA/comments/1hzm8e8/deepseeko1_performance_excels_in_math_struggles/
Optimalutopic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzm8e8
false
null
t3_1hzm8e8
/r/LocalLLaMA/comments/1hzm8e8/deepseeko1_performance_excels_in_math_struggles/
false
false
self
1
null
DeepSeek’s Performance: Excels in Math, Struggles with Physics
1
[removed]
2025-01-12T13:06:46
https://www.reddit.com/r/LocalLLaMA/comments/1hzm9p4/deepseeks_performance_excels_in_math_struggles/
Optimalutopic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzm9p4
false
null
t3_1hzm9p4
/r/LocalLLaMA/comments/1hzm9p4/deepseeks_performance_excels_in_math_struggles/
false
false
self
1
null
Deepseek performance: physics vs maths
1
[removed]
2025-01-12T13:12:31
[deleted]
1970-01-01T00:00:00
0
{}
1hzmdbb
false
null
t3_1hzmdbb
/r/LocalLLaMA/comments/1hzmdbb/deepseek_performance_physics_vs_maths/
false
false
default
1
null
Deepseek and o1 are weaker in physics than in maths
1
[removed]
2025-01-12T13:19:48
https://www.reddit.com/r/LocalLLaMA/comments/1hzmi0p/deepseek_and_o1_are_weak_in_physics_than_maths/
Optimalutopic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzmi0p
false
null
t3_1hzmi0p
/r/LocalLLaMA/comments/1hzmi0p/deepseek_and_o1_are_weak_in_physics_than_maths/
false
false
self
1
null
Token Speed of Llama 3.3 70B
1
[removed]
2025-01-12T13:23:19
https://www.reddit.com/r/LocalLLaMA/comments/1hzmkcd/token_speed_of_llama_33_70b/
Vedimuthu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzmkcd
false
null
t3_1hzmkcd
/r/LocalLLaMA/comments/1hzmkcd/token_speed_of_llama_33_70b/
false
false
self
1
null
In the Terminator's vision overlay, the "ANALYSIS" is probably the image embedding 🤔
37
2025-01-12T13:25:15
https://i.redd.it/b3if9y5tdkce1.jpeg
Reddactor
i.redd.it
1970-01-01T00:00:00
0
{}
1hzmljb
false
null
t3_1hzmljb
/r/LocalLLaMA/comments/1hzmljb/in_the_terminators_vision_overlay_the_analysis_is/
false
false
https://b.thumbs.redditm…zHL2icvgSOrU.jpg
37
{'enabled': True, 'images': [{'id': 'IaMr1tIJV5F3IeZps8Wmtoea6YrHzaRMA75GFKaJFWs', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/b3if9y5tdkce1.jpeg?width=108&crop=smart&auto=webp&s=cbef757d10bfc8f74f0832dc036ad64c40a0572f', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/b3if9y5tdkce1.jpeg?width=216&crop=smart&auto=webp&s=5efaf94ae6f34b2bd3cea95190b35dbb486349c4', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/b3if9y5tdkce1.jpeg?width=320&crop=smart&auto=webp&s=2cdf7470becd28e71636379b90a99d0da4fd0cc3', 'width': 320}], 'source': {'height': 300, 'url': 'https://preview.redd.it/b3if9y5tdkce1.jpeg?auto=webp&s=c3c9a8bd0af9664563004a8f134f5907205879cb', 'width': 620}, 'variants': {}}]}
CrewAI Agents with Ollama serving Meta's Llama 3.1-8B model (or latest) in local and production environments
1
[removed]
2025-01-12T13:30:49
https://www.reddit.com/r/LocalLLaMA/comments/1hzmp7n/crewai_agents_with_ollama_serving_metas_llama318b/
CloudDevOps007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzmp7n
false
null
t3_1hzmp7n
/r/LocalLLaMA/comments/1hzmp7n/crewai_agents_with_ollama_serving_metas_llama318b/
false
false
self
1
null
VLC to add offline, real-time AI subtitles. What do you think the tech stack for this is?
776
2025-01-12T13:31:43
https://www.pcmag.com/news/vlc-media-player-to-use-ai-to-generate-subtitles-for-videos
SpudMonkApe
pcmag.com
1970-01-01T00:00:00
0
{}
1hzmpuq
false
null
t3_1hzmpuq
/r/LocalLLaMA/comments/1hzmpuq/vlc_to_add_offline_realtime_ai_subtitles_what_do/
false
false
https://b.thumbs.redditm…paIuuE6ZDEQM.jpg
776
{'enabled': False, 'images': [{'id': 'C-Ya8ctmiIqZ3weCWcICK-tMqd7Iil8fM8p03A-8PJI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aphKSMbfvfDHStraL4JSGgDfke__oze-3mdG_k4jOVQ.jpg?width=108&crop=smart&auto=webp&s=2bf1bcceeaeb2f8ee141a5250822ca39616e3fdf', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aphKSMbfvfDHStraL4JSGgDfke__oze-3mdG_k4jOVQ.jpg?width=216&crop=smart&auto=webp&s=e9a95c66f70c317c9572ff39c9fbbb136499ba24', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aphKSMbfvfDHStraL4JSGgDfke__oze-3mdG_k4jOVQ.jpg?width=320&crop=smart&auto=webp&s=eea5fd750e23dadbde9789acb6db5661bd94c304', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aphKSMbfvfDHStraL4JSGgDfke__oze-3mdG_k4jOVQ.jpg?width=640&crop=smart&auto=webp&s=37b50e3ca1b1a72567f853cc77c80c80b325c53a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aphKSMbfvfDHStraL4JSGgDfke__oze-3mdG_k4jOVQ.jpg?width=960&crop=smart&auto=webp&s=94ab022a52b1a8e52450528e112a0f8546ac1e2b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aphKSMbfvfDHStraL4JSGgDfke__oze-3mdG_k4jOVQ.jpg?width=1080&crop=smart&auto=webp&s=9d4f0b80adc62dae895bf18565448a8301e1ce9b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/aphKSMbfvfDHStraL4JSGgDfke__oze-3mdG_k4jOVQ.jpg?auto=webp&s=97e0c6c536f35b79da16a8a6b19e66814a5c3edb', 'width': 1120}, 'variants': {}}]}
Forget AI waifus. Are there local AI assistants to increase my productivity?
106
As the title suggests, lots of lonely men out there are looking to fine-tune their own AI gf. But I really just want an AI secretary who can help me make plans, handle trivial tasks like responding to messages/emails, and generally increase my productivity. What model do you guys suggest? I assume it'll need a huge context length to fit enough data about me? I'm also hoping there's a way to make the AI periodically text me and give me updates. I have 48GB of VRAM to spare for this LLM.
2025-01-12T13:54:45
https://www.reddit.com/r/LocalLLaMA/comments/1hzn5b6/forget_ai_waifus_are_there_local_ai_assistants_to/
-oshino_shinobu-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzn5b6
false
null
t3_1hzn5b6
/r/LocalLLaMA/comments/1hzn5b6/forget_ai_waifus_are_there_local_ai_assistants_to/
false
false
self
106
null
Where can I chat with Phi-4?
3
Hi, I was wondering if anyone is hosting Phi-4 unquantized through a chat interface. Unfortunately, it's not available on HuggingChat or [libertai.io](http://libertai.io)
2025-01-12T14:58:48
https://www.reddit.com/r/LocalLLaMA/comments/1hzofr5/where_can_i_chat_with_phi4/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzofr5
false
null
t3_1hzofr5
/r/LocalLLaMA/comments/1hzofr5/where_can_i_chat_with_phi4/
false
false
self
3
null
Mark Zuckerberg believes that in 2025, Meta will probably have a mid-level engineer AI that can write code, and over time it will replace human engineers.
236
[https://x.com/slow\_developer/status/1877798620692422835?mx=2](https://x.com/slow_developer/status/1877798620692422835?mx=2) [https://www.youtube.com/watch?v=USBW0ESLEK0](https://www.youtube.com/watch?v=USBW0ESLEK0) [https://tribune.com.pk/story/2521499/zuckerberg-announces-meta-plans-to-replace-mid-level-engineers-with-ais-this-year](https://tribune.com.pk/story/2521499/zuckerberg-announces-meta-plans-to-replace-mid-level-engineers-with-ais-this-year) What do you think, is he a bit too optimistic here, or should we expect greatly improved LLMs soon? Will this be Llama 4?
2025-01-12T15:34:50
https://www.reddit.com/r/LocalLLaMA/comments/1hzp789/mark_zuckerberg_believes_in_2025_meta_will/
Admirable-Star7088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzp789
false
null
t3_1hzp789
/r/LocalLLaMA/comments/1hzp789/mark_zuckerberg_believes_in_2025_meta_will/
false
false
self
236
{'enabled': False, 'images': [{'id': 'Gf28zRigQ9G1qQMs-upnZe46wBmtPpZm0Y94Yh_TFdI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ymoKo3EuhHEstLpU-AGXRpEwcCAYD9Ss7QOZx-Oybdk.jpg?width=108&crop=smart&auto=webp&s=f52325ec1dff21cd81c01101130867780e284150', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ymoKo3EuhHEstLpU-AGXRpEwcCAYD9Ss7QOZx-Oybdk.jpg?width=216&crop=smart&auto=webp&s=465d3d9cad1ddc26283466e74befe1cbf4df5fb8', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ymoKo3EuhHEstLpU-AGXRpEwcCAYD9Ss7QOZx-Oybdk.jpg?width=320&crop=smart&auto=webp&s=2d2d71b2965fe1ee35e0f2d82d818ce2838bb8e9', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ymoKo3EuhHEstLpU-AGXRpEwcCAYD9Ss7QOZx-Oybdk.jpg?width=640&crop=smart&auto=webp&s=6377549f5d2ebba5d827decd032f7a219b53001a', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ymoKo3EuhHEstLpU-AGXRpEwcCAYD9Ss7QOZx-Oybdk.jpg?auto=webp&s=8ebb04fbf256844642741fb855b3da56b60b863b', 'width': 720}, 'variants': {}}]}
Looking for music artist similarity model
1
Has anyone released this before? I would love to use ollama to find similar artists inside my offline music collection.
2025-01-12T15:36:19
https://www.reddit.com/r/LocalLLaMA/comments/1hzp8ct/looking_for_music_artist_similarity_model/
mycall
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzp8ct
false
null
t3_1hzp8ct
/r/LocalLLaMA/comments/1hzp8ct/looking_for_music_artist_similarity_model/
false
false
self
1
null
Is there any LLM that is made to teach programming?
6
If there were, it would be much better, just like learning from a dedicated teacher.
2025-01-12T15:46:09
https://www.reddit.com/r/LocalLLaMA/comments/1hzpfwk/is_there_any_llm_that_is_made_to_teach_programming/
Comrade_United-World
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzpfwk
false
null
t3_1hzpfwk
/r/LocalLLaMA/comments/1hzpfwk/is_there_any_llm_that_is_made_to_teach_programming/
false
false
self
6
null
LMstudio on Mac Mini - using separately downloaded GGUFs on external drive
1
New to LM Studio on a Mac Mini (M4), although I have been playing around with Ollama for a while now on Linux. Incidentally, I am new to macOS too. I moved the downloaded model GGUF files from my Ubuntu desktop to this Mac on an external NVMe drive (to save the built-in 256GB storage of this base Mac). The default location where LM Studio keeps models downloaded through its built-in interface is `$HOME/.lmstudio/models/`. My models are located on the external drive at the path `/Volumes/ExtraSpace/LLM-Models/hf.co/`, where they are organized in a subfolder `.../bartowski/` (as the re-publisher) and finally the GGUF file under it. LM Studio still refuses to recognize these models. Wondering what I might need to change.
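One thing worth trying (just a sketch, assuming LM Studio scans its models directory as `<publisher>/<model>/<file>.gguf`): symlink the publisher folders from the external drive into `$HOME/.lmstudio/models/` instead of copying them, e.g.:

```python
# Sketch: link each publisher folder (e.g. bartowski) from the external drive
# into LM Studio's default models directory so the GGUFs stay on the NVMe drive.
from pathlib import Path

external = Path("/Volumes/ExtraSpace/LLM-Models/hf.co")
lmstudio_models = Path.home() / ".lmstudio" / "models"
lmstudio_models.mkdir(parents=True, exist_ok=True)

for publisher_dir in external.iterdir():
    if not publisher_dir.is_dir():
        continue
    link = lmstudio_models / publisher_dir.name
    if not link.exists():
        link.symlink_to(publisher_dir, target_is_directory=True)
        print(f"linked {link} -> {publisher_dir}")
```

The other route, if this version of LM Studio exposes it, is pointing its models-directory setting at the external drive instead.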
2025-01-12T15:47:00
https://www.reddit.com/r/LocalLLaMA/comments/1hzpgm4/lmstudio_on_mac_mini_using_separately_downloaded/
Professional_Row_967
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzpgm4
false
null
t3_1hzpgm4
/r/LocalLLaMA/comments/1hzpgm4/lmstudio_on_mac_mini_using_separately_downloaded/
false
false
self
1
null
FuzzyAI - Jailbreak your favorite LLM
1
2025-01-12T16:01:29
https://www.reddit.com/r/LocalLLaMA/comments/1hzpsce/fuzzyai_jailbreak_your_favorite_llm/
ES_CY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzpsce
false
null
t3_1hzpsce
/r/LocalLLaMA/comments/1hzpsce/fuzzyai_jailbreak_your_favorite_llm/
false
false
self
1
null
FuzzyAI: Fuzz your favorite LLM
1
[removed]
2025-01-12T16:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1hzptk6/fuzzyai_fuzz_your_favorite_llm/
ES_CY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzptk6
false
null
t3_1hzptk6
/r/LocalLLaMA/comments/1hzptk6/fuzzyai_fuzz_your_favorite_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Mcv1R5abXMr2doFHlgV55NniDk97ba0tD1n6sYrfQMg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/f_aeIFtILa_mfZFCI8VPEF-ZxrKOf8KeJIlfQLN-s9c.jpg?width=108&crop=smart&auto=webp&s=819fdc9eb941f49c62bb268627703e9341c7d27e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/f_aeIFtILa_mfZFCI8VPEF-ZxrKOf8KeJIlfQLN-s9c.jpg?width=216&crop=smart&auto=webp&s=b9078adb2bfce729a2b846c9ac2e1cad1eace8ce', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/f_aeIFtILa_mfZFCI8VPEF-ZxrKOf8KeJIlfQLN-s9c.jpg?width=320&crop=smart&auto=webp&s=1f3f5c913265792dbca78b7a6f4e3f7f0655ff9c', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/f_aeIFtILa_mfZFCI8VPEF-ZxrKOf8KeJIlfQLN-s9c.jpg?width=640&crop=smart&auto=webp&s=07d9bb60bd67fa53ebf9cb784eb21f141b6a6c22', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/f_aeIFtILa_mfZFCI8VPEF-ZxrKOf8KeJIlfQLN-s9c.jpg?width=960&crop=smart&auto=webp&s=d1346ebac3462c6721b8203e50a181100a19ea63', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/f_aeIFtILa_mfZFCI8VPEF-ZxrKOf8KeJIlfQLN-s9c.jpg?width=1080&crop=smart&auto=webp&s=b47f72f9954ea53b87d4ae9650c5979be75c8a40', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://external-preview.redd.it/f_aeIFtILa_mfZFCI8VPEF-ZxrKOf8KeJIlfQLN-s9c.jpg?auto=webp&s=f6ee5c796dfb9b92ea80e3eb55761e5c662c8b9b', 'width': 3072}, 'variants': {}}]}
Enchanted offline?
1
[removed]
2025-01-12T16:18:54
https://www.reddit.com/r/LocalLLaMA/comments/1hzq6nn/enchanted_offline/
justinmeijernl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzq6nn
false
null
t3_1hzq6nn
/r/LocalLLaMA/comments/1hzq6nn/enchanted_offline/
false
false
self
1
null
local solutions for government?
1
So, my dad works for a defense company with the DoD, and he told me they’re trying to get AI to handle some of the easier tasks they have. Even though these are "easy" tasks, they still involve sensitive data, so he wants to use local models. For some reason, he came to *me* asking about local AI—probably because he knows I’m obsessed with it. He mentioned having a rough budget of $20K to spend on GPUs. I’m not sure if he’s actually going to go through with this, but I’m wondering what the best setup might be in that scenario. Yes this is real and I don't really know much more info.
2025-01-12T16:19:09
https://www.reddit.com/r/LocalLLaMA/comments/1hzq6v0/local_solutions_for_government/
pigeon57434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzq6v0
false
null
t3_1hzq6v0
/r/LocalLLaMA/comments/1hzq6v0/local_solutions_for_government/
false
false
self
1
null
Generating learning material
1
[removed]
2025-01-12T16:21:23
https://www.reddit.com/r/LocalLLaMA/comments/1hzq8pf/generating_learning_material/
Spirited_Post_366
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzq8pf
false
null
t3_1hzq8pf
/r/LocalLLaMA/comments/1hzq8pf/generating_learning_material/
false
false
self
1
null
How to effectively use AI (Llama) for larger coding projects? Hitting some roadblocks
1
[removed]
2025-01-12T16:30:08
https://www.reddit.com/r/LocalLLaMA/comments/1hzqfzc/how_to_effectively_use_ai_llama_for_larger_coding/
MacDevs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzqfzc
false
null
t3_1hzqfzc
/r/LocalLLaMA/comments/1hzqfzc/how_to_effectively_use_ai_llama_for_larger_coding/
false
false
self
1
{'enabled': False, 'images': [{'id': 'HGu0Yyn_bXGKKYmOhTOLHOXfGVDwpt1PkwXwgrmXAEA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/At_uYsPS4GKBd1vMxu7cYz_Q61TaQ2pRRKa69c27iUA.jpg?width=108&crop=smart&auto=webp&s=09ad4696a64cd880fa4b451d97e6387f3984e132', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/At_uYsPS4GKBd1vMxu7cYz_Q61TaQ2pRRKa69c27iUA.jpg?width=216&crop=smart&auto=webp&s=57cc4246565285e5b02e283e6e86303f8bb56055', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/At_uYsPS4GKBd1vMxu7cYz_Q61TaQ2pRRKa69c27iUA.jpg?width=320&crop=smart&auto=webp&s=86f7dbdba014b682090806ef179a0fb136a41e37', 'width': 320}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/At_uYsPS4GKBd1vMxu7cYz_Q61TaQ2pRRKa69c27iUA.jpg?auto=webp&s=c98f8c67879f966d4354de207dd5e9e437716485', 'width': 630}, 'variants': {}}]}
How to effectively use AI (Llama) for larger coding projects? Hitting some roadblocks
7
Mobile and web development with AI is incredibly convenient. Even though I have coding knowledge, I now prefer to let AI handle the coding based on my requirements (100%). I've noticed it's straightforward to create small websites or applications. However, things get more complicated when dealing with multiple files. First, there's a limit to the number of files we can use. I found a workaround using the Combine Files app on macOS, which combines multiple files into a single file. But then I face a new issue I can't solve: the AI starts removing features without asking (even when I ask it not to change existing features). This requires carefully reviewing the returned code, which is time-consuming. Have you found any solutions (methods, workflows, prompts) that allow AI to develop projects with over 2000 lines of code? I'm new to AI development and would appreciate any insights!
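For illustration, a small sketch of doing the combine step with a script instead of the Combine Files app (the project path and extensions are placeholders); the per-file path headers make it easier to spot when a returned file silently drops something:

```python
# Sketch: concatenate a project's source files into one prompt file,
# with a clear path header before each file so changes can be traced back.
from pathlib import Path

PROJECT_DIR = Path("./my_project")                # placeholder path
EXTENSIONS = {".swift", ".ts", ".html", ".css"}   # placeholder extensions
OUTPUT = Path("combined_for_ai.txt")

with OUTPUT.open("w", encoding="utf-8") as out:
    for path in sorted(PROJECT_DIR.rglob("*")):
        if path.is_file() and path.suffix in EXTENSIONS:
            out.write(f"\n\n===== FILE: {path.relative_to(PROJECT_DIR)} =====\n")
            out.write(path.read_text(encoding="utf-8", errors="replace"))

print(f"Wrote {OUTPUT} ({OUTPUT.stat().st_size / 1024:.0f} KB)")
```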
2025-01-12T16:31:03
https://www.reddit.com/r/LocalLLaMA/comments/1hzqgqm/how_to_effectively_use_ai_llama_for_larger_coding/
MacDevs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzqgqm
false
null
t3_1hzqgqm
/r/LocalLLaMA/comments/1hzqgqm/how_to_effectively_use_ai_llama_for_larger_coding/
false
false
self
7
null
Vector database with LinkedIn Posts plus Engagement Metrics
4
2025-01-12T16:35:45
https://huggingface.co/NeuML/txtai-neuml-linkedin
davidmezzetti
huggingface.co
1970-01-01T00:00:00
0
{}
1hzqkkd
false
null
t3_1hzqkkd
/r/LocalLLaMA/comments/1hzqkkd/vector_database_with_linkedin_posts_plus/
false
false
https://b.thumbs.redditm…MvVvi8n7e9FM.jpg
4
{'enabled': False, 'images': [{'id': 'f8EpmMF2E-Il6rv3ilJydVHadptdvNjbP_x3Vpue3xA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1rxY6Io0zblOYmTaFApGbf9v15x7lpEraQqnP6ffWwA.jpg?width=108&crop=smart&auto=webp&s=6a18765501a13196af083be54997965ee430b173', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1rxY6Io0zblOYmTaFApGbf9v15x7lpEraQqnP6ffWwA.jpg?width=216&crop=smart&auto=webp&s=a6480992bab4f999dba96f753db036975eecb509', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1rxY6Io0zblOYmTaFApGbf9v15x7lpEraQqnP6ffWwA.jpg?width=320&crop=smart&auto=webp&s=a7a1ccd857c2febf61af2351d3b500252c9af0bb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1rxY6Io0zblOYmTaFApGbf9v15x7lpEraQqnP6ffWwA.jpg?width=640&crop=smart&auto=webp&s=07f5f414b66bcd1c1c75a11a1648cffb6fdba754', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1rxY6Io0zblOYmTaFApGbf9v15x7lpEraQqnP6ffWwA.jpg?width=960&crop=smart&auto=webp&s=e3d2344ad17d9229923256de9ab70f18dd9acffe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1rxY6Io0zblOYmTaFApGbf9v15x7lpEraQqnP6ffWwA.jpg?width=1080&crop=smart&auto=webp&s=1822ddb0bfa103a632de1de9d72ca8a1606e4198', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1rxY6Io0zblOYmTaFApGbf9v15x7lpEraQqnP6ffWwA.jpg?auto=webp&s=7ed4db010782473d58e62f62368c22dc586f86ac', 'width': 1200}, 'variants': {}}]}
Recipes/prompts for useful synthetic instruction datasets
0
I am thinking of putting DeepSeek to the test in my language by having it generate synthetic datasets. What are some good prompts/recipes for SOTA instruction datasets that I can adjust for my language?
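As one starting point, a minimal self-instruct-style sketch: hand the model a few seed instructions in the target language and ask for new instruction–response pairs in strict JSON. The endpoint, model name and seeds below are assumptions/placeholders to adapt:

```python
# Sketch of a self-instruct-style recipe; validate and dedupe the output
# before adding it to a dataset. base_url, api_key and model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

SEEDS = [
    "Summarize the following news article in three sentences.",
    "Explain photosynthesis to a ten-year-old.",
]

prompt = (
    "You generate instruction-tuning data in <MY LANGUAGE>.\n"
    "Here are example instructions:\n- " + "\n- ".join(SEEDS) + "\n\n"
    "Write 5 NEW instructions on different topics, each with a high-quality response.\n"
    'Return a JSON list of objects with the keys "instruction" and "response".'
)

resp = client.chat.completions.create(
    model="deepseek-chat",                  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=1.0,
)
print(resp.choices[0].message.content)
```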
2025-01-12T16:35:48
https://www.reddit.com/r/LocalLLaMA/comments/1hzqkll/recipesprompts_for_useful_synthetic_instruction/
MountainGoatAOE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzqkll
false
null
t3_1hzqkll
/r/LocalLLaMA/comments/1hzqkll/recipesprompts_for_useful_synthetic_instruction/
false
false
self
0
null
Searching for pals to study NLP deeply for AI researcher jobs
1
[removed]
2025-01-12T16:52:52
https://www.reddit.com/r/LocalLLaMA/comments/1hzqypl/searching_for_pals_to_study_deeply_nlp_for_ai/
Salgurson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzqypl
false
null
t3_1hzqypl
/r/LocalLLaMA/comments/1hzqypl/searching_for_pals_to_study_deeply_nlp_for_ai/
false
false
self
1
null
Are there any use cases for text (non-instruct) models?
6
For example, Llama 3.2 is available as instruct models or as text (base) models. I'm wondering if there are any good use cases for the latter. I remember hearing about some people using them for creative writing (start with a sample paragraph and let 'em go), but I'm curious if there's anything else there.
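For reference, a minimal sketch of that raw-continuation use with a base checkpoint (the model id is a placeholder and may be gated; any base model works the same way): no chat template, no system prompt, the model just continues the text.

```python
# Sketch: plain text continuation with a *base* (non-instruct) checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B")  # placeholder base model

opening = "The lighthouse keeper had not spoken to another person in three years, until"
out = generator(opening, max_new_tokens=120, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```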
2025-01-12T17:13:00
https://www.reddit.com/r/LocalLLaMA/comments/1hzrgd5/are_there_any_use_cases_for_text_noninstruct/
OneFanFare
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzrgd5
false
null
t3_1hzrgd5
/r/LocalLLaMA/comments/1hzrgd5/are_there_any_use_cases_for_text_noninstruct/
false
false
self
6
null
Best current VLM for facial emotional analysis?
1
Most VLM benchmarks out there measure comprehension of complex visual scenes, but what about analysis of subtle facial expressions? I know there are several non-ViT models for this, but I’d like to know which multimodal model (Llama 3.3? QwenVL?) you all would recommend for this purpose. Thanks.
2025-01-12T17:21:18
https://www.reddit.com/r/LocalLLaMA/comments/1hzrn3v/best_current_vlm_for_facial_emotional_analysis/
openbookresearcher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzrn3v
false
null
t3_1hzrn3v
/r/LocalLLaMA/comments/1hzrn3v/best_current_vlm_for_facial_emotional_analysis/
false
false
self
1
null
Any model suggestions for a potato(low-spec) PC?
1
[removed]
2025-01-12T17:24:06
https://www.reddit.com/r/LocalLLaMA/comments/1hzrpf7/any_model_suggestions_for_a_potatolowspec_pc/
krigeta1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzrpf7
false
null
t3_1hzrpf7
/r/LocalLLaMA/comments/1hzrpf7/any_model_suggestions_for_a_potatolowspec_pc/
false
false
self
1
null
Why am I not able to post?
1
[removed]
2025-01-12T17:24:32
https://www.reddit.com/r/LocalLLaMA/comments/1hzrpru/why_i_am_not_able_to_post/
krigeta1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzrpru
false
null
t3_1hzrpru
/r/LocalLLaMA/comments/1hzrpru/why_i_am_not_able_to_post/
false
false
self
1
null
Seeking Specific Advice on Running LLMs Locally with Ryzen 2700 and RTX 2060 Super
1
[removed]
2025-01-12T17:28:10
https://www.reddit.com/r/LocalLLaMA/comments/1hzrsr5/seeking_specific_advice_on_running_llms_locally/
krigeta1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hzrsr5
false
null
t3_1hzrsr5
/r/LocalLLaMA/comments/1hzrsr5/seeking_specific_advice_on_running_llms_locally/
false
false
self
1
null