Dataset schema (column: type, observed length or range):

- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–40k
- created: timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k
Fine-tune ColPali for Multimodal RAG
1
Hello! I am Merve, working at Hugging Face on multimodal AI / vision language models. In transformers we recently shipped ColPali, a cutting-edge multimodal retrieval model built on PaliGemma. There was no contrastive document-retrieval fine-tuning tutorial for transformers, so I built one for your convenience 🤗 4-bit QLoRA with a batch size of 4 uses 32 GB of peak memory, so you can run this on Colab's A100 (or use gradient accumulation with a lower batch size): [https://github.com/merveenoyan/smol-vision/blob/main/Finetune_ColPali.ipynb](https://github.com/merveenoyan/smol-vision/blob/main/Finetune_ColPali.ipynb) I hope this is useful!
2024-12-20T16:06:26
https://www.reddit.com/r/LocalLLaMA/comments/1hin1nh/finetune_colpali_for_multimodal_rag/
unofficialmerve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hin1nh
false
null
t3_1hin1nh
/r/LocalLLaMA/comments/1hin1nh/finetune_colpali_for_multimodal_rag/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NCQ_nxK5yV0b47ZGEZy1pekv0-N2hA84apsraK3PoI8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=108&crop=smart&auto=webp&s=d88717b85c60864eb1169285daeb37ecf5bd9349', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=216&crop=smart&auto=webp&s=1ee3687641916a16fad17e9520aa67c04370c9dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=320&crop=smart&auto=webp&s=e110ddb282efd6d78523a625a1622b171d91ebf7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=640&crop=smart&auto=webp&s=bdeecfe59bbb40e572f435a544222b59215cba80', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=960&crop=smart&auto=webp&s=c6f44593fc1e2e239b64662b9eb5df148a8632f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=1080&crop=smart&auto=webp&s=360968f477d64306bf40767ee66b20a3c5adb7a8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?auto=webp&s=78eaec399596ba343645877167ff9ce5a1ee31ba', 'width': 1200}, 'variants': {}}]}
QLoRA Fine-tuning ColPali for Multimodal RAG
9
Hello! I am Merve, working at Hugging Face on multimodal AI / vision language models. In transformers we recently shipped ColPali, a cutting-edge multimodal retrieval model built on PaliGemma. There was no contrastive document-retrieval fine-tuning tutorial for transformers, so I built one for your convenience 🤗 4-bit QLoRA with a batch size of 4 uses 32 GB of peak memory, so you can run this on Colab's A100 (or use gradient accumulation with a lower batch size): [https://github.com/merveenoyan/smol-vision/blob/main/Finetune_ColPali.ipynb](https://github.com/merveenoyan/smol-vision/blob/main/Finetune_ColPali.ipynb) I hope this is useful!
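For orientation, a minimal sketch of the 4-bit QLoRA setup the post describes; the checkpoint id and LoRA target modules below are assumptions, and the linked notebook remains the authoritative recipe.

```python
# Sketch of the described setup, assuming the HF-format checkpoint
# "vidore/colpali-v1.2-hf" and typical attention-projection LoRA targets.
import torch
from transformers import BitsAndBytesConfig, ColPaliForRetrieval, ColPaliProcessor
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = ColPaliForRetrieval.from_pretrained(
    "vidore/colpali-v1.2-hf",           # assumed checkpoint id
    quantization_config=bnb_config,
    device_map="auto",
)
processor = ColPaliProcessor.from_pretrained("vidore/colpali-v1.2-hf")

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="FEATURE_EXTRACTION",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters train
```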
2024-12-20T16:09:29
https://www.reddit.com/r/LocalLLaMA/comments/1hin426/qlora_finetuning_colpali_for_multimodal_rag/
unofficialmerve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hin426
false
null
t3_1hin426
/r/LocalLLaMA/comments/1hin426/qlora_finetuning_colpali_for_multimodal_rag/
false
false
self
9
{'enabled': False, 'images': [{'id': 'NCQ_nxK5yV0b47ZGEZy1pekv0-N2hA84apsraK3PoI8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=108&crop=smart&auto=webp&s=d88717b85c60864eb1169285daeb37ecf5bd9349', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=216&crop=smart&auto=webp&s=1ee3687641916a16fad17e9520aa67c04370c9dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=320&crop=smart&auto=webp&s=e110ddb282efd6d78523a625a1622b171d91ebf7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=640&crop=smart&auto=webp&s=bdeecfe59bbb40e572f435a544222b59215cba80', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=960&crop=smart&auto=webp&s=c6f44593fc1e2e239b64662b9eb5df148a8632f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?width=1080&crop=smart&auto=webp&s=360968f477d64306bf40767ee66b20a3c5adb7a8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/q18zuGMxOaSr5RT_fO0UpwmYxkGeNpkYjJaPPHwzIIg.jpg?auto=webp&s=78eaec399596ba343645877167ff9ce5a1ee31ba', 'width': 1200}, 'variants': {}}]}
Transcription - Translation and TTS - What I have tried
1
[removed]
2024-12-20T16:22:59
https://www.reddit.com/r/LocalLLaMA/comments/1hinexe/transcription_translation_and_tts_what_i_have/
Aggravating-Coder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hinexe
false
null
t3_1hinexe
/r/LocalLLaMA/comments/1hinexe/transcription_translation_and_tts_what_i_have/
false
false
self
1
null
What are your favorite macros?
7
I’m curious: **What are your favorite macros?** What little tools or scripts have you created that make your day easier or boost your productivity? I’d love to hear your ideas and be inspired by your creativity!

Here’s a macro I recently built that combines **AutoHotkey** and **Python** to improve selected text using the **Gemini API** or Llama 3.3. Here's how it works:

1. **AutoHotkey script**: When pressing `Ctrl + Shift + X`, it runs a Python script that processes the text from the clipboard.
2. **Python script**: The text is sent to the Gemini API, which rewrites it to be clearer, more concise, and kind-hearted. The improved text is then copied back to the clipboard.

Here’s the AutoHotkey script:

```autohotkey
^+x:: ; Ctrl + Shift + X
{
    RunWait 'python.exe "YOUR_PATH\shortcut.py"', , "Hide"
    Sleep 250 ; Wait for processing
    Send "^v"
}
```

And Python:

```python
import pyperclip
import requests
import json
import os

# Read API key from environment variable
api_key = os.environ.get("GEMINI_API_KEY")

# API endpoint
url = f"https://generativelanguage.googleapis.com/v1beta/models/gemini-exp-1206:generateContent?key={api_key}"

# Headers
headers = {
    'Content-Type': 'application/json'
}

def improve_text(text):
    """Enhances the given text using the Gemini API."""
    data = {
        "contents": [{
            "parts": [{"text": f"Rewrite this text to be exceptionally well-written, clear, concise, kind-hearted, and impactful. Use NVC and ACIM principles when relevant but do not force them. Return only the improved text in the same language as the input: {text}"}]
        }]
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200:
        try:
            result = response.json()
            return result["candidates"][0]["content"]["parts"][0]["text"]
        except (KeyError, IndexError):
            print("Unexpected API response.")
            return text
    else:
        print(f"API request failed: {response.status_code} - {response.text}")
        return text

# Get selected text from clipboard
text = pyperclip.paste()

# Improve text
improved_text = improve_text(text)

# Copy improved text back to clipboard
pyperclip.copy(improved_text)
print("Text improved and copied to clipboard.")
```
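For a fully local variant along the lines the post mentions (Llama 3.3), here is a hedged sketch that swaps the Gemini call for Ollama's documented `/api/generate` endpoint; the `llama3.3` model tag is an assumption about what you have pulled.

```python
# Hypothetical local drop-in for improve_text() using Ollama's REST API
# (POST /api/generate); assumes an Ollama server on the default port.
import requests

def improve_text_local(text: str) -> str:
    """Rewrite text with a local model served by Ollama."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.3",  # assumed tag; use whatever you've pulled
            "prompt": f"Rewrite this text to be clear, concise, and kind. "
                      f"Return only the improved text: {text}",
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]
```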
2024-12-20T17:33:26
https://www.reddit.com/r/LocalLLaMA/comments/1hip0mk/what_are_your_favorite_macros/
Funny_Acanthaceae285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hip0mk
false
null
t3_1hip0mk
/r/LocalLLaMA/comments/1hip0mk/what_are_your_favorite_macros/
false
false
self
7
null
[Video] Significance of the Konwinski Prize ($1 Million Coding Agent Challenge)
1
[deleted]
2024-12-20T17:49:35
[deleted]
1970-01-01T00:00:00
0
{}
1hipdxs
false
{'oembed': {'author_name': 'nopilot', 'author_url': 'https://www.youtube.com/@nopilot-dev', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/KAcomQhZ-0A?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Win the Konwinski Prize! | Million Dollar Coding Agent"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/KAcomQhZ-0A/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Win the Konwinski Prize! | Million Dollar Coding Agent', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hipdxs
/r/LocalLLaMA/comments/1hipdxs/video_significance_of_the_konwinski_prize_1/
false
false
default
1
null
Every AI Copyright Lawsuit in the US, Visualized | Wired
0
2024-12-20T17:50:20
https://www.wired.com/story/ai-copyright-case-tracker/
ttkciar
wired.com
1970-01-01T00:00:00
0
{}
1hipeks
false
null
t3_1hipeks
/r/LocalLLaMA/comments/1hipeks/every_ai_copyright_lawsuit_in_the_us_visualized/
false
false
https://a.thumbs.redditm…zTi6NrC93Hc4.jpg
0
{'enabled': False, 'images': [{'id': 'Qu3LnV2RT5ire2WrobKjYj6ZNOwu7HfFHjPzQYsMAOU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/h8p9zNDtvkCom0naeRPJ4uxz7l1tb2oT9ZhJa2UoK-s.jpg?width=108&crop=smart&auto=webp&s=0595a958ea926021a7d3fbcd6c3746975a4e0dbe', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/h8p9zNDtvkCom0naeRPJ4uxz7l1tb2oT9ZhJa2UoK-s.jpg?width=216&crop=smart&auto=webp&s=2b58052368e3446ce663c131b337879af92d63b3', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/h8p9zNDtvkCom0naeRPJ4uxz7l1tb2oT9ZhJa2UoK-s.jpg?width=320&crop=smart&auto=webp&s=80284c4d4929ba860a59d90febd0f1784aca9ab9', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/h8p9zNDtvkCom0naeRPJ4uxz7l1tb2oT9ZhJa2UoK-s.jpg?width=640&crop=smart&auto=webp&s=470f620e153b1a623a217c9b2ff2ce6bdefc1971', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/h8p9zNDtvkCom0naeRPJ4uxz7l1tb2oT9ZhJa2UoK-s.jpg?width=960&crop=smart&auto=webp&s=b3b9eb52d48b9104543c55239f9f0c138d820c79', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/h8p9zNDtvkCom0naeRPJ4uxz7l1tb2oT9ZhJa2UoK-s.jpg?width=1080&crop=smart&auto=webp&s=995517a729201369859884c4c739516ef197bbe2', 'width': 1080}], 'source': {'height': 670, 'url': 'https://external-preview.redd.it/h8p9zNDtvkCom0naeRPJ4uxz7l1tb2oT9ZhJa2UoK-s.jpg?auto=webp&s=04e2d6fa21bede7f37f08d66380211129b5a8eb9', 'width': 1280}, 'variants': {}}]}
How good is Llama 3-70b for creative writing?
1
[removed]
2024-12-20T18:01:07
https://www.reddit.com/r/LocalLLaMA/comments/1hipnis/how_good_is_llama_370b_for_creative_writing/
Candid_Bar_3484
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hipnis
false
null
t3_1hipnis
/r/LocalLLaMA/comments/1hipnis/how_good_is_llama_370b_for_creative_writing/
false
false
self
1
null
4 GPU on AM5 platform.
4
I want to build a computer for LLM inference with four RTX 3090 GPUs. I found that some AM5 motherboards support x4+x4+x4+x4 bifurcation on the first PCIe slot. Motherboard example: ASUS Prime X870-P WiFi. I also found PCIe bifurcation cards that split PCIe x16 into quad M.2: [https://www.amazon.com/JMT-Bifurcation-Four-Bay-Expansion-Heatsink/dp/B09R3P5MVL?th=1](https://www.amazon.com/JMT-Bifurcation-Four-Bay-Expansion-Heatsink/dp/B09R3P5MVL?th=1). After that, I plan to connect the GPUs via M.2-to-PCIe risers, ending up with four PCIe 4.0 GPUs connected to the motherboard. Will this build work with 4 GPUs?
2024-12-20T18:02:22
https://www.reddit.com/r/LocalLLaMA/comments/1hipolr/4_gpu_on_am5_platform/
Single-Persimmon9439
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hipolr
false
null
t3_1hipolr
/r/LocalLLaMA/comments/1hipolr/4_gpu_on_am5_platform/
false
false
self
4
null
o3 Coding Benchmarks. Hoping for similar capabilities from Meta in 2025
108
2024-12-20T18:06:07
https://i.redd.it/m2c3fs95n18e1.png
No_Training9444
i.redd.it
1970-01-01T00:00:00
0
{}
1hiprtb
false
null
t3_1hiprtb
/r/LocalLLaMA/comments/1hiprtb/o3_coding_benchmarks_hoping_for_similar/
false
false
https://a.thumbs.redditm…Jfy4DBARYz84.jpg
108
{'enabled': True, 'images': [{'id': 'Z6G_PE6cpab4IO6kLS3DZNQnO-Q6dHfyVjk2EMMp8uc', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/m2c3fs95n18e1.png?width=108&crop=smart&auto=webp&s=e17926208316b905edff21643e1d32dff18e4c6d', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/m2c3fs95n18e1.png?width=216&crop=smart&auto=webp&s=fbdf52787ac29b88072d810be752cd0216af3930', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/m2c3fs95n18e1.png?width=320&crop=smart&auto=webp&s=05bc417fcc6facd35d20cf2a574400b2f77db005', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/m2c3fs95n18e1.png?width=640&crop=smart&auto=webp&s=c66c2e768976ebfccbf6322763c72e870b39e560', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/m2c3fs95n18e1.png?width=960&crop=smart&auto=webp&s=c2cb1042e96c157e6f1e5e6048621c732d2a9695', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/m2c3fs95n18e1.png?width=1080&crop=smart&auto=webp&s=8ebb261978e3e0a567594a2f4776e54d42705a59', 'width': 1080}], 'source': {'height': 914, 'url': 'https://preview.redd.it/m2c3fs95n18e1.png?auto=webp&s=8b87e083099c577be027c13edd8c4d0b4c6b41ff', 'width': 1828}, 'variants': {}}]}
OpenAI o3 performance on ARC-AGI
3
https://preview.redd.it/…750fdcae1a89fe
2024-12-20T18:09:05
https://www.reddit.com/r/LocalLLaMA/comments/1hipu8b/openai_o3_performance_on_arcagi/
jpydych
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hipu8b
false
null
t3_1hipu8b
/r/LocalLLaMA/comments/1hipu8b/openai_o3_performance_on_arcagi/
false
false
https://b.thumbs.redditm…NYVPMPntozaw.jpg
3
null
4.5 Million (Suspected) Fake Stars in GitHub
179
2024-12-20T18:10:21
https://arxiv.org/abs/2412.13459
davidmezzetti
arxiv.org
1970-01-01T00:00:00
0
{}
1hipvc3
false
null
t3_1hipvc3
/r/LocalLLaMA/comments/1hipvc3/45_million_suspected_fake_stars_in_github/
false
false
default
179
null
o3 performance on EpochAI benchmark. Closed benchmark. Takes professors and PhDs several hours or days to solve the problems.
74
2024-12-20T18:14:24
https://i.redd.it/vs6imkpdo18e1.png
No_Training9444
i.redd.it
1970-01-01T00:00:00
0
{}
1hipync
false
null
t3_1hipync
/r/LocalLLaMA/comments/1hipync/o3_performance_on_epochai_benchmark_closed/
false
false
https://a.thumbs.redditm…8SKZqe7jf-K0.jpg
74
{'enabled': True, 'images': [{'id': 'QsJh5pDrJaKiOSsb74ZqV7AfUDJG0OuMMazNSI1ajok', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/vs6imkpdo18e1.png?width=108&crop=smart&auto=webp&s=3eb768bed550bce48992cdc086d07966a44cabeb', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/vs6imkpdo18e1.png?width=216&crop=smart&auto=webp&s=effb7b64037b29be208bc6d4c98ae21abd30b414', 'width': 216}, {'height': 242, 'url': 'https://preview.redd.it/vs6imkpdo18e1.png?width=320&crop=smart&auto=webp&s=af3618f83dd20a981191a7473e5b58f49e32b36a', 'width': 320}, {'height': 485, 'url': 'https://preview.redd.it/vs6imkpdo18e1.png?width=640&crop=smart&auto=webp&s=b67fe2b84a16e2518cfac1c10d50a8fe49f9c569', 'width': 640}, {'height': 728, 'url': 'https://preview.redd.it/vs6imkpdo18e1.png?width=960&crop=smart&auto=webp&s=25e7c44bce81dd40ab3ede98520b0e7d34d25c69', 'width': 960}, {'height': 820, 'url': 'https://preview.redd.it/vs6imkpdo18e1.png?width=1080&crop=smart&auto=webp&s=d8f8fb43e2115b8ea5b1ae270c54e75ff73ae6f5', 'width': 1080}], 'source': {'height': 874, 'url': 'https://preview.redd.it/vs6imkpdo18e1.png?auto=webp&s=94ef98a53a27ac6600b8cb314a1327a0b8090bc8', 'width': 1151}, 'variants': {}}]}
OpenAI just announced O3 and O3 mini
463
They seem to be a considerable improvement
2024-12-20T18:17:57
https://www.reddit.com/r/LocalLLaMA/comments/1hiq1jg/openai_just_announced_o3_and_o3_mini/
Friendly_Fan5514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiq1jg
false
null
t3_1hiq1jg
/r/LocalLLaMA/comments/1hiq1jg/openai_just_announced_o3_and_o3_mini/
false
false
self
463
{'enabled': False, 'images': [{'id': 'x8xX6kFhWK5XYpEQyBNEtisjOjdBf_RVVv1MiyhTgGo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/BET0YJIi1XNx4rSYaV28pzN9CqTr8YPe3We8wyVdNL4.jpg?width=108&crop=smart&auto=webp&s=8b3c48ce696113872920119bc50289cb4bcbb9d5', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/BET0YJIi1XNx4rSYaV28pzN9CqTr8YPe3We8wyVdNL4.jpg?auto=webp&s=beb7872cf67e8f8641c086fbad0f25f95e99ee3e', 'width': 200}, 'variants': {}}]}
What is the best LLM for generating architecture diagrams?
1
I tried Claude; it's not bad, but since it can't generate images it writes the diagram in Mermaid syntax. I also tried ChatGPT; it created a 3D diagram, but it had a lot of irrelevant components and misinformation. If you have a good prompt, that would be helpful as well.
2024-12-20T18:35:23
https://www.reddit.com/r/LocalLLaMA/comments/1hiqfl1/what_is_the_best_llm_for_generating_architecture/
engkamyabi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiqfl1
false
null
t3_1hiqfl1
/r/LocalLLaMA/comments/1hiqfl1/what_is_the_best_llm_for_generating_architecture/
false
false
self
1
null
o3 beats 99.8% of competitive coders
336
So apparently the equivalent percentile of a 2727 Elo rating is 99.8 on Codeforces. Source: https://codeforces.com/blog/entry/126802
2024-12-20T18:39:24
https://www.reddit.com/gallery/1hiqing
user0069420
reddit.com
1970-01-01T00:00:00
0
{}
1hiqing
false
null
t3_1hiqing
/r/LocalLLaMA/comments/1hiqing/03_beats_998_competitive_coders/
false
false
https://b.thumbs.redditm…46N2ClZuyEtw.jpg
336
null
Did Microsoft generally release Phi-4 yet?
64
Microsoft announced last week that it was going to release Phi-4 on HF this week. I know there have been a bunch of GGUF files floating around, and I was super impressed with the version I downloaded. Eagerly waiting to see what's available in Microsoft's official collection.
2024-12-20T18:44:29
https://www.reddit.com/r/LocalLLaMA/comments/1hiqmtf/did_microsoft_generally_release_phi4_yet/
CSharpSauce
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiqmtf
false
null
t3_1hiqmtf
/r/LocalLLaMA/comments/1hiqmtf/did_microsoft_generally_release_phi4_yet/
false
false
self
64
null
WebOpenUI, Qwen Agent and multiple models
0
Before I go through the effort of setting it up, I'm wondering what your thoughts are on the following:

- A web UI front end (like Open WebUI, or custom if necessary) utilizing Qwen Agent.
- Multiple models for different types of tasks (coding, math, general, etc.). Qwen Agent has task routing to send each request to the proper model, as well as "chain of reasoning" functionality (the idea being that the general model is used first, then specialized models for each step).

I may have different models hosted on different servers, or have some type of loading/unloading logic in each step for the different models (adding latency, but also meaning I don't need as powerful a machine). The idea is that I can interact via a web UI for the most part, but have it be robust behind the scenes, using specialized models for certain types of questions while also getting the benefits of Qwen Agent's "chain of reasoning" functionality. A sketch of the routing idea follows below.
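A hedged sketch of the task-routing idea (not Qwen Agent's actual API): a general model labels the request, then a specialist OpenAI-compatible endpoint handles it. All URLs and model names below are hypothetical placeholders.

```python
# Classify-then-dispatch routing across OpenAI-compatible servers.
from openai import OpenAI

ROUTES = {  # hypothetical endpoints and model tags
    "coding":  ("http://server-a:8000/v1", "qwen2.5-coder-32b"),
    "math":    ("http://server-b:8000/v1", "qwen2.5-math-72b"),
    "general": ("http://server-c:8000/v1", "qwen2.5-72b-instruct"),
}

def dispatch(question: str) -> str:
    # Step 1: the general model picks a route with a one-word label.
    router = OpenAI(base_url=ROUTES["general"][0], api_key="none")
    label = router.chat.completions.create(
        model=ROUTES["general"][1],
        messages=[{"role": "user", "content":
                   f"Classify as coding, math, or general. "
                   f"Reply with one word.\n\n{question}"}],
    ).choices[0].message.content.strip().lower()

    # Step 2: the matching specialist answers (fall back to general).
    base_url, model = ROUTES.get(label, ROUTES["general"])
    specialist = OpenAI(base_url=base_url, api_key="none")
    return specialist.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
```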
2024-12-20T19:07:43
https://www.reddit.com/r/LocalLLaMA/comments/1hir5el/webopenui_qwen_agent_and_multiple_models/
exponentfrost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hir5el
false
null
t3_1hir5el
/r/LocalLLaMA/comments/1hir5el/webopenui_qwen_agent_and_multiple_models/
false
false
self
0
null
With the o3 release, the new era of test time compute scaling begins
132
I knew that on the last day of the 12 days OpenAI would release something huge. I was hoping for a GPT-4.5 or 5, but we got o3. If that's the best thing they could announce, one could reason that they have shifted from mostly scaling pretraining to test-time compute. This would be in line with the rumors of diminishing returns during the training of GPT-5. Will open source follow suit?
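For reference, the simplest open-source form of test-time compute scaling is best-of-N sampling; the skeleton below uses `generate` and `score` as hypothetical stand-ins for a local model call and a verifier or reward model.

```python
# Best-of-N sampling, the simplest form of test-time compute scaling:
# spend more inference compute by raising n.
def generate(prompt: str, temperature: float = 0.8) -> str:
    raise NotImplementedError("call your local model here")

def score(prompt: str, answer: str) -> float:
    raise NotImplementedError("verifier / reward model / test suite")

def best_of_n(prompt: str, n: int = 16) -> str:
    # Sample n diverse candidates, keep the one the scorer likes best.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))
```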
2024-12-20T19:19:48
https://www.reddit.com/r/LocalLLaMA/comments/1hirf2f/with_the_o3_release_the_new_era_of_test_time/
takuonline
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hirf2f
false
null
t3_1hirf2f
/r/LocalLLaMA/comments/1hirf2f/with_the_o3_release_the_new_era_of_test_time/
false
false
self
132
null
What’s the best uncensored AI?
0
Trying to use the AI for trolling 🤣 I think open-source will be superior in the long run. We need AI with a sense of humor.
2024-12-20T20:15:21
https://www.reddit.com/r/LocalLLaMA/comments/1hisnqt/whats_the_best_uncensored_ai/
notdemiurge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hisnqt
false
null
t3_1hisnqt
/r/LocalLLaMA/comments/1hisnqt/whats_the_best_uncensored_ai/
false
false
self
0
null
LLMs: A Risk-based Approach to Evaluations
1
[removed]
2024-12-20T20:18:39
https://www.reddit.com/r/LocalLLaMA/comments/1hisqdk/llms_a_riskbased_approach_to_evaluations/
scottishwanderer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hisqdk
false
null
t3_1hisqdk
/r/LocalLLaMA/comments/1hisqdk/llms_a_riskbased_approach_to_evaluations/
false
false
self
1
{'enabled': False, 'images': [{'id': 'emNgELyXjSAU_bd8dPvLh31OIX0ad-aXExszHvqRLMA', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/4et9VmdMgCCNK_lMWqQeiPyPdQ75rIxljQ5uwntWTLM.jpg?width=108&crop=smart&auto=webp&s=7874e01f54f17479ac24722f32c7d06320e94c1b', 'width': 108}, {'height': 66, 'url': 'https://external-preview.redd.it/4et9VmdMgCCNK_lMWqQeiPyPdQ75rIxljQ5uwntWTLM.jpg?width=216&crop=smart&auto=webp&s=677b16f897db0b6ae75d5a272ee77158695c4188', 'width': 216}, {'height': 98, 'url': 'https://external-preview.redd.it/4et9VmdMgCCNK_lMWqQeiPyPdQ75rIxljQ5uwntWTLM.jpg?width=320&crop=smart&auto=webp&s=8897116ac483d9b5b2a3e380c257f7acd2ecfe1e', 'width': 320}, {'height': 196, 'url': 'https://external-preview.redd.it/4et9VmdMgCCNK_lMWqQeiPyPdQ75rIxljQ5uwntWTLM.jpg?width=640&crop=smart&auto=webp&s=049173e31eabe7a428418922f4360501a08d3827', 'width': 640}, {'height': 294, 'url': 'https://external-preview.redd.it/4et9VmdMgCCNK_lMWqQeiPyPdQ75rIxljQ5uwntWTLM.jpg?width=960&crop=smart&auto=webp&s=f73349e3f220eb4536e5951d70ad74e69f5b8771', 'width': 960}, {'height': 331, 'url': 'https://external-preview.redd.it/4et9VmdMgCCNK_lMWqQeiPyPdQ75rIxljQ5uwntWTLM.jpg?width=1080&crop=smart&auto=webp&s=a5a5010d6aa5c240e228fb3538a399c7760e9938', 'width': 1080}], 'source': {'height': 344, 'url': 'https://external-preview.redd.it/4et9VmdMgCCNK_lMWqQeiPyPdQ75rIxljQ5uwntWTLM.jpg?auto=webp&s=b37153bebce1d1a6b8bd3ebe368d532bb613c0e1', 'width': 1121}, 'variants': {}}]}
I have a question/open discussion; hope someone can answer with an open mind
0
I saw a post that talked about how adding noise to the brain could create new ideas. They brought up how weather models with noise and without noise produced different outcomes, which is pretty obvious. A comment I saw in that post had me interested: LLMs use pseudorandom noise in sampling to prevent the output from being "too uniform". The amount of noise injected is controlled by a setting called "temperature". The difference seems to be that in a biological system the noise is inherent to the system; neurons cannot be perfectly accurate. A computer can be, so the noise is added intentionally instead. What is this noise they speak of? And if noise is what causes the variation in an LLM, could we create something similar for the human brain? That noise, so to speak, what is it?
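Concretely, the "noise" is sampling from a temperature-scaled softmax over the model's output logits rather than always taking the top token; a toy demonstration:

```python
# Temperature sampling: temperature rescales logits, controlling how flat
# (noisy) the token distribution is before a token is drawn from it.
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])  # toy scores for 4 candidate tokens

def sample(logits, temperature):
    z = logits / temperature
    probs = np.exp(z - z.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

for t in (0.1, 0.7, 1.5):
    picks = [sample(logits, t) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=4) / 1000)
# Low temperature -> almost always token 0 (near-deterministic);
# high temperature -> picks spread out (more "noise").
```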
2024-12-20T20:31:42
https://www.reddit.com/r/LocalLLaMA/comments/1hit0de/i_have_a_questionopen_discussion_hope_someone_can/
johnnymburgess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hit0de
false
null
t3_1hit0de
/r/LocalLLaMA/comments/1hit0de/i_have_a_questionopen_discussion_hope_someone_can/
false
false
self
0
null
Building Live Transcriptions with Whisper: Looking for Feedback and Ideas!
1
[removed]
2024-12-20T20:35:24
https://www.reddit.com/r/LocalLLaMA/comments/1hit39c/building_live_transcriptions_with_whisper_looking/
QPixel18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hit39c
false
null
t3_1hit39c
/r/LocalLLaMA/comments/1hit39c/building_live_transcriptions_with_whisper_looking/
false
false
self
1
null
open source copilot/codium with Ollama integration?
0
Hey local llamas, I'm not actually a huge fan of copilot-style code completion, but it is handy sometimes -- not handy enough to have it on all the time, justify a Copilot sub, or accept the extra risk of my code going to some random company I don't know, but somewhat useful? As such, I was thinking there might be a local server I can run and power with Ollama. Especially with the latest advancements with Qwen, I imagine it would make a solid local coder for the "super autosuggest" use case specifically. It would be a nice in-between for a use case where the value of sending my code to a remote, uncontrolled server doesn't outweigh the risks (for me personally). Does anyone know of a project that supports this? I'll take any leads -- VS Code extension, other IDE targets, whatever. My end target is actually [marimo.io](http://marimo.io), which supports copilot servers, but I'm imagining I can point it to something running locally if I can get something that matches the API. I did search the forum and Google around, but had no luck. Thanks!
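One relevant fact: Ollama exposes an OpenAI-compatible endpoint at `/v1`, so any tool that accepts a custom OpenAI base URL can be pointed at a local model. A minimal check (the `qwen2.5-coder` tag is an assumption about what's pulled):

```python
# Talk to a local Ollama server through the OpenAI client; the API key is
# required by the client but ignored by Ollama.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="qwen2.5-coder",  # assumed local model tag
    messages=[{"role": "user", "content": "Complete: def fib(n):"}],
)
print(completion.choices[0].message.content)
```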
2024-12-20T20:53:51
https://www.reddit.com/r/LocalLLaMA/comments/1hithyp/open_source_copilotcodium_with_ollama_integration/
maxtheman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hithyp
false
null
t3_1hithyp
/r/LocalLLaMA/comments/1hithyp/open_source_copilotcodium_with_ollama_integration/
false
false
self
0
{'enabled': False, 'images': [{'id': 'E8yfjxAhzMwVDslg_OQ5qU4qSGr1CR3QhWLVc34G-Zs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/u4NGqyMsWNnMDw1QhyvpdhTC7ahuz7qNP6n3rX3yoV8.jpg?width=108&crop=smart&auto=webp&s=fb475eca0ddfafc9b01a27cf279449f983ec9080', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/u4NGqyMsWNnMDw1QhyvpdhTC7ahuz7qNP6n3rX3yoV8.jpg?width=216&crop=smart&auto=webp&s=ef76b45a2607a212ba4dc6a7e70d186f7fe21f9a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/u4NGqyMsWNnMDw1QhyvpdhTC7ahuz7qNP6n3rX3yoV8.jpg?width=320&crop=smart&auto=webp&s=ab620858193daea9ba92475d67abedbe931f46ac', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/u4NGqyMsWNnMDw1QhyvpdhTC7ahuz7qNP6n3rX3yoV8.jpg?width=640&crop=smart&auto=webp&s=696d4b0a13ff9f1d3c61113f036160a851bca657', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/u4NGqyMsWNnMDw1QhyvpdhTC7ahuz7qNP6n3rX3yoV8.jpg?width=960&crop=smart&auto=webp&s=8155ec0cf10d6217a8873dd3bf49012fb5933d57', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/u4NGqyMsWNnMDw1QhyvpdhTC7ahuz7qNP6n3rX3yoV8.jpg?width=1080&crop=smart&auto=webp&s=def1fcd2843159d43c159a2c8b4451f2d5c458bb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/u4NGqyMsWNnMDw1QhyvpdhTC7ahuz7qNP6n3rX3yoV8.jpg?auto=webp&s=01c859c8a24297fdbdf3b85915ae472a69fad517', 'width': 1200}, 'variants': {}}]}
Any cloud providers for the new Llama 3.3?
4
I'm new here and was hoping to find cloud providers that offer Llama 3.3. I'm interested in a cheaper alternative to OpenAI's GPT-4 models, like I'm sure most people are. Hoping some kind folks could point me in the right direction.
2024-12-20T20:58:04
https://www.reddit.com/r/LocalLLaMA/comments/1hitl7x/any_cloud_providers_for_the_new_llama_33/
cs_cast_away_boi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hitl7x
false
null
t3_1hitl7x
/r/LocalLLaMA/comments/1hitl7x/any_cloud_providers_for_the_new_llama_33/
false
false
self
4
null
What PC upgrade you would suggest?
1
[removed]
2024-12-20T20:59:33
https://www.reddit.com/r/LocalLLaMA/comments/1hitmd1/what_pc_upgrade_you_would_suggest/
YouWillNeeverFindOut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hitmd1
false
null
t3_1hitmd1
/r/LocalLLaMA/comments/1hitmd1/what_pc_upgrade_you_would_suggest/
false
false
self
1
null
The o3 chart is logarithmic on X axis and linear on Y
505
2024-12-20T21:12:59
https://i.redd.it/s1t6d3ubk28e1.png
hyperknot
i.redd.it
1970-01-01T00:00:00
0
{}
1hitwwt
false
null
t3_1hitwwt
/r/LocalLLaMA/comments/1hitwwt/the_o3_chart_is_logarithmic_on_x_axis_and_linear/
false
false
https://b.thumbs.redditm…ZVqUO-FqPzcY.jpg
505
{'enabled': True, 'images': [{'id': 'xXTJ7v3htbAkL6VlVox8JAFjQ9BrdLrPjfsznG2up3I', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/s1t6d3ubk28e1.png?width=108&crop=smart&auto=webp&s=f5b11d2aff8581d4e2997b2c574bed827a869f8b', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/s1t6d3ubk28e1.png?width=216&crop=smart&auto=webp&s=baca10f2beb30604d96bd77342e7da6d67cc1bdf', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/s1t6d3ubk28e1.png?width=320&crop=smart&auto=webp&s=08953d52be1096f75b89fceac1132f3ce6582154', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/s1t6d3ubk28e1.png?width=640&crop=smart&auto=webp&s=2a23a42470c4738a5ffd7abef516e80321158e55', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/s1t6d3ubk28e1.png?width=960&crop=smart&auto=webp&s=93f474260bb5d0725eadd4300c3bdd2b5cfd2142', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/s1t6d3ubk28e1.png?width=1080&crop=smart&auto=webp&s=692b981606abb3de7c44d2a0af0e280d8813a198', 'width': 1080}], 'source': {'height': 675, 'url': 'https://preview.redd.it/s1t6d3ubk28e1.png?auto=webp&s=b166ee7cfa0a9d527c9045ce022f36d4f1c3e3d5', 'width': 1200}, 'variants': {}}]}
I need a local chatbot. I own RTX 3060 6GB
1
[removed]
2024-12-20T21:33:48
https://www.reddit.com/r/LocalLLaMA/comments/1hiucxd/i_need_a_local_chatbot_i_own_rtx_3060_6gb/
Ill_Baker_9712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiucxd
false
null
t3_1hiucxd
/r/LocalLLaMA/comments/1hiucxd/i_need_a_local_chatbot_i_own_rtx_3060_6gb/
false
false
self
1
null
How accurate are benchmarks for closed-source LLMs, really?
3
How accurate are benchmarks for closed-source LLMs, really? The vendors could literally fine-tune on the benchmark datasets, or even use rigged ones (custom datasets that favor the architecture of the LLM), and nobody would ever be able to audit the results.
2024-12-20T21:51:39
https://www.reddit.com/r/LocalLLaMA/comments/1hiuqs1/how_accurate_benchmarks_for_closed_source_llms/
Life_Ask2806
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiuqs1
false
null
t3_1hiuqs1
/r/LocalLLaMA/comments/1hiuqs1/how_accurate_benchmarks_for_closed_source_llms/
false
false
self
3
null
Will Phi 4 be released in a 7b version?
1
Thoughts?
2024-12-20T22:28:15
https://www.reddit.com/r/LocalLLaMA/comments/1hivi66/will_phi_4_be_released_in_a_7b_version/
AstroDogOnline
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hivi66
false
null
t3_1hivi66
/r/LocalLLaMA/comments/1hivi66/will_phi_4_be_released_in_a_7b_version/
false
false
self
1
null
PowerColor Red Devil Radeon™ RX 580 8GB
56
2024-12-20T22:28:39
https://i.redd.it/tn3jm96qx28e1.jpeg
firearms_wtf
i.redd.it
1970-01-01T00:00:00
0
{}
1hivigi
false
null
t3_1hivigi
/r/LocalLLaMA/comments/1hivigi/powercolor_red_devil_radeon_rx_580_8gb/
false
false
https://b.thumbs.redditm…r0bJtTaFRHKQ.jpg
56
{'enabled': True, 'images': [{'id': 'J17EQzCkGvB3q3-EUcJnfDJLh66aSilRrvWIqL_o4sM', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/tn3jm96qx28e1.jpeg?width=108&crop=smart&auto=webp&s=c200712efc859a886318f92bde20e154fad44446', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/tn3jm96qx28e1.jpeg?width=216&crop=smart&auto=webp&s=adaa1d12b1b58db5ca6703633d1779d8c4e31422', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/tn3jm96qx28e1.jpeg?width=320&crop=smart&auto=webp&s=1c84af71a9596ef2bd5c5b9f29ae552d3b245917', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/tn3jm96qx28e1.jpeg?width=640&crop=smart&auto=webp&s=0d2a8343359f0675c06317404e22168cd5ab9247', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/tn3jm96qx28e1.jpeg?width=960&crop=smart&auto=webp&s=37e67f9747beeef1bd52a985a32f1d901cec45ff', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/tn3jm96qx28e1.jpeg?width=1080&crop=smart&auto=webp&s=47ada0bc3c122d24ab24d34f2e6ff4e4a5e1f5f5', 'width': 1080}], 'source': {'height': 4284, 'url': 'https://preview.redd.it/tn3jm96qx28e1.jpeg?auto=webp&s=ba21f4d24f7993be1377e63b306f72baea5774e6', 'width': 5712}, 'variants': {}}]}
Newbie Question: Fine Tuning Trouble (Help Needed)
1
Hit a huge roadblock with fine-tuning Mixtral 8x22B FP16. The tokenizer runs fine. The model loads fine. But when the fine-tuning process begins, I immediately get indices errors, indefinitely:

    Starting epoch 1/2...
    [DEBUG Step 0] Current device: cuda:0
    Batch device: cuda:0
    [ERROR] Step 0 RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:1)
    [INFO] Retrying after clearing memory...

Others who have had this problem were also running on a multi-GPU setup. I'm running 5 x L40 + 1.25 TB RAM.

    CUDA Version: 12.4
    Torch version: 2.1.0+cu121
    CUDA available: True
    CUDA device count: 5
    Device name: NVIDIA L40
    transformers==4.44.2
    torch==2.1.0+cu121
    torchvision==0.16.0+cu121

Here's the code:

```python
import os
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizerFast, logging
from torch.utils.data import DataLoader, Dataset
from accelerate import infer_auto_device_map, dispatch_model

# Disable unwanted warnings
logging.set_verbosity_error()

# Set environment variables to optimize memory utilization
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
os.environ["TOKENIZERS_PARALLELISM"] = "false"  # Disable tokenizer warnings


# --- Dataset Preparation ---
class TextDataset(Dataset):
    def __init__(self, file_path, tokenizer, block_size=512):
        print(f"Loading dataset from: {file_path}")
        with open(file_path, "r", encoding="utf-8") as f:
            lines = f.readlines()
        print(f"Loaded {len(lines)} lines of text.")
        print("Tokenizing the dataset...")
        self.examples = tokenizer(
            lines,
            truncation=True,
            padding="max_length",
            max_length=block_size,
            return_tensors="pt"
        )["input_ids"]
        print(f"Tokenization complete. {len(self.examples)} examples ready.")

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]


# --- Model Loading and Device Mapping ---
def load_model(args):
    print("[INFO] Loading model with multi-GPU device mapping...")
    model = AutoModelForCausalLM.from_pretrained(
        args["model_path"],
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True
    )

    # Explicit device map for testing purposes
    device_map = infer_auto_device_map(
        model,
        no_split_module_classes=["LlamaDecoderLayer"],  # Prevent splitting critical layers
        max_memory={
            0: "40GB", 1: "40GB", 2: "40GB", 3: "40GB", 4: "40GB",
            "cpu": "1024GB",
        }
    )
    print("\nDevice Map Automatically Generated:")
    for k, v in device_map.items():
        print(f"{k}: {v}")

    # Dispatch model to correct devices
    model = dispatch_model(model, device_map=device_map, offload_buffers=True)

    # Enable gradient checkpointing to reduce memory usage
    model.gradient_checkpointing_enable()

    print("[INFO] Model successfully loaded and distributed!")
    return model


# --- Fine-Tuning Logic ---
def calibrate(model, dataset, tokenizer, args):
    dataloader = DataLoader(dataset, batch_size=args["batch_size"], shuffle=True)
    model.train()
    optimizer = torch.optim.AdamW(
        filter(lambda p: p.requires_grad, model.parameters()),
        lr=args["learning_rate"]
    )

    for epoch in range(args["epochs"]):
        print(f"Starting epoch {epoch + 1}/{args['epochs']}...")
        total_loss = 0.0
        for step, batch in enumerate(dataloader):
            # Force device alignment at each step
            try:
                # Ensure batch tensors are moved to the first device of the model
                device = next(model.parameters()).device
                batch = batch.to(device, non_blocking=True)

                # Debugging tensor placements
                if step == 0 or step % 10 == 0:
                    print(f"[DEBUG Step {step}] Current device: {device}")
                    print(f"Batch device: {batch.device}")

                # Forward pass with controlled device context
                with torch.cuda.device(device):
                    outputs = model(input_ids=batch, labels=batch, use_cache=False)  # Experimental: disable caching
                    loss = outputs.loss

                # Backward pass
                loss.backward()
                if (step + 1) % args["gradient_accumulation"] == 0:
                    optimizer.step()
                    optimizer.zero_grad()

                total_loss += loss.item()
                if step % 10 == 0:
                    print(f"Step {step}, Loss: {loss:.4f}")
            except RuntimeError as e:
                print(f"[ERROR] Step {step} RuntimeError: {e}")
                print("[INFO] Retrying after clearing memory...")
                torch.cuda.empty_cache()
                continue

        avg_loss = total_loss / len(dataloader)
        print(f"Epoch {epoch + 1} completed. Average Loss: {avg_loss:.4f}")
    return model


# --- Main Workflow ---
def run_training():
    args = {
        "model_path": "/workspace/workspace/models/modelname",
        "dataset_path": "/workspace/datasets/combined/dataset.txt",
        "output_path": "/workspace/models/finetune",
        "batch_size": 8,
        "gradient_accumulation": 32,
        "epochs": 2,
        "learning_rate": 5e-5,
    }

    print("[INFO] Loading tokenizer...")
    tokenizer = LlamaTokenizerFast.from_pretrained(args["model_path"])

    print("[INFO] Preparing dataset...")
    dataset = TextDataset(args["dataset_path"], tokenizer)

    print("[INFO] Loading model...")
    model = load_model(args)

    print("[INFO] Beginning fine-tuning...")
    fine_tuned_model = calibrate(model, dataset, tokenizer, args)

    print("[INFO] Saving fine-tuned model...")
    fine_tuned_model.save_pretrained(args["output_path"])
    tokenizer.save_pretrained(args["output_path"])
    print("[INFO] Fine-tuning completed successfully!")


if __name__ == "__main__":
    run_training()
```
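One hedged diagnosis: with a sharded `device_map`, `next(model.parameters()).device` only reports where the first parameter landed, which is not necessarily the device of the embedding layer that indexes `input_ids` (dispatch may have placed it on cuda:1). Something along these lines may be worth checking inside the training loop:

```python
# Move the batch to the device of the *embedding* layer, since that is the
# module that actually indexes input_ids under a sharded device_map.
embed_device = model.get_input_embeddings().weight.device
batch = batch.to(embed_device, non_blocking=True)
outputs = model(input_ids=batch, labels=batch, use_cache=False)
```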
2024-12-20T22:40:39
https://www.reddit.com/r/LocalLLaMA/comments/1hivrf5/newbie_question_fine_tuning_trouble_help_needed/
misterflyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hivrf5
false
null
t3_1hivrf5
/r/LocalLLaMA/comments/1hivrf5/newbie_question_fine_tuning_trouble_help_needed/
false
false
self
1
null
Proteus - A Genesis-based LLM
34
**Proteus: A Genesis-Based LLM Physics Sandbox**

I’ve been working on a project called “Proteus” that embeds an LLM into a 4D physics simulation environment using the Genesis engine. Instead of just spitting out text, the LLM can now “enter” a simulated physics room, become a particle, a wave, a field, or even a time controller, and directly experience physical concepts. The goal is to help it build more intuitive reasoning about ideas like gravity or spacetime curvature by actually interacting with them. You can prompt it to demonstrate gravitational forces, show how wave interference works, or visualize a 4D hypercube.

I've been able to get it to run and simulate successfully a few times with cool questions, but it also fails often. It’s still rough around the edges, and code generation might break occasionally, but I'm working on making it more stable and adding retrieval-augmented generation (RAG) to improve that. Qwen2.5 Coder works best so far.

I felt like sharing since it's about 80-90% there, and a few other interested parties might be able to help problem-solve so we can turn this into a true LLM Room Simulation Engine. If you’re curious, the code and examples are on GitHub. Feel free to experiment, push it to its limits, and see what weird “physical” metaphors an LLM can come up with.

**Links:**

* Repo: [https://github.com/severian42/Proteus-The-Genesis-LLM](https://github.com/severian42/Proteus-The-Genesis-LLM)
* Genesis engine docs: [https://genesis-embodied-ai.github.io/](https://genesis-embodied-ai.github.io/)
* Proteus Example Output (a bit buggy but getting there): [https://github.com/severian42/Proteus-The-Genesis-LLM/blob/main/Proteus-Example.md](https://github.com/severian42/Proteus-The-Genesis-LLM/blob/main/Proteus-Example.md)
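For readers who haven't seen Genesis, a minimal falling-sphere scene loosely based on its quickstart; exact API details (morph names, arguments) may differ between Genesis versions, so treat this as an orientation sketch, not Proteus code.

```python
# Minimal Genesis scene: a sphere dropping onto a ground plane.
import genesis as gs

gs.init(backend=gs.cpu)              # use gs.gpu if available
scene = gs.Scene(show_viewer=False)
scene.add_entity(gs.morphs.Plane())                      # ground plane
ball = scene.add_entity(gs.morphs.Sphere(pos=(0, 0, 1), radius=0.1))
scene.build()

for _ in range(200):                 # step the sim; gravity pulls the ball down
    scene.step()
```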
2024-12-20T22:47:56
https://www.reddit.com/r/LocalLLaMA/comments/1hivwuq/proteus_a_genesisbased_llm/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hivwuq
false
null
t3_1hivwuq
/r/LocalLLaMA/comments/1hivwuq/proteus_a_genesisbased_llm/
false
false
self
34
{'enabled': False, 'images': [{'id': 'Dbr1ZpxOfnLhUKMj5J0sWCVe8aPrAXrn_z8DoGnn6uc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6zEJl7SZrO-KfdxznkzXAwv_OnMuJNTclFXZn-Cx8oc.jpg?width=108&crop=smart&auto=webp&s=a9e390ba24402297881239b1dd19f38d1bc1a036', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6zEJl7SZrO-KfdxznkzXAwv_OnMuJNTclFXZn-Cx8oc.jpg?width=216&crop=smart&auto=webp&s=48efaf5898e0aa2b724edb31d7975dbc8f0e95fc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6zEJl7SZrO-KfdxznkzXAwv_OnMuJNTclFXZn-Cx8oc.jpg?width=320&crop=smart&auto=webp&s=34cfca97d1e444a17e8cc63c124d7afa9d4dd696', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6zEJl7SZrO-KfdxznkzXAwv_OnMuJNTclFXZn-Cx8oc.jpg?width=640&crop=smart&auto=webp&s=59c6f113926de9252cd72065ebde245281f46ba2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6zEJl7SZrO-KfdxznkzXAwv_OnMuJNTclFXZn-Cx8oc.jpg?width=960&crop=smart&auto=webp&s=0cf765506d8a74eba28d921eed26eb0d582fcc11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6zEJl7SZrO-KfdxznkzXAwv_OnMuJNTclFXZn-Cx8oc.jpg?width=1080&crop=smart&auto=webp&s=fd470f96e71535fc872dedd9bd67bbcda7ae9ad3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6zEJl7SZrO-KfdxznkzXAwv_OnMuJNTclFXZn-Cx8oc.jpg?auto=webp&s=fede201dfefb81c55a95e99843cc6adc947df0f6', 'width': 1200}, 'variants': {}}]}
Making LLMs Reliable: Building an LLM-powered Web App to Generate Gift Ideas
0
2024-12-20T22:51:28
https://blog.dottxt.co/gifter.html
cameron_pfiffer
blog.dottxt.co
1970-01-01T00:00:00
0
{}
1hivzjh
false
null
t3_1hivzjh
/r/LocalLLaMA/comments/1hivzjh/making_llms_reliable_building_an_llmpowered_web/
false
false
default
0
null
I need a local chatbot that would perform well in customer support. I own an RTX 3060 6GB
1
[removed]
2024-12-20T22:51:54
https://www.reddit.com/r/LocalLLaMA/comments/1hivzua/i_need_a_local_chatbot_that_would_perform_good_in/
Ill_Baker_9712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hivzua
false
null
t3_1hivzua
/r/LocalLLaMA/comments/1hivzua/i_need_a_local_chatbot_that_would_perform_good_in/
false
false
self
1
null
Structured output and following field descriptions
0
I am playing around with the "Structured Outputs" feature of [Ollama](https://ollama.com/blog/structured-outputs) and [GPT-4o](https://openai.com/index/introducing-structured-outputs-in-the-api/). In both cases I am using the OpenAI client with the `response_format` option and provide it with a Pydantic model. As the fields are quite cryptic (medical jargon), I added a description to each field as well, like:

```python
class Findings(BaseModel):
    ecg_sync: bool = Field(
        ...,
        description="Was the CT performed with ECG synchronization/triggering?"
    )
```

From looking at the GPT-4o documentation it seems to be possible to add such a description (at least judging by the JSON schema examples), but the Ollama documentation doesn't mention whether this is respected. From my experiments (mostly with GPT-4o), it is only respected sometimes (I tried things like instructing it to add a fixed number to int values). But if I put the descriptions in the (system) prompt directly, they are much better respected. I wonder how those schemas are processed and why the description on the field itself is only sometimes respected (I made sure that it is always sent to the server, so that is not the problem).
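One way to check what the server actually receives is to dump the Pydantic (v2) JSON schema; if the descriptions are present there, any loss happens on the serving side, which matches the observation that duplicating them in the system prompt works better:

```python
# Pydantic v2 puts Field descriptions into model_json_schema(); whether
# they are honored is then up to the serving stack.
import json
from pydantic import BaseModel, Field

class Findings(BaseModel):
    ecg_sync: bool = Field(
        ...,
        description="Was the CT performed with ECG synchronization/triggering?",
    )

schema = Findings.model_json_schema()
print(json.dumps(schema, indent=2))  # "ecg_sync" carries its "description"

# The workaround the post found more reliable, done programmatically:
# fold the field descriptions into the system prompt as well.
field_notes = "\n".join(
    f"- {name}: {props.get('description', '')}"
    for name, props in schema["properties"].items()
)
system_prompt = f"Extract the report fields. Field meanings:\n{field_notes}"
```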
2024-12-20T22:56:07
https://www.reddit.com/r/LocalLLaMA/comments/1hiw2z7/structured_output_and_following_field_descriptions/
medihack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiw2z7
false
null
t3_1hiw2z7
/r/LocalLLaMA/comments/1hiw2z7/structured_output_and_following_field_descriptions/
false
false
self
0
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Arch (0.1.7) 🚀 Accurate multi-turn intent detection especially for follow-up questions. Fast (contextual) parameter extraction and function calling in <400ms (p50) early in the request lifecycle.
41
https://github.com/katanemo/archgw - an intelligent gateway for agents. Engineered with (fast) LLMs for the secure handling, rich observability, and seamless integration of prompts with functions/APIs - all outside business logic. Built by the core contributors of Envoy proxy, on Envoy.
2024-12-20T22:59:52
https://i.redd.it/qajs15xj338e1.jpeg
Terrible_Attention83
i.redd.it
1970-01-01T00:00:00
0
{}
1hiw5ps
false
null
t3_1hiw5ps
/r/LocalLLaMA/comments/1hiw5ps/arch_017_accurate_multiturn_intent_detection/
false
false
https://b.thumbs.redditm…h2kE4-UNNmsU.jpg
41
{'enabled': True, 'images': [{'id': 'QnehuRk7rn6rperqIrumAb8kGPAstxWg9RbFNo9gAzo', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/qajs15xj338e1.jpeg?width=108&crop=smart&auto=webp&s=2a2b10f5d684eae0858d64dbf378a5d8f3b050f8', 'width': 108}, {'height': 212, 'url': 'https://preview.redd.it/qajs15xj338e1.jpeg?width=216&crop=smart&auto=webp&s=25646210fe5c1232ec7d9f1be1fe7b452db4d147', 'width': 216}, {'height': 314, 'url': 'https://preview.redd.it/qajs15xj338e1.jpeg?width=320&crop=smart&auto=webp&s=4a3487eb30c604234b85b2aaf6911d9514d4881a', 'width': 320}, {'height': 629, 'url': 'https://preview.redd.it/qajs15xj338e1.jpeg?width=640&crop=smart&auto=webp&s=e4d8003f6defe13e5d2e73bcd2cabf9dee155099', 'width': 640}, {'height': 943, 'url': 'https://preview.redd.it/qajs15xj338e1.jpeg?width=960&crop=smart&auto=webp&s=c4b5c157b0a094cfa8482431f151e3c5ac256de2', 'width': 960}, {'height': 1061, 'url': 'https://preview.redd.it/qajs15xj338e1.jpeg?width=1080&crop=smart&auto=webp&s=807195f39bfd1eeaa6d988b49a3c231484939964', 'width': 1080}], 'source': {'height': 1573, 'url': 'https://preview.redd.it/qajs15xj338e1.jpeg?auto=webp&s=a88ee1548d330071243997945bfc0b095fe6c4d0', 'width': 1600}, 'variants': {}}]}
I need a local chatbot that would perform well in customer support. I own an RTX 3060 6GB
1
[removed]
2024-12-20T23:03:06
https://www.reddit.com/r/LocalLLaMA/comments/1hiw87v/i_need_a_local_chatbot_that_would_perform_good_in/
Ill_Baker_9712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiw87v
false
null
t3_1hiw87v
/r/LocalLLaMA/comments/1hiw87v/i_need_a_local_chatbot_that_would_perform_good_in/
false
false
self
1
null
RAG with JSON
3
Is anyone doing RAG with JSON? In theory, it's all text, but in practice, I suspect some models will handle JSON inputs better than others. Any favorite models?
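Model choice aside, one common approach is to flatten each JSON record into a "key: value" text chunk before embedding, so the retriever sees prose-like text instead of raw braces; a stdlib-only sketch:

```python
# Flatten nested JSON records into compact text chunks for embedding.
import json

def json_to_chunk(record: dict, prefix: str = "") -> str:
    lines = []
    for key, value in record.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            lines.append(json_to_chunk(value, path))   # recurse into objects
        else:
            lines.append(f"{path}: {json.dumps(value, ensure_ascii=False)}")
    return "\n".join(lines)

record = {"patient": {"age": 63, "smoker": False}, "finding": "nodule"}
print(json_to_chunk(record))
# patient.age: 63
# patient.smoker: false
# finding: "nodule"
```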
2024-12-20T23:21:29
https://www.reddit.com/r/LocalLLaMA/comments/1hiwlue/rag_with_json/
ivoras
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiwlue
false
null
t3_1hiwlue
/r/LocalLLaMA/comments/1hiwlue/rag_with_json/
false
false
self
3
null
I tried creating o3 with an open-source model
0
2024-12-20T23:38:13
https://huggingface.co/spaces/llamameta/OpenAi-O3-Preview-Mini
balianone
huggingface.co
1970-01-01T00:00:00
0
{}
1hiwxui
false
null
t3_1hiwxui
/r/LocalLLaMA/comments/1hiwxui/i_try_creating_o3_with_opensource_model/
false
false
https://b.thumbs.redditm…iCRdxinZ8JXo.jpg
0
{'enabled': False, 'images': [{'id': 'AtCbsXNaGFEauv489pmJ926FRtzg8DRIyg_jkemfacY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Tur_IpMbNR4Y-Ld7gc96VCIu23DgjyFFcH7jbqDWivU.jpg?width=108&crop=smart&auto=webp&s=d82ce86680c7ad524d2fb81d22b792ca52b7ceaf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Tur_IpMbNR4Y-Ld7gc96VCIu23DgjyFFcH7jbqDWivU.jpg?width=216&crop=smart&auto=webp&s=e20c6ea6792a3e04daa8fe33b10d2c6c17b73d0c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Tur_IpMbNR4Y-Ld7gc96VCIu23DgjyFFcH7jbqDWivU.jpg?width=320&crop=smart&auto=webp&s=3173a01a2723651f9d45a6e543c16afd7918f26a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Tur_IpMbNR4Y-Ld7gc96VCIu23DgjyFFcH7jbqDWivU.jpg?width=640&crop=smart&auto=webp&s=653f8956aa284c04f71670c4e4e6c3513cd7d241', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Tur_IpMbNR4Y-Ld7gc96VCIu23DgjyFFcH7jbqDWivU.jpg?width=960&crop=smart&auto=webp&s=2158e0277f426a0517c8ae48f0e411651daff9a2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Tur_IpMbNR4Y-Ld7gc96VCIu23DgjyFFcH7jbqDWivU.jpg?width=1080&crop=smart&auto=webp&s=8be046f61c9530a1e1fb63cf6a91f490db62541c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Tur_IpMbNR4Y-Ld7gc96VCIu23DgjyFFcH7jbqDWivU.jpg?auto=webp&s=80af1156c86b112547fa5c71655e45ca893f6af2', 'width': 1200}, 'variants': {}}]}
I created SwitchAI
10
With the rapid development of state-of-the-art AI models, it has become increasingly challenging to switch between providers once you start using one. Each provider has its own unique library and requires significant effort to understand and adapt your code. To address this problem, I created [SwitchAI](https://github.com/yelboudouri/SwitchAI), a Python library that offers a unified interface for interacting with various AI APIs. Whether you're working with text generation, embeddings, speech-to-text, or other AI functionalities, SwitchAI simplifies the process by providing a single, consistent library. SwitchAI is also an excellent solution for scenarios where you need to use multiple AI providers simultaneously. As an open-source project, I encourage you to explore it, use it, and contribute if you're interested!
2024-12-20T23:45:33
https://www.reddit.com/r/LocalLLaMA/comments/1hix34e/i_created_switchai/
LittleRedApp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hix34e
false
null
t3_1hix34e
/r/LocalLLaMA/comments/1hix34e/i_created_switchai/
false
false
self
10
{'enabled': False, 'images': [{'id': 'ZWhdkog6NBqXfoxZ21SQ5Xn90IsYkIoiMDbx2_I8HBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/94-OyJh8o8_9C-opj6fvjCU6OEgaw84FnciTxgxQWF8.jpg?width=108&crop=smart&auto=webp&s=95fde2ead2239932f603a6cbf14c40c81748b9f8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/94-OyJh8o8_9C-opj6fvjCU6OEgaw84FnciTxgxQWF8.jpg?width=216&crop=smart&auto=webp&s=acbdfbe6357f7afb91e1e56d46382cac3a2becc3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/94-OyJh8o8_9C-opj6fvjCU6OEgaw84FnciTxgxQWF8.jpg?width=320&crop=smart&auto=webp&s=174da7bdc7b4ae54c61df61e8a381f0c8f7f0174', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/94-OyJh8o8_9C-opj6fvjCU6OEgaw84FnciTxgxQWF8.jpg?width=640&crop=smart&auto=webp&s=b493980d96dddc346996b1d1e2193125371b72c1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/94-OyJh8o8_9C-opj6fvjCU6OEgaw84FnciTxgxQWF8.jpg?width=960&crop=smart&auto=webp&s=328c70bb1d1598d0a1bc3c149e6c0028572c250e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/94-OyJh8o8_9C-opj6fvjCU6OEgaw84FnciTxgxQWF8.jpg?width=1080&crop=smart&auto=webp&s=059c8c0d1a04f32ce54de42df800b95c9b710ec6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/94-OyJh8o8_9C-opj6fvjCU6OEgaw84FnciTxgxQWF8.jpg?auto=webp&s=aa1721649fa7d0d137f44cf544aeefbfb97a74c0', 'width': 1200}, 'variants': {}}]}
How to pack a model and a folder of local docs into a macOS app
1
[removed]
2024-12-20T23:53:33
https://www.reddit.com/r/LocalLLaMA/comments/1hix8r8/how_to_pack_a_model_and_a_folder_of_local_docs/
s0ulmachine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hix8r8
false
null
t3_1hix8r8
/r/LocalLLaMA/comments/1hix8r8/how_to_pack_a_model_and_a_folder_of_local_docs/
false
false
self
1
null
Llama3.3 70B experiences
3
I've been auditioning Llama 3.3 70B since it was released, and I have to say it's mostly disappointing, but it has some redeeming qualities. When given ambiguous instructions or some context without a preface, it simply regurgitates what is sent. It often fails to conduct a meta-analysis or infer beyond a superficial level. It's a little dumb. I do like its output style: simple, no frills. Beyond that, I think I'll stick with Qwen2.5 72B, which has been a daily driver for months now. What are your experiences with it like?
2024-12-21T00:52:52
https://www.reddit.com/r/LocalLLaMA/comments/1hiyen0/llama33_70b_experiences/
one-escape-left
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hiyen0
false
null
t3_1hiyen0
/r/LocalLLaMA/comments/1hiyen0/llama33_70b_experiences/
false
false
self
3
null
This person (most likely in this subreddit) knowingly harassed and hated an autistic trans person for being those two things, do with this information what you will.
0
2024-12-21T01:04:49
https://www.reddit.com/gallery/1hiymsy
Wild-Ant1384
reddit.com
1970-01-01T00:00:00
0
{}
1hiymsy
false
null
t3_1hiymsy
/r/LocalLLaMA/comments/1hiymsy/this_person_most_likely_in_this_subreddit/
false
false
https://a.thumbs.redditm…ulE_gov23yU8.jpg
0
null
Local o3.
0
Do any of you think we would be able to replicate a model like o3 locally? Specifically, I'm curious about the following:

- Hardware requirements: What kind of computing power would be necessary? Would a GPU cluster or a high-end local setup be enough?
- Software stack: Which frameworks and tools (e.g., PyTorch, TensorFlow, etc.) would be best for creating something comparable?
- Data requirements: How much data would be needed to train such a model effectively? Are there any publicly available datasets that could be a good starting point?
- Feasibility: Are there any significant challenges, like memory constraints, fine-tuning complexity, or limitations of working on a local machine, that would make this impractical?

Would love to hear your thoughts and insights!
2024-12-21T03:39:45
https://www.reddit.com/r/LocalLLaMA/comments/1hj1dhk/local_o3/
gamblingapocalypse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj1dhk
false
null
t3_1hj1dhk
/r/LocalLLaMA/comments/1hj1dhk/local_o3/
false
false
self
0
null
Asked QwQ what a black hole was. This was its thought process.
139
What is a black hole? QwQ:

**Thoughts**

So I've heard about black holes before, but I'm not exactly sure what they are. I know they have something to do with space and astronomy, but beyond that, my knowledge is a bit fuzzy. Maybe I should start by understanding what a black hole isn't, and then build up to what it actually is.

I think a black hole isn't just an empty space in the universe; it's not like a void or an absence of matter. Actually, from what I recall, black holes are quite the opposite—they contain an enormous amount of matter packed into a very small area. That must make their gravitational pull extremely strong, right? Yeah, I think that's a key aspect of black holes—their gravity is so powerful that nothing can escape it, not even light. That's why they're called "black": they don't emit or reflect any light.

But how do black holes form? I think they come from the collapse of massive stars. When a really big star runs out of fuel, it can't support itself anymore and collapses in on itself, creating a black hole. Is that correct? Maybe I should look into the life cycle of stars to understand this better.

Let me see: stars are huge balls of gas, mostly hydrogen, and they generate energy through nuclear fusion in their cores. Hydrogen atoms combine to form helium, releasing a tremendous amount of energy in the process. This energy creates outward pressure that counteracts the star's gravitational pull inward. It's like there's a balancing act going on.

But eventually, the star runs out of hydrogen fuel in its core, and it starts fusing heavier elements like helium into carbon, oxygen, and so on, depending on the star's mass. This process can continue until the star tries to fuse iron, which doesn't release energy; instead, it absorbs energy. That causes the star's core to collapse rapidly, leading to a supernova explosion.

Now, after this explosion, what's left depends on the original mass of the star. If the core is massive enough, nothing can stop its collapse, and it keeps shrinking down to a point where gravity becomes infinite, and space-time curves infinitely as well. That point is called a singularity, and it's at the center of a black hole.

So, a black hole has a singularity at its core, surrounded by an event horizon. The event horizon is like a boundary beyond which nothing can escape the black hole's gravity. It's the point of no return. If something crosses the event horizon, it's doomed to be pulled into the singularity.

I've also heard about different types of black holes. There are stellar black holes, which form from the collapse of individual massive stars, and then there are supermassive black holes, which are millions or even billions of times more massive than our sun and are found at the centers of galaxies, including our own Milky Way. How did these supermassive black holes form? Did they start as stellar black holes and grow over time by merging with other black holes or accreting matter? Or is there another mechanism at play? I'm not sure about that yet.

Another thing I've wondered about is whether black holes spin. I think they can, because the original star might have been rotating before it collapsed, and that rotation would be conserved in the black hole. Spinning black holes are probably more complex because their rotation could affect space-time around them.

Also, I've heard of something called gravitational waves, which were predicted by Einstein's theory of general relativity and recently detected by experiments like LIGO.
I believe these waves can be produced when two black holes spiral into each other and merge. That must create some intense gravitational ripples in space-time. But let's get back to the basics. What exactly is gravity, and how does it relate to black holes? According to general relativity, gravity isn't a force like magnetism or electricity; instead, it's a curvature of space-time caused by mass and energy. So, massive objects like planets and stars warp the fabric of space-time around them, and other objects move along those curved paths. In the case of black holes, since they have such an extreme concentration of mass in a tiny region, they create an incredible curvature of space-time, leading to the event horizon and the singularity. Now, I'm curious about how we can observe black holes if they don't emit any light. Since they're completely dark, how do astronomers know they exist? Well, one way is by observing their gravitational effects on nearby objects. For example, if there's a binary system where one star is orbiting an invisible companion, and we can see the star being pulled around by the unseen object's gravity, that could be a black hole. Also, as matter falls into a black hole, it forms an accretion disk around the event horizon. This disk can get extremely hot due to friction and collisions between particles, and it emits X-rays and other forms of radiation. So, by detecting these X-rays, astronomers can infer the presence of a black hole. There are also more direct ways to observe black holes, like capturing images of their event horizons. I recall that in 2019, the Event Horizon Telescope project released the first-ever image of a black hole's event horizon, specifically from a galaxy called M87. It was a groundbreaking achievement, and it looked like a ring of light surrounding a dark center. Seeing is believing, right? But even with that image, I still have trouble wrapping my head around the concept of a singularity. How can an object have infinite density? That seems impossible. Maybe the idea of a singularity is just a result of our current theories breaking down at extremely high densities and small scales. Perhaps in the future, when we have a theory of quantum gravity that unifies general relativity with quantum mechanics, we'll have a better understanding of what really happens at the center of a black hole. Until then, singularities remain a bit mysterious. Another mind-bending aspect of black holes is the concept of time dilation. According to general relativity, time passes differently in strong gravitational fields compared to weaker ones. So, close to a black hole's event horizon, time would appear to slow down for an outside observer. If someone were falling into a black hole, an outside observer would see them slowing down as they approach the event horizon, but for the person falling in, time would seem normal. This leads to some interesting thought experiments, like the "black hole information paradox." It's a problem in theoretical physics that questions what happens to information about the state of matter that falls into a black hole. According to quantum mechanics, information can't be destroyed, but according to general relativity, anything that crosses the event horizon is lost forever. So, there's a conflict between these two fundamental theories, and resolving it is one of the big challenges in physics today. Some proposed solutions involve ideas like holographic principles or modifications to our understanding of space and time. 
All of this is quite fascinating but also pretty complex. I think I've got a basic grasp of what black holes are—extremely dense objects with strong gravitational pulls that not even light can escape. They form from the collapse of massive stars and can be observed through their effects on surrounding matter and by detecting the radiation emitted by accretion disks. But there's still so much more to learn about them, especially regarding their inner workings and how they interact with the rest of the universe. Maybe one day, scientists will unlock the secrets of black holes and help us understand some of the most profound questions about the nature of space, time, and gravity. \*\*Final Answer\*\* A black hole is a region in space where the gravitational field is so strong that nothing, not even light, can escape from it. They typically form from the remnants of massive stars that have undergone supernova explosions, leading to a singularity enclosed by an event horizon. Black holes play a crucial role in astrophysics and are key to understanding gravity and the structure of the universe.
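For reference, the "point of no return" the model keeps describing has a simple closed form: the event horizon of a non-rotating black hole sits at the Schwarzschild radius. A worked equation (standard general relativity, not part of the model's output):

```latex
% Schwarzschild radius: event-horizon radius of a non-rotating black hole of mass M
r_s = \frac{2GM}{c^{2}}
% Worked example for one solar mass, M \approx 1.989 \times 10^{30}\ \mathrm{kg}:
% r_s \approx \frac{2 \cdot (6.674 \times 10^{-11}) \cdot (1.989 \times 10^{30})}{(2.998 \times 10^{8})^{2}} \approx 2.95\ \mathrm{km}
```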
2024-12-21T04:23:37
https://www.reddit.com/r/LocalLLaMA/comments/1hj23ji/asked_qwq_what_a_black_hole_was_this_was_its/
Corpo_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj23ji
false
null
t3_1hj23ji
/r/LocalLLaMA/comments/1hj23ji/asked_qwq_what_a_black_hole_was_this_was_its/
false
false
self
139
null
Decision Support System
1
[removed]
2024-12-21T04:27:06
https://www.reddit.com/r/LocalLLaMA/comments/1hj25i8/decision_support_system/
pkp_836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj25i8
false
null
t3_1hj25i8
/r/LocalLLaMA/comments/1hj25i8/decision_support_system/
false
false
self
1
null
What would be an optimal and power-efficient GPU setup for a home environment with a budget of around $10,000?
1
[removed]
2024-12-21T04:49:27
https://www.reddit.com/r/LocalLLaMA/comments/1hj2igz/what_would_be_an_optimal_and_powerefficient_gpu/
Lazy_Wedding_1383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj2igz
false
null
t3_1hj2igz
/r/LocalLLaMA/comments/1hj2igz/what_would_be_an_optimal_and_powerefficient_gpu/
false
false
self
1
null
What would be an optimal and power efficient GPU setup for a home with a budget around $10,000?
3
Can go up to $12,000 USD at most. I am looking for a solution that balances both performance and energy efficiency. Also please post your GPU configuration and the largest model you are able to run with it. Thanks
2024-12-21T04:53:58
https://www.reddit.com/r/LocalLLaMA/comments/1hj2kzp/what_would_be_an_optimal_and_power_efficient_gpu/
kitkatmafia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj2kzp
false
null
t3_1hj2kzp
/r/LocalLLaMA/comments/1hj2kzp/what_would_be_an_optimal_and_power_efficient_gpu/
false
false
self
3
null
Plug-and-play video generation enhancement
1
[removed]
2024-12-21T05:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1hj33tu/plugandplay_video_generation_enhancement/
Gold_Dingo_4116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj33tu
false
null
t3_1hj33tu
/r/LocalLLaMA/comments/1hj33tu/plugandplay_video_generation_enhancement/
false
false
self
1
null
Plug-and-play video generation enhancement
1
[removed]
2024-12-21T05:27:30
https://www.reddit.com/r/LocalLLaMA/comments/1hj34h8/plugandplay_video_generation_enhancement/
Gold_Dingo_4116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj34h8
false
null
t3_1hj34h8
/r/LocalLLaMA/comments/1hj34h8/plugandplay_video_generation_enhancement/
false
false
self
1
null
Video generation enhancement
1
[removed]
2024-12-21T05:29:02
https://www.reddit.com/r/LocalLLaMA/comments/1hj35cq/video_generation_enhancement/
Gold_Dingo_4116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj35cq
false
null
t3_1hj35cq
/r/LocalLLaMA/comments/1hj35cq/video_generation_enhancement/
false
false
self
1
null
Should be called ARC-GP (ARC-generalization power) then...
10
2024-12-21T05:57:13
https://imgur.com/7HRHSCT
Extension-Mastodon67
imgur.com
1970-01-01T00:00:00
0
{}
1hj3l42
false
null
t3_1hj3l42
/r/LocalLLaMA/comments/1hj3l42/should_be_called_arcgp_arcgeneralization_power/
false
false
https://b.thumbs.redditm…sUHbUB_YbHCk.jpg
10
{'enabled': False, 'images': [{'id': 'RwZI56u1Jd95if5ZXVmm27pcd04XPAQcO8XjOos_kFU', 'resolutions': [{'height': 23, 'url': 'https://external-preview.redd.it/DJjHAvDjYo487H27S31pRkzNsaVZX2ue4AlnWbdIaiU.jpg?width=108&crop=smart&auto=webp&s=cbc50b7cc53652b47647637c569eee03c0cbbce5', 'width': 108}, {'height': 47, 'url': 'https://external-preview.redd.it/DJjHAvDjYo487H27S31pRkzNsaVZX2ue4AlnWbdIaiU.jpg?width=216&crop=smart&auto=webp&s=3f8f1847b3acc0b066746353c9ea1b55f5df1016', 'width': 216}, {'height': 69, 'url': 'https://external-preview.redd.it/DJjHAvDjYo487H27S31pRkzNsaVZX2ue4AlnWbdIaiU.jpg?width=320&crop=smart&auto=webp&s=676b2a2f7b6dad6a1c9ccb0f42c0f550f481b7c0', 'width': 320}, {'height': 139, 'url': 'https://external-preview.redd.it/DJjHAvDjYo487H27S31pRkzNsaVZX2ue4AlnWbdIaiU.jpg?width=640&crop=smart&auto=webp&s=8dca3781c6c15672753eae4bc34bbf471fac8624', 'width': 640}, {'height': 209, 'url': 'https://external-preview.redd.it/DJjHAvDjYo487H27S31pRkzNsaVZX2ue4AlnWbdIaiU.jpg?width=960&crop=smart&auto=webp&s=cd6603e246c34f7bb9e6262fccfa11d69cc6f2e5', 'width': 960}, {'height': 235, 'url': 'https://external-preview.redd.it/DJjHAvDjYo487H27S31pRkzNsaVZX2ue4AlnWbdIaiU.jpg?width=1080&crop=smart&auto=webp&s=d03f957bbd9647cbf9984e9f19b33f29d769d000', 'width': 1080}], 'source': {'height': 299, 'url': 'https://external-preview.redd.it/DJjHAvDjYo487H27S31pRkzNsaVZX2ue4AlnWbdIaiU.jpg?auto=webp&s=df9215241e41a159fc119fc8e981b0a0ef69d97b', 'width': 1372}, 'variants': {}}]}
Python util for prompt engineering and optimization
1
[removed]
2024-12-21T06:30:51
https://www.reddit.com/r/LocalLLaMA/comments/1hj42tm/python_util_for_prompt_engineering_and/
ExcitingAmbition4283
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj42tm
false
null
t3_1hj42tm
/r/LocalLLaMA/comments/1hj42tm/python_util_for_prompt_engineering_and/
false
false
self
1
null
How to prompt InternVL 2.5 -> Prompt Decides the best output ??
0
My problem: **detect the label of a segmented object, given its bounding-box coordinates in an image.**

What I am doing behind the scenes: I take an input image, get a mask from SAM on click, compute the bounding box of each region or mask, and pass them dynamically into the prompt (region1, region2, ...). It is still giving false positives and wrong results, so I think the problem is with the prompt. How should I improve it?

My prompt looks like this:

**PROMPT:**

>Analyze the following regions in the image and classify each into one of the predefined categories. Provide the class detected and a brief explanation for each region. **Only respond to the regions provided below. Do not mention any additional regions.**

>### **Classes:**

>1. **Drywall**: Flat panels, typically blue or purple, used for interior walls. Covers moderate vertical wall areas where construction is ongoing.
>2. **Metallic Beam**: Structural elements made of shiny or matte metal, usually occupying small to moderate areas as part of the building frame.
>3. **Ceiling**: The overhead structure, often covered with insulation materials or reflective foil. Typically occupies a large portion of the image horizontally.
>4. **Floor**: The flat surface underfoot, often made of concrete, covering the largest area in the image horizontally.
>5. **Stud**: Vertical or horizontal metal or wooden supports, appearing in small to moderate areas, forming the skeleton for drywall or insulation panels.
>6. **External Sheet**: Weather-resistant panels on outer walls. Occupies small to moderate vertical areas, depending on the progress of construction.
>7. **White Pipe**: Smooth, cylindrical pipes, usually occupying a small portion of the image in various locations.
>8. **Horizontal Pipe**: Pipes running horizontally, part of the plumbing or HVAC systems. Typically occupy small to moderate areas.
>9. **Insulation Duct**: Reflective or insulated large ducts for air circulation, usually found in the ceiling area. Occupies small to moderate areas.
>10. **Duct Pipe**: Cylindrical air distribution pipes, typically running along the ceiling. Occupies small to moderate areas.

>### **Regions to Analyze:**

>- **Region 1**:
>  - **Bounding Box**: x_min: 2749, y_min: 420, x_max: 4019, y_max: 1690
>  - **Segmentation Area**: 1271083 pixels

>### **Expected Response Format:**

>- **Region 1**:
>  - **Class Detected**: [Your Classification Here]
>  - **Explanation**: [Your Description Here]

Please help me out. Thanks in advance.
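One thing worth checking before blaming the prompt wording: raw pixel coordinates like x_max: 4019 refer to the original image, while the model sees a resized version, so normalized coordinates (e.g. on a 0-1000 scale) tend to be less ambiguous. A minimal sketch of building the region block programmatically, assuming `boxes` comes from the SAM step (the helper name, normalization scale, and example image size are illustrative, not part of InternVL's API):

```python
def build_region_prompt(boxes, img_w, img_h, scale=1000):
    """Format SAM-derived boxes as normalized regions for the VLM prompt.

    boxes: list of (x_min, y_min, x_max, y_max) in original-image pixels.
    Coordinates are rescaled to a 0..scale range so they stay meaningful
    after the model resizes the image internally.
    """
    lines = []
    for i, (x0, y0, x1, y1) in enumerate(boxes, start=1):
        nx0, ny0 = round(x0 / img_w * scale), round(y0 / img_h * scale)
        nx1, ny1 = round(x1 / img_w * scale), round(y1 / img_h * scale)
        area = (x1 - x0) * (y1 - y0)  # pixel area of the box
        lines.append(
            f"- **Region {i}**: bounding box "
            f"({nx0}, {ny0}, {nx1}, {ny1}) on a 0-{scale} scale, "
            f"covering {area} pixels of the original image"
        )
    return "\n".join(lines)

# Example: the region from the prompt above, on a hypothetical 4032x3024 photo
print(build_region_prompt([(2749, 420, 4019, 1690)], img_w=4032, img_h=3024))
```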
2024-12-21T06:47:50
https://www.reddit.com/r/LocalLLaMA/comments/1hj4bfn/how_to_prompt_internvl_25_prompt_decides_the_best/
Hot-Hearing-2528
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj4bfn
false
null
t3_1hj4bfn
/r/LocalLLaMA/comments/1hj4bfn/how_to_prompt_internvl_25_prompt_decides_the_best/
false
false
self
0
null
How to FineTune InternVL 2.5 Model for my custom Dataset (Construction Classes)
1
Hi everyone. My problem statement: **fine-tune the InternVL 2.5 model so that it works better for my construction classes.**

I want to detect these classes, but the pretrained models are not as accurate on them as I would like. My classes are construction-specific:

1. `Dry wall`
2. `Insulation`
3. `Metal Beams`
4. `Ceiling`
5. `Floor`
6. `Studs`
7. `External sheets`
8. `Pipes` and so on

My guess is these classes are not well represented in the models' pretraining data. So I want to fine-tune the InternVL 2.5 8B model on my dataset so that, given just a text query, it detects the object reliably.

>(E.g.: Detect and describe the position of drywall in the above image.)

I don't know how to proceed. Can anybody help with the steps I should follow to fine-tune InternVL (or any VLM) on a custom dataset, or point me to a blog post that covers it? Thanks in advance.
2024-12-21T06:58:14
https://www.reddit.com/r/LocalLLaMA/comments/1hj4gl4/how_to_finetune_internvl_25_model_for_my_custom/
Hot-Hearing-2528
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj4gl4
false
null
t3_1hj4gl4
/r/LocalLLaMA/comments/1hj4gl4/how_to_finetune_internvl_25_model_for_my_custom/
false
false
self
1
null
Need a Solution for Cursor Control and Automating Text Input
1
[removed]
2024-12-21T07:26:41
https://www.reddit.com/r/LocalLLaMA/comments/1hj4uhz/need_a_solution_for_cursor_control_and_automating/
Puzzleheaded_Acadia1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj4uhz
false
null
t3_1hj4uhz
/r/LocalLLaMA/comments/1hj4uhz/need_a_solution_for_cursor_control_and_automating/
false
false
self
1
null
Qwq full version? Open source o3?
416
2024-12-21T07:30:17
https://i.redd.it/suvl2l7mm58e1.jpeg
Evening_Action6217
i.redd.it
1970-01-01T00:00:00
0
{}
1hj4w8b
false
null
t3_1hj4w8b
/r/LocalLLaMA/comments/1hj4w8b/qwq_full_version_open_source_o3/
false
false
https://b.thumbs.redditm…tOLym8iWSo7M.jpg
416
{'enabled': True, 'images': [{'id': 'avqSeyHHA592gIfac16eXkS_3tlAtpNbSR0DermGJVk', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/suvl2l7mm58e1.jpeg?width=108&crop=smart&auto=webp&s=d1ba5b909510b78f6d5c515da3bf8299f7119a10', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/suvl2l7mm58e1.jpeg?width=216&crop=smart&auto=webp&s=532150135ba7e1bb38eeb98373854bcaed41956f', 'width': 216}, {'height': 201, 'url': 'https://preview.redd.it/suvl2l7mm58e1.jpeg?width=320&crop=smart&auto=webp&s=c6cdf39c3f1dcbac77ada1c377d7ab101e80ab50', 'width': 320}, {'height': 402, 'url': 'https://preview.redd.it/suvl2l7mm58e1.jpeg?width=640&crop=smart&auto=webp&s=cad590fab1986b0ad3fa3f1b1b02c831db2130c6', 'width': 640}, {'height': 604, 'url': 'https://preview.redd.it/suvl2l7mm58e1.jpeg?width=960&crop=smart&auto=webp&s=5d5125afc3b1ee0fc24420cdc4a9bd5e507abbea', 'width': 960}, {'height': 680, 'url': 'https://preview.redd.it/suvl2l7mm58e1.jpeg?width=1080&crop=smart&auto=webp&s=60d8336b7e4e52e87bd4e8a3ebbc73642bdb2479', 'width': 1080}], 'source': {'height': 680, 'url': 'https://preview.redd.it/suvl2l7mm58e1.jpeg?auto=webp&s=376abba1478beefcd8d0208f3c12c5c831df37d1', 'width': 1080}, 'variants': {}}]}
Best Small LLMs for Real-World Use: Your Recommendations?
62
Hey everyone, We often hear about the behemoths of the language model world, but I'm curious about the smaller models that quietly impress. For those of us who don't have access to massive server farms, these compact powerhouses are where it's at. I'm especially interested in what you've found to be surprisingly effective within these size brackets: * **7-14B:** The sweet spot for many with a decent gaming setup. Which models in this range have stood out to you? * **1-3B:** For when you need something lean and mean that can run without a dedicated GPU. Any favorites that punch above their weight? Beyond just listing names, I'd love to hear why you favor a particular model in each size. What are its standout capabilities, and what do you actually use it for in practice? Personally, I've been exploring a few myself and have noticed some interesting strengths: Llama 3.1 8B seems like a solid all-rounder for general knowledge, while the Qwen2.5 models (both 7B and 14B) have been quite helpful with coding tasks. Gemma 2 9B has also caught my attention for its translation abilities. And honestly, the knowledge packed into Llama 3.2 3B is genuinely impressive for its size.
2024-12-21T07:39:26
https://www.reddit.com/r/LocalLLaMA/comments/1hj50f5/best_small_llms_for_realworld_use_your/
skyline159
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj50f5
false
null
t3_1hj50f5
/r/LocalLLaMA/comments/1hj50f5/best_small_llms_for_realworld_use_your/
false
false
self
62
null
Looking for GPU Server for 24/7 Operation
2
Hi! I'm looking for a GPU server running Windows for testing purposes. I'm not exactly sure what it's called, sorry about that. Basically, I need a virtual computer with remote access. Could you recommend a good and affordable service where it runs 24/7? The GPU needs to be at the level of a GTX 1080 or higher. Are there any providers you could suggest?
2024-12-21T08:06:01
https://www.reddit.com/r/LocalLLaMA/comments/1hj5d46/looking_for_gpu_server_for_247_operation/
neekerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj5d46
false
null
t3_1hj5d46
/r/LocalLLaMA/comments/1hj5d46/looking_for_gpu_server_for_247_operation/
false
false
self
2
null
Different models trained by h2o
1
[removed]
2024-12-21T08:43:43
https://www.reddit.com/r/LocalLLaMA/comments/1hj5uin/different_models_trained_by_h2o/
wdcgyujm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj5uin
false
null
t3_1hj5uin
/r/LocalLLaMA/comments/1hj5uin/different_models_trained_by_h2o/
false
false
self
1
null
Custom News Monitor
9
I would like to build a local setup so I can basically replace my news consumption with an LLM. I read Substack, follow YouTube channels, and read on Bluesky. Maybe it should also read new scientific papers in areas I am interested in. The content is multilingual. It would also be cool if it could log in to my newspaper subscription and read the stuff there. I would like the LLM to watch that stuff for me. It should summarize it, and I should be able to give it hints about what kind of stuff I find especially interesting. I want it to run overnight so that in the morning I have a nice report. Does anybody know about open-source projects that already solve parts of these problems, as a starting point for me?
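The overnight-summary core, at least, is small to prototype. A minimal sketch, assuming `feedparser` is installed and an Ollama server is running locally with a pulled model (the feed URL and model name are placeholders):

```python
import feedparser  # pip install feedparser
import requests

FEEDS = ["https://example.substack.com/feed"]  # placeholder feed URLs
MODEL = "llama3.2"                             # placeholder local model

def summarize(text: str) -> str:
    # Ollama's /api/generate endpoint; stream=False returns a single JSON object
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": MODEL, "stream": False,
              "prompt": f"Summarize this article in 3 bullet points:\n\n{text}"},
        timeout=300,
    )
    return r.json()["response"]

report = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries[:5]:
        body = entry.get("summary", entry.get("title", ""))
        report.append(f"## {entry.title}\n{summarize(body)}")

print("\n\n".join(report))  # run via cron overnight, read in the morning
```

Login-gated newspapers and YouTube transcripts would each need their own fetcher on top of this.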
2024-12-21T08:51:36
https://www.reddit.com/r/LocalLLaMA/comments/1hj5yam/custom_news_monitor/
mysteryhumpf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj5yam
false
null
t3_1hj5yam
/r/LocalLLaMA/comments/1hj5yam/custom_news_monitor/
false
false
self
9
null
Questions about "Train-time Compute"
5
I have been trying to follow the articles on test-time compute and have a couple of questions:

* What does "train-time" compute mean? Does it just mean how accuracy improves with training?
* Is there a search happening during training? If yes, what does it mean?
* Assuming test-time scaling is the way to go, do these approaches have anything to do with training? Can we set up training in a way that improves test-time performance?
2024-12-21T09:10:09
https://i.redd.it/aqu0qhqf468e1.jpeg
Low_Tour_4060
i.redd.it
1970-01-01T00:00:00
0
{}
1hj6714
false
null
t3_1hj6714
/r/LocalLLaMA/comments/1hj6714/questions_about_traintime_compute/
false
false
https://a.thumbs.redditm…Ahlf26NJiiy8.jpg
5
{'enabled': True, 'images': [{'id': 'ZX7MSbTIdl1HVdDmnw2eW0XCWURwhNle2a4EzGSLg00', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/aqu0qhqf468e1.jpeg?width=108&crop=smart&auto=webp&s=0e16e1f64e0565fd83526ad116a189cf852b48b2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/aqu0qhqf468e1.jpeg?width=216&crop=smart&auto=webp&s=65c5bc3466d4c46dedbc6cac23bfd0d5b7fa33e3', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/aqu0qhqf468e1.jpeg?width=320&crop=smart&auto=webp&s=e0dd2cda821a5528c5b0e181231cb0f152a7cff8', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/aqu0qhqf468e1.jpeg?width=640&crop=smart&auto=webp&s=e5475a3469544c5349a62a2e7778a8970d9b3431', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/aqu0qhqf468e1.jpeg?width=960&crop=smart&auto=webp&s=238011a5e86cc15105dadd2786f77cd7960df78c', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/aqu0qhqf468e1.jpeg?width=1080&crop=smart&auto=webp&s=6786d4144b72f7b245c804ec24d8c6daa85aa307', 'width': 1080}], 'source': {'height': 1113, 'url': 'https://preview.redd.it/aqu0qhqf468e1.jpeg?auto=webp&s=13f3497794994dbf673bdbdb9f9adbb9d0be31ea', 'width': 1980}, 'variants': {}}]}
Accelerating LLM Inference on NVIDIA GPUs with ReDrafter
19
2024-12-21T09:36:35
https://machinelearning.apple.com/research/redrafter-nvidia-tensorrt-llm
Hopeful-Site1162
machinelearning.apple.com
1970-01-01T00:00:00
0
{}
1hj6j14
false
null
t3_1hj6j14
/r/LocalLLaMA/comments/1hj6j14/accelerating_llm_inference_on_nvidia_gpus_with/
false
false
https://b.thumbs.redditm…eJnmYb21zAak.jpg
19
{'enabled': False, 'images': [{'id': 'D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=108&crop=smart&auto=webp&s=cdcbdf7d4e054676a9ea185723b2cca1b298211b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=216&crop=smart&auto=webp&s=236ee2cb0fd51c8feb2185840b9b4c5339cb0ba1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=320&crop=smart&auto=webp&s=30825797867938b50b42671226c9c7da51a9f448', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=640&crop=smart&auto=webp&s=08a86235cd9b365c08749230f9302dd340fba50b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=960&crop=smart&auto=webp&s=1bab2db17030b48eb06b2b7c20f33a46e36e69e4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=1080&crop=smart&auto=webp&s=1e1801db787237252688dda8bbf280b56afb151f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?auto=webp&s=57f21999f40fdbca7f025acb8aa88a0534c88097', 'width': 1200}, 'variants': {}}]}
Local multimodal?
2
I'm trying my best to keep up, but there's so much marketing and so many models that I lost track again, so sorry for the newbie question. What local model can interpret images and also sound? Is there one now? I am looking for a local alternative to 4o streaming, but one-by-one prompting with text and images would also work if I do the sound processing beforehand. This is to provide a "kitchen copilot" to which I can show ingredients, ask questions, and get answers. Thank you for your time.
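For the one-by-one route, the image half is already doable locally. A minimal sketch, assuming Ollama with a vision-capable model such as llava pulled (the model name is an example, not a recommendation):

```python
import base64
import requests

def ask_about_photo(image_path: str, question: str) -> str:
    # Ollama's generate endpoint accepts base64-encoded images for vision models
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llava", "prompt": question,
              "images": [img_b64], "stream": False},
        timeout=300,
    )
    return r.json()["response"]

# Example: point it at a photo of the counter
print(ask_about_photo("ingredients.jpg", "What could I cook with these ingredients?"))
```

Sound could then be handled separately by a local speech-to-text model (e.g. Whisper) that produces the question text.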
2024-12-21T09:50:43
https://www.reddit.com/r/LocalLLaMA/comments/1hj6pob/local_multimodal/
ComprehensiveBird317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj6pob
false
null
t3_1hj6pob
/r/LocalLLaMA/comments/1hj6pob/local_multimodal/
false
false
self
2
null
So how does o3 compare to o1 in a fair comparison?
48
I feel like most of it might be hype. They trained o3 on the ARC-AGI train set (unlike o1?) and threw $1.6M in compute cost at it to show 85%. They also spent lots of compute on the FrontierMath benchmark. We don't know how other models would perform under similar conditions. Maybe Gemini 2.0 Flash Thinking or o1 are smarter or dumber, but we just don't know. Or do we? Is there any fair comparison out there that is not just hype? Do you have thoughts on how it would do in a fair competition?
2024-12-21T09:54:55
https://www.reddit.com/r/LocalLLaMA/comments/1hj6rkh/so_how_does_o3_compare_to_o1_in_a_fair_comparison/
__Maximum__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj6rkh
false
null
t3_1hj6rkh
/r/LocalLLaMA/comments/1hj6rkh/so_how_does_o3_compare_to_o1_in_a_fair_comparison/
false
false
self
48
null
OpenAI's o3 marketing crap
0
Am I the only one who thinks that OpenAI's new o3 system is just marketing crap? They have done a semi-private evaluation and are waiting on a new Chinese open-source model. o3 is not available for evaluation. Until it is, this is all marketing crap.
2024-12-21T09:57:51
https://www.reddit.com/r/LocalLLaMA/comments/1hj6sz6/openais_o3_marketing_crap/
DarKresnik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj6sz6
false
null
t3_1hj6sz6
/r/LocalLLaMA/comments/1hj6sz6/openais_o3_marketing_crap/
false
false
self
0
null
Encoder (BERT) model inference on Android
14
While small LLMs are now actually becoming usable on phones, they are still a huge waste of resources for many tasks where a BERT model could shine. BERT is also a lot easier to fine-tune on custom tasks. Then there is ModernBERT now, which looks really promising, with 8k context length and all. I'm looking for recommendations for inference engines or frameworks for Android, ideally something that can make use of accelerator hardware when present.
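ONNX Runtime is one common route: it ships Android packages with an NNAPI execution provider, and exporting a BERT-family model takes a few lines with Optimum. A minimal export sketch, assuming `optimum[onnxruntime]` is installed (the model ID is just an example):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example model

# export=True converts the PyTorch checkpoint to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The saved model.onnx can then be loaded by ONNX Runtime's Android packages
model.save_pretrained("bert-onnx")
tokenizer.save_pretrained("bert-onnx")
```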
2024-12-21T10:06:57
https://www.reddit.com/r/LocalLLaMA/comments/1hj6xba/encoder_bert_model_inference_on_android/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj6xba
false
null
t3_1hj6xba
/r/LocalLLaMA/comments/1hj6xba/encoder_bert_model_inference_on_android/
false
false
self
14
null
we now desperately need SOTA reasoning model from Meta
1
[removed]
2024-12-21T11:23:43
https://www.reddit.com/r/LocalLLaMA/comments/1hj7ydd/we_now_desperately_need_sota_reasoning_model_from/
Various-Operation550
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj7ydd
false
null
t3_1hj7ydd
/r/LocalLLaMA/comments/1hj7ydd/we_now_desperately_need_sota_reasoning_model_from/
false
false
self
1
null
I need a local chatbot that would perform good in customer support. I own RTX 3060 6GB
0
I need a chatbot oriented toward customer support and fixing grammar or logical issues in sentences, fast. Would I actually be able to generate stuff quickly with my GPU?
2024-12-21T11:34:07
https://www.reddit.com/r/LocalLLaMA/comments/1hj83kf/i_need_a_local_chatbot_that_would_perform_good_in/
Ill_Baker_9712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj83kf
false
null
t3_1hj83kf
/r/LocalLLaMA/comments/1hj83kf/i_need_a_local_chatbot_that_would_perform_good_in/
false
false
self
0
null
Looking for a photo analysis model
6
I just got about 1000 old baby pictures scanned, and they came back in all sorts of orientations and colors (faded, sepia from age, etc.). I'm looking for a model that can analyze them and tell me (or actually implement) the changes needed to get them to a decent state before I have to touch them manually. I checked the regular search engines but couldn't find anything that wasn't a SaaS with a fee attached (and one photo at a time, i.e. manual), which is not what I'm looking for. I would also rather do this locally and not send the photos out. I have the photos both locally and in Google Photos, if that helps.
2024-12-21T11:47:09
https://www.reddit.com/r/LocalLLaMA/comments/1hj8a3s/looking_for_a_photo_analysis_model/
aram535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj8a3s
false
null
t3_1hj8a3s
/r/LocalLLaMA/comments/1hj8a3s/looking_for_a_photo_analysis_model/
false
false
self
6
null
how to finetune reasoning model open source like QWQ etc
0
[removed]
2024-12-21T11:51:41
https://www.reddit.com/r/LocalLLaMA/comments/1hj8cce/how_to_finetune_reasoning_model_open_source_like/
Opening_Opinion5067
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj8cce
false
null
t3_1hj8cce
/r/LocalLLaMA/comments/1hj8cce/how_to_finetune_reasoning_model_open_source_like/
false
false
self
0
null
[Video] On the significance of the Konwinski prize | $1 Million USD Open Source Coding Agent Challenge
14
2024-12-21T12:07:35
https://www.youtube.com/watch?v=KAcomQhZ-0A
raymyers
youtube.com
1970-01-01T00:00:00
0
{}
1hj8kth
false
{'oembed': {'author_name': 'nopilot', 'author_url': 'https://www.youtube.com/@nopilot-dev', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/KAcomQhZ-0A?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Win the Konwinski Prize! | Million Dollar Coding Agent"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/KAcomQhZ-0A/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Win the Konwinski Prize! | Million Dollar Coding Agent', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hj8kth
/r/LocalLLaMA/comments/1hj8kth/video_on_the_significance_of_the_konwinski_prize/
false
false
https://a.thumbs.redditm…XouPhajjYbU8.jpg
14
{'enabled': False, 'images': [{'id': '2nX7YqTncgOFLPzh24JXpP1I-VXaMzF9g_9apbZ78zg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tO3ddPx3_zhy8B2YGHyFagDT9FF5fnhBB6Kcn0-tw6Q.jpg?width=108&crop=smart&auto=webp&s=6df4f17474256669fd9ae12fc1183e64efd5ab22', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tO3ddPx3_zhy8B2YGHyFagDT9FF5fnhBB6Kcn0-tw6Q.jpg?width=216&crop=smart&auto=webp&s=30729fd0dfa729ef6d71bd8bb707756ccc7653d7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tO3ddPx3_zhy8B2YGHyFagDT9FF5fnhBB6Kcn0-tw6Q.jpg?width=320&crop=smart&auto=webp&s=4f363b12b6eb93683cefc58e18c5932a8c0c3ecb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tO3ddPx3_zhy8B2YGHyFagDT9FF5fnhBB6Kcn0-tw6Q.jpg?auto=webp&s=8a998474de348598df7c5d6f70350b98f083710a', 'width': 480}, 'variants': {}}]}
Brute Force Over Innovation? My Thoughts on o1-Pro and o3
273
I’ve been pondering o1-pro and o3, and honestly, I’m not convinced there’s anything groundbreaking happening under the hood. From what I’ve seen, they’re mostly using brute force approaches—starting with chain-of-thought reasoning and now trying tree-of-thought—along with some clever engineering. It works, but it doesn’t feel like a big leap forward in terms of LLM architecture or training methods. That being said, I think this actually highlights some exciting potential for **local LLMs**. It shows that with some smart optimization, we can get a lot more out of high-end gaming GPUs, even with VRAM limitations. Maybe this is a sign that local models could start catching up in meaningful ways. The benchmark scores for these models are impressive, but the **cost scaling** numbers have me raising an eyebrow. It feels like there’s a disconnect between the hype and what’s actually sustainable at scale. Curious if anyone else has similar thoughts, or maybe a different perspective?
2024-12-21T12:09:29
https://www.reddit.com/r/LocalLLaMA/comments/1hj8lrt/brute_force_over_innovation_my_thoughts_on_o1pro/
anzzax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hj8lrt
false
null
t3_1hj8lrt
/r/LocalLLaMA/comments/1hj8lrt/brute_force_over_innovation_my_thoughts_on_o1pro/
false
false
self
273
null
Been trying to chat with base LLMs for a while (no RLHF, etc.), making some progress!
93
2024-12-21T12:13:00
https://v.redd.it/2072meos078e1
funiculares
v.redd.it
1970-01-01T00:00:00
0
{}
1hj8nnk
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2072meos078e1/DASHPlaylist.mpd?a=1737375196%2CNmE2NzU4Njg2YjNlZjc0NmMyZWRjNzAxMTUzZjEzODc3Njg2OWI1ZjVkY2EwZjA2ZDUwMzlmYTdiZGJhNmQ2NA%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/2072meos078e1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 960, 'hls_url': 'https://v.redd.it/2072meos078e1/HLSPlaylist.m3u8?a=1737375196%2CNTdjZDE1NDNhNzU5MWU3MWU5MTA5OTE0MmEwOWUzZTBlNjQzODc5YzJhMzZiODE5ODgwMTE4ZjBmMDFiODBmMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2072meos078e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1hj8nnk
/r/LocalLLaMA/comments/1hj8nnk/been_trying_to_chat_with_base_llms_for_a_while_no/
false
false
https://external-preview…b7b47ca8d220166a
93
{'enabled': False, 'images': [{'id': 'ZXJwendqbnMwNzhlMRFyTs_mnqvwrXHXNggrXT44t6H_I9H613ZkfWGoN4Gz', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/ZXJwendqbnMwNzhlMRFyTs_mnqvwrXHXNggrXT44t6H_I9H613ZkfWGoN4Gz.png?width=108&crop=smart&format=pjpg&auto=webp&s=b3a28614bf3da837fb45de3554abcae2be12e5f5', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/ZXJwendqbnMwNzhlMRFyTs_mnqvwrXHXNggrXT44t6H_I9H613ZkfWGoN4Gz.png?width=216&crop=smart&format=pjpg&auto=webp&s=292ce03afc0b58ce00aee2e8f9ef22746349e874', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/ZXJwendqbnMwNzhlMRFyTs_mnqvwrXHXNggrXT44t6H_I9H613ZkfWGoN4Gz.png?width=320&crop=smart&format=pjpg&auto=webp&s=3dc6a7009b79c709ef339a4fd0788dcd97ef932e', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/ZXJwendqbnMwNzhlMRFyTs_mnqvwrXHXNggrXT44t6H_I9H613ZkfWGoN4Gz.png?width=640&crop=smart&format=pjpg&auto=webp&s=471fc8b639c1dcd8ba36f77ed4adf970f7f101e6', 'width': 640}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZXJwendqbnMwNzhlMRFyTs_mnqvwrXHXNggrXT44t6H_I9H613ZkfWGoN4Gz.png?format=pjpg&auto=webp&s=3c733342090bfae620e8cbf1141f23a881504f6e', 'width': 810}, 'variants': {}}]}
I swear I didn't ask it to repeat anything...
1
2024-12-21T12:33:08
https://i.redd.it/gxyxjdkl478e1.png
lIlI1lII1Il1Il
i.redd.it
1970-01-01T00:00:00
0
{}
1hj8yba
false
null
t3_1hj8yba
/r/LocalLLaMA/comments/1hj8yba/i_swear_i_didnt_ask_it_to_repeat_anything/
false
false
https://b.thumbs.redditm…o9VbZMhBXe8A.jpg
1
{'enabled': True, 'images': [{'id': 'qleiajddUxzeGR3e6MeOJoeI6NFmrnXcEQbmXeqGcqE', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/gxyxjdkl478e1.png?width=108&crop=smart&auto=webp&s=ce93260c77caa100da44e0442871f761a7bc5023', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/gxyxjdkl478e1.png?width=216&crop=smart&auto=webp&s=eae4286be91a3b56a8501ee648bb88446e112f6b', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/gxyxjdkl478e1.png?width=320&crop=smart&auto=webp&s=5e81e9638f82d0756dd1c29f356d0af3a6f48c55', 'width': 320}, {'height': 423, 'url': 'https://preview.redd.it/gxyxjdkl478e1.png?width=640&crop=smart&auto=webp&s=88a33cb419095b9110ac817d26dde4ddf9a32132', 'width': 640}, {'height': 635, 'url': 'https://preview.redd.it/gxyxjdkl478e1.png?width=960&crop=smart&auto=webp&s=05438a03b4297ee060b52cc63790710e65818f81', 'width': 960}, {'height': 714, 'url': 'https://preview.redd.it/gxyxjdkl478e1.png?width=1080&crop=smart&auto=webp&s=8703efa7469919add0a3f313487009ba58334914', 'width': 1080}], 'source': {'height': 1178, 'url': 'https://preview.redd.it/gxyxjdkl478e1.png?auto=webp&s=cfe13436a6c13b66db12b02d293605ca247a5790', 'width': 1780}, 'variants': {}}]}
Any recommendations for medical LLMs, especially for radiology?
1
[removed]
2024-12-21T14:34:35
https://www.reddit.com/r/LocalLLaMA/comments/1hjb0wa/any_recommendations_for_medical_llms_especially/
Equivalent_Set_1999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjb0wa
false
null
t3_1hjb0wa
/r/LocalLLaMA/comments/1hjb0wa/any_recommendations_for_medical_llms_especially/
false
false
self
1
null
Day 12 of Open AI Summary
1
2024-12-21T14:55:07
https://v.redd.it/zh6bk7vhs78e1
Heisinic
/r/LocalLLaMA/comments/1hjbet0/day_12_of_open_ai_summary/
1970-01-01T00:00:00
0
{}
1hjbet0
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/zh6bk7vhs78e1/DASHPlaylist.mpd?a=1737514529%2CZjIxMjQ3MjRiMTRhZjI4OGJmNTgyMzFkNTJhMDFjODc4YjVjNDI3MTdhM2Y0Njg0NDRiNzFkZGE2NGIzYjlkNQ%3D%3D&v=1&f=sd', 'duration': 107, 'fallback_url': 'https://v.redd.it/zh6bk7vhs78e1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/zh6bk7vhs78e1/HLSPlaylist.m3u8?a=1737514529%2CNDBlNTg3ZjIyMmNkZjAxZWY1M2YyMjkwMzM2MDIyNmE1MjhhODIxODRhMTA3M2Q0YmFhZjhkNzIxNTAxNjdhOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zh6bk7vhs78e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 848}}
t3_1hjbet0
/r/LocalLLaMA/comments/1hjbet0/day_12_of_open_ai_summary/
false
false
https://external-preview…ed0a17fc568e1ef6
1
{'enabled': False, 'images': [{'id': 'dXJpdGhjdmhzNzhlMUF7XeIqC_InNKPtSavHqdvvVWE2STPXlCfxbdr7qcsP', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/dXJpdGhjdmhzNzhlMUF7XeIqC_InNKPtSavHqdvvVWE2STPXlCfxbdr7qcsP.png?width=108&crop=smart&format=pjpg&auto=webp&s=2955787eb1ab095357e1e2fabf5ba2a387e74c0b', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/dXJpdGhjdmhzNzhlMUF7XeIqC_InNKPtSavHqdvvVWE2STPXlCfxbdr7qcsP.png?width=216&crop=smart&format=pjpg&auto=webp&s=6c36d685ba343fc25c3558cd25c20b331d4a0d7e', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/dXJpdGhjdmhzNzhlMUF7XeIqC_InNKPtSavHqdvvVWE2STPXlCfxbdr7qcsP.png?width=320&crop=smart&format=pjpg&auto=webp&s=d5949b4d91478a310d2ca89ea86e9c2f2ad1567e', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/dXJpdGhjdmhzNzhlMUF7XeIqC_InNKPtSavHqdvvVWE2STPXlCfxbdr7qcsP.png?width=640&crop=smart&format=pjpg&auto=webp&s=e45564edba23833b0a513e3737c796ba092a8616', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/dXJpdGhjdmhzNzhlMUF7XeIqC_InNKPtSavHqdvvVWE2STPXlCfxbdr7qcsP.png?format=pjpg&auto=webp&s=463562cab5be6a772f1069803193f5b2bc8908a8', 'width': 848}, 'variants': {}}]}
llama 3.3 70B instruct ablated (decensored)
75
I wanted to share with the community this release of an [ablated](https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction) version of Llama 3.3 (70B) Instruct. This way the assistant will refuse requests less often. We landed on layer 10 as the candidate, but wanted to explore other attempts and learnings. The release on HF: [Llama-3.3-70B-Instruct-ablated](https://huggingface.co/NaniDAO/Llama-3.3-70B-Instruct-ablated).
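For anyone curious what the ablation does mechanically: per the linked post, refusal is mediated by a single direction in the residual stream, and ablation projects that direction out of the hidden states. A minimal PyTorch sketch of that projection as a forward hook, assuming a precomputed unit-norm refusal direction (this illustrates the technique in general, not the exact script used for this release):

```python
import torch

def make_ablation_hook(refusal_dir: torch.Tensor):
    """Remove the component along refusal_dir from a layer's hidden states."""
    d = refusal_dir / refusal_dir.norm()  # ensure unit norm

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # subtract the projection of every hidden state onto the refusal direction
        hidden = hidden - (hidden @ d).unsqueeze(-1) * d
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return hook

# Hypothetical usage on a Hugging Face Llama model, hooking layer 10's output:
# handle = model.model.layers[10].register_forward_hook(make_ablation_hook(refusal_dir))
```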
2024-12-21T14:56:04
https://www.reddit.com/r/LocalLLaMA/comments/1hjbfe0/llama_33_70b_instruct_ablated_decensored/
ro5ssss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjbfe0
false
null
t3_1hjbfe0
/r/LocalLLaMA/comments/1hjbfe0/llama_33_70b_instruct_ablated_decensored/
false
false
self
75
{'enabled': False, 'images': [{'id': 'oYRnFr2AaooJ90hoIR5rnhOS6n4J8YdQ1RTRXDp7ZC4', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=108&crop=smart&auto=webp&s=6ba9184844005aa98afb60bd51667287f60dc270', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=216&crop=smart&auto=webp&s=47cd1297cb95d335c2e047312bad9e780d249701', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=320&crop=smart&auto=webp&s=a4c0484e65966c8414ff93c7f30432bc53a45473', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=640&crop=smart&auto=webp&s=708b20b4ca742f23c32c88e75a1840ad8e604e81', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=960&crop=smart&auto=webp&s=0189549ac903b6a3da0079015f4c5c11a5db727c', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?width=1080&crop=smart&auto=webp&s=8be7f16a9e8a8fd9cbb645948dc262d7f90dc002', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/I4qbLX8o5_Z2RtR80xk6arKsKai2vkiFs9Uyr2Bp-pE.jpg?auto=webp&s=0920c0d81c11764a8c8466f83cc9f1d5386d22b2', 'width': 4200}, 'variants': {}}]}
Should All LLM Predictions Use Equal Computation Power?
0
2024-12-21T15:06:12
https://www.linkedin.com/pulse/should-all-llm-predictions-use-equal-computation-power-seungduk-kim-slwlc/?trackingId=bVGTXt2eRWylzF7v%2F5q8Rg%3D%3D
OldPin8654
linkedin.com
1970-01-01T00:00:00
0
{}
1hjbmo9
false
null
t3_1hjbmo9
/r/LocalLLaMA/comments/1hjbmo9/should_all_llm_predictions_use_equal_computation/
false
false
default
0
null
What Makes LLMs Tick: Can We Decode Their Learning Process?
0
2024-12-21T15:07:05
https://www.linkedin.com/pulse/what-makes-llms-tick-can-we-decode-learning-process-kim-pjfkc/?trackingId=5JUmfTYwQ%2FarjW4kGkQm%2BA%3D%3D
OldPin8654
linkedin.com
1970-01-01T00:00:00
0
{}
1hjbnaj
false
null
t3_1hjbnaj
/r/LocalLLaMA/comments/1hjbnaj/what_makes_llms_tick_can_we_decode_their_learning/
false
false
default
0
null
Reasons I think o3 is a game changer and will have positive effects on local models
62
In a talk by F. Chollet [1], he explains the premise and aims of the ARC-AGI benchmark:

1. Solving hard unseen problems requires mode-2 thinking, i.e. searching for the right answer by chaining steps together.
2. Solving an already seen or known problem takes one attempt, i.e. mode-1: the first intuitive guess, or just a lookup in memory.
3. Solving an unseen problem from a known pattern of problems requires mode-2 search with mode-1 giving great intuitive first guesses. Mode-2 then only has to explore a small subset of paths that mode-1 suggests, which can often fit in the context window. (This is what I think o1-like models do.)
4. ARC-AGI was designed to have hard unseen problems with potentially unseen patterns as well. Therefore any model that solves these problems will require serious rabbit-hole exploration with expert mode-1 guesses and intuitions, potentially having to come up with new patterns. (My example would be the mathematician who had the one novel idea or guess that helped solve Fermat's Last Theorem. I don't think he reasoned his way into the idea; he intuited it, and reasoned out the proof after the intuition.)

My conclusions or predictions:

1. Anything o1 can do, local models under 100GB can do and will be able to do, with some engineering and hacks involved. This is point 3 from above: local models can solve unseen problems with seen or known patterns, and potentially make intuitive leaps for some kinds of unknown patterns.
2. The difference between o1 and o3 is scale, and by definition local models can't scale, and we don't want to spend thousands of dollars on solving really hard problems.
3. o3 and similar models will start solving novel problems.
4. You don't need o3 to solve trivial problems. Local models are enough.
5. Here's the hidden treasure, I think. Once o3 has solved novel problems and the solutions and transcripts are published, local models can incorporate that information, and it will become part of their mode-1 capability or intuitions. So this is great news, as long as large companies publish their transcripts for novel problems. An analogy: I'm not the best software developer and can't come up with the best algorithms, but if the best software developer releases a great algorithm, I can use it as well as anyone else.
6. A local model that has incorporated new knowledge can solve that family of problems without needing to scale like o3.

[1] https://youtu.be/s7_NlkBwdj8?si=Eqf0HlK52mhSS9kq
2024-12-21T16:08:33
https://www.reddit.com/r/LocalLLaMA/comments/1hjcwbv/reasons_i_think_o3_is_a_game_changer_and_will/
keepawayb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjcwbv
false
null
t3_1hjcwbv
/r/LocalLLaMA/comments/1hjcwbv/reasons_i_think_o3_is_a_game_changer_and_will/
false
false
self
62
{'enabled': False, 'images': [{'id': 'CRL_K2eCtEyjbLw6_gM74gU9RLIjkKh4uGz718u5IlM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_lU6w8qBgUYlyN01RddEh8wMZNIoslacPOT1HRreOmg.jpg?width=108&crop=smart&auto=webp&s=9d7c96d7de3734b726ba4043fb152edb1988bdcc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_lU6w8qBgUYlyN01RddEh8wMZNIoslacPOT1HRreOmg.jpg?width=216&crop=smart&auto=webp&s=4ef839b977421b5c53a3ad0de98f3e2012943f2e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_lU6w8qBgUYlyN01RddEh8wMZNIoslacPOT1HRreOmg.jpg?width=320&crop=smart&auto=webp&s=ad36daf095458ab4490b280e6032a6a509da26f6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_lU6w8qBgUYlyN01RddEh8wMZNIoslacPOT1HRreOmg.jpg?auto=webp&s=6e1b69a8fbba87bde568db72231fdec25fbd85c4', 'width': 480}, 'variants': {}}]}
where is phi4 ??
72
I heard that it's coming out this week.
2024-12-21T16:15:07
https://www.reddit.com/r/LocalLLaMA/comments/1hjd1ae/where_is_phi4/
Mean-Neighborhood-42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjd1ae
false
null
t3_1hjd1ae
/r/LocalLLaMA/comments/1hjd1ae/where_is_phi4/
false
false
self
72
null
How to Maintain Document Identity for Cross-Document Analysis and Comparisons? (RAG Especially)
1
[removed]
2024-12-21T16:29:50
https://www.reddit.com/r/LocalLLaMA/comments/1hjdc0c/how_to_maintain_document_identity_for/
PleasantInspection12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjdc0c
false
null
t3_1hjdc0c
/r/LocalLLaMA/comments/1hjdc0c/how_to_maintain_document_identity_for/
false
false
self
1
null
Recommended setup for local llama autocomplete service
1
I'm playing with making an autocomplete service using llama.cpp. The 8B 3.2 instruct-tuned model is good quality, but on my hardware (3090) it takes about ~5s per request, so there seems to be room for optimization. As it is autocomplete, 99% of the prompt prefix stays the same between keystrokes. llama-cli supports a "prompt cache", but it has to reload the model weights on every invocation. Does anyone have a better solution? Another question: does anyone know where I can find benchmarks (speed and quality) of the different Llama model quantizations on instruction-following tasks?
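On the first question, one approach that avoids reloading weights is llama.cpp's server: it keeps the model resident and can reuse the KV cache for an unchanged prefix. A minimal sketch, assuming a local `llama-server` listening on port 8080 (the `cache_prompt` flag on the `/completion` endpoint does the prefix reuse; parameters may vary between builds):

```python
import requests

def complete(prefix: str, max_tokens: int = 32) -> str:
    # cache_prompt=True lets the server reuse the KV cache for the unchanged
    # prefix, so each keystroke only pays for the newly typed characters
    r = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": prefix, "n_predict": max_tokens,
              "cache_prompt": True, "temperature": 0.2},
        timeout=60,
    )
    return r.json()["content"]

print(complete("def fibonacci(n):\n    "))
```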
2024-12-21T16:43:49
https://www.reddit.com/r/LocalLLaMA/comments/1hjdmei/recommended_setup_for_local_llama_autocomplete/
Smart-Waltz-5594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjdmei
false
null
t3_1hjdmei
/r/LocalLLaMA/comments/1hjdmei/recommended_setup_for_local_llama_autocomplete/
false
false
self
1
null
those two guys
381
2024-12-21T17:12:55
https://i.redd.it/58er2apdi88e1.jpeg
Wrong_User_Logged
i.redd.it
1970-01-01T00:00:00
0
{}
1hje9bg
false
null
t3_1hje9bg
/r/LocalLLaMA/comments/1hje9bg/those_two_guys/
false
false
https://b.thumbs.redditm…SyZskHTN1KDE.jpg
381
{'enabled': True, 'images': [{'id': 'MPZpTC0_OtWpM1kubCYMLCeqUunj37bxsjMrgxObJvc', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/58er2apdi88e1.jpeg?width=108&crop=smart&auto=webp&s=edb80b4456f2bcbdcf3ae9d7a324200e59df777a', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/58er2apdi88e1.jpeg?width=216&crop=smart&auto=webp&s=8baa47e980a46e3539a657da36d27f5195b1ad95', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/58er2apdi88e1.jpeg?width=320&crop=smart&auto=webp&s=e123b68a38121f0167951227ff57ed37dbd4ec14', 'width': 320}, {'height': 344, 'url': 'https://preview.redd.it/58er2apdi88e1.jpeg?width=640&crop=smart&auto=webp&s=d798e3498915560b1fa340b54489d2971d56140d', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/58er2apdi88e1.jpeg?auto=webp&s=32730970290ae2f268d176a2f1370cc60d084934', 'width': 928}, 'variants': {}}]}
Groq's LLMs are the best for testing the workflow
1
[removed]
2024-12-21T17:32:50
https://i.redd.it/kpsnoqb4m88e1.png
Available-Stress8598
i.redd.it
1970-01-01T00:00:00
0
{}
1hjeonm
false
null
t3_1hjeonm
/r/LocalLLaMA/comments/1hjeonm/groqs_llms_are_the_best_for_testing_the_workflow/
false
false
https://b.thumbs.redditm…2F84XOvvelXI.jpg
1
{'enabled': True, 'images': [{'id': 'p1RkX4AUoivJ5lI6_U9RO-ck5FH1yiWfvgTnTPrCblI', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/kpsnoqb4m88e1.png?width=108&crop=smart&auto=webp&s=81a313f7965920034b57e531facb8dab4b99cf52', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/kpsnoqb4m88e1.png?width=216&crop=smart&auto=webp&s=ac7a51a05ac4c26b08df0c119bab7d543378af3d', 'width': 216}, {'height': 410, 'url': 'https://preview.redd.it/kpsnoqb4m88e1.png?width=320&crop=smart&auto=webp&s=4a8100955c8439041058fa40a88f91c5cc44e97d', 'width': 320}, {'height': 821, 'url': 'https://preview.redd.it/kpsnoqb4m88e1.png?width=640&crop=smart&auto=webp&s=60fa88f1bbae1ebd47bfde6fb097a2885557962a', 'width': 640}, {'height': 1232, 'url': 'https://preview.redd.it/kpsnoqb4m88e1.png?width=960&crop=smart&auto=webp&s=3044583787b995e0232225b9f9a9fbbcbeb15c09', 'width': 960}, {'height': 1387, 'url': 'https://preview.redd.it/kpsnoqb4m88e1.png?width=1080&crop=smart&auto=webp&s=89a392235ebba7fefb774711afc13b75c9f0779a', 'width': 1080}], 'source': {'height': 1387, 'url': 'https://preview.redd.it/kpsnoqb4m88e1.png?auto=webp&s=741b7ba158e57cd66d74c7e084af6c9ed68b73fc', 'width': 1080}, 'variants': {}}]}
Many local users will be fine with leaving their PC running overnight, I wonder how well the 8b coding models can perform when given 12 hours of inference time.
104
Does anyone have an educated guess, based on the scaling of the o3 models, on how an 8B model's performance could scale if we are willing to leave it running for 12 hours or 3 days?
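For a rough sense of what "spending 12 hours" even means locally: the simplest trade of wall-clock time for quality is repeated sampling plus a majority vote (self-consistency), or a verifier such as a test suite. A minimal sketch, assuming an OpenAI-compatible local server (the URL and model name are placeholders):

```python
from collections import Counter
import requests

def sample(prompt: str, n: int = 64) -> str:
    """Self-consistency: sample n answers at nonzero temperature, keep the mode."""
    answers = []
    for _ in range(n):  # for an overnight run, make n very large
        r = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={"model": "local-8b", "temperature": 0.8,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=600,
        )
        answers.append(r.json()["choices"][0]["message"]["content"].strip())
    return Counter(answers).most_common(1)[0][0]
```

For code specifically, replacing the vote with "first sample that passes the tests" tends to scale better with extra samples.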
2024-12-21T17:44:48
https://www.reddit.com/r/LocalLLaMA/comments/1hjey1e/many_local_users_will_be_fine_with_leaving_their/
OnurCetinkaya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjey1e
false
null
t3_1hjey1e
/r/LocalLLaMA/comments/1hjey1e/many_local_users_will_be_fine_with_leaving_their/
false
false
self
104
null
How are some LLMs so good at OCR while on the other hand dedicated OCR solutions can't outperform them
198
Total newbie to the LLM/AI field. AFAIK they are meant for text input and output, so if they just had images and annotations in the training data, how did they get better than dedicated tech? And can we really decode how OCR works internally in LLMs, or is it another black-box problem?
2024-12-21T18:10:59
https://www.reddit.com/r/LocalLLaMA/comments/1hjfirl/how_are_some_llms_so_good_at_ocr_while_on_the/
ThiccStorms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjfirl
false
null
t3_1hjfirl
/r/LocalLLaMA/comments/1hjfirl/how_are_some_llms_so_good_at_ocr_while_on_the/
false
false
self
198
null
What is the maximum number of prompts you run in production?
1
[removed]
2024-12-21T18:29:00
https://www.reddit.com/r/LocalLLaMA/comments/1hjfwn7/what_is_the_maximum_number_of_prompts_you_run_in/
phantom69_ftw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjfwn7
false
null
t3_1hjfwn7
/r/LocalLLaMA/comments/1hjfwn7/what_is_the_maximum_number_of_prompts_you_run_in/
false
false
self
1
null
AI agents business idea
0
Give me your honest opinion about creating a business where you create customized prompts for people's needs and teach them how to use them most efficiently (probably through simple courses). It's not for experts but for regular people; every prompt would be specifically designed for their own needs. I would analyze them with some psychological tests, ask them certain questions about how they would like to be addressed, etc. The potential here is really high. What do you think about it: is it technically possible, or rather wishful thinking?
2024-12-21T18:50:15
https://www.reddit.com/r/LocalLLaMA/comments/1hjgd3z/ai_agents_business_idea/
Illustrious_Gene3930
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjgd3z
false
null
t3_1hjgd3z
/r/LocalLLaMA/comments/1hjgd3z/ai_agents_business_idea/
false
false
self
0
null
Benchmark Saturation? Here’s a 10-Task benchmark humans can do today but AI won’t be able to before 2026. (Much less localAI)
1
[removed]
2024-12-21T19:07:44
https://www.reddit.com/r/LocalLLaMA/comments/1hjgqog/benchmark_saturation_heres_a_10task_benchmark/
xRolocker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjgqog
false
null
t3_1hjgqog
/r/LocalLLaMA/comments/1hjgqog/benchmark_saturation_heres_a_10task_benchmark/
false
false
self
1
null
PoSe
1
Can someone explain this paper? https://arxiv.org/pdf/2309.10400
2024-12-21T19:10:15
https://www.reddit.com/r/LocalLLaMA/comments/1hjgsj8/pose/
Sufficient-Smile-481
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjgsj8
false
null
t3_1hjgsj8
/r/LocalLLaMA/comments/1hjgsj8/pose/
false
false
self
1
null
Noob SFT advice (Qwen using unsloth)
3
Would love it if someone could point me in the right direction (documentation, tutorials, etc.). I'm trying to run SFT on a Qwen2.5 Coder Instruct model. My goal is for it to learn a proprietary programming language and development environment. I have a dataset of input/output pairs (about 90,000) that I generated by taking the (quite thorough) technical documentation and generating ~5 natural language questions for each subtopic of the documentation. So it's basically the same output for 5 different inputs. Documentation excerpts are about 2 paragraphs on average and vary between general information, code samples, and setup/configuration guides. I ran 1 epoch on the 3B model using a LoRA adapter (I'm using unsloth). I eventually plan to fine-tune the larger 7B and 32B models, but this was just a test I could run on my local GPU. Problem is: after 1 epoch the model is objectively much worse! While before it would provide highly hallucinated but at least nicely written and nicely formatted answers, now it generates trash answers with bits and pieces of the documentation from the training data along with hallucinations. But the biggest issue is that it gets stuck in infinite generation all the time, while the base model with the same system/user prompts never did (running inference with llama.cpp). What am I doing wrong?
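One frequent cause of the infinite-generation symptom (not the only possible one) is a missing end-of-sequence token in the training targets: if EOS never appears in the examples, the fine-tune unlearns stopping. A minimal sketch of the usual fix in an unsloth setup like the one described, assuming the pairs are already rendered into strings (the dataset field names and model ID are illustrative):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "Qwen/Qwen2.5-Coder-3B-Instruct", max_seq_length=4096, load_in_4bit=True
)

def format_example(example):
    # Render with the model's own chat template, then make sure the text
    # ends with EOS so the model keeps learning to stop generating.
    text = tokenizer.apply_chat_template(
        [{"role": "user", "content": example["question"]},      # illustrative fields
         {"role": "assistant", "content": example["answer"]}],
        tokenize=False,
    )
    if not text.endswith(tokenizer.eos_token):
        text += tokenizer.eos_token
    return {"text": text}
```

It is also worth checking that inference uses the same chat template as training; a template mismatch produces exactly the "bits and pieces plus hallucinations" output described.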
2024-12-21T19:10:43
https://www.reddit.com/r/LocalLLaMA/comments/1hjgsw5/noob_sft_advice_qwen_using_unsloth/
indicava
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjgsw5
false
null
t3_1hjgsw5
/r/LocalLLaMA/comments/1hjgsw5/noob_sft_advice_qwen_using_unsloth/
false
false
self
3
null
Best approach to get unknown column's value from known dataset using local LLM?
1
[removed]
2024-12-21T19:16:52
https://www.reddit.com/r/LocalLLaMA/comments/1hjgxhu/best_approach_to_get_unknown_columns_value_from/
E17089
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjgxhu
false
null
t3_1hjgxhu
/r/LocalLLaMA/comments/1hjgxhu/best_approach_to_get_unknown_columns_value_from/
false
false
self
1
null
Custom Llama 3.3 Instruct Quantization
21
I've been really liking Llama 3.3 instruct, even at only 2.25 bpw, which fits on a single 24GB GPU at 28k context. Even at such a low bpw, the performance for RP seems surprisingly good. I was curious whether I could squeeze even more out of such a low quant by optimizing the settings during quantization. I used a custom multi-turn dataset pre-formatted with Llama instruct tokens added for each turn. I also extended the sample context size to 16k. The context of each conversation ranges from 10k-20k, but given the way exllama concatenates samples, I'm not sure how much that actually helps. Here is my resulting model: [https://huggingface.co/openerotica/Llama-3.3-70B-Instruct-multiturn-exl2-2.25bpw](https://huggingface.co/openerotica/Llama-3.3-70B-Instruct-multiturn-exl2-2.25bpw) And here is what I've been testing it against: [https://huggingface.co/LoneStriker/Llama-3.3-70B-Instruct-2.25bpw-h6-exl2](https://huggingface.co/LoneStriker/Llama-3.3-70B-Instruct-2.25bpw-h6-exl2) So far, I haven't noticed a huge difference. But I also wasn't seeing a huge difference between 2.25 bpw and 4.65 bpw, even though I'm sure there is a difference that might be more noticeable for non-RP tasks. I'm really not interested in solving complicated logic problems, coding, or abstract benchmarks. I just want to keep the model coherent and creative at long contexts while fitting on one GPU. One thing I regret is not increasing the number of samples. I seemed to run out of both RAM and VRAM when putting it higher than 100 samples (H200 SXM on RunPod), even though 100 samples at 16k context did not seem even close to maxing out the system. So am I just spinning my wheels? It's so hard to find information about things like this, and it's kind of hard to believe that nobody has definitively worked this kind of thing out. Does quantization context length have an effect on long-context inference? Does increasing the number of samples increase quality or performance for lower-bpw quants? How much does the quantization dataset matter? Was it dumb to try to create a multi-turn tokenized dataset? Other than just playing with both models in specific role-play scenarios, is there any other kind of real-world benchmark I could use to more easily see differences between quantization runs? I am extremely skeptical that the popular benchmarks have any bearing on "roleplay quality", but I'm definitely open to suggestions. If there are any heavy RP users here, I'd be willing to give out some blind API keys for each model for public testing, listing each model as something like "model A" and "model B". Maybe even throw in a "model C" at 4.65 bpw and see how hard it is to tell between quantization bits when you don't know which model you're using.
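For reference, here is roughly how the knobs above map onto exllamav2's converter. A sketch, assuming a recent exllamav2 checkout and a parquet calibration file (the paths are placeholders, and flag names can drift between versions):

```python
import subprocess

# exllamav2's convert.py: -b target bits, -c calibration parquet,
# -l tokens per calibration row, -r number of calibration rows
subprocess.run([
    "python", "convert.py",
    "-i", "models/Llama-3.3-70B-Instruct",  # placeholder input path
    "-o", "work/",                          # scratch/working directory
    "-cf", "out/llama-3.3-70b-2.25bpw",     # compiled output directory
    "-b", "2.25",
    "-c", "multiturn_cal.parquet",          # custom multi-turn calibration set
    "-l", "16384",                          # sample context length
    "-r", "100",                            # number of calibration rows
], check=True)
```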
2024-12-21T19:32:31
https://www.reddit.com/r/LocalLLaMA/comments/1hjh9aq/custom_llama_33_instruct_quantization/
Scam_Altman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjh9aq
false
null
t3_1hjh9aq
/r/LocalLLaMA/comments/1hjh9aq/custom_llama_33_instruct_quantization/
false
false
self
21
{'enabled': False, 'images': [{'id': '3HGvlUNXKCe4GqxJjBRQFDNmMay4O4yX9xWOqoGyMwc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/79DTctLXdKpGl6H9_Rl5hi06hdefnXON1fvp-S6P1_4.jpg?width=108&crop=smart&auto=webp&s=5ffde5bf9b072acc0cf00e7639a8cd8a872ccd39', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/79DTctLXdKpGl6H9_Rl5hi06hdefnXON1fvp-S6P1_4.jpg?width=216&crop=smart&auto=webp&s=b62d270b45478c16a0cc344bedba9048e564cbfc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/79DTctLXdKpGl6H9_Rl5hi06hdefnXON1fvp-S6P1_4.jpg?width=320&crop=smart&auto=webp&s=7d7226bef8eda60d9bea10b47e1eaf994780c0d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/79DTctLXdKpGl6H9_Rl5hi06hdefnXON1fvp-S6P1_4.jpg?width=640&crop=smart&auto=webp&s=64852aec906f11b745f71d77ceceb25995590a56', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/79DTctLXdKpGl6H9_Rl5hi06hdefnXON1fvp-S6P1_4.jpg?width=960&crop=smart&auto=webp&s=93a0bd335f3f922e0cca65b3b65b8165d72d8ffd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/79DTctLXdKpGl6H9_Rl5hi06hdefnXON1fvp-S6P1_4.jpg?width=1080&crop=smart&auto=webp&s=fe8876ebef2adddaf3fe4de7a55455460a82894a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/79DTctLXdKpGl6H9_Rl5hi06hdefnXON1fvp-S6P1_4.jpg?auto=webp&s=e27393d92d560f04602c9c870e653d0f1055e323', 'width': 1200}, 'variants': {}}]}
Should I get another GPU?
3
Hey, I'm new to running local LLMs and stuff. I have an RTX 3070 Super which I mainly use for my self-hosted services, like: * Jellyfin transcoding * Tdarr transcoding * Frigate transcoding * Immich transcoding And that card can handle all of those in parallel. But today I started playing with Open WebUI and Ollama, as I wanted to use that with the new Home Assistant voice assistant hardware, and to my (ignorant) surprise, even a simple LLM like llama3.2 filled half the VRAM, and anything fancier would either fill the entire VRAM or overflow it and run very slowly. So now I have the following questions: 1. If I get another GPU, can I just add it and have my services use whichever card has enough space? 2. What GPU is suggested? (Assume I have 1000s of $ just laying around) 3. My system has 512GB of RAM. Can that be used instead of VRAM somehow?
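On questions 1 and 3, a hedged note: Ollama can already spill layers into system RAM when VRAM runs out (at a large speed cost), and its num_gpu option caps how many layers stay on the GPU. A small sketch using the ollama Python client; the layer count is an illustrative assumption, not a tuned recommendation:

```python
# Sketch: cap GPU layers so the rest of the model sits in system RAM.
# Expect a big slowdown for the RAM-resident layers.
import ollama

resp = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Hello!"}],
    options={"num_gpu": 20},  # keep ~20 layers on the GPU, rest in RAM
)
print(resp["message"]["content"])
```

As far as I know, Ollama will also split a single model across all visible GPUs on its own, while the transcoding workloads generally need to be pointed at a specific device.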
2024-12-21T19:40:19
https://www.reddit.com/r/LocalLLaMA/comments/1hjhezk/should_i_get_another_gpu/
TomerHorowitz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjhezk
false
null
t3_1hjhezk
/r/LocalLLaMA/comments/1hjhezk/should_i_get_another_gpu/
false
false
self
3
null
Lost with what is the best build for my AI use case.
8
I see lots of LLM builds recommending 3090/4090/5090 cards due to their VRAM, but the cost and availability are garbage, and this is mostly based on the assumption that you want really fast token generation (16-60+ tokens a second). Since I want to use a 70B model with 40k-128k context for fantasy novel writing and don't plan on training the AI myself, I feel this is a waste of money as I wouldn't be utilizing the processing power. Currently I'm budgeting about $2k to upgrade my build, so given the context size I'm wondering if it's best to go the CPU route and get a Ryzen 9950X with 128GB DDR5-6000 CL30, or even 192GB with 48GB sticks. Currently I have a 3080 10GB with 64GB DDR4-3200 CL16, and a 2070 just sitting around. My goal is 3 tokens a second of generation, as this is basically a slow reading speed. Since this is also a general-use PC I think going Threadripper is impractical, though I also do video editing on occasion so it may still be an option. Lastly, how practical would it be to go with dual 7900 XTX? Is it a waste due to the large context and poor optimization? Thanks.
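For a concrete point of reference on the 3 tokens/sec target: llama.cpp can run a 70B GGUF quant mostly from system RAM with a partial offload to the existing 3080. A minimal llama-cpp-python sketch, where the file name, quant choice, and layer count are illustrative assumptions rather than recommendations:

```python
# Sketch of partial GPU offload: most layers stay in system RAM, a slice
# goes to the 10GB 3080. Throughput will be dominated by RAM bandwidth.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.3-70b-instruct.Q4_K_M.gguf",  # hypothetical file
    n_ctx=40960,       # the 40k context target
    n_gpu_layers=16,   # however many layers fit in 10GB VRAM
    n_threads=16,
)
out = llm("Write the opening paragraph of a fantasy novel.", max_tokens=256)
print(out["choices"][0]["text"])
```

One caveat: at 40k-128k context the KV cache takes substantial memory on top of the weights, so llama.cpp's quantized-cache options and faster RAM may matter as much as the CPU choice.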
2024-12-21T20:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1hji7je/lost_with_what_is_the_best_build_for_my_ai_use/
Massive-Question-550
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hji7je
false
null
t3_1hji7je
/r/LocalLLaMA/comments/1hji7je/lost_with_what_is_the_best_build_for_my_ai_use/
false
false
self
8
null
How did you optimize llama for your use case?
1
[removed]
2024-12-21T20:25:46
https://www.reddit.com/r/LocalLLaMA/comments/1hjicto/how_did_you_optimize_llama_for_your_use_case/
Anarchosyndikalismus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjicto
false
null
t3_1hjicto
/r/LocalLLaMA/comments/1hjicto/how_did_you_optimize_llama_for_your_use_case/
false
false
self
1
null
Best Setups to imitate Cursor/Windsurf in Vscode?
1
[removed]
2024-12-21T20:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1hjitiq/best_setups_to_imitate_cursorwindsurf_in_vscode/
Soumil30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjitiq
false
null
t3_1hjitiq
/r/LocalLLaMA/comments/1hjitiq/best_setups_to_imitate_cursorwindsurf_in_vscode/
false
false
self
1
null
Total Noob Trying to Set Up a Local AI – Where Do I Start?
1
[removed]
2024-12-21T21:09:01
https://www.reddit.com/r/LocalLLaMA/comments/1hjj8iq/total_noob_trying_to_set_up_a_local_ai_where_do_i/
tremorscary
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hjj8iq
false
null
t3_1hjj8iq
/r/LocalLLaMA/comments/1hjj8iq/total_noob_trying_to_set_up_a_local_ai_where_do_i/
false
false
self
1
null