Dataset schema (column: dtype, observed range):
title: string, length 1 to 300
score: int64, 0 to 8.54k
selftext: string, length 0 to 40k
created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
url: string, length 0 to 878
author: string, length 3 to 20
domain: string, length 0 to 82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
gilded: int64, 0 to 2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646 to 1.8k
name: string, length 10
permalink: string, length 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4 to 213
ups: int64, 0 to 8.54k
preview: string, length 301 to 5.01k
Lift Yourself
1
[removed]
2025-01-16T22:51:23
https://www.reddit.com/r/LocalLLaMA/comments/1i31qmy/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31qmy
false
null
t3_1i31qmy
/r/LocalLLaMA/comments/1i31qmy/lift_yourself/
false
false
self
1
null
Task queue app for multiple sessions / models / distributed runners
6
I would like to use a system where I can add multiple machines as model runners, e.g. one with a GPU for small models, and one with large memory but CPU only for larger models. Of course the CPU-only machine will be super slow, but the point is the larger model's output will be much better. When I submit a task, the system could schedule it for each model and put it in their respective queues. The machines would pull these tasks and burn through them as they can, submitting the results. The UI would collect these and present them as they become available. Is there any software that does this?
2025-01-16T22:51:53
https://www.reddit.com/r/LocalLLaMA/comments/1i31r0k/task_queue_app_for_multiple_sessions_models/
yelling-at-clouds-40
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31r0k
false
null
t3_1i31r0k
/r/LocalLLaMA/comments/1i31r0k/task_queue_app_for_multiple_sessions_models/
false
false
self
6
null
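The post above describes a pull model: runners fetch work from per-model queues and push results back. Below is a minimal sketch of such a runner loop; the coordinator address and its two endpoints (`GET /tasks/next`, `POST /results`) are hypothetical, not any existing product's API.

```python
# Minimal sketch of a runner's pull loop against a hypothetical coordinator.
import time
import requests

COORDINATOR = "http://coordinator.local:8080"  # hypothetical address
QUEUE = "cpu-large"  # this runner's queue, e.g. the big-RAM CPU-only box

def run_model(prompt: str) -> str:
    # Placeholder: call your local inference server (llama.cpp, vLLM, ...) here.
    return f"echo: {prompt}"

while True:
    resp = requests.get(f"{COORDINATOR}/tasks/next", params={"queue": QUEUE}, timeout=30)
    if resp.status_code == 204:  # nothing queued; back off and poll again
        time.sleep(5)
        continue
    task = resp.json()
    result = run_model(task["prompt"])  # slow runners drain their queue at their own pace
    requests.post(f"{COORDINATOR}/results",
                  json={"task_id": task["id"], "queue": QUEUE, "result": result},
                  timeout=30)
```

The pull direction is the key design choice: the coordinator never needs to know how fast each runner is, since each box simply drains its own queue.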
How to use chat templates for multicharacter roleplays?
9
I have implemented my own roleplay front-end for KoboldCpp. In contrast to SillyTavern and BackyardAI, my approach is not character-centric but rather scenario-centric. Both the AI and the user can control multiple characters, and the AI makes its own choice of who should speak next. At first, I did not even bother to figure out how to use chat templates. I just send a simple example dialogue to the LLM together with my scenario: Bob: Hi! Anna: Hello! Then I launch the generation and poll the API to check for the result. I look for a valid `Character Name:` marker in the response and allow only the characters that are set up for AI control. If I receive a second character marker, I stop the generation to avoid the infamous "speaking for others" issue, and clean up the response to remove the unnecessary text. I'm testing it now and even Llama 3.2 3B seems to work quite OK with this setup. However, I've heard that some models benefit from system prompts, and, as I understand it, to pass the system prompt to the model I need to use the proper chat template for the specific model. And now we come to the root of the problem. **Chat templates seem to be centered on the idea of only two parties - the user and the assistant. I have more parties. How would I encode their messages in a chat template?** A naive approach would be to send the system prompt with the proper formatting for the template, and then just dump the entire accumulated context with the scenario, character descriptions and all the chat messages into a single "assistant" message, ignoring the user part of the template completely. But wouldn't this make the model less smart and less obedient to the scenario than if I separated the chat messages and created a single assistant (or user) message for every character's reply? What are the practical effects of the chat template on inference quality? Is the chat template just a convenient wrapper to properly separate messages in more complex situations, or does it actually improve the model's behavior?
2025-01-16T22:54:17
https://www.reddit.com/r/LocalLLaMA/comments/1i31sxj/how_to_use_chat_templates_for_multicharacter/
martinerous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31sxj
false
null
t3_1i31sxj
/r/LocalLLaMA/comments/1i31sxj/how_to_use_chat_templates_for_multicharacter/
false
false
self
9
null
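One answer suggested by the post's own framing: keep the `Name: text` convention inside the message content and fold turns into the two template roles, so the system prompt travels in its proper slot. A sketch with Hugging Face `transformers`; the model here is only an example, and assigning AI-controlled characters to "assistant" is one possible convention, not a rule.

```python
# Sketch: multi-character turns mapped onto a two-role chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

messages = [
    {"role": "system", "content": "Scenario: Bob and Anna meet at a cafe. "
                                  "You control Anna; the user controls Bob. "
                                  "Always answer as `Name: text`."},
    {"role": "user", "content": "Bob: Hi!"},
    {"role": "assistant", "content": "Anna: Hello!"},
    {"role": "user", "content": "Bob: How have you been?"},
]

# Render the exact prompt string the model will see, with the template applied.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)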
basic shit
1
[removed]
2025-01-16T23:06:33
https://www.reddit.com/r/LocalLLaMA/comments/1i322je/basic_shit/
input_output_stream3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i322je
false
null
t3_1i322je
/r/LocalLLaMA/comments/1i322je/basic_shit/
false
false
self
1
null
Why tools like Perplexity can't accurately give me custom percentages?
1
[removed]
2025-01-16T23:59:52
https://www.reddit.com/r/LocalLLaMA/comments/1i3377x/why_tools_like_perplexity_cant_accurately_give_me/
vamos-viendo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3377x
false
null
t3_1i3377x
/r/LocalLLaMA/comments/1i3377x/why_tools_like_perplexity_cant_accurately_give_me/
false
false
self
1
null
Agentic AI learning resources
1
Looking for resources to learn how to use agentic ai to automate workflows.
2025-01-17T00:18:29
https://www.reddit.com/r/LocalLLaMA/comments/1i33lea/agentic_ai_learning_resources/
akbfs826
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i33lea
false
null
t3_1i33lea
/r/LocalLLaMA/comments/1i33lea/agentic_ai_learning_resources/
false
false
self
1
null
GPU Enclosure Experiences?
3
Sorry for the noob question, but will an eGPU enclosure work as well for LLM loading as it would for gaming? I have a 4070 Ti that's incompatible with my PC (the OEM XPS PSU can't handle it). The card I have now is a 3060 Ti. I got the 4070 so cheap that even with an enclosure it'd be less than the average used price. If anyone has good/bad eGPU experience, that might sway me on keeping vs selling. It's just been sitting in the box for a while.
2025-01-17T00:39:15
https://www.reddit.com/r/LocalLLaMA/comments/1i340rd/gpu_enclosure_experiences/
ilovepolthavemybabie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i340rd
false
null
t3_1i340rd
/r/LocalLLaMA/comments/1i340rd/gpu_enclosure_experiences/
false
false
self
3
null
Why do tools like Perplexity struggle to calculate accurate stats from different sources when the exact number is not posted online?
1
I’ve been wondering why tools like Perplexity seem to fall short on calculating stats that don’t already exist online. Perplexity tries—with its reasoning steps—but the results often fail in accuracy or iterative depth. For example: * **“What percentage of countries with universal healthcare also have female leaders?”** If this functionality exists, I haven’t seen it work well. Curious—what do you think is the blocker here? * Is it a complexity or cost issue (the multi-step iterative reasoning)? * Is the demand just not there? * Are these tools just focusing elsewhere?
2025-01-17T00:46:39
https://www.reddit.com/r/LocalLLaMA/comments/1i34642/why_do_tools_like_perplexity_struggle_to/
vamos-viendo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i34642
false
null
t3_1i34642
/r/LocalLLaMA/comments/1i34642/why_do_tools_like_perplexity_struggle_to/
false
false
self
1
null
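For what it's worth, the final arithmetic in the example question is trivial once the two country lists have been retrieved; the expensive part is the multi-step retrieval itself. A toy sketch with placeholder (not real) data:

```python
# The percentage step is pure set arithmetic; the sets below are
# illustrative placeholders, not real-world facts.
universal_healthcare = {"Norway", "Denmark", "Iceland", "Mexico", "Finland"}
female_led = {"Denmark", "Iceland", "Mexico", "Italy"}

both = universal_healthcare & female_led
pct = 100 * len(both) / len(universal_healthcare)
print(f"{pct:.1f}% of the universal-healthcare countries are female-led")
```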
4x AMD Instinct AI Server + Mistral 7B + vLLM
25
2025-01-17T01:38:35
https://v.redd.it/1sni53vckgde1
Any_Praline_8178
/r/LocalLLaMA/comments/1i357ov/4x_amd_instinct_ai_server_mistral_7b_vllm/
1970-01-01T00:00:00
0
{}
1i357ov
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1sni53vckgde1/DASHPlaylist.mpd?a=1739799518%2CNjQ3ZWExYWRhODk1OTMyNWFmZDViYjUwMWFiMTJhZTg2MGIwZGM5YjI5ZGE3NzI3M2EyODUyM2VkODAyYmQxMQ%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/1sni53vckgde1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1904, 'hls_url': 'https://v.redd.it/1sni53vckgde1/HLSPlaylist.m3u8?a=1739799518%2COWQyZGRjOWYxYzZjOTExNmI5OWJjMDg4N2FlOWE3NTI5MmRmNmFhZDUyYTJjYzZiMzRhNTE2NzMxNjQ4OGQxNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1sni53vckgde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1i357ov
/r/LocalLLaMA/comments/1i357ov/4x_amd_instinct_ai_server_mistral_7b_vllm/
false
false
https://external-preview…1ae0c0715fb51128
25
{'enabled': False, 'images': [{'id': 'OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ', 'resolutions': [{'height': 190, 'url': 'https://external-preview.redd.it/OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=f73d86812aa437d36cfaa456f9c268c14f7de010', 'width': 108}, {'height': 380, 'url': 'https://external-preview.redd.it/OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=857a8a96e5fff18d161d7695bdd103bbfab317a2', 'width': 216}, {'height': 563, 'url': 'https://external-preview.redd.it/OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=e243718680aec27b48672482bc873ab0c6072a3e', 'width': 320}, {'height': 1127, 'url': 'https://external-preview.redd.it/OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=9183be6e05dd063f150fee5a4e6f168acc5b8150', 'width': 640}, {'height': 1691, 'url': 'https://external-preview.redd.it/OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=a2740feff2e21151ede7bc5d6ab9aaa1b61ab5af', 'width': 960}, {'height': 1903, 'url': 'https://external-preview.redd.it/OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=59c866cd6cb65f2836c1e0b508fb0a82814217fc', 'width': 1080}], 'source': {'height': 3796, 'url': 'https://external-preview.redd.it/OXZzbzY0dmNrZ2RlMYrnczNrVsQkdH3BrjnNDBSvBen7AmAirsnxCxjuWUYQ.png?format=pjpg&auto=webp&s=67ab0e6a91bd668383954b545cc0e6d37ec5d73c', 'width': 2154}, 'variants': {}}]}
My Tesla P40 just caught on fire and exploded… help?
41
https://imgur.com/a/1ViaFVL Um… so, this GPU has an insanely long lore. To summarize, I ended up trying to sell it, UPS ravaged the box and the buyer claimed the GPU didn’t work anymore (wouldn’t power on), I received it back, tried to power it up, and it immediately caught on fire in catastrophic fashion and shot flames into my motherboard. I’m powering them with a good quality PCIe to EPS adapter, which I just used again to try and check if it was indeed dead. Well, it sure as hell is now. Uh, what the hell happened? What is the component that exploded? It looks to be power related and it had a thermal pad on the backplate that is now scorched. I actually have ANOTHER P40 from this shipment that I’m wanting to test and I’m absolutely mortified to plug it in now. I don’t think I’ll ever trust a PC build again.
2025-01-17T01:52:52
https://www.reddit.com/r/LocalLLaMA/comments/1i35hs3/my_tesla_p40_just_caught_on_fire_and_exploded_help/
Cressio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i35hs3
false
null
t3_1i35hs3
/r/LocalLLaMA/comments/1i35hs3/my_tesla_p40_just_caught_on_fire_and_exploded_help/
false
false
self
41
{'enabled': False, 'images': [{'id': 'E-rEcp6PSN_oBerqUpkWMIoOoZCxhsfAvK9QjqHW9fg', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/vZr1Jnu6NQVJE51kTm7VDDk2w5Zew4osUGQoixqN5w4.jpg?width=108&crop=smart&auto=webp&s=6406f4edaf63b470986facd965bce9eceb1b77d1', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/vZr1Jnu6NQVJE51kTm7VDDk2w5Zew4osUGQoixqN5w4.jpg?width=216&crop=smart&auto=webp&s=d3d880d0bfee2ef36d90ed4411bb14c0afd5fdb9', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/vZr1Jnu6NQVJE51kTm7VDDk2w5Zew4osUGQoixqN5w4.jpg?width=320&crop=smart&auto=webp&s=69d11bc570e4a139e657d3164ec051dfcc1a289c', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/vZr1Jnu6NQVJE51kTm7VDDk2w5Zew4osUGQoixqN5w4.jpg?width=640&crop=smart&auto=webp&s=c0a1884c9ec4bb671379b78dae5b80075c11f8ac', 'width': 640}, {'height': 1280, 'url': 'https://external-preview.redd.it/vZr1Jnu6NQVJE51kTm7VDDk2w5Zew4osUGQoixqN5w4.jpg?width=960&crop=smart&auto=webp&s=0202b0dff005e4cd5eda82101eeced422cc34783', 'width': 960}, {'height': 1440, 'url': 'https://external-preview.redd.it/vZr1Jnu6NQVJE51kTm7VDDk2w5Zew4osUGQoixqN5w4.jpg?width=1080&crop=smart&auto=webp&s=7e7eefffaaefb2994066c49f745ef39fcf442d63', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/vZr1Jnu6NQVJE51kTm7VDDk2w5Zew4osUGQoixqN5w4.jpg?auto=webp&s=ec856041a9243ecc48066b868020726a16dc796d', 'width': 1536}, 'variants': {}}]}
I made a simple python scripts that automate deletion of your ChatGPT chats
4
Hey! I was considering whether to post this or not, and I decided other people may have had this issue too: you've been using ChatGPT all the time and there are like a thousand chats. I was in this predicament, so I made a program on Linux for Firefox with Selenium that essentially goes through and deletes your chats on ChatGPT automatically. I made it on Linux and have no clue about its compatibility with Windows, and it's for Firefox. If anyone else in this predicament wants to use it, feel free! Github: [https://github.com/TheBlewish/Automated-ChatGPT-Chats-Deletion](https://github.com/TheBlewish/Automated-ChatGPT-Chats-Deletion)
2025-01-17T01:55:46
https://www.reddit.com/r/LocalLLaMA/comments/1i35juw/i_made_a_simple_python_scripts_that_automate/
CuriousAustralianBoy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i35juw
false
null
t3_1i35juw
/r/LocalLLaMA/comments/1i35juw/i_made_a_simple_python_scripts_that_automate/
false
false
self
4
{'enabled': False, 'images': [{'id': 'cj4NLfAcHSFNnZHO3wBhBMHs9hZrA788h5V_5y0EhR4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WbfKxUiFHUZWwPakKfgmxYEQK-VyAFhFz7iECKT71Ts.jpg?width=108&crop=smart&auto=webp&s=f710ffdeb1dd3c038f8ab5f6485626429190537e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WbfKxUiFHUZWwPakKfgmxYEQK-VyAFhFz7iECKT71Ts.jpg?width=216&crop=smart&auto=webp&s=a19d057972441f10ac83a504f67cd2eb8994e2d5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WbfKxUiFHUZWwPakKfgmxYEQK-VyAFhFz7iECKT71Ts.jpg?width=320&crop=smart&auto=webp&s=73cacdef58916043e1243b20ef272a78967b1025', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WbfKxUiFHUZWwPakKfgmxYEQK-VyAFhFz7iECKT71Ts.jpg?width=640&crop=smart&auto=webp&s=fea84ec245b1846bbef07b78b6f851ae62ba7d20', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WbfKxUiFHUZWwPakKfgmxYEQK-VyAFhFz7iECKT71Ts.jpg?width=960&crop=smart&auto=webp&s=48e0e1f42881c449aef62cb68a857b57e161ba84', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WbfKxUiFHUZWwPakKfgmxYEQK-VyAFhFz7iECKT71Ts.jpg?width=1080&crop=smart&auto=webp&s=c07155c7cb8168bc4c8d90fc6d852e3725493c04', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WbfKxUiFHUZWwPakKfgmxYEQK-VyAFhFz7iECKT71Ts.jpg?auto=webp&s=a23467b1ca67f20bea327db739156bcc4ba003b0', 'width': 1200}, 'variants': {}}]}
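For anyone curious what such a script looks like before opening the repo, here is a rough sketch of the Selenium pattern. The CSS selector is hypothetical, since ChatGPT's DOM changes frequently; the linked repo is the authoritative version.

```python
# Sketch: enumerate ChatGPT conversations with Selenium on Firefox.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://chatgpt.com/")
input("Log in manually, then press Enter...")  # reuse your own session

# List conversations in the sidebar; this selector is hypothetical and will
# need updating whenever the site's DOM changes.
for link in driver.find_elements(By.CSS_SELECTOR, "a[href^='/c/']"):
    print(link.text, link.get_attribute("href"))

# The actual deletion (options menu -> Delete -> confirm) is what the linked
# repo automates; those selectors are best taken from there.
driver.quit()
```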
You are an absolute moron for believing in the hype of “AI Agents”.
1
2025-01-17T02:11:41
https://medium.com/p/c0f760e7e48e
No-Definition-2886
medium.com
1970-01-01T00:00:00
0
{}
1i35uvy
false
null
t3_1i35uvy
/r/LocalLLaMA/comments/1i35uvy/you_are_an_absolute_moron_for_believing_in_the/
false
false
https://b.thumbs.redditm…qE5tm7xSHasQ.jpg
1
{'enabled': False, 'images': [{'id': '1E3xxo6_PV-k6mv0objYiPTFRGYWWtCzOdyf3J7s5os', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MoVplprQIrTO-LgxJ2svkt9m4t3YwydFjrCCUTlQwHs.jpg?width=108&crop=smart&auto=webp&s=334518867597e354884e8b54e5cf83921bcded5c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/MoVplprQIrTO-LgxJ2svkt9m4t3YwydFjrCCUTlQwHs.jpg?width=216&crop=smart&auto=webp&s=6d625a7ddb8d5c851d5f0b76fc624c9582b33aa3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/MoVplprQIrTO-LgxJ2svkt9m4t3YwydFjrCCUTlQwHs.jpg?width=320&crop=smart&auto=webp&s=2c8919aad1685b42a30cc4aec06ced79fb89224f', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/MoVplprQIrTO-LgxJ2svkt9m4t3YwydFjrCCUTlQwHs.jpg?width=640&crop=smart&auto=webp&s=2690d395ac52feed727c51785ec54f2ff6ed439d', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/MoVplprQIrTO-LgxJ2svkt9m4t3YwydFjrCCUTlQwHs.jpg?width=960&crop=smart&auto=webp&s=5b4f0a3df6b38e4c3c7d919359f108a2d2ca8474', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/MoVplprQIrTO-LgxJ2svkt9m4t3YwydFjrCCUTlQwHs.jpg?auto=webp&s=e9e71292923dd0cb2bb95d0ea47d3de1dbf01216', 'width': 1024}, 'variants': {}}]}
Ollama on 16GB Ram
1
[removed]
2025-01-17T02:31:54
https://www.reddit.com/r/LocalLLaMA/comments/1i3693h/ollama_on_16gb_ram/
TimelySentence2063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3693h
false
null
t3_1i3693h
/r/LocalLLaMA/comments/1i3693h/ollama_on_16gb_ram/
false
false
self
1
null
Deepseek V3 running on my local dual CPU PC, 384GB RAM, no GPU!
1
2025-01-17T03:07:21
https://v.redd.it/gexlt6mb0hde1
Big_Specific9749
v.redd.it
1970-01-01T00:00:00
0
{}
1i36wz8
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gexlt6mb0hde1/DASHPlaylist.mpd?a=1739675255%2CN2FjMTU5ZDZjNTUyM2JmNjFlY2MyZWEwMzQ0ZDMxZjQ1Y2JmOTRiYTgzMGE3MDlhZTQ3OWRiMWJhMGNiZDc4MA%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/gexlt6mb0hde1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1120, 'hls_url': 'https://v.redd.it/gexlt6mb0hde1/HLSPlaylist.m3u8?a=1739675255%2CY2RiOWNmODJlN2Q4M2ZkNjQ2OTMyZTEwZWNkZTk5YzJjNWVhMTk1MmEzNTRiZDg2YjY0YjQ2ODk3ODA4ZGJiZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gexlt6mb0hde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1i36wz8
/r/LocalLLaMA/comments/1i36wz8/deepseek_v3_running_on_my_local_dual_cpu_pc_384gb/
false
false
https://external-preview…07fc6d0da0544dcc
1
{'enabled': False, 'images': [{'id': 'MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a.png?width=108&crop=smart&format=pjpg&auto=webp&s=606873d933cd9090c81f286465b9e59976e07539', 'width': 108}, {'height': 223, 'url': 'https://external-preview.redd.it/MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a.png?width=216&crop=smart&format=pjpg&auto=webp&s=73eacf77341316911f6dc53b880d4081066f8aab', 'width': 216}, {'height': 331, 'url': 'https://external-preview.redd.it/MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a.png?width=320&crop=smart&format=pjpg&auto=webp&s=9a369cfe887819865e417f56785e20fe89961de1', 'width': 320}, {'height': 663, 'url': 'https://external-preview.redd.it/MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a.png?width=640&crop=smart&format=pjpg&auto=webp&s=87a234bd1b8cd9e398957a22d32a7a875db14737', 'width': 640}, {'height': 995, 'url': 'https://external-preview.redd.it/MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a.png?width=960&crop=smart&format=pjpg&auto=webp&s=0cfe622e0fdb30f13c759797c08594de2e74dde8', 'width': 960}, {'height': 1119, 'url': 'https://external-preview.redd.it/MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bed0d843b952c4b45f281e9b0ec69a94ce5f8ea6', 'width': 1080}], 'source': {'height': 2152, 'url': 'https://external-preview.redd.it/MGhvZHZ2amIwaGRlMe1V2s-JyylPqB0ZjZe_rqNYTsy1A_T8uoIGo2wBts3a.png?format=pjpg&auto=webp&s=b2a42bfcc1acee16b06f023164a6eaac53cdbff9', 'width': 2076}, 'variants': {}}]}
Whats the current State-of-The-Art for voice cloning?
12
Last time I checked, which was quite a while ago, voice cloning and making AI song covers used RVC v2, but I'm sure a LOT has changed since then. I've heard a lot about TTS models like the new 82M model, but I don't think I've heard anything specifically about voice cloning and cover tools.
2025-01-17T04:02:10
https://www.reddit.com/r/LocalLLaMA/comments/1i37x87/whats_the_current_stateoftheart_for_voice_cloning/
pigeon57434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i37x87
false
null
t3_1i37x87
/r/LocalLLaMA/comments/1i37x87/whats_the_current_stateoftheart_for_voice_cloning/
false
false
self
12
null
What is your stage rn?
0
2025-01-17T04:09:43
https://i.redd.it/u9zca70gbhde1.png
iamnotdeadnuts
i.redd.it
1970-01-01T00:00:00
0
{}
1i381za
false
null
t3_1i381za
/r/LocalLLaMA/comments/1i381za/what_is_your_stage_rn/
false
false
https://a.thumbs.redditm…2O8fYrRWVmu0.jpg
0
{'enabled': True, 'images': [{'id': 'XECOeI3ffVexM6nF7rgjvy0TQnhpCbibT-WsMBDTqFw', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/u9zca70gbhde1.png?width=108&crop=smart&auto=webp&s=0d7c9f64fdcd1a5847d2effaafdb021d04cc642e', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/u9zca70gbhde1.png?width=216&crop=smart&auto=webp&s=b6648f302b941824b617cae5fdb8339e01bd1db5', 'width': 216}, {'height': 282, 'url': 'https://preview.redd.it/u9zca70gbhde1.png?width=320&crop=smart&auto=webp&s=82da4657160e701edc541b96d4e3ec0be53858ba', 'width': 320}, {'height': 564, 'url': 'https://preview.redd.it/u9zca70gbhde1.png?width=640&crop=smart&auto=webp&s=5afe2198ee43f27a23f3b504625eef2a3ba63184', 'width': 640}], 'source': {'height': 762, 'url': 'https://preview.redd.it/u9zca70gbhde1.png?auto=webp&s=f13e9bf753dba68bcc595fe73ea9cc3ea3013638', 'width': 864}, 'variants': {}}]}
Running Kokoro-82M ONNX TTS Model in the Browser
1
[removed]
2025-01-17T04:13:25
https://www.reddit.com/r/LocalLLaMA/comments/1i3848r/running_kokoro82m_onnx_tts_model_in_the_browser/
BluebirdInfinite1812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3848r
false
null
t3_1i3848r
/r/LocalLLaMA/comments/1i3848r/running_kokoro82m_onnx_tts_model_in_the_browser/
false
false
nsfw
1
null
Titan architecture and reasoning
7
So I've been thinking about how all the commercial labs have been focusing on creating better reasoning models, quite possibly by incorporating CoT into the training process and scaling it up. And with the release of the Titans architecture, where the model retains "selective long-term memory", I wonder if this architecture can better learn the important reasoning steps found in the CoT process and thus very closely and successfully mimic a reasoning AI. If that's the case, with 2M+ context and long-term memory in the model itself, could we see an AI that behaves very much like the AGI we have imagined?
2025-01-17T04:18:21
https://www.reddit.com/r/LocalLLaMA/comments/1i38790/titan_architecture_and_reasoning/
hugganao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i38790
false
null
t3_1i38790
/r/LocalLLaMA/comments/1i38790/titan_architecture_and_reasoning/
false
false
self
7
null
Avoid risky dependencies in AI-generated code with Opensource Project CodeGate
0
2025-01-17T04:19:03
https://www.youtube.com/watch?v=WimBevc_Ji0
zero_proof_fork
youtube.com
1970-01-01T00:00:00
0
{}
1i387pt
false
{'oembed': {'author_name': 'Stacklok', 'author_url': 'https://www.youtube.com/@Stacklok', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/WimBevc_Ji0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Avoid risky dependencies in AI-generated code with CodeGate"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/WimBevc_Ji0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Avoid risky dependencies in AI-generated code with CodeGate', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1i387pt
/r/LocalLLaMA/comments/1i387pt/avoid_risky_dependencies_in_aigenerated_code_with/
false
false
https://b.thumbs.redditm…sd5TpKlXoHvw.jpg
0
{'enabled': False, 'images': [{'id': '5aILawYwqbWLyzZjfiomKvNrcaZRH6vtzlj4qSCPjbY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OkXGejKiKrfg4RPA1jsrMlEJ821qsUeqhVRM09e9r0E.jpg?width=108&crop=smart&auto=webp&s=a75e88847b164d8d7e208c59131a5d06be7dc029', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/OkXGejKiKrfg4RPA1jsrMlEJ821qsUeqhVRM09e9r0E.jpg?width=216&crop=smart&auto=webp&s=b7872fd067a63e1234f360b0f413834ac99edef6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/OkXGejKiKrfg4RPA1jsrMlEJ821qsUeqhVRM09e9r0E.jpg?width=320&crop=smart&auto=webp&s=74b7723061a361dd09162041dab7e39b05b193c5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/OkXGejKiKrfg4RPA1jsrMlEJ821qsUeqhVRM09e9r0E.jpg?auto=webp&s=1a32ff9d457a40ad5772e188008243930649a318', 'width': 480}, 'variants': {}}]}
Running Kokoro-82M ONNX TTS Model in the Browser
1
[removed]
2025-01-17T04:28:17
https://www.reddit.com/r/LocalLLaMA/comments/1i38dfv/running_kokoro82m_onnx_tts_model_in_the_browser/
BluebirdInfinite1812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i38dfv
false
null
t3_1i38dfv
/r/LocalLLaMA/comments/1i38dfv/running_kokoro82m_onnx_tts_model_in_the_browser/
false
false
self
1
null
Which do you think will be better: Qwen-3 or Llama-4
1
And which do you think will come out first? More importantly, will Llama 4 actually have a middle-ground size between 8B and 70B so I can run it?
2025-01-17T04:37:46
https://www.reddit.com/r/LocalLLaMA/comments/1i38jih/which_do_you_think_will_be_better_qwen3_or_llama4/
pigeon57434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i38jih
false
null
t3_1i38jih
/r/LocalLLaMA/comments/1i38jih/which_do_you_think_will_be_better_qwen3_or_llama4/
false
false
self
1
null
Vision Models for extracting Attributes
1
[removed]
2025-01-17T04:38:22
https://www.reddit.com/r/LocalLLaMA/comments/1i38jvd/vision_models_for_extracting_attributes/
Potential_Nature4974
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i38jvd
false
null
t3_1i38jvd
/r/LocalLLaMA/comments/1i38jvd/vision_models_for_extracting_attributes/
false
false
self
1
null
Here's what I've found for those of you comparing mistral codestral 25.01 against claude 3.5 sonnet.
0
Mistral's new Codestral 25.01 is impressive on paper (support for over 80 coding languages!), so I compared it with other leading models like Claude 3.5 Sonnet to see how they stack up for coding tasks. * Performance Metrics: Codestral achieves an impressive HumanEval score of 86.6%, while Claude stands strong with competitive scores in various programming languages. * Speed: Codestral claims to generate code twice as fast as its predecessor, which could be a game-changer for developers needing rapid assistance. * Language Support: Supporting over 80 languages gives Codestral a versatility edge; however, Claude also offers robust support across popular languages. * Context Length: With a context length of 256k tokens, Codestral may handle larger codebases better than Claude's 200k limit. Both models have their strengths and weaknesses. From my tests, I still think Claude is better overall. But, what are your thoughts on their performance in practical applications? Btw, I found this detailed article that compares codestral 25.01 with other models like Claude, GPT, DeepSeek etc.: [https://blog.getbind.co/2025/01/15/mistral-codestral-25-01-is-it-the-best-model-for-coding/](https://blog.getbind.co/2025/01/15/mistral-codestral-25-01-is-it-the-best-model-for-coding/)
2025-01-17T04:42:06
https://www.reddit.com/r/LocalLLaMA/comments/1i38m86/heres_what_ive_found_for_those_of_you_comparing/
johnzakma10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i38m86
false
null
t3_1i38m86
/r/LocalLLaMA/comments/1i38m86/heres_what_ive_found_for_those_of_you_comparing/
false
false
self
0
{'enabled': False, 'images': [{'id': '1_MnsoBOjHUVlBv8s1AW8GF3ZoHqy4Q7Cx8Vh-5po64', 'resolutions': [{'height': 23, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=108&crop=smart&auto=webp&s=5d3f084b1f24c6be1b219ed06d50ede11039ae20', 'width': 108}, {'height': 47, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=216&crop=smart&auto=webp&s=4c4c27a0375b804db5d90bf12bf5c57a81b64386', 'width': 216}], 'source': {'height': 60, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?auto=webp&s=58df702c38afd9cce5d0d8f1b6181031aa15e77b', 'width': 272}, 'variants': {}}]}
How do you guys use Open Source models in your workplace? I wish to start using them at my workplace.
1
[removed]
2025-01-17T05:52:50
https://www.reddit.com/r/LocalLLaMA/comments/1i39rn7/how_do_you_guys_use_open_source_models_in_your/
Existing-Pay7076
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i39rn7
false
null
t3_1i39rn7
/r/LocalLLaMA/comments/1i39rn7/how_do_you_guys_use_open_source_models_in_your/
false
false
self
1
null
Astrologer & Psychic Spiritual Healer Fortune Teller
1
[removed]
2025-01-17T06:32:09
https://www.reddit.com/r/LocalLLaMA/comments/1i3acsd/astrologer_psychic_spiritual_healer_fortune_teller/
Spirited_Tourist_565
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3acsd
false
null
t3_1i3acsd
/r/LocalLLaMA/comments/1i3acsd/astrologer_psychic_spiritual_healer_fortune_teller/
true
false
spoiler
1
null
Problem running f5 tts on pinokio
1
2025-01-17T06:35:54
https://i.redd.it/hvvfmzoi1ide1.jpeg
Loves_to_analyse
i.redd.it
1970-01-01T00:00:00
0
{}
1i3aeqe
false
null
t3_1i3aeqe
/r/LocalLLaMA/comments/1i3aeqe/problem_running_f5_tts_on_pinokio/
false
false
https://b.thumbs.redditm…xP7aXdqGHUxw.jpg
1
{'enabled': True, 'images': [{'id': 'dUp_FJTnvNK1y-L5Jyj2CeeyEL5k7_G1Yj2G7WQ5aFs', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/hvvfmzoi1ide1.jpeg?width=108&crop=smart&auto=webp&s=b13f86a540e56ba52d82180a4440f889390441a3', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/hvvfmzoi1ide1.jpeg?width=216&crop=smart&auto=webp&s=ac7b3b0e3582acf8276ab84763b16b5eaf54ac69', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/hvvfmzoi1ide1.jpeg?width=320&crop=smart&auto=webp&s=796082f99bf27bb29558efe33095f23c059e9403', 'width': 320}, {'height': 295, 'url': 'https://preview.redd.it/hvvfmzoi1ide1.jpeg?width=640&crop=smart&auto=webp&s=50b90a1486e602fc4e2ee44d2ed25f811826c2de', 'width': 640}, {'height': 443, 'url': 'https://preview.redd.it/hvvfmzoi1ide1.jpeg?width=960&crop=smart&auto=webp&s=d52cfd41357d859b113d0b9bae6e3e06d204a99e', 'width': 960}, {'height': 498, 'url': 'https://preview.redd.it/hvvfmzoi1ide1.jpeg?width=1080&crop=smart&auto=webp&s=74150060868d2247e6f8b82b10390635988e1d7a', 'width': 1080}], 'source': {'height': 2136, 'url': 'https://preview.redd.it/hvvfmzoi1ide1.jpeg?auto=webp&s=90a0f7a92d67e47f1837a1729a05e14a7b18ba9a', 'width': 4624}, 'variants': {}}]}
Since I can't find similar subreddit for text-to-video unlike LLM, gonna ask here. How is temporal consistency solved?
1
[removed]
2025-01-17T06:54:36
https://www.reddit.com/r/LocalLLaMA/comments/1i3anxo/since_i_cant_find_similar_subreddit_for/
Snoo_64233
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3anxo
false
null
t3_1i3anxo
/r/LocalLLaMA/comments/1i3anxo/since_i_cant_find_similar_subreddit_for/
false
false
self
1
null
What is the best free AI for assisting with coding?
1
[removed]
2025-01-17T07:00:31
https://www.reddit.com/r/LocalLLaMA/comments/1i3aquh/what_is_the_best_free_ai_for_assisting_with_coding/
mmahdiSZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3aquh
false
null
t3_1i3aquh
/r/LocalLLaMA/comments/1i3aquh/what_is_the_best_free_ai_for_assisting_with_coding/
false
false
self
1
null
OpenWebUI Canvas Implementation -- Coming Soon! (Better Artifacts)
232
[C# and XML View](https://preview.redd.it/ytezb1q05ide1.png?width=1862&format=png&auto=webp&s=93364222443da5f695a745265842c91ee604d9e5) [Design View](https://preview.redd.it/1ttzjm4s5ide1.png?width=1862&format=png&auto=webp&s=bd00eb16ef20e090d9f5ebee0d69f48c4f3b8bf0) [Code View](https://preview.redd.it/7tj92xav5ide1.png?width=1749&format=png&auto=webp&s=81d8f9dec9bd3575fb4fc4ea8d399627b2eacd4a) Hi all! I'm implementing Canvas (beefing up Artifacts) in OpenWebUI. This was my only issue ever with OpenWebUI: the very limited canvas feature, restricted to just HTML, CSS, JavaScript and SVG. I've expanded support to the following languages: C#, Python, Java, PHP, Ruby, Bash, Shell, AppleScript, SQL, JSON, XML, YAML, Markdown, HTML. If I'm missing one, feel free to comment it! It's super easy to add at this point. Another notable feature I'm adding is switching between Design view and Code view for web design. I'm super close to finishing! I just need to clean it up and visualize/track changes between revisions. Expect my pull request in the next couple of weeks!
2025-01-17T07:02:43
https://www.reddit.com/r/LocalLLaMA/comments/1i3as1m/openwebui_canvas_implementation_coming_soon/
maxwell321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3as1m
false
null
t3_1i3as1m
/r/LocalLLaMA/comments/1i3as1m/openwebui_canvas_implementation_coming_soon/
false
false
https://a.thumbs.redditm…nohaqQF7icF0.jpg
232
null
New framework aims to mimic human thinking for writing long-form content (OmniThink)
44
Sharing a paper about OmniThink - an approach that tries to replicate how humans write long-form content. The framework focuses on continuous reflection and exploration, similar to how we gather information and refine our understanding when writing detailed articles. (Not affiliated with the authors) The paper's style reminds me of Google Deep Research's functionality. I couldn't get their online demo to work, but the ideas in the paper are worth checking out, IMO. I will spend some time on their repo to see if that will work out of the box. Paper: [https://huggingface.co/papers/2501.09751](https://huggingface.co/papers/2501.09751) Project page: [https://zjunlp.github.io/project/OmniThink/](https://zjunlp.github.io/project/OmniThink/) GitHub: [https://github.com/zjunlp/OmniThink](https://github.com/zjunlp/OmniThink) https://preview.redd.it/alrt6fyh9ide1.png?width=3875&format=png&auto=webp&s=6a41e77eac565e5bf61deeaae9c0de535fb45feb
2025-01-17T07:22:39
https://www.reddit.com/r/LocalLLaMA/comments/1i3b1jb/new_framework_aims_to_mimic_human_thinking_for/
emanuilov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3b1jb
false
null
t3_1i3b1jb
/r/LocalLLaMA/comments/1i3b1jb/new_framework_aims_to_mimic_human_thinking_for/
false
false
https://b.thumbs.redditm…YiarJSaeI2pM.jpg
44
{'enabled': False, 'images': [{'id': '2kae0vsjZm2286qrNI1XgJ7bmiMgWKJm_7xg7QVN4QM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/P3wPulsj-vbHIfL8pdoJemWboTREaTu--SoaotPYjzU.jpg?width=108&crop=smart&auto=webp&s=5946e3d2b7788e9a5818d1a12ca54951538b74e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/P3wPulsj-vbHIfL8pdoJemWboTREaTu--SoaotPYjzU.jpg?width=216&crop=smart&auto=webp&s=132120dcae05cc07c350cd56f52c6cb262665efe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/P3wPulsj-vbHIfL8pdoJemWboTREaTu--SoaotPYjzU.jpg?width=320&crop=smart&auto=webp&s=0a937d4d70066aa757a39f32327aa075c7affea3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/P3wPulsj-vbHIfL8pdoJemWboTREaTu--SoaotPYjzU.jpg?width=640&crop=smart&auto=webp&s=098b6934967a73dbc796419d5bd3b3397ed04814', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/P3wPulsj-vbHIfL8pdoJemWboTREaTu--SoaotPYjzU.jpg?width=960&crop=smart&auto=webp&s=d8323b8d8eb6a0b0819972ebae2c9cf27a8d8270', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/P3wPulsj-vbHIfL8pdoJemWboTREaTu--SoaotPYjzU.jpg?width=1080&crop=smart&auto=webp&s=225057b22efbaf2506fbe7a9788dd681447604af', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/P3wPulsj-vbHIfL8pdoJemWboTREaTu--SoaotPYjzU.jpg?auto=webp&s=f33add2ddabcfeb6d87d534b6a7ef0dd234b8f2a', 'width': 1200}, 'variants': {}}]}
Models for shorter context
0
Recent trends have been to push for larger context windows and to compensate for the ballooning VRAM and compute costs of longer contexts by using techniques such as GQA etc. But let's say you have a task that requires only 4k or 8k of context. And you want to have the best performance possible for this context size. Are there models that perform better within this limited context or a way of tuning existing models to perform better with a 4k or 8k context window?
2025-01-17T07:58:56
https://www.reddit.com/r/LocalLLaMA/comments/1i3bieb/models_for_shorter_context/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3bieb
false
null
t3_1i3bieb
/r/LocalLLaMA/comments/1i3bieb/models_for_shorter_context/
false
false
self
0
null
Best vision model via API?
1
Can somebody please suggest the best vision model available for commercial use via API, outside of GPT-4o? I find GPT-4o is censored when reading biometric data or helping with medical analysis. Thank you!
2025-01-17T08:28:07
https://www.reddit.com/r/LocalLLaMA/comments/1i3bvzv/best_vision_model_via_api/
99OG121314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3bvzv
false
null
t3_1i3bvzv
/r/LocalLLaMA/comments/1i3bvzv/best_vision_model_via_api/
false
false
self
1
null
"Can't live without tool" for LLM datasets?
20
I thought it would be interesting to know what tools people absolutely love when it comes to LLM training, more specifically creating and preparing datasets. Also, feel free to share any knowledge that feels like a "cheatsheet" or is too good not to share. Have a great weekend!
2025-01-17T09:07:39
https://www.reddit.com/r/LocalLLaMA/comments/1i3cdws/cant_live_without_tool_for_llm_datasets/
Secure_Archer_1529
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3cdws
false
null
t3_1i3cdws
/r/LocalLLaMA/comments/1i3cdws/cant_live_without_tool_for_llm_datasets/
false
false
self
20
null
Hugging Face Spaces make the perfect agent tools!
1
[removed]
2025-01-17T09:17:55
[deleted]
1970-01-01T00:00:00
0
{}
1i3cird
false
null
t3_1i3cird
/r/LocalLLaMA/comments/1i3cird/hugging_face_spaces_make_the_perfect_agent_tools/
false
false
default
1
null
Thinking about finetuning an SLM (i.e 0.5B, 2B) for PII a as a way to learn. Worth the shot?
5
Hello! I posted something similar a few months ago, but after evaluating the quality of the new SLM models, I think it would make sense to undertake a project to finetune a model specifically for PII detection. Additionally, perhaps developing a Docker container with a complete solution incorporating the model, agentic behavior, and possibly [Presidio](https://microsoft.github.io/presidio/) could be beneficial. It could also be a good way to learn the whole finetuning pipeline with [unsloth](https://unsloth.ai/). Tell me what you think. Thank you! https://preview.redd.it/wrdfedt0yide1.png?width=3190&format=png&auto=webp&s=98842a4696a8cf4ac8780dc0749565e32856bfb1
2025-01-17T09:39:23
https://www.reddit.com/r/LocalLLaMA/comments/1i3csqz/thinking_about_finetuning_an_slm_ie_05b_2b_for/
GeorgiaWitness1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3csqz
false
null
t3_1i3csqz
/r/LocalLLaMA/comments/1i3csqz/thinking_about_finetuning_an_slm_ie_05b_2b_for/
false
false
https://b.thumbs.redditm…PtZT0_42KVOM.jpg
5
null
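Since the post mentions Presidio, a sensible first step before any finetuning is to baseline what Presidio's rule/NER pipeline already catches. A minimal sketch (requires `presidio-analyzer` plus the spaCy model it depends on):

```python
# Baseline PII detection with Presidio before investing in a finetune.
from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()
text = "Contact Jane Doe at jane.doe@example.com or +1 212 555 0199."
for r in analyzer.analyze(text=text, language="en"):
    # Each result carries the entity type, character span, and a confidence score.
    print(r.entity_type, text[r.start:r.end], f"score={r.score:.2f}")
```

Whatever the finetuned SLM misses or adds relative to this baseline is exactly the evaluation signal the project needs.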
Korea AI Chip - DEEPX NPU . Price? Under 50$ . Better that GPU?
0
Hello. Will this be a game changer? Better than a GPU? DEEPX NPU, edge computing. Website: https://deepx.ai/
2025-01-17T09:50:04
https://youtu.be/5aJNJLRsVlk
bi4key
youtu.be
1970-01-01T00:00:00
0
{}
1i3cxm0
false
{'oembed': {'author_name': 'ipXchange', 'author_url': 'https://www.youtube.com/@ipXchange', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/5aJNJLRsVlk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Real-Time Edge Computing for Under $50"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/5aJNJLRsVlk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Real-Time Edge Computing for Under $50', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1i3cxm0
/r/LocalLLaMA/comments/1i3cxm0/korea_ai_chip_deepx_npu_price_under_50_better/
false
false
https://b.thumbs.redditm…dviH3VYa3YQc.jpg
0
{'enabled': False, 'images': [{'id': 'wIq8jgchQyOcWX6QmgFjaZ8Fzi-ddGFxOIfHp3LTMLo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/P-NYeqvu1eSDu3ZqaeWja5plMoJW5E-Wg-nWjjs5CuU.jpg?width=108&crop=smart&auto=webp&s=7243543f136f2d28e6290a4facfaff55e66887d0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/P-NYeqvu1eSDu3ZqaeWja5plMoJW5E-Wg-nWjjs5CuU.jpg?width=216&crop=smart&auto=webp&s=6652566287d7e84ba7ed4b3061e68aec5cf40b0a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/P-NYeqvu1eSDu3ZqaeWja5plMoJW5E-Wg-nWjjs5CuU.jpg?width=320&crop=smart&auto=webp&s=80557a815c31a1454adfc473c446fa4d55b69fdb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/P-NYeqvu1eSDu3ZqaeWja5plMoJW5E-Wg-nWjjs5CuU.jpg?auto=webp&s=888fa16e521be4178c9baaf04bb9686d1edb2a8b', 'width': 480}, 'variants': {}}]}
Hugging Face Spaces make the perfect agent tools!
16
Figured out that you can use Gradio-based Spaces on the Hub as tools for agents. I don't get why everyone isn't doing this. https://preview.redd.it/etq92tij3jde1.png?width=1092&format=png&auto=webp&s=baf38c94e6240885d8d4d02953e16f9414a12a02 Made a guide here: [https://huggingface.co/blog/burtenshaw/gradio-spaces-agent-tools](https://huggingface.co/blog/burtenshaw/gradio-spaces-agent-tools)
2025-01-17T10:09:40
https://www.reddit.com/r/LocalLLaMA/comments/1i3d6t0/hugging_face_spaces_make_the_perfect_agent_tools/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3d6t0
false
null
t3_1i3d6t0
/r/LocalLLaMA/comments/1i3d6t0/hugging_face_spaces_make_the_perfect_agent_tools/
false
false
https://a.thumbs.redditm…8c3brufszWM0.jpg
16
null
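For context, calling a Space programmatically boils down to `gradio_client`; agent frameworks wrap this. A sketch with a placeholder Space id and endpoint name (check a real Space's "Use via API" page for its actual endpoints and arguments):

```python
# Sketch: invoke a Hugging Face Space the way an agent tool would.
from gradio_client import Client

client = Client("some-user/some-image-tool")  # hypothetical Space id
result = client.predict("a cat in a hat", api_name="/predict")  # endpoint name varies per Space
print(result)
```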
InternLM3 Open Source: Achieving High-Performance Models with 4T Data
1
[removed]
2025-01-17T10:16:46
https://www.reddit.com/r/LocalLLaMA/comments/1i3da6n/internlm3_open_source_achieving_highperformance/
InternLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3da6n
false
null
t3_1i3da6n
/r/LocalLLaMA/comments/1i3da6n/internlm3_open_source_achieving_highperformance/
false
false
https://b.thumbs.redditm…oMAbClXuzSzw.jpg
1
null
Table extraction from Finance PDF's
10
By any chance, is there any way to extract tabular data from finance PDFs (which basically contain balance sheets and tables) with 100% accuracy? I tried everything: pytesseract, camelot, tabula, Microsoft Table Transformer, but none of them are accurate with proper headers and empty columns. I even tried OpenAI's assistant API with code_interpreter as a tool, but that also lacks accuracy. Has anyone ever worked on a solution for this?
2025-01-17T10:22:21
https://www.reddit.com/r/LocalLLaMA/comments/1i3dcxz/table_extraction_from_finance_pdfs/
Maleficent_Repair359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3dcxz
false
null
t3_1i3dcxz
/r/LocalLLaMA/comments/1i3dcxz/table_extraction_from_finance_pdfs/
false
false
self
10
null
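No tool will be literally 100% accurate, but for text-based (non-scanned) PDFs, camelot's per-table parsing report is a quick way to see how close it gets on a given statement. A minimal sketch with a placeholder filename:

```python
# Check camelot's own accuracy estimate per extracted table.
# Lattice mode expects ruled tables; use flavor="stream" for
# whitespace-separated ones. "report.pdf" is a placeholder.
import camelot

tables = camelot.read_pdf("report.pdf", pages="all", flavor="lattice")
for t in tables:
    print(t.parsing_report)  # includes an accuracy score per table
    df = t.df                # pandas DataFrame; row 0 is usually the header
    print(df.head())
```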
Help me choose a model
1
[removed]
2025-01-17T10:44:20
https://www.reddit.com/r/LocalLLaMA/comments/1i3dnv2/help_me_choose_a_model/
Internal_Pass_2227
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3dnv2
false
null
t3_1i3dnv2
/r/LocalLLaMA/comments/1i3dnv2/help_me_choose_a_model/
false
false
self
1
null
Looking for a LLM that I can run on my iPhone for learning German
1
[removed]
2025-01-17T11:01:12
https://www.reddit.com/r/LocalLLaMA/comments/1i3dwgo/looking_for_a_llm_that_i_can_run_on_my_iphone_for/
chikyiuting
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3dwgo
false
null
t3_1i3dwgo
/r/LocalLLaMA/comments/1i3dwgo/looking_for_a_llm_that_i_can_run_on_my_iphone_for/
false
false
self
1
null
Top LLM Benchmarking Platforms - Need Suggestions
1
Hi, I have been looking for some popular LLM benchmarking and evaluation platforms, and some of my tech friends recommended Athina AI, DeepEval, and Confident AI. Any more suggestions?
2025-01-17T11:16:05
https://www.reddit.com/r/LocalLLaMA/comments/1i3e4h3/top_llm_benchmarking_platforms_need_suggestions/
Sam_Tech1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3e4h3
false
null
t3_1i3e4h3
/r/LocalLLaMA/comments/1i3e4h3/top_llm_benchmarking_platforms_need_suggestions/
false
false
self
1
null
HTTP 404 Not Found from ...
1
[removed]
2025-01-17T11:37:13
https://www.reddit.com/r/LocalLLaMA/comments/1i3eft3/http_404_not_found_from/
Hefty_Cup_8160
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3eft3
false
null
t3_1i3eft3
/r/LocalLLaMA/comments/1i3eft3/http_404_not_found_from/
false
false
https://b.thumbs.redditm…zGTmQ9SYIFrU.jpg
1
null
Anyone collecting numbers on efficiency / performance in terms of tokens-per-watt? | LLM-efficiency leaderboard?
3
Hey everyone! Basically just the title: does anyone know if there's any data out there on, e.g., max tokens-per-watt of a Raspberry Pi vs, say, a 4090? I found [this post](https://www.reddit.com/r/LocalLLaMA/comments/1gb8lmp/inference_comparing_tokens_per_watt_4090_vs_apple/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) from a few months back, but it didn't have much actual data, just anecdotal stuff. I'm kind of assuming that if any kind of LLM-efficiency leaderboard **did** exist I'd probably be able to find it, but a quick Google hasn't yielded anything fruitful either. Would appreciate it if anyone's got any leads / would be willing to share any numbers 🙌
2025-01-17T11:51:17
https://www.reddit.com/r/LocalLLaMA/comments/1i3enfs/anyone_collecting_numbers_on_efficiency/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3enfs
false
null
t3_1i3enfs
/r/LocalLLaMA/comments/1i3enfs/anyone_collecting_numbers_on_efficiency/
false
false
self
3
null
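In the absence of a leaderboard, a rough tokens-per-watt number is easy to log yourself on NVIDIA hardware via NVML (`pip install nvidia-ml-py`). A sketch, with `generate()` as a stand-in for your own inference call:

```python
# Sample GPU board power once per second while generation runs,
# then report tokens/s and tokens per joule.
import threading
import time
import pynvml

def generate() -> int:
    # Stand-in: run your model here and return the number of tokens produced.
    time.sleep(5)
    return 0

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
samples = []
stop = threading.Event()

def sample_power():
    while not stop.is_set():
        samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000)  # mW -> W
        time.sleep(1)

threading.Thread(target=sample_power, daemon=True).start()
t0 = time.time()
n_tokens = generate()
elapsed = time.time() - t0
stop.set()

avg_watts = sum(samples) / len(samples)
print(f"{n_tokens / elapsed:.1f} tok/s at {avg_watts:.0f} W "
      f"= {n_tokens / (elapsed * avg_watts):.4f} tok/J")
```

Note this measures GPU board power only; wall power (PSU losses, CPU, RAM) is what a Raspberry Pi comparison would really need, so a smart plug is the fairer instrument there.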
Motherboard for dual gpus?
1
[removed]
2025-01-17T11:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1i3erdz/motherboard_for_dual_gpus/
XPEZNAZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3erdz
false
null
t3_1i3erdz
/r/LocalLLaMA/comments/1i3erdz/motherboard_for_dual_gpus/
false
false
self
1
null
Cloud the 360M model learn reasoning ?
0
Share your perspective :)
2025-01-17T12:02:12
https://www.reddit.com/r/LocalLLaMA/comments/1i3etqv/cloud_the_360m_model_learn_reasoning/
absurd-dream-studio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3etqv
false
null
t3_1i3etqv
/r/LocalLLaMA/comments/1i3etqv/cloud_the_360m_model_learn_reasoning/
false
false
self
0
null
What is the best VS code AI extension?
1
[removed]
2025-01-17T12:46:31
https://www.reddit.com/r/LocalLLaMA/comments/1i3fkh1/what_is_the_best_vs_code_ai_extension/
SkylarNox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3fkh1
false
null
t3_1i3fkh1
/r/LocalLLaMA/comments/1i3fkh1/what_is_the_best_vs_code_ai_extension/
false
false
self
1
null
Laptop LLM performance - beware of the power settings!
51
It's a pity that I was so negligent, but I want to share this with you in case someone struggles with the same issue. Both my wife and I have Lenovo gaming laptops: 1. Ryzen 5, 16GB RAM, 3050ti 4GB 2. i5, 16GB RAM, 4060 8GB Logically, if a model fits entirely in VRAM, machine 2 runs it noticeably faster. BUT everything beyond 7B that is only partially offloaded to VRAM runs at less than 0.2 T/s and takes 2-3 minutes to output the first token on machine 2! Meanwhile machine 1 runs Qwen 2.5 14B quite acceptably at around 2 T/s. I was changing nVidia/CUDA drivers and llama.cpp settings; nothing helped. Till I checked the Windows power settings and changed the preset from "balanced" to "performance". It was the CPU/RAM of the machine that killed all the fun. Now I get 5-10 T/s with a 14B model and 26/49 layers on the GPU.
2025-01-17T12:48:10
https://www.reddit.com/r/LocalLLaMA/comments/1i3fli7/laptop_llm_performance_beware_of_the_power/
YordanTU
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3fli7
false
null
t3_1i3fli7
/r/LocalLLaMA/comments/1i3fli7/laptop_llm_performance_beware_of_the_power/
false
false
self
51
null
2025 Hardware Options for 70B models at Q8?
1
[removed]
2025-01-17T13:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1i3g7q4/2025_hardware_options_for_70b_models_at_q8/
dwrz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3g7q4
false
null
t3_1i3g7q4
/r/LocalLLaMA/comments/1i3g7q4/2025_hardware_options_for_70b_models_at_q8/
false
false
self
1
null
Anyone has a succesfully running local self hosted LLM utilizing local RAG system? How did you do it?
1
[removed]
2025-01-17T13:36:39
https://www.reddit.com/r/LocalLLaMA/comments/1i3ghs9/anyone_has_a_succesfully_running_local_self/
peacepleaseluv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3ghs9
false
null
t3_1i3ghs9
/r/LocalLLaMA/comments/1i3ghs9/anyone_has_a_succesfully_running_local_self/
false
false
self
1
null
LLM for translation
9
Hi, I recently installed Subtitle Edit, and the newer version has an LLM translation option. I have 32GB of RAM and 8GB of VRAM; what is the best model to pull into Ollama for this job? Is it better to go with something lighter like Gemma 2, or to opt for a lighter quantization of Llama 3.3? Thanks for the help.
2025-01-17T14:06:26
https://www.reddit.com/r/LocalLLaMA/comments/1i3h313/llm_for_translation/
InternalMode8159
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3h313
false
null
t3_1i3h313
/r/LocalLLaMA/comments/1i3h313/llm_for_translation/
false
false
self
9
null
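For reference, a minimal sketch of the kind of call Subtitle Edit's Ollama option boils down to, assuming Ollama on its default port and an already-pulled model tag (gemma2:9b here is an illustrative choice that fits 8GB of VRAM at Q4, not a verdict on the best model):

```python
# Minimal sketch: translate one subtitle line through a local Ollama server.
import requests

def translate(line: str, target: str = "English") -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "gemma2:9b",
            "messages": [
                {"role": "system",
                 "content": f"Translate the user's subtitle line into {target}. "
                            "Reply with the translation only."},
                {"role": "user", "content": line},
            ],
            "stream": False,   # return one JSON object instead of a token stream
        },
    )
    return resp.json()["message"]["content"].strip()

print(translate("Das Leben ist kein Ponyhof."))
```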
Attend - Proof of Concept
39
I've gotten fed up with hopping on the computer to do one thing and ending up doing other stuff instead. I'm building Attend so that our devices can help us dedicate our time and attention to what matters to us, instead of what someone else thinks is best. Right now, it is a voice assistant that uses a vision LLM to "watch" your screen and help you get back on track if what you're doing isn't aligned with what you said you wanted to do. I've got some work to do on the workflows and prompts to reduce false positives, but it "works" and I'm very excited about it! I'd like to get this down to a single 3090, but two seems pretty feasible. Part of the problem is that most open-weight vision language models are garbage with 4K images/screenshots. Qwen2-VL seems to be an exception, but it (especially the 7B) is garbage when it comes to driving the workflows behind Attend. So, I've just been using Qwen2-VL-7B-Instruct and Llama-3.3 at 8-bit as I get it working. I'd love to hear suggestions for minimizing VRAM (InternVL2_5 also seems to handle 4K alright, but I haven't tested it enough on the workflows). Attend interfaces with all models using OpenAI-compatible API calls, so you should be able to use the cloud, if you're into that kinda thing... You could also take a hybrid approach: I think you could get the STT and vision LLM into 16GB VRAM and run those locally. Piper TTS runs well on CPU. You could then use a cloud model just for the text LLM and keep the most sensitive stuff (screenshots!) local. Check out the code [https://github.com/hyperfocAIs/Attend/](https://github.com/hyperfocAIs/Attend/) and a proof of concept video [https://youtu.be/PETrY540zMM](https://youtu.be/PETrY540zMM)
2025-01-17T14:12:36
https://www.reddit.com/r/LocalLLaMA/comments/1i3h7hs/attend_proof_of_concept/
Pedalnomica
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3h7hs
false
null
t3_1i3h7hs
/r/LocalLLaMA/comments/1i3h7hs/attend_proof_of_concept/
false
false
self
39
{'enabled': False, 'images': [{'id': '1Gn-P5acoEExl5aIYNCZLbJySDb-cAwmCuzkLs4pELQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4ZP0JefOBQj5iWrXQEEaRp2ybz17gH-cE2g-WXlA5-0.jpg?width=108&crop=smart&auto=webp&s=f4e96fc1698e3be152d11a98706961aa7a522d5a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4ZP0JefOBQj5iWrXQEEaRp2ybz17gH-cE2g-WXlA5-0.jpg?width=216&crop=smart&auto=webp&s=723767bb79156541d613addef6a18b9295bb8d94', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4ZP0JefOBQj5iWrXQEEaRp2ybz17gH-cE2g-WXlA5-0.jpg?width=320&crop=smart&auto=webp&s=6f51bf214b3d116533973d8c0e2466c9101773f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4ZP0JefOBQj5iWrXQEEaRp2ybz17gH-cE2g-WXlA5-0.jpg?width=640&crop=smart&auto=webp&s=34ea13299c3460b389ceacd21e80f0da77a76482', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4ZP0JefOBQj5iWrXQEEaRp2ybz17gH-cE2g-WXlA5-0.jpg?width=960&crop=smart&auto=webp&s=fe31bb4466342a982f59d03e47e771d57e07eddd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4ZP0JefOBQj5iWrXQEEaRp2ybz17gH-cE2g-WXlA5-0.jpg?width=1080&crop=smart&auto=webp&s=ea7d8aa07714fe1e4b08a22d925f2d9b5ba83148', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4ZP0JefOBQj5iWrXQEEaRp2ybz17gH-cE2g-WXlA5-0.jpg?auto=webp&s=a5be17dd71f2533e6abf42800d7add45bcc1a32a', 'width': 1200}, 'variants': {}}]}
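For readers curious what the "watch the screen" loop looks like in practice, here is a hedged sketch, not Attend's actual code: grab a screenshot, base64-encode it, and ask an OpenAI-compatible vision endpoint (e.g., Qwen2-VL behind vLLM) whether the screen matches the stated intent. The endpoint URL, model name, and prompt are all assumptions.

```python
# Sketch of a screen-watching check against an OpenAI-compatible vision API.
import base64
import mss
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def on_task(intent: str) -> str:
    with mss.mss() as sct:
        path = sct.shot(output="screen.png")          # full-screen capture to a file
    b64 = base64.b64encode(open(path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="Qwen/Qwen2-VL-7B-Instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"My goal: {intent}. Does this screen show me working "
                         "toward it? Answer yes or no, then one sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(on_task("write the quarterly report"))
```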
What's the simplest form of training-data attack that one can try on BERT-like models?
1
I'm referring to a membership inference attack, to identify whether a given data sample was used in the BERT model's training data.
2025-01-17T14:14:26
https://www.reddit.com/r/LocalLLaMA/comments/1i3h8tx/whats_the_simplest_form_of_training_data_attack/
Lazy_Wedding_1383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3h8tx
false
null
t3_1i3h8tx
/r/LocalLLaMA/comments/1i3h8tx/whats_the_simplest_form_of_training_data_attack/
false
false
self
1
null
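To answer the question above: the simplest baseline is a loss-threshold attack, i.e. score each candidate by the model's pseudo-log-likelihood (mask one token at a time) and flag unusually low-loss samples as likely training members. A sketch, with the threshold as an assumption you'd calibrate on known member/non-member data:

```python
# Loss-based membership inference baseline for a masked language model.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_loss(text: str) -> float:
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    losses = []
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id         # mask exactly one position
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        losses.append(torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), ids[i].unsqueeze(0)).item())
    return sum(losses) / len(losses)          # lower loss -> more "familiar"

THRESHOLD = 2.0  # placeholder; calibrate on known members vs. non-members
sample = "some candidate sentence"
print("likely member" if pseudo_loss(sample) < THRESHOLD else "likely non-member")
```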
What do I need to lip-sync audio to just a few seconds / a segment of a video?
1
[removed]
2025-01-17T14:26:25
https://www.reddit.com/r/LocalLLaMA/comments/1i3hhty/what_do_i_need_to_use_to_lip_sync_with_audio_just/
WarmSummerDrink
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3hhty
false
null
t3_1i3hhty
/r/LocalLLaMA/comments/1i3hhty/what_do_i_need_to_use_to_lip_sync_with_audio_just/
false
false
self
1
null
Does AI TOPS Impact AI Inference and Training Speeds? Comparing GPUs in the Charts
1
2025-01-17T14:46:11
https://i.redd.it/068yj4uwgkde1.jpeg
One_Imagination_5581
i.redd.it
1970-01-01T00:00:00
0
{}
1i3hwxt
false
null
t3_1i3hwxt
/r/LocalLLaMA/comments/1i3hwxt/does_ai_tops_impact_ai_inference_and_training/
false
false
https://b.thumbs.redditm…1bogpZg64NtQ.jpg
1
{'enabled': True, 'images': [{'id': '7aREA8rgT2nWTZQKNTo9A4jpxOelFxGJ0K3mLUb00DI', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/068yj4uwgkde1.jpeg?width=108&crop=smart&auto=webp&s=b3afb32581c2818cd5c8ae7effd85ebff6782bde', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/068yj4uwgkde1.jpeg?width=216&crop=smart&auto=webp&s=da10d6088578256b786f18fb8e7501c298970dbd', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/068yj4uwgkde1.jpeg?width=320&crop=smart&auto=webp&s=318df70763e3e207b7df3b60c1f324f69b5944c6', 'width': 320}, {'height': 490, 'url': 'https://preview.redd.it/068yj4uwgkde1.jpeg?width=640&crop=smart&auto=webp&s=6846e83652a1529f13373ac0b2e9d4966cff9fdc', 'width': 640}], 'source': {'height': 698, 'url': 'https://preview.redd.it/068yj4uwgkde1.jpeg?auto=webp&s=9dd5688fd3eceea2c8390d37bf5eaf4f64a7c6ad', 'width': 911}, 'variants': {}}]}
[REPOST] Linux 6.14 will have amdxdna! The Ryzen AI NPU driver
29
What will this mean for AMD cards and AI inference?
2025-01-17T15:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1i3ilu3/repostlinux_614_will_have_amdxdna_the_ryzen_ai/
KillerX629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3ilu3
false
null
t3_1i3ilu3
/r/LocalLLaMA/comments/1i3ilu3/repostlinux_614_will_have_amdxdna_the_ryzen_ai/
false
false
self
29
null
NVIDIA RTX 5090: Limited Availability and Restrictions on AI and Multi-GPU
0
According to a recent article from El Chapuzas Informático, NVIDIA’s upcoming RTX 50 series GPUs will not only be released in limited quantities but will also include built-in restrictions on certain functionalities. These include reduced performance for AI workloads, cryptocurrency mining, and the use of multiple GPUs in the same setup.
2025-01-17T15:21:45
https://elchapuzasinformatico.com/2025/01/nvidia-rtx-50-limitadas-tiendas-capadas-ia-criptomineria-multi-gpu/
Spiritual_Tie_5574
elchapuzasinformatico.com
1970-01-01T00:00:00
0
{}
1i3ipgs
false
null
t3_1i3ipgs
/r/LocalLLaMA/comments/1i3ipgs/nvidia_rtx_5090_limited_availability_and/
false
false
https://b.thumbs.redditm…ztq0wGCpkHbs.jpg
0
{'enabled': False, 'images': [{'id': 'r788YZJQERdLVJ5SY6QfV0F8vzCqCClewZLVXsMpQ2U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bteDT1wTCKTt11oDOxCHMBb98egjkfRqBELv99v2-pQ.jpg?width=108&crop=smart&auto=webp&s=d806a82adec18df58cfc812006581a4efc702a7e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bteDT1wTCKTt11oDOxCHMBb98egjkfRqBELv99v2-pQ.jpg?width=216&crop=smart&auto=webp&s=f849176cc4919bbca0aa05f2a939a35e9c1228d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bteDT1wTCKTt11oDOxCHMBb98egjkfRqBELv99v2-pQ.jpg?width=320&crop=smart&auto=webp&s=a349625a828bbb738c80c81f22f1f8fff3d43cc1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bteDT1wTCKTt11oDOxCHMBb98egjkfRqBELv99v2-pQ.jpg?width=640&crop=smart&auto=webp&s=289505e7d1bdf4bb60380e17eb9f3257cce959d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bteDT1wTCKTt11oDOxCHMBb98egjkfRqBELv99v2-pQ.jpg?width=960&crop=smart&auto=webp&s=92e50a73c3892cf840044f94feba5c0600a35049', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/bteDT1wTCKTt11oDOxCHMBb98egjkfRqBELv99v2-pQ.jpg?auto=webp&s=f2d35592e4db842c8fecaddc3a3e19429cf62e63', 'width': 1000}, 'variants': {}}]}
"I/We/They Couldn't Help But..." Repeating LLM Phrasing?
16
>The spacecraft's sensors detected a safe landing spot near a lush forest, and the pilot navigated the ship towards the area. As they approached, they couldn't help but notice the array of exotic flora that thrived in the region. To those who use LLMs often, I think you know this effect. I've actually added "Don't use the words '*I couldn't help but*' in your output" and have still had the LLM put the phrase in there, almost as if it works like the "don't think of an elephant" concept for humans.
2025-01-17T15:27:12
https://www.reddit.com/r/LocalLLaMA/comments/1i3itva/iwethey_couldnt_help_but_repeating_llm_phrasing/
Jattoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3itva
false
null
t3_1i3itva
/r/LocalLLaMA/comments/1i3itva/iwethey_couldnt_help_but_repeating_llm_phrasing/
false
false
self
16
null
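One mitigation beyond negative prompting is to down-weight the phrase's tokens with the OpenAI-style logit_bias parameter, which some local servers (e.g., llama.cpp's OpenAI-compatible endpoint) honor; support varies, so treat this as a sketch. The token IDs below are placeholders; look up the real IDs with your model's tokenizer.

```python
# Suppress an overused phrase at sampling time via logit_bias.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

# Hypothetical token IDs for pieces like " couldn't" and " help";
# obtain the real ones from your model's tokenizer.
BANNED = {"12345": -100, "23456": -100}   # -100 effectively bans a token

resp = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user",
               "content": "Continue the story about the spacecraft landing."}],
    logit_bias=BANNED,
)
print(resp.choices[0].message.content)
```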
Built this PlayPixAI app with Qwen2-VL in under 15 minutes!
1
[removed]
2025-01-17T15:35:06
https://www.reddit.com/r/LocalLLaMA/comments/1i3j04z/built_this_playpixai_app_with_qwen2vl_in_under_15/
codes_astro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3j04z
false
null
t3_1i3j04z
/r/LocalLLaMA/comments/1i3j04z/built_this_playpixai_app_with_qwen2vl_in_under_15/
false
false
https://b.thumbs.redditm…67AC0fApGcoA.jpg
1
null
Best Setup PC Config For Llama-3.1-8B
1
[removed]
2025-01-17T15:35:58
https://www.reddit.com/r/LocalLLaMA/comments/1i3j0te/best_setup_pc_config_for_llama318b/
AvaloxBR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3j0te
false
null
t3_1i3j0te
/r/LocalLLaMA/comments/1i3j0te/best_setup_pc_config_for_llama318b/
false
false
self
1
null
Finetuning Llama for a step by step synthesis
1
[removed]
2025-01-17T16:02:21
https://www.reddit.com/r/LocalLLaMA/comments/1i3jmpr/finetuning_llama_for_a_step_by_step_synthesis/
No-Judge3265
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3jmpr
false
null
t3_1i3jmpr
/r/LocalLLaMA/comments/1i3jmpr/finetuning_llama_for_a_step_by_step_synthesis/
false
false
self
1
null
Fine tune llama on synthesis procedures
1
[removed]
2025-01-17T16:04:04
https://www.reddit.com/r/LocalLLaMA/comments/1i3jo7y/fine_tune_llama_on_synthesis_procedures/
No-Judge3265
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3jo7y
false
null
t3_1i3jo7y
/r/LocalLLaMA/comments/1i3jo7y/fine_tune_llama_on_synthesis_procedures/
false
false
self
1
null
Ollama is using RAM despite having enough VRAM
1
[removed]
2025-01-17T16:09:06
https://www.reddit.com/r/LocalLLaMA/comments/1i3jskm/ollama_is_using_ram_despite_having_enough_vram/
DamballaTun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3jskm
false
null
t3_1i3jskm
/r/LocalLLaMA/comments/1i3jskm/ollama_is_using_ram_despite_having_enough_vram/
false
false
https://b.thumbs.redditm…F3ro428NNh_k.jpg
1
null
Ollama is using RAM despite having enough VRAM
1
[removed]
2025-01-17T16:09:07
https://www.reddit.com/r/LocalLLaMA/comments/1i3jsks/ollama_is_using_ram_despite_having_enough_vram/
DamballaTun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3jsks
false
null
t3_1i3jsks
/r/LocalLLaMA/comments/1i3jsks/ollama_is_using_ram_despite_having_enough_vram/
false
false
self
1
null
Ollama is loading a part of the model to RAM despite having limited VRAM
1
[removed]
2025-01-17T16:11:15
https://www.reddit.com/r/LocalLLaMA/comments/1i3jud8/ollama_is_loading_a_part_of_the_model_to_ram/
DamballaTun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3jud8
false
null
t3_1i3jud8
/r/LocalLLaMA/comments/1i3jud8/ollama_is_loading_a_part_of_the_model_to_ram/
false
false
https://b.thumbs.redditm…kXBcm_CAitIo.jpg
1
null
AI Rig recommendations - up to $10k budget
1
[removed]
2025-01-17T16:25:00
https://www.reddit.com/r/LocalLLaMA/comments/1i3k5v6/ai_rig_recommendations_up_to_10k_budget/
lord_denister
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3k5v6
false
null
t3_1i3k5v6
/r/LocalLLaMA/comments/1i3k5v6/ai_rig_recommendations_up_to_10k_budget/
false
false
self
1
null
Best Approach to Create MCQs from Large PDFs with Correct Answers as Ground Truth?
5
I'm working on generating multiple-choice questions (MCQs) from large PDFs (400-500 pages). The goal is to create a training dataset with correct answers as ground truth. My main concern is efficiently extracting and summarizing content from such large PDFs to generate relevant MCQs, and adding varying levels of relevancy to test retrieval. I'm considering using an LLM for summarization and question generation, but I'm unsure about the best tools or frameworks to handle this effectively. Additionally, I'd appreciate any recommendations on where to start learning about this process (e.g., tutorials, courses, or resources).
2025-01-17T16:38:45
https://www.reddit.com/r/LocalLLaMA/comments/1i3khqj/best_approach_to_create_mcqs_from_large_pdfs_with/
suns9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3khqj
false
null
t3_1i3khqj
/r/LocalLLaMA/comments/1i3khqj/best_approach_to_create_mcqs_from_large_pdfs_with/
false
false
self
5
null
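A hedged sketch of one workable pipeline for the post above: extract text per page with pypdf, chunk a few pages at a time, and ask any OpenAI-compatible model (Ollama's /v1 endpoint is assumed here) to emit one MCQ per chunk as JSON with the correct answer as ground truth. The file name, model tag, and prompt are illustrative; real use needs retries and JSON validation.

```python
# PDF -> chunks -> one MCQ per chunk, with the correct answer kept as ground truth.
import json
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def chunks(pdf_path: str, pages_per_chunk: int = 3):
    pages = [p.extract_text() or "" for p in PdfReader(pdf_path).pages]
    for i in range(0, len(pages), pages_per_chunk):
        yield "\n".join(pages[i:i + pages_per_chunk])

def make_mcq(chunk: str) -> dict:
    resp = client.chat.completions.create(
        model="llama3.1:8b",
        messages=[
            {"role": "system",
             "content": "Write one multiple-choice question as JSON with keys: "
                        "question, options (4 strings), answer (the correct option). "
                        "Ground it strictly in the given text."},
            {"role": "user", "content": chunk},
        ],
    )
    return json.loads(resp.choices[0].message.content)  # validate/retry in real use

dataset = [make_mcq(c) for c in chunks("textbook.pdf")]
```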
Is there a difference between chat and repeated calling from scratch?
3
When I chat with a bot, it goes like: - #0: <me writing> - #1: <LLM writing> - #2: <me writing> - #3: <LLM writing> - #4: <me writing> - #5: <LLM writing> Is there any fundamental difference between that and calling the LLM with #0, then with the concatenation of #0 and #2 (or is it #0, #1, #2?), and then #0, #2, and #4 (or is it #0..#4?)? Do models respond in significantly different ways between the two approaches?
2025-01-17T16:46:27
https://www.reddit.com/r/LocalLLaMA/comments/1i3kohb/is_there_a_difference_between_chat_and_repeated/
yelling-at-clouds-40
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3kohb
false
null
t3_1i3kohb
/r/LocalLLaMA/comments/1i3kohb/is_there_a_difference_between_chat_and_repeated/
false
false
self
3
null
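Short answer to the post above: no fundamental difference. A "chat" is just the full history (#0 through #4, including the model's own #1 and #3) re-sent on every call and concatenated by the server's chat template into one prompt. A sketch of that equivalence against an assumed local OpenAI-compatible endpoint:

```python
# A "chat" is a growing messages list that gets re-sent in full each turn.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
history = []

def turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})       # e.g. #0, #2, #4
    resp = client.chat.completions.create(model="local-model", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})      # e.g. #1, #3, #5
    return reply

turn("Name a prime number.")   # model sees #0
turn("Now double it.")         # model sees #0..#2 and answers as #3
```

Dropping the assistant turns (#1, #3) does change behavior: the model loses track of what it already said.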
[Magnum/SE] LLama 3.3 70b
57
Hello again, folks! We've got something a little different to share this time. It's not a full release or a new series as of yet, but more like an epilogue to the v4 series we released a few months back. DoctorShotgun wasn't entirely satisfied with how the large models in the series turned out, so he spent some more time in the lab - this time on the newer llama 3.3 model for a change: [https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v4-SE](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v4-SE) This time, the model was trained as an rslora with recommendations from Gryphe of Mythomax fame, and it comes with the full set of adapter checkpoints for mergers and other experimenters to play around with ([available here](https://huggingface.co/Doctor-Shotgun/Magnum-v4-SE-70B-LoRA)). Preliminary testing suggests that rslora adequately style-transfers the classic Claude-y flavor of magnum to the llama 3.3 model. In terms of changes to the data, the model doesn't deviate too far from the v4 series. The dataset includes some further cleaning of the RP log dataset used in v4, as well as the re-introduction of a subset of the data used in the v2 and earlier models. As per usual, the training config is linked from the model card in the spirit of open source. No first-party quants are available at this time, but links to those created by well-known quanters are linked in the model description. Hope you enjoy this belated New Years present, and stay tuned for what's to come!
2025-01-17T16:54:07
https://www.reddit.com/r/LocalLLaMA/comments/1i3kv1n/magnumse_llama_33_70b/
lucyknada
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3kv1n
false
null
t3_1i3kv1n
/r/LocalLLaMA/comments/1i3kv1n/magnumse_llama_33_70b/
false
false
self
57
{'enabled': False, 'images': [{'id': '_GNxGlqytIboTVafo63MP51m4Pre1VBSMvfwIZ7lyJs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OulaTA0iLU_ZB0lP9Cybw9YSZVhXk9mcP-oILIJ_zrE.jpg?width=108&crop=smart&auto=webp&s=cd2cf07fc39d57dd8f8506343f9b84c9f30872d7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OulaTA0iLU_ZB0lP9Cybw9YSZVhXk9mcP-oILIJ_zrE.jpg?width=216&crop=smart&auto=webp&s=c4239413e08903de9654b26ef3cf7ac08b937c12', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OulaTA0iLU_ZB0lP9Cybw9YSZVhXk9mcP-oILIJ_zrE.jpg?width=320&crop=smart&auto=webp&s=c1f5fb66f700cbee9c3b5a52a5abee888c778e7d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OulaTA0iLU_ZB0lP9Cybw9YSZVhXk9mcP-oILIJ_zrE.jpg?width=640&crop=smart&auto=webp&s=cdbc5ac85044410820db02c5f1e39f08ed5842be', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OulaTA0iLU_ZB0lP9Cybw9YSZVhXk9mcP-oILIJ_zrE.jpg?width=960&crop=smart&auto=webp&s=ccb0787ab37955c25570941e370e1a5c050d9867', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OulaTA0iLU_ZB0lP9Cybw9YSZVhXk9mcP-oILIJ_zrE.jpg?width=1080&crop=smart&auto=webp&s=55a11538db534c8e84d6224d8e58746ebd097e5c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OulaTA0iLU_ZB0lP9Cybw9YSZVhXk9mcP-oILIJ_zrE.jpg?auto=webp&s=664ce2af7c75f353b4cbf93be700e8904a50259b', 'width': 1200}, 'variants': {}}]}
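For anyone wanting to reproduce the "trained as an rslora" part: in peft terms that is LoraConfig with use_rslora=True, which scales adapters by alpha/sqrt(r) instead of alpha/r and tends to behave better at higher ranks. The hyperparameters below are illustrative, not the values from the linked training config.

```python
# Rank-stabilized LoRA adapter configuration via peft.
from peft import LoraConfig

config = LoraConfig(
    r=64,
    lora_alpha=32,
    use_rslora=True,   # rank-stabilized scaling: alpha / sqrt(r)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```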
How many documents do I need to create something useful?
0
I'm an attorney and I can't really put client data into ChatGPT. I was thinking about taking all of the cases and statutes (laws) and feeding them to a local LLM. It wouldn't be a ton of documents, probably in the 3k range. Would this be feasible, or would I need a lot more documents? This would just be for personal use.
2025-01-17T17:42:03
https://www.reddit.com/r/LocalLLaMA/comments/1i3m0h9/how_many_documents_do_i_need_to_create_something/
irr1449
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3m0h9
false
null
t3_1i3m0h9
/r/LocalLLaMA/comments/1i3m0h9/how_many_documents_do_i_need_to_create_something/
false
false
self
0
null
A local model recognizing prices
1
[removed]
2025-01-17T18:02:33
https://www.reddit.com/r/LocalLLaMA/comments/1i3mi6e/a_local_model_recognizing_price/
NikIta_Gx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3mi6e
false
null
t3_1i3mi6e
/r/LocalLLaMA/comments/1i3mi6e/a_local_model_recognizing_price/
false
false
self
1
null