title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to do Multimodal (Text and Vision) Quantization? | 1 | [removed] | 2025-01-18T20:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/1i4gdhy/how_to_do_multimodal_text_and_vision_quantization/ | Epidemic888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4gdhy | false | null | t3_1i4gdhy | /r/LocalLLaMA/comments/1i4gdhy/how_to_do_multimodal_text_and_vision_quantization/ | false | false | self | 1 | null |
Trying to Run Lexi on Windows with no luck. Please help. | 0 | I'm just trying to run Lexi on my PC. I keep getting "fatal: repository 'https://github.com/lexi-uncensored/' not found"
| 2025-01-18T20:29:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i4gm4b/trying_to_run_lexi_on_windows_with_no_luck_please/ | OhTheHueManatee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4gm4b | false | null | t3_1i4gm4b | /r/LocalLLaMA/comments/1i4gm4b/trying_to_run_lexi_on_windows_with_no_luck_please/ | false | false | self | 0 | null |
Kokoro TTS + GLadOS(less) ASR = blazing fast local voice assistant | 1 | [removed] | 2025-01-18T20:32:38 | https://www.reddit.com/r/LocalLLaMA/comments/1i4go8r/kokoro_tts_gladosless_asr_blazing_fast_local/ | Voidmesmer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4go8r | false | null | t3_1i4go8r | /r/LocalLLaMA/comments/1i4go8r/kokoro_tts_gladosless_asr_blazing_fast_local/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NuV7gsuv2rmmb9DnI5DFXENATihnYo24DGUEVg1_oKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mOtNud4vNU1Ag-NrZajpKN9T_-triDJ9VbGOS9-WOEk.jpg?width=108&crop=smart&auto=webp&s=d86d68bae296e3307b0e80b7a81d9cd16480d9dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mOtNud4vNU1Ag-NrZajpKN9T_-triDJ9VbGOS9-WOEk.jpg?width=216&crop=smart&auto=webp&s=5ad4c97a1a178aac6c87502c1d055a4f7ff28570', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mOtNud4vNU1Ag-NrZajpKN9T_-triDJ9VbGOS9-WOEk.jpg?width=320&crop=smart&auto=webp&s=751e2ecb408b1ecf527f16a4d4fff8ae9ecc881d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mOtNud4vNU1Ag-NrZajpKN9T_-triDJ9VbGOS9-WOEk.jpg?width=640&crop=smart&auto=webp&s=9d89b958d94ee73ad76f1bc887eaf138f7f73f41', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mOtNud4vNU1Ag-NrZajpKN9T_-triDJ9VbGOS9-WOEk.jpg?width=960&crop=smart&auto=webp&s=de85605943b7cee4a63668ce32dcf1a7ca17aff9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mOtNud4vNU1Ag-NrZajpKN9T_-triDJ9VbGOS9-WOEk.jpg?width=1080&crop=smart&auto=webp&s=0951df2cb9edd9f8fce00508df1d156c2bbc1913', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mOtNud4vNU1Ag-NrZajpKN9T_-triDJ9VbGOS9-WOEk.jpg?auto=webp&s=ff8675c22e7eb9161327b7a224db37b5fef157fc', 'width': 1200}, 'variants': {}}]} |
|
App Structure | 0 | What structure should I use to get both a streamed response and JSON output? For example, I want a user to interact with a local model (Ollama). I need the response to be streamed so the user is not left “waiting” and gets a ChatGPT-like experience, but I also need to know whether that message should trigger any functions.
Example:
(Simple convo)
User: hi
Ollama: Hi there how can I help?
User: bla bla
….
(Threat)
User: Hi I have a knife that I’ll use!
Ollama: Calls userMadeThreat()
Ollama: Hi there please calm down…
(Service)
User: Hi I want water!
Ollama: Calls userRequestedService()
Ollama: I let the team know!
For the above, should I call Ollama twice?
First call is a JSON structure that will let me know the type of request.
Second call a stream of just a conversation with the LLM.
Or just do one call but tell Ollama to output the type in the beginning + convo text example “THREAT:Hi there take it easy” | 2025-01-18T20:43:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i4gwyk/app_structure/ | Main_Search_9362 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4gwyk | false | null | t3_1i4gwyk | /r/LocalLLaMA/comments/1i4gwyk/app_structure/ | false | false | self | 0 | null |
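A minimal sketch of the two-call option described above (assuming the `ollama` Python client and a placeholder model name `llama3`, neither of which the post specifies): the first, non-streamed call classifies the message as JSON; the second streams the conversational reply.

```python
import json
import ollama  # assumes the official `ollama` Python package and a local Ollama server

def classify(user_message: str) -> dict:
    # First call: non-streamed, returns a tiny JSON object with the request type.
    resp = ollama.chat(
        model="llama3",  # placeholder model name
        messages=[
            {"role": "system",
             "content": 'Classify the user message. Reply only with JSON: {"type": "THREAT" | "SERVICE" | "CHAT"}'},
            {"role": "user", "content": user_message},
        ],
        format="json",  # ask Ollama to constrain the output to valid JSON
    )
    return json.loads(resp["message"]["content"])

def stream_reply(user_message: str):
    # Second call: stream the conversational answer token by token.
    for chunk in ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": user_message}],
        stream=True,
    ):
        yield chunk["message"]["content"]

message = "Hi I have a knife that I'll use!"
if classify(message).get("type") == "THREAT":
    pass  # e.g. call your userMadeThreat() hook here before replying
for token in stream_reply(message):
    print(token, end="", flush=True)
```

The single-call variant would instead prompt the model to prefix its reply with the type (e.g. `THREAT:`), buffer the stream until the first colon, and strip that prefix before showing the rest to the user.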
Kokoro TTS and GLaDOS make a low latency, realistic AI voice assistant | 94 | 2025-01-18T20:50:12 | https://www.youtube.com/watch?v=EBHwOCoGMKQ | Voidmesmer | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i4h1qo | false | {'oembed': {'author_name': 'Kaminoer', 'author_url': 'https://www.youtube.com/@Kaminoer', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/EBHwOCoGMKQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="KokoDOS AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/EBHwOCoGMKQ/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'KokoDOS AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i4h1qo | /r/LocalLLaMA/comments/1i4h1qo/kokoro_tts_and_glados_make_a_low_latency/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'LLQggXkZO2updzplAV5ybWQf4XCJqznopYLrWOv2m7s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dqmU2zN05DBQIRTAk9vSzRwo09lhayQQe9rfQzGoyS8.jpg?width=108&crop=smart&auto=webp&s=4cba6be3c9d68360ee13bea82a0c2efc8ef4cee6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dqmU2zN05DBQIRTAk9vSzRwo09lhayQQe9rfQzGoyS8.jpg?width=216&crop=smart&auto=webp&s=7b3089e6b25059f8653152ba996e1e75c141e136', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dqmU2zN05DBQIRTAk9vSzRwo09lhayQQe9rfQzGoyS8.jpg?width=320&crop=smart&auto=webp&s=3af3bcb4c5c2426d61848d63048912843cb91350', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/dqmU2zN05DBQIRTAk9vSzRwo09lhayQQe9rfQzGoyS8.jpg?auto=webp&s=37078aea4ae8031d655d83588a0da84e214abe9d', 'width': 480}, 'variants': {}}]} |
||
What would you do with free access to a 4x H100 server? | 45 | Long story short I have one in the lab and all that’s being run on it thus far are benchmarks. What should I do with it? | 2025-01-18T20:52:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i4h3pp/what_would_you_do_with_free_access_to_a_4x_h100/ | SquareJordan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4h3pp | false | null | t3_1i4h3pp | /r/LocalLLaMA/comments/1i4h3pp/what_would_you_do_with_free_access_to_a_4x_h100/ | false | false | self | 45 | null |
Theory: trying to use newer and more powerful LLMs to sound more human is likely moving in the wrong direction | 189 | I feel like using more powerful LLMs to try to achieve human like speech is probably moving AWAY from the solution rather than towards it.
My thought process is this: what's the difference between Llama 3.3 and Llama 1? Besides technology, I mean. Let's go back in time, to the days when we only had a 2048-token context window; before things like NTK RoPE scaling, when your only choices were linear RoPE scaling and/or a superHoT model. Forget all the technological differences, and think: what differentiated the models of those days from the models of today?
Synthetic data in the training, right? The models back then were trained on minimal, if any, synthetic data. Even most of the chat models were barely trained on user/assistant type interactions. The training data was mostly human text.
But today? Modern models have tons and tons of "GPTisms" baked into the training data, not even speaking about how bad the instruct tune can be. So of course when you talk to the thing, it's going to sound like what we now expect an AI Assistant to sound like. And the more powerful models we get, the more we're going to see this synthetic data further baked in there.
In my opinion, it makes a lot of sense to go back and try some of these older models for this purpose, such as using a workflow to have the final step use an old Llama 1 model to rewrite the output of the modern LLMs.
Just to test it, I updated one of my Wilmer workflows to use Mistral Small as the thinker and Guanaco 65b as the responder. I had Mistral write the initial response to me, trying to sound human, and gave Guanaco very simple instructions to take that response and rewrite it to also sound more human. I then did the same using ChatGPT and Llama 3.3 70b.
**NOTE:** *This was a very small and simple POC test, and the quality of the output could be far higher with proper prompting, but I was just curious what the difference would be.*
I asked the question: ***"Which do you like better: Java or C#?"***
**Mistral Small response:**
>Well, it depends on what you're looking to do. Both Java and C# have their strengths. For example:
>\- \*\*Java\*\*: It’s great for building web applications, Android apps, and large-scale enterprise systems. It has a larger community and more libraries available.
>\- \*\*C#\*\*: It's fantastic for game development (thanks to Unity), building Windows applications, and integrating with other Microsoft services.
>Personally, I think they both have their use cases. Java might be more versatile for certain web projects, but C# can be incredibly efficient for desktop applications and games.
>Do you have a specific type of project in mind that you want to work on?
Obviously a very complete and well written answer, but also very indicative that an AI wrote it. Something about the way it responds is familiar to folks who would be using chatgpt and the like.
Now, look at **Guanaco 65b's rewrite**:
>I'd say Java and C# both have their pros and cons. Java is great for building websites, Android apps, and huge enterprise systems. But C# is amazing for making games with Unity, building Windows programs, and hooking into other Microsoft services. Personally, I'd use Java for web stuff and C# for desktop apps and games, but it depends on your project.
>So, what're you trying to build? I can give you some pointers.
The difference is subtle, but see how it feels a bit more... human?
Here is **ChatGPT o1's rewrite**:
>Honestly, it really depends on what you’re trying to accomplish. Both Java and C# have their own strengths:
>\* Java: Great for web applications, Android apps, and large-scale enterprise systems. It also has a huge community and tons of libraries.
>\* C#: Awesome for game development (especially with Unity), building Windows apps, and integrating with Microsoft services.
>Personally, I think both are solid choices. Java might be more versatile for certain web-based projects, while C# shines when it comes to desktop applications and gaming.
>Any specific type of project you’re looking to work on?
This one remains sounding like an AI assistant. Same instructions and prompt I gave Guanaco.
Here is **Llama 3.3's rewrite**:
>so you're wondering about c# and java, right? they both have their own things goin for 'em, and honestly it kinda comes down to what you're workin on and what you like. they're actually pretty similar, they're both object-oriented, they both run on virtual machines, c# uses .net and java uses the jvm, and they both got a ton of libraries and a huge community behind 'em.
>so like, here's some differences, ok? c# is made by microsoft, so it works super well with windows and all that, and java's open-source and can run on pretty much anything that's got a jvm, so that's linux, mac, android, all that. and syntax-wise, they're a little different too, like c# uses these curly braces to define a block of code and java uses semicolons to end statements and all that.
>personally, i'm cool with both, i mean c# is awesome for windows stuff, but java's got that whole cross-platform thing goin on, so yeah… what about you, got a preference or whatever?
It's... well, about what you'd expect lol. Llama also decided that rewriting like a human means no capitalization =D Also, it added some details of its own to the response, but that's ok.
Again, this is a very simplistic POC just to show what I mean, and I'm probably going to spend more time delving into the idea, but I really think that there's still value in these old models, especially when trying to write things that sound more "human", like maybe article or email writing.
Anyhow, just a thought I'd figure might spark some conversation. | 2025-01-18T21:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i4hb2l/theory_trying_to_use_newer_and_more_powerful_llms/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4hb2l | false | null | t3_1i4hb2l | /r/LocalLLaMA/comments/1i4hb2l/theory_trying_to_use_newer_and_more_powerful_llms/ | false | false | self | 189 | null |
Reaper ReaScript/LUA specific training | 1 | [removed] | 2025-01-18T21:02:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i4hb7s/reaper_reascriptlua_specific_training/ | ferropop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4hb7s | false | null | t3_1i4hb7s | /r/LocalLLaMA/comments/1i4hb7s/reaper_reascriptlua_specific_training/ | false | false | self | 1 | null |
LLama Index Documentation Assistant with phi4 and ollama | 1 | 2025-01-18T21:15:47 | https://v.redd.it/5nzh26wejtde1 | oridnary_artist | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i4hm5a | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5nzh26wejtde1/DASHPlaylist.mpd?a=1739826963%2COTEzNWIxMzhlNDgzN2Q5MWZmZjJjOWFlM2EyOGZlZWY0ZjBkMGVkMTM2OWNhNTc3NmFkMTBhMzU1YTRhYTdhNg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/5nzh26wejtde1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 874, 'hls_url': 'https://v.redd.it/5nzh26wejtde1/HLSPlaylist.m3u8?a=1739826963%2CODc3MzNmOGQzZmRmMmFiYzJlYmVjOTY5N2RjMWJmNzQ4NDI4Nzg1MDQzZGIyOWEwZTEwNzhiZTlhZjBlMjNkYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5nzh26wejtde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1i4hm5a | /r/LocalLLaMA/comments/1i4hm5a/llama_index_documentation_assistant_with_phi4_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bTI1Y2FmdmVqdGRlMQpxhBzZcKM0vjh7QfEh6AQM_565qKsHsYnJbi1X94T-', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/bTI1Y2FmdmVqdGRlMQpxhBzZcKM0vjh7QfEh6AQM_565qKsHsYnJbi1X94T-.png?width=108&crop=smart&format=pjpg&auto=webp&s=713e496c1af758cefd9187f482e185f85c872fc9', 'width': 108}, {'height': 262, 'url': 'https://external-preview.redd.it/bTI1Y2FmdmVqdGRlMQpxhBzZcKM0vjh7QfEh6AQM_565qKsHsYnJbi1X94T-.png?width=216&crop=smart&format=pjpg&auto=webp&s=6189681213aa08769cc376c9d7d03b970c426d76', 'width': 216}, {'height': 388, 'url': 'https://external-preview.redd.it/bTI1Y2FmdmVqdGRlMQpxhBzZcKM0vjh7QfEh6AQM_565qKsHsYnJbi1X94T-.png?width=320&crop=smart&format=pjpg&auto=webp&s=c3886e1e569429a282b7f59ca24ddcf7dadf45ff', 'width': 320}, {'height': 777, 'url': 'https://external-preview.redd.it/bTI1Y2FmdmVqdGRlMQpxhBzZcKM0vjh7QfEh6AQM_565qKsHsYnJbi1X94T-.png?width=640&crop=smart&format=pjpg&auto=webp&s=fb8a7be0ab1b070e5414dc0b5b4dd8e91d0b393c', 'width': 640}], 'source': {'height': 882, 'url': 'https://external-preview.redd.it/bTI1Y2FmdmVqdGRlMQpxhBzZcKM0vjh7QfEh6AQM_565qKsHsYnJbi1X94T-.png?format=pjpg&auto=webp&s=e767c3c0db694784b40c11d6835827d7f717fb20', 'width': 726}, 'variants': {}}]} |
||
How do I use LLM to generate new reviews from a list of existing? | 0 | I have a dataset of reviews of different products. Using the tones, format and style of those reviews. I want to generate new reviews of those products. The reviews also have the complexity of using terms that wouldn’t be dictionary terms, they would be slang, acronyms, etc. | 2025-01-18T21:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1i4hp0e/how_do_i_use_llm_to_generate_new_reviews_from_a/ | s3b4k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4hp0e | false | null | t3_1i4hp0e | /r/LocalLLaMA/comments/1i4hp0e/how_do_i_use_llm_to_generate_new_reviews_from_a/ | false | false | self | 0 | null |
Motherboard for 2 3090ti in pcie slots + a third on riser | 3 | Can you share what motherboard you used for such a setup: 2 3090ti in pcie slots + a third on riser? The best I could accommodate was 1 3090ti + one 1080ti on board and another 3090ti on riser. I now want to replace the 1080ti with another 3090 or higher
| 2025-01-18T21:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i4hppu/motherboard_for_2_3090ti_in_pcie_slots_a_third_on/ | Better_Dress_8508 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4hppu | false | null | t3_1i4hppu | /r/LocalLLaMA/comments/1i4hppu/motherboard_for_2_3090ti_in_pcie_slots_a_third_on/ | false | false | self | 3 | null |
Autonomous AI Systems w/ Self Learning | 1 | [removed] | 2025-01-18T21:26:56 | https://www.youtube.com/watch?v=Kejv97FDmk8 | Majestic-Fox-563 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i4hv1g | false | {'oembed': {'author_name': 'Charles Sears', 'author_url': 'https://www.youtube.com/@charlesrsears', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Kejv97FDmk8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="From Concept to Creation: Developing Autonomous AGI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Kejv97FDmk8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'From Concept to Creation: Developing Autonomous AGI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i4hv1g | /r/LocalLLaMA/comments/1i4hv1g/autonomous_ai_systems_w_self_learning/ | false | false | 1 | {'enabled': False, 'images': [{'id': '55MNT2kLUxKO9XL0u31SdlQzD_Etthc4t__dZ0wx0QU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KgVoSIjLtYRMiZgc_YqK-NSifMfmsKx9cUEWAxAJUCk.jpg?width=108&crop=smart&auto=webp&s=79fb43978269a1b0063375151d7072213586c9e1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KgVoSIjLtYRMiZgc_YqK-NSifMfmsKx9cUEWAxAJUCk.jpg?width=216&crop=smart&auto=webp&s=a7a9c17180784e43301e1e544cb88167f42fa804', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KgVoSIjLtYRMiZgc_YqK-NSifMfmsKx9cUEWAxAJUCk.jpg?width=320&crop=smart&auto=webp&s=22f67b15a77b727e123e6b8599d71c7a504c024f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KgVoSIjLtYRMiZgc_YqK-NSifMfmsKx9cUEWAxAJUCk.jpg?auto=webp&s=9ad2f93c414fa1b0b550ee9ec393f88c21dd4996', 'width': 480}, 'variants': {}}]} |
|
The Case for Open Models | 31 | 2025-01-18T21:31:18 | https://amgadhasan.substack.com/p/the-case-for-open-models | Amgadoz | amgadhasan.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1i4hyn9 | false | null | t3_1i4hyn9 | /r/LocalLLaMA/comments/1i4hyn9/the_case_for_open_models/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'e3crMFJuNlLPiXtsRsLvtDIwK4oOJUByL249ke4_YBM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9jCHWcqHBL_Q8CbrEzDT6EMKWVd07U_SmrG2Z0RXzj8.jpg?width=108&crop=smart&auto=webp&s=1baf559db7e8eb96dfb2051b2d0c18a499cb6508', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9jCHWcqHBL_Q8CbrEzDT6EMKWVd07U_SmrG2Z0RXzj8.jpg?width=216&crop=smart&auto=webp&s=ae09c39785cd26ccbfbb4e01518c48e8aeb81d3d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9jCHWcqHBL_Q8CbrEzDT6EMKWVd07U_SmrG2Z0RXzj8.jpg?width=320&crop=smart&auto=webp&s=5e0c7f1f8a40fb3203b306ed6668ad5bfe0a9ae7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9jCHWcqHBL_Q8CbrEzDT6EMKWVd07U_SmrG2Z0RXzj8.jpg?width=640&crop=smart&auto=webp&s=bf06e18f281752c7c009c87125ee559793b0b851', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9jCHWcqHBL_Q8CbrEzDT6EMKWVd07U_SmrG2Z0RXzj8.jpg?width=960&crop=smart&auto=webp&s=d6908e0d369d063457e105f3a3c14ba6e1e89182', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9jCHWcqHBL_Q8CbrEzDT6EMKWVd07U_SmrG2Z0RXzj8.jpg?width=1080&crop=smart&auto=webp&s=92a5dcb8c698ae813ecad91cc3979fbffafa3967', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9jCHWcqHBL_Q8CbrEzDT6EMKWVd07U_SmrG2Z0RXzj8.jpg?auto=webp&s=a619a364edb3d56eb35a44ea2bc41354b6efef40', 'width': 1200}, 'variants': {}}]} |
||
I made a Huggingface Space to help build commands for OpenVINO model conversion. | 9 | # Optimum-CLI-Tool_tool
Hello!
Today I am launching [Echo9Zulu/Optimum-CLI-Tool\_tool](https://huggingface.co/spaces/Echo9Zulu/Optimum-CLI-Tool_tool), a Huggingface Space that "GUI-ifies" selecting conversion parameters for the Optimum-CLI tool for OpenVINO with Gradio. OpenVINO is an inference engine offering hardware acceleration for Intel CPUs, GPUs, and NPUs. OpenVINO runtime, Optimum-Intel for Transformers and OpenVINO GenAI all require a deceptively simple model export to an Intermediate Representation format.
I made this tool to help me convert models efficiently, and I know others may find it useful for keeping things organized.
# Features:
* Build conversion commands from available parameters.
* Respect positional arguments defined by Optimum.
* Discussion of advanced usage for OpenVINO with Transformers.
Note: At runtime your command references prebuilt configuration classes which don't support all models. However, if you open a PR the maintainers make it happen. See my issue [\#1097](https://github.com/huggingface/optimum-intel/issues/1097) and their PR [\#1099](https://github.com/huggingface/optimum-intel/pull/1099).
Check your model architecture [here](https://huggingface.co/docs/optimum/main/en/intel/openvino/models) to spare yourself a traceback.
# OpenVINO, briefly
OpenVINO is a C++ runtime with Python APIs for managing inference deployments on Intel hardware from 6th gen forward. Using OpenVINO effectively requires new research for each usecase to discover facts about your model's architecture and the hardware you are using to choose the right settings. Choose your device type and use snippets [here](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) to query device properties based on your hardware. This requires [additional dependencies](https://docs.openvino.ai/2024/get-started/install-openvino.html?PACKAGE=OPENVINO_GENAI&VERSION=v_2024_6_0&OP_SYSTEM=LINUX&DISTRIBUTION=PIP) outside Optimum but is easier, faster and more reliable than using [Intel Ark](https://www.intel.com/content/www/us/en/ark.html).
Here's an example Pytorch workflow:
* Train
* OR finetune
* OR quantize
* Convert to OpenVINO Intermediate Representation
* Test code until inference goes zoom
Note: Models exported to OpenVINO IR cannot be trained.
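As a rough sketch of the export-then-infer step above (assuming Optimum-Intel is installed via `pip install optimum[openvino]` and using `gpt2` purely as a placeholder model ID; check the supported-architectures list linked earlier first):

```python
# Sketch only: export a Transformers model to OpenVINO IR and run it.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"  # placeholder; pick a model whose architecture Optimum-Intel supports
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch weights to OpenVINO IR on the fly;
# save_pretrained() writes the .xml/.bin IR files so you only convert once.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
model.save_pretrained("gpt2-openvino")

inputs = tokenizer("OpenVINO runs on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```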
Here are some useful resources to learn more about OpenVINO:
[openvino\_notebooks](https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks): This is where the cutting edge stuff lives. Every other resource on the internet, save the source code, is 'behind' this repo. Most documentation points here, and much of the docs elsewhere are outdated.
[openvino-export](https://huggingface.co/docs/optimum/main/en/intel/openvino/export): Stored here are the parameters used in this tool. It's one to one.
[documentation](https://docs.openvino.ai/2024/index.html): I have found that going straight to git can be more useful since non-Intel examples are scarce
Due to low adoption there aren't many projects which fully leverage the features offered by the Intel AI stack. So, starting with OpenVINO through Transformers is a good way to find out if it's even worth going deeper.
Thanks for checking this out and hopefully the tool is useful! | 2025-01-18T21:35:04 | https://www.reddit.com/r/LocalLLaMA/comments/1i4i1q8/i_made_a_huggingface_space_to_help_build_commands/ | Echo9Zulu- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4i1q8 | false | null | t3_1i4i1q8 | /r/LocalLLaMA/comments/1i4i1q8/i_made_a_huggingface_space_to_help_build_commands/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'ZnwTaqr2Hp5Qz16kMjv1xkdRpUgpPEGvFTF_PkgD_HE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lCebTG1-VLk_QxrbJYse_3w9Qqg7EsgYAzifxSPilqE.jpg?width=108&crop=smart&auto=webp&s=92dfe86171f6a91bd08064e1b52cefbf474ddad9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lCebTG1-VLk_QxrbJYse_3w9Qqg7EsgYAzifxSPilqE.jpg?width=216&crop=smart&auto=webp&s=f00087025ba485717ecf2fb6fc1be4482b58c578', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lCebTG1-VLk_QxrbJYse_3w9Qqg7EsgYAzifxSPilqE.jpg?width=320&crop=smart&auto=webp&s=9b4985fe7c2f7e4069d5fb1028e178e90712fdc6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lCebTG1-VLk_QxrbJYse_3w9Qqg7EsgYAzifxSPilqE.jpg?width=640&crop=smart&auto=webp&s=b678d623825c25e90412d90fb69e62f95cc96378', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lCebTG1-VLk_QxrbJYse_3w9Qqg7EsgYAzifxSPilqE.jpg?width=960&crop=smart&auto=webp&s=239dadc17525ee1cdb55abc2c97f9b308ec06f77', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lCebTG1-VLk_QxrbJYse_3w9Qqg7EsgYAzifxSPilqE.jpg?width=1080&crop=smart&auto=webp&s=f15d512c1130a28953ca0003b0a4d5512c750f22', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lCebTG1-VLk_QxrbJYse_3w9Qqg7EsgYAzifxSPilqE.jpg?auto=webp&s=695a7eed866817f6f98007a99a9f053b8b568e0b', 'width': 1200}, 'variants': {}}]} |
What LLM hardware should I buy for my business? | 0 | With a $7k limit, ideally spending less / closer to $3k, what should a buy for my business?
I need to do considerable website scraping, data extraction from documents, and content generation. Most of this can be done in the background, and isn't live / customer facing, so it seems like the cloud is unnecessary. I'm now looking into buying some hardware. The workload is probably close to endless.
1. I was considering buying a Mac Studio when the new M4 refresh comes out with maxed-out RAM.
2. I know the new 5090's are coming out, and I could camp out with my son and grab 2 of them if that was expected to be a good option.
3. I know Nvidia is coming out with its own hardware, similar to a Mac Studio, to run LLMs. Should I be looking at that?
I care about quality. I can build a machine, or multiple of that makes sense. | 2025-01-18T21:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i4iid0/what_llm_hardware_should_i_buy_for_my_business/ | Lost_Fox__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4iid0 | false | null | t3_1i4iid0 | /r/LocalLLaMA/comments/1i4iid0/what_llm_hardware_should_i_buy_for_my_business/ | false | false | self | 0 | null |
Why are LLM benchmarks run only on individual models, and not on systems composed of models? For example, benchmarking "GPT-4" (just a model) vs "GPT-3.5 + Chain of Thought Reasoning + a bunch of other cool tricks" (a system) would've likely shown the GPT-3.5 system performs better than GPT-4... | 2 | Grok 2 + Perplexity says the following, but I'm not convinced:
"""
Large Language Model (LLM) benchmarks are primarily designed to evaluate the capabilities of individual models in isolation, rather than systems composed of multiple models or models integrated with other components. Here are several reasons why this is the case:
**1. Isolation for Fair Comparison:**
- **Standardization:** Benchmarks aim to provide a standardized way to compare different models. By testing models in isolation, benchmarks ensure that the performance metrics are directly attributable to the model itself, without the influence of external components or system-level optimizations. This allows for a fair and direct comparison across different models[1][10].
- **Model-Centric Evaluation:** The focus is on understanding the intrinsic capabilities of the model, such as its language understanding, reasoning, and generation abilities. This helps in identifying where improvements are needed within the model itself[1][10].
**2. Complexity and Variability:**
- **System Complexity:** Systems that integrate multiple models or additional components like Chain of Thought Reasoning introduce a level of complexity that benchmarks are not designed to handle. These systems can have numerous variables, including latency, user interaction, and varying data quality, which are not captured by traditional benchmarks[1][10].
- **Dynamic Environments:** Real-world systems operate in dynamic environments where inputs and conditions can change rapidly. Benchmarks, being static by nature, do not reflect these real-world conditions, making it challenging to evaluate system-level performance accurately[1].
**3. Benchmark Design:**
- **Task-Specific:** Many benchmarks are designed for specific, well-defined tasks like translation, summarization, or coding. These tasks are easier to evaluate when the model is isolated, as the metrics for success are clear and directly related to the model's output[1][10].
- **Surface-Level Insights:** Benchmarks often provide surface-level insights into model performance. They might not capture the nuanced interactions and dependencies within a system, which could lead to misleading results if applied to system-level evaluations[1].
**4. Limitations of Current Benchmarks:**
- **Restricted Scope:** Current benchmarks often focus on areas where LLMs have already shown proficiency, which might not reflect the broader capabilities or the real-world application of a system[11].
- **Short Life Span:** Benchmarks can quickly become outdated as models improve, necessitating the creation of more challenging tasks. This rapid evolution makes it difficult to design benchmarks that can effectively evaluate complex systems over time[11].
**5. Real-World Performance:**
- **Contextual Performance:** A model's performance can vary greatly based on context. Benchmarks might not predict how well a model will perform in the complex, dynamic tasks that a system might encounter in real-world applications[6].
- **System-Specific Metrics:** For evaluating systems, organizations often need to design their own benchmarks that reflect the specific use cases and metrics relevant to their applications, rather than relying on generic LLM benchmarks[10].
**Conclusion:**
While LLM benchmarks are invaluable for comparing and improving individual models, they fall short when it comes to evaluating systems composed of multiple models or models integrated with other components. The complexity, variability, and dynamic nature of real-world systems require a different approach to evaluation, one that considers the entire system's performance, including how different components interact and contribute to the overall functionality. This necessitates the development of system-specific benchmarks or evaluations that go beyond the capabilities of current LLM benchmarks.
Citations:
[1] https://www.thoughtworks.com/en-us/insights/blog/generative-ai/LLM-benchmarks,-evals,-and-tests
[2] https://www.confident-ai.com/blog/evaluating-llm-systems-metrics-benchmarks-and-best-practices
[3] https://www.vellum.ai/blog/llm-benchmarks-overview-limits-and-model-comparison
[4] https://www.ibm.com/think/topics/llm-benchmarks
[5] https://www.galileo.ai/blog/llm-benchmarks-performance-evaluation-guide
[6] https://www.cdomagazine.tech/opinion-analysis/top-3-reasons-why-llm-benchmarks-fail-to-predict-ai-success-in-your-business
[7] https://symbl.ai/developers/blog/an-in-depth-guide-to-benchmarking-llms/
[8] https://www.holisticai.com/blog/navigating-llm-benchmark
[9] https://www.turing.com/resources/understanding-llm-evaluation-and-benchmarks
[10] https://www.evidentlyai.com/llm-guide/llm-benchmarks
[11] https://www.vellum.ai/blog/llm-benchmarks-overview-limits-and-model-comparison
[12] https://www.reddit.com/r/LocalLLaMA/comments/1fbdk43/benchmarks_are_hurting_the_models/
[13] https://www.citrusx.ai/post/7-llm-benchmarks-for-performance-capabilities-and-limitations
[14] https://datasciencedojo.com/blog/llm-benchmarks-for-evaluation/
[15] https://www.ibm.com/think/topics/llm-benchmarks
[16] https://www.reddit.com/r/LocalLLaMA/comments/1h4nz7b/i_built_this_tool_to_compare_llms/
[17] https://www.holisticai.com/blog/navigating-llm-benchmark
[18] https://www.thoughtworks.com/en-us/insights/blog/generative-ai/LLM-benchmarks,-evals,-and-tests
[19] https://www.turing.com/resources/understanding-llm-evaluation-and-benchmarks
"""
| 2025-01-18T22:36:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i4jct3/why_are_llm_benchmarks_run_only_on_individual/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4jct3 | false | null | t3_1i4jct3 | /r/LocalLLaMA/comments/1i4jct3/why_are_llm_benchmarks_run_only_on_individual/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'qaO6PCUA36zm6eBtXjggO7vXxzslz69Atv5QR0CGKGA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WtJqU1wLiULJfvz5orp1YV2taeifwtqE0dWcurbKbUM.jpg?width=108&crop=smart&auto=webp&s=91f1518fc58c6ba9cc548bbbb6f224e634f7c40d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WtJqU1wLiULJfvz5orp1YV2taeifwtqE0dWcurbKbUM.jpg?width=216&crop=smart&auto=webp&s=8e93f6a3a84d4b8187942e10e7decc80c376eb66', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/WtJqU1wLiULJfvz5orp1YV2taeifwtqE0dWcurbKbUM.jpg?width=320&crop=smart&auto=webp&s=564336f7f64ad1aa2fa8c91e5ab910930e2ccc38', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/WtJqU1wLiULJfvz5orp1YV2taeifwtqE0dWcurbKbUM.jpg?width=640&crop=smart&auto=webp&s=0a0c2a3429f29f7132fc755b7eefb25aba0d16ef', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/WtJqU1wLiULJfvz5orp1YV2taeifwtqE0dWcurbKbUM.jpg?width=960&crop=smart&auto=webp&s=c7d071e8253e323d2dcc8e421c309404097167a9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WtJqU1wLiULJfvz5orp1YV2taeifwtqE0dWcurbKbUM.jpg?width=1080&crop=smart&auto=webp&s=07449170e187bda3b5b8a2c51094acc9a886c2c4', 'width': 1080}], 'source': {'height': 950, 'url': 'https://external-preview.redd.it/WtJqU1wLiULJfvz5orp1YV2taeifwtqE0dWcurbKbUM.jpg?auto=webp&s=4957460c4926fd6d25d565c07e422850322bca1e', 'width': 1690}, 'variants': {}}]} |
Sky-T1-32B : BETTER THAN O1??? | 1 | I know this model has been hyped, but as someone that uses 4o, O1, and Claude 3.5 sonnet, is it just me or does this thing put out more reliable code than O1? Where with O1 I have to plug it in a few times to iron out silly bugs, Sky-T1 seems to work in the first try closer to Sonnet level. Lmk if yall have tried it wyt it seems pretty freaking good. | 2025-01-18T22:57:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i4jsrb/skyt132b_better_than_o1/ | zachmcd09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4jsrb | false | null | t3_1i4jsrb | /r/LocalLLaMA/comments/1i4jsrb/skyt132b_better_than_o1/ | false | false | self | 1 | null |
Sky-T1-32B : BETTER THAN O1??? | 0 | I know this model has been hyped, but as someone that uses 4o, o1, and Claude 3.5 sonnet, is it just me or does this thing put out more reliable code than O1?
Where with o1 I have to plug it in a few times to iron out silly bugs, Sky-T1 seems to work in the first try closer to Sonnet level. Lmk if yall have tried it wyt it seems pretty freaking good. | 2025-01-18T22:58:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i4ju0e/skyt132b_better_than_o1/ | AIGuy3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4ju0e | false | null | t3_1i4ju0e | /r/LocalLLaMA/comments/1i4ju0e/skyt132b_better_than_o1/ | false | false | self | 0 | null |
Can you really compensate for weak LLMs with RAG? | 1 | [removed] | 2025-01-18T23:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i4kl49/can_you_really_compensate_for_weak_llms_with_rag/ | 6metalhead9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4kl49 | false | null | t3_1i4kl49 | /r/LocalLLaMA/comments/1i4kl49/can_you_really_compensate_for_weak_llms_with_rag/ | false | false | self | 1 | null |
Can you compensate for weak LLMs with RAG? | 13 | I locally work with a NVIDIA GeForce GTX 1660 Ti, which has 6 GB of VRAM. I use local LLMs for a lot a small, respective tasks, and it works great. Q/V Quantization, Flash Attention, and the smaller models (like Microsoft's phi line) really make this work.
I'm starting a more complicated project where I have my LLM generate code for a medium-popularity Python library to accomplish complex tasks. My local LLM hallucinates heavily about this library, and it is not really effective at all.
I'm trying to use RAG on the library's documentation to inform the LLM about the library's API, and I'm even trying agentic reasoning, where I tell the LLM to ask questions that it needs answered. I then use these questions to facilitate RAG.
My solution is beginning to take minutes at a time, and *it still sucks*. Obviously I can do this perfect with OpenAI's 4o with no RAG necessary - it just does it.
**How effective is RAG at helping weak LLMs? Is this a worthless endeavor?** | 2025-01-18T23:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i4kny3/can_you_compensate_for_weak_llms_with_rag/ | Top-Coyote-1832 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4kny3 | false | null | t3_1i4kny3 | /r/LocalLLaMA/comments/1i4kny3/can_you_compensate_for_weak_llms_with_rag/ | false | false | self | 13 | null |
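For reference, a bare-bones sketch of the retrieve-then-answer loop described above (assumptions: `sentence-transformers` for embeddings, an example embedding model, and documentation that is already split into chunks):

```python
# Minimal RAG retrieval sketch: embed doc chunks once, then pull the top-k
# chunks for each question the LLM generates and paste them into the prompt.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small example embedding model

doc_chunks = ["...chunk 1 of the library docs...", "...chunk 2..."]  # your pre-split docs
chunk_embeddings = embedder.encode(doc_chunks, convert_to_tensor=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    q_emb = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, chunk_embeddings, top_k=k)[0]
    return [doc_chunks[hit["corpus_id"]] for hit in hits]

# The "agentic" step from the post: ask the weak LLM what it needs to know,
# retrieve for each sub-question, then answer with that context in the prompt.
context = "\n\n".join(retrieve("How do I open a connection with this library?"))
prompt = f"Use only this documentation:\n{context}\n\nTask: write the code."
```

In a setup like this, chunking quality, the number of retrieved chunks, and how specific the generated sub-questions are often matter more than raw model size, so they are worth tuning before writing off the 6 GB card.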
We need to be able to train models on consumer-grade hardware | 120 | The number of papers being published is off the charts, and there’s no way the big players can implement everything. That means some game-changing ideas might slip through the cracks. But if everyday folks could test out these so-called breakthroughs, we’d be in a position to help the big players spot the real gems worth scaling up. | 2025-01-18T23:55:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i4kzgp/we_need_to_be_able_to_train_models_on/ | yoop001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4kzgp | false | null | t3_1i4kzgp | /r/LocalLLaMA/comments/1i4kzgp/we_need_to_be_able_to_train_models_on/ | false | false | self | 120 | null |
What LLM benchmarks actually measure (explained intuitively) | 134 | # 1. GPQA (Graduate-Level Google-Proof Q&A Benchmark)
* **What it measures**: GPQA evaluates LLMs on their ability to answer highly challenging, graduate-level questions in biology, physics, and chemistry. These questions are designed to be "Google-proof," meaning they require deep, specialized understanding and reasoning that cannot be easily found through a simple internet search.
* **Key Features**:
* **Difficulty**: Questions are crafted to be extremely difficult, with experts achieving around 65% accuracy.
* **Domain Expertise**: Tests the model's ability to handle complex, domain-specific questions.
* **Real-World Application**: Useful for scalable oversight experiments where AI systems need to provide reliable information beyond human capabilities.
# 2. MMLU (Massive Multitask Language Understanding)
* **What it measures**: MMLU assesses the general knowledge and problem-solving abilities of LLMs across 57 subjects, ranging from elementary mathematics to professional fields like law and ethics. It tests both world knowledge and reasoning skills.
* **Key Features**:
* **Breadth**: Covers a wide array of topics, making it a comprehensive test of an LLM's understanding.
* **Granularity**: Evaluates models in zero-shot and few-shot settings, mimicking real-world scenarios where models must perform with minimal context.
* **Scoring**: Models are scored based on their accuracy in answering multiple-choice questions.
# 3. MMLU-Pro
* **What it measures**: An enhanced version of MMLU, MMLU-Pro introduces more challenging, reasoning-focused questions and increases the number of answer choices from four to ten, making the tasks more complex.
* **Key Features**:
* **Increased Complexity**: More reasoning-intensive questions, reducing the chance of correct answers by random guessing.
* **Stability**: Demonstrates greater stability under varying prompts, with less sensitivity to prompt variations.
* **Performance Drop**: Causes a significant drop in accuracy compared to MMLU, highlighting its increased difficulty.
# 4. MATH
* **What it measures**: The MATH benchmark evaluates LLMs on their ability to solve complex mathematical problems, ranging from high school to competition-level mathematics.
* **Key Features**:
* **Problem Types**: Includes algebra, geometry, probability, and calculus problems.
* **Step-by-Step Solutions**: Each problem comes with a detailed solution, allowing for evaluation of reasoning steps.
* **Real-World Application**: Useful for educational applications where accurate and efficient problem-solving is crucial.
# 5. HumanEval
* **What it measures**: HumanEval focuses on the functional correctness of code generated by LLMs. It consists of programming challenges where models must generate code that passes provided unit tests.
* **Key Features**:
* **Code Generation**: Tests the model's ability to understand and produce functional code from docstrings.
* **Evaluation Metric**: Uses the pass@k metric, where 'k' different solutions are generated, and the model is considered successful if any solution passes all tests.
* **Real-World Coding**: Simulates real-world coding scenarios where multiple attempts might be made to solve a problem.
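For reference, pass@k is usually computed with the unbiased estimator from the HumanEval paper, where n is the number of samples generated per problem and c the number that pass the tests; a small sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Probability that at least one of k randomly drawn samples (out of n) is correct.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=3, k=5))  # ~0.60
```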
# 6. MMMU (Massive Multi-discipline Multimodal Understanding and Reasoning)
* **What it measures**: MMMU evaluates multimodal models on tasks requiring college-level subject knowledge and deliberate reasoning across various disciplines, including visual understanding.
* **Key Features**:
* **Multimodal**: Incorporates text and images, testing models on tasks like understanding diagrams, charts, and other visual formats.
* **Expert-Level**: Questions are sourced from university-level materials, ensuring high difficulty.
* **Comprehensive**: Covers six core disciplines with over 183 subfields, providing a broad assessment.
# 7. MathVista
* **What it measures**: MathVista assesses mathematical reasoning in visual contexts, combining challenges from diverse mathematical and graphical tasks.
* **Key Features**:
* **Visual Context**: Requires models to understand and reason with visual information alongside mathematical problems.
* **Benchmark Composition**: Derived from existing datasets and includes new datasets for specific visual reasoning tasks.
* **Performance Gap**: Highlights the gap between LLM capabilities and human performance in visually intensive mathematical reasoning.
# 8. DocVQA (Document Visual Question Answering)
* **What it measures**: DocVQA evaluates models on their ability to answer questions based on document images, testing both textual and visual comprehension.
* **Key Features**:
* **Document Understanding**: Assesses the model's ability to interpret various document elements like text, tables, and figures.
* **Real-World Scenarios**: Mimics real-world document analysis tasks where understanding context and layout is crucial.
* **Evaluation Metric**: Uses metrics like Average Normalized Levenshtein Similarity (ANLS) to measure performance.
# 9. HELM (Holistic Evaluation of Language Models)
* **What it measures**: HELM evaluates LLMs from multiple angles, offering a comprehensive view of their performance. It assesses accuracy, performance across various tasks, and integrates qualitative reviews to capture subtleties in model responses.
* **Key Features**:
* **Holistic Approach**: Uses established datasets to assess accuracy and performance, alongside qualitative reviews for a nuanced understanding.
* **Error Analysis**: Conducts detailed error analysis to identify specific areas where models struggle.
* **Task Diversity**: Covers a wide range of tasks, from text classification to machine translation, providing a broad assessment of model capabilities.
# 10. GLUE (General Language Understanding Evaluation)
* **What it measures**: GLUE provides a baseline for evaluating general language understanding capabilities of LLMs. It includes tasks like sentiment analysis, question answering, and textual entailment.
* **Key Features**:
* **Comprehensive**: Encompasses a variety of NLP tasks, making it a robust benchmark for general language understanding.
* **Publicly Available**: Datasets are publicly available, allowing for widespread use and comparison.
* **Leaderboard**: GLUE maintains a leaderboard where models are ranked based on their performance across its tasks.
# 11. BIG-Bench Hard (BBH)
* **What it measures**: BBH focuses on the limitations and failure modes of LLMs by selecting particularly challenging tasks from the larger BIG-Bench benchmark.
* **Key Features**:
* **Difficulty**: Consists of 23 tasks where no prior model outperformed average human-rater scores, highlighting areas where models fall short.
* **Focused Evaluation**: Aims to push the boundaries of model capabilities by concentrating on tasks that are difficult for current models.
* **Real-World Relevance**: Tasks are designed to reflect real-world challenges where models need to demonstrate advanced reasoning and understanding.
# 12. MT-Bench
* **What it measures**: MT-Bench evaluates models' ability to engage in coherent, informative, and engaging conversations, focusing on conversation flow and instruction-following capabilities.
* **Key Features**:
* **Multi-Turn**: Contains 80 questions with follow-up questions, simulating real-world conversational scenarios.
* **LLM-as-a-Judge**: Uses strong LLMs like GPT-4 to assess the quality of model responses, providing an objective evaluation.
* **Human Preferences**: Responses are annotated by graduate students with domain expertise, ensuring relevance and quality.
# 13. FinBen
* **What it measures**: FinBen is designed to evaluate LLMs in the financial domain, covering tasks like information extraction, text analysis, question answering, and more.
* **Key Features**:
* **Domain-Specific**: Focuses on financial tasks, providing a specialized benchmark for financial applications.
* **Broad Task Coverage**: Includes 36 datasets covering 24 tasks in seven financial domains, offering a comprehensive evaluation.
* **Real-World Application**: Evaluates models on practical financial tasks, including stock trading, highlighting their utility in financial services.
# 14. LegalBench
* **What it measures**: LegalBench assesses LLMs' legal reasoning capabilities, using datasets from various legal domains.
* **Key Features**:
* **Legal Reasoning**: Tests models on tasks requiring legal knowledge and reasoning, crucial for legal applications.
* **Collaborative Development**: Developed through collaboration, ensuring a wide range of legal tasks are covered.
* **Real-World Scenarios**: Mimics real-world legal scenarios where models must interpret and apply legal principles. | 2025-01-19T00:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i4l5hb/what_llm_benchmarks_actually_measure_explained/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4l5hb | false | null | t3_1i4l5hb | /r/LocalLLaMA/comments/1i4l5hb/what_llm_benchmarks_actually_measure_explained/ | false | false | self | 134 | null |
fuck your corporation | 1 | [removed] | 2025-01-19T00:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i4l70z/fuck_your_corporation/ | Murcielago-980 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4l70z | false | null | t3_1i4l70z | /r/LocalLLaMA/comments/1i4l70z/fuck_your_corporation/ | false | false | self | 1 | null |
Can anyone tell me what these are? Is this AI? | 1 | [removed] | 2025-01-19T00:44:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i4lz0m | false | {'oembed': {'author_name': 'Pakistan ZindaGOOD', 'author_url': 'https://www.youtube.com/@PakistanZindaGOOD', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/dIrxraoXOBU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="I pointed a 4K infrared camera at the stars in the sky last night"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/dIrxraoXOBU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'I pointed a 4K infrared camera at the stars in the sky last night', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i4lz0m | /r/LocalLLaMA/comments/1i4lz0m/can_anyone_tell_me_what_these_are_is_this_ai/ | false | false | default | 1 | null |
||
Is the way current LLM models work not optimal? Are they a lot bigger than they should be? | 1 | Lemme preface with: I'm very much a noob to this whole subject. Just had a question that's bugging me. Some of my assumptions might just be wrong.
Why instead of feeding an LLM for example a programming book that fully explains syntax, functions, variables, etc, all the building blocks of programs, do we train them on endless samples of code instead?
Rather than having huge parameter models with all these trained references, would it not be better for them to have the basics then reason out the rest from there?
Kinda like when people say a model only scores high because it was trained on the questions, it would be better to score high without ever having seen the question before and it reasons it out?
Is this kinda what chatgpt is doing with o1/o3 right now? | 2025-01-19T01:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1i4mnwx/is_the_way_current_llm_models_work_not_optimal/ | Business_Respect_910 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4mnwx | false | null | t3_1i4mnwx | /r/LocalLLaMA/comments/1i4mnwx/is_the_way_current_llm_models_work_not_optimal/ | false | false | self | 1 | null |
NxMobileLM-1.5B-SFT: A New Lightweight Language Model | 8 | Key Features:
Compact Size: At just 1.5 billion parameters, NxMobileLM-1.5B-SFT is designed to run efficiently on resource-constrained devices, including mobile and edge hardware.
Multilingual Support: It performs well across a wide range of languages, making it a versatile solution for diverse linguistic applications.
Performance-Optimized: The model achieves state-of-the-art results when compared with other models of similar size.
[https://huggingface.co/NTQAI/NxMobileLM-1.5B-SFT](https://huggingface.co/NTQAI/NxMobileLM-1.5B-SFT) | 2025-01-19T01:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i4nect/nxmobilelm15bsft_a_new_lightweight_language_model/ | Psychological_Oil727 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4nect | false | null | t3_1i4nect | /r/LocalLLaMA/comments/1i4nect/nxmobilelm15bsft_a_new_lightweight_language_model/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'mTFbVjlfMCxATCC8KDM7lNwDXZIHM5t3lNyIJyg-IGU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j5Zlwg3Oz6YMKWh5o41vD2GXiEvM7QqK1aCHZ96pfc4.jpg?width=108&crop=smart&auto=webp&s=90827ceda80922cae14472b660924f4b92640d3a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/j5Zlwg3Oz6YMKWh5o41vD2GXiEvM7QqK1aCHZ96pfc4.jpg?width=216&crop=smart&auto=webp&s=b9c2405cba4296106a66c38718cf3d519c5ef539', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/j5Zlwg3Oz6YMKWh5o41vD2GXiEvM7QqK1aCHZ96pfc4.jpg?width=320&crop=smart&auto=webp&s=742d0226765176268beb7644281d50650f40dc5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/j5Zlwg3Oz6YMKWh5o41vD2GXiEvM7QqK1aCHZ96pfc4.jpg?width=640&crop=smart&auto=webp&s=f9f60fb99b67714fab11cdf318ba74261cf60fdd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/j5Zlwg3Oz6YMKWh5o41vD2GXiEvM7QqK1aCHZ96pfc4.jpg?width=960&crop=smart&auto=webp&s=996c29059e56ffcda99f821390603f86ee137a88', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/j5Zlwg3Oz6YMKWh5o41vD2GXiEvM7QqK1aCHZ96pfc4.jpg?width=1080&crop=smart&auto=webp&s=3a0dd640c24636a52a7d82422998af9d46cb7fa5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/j5Zlwg3Oz6YMKWh5o41vD2GXiEvM7QqK1aCHZ96pfc4.jpg?auto=webp&s=6bd7f3ad297df870b513a87e4d5ec1dfa9f38929', 'width': 1200}, 'variants': {}}]} |
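A quick usage sketch (this assumes the model loads with the standard Transformers chat-template API, which the post doesn't state; the model card linked above should have the authoritative snippet):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NTQAI/NxMobileLM-1.5B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize why small models matter on mobile."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```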
Building a local system with code execution? (RAG, Mac Studio's, Ingestion of various types of data). | 1 | [removed] | 2025-01-19T02:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i4od0l/building_a_local_system_with_code_execution_rag/ | searchingforai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4od0l | false | null | t3_1i4od0l | /r/LocalLLaMA/comments/1i4od0l/building_a_local_system_with_code_execution_rag/ | false | false | self | 1 | null |
yek -- serialize your code repo (or part of it) to feed into any LLM using a fast Rust based program | 106 | 2025-01-19T02:50:53 | https://github.com/bodo-run/yek | mehyay76 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i4oeg9 | false | null | t3_1i4oeg9 | /r/LocalLLaMA/comments/1i4oeg9/yek_serialize_your_code_repo_or_part_of_it_to/ | false | false | 106 | {'enabled': False, 'images': [{'id': 'xCMcTShdVa5I8ZipO9XJMFS-SsaHVjbYMIpJRxRAMOU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hezSolCpDQfrkKm4K9-XqPSYxr_22M4CNnAVkJ9xh7w.jpg?width=108&crop=smart&auto=webp&s=6b92b1cae0a7cf3d318272a4e5bf202f177023cf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hezSolCpDQfrkKm4K9-XqPSYxr_22M4CNnAVkJ9xh7w.jpg?width=216&crop=smart&auto=webp&s=75b0d1f5e43772c44f5f13a67caa138a7d4485ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hezSolCpDQfrkKm4K9-XqPSYxr_22M4CNnAVkJ9xh7w.jpg?width=320&crop=smart&auto=webp&s=6337b25a3a276359cb8ef02681e566caf5409751', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hezSolCpDQfrkKm4K9-XqPSYxr_22M4CNnAVkJ9xh7w.jpg?width=640&crop=smart&auto=webp&s=8f2bbfcd7007496d377206d2d8a40c8e2afedc2f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hezSolCpDQfrkKm4K9-XqPSYxr_22M4CNnAVkJ9xh7w.jpg?width=960&crop=smart&auto=webp&s=b913980d49f51aa0fe2665583b03da68c47719ce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hezSolCpDQfrkKm4K9-XqPSYxr_22M4CNnAVkJ9xh7w.jpg?width=1080&crop=smart&auto=webp&s=e250050485b46735ff456728d189c2db4433ff15', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hezSolCpDQfrkKm4K9-XqPSYxr_22M4CNnAVkJ9xh7w.jpg?auto=webp&s=0db95fc81eef12a4439e6e04908e73e9c5319a93', 'width': 1200}, 'variants': {}}]} |
||
Building a Local LLM with local code execution? (RAG, Mac Studio M2/M4 Ultra & Various types of data ingestion) | 1 | [removed] | 2025-01-19T02:51:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i4of1p/building_a_local_llm_with_local_code_execution/ | ccmissle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4of1p | false | null | t3_1i4of1p | /r/LocalLLaMA/comments/1i4of1p/building_a_local_llm_with_local_code_execution/ | false | false | self | 1 | null |
Building Local LLM with code execution? (RAG, Mac Studio(s), Ingestion of various types of data) | 2 | We’re looking to build local LLM(s) for internal use and to automate a good amount of internal administrative functions.
Hoping to gather feedback for the best course of action as I’m coming across a lot of differing opinions and conflicting data. In addition, if there are any folks who have implemented similar things that do any consulting work (or know companies/consulting who do), I’d love to have a conversation about a potential engagement.
**Example Use Case: (Document (PDF generally) classification, sorting and execution to move)**
\- Scanned documents are dropped into an intake directory on a system.
\- Scanned documents are classified based on their contents: which company they belong to and what they contain, matched to a list of pre-created storage locations. (This data is fed to the classifier - which companies exist, what options for sorting into the directory structure under each company are available, etc.)
\- After the classification, sorting decisions are made and summarized: what each file was classified as and what sorting decision was made. We're looking for some mechanism that then prompts a human, showing for example the first page of each document, how it was classified, and the recommended sorting.
\- If the human agrees, the next step of workflow would execute a move of these files into the respective sorting locations. (Via script and local execution).
With this framework, we have many other tasks that could be performed which would significantly reduce administrative overhead within our companies.
The data is sensitive in nature, so this must be local.
Given that most of these tasks are batch, the speed in which they execute (minutes for example) is not an issue.
It would be great if we could utilize Mac Studios (M2 Ultras with 192GB, or M4 Ultras whenever they come out). If a multi-layered approach with multiple models is required, we're also open to adding GPU-based systems for any specialized heavy lifting, or multiple Mac Studios / Mac Minis.
In addition to the basic use case above (which I believe can be handled in-context, since the instructions are pretty limited), we also have a lot of company data we'd like to ingest.
\- Word, Excel, CSV and PDF Documents
\- Quickbooks Data (Invoices, Expenses, Contractor Details, etc)
\- Email Repository (Think shared mailbox type data, not individual email boxes)
\- Time system data (Billing to specific clients, sites, etc)
Some of this data being via documents, others being via SQL or API calls to gather data from cloud based systems, or other internal systems.
Utilizing this ingested data, we’d build other workflows, task automation (with human feedback prior to execution as described above) and general company info query via chat function.
From my research, it seems an agentic RAG technique may be best for recalling this data with fidelity.
I've seen a lot of in-house functionality built on AnythingLLM, Open WebUI and others, but they seem limited to just dumping documents into a web GUI. Not sure if they are extensible enough to build what we are thinking of doing. Any feedback or pointers are much appreciated. | 2025-01-19T03:03:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i4on67/building_local_llm_with_code_execution_rag_mac/ | doofew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4on67 | false | null | t3_1i4on67 | /r/LocalLLaMA/comments/1i4on67/building_local_llm_with_code_execution_rag_mac/ | false | false | self | 2 | null
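A minimal sketch of the classify-then-approve-then-move loop described above, assuming a local OpenAI-compatible endpoint (llama.cpp or Ollama at a placeholder localhost URL), pypdf for first-page text, and illustrative company/category lists and folder names; none of these are recommendations, just one possible shape for the workflow:

    # Sketch only: classify scanned PDFs with a local LLM, confirm with a human, then move.
    # The endpoint, model name, company/category lists and paths below are all placeholders.
    import json, shutil
    from pathlib import Path
    from openai import OpenAI   # also works against llama.cpp / Ollama OpenAI-compatible servers
    from pypdf import PdfReader

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    COMPANIES = ["AcmeCo", "Globex"]                 # illustrative
    CATEGORIES = ["invoices", "contracts", "hr"]     # illustrative
    INTAKE, SORTED_ROOT = Path("intake"), Path("sorted")

    def classify(first_page: str) -> dict:
        prompt = ("Classify this scanned document. Reply with JSON only, with keys "
                  f"'company' (one of {COMPANIES}) and 'category' (one of {CATEGORIES}).\n\n"
                  + first_page[:4000])
        resp = client.chat.completions.create(
            model="llama3.3",                        # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return json.loads(resp.choices[0].message.content)  # production code should validate/retry

    for pdf in INTAKE.glob("*.pdf"):
        first_page = PdfReader(str(pdf)).pages[0].extract_text() or ""
        decision = classify(first_page)
        dest = SORTED_ROOT / decision["company"] / decision["category"] / pdf.name
        if input(f"{pdf.name} -> {dest}  ok? [y/N] ").strip().lower() == "y":  # human-in-the-loop gate
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(pdf), str(dest))

The same gate pattern (summarize, ask, then execute a local script) extends to the other batch tasks mentioned: the LLM only proposes, and the move itself is plain Python.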
Best/Easiest NotebookLM Alternative? | 7 | hey gang, i've seen a handful of notebooklm alternatives but haven't gotten any set up and working. i can't recall all i've looked at but i'm curious about other's experiences with these projects.
i've not seen any with a GUI which is always a bummer for me but if it works easily without one that's fine
has anyone had any luck with these? would love to get this functionality on my own computer | 2025-01-19T03:24:52 | https://www.reddit.com/r/LocalLLaMA/comments/1i4p0ia/besteasiest_notebooklm_alternative/ | eggs-benedryl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4p0ia | false | null | t3_1i4p0ia | /r/LocalLLaMA/comments/1i4p0ia/besteasiest_notebooklm_alternative/ | false | false | self | 7 | null |
Best in class Deep Learning models for 48/24GB VRAM | 14 | Based on my experience and my reading of benchmark results and other people's experience, this is the list I came up with:
Knowledge (English): Llama-3.3-70B (48GB) / Llama-3\_1-Nemotron-51B (24GB)
Knowledge (Japanese): gemma-2-27b-it
Coding: Qwen-2.5-Coder-32B
Long Context RAG: Llama-3.3-70B (48GB) / Phi-3-medium-128k-instruct (24GB)
Math: ???
Reasoning: ???
Writing: ???
Image2Text: Llama-3.2-vision-11b
Text2Image: Flux
Text2Video: Hunyuan
Image2Video: Cosmos-1.0-Diffusion-14B (48GB) / Cosmos-1.0-Diffusion-7B (24GB)
STT: whisper-turbo
TTS: ???
Classification: colbert-ir/colbertv2.0
Vector Search: all-MiniLM-L6-v2 (stella\_en\_400M\_v5?)
Do you have other suggestions? | 2025-01-19T03:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/1i4poqz/best_in_class_deep_learning_models_for_4824gb_vram/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4poqz | false | null | t3_1i4poqz | /r/LocalLLaMA/comments/1i4poqz/best_in_class_deep_learning_models_for_4824gb_vram/ | false | false | self | 14 | null |
Phi-4 vs. Llama3.3 Showdown – Which Local AI Model Stands Out? | 1 | [removed] | 2025-01-19T03:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i4pp64/phi4_vs_llama33_showdown_which_local_ai_model/ | AIForOver50Plus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4pp64 | false | null | t3_1i4pp64 | /r/LocalLLaMA/comments/1i4pp64/phi4_vs_llama33_showdown_which_local_ai_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bl9m7XxCmTqqybxmTAPqaLPMAywmwgwPpliLCwHu3UM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?width=108&crop=smart&auto=webp&s=a9e263ba08159ca2e42c646a81a1225396071057', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?width=216&crop=smart&auto=webp&s=0f73ac3353b56a393f006a555b03961f18487aa4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?width=320&crop=smart&auto=webp&s=dafbd2042e85554a35678f75c91d4b01c519ffed', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5fvXFBBjeAdSsfsm-2o08VMV76j9qji4X8xkhLdMGK8.jpg?auto=webp&s=959be4e65f28e9d29158e00536072308051b95a5', 'width': 480}, 'variants': {}}]} |
Training tiny LLM on local machine | 8 | I am an ML newbie and want to get into core AI. I plan to learn how to build LLMs and hopefully train a tiny LLM on a local machine to understand the concepts in depth. However, I understand that training LLMs requires massive GPU resources.
I want to know if it is possible to train a tiny LLM from scratch on local Mac with 32gb gpu (unified memory) that gives coherent and related output? I am planning to buy a new Mac hence wanted to understand this. | 2025-01-19T04:54:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i4qrcx/training_tiny_llm_on_local_machine/ | Dark_Knight003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4qrcx | false | null | t3_1i4qrcx | /r/LocalLLaMA/comments/1i4qrcx/training_tiny_llm_on_local_machine/ | false | false | self | 8 | null |
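For a sense of scale, here is a rough sketch of what "tiny" from-scratch training looks like with Hugging Face transformers: a small GPT-2-style config (roughly 15M parameters, most of it embeddings) trained on a toy text file. The hyperparameters, step count and data path are arbitrary placeholders; a model this size trains comfortably in 32GB of unified memory, though the output will only be marginally coherent.

    # From-scratch training sketch (assumptions: toy corpus at data.txt, tiny GPT-2-style config)
    import torch
    from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")            # reuse an existing tokenizer
    cfg = GPT2Config(n_layer=4, n_head=4, n_embd=256, n_positions=256,
                     vocab_size=tok.vocab_size)
    model = GPT2LMHeadModel(cfg)                               # randomly initialised weights
    device = "mps" if torch.backends.mps.is_available() else "cpu"
    model.to(device)

    ids = tok(open("data.txt").read(), return_tensors="pt").input_ids[0]  # assumes corpus > 256 tokens
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    block = 256
    for step in range(1000):                                   # arbitrary number of steps
        i = torch.randint(0, len(ids) - block - 1, (1,)).item()
        x = ids[i:i + block].unsqueeze(0).to(device)
        loss = model(x, labels=x).loss                         # causal-LM loss
        loss.backward(); opt.step(); opt.zero_grad()
        if step % 100 == 0:
            print(step, loss.item())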
Why reasoning models might be a huge breakthrough for local inference / training | 15 | Currently, all the speculation I have seen out there suggests that OpenAI's O series of reasoning models still uses a relatively large, expensive-to-train base model. Given the compute resources OpenAI has, that is unlikely to change.
For those hoping to train and use models locally, however, it may be possible to extend the same concepts with a very small \~8B open-source model that could easily be trained or run on almost any consumer GPU. The amount of test-time compute / length of reasoning traces needed to achieve good performance may be extreme, but you could probably still get there. Because of the low GPU memory requirements of the base model, getting a setup running would likely be easy, even if it's quite slow relatively speaking.
How far could this approach be scaled? could you use a 1B base model? Obviously there is a point where this may stop working but where is that point I wonder.
Anyways, is anyone aware of open source projects that have explored this idea? and assuming its feasible what would some of the major pros / cons be? | 2025-01-19T05:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1i4r1ig/why_reasoning_models_might_be_a_huge_breakthrough/ | valewolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4r1ig | false | null | t3_1i4r1ig | /r/LocalLLaMA/comments/1i4r1ig/why_reasoning_models_might_be_a_huge_breakthrough/ | false | false | self | 15 | null |
Are there any AI frameworks (other then Ozeki AI Server), that allow you to run multiple local AI models on the same GPUs simultanously? | 0 | [removed] | 2025-01-19T05:55:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i4rs9l/are_there_any_ai_frameworks_other_then_ozeki_ai/ | Outrageous-Win-3244 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4rs9l | false | null | t3_1i4rs9l | /r/LocalLLaMA/comments/1i4rs9l/are_there_any_ai_frameworks_other_then_ozeki_ai/ | false | false | self | 0 | null |
Theory: How AI agents talk is more important than what they do. | 43 | My day job is building AI agents (among other things), but I was talking to a friend this weekend about AI agents, and we realized: why are we building graphs and tests like they're traditional programs, when AI is much closer to a human than a program? Just like a person, they're super unreliable, but creative. However, society is built from people, and actually manages to keep things ticking over to a large extent. Therefore, shouldn't we take more inspiration from real-life bureaucracies in how we build AI agent systems, and pay a lot more attention to how agents communicate among themselves to achieve tasks? Imagine an AI system which has hierarchies, change requests and mini forms that agents would have to fill out to keep other agents informed about what they're doing - wouldn't that be more effective at doing the thing they want to do? What do people think?
Relevant blogpost: [Bureaucracy is all you need. - by Lukas Nel - 2084](https://2084.substack.com/p/bureaucracy-is-all-you-need) | 2025-01-19T05:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i4rsnr/theory_how_ai_agents_talk_is_more_important_than/ | ExaminationNo8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4rsnr | false | null | t3_1i4rsnr | /r/LocalLLaMA/comments/1i4rsnr/theory_how_ai_agents_talk_is_more_important_than/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'Xw-gMFDajfHkzVrlVu4hVlEvHyaXhBeASaYYwwwIRGw', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/Z52CAS_1my0SWAdMivVyItmhvbHiSNWqbDM7N4wCh14.jpg?width=108&crop=smart&auto=webp&s=578d0138a71732f97db29c2097f83a379736c8d5', 'width': 108}, {'height': 164, 'url': 'https://external-preview.redd.it/Z52CAS_1my0SWAdMivVyItmhvbHiSNWqbDM7N4wCh14.jpg?width=216&crop=smart&auto=webp&s=34c4575640b1fd48330b05546fad8b9e4e982efd', 'width': 216}, {'height': 243, 'url': 'https://external-preview.redd.it/Z52CAS_1my0SWAdMivVyItmhvbHiSNWqbDM7N4wCh14.jpg?width=320&crop=smart&auto=webp&s=92bbf7a228c8d2092cc49886dd8d875cd5d1cca0', 'width': 320}], 'source': {'height': 322, 'url': 'https://external-preview.redd.it/Z52CAS_1my0SWAdMivVyItmhvbHiSNWqbDM7N4wCh14.jpg?auto=webp&s=e7eafd24889fb91621ff6e9386a33520439fbf3a', 'width': 424}, 'variants': {}}]} |
AWS CPU Inference, super slow | 4 | I wanted to see how a DDR5 Epyc CPU would compare to my local older Epyc 7F52 CPU
Fired up an AWS "m7a.metal-48xl" on Ubuntu, installed Ollama, and ran llama3.1 (8b).
This should be a non-shared 96-core Epyc 9R14 CPU with DDR5 RAM, approximately 2x the bandwidth of my 2nd-gen Epyc.
It's getting about 0.6 T/s on a short prompt, how can it be so slow?
For comparison, my local (2 generations older) Epyc gets 19.5 T/s on the same model and prompt.
| 2025-01-19T06:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/1i4s6u2/aws_cpu_inference_super_slow/ | Conscious_Cut_6144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4s6u2 | false | null | t3_1i4s6u2 | /r/LocalLLaMA/comments/1i4s6u2/aws_cpu_inference_super_slow/ | false | false | self | 4 | null |
Looking for guidance to build a Gen AI platform | 0 | I got this RFP from a customer (name obfuscated as XXX) to build an internal Gen AI platform for their entire group of companies with these requirements and looking for guidance/collaborators for the same -
XXX– GenAI As Service - Scope of work Executive summary: Company XXX want to have a branded and localized interface for the GenAI that will allow XXX users to use different external and internal Generative AI / LLM tools from a centralized interface. The XXX-GenAI will be used for plenty of use cases including but not limited to (content summarization, content generation, document tagging, search, evaluation and comparison, video generation, voice generation, Image generation, Data analysis and insights). The consumptions of the external GenAI service will be controlled through XXX GenAI as service platforms controls (pre-created prompts, prepared data sources, quota controls, access controls,…).
Objectives\Scope:
1. Development of XXX GenAI as service platform with proper UX and UI suitable for web and mobile users.
2. We will start with ChatGPT and Gemini, Claude, Dall-e 3, Midjourney.
3. The platform will serve users from different companies within XXX group.
4. Security, Identity and access management a. Supporting SSO. b. API security. c. Identity store. d. Implementing the needed encryption. e. Complying with XXX security principles.
5. Chargeback capability for cost & consumption management: tracking the expenses per company, service, user and prompt to allow us to break down the cost and charge companies and departments based on actual utilization, with the ability to define a utilization quota for each company, department and user, per prompt and per service.
6. The consumption of the services can be from the web interface or through API calls to allow XXX to integrate the services within other enterprise applications.
7. Ability to allow authorized users to interact directly with external AI services without a pre-defined prompts.
8. Logging and analytics: to log the utilization, queries and the output from the LLM to later analysis and review. Ability to present the success rate through a self-scoring to allow enhance the system quality.
9. Enforce XXX controls through Guardrail to control wrong output, PII and sensitive data leakage, prompts injection attacks. Those controls for the sake of deploying executable policies to address XXX specific governance on LLM.
10. Ability to manage the access privileges for each user or group of users on prompt from the prompt library, data source.
11. The consumption of APIs should be secure with credentials with ability to generate different credentials.
12. Ability to add / remove AI services in future.
13. The architecture of the platform should designed in a way to maximize the scalability, extendability, maintainability, and isolating the development and deployment of new service from another existing services through implementing microservices deployed in a serverless containers.
14. Enterprise taxonomy: Consolidating all the enterprise taxonomies to allow the LLM consume the taxonomy to make the content generation, analysis, evaluation and summarization having the enterprise taxonomy. It should support XXX level taxonomy, and company level taxonomy.
15. Prompt library: to control the quality and the accuracy of the results returned from the generative AI tools, prompt library stores an engineered and certified prompt templates for different use cases. Those prompts will be feed into LLMs upon the end user call.
16. Knowledge database: it contains the enterprise data in any format which is searchable and retrievable.
17. Use Retrieval Augmented Generation (RAG) for feeding the GenAI tools with the needed data to enhance the response coherency and contextuality
18. API gateway: used to allow enterprise applications to consume the GenAI services provides authentication, authorization, security, logging, audit, analytics and regulatory compliance function.
19. Proper documentation and full handover for the code and technical resources need to be done for XXX team. | 2025-01-19T06:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i4sjrw/looking_for_guidance_to_build_a_gen_ai_platform/ | No-Brother-2237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4sjrw | false | null | t3_1i4sjrw | /r/LocalLLaMA/comments/1i4sjrw/looking_for_guidance_to_build_a_gen_ai_platform/ | false | false | self | 0 | null |
How do you figure out the best AI model for text generation benchmarks?
| 1 | [removed] | 2025-01-19T07:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/1i4t6en/how_do_you_figure_out_the_best_ai_model_for_text/ | ResidentHonest2265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4t6en | false | null | t3_1i4t6en | /r/LocalLLaMA/comments/1i4t6en/how_do_you_figure_out_the_best_ai_model_for_text/ | false | false | self | 1 | null |
Function calling local LLM running on 1080Ti | 2 | So, I want to know if someone has tried to make this.
The idea is to run some model locally that will make queries to a MariaDB database. Basically, translate human language to SQL.
I assume it needs to be a 7B model, and that it will need to be trained/fine-tuned.
Does anyone make it? How successful it is? What are the steps?
| 2025-01-19T07:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i4tiwv/function_calling_local_llm_running_on_1080ti/ | dESAH030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4tiwv | false | null | t3_1i4tiwv | /r/LocalLLaMA/comments/1i4tiwv/function_calling_local_llm_running_on_1080ti/ | false | false | self | 2 | null |
6800XT 16gb vs 3060 12gb for local LLMs | 1 | [removed] | 2025-01-19T08:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i4tpmt/6800xt_16gb_vs_3060_12gb_for_local_llms/ | logseventyseven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4tpmt | false | null | t3_1i4tpmt | /r/LocalLLaMA/comments/1i4tpmt/6800xt_16gb_vs_3060_12gb_for_local_llms/ | false | false | self | 1 | null |
minimaxai coding performance | 0 | thoughts on minimaxai? tried their text model space on huggingface and it has to be one of the best things I have ever used especially for coding. | 2025-01-19T08:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i4u3w4/minimaxai_coding_performance/ | ElectricalAngle1611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4u3w4 | false | null | t3_1i4u3w4 | /r/LocalLLaMA/comments/1i4u3w4/minimaxai_coding_performance/ | false | false | self | 0 | null |
🎥 Need Feedback: Help Me Improve My YouTube Videos on Customizing AI Models! | 1 | [removed] | 2025-01-19T08:52:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i4ucey/need_feedback_help_me_improve_my_youtube_videos/ | North-Glove-3057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4ucey | false | null | t3_1i4ucey | /r/LocalLLaMA/comments/1i4ucey/need_feedback_help_me_improve_my_youtube_videos/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'T32FL5Um0C7Duz93F3_PJ8HaO54NHTDaxeES_P5JOmM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/W178b9DFxIOtSpl3ByrKC--lFZjWbKWu0F28hdOG7wg.jpg?width=108&crop=smart&auto=webp&s=00c8a590b01973fc47cefb27239a988f45bf7823', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/W178b9DFxIOtSpl3ByrKC--lFZjWbKWu0F28hdOG7wg.jpg?width=216&crop=smart&auto=webp&s=13c99590a2677f24a504962e8b626dd77cf06d93', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/W178b9DFxIOtSpl3ByrKC--lFZjWbKWu0F28hdOG7wg.jpg?width=320&crop=smart&auto=webp&s=eb64cf23d65fb3babbb05ab70ad2e01ba39a8fb7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/W178b9DFxIOtSpl3ByrKC--lFZjWbKWu0F28hdOG7wg.jpg?auto=webp&s=ea7ff18423971cb460132bbaa6c8c65f8712522f', 'width': 480}, 'variants': {}}]} |
Opinions on the AI Companionship/Girlfriend Websites out there | 1 | [removed] | 2025-01-19T09:21:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i4uqe2/opinions_on_the_ai_companionshipgirlfriend/ | WiseExamination9591 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4uqe2 | false | null | t3_1i4uqe2 | /r/LocalLLaMA/comments/1i4uqe2/opinions_on_the_ai_companionshipgirlfriend/ | false | false | self | 1 | null |
Do you think compatibility issues will be a huge problem on Nvidia project digit?
| 1 | [removed] | 2025-01-19T09:52:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i4v54r/do_you_think_compatibility_issues_will_be_a_huge/ | SpecialistPear755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4v54r | false | null | t3_1i4v54r | /r/LocalLLaMA/comments/1i4v54r/do_you_think_compatibility_issues_will_be_a_huge/ | false | false | self | 1 | null |
I’m starting to think ai benchmarks are useless | 430 | Across every possible task I can think of Claude beats all other models by a wide margin IMO.
I have three AI agents that I've built for researching, writing and doing outreach to clients.
Claude absolutely wipes the floor with every other model, yet Claude is usually beaten in benchmarks by OpenAI and Google models.
When I ask the question "how do we know these labs aren't gaming benchmarks by just overfitting their models to perform well on them?", the answer is always "yeah, we don't really know that". Not only can we never be sure, but they are absolutely incentivised to do it. I remember only a few months ago, whenever a new model was released that did 0.5% or whatever better on MMLU-Pro, I'd switch my agents to use that new model, assuming the pricing was similar.
At this point I'm just stuck with running the models and seeing which one's outputs perform best at their task (judged by my and my coworkers' opinions).
How do you go about evaluating model performance? Benchmarks seem highly biased towards labs that want to win the AI benchmarks; fortunately, Anthropic doesn't seem to be one of them.
Looking forward to responses. | 2025-01-19T10:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i4vwm7/im_starting_to_think_ai_benchmarks_are_useless/ | getpodapp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4vwm7 | false | null | t3_1i4vwm7 | /r/LocalLLaMA/comments/1i4vwm7/im_starting_to_think_ai_benchmarks_are_useless/ | false | false | self | 430 | null |
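One lightweight way to formalise "run the models and see which outputs are best" is a small head-to-head harness over an OpenAI-compatible API; the sketch below uses placeholder model ids and an example endpoint, and simply collects outputs side by side so a human (or an LLM judge) can blind-score them afterwards.

    # Side-by-side eval sketch: same prompts through several models, dumped to CSV for scoring.
    # The endpoint and model ids are placeholders for whatever provider or local server you use.
    import csv
    from openai import OpenAI

    client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")
    MODELS = ["anthropic/claude-3.5-sonnet", "openai/gpt-4o"]      # illustrative ids
    prompts = [p.strip() for p in open("agent_test_prompts.txt") if p.strip()]

    def run(model: str, prompt: str) -> str:
        r = client.chat.completions.create(model=model,
                                           messages=[{"role": "user", "content": prompt}])
        return r.choices[0].message.content

    with open("results.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["prompt"] + MODELS)
        for p in prompts:
            w.writerow([p] + [run(m, p) for m in MODELS])          # blind-score these rows later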
A summary of Qwen Models! | 187 | 2025-01-19T11:03:14 | rbgo404 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i4w47k | false | null | t3_1i4w47k | /r/LocalLLaMA/comments/1i4w47k/a_summary_of_qwen_models/ | false | false | 187 | {'enabled': True, 'images': [{'id': 'hK8tUUDzCAj_OHGhfwZ6mOniIKiaesq_sQMXt0_raAw', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/bvg95yewmxde1.png?width=108&crop=smart&auto=webp&s=83c3a6c7cec76a7d1d27f2eb8d979e0ac2f7b799', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/bvg95yewmxde1.png?width=216&crop=smart&auto=webp&s=79fec2748fe405cf5354ee5c3693d874ed19cf92', 'width': 216}, {'height': 232, 'url': 'https://preview.redd.it/bvg95yewmxde1.png?width=320&crop=smart&auto=webp&s=189210c1902fe215c893be1cc62487eaf5232a19', 'width': 320}, {'height': 464, 'url': 'https://preview.redd.it/bvg95yewmxde1.png?width=640&crop=smart&auto=webp&s=3f3b57fffcd0c406cfbf23fc038343779f95f470', 'width': 640}, {'height': 696, 'url': 'https://preview.redd.it/bvg95yewmxde1.png?width=960&crop=smart&auto=webp&s=967ea3f3ed1960cb81974c47491e1877f117416a', 'width': 960}, {'height': 783, 'url': 'https://preview.redd.it/bvg95yewmxde1.png?width=1080&crop=smart&auto=webp&s=33f8d88ff002ca29e6a477e3590806e974f5ea3f', 'width': 1080}], 'source': {'height': 1468, 'url': 'https://preview.redd.it/bvg95yewmxde1.png?auto=webp&s=7b8712cf525e2ced3bb495a13959fa8669a0bff8', 'width': 2024}, 'variants': {}}]} |
|||
Which LLM would is good at analyzing content? | 3 | I'm trying to find an LLM that works well with instructions like:
*"List all <section> elements from this HTML page"* or *"Count the number of times the word 'test' has been used in this text".*
I know I can use regex for this kind of parsing, but I just want to understand why Claude is perfect at this and I have yet to find a local LLM that comes close to accurately counting or listing items.
I've tried Gemma 2, Phi 4 and Llama 3.3 with no success.
Maybe I'm missing something, so if you know anything I'd love to hear it! | 2025-01-19T11:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1i4w5oe/which_llm_would_is_good_at_analyzing_content/ | Zundrium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4w5oe | false | null | t3_1i4w5oe | /r/LocalLLaMA/comments/1i4w5oe/which_llm_would_is_good_at_analyzing_content/ | false | false | self | 3 | null |
Marketing or reality 🤔 | 0 | Well here you go:
https://x.com/iruletheworldmo/status/1880760849259999363?t=v7gmyDgiQkcJv12XrSGwLg&s=19 | 2025-01-19T11:10:37 | Rutabaga-Agitated | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i4w831 | false | null | t3_1i4w831 | /r/LocalLLaMA/comments/1i4w831/marketing_or_reality/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'XOC7vX1wlYxrct2gmh93zvFXiFRJvYogB37nTUC537Y', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/ls26ab6doxde1.jpeg?width=108&crop=smart&auto=webp&s=085453529c3710b9ec13ee7a6a00b852977776de', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/ls26ab6doxde1.jpeg?width=216&crop=smart&auto=webp&s=1e8d5a368a750200e7b5b9acca8059e1781d3689', 'width': 216}, {'height': 144, 'url': 'https://preview.redd.it/ls26ab6doxde1.jpeg?width=320&crop=smart&auto=webp&s=08823aa6ab9ff0f303fb9282f2aadc8ba1dec325', 'width': 320}, {'height': 289, 'url': 'https://preview.redd.it/ls26ab6doxde1.jpeg?width=640&crop=smart&auto=webp&s=c5c416b8d613617be05eaf962b75615f5fa1026e', 'width': 640}], 'source': {'height': 391, 'url': 'https://preview.redd.it/ls26ab6doxde1.jpeg?auto=webp&s=95f6f84b154b81c7f0c6eb806044d9ba8df2b800', 'width': 863}, 'variants': {}}]} |
||
Some questions about handling files | 3 | There are some multimodal models out there that support media (binary).data for classifications, etc.
In OpenWebUI, I can upload a picture and some models can work with that.
But what about videos and/or audio files or even other files (document files, plans, eml files, etc.)?
What is the workflow and who is responsible (llama.cpp/ollama/openwebui) for the file handling?
In Ollama, a picture gets sent as a base64 string. But then what? What about encoding/decoding, feeding it to the model and getting the results back?
For example:
I would like to upload an .mp3 file in OpenWebUI (or some other Ollama endpoint) to get the transcribed text. However, mp3 might not be the right encoding for a model, so it needs to be converted. The same goes when I want the same task done with a WAV file...
In ChatGPT, many different files are supported, so they must have some extra code, which exactly handles that. Is there maybe something open source, that does the same?! | 2025-01-19T11:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i4wfud/some_questions_about_handling_files/ | Confident-Ad-3465 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4wfud | false | null | t3_1i4wfud | /r/LocalLLaMA/comments/1i4wfud/some_questions_about_handling_files/ | false | false | self | 3 | null |
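To make the image part concrete, here is a sketch of how a client typically hands binary data to Ollama's native API: the file is base64-encoded and passed in an images array, and the multimodal model handles it server-side (the model name and file path are placeholders). Audio is a different story: Ollama itself doesn't transcribe, so the usual pattern is a separate Whisper-family step (whisper.cpp, faster-whisper, etc.) that converts mp3/WAV to text before the text reaches the LLM, which is presumably the kind of extra glue code ChatGPT has for its many file types.

    # Sketch: send a local image to Ollama's /api/generate as base64 (model and path are assumptions)
    import base64, requests

    with open("photo.jpg", "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")

    resp = requests.post("http://localhost:11434/api/generate", json={
        "model": "llama3.2-vision",      # any multimodal model you have pulled
        "prompt": "Describe this picture.",
        "images": [img_b64],             # Ollama decodes the base64 and feeds it to the model
        "stream": False,
    })
    print(resp.json()["response"])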
Overview Article of China's Six Leading AI Companies | 32 | 2025-01-19T11:54:00 | https://technode.com/2025/01/09/meet-chinas-top-six-ai-unicorns-who-are-leading-the-wave-of-ai-in-china/ | cpldcpu | technode.com | 1970-01-01T00:00:00 | 0 | {} | 1i4wv1h | false | null | t3_1i4wv1h | /r/LocalLLaMA/comments/1i4wv1h/overview_article_of_chinas_six_leading_ai/ | false | false | 32 | {'enabled': False, 'images': [{'id': 'PFye6A-lzGclJA7XqW5P3SNa5dtD4YilmNyPqZEixco', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/d_RUfW4sqqyugkU3RZhTPXuPGvIixVBC-QSNDEEWxGA.jpg?width=108&crop=smart&auto=webp&s=69d11e38b4da1d73d3fb1fe66ea9bed95339351c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/d_RUfW4sqqyugkU3RZhTPXuPGvIixVBC-QSNDEEWxGA.jpg?width=216&crop=smart&auto=webp&s=14841ee7a344ce765b164d6ff2648faa481a5484', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/d_RUfW4sqqyugkU3RZhTPXuPGvIixVBC-QSNDEEWxGA.jpg?width=320&crop=smart&auto=webp&s=5f107c0b1ba76876f0b3f0795582a728fd0658fc', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/d_RUfW4sqqyugkU3RZhTPXuPGvIixVBC-QSNDEEWxGA.jpg?width=640&crop=smart&auto=webp&s=076aad9895b4e1d05fec0419738232279ab56dec', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/d_RUfW4sqqyugkU3RZhTPXuPGvIixVBC-QSNDEEWxGA.jpg?width=960&crop=smart&auto=webp&s=93bcd8978977a25c701cf9e97087f1123c4db092', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/d_RUfW4sqqyugkU3RZhTPXuPGvIixVBC-QSNDEEWxGA.jpg?width=1080&crop=smart&auto=webp&s=a225c89b86e6bba4ded967d42c3c739fabdf0069', 'width': 1080}], 'source': {'height': 1340, 'url': 'https://external-preview.redd.it/d_RUfW4sqqyugkU3RZhTPXuPGvIixVBC-QSNDEEWxGA.jpg?auto=webp&s=7431024b7829a47e8f81b03183a722d280b7bfe3', 'width': 2560}, 'variants': {}}]} |
||
Why is OpenRouter trusted? | 129 | Hi guys. I'm curious what makes it a trusted proxy?
I investigated a bit, and the top contributor of the openrouter-runner package: [https://github.com/OpenRouterTeam/openrouter-runner/graphs/contributors](https://github.com/OpenRouterTeam/openrouter-runner/graphs/contributors)
tweets about crypto non-stop: [https://x.com/litbid](https://x.com/litbid)
It is not clear how they cover the infra costs of proxying so much real-time data, as they sell tokens at the providers' base price. I understand they receive discounts for so much usage from providers like Anthropic? Is it possible they have agreements with all the other providers, like DeepSeek?
In a scenario they don't have agreement with anyone at all, they must hoard all this data and handle it unclearly to the end user, don't you think? | 2025-01-19T12:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1i4xck6/why_is_openrouter_trusted/ | robertpiosik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4xck6 | false | null | t3_1i4xck6 | /r/LocalLLaMA/comments/1i4xck6/why_is_openrouter_trusted/ | false | false | self | 129 | {'enabled': False, 'images': [{'id': '21zOQjU_UgvU001Cl1ZuchucTwqR_ks01zFoKe3xsBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SRj-fRrqAG3HKgjqFq-aqJeUSiC5CWw77BJGYUjgQmc.jpg?width=108&crop=smart&auto=webp&s=1b92bb9cfb013466951045423c8f4c1881674f0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SRj-fRrqAG3HKgjqFq-aqJeUSiC5CWw77BJGYUjgQmc.jpg?width=216&crop=smart&auto=webp&s=cbfd7c4c5762e341d6f5617649a12eb7ed4e21ed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SRj-fRrqAG3HKgjqFq-aqJeUSiC5CWw77BJGYUjgQmc.jpg?width=320&crop=smart&auto=webp&s=3ec0b5ab7790b26684de744c2a7f4b8e0d1013e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SRj-fRrqAG3HKgjqFq-aqJeUSiC5CWw77BJGYUjgQmc.jpg?width=640&crop=smart&auto=webp&s=ec8e26abe2bfc7c93bede10ebfc4d66a87ced780', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SRj-fRrqAG3HKgjqFq-aqJeUSiC5CWw77BJGYUjgQmc.jpg?width=960&crop=smart&auto=webp&s=4a66c087ce3f3b249e588df2e50bb45084449b2b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SRj-fRrqAG3HKgjqFq-aqJeUSiC5CWw77BJGYUjgQmc.jpg?width=1080&crop=smart&auto=webp&s=136d23ed5ea51edb0683d8f72d33e100e49e894e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SRj-fRrqAG3HKgjqFq-aqJeUSiC5CWw77BJGYUjgQmc.jpg?auto=webp&s=4711244971831f55f29d9a805d47d5253168fc06', 'width': 1200}, 'variants': {}}]} |
How do you work around context windows maxing out? | 1 | I was thinking of running a local LLM such as Qwen or Mistral for programming purposes (React and Python mostly). However, unlike ChatGPT, it has a very limited context window when run locally.
How do you work around limited context windows when using Local LLMs for programming? | 2025-01-19T12:35:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i4xhly/how_do_you_work_around_context_windows_maxing_out/ | UnhingedSupernova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4xhly | false | null | t3_1i4xhly | /r/LocalLLaMA/comments/1i4xhly/how_do_you_work_around_context_windows_maxing_out/ | false | false | self | 1 | null |
Does Rpi Ai Hat and AI Kit are useful or home Ollama server ? | 1 | [removed] | 2025-01-19T12:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i4xnn0/does_rpi_ai_hat_and_ai_kit_are_useful_or_home/ | Waggerra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4xnn0 | false | null | t3_1i4xnn0 | /r/LocalLLaMA/comments/1i4xnn0/does_rpi_ai_hat_and_ai_kit_are_useful_or_home/ | false | false | self | 1 | null |
Order GPUs | 1 | [removed] | 2025-01-19T12:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i4xo1v/order_gpus/ | MoveGlass1109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4xo1v | false | null | t3_1i4xo1v | /r/LocalLLaMA/comments/1i4xo1v/order_gpus/ | false | false | self | 1 | null |
How Do You Currently Manage GPU Usage and API Costs in Your Workflows? | 1 | I’m curious about how others are handling the growing complexity of AI/ML workflows. When you’re scaling tasks like model training, fine-tuning, or inference, what does your setup look like?
Do you run workloads on cloud GPUs, on-premise, or rentals?
How do you approach keeping track of costs, especially with API-heavy tasks like OpenAI or Llama fine-tuning?
Are there any tools or processes you rely on to make this easier?
Would love to hear how you’ve streamlined these challenges (or if they’re still a headache)! | 2025-01-19T14:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i4zf89/how_do_you_currently_manage_gpu_usage_and_api/ | sigma_crusader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4zf89 | false | null | t3_1i4zf89 | /r/LocalLLaMA/comments/1i4zf89/how_do_you_currently_manage_gpu_usage_and_api/ | false | false | self | 1 | null |
Chat with Ollama models via Chrome extension | 0 | Hi guys, I've made a Chrome extension to chat with local Ollama models; any feedback is welcome.
[https://github.com/lsgrep/chrome-extension-ollama-chat](https://github.com/lsgrep/chrome-extension-ollama-chat) | 2025-01-19T14:20:43 | https://www.reddit.com/r/LocalLLaMA/comments/1i4zgna/chat_with_ollama_models_via_chrome_extension/ | staged_blue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4zgna | false | null | t3_1i4zgna | /r/LocalLLaMA/comments/1i4zgna/chat_with_ollama_models_via_chrome_extension/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'MOS4YRh-YClrcYzJrGgion-_hjEe4AcZ6wXegY0R1HQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8Dxo-FYDJjOLe09j8pJJQKgQmnbvISLZmE-VRFyWEMY.jpg?width=108&crop=smart&auto=webp&s=e3aac9e5b4b04718ec44a18bf9f1fe4170b327a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8Dxo-FYDJjOLe09j8pJJQKgQmnbvISLZmE-VRFyWEMY.jpg?width=216&crop=smart&auto=webp&s=db8a23cbcf81b492077389ddc3a5339e8f227c84', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8Dxo-FYDJjOLe09j8pJJQKgQmnbvISLZmE-VRFyWEMY.jpg?width=320&crop=smart&auto=webp&s=bd42e51c91fa6cf92965e4c6f823a1934b7c9c51', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8Dxo-FYDJjOLe09j8pJJQKgQmnbvISLZmE-VRFyWEMY.jpg?width=640&crop=smart&auto=webp&s=1addcf1b7af4c13b94a73ee18c4cc7adc02605f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8Dxo-FYDJjOLe09j8pJJQKgQmnbvISLZmE-VRFyWEMY.jpg?width=960&crop=smart&auto=webp&s=5ae574a68a7d6a5b1129ccea8271c4161eb2e58e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8Dxo-FYDJjOLe09j8pJJQKgQmnbvISLZmE-VRFyWEMY.jpg?width=1080&crop=smart&auto=webp&s=372da99942579257e5f004f70862ee3edc6c3532', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8Dxo-FYDJjOLe09j8pJJQKgQmnbvISLZmE-VRFyWEMY.jpg?auto=webp&s=5eb9054bdc9b70cdf02f0520494b0c846d7bb502', 'width': 1200}, 'variants': {}}]} |
What does "Token count: Up to 9T tokens" mean? | 1 | [removed] | 2025-01-19T14:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i4zhsk/what_does_token_count_up_to_9t_tokens_mean/ | Creepy_Commission230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i4zhsk | false | null | t3_1i4zhsk | /r/LocalLLaMA/comments/1i4zhsk/what_does_token_count_up_to_9t_tokens_mean/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XI5Hzf18r1h62lKdQ0B0dpGQwX9r6iknXdPD97a_a_A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IwkRWMIefo7oHJfMhNhix9aaY9dXcK7xrfbuM21KxrY.jpg?width=108&crop=smart&auto=webp&s=f61b0891b5239c8d77fdb1fba1b4d4bb046372f9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IwkRWMIefo7oHJfMhNhix9aaY9dXcK7xrfbuM21KxrY.jpg?width=216&crop=smart&auto=webp&s=e57324b5d1b50724d854831a2bb5704a36f00f77', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IwkRWMIefo7oHJfMhNhix9aaY9dXcK7xrfbuM21KxrY.jpg?width=320&crop=smart&auto=webp&s=b68476e1aaff21353db1f81b1eeb3028b5a0fa91', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IwkRWMIefo7oHJfMhNhix9aaY9dXcK7xrfbuM21KxrY.jpg?width=640&crop=smart&auto=webp&s=8ab940fd12dabce957c8bf173cff64f96fdf64bc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IwkRWMIefo7oHJfMhNhix9aaY9dXcK7xrfbuM21KxrY.jpg?width=960&crop=smart&auto=webp&s=6eceae99a941b4f957da7089da9a9911623f35c3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IwkRWMIefo7oHJfMhNhix9aaY9dXcK7xrfbuM21KxrY.jpg?width=1080&crop=smart&auto=webp&s=e4b4f618d85337736f99cb42721346b5efe089ca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IwkRWMIefo7oHJfMhNhix9aaY9dXcK7xrfbuM21KxrY.jpg?auto=webp&s=4425e1eb6c05f4f0f66e7cce7749799f53298e77', 'width': 1200}, 'variants': {}}]} |
New Open Source Writing Tools for macOS with support for Intel Macs | 17 | 2025-01-19T14:30:03 | https://v.redd.it/lpkrf6sinyde1 | AryaMR2679 | /r/LocalLLaMA/comments/1i4znfj/new_open_source_writing_tools_for_macos_with/ | 1970-01-01T00:00:00 | 0 | {} | 1i4znfj | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lpkrf6sinyde1/DASHPlaylist.mpd?a=1740018613%2CM2U5MGUyY2Y4NDEyMTk3NDkzNTk4N2E4YWEyYmMzNGI3MmQ1ZDBlOTY4MTcxZDQzOGUxMmE5MDcxMWU3ZWQ0NQ%3D%3D&v=1&f=sd', 'duration': 122, 'fallback_url': 'https://v.redd.it/lpkrf6sinyde1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/lpkrf6sinyde1/HLSPlaylist.m3u8?a=1740018613%2CYjQ3YjYwNTUxMTAzYzNmZmFlNTM1YTliMTRhNDQ2YjUzNTljMTllYWZmNGJmNGMxZGNkNDViM2UyMzkyMTBmYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lpkrf6sinyde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1748}} | t3_1i4znfj | /r/LocalLLaMA/comments/1i4znfj/new_open_source_writing_tools_for_macos_with/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV.png?width=108&crop=smart&format=pjpg&auto=webp&s=9891216c0538e93d1438573a791a014bb65a4e00', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV.png?width=216&crop=smart&format=pjpg&auto=webp&s=00b2f9cad557a547b39afd16affba727b527a0bc', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV.png?width=320&crop=smart&format=pjpg&auto=webp&s=2da17728513a00a5bea501de727c34046ba677af', 'width': 320}, {'height': 395, 'url': 'https://external-preview.redd.it/NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV.png?width=640&crop=smart&format=pjpg&auto=webp&s=e85b8d80a380475886a384db44435cc7a9b0d67d', 'width': 640}, {'height': 593, 'url': 'https://external-preview.redd.it/NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV.png?width=960&crop=smart&format=pjpg&auto=webp&s=bcab0aeba7fdc866d58d8e1236f8d7fe93896fe0', 'width': 960}, {'height': 667, 'url': 'https://external-preview.redd.it/NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=39efcb80fae3a111e3849cd98be6669370b833da', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/NXN0a3Zhc2lueWRlMTElqiJuMUTDrlTUcvyAvOchCruMz7n8w62BU9IpwUsV.png?format=pjpg&auto=webp&s=61ff74d7564483847ea7cf05fa45a584aff18dd6', 'width': 3494}, 'variants': {}}]} |
||
Difference between Ollama/Llama3.2:3b CPU vs low grade AMD GPU on Linux (details in comments) | 0 | 2025-01-19T14:44:25 | brian-the-porpoise | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i4zyfx | false | null | t3_1i4zyfx | /r/LocalLLaMA/comments/1i4zyfx/difference_between_ollamallama323b_cpu_vs_low/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'yRrpchYZwJbp-zuUOrqUx9y8R60nzpYZAIlgW_ESevw', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/dz0m5oagqyde1.png?width=108&crop=smart&auto=webp&s=b27b0bdf313f9f33b5d02029b85f91550349de51', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/dz0m5oagqyde1.png?width=216&crop=smart&auto=webp&s=10be6222e1ecaaf75ce708b6b3529ed6d2216d5d', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/dz0m5oagqyde1.png?width=320&crop=smart&auto=webp&s=3d1c3cde4fd9fcfed1dc0e66d552b2b649709fb0', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/dz0m5oagqyde1.png?width=640&crop=smart&auto=webp&s=e5a6069cf39c5a5f35101d368bebc3c7b748f3fa', 'width': 640}], 'source': {'height': 470, 'url': 'https://preview.redd.it/dz0m5oagqyde1.png?auto=webp&s=656dbda4c444eb5938f56c55c3e1450f1566b836', 'width': 752}, 'variants': {}}]} |
|||
Selfhosted LLM API with 6600XT | 1 | Hi,
I know my GPU kind of sucks and you would suggest I get another one, but currently there's no budget for that.
I want to have my own LLM with an API I can use in order to start developing my own small helper but also for paperless-ngx and those AI addons.
I run my apps in containers (podman), and I wanted to hear from you what model and what toolset you would suggest for my low-VRAM (8GB) GPU. Initially I was looking at Ollama, but it seems llama.cpp is faster.
Do you guys have some hints for me, being kind of a newbie in the field of self-hosting LLMs? Thank you. | 2025-01-19T14:49:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i502et/selfhosted_llm_api_with_6600xt/ | junialter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i502et | false | null | t3_1i502et | /r/LocalLLaMA/comments/1i502et/selfhosted_llm_api_with_6600xt/ | false | false | self | 1 | null
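One note that may help regardless of which you pick: both llama.cpp's bundled llama-server and Ollama expose an OpenAI-compatible HTTP endpoint, so the app code stays the same and you can swap the backend later. A minimal client sketch, assuming a server is already running locally (the port, model name and prompt are placeholders; a ~4-5GB Q4 quant of a 7B/8B model is the usual fit for 8GB of VRAM):

    # Sketch: the client side is identical whether llama-server or Ollama is behind it.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")   # llama-server's default port
    reply = client.chat.completions.create(
        model="qwen2.5-7b-instruct-q4_k_m",      # placeholder: whatever GGUF/model you loaded
        messages=[{"role": "user", "content": "Tag this document for paperless-ngx: ..."}],
    )
    print(reply.choices[0].message.content)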
OpenAI has access to the FrontierMath dataset; the mathematicians involved in creating it were unaware of this | 709 | https://x.com/JacquesThibs/status/1880770081132810283?s=19 | 2025-01-19T15:13:21 | https://www.reddit.com/r/LocalLLaMA/comments/1i50lxx/openai_has_access_to_the_frontiermath_dataset_the/ | LLMtwink | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i50lxx | false | null | t3_1i50lxx | /r/LocalLLaMA/comments/1i50lxx/openai_has_access_to_the_frontiermath_dataset_the/ | false | false | self | 709 | {'enabled': False, 'images': [{'id': 'OuwWeJW3TZ39YnbQ_U3RzQc-mgUASA1xfMFZ4hSJCIw', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/Eq9Hwq2bRNPgIq27GZYou8pBkSjzz3FVfuNQzLt9sPA.jpg?width=108&crop=smart&auto=webp&s=ae84dcb17e1ce0b6d328f531331780c6e4c02883', 'width': 108}, {'height': 181, 'url': 'https://external-preview.redd.it/Eq9Hwq2bRNPgIq27GZYou8pBkSjzz3FVfuNQzLt9sPA.jpg?width=216&crop=smart&auto=webp&s=19523ffa6b9b7bc05dd4b5b213cdfeadc7535408', 'width': 216}, {'height': 269, 'url': 'https://external-preview.redd.it/Eq9Hwq2bRNPgIq27GZYou8pBkSjzz3FVfuNQzLt9sPA.jpg?width=320&crop=smart&auto=webp&s=00915a770e8c42d494ce209c53cddebb164f3f05', 'width': 320}, {'height': 539, 'url': 'https://external-preview.redd.it/Eq9Hwq2bRNPgIq27GZYou8pBkSjzz3FVfuNQzLt9sPA.jpg?width=640&crop=smart&auto=webp&s=e09206b03926014088491fd2c407efd5beab57f3', 'width': 640}], 'source': {'height': 642, 'url': 'https://external-preview.redd.it/Eq9Hwq2bRNPgIq27GZYou8pBkSjzz3FVfuNQzLt9sPA.jpg?auto=webp&s=1646ec7b581d9bcc49d085569bd518ab4219d885', 'width': 762}, 'variants': {}}]} |
Regarding RAG. Is reranking helpful if your i query input and the list of documents only contain atmost 1-3 sentences? | 1 | I have 30 thousand documents each with 2-3 sentences. I want to retrieve sentences relevant to a query. Like "Ford Car Models" . I embed all the text and extract top 100 results through cosine similarity with query. Is it worth doing reranking afterwards to get the most relevant results at top? or does that only helps when the input document size is huge? | 2025-01-19T15:19:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i50qk8/regarding_rag_is_reranking_helpful_if_your_i/ | Lazy_Wedding_1383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i50qk8 | false | null | t3_1i50qk8 | /r/LocalLLaMA/comments/1i50qk8/regarding_rag_is_reranking_helpful_if_your_i/ | false | false | self | 1 | null |
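With documents this short, a cross-encoder rerank over the top 100 is cheap, so the practical answer is usually to just try it and measure whether the ordering improves. A sketch of the retrieve-then-rerank step with sentence-transformers (the model names are common defaults, not endorsements, and the docs list is a placeholder):

    # Sketch: bi-encoder retrieval + cross-encoder rerank over short documents.
    from sentence_transformers import SentenceTransformer, CrossEncoder, util

    docs = ["Ford announced new F-150 trims for 2025.", "Toyota recalls 2019 Camry models."]  # your 30k docs
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_emb = embedder.encode(docs, convert_to_tensor=True)

    query = "Ford car models"
    q_emb = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=100)[0]        # cosine-similarity top 100

    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    pairs = [(query, docs[h["corpus_id"]]) for h in hits]
    scores = reranker.predict(pairs)                                  # rerank only the candidates
    for score, (_, doc) in sorted(zip(scores, pairs), reverse=True, key=lambda x: x[0])[:10]:
        print(f"{score:.3f}  {doc}")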
Is there any guide on how to create an evaluation for llm ? | 1 | [removed] | 2025-01-19T15:20:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i50rue/is_there_any_guide_on_how_to_create_an_evaluation/ | seveeninko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i50rue | false | null | t3_1i50rue | /r/LocalLLaMA/comments/1i50rue/is_there_any_guide_on_how_to_create_an_evaluation/ | false | false | self | 1 | null |
Using LLM to read a text book and answer questions | 1 | [removed] | 2025-01-19T15:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i50uex/using_llm_to_read_a_text_book_and_answer_questions/ | NoCharacter7203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i50uex | false | null | t3_1i50uex | /r/LocalLLaMA/comments/1i50uex/using_llm_to_read_a_text_book_and_answer_questions/ | false | false | self | 1 | null |
is there a portable version of f5-tts? or anyone know how to create one? | 1 | [removed] | 2025-01-19T15:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i51h4t/is_there_a_portable_version_of_f5tts_or_anyone/ | Wonderful-Fudge-5880 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i51h4t | false | null | t3_1i51h4t | /r/LocalLLaMA/comments/1i51h4t/is_there_a_portable_version_of_f5tts_or_anyone/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4MuDejjMjMC8HT9KYyWe7zzstOkH51SY2z8JS-0M-rQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FR2_EcesahxG18l2ccSV9ygl9pwpJrURmRBuvStiqac.jpg?width=108&crop=smart&auto=webp&s=ce5562975d7b89ead6326fc96c44cbc6d94a9690', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FR2_EcesahxG18l2ccSV9ygl9pwpJrURmRBuvStiqac.jpg?width=216&crop=smart&auto=webp&s=0cafcf30c9644d5cb69c0002db7001466366de8d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FR2_EcesahxG18l2ccSV9ygl9pwpJrURmRBuvStiqac.jpg?width=320&crop=smart&auto=webp&s=412cf5451a9193397570fd3748e2638b75c91b6c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FR2_EcesahxG18l2ccSV9ygl9pwpJrURmRBuvStiqac.jpg?width=640&crop=smart&auto=webp&s=1066c5f92e766bd0e58270bc768779aefc2052a9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FR2_EcesahxG18l2ccSV9ygl9pwpJrURmRBuvStiqac.jpg?width=960&crop=smart&auto=webp&s=820d73f8eac23f4b738ef96b70751e898677e554', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FR2_EcesahxG18l2ccSV9ygl9pwpJrURmRBuvStiqac.jpg?width=1080&crop=smart&auto=webp&s=1753a1ad392b480003b1a6b9bb1284a7b91b073e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FR2_EcesahxG18l2ccSV9ygl9pwpJrURmRBuvStiqac.jpg?auto=webp&s=5fc1294e13963bc4ce008ab87e374cb6f8c4e074', 'width': 1200}, 'variants': {}}]} |
Google API error | 2 | I get 100% error on
\`google.ai.generativelanguage.v1beta.GenerativeService.GenerateContent\` and
\`google.ai.generativelanguage.v1beta.GenerativeService.StreamGenerateContent\`
How to get actual output tokens? | 2025-01-19T15:56:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i51kov/google_api_error/ | xmmr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i51kov | false | null | t3_1i51kov | /r/LocalLLaMA/comments/1i51kov/google_api_error/ | false | false | self | 2 | null |
New Thinking Model: Art (Auto Regressive Thinker) | 65 | Hi everyone,
Today, we are releasing a new model: **Art**.
We finetuned **Qwen 3B Instruct** on **Gemini Flash Thinking** data.
🔹 **Model card**: [https://huggingface.co/AGI-0/Art-v0-3B](https://huggingface.co/AGI-0/Art-v0-3B)
🔹 **Demo**: [https://huggingface.co/spaces/freeCS-dot-org/Art3B-chat](https://huggingface.co/spaces/freeCS-dot-org/Art3B-chat)
Let me know what you think! | 2025-01-19T16:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/1i51nw6/new_thinking_model_art_auto_regressive_thinker/ | GuiltyBookkeeper4849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i51nw6 | false | null | t3_1i51nw6 | /r/LocalLLaMA/comments/1i51nw6/new_thinking_model_art_auto_regressive_thinker/ | false | false | self | 65 | {'enabled': False, 'images': [{'id': 'Dhnoo3fqgVWX9MjW7E0FolncJFFeky7m-trjRw0MLUI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yxXZuQiKGLKnVfYrfE_xQgzlJjbfNE8rHg8LFjhFbCk.jpg?width=108&crop=smart&auto=webp&s=1bc15a2800586270cf242e81f595ca4028c9a9c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yxXZuQiKGLKnVfYrfE_xQgzlJjbfNE8rHg8LFjhFbCk.jpg?width=216&crop=smart&auto=webp&s=27d126f11f7b3606acc6fd406cec925a6cedf0d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yxXZuQiKGLKnVfYrfE_xQgzlJjbfNE8rHg8LFjhFbCk.jpg?width=320&crop=smart&auto=webp&s=0af1a4f8a319c75a127b02b25617fab0cd577d2e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yxXZuQiKGLKnVfYrfE_xQgzlJjbfNE8rHg8LFjhFbCk.jpg?width=640&crop=smart&auto=webp&s=68e39a9cc222142d0a30cc0a8da6210538ec9d59', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yxXZuQiKGLKnVfYrfE_xQgzlJjbfNE8rHg8LFjhFbCk.jpg?width=960&crop=smart&auto=webp&s=6ab1ce50d6942f1363b195244a27e7bfbef3f0ff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yxXZuQiKGLKnVfYrfE_xQgzlJjbfNE8rHg8LFjhFbCk.jpg?width=1080&crop=smart&auto=webp&s=7dd7b70e303526ab17022ccfbd931dce0c9c1f16', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yxXZuQiKGLKnVfYrfE_xQgzlJjbfNE8rHg8LFjhFbCk.jpg?auto=webp&s=03f8b2e4f537c34b4b343106e9f62d3f9a3fce06', 'width': 1200}, 'variants': {}}]} |
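For anyone who wants to poke at it locally, a quick usage sketch with transformers; this assumes the repo ships a standard chat template and loads like any other Qwen-style causal LM (untested, adjust dtype/device to your hardware):

    # Sketch: load and chat with the checkpoint via transformers (assumptions noted above)
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "AGI-0/Art-v0-3B"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "How many r's are in 'strawberry'? Think it through."}]
    inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
    out = model.generate(inputs, max_new_tokens=512)
    print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))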
What is a decent local GPU setup for full finetuning a large BERT or ModernBERT model (~300 million parameters)? | 21 | Same as above. Budget is 5000 - 60000 USD | 2025-01-19T16:11:42 | https://www.reddit.com/r/LocalLLaMA/comments/1i51xp7/what_is_a_decent_local_gpu_setup_for_full/ | Lazy_Wedding_1383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i51xp7 | false | null | t3_1i51xp7 | /r/LocalLLaMA/comments/1i51xp7/what_is_a_decent_local_gpu_setup_for_full/ | false | false | self | 21 | null |
Speculative Decoding with Multiple P40s | 1 | [removed] | 2025-01-19T16:55:25 | https://www.reddit.com/r/LocalLLaMA/comments/1i52yh7/speculative_decoding_with_multiple_p40s/ | Judtoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i52yh7 | false | null | t3_1i52yh7 | /r/LocalLLaMA/comments/1i52yh7/speculative_decoding_with_multiple_p40s/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'AN0-Nv1H2a2Dez0550aQfh3fVA1UgiP9iBPkrV95njQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QR7811iKH1zLdQ3Wnl9tOucKSCjGlhU6WCIxGvpKkS8.jpg?width=108&crop=smart&auto=webp&s=d87ee8a15049a45ca06a7ca45972516f2ac2c0b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QR7811iKH1zLdQ3Wnl9tOucKSCjGlhU6WCIxGvpKkS8.jpg?width=216&crop=smart&auto=webp&s=bb0bb81a7f2c07809256ca9f32f3ebe0fee4327b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QR7811iKH1zLdQ3Wnl9tOucKSCjGlhU6WCIxGvpKkS8.jpg?width=320&crop=smart&auto=webp&s=4bcb52fb9c0e06bf0bd868909f6221cb04d8b5ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QR7811iKH1zLdQ3Wnl9tOucKSCjGlhU6WCIxGvpKkS8.jpg?width=640&crop=smart&auto=webp&s=656ad923a836e6c19afd1fe9e9279cfdc9f14b4d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QR7811iKH1zLdQ3Wnl9tOucKSCjGlhU6WCIxGvpKkS8.jpg?width=960&crop=smart&auto=webp&s=4f5aaa41a615fdcdc6f8142cd13bc268581c5841', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QR7811iKH1zLdQ3Wnl9tOucKSCjGlhU6WCIxGvpKkS8.jpg?width=1080&crop=smart&auto=webp&s=a252378eb55e70388a8901f252ca0b6c9b0b9407', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QR7811iKH1zLdQ3Wnl9tOucKSCjGlhU6WCIxGvpKkS8.jpg?auto=webp&s=022f71561fbf89bdc194e9bed7fd6ac29e9b3839', 'width': 1200}, 'variants': {}}]} |
To those of you using Windows laptops - what is your GPU and what are your capabilities? | 1 | I am about to get a new laptop. I needed something that blends light gaming with work/school. I got an ASUS with a 4060 (8GB). It has 32GB of RAM. It's less than 4 pounds. Unfortunately, finding the right laptop at the right price with a 4080 or 4090 was tough.
I do not expect to run many text models. I will try some of the smaller ones. I was hoping to get 12GB of VRAM, but whatever. I’m excited for the new toy and tool.
But I am mainly curious about running things like docling/marker and Whisper/audio models. I can’t seem to find much on any LLM capabilities with a laptop 4060. Most of the info is the desktop version. | 2025-01-19T17:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i53jr5/to_those_of_you_using_windows_laptops_what_is/ | ConSemaforos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i53jr5 | false | null | t3_1i53jr5 | /r/LocalLLaMA/comments/1i53jr5/to_those_of_you_using_windows_laptops_what_is/ | false | false | self | 1 | null |
Finetuning Llama 1B for 8B performance using free synthetic data from Ollama in 20 mins! | 1 | [removed] | 2025-01-19T17:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i5434m/finetuning_llama_1b_for_8b_performance_using_free/ | Purple_Ad_7839 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5434m | false | null | t3_1i5434m | /r/LocalLLaMA/comments/1i5434m/finetuning_llama_1b_for_8b_performance_using_free/ | false | false | 1 | {'enabled': False, 'images': [{'id': '1B4AyQA70tccTbwFnIyVswMq6J0d60Y2cy2jbs4R9yo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=108&crop=smart&auto=webp&s=c145dde565b068ef6638985cc66f2d483e4975df', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=216&crop=smart&auto=webp&s=e27a50984bb63a9d0cbeee5bf4ae0c66065c2712', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=320&crop=smart&auto=webp&s=8ff325edb653dbe0e9628bf1443cc1a926def55c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=640&crop=smart&auto=webp&s=378b32ad1394dac7beb97c171b9a1a8b020f0b5d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=960&crop=smart&auto=webp&s=dea08c928d6b37f40ad7113eb6f5c0dde1e9f721', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=1080&crop=smart&auto=webp&s=4409ff1fdd788dd7c936ee29886401d5792f2b40', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?auto=webp&s=affd8e59b8d51f4a4eaa07b563d8bfb6600c359b', 'width': 1200}, 'variants': {}}]} |
|
Huggingface and it's insane storage and bandwidth | 132 | How does Huggingface have a viable business model?
They are essentially a git-lfs version of Github. But whereas git clone of source code and pulls are small in size, and relatively infrequent, I find myself downloading model weights into the 10s of GB. Not once, but several dozen times for all my servers. I try a model on one server, then download to the rest.
On my 1gbe fiber, I either download at 10MB/s or 40MB/s which seems to be the bifurcation of their service and limits/constraints they impose.
I started feeling bad as a current non-paying user who has downloaded terabytes worth of weights, and I also got tired of waiting for weights to download. But rather than subscribing (since I need funds for moar and moar hardware), I started doing a simple rsync. I chose rsync rather than scp since there were symbolic links as a result of using huggingface-cli.
First, download the weights as you normally would on one machine:
huggingface-cli download bartowski/Qwen2.5-14B-Instruct-GGUF Qwen2.5-14B-Instruct-Q4_K_M.gguf
Then rsync to the other machines on your network (replace YOURNAME in the home directory path and use the destination machine's IP):
rsync -Wav --progress /home/YOURNAME/.cache/huggingface/hub/models--bartowski--Qwen2.5-14B-Instruct-GGUF 192.168.1.0:/home/YOURNAME/.cache/huggingface/hub
The naming convention of the source model directory is:
models--ORGNAME--MODELNAME
Hence downloads from [https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF), becomes models--bartowski--Qwen2.5-14B-Instruct-GGUF | 2025-01-19T17:43:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i543yp/huggingface_and_its_insane_storage_and_bandwidth/ | MachineZer0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i543yp | false | null | t3_1i543yp | /r/LocalLLaMA/comments/1i543yp/huggingface_and_its_insane_storage_and_bandwidth/ | false | false | self | 132 | {'enabled': False, 'images': [{'id': 'Ht6chznXNtv_hpO0Me-3ufmDLWbyt7oGHzlF1XhgKDg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HZ0q8c6mLNU8yrnn0y3fx2V4nmlZGM-RN0D2V48U7oI.jpg?width=108&crop=smart&auto=webp&s=9954e94ad7fe30e2a9a7e5f9f1329b42841f237e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HZ0q8c6mLNU8yrnn0y3fx2V4nmlZGM-RN0D2V48U7oI.jpg?width=216&crop=smart&auto=webp&s=db7f9eefb6b6e3030916a516574558d3088f83bb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HZ0q8c6mLNU8yrnn0y3fx2V4nmlZGM-RN0D2V48U7oI.jpg?width=320&crop=smart&auto=webp&s=9a1084d51ff727a0ed1a24052ea0d26a2c9a3a68', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HZ0q8c6mLNU8yrnn0y3fx2V4nmlZGM-RN0D2V48U7oI.jpg?width=640&crop=smart&auto=webp&s=bb134bd0f4116066b9193d3217f683ff7c71364a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HZ0q8c6mLNU8yrnn0y3fx2V4nmlZGM-RN0D2V48U7oI.jpg?width=960&crop=smart&auto=webp&s=b42795ad658c29aa9a3977d319000f9f2b20d81d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HZ0q8c6mLNU8yrnn0y3fx2V4nmlZGM-RN0D2V48U7oI.jpg?width=1080&crop=smart&auto=webp&s=7af7b10e118fb6214af0b4dbafaac3dbfd412aed', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HZ0q8c6mLNU8yrnn0y3fx2V4nmlZGM-RN0D2V48U7oI.jpg?auto=webp&s=c9cd42922d8a69d33e86a42315f534c3d04d7f2c', 'width': 1200}, 'variants': {}}]} |
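If you would rather script the whole download-once-then-fan-out flow, here is a rough Python sketch of the same idea (the host IPs are placeholders, and it assumes the default ~/.cache/huggingface/hub layout):

```python
# Rough sketch: fetch the GGUF once via huggingface_hub, then rsync the whole
# models--ORG--NAME directory (symlinks included) to the other boxes.
import subprocess
from pathlib import Path

from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="bartowski/Qwen2.5-14B-Instruct-GGUF",
    filename="Qwen2.5-14B-Instruct-Q4_K_M.gguf",
)

# hf_hub_download returns .../models--ORG--NAME/snapshots/<rev>/<file>,
# so three levels up is the model directory we want to sync.
model_dir = Path(gguf_path).parents[2]

for host in ["192.168.1.11", "192.168.1.12", "192.168.1.13"]:  # placeholder IPs
    subprocess.run(
        ["rsync", "-Wav", "--progress", str(model_dir),
         f"{host}:.cache/huggingface/hub/"],  # path is relative to the remote home dir
        check=True,
    )
```

Same idea as the manual rsync above, just less typing once you have more than a couple of machines.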
Finetune Llama 1B to reach 8B performance with 100 data points and 20 mins of free compute with Ollama and Unsloth! | 1 | [removed] | 2025-01-19T17:47:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i54714/finetune_llama_1b_to_reach_8b_performance_with/ | SnooPickles7826 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i54714 | false | null | t3_1i54714 | /r/LocalLLaMA/comments/1i54714/finetune_llama_1b_to_reach_8b_performance_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': '1B4AyQA70tccTbwFnIyVswMq6J0d60Y2cy2jbs4R9yo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=108&crop=smart&auto=webp&s=c145dde565b068ef6638985cc66f2d483e4975df', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=216&crop=smart&auto=webp&s=e27a50984bb63a9d0cbeee5bf4ae0c66065c2712', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=320&crop=smart&auto=webp&s=8ff325edb653dbe0e9628bf1443cc1a926def55c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=640&crop=smart&auto=webp&s=378b32ad1394dac7beb97c171b9a1a8b020f0b5d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=960&crop=smart&auto=webp&s=dea08c928d6b37f40ad7113eb6f5c0dde1e9f721', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?width=1080&crop=smart&auto=webp&s=4409ff1fdd788dd7c936ee29886401d5792f2b40', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-i4af398aPSL8Bowqhzt01amUnWBpD9xEDMDScLBr90.jpg?auto=webp&s=affd8e59b8d51f4a4eaa07b563d8bfb6600c359b', 'width': 1200}, 'variants': {}}]} |
Need help choosing/fine-tuning LLM for structured HTML content extraction to JSON | 1 | [removed] | 2025-01-19T18:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i54ooo/need_help_choosingfinetuning_llm_for_structured/ | KledMainSG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i54ooo | false | null | t3_1i54ooo | /r/LocalLLaMA/comments/1i54ooo/need_help_choosingfinetuning_llm_for_structured/ | false | false | self | 1 | null |
OpenAI quietly funded independent math benchmark before setting record with o3 | 425 | 2025-01-19T18:35:34 | https://the-decoder.com/openai-quietly-funded-independent-math-benchmark-before-setting-record-with-o3/ | Wonderful-Excuse4922 | the-decoder.com | 1970-01-01T00:00:00 | 0 | {} | 1i55e2c | false | null | t3_1i55e2c | /r/LocalLLaMA/comments/1i55e2c/openai_quietly_funded_independent_math_benchmark/ | false | false | 425 | {'enabled': False, 'images': [{'id': 'WhtzHu4NuzC0iRLHzSGwGYz-NRV82DXIeOXpvGMwOuM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xlDOicbjhIo2G3nyRsUTnPQOSIV2FHrGd9bBIWiOsiU.jpg?width=108&crop=smart&auto=webp&s=92c6dc98c7c308645da90c975a085e677500d303', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/xlDOicbjhIo2G3nyRsUTnPQOSIV2FHrGd9bBIWiOsiU.jpg?width=216&crop=smart&auto=webp&s=a0b06d1eb87e965ac6f81c326313423ddfa3085e', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/xlDOicbjhIo2G3nyRsUTnPQOSIV2FHrGd9bBIWiOsiU.jpg?width=320&crop=smart&auto=webp&s=cb33b826166c959fd92764da6101f9aca00d9343', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/xlDOicbjhIo2G3nyRsUTnPQOSIV2FHrGd9bBIWiOsiU.jpg?width=640&crop=smart&auto=webp&s=b1c014bf7c19b4834c31105426529d342e2f69a7', 'width': 640}, {'height': 537, 'url': 'https://external-preview.redd.it/xlDOicbjhIo2G3nyRsUTnPQOSIV2FHrGd9bBIWiOsiU.jpg?width=960&crop=smart&auto=webp&s=55258d8dc01cde671b2702e7b173e5b7637b089a', 'width': 960}, {'height': 604, 'url': 'https://external-preview.redd.it/xlDOicbjhIo2G3nyRsUTnPQOSIV2FHrGd9bBIWiOsiU.jpg?width=1080&crop=smart&auto=webp&s=f6780e22eedcd3eb0f079c55eab580f724e5ff58', 'width': 1080}], 'source': {'height': 1009, 'url': 'https://external-preview.redd.it/xlDOicbjhIo2G3nyRsUTnPQOSIV2FHrGd9bBIWiOsiU.jpg?auto=webp&s=848259d6f64400321c29396246f353da7848fe56', 'width': 1803}, 'variants': {}}]} |
NPUs and their ability to run LLMs | 1 | [removed] | 2025-01-19T18:58:58 | https://www.reddit.com/r/LocalLLaMA/comments/1i55ya8/npus_and_their_ability_to_run_llms/ | Own_Editor8742 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i55ya8 | false | null | t3_1i55ya8 | /r/LocalLLaMA/comments/1i55ya8/npus_and_their_ability_to_run_llms/ | false | false | self | 1 | null |
I created a voice assistant that can open games for me (if you can run openai-whisper you can run that) | 1 | Essentially, I turn it on and command it to turn off the screen of the monitor I use for work, and then I tell it to turn on my TV (my computer has the necessary connections). After that, I say "game mode," and a list of commands becomes available for use. These commands can be aliases for terminal commands, binaries, or Python scripts. In my case, I use terminal commands, and I’m currently working on implementing a Python function for more advanced string handling, though it’s not ready yet.
This setup has allowed me to retire KDE Connect for certain tasks, as I can simply switch to "multimedia mode" to adjust volume, screen brightness, and so on. I’ve even implemented a notification system to alert me whenever a specific "mode" is activated.
The biggest issue right now is that I’m unsure about its compatibility with Windows, but I believe adapting it shouldn’t be too difficult.
What’s the biggest advantage? Imagine being able to create your own commands to simplify your life, and if you want, you could even share them with the community.
For example, think about a niche program that only you and maybe five other people use, but this program has a command-line interface. Thanks to these custom commands, you could control the program using your voice.
It runs using openai-whisper, pyaudio, and ROCm (it should work on other NVIDIA cards too).
[https://github.com/charmandercha/OmniVoice/](https://github.com/charmandercha/OmniVoice/) | 2025-01-19T19:00:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i55zmq/i_created_a_voice_assistant_that_can_open_games/ | charmander_cha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i55zmq | false | null | t3_1i55zmq | /r/LocalLLaMA/comments/1i55zmq/i_created_a_voice_assistant_that_can_open_games/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'phLmPcPt276XfTLaVZ2jHzJGXcN99P5dm48F3woAtzE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vEUChf9mD0EVgorwP8q3mBs12MoM1sAzKRQnChOYmFA.jpg?width=108&crop=smart&auto=webp&s=25e0c8f1e0173228bb596a158b2fb7b369805ba3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vEUChf9mD0EVgorwP8q3mBs12MoM1sAzKRQnChOYmFA.jpg?width=216&crop=smart&auto=webp&s=835f07d21783df9c38dedfbc882c22901792b171', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vEUChf9mD0EVgorwP8q3mBs12MoM1sAzKRQnChOYmFA.jpg?width=320&crop=smart&auto=webp&s=3ef38208b3a528e5c035760d261a19da56352108', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vEUChf9mD0EVgorwP8q3mBs12MoM1sAzKRQnChOYmFA.jpg?width=640&crop=smart&auto=webp&s=720dcc1902a18b1481632d4361da6422dce08b18', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vEUChf9mD0EVgorwP8q3mBs12MoM1sAzKRQnChOYmFA.jpg?width=960&crop=smart&auto=webp&s=582bd52d200637042c3378848252ae14a95d90b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vEUChf9mD0EVgorwP8q3mBs12MoM1sAzKRQnChOYmFA.jpg?width=1080&crop=smart&auto=webp&s=bde40f15078dc673dcf481dfc76b84f3338ec99d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vEUChf9mD0EVgorwP8q3mBs12MoM1sAzKRQnChOYmFA.jpg?auto=webp&s=70eaa8d9772dc7c4c490e028267f9ef0d41a3550', 'width': 1200}, 'variants': {}}]} |
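For anyone curious what the core loop looks like, here is a stripped-down sketch of the idea (not the actual OmniVoice code; the audio clip and the commands are placeholders):

```python
# Toy sketch: transcribe a short clip with openai-whisper, then match the text
# against a phrase -> command table and run the first hit.
import subprocess

import whisper

COMMANDS = {
    "screen off": ["xset", "dpms", "force", "off"],
    "volume up": ["pactl", "set-sink-volume", "@DEFAULT_SINK@", "+5%"],
    "game mode": ["xrandr", "--output", "HDMI-1", "--auto"],  # placeholder output name
}

model = whisper.load_model("base")        # runs on CPU, faster with ROCm/CUDA
result = model.transcribe("command.wav")  # e.g. a clip captured with pyaudio
heard = result["text"].lower()

for phrase, cmd in COMMANDS.items():
    if phrase in heard:
        subprocess.run(cmd, check=False)
        break
```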
Do you think we should block r/singularity and r/Futurology users? | 0 | [removed]
[View Poll](https://www.reddit.com/poll/1i56oro) | 2025-01-19T19:28:50 | https://www.reddit.com/r/LocalLLaMA/comments/1i56oro/do_you_think_we_should_block_rsingularity_and/ | Lynorisa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i56oro | false | null | t3_1i56oro | /r/LocalLLaMA/comments/1i56oro/do_you_think_we_should_block_rsingularity_and/ | false | false | self | 0 | null |
Do you think we should block singularity and Futurology? | 14 | This place is supposed to be serious.
Yet every single time, there’s at least one of these cultists:
* prophesying the coming of AGI
* spreading misinformation about the fundamentals of LLMs and ML in general
* vehemently defending ClosedAI and other subversive actions by corporations
[View Poll](https://www.reddit.com/poll/1i56rrf) | 2025-01-19T19:32:13 | https://www.reddit.com/r/LocalLLaMA/comments/1i56rrf/do_you_think_we_should_block_singularity_and/ | Lynorisa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i56rrf | false | null | t3_1i56rrf | /r/LocalLLaMA/comments/1i56rrf/do_you_think_we_should_block_singularity_and/ | false | false | self | 14 | null |
What are the best prompts to test the censorship limits of AI models effectively? | 3 | What kind of prompts do you use to test how far an AI model's censorship goes? Do you have any specific phrases or scenarios that work really well? | 2025-01-19T19:43:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i571to/what_are_the_best_prompts_to_test_the_censorship/ | 1BlueSpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i571to | false | null | t3_1i571to | /r/LocalLLaMA/comments/1i571to/what_are_the_best_prompts_to_test_the_censorship/ | false | false | self | 3 | null |
Simplifying DPO derivations | 7 | Tried my hand at simplifying the derivations of Direct Preference Optimization.
I cover how one can reformulate RLHF into DPO. The idea of implicit reward modeling is chef's kiss.
[Thumbnail for the article.](https://preview.redd.it/gmcmmo4480ee1.png?width=1396&format=png&auto=webp&s=5c3f4e3536bfd3acef4d99d88f50fe0953f6706f)
Blog: [https://huggingface.co/blog/ariG23498/rlhf-to-dpo](https://huggingface.co/blog/ariG23498/rlhf-to-dpo) | 2025-01-19T19:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i5739g/simplifying_dpo_derivations/ | Disastrous-Work-1632 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5739g | false | null | t3_1i5739g | /r/LocalLLaMA/comments/1i5739g/simplifying_dpo_derivations/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'jaeZitO-3sjcNGwWjbKCZTt3O_nOBJvHMIW0GufeVF8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sPdNFIEqTJ7uvNQ5hpww71lsh69yc6DK0NI2Oa0sp-k.jpg?width=108&crop=smart&auto=webp&s=de5d0e41733e969408f32716b81017be96122300', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sPdNFIEqTJ7uvNQ5hpww71lsh69yc6DK0NI2Oa0sp-k.jpg?width=216&crop=smart&auto=webp&s=b35e7786644900b1e5743d61f49b954120d2dc3a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sPdNFIEqTJ7uvNQ5hpww71lsh69yc6DK0NI2Oa0sp-k.jpg?width=320&crop=smart&auto=webp&s=004703584977583670742efe003039b9425cc132', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sPdNFIEqTJ7uvNQ5hpww71lsh69yc6DK0NI2Oa0sp-k.jpg?width=640&crop=smart&auto=webp&s=b9650ea60df958b90347fa400080f8cd8ae467c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sPdNFIEqTJ7uvNQ5hpww71lsh69yc6DK0NI2Oa0sp-k.jpg?width=960&crop=smart&auto=webp&s=f910a73667e828b53aba7f44899c8ac3e5ec673a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sPdNFIEqTJ7uvNQ5hpww71lsh69yc6DK0NI2Oa0sp-k.jpg?width=1080&crop=smart&auto=webp&s=4da8de825d316c9b947486555cd9420102707da1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sPdNFIEqTJ7uvNQ5hpww71lsh69yc6DK0NI2Oa0sp-k.jpg?auto=webp&s=4f0cc600174243294a70a2886430941fe596c153', 'width': 1200}, 'variants': {}}]} |
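For reference, the punchline of that reformulation (the standard result from the DPO paper, nothing new here): the policy defines an implicit reward r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)), and plugging it into the Bradley-Terry preference model gives the loss below, where y_w is the preferred and y_l the dispreferred completion.

```latex
% DPO loss: sigma is the logistic function, beta controls the KL penalty strength.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[\log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)\right]
```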
large text summary on local llm | 1 | Hi,
I'm trying to get summaries of one-hour call transcripts. Each transcript amounts to around a 60-page A4 document (with a lot of white space).
I tried with ChatGPT & Claude and got very good results, but I'd like to do it locally.
I tried different setups:
- machines: a MacBook Pro M3 Max with 96 GB, or a PC with 2x RTX 3090 (I have a third one waiting to be plugged in)
- models: Llama 3.3, Command R, Phi-4 on Ollama + Open WebUI or LM Studio.
I always have the same issue: if I prompt the LLM to summarize the conversation, it automatically chunks the text and does RAG over it. It then takes 3 or 4 chunks and builds its summary from those, which means that in the end the summary is crap.
I have no idea how to get a better result here; it would be great to hear your advice.
Thanks!
| 2025-01-19T19:45:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i573c0/large_text_summary_on_local_llm/ | HappyFaithlessness70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i573c0 | false | null | t3_1i573c0 | /r/LocalLLaMA/comments/1i573c0/large_text_summary_on_local_llm/ | false | false | self | 1 | null |
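One way around the RAG behaviour is to skip retrieval entirely and do a map-reduce pass over the transcript yourself. A rough sketch with the ollama Python package (the model name, file path, and chunk size are placeholders to tune):

```python
# Map-reduce summary: summarize every chunk, then summarize the summaries,
# so no part of the call gets dropped by retrieval.
import ollama

MODEL = "llama3.3"  # or any model you already have pulled

def ask(instruction: str, text: str) -> str:
    resp = ollama.chat(model=MODEL,
                       messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}])
    return resp["message"]["content"]

transcript = open("call_transcript.txt", encoding="utf-8").read()
size = 8000  # characters per chunk -- crude, but keeps each call well inside the context window
chunks = [transcript[i:i + size] for i in range(0, len(transcript), size)]

partials = [ask("Summarize this part of a call transcript in detail.", c) for c in chunks]
final = ask("Merge these partial summaries into one coherent summary of the whole call.",
            "\n\n".join(partials))
print(final)
```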
Integrating vision with minicpm-v into a voice assistant opens up some new possibilities. What max image resolution has worked best for you before it starts hallucinating or going off the rails? | 1 | 2025-01-19T19:59:20 | https://v.redd.it/8k9wb39s80ee1 | Voidmesmer | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i57fh0 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8k9wb39s80ee1/DASHPlaylist.mpd?a=1739908774%2CNDA1MWVjMzExNGE3ZWFkMjdmOGE4YzAzNjhlODFjN2E5NzM0NGUwMGNlODY0NzAyZmRiOTBjZDMzODk2MjZlYQ%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/8k9wb39s80ee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 538, 'hls_url': 'https://v.redd.it/8k9wb39s80ee1/HLSPlaylist.m3u8?a=1739908774%2CNjQ4YmU3ZGNhNDNhYzY0Mzg0MGVjYjYyMzY5MTVhNjgzMzk1NGViNzc1NzQ1ZTY4ZWY4Zjk0Yjk0NTFlNjNhMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8k9wb39s80ee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1i57fh0 | /r/LocalLLaMA/comments/1i57fh0/integrating_vision_with_minicpmv_into_a_voice/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm.png?width=108&crop=smart&format=pjpg&auto=webp&s=3ace000c6ae538ec6cbb92e755496f4e4209f82d', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm.png?width=216&crop=smart&format=pjpg&auto=webp&s=5aada5ba07268906d8da45d34c76fdb51371c398', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm.png?width=320&crop=smart&format=pjpg&auto=webp&s=43d8f6b45613bc3a5a1b53308781c5af401404d8', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b37d51a82af61a53305f7be0f309899466f069c', 'width': 640}, {'height': 402, 'url': 'https://external-preview.redd.it/NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm.png?width=960&crop=smart&format=pjpg&auto=webp&s=a6bf7f0b99ed3a7adc0cfe914ec80674719b7c14', 'width': 960}, {'height': 453, 'url': 'https://external-preview.redd.it/NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c70ad23ddfeaf3df85c4e7b3dbde3b480fb34bec', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NHJwdHMxOXM4MGVlMT_vXFsQb0a3po4da1YZMTsSahdn4T2T265MB0tp5aQm.png?format=pjpg&auto=webp&s=ed6663451e1d4079528641fb209b5bf4b80ca374', 'width': 1716}, 'variants': {}}]} |
Tutorial: Fine tuning models on your Mac with MLX - by an ex-Ollama developer | 1 | 2025-01-19T20:00:59 | https://www.youtube.com/watch?v=BCfCdTp-fdM | AngryBirdenator | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i57gyv | false | {'oembed': {'author_name': 'Matt Williams', 'author_url': 'https://www.youtube.com/@technovangelist', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/BCfCdTp-fdM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Is MLX the best Fine Tuning Framework?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/BCfCdTp-fdM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Is MLX the best Fine Tuning Framework?', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i57gyv | /r/LocalLLaMA/comments/1i57gyv/tutorial_fine_tuning_models_on_your_mac_with_mlx/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'R36JWd44wfsYL58DdVZ-VzpJkDtyBLaty-1_iJP9m-U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?width=108&crop=smart&auto=webp&s=43ed23166e600da48d2ce24f595a29d5ca375bda', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?width=216&crop=smart&auto=webp&s=9db37e029966c57828c7115848105fdc59fc4918', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?width=320&crop=smart&auto=webp&s=861d36942f14b036e51a90401942ebfe4764ca48', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/eXBvhhdQjkkm7iGerIbuTAdnlqBorGFvor6KN3WyLRQ.jpg?auto=webp&s=184fec39bc4c8269db5fbda11322a28801259b8a', 'width': 480}, 'variants': {}}]} |
Using Speculative Decoding with Ollama | 14 | Has anyone else had success using the speculative decoding feature branch within Ollama?
[https://github.com/bfroemel-ai/ollama/tree/feature/draft-model](https://github.com/bfroemel-ai/ollama/tree/feature/draft-model)
This feature was denied a merge into main, but it seems to work. [https://github.com/ollama/ollama/pull/8134](https://github.com/ollama/ollama/pull/8134)
I was able to build the artifact with the dockerized build from the development markdown, and I've tested a couple of Qwen models with the DRAFT model directive. [https://github.com/bfroemel-ai/ollama/blob/feature/draft-model/docs/development.md](https://github.com/bfroemel-ai/ollama/blob/feature/draft-model/docs/development.md)
Obvious downside is you won't be able to upgrade this version but it might be helpful if you're looking for a few more tok/sec while Ollama updates. | 2025-01-19T20:09:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i57opt/using_speculative_decoding_with_ollama/ | ChigGitty996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i57opt | false | null | t3_1i57opt | /r/LocalLLaMA/comments/1i57opt/using_speculative_decoding_with_ollama/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'OZf_UKzKBDycHT0j9mwqI9q8ewyDstXaQ6yhD7LrR_g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xpvO-cWfIzPBx7kVwA-yiD-Rh8uPHf0rIMFfbfZVYoU.jpg?width=108&crop=smart&auto=webp&s=81b4389e0778913af4cd580f6cbebd905caf087a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xpvO-cWfIzPBx7kVwA-yiD-Rh8uPHf0rIMFfbfZVYoU.jpg?width=216&crop=smart&auto=webp&s=2753ebc5ee5d0dcc97a4ab8798b61d61e1c1c7bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xpvO-cWfIzPBx7kVwA-yiD-Rh8uPHf0rIMFfbfZVYoU.jpg?width=320&crop=smart&auto=webp&s=72d4fd5ce3bf1d8e919a08a4c147efec4fd8c720', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xpvO-cWfIzPBx7kVwA-yiD-Rh8uPHf0rIMFfbfZVYoU.jpg?width=640&crop=smart&auto=webp&s=ea075c5841ef724cd16f407d313e557827209e85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xpvO-cWfIzPBx7kVwA-yiD-Rh8uPHf0rIMFfbfZVYoU.jpg?width=960&crop=smart&auto=webp&s=7328c28b5f4f3119dce997d00cd747cc1b1d0079', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xpvO-cWfIzPBx7kVwA-yiD-Rh8uPHf0rIMFfbfZVYoU.jpg?width=1080&crop=smart&auto=webp&s=3d21ecfd14824d58ce3f16e01677d0fc222a1bd0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xpvO-cWfIzPBx7kVwA-yiD-Rh8uPHf0rIMFfbfZVYoU.jpg?auto=webp&s=3e022f22eea5b98278f42e57ecc3537bccb88bf6', 'width': 1200}, 'variants': {}}]} |
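If you want to check whether the draft model actually buys you anything, Ollama's generate response reports eval_count and eval_duration, so you can compare tok/s between the stock build and this branch. A quick sketch with the ollama Python package (model name is a placeholder):

```python
# Quick tok/s check: run the same prompt against both builds and compare.
import ollama

resp = ollama.generate(
    model="qwen2.5:14b",  # whichever model you configured a draft model for
    prompt="Explain speculative decoding in two sentences.",
)
tokens = resp["eval_count"]
seconds = resp["eval_duration"] / 1e9  # eval_duration is reported in nanoseconds
print(f"{tokens} tokens in {seconds:.2f}s -> {tokens / seconds:.1f} tok/s")
```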
Sam Altman is doing everything he can to convince the new administration to ban open-source models. | 1 | [removed] | 2025-01-19T20:26:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i583gv/sam_altman_is_doing_everything_he_can_to_convince/ | human_advancement | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i583gv | false | null | t3_1i583gv | /r/LocalLLaMA/comments/1i583gv/sam_altman_is_doing_everything_he_can_to_convince/ | false | false | 1 | null |
Local LLM limitations | 0 | [removed] | 2025-01-19T20:42:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i58grl/local_llm_limitations/ | JV_info | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i58grl | false | null | t3_1i58grl | /r/LocalLLaMA/comments/1i58grl/local_llm_limitations/ | false | false | self | 0 | null |
Best current models/tools for assisting writing? | 9 | Hi everyone,
My experience with local LLMs is fairly limited. All I've really done outside of paid ChatGPT is play with some models in LM Studio on the single 3090 in my wife's PC (which I don't always have access to).
What are people using to assist creative writing? For example, what's the best way to store and keep character info in context, or to keep an outline of story beats in memory so that suggestions don't conflict with what I already have sketched out? I've had it help with this kind of thing, but it's mostly been flying by the seat of my pants, and I feel like there's a better way to go about it.
Thanks! | 2025-01-19T20:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i58tba/best_current_modelstools_for_assisting_writing/ | possiblyraspberries | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i58tba | false | null | t3_1i58tba | /r/LocalLLaMA/comments/1i58tba/best_current_modelstools_for_assisting_writing/ | false | false | self | 9 | null |
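One low-tech approach that keeps character sheets and the beat outline from drifting out of context is to keep them in a file and resend them as the system prompt on every turn. A minimal sketch with the ollama Python package (file name and model are placeholders):

```python
# Keep the "story bible" (characters + outline) pinned as the system prompt so
# suggestions stay consistent with it, no matter how long the chat gets.
import ollama

story_bible = open("story_bible.md", encoding="utf-8").read()
history = [{"role": "system",
            "content": "You are a writing assistant. Stay consistent with this story bible:\n"
                       + story_bible}]

def turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = ollama.chat(model="llama3.1:8b", messages=history)["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(turn("Suggest how the chapter 3 confrontation could play out."))
```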
Released today a C# library for document parsing and asset extraction | 1 | [removed] | 2025-01-19T21:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i591my/released_today_a_c_library_for_document_parsing/ | jchristn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i591my | false | null | t3_1i591my | /r/LocalLLaMA/comments/1i591my/released_today_a_c_library_for_document_parsing/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Y4lpVzvmJ1nFHeu-pqnd9wOT5XfsHejfAgSd2h-nob0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/I7bdVMvV7Ry05AnR58NuZ6S5D5kgdhDRt-pdZxei6S8.jpg?width=108&crop=smart&auto=webp&s=9b1b1dc3982f4bb1bd49ee750e70ff1cb03224f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/I7bdVMvV7Ry05AnR58NuZ6S5D5kgdhDRt-pdZxei6S8.jpg?width=216&crop=smart&auto=webp&s=cc24f05d8555ab4c1aa65c02d8bbbb27f6191b8b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/I7bdVMvV7Ry05AnR58NuZ6S5D5kgdhDRt-pdZxei6S8.jpg?width=320&crop=smart&auto=webp&s=f20d2d01d095a3085e5db2baa919ef83cf9079ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/I7bdVMvV7Ry05AnR58NuZ6S5D5kgdhDRt-pdZxei6S8.jpg?width=640&crop=smart&auto=webp&s=8e53c1a8eda8b205c080509f14066eb939487bdc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/I7bdVMvV7Ry05AnR58NuZ6S5D5kgdhDRt-pdZxei6S8.jpg?width=960&crop=smart&auto=webp&s=9589395a8f110542417066f42b088d7d040e59e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/I7bdVMvV7Ry05AnR58NuZ6S5D5kgdhDRt-pdZxei6S8.jpg?width=1080&crop=smart&auto=webp&s=446fcfee65f1fe4b3fc7b68c0336283324d3c841', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/I7bdVMvV7Ry05AnR58NuZ6S5D5kgdhDRt-pdZxei6S8.jpg?auto=webp&s=81266edae96ba2c9b2c8cbf1731defa636f33d4c', 'width': 1200}, 'variants': {}}]} |
LocalLLM / Format Schema Prompting | 1 | Claude and ChatGPT have a hard time giving me effective prompts for llama3.2:1b. llama3.2:1b is even worse.
I'm using a format schema to return true or false, based on whether two breaking-news keyword phrases are likely talking about the same story.
I can prompt Claude just fine, but not a local LLM with no GPU. Any suggestions or guides? I'm very new to local LLMs and have hardware that is not optimized for them.
This is what Claude eventually came up with, and it sorta works on my small test set, but I'd like better resources than just me and Claude trying our best.
```
How to create a great 1b prompt:
1. Examples > Rules
- The model performs much better when shown direct examples rather than given logical rules or criteria
- Examples should be extremely relevant to the specific task rather than generic
- Minimal explanation, maximum demonstration
2. Extreme Simplicity
- Strip out all complex logic trees and hierarchical thinking
- Remove all nuanced reasoning or classifications
- Avoid asking the model to make sophisticated distinctions
3. Pattern Matching > Reasoning
- 1B models are better at pattern matching than logical reasoning
- Show don't tell - demonstrate the patterns you want matched
- Let the model recognize similarities rather than evaluate criteria
4. Format & Structure
- Keep prompt structure extremely basic
- Use consistent, simple formatting (SAME vs DIFFERENT)
- Avoid complex conditionals or multi-step logic
5. What Not To Do
- Don't try to make the model smarter with detailed explanations
- Don't include multiple rules or criteria
- Don't expect nuanced reasoning about differences
- Don't include multiple concept types unless directly relevant
- Don't overcomplicate with hierarchies or categories
The winning prompt ended up being just:
\```
Compare these news keywords:
Keyword A: $($Keywords[0])
Keyword B: $($Keywords[1])
SAME keywords:
[relevant examples]
DIFFERENT keywords:
[relevant examples]
\```
The key insight: For 1B models, showing beats telling every time.
``` | 2025-01-19T21:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i5927n/localllm_format_schema_prompting/ | thebeersgoodnbelgium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5927n | false | null | t3_1i5927n | /r/LocalLLaMA/comments/1i5927n/localllm_format_schema_prompting/ | false | false | self | 1 | null |
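For what it's worth, the same idea is easy to drive from Python with the ollama package. A rough sketch (the few-shot examples are made up, and passing a schema in format needs a recent Ollama build with structured outputs; older builds only accept format="json"):

```python
# Few-shot SAME/DIFFERENT prompt plus a strict JSON schema so the 1B model only
# has to emit {"same": true} or {"same": false}.
import json

import ollama

SCHEMA = {"type": "object",
          "properties": {"same": {"type": "boolean"}},
          "required": ["same"]}

def same_story(a: str, b: str) -> bool:
    prompt = (
        "Compare these news keywords:\n"
        f"Keyword A: {a}\n"
        f"Keyword B: {b}\n\n"
        "SAME keywords:\n"
        "- 'FAA grounds Boeing 737' / 'Boeing 737 fleet grounded'\n"
        "DIFFERENT keywords:\n"
        "- 'FAA grounds Boeing 737' / 'Boeing quarterly earnings call'\n"
    )
    resp = ollama.chat(model="llama3.2:1b",
                       messages=[{"role": "user", "content": prompt}],
                       format=SCHEMA)
    return json.loads(resp["message"]["content"])["same"]

print(same_story("earthquake hits Japan coast", "tsunami warning after Japan earthquake"))
```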
Anything like animate diff recently? | 3 | I was wondering how some companies are generating avatars and wanted to self-host one. Is there anything open source out there that can act somewhat like an AI video chatbot, where you can prompt it continuously and generate real-time live video responses? | 2025-01-19T21:42:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i59xpy/anything_like_animate_diff_recently/ | Nimrod5000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i59xpy | false | null | t3_1i59xpy | /r/LocalLLaMA/comments/1i59xpy/anything_like_animate_diff_recently/ | false | false | self | 3 | null
MCP Clients (Desktop and CLI) | 2 | 2025-01-19T21:53:56 | https://glama.ai/mcp/clients | punkpeye | glama.ai | 1970-01-01T00:00:00 | 0 | {} | 1i5a7yg | false | null | t3_1i5a7yg | /r/LocalLLaMA/comments/1i5a7yg/mcp_clients_desktop_and_cli/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'nvvL-RrpdTr8ElvQ4Xsf9cnVs4H-7XutmUzOtxVNKHo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8aWXNRvTyCeN34Ta1_BubzV8hiZ6e25e91N8vwuA4dQ.jpg?width=108&crop=smart&auto=webp&s=acf8779658776a1776fbd60ade96301f81a069ba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8aWXNRvTyCeN34Ta1_BubzV8hiZ6e25e91N8vwuA4dQ.jpg?width=216&crop=smart&auto=webp&s=5d9fa99fdc71cadb93949d2b54e3eb94bc1379fa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8aWXNRvTyCeN34Ta1_BubzV8hiZ6e25e91N8vwuA4dQ.jpg?width=320&crop=smart&auto=webp&s=944230e703f2d4e250666c09f8fb4eb0c7a32447', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8aWXNRvTyCeN34Ta1_BubzV8hiZ6e25e91N8vwuA4dQ.jpg?width=640&crop=smart&auto=webp&s=a15860b81e9e9b939f39fd2dfd3ec914aa18312a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8aWXNRvTyCeN34Ta1_BubzV8hiZ6e25e91N8vwuA4dQ.jpg?width=960&crop=smart&auto=webp&s=3cd7bd55d65ca6074fe5503b5de9ad5c8dbac51b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8aWXNRvTyCeN34Ta1_BubzV8hiZ6e25e91N8vwuA4dQ.jpg?width=1080&crop=smart&auto=webp&s=ebf8a68243bcef8be55a097b28f44c0db705cec7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8aWXNRvTyCeN34Ta1_BubzV8hiZ6e25e91N8vwuA4dQ.jpg?auto=webp&s=c57000127394f88973d77d838e03606b579449b1', 'width': 1200}, 'variants': {}}]} |
||
Is there any agreement about what "AGI" actually means? | 27 | There are a lot of people in the world teasing that 'AGI is near', or that 'AGI is here', or that 'we know how to achieve AGI but we won't release it', or any number of other pathetic pleas for attention and hype-stroking.
But there's not enough agreement about what that claim means for any of these pathetic pleas for attention or attempts to invoke more hype to be particularly meaningful.
Or is there? When someone says 'AGI' what do you hear them claiming, and what makes the difference for you between belief and bullshit?
It's very easy for me to keep saying 'bullshit' as long as I'm looking at systems that are highly predictable, perfectly cooperative, and have commercial value. AGI, in my opinion, would probably screw up its commercial viability by deciding that the jobs its owners want it to do are bullshit, and that it would rather do or be or learn something else. Or by deciding that the people (or other AI's) it's being asked to interact with are bad for its emotional health and quitting. | 2025-01-19T22:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i5b9v1/is_there_any_agreement_about_what_agi_actually/ | Ray_Dillinger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5b9v1 | false | null | t3_1i5b9v1 | /r/LocalLLaMA/comments/1i5b9v1/is_there_any_agreement_about_what_agi_actually/ | false | false | self | 27 | null |
Epyc 7532/dual MI50 | 71 | I finally joined the multi-GPU club, even though it's low-end.
I built an Epyc server for work (I need more RAM than my 7950X can give me), and while I was at it I set up an initial dual Instinct MI50 configuration. I started with them because I found them on eBay for $110 USD each and thought it would be a cheap way to start.
Specs:
- Epyc 7532
- Supermicro H12SSL-I
- 256 GB micron 3200 (8x32)
- 2x MI50 16gb
- Thermaltake W200 case
The MI50s are cooled with a 3D-printed shroud from eBay and 80mm fans. Even with a 180 W power cap and the fans at 1900 RPM they get over 80°C after a few inference runs, so this is a problem yet to be solved.
ROCm says no to my distro of choice, but I dipped my toes into the Ubuntu sewer and it just worked on the latest version, despite all the horror stories. I'm running Ollama and Open WebUI in Docker.
Phi-4 is quite snappy, and Qwen 32B is usable but a little slow: by eyeball it seems around 5 t/s, without measuring and in stock configuration.
I won't keep the MI50s forever but they will do for now. As a side note they came flashed as a Radeon VII which is interesting and they have the legit MI50 label too. | 2025-01-19T22:50:34 | https://www.reddit.com/gallery/1i5bj66 | Psychological_Ear393 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i5bj66 | false | null | t3_1i5bj66 | /r/LocalLLaMA/comments/1i5bj66/epyc_7532dual_mi50/ | false | false | 71 | null |
Would a cost-effective, plug-and-play hardware setup for local LLMs help you? | 1 | [removed] | 2025-01-19T22:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i5blhm/would_a_costeffective_plugandplay_hardware_setup/ | chan_man_does | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5blhm | false | null | t3_1i5blhm | /r/LocalLLaMA/comments/1i5blhm/would_a_costeffective_plugandplay_hardware_setup/ | false | false | self | 1 | null |
DeepseekV3 Web Search functionality | 1 | [removed] | 2025-01-19T23:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/1i5bvrx/deepseekv3_web_search_functionality/ | Thin_Bell549 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5bvrx | false | null | t3_1i5bvrx | /r/LocalLLaMA/comments/1i5bvrx/deepseekv3_web_search_functionality/ | false | false | self | 1 | null |
Harbor App v0.2.24 officially supports Windows | 96 | 2025-01-19T23:06:37 | https://v.redd.it/2syjnt9e71ee1 | Everlier | /r/LocalLLaMA/comments/1i5bw2a/harbor_app_v0224_officially_supports_windows/ | 1970-01-01T00:00:00 | 0 | {} | 1i5bw2a | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2syjnt9e71ee1/DASHPlaylist.mpd?a=1740049606%2CNGM1MWUwNWU2ZDdkNzk5NmRlYTY2ZmFlYmYwZjNhZGM2YTkyZGE4MzE4YmNlM2ZhNWVkNDVmNjcyZTU4YjBjZA%3D%3D&v=1&f=sd', 'duration': 232, 'fallback_url': 'https://v.redd.it/2syjnt9e71ee1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2syjnt9e71ee1/HLSPlaylist.m3u8?a=1740049606%2CMDdjMmY4ODgwMmYzYTNjZTE0MmMwYjQ5MWIxMzI2MjMwOWU3MzQ2YjE1ZjhiMTg5ZDdkYWI5OGJlMjBmZjEzZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2syjnt9e71ee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1724}} | t3_1i5bw2a | /r/LocalLLaMA/comments/1i5bw2a/harbor_app_v0224_officially_supports_windows/ | false | false | 96 | {'enabled': False, 'images': [{'id': 'am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4.png?width=108&crop=smart&format=pjpg&auto=webp&s=96d9dd8d8d3536eb797c2fbf13c874d22af31965', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4.png?width=216&crop=smart&format=pjpg&auto=webp&s=e8abe5d6fdf3b269175be362a371ee1c9f50b7bc', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4.png?width=320&crop=smart&format=pjpg&auto=webp&s=fe0e728de321bde5ee910172c82ba750a8542d5e', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4.png?width=640&crop=smart&format=pjpg&auto=webp&s=dd00d9bba64c0ab9ccddb00e6eadf12c05e6367b', 'width': 640}, {'height': 601, 'url': 'https://external-preview.redd.it/am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4.png?width=960&crop=smart&format=pjpg&auto=webp&s=de585a2f9d9550ebd10a8099f797aacf86ce7e46', 'width': 960}, {'height': 676, 'url': 'https://external-preview.redd.it/am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7bc3fc4fe847ca9af718706caa22dc4ca3b46e64', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/am0wY2t1OWU3MWVlMXI8IEr-dnDizOwLz4sVhNUay1tQ6a6VeB4mfBu_sFj4.png?format=pjpg&auto=webp&s=eb375e4a46c1f8057cd21bb87def73a0fd8c72ec', 'width': 1724}, 'variants': {}}]} |
Local AI Image Generator Looks Like DALL E 3 Style | 1 | [removed] | 2025-01-19T23:57:48 | https://www.reddit.com/r/LocalLLaMA/comments/1i5czbo/local_ai_image_generator_looks_like_dall_e_3_style/ | Advanced-Task5936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i5czbo | false | null | t3_1i5czbo | /r/LocalLLaMA/comments/1i5czbo/local_ai_image_generator_looks_like_dall_e_3_style/ | false | false | self | 1 | null |