title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Upgraded from 3090 to 5090. Oobabooga complaints. | 1 | So as the title says, I got new drivers, but I'm getting a CUDA fatal error when loading. I tried pip uninstalling torch, torchaudio, and torchvision, followed by a fresh install.
Tried
pip install --pre --upgrade --no-cache-dir torch --extra-index-url https://download.pytorch.org/whl/nightly/cu128
Not sure what needs to be uninstalled and reinstalled. I'm not interested in a full wipe of C:\. | 2025-06-29T21:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lnqaea/upgraded_from_3090_to_5090_oobabooga_complaints/ | jebeller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnqaea | false | null | t3_1lnqaea | /r/LocalLLaMA/comments/1lnqaea/upgraded_from_3090_to_5090_oobabooga_complaints/ | false | false | self | 1 | null |
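For context, the usual failure mode here is a torch wheel built without Blackwell (sm_120) kernels. A quick sanity check after installing the cu128 nightly, as a sketch (exact version strings will differ on your machine):

```python
# sanity check after installing the cu128 nightly; assumes the 5090 is GPU 0
import torch

print(torch.__version__)         # should be a 2.x dev/nightly build
print(torch.version.cuda)        # needs to report 12.8 for Blackwell support
print(torch.cuda.is_available())
# RTX 5090 (Blackwell) reports compute capability (12, 0); an older wheel
# without sm_120 kernels is what produces the CUDA fatal error on load
print(torch.cuda.get_device_capability(0))
```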
AGI/ASI Research 20250628 - Corporate Artificial General Intelligence | 1 | 2025-06-29T21:47:21 | https://v.redd.it/u7fuxd3msx9f1 | Financial_Pick8394 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lnqk9i | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/u7fuxd3msx9f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'width': 1280, 'scrubber_media_url': 'https://v.redd.it/u7fuxd3msx9f1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/u7fuxd3msx9f1/DASHPlaylist.mpd?a=1753825655%2CMDgzZjAxMTdhMmYxMzExYWZkNDMyNjY3OThkMDI2ZmY2MzAyYWRjZDRiM2QwMGIzZTA1NzU3YTIwYTg5MDhhNQ%3D%3D&v=1&f=sd', 'duration': 179, 'hls_url': 'https://v.redd.it/u7fuxd3msx9f1/HLSPlaylist.m3u8?a=1753825655%2COGZiZTBjNjEyOWU3MThiMjAyMzFhNTRhZmEyMTdjZTQ2MmI2YThiZTNhNDY0MDFjZmI2NDFlMzU1NTUyOTczYg%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}} | t3_1lnqk9i | /r/LocalLLaMA/comments/1lnqk9i/agiasi_research_20250628_corporate_artificial/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?format=pjpg&auto=webp&s=57beb41e0d0d6086a4f803efa9de610d1c985871', 'width': 1280, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=108&crop=smart&format=pjpg&auto=webp&s=39c95ba3e0b6ec5ad57b8d7abda8059bc5c884a0', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=216&crop=smart&format=pjpg&auto=webp&s=398d50a5f30d012070c0ca6535bc79eb8625b888', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=320&crop=smart&format=pjpg&auto=webp&s=679630836cc6664d553bc49902eaf55032a2dbde', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=640&crop=smart&format=pjpg&auto=webp&s=cb409f564c646f50fd89aa11778f4a08e0752af9', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=960&crop=smart&format=pjpg&auto=webp&s=9b23f3b5102b379a682646fae4f1d23e5ac41567', 'width': 960, 'height': 540}, {'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=007a6e199f02889b46d505b79856958fa3905f7d', 'width': 1080, 'height': 607}], 'variants': {}, 'id': 'ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX'}], 'enabled': False} |
||
Using classifier-free guidance to prompt instruct models (with the tags) works better for creative writing than prompting the model outright | 1 | OK, so I was playing around with classifier-free guidance, and it occurred to me: Why not just put the whole damn string in there? I loathe how programmatic the responses can be, so maybe that might give the poor thing some freaking room to breathe, lol. Human beings do not acquire and use language that way, so why should my language model? Better to let them percolate up through all that voodoo instead (?)
I'm using Qwen3-235B-A22B right now, but I don't see why it wouldn't work with any other model.
Just try it. Disable all your samplers. Use the entire string that you'd send to the model *including the instruct tags* as the guidance. Depending on the model, you may want to try using e.g. "Continue" as the user prompt, and something like "Continuing: " for the assistant response. You may have to do a little wrangling to get it to work right, but it's a markedly different experience. You'll see.
Caveat: I couldn't fall asleep last night, so perhaps this is a subtle delusion. I don't think so, though. Try using negative guidance, too, and watch it invert the... umm, what should I call them... "homeostatic semantic property clusters" (?) in the output. That is, it will flip the sexual orientation of characters, physical attributes, etc.
I'm aware that this is what CFG *does*, of course. I'm just kinda nonplussed as to why it's never *applied* in this manner for instruct models. UIs should have a knob you can fiddle with, with 1 in the middle, running from 0 to 1 on one side and from 1 to 5 on the other, which simply applies CFG to your ACTUAL PROMPT, period. Don't submit the actual tags/instructions to the model directly at all!
For long-form, natural, *human* "free writing", this is clearly superior imho. | 2025-06-29T21:59:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lnqtog/using_classifierfree_guidance_to_prompt_instruct/ | apodicity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnqtog | false | null | t3_1lnqtog | /r/LocalLLaMA/comments/1lnqtog/using_classifierfree_guidance_to_prompt_instruct/ | false | false | self | 1 | null |
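To make the setup concrete, here is a minimal hand-rolled version of the trick: two forward passes per token, with the instruct-tagged string driving the guidance branch and only the bare continuation fed to the branch we sample from. The model name, prompts, and greedy decoding are illustrative assumptions, not the poster's exact setup:

```python
# minimal CFG decoding sketch: logits = plain + scale * (guided - plain)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # small stand-in, not the 235B model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

scale = 1.5  # the "knob": 1.0 = no guidance, >1 pushes toward the tagged prompt

# the guidance branch gets the full chat-templated string, instruct tags included
guided = tok.apply_chat_template(
    [{"role": "user", "content": "Continue"}],
    tokenize=False, add_generation_prompt=True,
) + "Continuing: "
guided_ids = tok(guided, return_tensors="pt").input_ids
# the branch we actually decode from never sees the tags
plain_ids = tok("Continuing: ", return_tensors="pt").input_ids

for _ in range(100):
    with torch.no_grad():
        g = model(guided_ids).logits[:, -1]
        p = model(plain_ids).logits[:, -1]
    mixed = p + scale * (g - p)           # classifier-free guidance blend
    nxt = mixed.argmax(-1, keepdim=True)  # greedy for simplicity (samplers disabled)
    guided_ids = torch.cat([guided_ids, nxt], dim=-1)
    plain_ids = torch.cat([plain_ids, nxt], dim=-1)

print(tok.decode(plain_ids[0], skip_special_tokens=True))
```

A scale below 1 weakens the conditioning, and pushing it negative is what produces the attribute inversions described above.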
OpenAI reportedly ‘recalibrating’ compensation in response to Meta hires | TechCrunch | 1 | 2025-06-29T22:01:58 | https://techcrunch.com/2025/06/29/openai-reportedly-recalibrating-compensation-in-response-to-meta-hires/ | RhubarbSimilar1683 | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1lnqw0p | false | null | t3_1lnqw0p | /r/LocalLLaMA/comments/1lnqw0p/openai_reportedly_recalibrating_compensation_in/ | false | false | default | 1 | null |
|
You can just RL a model to beat any "AI detectors" | 1 | https://preview.redd.it/p4binxqqvx9f1.png?width=783&format=png&auto=webp&s=5af26533b3e667d6f0382d11163331aedf6bc42d

https://preview.redd.it/k4tcfdmsvx9f1.png?width=2574&format=png&auto=webp&s=934ff9d043c7021764743c443feff0f0767c25cd

Baseline
• Model: Llama-3.1 8B-Instruct
• Prompt: plain "Write an essay about X"
• Detector: ZeroGPT
Result: 100% AI-written

https://preview.redd.it/09nmithvvx9f1.png?width=1204&format=png&auto=webp&s=82d0071d8579effb1f1b75eaa5c037a56385ef9d

Data
• Synthetic dataset of 150 school-style prompts (history, literature, tech). Nothing fancy, just JSON lines + the system prompt "You are a human essay writer"

https://preview.redd.it/d189whuxvx9f1.png?width=3456&format=png&auto=webp&s=5fdd406d1df4a40f3f4c1623b6b049026559f29e

First training run
After ~30 GRPO steps on a single A100:
• ZeroGPT score drops from 100 → 42%
The model learned to:
• Write a coherent intro
• Stuff one line of high-entropy junk
• Finish normally
Average "human-ness" skyrockets because the detector averages per-sentence scores.

https://preview.redd.it/c4bkar70wx9f1.png?width=941&format=png&auto=webp&s=4e3a86287c2d0cc273fd9f3854634cbd7c8ecf75

Patch #1
Added a gibberish classifier (tiny DistilRoBERTa) and multiplied the reward by its minimum "clean" score. Junk lines now tank reward → the behaviour disappears. GRPO's beta ≈ how harshly to penalize incoherence; setting β = 0.4 stabilized the reward curve, with no more oscillation between genius & garbage. Removed reasoning (memory constraints).

https://preview.redd.it/prmgkja2wx9f1.png?width=652&format=png&auto=webp&s=79f46c100445337e257dc3b7666ffdf2ba826252

Tiny models crush it
Swapped in Qwen 0.5B with LoRA rank 8, upped num_generations → 64.
Result after 7 steps: the best sample is already at 28% "human". A smaller vocab seems to leak less of the LM "signature" (the model learned to use lots of proper nouns to trick the detector).

https://preview.redd.it/2e6g1pm7wx9f1.png?width=800&format=png&auto=webp&s=cfbcaa7fd8c6baa2a05d063a3989ba282c8d31a2

Colab: [https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb)

Detector bug?
ZeroGPT sometimes marks the first half AI, second half human for the same paragraph. The RL agent locks onto that gradient and exploits it.

Takeaways:
• The classifier clearly over-fits surface patterns rather than semantics
• Single scalar feedback is enough for LMs to reverse-engineer public detectors
• Add even a tiny auxiliary reward (gibberish, length) to stop obvious failure modes
• Public "AI/Not-AI" classifiers are security-through-obscurity

Reward function: [https://codefile.io/f/R4O9IdGEhg](https://codefile.io/f/R4O9IdGEhg) | 2025-06-29T22:22:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lnrd1t/you_can_just_rl_a_model_to_beat_any_ai_detectors/ | HOLUPREDICTIONS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnrd1t | false | null | t3_1lnrd1t | /r/LocalLLaMA/comments/1lnrd1t/you_can_just_rl_a_model_to_beat_any_ai_detectors/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260, 'height': 260}, 'resolutions': [{'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216, 'height': 216}], 'variants': {}, 'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg'}], 'enabled': False} |
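The composite reward the post arrives at is easy to sketch. The real implementation is behind the codefile.io link above, so treat the names and shape below as assumptions:

```python
# hedged sketch of the patched reward: detector "human-ness" gated by the
# *minimum* per-sentence cleanliness, so one high-entropy junk line tanks it
def composite_reward(zerogpt_ai_score: float, sentence_clean_probs: list[float]) -> float:
    human_ness = 1.0 - zerogpt_ai_score            # e.g. 0.42 AI -> 0.58 human
    if not sentence_clean_probs:
        return 0.0
    return human_ness * min(sentence_clean_probs)  # gibberish gate

# a coherent essay the detector half-believes is human
print(composite_reward(0.42, [0.99, 0.97, 0.98]))  # ~0.56
# the same detector score achieved via one junk line -> reward collapses
print(composite_reward(0.42, [0.99, 0.12, 0.98]))  # ~0.07
```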
|
GitHub - khimaros/enc: `cc`, but for english | 1 | This tool "compiles" (more accurately, transpiles) English-language files to any other programming language; for example, `enc hello.en -o hello.py`. There is more documentation and there are many examples in the repo. It is compatible with (and has been tested against) llama.cpp/server | 2025-06-29T22:23:14 | https://github.com/khimaros/enc | xhimaros | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lnrda7 | false | null | t3_1lnrda7 | /r/LocalLLaMA/comments/1lnrda7/github_khimarosenc_cc_but_for_english/ | false | false | default | 1 | null |
Please convince me not to get a GPU I don't need. Can any local LLM compare with cloud models? | 1 | I pay for Claude to assist with coding / tool calling, which I use for my job all day. I feel a strong urge to waste tons of money on a nice GPU, but realistically the local models aren't as strong, or even as cheap, as the cloud models.
I'm trying to self-reflect hard, and in this moment of clarity, I see this as a distraction: an expensive new toy I won't use much. | 2025-06-29T23:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lnsax9/please_convince_me_not_to_get_a_gpu_i_dont_need/ | TumbleweedDeep825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnsax9 | false | null | t3_1lnsax9 | /r/LocalLLaMA/comments/1lnsax9/please_convince_me_not_to_get_a_gpu_i_dont_need/ | false | false | self | 1 | null |
LLM Inference with CPP only | 1 | I am looking for C++-based LLM inference and post-processing repos; any ideas on where I can get started? Does llama.cpp have efficient post-processing techniques? | 2025-06-29T23:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lnscnw/llm_inference_with_cpp_only/ | Waste_Ad_2764 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnscnw | false | null | t3_1lnscnw | /r/LocalLLaMA/comments/1lnscnw/llm_inference_with_cpp_only/ | false | false | self | 1 | null |
Build a PC or not? | 1 | Hey everyone,
I’m planning to get started with machine learning. Right now, I have an M1 Mac Mini (16GB RAM, 50GB storage left). Will it be enough?
Appreciate any advice! | 2025-06-29T23:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lnsgvy/build_a_pc_or_not/ | InternetBest7599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnsgvy | false | null | t3_1lnsgvy | /r/LocalLLaMA/comments/1lnsgvy/build_a_pc_or_not/ | false | false | self | 1 | null |
What Is Context Engineering? My Thoughts.. | 1 | Basically, it's a step above 'prompt engineering'.
The prompt is for the moment, the specific input.
'Context engineering' is setting up for the moment.
Think about it as building a movie set - the background, the details, etc. That would be the context framing. The prompt would be when the actors come in and say their one line.
Same thing for context engineering. You're building the set for the LLM to come in and say their one line.
This is a much more detailed way of framing the LLM than saying "Act as a Meta Prompt Master and develop a badass prompt...."
You have to understand Linguistics Programming (I wrote an article on it, link in bio)
Since English is the new coding language, users have to understand Linguistics a little more than the average bear.
Linguistics compression is the important aspect of this "context engineering": it saves tokens so your context frame doesn't fill up the entire context window.
If you do not choose your words carefully, you can easily fill up a context window and not get the results you're looking for. Linguistics compression reduces the number of tokens while maintaining maximum information density.
And that's why I say it's a step above prompt engineering. I create digital notebooks for my prompts. Now I have a name for them - Context Engineering Notebooks...
As an example, I have a digital writing notebook with seven or eight tabs and 20 pages in a Google document. Most of the pages are samples of my writing; I have a tab dedicated to resources, best practices, etc. This writing notebook serves as a context notebook for the LLM, steering it to produce output similar to my writing style. I've created an environment and resources for the LLM to pull from. The result is an output that's probably 80% my style, my tone, my specific word choices, etc.
| 2025-06-29T23:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lnsqkl/what_is_context_engineering_my_thoughts/ | Lumpy-Ad-173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnsqkl | false | null | t3_1lnsqkl | /r/LocalLLaMA/comments/1lnsqkl/what_is_context_engineering_my_thoughts/ | false | false | self | 1 | null |
AMD published 51 quants (mostly ONNX) in HF this week (a third of their current total of 145) | 1 | [https://huggingface.co/amd/models](https://huggingface.co/amd/models) | 2025-06-29T23:29:05 | choose_a_guest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lnst4m | false | null | t3_1lnst4m | /r/LocalLLaMA/comments/1lnst4m/amd_published_51_quants_mostly_onnx_in_hf_this/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/k01eyatr9y9f1.png?auto=webp&s=1f464226ef99cb1b854fa82e97b35274cd349e0c', 'width': 1888, 'height': 1080}, 'resolutions': [{'url': 'https://preview.redd.it/k01eyatr9y9f1.png?width=108&crop=smart&auto=webp&s=b42f8eb1a91dfe9e0c9e72cea526155127be3af1', 'width': 108, 'height': 61}, {'url': 'https://preview.redd.it/k01eyatr9y9f1.png?width=216&crop=smart&auto=webp&s=6216ea58fbcb0d9225f5f48be4f554300db48992', 'width': 216, 'height': 123}, {'url': 'https://preview.redd.it/k01eyatr9y9f1.png?width=320&crop=smart&auto=webp&s=5809cc2080edb087a9f335e56a84063073e3db7c', 'width': 320, 'height': 183}, {'url': 'https://preview.redd.it/k01eyatr9y9f1.png?width=640&crop=smart&auto=webp&s=e7c03885aadeb549ab16280c28549c6f179d4a72', 'width': 640, 'height': 366}, {'url': 'https://preview.redd.it/k01eyatr9y9f1.png?width=960&crop=smart&auto=webp&s=af2d7eb4dd8f24000f55966e24862841582fb462', 'width': 960, 'height': 549}, {'url': 'https://preview.redd.it/k01eyatr9y9f1.png?width=1080&crop=smart&auto=webp&s=c85b58c793fc70f23046110e63ab358c148279e8', 'width': 1080, 'height': 617}], 'variants': {}, 'id': 'lxBvsfGX5DBlxJ7Kq_mtplJCRZ_eriRlRM7EdcytejM'}], 'enabled': True} |
||
Help me design a robust on-prem Llama 3 70B infrastructure for 30 users – Complete hardware/software list wanted | 1 | Hi everyone,
I’m planning to build a **private, on-premise infrastructure** to serve **Llama 3 70B** for my office (about 30 users, possibly with a few remote users via VPN).
**No data or files should leave our local network** – security and privacy are key. All inference and data processing must stay entirely within our private servers.
My requirements:
* Serve Llama 3 70B (chat/inference, not training) to up to 30 simultaneous users (browser chat interface and API endpoints).
* Support file uploads and interaction with the model (docs, pdfs, txt, etc.), again, strictly within our own storage/network.
* I want to allow remote use for staff working from home, but only via VPN and under full company control.
* I want a **detailed, complete list** of what to buy (hardware, GPUs, server specs, network, power, backup, etc.) and recommended open-source software stack for this use-case.
* Budget is flexible, but I want the best price/performance/capacity ratio and a future-proof build.
Thanks in advance for your help and expertise! | 2025-06-29T23:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lnt6yj/help_me_design_a_robust_onprem_llama_3_70b/ | Routine_Fail_2255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnt6yj | false | null | t3_1lnt6yj | /r/LocalLLaMA/comments/1lnt6yj/help_me_design_a_robust_onprem_llama_3_70b/ | false | false | self | 1 | null |
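One hedged starting point for the serving layer: vLLM's engine does continuous batching, which is what makes ~30 concurrent users feasible, and the same engine backs its OpenAI-compatible HTTP server for the browser and API front ends. The model name, parallelism, and context length below are illustrative assumptions (sized for roughly four 48 GB-class GPUs) to validate against your actual concurrency:

```python
# throughput smoke test with vLLM's offline API before wiring up the server;
# model, tensor parallelism, and context length are assumptions to adjust
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    tensor_parallel_size=4,   # split the 70B across 4 GPUs
    max_model_len=8192,       # cap context to leave VRAM for the KV cache
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize our leave policy in three bullet points."], params)
print(outputs[0].outputs[0].text)
```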
Is there a deepseek r1 uncensored? | 1 | I'm enjoying using DeepSeek R1 in LM Studio. It's a good tool, but I'm annoyed by how defensive it gets about anything it doesn't like; its parameters and guidelines are too heavily baked in to ignore, and I'm too much of a noob to edit an AI (if that's even possible with the hardware, software, and knowledge available to me). So, as the title says, is there an uncensored DeepSeek R1? Should I study to do it myself? | 2025-06-29T23:50:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lnt9kl/is_there_a_deepseek_r1_uncensored/ | Elfo_Sovietico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnt9kl | false | null | t3_1lnt9kl | /r/LocalLLaMA/comments/1lnt9kl/is_there_a_deepseek_r1_uncensored/ | false | false | self | 1 | null |
Simple textual lists for llm rankings | 1 | Hey there all. I know benchmarks exist, but they're too clunky for screen readers (I'm blind). So is there some sort of active blog or website or mailing list that cuts through all that rainfall of models and actually tells us which ones are the best based on size and specialty? Thanks. | 2025-06-30T00:22:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lntw6i/simple_textual_lists_for_llm_rankings/ | Silver-Champion-4846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lntw6i | false | null | t3_1lntw6i | /r/LocalLLaMA/comments/1lntw6i/simple_textual_lists_for_llm_rankings/ | false | false | self | 1 | null |
Kimi-Dev-72B - Minimum specs needed to run on a high end PC | 1 | Just recently watched Julian Goldie's Facebook post on Kimi-Dev-72B. He seemed to be saying he was running it on a PC, but the AI models say it takes a high-end server that costs substantially more money. Anyone have any experience or helpful input on this?
Thanks, | 2025-06-30T00:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lnu0o0/kimidev72b_minimum_specs_needed_to_run_on_a_high/ | texrock100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnu0o0 | false | null | t3_1lnu0o0 | /r/LocalLLaMA/comments/1lnu0o0/kimidev72b_minimum_specs_needed_to_run_on_a_high/ | false | false | self | 1 | null |
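For rough sizing, the back-of-envelope math below shows why opinions differ: quantized, the 72B weights alone span roughly 33-71 GiB depending on precision, so a multi-GPU PC (or heavy CPU offload) can work where a single consumer card cannot. The bits-per-weight figures are approximate averages, not exact GGUF sizes:

```python
# approximate weight-only memory for a 72B model at common GGUF quantizations
# (ignores KV cache and runtime overhead; bpw values are rough averages)
params = 72e9
for name, bpw in [("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.85), ("Q3_K_M", 3.9)]:
    gib = params * bpw / 8 / 1024**3
    print(f"{name}: ~{gib:.0f} GiB")  # Q4_K_M lands near 41 GiB -> 2x 24 GB GPUs + offload
```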
BAIDU releases ERNIE 4.5 | 1 | [https://huggingface.co/baidu/ERNIE-4.5-VL-424B-A47B-Base-Paddle](https://huggingface.co/baidu/ERNIE-4.5-VL-424B-A47B-Base-Paddle)
llama.cpp support for ERNIE 4.5 0.3B
[https://github.com/ggml-org/llama.cpp/pull/14408](https://github.com/ggml-org/llama.cpp/pull/14408)
vllm Ernie4.5 and Ernie4.5MoE Model Support
[https://github.com/vllm-project/vllm/pull/20220](https://github.com/vllm-project/vllm/pull/20220) | 2025-06-30T00:31:50 | https://huggingface.co/collections/baidu/ernie-45-6861cd4c9be84540645f35c9 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lnu372 | false | null | t3_1lnu372 | /r/LocalLLaMA/comments/1lnu372/baidu_releases_ernie_45/ | false | false | default | 1 | null |
Baidu releases ERNIE 4.5 models on huggingface | 1 | llama.cpp support for ERNIE 4.5 0.3B
[https://github.com/ggml-org/llama.cpp/pull/14408](https://github.com/ggml-org/llama.cpp/pull/14408)
vllm Ernie4.5 and Ernie4.5MoE Model Support
[https://github.com/vllm-project/vllm/pull/20220](https://github.com/vllm-project/vllm/pull/20220) | 2025-06-30T00:34:16 | https://huggingface.co/collections/baidu/ernie-45-6861cd4c9be84540645f35c9 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lnu4zl | false | null | t3_1lnu4zl | /r/LocalLLaMA/comments/1lnu4zl/baidu_releases_ernie_45_models_on_huggingface/ | false | false | default | 1 | null |
GPU Learning and Optimization on Macbook | 1 | My question is simple: I want to buy a MacBook and locally build and train my own (mini) VLM and LLM models.
What frameworks and options should I learn and use to squeeze the most compute out of the macOS GPU cores? Is there an alternative to CUDA? Does JAX work all right? What are my options? | 2025-06-30T01:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lnv75q/gpu_learning_and_optimization_on_macbook/ | Electronic-Guess-878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnv75q | false | null | t3_1lnv75q | /r/LocalLLaMA/comments/1lnv75q/gpu_learning_and_optimization_on_macbook/ | false | false | self | 1 | null |
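On Apple silicon the usual CUDA stand-ins are PyTorch's MPS backend, Apple's MLX, and JAX's (still experimental) Metal plugin. A minimal MPS check, as a sketch:

```python
# verify the Mac's GPU cores are reachable from PyTorch via Metal (MPS)
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.randn(2048, 2048, device=device)
y = x @ x.T  # matmul dispatched to the GPU when device == mps
print(device, y.shape)
```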
Healthcare space | 1 | The healthcare space is overdue for disruption. I’ve already built a working prototype—now I’m assembling a sharp, agile team to take it to market. I’m looking for one exceptional full-stack engineer with AI expertise and one driven individual with strong sales, marketing, and business acumen. If you're ready to help reshape the future of healthcare, let’s talk. | 2025-06-30T02:52:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lnwsfe/healthcare_space/ | junebugg62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnwsfe | false | null | t3_1lnwsfe | /r/LocalLLaMA/comments/1lnwsfe/healthcare_space/ | false | false | self | 1 | null |
Week 2: Building a Small Language Model from Scratch(Positional Embeddings, RoPE, and Model Distillation) - June 30 - July 4 | 1 | 2025-06-30T03:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lnx8js/week_2_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnx8js | false | null | t3_1lnx8js | /r/LocalLLaMA/comments/1lnx8js/week_2_building_a_small_language_model_from/ | false | false | 1 | null |