
Josh

josh-a

AI & ML interests

I build things, like advanced localization models.

Recent Activity

updated a collection 3 days ago
Tiri-J - v1.2
updated a collection 17 days ago
Tiri-J - BAD DATA - Dont use

Organizations

Straker Ltd

josh-a's activity

reacted to tegridydev's post with 👍 24 days ago
WTF is Fine-Tuning? (intro4devs)

Fine-tuning your LLM is like min-maxing your ARPG hero so you can push high-level dungeons and get the most out of your build/gear... Makes sense, right? 😃

Here's a cheat sheet for devs (but open to anyone!)

---

TL;DR

- Full Fine-Tuning: Max performance, high resource needs, best reliability.
- PEFT (parameter-efficient fine-tuning, e.g. LoRA): Efficient, cost-effective, mainstream, enhanced by AutoML. (See the LoRA sketch after this section.)
- Instruction Fine-Tuning: Ideal for command-following AI, often combined with RLHF and CoT.
- RAFT (retrieval-augmented fine-tuning): Best for fact-grounded models with dynamic retrieval.
- RLHF (reinforcement learning from human feedback): Produces ethical, high-quality conversational AI, but it's expensive.

Choose wisely and match your approach to your task, budget, and deployment constraints.
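Since PEFT is where most devs start, here's a minimal sketch of what it looks like in practice, using LoRA via the Hugging Face `transformers` and `peft` libraries. The model ID, rank, and target module names below are placeholder assumptions, not recommendations; swap in your actual checkpoint and match the module names to its architecture.

```python
# Minimal PEFT (LoRA) sketch. Assumes `transformers` and `peft` are installed;
# the base model ID is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "your-org/your-base-model"  # placeholder: replace with your checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA injects small trainable low-rank matrices into selected projection layers,
# so only a tiny fraction of the weights are updated during fine-tuning.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params

# From here, train as usual (Trainer, TRL, or a custom loop): only the adapter
# weights get gradients, which is what keeps PEFT cheap vs. full fine-tuning.
```

That tiny trainable footprint is the whole point: you keep most of the base model frozen, which is why the cheat sheet calls PEFT the efficient, cost-effective, mainstream option.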

I just posted the full extended article here if you want to continue reading >>>

https://huggingface.co/blog/tegridydev/fine-tuning-dev-intro-2025