All the key links for OpenAI's open-sourced GPT-OSS models (117B and 21B parameters), released under the Apache 2.0 license. Here is a quick guide to exploring and building with them:
I focused on showing the core steps side by side: tokenization, embedding, and the transformer layers, highlighting the self-attention and feed-forward parts without getting lost in too much technical depth.
It shows how these layers work together to understand context and generate meaningful output!
If you are curious about the architecture behind AI language models, or want a clean way to explain it, hit me up, I'd love to share!
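The self-attention and feed-forward steps described above can be sketched in a few lines of plain Python. This is a toy illustration only, not the real model: queries, keys, and values are the unprojected inputs (a real layer would first apply learned Wq, Wk, Wv projections), and all weights below are made-up examples.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(X):
    # Toy self-attention: each position's query is compared against every
    # key (scaled dot product), the scores become attention weights, and
    # the output is the weighted sum of the value vectors.
    d = len(X[0])
    out = []
    for q in X:
        scores = [dot(q, k) / math.sqrt(d) for k in X]  # scaled dot products
        weights = softmax(scores)                       # attention weights sum to 1
        out.append([sum(w * v[i] for w, v in zip(weights, X))
                    for i in range(d)])
    return out

def feed_forward(x, W1, b1, W2, b2):
    # Position-wise feed-forward block: Linear -> ReLU -> Linear.
    # W1 and W2 are given as lists of weight columns.
    h = [max(0.0, dot(x, col) + b) for col, b in zip(W1, b1)]
    return [dot(h, col) + b for col, b in zip(W2, b2)]

# Two token embeddings; each output row is a convex combination of the inputs.
X = [[1.0, 0.0], [0.0, 1.0]]
print(self_attention(X))
```

Each output row of `self_attention` mixes information from every position, which is exactly how the model builds context before the feed-forward layer transforms each position independently.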
Hugging Face just made life easier with the new hf CLI! huggingface-cli has been renamed to hf, and the rename brings new features like hf jobs: we can now run any script or Docker image on dedicated Hugging Face infrastructure with a single command. It's a good addition for running experiments and jobs on the fly. To get started, just run:

pip install -U huggingface_hub

List of hf CLI commands:
Main commands:
- hf auth: Manage authentication (login, logout, etc.).
- hf cache: Manage the local cache directory.
- hf download: Download files from the Hub.
- hf jobs: Run and manage Jobs on the Hub.
- hf repo: Manage repos on the Hub.
- hf upload: Upload a file or a folder to the Hub.
- hf version: Print information about the hf version.
- hf env: Print information about the environment.

Authentication subcommands (hf auth):
- login: Log in using a Hugging Face token.
- logout: Log out of your account.
- whoami: See which account you are logged in as.
- switch: Switch between different stored access tokens/profiles.
- list: List all stored access tokens.

Jobs subcommands (hf jobs):
- run: Run a Job on Hugging Face infrastructure.
- inspect: Display detailed information on one or more Jobs.
- logs: Fetch the logs of a Job.
- ps: List running Jobs.
- cancel: Cancel a Job.
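If you prefer to stay in Python, the same huggingface_hub package that ships the CLI exposes programmatic counterparts for several of these commands. A minimal sketch (the repo and file names below are just examples, and the login call is commented out since it needs a token):

```python
from huggingface_hub import hf_hub_download, login, whoami

# Counterpart of `hf auth login`: reads a token interactively
# or from the HF_TOKEN environment variable.
# login()

# Counterpart of `hf auth whoami` (requires being logged in):
# print(whoami())

# Counterpart of `hf download`: fetch a single file from a public repo
# and return its local cache path. "openai/gpt-oss-20b" is one of the
# Apache 2.0 models mentioned above.
# config_path = hf_hub_download(repo_id="openai/gpt-oss-20b",
#                               filename="config.json")
# print(config_path)
```

This is handy when a job or notebook needs a model file mid-script and shelling out to the CLI would be awkward.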