The Inference Endpoints dashboard is the central interface for managing, monitoring, and deploying inference endpoints across multiple organizations and accounts. Users can switch between organizations, view endpoint statuses, manage quotas, and access deployment configurations. You can access the dashboard by logging in at endpoints.huggingface.co.
Click the + New button in the top section to create a new endpoint deployment. This takes you to the Model Catalog, which provides access to 100+ pre-configured models available for deployment as inference endpoints. Use it to browse, filter, and deploy models directly.

If you cannot find a suitable model in the catalog, you can click the “Deploy From Hugging Face” button, which allows you to deploy from any Hugging Face repository.

After this you will be directed to the configuration page, where you can read about all the configuration options in more detail.
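At a high level, the options on the configuration page boil down to a model, compute, and provider selection. The sketch below assembles such a configuration as a plain dictionary; the field names are illustrative assumptions modeled on the public client, not the authoritative API schema.

```python
# Sketch of an endpoint configuration. Field names mirror the options on
# the configuration page (model, compute, scaling, provider); they are
# illustrative assumptions, not the exact API schema.

def build_endpoint_config(name, repository, vendor="aws", region="us-east-1",
                          accelerator="gpu", instance_type="nvidia-a10g",
                          instance_size="x1", min_replicas=0, max_replicas=1):
    """Assemble a configuration dictionary for a new inference endpoint."""
    return {
        "name": name,
        "model": {
            "repository": repository,  # any Hugging Face Hub repo id
            "framework": "pytorch",
            "task": "text-generation",
        },
        "compute": {
            "accelerator": accelerator,
            "instanceType": instance_type,
            "instanceSize": instance_size,
            "scaling": {
                "minReplica": min_replicas,  # 0 enables scale-to-zero
                "maxReplica": max_replicas,
            },
        },
        "provider": {"vendor": vendor, "region": region},
        "type": "protected",  # access level: public / protected / private
    }

config = build_endpoint_config("my-endpoint", "openai-community/gpt2")
```

Keeping the configuration as data like this makes it easy to version-control deployments alongside your application code.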
Endpoints can be in one of several states:
The endpoint details page gives you information about, and lets you control the configuration of, an individual endpoint. Access this view by clicking any endpoint in the main endpoints list.
The endpoint name is displayed alongside its current state. You can pause a running endpoint or wake up an endpoint that has scaled to zero.
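Pause and resume actions are also exposed over the management API. A minimal sketch, assuming the api.endpoints.huggingface.cloud base URL and a token in the HF_TOKEN environment variable (the organization and endpoint names are hypothetical; check the Inference Endpoints API reference for the authoritative routes):

```python
import os
import urllib.request

# Assumed management API base URL; verify against the official API docs.
API_BASE = "https://api.endpoints.huggingface.cloud/v2/endpoint"

def action_url(namespace: str, name: str, action: str) -> str:
    """Build the URL for a pause or resume action on one endpoint."""
    assert action in ("pause", "resume")
    return f"{API_BASE}/{namespace}/{name}/{action}"

def post_action(namespace: str, name: str, action: str):
    """Send the action; requires an HF_TOKEN with access to the namespace."""
    req = urllib.request.Request(
        action_url(namespace, name, action),
        method="POST",
        headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
    )
    return urllib.request.urlopen(req)

# Only attempt the network call when a token is configured.
if os.environ.get("HF_TOKEN"):
    post_action("my-org", "my-endpoint", "pause")  # hypothetical names
```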
The page displays the configuration options available for each endpoint. You’ll find a more in-depth walkthrough of all of them in the configuration section.

The endpoints table displays critical information for each deployment. Click Edit Columns to show or hide specific information columns. Available columns include State, Task, Instance, Vendor, Container, Access, Tags, URL, and Created and Updated timestamps.

Use the search bar to filter endpoints by name, provider, task, or tags. The Status dropdown allows filtering by specific endpoint states.
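The same filtering behavior can be reproduced client-side over a list of endpoint records. A small sketch (the records and their fields are hypothetical; the dashboard's actual data model may differ):

```python
def filter_endpoints(endpoints, query="", state=None):
    """Filter endpoint records by a free-text query (name, vendor, task,
    tags) and optionally by state, mirroring the dashboard's search bar
    and Status dropdown."""
    query = query.lower()
    result = []
    for ep in endpoints:
        haystack = " ".join(
            [ep.get("name", ""), ep.get("vendor", ""), ep.get("task", "")]
            + ep.get("tags", [])
        ).lower()
        if query and query not in haystack:
            continue
        if state and ep.get("state") != state:
            continue
        result.append(ep)
    return result

# Hypothetical endpoint records for illustration.
endpoints = [
    {"name": "gpt2-demo", "vendor": "aws", "task": "text-generation",
     "tags": ["demo"], "state": "running"},
    {"name": "embed-prod", "vendor": "gcp", "task": "feature-extraction",
     "tags": ["prod"], "state": "paused"},
]
```

For example, `filter_endpoints(endpoints, query="prod")` matches only the second record, while `filter_endpoints(endpoints, state="running")` matches only the first.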

Access account settings through the dropdown menu in the top-right corner. This provides access to organization switching, billing information, and access token management.

The Quotas section displays your current resource usage and limits across different cloud providers and hardware types. Access this view to monitor consumption and request additional capacity when needed.
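Conceptually, each quota entry is a usage/limit pair per vendor and hardware type. The sketch below flags entries approaching their limit, the situation in which you would request more capacity (the records and field names are illustrative assumptions):

```python
def near_quota_limit(quotas, threshold=0.8):
    """Return quota entries whose usage is at or above the given fraction
    of the limit -- candidates for a capacity request."""
    flagged = []
    for q in quotas:
        if q["limit"] and q["used"] / q["limit"] >= threshold:
            flagged.append(q)
    return flagged

# Hypothetical quota records for illustration.
quotas = [
    {"vendor": "aws", "hardware": "nvidia-a10g", "used": 9, "limit": 10},
    {"vendor": "aws", "hardware": "cpu", "used": 1, "limit": 16},
]
# Only the A10G quota is at 90% of its limit here.
```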

Use the Request More button to submit requests for increased limits when you approach quota thresholds. This allows you to scale your inference deployments beyond your current allocations.
The Audit Logs section provides a chronological record of all actions performed on your inference endpoints. Use it to track changes, troubleshoot issues, and maintain security oversight of your deployments.
Use the All Endpoints dropdown to filter logs by specific endpoint instances. This allows you to focus on activity for particular deployments.

Each audit log entry contains: