---
page_title: Invoices - API Docs - HCP Terraform
description: >-
Use the `invoices` endpoint to access an organization's invoices. List previous invoices and get the next invoice using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Invoices API
-> **Note:** The invoices API is only available in HCP Terraform.
Organizations on credit-card-billed plans may view their previous and upcoming invoices.
## List Invoices
This endpoint lists the previous invoices for an organization.
It uses a pagination scheme that's somewhat different from [our standard pagination](/terraform/cloud-docs/api-docs#pagination). The page size is always 10 items and is not configurable; if there are no more items, `meta.continuation` will be null. The current page is controlled by the `cursor` parameter, described below.
`GET /organizations/:organization_name/invoices`
| Parameter | Description |
| -------------------- | --------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization you'd like to view invoices for. |
| `cursor`             | **Optional.** The ID of the invoice where the page should start. If omitted, the endpoint will return the first page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/invoices
```
### Sample Response
```json
{
"data": [
{
"id": "in_1I4sraHcjZv6Wm0g7nC34mAi",
"type": "billing-invoices",
"attributes": {
"created-at": "2021-01-01T19:00:38Z",
"external-link": "https://pay.stripe.com/invoice/acct_1Eov7THcjZv6Wm0g/invst_IgFMMfdzAZzMQq8GXyUbrk9lFMqvp9SX/pdf",
"number": "2F8CA1AE-0006",
"paid": true,
"status": "paid",
"total": 21000
}
},
    {...},
{
"id": "in_1Hte5nHcjZv6Wm0g2Q8hFctH",
"type": "billing-invoices",
"attributes": {
"created-at": "2020-06-01T19:00:51Z",
"external-link": "https://pay.stripe.com/invoice/acct_1Eov7THcjZv6Wm0g/invst_IUdMM6wl0JfA95tgWGZxpBGXYtJwmBgY/pdf",
"number": "2F8CA1AE-0005",
"paid": true,
"status": "paid",
"total": 21000
}
}
],
"meta": {
"continuation": "in_1IBpkEHcjZv6Wm0gHcgc2uwN"
}
}
```
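Because the page size is fixed at ten items, retrieving a full invoice history means requesting pages until `meta.continuation` comes back null. A minimal sketch of that loop, assuming `jq` is available (the organization name is illustrative):

```shell
ORG="hashicorp"
CURSOR=""

while :; do
  # Pass the cursor only after the first page.
  URL="https://app.terraform.io/api/v2/organizations/$ORG/invoices"
  [ -n "$CURSOR" ] && URL="$URL?cursor=$CURSOR"

  PAGE=$(curl -s \
    --header "Authorization: Bearer $TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    "$URL")

  # Print each invoice number on this page.
  echo "$PAGE" | jq -r '.data[].attributes.number'

  # meta.continuation is null once there are no more items.
  CURSOR=$(echo "$PAGE" | jq -r '.meta.continuation // empty')
  [ -z "$CURSOR" ] && break
done
```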
## Get Next Invoice
This endpoint lists the next month's invoice for an organization.
`GET /organizations/:organization_name/invoices/next`
| Parameter | Description |
| ------------------- | ---------------------------- |
| `:organization_name` | The name of the organization |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/invoices/next
```
### Sample Response
```json
{
"data": {
"id": "in_upcoming_510DEB1F-0002",
"type": "billing-invoices",
"attributes": {
"created-at": "2021-02-01T20:00:00Z",
"external-link": "",
"number": "510DEB1F-0002",
"paid": false,
"status": "draft",
"total": 21000
}
}
}
```
---
page_title: Team Membership - API Docs - HCP Terraform
description: >-
Use the team membership API to manage a team's users. Add and remove a user from a team using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Team Membership API
<!-- BEGIN: TFC:only name:pnp-callout -->
-> **Note:** Team management is available in HCP Terraform **Standard** Edition. Free organizations can also use this API, but can only manage membership of their owners team. [Learn more about HCP Terraform pricing here](https://www.hashicorp.com/products/terraform/pricing).
<!-- END: TFC:only name:pnp-callout -->
The Team Membership API is used to add or remove users from teams. The [Team API](/terraform/cloud-docs/api-docs/teams) is used to create or destroy teams.
## Organization Membership
-> **Note:** To add users to a team, they must first receive and accept the invitation to join the organization by email. This process ensures that you do not accidentally add the wrong person by mistyping a username. Refer to [the Organization Memberships API documentation](/terraform/cloud-docs/api-docs/organization-memberships) for more information.
## Add a User to Team (With user ID)
This method adds multiple users to a team using the user ID. Both users and teams must already exist.
`POST /teams/:team_id/relationships/users`
| Parameter | Description |
| ---------- | ------------------- |
| `:team_id` | The ID of the team. |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------- | ------ | ------- | -------------------------------- |
| `data[].type` | string | | Must be `"users"`. |
| `data[].id` | string | | The ID of the user you want to add to this team. |
### Sample Payload
```json
{
"data": [
{
"type": "users",
"id": "myuser1"
},
{
"type": "users",
"id": "myuser2"
}
]
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/teams/257525/relationships/users
```
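When the user IDs come from elsewhere in a script, the payload can be generated rather than hand-written. A sketch that builds the same request with `jq` (the IDs are illustrative):

```shell
# Build a users payload from a space-separated list of user IDs.
USER_IDS="myuser1 myuser2"

jq -n --arg ids "$USER_IDS" \
  '{data: ($ids | split(" ") | map({type: "users", id: .}))}' \
  | curl \
      --header "Authorization: Bearer $TOKEN" \
      --header "Content-Type: application/vnd.api+json" \
      --request POST \
      --data @- \
      https://app.terraform.io/api/v2/teams/257525/relationships/users
```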
## Add a User to Team (With organization membership ID)
This method adds multiple users to a team using the organization membership ID. Unlike the user ID method, the user only needs an invitation to the organization.
`POST /teams/:team_id/relationships/organization-memberships`
| Parameter | Description |
| ---------- | ------------------- |
| `:team_id` | The ID of the team. |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------- | ------ | ------- | -------------------------------- |
| `data[].type` | string | | Must be `"organization-memberships"`. |
| `data[].id` | string | | The organization membership ID of the user to add. |
### Sample Payload
```json
{
"data": [
{
"type": "organization-memberships",
"id": "ou-nX7inDHhmC3quYgy"
},
{
"type": "organization-memberships",
"id": "ou-tTJph1AQVK5ZmdND"
}
]
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/teams/257525/relationships/organization-memberships
```
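If you only know a user's email address, you would first resolve it to an organization membership ID. A sketch of that flow, assuming the [Organization Memberships API](/terraform/cloud-docs/api-docs/organization-memberships) list response exposes each membership's `email` attribute (verify against that endpoint's documentation):

```shell
ORG="hashicorp"
EMAIL="user@example.com"

# Assumption: memberships expose an email attribute; filter client-side.
OM_ID=$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/organizations/$ORG/organization-memberships" \
  | jq -r --arg email "$EMAIL" \
      '.data[] | select(.attributes.email == $email) | .id')

jq -n --arg id "$OM_ID" \
  '{data: [{type: "organization-memberships", id: $id}]}' \
  | curl \
      --header "Authorization: Bearer $TOKEN" \
      --header "Content-Type: application/vnd.api+json" \
      --request POST \
      --data @- \
      https://app.terraform.io/api/v2/teams/257525/relationships/organization-memberships
```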
## Delete a User from Team (With user ID)
This method removes multiple users from a team using the user ID. Both users and teams must already exist. This method only removes a user from this team. It does not delete that user overall.
`DELETE /teams/:team_id/relationships/users`
| Parameter | Description |
| ---------- | ------------------- |
| `:team_id` | The ID of the team. |
### Request Body
This DELETE endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------- | ------ | ------- | ----------------------------------- |
| `data[].type` | string | | Must be `"users"`. |
| `data[].id` | string | | The ID of the user to remove from this team. |
### Sample Payload
```json
{
"data": [
{
"type": "users",
"id": "myuser1"
},
{
"type": "users",
"id": "myuser2"
}
]
}
```
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
--data @payload.json \
https://app.terraform.io/api/v2/teams/257525/relationships/users
```
## Delete a User from Team (With organization membership ID)
This method removes multiple users from a team using the organization membership ID. This method only removes a user from this team. It does not delete that user overall.
`DELETE /teams/:team_id/relationships/organization-memberships`
| Parameter | Description |
| ---------- | ------------------- |
| `:team_id` | The ID of the team. |
### Request Body
This DELETE endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------- | ------ | ------- | ----------------------------------- |
| `data[].type` | string | | Must be `"organization-memberships"`. |
| `data[].id` | string | | The organization membership ID of the user to remove. |
### Sample Payload
```json
{
"data": [
{
"type": "organization-memberships",
"id": "ou-nX7inDHhmC3quYgy"
},
{
"type": "organization-memberships",
"id": "ou-tTJph1AQVK5ZmdND"
}
]
}
```
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
--data @payload.json \
https://app.terraform.io/api/v2/teams/257525/relationships/organization-memberships
```
---
page_title: Plan Exports - API Docs - HCP Terraform
description: >-
Use the `/plan-exports` endpoint to manage plan exports for a Terraform run. Create and show plan exports, and download and delete exported plan data using the HTTP API.
---
# Plan Exports API
Plan exports allow users to download data exported from the plan of a Run in a Terraform workspace. Currently, the only supported format for exporting plan data is to generate mock data for Sentinel.
## Create a plan export
`POST /plan-exports`
This endpoint exports data from a plan in the specified format. The export process is asynchronous, and the resulting data becomes downloadable when its status is `"finished"`. The data is then available for one hour before expiring. After the hour is up, a new export can be created.
| Status | Response | Reason |
| ------- | ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "plan-exports"`) | Successfully created a plan export |
| [404][] | [JSON API error object][] | Plan not found, or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.), or a plan export of the provided `data-type` is already pending or downloadable for this plan |
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------------------------ | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"plan-exports"`. |
| `data.attributes.data-type` | string | | The format for the export. Currently, the only supported format is `"sentinel-mock-bundle-v0"`. |
| `data.relationships.plan.data` | object | | A JSON API relationship object that represents the plan being exported. This object must have a `type` of `plans`, and the `id` of a finished Terraform plan that does not already have a downloadable export of the specified `data-type` (e.g., `{"type": "plans", "id": "plan-8F5JFydVYAmtTjET"}`) |
### Sample Payload
```json
{
"data": {
"type": "plan-exports",
"attributes": {
"data-type": "sentinel-mock-bundle-v0"
},
"relationships": {
"plan": {
"data": {
"id": "plan-8F5JFydVYAmtTjET",
"type": "plans"
}
}
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/plan-exports
```
### Sample Response
```json
{
"data": {
"id": "pe-3yVQZvHzf5j3WRJ1",
"type": "plan-exports",
"attributes": {
"data-type": "sentinel-mock-bundle-v0",
"status": "queued",
"status-timestamps": {
"queued-at": "2019-03-04T22:29:53+00:00",
},
},
"relationships": {
"plan": {
"data": {
"id": "plan-8F5JFydVYAmtTjET",
"type": "plans"
}
}
},
"links": {
"self": "/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1",
}
}
}
```
## Show a plan export
`GET /plan-exports/:id`
| Parameter | Description |
| --------- | ---------------------------------- |
| `id` | The ID of the plan export to show. |
There is no endpoint to list plan exports. You can find IDs for plan exports in the
`relationships.exports` property of a plan object.
| Status | Response | Reason |
| ------- | ---------------------------------------------- | ------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "plan-exports"`) | The request was successful |
| [404][] | [JSON API error object][] | Plan export not found, or user unauthorized to perform action |
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1
```
### Sample Response
```json
{
"data": {
"id": "pe-3yVQZvHzf5j3WRJ1",
"type": "plan-exports",
"attributes": {
"data-type": "sentinel-mock-bundle-v0",
"status": "finished",
"status-timestamps": {
"queued-at": "2019-03-04T22:29:53+00:00",
"finished-at": "2019-03-04T22:29:58+00:00",
"expired-at": "2019-03-04T23:29:58+00:00"
      }
},
"relationships": {
"plan": {
"data": {
"id": "plan-8F5JFydVYAmtTjET",
"type": "plans"
}
}
},
"links": {
"self": "/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1",
"download": "/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1/download"
}
}
}
```
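Because exports are produced asynchronously, a client typically creates the export, polls this endpoint until `status` is `finished`, and then fetches the archive from the download endpoint described below. A minimal polling sketch, assuming `jq` is available (the export ID is illustrative):

```shell
EXPORT_ID="pe-3yVQZvHzf5j3WRJ1"

# Poll until the export finishes (give up after 30 attempts).
for i in $(seq 1 30); do
  STATUS=$(curl -s \
    --header "Authorization: Bearer $TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    "https://app.terraform.io/api/v2/plan-exports/$EXPORT_ID" \
    | jq -r '.data.attributes.status')

  [ "$STATUS" = "finished" ] && break
  sleep 2
done

# Follow the redirect and save the archive locally.
curl -s --location \
  --header "Authorization: Bearer $TOKEN" \
  --output export.tar.gz \
  "https://app.terraform.io/api/v2/plan-exports/$EXPORT_ID/download"
```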
## Download exported plan data
`GET /plan-exports/:id/download`
This endpoint generates a temporary URL to the location of the exported plan data in a `.tar.gz` archive, and then redirects to that link. If using a client that can follow redirects, you can use this endpoint to save the `.tar.gz` archive locally without needing to save the temporary URL.
| Status | Response | Reason |
| ------- | ------------------------- | ------------------------------------------------------------- |
| [302][] | HTTP Redirect | Plan export found and temporary download URL generated |
| [404][] | [JSON API error object][] | Plan export not found, or user unauthorized to perform action |
[302]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/302
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[JSON API error object]: https://jsonapi.org/format/#error-objects
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--location \
https://app.terraform.io/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1/download \
> export.tar.gz
```
## Delete exported plan data
`DELETE /plan-exports/:id`
Plan exports expire after being available for one hour, but they can be deleted manually as well.
| Status | Response | Reason |
| ------- | ------------------------- | ------------------------------------------------------------- |
| [204][] | No content | Plan export deleted successfully |
| [404][] | [JSON API error object][] | Plan export not found, or user unauthorized to perform action |
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[JSON API error object]: https://jsonapi.org/format/#error-objects
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
-X DELETE \
https://app.terraform.io/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1
```
---
page_title: Workspace Variables - API Docs - HCP Terraform
description: >-
Use the workspace `/vars` endpoint to manage workspace-specific variables. List, create, update, and delete variables using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Workspace Variables API
This set of APIs covers create, update, list, and delete operations on workspace variables.
Viewing variables requires permission to read variables for the workspace. Creating, updating, and deleting variables requires permission to read and write variables for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
## Create a Variable
`POST /workspaces/:workspace_id/vars`
| Parameter | Description |
| --------------- | -------------------------------------------------- |
| `:workspace_id` | The ID of the workspace to create the variable in. |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"vars"`. |
| `data.attributes.key` | string | | The name of the variable. |
| `data.attributes.value` | string | `""` | The value of the variable. |
| `data.attributes.description` | string | | The description of the variable. |
| `data.attributes.category` | string | | Whether this is a Terraform or environment variable. Valid values are `"terraform"` or `"env"`. |
| `data.attributes.hcl` | bool | `false` | Whether to evaluate the value of the variable as a string of HCL code. Has no effect for environment variables. |
| `data.attributes.sensitive` | bool | `false` | Whether the value is sensitive. If true, the variable is written once and not visible thereafter. |
**Deprecation warning**: The custom `filter` properties are replaced by JSON API `relationships` and will be removed from future versions of the API!
| Key path | Type | Default | Description |
| -------------------------- | ------ | ------- | ----------------------------------------------------- |
| `filter.workspace.name` | string | | The name of the workspace that owns the variable. |
| `filter.organization.name` | string | | The name of the organization that owns the workspace. |
### Sample Payload
```json
{
"data": {
"type":"vars",
"attributes": {
"key":"some_key",
"value":"some_value",
"description":"some description",
"category":"terraform",
"hcl":false,
"sensitive":false
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars
```
### Sample Response
```json
{
"data": {
"id":"var-EavQ1LztoRTQHSNT",
"type":"vars",
"attributes": {
"key":"some_key",
"value":"some_value",
"description":"some description",
"sensitive":false,
"category":"terraform",
"hcl":false,
"version-id":"1aa07d63ea8ff4df941c94ca9ddfd5d2bd04"
},
"relationships": {
"configurable": {
"data": {
"id":"ws-4j8p6jX1w33MiDC7",
"type":"workspaces"
},
"links": {
"related":"/api/v2/organizations/my-organization/workspaces/my-workspace"
}
}
},
"links": {
"self":"/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars/var-EavQ1LztoRTQHSNT"
}
}
}
```
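As a variation on the payload above, a credential intended for the run environment would use the `env` category and mark the value sensitive; per the attribute table, the value is then write-only after creation. A sketch (key, value, and workspace ID are illustrative):

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{
    "data": {
      "type": "vars",
      "attributes": {
        "key": "AWS_SECRET_ACCESS_KEY",
        "value": "s3cr3t",
        "description": "example credential",
        "category": "env",
        "sensitive": true
      }
    }
  }' \
  https://app.terraform.io/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars
```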
## List Variables
`GET /workspaces/:workspace_id/vars`
| Parameter | Description |
| --------------- | ---------------------------------------------- |
| `:workspace_id` | The ID of the workspace to list variables for. |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/workspaces/ws-cZE9LERN3rGPRAmH/vars"
```
### Sample Response
```json
{
"data": [
{
"id":"var-AD4pibb9nxo1468E",
"type":"vars","attributes": {
"key":"name",
"value":"hello",
"description":"some description",
"sensitive":false,
"category":"terraform",
"hcl":false,
"version-id":"1aa07d63ea8ff4df941c94ca9ddfd5d2bd04"
},
"relationships": {
"configurable": {
"data": {
"id":"ws-cZE9LERN3rGPRAmH",
"type":"workspaces"
},
"links": {
"related":"/api/v2/organizations/my-organization/workspaces/my-workspace"
}
}
},
"links": {
"self":"/api/v2/workspaces/ws-cZE9LERN3rGPRAmH/vars/var-AD4pibb9nxo1468E"
}
}
]
}
```
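The update and delete endpoints below take a variable ID rather than a key. One way to resolve a key to its ID is to filter this list response with `jq` (a sketch; the workspace ID and key are illustrative):

```shell
WORKSPACE_ID="ws-cZE9LERN3rGPRAmH"

VAR_ID=$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/vars" \
  | jq -r '.data[] | select(.attributes.key == "name") | .id')

echo "$VAR_ID"   # e.g. var-AD4pibb9nxo1468E
```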
## Update Variables
`PATCH /workspaces/:workspace_id/vars/:variable_id`
| Parameter | Description |
| --------------- | ----------------------------------------------- |
| `:workspace_id` | The ID of the workspace that owns the variable. |
| `:variable_id` | The ID of the variable to be updated. |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------- | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"vars"`. |
| `data.id` | string | | The ID of the variable to update. |
| `data.attributes` | object | | New attributes for the variable. This object can include `key`, `value`, `description`, `category`, `hcl`, and `sensitive` properties, which are described above under [create a variable](#create-a-variable). All of these properties are optional; if omitted, a property will be left unchanged. |
### Sample Payload
```json
{
"data": {
"id":"var-yRmifb4PJj7cLkMG",
"attributes": {
"key":"name",
"value":"mars",
"description":"some description",
"category":"terraform",
"hcl": false,
"sensitive": false
},
"type":"vars"
}
}
```
### Sample Request
```bash
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars/var-yRmifb4PJj7cLkMG
```
### Sample Response
```json
{
"data": {
"id":"var-yRmifb4PJj7cLkMG",
"type":"vars",
"attributes": {
"key":"name",
"value":"mars",
"description":"some description",
"sensitive":false,
"category":"terraform",
"hcl":false,
"version-id":"1aa07d63ea8ff4df941c94ca9ddfd5d2bd04"
},
"relationships": {
"configurable": {
"data": {
"id":"ws-4j8p6jX1w33MiDC7",
"type":"workspaces"
},
"links": {
"related":"/api/v2/organizations/workspace-v2-06/workspaces/workspace-v2-06"
}
}
},
"links": {
"self":"/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars/var-yRmifb4PJj7cLkMG"
}
}
}
```
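Because omitted attributes are left unchanged, a payload that only rotates a variable's value can be reduced to the `value` attribute alone. A minimal sketch:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data '{
    "data": {
      "id": "var-yRmifb4PJj7cLkMG",
      "type": "vars",
      "attributes": {
        "value": "jupiter"
      }
    }
  }' \
  https://app.terraform.io/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars/var-yRmifb4PJj7cLkMG
```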
## Delete Variables
`DELETE /workspaces/:workspace_id/vars/:variable_id`
| Parameter | Description |
| --------------- | ----------------------------------------------- |
| `:workspace_id` | The ID of the workspace that owns the variable. |
| `:variable_id` | The ID of the variable to be deleted. |
### Sample Request
```bash
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars/var-yRmifb4PJj7cLkMG
```
---
page_title: Team Access - API Docs - HCP Terraform
description: >-
Use the `/team-workspaces` endpoint to manage team access to a workspace. List, show, add, update, and remove team access using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Team Access API
<!-- BEGIN: TFC:only name:pnp-callout -->
-> **Note:** Team management is available in HCP Terraform **Standard** Edition. [Learn more about HCP Terraform pricing here](https://www.hashicorp.com/products/terraform/pricing).
<!-- END: TFC:only name:pnp-callout -->
The team access APIs are used to associate a team to permissions on a workspace. A single `team-workspace` resource contains the relationship between the Team and Workspace, including the privileges the team has on the workspace.
## Resource permissions
A `team-workspace` resource represents a team's local permissions on a specific workspace. Teams can also have organization-level permissions that grant access to workspaces. HCP Terraform uses the more permissive access level. For example, a team with the **Manage workspaces** permission enabled has admin access on all workspaces, even if their `team-workspace` on a particular workspace only grants read access. For more information, refer to [Managing Workspace Access](/terraform/cloud-docs/users-teams-organizations/teams/manage#managing-workspace-access).
Any member of an organization can view team access relative to their own team memberships, including secret teams of which they are a member. Organization owners and workspace admins can modify team access or view the full set of secret team accesses. The organization token and the owners team token can act as an owner on these endpoints. Refer to [Permissions](/terraform/cloud-docs/users-teams-organizations/permissions) for additional information.
## List Team Access to a Workspace
`GET /team-workspaces`
| Status | Response | Reason |
| ------- | ------------------------------------------------- | ---------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "team-workspaces"`) | The request was successful |
| [404][] | [JSON API error object][] | Workspace not found or user unauthorized to perform action |
### Query Parameters
[These are standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). If no pagination query parameters are provided, the endpoint is not paginated and returns all results.
| Parameter | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `filter[workspace][id]` | **Required.** The workspace ID to list team access for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
| `page[number]` | **Optional.** |
| `page[size]` | **Optional.** |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
"https://app.terraform.io/api/v2/team-workspaces?filter%5Bworkspace%5D%5Bid%5D=ws-XGA52YVykdTgryTN"
```
### Sample Response
```json
{
"data": [
{
"id": "tws-19iugLwoNgtWZbKP",
"type": "team-workspaces",
"attributes": {
"access": "custom",
"runs": "apply",
"variables": "none",
"state-versions": "none",
"sentinel-mocks": "none",
"workspace-locking": false,
"run-tasks": false
},
"relationships": {
"team": {
"data": {
"id": "team-DBycxkdQrGFf5zEM",
"type": "teams"
},
"links": {
"related": "/api/v2/teams/team-DBycxkdQrGFf5zEM"
}
},
"workspace": {
"data": {
"id": "ws-XGA52YVykdTgryTN",
"type": "workspaces"
},
"links": {
"related": "/api/v2/organizations/my-organization/workspaces/my-workspace"
}
}
},
"links": {
"self": "/api/v2/team-workspaces/tws-19iugLwoNgtWZbKP"
}
}
]
}
```
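The bracket encoding can also be scripted. A sketch that spells out the percent-encoding described above and adds explicit pagination, assuming `jq` is available (the page values are illustrative):

```shell
# Percent-encode the bracketed parameters by hand: [ -> %5B, ] -> %5D
WORKSPACE_ID="ws-XGA52YVykdTgryTN"
FILTER="filter%5Bworkspace%5D%5Bid%5D=$WORKSPACE_ID"
PAGE="page%5Bnumber%5D=1&page%5Bsize%5D=20"

curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/team-workspaces?$FILTER&$PAGE" \
  | jq -r '.data[].id'
```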
## Show a Team Access relationship
`GET /team-workspaces/:id`
| Status | Response | Reason |
| ------- | ------------------------------------------------- | ------------------------------------------------------------ |
| [200][] | [JSON API document][] (`type: "team-workspaces"`) | The request was successful |
| [404][] | [JSON API error object][] | Team access not found or user unauthorized to perform action |
| Parameter | Description |
| --------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `:id` | The ID of the team/workspace relationship. Obtain this from the [list team access action](#list-team-access-to-a-workspace) described above. |
-> **Note:** As mentioned in [Add Team Access to a Workspace](#add-team-access-to-a-workspace) and [Update Team Access
to a Workspace](#update-team-access-to-a-workspace), several permission attributes are not editable unless `access` is
set to `custom`. When access is `read`, `plan`, `write`, or `admin`, these attributes are read-only and reflect the
implicit permissions granted to the current access level.
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
https://app.terraform.io/api/v2/team-workspaces/tws-s68jV4FWCDwWvQq8
```
### Sample Response
```json
{
"data": {
"id": "tws-s68jV4FWCDwWvQq8",
"type": "team-workspaces",
"attributes": {
"access": "write",
"runs": "apply",
"variables": "write",
"state-versions": "write",
"sentinel-mocks": "read",
"workspace-locking": true,
"run-tasks": false
},
"relationships": {
"team": {
"data": {
"id": "team-DBycxkdQrGFf5zEM",
"type": "teams"
},
"links": {
"related": "/api/v2/teams/team-DBycxkdQrGFf5zEM"
}
},
"workspace": {
"data": {
"id": "ws-XGA52YVykdTgryTN",
"type": "workspaces"
},
"links": {
"related": "/api/v2/organizations/my-organization/workspaces/my-workspace"
}
}
},
"links": {
"self": "/api/v2/team-workspaces/tws-s68jV4FWCDwWvQq8"
}
}
}
```
## Add Team Access to a Workspace
`POST /team-workspaces`
| Status | Response | Reason |
| ------- | ------------------------------------------------- | ------------------------------------------------------------------ |
| [200][] | [JSON API document][] (`type: "team-workspaces"`) | The request was successful |
| [404][] | [JSON API error object][] | Workspace or Team not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ---------------------------------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"team-workspaces"`. |
| `data.attributes.access` | string | | The type of access to grant. Valid values are `read`, `plan`, `write`, `admin`, or `custom`. |
| `data.attributes.runs` | string | "read" | If `access` is `custom`, the permission to grant for the workspace's runs. Can only be used when `access` is `custom`. Valid values include `read`, `plan`, or `apply`. |
| `data.attributes.variables` | string | "none" | If `access` is `custom`, the permission to grant for the workspace's variables. Can only be used when `access` is `custom`. Valid values include `none`, `read`, or `write`. |
| `data.attributes.state-versions` | string | "none" | If `access` is `custom`, the permission to grant for the workspace's state versions. Can only be used when `access` is `custom`. Valid values include `none`, `read-outputs`, `read`, or `write`. |
| `data.attributes.sentinel-mocks` | string | "none" | If `access` is `custom`, the permission to grant for the workspace's Sentinel mocks. Can only be used when `access` is `custom`. Valid values include `none`, or `read`. |
| `data.attributes.workspace-locking` | boolean | false | If `access` is `custom`, the permission granting the ability to manually lock or unlock the workspace. Can only be used when `access` is `custom`. |
| `data.attributes.run-tasks` | boolean | false | If `access` is `custom`, this permission allows the team to manage run tasks within the workspace. |
| `data.relationships.workspace.data.type` | string | | Must be `workspaces`. |
| `data.relationships.workspace.data.id` | string | | The workspace ID to which the team is to be added. |
| `data.relationships.team.data.type` | string | | Must be `teams`. |
| `data.relationships.team.data.id` | string | | The ID of the team to add to the workspace. |
### Sample Payload
```json
{
"data": {
"attributes": {
"access": "custom",
"runs": "apply",
"variables": "none",
"state-versions": "read-outputs",
"plan-outputs": "none",
"sentinel-mocks": "read",
"workspace-locking": false,
"run-tasks": false
},
"relationships": {
"workspace": {
"data": {
"type": "workspaces",
"id": "ws-XGA52YVykdTgryTN"
}
},
"team": {
"data": {
"type": "teams",
"id": "team-DBycxkdQrGFf5zEM"
}
}
},
"type": "team-workspaces"
}
}
```
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/team-workspaces
```
### Sample Response
```json
{
"data": {
"id": "tws-sezDAcCYWLnd3xz2",
"type": "team-workspaces",
"attributes": {
"access": "custom",
"runs": "apply",
"variables": "none",
"state-versions": "read-outputs",
"sentinel-mocks": "read",
"workspace-locking": false,
"run-tasks": false
},
"relationships": {
"team": {
"data": {
"id": "team-DBycxkdQrGFf5zEM",
"type": "teams"
},
"links": {
"related": "/api/v2/teams/team-DBycxkdQrGFf5zEM"
}
},
"workspace": {
"data": {
"id": "ws-XGA52YVykdTgryTN",
"type": "workspaces"
},
"links": {
"related": "/api/v2/organizations/my-organization/workspaces/my-workspace"
}
}
},
"links": {
"self": "/api/v2/team-workspaces/tws-sezDAcCYWLnd3xz2"
}
}
}
```
## Update Team Access to a Workspace
`PATCH /team-workspaces/:id`
| Status | Response | Reason |
| ------- | ------------------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "team-workspaces"`) | The request was successful |
| [404][] | [JSON API error object][] | Team Access not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| Parameter                           | Type    | Default | Description |
| ----------------------------------- | ------- | ------ | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:id` | | | The ID of the team/workspace relationship. Obtain this from the [list team access action](#list-team-access-to-a-workspace) described above. |
| `data.attributes.access` | string | | The type of access to grant. Valid values are `read`, `plan`, `write`, `admin`, or `custom`. |
| `data.attributes.runs` | string | "read" | If `access` is `custom`, the permission to grant for the workspace's runs. Can only be used when `access` is `custom`. |
| `data.attributes.variables` | string | "none" | If `access` is `custom`, the permission to grant for the workspace's variables. Can only be used when `access` is `custom`. |
| `data.attributes.state-versions` | string | "none" | If `access` is `custom`, the permission to grant for the workspace's state versions. Can only be used when `access` is `custom`. |
| `data.attributes.sentinel-mocks` | string | "none" | If `access` is `custom`, the permission to grant for the workspace's Sentinel mocks. Can only be used when `access` is `custom`. |
| `data.attributes.workspace-locking` | boolean | false | If `access` is `custom`, the permission granting the ability to manually lock or unlock the workspace. Can only be used when `access` is `custom`. |
| `data.attributes.run-tasks` | boolean | false | If `access` is `custom`, this permission allows the team to manage run tasks within the workspace. |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/team-workspaces/tws-s68jV4FWCDwWvQq8
```
### Sample Payload
```json
{
"data": {
"attributes": {
"access": "custom",
"state-versions": "none"
}
}
}
```
### Sample Response
```json
{
"data": {
"id": "tws-s68jV4FWCDwWvQq8",
"type": "team-workspaces",
"attributes": {
"access": "custom",
"runs": "apply",
"variables": "write",
"state-versions": "none",
"sentinel-mocks": "read",
"workspace-locking": true,
"run-tasks": true
},
"relationships": {
"team": {
"data": {
"id": "team-DBycxkdQrGFf5zEM",
"type": "teams"
},
"links": {
"related": "/api/v2/teams/team-DBycxkdQrGFf5zEM"
}
},
"workspace": {
"data": {
"id": "ws-XGA52YVykdTgryTN",
"type": "workspaces"
},
"links": {
"related": "/api/v2/organizations/my-organization/workspaces/my-workspace"
}
}
},
"links": {
"self": "/api/v2/team-workspaces/tws-s68jV4FWCDwWvQq8"
}
}
}
```
## Remove Team Access to a Workspace
`DELETE /team-workspaces/:id`
| Status | Response | Reason |
| ------- | ------------------------- | ------------------------------------------------------------ |
| [204][] | | The Team Access was successfully destroyed |
| [404][] | [JSON API error object][] | Team Access not found or user unauthorized to perform action |
| Parameter | Description |
| --------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `:id` | The ID of the team/workspace relationship. Obtain this from the [list team access action](#list-team-access-to-a-workspace) described above. |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/team-workspaces/tws-sezDAcCYWLnd3xz2
```
---
page_title: Reserved Tag Keys - API Docs - HCP Terraform
description: >-
  Use the `/reserved-tag-keys` API endpoints to denote tag keys that have special meaning for your organization. Reserving tag keys allows project and workspace managers to follow a consistent tagging strategy, and also provides project admins with a means of disabling overrides for inherited tags.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
[speculative plans]: /terraform/cloud-docs/run/remote-operations#speculative-plans
# Reserved Tag Keys API
Use the `/reserved-tag-keys` API endpoints to define and manage tag keys that
have special meaning for your organization. Reserving tag keys enables project
and workspace managers to follow a consistent tagging strategy across the
organization. You can also use them to provide project managers with a means of
disabling overrides for inherited tags.
The following table describes the available endpoints:
| Method | Path | Description |
| --- | --- | --- |
| `GET` | `/organizations/:organization_name/reserved-tag-keys` | [List reserved tag keys](#list-reserved-tag-keys) for the specified organization. |
| `POST` | `/organizations/:organization_name/reserved-tag-keys` | [Create a reserved tag key](#create-a-reserved-tag-key) in the specified organization. |
| `PATCH` | `/reserved-tags/:reserved_tag_key_id` | [Update a reserved tag key](#update-a-reserved-tag-key) with the specified ID. |
| `DELETE` | `/reserved-tags/:reserved_tag_key_id` | [Delete a reserved tag key](#delete-a-reserved-tag-key) with the specified ID. |
## Path parameters
The `/reserved-tag-keys/` API endpoints require the following path parameters:
| Parameter | Description |
|---------------|----------------|
| `:reserved_tag_key_id` | The external ID of the reserved tag key. |
| `:organization_name` | The name of the organization containing the reserved tags. |
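The endpoints below include individual examples, but as a quick sketch of how they compose, the following creates a reserved tag key and then disables overrides for it. This assumes `jq` is installed and `$TOKEN` and `$ORGANIZATION_NAME` are set; the inline payloads mirror the sample payloads shown later.

```shell-session
$ # Create the reserved tag key and capture its external ID (requires jq).
$ RESERVED_TAG_ID=$(curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{"data":{"type":"reserved-tag-keys","attributes":{"key":"environment","disable-overrides":false}}}' \
  https://app.terraform.io/api/v2/organizations/${ORGANIZATION_NAME}/reserved-tag-keys \
  | jq -r '.data.id')
$ # Later, prevent workspaces from overriding inherited values for this key.
$ curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data '{"data":{"type":"reserved-tag-keys","attributes":{"key":"environment","disable-overrides":true}}}' \
  https://app.terraform.io/api/v2/reserved-tags/${RESERVED_TAG_ID}
```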
## List reserved tag keys
`GET /organizations/:organization_name/reserved-tag-keys`
### Sample payload
This endpoint does not require a payload.
### Sample request
```shell-session
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
https://app.terraform.io/api/v2/organizations/my-organization/reserved-tag-keys
```
### Sample response
```json
{
"data": [
{
"id": "rtk-jjnTseo8NN1jACbk",
"type": "reserved-tag-keys",
"attributes": {
"key": "environment",
"disable-overrides": false,
"created-at": "2024-08-13T23:06:42.523Z",
"updated-at": "2024-08-13T23:06:42.523Z"
}
},
{
"id": "rtk-F1s7kKUShAQxhA1b",
"type": "reserved-tag-keys",
"attributes": {
"key": "cost-center",
"disable-overrides": false,
"created-at": "2024-08-13T23:06:51.445Z",
"updated-at": "2024-08-13T23:06:51.445Z"
}
    }
  ],
"links": {
"self": "https://app.terraform.io/api/v2/organizations/my-organization/reserved-tag-keys?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/organizations/my-organization/reserved-tag-keys?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/organizations/my-organization/reserved-tag-keys?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 2
}
}
}
```
## Create a reserved tag key
`POST /organizations/:organization_name/reserved-tag-keys`
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
|---|---|---|--- |
| `data.type` | string | none | Must be `reserved-tag-keys`. |
| `data.attributes.key` | string | none | The key targeted by this reserved tag key. |
| `data.attributes.disable-overrides` | boolean | none | If `true`, disables overriding inherited tags with the specified key at the workspace level. |
### Sample payload
```json
{
"data": {
"type": "reserved-tag-keys",
"attributes": {
"key": "environment",
"disable-overrides": false
}
}
}
```
### Sample request
```shell-session
$ curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/${ORGANIZATION_NAME}/reserved-tag-keys
```
### Sample response
```json
{
"data": {
"id": "rtk-Tj86UdGahKGDiYXY",
"type": "reserved-tag-keys",
"attributes": {
"key": "environment",
"disable-overrides": false,
"created-at": "2024-09-04T05:02:06.794Z",
"updated-at": "2024-09-04T05:02:06.794Z"
}
}
}
```
## Update a reserved tag key
`PATCH /reserved-tags/:reserved_tag_key_id`
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
|---|---|---|--- |
| `data.type` | string | none | Must be `reserved-tag-keys`. |
| `data.attributes.key` | string | none | The key targeted by this reserved tag key. |
| `data.attributes.disable-overrides` | boolean | none | If `true`, disables overriding inherited tags with the specified key at the workspace level. |
### Sample payload
```json
{
"data": {
"id": "rtk-Tj86UdGahKGDiYXY",
"type": "reserved-tag-keys",
"attributes": {
"key": "env",
"disable-overrides": true,
"created-at": "2024-09-04T05:02:06.794Z",
"updated-at": "2024-09-04T05:02:06.794Z"
}
}
}
```
### Sample request
```shell-session
$ curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/reserved-tags/${RESERVED_TAG_ID}
```
### Sample response
```json
{
"data": {
"id": "rtk-zMtWLDftAjY3b5pA",
"type": "reserved-tag-keys",
"attributes": {
"key": "env",
"disable-overrides": true,
"created-at": "2024-09-04T05:05:10.449Z",
"updated-at": "2024-09-04T05:05:13.486Z"
}
}
}
```
## Delete a reserved tag key
`DELETE /reserved-tags/:reserved_tag_key_id`
### Sample payload
This endpoint does not require a payload.
### Sample request
```shell-session
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/reserved-tags/rtk-zMtWLDftAjY3b5pA
```
### Sample response
This endpoint does not return a response body.
---
page_title: Feature Sets - API Docs - HCP Terraform
tfc_only: true
description: >-
Use the `/feature-sets` endpoint to review feature sets. List available feature sets and which feature sets an organization is eligible for using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Feature Sets API
-> **Note:** The feature sets API is only available in HCP Terraform.
Feature sets represent the different [pricing plans](/terraform/cloud-docs/overview) available to HCP Terraform organizations. An organization's [entitlement set](/terraform/cloud-docs/api-docs#feature-entitlements) is calculated using its [subscription](/terraform/cloud-docs/api-docs/subscriptions) and feature set.
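For example, you can retrieve an organization's computed entitlements with the entitlement set endpoint in the [Organizations API](/terraform/cloud-docs/api-docs/organizations):

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/hashicorp/entitlement-set
```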
## List Feature Sets
This endpoint lists the feature sets available in HCP Terraform.
`GET /feature-sets`
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If no pagination query parameters are provided, the endpoint is not paginated and returns all results.
| Parameter | Description |
| -------------- | ---------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 feature sets per page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/feature-sets
```
### Sample Response
```json
{
"data": [
{
"id": "fs-GN3kSR1GqWNfcFaW",
"type": "feature-sets",
"attributes": {
"assessments": false,
"audit-logging": false,
"comparison-description": "",
"concurrency-override": false,
"cost-estimation": true,
"cost": 0,
"default-agents-ceiling": 1,
"default-runs-ceiling": 1,
"description": "Free 500 managed resources, then downgrade to limited features",
"global-run-tasks": false,
"identifier": "free_standard",
"is-current": true,
"is-free-tier": true,
"module-tests-generation": false,
"name": "Free",
"no-code-modules": false,
"plan": null,
"policy-enforcement": true,
"policy-limit": null,
"policy-mandatory-enforcement-limit": null,
"policy-set-limit": null,
"private-networking": true,
"private-policy-agents": false,
"private-vcs": false,
"run-task-limit": null,
"run-task-mandatory-enforcement-limit": null,
"run-task-workspace-limit": null,
"run-tasks": true,
"self-serve-billing": true,
"sentinel": true,
"sso": true,
"teams": false,
"user-limit": null,
"versioned-policy-set-limit": null
}
},
{
"id": "fs-f3xYUkkXwY8ZGP9g",
"type": "feature-sets",
"attributes": {
"assessments": false,
"audit-logging": false,
"comparison-description": "",
"concurrency-override": true,
"cost-estimation": true,
"cost": 0,
"default-agents-ceiling": 10,
"default-runs-ceiling": 10,
"description": "Automated infrastructure provisioning at any scale. First 500 free managed resources included.",
"global-run-tasks": false,
"identifier": "standard",
"is-current": true,
"is-free-tier": true,
"module-tests-generation": false,
"name": "Standard",
"no-code-modules": false,
"plan": null,
"policy-enforcement": true,
"policy-limit": null,
"policy-mandatory-enforcement-limit": null,
"policy-set-limit": null,
"private-networking": true,
"private-policy-agents": false,
"private-vcs": false,
"run-task-limit": null,
"run-task-mandatory-enforcement-limit": null,
"run-task-workspace-limit": null,
"run-tasks": true,
"self-serve-billing": true,
"sentinel": true,
"sso": true,
"teams": false,
"user-limit": null,
"versioned-policy-set-limit": null
}
},
{
"id": "fs-JhVd6dwBSZ3THzHV",
"type": "feature-sets",
"attributes": {
"assessments": true,
"audit-logging": true,
"comparison-description": "",
"concurrency-override": true,
"cost-estimation": true,
"cost": 0,
"default-agents-ceiling": 10,
"default-runs-ceiling": 10,
"description": "Automated infrastructure provisioning and management at any scale",
"global-run-tasks": true,
"identifier": "plus",
"is-current": true,
"is-free-tier": true,
"module-tests-generation": true,
"name": "Plus",
"no-code-modules": true,
"plan": null,
"policy-enforcement": true,
"policy-limit": null,
"policy-mandatory-enforcement-limit": null,
"policy-set-limit": null,
"private-networking": true,
"private-policy-agents": false,
"private-vcs": false,
"run-task-limit": null,
"run-task-mandatory-enforcement-limit": null,
"run-task-workspace-limit": null,
"run-tasks": true,
"self-serve-billing": true,
"sentinel": true,
"sso": true,
"teams": true,
"user-limit": null,
"versioned-policy-set-limit": null
}
}
]
}
```
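For a quick side-by-side view of the plans in the response above, you can extract each feature set's `identifier` and `name`, for example with `jq` (assuming it is installed):

```shell
curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/feature-sets \
  | jq -r '.data[] | "\(.attributes.identifier)\t\(.attributes.name)"'
```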
## List Feature Sets for Organization
This endpoint lists the feature sets a particular organization is eligible to access. The results may differ from the global endpoint above. For instance, if the organization has already had a free trial, the trial feature set will not appear in this list.
`GET /organizations/:organization_name/feature-sets`
| Parameter | Description |
| ------------------- | ---------------------------- |
| `organization_name` | The name of the organization |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If no pagination query parameters are provided, the endpoint is not paginated and returns all results.
| Parameter | Description |
| -------------- | ----------------------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 organization feature sets per page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/feature-sets
```
### Sample Response
```json
{
"data": [
{
"id": "fs-GN3kSR1GqWNfcFaW",
"type": "feature-sets",
"attributes": {
"assessments": false,
"audit-logging": false,
"comparison-description": "",
"concurrency-override": false,
"cost-estimation": true,
"cost": 0,
"default-agents-ceiling": 1,
"default-runs-ceiling": 1,
"description": "Free 500 managed resources, then downgrade to limited features",
"global-run-tasks": false,
"identifier": "free_standard",
"is-current": true,
"is-free-tier": true,
"module-tests-generation": false,
"name": "Free",
"no-code-modules": false,
"plan": null,
"policy-enforcement": true,
"policy-limit": 5,
"policy-mandatory-enforcement-limit": 1,
"policy-set-limit": 1,
"private-networking": true,
"private-policy-agents": false,
"private-vcs": false,
"run-task-limit": 1,
"run-task-mandatory-enforcement-limit": 1,
"run-task-workspace-limit": 10,
"run-tasks": true,
"self-serve-billing": true,
"sentinel": true,
"sso": true,
"teams": false,
"user-limit": null,
"versioned-policy-set-limit": 0
}
},
{
"id": "fs-f3xYUkkXwY8ZGP9g",
"type": "feature-sets",
"attributes": {
"assessments": false,
"audit-logging": false,
"comparison-description": "",
"concurrency-override": true,
"cost-estimation": true,
"cost": 0,
"default-agents-ceiling": 10,
"default-runs-ceiling": 10,
"description": "Automated infrastructure provisioning at any scale. First 500 free managed resources included.",
"global-run-tasks": false,
"identifier": "standard",
"is-current": true,
"is-free-tier": true,
"module-tests-generation": false,
"name": "Standard",
"no-code-modules": false,
"plan": null,
"policy-enforcement": true,
"policy-limit": null,
"policy-mandatory-enforcement-limit": null,
"policy-set-limit": null,
"private-networking": true,
"private-policy-agents": false,
"private-vcs": false,
"run-task-limit": null,
"run-task-mandatory-enforcement-limit": null,
"run-task-workspace-limit": null,
"run-tasks": true,
"self-serve-billing": true,
"sentinel": true,
"sso": true,
"teams": false,
"user-limit": null,
"versioned-policy-set-limit": null
}
},
{
"id": "fs-JhVd6dwBSZ3THzHV",
"type": "feature-sets",
"attributes": {
"assessments": true,
"audit-logging": true,
"comparison-description": "",
"concurrency-override": true,
"cost-estimation": true,
"cost": 0,
"default-agents-ceiling": 10,
"default-runs-ceiling": 10,
"description": "Automated infrastructure provisioning and management at any scale",
"global-run-tasks": true,
"identifier": "plus",
"is-current": true,
"is-free-tier": true,
"module-tests-generation": true,
"name": "Plus",
"no-code-modules": true,
"plan": null,
"policy-enforcement": true,
"policy-limit": null,
"policy-mandatory-enforcement-limit": null,
"policy-set-limit": null,
"private-networking": true,
"private-policy-agents": false,
"private-vcs": false,
"run-task-limit": null,
"run-task-mandatory-enforcement-limit": null,
"run-task-workspace-limit": null,
"run-tasks": true,
"self-serve-billing": true,
"sentinel": true,
"sso": true,
"teams": true,
"user-limit": null,
"versioned-policy-set-limit": null
}
}
]
}
```
---
page_title: Organization Tags - API Docs - HCP Terraform
description: >-
Use the `tags` endpoint to manage an organization's workspace tags. Assign, list, and delete tags using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Organization Tags API
This API returns the list of tags used in workspaces across the organization. Tags can be added to this pool via workspaces. Tags deleted here will be removed from all other workspaces. Tags can be added, applied, removed and deleted in bulk.
Tags are subject to the following rules:
- Tags must be one or more characters long, have a 255-character limit, and can include letters, numbers, colons, hyphens, and underscores (see the validation sketch after this list).
- You can create tags for a workspace using the user interface or the API. After you create a tag, you can assign it to other workspaces in the same organization.
- You cannot create tags for a workspace using the CLI.
- You cannot set tags at the project level, so there is no tag inheritance from projects to workspaces.
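The following one-liner is a rough client-side check of the naming rules above; it is an approximation for illustration, not an official validator:

```shell
# Accept 1-255 characters of letters, numbers, colons, hyphens, and underscores.
echo "env:production" | grep -Eq '^[A-Za-z0-9:_-]{1,255}$' && echo "valid" || echo "invalid"
```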
## List Tags
`GET /organizations/:organization_name/tags`
| Parameter | Description |
| -------------------- | ---------------------------------------------- |
| `:organization_name` | The name of the organization to list tags from |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| ------------------------------- | ---------------------------------------------------------------------------------------- |
| `q` | **Optional.** A search query string. Organization tags are searchable by name likeness. |
| `filter[exclude][taggable][id]` | **Optional.** If specified, omits tags that are already applied to the workspace with the given ID. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 organization tags per page. |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/tags
```
### Sample Response
```json
{
"data": [
{
"id": "tag-1",
"type": "tags",
"attributes": {
"name": "tag1",
"created-at": "2022-03-09T06:04:39.585Z",
"instance-count": 1
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
}
},
{
"id": "tag-2",
"type": "tags",
"attributes": {
"name": "tag2",
"created-at": "2022-03-09T06:04:39.585Z",
"instance-count": 2
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
}
}
]
}
```
## Delete tags
This endpoint deletes one or more tags from an organization. The organization and tags must already
exist. Deleted tags are removed from all resources they are applied to.
`DELETE /organizations/:organization_name/tags`
| Parameter | Description |
| -------------------- | ------------------------------------------------ |
| `:organization_name` | The name of the organization to delete tags from |
| Status | Response | Reason(s) |
| ------- | ------------------------- | -------------------------------------------------------------- |
| [204][] | No Content | Successfully removed tags from organization |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |
### Request Body
This DELETE endpoint requires a JSON object with the following properties as a request payload.
Note that both `type` and `id` are required.
| Key path | Type | Default | Description |
| ------------- | ------ | ------- | ---------------------------- |
| `data[].type` | string | | Must be `"tags"`. |
| `data[].id` | string | | The id of the tag to remove. |
### Sample Payload
```json
{
"data": [
{
"type": "tags",
"id": "tag-Yfha4YpPievQ8wJw"
}
]
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/hashicorp/tags
```
### Sample Response
No response body.
Status code `204`.
## Add workspaces to a tag
`POST /tags/:tag_id/relationships/workspaces`
| Parameter | Description |
| --------- | ---------------------------------------------------- |
| `:tag_id` | The ID of the tag to add the workspaces to. |
| Status | Response | Reason(s) |
| ------- | ------------------------- | ----------------------------------------------------- |
| [204][] | No Content | Successfully added workspaces to tag |
| [404][] | [JSON API error object][] | Tag not found, or user unauthorized to perform action |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
| Key path | Type | Default | Description |
| ------------- | ------ | ------- | ------------------------------- |
| `data[].type` | string | | Must be `"workspaces"`. |
| `data[].id` | string | | The id of the workspace to add. |
### Sample Payload
```json
{
  "data": [
    {
      "type": "workspaces",
      "id": "ws-pmKTbUwH2VPiiTC4"
    }
  ]
}
```
### Sample Request
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/tags/tag-2/relationships/workspaces
```
### Sample Response
No response body.
Status code `204`.
---
page_title: State Versions - API Docs - HCP Terraform
description: >-
Use the `/state-versions` endpoint to manage Terraform state versions. List, create, show, and roll back state versions using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# State Versions API
## Attributes
State version API objects represent an instance of Terraform state data, but do not directly contain the stored state. Instead, they contain information about the state, its properties, and its contents, and include one or more URLs from which the state can be downloaded.
Some of the information returned in a state version API object might be **populated asynchronously** by HCP Terraform. This includes resources, modules, providers, and the [state version outputs](/terraform/cloud-docs/api-docs/state-version-outputs) associated with the state version. These values might not be immediately available after the state version is uploaded. The `resources-processed` property on the state version object indicates whether or not HCP Terraform has finished any necessary asynchronous processing. If you need to use these values, be sure to wait for `resources-processed` to become `true` before assuming that the values are in fact empty.
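As an illustration, a client that depends on the parsed `resources` or `outputs` data might poll the state version until processing completes. The following is a minimal sketch using `curl` and `jq`; the state version ID is a placeholder:
```shell
SV_ID="sv-g4rqST72reoHMM5a"  # placeholder state version ID

# Poll until HCP Terraform finishes asynchronous extraction;
# `jq -e` exits non-zero while `resources-processed` is false.
until curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/state-versions/$SV_ID" \
  | jq -e '.data.attributes."resources-processed"' >/dev/null; do
  sleep 5
done
```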
Attribute | Description
---------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
`billable-rum-count` | Count of billable Resources Under Management (RUM). Only present for organization members on RUM-based plans who have permission to view billable RUM usage on the Usage page.
`hosted-json-state-download-url` | A URL from which you can download the state data in a [stable format](/terraform/internals/json-format) appropriate for external integrations to consume. Only available if the state was created by Terraform 1.3+.
`hosted-state-download-url` | A URL from which you can download the raw state data, in the format used internally by Terraform.
`hosted-json-state-upload-url` | A URL to which you can upload state data in a [stable format](/terraform/internals/json-format) appropriate for external integrations to consume. You can upload JSON state content once per state version.
`hosted-state-upload-url` | A URL to which you can upload state data in the format Terraform uses internally. You can upload state data once per state version.
`modules` | Extracted information about the Terraform modules in this state data. Populated asynchronously.
`providers` | Extracted information about the Terraform providers used for resources in this state data. Populated asynchronously.
`intermediate` | A boolean flag that indicates the state version is a snapshot and not yet set as the current state version for a workspace. The last intermediate state version becomes the current state version when the workspace is unlocked. Not yet supported in Terraform Enterprise.
`resources` | Extracted information about the resources in this state data. Populated asynchronously.
`resources-processed` | A Boolean flag indicating whether HCP Terraform has finished asynchronously extracting outputs, resources, and other information about this state data.
`serial` | The serial number of this state instance, which increases every time Terraform creates new state in the workspace.
`state-version` | The version of the internal state format used for this state. Different Terraform versions read and write different format versions, but it only changes infrequently.
`status` | Indicates a state version's content upload [status](/terraform/cloud-docs/api-docs/state-versions#state-version-status). This status can be `pending`, `finalized` or `discarded`.
`terraform-version` | The Terraform version that created this state. Populated asynchronously.
`vcs-commit-sha` | The SHA of the configuration commit used in the Terraform run that produced this state. Only present if the workspace is connected to a VCS repository.
`vcs-commit-url` | A link to the configuration commit used in the Terraform run that produced this state. Only present if the workspace is connected to a VCS repository.
### State Version Status
The state version status is found in `data.attributes.status`, and you can reference the following list of possible statuses.
A state version created through the API or CLI will only be listed in the UI if it has a `finalized` status.
| State | Description |
| --- | --- |
| `pending` | Indicates that a state version has been created but the state data is not encoded within the request. Pending state versions do not contain state data and do not appear in the UI. You cannot unlock the workspace until the latest state version is finalized. |
| `finalized` | Indicates that the state version has been successfully uploaded to HCP Terraform or that the state version was created with a valid `state` attribute. |
| `discarded` | The state version was discarded because it was superseded by a newer state version before it could be uploaded. |
| `backing_data_soft_deleted` | <EnterpriseAlert inline /> The backing files associated with this state version are marked for garbage collection. Terraform permanently deletes backing files associated with this state version after a set number of days, but you can restore the backing data associated with it before it is permanently deleted. |
| `backing_data_permanently_deleted` | <EnterpriseAlert inline /> The backing files associated with this state version have been permanently deleted and can no longer be restored. |
## Create a State Version
> **Hands-on:** Try the [Version Remote State with the HCP Terraform API](/terraform/tutorials/cloud/cloud-state-api) tutorial to download a remote state file and use the Terraform API to create a new state version.
`POST /workspaces/:workspace_id/state-versions`
| Parameter | Description |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to create the new state version in. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
Creates a state version and sets it as the current state version for the given workspace. The workspace must be locked by the user creating a state version. The workspace may be locked [with the API](/terraform/cloud-docs/api-docs/workspaces#lock-a-workspace) or [with the UI](/terraform/cloud-docs/workspaces/settings#locking). This is most useful for migrating existing state from Terraform Community Edition into a new HCP Terraform workspace.
Creating state versions requires permission to read and write state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
!> **Warning:** Use caution when uploading state to workspaces that have already performed Terraform runs. Replacing state improperly can result in orphaned or duplicated infrastructure resources.
-> **Note:** For Free Tier organizations, HCP Terraform always retains at least the last 100 states (across all workspaces) and at least the most recent state for every workspace. Additional states beyond the last 100 are retained for six months, and are then deleted.
-> **Note:** You cannot access this endpoint with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason |
|---------|---------------------------|-------------------------------------------------------------------|
| [201][] | [JSON API document][] | Successfully created a state version. |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action. |
| [409][] | [JSON API error object][] | Conflict; check the error object for more information. |
| [412][] | [JSON API error object][] | Precondition failed; check the error object for more information. |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.). |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
|--------------------------------------|---------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data.type` | string | | Must be `"state-versions"`. |
| `data.attributes.serial` | integer | | The serial of the state version. Must match the serial value extracted from the raw state file. |
| `data.attributes.md5` | string | | An MD5 hash of the raw state version. |
| `data.attributes.state` | string | (nothing) | **Optional** Base64 encoded raw state file. If omitted, you must use the upload method below to complete the state version creation. The workspace may not be unlocked normally until the state version is uploaded. |
| `data.attributes.lineage` | string | (nothing) | **Optional** Lineage of the state version. Should match the lineage extracted from the raw state file. Early versions of Terraform did not have the concept of lineage, so this attribute is optional. |
| `data.attributes.json-state` | string | (nothing) | **Optional** Base64 encoded json state, as expressed by `terraform show -json`. See [JSON Output Format](/terraform/internals/json-format) for more details. |
| `data.attributes.json-state-outputs` | string | (nothing) | **Optional** Base64 encoded output values as represented by `terraform show -json` (the contents of the values/outputs key). If provided, the workspace outputs populate immediately. If omitted, HCP Terraform populates the workspace outputs from the given state after a short time. |
| `data.relationships.run.data.id` | string | (nothing) | **Optional** The ID of the run to associate with the state version. |
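Because `serial`, `lineage`, and `md5` must agree with the raw state file, it can help to derive all three from the file when assembling the payload. A minimal sketch, assuming a local `terraform.tfstate` and the `jq`, `md5sum`, and GNU `base64` tools:
```shell
STATE=terraform.tfstate

SERIAL=$(jq -r '.serial' "$STATE")
LINEAGE=$(jq -r '.lineage' "$STATE")
MD5=$(md5sum "$STATE" | awk '{print $1}')   # use `md5 -q` on macOS
CONTENT=$(base64 -w0 "$STATE")              # -w0 disables line wrapping (GNU)

cat > payload.json <<EOF
{
  "data": {
    "type": "state-versions",
    "attributes": {
      "serial": $SERIAL,
      "md5": "$MD5",
      "lineage": "$LINEAGE",
      "state": "$CONTENT"
    }
  }
}
EOF
```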
### Sample Payload
```json
{
"data": {
"type":"state-versions",
"attributes": {
"serial": 1,
"md5": "d41d8cd98f00b204e9800998ecf8427e",
"lineage": "871d1b4a-e579-fb7c-ffdb-f0c858a647a7",
"state": "...",
"json-state": "...",
"json-state-outputs": "..."
},
"relationships": {
"run": {
"data": {
"type": "runs",
"id": "run-bWSq4YeYpfrW4mx7"
}
}
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/state-versions
```
### Sample Response
```json
{
"data": {
"id": "sv-DmoXecHePnNznaA4",
"type": "state-versions",
"attributes": {
"vcs-commit-sha": null,
"vcs-commit-url": null,
"created-at": "2018-07-12T20:32:01.490Z",
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/f55b739b-ff03-4716-b436-726466b96dc4",
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/4fde7951-93c0-4414-9a40-f3abc4bac490",
"hosted-state-upload-url": null,
"hosted-json-state-upload-url": null,
"status": "finalized",
"intermediate": true,
"serial": 1
},
"links": {
"self": "/api/v2/state-versions/sv-DmoXecHePnNznaA4"
}
}
}
```
## Upload State and JSON State
You can upload state version content in the same request when creating a state version. However, we _strongly_ recommend that you upload content separately.
`PUT https://archivist.terraform.io/v1/object/<UNIQUE OBJECT ID>`
HCP Terraform returns a `hosted-state-upload-url` or `hosted-json-state-upload-url` when you create a state version. Once you upload state content, this URL is hidden on the resource and _no longer available_.
### Sample Request
In the example below, `@filename` is the name of the Terraform state file you wish to upload.
```shell
curl \
--header "Content-Type: application/octet-stream" \
--request PUT \
--data-binary @filename \
https://archivist.terraform.io/v1/object/4c44d964-eba7-4dd5-ad29-1ece7b99e8da
```
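Putting the two steps together: if you create a pending state version without the `state` attribute, you can capture the returned upload URL and push the raw state afterwards. A minimal sketch, assuming `jq` and a `payload.json` that omits `state`:
```shell
# Step 1: create a pending state version and capture the upload URL.
UPLOAD_URL=$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/state-versions \
  | jq -r '.data.attributes."hosted-state-upload-url"')

# Step 2: upload the raw state content, which finalizes the state version.
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @terraform.tfstate \
  "$UPLOAD_URL"
```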
## List State Versions for a Workspace
`GET /state-versions`
Listing state versions requires permission to read state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
|------------------------------|----------------------------------------------------------------------------------------|
| `filter[workspace][name]` | **Required** The name of one workspace to list versions for. |
| `filter[organization][name]` | **Required** The name of the organization that owns the desired workspace. |
| `filter[status]` | **Optional.** Filter state versions by status: `pending`, `finalized`, or `discarded`. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 state versions per page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/state-versions?filter%5Bworkspace%5D%5Bname%5D=my-workspace&filter%5Borganization%5D%5Bname%5D=my-organization"
```
### Sample Response
```json
{
"data": [
{
"id": "sv-g4rqST72reoHMM5a",
"type": "state-versions",
"attributes": {
"created-at": "2021-06-08T01:22:03.794Z",
"size": 940,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"status": "finalized",
"intermediate": false,
"modules": {
"root": {
"null-resource": 1,
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
},
"provider[\"registry.terraform.io/hashicorp/null\"]": {
"null-resource": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
},
{
"name": "random",
"type": "null_resource",
"count": 1,
"module": "root",
"provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
}
],
"resources-processed": true,
"serial": 9,
"state-version": 4,
"terraform-version": "0.15.4",
"vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
"vcs-commit-sha": "abcdef12345"
},
"relationships": {
"run": {
"data": {
"id": "run-YfmFLWpgTv31VZsP",
"type": "runs"
}
},
"created-by": {
"data": {
"id": "user-onZs69ThPZjBK2wo",
"type": "users"
},
"links": {
"self": "/api/v2/users/user-onZs69ThPZjBK2wo",
"related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
}
},
"workspace": {
"data": {
"id": "ws-noZcaGXsac6aZSJR",
"type": "workspaces"
}
},
"outputs": {
"data": [
{
"id": "wsout-V22qbeM92xb5mw9n",
"type": "state-version-outputs"
},
{
"id": "wsout-ymkuRnrNFeU5wGpV",
"type": "state-version-outputs"
},
{
"id": "wsout-v82BjkZnFEcscipg",
"type": "state-version-outputs"
}
]
}
},
"links": {
"self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
}
},
{
"id": "sv-QYKf6GvNv75ZPTBr",
"type": "state-versions",
"attributes": {
"created-at": "2021-06-01T21:40:25.941Z",
"size": 819,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"status": "finalized",
"intermediate": false,
"modules": {
"root": {
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
}
],
"resources-processed": true,
"serial": 8,
"state-version": 4,
"terraform-version": "0.15.4",
"vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/12345abcdef",
"vcs-commit-sha": "12345abcdef"
},
"relationships": {
"run": {
"data": {
"id": "run-cVtxks6R8wsjCZMD",
"type": "runs"
}
},
"created-by": {
"data": {
"id": "user-onZs69ThPZjBK2wo",
"type": "users"
},
"links": {
"self": "/api/v2/users/user-onZs69ThPZjBK2wo",
"related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
}
},
"workspace": {
"data": {
"id": "ws-noZcaGXsac6aZSJR",
"type": "workspaces"
}
},
"outputs": {
"data": [
{
"id": "wsout-MmqMhmht6jFmLRvh",
"type": "state-version-outputs"
},
{
"id": "wsout-Kuo9TCHg3oDLDQqa",
"type": "state-version-outputs"
}
]
}
},
"links": {
"self": "/api/v2/state-versions/sv-QYKf6GvNv75ZPTBr"
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io.io/api/v2/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 10
}
}
}
```
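In the sample above the newest state version appears first, so a quick way to capture its ID, for example to feed a later rollback, is to pipe the response through `jq` (a minimal sketch):
```shell
curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/state-versions?filter%5Bworkspace%5D%5Bname%5D=my-workspace&filter%5Borganization%5D%5Bname%5D=my-organization" \
  | jq -r '.data[0].id'
```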
## Fetch the Current State Version for a Workspace
`GET /workspaces/:workspace_id/current-state-version`
| Parameter | Description |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The ID for the workspace whose current state version you want to fetch. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
Fetches the current state version for the given workspace. This state version
is used as the input state when running Terraform operations.
Viewing state versions requires permission to read state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Status | Response | Reason |
|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|
| [200][] | [JSON API document][] | Successfully returned current state version for the given workspace. |
| [404][] | [JSON API error object][] | Workspace not found, workspace does not have a current state version, or user unauthorized to perform action. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/current-state-version
```
### Sample Response
```json
{
"data": {
"id": "sv-g4rqST72reoHMM5a",
"type": "state-versions",
"attributes": {
"billable-rum-count": 0,
"created-at": "2021-06-08T01:22:03.794Z",
"size": 940,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"status": "finalized",
"intermediate": false,
"modules": {
"root": {
"null-resource": 1,
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
},
"provider[\"registry.terraform.io/hashicorp/null\"]": {
"null-resource": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
},
{
"name": "random",
"type": "null_resource",
"count": 1,
"module": "root",
"provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
}
],
"resources-processed": true,
"serial": 9,
"state-version": 4,
"terraform-version": "0.15.4",
"vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
"vcs-commit-sha": "abcdef12345"
},
"relationships": {
"run": {
"data": {
"id": "run-YfmFLWpgTv31VZsP",
"type": "runs"
}
},
"created-by": {
"data": {
"id": "user-onZs69ThPZjBK2wo",
"type": "users"
},
"links": {
"self": "/api/v2/users/user-onZs69ThPZjBK2wo",
"related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
}
},
"workspace": {
"data": {
"id": "ws-noZcaGXsac6aZSJR",
"type": "workspaces"
}
},
"outputs": {
"data": [
{
"id": "wsout-V22qbeM92xb5mw9n",
"type": "state-version-outputs"
},
{
"id": "wsout-ymkuRnrNFeU5wGpV",
"type": "state-version-outputs"
},
{
"id": "wsout-v82BjkZnFEcscipg",
"type": "state-version-outputs"
}
]
}
},
"links": {
"self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
}
}
}
```
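For example, to fetch the current state version and then download its raw state data in one pass, you might use `jq` as in the following minimal sketch (the workspace ID is a placeholder, and the download URL is assumed to be a pre-signed link as in the samples above):
```shell
WORKSPACE_ID="ws-6fHMCom98SDXSQUv"  # placeholder

DOWNLOAD_URL=$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/current-state-version" \
  | jq -r '.data.attributes."hosted-state-download-url"')

# Download the raw state; the archivist URL carries its own authorization.
curl -s -o current.tfstate "$DOWNLOAD_URL"
```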
## Show a State Version
`GET /state-versions/:state_version_id`
Viewing state versions requires permission to read state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Parameter | Description |
|---------------------|--------------------------------------|
| `:state_version_id` | The ID of the desired state version. |
| Status | Response | Reason |
|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|
| [200][] | [JSON API document][]     | Successfully returned the requested state version.                                                             |
| [404][] | [JSON API error object][] | State version not found, or user unauthorized to perform action.                                                |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/state-versions/sv-SDboVZC8TCxXEneJ
```
### Sample Response
```json
{
"data": {
"id": "sv-g4rqST72reoHMM5a",
"type": "state-versions",
"attributes": {
"created-at": "2021-06-08T01:22:03.794Z",
"size": 940,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"status": "finalized",
"intermediate": false,
"modules": {
"root": {
"null-resource": 1,
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
},
"provider[\"registry.terraform.io/hashicorp/null\"]": {
"null-resource": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
},
{
"name": "random",
"type": "null_resource",
"count": 1,
"module": "root",
"provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
}
],
"resources-processed": true,
"serial": 9,
"state-version": 4,
"terraform-version": "0.15.4",
"vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
"vcs-commit-sha": "abcdef12345"
},
"relationships": {
"run": {
"data": {
"id": "run-YfmFLWpgTv31VZsP",
"type": "runs"
}
},
"created-by": {
"data": {
"id": "user-onZs69ThPZjBK2wo",
"type": "users"
},
"links": {
"self": "/api/v2/users/user-onZs69ThPZjBK2wo",
"related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
}
},
"workspace": {
"data": {
"id": "ws-noZcaGXsac6aZSJR",
"type": "workspaces"
}
},
"outputs": {
"data": [
{
"id": "wsout-V22qbeM92xb5mw9n",
"type": "state-version-outputs"
},
{
"id": "wsout-ymkuRnrNFeU5wGpV",
"type": "state-version-outputs"
},
{
"id": "wsout-v82BjkZnFEcscipg",
"type": "state-version-outputs"
}
]
}
},
"links": {
"self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
}
}
}
```
## Rollback to a Previous State Version
`PATCH /workspaces/:workspace_id/state-versions`
| Parameter | Description |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to create the new state version in. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
Creates a state version by duplicating the specified state version and sets it as the current state version for the given workspace. The workspace must be locked by the user creating a state version. The workspace may be locked [with the API](/terraform/cloud-docs/api-docs/workspaces#lock-a-workspace) or [with the UI](/terraform/cloud-docs/workspaces/settings#locking). This is most useful for rolling back to a known-good state after an operation such as a Terraform upgrade didn't go as planned.
Creating state versions requires permission to read and write state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
!> **Warning:** Use caution when rolling back to a previous state. Replacing state improperly can result in orphaned or duplicated infrastructure resources.
-> **Note:** You cannot access this endpoint with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason |
|---------|---------------------------|-----------------------------------------------------------------|
| [201][] | [JSON API document][] | Successfully rolled back. |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action. |
| [409][] | [JSON API error object][] | Conflict; check the error object for more information. |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.). |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
|-----------------------------------------------------|--------|---------|----------------------------------------------------------------|
| `data.type` | string | | Must be `"state-versions"`. |
| `data.relationships.rollback-state-version.data.id` | string | | The ID of the state version to use for the rollback operation. |
### Sample Payload
```json
{
"data": {
"type":"state-versions",
"relationships": {
"rollback-state-version": {
"data": {
"type": "state-versions",
"id": "sv-bWfq4Y1YpRKW4mx7"
}
}
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/state-versions
```
### Sample Response
```json
{
"data": {
"id": "sv-DmoXecHePnNznaA4",
"type": "state-versions",
"attributes": {
"created-at": "2022-11-22T01:22:03.794Z",
"size": 940,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"modules": {
"root": {
"null-resource": 1,
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
},
"provider[\"registry.terraform.io/hashicorp/null\"]": {
"null-resource": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
},
{
"name": "random",
"type": "null_resource",
"count": 1,
"module": "root",
"provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
}
],
"resources-processed": true,
"serial": 9,
"state-version": 4,
"terraform-version": "1.3.5"
},
"relationships": {
"rollback-state-version": {
"data": {
"id": "sv-YfmFLgTv31VZsP",
"type": "state-versions"
}
}
},
"links": {
"self": "/api/v2/state-versions/sv-DmoXecHePnNznaA4"
}
}
}
```
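Because the workspace must be locked by the user performing the rollback, a complete workflow typically brackets the PATCH with lock and unlock calls from the [workspace API](/terraform/cloud-docs/api-docs/workspaces#lock-a-workspace). A minimal sketch, reusing the payload above:
```shell
WORKSPACE_ID="ws-6fHMCom98SDXSQUv"  # placeholder

# Lock the workspace before rolling back.
curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --request POST \
  "https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/actions/lock"

# Roll back to the state version named in payload.json.
curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  "https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/state-versions"

# Unlock the workspace once the rollback succeeds.
curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --request POST \
  "https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/actions/unlock"
```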
## Mark a State Version for Garbage Collection
<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href="https://developer.hashicorp.com/terraform/enterprise">Learn more about Terraform Enterprise</a>.
</EnterpriseAlert>
`POST /api/v2/state-versions/:state_version_id/actions/soft_delete_backing_data`
This endpoint directs Terraform Enterprise to _soft delete_ the backing files associated with this state version. Soft deletion marks the state version for garbage collection. Terraform permanently deletes state versions after a set number of days unless the state version is restored. Once a state version is soft deleted, any attempts to read the state version will fail. Refer to [State Version Status](#state-version-status) for information about all data states.
This endpoint can only soft delete state versions that are in a [`finalized` state](#state-version-status) and are not the current state version. Otherwise, calling this endpoint results in an error.
You must have organization owner permissions to soft delete state versions. Refer to [Permissions](/terraform/enterprise/users-teams-organizations/permissions) for additional information about permissions.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Parameter | Description |
|---------------------|--------------------------------------|
| `:state_version_id` | The ID of the state version to mark for garbage collection. |
| Status | Response | Reason |
|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|
| [200][] | [JSON API document][] | Terraform successfully marked the data for garbage collection. |
| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_soft_deleted`. |
| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version. |
### Sample Request
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{"data": {"attributes": {"delete-older-than-n-days": 23}}}' \
  https://app.terraform.io/api/v2/state-versions/sv-ntv3HbhJqvFzamy7/actions/soft_delete_backing_data
```
### Sample Response
```json
{
"data": {
"id": "sv-g4rqST72reoHMM5a",
"type": "state-versions",
"attributes": {
"created-at": "2021-06-08T01:22:03.794Z",
"size": 940,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"status": "backing_data_soft_deleted",
"intermediate": false,
"delete-older-than-n-days": 23,
"modules": {
"root": {
"null-resource": 1,
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
},
"provider[\"registry.terraform.io/hashicorp/null\"]": {
"null-resource": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
},
{
"name": "random",
"type": "null_resource",
"count": 1,
"module": "root",
"provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
}
],
"resources-processed": true,
"serial": 9,
"state-version": 4,
"terraform-version": "0.15.4",
"vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
"vcs-commit-sha": "abcdef12345"
},
"relationships": {
"run": {
"data": {
"id": "run-YfmFLWpgTv31VZsP",
"type": "runs"
}
},
"created-by": {
"data": {
"id": "user-onZs69ThPZjBK2wo",
"type": "users"
},
"links": {
"self": "/api/v2/users/user-onZs69ThPZjBK2wo",
"related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
}
},
"workspace": {
"data": {
"id": "ws-noZcaGXsac6aZSJR",
"type": "workspaces"
}
},
"outputs": {
"data": [
{
"id": "wsout-V22qbeM92xb5mw9n",
"type": "state-version-outputs"
},
{
"id": "wsout-ymkuRnrNFeU5wGpV",
"type": "state-version-outputs"
},
{
"id": "wsout-v82BjkZnFEcscipg",
"type": "state-version-outputs"
}
]
}
},
"links": {
"self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
}
}
}
```
## Restore a State Version Marked for Garbage Collection
<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href="https://developer.hashicorp.com/terraform/enterprise">Learn more about Terraform Enterprise</a>.
</EnterpriseAlert>
`POST /api/v2/state-versions/:state_version_id/actions/restore_backing_data`
This endpoint directs Terraform Enterprise to restore backing files associated with this state version. This endpoint can only restore state versions that are not in a [`backing_data_permanently_deleted` state](#state-version-status); otherwise, calling this endpoint results in an error. Terraform restores applicable state versions back to their `finalized` state.
You must have organization owner permissions to restore state versions. Refer to [Permissions](/terraform/enterprise/users-teams-organizations/permissions) for additional information about permissions.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Parameter | Description |
|---------------------|--------------------------------------|
| `:state_version_id` | The ID of the state version to restore. |
| Status | Response | Reason |
|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|
| [200][] | [JSON API document][] | Terraform successfully initiated the restore process. |
| [400][] | [JSON API error object][] | Terraform failed to transition the state to `finalized`. |
| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/state-versions/sv-ntv3HbhJqvFzamy7/actions/restore_backing_data
```
### Sample Response
```json
{
"data": {
"id": "sv-g4rqST72reoHMM5a",
"type": "state-versions",
"attributes": {
"created-at": "2021-06-08T01:22:03.794Z",
"size": 940,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"status": "uploaded",
"intermediate": false,
"modules": {
"root": {
"null-resource": 1,
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
},
"provider[\"registry.terraform.io/hashicorp/null\"]": {
"null-resource": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
},
{
"name": "random",
"type": "null_resource",
"count": 1,
"module": "root",
"provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
}
],
"resources-processed": true,
"serial": 9,
"state-version": 4,
"terraform-version": "0.15.4",
"vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
"vcs-commit-sha": "abcdef12345"
},
"relationships": {
"run": {
"data": {
"id": "run-YfmFLWpgTv31VZsP",
"type": "runs"
}
},
"created-by": {
"data": {
"id": "user-onZs69ThPZjBK2wo",
"type": "users"
},
"links": {
"self": "/api/v2/users/user-onZs69ThPZjBK2wo",
"related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
}
},
"workspace": {
"data": {
"id": "ws-noZcaGXsac6aZSJR",
"type": "workspaces"
}
},
"outputs": {
"data": [
{
"id": "wsout-V22qbeM92xb5mw9n",
"type": "state-version-outputs"
},
{
"id": "wsout-ymkuRnrNFeU5wGpV",
"type": "state-version-outputs"
},
{
"id": "wsout-v82BjkZnFEcscipg",
"type": "state-version-outputs"
}
]
}
},
"links": {
"self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
}
}
}
```
## Permanently Delete a State Version
<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href="https://developer.hashicorp.com/terraform/enterprise">Learn more about Terraform Enterprise</a>.
</EnterpriseAlert>
`POST /api/v2/state-versions/:state_version_id/actions/permanently_delete_backing_data`
This endpoint directs Terraform Enterprise to permanently delete backing files associated with this state version. This endpoint can only permanently delete state versions that are in a [`backing_data_soft_deleted` state](#state-version-status) and are not the current state version. Otherwise, calling this endpoint results in an error.
You must have organization owner permissions to permanently delete state versions. Refer to [Permissions](/terraform/enterprise/users-teams-organizations/permissions) for additional information about permissions.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Parameter | Description |
|---------------------|--------------------------------------|
| `:state_version_id` | The ID of the state version to permanently delete. |
| Status | Response | Reason |
|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|
| [200][] | [JSON API document][] | Terraform deleted the data permanently. |
| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_permanently_deleted`. |
| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version data. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/state-versions/sv-ntv3HbhJqvFzamy7/actions/permanently_delete_backing_data
```
### Sample Response
```json
{
"data": {
"id": "sv-g4rqST72reoHMM5a",
"type": "state-versions",
"attributes": {
"created-at": "2021-06-08T01:22:03.794Z",
"size": 940,
"hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-state-upload-url": null,
"hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
"hosted-json-state-upload-url": null,
"status": "backing_data_permanently_deleted",
"intermediate": false,
"modules": {
"root": {
"null-resource": 1,
"data.terraform-remote-state": 1
}
},
"providers": {
"provider[\"terraform.io/builtin/terraform\"]": {
"data.terraform-remote-state": 1
},
"provider[\"registry.terraform.io/hashicorp/null\"]": {
"null-resource": 1
}
},
"resources": [
{
"name": "other_username",
"type": "data.terraform_remote_state",
"count": 1,
"module": "root",
"provider": "provider[\"terraform.io/builtin/terraform\"]"
},
{
"name": "random",
"type": "null_resource",
"count": 1,
"module": "root",
"provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
}
],
"resources-processed": true,
"serial": 9,
"state-version": 4,
"terraform-version": "0.15.4",
"vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
"vcs-commit-sha": "abcdef12345"
},
"relationships": {
"run": {
"data": {
"id": "run-YfmFLWpgTv31VZsP",
"type": "runs"
}
},
"created-by": {
"data": {
"id": "user-onZs69ThPZjBK2wo",
"type": "users"
},
"links": {
"self": "/api/v2/users/user-onZs69ThPZjBK2wo",
"related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
}
},
"workspace": {
"data": {
"id": "ws-noZcaGXsac6aZSJR",
"type": "workspaces"
}
},
"outputs": {
"data": [
{
"id": "wsout-V22qbeM92xb5mw9n",
"type": "state-version-outputs"
},
{
"id": "wsout-ymkuRnrNFeU5wGpV",
"type": "state-version-outputs"
},
{
"id": "wsout-v82BjkZnFEcscipg",
"type": "state-version-outputs"
}
]
}
},
"links": {
"self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
}
}
}
```
## List State Version Outputs
The output values from a state version are also available via the API. For details, see the [state version outputs documentation.](/terraform/cloud-docs/api-docs/state-version-outputs#list-state-version-outputs)
### Available Related Resources
The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:
* `created_by` - The user that created the state version. For state versions created via a run executed by HCP Terraform, this is an internal user account.
* `run` - The run that created the state version, if applicable.
* `run.created_by` - The user that manually triggered the run, if applicable.
* `run.configuration_version` - The configuration version used in the run.
* `outputs` - The parsed outputs for this state version. | terraform | page title State Versions API Docs HCP Terraform description Use the state versions endpoint to manage Terraform state versions List create show and roll back state versions using the HTTP API 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 202 https developer mozilla org en US docs Web HTTP Status 202 204 https developer mozilla org en US docs Web HTTP Status 204 400 https developer mozilla org en US docs Web HTTP Status 400 401 https developer mozilla org en US docs Web HTTP Status 401 403 https developer mozilla org en US docs Web HTTP Status 403 404 https developer mozilla org en US docs Web HTTP Status 404 409 https developer mozilla org en US docs Web HTTP Status 409 412 https developer mozilla org en US docs Web HTTP Status 412 422 https developer mozilla org en US docs Web HTTP Status 422 429 https developer mozilla org en US docs Web HTTP Status 429 500 https developer mozilla org en US docs Web HTTP Status 500 504 https developer mozilla org en US docs Web HTTP Status 504 JSON API document terraform cloud docs api docs json api documents JSON API error object https jsonapi org format error objects State Versions API Attributes State version API objects represent an instance of Terraform state data but do not directly contain the stored state Instead they contain information about the state its properties and its contents and include one or more URLs from which the state can be downloaded Some of the information returned in a state version API object might be populated asynchronously by HCP Terraform This includes resources modules providers and the state version outputs terraform cloud docs api docs state version outputs associated with the state version These values might not be immediately available after the state version is uploaded The resources processed property on the state version object indicates whether or not HCP Terraform has finished any necessary asynchronous processing If you need to use these values be sure to wait for resources processed to become true before assuming that the values are in fact empty Attribute Description billable rum count Count of billable Resources Under Management RUM Only present for Organization members on RUM plans who have visibility to see billable RUM usage in the Usage page hosted json state download url A URL from which you can download the state data in a stable format terraform internals json format appropriate for external integrations to consume Only available if the state was created by Terraform 1 3 hosted state download url A URL from which you can download the raw state data in the format used internally by Terraform hosted json state upload url A URL to which you can upload state data in a stable format terraform internals json format appropriate for external integrations to consume You can upload JSON state content once per state version hosted state upload url A URL to which you can upload state data in the format used Terraform uses internally You can upload state data once per state version modules Extracted information about the Terraform modules in this state data Populated asynchronously providers Extracted information about the Terraform providers used for resources in this state data Populated asynchronously intermediate A boolean flag that indicates the state version is a snapshot and not yet set as the current state version for a workspace The last intermediate state 
---
page_title: SSH Keys - API Docs - HCP Terraform
description: >-
Use the `/ssh-keys` endpoint to manage an organization's SSH keys. List, get, create, update, and delete SSH keys using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# SSH Keys
The `ssh-key` object represents an SSH key, which includes a name and the SSH private key. An organization can have multiple SSH keys available.
SSH keys can be used in two places:
- You can assign them to VCS provider integrations, which are available in the API as `oauth-tokens`. Refer to [OAuth Tokens](/terraform/cloud-docs/api-docs/oauth-tokens) for additional information. Azure DevOps Server and Bitbucket Data Center require an SSH key. Other providers only require an SSH key when your repositories include submodules that are only accessible using an SSH connection instead of your VCS provider's API.
- They can be [assigned to workspaces](/terraform/cloud-docs/api-docs/workspaces#assign-an-ssh-key-to-a-workspace) and used when Terraform needs to clone modules from a Git server. This is only necessary when your configurations directly reference modules from a Git server; you do not need to do this if you use HCP Terraform's [private module registry](/terraform/cloud-docs/registry).
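For the workspace case, a minimal assignment request is sketched below. It follows the linked workspace documentation; the workspace ID (`ws-6fHMCom98SDXSQUv`) and SSH key ID are placeholders, and the payload's `id` attribute is the SSH key's ID:

```shell
# Assign an existing SSH key to a workspace (sketch; IDs are placeholders).
# See the linked workspace API docs for the authoritative payload shape.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data '{"data": {"type": "workspaces", "attributes": {"id": "sshkey-GxrePWre1Ezug7aM"}}}' \
  https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/relationships/ssh-key
```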
Listing and viewing SSH keys requires either permission to manage VCS settings for the organization, or admin access to at least one workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
~> **Important:** The list and read methods on this API only provide metadata about SSH keys. The actual private key text is write-only, and HCP Terraform never provides it to users via the API or UI.
## List SSH Keys
`GET /organizations/:organization_name/ssh-keys`
| Parameter | Description |
| -------------------- | -------------------------------------------------- |
| `:organization_name` | The name of the organization to list SSH keys for. |
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason |
| ------- | ---------------------------------------------------- | --------------------------------------------- |
| [200][] | Array of [JSON API document][]s (`type: "ssh-keys"`) | Success |
| [404][] | [JSON API error object][] | Organization not found or user not authorized |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If neither pagination query parameter is provided, the endpoint is not paginated and returns all results.
| Parameter | Description |
| -------------- | ------------------------------------------------------------------------ |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 SSH keys per page. |
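For example, to request the second page with 50 keys per page (arbitrary values, with the brackets percent-encoded by hand):

```shell
# page[number]=2 and page[size]=50, with [ and ] encoded as %5B and %5D.
curl \
  --header "Authorization: Bearer $TOKEN" \
  "https://app.terraform.io/api/v2/organizations/my-organization/ssh-keys?page%5Bnumber%5D=2&page%5Bsize%5D=50"
```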
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/organizations/my-organization/ssh-keys
```
### Sample Response
```json
{
"data": [
{
"attributes": {
"name": "SSH Key"
},
"id": "sshkey-GxrePWre1Ezug7aM",
"links": {
"self": "/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM"
},
"type": "ssh-keys"
}
]
}
```
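Because the list only returns names and IDs, a common follow-up is resolving a key's name to the ID that other endpoints expect. A sketch, assuming `jq` is installed and a key named "SSH Key" exists:

```shell
# Look up the ID of the key named "SSH Key" from the list response.
curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/organizations/my-organization/ssh-keys \
  | jq -r '.data[] | select(.attributes.name == "SSH Key") | .id'
```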
## Get an SSH Key
`GET /ssh-keys/:ssh_key_id`
| Parameter | Description |
| ------------- | ---------------------- |
| `:ssh_key_id` | The SSH key ID to get. |
This endpoint is for looking up the name associated with an SSH key ID. It does not retrieve the key text.
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason |
| ------- | ------------------------------------------ | ---------------------------------------- |
| [200][] | [JSON API document][] (`type: "ssh-keys"`) | Success |
| [404][] | [JSON API error object][] | SSH key not found or user not authorized |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM
```
### Sample Response
```json
{
"data": {
"attributes": {
"name": "SSH Key"
},
"id": "sshkey-GxrePWre1Ezug7aM",
"links": {
"self": "/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM"
},
"type": "ssh-keys"
}
}
```
## Create an SSH Key
`POST /organizations/:organization_name/ssh-keys`
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create an SSH key in. The organization must already exist, and the token authenticating the API request must have permission to manage VCS settings. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions)) |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason |
| ------- | ------------------------------------------ | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "ssh-keys"`) | Success |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------------- | ------ | ------- | -------------------------------- |
| `data.type` | string | | Must be `"ssh-keys"`. |
| `data.attributes.name` | string | | A name to identify the SSH key. |
| `data.attributes.value` | string | | The text of the SSH private key. |
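A private key file contains literal newlines, which must appear as `\n` escapes inside the JSON string (as in the sample payload below). One way to build a valid payload is with `jq`, which handles the escaping; this is a sketch, and the key path is only an example:

```shell
# Build payload.json from a key file; jq escapes the key's newlines as \n.
# ~/.ssh/tfc_key is an example path, not a requirement.
jq -n --arg key "$(cat ~/.ssh/tfc_key)" \
  '{data: {type: "ssh-keys", attributes: {name: "SSH Key", value: $key}}}' \
  > payload.json
```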
### Sample Payload
```json
{
"data": {
"type": "ssh-keys",
"attributes": {
"name": "SSH Key",
"value": "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAm6+JVgl..."
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/ssh-keys
```
### Sample Response
```json
{
"data": {
"attributes": {
"name": "SSH Key"
},
"id": "sshkey-GxrePWre1Ezug7aM",
"links": {
"self": "/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM"
},
"type": "ssh-keys"
}
}
```
## Update an SSH Key
`PATCH /ssh-keys/:ssh_key_id`
| Parameter | Description |
| ------------- | ------------------------- |
| `:ssh_key_id` | The SSH key ID to update. |
This endpoint replaces the name of an existing SSH key.
Editing SSH keys requires permission to manage VCS settings. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason |
| ------- | ------------------------------------------ | ---------------------------------------- |
| [200][] | [JSON API document][] (`type: "ssh-keys"`) | Success |
| [404][] | [JSON API error object][] | SSH key not found or user not authorized |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------------- | ------ | --------- | ----------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"ssh-keys"`. |
| `data.attributes.name` | string | (nothing) | A name to identify the SSH key. If omitted, the existing name is preserved. |
### Sample Payload
```json
{
"data": {
"attributes": {
"name": "SSH Key for GitHub"
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM
```
### Sample Response
```json
{
"data": {
"attributes": {
"name": "SSH Key for GitHub"
},
"id": "sshkey-GxrePWre1Ezug7aM",
"links": {
"self": "/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM"
},
"type": "ssh-keys"
}
}
```
## Delete an SSH Key
`DELETE /ssh-keys/:ssh_key_id`
| Parameter | Description |
| ------------- | ------------------------- |
| `:ssh_key_id` | The SSH key ID to delete. |
Deleting SSH keys requires permission to manage VCS settings. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason |
| ------- | ------------------------- | ---------------------------------------- |
| [204][] | No Content | Success |
| [404][] | [JSON API error object][] | SSH key not found or user not authorized |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM
```
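A successful delete returns `204 No Content` with an empty body, so there is no sample response. If you want visible confirmation when scripting, one approach is asking curl to print the status code:

```shell
# Print the HTTP status; a successful delete prints 204 and no body.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --request DELETE \
  --write-out "%{http_code}\n" \
  https://app.terraform.io/api/v2/ssh-keys/sshkey-GxrePWre1Ezug7aM
```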
---
page_title: Policy Set Parameters - API Docs - HCP Terraform
description: >-
Use the `/policy-sets` endpoint to manage key/value pairs used by Sentinel policy checks. List, create, update, and delete parameters using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Policy Set Parameters API
[Sentinel parameters](https://docs.hashicorp.com/sentinel/language/parameters) are a list of key/value pairs that HCP Terraform sends to the Sentinel runtime when performing policy checks on workspaces. They can help you avoid hardcoding sensitive parameters into a policy.
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/policies.mdx'
<!-- END: TFC:only name:pnp-callout -->
Parameters are only available for Sentinel policies. This set of APIs provides endpoints to create, update, list, and delete parameters.
## Create a Parameter
`POST /policy-sets/:policy_set_id/parameters`
| Parameter | Description |
| ---------------- | ---------------------------------------------------- |
| `:policy_set_id` | The ID of the policy set to create the parameter in. |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| --------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------ |
| `data.type` | string | | Must be `"vars"`. |
| `data.attributes.key` | string | | The name of the parameter. |
| `data.attributes.value` | string | `""` | The value of the parameter. |
| `data.attributes.category`  | string |         | The category of the parameter. Must be `"policy-set"`.                                                  |
| `data.attributes.sensitive` | bool   | `false` | Whether the value is sensitive. If true, the parameter is written once and is not visible thereafter.   |
### Sample Payload
```json
{
"data": {
"type":"vars",
"attributes": {
"key":"some_key",
"value":"some_value",
"category":"policy-set",
"sensitive":false
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters
```
### Sample Response
```json
{
"data": {
"id":"var-EavQ1LztoRTQHSNT",
"type":"vars",
"attributes": {
"key":"some_key",
"value":"some_value",
"sensitive":false,
"category":"policy-set"
    },
    "relationships": {
      "configurable": {
        "data": {
          "id":"polset-u3S5p2Uwk21keu1s",
"type":"policy-sets"
},
"links": {
"related":"/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s"
}
}
},
"links": {
"self":"/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters/var-EavQ1LztoRTQHSNT"
}
}
}
```
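When scripting against this endpoint, you can capture the new parameter's ID from the response for the update and delete calls below. A sketch, assuming `jq` is installed and `payload.json` is the sample payload above:

```shell
# Create the parameter and keep its ID (e.g. var-EavQ1LztoRTQHSNT).
PARAM_ID=$(curl \
  --silent \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters \
  | jq -r '.data.id')
```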
## List Parameters
`GET /policy-sets/:policy_set_id/parameters`
| Parameter | Description |
| ---------------- | ------------------------------------------------ |
| `:policy_set_id` | The ID of the policy set to list parameters for. |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If neither pagination query parameter is provided, the endpoint is not paginated and returns all results.
| Parameter | Description |
| -------------- | -------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 parameters per page. |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters"
```
### Sample Response
```json
{
"data": [
    {
      "id":"var-AD4pibb9nxo1468E",
      "type":"vars",
      "attributes": {
        "key":"name",
        "value":"hello",
        "sensitive":false,
        "category":"policy-set"
      },
      "relationships": {
        "configurable": {
          "data": {
            "id":"polset-u3S5p2Uwk21keu1s",
"type":"policy-sets"
},
"links": {
"related":"/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s"
}
}
},
"links": {
"self":"/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters/var-AD4pibb9nxo1468E"
}
}
]
}
```
## Update Parameters
`PATCH /policy-sets/:policy_set_id/parameters/:parameter_id`
| Parameter | Description |
| ---------------- | ------------------------------------------------- |
| `:policy_set_id` | The ID of the policy set that owns the parameter. |
| `:parameter_id` | The ID of the parameter to be updated. |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------- | ------ | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"vars"`. |
| `data.id` | string | | The ID of the parameter to update. |
| `data.attributes` | object | | New attributes for the parameter. This object can include `key`, `value`, `category` and `sensitive` properties, which are described above under [create a parameter](#create-a-parameter). All of these properties are optional; if omitted, a property will be left unchanged. |
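Because omitted attributes are left unchanged, an update can send only the attribute being changed. For example, this sketch updates just the value (the IDs reuse the placeholders from the samples below):

```shell
# Change only the value; key, category, and sensitive stay as they were.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data '{"data": {"id": "var-yRmifb4PJj7cLkMG", "type": "vars", "attributes": {"value": "venus"}}}' \
  https://app.terraform.io/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters/var-yRmifb4PJj7cLkMG
```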
### Sample Payload
```json
{
"data": {
"id":"var-yRmifb4PJj7cLkMG",
"attributes": {
"key":"name",
"value":"mars",
"category":"policy-set",
"sensitive": false
},
"type":"vars"
}
}
```
### Sample Request
```bash
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters/var-yRmifb4PJj7cLkMG
```
### Sample Response
```json
{
"data": {
"id":"var-yRmifb4PJj7cLkMG",
"type":"vars",
"attributes": {
"key":"name",
"value":"mars",
"sensitive":false,
"category":"policy-set",
},
"relationships": {
"configurable": {
"data": {
"id":"pol-u3S5p2Uwk21keu1s",
"type":"policy-sets"
},
"links": {
"related":"/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s"
}
}
},
"links": {
"self":"/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters/var-yRmifb4PJj7cLkMG"
}
}
}
```
## Delete Parameters
`DELETE /policy-sets/:policy_set_id/parameters/:parameter_id`
| Parameter | Description |
| ---------------- | ------------------------------------------------- |
| `:policy_set_id` | The ID of the policy set that owns the parameter. |
| `:parameter_id` | The ID of the parameter to be deleted. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/policy-sets/polset-u3S5p2Uwk21keu1s/parameters/var-yRmifb4PJj7cLkMG
``` | terraform | page title Policy Set Parameters API Docs HCP Terraform description Use the policy sets endpoint to manage key value pairs used by Sentinel policy checks List create update and delete parameters using the HTTP API 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 202 https developer mozilla org en US docs Web HTTP Status 202 204 https developer mozilla org en US docs Web HTTP Status 204 400 https developer mozilla org en US docs Web HTTP Status 400 401 https developer mozilla org en US docs Web HTTP Status 401 403 https developer mozilla org en US docs Web HTTP Status 403 404 https developer mozilla org en US docs Web HTTP Status 404 409 https developer mozilla org en US docs Web HTTP Status 409 412 https developer mozilla org en US docs Web HTTP Status 412 422 https developer mozilla org en US docs Web HTTP Status 422 429 https developer mozilla org en US docs Web HTTP Status 429 500 https developer mozilla org en US docs Web HTTP Status 500 504 https developer mozilla org en US docs Web HTTP Status 504 JSON API document terraform cloud docs api docs json api documents JSON API error object https jsonapi org format error objects Policy Set Parameters API Sentinel parameters https docs hashicorp com sentinel language parameters are a list of key value pairs that HCP Terraform sends to the Sentinel runtime when performing policy checks on workspaces They can help you avoid hardcoding sensitive parameters into a policy BEGIN TFC only name pnp callout include tfc package callouts policies mdx END TFC only name pnp callout Parameters are only available for Sentinel policies This set of APIs provides endpoints to create update list and delete parameters Create a Parameter POST policy sets policy set id parameters Parameter Description policy set id The ID of the policy set to create the parameter in Request Body This POST endpoint requires a JSON object with the following properties as a request payload Properties without a default value are required Key path Type Default Description data type string Must be vars data attributes key string The name of the parameter data attributes value string The value of the parameter data attributes category string The category of the parameters Must be policy set data attributes sensitive bool false Whether the value is sensitive If true then the parameter is written once and not visible thereafter Sample Payload json data type vars attributes key some key value some value category policy set sensitive false Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request POST data payload json https app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters Sample Response json data id var EavQ1LztoRTQHSNT type vars attributes key some key value some value sensitive false category policy set relationships configurable data id pol u3S5p2Uwk21keu1s type policy sets links related api v2 policy sets polset u3S5p2Uwk21keu1s links self api v2 policy sets polset u3S5p2Uwk21keu1s parameters var EavQ1LztoRTQHSNT List Parameters GET policy sets policy set id parameters Parameter Description policy set id The ID of the policy set to list parameters for Query Parameters This endpoint supports pagination with standard URL query parameters terraform cloud docs api docs query parameters Remember to percent encode as 5B and as 5D if your tooling doesn t automatically encode URLs If neither pagination query parameters are 
provided the endpoint will not be paginated and will return all results Parameter Description page number Optional If omitted the endpoint will return the first page page size Optional If omitted the endpoint will return 20 parameters per page Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json https app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters Sample Response json data id var AD4pibb9nxo1468E type vars attributes key name value hello sensitive false category policy set relationships configurable data id pol u3S5p2Uwk21keu1s type policy sets links related api v2 policy sets polset u3S5p2Uwk21keu1s links self api v2 policy sets polset u3S5p2Uwk21keu1s parameters var AD4pibb9nxo1468E Update Parameters PATCH policy sets policy set id parameters parameter id Parameter Description policy set id The ID of the policy set that owns the parameter parameter id The ID of the parameter to be updated Request Body This POST endpoint requires a JSON object with the following properties as a request payload Properties without a default value are required Key path Type Default Description data type string Must be vars data id string The ID of the parameter to update data attributes object New attributes for the parameter This object can include key value category and sensitive properties which are described above under create a parameter create a parameter All of these properties are optional if omitted a property will be left unchanged Sample Payload json data id var yRmifb4PJj7cLkMG attributes key name value mars category policy set sensitive false type vars Sample Request bash curl header Authorization Bearer TOKEN header Content Type application vnd api json request PATCH data payload json https app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters var yRmifb4PJj7cLkMG Sample Response json data id var yRmifb4PJj7cLkMG type vars attributes key name value mars sensitive false category policy set relationships configurable data id pol u3S5p2Uwk21keu1s type policy sets links related api v2 policy sets polset u3S5p2Uwk21keu1s links self api v2 policy sets polset u3S5p2Uwk21keu1s parameters var yRmifb4PJj7cLkMG Delete Parameters DELETE policy sets policy set id parameters parameter id Parameter Description policy set id The ID of the policy set that owns the parameter parameter id The ID of the parameter to be deleted Sample Request bash curl header Authorization Bearer TOKEN header Content Type application vnd api json request DELETE https app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters var yRmifb4PJj7cLkMG |
terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the organizations endpoint to interact with organizations List organizations entitlement sets and module producers and show create update and destroy organizations using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Organizations API Docs HCP Terraform | ---
page_title: Organizations - API Docs - HCP Terraform
description: >-
Use the `/organizations` endpoint to interact with organizations. List organizations, entitlement sets, and module producers, and show, create, update, and destroy organizations using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Organizations API
The Organizations API is used to list, show, create, update, and destroy organizations.
## List Organizations
`GET /organizations`
| Status | Response | Reason |
| ------- | ----------------------------------------------- | ------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "organizations"`) | The request was successful |
| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
Currently, this endpoint returns a full, unpaginated list of organizations (without pagination metadata) if both of the pagination query parameters are omitted. To avoid inconsistent behavior, we recommend always supplying pagination parameters when building against this API.
| Parameter | Description |
| -------------- | ----------------------------------------------------------------------------- |
| `q`            | **Optional.** A search query string. Organizations are searchable by name and notification email. This query takes precedence over the attribute-specific searches `q[email]` or `q[name]`. |
| `q[email]`     | **Optional.** A search query string. This query searches organizations by notification email. If used with `q[name]`, it returns organizations that match both queries. |
| `q[name]`      | **Optional.** A search query string. This query searches organizations by name. If used with `q[email]`, it returns organizations that match both queries. |
| `page[number]` | **Optional.** If omitted when `page[size]` is provided, the endpoint returns the first page. |
| `page[size]`   | **Optional.** If omitted when `page[number]` is provided, the endpoint returns 20 organizations per page. |
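For example, the following sketch searches organizations by name with a hypothetical query value, explicitly requesting the first page of results:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/organizations?q%5Bname%5D=hashicorp&page%5Bnumber%5D=1&page%5Bsize%5D=20"
```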
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
  "https://app.terraform.io/api/v2/organizations?page%5Bnumber%5D=1&page%5Bsize%5D=20"
```
### Sample Response
**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.
```json
{
"data": [
{
"id": "hashicorp",
"type": "organizations",
"attributes": {
"external-id": "org-Hysjx5eUviuKVCJY",
"created-at": "2021-08-24T23:10:04.675Z",
"email": "[email protected]",
"session-timeout": null,
"session-remember": null,
"collaborator-auth-policy": "password",
"plan-expired": false,
"plan-expires-at": null,
"plan-is-trial": false,
"plan-is-enterprise": false,
"plan-identifier": "developer",
"cost-estimation-enabled": true,
"send-passing-statuses-for-untriggered-speculative-plans": true,
"aggregated-commit-status-enabled": false,
"speculative-plan-management-enabled": true,
"allow-force-delete-workspaces": true,
"name": "hashicorp",
"permissions": {
"can-update": true,
"can-destroy": true,
"can-access-via-teams": true,
"can-create-module": true,
"can-create-team": true,
"can-create-workspace": true,
"can-manage-users": true,
"can-manage-subscription": true,
"can-manage-sso": true,
"can-update-oauth": true,
"can-update-sentinel": true,
"can-update-ssh-keys": true,
"can-update-api-token": true,
"can-traverse": true,
"can-start-trial": true,
"can-update-agent-pools": true,
"can-manage-tags": true,
"can-manage-varsets": true,
"can-read-varsets": true,
"can-manage-public-providers": true,
"can-create-provider": true,
"can-manage-public-modules": true,
"can-manage-custom-providers": false,
"can-manage-run-tasks": false,
"can-read-run-tasks": false,
"can-create-project": true
},
"fair-run-queuing-enabled": true,
"saml-enabled": false,
"owners-team-saml-role-id": null,
"two-factor-conformant": false,
"assessments-enforced": false,
"default-execution-mode": "remote"
},
"relationships": {
"default-agent-pool": {
"data": null
},
"oauth-tokens": {
"links": {
"related": "/api/v2/organizations/hashicorp/oauth-tokens"
}
},
"authentication-token": {
"links": {
"related": "/api/v2/organizations/hashicorp/authentication-token"
}
},
"entitlement-set": {
"data": {
"id": "org-Hysjx5eUviuKVCJY",
"type": "entitlement-sets"
},
"links": {
"related": "/api/v2/organizations/hashicorp/entitlement-set"
}
},
"subscription": {
"links": {
"related": "/api/v2/organizations/hashicorp/subscription"
}
}
},
"links": {
"self": "/api/v2/organizations/hashicorp"
}
},
{
"id": "hashicorp-two",
"type": "organizations",
"attributes": {
"external-id": "org-iJ5tr4WgB4WpA1hD",
"created-at": "2022-01-04T18:57:16.036Z",
"email": "[email protected]",
"session-timeout": null,
"session-remember": null,
"collaborator-auth-policy": "password",
"plan-expired": false,
"plan-expires-at": null,
"plan-is-trial": false,
"plan-is-enterprise": false,
"plan-identifier": "free",
"cost-estimation-enabled": false,
"send-passing-statuses-for-untriggered-speculative-plans": false,
"aggregated-commit-status-enabled": true,
"speculative-plan-management-enabled": true,
"allow-force-delete-workspaces": false,
"name": "hashicorp-two",
"permissions": {
"can-update": true,
"can-destroy": true,
"can-access-via-teams": true,
"can-create-module": true,
"can-create-team": false,
"can-create-workspace": true,
"can-manage-users": true,
"can-manage-subscription": true,
"can-manage-sso": false,
"can-update-oauth": true,
"can-update-sentinel": false,
"can-update-ssh-keys": true,
"can-update-api-token": true,
"can-traverse": true,
"can-start-trial": true,
"can-update-agent-pools": false,
"can-manage-tags": true,
"can-manage-varsets": true,
"can-read-varsets": true,
"can-manage-public-providers": true,
"can-create-provider": true,
"can-manage-public-modules": true,
"can-manage-custom-providers": false,
"can-manage-run-tasks": false,
"can-read-run-tasks": false,
"can-create-project": false
},
"fair-run-queuing-enabled": true,
"saml-enabled": false,
"owners-team-saml-role-id": null,
"two-factor-conformant": false,
"assessments-enforced": false,
"default-execution-mode": "remote"
},
"relationships": {
"default-agent-pool": {
"data": null
},
"oauth-tokens": {
"links": {
"related": "/api/v2/organizations/hashicorp-two/oauth-tokens"
}
},
"authentication-token": {
"links": {
"related": "/api/v2/organizations/hashicorp-two/authentication-token"
}
},
"entitlement-set": {
"data": {
"id": "org-iJ5tr4WgB4WpA1hD",
"type": "entitlement-sets"
},
"links": {
"related": "/api/v2/organizations/hashicorp-two/entitlement-set"
}
},
"subscription": {
"links": {
"related": "/api/v2/organizations/hashicorp-two/subscription"
}
}
},
"links": {
"self": "/api/v2/organizations/hashicorp-two"
}
}
],
"links": {
"self": "https://tfe-zone-b0c8608c.ngrok.io/api/v2/organizations?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://tfe-zone-b0c8608c.ngrok.io/api/v2/organizations?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://tfe-zone-b0c8608c.ngrok.io/api/v2/organizations?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 2
}
}
}
```
## Show an Organization
`GET /organizations/:organization_name`
| Parameter | Description |
| -------------------- | ------------------------------------ |
| `:organization_name` | The name of the organization to show |
| Status | Response | Reason |
| ------- | ----------------------------------------------- | ------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "organizations"`) | The request was successful |
| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
https://app.terraform.io/api/v2/organizations/hashicorp
```
### Sample Response
**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.
```json
{
"data": {
"id": "hashicorp",
"type": "organizations",
"attributes": {
"external-id": "org-WV6DfwfxxXvLfvfs",
"created-at": "2020-03-26T22:13:38.456Z",
"email": "[email protected]",
"session-timeout": null,
"session-remember": null,
"collaborator-auth-policy": "password",
"plan-expired": false,
"plan-expires-at": null,
"plan-is-trial": false,
"plan-is-enterprise": false,
"cost-estimation-enabled": false,
"send-passing-statuses-for-untriggered-speculative-plans": false,
"aggregated-commit-status-enabled": true,
"speculative-plan-management-enabled": true,
"allow-force-delete-workspaces": false,
"name": "hashicorp",
"permissions": {
"can-update": true,
"can-destroy": true,
"can-access-via-teams": true,
"can-create-module": true,
"can-create-team": false,
"can-create-workspace": true,
"can-manage-users": true,
"can-manage-subscription": true,
"can-manage-sso": false,
"can-update-oauth": true,
"can-update-sentinel": false,
"can-update-ssh-keys": true,
"can-update-api-token": true,
"can-traverse": true,
"can-start-trial": true,
"can-update-agent-pools": false,
"can-manage-tags": true,
"can-manage-public-modules": true,
"can-manage-public-providers": false,
"can-manage-run-tasks": false,
"can-read-run-tasks": false,
"can-create-provider": false,
"can-create-project": true
},
"fair-run-queuing-enabled": true,
"saml-enabled": false,
"owners-team-saml-role-id": null,
"two-factor-conformant": false,
"assessments-enforced": false,
"default-execution-mode": "remote"
},
"relationships": {
"default-agent-pool": {
"data": null
},
"oauth-tokens": {
"links": {
"related": "/api/v2/organizations/hashicorp/oauth-tokens"
}
},
"authentication-token": {
"links": {
"related": "/api/v2/organizations/hashicorp/authentication-token"
}
},
"entitlement-set": {
"data": {
"id": "org-WV6DfwfxxXvLfvfs",
"type": "entitlement-sets"
},
"links": {
"related": "/api/v2/organizations/hashicorp/entitlement-set"
}
},
"subscription": {
"links": {
"related": "/api/v2/organizations/hashicorp/subscription"
}
}
},
"links": {
"self": "/api/v2/organizations/hashicorp"
}
}
}
```
## Create an Organization
`POST /organizations`
| Status | Response | Reason |
| ------- | ----------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "organizations"`) | The organization was successfully created |
| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------------------------------------------------------------------- | ------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"organizations"` |
| `data.attributes.name` | string | | Name of the organization |
| `data.attributes.email` | string | | Admin email address |
| `data.attributes.session-timeout` | integer | 20160 | Session timeout after inactivity (minutes) |
| `data.attributes.session-remember` | integer | 20160 | Session expiration (minutes) |
| `data.attributes.collaborator-auth-policy` | string | password | Authentication policy (`password` or `two_factor_mandatory`) |
| `data.attributes.cost-estimation-enabled` | boolean | false | Whether or not the cost estimation feature is enabled for all workspaces in the organization. Defaults to false. In Terraform Enterprise, you must also enable cost estimation in [Site Administration](/terraform/enterprise/admin/application/integration#cost-estimation-integration). |
| `data.attributes.send-passing-statuses-for-untriggered-speculative-plans` | boolean | false | Whether or not to send VCS status updates for untriggered speculative plans. This can be useful if large numbers of untriggered workspaces are exhausting request limits for connected version control service providers like GitHub. Defaults to false. In Terraform Enterprise, this setting is always false and cannot be changed but is also available in Site Administration. |
| `data.attributes.aggregated-commit-status-enabled` | boolean | true | Whether or not to aggregate VCS status updates for triggered workspaces. This is useful for monorepo projects with configuration spanning many workspaces. Defaults to `true`. You cannot use this option if `send-passing-statuses-for-untriggered-speculative-plans` is set to `true`. |
| `data.attributes.speculative-plan-management-enabled` | boolean | true | Whether or not to enable [Automatically cancel plan-only runs](/terraform/cloud-docs/users-teams-organizations/organizations/vcs-speculative-plan-management). Defaults to `true`. |
| `data.attributes.owners-team-saml-role-id` | string | (nothing) | **Optional.** **SAML only** The name of the ["owners" team](/terraform/enterprise/saml/team-membership#managing-membership-of-the-owners-team) |
| `data.attributes.assessments-enforced`                                    | boolean | false     | Whether or not to compel health assessments for all eligible workspaces. When true, health assessments occur on all compatible workspaces, regardless of the value of the workspace setting `assessments-enabled`. When false, health assessments only occur for workspaces that opt in by setting `assessments-enabled: true`. |
| `data.attributes.allow-force-delete-workspaces`                           | boolean | false     | Whether workspace administrators can [delete workspaces with resources under management](/terraform/cloud-docs/users-teams-organizations/organizations#general). If false, only organization owners may delete these workspaces. |
| `data.attributes.default-execution-mode`                                  | string  | `remote`  | Which [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) to use by default. Valid values are `remote`, `local`, and `agent`. |
| `data.attributes.default-agent-pool-id`                                   | string  | (previous value) | Required when `default-execution-mode` is set to `agent`. The ID of an agent pool belonging to the organization. Do _not_ specify this value if you set `default-execution-mode` to `remote` or `local`. |
### Sample Payload
```json
{
"data": {
"type": "organizations",
"attributes": {
"name": "hashicorp",
"email": "[email protected]"
}
}
}
```
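To default an organization to agent execution instead, the payload must also name an agent pool, per the request body table above. The following is a sketch only; the pool ID shown (`apool-yoGUFz5zcRMMz53i`) is hypothetical and must be replaced with an agent pool that belongs to your organization:

```json
{
  "data": {
    "type": "organizations",
    "attributes": {
      "name": "hashicorp",
      "email": "[email protected]",
      "default-execution-mode": "agent",
      "default-agent-pool-id": "apool-yoGUFz5zcRMMz53i"
    }
  }
}
```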
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations
```
### Sample Response
**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.
```json
{
"data": {
"id": "hashicorp",
"type": "organizations",
"attributes": {
"external-id": "org-Bzyc2JuegvVLAibn",
"created-at": "2021-08-30T18:09:57.561Z",
"email": "[email protected]",
"session-timeout": null,
"session-remember": null,
"collaborator-auth-policy": "password",
"plan-expired": false,
"plan-expires-at": null,
"plan-is-trial": false,
"plan-is-enterprise": false,
"cost-estimation-enabled": false,
"send-passing-statuses-for-untriggered-speculative-plans": false,
"aggregated-commit-status-enabled": true,
"speculative-plan-management-enabled": true,
"allow-force-delete-workspaces": false,
"name": "hashicorp",
"permissions": {
"can-update": true,
"can-destroy": true,
"can-access-via-teams": true,
"can-create-module": true,
"can-create-team": false,
"can-create-workspace": true,
"can-manage-users": true,
"can-manage-subscription": true,
"can-manage-sso": false,
"can-update-oauth": true,
"can-update-sentinel": false,
"can-update-ssh-keys": true,
"can-update-api-token": true,
"can-traverse": true,
"can-start-trial": true,
"can-update-agent-pools": false,
"can-manage-tags": true,
"can-manage-public-modules": true,
"can-manage-public-providers": false,
"can-manage-run-tasks": false,
"can-read-run-tasks": false,
"can-create-provider": false,
"can-create-project": true
},
"fair-run-queuing-enabled": true,
"saml-enabled": false,
"owners-team-saml-role-id": null,
"two-factor-conformant": false,
"assessments-enforced": false,
"default-execution-mode": "remote"
},
"relationships": {
"default-agent-pool": {
"data": null
},
"oauth-tokens": {
"links": {
"related": "/api/v2/organizations/hashicorp/oauth-tokens"
}
},
"authentication-token": {
"links": {
"related": "/api/v2/organizations/hashicorp/authentication-token"
}
},
"entitlement-set": {
"data": {
"id": "org-Bzyc2JuegvVLAibn",
"type": "entitlement-sets"
},
"links": {
"related": "/api/v2/organizations/hashicorp/entitlement-set"
}
},
"subscription": {
"links": {
"related": "/api/v2/organizations/hashicorp/subscription"
}
}
},
"links": {
"self": "/api/v2/organizations/hashicorp"
}
},
"included": [
{
"id": "org-Bzyc2JuegvVLAibn",
"type": "entitlement-sets",
"attributes": {
"agents": false,
"audit-logging": false,
"configuration-designer": true,
"cost-estimation": false,
"global-run-tasks": false,
"module-tests-generation": false,
"operations": true,
"policy-enforcement": false,
"policy-limit": null,
"policy-mandatory-enforcement-limit": null,
"policy-set-limit": null,
"private-module-registry": true,
"run-task-limit": null,
"run-task-mandatory-enforcement-limit": null,
"run-task-workspace-limit": null,
"run-tasks": false,
"self-serve-billing": true,
"sentinel": false,
"sso": false,
"state-storage": true,
"teams": false,
"usage-reporting": false,
"user-limit": 5,
"vcs-integrations": true,
"versioned-policy-set-limit": null
},
"links": {
"self": "/api/v2/entitlement-sets/org-Bzyc2JuegvVLAibn"
}
}
]
}
```
## Update an Organization
`PATCH /organizations/:organization_name`
| Parameter | Description |
| -------------------- | -------------------------------------- |
| `:organization_name` | The name of the organization to update |
| Status | Response | Reason |
| ------- | ----------------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "organizations"`) | The organization was successfully updated |
| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
| Key path | Type | Default | Description |
| ------------------------------------------------------------------------- | ------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"organizations"` |
| `data.attributes.name` | string | | Name of the organization |
| `data.attributes.email` | string | | Admin email address |
| `data.attributes.session-timeout` | integer | 20160 | Session timeout after inactivity (minutes) |
| `data.attributes.session-remember` | integer | 20160 | Session expiration (minutes) |
| `data.attributes.collaborator-auth-policy` | string | password | Authentication policy (`password` or `two_factor_mandatory`) |
| `data.attributes.cost-estimation-enabled` | boolean | false | Whether or not the cost estimation feature is enabled for all workspaces in the organization. Defaults to false. In Terraform Enterprise, you must also enable cost estimation in [Site Administration](/terraform/enterprise/admin/application/integration#cost-estimation-integration). |
| `data.attributes.send-passing-statuses-for-untriggered-speculative-plans` | boolean | false | Whether or not to send VCS status updates for untriggered speculative plans. This can be useful if large numbers of untriggered workspaces are exhausting request limits for connected version control service providers like GitHub. Defaults to false. In Terraform Enterprise, this setting is always false and cannot be changed but is also available in Site Administration. |
| `data.attributes.aggregated-commit-status-enabled` | boolean | true | Whether or not to aggregate VCS status updates for triggered workspaces. This is useful for monorepo projects with configuration spanning many workspaces. Defaults to `true`. You cannot use this option if `send-passing-statuses-for-untriggered-speculative-plans` is set to `true`. |
| `data.attributes.speculative-plan-management-enabled` | boolean | true | Whether or not to enable [Automatically cancel plan-only runs](/terraform/cloud-docs/users-teams-organizations/organizations/vcs-speculative-plan-management). Defaults to `true`. |
| `data.attributes.owners-team-saml-role-id` | string | (nothing) | **Optional.** **SAML only** The name of the ["owners" team](/terraform/enterprise/saml/team-membership#managing-membership-of-the-owners-team) |
| `data.attributes.assessments-enforced` | boolean | false | Whether or not to compel health assessments for all eligible workspaces. When true, health assessments occur on all compatible workspaces, regardless of the value of the workspace setting `assessments-enabled`. When false, health assessments only occur for workspaces that opt in by setting `assessments-enabled: true`. |
| `data.attributes.allow-force-delete-workspaces` | boolean | false | Whether workspace administrators can [delete workspaces with resources under management](/terraform/cloud-docs/users-teams-organizations/organizations#general). If false, only organization owners may delete these workspaces. |
| `data.attributes.default-execution-mode`                                  | string  | `remote`  | Which [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) to use by default. Valid values are `remote`, `local`, and `agent`. |
| `data.attributes.default-agent-pool-id`                                   | string  | (previous value) | Required when `default-execution-mode` is set to `agent`. The ID of an agent pool belonging to the organization. Do _not_ specify this value if you set `default-execution-mode` to `remote` or `local`. |
### Sample Payload
```json
{
"data": {
"type": "organizations",
"attributes": {
"email": "[email protected]"
}
}
}
```
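As another sketch based on the attributes in the table above, the following payload enforces health assessments and lets workspace administrators force-delete workspaces:

```json
{
  "data": {
    "type": "organizations",
    "attributes": {
      "assessments-enforced": true,
      "allow-force-delete-workspaces": true
    }
  }
}
```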
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/hashicorp
```
### Sample Response
**Note:** The `two-factor-conformant` and `assessments-enforced` properties are only returned from HCP Terraform organizations.
```json
{
"data": {
"id": "hashicorp",
"type": "organizations",
"attributes": {
"external-id": "org-Bzyc2JuegvVLAibn",
"created-at": "2021-08-30T18:09:57.561Z",
"email": "[email protected]",
"session-timeout": null,
"session-remember": null,
"collaborator-auth-policy": "password",
"plan-expired": false,
"plan-expires-at": null,
"plan-is-trial": false,
"plan-is-enterprise": false,
"cost-estimation-enabled": false,
"send-passing-statuses-for-untriggered-speculative-plans": false,
"aggregated-commit-status-enabled": true,
"speculative-plan-management-enabled": true,
"name": "hashicorp",
"permissions": {
"can-update": true,
"can-destroy": true,
"can-access-via-teams": true,
"can-create-module": true,
"can-create-team": false,
"can-create-workspace": true,
"can-manage-users": true,
"can-manage-subscription": true,
"can-manage-sso": false,
"can-update-oauth": true,
"can-update-sentinel": false,
"can-update-ssh-keys": true,
"can-update-api-token": true,
"can-traverse": true,
"can-start-trial": true,
"can-update-agent-pools": false,
"can-manage-tags": true,
"can-manage-public-modules": true,
"can-manage-public-providers": false,
"can-manage-run-tasks": false,
"can-read-run-tasks": false,
"can-create-provider": false,
"can-create-project": true
},
"fair-run-queuing-enabled": true,
"saml-enabled": false,
"owners-team-saml-role-id": null,
"two-factor-conformant": false,
"assessments-enforced": false,
"default-execution-mode": "remote"
},
"relationships": {
"default-agent-pool": {
"data": null
},
"oauth-tokens": {
"links": {
"related": "/api/v2/organizations/hashicorp/oauth-tokens"
}
},
"authentication-token": {
"links": {
"related": "/api/v2/organizations/hashicorp/authentication-token"
}
},
"entitlement-set": {
"data": {
"id": "org-Bzyc2JuegvVLAibn",
"type": "entitlement-sets"
},
"links": {
"related": "/api/v2/organizations/hashicorp/entitlement-set"
}
},
"subscription": {
"links": {
"related": "/api/v2/organizations/hashicorp/subscription"
}
}
},
"links": {
"self": "/api/v2/organizations/hashicorp"
}
}
}
```
## Destroy an Organization
`DELETE /organizations/:organization_name`
| Parameter | Description |
| -------------------- | --------------------------------------- |
| `:organization_name` | The name of the organization to destroy |
| Status | Response | Reason |
| ------- | ------------------------- | ------------------------------------------------------------- |
| [204][] | | The organization was successfully destroyed |
| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/hashicorp
```
### Sample Response
The response body will be empty if successful.
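Because no body is returned, one way to confirm the result is to print the HTTP status code; this sketch relies only on standard `curl` flags:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --request DELETE \
  --output /dev/null \
  --write-out "%{http_code}\n" \
  https://app.terraform.io/api/v2/organizations/hashicorp
```

A `204` indicates the organization was destroyed.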
## Show the Entitlement Set
This endpoint shows the [entitlements](/terraform/cloud-docs/api-docs#feature-entitlements) for an organization.
`GET /organizations/:organization_name/entitlement-set`
| Parameter | Description |
| -------------------- | ------------------------------------------------------ |
| `:organization_name` | The name of the organization's entitlement set to view |
| Status | Response | Reason |
| ------- | -------------------------------------------------- | ------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "entitlement-sets"`) | The request was successful |
| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/entitlement-set
```
### Sample Response
```json
{
"data": {
"id": "org-Bzyc2JuegvVLAibn",
"type": "entitlement-sets",
"attributes": {
"agents": false,
"audit-logging": false,
"configuration-designer": true,
"cost-estimation": false,
"global-run-tasks": false,
"module-tests-generation": false,
"operations": true,
"policy-enforcement": false,
"policy-limit": 5,
"policy-mandatory-enforcement-limit": null,
"policy-set-limit": 1,
"private-module-registry": true,
"private-policy-agents": false,
"private-vcs": false,
"run-task-limit": 1,
"run-task-mandatory-enforcement-limit": 1,
"run-task-workspace-limit": 10,
"run-tasks": false,
"self-serve-billing": true,
"sentinel": false,
"sso": false,
"state-storage": true,
"teams": false,
"usage-reporting": false,
"user-limit": 5,
"vcs-integrations": true,
"versioned-policy-set-limit": null
},
"links": {
"self": "/api/v2/entitlement-sets/org-Bzyc2JuegvVLAibn"
}
}
}
```
## Show Module Producers
<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.
</EnterpriseAlert>
This endpoint shows organizations that are configured to share modules with an organization through [Module Sharing](/terraform/enterprise/admin/application/module-sharing).
`GET /organizations/:organization_name/relationships/module-producers`
| Parameter | Description |
| -------------------- | ------------------------------------------------------- |
| `:organization_name` | The name of the organization's module producers to view |
| Status | Response | Reason |
| ------- | ----------------------------------------------- | ------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "organizations"`) | The request was successful |
| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| -------------- | -------------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 module producers per page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://tfe.example.com/api/v2/organizations/hashicorp/relationships/module-producers
```
### Sample Response
```json
{
"data": [
{
"id": "hc-nomad",
"type": "organizations",
"attributes": {
"name": "hc-nomad",
"external-id": "org-ArQSQMAkFQsSUZjB"
},
"links": {
"self": "/api/v2/organizations/hc-nomad"
}
}
],
"links": {
"self": "https://tfe.example.com/api/v2/organizations/hashicorp/relationships/module-producers?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://tfe.example.com/api/v2/organizations/hashicorp/relationships/module-producers?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://tfe.example.com/api/v2/organizations/hashicorp/relationships/module-producers?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 1
}
}
}
```
## Show Data Retention Policy
<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.
</EnterpriseAlert>
`GET /organizations/:organization_name/relationships/data-retention-policy`
| Parameter | Description |
| ---------------------| --------------------------------------------------------------------|
| `:organization_name` | The name of the organization to show the data retention policy for. |
This endpoint shows the data retention policy set explicitly on the organization.
When no data retention policy is set for the organization, the endpoint returns the default policy configured for the Terraform Enterprise installation. Read more about [organization data retention policies](/terraform/enterprise/users-teams-organizations/organizations#data-retention-policies).
For additional information, refer to [Data Retention Policy Types](/terraform/enterprise/api-docs/data-retention-policies#data-retention-policy-types) in the Terraform Enterprise documentation.
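A minimal request sketch, using the hypothetical Terraform Enterprise hostname from the module producers example above:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://tfe.example.com/api/v2/organizations/hashicorp/relationships/data-retention-policy
```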
## Create or Update Data Retention Policy
<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.
</EnterpriseAlert>
`POST /organizations/:organization_name/relationships/data-retention-policy`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to update the data retention policy for. |
This endpoint creates a data retention policy for an organization or updates the existing policy.
Read more about [organization data retention policies](/terraform/enterprise/users-teams-organizations/organizations#data-retention-policies).
Refer to [Data Retention Policy API](/terraform/enterprise/api-docs/data-retention-policies#create-or-update-data-retention-policy) in the Terraform Enterprise documentation for details.
## Remove Data Retention Policy
<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.
</EnterpriseAlert>
`DELETE /organizations/:organization_name/relationships/data-retention-policy`
| Parameter | Description |
| ---------------------| -------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to remove the data retention policy for. |
This endpoint removes the data retention policy explicitly set on an organization.
When the data retention policy is deleted, the organization inherits the default policy configured for the Terraform Enterprise installation. Refer to [Data Retention Policies](/terraform/enterprise/application-administration/general#data-retention-policies) for additional information.
Refer to [Data Retention Policies](/terraform/enterprise/users-teams-organizations/organizations#data-retention-policies) for information about configuring data retention policies for an organization.
Refer to [Data Retention Policy API](/terraform/enterprise/api-docs/data-retention-policies#remove-data-retention-policy) in the Terraform Enterprise documentation for details.
## Available Related Resources
The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:
| Resource Name | Description |
| --------------------- | -------------------------------------------------------------------------------------------- |
| `entitlement_set` | The entitlement set that determines which HCP Terraform features the organization can use. |
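For example, to embed the entitlement set in a Show an Organization response, append the `include` parameter with the resource name from the table above:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/organizations/hashicorp?include=entitlement_set"
```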
## Relationships
The following relationships may be present in various responses.
| Resource Name | Description |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `module-producers` | Other organizations configured to share modules with the organization. |
| `oauth-tokens` | OAuth tokens associated with VCS configurations for the organization. |
| `authentication-token` | The API token for an organization. |
| `entitlement-set` | The entitlement set that determines which HCP Terraform features the organization can use. |
| `subscription` | The current subscription for an organization. |
| `default-agent-pool` | An organization's default agent pool. Set this value if your `default-execution-mode` is `agent`. |
| `data-retention-policy` | <EnterpriseAlert inline/> Specifies an organization's data retention policy. Refer to [Data Retention Policy APIs](/terraform/enterprise/api-docs/data-retention-policies) in the Terraform Enterprise documentation for more details. | | terraform | page title Organizations API Docs HCP Terraform description Use the organizations endpoint to interact with organizations List organizations entitlement sets and module producers and show create update and destroy organizations using the HTTP API 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 202 https developer mozilla org en US docs Web HTTP Status 202 204 https developer mozilla org en US docs Web HTTP Status 204 400 https developer mozilla org en US docs Web HTTP Status 400 401 https developer mozilla org en US docs Web HTTP Status 401 403 https developer mozilla org en US docs Web HTTP Status 403 404 https developer mozilla org en US docs Web HTTP Status 404 409 https developer mozilla org en US docs Web HTTP Status 409 412 https developer mozilla org en US docs Web HTTP Status 412 422 https developer mozilla org en US docs Web HTTP Status 422 429 https developer mozilla org en US docs Web HTTP Status 429 500 https developer mozilla org en US docs Web HTTP Status 500 504 https developer mozilla org en US docs Web HTTP Status 504 JSON API document terraform cloud docs api docs json api documents JSON API error object https jsonapi org format error objects Organizations API The Organizations API is used to list show create update and destroy organizations List Organizations GET organizations Status Response Reason 200 JSON API document type organizations The request was successful 404 JSON API error object Organization not found or user unauthorized to perform action Query Parameters This endpoint supports pagination with standard URL query parameters terraform cloud docs api docs query parameters Remember to percent encode as 5B and as 5D if your tooling doesn t automatically encode URLs Currently this endpoint returns a full unpaginated list of organizations without pagination metadata if both of the pagination query parameters are omitted To avoid inconsistent behavior we recommend always supplying pagination parameters when building against this API Parameter Description q Optional A search query string Organizations are searchable by name and notification email This query takes precedence over the attribute specific searches q email or q name q email Optional A search query string This query searches organizations by notification email If used with q name it returns organizations that match both queries q name Optional A search query string This query searches organizations by name If used with q email it returns organizations that match both queries page number Optional Defaults to the first page if omitted when page size is provided page size Optional Defaults to 20 organizations per page if omitted when page number is provided Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request GET https app terraform io api v2 organizations page number 1 page size 20 Sample Response Note Only HCP Terraform organizations return the two factor conformant and assessments enforced properties json data id hashicorp type organizations attributes external id org Hysjx5eUviuKVCJY created at 2021 08 24T23 10 04 675Z email hashicorp example com session timeout null session remember null 
collaborator auth policy password plan expired false plan expires at null plan is trial false plan is enterprise false plan identifier developer cost estimation enabled true send passing statuses for untriggered speculative plans true aggregated commit status enabled false speculative plan management enabled true allow force delete workspaces true name hashicorp permissions can update true can destroy true can access via teams true can create module true can create team true can create workspace true can manage users true can manage subscription true can manage sso true can update oauth true can update sentinel true can update ssh keys true can update api token true can traverse true can start trial true can update agent pools true can manage tags true can manage varsets true can read varsets true can manage public providers true can create provider true can manage public modules true can manage custom providers false can manage run tasks false can read run tasks false can create project true fair run queuing enabled true saml enabled false owners team saml role id null two factor conformant false assessments enforced false default execution mode remote relationships default agent pool data null oauth tokens links related api v2 organizations hashicorp oauth tokens authentication token links related api v2 organizations hashicorp authentication token entitlement set data id org Hysjx5eUviuKVCJY type entitlement sets links related api v2 organizations hashicorp entitlement set subscription links related api v2 organizations hashicorp subscription links self api v2 organizations hashicorp id hashicorp two type organizations attributes external id org iJ5tr4WgB4WpA1hD created at 2022 01 04T18 57 16 036Z email hashicorp example com session timeout null session remember null collaborator auth policy password plan expired false plan expires at null plan is trial false plan is enterprise false plan identifier free cost estimation enabled false send passing statuses for untriggered speculative plans false aggregated commit status enabled true speculative plan management enabled true allow force delete workspaces false name hashicorp two permissions can update true can destroy true can access via teams true can create module true can create team false can create workspace true can manage users true can manage subscription true can manage sso false can update oauth true can update sentinel false can update ssh keys true can update api token true can traverse true can start trial true can update agent pools false can manage tags true can manage varsets true can read varsets true can manage public providers true can create provider true can manage public modules true can manage custom providers false can manage run tasks false can read run tasks false can create project false fair run queuing enabled true saml enabled false owners team saml role id null two factor conformant false assessments enforced false default execution mode remote relationships default agent pool data null oauth tokens links related api v2 organizations hashicorp two oauth tokens authentication token links related api v2 organizations hashicorp two authentication token entitlement set data id org iJ5tr4WgB4WpA1hD type entitlement sets links related api v2 organizations hashicorp two entitlement set subscription links related api v2 organizations hashicorp two subscription links self api v2 organizations hashicorp two links self https tfe zone b0c8608c ngrok io api v2 organizations page 5Bnumber 5D 1 page 5Bsize 5D 20 first https 
---
page_title: Run Triggers - API Docs - HCP Terraform
description: >-
Use the `/run-triggers` endpoint to manage run triggers. List, show, create, and delete run triggers using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Run Triggers API
Run triggers connect a workspace to one or more source workspaces, so that a successful apply in a source workspace automatically queues a run in the connected workspace.
## Create a Run Trigger
`POST /workspaces/:workspace_id/run-triggers`
| Parameter | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `:workspace_id` | The ID of the workspace to create the run trigger in. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
| Status | Response | Reason |
| ------- | ---------------------------------------------- | ------------------------------------------------------------------------ |
| [201][] | [JSON API document][] (`type: "run-triggers"`) | Successfully created a run trigger |
| [404][] | [JSON API error object][] | Workspace or sourceable not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Permissions
In order to create a run trigger, the user must have admin access to the specified workspace and permission to read runs for the sourceable workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------------------------------ | ------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.relationships.sourceable.data` | object | | A JSON API relationship object that represents the source workspace for the run trigger. This object must have `id` and `type` properties, and the `type` property must be `workspaces` (e.g. `{ "id": "ws-2HRvNs49EWPjDqT1", "type": "workspaces" }`). Obtain workspace IDs from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
### Sample Payload
```json
{
"data": {
"relationships": {
"sourceable": {
"data": {
"id": "ws-2HRvNs49EWPjDqT1",
"type": "workspaces"
}
}
}
}
}
```
### Sample Request
```shell
curl \
--request POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--data @payload.json \
https://app.terraform.io/api/v2/workspaces/ws-XdeUVMWShTesDMME/run-triggers
```
### Sample Response
```json
{
"data": {
"id": "rt-3yVQZvHzf5j3WRJ1",
"type": "run-triggers",
"attributes": {
"workspace-name": "workspace-1",
"sourceable-name": "workspace-2",
"created-at": "2018-09-11T18:21:21.784Z"
},
"relationships": {
"workspace": {
"data": {
"id": "ws-XdeUVMWShTesDMME",
"type": "workspaces"
}
},
"sourceable": {
"data": {
"id": "ws-2HRvNs49EWPjDqT1",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/run-triggers/rt-3yVQZvHzf5j3WRJ1"
}
}
}
```
## List Run Triggers
`GET /workspaces/:workspace_id/run-triggers`
| Parameter | Description |
| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:workspace_id` | The ID of the workspace to list run triggers for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
| Status | Response | Reason |
| ------- | ---------------------------------------------- | -------------------------------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "run-triggers"`) | Request was successful |
| [400][] | [JSON API error object][] | Required parameter `filter[run-trigger][type]` is missing or has been given an invalid value |
| [404][] | [JSON API error object][] | Workspace not found or user unauthorized to perform action |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| --------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `filter[run-trigger][type]` | **Required.** Which type of run triggers to list; valid values are `inbound` or `outbound`. `inbound` run triggers create runs in the specified workspace, and `outbound` run triggers create runs in other workspaces. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 run triggers per page. |
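For example, to list a workspace's outbound run triggers with the brackets already percent-encoded (a hypothetical request that reuses the workspace ID from the samples below):
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/ws-XdeUVMWShTesDMME/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=outbound
```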
### Permissions
In order to list run triggers, the user must have permission to read runs for the specified workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/workspaces/ws-XdeUVMWShTesDMME/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound
```
### Sample Response
```json
{
"data": [
{
"id": "rt-WygcwSBuYaQWrM39",
"type": "run-triggers",
"attributes": {
"workspace-name": "workspace-1",
"sourceable-name": "workspace-2",
"created-at": "2018-09-11T18:21:21.784Z"
},
"relationships": {
"workspace": {
"data": {
"id": "ws-XdeUVMWShTesDMME",
"type": "workspaces"
}
},
"sourceable": {
"data": {
"id": "ws-2HRvNs49EWPjDqT1",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/run-triggers/rt-WygcwSBuYaQWrM39"
}
},
{
"id": "rt-8F5JFydVYAmtTjET",
"type": "run-triggers",
"attributes": {
"workspace-name": "workspace-1",
"sourceable-name": "workspace-3",
"created-at": "2018-09-11T18:21:21.784Z"
},
"relationships": {
"workspace": {
"data": {
"id": "ws-XdeUVMWShTesDMME",
"type": "workspaces"
}
},
"sourceable": {
"data": {
"id": "ws-BUHBEM97xboT8TVz",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/run-triggers/rt-8F5JFydVYAmtTjET"
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/workspaces/ws-xdiJLyGpCugbFDE1/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound&page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/workspaces/ws-xdiJLyGpCugbFDE1/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound&page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/workspaces/ws-xdiJLyGpCugbFDE1/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound&page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 2
}
}
}
```
## Show a Run Trigger
`GET /run-triggers/:run_trigger_id`
| Parameter | Description |
| ----------------- | ------------------------------------------------------------------------------------ |
| `:run_trigger_id` | The ID of the run trigger to show. Send a `GET` request to the `run-triggers` endpoint to find IDs. Refer to [List Run Triggers](#list-run-triggers) for details. |
| Status | Response | Reason |
| ------- | ---------------------------------------------- | ------------------------------------------------------------ |
| [200][] | [JSON API document][] (`type: "run-triggers"`) | The request was successful |
| [404][] | [JSON API error object][] | Run trigger not found or user unauthorized to perform action |
### Permissions
In order to show a run trigger, the user must have permission to read runs for either the workspace or sourceable workspace of the specified run trigger. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/run-triggers/rt-3yVQZvHzf5j3WRJ1
```
### Sample Response
```json
{
"data": {
"id": "rt-3yVQZvHzf5j3WRJ1",
"type": "run-triggers",
"attributes": {
"workspace-name": "workspace-1",
"sourceable-name": "workspace-2",
"created-at": "2018-09-11T18:21:21.784Z"
},
"relationships": {
"workspace": {
"data": {
"id": "ws-XdeUVMWShTesDMME",
"type": "workspaces"
}
},
"sourceable": {
"data": {
"id": "ws-2HRvNs49EWPjDqT1",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/run-triggers/rt-3yVQZvHzf5j3WRJ1"
}
}
}
```
## Delete a Run Trigger
`DELETE /run-triggers/:run_trigger_id`
| Parameter | Description |
| ----------------- | -------------------------------------------------------------------------------------- |
| `:run_trigger_id` | The ID of the run trigger to delete. Send a `GET` request to the `run-triggers` endpoint to find IDs. Refer to [List Run Triggers](#list-run-triggers) for details. |
| Status | Response | Reason |
| ------- | ------------------------- | ------------------------------------------------------------ |
| [204][] | No Content | Successfully deleted the run trigger |
| [404][] | [JSON API error object][] | Run trigger not found or user unauthorized to perform action |
### Permissions
In order to delete a run trigger, the user must have admin access to the specified workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
### Sample Request
```shell
curl \
--request DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/run-triggers/rt-3yVQZvHzf5j3WRJ1
```
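A successful delete returns [204][] with an empty body. One way to confirm the run trigger is gone is to request it again and check for a [404][] status code; a minimal sketch using curl's `--write-out` flag:
```shell
# Prints only the HTTP status code; expect 404 once the run trigger is deleted
curl \
  --silent --output /dev/null --write-out "%{http_code}\n" \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/run-triggers/rt-3yVQZvHzf5j3WRJ1
```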
## Available Related Resources
The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). These includes respect read permissions: if you do not have access to read a related resource, it is not returned. The following resource types are available:
* `workspace` - The full workspace object.
* `sourceable` - The full source workspace object.
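For example, a hypothetical request that embeds both workspace records in a Show response:
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/run-triggers/rt-3yVQZvHzf5j3WRJ1?include=workspace,sourceable
```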
---
page_title: Organization Memberships - API Docs - HCP Terraform
description: >-
Use the `/organization-memberships` endpoint to manage user membership within an organization. Invite users, and list, show, and remove memberships using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Organization Memberships API
Users are added to organizations by inviting them to join. Once accepted, they become members of the organization. The Organization Membership resource represents this membership.
You can invite users who already have an account, as well as new users. If the user has an existing account with the same email address used to invite them, they can reuse the same login.
-> **Note:** Once a user is a member of the organization, you can manage their team memberships using [the Team Membership API](/terraform/cloud-docs/api-docs/team-members).
## Invite a User to an Organization
`POST /organizations/:organization_name/organization-memberships`
| Parameter | Description |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization the user will be invited to join. The inviting user must have permission to manage organization memberships. |
-> **Note:** Organization membership management is restricted to members of the owners team, the owners [team API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens), the [organization API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens), and users or teams with one of the [Team Management](/terraform/cloud-docs/users-teams-organizations/permissions#team-management-permissions) permissions.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Status | Response | Reason |
| ------- | ------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] | Successfully invited the user |
| [400][] | [JSON API error object][] | Unable to invite user due to organization limits |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Unable to invite user due to validation errors |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| --------------------------------- | -------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"organization-memberships"`. |
| `data.attributes.email` | string | | The email address of the user to be invited. |
| `data.relationships.teams.data[]` | array\[object] | | A list of resource identifier objects that defines which teams the invited user will be a member of. These objects must contain `id` and `type` properties, and the `type` property must be `teams` (e.g. `{ "id": "team-GeLZkdnK6xAVjA5H", "type": "teams" }`). Obtain team IDs from the [List Teams](/terraform/cloud-docs/api-docs/teams#list-teams) endpoint. All users must be added to at least one team. |
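If you need to look up a team ID for this payload, one approach is to call the [List Teams](/terraform/cloud-docs/api-docs/teams#list-teams) endpoint and extract an ID from the response. A minimal sketch, assuming `jq` is installed and that the first team returned is the one you want:
```shell
# Fetch the organization's teams and capture the first team's ID (e.g. "team-GeLZkdnK6xAVjA5H")
TEAM_ID=$(
  curl --silent \
    --header "Authorization: Bearer $TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    https://app.terraform.io/api/v2/organizations/my-organization/teams |
  jq -r '.data[0].id'
)
```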
### Sample Payload
```json
{
"data": {
"attributes": {
"email": "[email protected]"
},
"relationships": {
"teams": {
"data": [
{
"type": "teams",
"id": "team-GeLZkdnK6xAVjA5H"
}
]
}
},
"type": "organization-memberships"
}
}
```
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/organization-memberships
```
### Sample Response
```json
{
"data": {
"id": "ou-nX7inDHhmC3quYgy",
"type": "organization-memberships",
"attributes": {
"status": "invited"
},
"relationships": {
"teams": {
"data": [
{
"id": "team-GeLZkdnK6xAVjA5H",
"type": "teams"
}
]
},
"user": {
"data": {
"id": "user-J8oxGmRk5eC2WLfX",
"type": "users"
}
},
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
}
},
"included": [
{
"id": "user-J8oxGmRk5eC2WLfX",
"type": "users",
"attributes": {
"username": null,
"is-service-account": false,
"auth-method": "hcp_sso",
"avatar-url": "https://www.gravatar.com/avatar/55502f40dc8b7c769880b10874abc9d0?s=100&d=mm",
"two-factor": {
"enabled": false,
"verified": false
},
"email": "[email protected]",
"permissions": {
"can-create-organizations": true,
"can-change-email": true,
"can-change-username": true,
"can-manage-user-tokens": false
}
},
"relationships": {
"authentication-tokens": {
"links": {
"related": "/api/v2/users/user-J8oxGmRk5eC2WLfX/authentication-tokens"
}
}
},
"links": {
"self": "/api/v2/users/user-J8oxGmRk5eC2WLfX"
}
}
]
}
```
## List Memberships for an Organization
`GET /organizations/:organization_name/organization-memberships`
| Parameter | Description |
| -------------------- | -------------------------------------------------------- |
| `:organization_name` | The name of the organization to list the memberships of. |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `q` | **Optional.** A search query string. Organization memberships are searchable by user name and email. |
| `filter[status]` | **Optional.** If specified, restricts results to those with the matching status value. Valid values are `invited` and `active`. |
| `filter[email]` | **Optional.** If specified, restricts results to those with a matching user email address. If multiple comma separated values are specified, results matching any of the values are returned. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 users per page. |
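For example, to list only pending invitations, filter on the `invited` status (note the percent-encoded brackets):
```shell
$ curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/my-organization/organization-memberships?filter%5Bstatus%5D=invited
```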
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/organization-memberships
```
### Sample Response
```json
{
"data": [
{
"id": "ou-tTJph1AQVK5ZmdND",
"type": "organization-memberships",
"attributes": {
"status": "active"
},
"relationships": {
"teams": {
"data": [
{
"id": "team-yUrEehvfG4pdmSjc",
"type": "teams"
}
]
},
"user": {
"data": {
"id": "user-vaQqszES9JnuK4eB",
"type": "users"
}
},
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
}
},
{
"id": "ou-D6HPYFt4GzeBt3gB",
"type": "organization-memberships",
"attributes": {
"status": "active"
},
"relationships": {
"teams": {
"data": [
{
"id": "team-yUrEehvfG4pdmSjc",
"type": "teams"
}
]
},
"user": {
"data": {
"id": "user-oqCgH7NgTn95jTGc",
"type": "users"
}
},
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
}
},
{
"id": "ou-x1E2eBwYwusLDC7h",
"type": "organization-memberships",
"attributes": {
"status": "invited"
},
"relationships": {
"teams": {
"data": [
{
"id": "team-yUrEehvfG4pdmSjc",
"type": "teams"
}
]
},
"user": {
"data": {
"id": "user-UntUdBTHsVRQMzC8",
"type": "users"
}
},
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/organizations/my-organization/organization-memberships?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/organizations/my-organization/organization-memberships?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/organizations/my-organization/organization-memberships?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"status-counts": {
"total": 3,
"active": 2,
"invited": 1
},
"pagination": {
"current-page": 1,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 3
}
}
}
```
## List User's Own Memberships
`GET /organization-memberships`
This endpoint lists the organization memberships for the currently authenticated user across all of their organizations.
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organization-memberships
```
### Sample Response
```json
{
"data": [
{
"id": "ou-VgJgfbDVN3APUm2F",
"type": "organization-memberships",
"attributes": {
"status": "invited"
},
"relationships": {
"teams": {
"data": [
{
"id": "team-4QrJKzxB3J5N4cJc",
"type": "teams"
}
]
},
"user": {
"data": {
"id": "user-vaQqszES9JnuK4eB",
"type": "users"
}
},
"organization": {
"data": {
"id": "acme-corp",
"type": "organizations"
}
}
}
},
{
"id": "ou-tTJph1AQVK5ZmdND",
"type": "organization-memberships",
"attributes": {
"status": "active"
},
"relationships": {
"teams": {
"data": [
{
"id": "team-yUrEehvfG4pdmSjc",
"type": "teams"
}
]
},
"user": {
"data": {
"id": "user-vaQqszES9JnuK4eB",
"type": "users"
}
},
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
}
}
]
}
```
## Show a Membership
`GET /organization-memberships/:organization_membership_id`
| Parameter | Description |
| ----------------------------- | --------------------------- |
| `:organization_membership_id` | The ID of the organization membership to show. Obtain this from the [List Memberships for an Organization](#list-memberships-for-an-organization) endpoint. |
| Status | Response | Reason |
| ------- | ---------------------------------------------------------- | ------------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "organization-memberships"`) | The request was successful |
| [404][] | [JSON API error object][] | Organization membership not found, or user unauthorized to perform action |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organization-memberships/ou-kit6GaMo3zPGCzWb
```
### Sample Response
```json
{
"data": {
"id": "ou-kit6GaMo3zPGCzWb",
"type": "organization-memberships",
"attributes": {
"status": "active"
},
"relationships": {
"teams": {
"data": [
{
"id": "team-97LkM7QciNkwb2nh",
"type": "teams"
}
]
},
"user": {
"data": {
"id": "user-hn6v2WK1naDpGadd",
"type": "users"
}
},
"organization": {
"data": {
"id": "hashicorp",
"type": "organizations"
}
}
}
}
}
```
## Remove User from Organization
`DELETE /organization-memberships/:organization_membership_id`
| Parameter | Description |
| ----------------------------- | --------------------------- |
| `:organization_membership_id` | The ID of the organization membership to remove. Obtain this from the [List Memberships for an Organization](#list-memberships-for-an-organization) endpoint. |
| Status | Response | Reason |
| ------- | ------------------------- | -------------------------------------------------------------------------------------- |
| [204][] | Empty body | Successfully removed the user from the organization |
| [403][] | [JSON API error object][] | Unable to remove the user: you cannot remove yourself from organizations which you own |
| [404][] | [JSON API error object][] | Organization membership not found, or user unauthorized to perform action |
### Sample Request
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organization-memberships/ou-tTJph1AQVK5ZmdND
```
## Available Related Resources
The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:
* `user` - The user associated with the membership.
* `teams` - Teams the user is a member of.
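For example, a hypothetical request that embeds the user and team records in a Show response:
```shell
$ curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organization-memberships/ou-kit6GaMo3zPGCzWb?include=user,teams
```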
---
page_title: IP Ranges - API Docs - HCP Terraform
tfc_only: true
description: >-
Use the `/meta/ip-ranges` endpoint to query HCP Terraform's IP ranges. Get a list of the IP ranges used by HCP Terraform using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[304]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/304
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[If-Modified-Since]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Modified-Since
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
[CIDR Notation]: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation
[run task requests]: /terraform/cloud-docs/api-docs/run-tasks/run-tasks-integration#run-task-request
# IP Ranges API
The IP Ranges API provides a list of HCP Terraform's IP ranges. For more information, refer to [HCP Terraform IP Ranges](/terraform/cloud-docs/architectural-details/ip-ranges).
## IP Ranges Payload
| Name | Type | Description |
| --------------- | ----- | ------------------------------------------------------------------------------------------------ |
| `api` | array | List of IP ranges in [CIDR notation] used for connections from user site to HCP Terraform APIs |
| `notifications` | array | List of IP ranges in [CIDR notation] used for notifications and outbound [run task requests] |
| `sentinel` | array | List of IP ranges in [CIDR notation] used for outbound requests from Sentinel policies |
| `vcs` | array | List of IP ranges in [CIDR notation] used for connecting to VCS providers |
-> **Note:** The IP ranges for each feature returned by the IP Ranges API may overlap. Additionally, these published ranges do not currently allow for execution of Terraform runs against local resources.
-> **Note:** Under normal circumstances, HashiCorp will publish any expected changes to HCP Terraform's IP ranges at least 24 hours in advance of implementing them. This should allow sufficient time for users to update any connected systems to reflect the changes. In the event of an emergency outage or failover operation, it may not be possible to pre-publish these changes.
## Get IP Ranges
-> **Note:** The IP Ranges API does not require authentication.
-> **Note:** This endpoint supports the [If-Modified-Since][] HTTP request header.
`GET /meta/ip-ranges`
| Status | Response | Reason |
| ------- | ------------------ | -------------------------------------------------------------------------------------------------------------- |
| [200][] | `application/json` | The request was successful |
| [304][] | empty body | The request was successful; IP ranges were not modified since the specified date in `If-Modified-Since` header |
### Sample Request
```shell
curl \
--request GET \
-H "If-Modified-Since: Tue, 26 May 2020 15:10:05 GMT" \
https://app.terraform.io/api/meta/ip-ranges
```
### Sample Response
```json
{
"api": [
"75.2.98.97/32",
"99.83.150.238/32"
],
"notifications": [
"10.0.0.1/32",
"192.168.0.1/32",
"172.16.0.1/32"
],
"sentinel": [
"10.0.0.1/32",
"192.168.0.1/32",
"172.16.0.1/32"
],
"vcs": [
"10.0.0.1/32",
"192.168.0.1/32",
"172.16.0.1/32"
]
}
``` | terraform | page title IP Ranges API Docs HCP Terraform tfc only true description Use the meta ip ranges endpoint to query HCP Terraform s IP ranges Get a list of the IP ranges used by HCP Terraform using the HTTP API 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 202 https developer mozilla org en US docs Web HTTP Status 202 204 https developer mozilla org en US docs Web HTTP Status 204 304 https developer mozilla org en US docs Web HTTP Status 304 400 https developer mozilla org en US docs Web HTTP Status 400 401 https developer mozilla org en US docs Web HTTP Status 401 403 https developer mozilla org en US docs Web HTTP Status 403 404 https developer mozilla org en US docs Web HTTP Status 404 409 https developer mozilla org en US docs Web HTTP Status 409 412 https developer mozilla org en US docs Web HTTP Status 412 422 https developer mozilla org en US docs Web HTTP Status 422 429 https developer mozilla org en US docs Web HTTP Status 429 500 https developer mozilla org en US docs Web HTTP Status 500 504 https developer mozilla org en US docs Web HTTP Status 504 If Modified Since https developer mozilla org en US docs Web HTTP Headers If Modified Since JSON API document terraform cloud docs api docs json api documents JSON API error object https jsonapi org format error objects CIDR Notation https en wikipedia org wiki Classless Inter Domain Routing CIDR notation run task requests terraform cloud docs api docs run tasks run tasks integration run task request IP Ranges API IP Ranges provides a list of HCP Terraform s IP ranges For more information about HCP Terraform s IP ranges view our documentation about HCP Terraform IP Ranges terraform cloud docs architectural details ip ranges IP Ranges Payload Name Type Description api array List of IP ranges in CIDR notation used for connections from user site to HCP Terraform APIs notifications array List of IP ranges in CIDR notation used for notifications and outbound run task requests sentinel array List of IP ranges in CIDR notation used for outbound requests from Sentinel policies vcs array List of IP ranges in CIDR notation used for connecting to VCS providers Note The IP ranges for each feature returned by the IP Ranges API may overlap Additionally these published ranges do not currently allow for execution of Terraform runs against local resources Note Under normal circumstances HashiCorp will publish any expected changes to HCP Terraform s IP ranges at least 24 hours in advance of implementing them This should allow sufficient time for users to update any connected systems to reflect the changes In the event of an emergency outage or failover operation it may not be possible to pre publish these changes Get IP Ranges Note The IP Ranges API does not require authentication Note This endpoint supports the If Modified Since HTTP request header GET meta ip ranges Status Response Reason 200 application json The request was successful 304 empty body The request was successful IP ranges were not modified since the specified date in If Modified Since header Sample Request shell curl request GET H If Modified Since Tue 26 May 2020 15 10 05 GMT https app terraform io api meta ip ranges Sample Response json api 75 2 98 97 32 99 83 150 238 32 notifications 10 0 0 1 32 192 168 0 1 32 172 16 0 1 32 sentinel 10 0 0 1 32 192 168 0 1 32 172 16 0 1 32 vcs 10 0 0 1 32 192 168 0 1 32 172 16 0 1 32 |
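
One way a client might act on the `If-Modified-Since` support is to refresh a cached copy only when the ranges have actually changed. The following is a minimal sketch, not part of the API itself; it assumes a POSIX shell with `curl`, and the date and file paths are illustrative:

```shell
# Fetch the ranges only if they changed since the given date.
# curl writes the body to a temp file and prints the HTTP status code.
status=$(curl --silent --output /tmp/ip-ranges.json --write-out "%{http_code}" \
  --header "If-Modified-Since: Tue, 26 May 2020 15:10:05 GMT" \
  https://app.terraform.io/api/meta/ip-ranges)

if [ "$status" = "200" ]; then
  mv /tmp/ip-ranges.json ip-ranges.json  # new ranges were returned
elif [ "$status" = "304" ]; then
  echo "IP ranges unchanged; keeping the cached copy"
fi
```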
---
page_title: State Version Outputs - API Docs - HCP Terraform
description: >-
Use the `/state-version-outputs` endpoint to access output values from a Terraform state version. List and show state version outputs, and show current state version outputs for a workspace using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[503]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/503
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# State Version Outputs API
State version outputs are the [output values](/terraform/language/values/outputs) from a Terraform state file. They include
the name and value of the output, as well as a `sensitive` boolean that indicates whether the value should be hidden by default in UIs.
~> **Important:** The state version outputs for a state version (as well as some other information about it) might be **populated asynchronously** by HCP Terraform. These values might not be immediately available after the state version is uploaded. The `resources-processed` property on the associated [state version object](/terraform/cloud-docs/api-docs/state-versions) indicates whether or not HCP Terraform has finished any necessary asynchronous processing. If you need to use these values, be sure to wait for `resources-processed` to become `true` before assuming that the values are in fact empty.
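
For example, a client can poll the state version itself and read its outputs only once processing has finished. A minimal sketch, assuming `jq` is available; the state version ID is illustrative:

```shell
# Poll until HCP Terraform finishes asynchronous processing of the state version.
# `jq --exit-status` exits non-zero while `resources-processed` is still false.
until curl --silent \
    --header "Authorization: Bearer $TOKEN" \
    https://app.terraform.io/api/v2/state-versions/sv-SDboVZC8TCxXEneJ |
  jq --exit-status '.data.attributes."resources-processed"' > /dev/null
do
  sleep 5
done
```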
## List State Version Outputs
`GET /state-versions/:state_version_id/outputs`
Listing state version outputs requires permission to read state outputs for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
| Parameter | Description |
| ------------------- | ------------------------------------ |
| `:state_version_id` | The ID of the desired state version. |
| Status | Response | Reason |
| ------- | ------------------------- | -------------------------------------------------------------------- |
| [200][] | [JSON API document][] | Successfully returned a list of outputs for the given state version. |
| [404][] | [JSON API error object][] | State version not found, or user unauthorized to perform action. |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| -------------- | ------------------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 state version outputs per page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/state-versions/sv-SDboVZC8TCxXEneJ/outputs
```
### Sample Response
```json
{
"data": [
{
"id": "wsout-xFAmCR3VkBGepcee",
"type": "state-version-outputs",
"attributes": {
"name": "fruits",
"sensitive": false,
"type": "array",
"value": [
"apple",
"strawberry",
"blueberry",
"rasberry"
],
"detailed_type": [
"tuple",
[
"string",
"string",
"string",
"string"
]
]
},
"links": {
"self": "/api/v2/state-version-outputs/wsout-xFAmCR3VkBGepcee"
}
},
{
"id": "wsout-vspuB754AUNkfxwo",
"type": "state-version-outputs",
"attributes": {
"name": "vegetables",
"sensitive": false,
"type": "array",
"value": [
"carrots",
"potato",
"tomato",
"onions"
],
"detailed_type": [
"tuple",
[
"string",
"string",
"string",
"string"
]
]
},
"links": {
"self": "/api/v2/state-version-outputs/wsout-vspuB754AUNkfxwo"
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/state-versions/sv-SVB5wMrDL1XUgJ4G/outputs?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/state-versions/sv-SVB5wMrDL1XUgJ4G/outputs?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/state-versions/sv-SVB5wMrDL1XUgJ4G/outputs?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 2
}
}
}
```
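
If you only need a map of output names to values, the list response is easy to post-process client-side. A sketch assuming `jq`; the state version ID is illustrative:

```shell
# Reduce the JSON:API list into a simple {name: value} object.
curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/state-versions/sv-SDboVZC8TCxXEneJ/outputs |
jq '.data | map({(.attributes.name): .attributes.value}) | add'
```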
## Show a State Version Output
`GET /state-version-outputs/:state_version_output_id`
| Parameter | Description |
| -------------------------- | ------------------------------------------- |
| `:state_version_output_id` | The ID of the desired state version output. |
State version output IDs must be obtained from a [state version object](/terraform/cloud-docs/api-docs/state-versions). When requesting a state version, you can optionally add `?include=outputs` to include full details for all of that state version's outputs.
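
For example (the state version ID is illustrative):

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  "https://app.terraform.io/api/v2/state-versions/sv-SDboVZC8TCxXEneJ?include=outputs"
```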
| Status | Response | Reason |
| ------- | ------------------------------------------------------- | ------------------------------------------------------ |
| [200][] | [JSON API document][] (`type: "state-version-outputs"`) | Success. |
| [404][] | [JSON API error object][] | State version output not found or user not authorized. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/state-version-outputs/wsout-J2zM24JPFbfc7bE5
```
### Sample Response
```json
{
"data": {
"id": "wsout-J2zM24JPFbfc7bE5",
"type": "state-version-outputs",
"attributes": {
"name": "flavor",
"sensitive": false,
"type": "string",
"value": "Peanut Butter",
"detailed-type": "string"
},
"links": {
"self": "/api/v2/state-version-outputs/wsout-J2zM24JPFbfc7bE5"
}
}
}
```
## Show Current State Version Outputs for a Workspace
This endpoint allows organization users who do not have permission to read state versions to fetch the latest [output values](/terraform/language/values/outputs) for a workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
-> **Note:** Sensitive values are not revealed and will be returned as `null`. To fetch an output including sensitive values see [Show a State Version Output](/terraform/cloud-docs/api-docs/state-version-outputs#show-a-state-version-output).
`GET /workspaces/:workspace_id/current-state-version-outputs`
| Parameter | Description |
| --------------- | -------------------------------------------- |
| `:workspace_id` | The ID of the workspace to read outputs from. |
| Status | Response | Reason |
| ------- | ------------------------------------------------------- | ------------------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "state-version-outputs"`) | Successfully returned a list of outputs for the given workspace. |
| [404][] | [JSON API error object][] | State version outputs not found or user not authorized. |
| [503][] | [JSON API error object][] | State version outputs are being processed and are not ready. Retry the request. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/workspaces/ws-G4zM299PFbfc10E5/current-state-version-outputs
```
### Sample Response
```json
{
"data": [
{
"id": "wsout-J2zM24JPFbfc7bE5",
"type": "state-version-outputs",
"attributes": {
"name": "flavor",
"sensitive": false,
"type": "string",
"value": "Peanut Butter",
"detailed-type": "string"
},
"links": {
"self": "/api/v2/state-version-outputs/wsout-J2zM24JPFbfc7bE5"
}
},
{
"id": "wsout-FLzM23Gcd5f37bE5",
"type": "state-version-outputs",
"attributes": {
"name": "recipe",
"sensitive": true,
"type": "string",
"value": "Don Douglas' Peanut Butter Frenzy",
"detailed-type": "string"
},
"links": {
"self": "/api/v2/state-version-outputs/wsout-FLzM23Gcd5f37bE5"
}
}
]
}
```
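
Given a response like the one above, a client can separate usable values from the `null` placeholders for sensitive outputs. A sketch assuming `jq`:

```shell
# Build a {name: value} object from the non-sensitive outputs only.
curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/workspaces/ws-G4zM299PFbfc10E5/current-state-version-outputs |
jq '[.data[] | select(.attributes.sensitive | not) | {(.attributes.name): .attributes.value}] | add'
```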
---
page_title: Users - API Docs - HCP Terraform
description: >-
Use the `/users` endpoint to query user details. Show details for a user using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Users API
HCP Terraform's user objects do not contain any identifying information about a user other than their HCP Terraform username and avatar image. They are intended for displaying names and avatars in contexts that refer to a user by ID, such as lists of team members or the details of a run. Most of those contexts can already include user objects via an `?include` parameter, so you shouldn't usually need to make a separate call to this endpoint.
## Show a User
Shows details for a given user.
`GET /users/:user_id`
| Parameter | Description |
| ---------- | --------------------------- |
| `:user_id` | The ID of the desired user. |
To find the ID that corresponds to a given username, you can request a [team object](/terraform/cloud-docs/api-docs/teams) for a team that user belongs to, specify `?include=users` in the request, and look for the user's name in the included list of user objects.
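
For example (a sketch: the team ID and username are illustrative, and `jq` is assumed):

```shell
# Look up the user ID for username "admin" among a team's members.
curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  "https://app.terraform.io/api/v2/teams/team-6p5jTwJQXwqZBncC?include=users" |
jq --raw-output '.included[] | select(.attributes.username == "admin") | .id'
```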
| Status | Response | Reason |
| ------- | --------------------------------------- | ------------------------------------------------ |
| [200][] | [JSON API document][] (`type: "users"`) | The request was successful |
| [401][] | [JSON API error object][] | Unauthorized |
| [404][] | [JSON API error object][] | User not found, or unauthorized to view the user |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
https://app.terraform.io/api/v2/users/user-MA4GL63FmYRpSFxa
```
### Sample Response
```json
{
"data": {
"id": "user-MA4GL63FmYRpSFxa",
"type": "users",
"attributes": {
"username": "admin",
"is-service-account": false,
"auth-method": "hcp_sso",
"avatar-url": "https://www.gravatar.com/avatar/fa1f0c9364253d351bf1c7f5c534cd40?s=100&d=mm",
"v2-only": true,
"permissions": {
"can-create-organizations": false,
"can-change-email": true,
"can-change-username": true
}
},
"relationships": {
"authentication-tokens": {
"links": {
"related": "/api/v2/users/user-MA4GL63FmYRpSFxa/authentication-tokens"
}
}
},
"links": {
"self": "/api/v2/users/user-MA4GL63FmYRpSFxa"
}
}
}
``` | terraform | page title Users API Docs HCP Terraform description Use the users endpoint to query user details Show details for a user using the HTTP API 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 202 https developer mozilla org en US docs Web HTTP Status 202 204 https developer mozilla org en US docs Web HTTP Status 204 400 https developer mozilla org en US docs Web HTTP Status 400 401 https developer mozilla org en US docs Web HTTP Status 401 403 https developer mozilla org en US docs Web HTTP Status 403 404 https developer mozilla org en US docs Web HTTP Status 404 409 https developer mozilla org en US docs Web HTTP Status 409 412 https developer mozilla org en US docs Web HTTP Status 412 422 https developer mozilla org en US docs Web HTTP Status 422 429 https developer mozilla org en US docs Web HTTP Status 429 500 https developer mozilla org en US docs Web HTTP Status 500 504 https developer mozilla org en US docs Web HTTP Status 504 JSON API document terraform cloud docs api docs json api documents JSON API error object https jsonapi org format error objects Users API HCP Terraform s user objects do not contain any identifying information about a user other than their HCP Terraform username and avatar image they are intended for displaying names and avatars in contexts that refer to a user by ID like lists of team members or the details of a run Most of these contexts can already include user objects via an include parameter so you shouldn t usually need to make a separate call to this endpoint Show a User Shows details for a given user GET users user id Parameter Description user id The ID of the desired user To find the ID that corresponds to a given username you can request a team object terraform cloud docs api docs teams for a team that user belongs to specify include users in the request and look for the user s name in the included list of user objects Status Response Reason 200 JSON API document type users The request was successful 401 JSON API error object Unauthorized 404 JSON API error object User not found or unauthorized to view the user Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request GET https app terraform io api v2 users user MA4GL63FmYRpSFxa Sample Response json data id user MA4GL63FmYRpSFxa type users attributes username admin is service account false auth method hcp sso avatar url https www gravatar com avatar fa1f0c9364253d351bf1c7f5c534cd40 s 100 d mm v2 only true permissions can create organizations false can change email true can change username true relationships authentication tokens links related api v2 users user MA4GL63FmYRpSFxa authentication tokens links self api v2 users user MA4GL63FmYRpSFxa |
---
page_title: User Tokens - API Docs - HCP Terraform
description: >-
Use the `/authentication-tokens` endpoint to manage user-specific API tokens. List, show, create, and destroy user tokens using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# User Tokens API
## List User Tokens
`GET /users/:user_id/authentication-tokens`
| Parameter | Description |
| ---------- | ------------------- |
| `:user_id` | The ID of the User. |
Use the [Account API](/terraform/cloud-docs/api-docs/account) to find your own user ID.
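
For example, your own user ID is the `id` of the object returned by `GET /account/details`. A sketch assuming `jq`:

```shell
curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/account/details |
jq --raw-output '.data.id'
```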
The objects returned by this endpoint only contain metadata, and do not include the secret text of any authentication tokens. A token is only shown upon creation, and cannot be recovered later.
-> **Note:** You must access this endpoint with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens), and it will only return useful data for that token's user account.
| Status | Response | Reason |
| ------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "authentication-tokens"`) | The request was successful |
| [200][] | Empty [JSON API document][] (no type) | User has no authentication tokens, or request was made by someone other than the user |
| [404][] | [JSON API error object][] | User not found |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If neither pagination query parameter is provided, the endpoint is not paginated and returns all results.
| Parameter | Description |
| -------------- | --------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 user tokens per page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
https://app.terraform.io/api/v2/users/user-MA4GL63FmYRpSFxa/authentication-tokens
```
### Sample Response
```json
{
"data": [
{
"id": "at-QmATJea6aWj1xR2t",
"type": "authentication-tokens",
"attributes": {
"created-at": "2018-11-06T22:56:10.203Z",
"last-used-at": null,
"description": null,
"token": null,
"expired-at": null
},
"relationships": {
"created-by": {
"data": null
}
}
},
{
"id": "at-6yEmxNAhaoQLH1Da",
"type": "authentication-tokens",
"attributes": {
"created-at": "2018-11-25T22:31:30.624Z",
"last-used-at": "2018-11-26T20:27:54.931Z",
"description": "api",
"token": null,
"expired-at": "2023-04-06T12:00:00.000Z"
},
"relationships": {
"created-by": {
"data": {
"id": "user-MA4GL63FmYRpSFxa",
"type": "users"
}
}
}
}
]
}
```
## Show a User Token
`GET /authentication-tokens/:id`
| Parameter | Description |
| --------- | ------------------------- |
| `:id` | The ID of the User Token. |
The objects returned by this endpoint only contain metadata, and do not include the secret text of any authentication tokens. A token is only shown upon creation, and cannot be recovered later.
-> **Note:** You must access this endpoint with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens), and it will only return useful data for that token's user account.
| Status | Response | Reason |
| ------- | ------------------------------------------------------- | ------------------------------------------------------------ |
| [200][] | [JSON API document][] (`type: "authentication-tokens"`) | The request was successful |
| [404][] | [JSON API error object][] | User Token not found, or unauthorized to view the User Token |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
https://app.terraform.io/api/v2/authentication-tokens/at-6yEmxNAhaoQLH1Da
```
### Sample Response
```json
{
"data": {
"id": "at-6yEmxNAhaoQLH1Da",
"type": "authentication-tokens",
"attributes": {
"created-at": "2018-11-25T22:31:30.624Z",
"last-used-at": "2018-11-26T20:34:59.487Z",
"description": "api",
"token": null,
"expired-at": "2023-04-06T12:00:00.000Z"
},
"relationships": {
"created-by": {
"data": {
"id": "user-MA4GL63FmYRpSFxa",
"type": "users"
}
}
}
}
}
```
## Create a User Token
`POST /users/:user_id/authentication-tokens`
| Parameter | Description |
| ---------- | ------------------- |
| `:user_id` | The ID of the User. |
Use the [Account API](/terraform/cloud-docs/api-docs/account) to find your own user ID.
This endpoint returns the secret text of the created authentication token. A token is only shown upon creation, and cannot be recovered later.
-> **Note:** You must access this endpoint with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens), and it will only create new tokens for that token's user account.
| Status | Response | Reason |
| ------- | ------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "authentication-tokens"`) | The request was successful |
| [404][] | [JSON API error object][] | User not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [500][] | [JSON API error object][] | Failure during User Token creation |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"authentication-tokens"`. |
| `data.attributes.description` | string | | The description for the User Token. |
| `data.attributes.expired-at` | string | `null` | The UTC date and time that the User Token will expire, in ISO 8601 format. If omitted or set to `null` the token will never expire. |
### Sample Payload
```json
{
"data": {
"type": "authentication-tokens",
"attributes": {
"description":"api",
"expired-at": "2023-04-06T12:00:00.000Z"
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/users/user-MA4GL63FmYRpSFxa/authentication-tokens
```
### Sample Response
```json
{
"data": {
"id": "at-MKD1X3i4HS3AuD41",
"type": "authentication-tokens",
"attributes": {
"created-at": "2018-11-26T20:48:35.054Z",
"last-used-at": null,
"description": "api",
"token": "6tL24nM38M7XWQ.atlasv1.KmWckRfzeNmUVFNvpvwUEChKaLGznCSD6fPf3VPzqMMVzmSxFU0p2Ibzpo2h5eTGwPU",
"expired-at": "2023-04-06T12:00:00.000Z"
},
"relationships": {
"created-by": {
"data": {
"id": "user-MA4GL63FmYRpSFxa",
"type": "users"
}
}
}
}
}
```
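
Because the secret is only shown once, a client typically captures it from the creation response immediately. A sketch assuming `jq` and the payload file shown above:

```shell
TFC_USER_TOKEN=$(curl --silent \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/users/user-MA4GL63FmYRpSFxa/authentication-tokens |
  jq --raw-output '.data.attributes.token')
```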
## Destroy a User Token
`DELETE /authentication-tokens/:id`
| Parameter | Description |
| --------- | ------------------------------------ |
| `:id` | The ID of the User Token to destroy. |
-> **Note:** You must access this endpoint with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens), and it will only delete tokens for that token's user account.
| Status | Response | Reason |
| ------- | ------------------------- | ------------------------------------------------------------ |
| [204][] | Empty response | The User Token was successfully destroyed |
| [404][] | [JSON API error object][] | User Token not found, or user unauthorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/authentication-tokens/at-6yEmxNAhaoQLH1Da
``` | terraform | page title User Tokens API Docs HCP Terraform description Use the authentication tokens endpoint to manage user specific API tokens List show create and destroy user tokens using the HTTP API 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 202 https developer mozilla org en US docs Web HTTP Status 202 204 https developer mozilla org en US docs Web HTTP Status 204 400 https developer mozilla org en US docs Web HTTP Status 400 401 https developer mozilla org en US docs Web HTTP Status 401 403 https developer mozilla org en US docs Web HTTP Status 403 404 https developer mozilla org en US docs Web HTTP Status 404 409 https developer mozilla org en US docs Web HTTP Status 409 412 https developer mozilla org en US docs Web HTTP Status 412 422 https developer mozilla org en US docs Web HTTP Status 422 429 https developer mozilla org en US docs Web HTTP Status 429 500 https developer mozilla org en US docs Web HTTP Status 500 504 https developer mozilla org en US docs Web HTTP Status 504 JSON API document terraform cloud docs api docs json api documents JSON API error object https jsonapi org format error objects User Tokens API List User Tokens GET users user id authentication tokens Parameter Description user id The ID of the User Use the Account API terraform cloud docs api docs account to find your own user ID The objects returned by this endpoint only contain metadata and do not include the secret text of any authentication tokens A token is only shown upon creation and cannot be recovered later Note You must access this endpoint with a user token terraform cloud docs users teams organizations users api tokens and it will only return useful data for that token s user account Status Response Reason 200 JSON API document type authentication tokens The request was successful 200 Empty JSON API document no type User has no authentication tokens or request was made by someone other than the user 404 JSON API error object User not found Query Parameters This endpoint supports pagination with standard URL query parameters terraform cloud docs api docs query parameters Remember to percent encode as 5B and as 5D if your tooling doesn t automatically encode URLs If neither pagination query parameters are provided the endpoint will not be paginated and will return all results Parameter Description page number Optional If omitted the endpoint will return the first page page size Optional If omitted the endpoint will return 20 user tokens per page Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request GET https app terraform io api v2 users user MA4GL63FmYRpSFxa authentication tokens Sample Response json data id at QmATJea6aWj1xR2t type authentication tokens attributes created at 2018 11 06T22 56 10 203Z last used at null description null token null expired at null relationships created by data null id at 6yEmxNAhaoQLH1Da type authentication tokens attributes created at 2018 11 25T22 31 30 624Z last used at 2018 11 26T20 27 54 931Z description api token null expired at 2023 04 06T12 00 00 000Z relationships created by data id user MA4GL63FmYRpSFxa type users Show a User Token GET authentication tokens id Parameter Description id The ID of the User Token The objects returned by this endpoint only contain metadata and do not include the secret text of any authentication tokens A token is only shown upon creation and cannot be recovered later Note You must 
---
page_title: Something - API Docs - HCP Terraform
description: A meaningful description of this endpoint.
---
Follow this template to format each API method. There are usually multiple sections like this on a given API endpoint page.
<!-- Boilerplate link references: This entire list should be included at the top of every API page, so that the tables can use short links freely. This SHOULD be all of the status codes we use in HCP Terraform's API; if we need to add more, update the list on every page. -->
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
## Create a Something
<!-- Header: "Verb a Noun" or "Verb Nouns." -->
`POST /organizations/:organization_name/somethings`
<!-- ^ The method and path are styled as a single code span, with global prefix (`/api/v2`) omitted and the method capitalized. -->
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `:organization_name` | The name of the organization to create the something in. The organization must already exist in the system, and the user must have permissions to create new somethings. |
<!-- ^ The list of URL path parameters goes directly below the method and path, without a header of its own. They're simpler than other parameters because they're always strings and they're always mandatory, so this table only has two columns. Prefix URL path parameter names with a colon.
If further explanation of this method is needed beyond its title, write it here, after the parameter list. -->
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
<!-- ^ Include a note like the above if the endpoint CANNOT be used with a given token type. Most endpoints don't need this. -->
| Status | Response | Reason |
| ------- | -------------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "somethings"`) | Successfully created a something |
| [400][] | [JSON API error object][] | Invalid `include` parameter |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [500][] | [JSON API error object][] | Failure during something creation |
<!-- ^ Include status codes even if they're plain 200/404.
If a JSON API document is returned, specify the `type`.
If the table includes links, use reference-style links to keep the table size small. The references should be included once per API page, at the very top.
-->
### Query Parameters
[These are standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
<!-- ^ Query parameters get their own header and boilerplate. Omit the whole section if this method takes no query parameters; we only use them for certain GET requests. -->
| Parameter | Description |
| ----------------------- | ------------------------------------------------------------- |
| `filter[workspace][id]` | **Required.** The workspace ID where this action will happen. |
<!-- ^ This table is flexible. If we somehow end up with a case where there's a long list of parameters, in a mix of optional and required, you could add a "Required?" or "Default" column or something; likewise if there are multiple data types in play. But in the usual minimal case, keep the table minimal and style important information as strong emphasis.
Do not prefix query parameter names with a question mark. -->
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
<!-- ^ Payload parameters go under this header and boilerplate. -->
| Key path | Type | Default | Description |
| --------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------ |
| `data.type` | string | | Must be `"somethings"`. |
| `data[].type` | string | | ... <!-- use data[].x when data is an array of objects. --> |
| `data.attributes.category` | string | | Whether this is a blue or red something. Valid values are `"blue"` or `"red"`. |
| `data.attributes.sensitive` | bool | `false` | Whether the value is sensitive. If true then the something is written once and not visible thereafter. |
| `filter.workspace.name` | string | | The name of the workspace that owns the something. |
| `filter.organization.name` | string | | The name of the organization that owns the workspace. |
<!--
- Name the paths to these object properties with dot notation, starting from the
root of the JSON object. So, `data.attributes.category` instead of just
`category`. Since our API format uses deeply nested structures and is finicky
about the details, err on the side of being very explicit about where the user
puts everything.
- Style key paths as code spans.
- Style data types as plain text.
- Style string values as code spans with interior double-quotes, to distinguish
them from unquoted values like booleans and nulls.
- If a limited number of values are valid, list them in the description.
- In the rare case where a parameter is optional but has no default, you can
list something like "(nothing)" as the default and explain in the description.
- List the properties in the simplest order you can... but the concept of
"simple" can be a little complex. ;) As a general guideline:
- The first level of sorting is _importance._ This is open to interpretation,
but at least put the type and name first.
- The second level of sorting is _complexity._ If one of the properties is a
huge object with a bunch of sub-properties, put it last — this lets the
reader clear the simpler properties out of their head before dealing with
it, without having to remember where they were in the list and without
having to remember to pop back out of the "big sub-object" context when
they hit the end of it.
- The third order of sorting is _predictability,_ which basically means that
within a group of properties of equal relative importance and complexity,
you should probably list them alphabetically so it's easier to find a
specific property.
-->
### Available Related Resources
<!-- Omit this subheader and section if it's not applicable. -->
This GET endpoint can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:
| Resource Name | Description |
| ------------------ | --------------------------------------------- |
| `organization` | The full organization record. |
| `current_run` | Additional information about the current run. |
| `current_run.plan` | The plan used in the current run. |
### Sample Payload
```json
{
"data": {
"type":"somethings",
"attributes": {
"category":"red",
"sensitive":true
}
},
"filter": {
"organization": {
"name":"my-organization"
},
"workspace": {
"name":"my-workspace"
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/somethings
```
<!-- In curl examples, you can use the `$TOKEN` environment variable. If it's a GET request with query parameters, you can use double-quotes to have curl handle the URL encoding for you.
Make sure to test a query that's very nearly the same as the example, to avoid errors. -->
### Sample Response
```json
{
"data": {
"id":"som-EavQ1LztoRTQHSNT",
"type":"somethings",
"attributes": {
"sensitive":true,
"category":"red",
},
"relationships": {
"configurable": {
"data": {
"id":"ws-4j8p6jX1w33MiDC7",
"type":"workspaces"
},
"links": {
"related":"/api/v2/organizations/my-organization/workspaces/my-workspace"
}
}
},
"links": {
"self":"/api/v2/somethings/som-EavQ1LztoRTQHSNT"
}
}
}
```
<!-- Make sure to mangle any real IDs this might expose. -->
---
page_title: Runs - API Docs - HCP Terraform
description: >-
Use the `/runs` endpoint to manage Terraform runs. List, get, create, apply, discard, execute, and cancel runs using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Runs API
-> **Note:** Before working with the runs or configuration versions APIs, read the [API-driven run workflow](/terraform/cloud-docs/run/api) page, which includes both a full overview of this workflow and a walkthrough of a simple implementation of it.
Performing a run on a new configuration is a multi-step process.
1. [Create a configuration version on the workspace](/terraform/cloud-docs/api-docs/configuration-versions#create-a-configuration-version).
1. [Upload configuration files to the configuration version](/terraform/cloud-docs/api-docs/configuration-versions#upload-configuration-files).
1. [Create a run on the workspace](#create-a-run); this is done automatically when a configuration file is uploaded.
1. [Create and queue an apply on the run](#apply-a-run), if the run can't be auto-applied.
Alternatively, you can create a run with a pre-existing configuration version, even one from another workspace. This is useful for promoting known good code from one workspace to another.
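The condensed sketch below walks through those steps with `curl`. The workspace ID, `$UPLOAD_URL`, and archive name (`config.tar.gz`) are placeholders; each linked page above documents the exact payloads and responses.

```shell
# 1. Create a configuration version on the workspace (placeholder workspace ID).
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{"data":{"type":"configuration-versions"}}' \
  https://app.terraform.io/api/v2/workspaces/ws-LLGHCr4SWy28wyGN/configuration-versions

# 2. Upload the configuration archive to the "upload-url" returned in step 1.
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @config.tar.gz \
  "$UPLOAD_URL"

# 3. A run is created automatically once the upload is processed; otherwise,
#    POST to /runs as shown in "Create a Run" below.
# 4. If the run can't be auto-applied, confirm it via "Apply a Run" below.
```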
## Attributes
### Run States
The run state is found in `data.attributes.status`, and you can reference the following list of possible states.
| State | Description |
|------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `pending` | The initial status of a run after creation. |
| `fetching` | The run is waiting for HCP Terraform to fetch the configuration from VCS. |
| `fetching_completed` | HCP Terraform has fetched the configuration from VCS and the run will continue. |
| `pre_plan_running` | The pre-plan phase of the run is in progress. |
| `pre_plan_completed` | The pre-plan phase of the run has completed. |
| `queuing` | HCP Terraform is queuing the run to start the planning phase. |
| `plan_queued` | HCP Terraform is waiting for its backend services to start the plan. |
| `planning` | The planning phase of a run is in progress. |
| `planned` | The planning phase of a run has completed. |
| `cost_estimating` | The cost estimation phase of a run is in progress. |
| `cost_estimated` | The cost estimation phase of a run has completed. |
| `policy_checking`      | The Sentinel policy checking phase of a run is in progress. |
| `policy_override`      | A Sentinel policy has soft failed, and a user can override it to continue the run. |
| `policy_soft_failed`   | A Sentinel policy has soft failed for a plan-only run. This is a final state. |
| `policy_checked`       | The Sentinel policy checking phase of a run has completed. |
| `confirmed` | A user has confirmed the plan. |
| `post_plan_running` | The post-plan phase of the run is in progress. |
| `post_plan_completed` | The post-plan phase of the run has completed. |
| `planned_and_finished` | The run is completed. This status only exists for plan-only runs and runs that produce a plan with no changes to apply. This is a final state. |
| `planned_and_saved` | The run has finished its planning, checks, and estimates, and can be confirmed for apply. This status is only used for saved plan runs. |
| `apply_queued`         | Once the changes in the plan have been confirmed, the run transitions to `apply_queued`. This status indicates that the run will start as soon as the backend services that run Terraform have available capacity. In HCP Terraform, you should seldom see this status, as our aim is to always have capacity. However, this status is more common in Terraform Enterprise, where capacity depends on the self-hosted installation. |
| `applying` | Terraform is applying the changes specified in the plan. |
| `applied` | Terraform has applied the changes specified in the plan. |
| `discarded` | The run has been discarded. This is a final state. |
| `errored` | The run has errored. This is a final state. |
| `canceled` | The run has been canceled. |
| `force_canceled`       | A workspace admin forcefully canceled the run. |
### Run Operations
The run operation specifies the Terraform execution mode. You can reference the following list of possible execution modes and use them as query parameters in the [workspace](/terraform/cloud-docs/api-docs/run#list-runs-in-a-workspace) and [organization](/terraform/cloud-docs/api-docs/run#list-runs-in-an-organization) runs lists.
| Operation | Description |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `plan_only` | The run does not have an apply phase. This is also called a [_speculative plan_](/terraform/cloud-docs/run/modes-and-options#plan-only-speculative-plan). |
| `plan_and_apply` | The run includes both plan and apply phases. |
| `save_plan` | The run is a saved plan run. It can include both plan and apply phases, but only becomes the workspace's current run if a user chooses to apply it. |
| `refresh_only` | The run should update Terraform state, but not make changes to resources. |
| `destroy` | The run should destroy all objects, regardless of configuration changes. |
| `empty_apply` | The run should perform an apply with no changes to resources. This is most commonly used to [upgrade terraform state versions](/terraform/cloud-docs/workspaces/state#upgrading-state). |
### Run Sources
You can use the following sources as query parameters in the [workspace](/terraform/cloud-docs/api-docs/run#list-runs-in-a-workspace) and [organization](/terraform/cloud-docs/api-docs/run#list-runs-in-an-organization) runs lists.
| Source | Description |
|-----------------------------|-----------------------------------------------------------------------------------------|
| `tfe-ui` | Indicates a run was queued from HCP Terraform UI. |
| `tfe-api` | Indicates a run was queued from HCP Terraform API. |
| `tfe-configuration-version` | Indicates a run was queued from a Configuration Version, triggered from a VCS provider. |
### Run Status Groups
The run status group specifies a collection of run states by logical category.
| Group | Description |
|------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `non_final` | Inclusive of runs that are currently running, require user confirmation, or are queued/pending. |
| `final` | Inclusive of runs that have reached their final and terminal state. |
| `discardable`          | Inclusive of runs whose state falls under the following: `planned`, `planned_and_saved`, `cost_estimated`, `policy_checked`, `policy_override`, `post_plan_running`, `post_plan_completed`. |
## Create a Run
`POST /runs`
A run performs a plan and apply, using a configuration version and the workspace’s current variables. You can specify a configuration version when creating a run; if you don’t provide one, the run defaults to the workspace’s most recently used version. (A configuration version is “used” when it is created or used for a run in this workspace.)
Creating a run requires permission to queue plans for the specified workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
When creating a run, you may optionally provide a list of variable objects containing key and value attributes. These values apply to that run specifically and take precedence over variables with the same key applied to the workspace (e.g., variable sets). Refer to [Variable Precedence](/terraform/cloud-docs/workspaces/variables#precedence) for more information. All values must be expressed as an HCL literal in the same syntax you would use when writing Terraform code. Refer to [Types](/terraform/language/expressions/types#types) in the Terraform documentation for more details.
Setting `debugging_mode: true` enables debugging mode for the queued run only. This is equivalent to setting the `TF_LOG` environment variable to `TRACE` for this run. See [Debugging Terraform](/terraform/internals/debugging) for more information.
**Sample Run Variables:**
```json
"attributes": {
"variables": [
{ "key": "replicas", "value": "2" },
{ "key": "access_key", "value": "\"ABCDE12345\"" }
]
}
```
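Likewise, a minimal sketch of enabling debugging mode for a single run; the `debugging-mode` attribute sits alongside any other run attributes in the payload, and the message value here is only illustrative:

```json
"attributes": {
  "message": "Queued with trace logging",
  "debugging-mode": true
}
```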
[permissions-citation]: #intentionally-unused---keep-for-maintainers
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| -------------------------------------------------- | -------------------- | -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.attributes.allow-empty-apply` | bool | none | Specifies whether Terraform can apply the run even when the plan [contains no changes](/terraform/cloud-docs/run/modes-and-options#allow-empty-apply). Use this property to [upgrade state](/terraform/cloud-docs/workspaces/state#upgrading-state) after upgrading a workspace to a new Terraform version. |
| `data.attributes.allow-config-generation` | bool | `false` | Specifies whether Terraform can [generate resource configuration](/terraform/language/import/generating-configuration) when planning to import new resources. When set to `false`, Terraform returns an error when `import` blocks do not have a corresponding `resource` block. |
| `data.attributes.auto-apply` | bool | Defaults to the [Auto Apply](/terraform/cloud-docs/workspaces/settings#auto-apply-and-manual-apply) workspace setting. | Determines if Terraform automatically applies the configuration on a successful `terraform plan`. |
| `data.attributes.debugging-mode` | bool | `false` | When set to `true`, enables verbose logging for the queued plan. |
| `data.attributes.is-destroy` | bool | `false` | When set to `true`, the plan destroys all provisioned resources. Mutually exclusive with `refresh-only`. |
| `data.attributes.message` | string | `"Queued manually via the Terraform Enterprise API"` | Specifies the message associated with this run. |
| `data.attributes.refresh` | bool | `true` | Specifies whether or not to refresh the state before a plan. |
| `data.attributes.refresh-only` | bool | `false` | When set to `true`, this run refreshes the state without modifying any resources. Mutually exclusive with `is-destroy`. |
| `data.attributes.replace-addrs` | array\[string] | | Specifies an optional list of resource addresses to be passed to the `-replace` flag. |
| `data.attributes.target-addrs` | array\[string] | | Specifies an optional list of resource addresses to be passed to the `-target` flag. |
| `data.attributes.variables` | array\[{key, value}] | (empty array) | Specifies an optional list of run-specific variable values. Refer to [Run-Specific Variables](/terraform/cloud-docs/workspaces/variables/managing-variables#run-specific-variables) for details. |
| `data.attributes.plan-only` | bool | (from configuration version) | Specifies if this is a [speculative, plan-only](/terraform/cloud-docs/run/modes-and-options#plan-only-speculative-plan) run that Terraform cannot apply. Often used with `terraform-version` to test whether an upgrade would succeed. |
| `data.attributes.save-plan` | bool | `false` | When set to `true`, the run is executed as a saved plan run. A saved plan run plans and checks the configuration without becoming the workspace's current run; it only becomes the current run if you confirm that you want to apply it when prompted. When creating new [configuration versions](/terraform/enterprise/api-docs/configuration-versions) for saved plan runs, be sure to make them `provisional`. |
| `data.attributes.terraform-version` | string | none | Specifies the Terraform version to use in this run. Only valid for plan-only runs; must be a valid Terraform version available to the organization. |
| `data.relationships.workspace.data.id` | string | none | Specifies the workspace ID to execute the run in. |
| `data.relationships.configuration-version.data.id` | string | none | Specifies the configuration version to use for this run. If the `configuration-version` object is omitted, Terraform uses the workspace's latest configuration version to create the run. |
| Status | Response | Reason |
|---------|----------------------------------------|-----------------------------------------------------------------------------|
| [201][] | [JSON API document][] (`type: "runs"`) | Successfully created a run |
| [404][] | [JSON API error object][] | Organization or workspace not found, or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Sample Payload
```json
{
"data": {
"attributes": {
"message": "Custom message"
},
"type":"runs",
"relationships": {
"workspace": {
"data": {
"type": "workspaces",
"id": "ws-LLGHCr4SWy28wyGN"
}
},
"configuration-version": {
"data": {
"type": "configuration-versions",
"id": "cv-n4XQPBa2QnecZJ4G"
}
}
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/runs
```
### Sample Response
```json
{
"data": {
"id": "run-CZcmD7eagjhyX0vN",
"type": "runs",
"attributes": {
"actions": {
"is-cancelable": true,
"is-confirmable": false,
"is-discardable": false,
"is-force-cancelable": false
},
"canceled-at": null,
"created-at": "2021-05-24T07:38:04.171Z",
"has-changes": false,
"auto-apply": false,
"allow-empty-apply": false,
"allow-config-generation": false,
"is-destroy": false,
"message": "Custom message",
"plan-only": false,
"source": "tfe-api",
"status-timestamps": {
"plan-queueable-at": "2021-05-24T07:38:04+00:00"
},
"status": "pending",
"trigger-reason": "manual",
"target-addrs": null,
"permissions": {
"can-apply": true,
"can-cancel": true,
"can-comment": true,
"can-discard": true,
"can-force-execute": true,
"can-force-cancel": true,
"can-override-policy-check": true
},
"refresh": false,
"refresh-only": false,
"replace-addrs": null,
"save-plan": false,
"variables": []
},
"relationships": {
"apply": {...},
"comments": {...},
"configuration-version": {...},
"cost-estimate": {...},
"created-by": {...},
"input-state-version": {...},
"plan": {...},
"run-events": {...},
"policy-checks": {...},
"workspace": {...},
"workspace-run-alerts": {...}
},
"links": {
"self": "/api/v2/runs/run-CZcmD7eagjhyX0vN"
}
}
}
```
## Apply a Run
`POST /runs/:run_id/actions/apply`
| Parameter | Description |
|-----------|---------------------|
| `:run_id` | The run ID to apply |
Applies a run that is paused waiting for confirmation after a plan. This includes runs in the "needs confirmation" and "policy checked" states. This action is only required for runs that can't be auto-applied. Plans can be auto-applied if the auto-apply setting is enabled on the workspace and the plan was queued by a new VCS commit or by a user with permission to apply runs for the workspace.
-> **Note:** If the run has a soft-failed Sentinel policy, you will need to [override the policy check](/terraform/cloud-docs/api-docs/policy-checks#override-policy) before Terraform can apply the run. You can find policy check details in the `relationships` section of the [run details endpoint](#get-run-details) response.
Applying a run requires permission to apply runs for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
This endpoint queues the request to perform an apply; the apply might not happen immediately.
Since this endpoint represents an action (not a resource), it does not return any object in the response body.
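Because the apply is queued rather than performed immediately, a client that needs to observe the outcome can poll the [run details endpoint](#get-run-details) until the run reaches a final state. A minimal sketch, assuming `jq` is installed and using a placeholder run ID:

```shell
# Poll the run status every five seconds until it reaches a final state.
while true; do
  STATUS=$(curl -s \
    --header "Authorization: Bearer $TOKEN" \
    https://app.terraform.io/api/v2/runs/run-DQGdmrWMX8z9yWQB \
    | jq -r '.data.attributes.status')
  echo "run status: $STATUS"
  case "$STATUS" in
    applied|planned_and_finished|errored|discarded|canceled) break ;;
  esac
  sleep 5
done
```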
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason(s) |
|---------|---------------------------|---------------------------------------------------------|
| [202][] | none | Successfully queued an apply request. |
| [409][] | [JSON API error object][] | Run was not paused for confirmation; apply not allowed. |
### Request Body
This POST endpoint allows an optional JSON object with the following properties as a request payload.
| Key path | Type | Default | Description |
|-----------|--------|---------|------------------------------------|
| `comment` | string | `null` | An optional comment about the run. |
### Sample Payload
This payload is optional, so the `curl` command will work without the `--data @payload.json` option too.
```json
{
"comment":"Looks good to me"
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/runs/run-DQGdmrWMX8z9yWQB/actions/apply
```
## List Runs in a Workspace
`GET /workspaces/:workspace_id/runs`
| Parameter | Description |
|----------------|------------------------------------|
| `:workspace_id` | The workspace ID to list runs for. |
By default, `plan_only` runs will be excluded from the results. To see all runs, use `filter[operation]` with all available operations included as a comma-separated list.
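For example, the following sketch lists runs of every operation type, including plan-only runs; note the percent-encoded brackets:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/workspaces/ws-yF7z4gyEQRhaCNG9/runs?filter%5Boperation%5D=plan_only,plan_and_apply,save_plan,refresh_only,destroy,empty_apply"
```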
This endpoint has an adjusted rate limit of 30 requests per minute. Note that most endpoints are limited to 30 requests per second.
| Status | Response | Reason |
|---------|--------------------------------------------------|--------------------------|
| [200][] | Array of [JSON API document][]s (`type: "runs"`) | Successfully listed runs |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description | Required |
| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| `page[number]` | If omitted, the endpoint returns the first page. | Optional |
| `page[size]` | If omitted, the endpoint returns 20 runs per page. | Optional |
| `filter[operation]` | A comma-separated list of run operations. The result lists runs that perform one of these operations. For details on options, refer to [Run operations](/terraform/enterprise/api-docs/run#run-operations). | Optional |
| `filter[status]` | A comma-separated list of run statuses. The result lists runs that are in one of the statuses you specify. For details on options, refer to [Run states](/terraform/enterprise/api-docs/run#run-states). | Optional |
| `filter[agent_pool_names]` | A comma-separated list of agent pool names. The result lists runs that use one of the agent pools you specify. | Optional |
| `filter[source]` | A comma-separated list of run sources. The result lists runs that came from one of the sources you specify. Options are listed in [Run Sources](/terraform/enterprise/api-docs/run#run-sources). | Optional |
| `filter[status_group]` | A single status group. The result lists runs whose status falls under this status group. For details on options, refer to [Run status groups](/terraform/enterprise/api-docs/run#run-status-groups). | Optional |
| `filter[timeframe]`        | A single year period. The result lists runs created within the year you specify. Valid values are an integer year or the string `"year"` for the past year. If omitted, the endpoint returns all runs since the creation of the workspace. | Optional |
| `search[user]`             | Searches for runs that match the VCS username you supply. | Optional |
| `search[commit]`           | Searches for runs that match the commit SHA you specify. | Optional |
| `search[basic]`            | Searches for runs that match the VCS username, commit SHA, run ID, or run message you specify. HCP Terraform prioritizes `search[commit]` or `search[user]` and ignores `search[basic]` in favor of those higher-priority parameters if you include them in your query. | Optional |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/workspaces/ws-yF7z4gyEQRhaCNG9/runs
```
### Sample Response
```json
{
"data": [
{
"id": "run-CZcmD7eagjhyX0vN",
"type": "runs",
"attributes": {
"actions": {
"is-cancelable": true,
"is-confirmable": false,
"is-discardable": false,
"is-force-cancelable": false
},
"canceled-at": null,
"created-at": "2021-05-24T07:38:04.171Z",
"has-changes": false,
"auto-apply": false,
"allow-empty-apply": false,
"allow-config-generation": false,
"is-destroy": false,
"message": "Custom message",
"plan-only": false,
"source": "tfe-api",
"status-timestamps": {
"plan-queueable-at": "2021-05-24T07:38:04+00:00"
},
"status": "pending",
"trigger-reason": "manual",
"target-addrs": null,
"permissions": {
"can-apply": true,
"can-cancel": true,
"can-comment": true,
"can-discard": true,
"can-force-execute": true,
"can-force-cancel": true,
"can-override-policy-check": true
},
"refresh": false,
"refresh-only": false,
"replace-addrs": null,
"save-plan": false,
"variables": []
},
"relationships": {
"apply": {...},
"comments": {...},
"configuration-version": {...},
"cost-estimate": {...},
"created-by": {...},
"input-state-version": {...},
"plan": {...},
"run-events": {...},
"policy-checks": {...},
"workspace": {...},
"workspace-run-alerts": {...}
},
"links": {
"self": "/api/v2/runs/run-bWSq4YeYpfrW4mx7"
}
},
{...}
]
}
```
## List Runs in an Organization
`GET /organizations/:organization_name/runs`
| Parameter | Description |
|----------------|------------------------------------|
| `:organization_name` | The organization name to list runs for. |
This endpoint has an adjusted rate limit of 30 requests per minute. Note that most endpoints are limited to 30 requests per second.
| Status | Response | Reason |
|---------|--------------------------------------------------|--------------------------|
| [200][] | Array of [JSON API document][]s (`type: "runs"`) | Successfully listed runs |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description | Required |
| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| `page[number]` | If omitted, the endpoint returns the first page. | Optional |
| `page[size]` | If omitted, the endpoint returns 20 runs per page. | Optional |
| `filter[operation]` | A comma-separated list of run operations. The result lists runs that perform one of these operations. For details on options, refer to [Run operations](/terraform/enterprise/api-docs/run#run-operations). | Optional |
| `filter[status]` | A comma-separated list of run statuses. The result lists runs that are in one of the statuses you specify. For details on options, refer to [Run states](/terraform/enterprise/api-docs/run#run-states). | Optional |
| `filter[agent_pool_names]` | A comma-separated list of agent pool names. The result lists runs that use one of the agent pools you specify. | Optional |
| `filter[workspace_names]`  | A comma-separated list of workspace names. The result lists runs that belong to one of the workspaces you specify. | Optional |
| `filter[source]` | A comma-separated list of run sources. The result lists runs that came from one of the sources you specify. Options are listed in [Run Sources](/terraform/enterprise/api-docs/run#run-sources). | Optional |
| `filter[status_group]` | A single status group. The result lists runs whose status falls under this status group. For details on options, refer to [Run status groups](/terraform/enterprise/api-docs/run#run-status-groups). | Optional |
| `filter[timeframe]`        | A single year period. The result lists runs created within the year you specify. Valid values are an integer year or the string `"year"` for the past year. If omitted, the endpoint returns runs created in the last year. | Optional |
| `search[user]`             | Searches for runs that match the VCS username you supply. | Optional |
| `search[commit]`           | Searches for runs that match the commit SHA you specify. | Optional |
| `search[basic]`            | Searches for runs that match the VCS username, commit SHA, run ID, or run message you specify. HCP Terraform prioritizes `search[commit]` or `search[user]` and ignores `search[basic]` in favor of those higher-priority parameters if you include them in your query. | Optional |
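As a sketch of combining these filters, the following request lists the unfinished runs in a single workspace; `my-workspace` is a placeholder name:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/organizations/hashicorp/runs?filter%5Bworkspace_names%5D=my-workspace&filter%5Bstatus_group%5D=non_final"
```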
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/runs
```
### Sample Response
```json
{
"data": [
{
"id": "run-CZcmD7eagjhyX0vN",
"type": "runs",
"attributes": {
"actions": {
"is-cancelable": true,
"is-confirmable": false,
"is-discardable": false,
"is-force-cancelable": false
},
"canceled-at": null,
"created-at": "2021-05-24T07:38:04.171Z",
"has-changes": false,
"auto-apply": false,
"allow-empty-apply": false,
"allow-config-generation": false,
"is-destroy": false,
"message": "Custom message",
"plan-only": false,
"source": "tfe-api",
"status-timestamps": {
"plan-queueable-at": "2021-05-24T07:38:04+00:00"
},
"status": "pending",
"trigger-reason": "manual",
"target-addrs": null,
"permissions": {
"can-apply": true,
"can-cancel": true,
"can-comment": true,
"can-discard": true,
"can-force-execute": true,
"can-force-cancel": true,
"can-override-policy-check": true
},
"refresh": false,
"refresh-only": false,
"replace-addrs": null,
"save-plan": false,
"variables": []
},
"relationships": {
"apply": {...},
"comments": {...},
"configuration-version": {...},
"cost-estimate": {...},
"created-by": {...},
"input-state-version": {...},
"plan": {...},
"run-events": {...},
"policy-checks": {...},
"workspace": {...},
"workspace-run-alerts": {...}
},
"links": {
"self": "/api/v2/runs/run-bWSq4YeYpfrW4mx7"
}
},
{...}
]
}
```
## Get run details
`GET /runs/:run_id`
| Parameter | Description |
|-----------|--------------------|
| `:run_id` | The run ID to get. |
This endpoint shows details of a specific run.
| Status | Response | Reason |
|---------|----------------------------------------|--------------------------------------|
| [200][] | [JSON API document][] (`type: "runs"`) | Success |
| [404][] | [JSON API error object][] | Run not found or user not authorized |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/runs/run-bWSq4YeYpfrW4mx7
```
### Sample Response
```json
{
"data": {
"id": "run-CZcmD7eagjhyX0vN",
"type": "runs",
"attributes": {
"actions": {
"is-cancelable": true,
"is-confirmable": false,
"is-discardable": false,
"is-force-cancelable": false
},
"canceled-at": null,
"created-at": "2021-05-24T07:38:04.171Z",
"has-changes": false,
"auto-apply": false,
"allow-empty-apply": false,
"allow-config-generation": false,
"is-destroy": false,
"message": "Custom message",
"plan-only": false,
"source": "tfe-api",
"status-timestamps": {
"plan-queueable-at": "2021-05-24T07:38:04+00:00"
},
"status": "pending",
"trigger-reason": "manual",
"target-addrs": null,
"permissions": {
"can-apply": true,
"can-cancel": true,
"can-comment": true,
"can-discard": true,
"can-force-execute": true,
"can-force-cancel": true,
"can-override-policy-check": true
},
"refresh": false,
"refresh-only": false,
"replace-addrs": null,
"save-plan": false,
"variables": []
},
"relationships": {
"apply": {...},
"comments": {...},
"configuration-version": {...},
"cost-estimate": {...},
"created-by": {...},
"input-state-version": {...},
"plan": {...},
"run-events": {...},
"policy-checks": {...},
"task-stages": {...},
"workspace": {...},
"workspace-run-alerts": {...}
},
"links": {
"self": "/api/v2/runs/run-bWSq4YeYpfrW4mx7"
}
}
}
```
## Discard a Run
`POST /runs/:run_id/actions/discard`
| Parameter | Description |
|-----------|-----------------------|
| `:run_id` | The run ID to discard |
The `discard` action can be used to skip any remaining work on runs that are paused waiting for confirmation or priority. This includes runs in the "pending," "needs confirmation," "policy checked," and "policy override" states.
Discarding a run requires permission to apply runs for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
This endpoint queues the request to perform a discard; the discard might not happen immediately. After discarding, the run is completed and later runs can proceed.
This endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason(s) |
|---------|---------------------------|-----------------------------------------------------------------------|
| [202][] | none | Successfully queued a discard request. |
| [409][] | [JSON API error object][] | Run was not paused for confirmation or priority; discard not allowed. |
### Request Body
This POST endpoint allows an optional JSON object with the following properties as a request payload.
| Key path | Type | Default | Description |
|-----------|--------|---------|--------------------------------------------------------|
| `comment` | string | `null` | An optional explanation for why the run was discarded. |
### Sample Payload
This payload is optional, so the `curl` command will work without the `--data @payload.json` option too.
```json
{
"comment": "This run was discarded"
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/runs/run-DQGdmrWMX8z9yWQB/actions/discard
```
## Cancel a Run
`POST /runs/:run_id/actions/cancel`
| Parameter | Description |
|-----------|----------------------|
| `:run_id` | The run ID to cancel |
The `cancel` action can be used to interrupt a run that is currently planning or applying. Performing a cancel is roughly equivalent to hitting ctrl+c during a Terraform plan or apply on the CLI. The running Terraform process is sent an `INT` signal, which instructs Terraform to end its work and wrap up in the safest way possible.
Canceling a run requires permission to apply runs for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
This endpoint queues the request to perform a cancel; the cancel might not happen immediately. After canceling, the run is completed and later runs can proceed.
This endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
| Status | Response | Reason(s) |
|---------|---------------------------|-------------------------------------------------------|
| [202][] | none | Successfully queued a cancel request. |
| [409][] | [JSON API error object][] | Run was not planning or applying; cancel not allowed. |
| [404][] | [JSON API error object][] | Run was not found or user not authorized. |
### Request Body
This POST endpoint allows an optional JSON object with the following properties as a request payload.
| Key path | Type | Default | Description |
|-----------|--------|---------|-------------------------------------------------------|
| `comment` | string | `null` | An optional explanation for why the run was canceled. |
### Sample Payload
This payload is optional, so the `curl` command will work without the `--data @payload.json` option too.
```json
{
"comment": "This run was stuck and would never finish."
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/runs/run-DQGdmrWMX8z9yWQB/actions/cancel
```
## Forcefully cancel a run
`POST /runs/:run_id/actions/force-cancel`
| Parameter | Description |
|-----------|----------------------|
| `:run_id` | The run ID to cancel |
The `force-cancel` action is like [cancel](#cancel-a-run), but ends the run immediately. Once invoked, the run is placed into a `canceled` state, and the running Terraform process is terminated. The workspace is immediately unlocked, allowing further runs to be queued. The `force-cancel` operation requires admin access to the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
This endpoint enforces a prerequisite that a [non-forceful cancel](#cancel-a-run) is performed first, and that a cool-off period has elapsed. To determine whether these criteria are met, check the `data.attributes.actions.is-force-cancelable` value in the [run details endpoint](#get-run-details) response. The time at which the force-cancel action becomes available is also in the run details response, in the key `data.attributes.force_cancel_available_at`. Note that this key is only present in the payload after the initial cancel has been initiated.
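A minimal sketch of that check, assuming `jq` is installed and using a placeholder run ID:

```shell
# Prints true once the cool-off period has elapsed and force-cancel is allowed.
curl -s \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/runs/run-DQGdmrWMX8z9yWQB \
  | jq '.data.attributes.actions["is-force-cancelable"]'
```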
This endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
~> **Warning:** This endpoint has potentially dangerous side-effects, including loss of any in-flight state in the running Terraform process. Use this operation with extreme caution.
| Status | Response | Reason(s) |
|---------|---------------------------|--------------------------------------------------------------------------------------------------------------------|
| [202][] | none | Successfully queued a cancel request. |
| [409][] | [JSON API error object][] | Run was not planning or applying, has not been canceled non-forcefully, or the cool-off period has not yet passed. |
| [404][] | [JSON API error object][] | Run was not found or user not authorized. |
### Request Body
This POST endpoint allows an optional JSON object with the following properties as a request payload.
| Key path | Type | Default | Description |
|-----------|--------|---------|-------------------------------------------------------|
| `comment` | string | `null` | An optional explanation for why the run was canceled. |
### Sample Payload
This payload is optional, so the `curl` command will work without the `--data @payload.json` option too.
```json
{
"comment": "This run was stuck and would never finish."
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/runs/run-DQGdmrWMX8z9yWQB/actions/force-cancel
```
## Forcefully execute a run
`POST /runs/:run_id/actions/force-execute`
| Parameter | Description |
|-----------|-----------------------|
| `:run_id` | The run ID to execute |
The force-execute action cancels all prior runs that are not already complete, unlocking the run's workspace and allowing the run to be executed. (It initiates the same actions as the "Run this plan now" button at the top of the view of a pending run.)
Force-executing a run requires permission to apply runs for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))
[permissions-citation]: #intentionally-unused---keep-for-maintainers
This endpoint enforces the following prerequisites:
- The target run is in the "pending" state.
- The workspace is locked by another run.
- The run locking the workspace can be discarded.
This endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.
-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).
~> **Note:** While useful at times, force-executing a run circumvents the typical workflow of applying runs using HCP Terraform. It is not intended for regular use. If you find yourself using it frequently, please reach out to HashiCorp Support for help in developing an alternative approach.
| Status | Response | Reason(s) |
|---------|---------------------------|-----------------------------------------------------------------------------------------------|
| [202][] | none | Successfully initiated the force-execution process. |
| [403][] | [JSON API error object][] | Run is not pending, its workspace was not locked, or its workspace association was not found. |
| [409][] | [JSON API error object][] | The run locking the workspace was not in a discardable state. |
| [404][] | [JSON API error object][] | Run was not found or user not authorized. |
### Request Body
This POST endpoint does not take a request body.
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/runs/run-DQGdmrWMX8z9yWQB/actions/force-execute
```
## Available Related Resources
The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:
- `plan` - Additional information about plans.
- `apply` - Additional information about applies.
- `created_by` - Full user records of the users responsible for creating the runs.
- `cost_estimate` - Additional information about cost estimates.
- `configuration_version` - The configuration record used in the run.
- `configuration_version.ingress_attributes` - The commit information used in the run.
apply the run You can find policy check details in the relationships section of the run details endpoint get run details response Applying a run requires permission to apply runs for the workspace More about permissions terraform cloud docs users teams organizations permissions permissions citation intentionally unused keep for maintainers This endpoint queues the request to perform an apply the apply might not happen immediately Since this endpoint represents an action not a resource it does not return any object in the response body Note This endpoint cannot be accessed with organization tokens terraform cloud docs users teams organizations api tokens organization api tokens You must access it with a user token terraform cloud docs users teams organizations users api tokens or team token terraform cloud docs users teams organizations api tokens team api tokens Status Response Reason s 202 none Successfully queued an apply request 409 JSON API error object Run was not paused for confirmation apply not allowed Request Body This POST endpoint allows an optional JSON object with the following properties as a request payload Key path Type Default Description comment string null An optional comment about the run Sample Payload This payload is optional so the curl command will work without the data payload json option too json comment Looks good to me Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request POST data payload json https app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions apply List Runs in a Workspace GET workspaces workspace id runs Parameter Description workspace id The workspace ID to list runs for By default plan only runs will be excluded from the results To see all runs use filter operation with all available operations included as a comma separated list This endpoint has an adjusted rate limit of 30 requests per minute Note that most endpoints are limited to 30 requests per second Status Response Reason 200 Array of JSON API document s type runs Successfully listed runs Query Parameters This endpoint supports pagination with standard URL query parameters terraform cloud docs api docs query parameters remember to percent encode as 5B and as 5D if your tooling doesn t automatically encode URLs Parameter Description Required page number If omitted the endpoint returns the first page Optional page size If omitted the endpoint returns 20 runs per page Optional filter operation A comma separated list of run operations The result lists runs that perform one of these operations For details on options refer to Run operations terraform enterprise api docs run run operations Optional filter status A comma separated list of run statuses The result lists runs that are in one of the statuses you specify For details on options refer to Run states terraform enterprise api docs run run states Optional filter agent pool names A comma separated list of agent pool names The result lists runs that use one of the agent pools you specify Optional filter source A comma separated list of run sources The result lists runs that came from one of the sources you specify Options are listed in Run Sources terraform enterprise api docs run run sources Optional filter status group A single status group The result lists runs whose status falls under this status group For details on options refer to Run status groups terraform enterprise api docs run run status groups Optional filter timeframe A single year period The result lists runs that were 
created within the year you specify An integer year or the string year for the past year are valid values If omitted the endpoint returns all runs since the creation of the workspace Optional search user Searches for runs that match the VCS username you supply Optional search commit Searches for runs that match the commit sha you specify Optional search basic Searches for runs that match the VCS username commit sha run id or run message your specify HCP Terraform prioritizes search commit or search user and ignores search basic in favor of the higher priority parameters if you include them in your query Optional Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json https app terraform io api v2 workspaces ws yF7z4gyEQRhaCNG9 runs Sample Response json data id run CZcmD7eagjhyX0vN type runs attributes actions is cancelable true is confirmable false is discardable false is force cancelable false canceled at null created at 2021 05 24T07 38 04 171Z has changes false auto apply false allow empty apply false allow config generation false is destroy false message Custom message plan only false source tfe api status timestamps plan queueable at 2021 05 24T07 38 04 00 00 status pending trigger reason manual target addrs null permissions can apply true can cancel true can comment true can discard true can force execute true can force cancel true can override policy check true refresh false refresh only false replace addrs null save plan false variables relationships apply comments configuration version cost estimate created by input state version plan run events policy checks workspace workspace run alerts links self api v2 runs run bWSq4YeYpfrW4mx7 List Runs in an Organization GET organizations organization name runs Parameter Description organization name The organization name to list runs for This endpoint has an adjusted rate limit of 30 requests per minute Note that most endpoints are limited to 30 requests per second Status Response Reason 200 Array of JSON API document s type runs Successfully listed runs Query Parameters This endpoint supports pagination with standard URL query parameters terraform cloud docs api docs query parameters remember to percent encode as 5B and as 5D if your tooling doesn t automatically encode URLs Parameter Description Required page number If omitted the endpoint returns the first page Optional page size If omitted the endpoint returns 20 runs per page Optional filter operation A comma separated list of run operations The result lists runs that perform one of these operations For details on options refer to Run operations terraform enterprise api docs run run operations Optional filter status A comma separated list of run statuses The result lists runs that are in one of the statuses you specify For details on options refer to Run states terraform enterprise api docs run run states Optional filter agent pool names A comma separated list of agent pool names The result lists runs that use one of the agent pools you specify Optional filter workspace names A comma separated list of workspace names The result lists runs that belong to one of the workspaces your specify Optional filter source A comma separated list of run sources The result lists runs that came from one of the sources you specify Options are listed in Run Sources terraform enterprise api docs run run sources Optional filter status group A single status group The result lists runs whose status falls under this status group For details on options refer 
to Run status groups terraform enterprise api docs run run status groups Optional filter timeframe A single year period The result lists runs that were created within the year you specify An integer year or the string year for the past year are valid values If omitted the endpoint returns runs created in the last year Optional search user Searches for runs that match the VCS username you supply Optional search commit Searches for runs that match the commit sha you specify Optional search basic Searches for runs that match the VCS username commit sha run id or run message your specify HCP Terraform prioritizes search commit or search user and ignores search basic in favor of the higher priority parameters if you include them in your query Optional Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json https app terraform io api v2 organizations hashicorp runs Sample Response json data id run CZcmD7eagjhyX0vN type runs attributes actions is cancelable true is confirmable false is discardable false is force cancelable false canceled at null created at 2021 05 24T07 38 04 171Z has changes false auto apply false allow empty apply false allow config generation false is destroy false message Custom message plan only false source tfe api status timestamps plan queueable at 2021 05 24T07 38 04 00 00 status pending trigger reason manual target addrs null permissions can apply true can cancel true can comment true can discard true can force execute true can force cancel true can override policy check true refresh false refresh only false replace addrs null save plan false variables relationships apply comments configuration version cost estimate created by input state version plan run events policy checks workspace workspace run alerts links self api v2 runs run bWSq4YeYpfrW4mx7 Get run details GET runs run id Parameter Description run id The run ID to get This endpoint is used for showing details of a specific run Status Response Reason 200 JSON API document type runs Success 404 JSON API error object Run not found or user not authorized Sample Request shell curl header Authorization Bearer TOKEN https app terraform io api v2 runs run bWSq4YeYpfrW4mx7 Sample Response json data id run CZcmD7eagjhyX0vN type runs attributes actions is cancelable true is confirmable false is discardable false is force cancelable false canceled at null created at 2021 05 24T07 38 04 171Z has changes false auto apply false allow empty apply false allow config generation false is destroy false message Custom message plan only false source tfe api status timestamps plan queueable at 2021 05 24T07 38 04 00 00 status pending trigger reason manual target addrs null permissions can apply true can cancel true can comment true can discard true can force execute true can force cancel true can override policy check true refresh false refresh only false replace addrs null save plan false variables relationships apply comments configuration version cost estimate created by input state version plan run events policy checks task stages workspace workspace run alerts links self api v2 runs run bWSq4YeYpfrW4mx7 Discard a Run POST runs run id actions discard Parameter Description run id The run ID to discard The discard action can be used to skip any remaining work on runs that are paused waiting for confirmation or priority This includes runs in the pending needs confirmation policy checked and policy override states Discarding a run requires permission to apply runs for the workspace 
More about permissions terraform cloud docs users teams organizations permissions permissions citation intentionally unused keep for maintainers This endpoint queues the request to perform a discard the discard might not happen immediately After discarding the run is completed and later runs can proceed This endpoint represents an action as opposed to a resource As such it does not return any object in the response body Note This endpoint cannot be accessed with organization tokens terraform cloud docs users teams organizations api tokens organization api tokens You must access it with a user token terraform cloud docs users teams organizations users api tokens or team token terraform cloud docs users teams organizations api tokens team api tokens Status Response Reason s 202 none Successfully queued a discard request 409 JSON API error object Run was not paused for confirmation or priority discard not allowed Request Body This POST endpoint allows an optional JSON object with the following properties as a request payload Key path Type Default Description comment string null An optional explanation for why the run was discarded Sample Payload This payload is optional so the curl command will work without the data payload json option too json comment This run was discarded Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request POST data payload json https app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions discard Cancel a Run POST runs run id actions cancel Parameter Description run id The run ID to cancel The cancel action can be used to interrupt a run that is currently planning or applying Performing a cancel is roughly equivalent to hitting ctrl c during a Terraform plan or apply on the CLI The running Terraform process is sent an INT signal which instructs Terraform to end its work and wrap up in the safest way possible Canceling a run requires permission to apply runs for the workspace More about permissions terraform cloud docs users teams organizations permissions permissions citation intentionally unused keep for maintainers This endpoint queues the request to perform a cancel the cancel might not happen immediately After canceling the run is completed and later runs can proceed This endpoint represents an action as opposed to a resource As such it does not return any object in the response body Note This endpoint cannot be accessed with organization tokens terraform cloud docs users teams organizations api tokens organization api tokens You must access it with a user token terraform cloud docs users teams organizations users api tokens or team token terraform cloud docs users teams organizations api tokens team api tokens Status Response Reason s 202 none Successfully queued a cancel request 409 JSON API error object Run was not planning or applying cancel not allowed 404 JSON API error object Run was not found or user not authorized Request Body This POST endpoint allows an optional JSON object with the following properties as a request payload Key path Type Default Description comment string null An optional explanation for why the run was canceled Sample Payload This payload is optional so the curl command will work without the data payload json option too json comment This run was stuck and would never finish Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request POST data payload json https app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions cancel 
Forcefully cancel a run POST runs run id actions force cancel Parameter Description run id The run ID to cancel The force cancel action is like cancel cancel a run but ends the run immediately Once invoked the run is placed into a canceled state and the running Terraform process is terminated The workspace is immediately unlocked allowing further runs to be queued The force cancel operation requires admin access to the workspace More about permissions terraform cloud docs users teams organizations permissions permissions citation intentionally unused keep for maintainers This endpoint enforces a prerequisite that a non forceful cancel cancel a run is performed first and a cool off period has elapsed To determine if this criteria is met it is useful to check the data attributes is force cancelable value of the run details endpoint get run details The time at which the force cancel action will become available can be found using the run details endpoint get run details in the key data attributes force cancel available at Note that this key is only present in the payload after the initial cancel has been initiated This endpoint represents an action as opposed to a resource As such it does not return any object in the response body Note This endpoint cannot be accessed with organization tokens terraform cloud docs users teams organizations api tokens organization api tokens You must access it with a user token terraform cloud docs users teams organizations users api tokens or team token terraform cloud docs users teams organizations api tokens team api tokens Warning This endpoint has potentially dangerous side effects including loss of any in flight state in the running Terraform process Use this operation with extreme caution Status Response Reason s 202 none Successfully queued a cancel request 409 JSON API error object Run was not planning or applying has not been canceled non forcefully or the cool off period has not yet passed 404 JSON API error object Run was not found or user not authorized Request Body This POST endpoint allows an optional JSON object with the following properties as a request payload Key path Type Default Description comment string null An optional explanation for why the run was canceled Sample Payload This payload is optional so the curl command will work without the data payload json option too json comment This run was stuck and would never finish Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request POST data payload json https app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions force cancel Forcefully execute a run POST runs run id actions force execute Parameter Description run id The run ID to execute The force execute action cancels all prior runs that are not already complete unlocking the run s workspace and allowing the run to be executed It initiates the same actions as the Run this plan now button at the top of the view of a pending run Force executing a run requires permission to apply runs for the workspace More about permissions terraform cloud docs users teams organizations permissions permissions citation intentionally unused keep for maintainers This endpoint enforces the following prerequisites The target run is in the pending state The workspace is locked by another run The run locking the workspace can be discarded This endpoint represents an action as opposed to a resource As such it does not return any object in the response body Note This endpoint cannot be accessed with 
organization tokens terraform cloud docs users teams organizations api tokens organization api tokens You must access it with a user token terraform cloud docs users teams organizations users api tokens or team token terraform cloud docs users teams organizations api tokens team api tokens Note While useful at times force executing a run circumvents the typical workflow of applying runs using HCP Terraform It is not intended for regular use If you find yourself using it frequently please reach out to HashiCorp Support for help in developing an alternative approach Status Response Reason s 202 none Successfully initiated the force execution process 403 JSON API error object Run is not pending its workspace was not locked or its workspace association was not found 409 JSON API error object The run locking the workspace was not in a discardable state 404 JSON API error object Run was not found or user not authorized Request Body This POST endpoint does not take a request body Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request POST https app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions force execute Available Related Resources The GET endpoints above can optionally return related resources if requested with the include query parameter terraform cloud docs api docs inclusion of related resources The following resource types are available plan Additional information about plans apply Additional information about applies created by Full user records of the users responsible for creating the runs cost estimate Additional information about cost estimates configuration version The configuration record used in the run configuration version ingress attributes The commit information used in the run |
---
page_title: Tests - API Docs - HCP Terraform
description: >-
Use the `/tests` endpoint to manage Terraform tests. List, get, create, and cancel tests using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Tests API
Tests are Terraform operations (runs) and are referred to as test runs within the HCP Terraform API.
Performing a test on a new configuration is a multi-step process.
1. [Create a configuration version on the registry module](#create-a-configuration-version-for-a-test).
1. [Upload configuration files to the configuration version](#upload-configuration-files-for-a-test).
1. [Create a test on the module](#create-a-test); HCP Terraform completes this step automatically when you upload a configuration file.
Alternatively, you can create a test with a pre-existing configuration version, even one from another module. This is useful for promoting known good code from one module to another.
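For example, the first two steps can be scripted with `curl` and `jq`, after which HCP Terraform creates the test automatically. This is a minimal sketch, assuming a private module at `my-organization/registry-name/registry-provider` and a packaged configuration in `config.tar.gz`; the `jq` path follows the sample responses later on this page:

```shell
# Step 1: create a configuration version and capture its one-time upload URL.
UPLOAD_URL=$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs/configuration-versions \
  | jq -r '.data.attributes."upload-url"')

# Step 2: upload the configuration files; the test run is created automatically.
curl -s \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @config.tar.gz \
  "$UPLOAD_URL"
```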
## Attributes
The `tests` API endpoint has the following attributes.
### Test Run States
The state of the test operation is found in `data.attributes.status`, and you can reference the following list of possible states.
| State | Description |
| ---------- | ------------------------------------------------------- |
| `pending` | The initial status of a run after creation. |
| `queued` | HCP Terraform has queued the test operation to start. |
| `running` | HCP Terraform is executing the test. |
| `errored` | The test has errored. This is a final state. |
| `canceled` | The test has been canceled. This is a final state. |
| `finished` | The test has completed. This is a final state. |
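Because `errored`, `canceled`, and `finished` are the only final states, a client can poll a test until `data.attributes.status` reaches one of them. The following is a minimal sketch against the [Get Test Details](#get-test-details) endpoint, assuming `TEST_RUN_URL` holds that endpoint's full URL for a specific test:

```shell
# Poll the test until it reaches a final state.
while true; do
  STATUS=$(curl -s --header "Authorization: Bearer $TOKEN" "$TEST_RUN_URL" \
    | jq -r '.data.attributes.status')
  case "$STATUS" in
    errored|canceled|finished) echo "final state: $STATUS"; break ;;
    *) echo "current state: $STATUS"; sleep 5 ;;
  esac
done
```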
### Test run status
The final test status is found in `data.attributes.test-status`, and you can reference the following list of possible states.
| Status | Description |
| ------ | ---------------------------- |
| `pass` | The given tests have passed. |
| `fail` | The given tests have failed. |
### Detailed test status
The test results can be found via the following attributes.
| Attribute | Description |
| ------------------------------- | ------------------------------------------- |
| `data.attributes.tests-passed` | The number of tests that have passed. |
| `data.attributes.tests-failed` | The number of tests that have failed. |
| `data.attributes.tests-errored` | The number of tests that have errored out. |
| `data.attributes.tests-skipped` | The number of tests that have been skipped. |
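For example, a saved [Get Test Details](#get-test-details) response can be summarized with `jq` (a sketch; `response.json` is an assumed local copy of the payload):

```shell
# Print the overall result alongside the per-test counts.
jq -r '.data.attributes
  | "\(."test-status"): \(."tests-passed") passed, \(."tests-failed") failed, \(."tests-errored") errored, \(."tests-skipped") skipped"' \
  response.json
```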
### Test Sources
You can use the following sources as query parameters when you [list tests for a module](/terraform/cloud-docs/api-docs/private-registry/tests#list-tests-for-a-module).
| Source | Description |
|-----------------------------|------------------------------------------------------------------------------------------|
| `terraform` | Indicates a test was queued from HCP Terraform CLI. |
| `tfe-api` | Indicates a test was queued from HCP Terraform API. |
| `tfe-configuration-version` | Indicates a test was queued from a Configuration Version, triggered from a VCS provider. |
## Create a Test
`POST /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The namespace of the module for which the test is being created. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the test is being created. |
| `:provider` | The name of the provider for which the test is being created. |
A test run executes tests against a registry module, using a configuration version and the module's current environment variables.
Creating a test run requires permission to access the specified module. Refer to [Permissions](/terraform/cloud-docs/users-teams-organizations/permissions) for more information.
When creating a test run, you may optionally provide a list of variable objects containing key and value attributes. These values apply to that test run specifically and take precedence over variables with the same key that are created within the module. All values must be expressed as an HCL literal in the same syntax you would use when writing Terraform code.
**Sample Test Variables:**
```json
"attributes": {
"variables": [
{ "key": "replicas", "value": "2" },
{ "key": "access_key", "value": "\"ABCDE12345\"" }
]
}
```
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| -------------------------------------------------- | -------------------- | ------------- | -------------------------------------------------------------------------------------------------- |
| `data.attributes.verbose` | bool | `false` | Specifies whether Terraform should print the plan or state for each test run block as it executes. |
| `data.attributes.test-directory` | string | `"tests"` | Sets the directory where HCP Terraform executes the tests. |
| `data.attributes.filters` | array\[string] | (empty array) | When specified, HCP Terraform only executes the test files contained within this array. |
| `data.attributes.variables` | array\[{key, value}] | (empty array) | Specifies an optional list of test-specific environment variable values. |
| `data.relationships.configuration-version.data.id` | string | none | Specifies the configuration version that HCP Terraform executes the test against. |
### Sample Payload
```json
{
"data": {
"attributes": {
"verbose": true,
"filters": ["tests/test.tftest.hcl"],
"test-directory": "tests",
"variables": [
{ "key" : "number", "value": 4}
]
},
"type":"test-runs"
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs
```
### Sample Response
```json
{
"data": {
"id": "trun-KFg8DSiRz4E37mdJ",
"type": "test-runs",
"attributes": {
"status": "queued",
"status-timestamps": {
"queued-at": "2023-10-03T18:27:39+00:00"
},
"created-at": "2023-10-03T18:27:39.239Z",
"updated-at": "2023-10-03T18:27:39.264Z",
"test-configurable-type": "RegistryModule",
"test-configurable-id": "mod-9rjVHLCUE9QD3k6L",
"variables": [
{
"key": "number",
"value": "4"
}
],
"filters": [
"tests/test.tftest.hcl"
],
"test-directory": "tests",
"verbose": true,
"test-status": null,
"tests-passed": null,
"tests-failed": null,
"tests-errored": null,
"tests-skipped": null,
"source": "tfe-api",
"message": "Queued manually via the Terraform Enterprise API"
},
"relationships": {
"configuration-version": {
"data": {
"id": "cv-d3zBGFf5DfWY4GY9",
"type": "configuration-versions"
},
"links": {
"related": "/api/v2/configuration-versions/cv-d3zBGFf5DfWY4GY9"
}
},
"created-by": {
"data": {
"id": "user-zsRFs3AGaAHzbEfs",
"type": "users"
},
"links": {
"related": "/api/v2/users/user-zsRFs3AGaAHzbEfs"
}
}
}
}
}
```
## Create a Configuration Version for a Test
`POST /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs/configuration-versions`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The namespace of the module for which the configuration version is being created. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the configuration version is being created. |
| `:provider` | The name of the provider for which the configuration version is being created. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs/configuration-versions
```
### Sample Response
```json
{
"data": {
"id": "cv-aaady7niJMY1wAvx",
"type": "configuration-versions",
"attributes": {
"auto-queue-runs": true,
"error": null,
"error-message": null,
"source": "tfe-api",
"speculative": false,
"status": "pending",
"status-timestamps": {},
"changed-files": [],
"provisional": false,
"upload-url": "https://archivist.terraform.io/v1/object/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv"
},
"relationships": {
"ingress-attributes": {
"data": null,
"links": {
"related": "/api/v2/configuration-versions/cv-aaady7niJMY1wAvx/ingress-attributes"
}
}
},
"links": {
"self": "/api/v2/configuration-versions/cv-aaady7niJMY1wAvx"
}
}
}
```
## Upload Configuration Files for a Test
`PUT https://archivist.terraform.io/v1/object/<UNIQUE OBJECT ID>`
**The URL is provided in the `upload-url` attribute when creating a `configuration-versions` resource. After creation, the URL is hidden on the resource and no longer available.**
### Sample Request
**@filename is the name of the configuration file you wish to upload.**
```shell
curl \
--header "Content-Type: application/octet-stream" \
--request PUT \
--data-binary @filename \
https://archivist.terraform.io/v1/object/4c44d964-eba7-4dd5-ad29-1ece7b99e8da
```
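The uploaded object is the packaged module directory, including its test files. As a sketch, assuming the standard configuration-version packaging (a gzipped tarball of the module directory):

```shell
# Package the module directory, including its tests/ directory.
tar -zcf config.tar.gz -C /path/to/module-directory .

# Upload the package to the one-time URL from the previous step.
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @config.tar.gz \
  https://archivist.terraform.io/v1/object/4c44d964-eba7-4dd5-ad29-1ece7b99e8da
```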
## List Tests for a Module
`GET /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs/`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The namespace of the module which the tests have executed against. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module which the tests have executed against. |
| `:provider` | The name of the provider which the tests have executed against. |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling does not automatically encode URLs.
| Parameter | Description | Required |
| --- | --- | --- |
| `page[number]` | If omitted, the endpoint returns the first page. | Optional |
| `page[size]` | If omitted, the endpoint returns 20 runs per page. | Optional |
| `filter[source]` | A comma-separated list of test sources; the result will only include tests that came from one of these sources. Options are listed in [Test Sources](/terraform/cloud-docs/api-docs/private-registry/tests#test-sources). | Optional |
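For example, to request the second page of tests that were queued through either the API or the CLI, with the bracket characters percent-encoded:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs?page%5Bnumber%5D=2&filter%5Bsource%5D=tfe-api,terraform"
```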
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs
```
### Sample Response
```json
{
"data": [
{
"id": "trun-KFg8DSiRz4E37mdJ",
"type": "test-runs",
"attributes": {
"status": "finished",
"status-timestamps": {
"queued-at": "2023-10-03T18:27:39+00:00",
"started-at": "2023-10-03T18:27:41+00:00",
"finished-at": "2023-10-03T18:27:53+00:00"
},
"log-read-url": "https://archivist.terraform.io/v1/object/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv",
"created-at": "2023-10-03T18:27:39.239Z",
"updated-at": "2023-10-03T18:27:53.574Z",
"test-configurable-type": "RegistryModule",
"test-configurable-id": "mod-9rjVHLCUE9QD3k6L",
"variables": [
{
"key": "number",
"value": "4"
}
],
"filters": [
"tests/test.tftest.hcl"
],
"test-directory": "tests",
"verbose": true,
"test-status": "pass",
"tests-passed": 1,
"tests-failed": 0,
"tests-errored": 0,
"tests-skipped": 0,
"source": "tfe-api",
"message": "Queued manually via the Terraform Enterprise API"
},
"relationships": {
"configuration-version": {
"data": {
"id": "cv-d3zBGFf5DfWY4GY9",
"type": "configuration-versions"
},
"links": {
"related": "/api/v2/configuration-versions/cv-d3zBGFf5DfWY4GY9"
}
},
"created-by": {
"data": {
"id": "user-zsRFs3AGaAHzbEfs",
"type": "users"
},
"links": {
"related": "/api/v2/users/user-zsRFs3AGaAHzbEfs"
}
}
}
},
{...}
]
}
```
## Get Test Details
`GET /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs/:test_run_id`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The namespace of the module which the test was executed against. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module which the test was executed against. |
| `:provider` | The name of the provider which the test was executed against. |
| `:test_run_id` | The test ID to get. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs/trun-xFMAHM3FhkFBL6Z7
```
### Sample Response
```json
{
"data": {
"id": "trun-KFg8DSiRz4E37mdJ",
"type": "test-runs",
"attributes": {
"status": "finished",
"status-timestamps": {
"queued-at": "2023-10-03T18:27:39+00:00",
"started-at": "2023-10-03T18:27:41+00:00",
"finished-at": "2023-10-03T18:27:53+00:00"
},
"log-read-url": "https://archivist.terraform.io/v1/object/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv",
"created-at": "2023-10-03T18:27:39.239Z",
"updated-at": "2023-10-03T18:27:53.574Z",
"test-configurable-type": "RegistryModule",
"test-configurable-id": "mod-9rjVHLCUE9QD3k6L",
"variables": [
{
"key": "number",
"value": "4"
}
],
"filters": [
"tests/test.tftest.hcl"
],
"test-directory": "tests",
"verbose": true,
"test-status": "pass",
"tests-passed": 1,
"tests-failed": 0,
"tests-errored": 0,
"tests-skipped": 0,
"source": "tfe-api",
"message": "Queued manually via the Terraform Enterprise API"
},
"relationships": {
"configuration-version": {
"data": {
"id": "cv-d3zBGFf5DfWY4GY9",
"type": "configuration-versions"
},
"links": {
"related": "/api/v2/configuration-versions/cv-d3zBGFf5DfWY4GY9"
}
},
"created-by": {
"data": {
"id": "user-zsRFs3AGaAHzbEfs",
"type": "users"
},
"links": {
"related": "/api/v2/users/user-zsRFs3AGaAHzbEfs"
}
}
}
}
}
```
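The `log-read-url` attribute in this response points at the raw test log. For example (a sketch; `TEST_RUN_URL` is assumed to hold the Get Test Details URL for a specific test):

```shell
# Fetch the test log by following the log-read-url attribute.
LOG_URL=$(curl -s --header "Authorization: Bearer $TOKEN" "$TEST_RUN_URL" \
  | jq -r '.data.attributes."log-read-url"')
curl -s "$LOG_URL"
```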
## Cancel a Test
`POST /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs/:test_run_id/cancel`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The namespace of the module for which the test is being canceled. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the test is being canceled. |
| `:provider` | The name of the provider for which the test is being canceled. |
| `:test_run_id` | The test ID to cancel. |
Use the `cancel` action to interrupt a test that is currently running. The action sends an `INT` signal to the running Terraform process, which instructs Terraform to safely end the tests and attempt to tear down any infrastructure that your tests create.
| Status | Response | Reason(s) |
| ------- | ------------------------- | ------------------------------------------ |
| [202][] | none | Successfully queued a cancel request. |
| [409][] | [JSON API error object][] | Test was not running; cancel not allowed. |
| [404][] | [JSON API error object][] | Test was not found or user not authorized. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs/trun-xFMAHM3FhkFBL6Z7/cancel
```
## Forcefully cancel a Test
`POST /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs/:test_run_id/force-cancel`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |
| `:namespace` | The namespace of the module for which the test is being force-canceled. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the test is being force-canceled. |
| `:provider` | The name of the provider for which the test is being force-canceled. |
| `:test_run_id` | The test ID to cancel. |
The `force-cancel` action ends the test immediately. Once invoked, Terraform places the test into a `canceled` state and terminates the running Terraform process.
~> **Warning:** This endpoint has potentially dangerous side-effects, including loss of any in-flight state in the running Terraform process. Use this operation with extreme caution.
| Status | Response | Reason(s) |
| ------- | ------------------------- | -------------------------------------------------------------- |
| [202][] | none | Successfully queued a cancel request. |
| [409][] | [JSON API error object][] | Test was not running, or has not been canceled non-forcefully. |
| [404][] | [JSON API error object][] | Test was not found or user not authorized. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs/trun-xFMAHM3FhkFBL6Z7/force-cancel
```
## Create an Environment Variable for Module Tests
`POST /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/vars`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization of the module. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The namespace of the module for which the testing environment variable is being created. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the testing environment variable is being created. |
| `:provider` | The name of the provider for which the testing environment variable is being created. |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------------------- | ------ | ------- | ----------------------------------------------------------------------------------------------------- |
| `data.type` | string | none | Must be `"vars"`. |
| `data.attributes.key` | string | none | The variable's name. Test variable keys must begin with a letter or underscore and can only contain letters, numbers, and underscores. |
| `data.attributes.value` | string | `""` | The value of the variable. |
| `data.attributes.description` | string | none | The description of the variable. |
| `data.attributes.category` | string | none | This must be `"env"`. |
| `data.attributes.sensitive` | bool | `false` | Whether the value is sensitive. When set to `true`, Terraform writes the variable once, and its value is not visible thereafter. |
### Sample Payload
```json
{
"data": {
"type":"vars",
"attributes": {
"key":"some_key",
"value":"some_value",
"description":"some description",
"category":"env",
"sensitive":false
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/vars
```
### Sample Response
```json
{
"data": {
"id": "var-xSCUzCxdqMs2ygcg",
"type": "vars",
"attributes": {
"key": "keykey",
"value": "some_value",
"sensitive": false,
"category": "env",
"hcl": false,
"created-at": "2023-10-03T19:47:05.393Z",
"description": "some description",
"version-id": "699b14ea5d5e5c02f6352fac6bfd0a1424c21d32be14d1d9eb79f5e1f28f663a"
},
"links": {
"self": "/api/v2/vars/var-xSCUzCxdqMs2ygcg"
}
}
}
```
## List Test Variables for a Module
`GET /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/vars`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |
| `:namespace` | The namespace of the module which the test environment variables were created for. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module which the test environment variables were created for. |
| `:provider` | The name of the provider which the test environment variables were created for. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/vars
```
### Sample Response
```json
{
"data": [
{
"id": "var-xSCUzCxdqMs2ygcg",
"type": "vars",
"attributes": {
"key": "keykey",
"value": "some_value",
"sensitive": false,
"category": "env",
"hcl": false,
"created-at": "2023-10-03T19:47:05.393Z",
"description": "some description",
"version-id": "699b14ea5d5e5c02f6352fac6bfd0a1424c21d32be14d1d9eb79f5e1f28f663a"
},
"links": {
"self": "/api/v2/vars/var-xSCUzCxdqMs2ygcg"
}
}
]
}
```
## Update Test Variables for a Module
`PATCH /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/vars/:variable_id`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The namespace of the module for which the test environment variable is being updated. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the test environment variable is being updated. |
| `:provider` | The name of the provider for which the test environment variable is being updated. |
| `:variable_id` | The ID of the variable to update. |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"vars"`. |
| `data.attributes` | object | none | New attributes for the variable. This object can include `key`, `value`, `description`, `category`, and `sensitive` properties. Refer to [Create an Environment Variable for Module Tests](#create-an-environment-variable-for-module-tests) for additional information. All properties are optional. |
### Sample Payload
```json
{
"data": {
"attributes": {
"key":"name",
"value":"mars",
"description": "new description",
"category":"env",
"sensitive": false
},
"type":"vars"
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/vars/var-yRmifb4PJj7cLkMG
```
### Sample Response
```json
{
"data": {
"id":"var-yRmifb4PJj7cLkMG",
"type":"vars",
"attributes": {
"key":"name",
"value":"mars",
"description":"new description",
"sensitive":false,
"category":"env",
"hcl":false
}
}
}
```
## Delete Test Variable for a Module
`DELETE /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/vars/:variable_id`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |
| `:namespace` | The namespace of the module for which the test environment variable is being deleted. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the test environment variable is being deleted. |
| `:provider` | The name of the provider for which the test environment variable is being deleted. |
| `:variable_id` | The ID of the variable to delete. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/vars/var-yRmifb4PJj7cLkMG
```
---
page_title: Manage module versions - API Docs - HCP Terraform
description: |-
Use these module management endpoints to deprecate and revert the deprecation of module versions you published to an organization's private registry.
tfc_only: true
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[503]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/503
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Manage module versions API
This topic provides reference information about API endpoints that let you deprecate module versions in your organization’s private registry.
## Introduction
When you deprecate a module version, HCP Terraform adds warnings to the module's registry page and to run outputs when anyone uses the deprecated version.
<!-- BEGIN: TFC:only name:pnp-callout -->
@include "tfc-package-callouts/manage-module-versions.mdx"
<!-- END: TFC:only name:pnp-callout -->
After deprecating a module version, you can revert that deprecated status to remove the warnings from that version in the registry and outputs. For more details on module deprecation, refer to [Deprecate module versions](/terraform/cloud-docs/registry/manage-module-versions).
@include "public-beta/manage-module-versions.mdx"
## Deprecate a module version
Use this endpoint to deprecate a module version.
`PATCH /api/v2/organizations/:organization_name/registry-modules/private/:organization_name/:module_name/:module_provider/:module_version`
| Parameter | Description |
| :---- | :---- |
| `:organization_name` | The name of the organization the module belongs to. |
| `:module_name` | The name of the module whose version you want to deprecate. |
| `:module_provider` | Specifies the Terraform provider that this module is used for. |
| `:module_version` | The module version you want to deprecate. |
This endpoint allows you to deprecate a specific module version. Deprecating a module version adds warnings to the run output of any consumers using this module.
| Status | Response | Reason |
| :---- | :---- | :---- |
| [200][] | [JSON API document][] | Successfully deprecated a module version. |
| [404][] | [JSON API error object][] | This organization is not authorized to deprecate this module version, or the module version does not exist. |
| [422][] | [JSON API error object][] | Malformed request body, for example the request is missing attributes or uses the wrong types. |
| [500][] or [503][] | [JSON API error object][] | Failure occurred while deprecating a module version. |
### Sample Payload
```json
{
"data": {
"type": "module-versions",
"attributes": {
"deprecation": {
"deprecated-status": "Deprecated",
"reason": "Deprecated due to a security vulnerability issue.",
"link": "https://www.hashicorp.com/"
}
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-modules/private/hashicorp/lb-http/google/11.0.0
```
### Sample Response
```json
{
"data": {
"type": "module-versions",
"id": "1",
"relationships": {
"deprecation": {
"data": {
"id": "2",
"type": "deprecations"
}
}
}
},
"included": [
{
"type": "deprecations",
"id": "2",
"attributes": {
"link": "https://www.hashicorp.com/",
"reason": "Deprecated due to a security vulnerability issue. Applies will be blocked in 15 days."
}
}
]
}
```
## Revert the deprecation status for a module version
Use this endpoint to revert the deprecation of a module version.
`PATCH /api/v2/organizations/:organization_name/registry-modules/private/:organization_name/:module_name/:module_provider/:module_version`
| Parameter | Description |
| :---- | :---- |
| `:organization_name` | The name of the organization the module belongs to. |
| `:module_name` | The name of the module you want to revert the deprecation of. |
| `:module_provider` | Specifies the Terraform provider that this module is used for. |
| `:module_version` | The module version you want to revert the deprecation of. |
Deprecating a module version adds warnings to the run output of any consumers using this module. Reverting the deprecation status removes warnings from the output of consumers and fully reinstates the module version.
| Status | Response | Reason |
| :---- | :---- | :---- |
| [200][] | [JSON API document][] | Successfully reverted a module version’s deprecation status and reinstated that version. |
| [404][] | [JSON API error object][] | This organization is not authorized to revert the deprecation of this module version, or the module version does not exist. |
| [422][] | [JSON API error object][] | Malformed request body, for example the request is missing attributes or uses the wrong types. |
| [500][] or [503][] | [JSON API error object][] | Failure occurred while reverting the deprecation of a module version. |
### Sample Payload
```json
{
  "data": {
    "type": "module-versions",
    "attributes": {
      "deprecation": {
        "deprecated-status": "Undeprecated"
      }
    }
  }
}
```
### Sample Request
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-modules/private/hashicorp/lb-http/google/11.0.0
```
### Sample Response
```json
{
"data": {
"type": "module-versions",
"id": "1"
}
}
```
---
page_title: Providers - API Docs - HCP Terraform
description: >-
Use the `/gpg-keys` endpoint to manage the GPG keys used to sign private providers. List, add, get, update, and delete GPG keys using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# GPG Keys API
These endpoints are only relevant to private providers. When you [publish a private provider](/terraform/cloud-docs/registry/publish-providers) to the HCP Terraform private registry, you must upload the public key of the GPG keypair used to sign the release. Refer to [Preparing and Adding a Signing Key](/terraform/registry/providers/publishing#preparing-and-adding-a-signing-key) for more details.
You need [owners team](/terraform/cloud-docs/users-teams-organizations/permissions#organization-owners) or [Manage Private Registry](/terraform/cloud-docs/users-teams-organizations/permissions#manage-private-registry) permissions to add, update, or delete GPG keys in a private registry.
## List GPG Keys
`GET /api/registry/:registry_name/v2/gpg-keys`
### Parameters
| Parameter | Description |
|------------------|--------------------|
| `:registry_name` | Must be `private`. |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling does not automatically encode URLs.
| Parameter | Description |
|---------------------|----------------------------------------------------------------------------------------------------------------------|
| `filter[namespace]` | **Required.** A comma-separated list of one or more namespaces. Each namespace must be an authorized HCP Terraform or Terraform Enterprise organization name. |
| `page[number]` | **Optional.** If omitted, the endpoint returns the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint returns 20 GPG keys per page. |
Gets a list of GPG keys belonging to the specified namespaces.
| Status | Response | Reason |
|---------|--------------------------------------------|-----------------------------------------------------------|
| [200][] | [JSON API document][] (`type: "gpg-keys"`) | Successfully fetched GPG keys |
| [400][] | [JSON API error object][] | Error - missing namespaces in request |
| [403][] | [JSON API error object][] | Forbidden - no authorized namespaces specified in request |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
"https://app.terraform.io/api/registry/private/v2/gpg-keys?filter%5Bnamespace%5D=my-organization,my-other-organization"
```
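If your tooling does not encode URLs automatically, you can also let `curl` assemble and encode the query string. The following sketch is equivalent to the request above; note that `curl` only encodes the value after `=`, so the bracketed parameter name is pre-encoded here:

```shell
# --get appends the --data-urlencode pairs to the URL as a query string.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --get \
  --data-urlencode "filter%5Bnamespace%5D=my-organization,my-other-organization" \
  https://app.terraform.io/api/registry/private/v2/gpg-keys
```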
### Sample Response
```json
{
"data": [
{
"type": "gpg-keys",
"id": "1",
"attributes": {
"ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----...",
"created-at": "2022-02-08T19:15:47Z",
"key-id": "C4E5E6C66C79C778",
"namespace": "my-other-organization",
"source": "",
"source-url": null,
"trust-signature": "",
"updated-at": "2022-02-08T19:15:47Z"
},
"links": {
"self": "/v2/gpg-keys/1"
}
},
{
"type": "gpg-keys",
"id": "140",
"attributes": {
"ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----...",
"created-at": "2022-04-28T21:32:11Z",
"key-id": "C4E5E6C66C79C778",
"namespace": "my-organization",
"source": "",
"source-url": null,
"trust-signature": "",
"updated-at": "2022-04-28T21:32:11Z"
},
"links": {
"self": "/v2/gpg-keys/140"
}
}
],
"links": {
"first": "/v2/gpg-keys?filter%5Bnamespace%5D=my-organization%2Cmy-other-organization&page%5Bnumber%5D=1&page%5Bsize%5D=15",
"last": "/v2/gpg-keys?filter%5Bnamespace%5D=my-organization%2Cmy-other-organization&page%5Bnumber%5D=1&page%5Bsize%5D=15",
"next": null,
"prev": null
},
"meta": {
"pagination": {
"page-size": 15,
"current-page": 1,
"next-page": null,
"prev-page": null,
"total-pages": 1,
"total-count": 2
}
}
}
```
## Add a GPG Key
`POST /api/registry/:registry_name/v2/gpg-keys`
### Parameters
| Parameter | Description |
| -------------------- | -------------------- |
| `:registry_name` | Must be `private`. |
Uploads a GPG key to a private registry, scoped to a namespace. The response provides a `key-id`, which is required to [Create a Provider Version](/terraform/cloud-docs/api-docs/private-registry/provider-versions-platforms#create-a-provider-version).
| Status | Response | Reason |
| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "gpg-keys"`) | Successfully uploads a GPG key to a private provider |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - not available for public providers |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------------------- | ------ | ------- | -------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"gpg-keys"`. |
| `data.attributes.namespace` | string | | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
| `data.attributes.ascii-armor` | string | | A valid gpg-key string. |
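The `ascii-armor` value is the ASCII-armored public key of the signing keypair. If you manage your keys with GPG, a sketch like the following exports it (the key ID shown is a placeholder; substitute your own):

```shell
# Write the ASCII-armored public key to a file; its contents become
# the data.attributes.ascii-armor value.
gpg --armor --export 32966F3FB5AC1129 > public.asc
```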
### Sample Payload
```json
{
"data": {
"type": "gpg-keys",
"attributes": {
"namespace": "hashicorp",
"ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINB...=txfz\n-----END PGP PUBLIC KEY BLOCK-----\n"
    }
  }
}
```
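Because the armored key spans many lines, its newlines must be escaped as `\n` inside the JSON payload, as shown above. If `jq` 1.6 or later is available, a sketch like this builds the payload from an exported `public.asc` file (a hypothetical file name from the export step above) and handles the escaping:

```shell
# --rawfile reads public.asc verbatim into $key; jq escapes it when
# serializing the object to JSON.
jq --null-input \
  --arg namespace "hashicorp" \
  --rawfile key public.asc \
  '{data: {type: "gpg-keys", attributes: {namespace: $namespace, "ascii-armor": $key}}}' \
  > payload.json
```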
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/registry/private/v2/gpg-keys
```
### Sample Response
```json
{
"data": {
"type": "gpg-keys",
"id": "23",
"attributes": {
"ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINB...=txfz\n-----END PGP PUBLIC KEY BLOCK-----\n",
"created-at": "2022-02-11T19:16:59Z",
"key-id": "32966F3FB5AC1129",
"namespace": "hashicorp",
"source": "",
"source-url": null,
"trust-signature": "",
"updated-at": "2022-02-11T19:16:59Z"
},
"links": {
"self": "/v2/gpg-keys/23"
}
}
}
```
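Because the `key-id` in this response is what you later pass when you [Create a Provider Version](/terraform/cloud-docs/api-docs/private-registry/provider-versions-platforms#create-a-provider-version), it can be convenient to capture it directly. A sketch using `jq`:

```shell
# POST the key and extract data.attributes.key-id from the response.
KEY_ID="$(
  curl -s \
    --header "Authorization: Bearer $TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    --request POST \
    --data @payload.json \
    https://app.terraform.io/api/registry/private/v2/gpg-keys \
  | jq -r '.data.attributes."key-id"'
)"
echo "$KEY_ID"
```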
## Get GPG Key
`GET /api/registry/:registry_name/v2/gpg-keys/:namespace/:key_id`
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------- |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider scoped to the GPG key. |
| `:key_id` | The id of the GPG key. |
Gets the content of a GPG key.
| Status | Response | Reason |
| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "gpg-keys"`) | Successfully fetched GPG key |
| [403][] | [JSON API error object][] | Forbidden - not available for public providers |
| [404][] | [JSON API error object][] | GPG key not found or user not authorized |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request GET \
https://app.terraform.io/api/registry/private/v2/gpg-keys/hashicorp/32966F3FB5AC1129
```
### Sample Response
```json
{
"data": {
"type": "gpg-keys",
"id": "2",
"attributes": {
"ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINB...=txfz\n-----END PGP PUBLIC KEY BLOCK-----\n",
"created-at": "2022-02-24T17:07:25Z",
"key-id": "32966F3FB5AC1129",
"namespace": "hashicorp",
"source": "",
"source-url": null,
"trust-signature": "",
"updated-at": "2022-02-24T17:07:25Z"
},
"links": {
"self": "/v2/gpg-keys/2"
}
}
}
```
## Update a GPG Key
`PATCH /api/registry/:registry_name/v2/gpg-keys/:namespace/:key_id`
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------- |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider scoped to the GPG key. |
| `:key_id` | The id of the GPG key. |
Updates the specified GPG key. Only the `namespace` attribute can be updated, and `namespace` has to match an `organization` the user has permission to access.
| Status | Response | Reason |
| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "gpg-keys"`) | Successfully updates a GPG key |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - not available for public providers |
| [404][] | [JSON API error object][] | GPG key not found or user not authorized |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ----------------------------- | ------ | ------- | -------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"gpg-keys"`. |
| `data.attributes.namespace` | string | | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
### Sample Payload
```json
{
"data": {
"type": "gpg-keys",
"attributes": {
"namespace": "new-namespace",
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/registry/private/v2/gpg-keys/hashicorp/32966F3FB5AC1129
```
### Sample Response
```json
{
"data": {
"type": "gpg-keys",
"id": "2",
"attributes": {
"ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINB...=txfz\n-----END PGP PUBLIC KEY BLOCK-----\n",
"created-at": "2022-02-24T17:07:25Z",
"key-id": "32966F3FB5AC1129",
"namespace": "new-name",
"source": "",
"source-url": null,
"trust-signature": "",
"updated-at": "2022-02-24T17:12:10Z"
},
"links": {
"self": "/v2/gpg-keys/2"
}
}
}
```
## Delete a GPG Key
`DELETE /api/registry/:registry_name/v2/gpg-keys/:namespace/:key_id`
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------- |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider scoped to the GPG key. |
| `:key_id` | The id of the GPG key. |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Status | Response | Reason |
| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "gpg-keys"`) | Successfully deletes a GPG key |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - not available for public providers |
| [404][] | [JSON API error object][] | GPG key not found or user not authorized |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/registry/private/v2/gpg-keys/hashicorp/32966F3FB5AC1129
```
---
page_title: Providers - API Docs - HCP Terraform
description: >-
Use the `/registry-providers` endpoint to curate providers in your private registry. List, create, get, and delete providers using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Registry Providers API
You can add publicly curated providers from the [Terraform Registry](https://registry.terraform.io/) and custom, private providers to your HCP Terraform private registry. The private registry stores a pointer to public providers so that you can view their data from within HCP Terraform. This lets you clearly designate all of the providers that are recommended for the organization and makes them centrally accessible.
All members of an organization can view and use both public and private providers, but you need [owners team](/terraform/cloud-docs/users-teams-organizations/permissions#organization-owners) or [Manage Private Registry](/terraform/cloud-docs/users-teams-organizations/permissions#manage-private-registry) permissions to add, update, or delete them in the private registry.
## HCP Terraform Registry Implementation
For publicly curated providers, the HCP Terraform Registry acts as a proxy to the [Terraform Registry](https://registry.terraform.io) for the following:
- The public registry discovery endpoints have the path prefix provided in the [discovery document](/terraform/registry/api-docs#service-discovery), which is currently `/api/registry/public/v1`.
- [Authentication](/terraform/cloud-docs/api-docs#authentication) is handled the same as all other HCP Terraform endpoints.
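For example, a minimal sketch of a proxied request, assuming the public `hashicorp/aws` provider and the standard provider registry protocol's `versions` endpoint:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/registry/public/v1/providers/hashicorp/aws/versions
```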
## List Terraform Registry Providers for an Organization
`GET /organizations/:organization_name/registry-providers`
### Parameters
| Parameter | Description |
| -------------------- | -------------------------------------------------------------- |
| `:organization_name` | The name of the organization to list available providers from. |
Lists the providers included in the private registry for the specified organization.
| Status | Response | Reason |
| ------- | ---------------------------------------------------- | ---------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-providers"`) | Success |
| [404][] | [JSON API error object][] | Providers not found or user unauthorized to perform action |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `q` | **Optional.** A search query string. Providers are searchable by both their name and their namespace fields. |
| `filter[field name]` | **Optional.** If specified, restricts results to those with the matching field name value. Valid values are `registry_name` and `organization_name`. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 registry providers per page. |
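As an illustrative sketch, the request below lists only private providers and sets a page size of 10; note the percent-encoded `[` and `]` characters in the query string:

```shell
curl \
  --request GET \
  --header "Authorization: Bearer $TOKEN" \
  "https://app.terraform.io/api/v2/organizations/my-organization/registry-providers?filter%5Bregistry_name%5D=private&page%5Bsize%5D=10"
```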
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/organizations/my-organization/registry-providers
```
### Sample Response
```json
{
"data": [
{
"id": "prov-kwt1cBiX2SdDz38w",
"type": "registry-providers",
"attributes": {
"name": "aws",
"namespace": "my-organization",
"created-at": "2021-04-07T19:01:18.528Z",
"updated-at": "2021-04-07T19:01:19.863Z",
"registry-name": "public",
"permissions": {
"can-delete": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-providers/public/my-organization/aws"
}
},
{
"id": "prov-PopQnMtYDCcd3PRX",
"type": "registry-providers",
"attributes": {
"name": "aurora",
"namespace": "my-organization",
"created-at": "2021-04-07T19:04:41.375Z",
"updated-at": "2021-04-07T19:04:42.828Z",
"registry-name": "public",
"permissions": {
"can-delete": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-providers/public/my-organization/aurora"
}
},
...,
],
"links": {
"self": "https://app.terraform.io/api/v2/organizations/my-organization/registry-providers?page%5Bnumber%5D=1&page%5Bsize%5D=6",
"first": "https://app.terraform.io/api/v2/organizations/my-organization/registry-providers?page%5Bnumber%5D=1&page%5Bsize%5D=6",
"prev": null,
"next": "https://app.terraform.io/api/v2/organizations/my-organization/registry-providers?page%5Bnumber%5D=2&page%5Bsize%5D=6",
"last": "https://app.terraform.io/api/v2/organizations/my-organization/registry-providers?page%5Bnumber%5D=29&page%5Bsize%5D=6"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 6,
"prev-page": null,
"next-page": 2,
"total-pages": 29,
"total-count": 169
}
}
}
```
## Create a Provider
`POST /organizations/:organization_name/registry-providers`
Use this endpoint to create both public and private providers:
- **Public providers:** The public provider record will be available in the organization's registry provider list immediately after creation. You cannot create versions for public providers; you must use the versions available on the Terraform Registry.
- **Private providers:** The private provider record will be available in the organization's registry provider list immediately after creation, but you must [create a version and upload release assets](/terraform/cloud-docs/registry/publish-providers#publishing-a-provider-and-creating-a-version) before consumers can use it. The private registry does not automatically update private providers when you release new versions. You must add each new version with the [Create a Provider Version](/terraform/cloud-docs/api-docs/private-registry/provider-versions-platforms#create-a-provider-version) endpoint.
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a provider in. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or to a member of the "owners" team. |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Status | Response | Reason |
| ------- | ---------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-providers"`) | Successfully published provider |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
~> **Important:** For private providers, you must also create a version, a platform, and upload release assets before consumers can use the provider. Refer to [Publishing a Private Provider](/terraform/cloud-docs/registry/publish-providers) for more details.
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------------ |
| `data.type` | string | | Must be `"registry-providers"`. |
| `data.attributes.name` | string | | The name of the provider. |
| `data.attributes.namespace` | string | | The namespace of the provider. For private providers this is the same as the `:organization_name` parameter. |
| `data.attributes.registry-name` | string | | Whether this is a publicly maintained provider or private. Must be either `public` or `private`. |
### Sample Payload (Private Provider)
```json
{
"data": {
"type": "registry-providers",
"attributes": {
"name": "aws",
"namespace": "hashicorp",
"registry-name": "private"
}
}
}
```
### Sample Payload (Public Provider)
```json
{
"data": {
"type": "registry-providers",
"attributes": {
"name": "aws",
"namespace": "hashicorp",
"registry-name": "public"
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/registry-providers
```
### Sample Response (Private Provider)
```json
{
"data": {
"id": "prov-cmEmLstBfjNNA9F3",
"type": "registry-providers",
"attributes": {
"name": "aws",
"namespace": "hashicorp",
"registry-name": "private",
"created-at": "2022-02-11T19:16:59.533Z",
"updated-at": "2022-02-11T19:16:59.533Z",
"permissions": {
"can-delete": true
}
},
"relationships": {
"organization": {
"data": {
"id": "hashicorp",
"type": "organizations"
}
},
"versions": {
"data": [],
"links": {
"related": "/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws"
}
}
},
"links": {
"self": "/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws"
}
}
}
```
### Sample Response (Public Provider)
```json
{
"data": {
"id": "prov-fZn7uHu99ZCpAKZJ",
"type": "registry-providers",
"attributes": {
"name": "aws",
"namespace": "hashicorp",
"registry-name": "public",
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T19:36:56.288Z",
"permissions": {
"can-delete": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-providers/public/hashicorp/aws"
}
}
}
```
## Get a Provider
`GET /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name`
### Parameters
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------ |
| `:organization_name` | The name of the organization the provider belongs to. |
| `:registry_name` | Whether this is a publicly maintained provider or private. Must be either `public` or `private`. |
| `:namespace` | The namespace of the provider. For private providers this is the same as the `:organization_name` parameter. |
| `:name` | The provider name. |
| Status | Response | Reason |
| ------- | ---------------------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-providers"`) | Success |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user unauthorized to perform action |
### Sample Request (Private Provider)
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws
```
### Sample Request (Public Provider)
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/registry-providers/public/hashicorp/aws
```
### Sample Response (Private Provider)
```json
{
"data": {
"id": "prov-cmEmLstBfjNNA9F3",
"type": "registry-providers",
"attributes": {
"name": "aws",
"namespace": "hashicorp",
"created-at": "2022-02-11T19:16:59.533Z",
"updated-at": "2022-02-11T19:16:59.533Z",
"registry-name": "private",
"permissions": {
"can-delete": true
}
},
"relationships": {
"organization": {
"data": {
"id": "hashicorp",
"type": "organizations"
}
},
"versions": {
"data": [
{
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions"
}
],
"links": {
"related": "/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws"
}
}
},
"links": {
"self": "/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws"
}
}
}
```
### Sample Response (Public Provider)
```json
{
"data": {
"id": "prov-fZn7uHu99ZCpAKZJ",
"type": "registry-providers",
"attributes": {
"name": "aws",
"namespace": "hashicorp",
"registry-name": "public",
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T20:16:20.538Z",
"permissions": {
"can-delete": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-providers/public/hashicorp/aws"
}
}
}
```
## Delete a Provider
`DELETE /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name`
### Parameters
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `:organization_name` | The name of the organization to delete a provider from. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or to a member of the "owners" team. |
| `:registry_name` | Whether this is a publicly maintained provider or private. Must be either `public` or `private`. |
| `:namespace` | The namespace of the provider that will be deleted. |
| `:name` | The name of the provider that will be deleted. |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Status | Response | Reason |
| ------- | ------------------------- | ----------------------------------------------------------- |
| [204][] | No Content | Success |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user not authorized to perform action |
### Sample Request (Private Provider)
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws
```
### Sample Request (Public Provider)
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/my-organization/registry-providers/public/hashicorp/aws
```
---
page_title: Modules - API Docs - HCP Terraform
description: >-
Use the `/registry-modules` endpoint to manage modules published to an organization's private registry. List, get, create, publish, and delete modules and versions using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Registry Modules API
-> **Note:** Public Module Curation is only available in HCP Terraform. Where applicable, the `registry_name` parameter must be `private` for Terraform Enterprise.
## HCP Terraform Registry Implementation
The HCP Terraform Module Registry implements the [Registry standard API](/terraform/registry/api-docs) for consuming/exposing private modules. Refer to the [Module Registry HTTP API](/terraform/registry/api-docs) to perform the following:
- Browse available modules
- Search modules by keyword
- List available versions for a specific module
- Download source code for a specific module version
- List latest version of a module for all providers
- Get the latest version for a specific module provider
- Get a specific module
- Download the latest version of a module
For publicly curated modules, the HCP Terraform Module Registry acts as a proxy to the [Terraform Registry](https://registry.terraform.io) for the following:
- List available versions for a specific module
- Get a specific module
- Get the latest version for a specific module provider
The HCP Terraform Module Registry endpoints differ from the Module Registry endpoints in the following ways:
- The `:namespace` parameter should be replaced with the organization name for private modules.
- The private module registry discovery endpoints have the path prefix provided in the [discovery document](/terraform/registry/api-docs#service-discovery), which is currently `/api/registry/v1`.
- The public module registry discovery endpoints have the path prefix provided in the [discovery document](/terraform/registry/api-docs#service-discovery), which is currently `/api/registry/public/v1`.
- [Authentication](/terraform/cloud-docs/api-docs#authentication) is handled the same as all other HCP Terraform endpoints.
### Sample Registry Request (private module)
List available versions for the `consul` module for the `aws` provider on the module registry published from the GitHub organization `my-gh-repo-org`:
```shell
$ curl https://registry.terraform.io/v1/modules/my-gh-repo-org/consul/aws/versions
```
The same request for the same module and provider on the HCP Terraform module registry for the `my-cloud-org` organization:
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/registry/v1/modules/my-cloud-org/consul/aws/versions
```
### Sample Proxy Request (public module)
List available versions for the `consul` module for the `aws` provider on the module registry published from the GitHub organization `my-gh-repo-org`:
```shell
$ curl https://registry.terraform.io/v1/modules/my-gh-repo-org/consul/aws/versions
```
The same request for the same module and provider on the HCP Terraform module registry:
```shell
$ curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/registry/public/v1/modules/my-gh-repo-org/consul/aws/versions
```
## List Registry Modules for an Organization
`GET /organizations/:organization_name/registry-modules`
| Parameter | Description |
| -------------------- | ------------------------------------------------------------ |
| `:organization_name` | The name of the organization to list available modules from. |
Lists the modules that are available to a given organization. This includes the full list of publicly curated and private modules and is filterable.
| Status | Response | Reason |
| ------- | -------------------------------------------------- | -------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-modules"`) | The request was successful |
| [404][] | [JSON API error object][] | Modules not found or user unauthorized to perform action |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `q`                  | **Optional.** A search query string. Modules are searchable by their name, namespace, and provider fields.                                          |
| `filter[field name]` | **Optional.** If specified, restricts results to those with the matching field name value. Valid values are `registry_name`, `provider`, and `organization_name`. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 registry modules per page. |
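As an illustrative sketch, the request below searches for modules matching `vpc` and restricts results to the `aws` provider; the `[` and `]` in the filter parameter are percent-encoded:

```shell
curl \
  --request GET \
  --header "Authorization: Bearer $TOKEN" \
  "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?q=vpc&filter%5Bprovider%5D=aws"
```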
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules
```
### Sample Response
```json
{
"data": [
{
"id": "mod-kwt1cBiX2SdDz38w",
"type": "registry-modules",
"attributes": {
"name": "api-gateway",
"namespace": "my-organization",
"provider": "alicloud",
"status": "setup_complete",
"version-statuses": [
{
"version": "1.1.0",
"status": "ok"
}
],
"created-at": "2021-04-07T19:01:18.528Z",
"updated-at": "2021-04-07T19:01:19.863Z",
"registry-name": "private",
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/api-gateway/alicloud"
}
},
{
"id": "mod-PopQnMtYDCcd3PRX",
"type": "registry-modules",
"attributes": {
"name": "aurora",
"namespace": "my-organization",
"provider": "aws",
"status": "setup_complete",
"version-statuses": [
{
"version": "4.1.0",
"status": "ok"
}
],
"created-at": "2021-04-07T19:04:41.375Z",
"updated-at": "2021-04-07T19:04:42.828Z",
"registry-name": "private",
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/aurora/aws"
}
},
...,
],
"links": {
"self": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=1&page%5Bsize%5D=6",
"first": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=1&page%5Bsize%5D=6",
"prev": null,
"next": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=2&page%5Bsize%5D=6",
"last": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=29&page%5Bsize%5D=6"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 6,
"prev-page": null,
"next-page": 2,
"total-pages": 29,
"total-count": 169
}
}
}
```
## Publish a Private Module from a VCS
~> **Deprecation warning**: the following endpoint `POST /registry-modules` is replaced by the below endpoint and will be removed from future versions of the API!
`POST /organizations/:organization_name/registry-modules/vcs`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the **Manage modules** permission enabled. |
Publishes a new private registry module from a VCS repository, with module versions managed automatically by the repository's tags. The publishing process will fetch all tags in the source repository that look like [SemVer](https://semver.org/) versions with an optional 'v' prefix. For each version, the tag is cloned and the configuration parsed to populate module details (input and output variables, readme, submodules, etc.). The [Module Registry Requirements](/terraform/registry/modules/publish#requirements) define additional requirements on naming, standard module structure, and tags for releases.
| Status | Response | Reason |
| ------- | -------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-modules"`) | Successfully published module |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| --------------------------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"registry-modules"`. |
| `data.attributes.vcs-repo.identifier` | string | | The repository from which to ingress the configuration. |
| `data.attributes.vcs-repo.oauth-token-id`     | string |         | The VCS Connection (OAuth Connection + Token) to use. Get this ID from the [oauth-tokens](/terraform/cloud-docs/api-docs/oauth-tokens) endpoint. You cannot specify this value if `github-app-installation-id` is specified. |
| `data.attributes.vcs-repo.github-app-installation-id` | string |         | The VCS Connection GitHub App Installation to use. Find this ID on the account settings page. Requires previously authorizing the GitHub App and generating a user-to-server token. Manage the token from **Account Settings** within HCP Terraform. You cannot specify this value if `oauth-token-id` is specified. |
| `data.attributes.vcs-repo.display_identifier` | string | | The display identifier for the repository. For most VCS providers outside of Bitbucket Cloud, this identifier matches the `data.attributes.vcs-repo.identifier` string. |
| `data.attributes.no-code` | boolean | | Allows you to enable or disable the no-code publishing workflow for a module. |
| `data.attributes.vcs-repo.branch` | string | | The repository branch to publish the module from if you are using the branch-based publishing workflow. If omitted, the module will be published using the tag-based publishing workflow. |
A VCS repository identifier is a reference to a VCS repository in the format `:org/:repo`, where `:org` and `:repo` refer to the organization (or project key, for Bitbucket Data Center) and repository in your VCS provider. For example (hypothetical names), a GitHub identifier looks like `my-org/terraform-aws-vpc`. The format for Azure DevOps is `:org/:project/_git/:repo`.
The OAuth Token ID identifies the VCS connection, and therefore the organization, that the module will be created in.
### Sample Payload
```json
{
"data": {
"attributes": {
"vcs-repo": {
"identifier":"lafentres/terraform-aws-my-module",
"oauth-token-id":"ot-hmAyP66qk2AMVdbJ",
"display_identifier":"lafentres/terraform-aws-my-module",
"branch": "main"
},
"no-code": true
},
"type":"registry-modules"
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/vcs
```
### Sample Response
```json
{
"data": {
"id": "mod-fZn7uHu99ZCpAKZJ",
"type": "registry-modules",
"attributes": {
"name": "my-module",
"namespace": "my-organization",
"registry-name": "private",
"provider": "aws",
"status": "pending",
"version-statuses": [],
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T19:36:56.288Z",
"vcs-repo": {
"branch": "",
"ingress-submodules": true,
"identifier": "lafentres/terraform-aws-my-module",
"display-identifier": "lafentres/terraform-aws-my-module",
"oauth-token-id": "ot-hmAyP66qk2AMVdbJ",
"webhook-url": "https://app.terraform.io/webhooks/vcs/a12b3456..."
},
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws"
}
}
}
```
## Create a Module (with no VCS connection)
`POST /organizations/:organization_name/registry-modules`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the **Manage modules** permission enabled. |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
Creates a new registry module without a backing VCS repository.
#### Private modules
After creating a module, you must create and upload a version before the module is usable. Modules created this way do not automatically update with new versions; instead, you must explicitly create and upload each new version with the [Create a Module Version](#create-a-module-version) endpoint.
#### Public modules
When created, the public module record will be available in the organization's registry module list. You cannot create versions for public modules as they are maintained in the public registry.
| Status | Response | Reason |
| ------- | -------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-modules"`) | Successfully published module |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - public module curation disabled |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"registry-modules"`. |
| `data.attributes.name` | string | | The name of this module. May contain alphanumeric characters, with dashes and underscores allowed in non-leading or trailing positions. Maximum length is 64 characters. |
| `data.attributes.provider` | string | | Specifies the Terraform provider that this module is used for. May contain lowercase alphanumeric characters. Maximum length is 64 characters. |
| `data.attributes.namespace` | string | | The namespace of this module. Cannot be set for private modules. May contain alphanumeric characters, with dashes and underscores allowed in non-leading or trailing positions. Maximum length is 64 characters. |
| `data.attributes.registry-name` | string | | Indicates whether this is a publicly maintained module or private. Must be either `public` or `private`. |
| `data.attributes.no-code`       | boolean |         | Allows you to enable or disable the no-code publishing workflow for a module. |
### Sample Payload (private module)
```json
{
"data": {
"type": "registry-modules",
"attributes": {
"name": "my-module",
"provider": "aws",
"registry-name": "private",
"no-code": true
}
}
}
```
### Sample Payload (public module)
```json
{
"data": {
"type": "registry-modules",
"attributes": {
"name": "vpc",
"namespace": "terraform-aws-modules",
"provider": "aws",
"registry-name": "public",
"no-code": true
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules
```
### Sample Response (private module)
```json
{
"data": {
"id": "mod-fZn7uHu99ZCpAKZJ",
"type": "registry-modules",
"attributes": {
"name": "my-module",
"namespace": "my-organization",
"registry-name": "private",
"provider": "aws",
"status": "pending",
"version-statuses": [],
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T19:36:56.288Z",
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws"
}
}
}
```
### Sample Response (public module)
```json
{
"data": {
"id": "mod-fZn7uHu99ZCpAKZJ",
"type": "registry-modules",
"attributes": {
"name": "vpc",
"namespace": "terraform-aws-modules",
"registry-name": "public",
"provider": "aws",
"status": "pending",
"version-statuses": [],
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T19:36:56.288Z",
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc/aws"
}
}
}
```
## Create a Module Version
~> **Deprecation warning**: the following endpoint `POST /registry-modules/:organization_name/:name/:provider/versions` is replaced by the below endpoint and will be removed from future versions of the API!
`POST /organizations/:organization_name/registry-modules/:registry_name/:namespace/:name/:provider/versions`
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the **Manage modules** permission enabled. |
| `:namespace`         | The namespace of the module for which the version is being created. For private modules this is the same as the `:organization_name` parameter. |
| `:name` | The name of the module for which the version is being created. |
| `:provider` | The name of the provider for which the version is being created. |
| `:registry_name`     | Must be `private`. |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
Creates a new registry module version. This endpoint applies only to private modules without a VCS repository and to VCS-linked branch-based modules. VCS-linked tag-based modules automatically create new versions for new tags. After creating the version for a non-VCS-backed module, you should upload the module to the link that HCP Terraform returns, as shown in the combined sketch after the upload section below.
| Status | Response | Reason |
| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-module-versions"`) | Successfully published module version |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - not available for public modules |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ---------------------------- | ------ | ------- | ---------------------------------------------------- |
| `data.type` | string | | Must be `"registry-module-versions"`. |
| `data.attributes.version` | string | | A valid semver version string. |
| `data.attributes.commit-sha` | string | | The commit SHA to use to create the module version. |
### Sample Payload
```json
{
"data": {
"type": "registry-module-versions",
"attributes": {
"version": "1.2.3",
"commit-sha": "abcdef12345"
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws/versions
```
### Sample Response
```json
{
"data": {
"id": "modver-qjjF7ArLXJSWU3WU",
"type": "registry-module-versions",
"attributes": {
"source": "tfe-api",
"status": "pending",
"version": "1.2.3",
"created-at": "2018-09-24T20:47:20.931Z",
"updated-at": "2018-09-24T20:47:20.931Z"
},
"relationships": {
"registry-module": {
"data": {
"id": "1881",
"type": "registry-modules"
}
}
},
"links": {
"upload": "https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox..."
}
}
}
```
## Add a Module Version (Private Module)
`PUT https://archivist.terraform.io/v1/object/<UNIQUE OBJECT ID>`
**The URL is provided in the `upload` links attribute in the `registry-module-versions` resource.**
### Expected Archive Format
HCP Terraform expects the module version uploaded to be a gzip tarball with the module in the root (not in a subdirectory).
Given the following folder structure:
```
terraform-null-test
├── README.md
├── examples
│ └── default
│ ├── README.md
│ └── main.tf
└── main.tf
```
Package the files in an archive format by running `tar zcvf module.tar.gz *` in the module's directory.
```
~$ cd terraform-null-test
terraform-null-test$ tar zcvf module.tar.gz *
a README.md
a examples
a examples/default
a examples/default/main.tf
a examples/default/README.md
a main.tf
```
### Sample Request
```shell
curl \
--header "Content-Type: application/octet-stream" \
--request PUT \
--data-binary @module.tar.gz \
https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox...
```
After the registry module version is successfully parsed, its status will become `"ok"`.
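Putting the two steps together, the following is a minimal sketch, assuming `jq` is installed, `payload.json` contains a version payload like the one shown above, and `module.tar.gz` was packaged as described. It creates the version, extracts the returned upload link, and uploads the archive:

```shell
# Create the module version and capture the upload URL from the response.
UPLOAD_URL=$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws/versions \
  | jq -r '.data.links.upload')

# Upload the gzip tarball to the returned Archivist URL.
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @module.tar.gz \
  "$UPLOAD_URL"
```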
## Get a Module
~> **Deprecation warning**: the following endpoint `GET /registry-modules/show/:organization_name/:name/:provider` is replaced by the below endpoint and will be removed from future versions of the API!
`GET /organizations/:organization_name/registry-modules/:registry_name/:namespace/:name/:provider`
### Parameters
| Parameter | Description |
| -------------------- | ----------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization the module belongs to. |
| `:namespace` | The namespace of the module. For private modules this is the name of the organization that owns the module. |
| `:name` | The module name. |
| `:provider` | The module provider. Must be lowercase alphanumeric. |
| `:registry_name`     | Either `public` or `private`. |
| Status | Response | Reason |
| ------- | -------------------------------------------------- | ------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-modules"`) | The request was successful |
| [403][] | [JSON API error object][] | Forbidden - public module curation disabled |
| [404][] | [JSON API error object][] | Module not found or user unauthorized to perform action |
### Sample Request (private module)
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws
```
### Sample Request (public module)
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc/aws
```
### Sample Response (private module)
```json
{
"data": {
"id": "mod-fZn7uHu99ZCpAKZJ",
"type": "registry-modules",
"attributes": {
"name": "my-module",
"provider": "aws",
"namespace": "my-organization",
"registry-name": "private",
"status": "setup_complete",
"version-statuses": [
{
"version": "1.0.0",
"status": "ok"
}
],
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T20:16:20.538Z",
"vcs-repo": {
"branch": "",
"ingress-submodules": true,
"identifier": "lafentres/terraform-aws-my-module",
"display-identifier": "lafentres/terraform-aws-my-module",
"oauth-token-id": "ot-hmAyP66qk2AMVdbJ",
"webhook-url": "https://app.terraform.io/webhooks/vcs/a12b3456..."
},
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws"
}
}
}
```
### Sample Response (public module)
```json
{
"data": {
"id": "mod-fZn7uHu99ZCpAKZJ",
"type": "registry-modules",
"attributes": {
"name": "vpc",
"provider": "aws",
"namespace": "terraform-aws-modules",
"registry-name": "public",
"status": "setup_complete",
"version-statuses": [],
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T20:16:20.538Z",
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc/aws"
}
}
}
```
## Update a Private Registry Module
`PATCH /organizations/:organization_name/registry-modules/private/:namespace/:name/:provider/`
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to update a module from. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |
| `:namespace` | The module namespace that the update affects. For private modules this is the name of the organization that owns the module. |
| `:name` | The module name that the update affects. |
| `:provider` | The name of the provider of the module that is being updated. |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
|-----------------------------------------------|----------------|------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data.type` | string | | Must be `"registry-modules"`. |
| `data.attributes.vcs-repo.branch` | string | (previous value) | The repository branch that Terraform executes tests and publishes new versions from. This cannot be used with the `data.attributes.vcs-repo.tags` key. |
| `data.attributes.vcs-repo.tags` | boolean | (previous value) | Whether the registry module should be tag-based. This cannot be used with the `data.attributes.vcs-repo.branch` key. |
| `data.attributes.test-config.tests-enabled` | boolean | (previous value) | Allows you to enable or disable tests for the module. |
### Sample Payload
```json
{
"data": {
"attributes": {
"vcs-repo": {
"branch": "main",
"tags": false
},
"test-config": {
"tests-enabled": true
}
},
"type": "registry-modules"
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws/
```
### Sample Response
```json
{
"data": {
"id": "mod-fZn7uHu99ZCpAKZJ",
"type": "registry-modules",
"attributes": {
"name": "my-module",
"namespace": "my-organization",
"registry-name": "private",
"provider": "aws",
"status": "pending",
"version-statuses": [],
"created-at": "2020-07-09T19:36:56.288Z",
"updated-at": "2020-07-09T19:36:56.288Z",
"vcs-repo": {
"branch": "main",
"ingress-submodules": true,
"identifier": "lafentres/terraform-aws-my-module",
"display-identifier": "lafentres/terraform-aws-my-module",
"oauth-token-id": "ot-hmAyP66qk2AMVdbJ",
"webhook-url": "https://app.terraform.io/webhooks/vcs/a12b3456..."
},
"permissions": {
"can-delete": true,
"can-resync": true,
"can-retry": true
},
"test-config": {
"id": "tc-tcR6bxV5zE75Zb3B",
"tests-enabled": true
}
},
"relationships": {
"organization": {
"data": {
"id": "my-organization",
"type": "organizations"
}
}
},
"links": {
"self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws"
}
}
}
```
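Conversely, to switch the module back to tag-based publishing, a payload along the following lines should work (a sketch; it clears `branch` while enabling `tags`, since the two settings cannot be used together):
```json
{
  "data": {
    "attributes": {
      "vcs-repo": {
        "branch": "",
        "tags": true
      }
    },
    "type": "registry-modules"
  }
}
```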
## Delete a Module
<div className="alert alert-warning" role="alert">
**Deprecation warning**: the following endpoints:
- `POST /registry-modules/actions/delete/:organization_name/:name/:provider/:version`
- `POST /registry-modules/actions/delete/:organization_name/:name/:provider`
- `POST /registry-modules/actions/delete/:organization_name/:name`
are replaced by the below endpoints and will be removed from future versions of the API!
</div>
- `DELETE /organizations/:organization_name/registry-modules/:registry_name/:namespace/:name/:provider/:version`
- `DELETE /organizations/:organization_name/registry-modules/:registry_name/:namespace/:name/:provider`
- `DELETE /organizations/:organization_name/registry-modules/:registry_name/:namespace/:name`
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to delete a module from. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:namespace` | The module namespace that the deletion will affect. For private modules this is the name of the organization that owns the module. |
| `:name` | The module name that the deletion will affect. |
| `:provider` | If specified, the provider for the module that the deletion will affect. |
| `:version` | If specified, the version for the module and provider that will be deleted. |
| `:registry_name`     | Either `public` or `private`.                                                                                                                                                                               |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
When removing modules, there are three versions of the endpoint, depending on how many parameters are specified.
- If all parameters (module namespace, name, provider, and version) are specified, the specified version for the given provider of the module is deleted.
- If module namespace, name, and provider are specified, the specified provider for the given module is deleted along with all its versions.
- If only module namespace and name are specified, the entire module is deleted.
For public modules, only the endpoint specifying the module namespace and name is valid; the other DELETE endpoints return a 404.
Deleting a public module only removes the record from the organization's HCP Terraform registry and does not remove the public module from registry.terraform.io.
If a version deletion would leave a provider with no versions, the provider will be deleted. If a provider deletion would leave a module with no providers, the module will be deleted.
| Status | Response | Reason |
| ------- | ------------------------- | ------------------------------------------------------------- |
| [204][] | No Content | Success |
| [403][] | [JSON API error object][] | Forbidden - public module curation disabled |
| [404][] | [JSON API error object][] | Module, provider, or version not found or user not authorized |
### Sample Request (private module)
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws/2.0.0
```
### Sample Request (public module)
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc
```
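### Sample Request (private module, all providers and versions)
To delete an entire private module, including every provider and version it contains, omit the provider and version segments from the URL:
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module
```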
---
page_title: Private Provider Versions and Platforms - API Docs - HCP Terraform
description: >-
  Use the `/registry-providers` endpoint to manage private providers in your private registry. Create, get, and delete versions, and create, get, and delete platforms using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Private Provider Versions and Platforms API
These endpoints are only relevant to private providers. When you [publish a private provider](/terraform/cloud-docs/registry/publish-providers) to the HCP Terraform private registry, you must also create at least one version and at least one platform for that version before consumers can use the provider in configurations. Unlike the public Terraform Registry, the private registry does not automatically upload new releases. You must manually add new provider versions and the associated release files.
All members of an organization can view and use both public and private providers, but you need [owners team](/terraform/cloud-docs/users-teams-organizations/permissions#organization-owners) or [Manage Private Registry](/terraform/cloud-docs/users-teams-organizations/permissions#manage-private-registry) permissions to add, update, or delete provider versions and platforms in the private registry.
## Create a Provider Version
`POST /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions`
The private registry does not automatically update private providers when you release new versions. You must use this endpoint to add each new version. Consumers cannot use new versions until you upload all [required release files](/terraform/cloud-docs/registry/publish-providers#release-files) and [Create a Provider Platform](#create-a-provider-platform).
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a provider in. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider for which the version is being created. For private providers this is the same as the `:organization_name` parameter. |
| `:name` | The name of the provider for which the version is being created. |
Creates a new registry provider version. This endpoint only applies to private providers.
| Status | Response | Reason |
| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-provider-versions"`) | Success |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - not available for public providers |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| --------------------------- | ------ | ------- | ----------------------------------------------------------------- |
| `data.type` | string | | Must be `"registry-provider-versions"`. |
| `data.attributes.version` | string | | A valid semver version string. |
| `data.attributes.key-id` | string | | A valid gpg-key string. |
| `data.attributes.protocols` | array  |         | An array of Terraform provider API versions that this version supports. Must contain one or more of the following values: `"4.0"`, `"5.0"`, `"6.0"`. |
-> **Note:** Only Terraform 0.13 and later support third-party provider registries, and that Terraform version requires provider API version 5.0 or later. So you do not need to list major versions 4.0 or earlier in the `protocols` attribute.
### Sample Payload
```json
{
"data": {
"type": "registry-provider-versions",
"attributes": {
"version": "3.1.1",
"key-id": "32966F3FB5AC1129",
"protocols": ["5.0"]
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions
```
### Sample Response
```json
{
"data": {
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions",
"attributes": {
"version": "3.1.1",
"created-at": "2022-02-11T19:16:59.876Z",
"updated-at": "2022-02-11T19:16:59.876Z",
"key-id": "32966F3FB5AC1129",
"protocols": ["5.0"],
"permissions": {
"can-delete": true,
"can-upload-asset": true
},
"shasums-uploaded": false,
"shasums-sig-uploaded": false
},
"relationships": {
"registry-provider": {
"data": {
"id": "prov-cmEmLstBfjNNA9F3",
"type": "registry-providers"
}
},
"platforms": {
"data": [],
"links": {
"related": "/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms"
}
}
},
"links": {
"shasums-upload": "https://archivist.terraform.io/v1/object/dmF1b...",
"shasums-sig-upload": "https://archivist.terraform.io/v1/object/dmF1b..."
}
}
}
```
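Consumers cannot use the new version until the `SHA256SUMS` and `SHA256SUMS.sig` release files are uploaded. A minimal sketch of those uploads, assuming your release process produced files named `terraform-provider-aws_3.1.1_SHA256SUMS` and `terraform-provider-aws_3.1.1_SHA256SUMS.sig`, is an HTTP `PUT` of each file to the corresponding Archivist link from the response above:
```shell
# PUT the checksums file to the "shasums-upload" link
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @terraform-provider-aws_3.1.1_SHA256SUMS \
  https://archivist.terraform.io/v1/object/dmF1b...

# PUT the signature file to the "shasums-sig-upload" link
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @terraform-provider-aws_3.1.1_SHA256SUMS.sig \
  https://archivist.terraform.io/v1/object/dmF1b...
```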
## Get All Versions for a Single Provider
`GET /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/`
### Parameters
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization the provider belongs to. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
| `:name` | The provider name. |
| Status | Response | Reason |
| ------- | ---------------------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-provider-versions"`) | Success                                                    |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user unauthorized to perform action |
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions
```
### Sample Response
```json
{
"data": [
{
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions",
"attributes": {
"version": "3.1.1",
"created-at": "2022-02-11T19:16:59.876Z",
"updated-at": "2022-02-11T19:16:59.876Z",
"key-id": "32966F3FB5AC1129",
"protocols": ["5.0"],
"permissions": {
"can-delete": true,
"can-upload-asset": true
},
"shasums-uploaded": true,
"shasums-sig-uploaded": true
},
"relationships": {
"registry-provider": {
"data": {
"id": "prov-cmEmLstBfjNNA9F3",
"type": "registry-providers"
}
},
"platforms": {
"data": [
{
"id": "provpltfrm-GSHhNzptr9s3WoLD",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-A1PHitiM2KkKpVoM",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-BLJWvWyJ2QMs525k",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-qQYosUguetYtXGzJ",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-pjDHFN46y193bS7t",
"type": "registry-provider-platforms"
}
],
"links": {
"related": "/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms"
}
}
},
"links": {
"shasums-download": "https://archivist.terraform.io/v1/object/dmF1b...",
"shasums-sig-download": "https://archivist.terraform.io/v1/object/dmF1b..."
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 1
}
}
}
```
**Note:** The `shasums-uploaded` and `shasums-sig-uploaded` properties will be false if those files have not been uploaded to Archivist. In this case, instead of including links to `shasums-download` and `shasums-sig-download`, the response will include upload links (`shasums-upload` and `shasums-sig-upload`).
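For example, to list the versions that are still missing their `SHA256SUMS` upload, you could filter the response with `jq` (a sketch):
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions \
  | jq '.data[] | select(.attributes."shasums-uploaded" | not) | .attributes.version'
```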
## Get a Version
`GET /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version`
### Parameters
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization the provider belongs to. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
| `:name` | The provider name. |
| `:version` | The version of the provider being created to which different platforms can be added. |
| Status | Response | Reason |
| ------- | ---------------------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-provider-versions"`) | Success                                                    |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user unauthorized to perform action |
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1
```
### Sample Response
```json
{
"data": {
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions",
"attributes": {
"version": "3.1.1",
"created-at": "2022-02-11T19:16:59.876Z",
"updated-at": "2022-02-11T19:16:59.876Z",
"key-id": "32966F3FB5AC1129",
"protocols": ["5.0"],
"permissions": {
"can-delete": true,
"can-upload-asset": true
},
"shasums-uploaded": true,
"shasums-sig-uploaded": true
},
"relationships": {
"registry-provider": {
"data": {
"id": "prov-cmEmLstBfjNNA9F3",
"type": "registry-providers"
}
},
"platforms": {
"data": [
{
"id": "provpltfrm-GSHhNzptr9s3WoLD",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-A1PHitiM2KkKpVoM",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-BLJWvWyJ2QMs525k",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-qQYosUguetYtXGzJ",
"type": "registry-provider-platforms"
},
{
"id": "provpltfrm-pjDHFN46y193bS7t",
"type": "registry-provider-platforms"
}
],
"links": {
"related": "/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms"
}
}
},
"links": {
"shasums-download": "https://archivist.terraform.io/v1/object/dmF1b...",
"shasums-sig-download": "https://archivist.terraform.io/v1/object/dmF1b..."
}
}
}
```
**Note:** `shasums-uploaded` and `shasums-sig-uploaded` will be false if those files haven't been uploaded to Archivist yet. In this case, instead of including links to `shasums-download` and `shasums-sig-download`, the response will include upload links (`shasums-upload` and `shasums-sig-upload`).
## Delete a Version
`DELETE /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version`
### Parameters
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to delete a provider version from. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider for which the version is being deleted. For private providers this is the same as the `:organization_name` parameter. |
| `:name` | The name of the provider for which the version is being deleted. |
| `:version` | The version for the provider that will be deleted along with its corresponding platforms. |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Status | Response | Reason |
| ------- | ------------------------- | ----------------------------------------------------------- |
| [204][] | No Content | Success |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user not authorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1
```
## Create a Provider Platform
`POST /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms`
Platforms are binaries that allow the provider to run on a particular operating system and architecture combination (e.g., Linux and AMD64). GoReleaser creates binaries automatically when you [create a release on GitHub](/terraform/registry/providers/publishing#creating-a-github-release) or [create a release locally](/terraform/registry/providers/publishing#using-goreleaser-locally).
You must upload one or more platforms for each version of a private provider. After you create a platform, you must upload the platform binary file to the `provider-binary-upload` URL.
### Parameters
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a provider platform in. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider for which the platform is being created. For private providers this is the same as the `:organization_name` parameter. |
| `:name` | The name of the provider for which the platform is being created. |
| `:version` | The provider version of the provider for which the platform is being created. |
Creates a new registry provider platform. This endpoint only applies to private providers.
| Status | Response | Reason |
| ------- | ------------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-provider-platforms"`) | Success |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][] | Forbidden - not available for public providers |
| [404][] | [JSON API error object][] | User not authorized |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
| ------------------------- | ------ | ------- | ------------------------------------- |
| `data.type` | string | | Must be `"registry-provider-platforms"`. |
| `data.attributes.os` | string | | A valid operating system string. |
| `data.attributes.arch` | string | | A valid architecture string. |
| `data.attributes.shasum` | string | | A valid shasum string. |
| `data.attributes.filename` | string | | A valid filename string. |
### Sample Payload
```json
{
"data": {
"type": "registry-provider-version-platforms",
"attributes": {
"os": "linux",
"arch": "amd64",
"shasum": "8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38",
"filename": "terraform-provider-aws_3.1.1_linux_amd64.zip"
}
}
}
```
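The `shasum` must be the SHA-256 checksum of the platform zip, matching the entry for that file in the version's `SHA256SUMS` file. One way to compute it locally:
```shell
# On macOS, use: shasum -a 256 <file>
sha256sum terraform-provider-aws_3.1.1_linux_amd64.zip
```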
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms
```
### Sample Response
```json
{
"data": {
"id": "provpltfrm-BLJWvWyJ2QMs525k",
"type": "registry-provider-platforms",
"attributes": {
"os": "linux",
"arch": "amd64",
"filename": "terraform-provider-aws_3.1.1_linux_amd64.zip",
"shasum": "8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38",
"permissions": {
"can-delete": true,
"can-upload-asset": true
},
"provider-binary-uploaded": false
},
"relationships": {
"registry-provider-version": {
"data": {
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions"
}
}
},
"links": {
"provider-binary-upload": "https://archivist.terraform.io/v1/object/dmF1b..."
}
}
}
```
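The platform is not usable until the binary is uploaded. Upload the zip with an HTTP `PUT` to the `provider-binary-upload` link from the response above (a minimal sketch):
```shell
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @terraform-provider-aws_3.1.1_linux_amd64.zip \
  https://archivist.terraform.io/v1/object/dmF1b...
```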
## Get All Platforms for a Single Version
`GET /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms`
### Parameters
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization the provider belongs to. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
| `:name` | The provider name. |
| `:version` | The version of the provider. |
| Status | Response | Reason |
| ------- | ---------------------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-provider-platforms"`) | Success                                                    |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user unauthorized to perform action |
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms
```
### Sample Response
```json
{
"data": [
{
"id": "provpltfrm-GSHhNzptr9s3WoLD",
"type": "registry-provider-platforms",
"attributes": {
"os": "darwin",
"arch": "amd64",
"filename": "terraform-provider-aws_3.1.1_darwin_amd64.zip",
"shasum": "fd580e71bd76d76913e1925f2641be9330c536464af9a08a5b8994da65a26cbc",
"permissions": {
"can-delete": true,
"can-upload-asset": true
},
"provider-binary-uploaded": true
},
"relationships": {
"registry-provider-version": {
"data": {
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions"
}
}
},
"links": {
"provider-binary-download": "https://archivist.terraform.io/v1/object/dmF1b..."
}
},
{
"id": "provpltfrm-A1PHitiM2KkKpVoM",
"type": "registry-provider-platforms",
"attributes": {
"os": "darwin",
"arch": "arm64",
"filename": "terraform-provider-aws_3.1.1_darwin_arm64.zip",
"shasum": "de3c351d7f35a3c8c583c0da5c1c4d558b8cea3731a49b15f63de5bbbafc0165",
"permissions": {
"can-delete": true,
"can-upload-asset": true
},
"provider-binary-uploaded": true
},
"relationships": {
"registry-provider-version": {
"data": {
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions"
}
}
},
"links": {
"provider-binary-download": "https://archivist.terraform.io/v1/object/dmF1b..."
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 2
}
}
}
```
**Note:** The `provider-binary-uploaded` property will be `false` if that file has not been uploaded to Archivist. In this case, instead of including a link to `provider-binary-download`, the response will include an upload link `provider-binary-upload`.
## Get a Platform
`GET /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms/:os/:arch`
### Parameters
| Parameter | Description |
| -------------------- | -------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization the provider belongs to. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
| `:name` | The provider name. |
| `:version` | The version of the provider. |
| `:os` | The operating system of the provider platform. |
| `:arch`              | The architecture of the provider platform.                                                     |

| Status  | Response                                              | Reason                                                     |
| ------- | ---------------------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-provider-platforms"`) | Success                                           |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user unauthorized to perform action |
### Sample Request
```shell
curl \
--request GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms/linux/amd64
```
### Sample Response
```json
{
"data": {
"id": "provpltfrm-BLJWvWyJ2QMs525k",
"type": "registry-provider-platforms",
"attributes": {
"os": "linux",
"arch": "amd64",
"filename": "terraform-provider-aws_3.1.1_linux_amd64.zip",
"shasum": "8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38",
"permissions": {
"can-delete": true,
"can-upload-asset": true
},
"provider-binary-uploaded": true
},
"relationships": {
"registry-provider-version": {
"data": {
"id": "provver-y5KZUsSBRLV9zCtL",
"type": "registry-provider-versions"
}
}
},
"links": {
"provider-binary-download": "https://archivist.terraform.io/v1/object/dmF1b..."
}
}
}
```
**Note:** The `provider-binary-uploaded` property will be `false` if that file has not been uploaded to Archivist. In this case, instead of including a link to `provider-binary-download`, the response will include an upload link `provider-binary-upload`.
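To retrieve the binary itself, read the `provider-binary-download` link from this response and fetch it. A rough sketch, assuming `jq` is installed and the binary was previously uploaded:
```shell
# Look up the platform record, extract the download link, and fetch the binary.
DOWNLOAD_URL="$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms/linux/amd64 \
  | jq -r '.data.links."provider-binary-download"')"

curl -o terraform-provider-aws_3.1.1_linux_amd64.zip "$DOWNLOAD_URL"
```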
## Delete a Platform
`DELETE /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms/:os/:arch`
### Parameters
| Parameter | Description |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to delete a provider platform from. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:registry_name` | Must be `private`. |
| `:namespace` | The namespace of the provider for which the platform is being deleted. For private providers this is the same as the `:organization_name` parameter. |
| `:name` | The name of the provider for which the platform is being deleted. |
| `:version` | The version for which the platform is being deleted. |
| `:os` | The operating system of the provider platform that is being deleted. |
| `:arch`              | The architecture of the provider platform that is being deleted.                                                                                                                                                           |

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status  | Response                  | Reason                                                       |
| ------- | ------------------------- | ----------------------------------------------------------- |
| [204][] | No Content | Success |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled |
| [404][] | [JSON API error object][] | Provider not found or user not authorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms/linux/amd64
```
---
page_title: Run Task Stages and Results - API Docs - HCP Terraform
description: >-
Use the `/task-stages` endpoint to manage run task stages and results. List, show, and override task stages, and show run task results using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API documents]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
[run]: /terraform/cloud-docs/run/states
# Run Task Stages and Results API
HCP Terraform uses run task stages and run task results to track [run task](/terraform/cloud-docs/workspaces/settings/run-tasks) execution.
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/run-tasks.mdx'
<!-- END: TFC:only name:pnp-callout -->
When HCP Terraform creates a [run][], it reads the run tasks associated with the workspace. Each run task in the workspace is configured to begin during a specific [run stage](/terraform/cloud-docs/run/states). HCP Terraform creates a run task stage object for each run stage that triggers run tasks. You can configure run tasks during the [Pre-Plan Stage](/terraform/cloud-docs/run/states#the-pre-plan-stage), [Post-Plan Stage](/terraform/cloud-docs/run/states#the-post-plan-stage), [Pre-Apply Stage](/terraform/cloud-docs/run/states#the-pre-apply-stage) and [Post-Apply Stage](/terraform/cloud-docs/run/states#the-post-apply-stage).
Run task stages then create a run task result for each run task. For example, a workspace has two run tasks called `alpha` and `beta`. For each run, HCP Terraform creates one run task stage called `post-plan`. That run task stage has two run task results: one for the `alpha` run task and one for the `beta` run task.
This page lists the endpoints to retrieve run task stages and run task results. Refer to the [Run Tasks API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks) for endpoints to create and manage run tasks within HCP Terraform. Refer to the [Run Tasks Integration API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks-integration) for endpoints to build custom run tasks for the HCP Terraform ecosystem.
## Attributes
### Run Task Stage Status
The run task stage status is found in `data.attributes.status`, and you can reference the following list of possible values.
| Status | Description |
|-------------------- |------------------------------------------------------------------------------------------------------------------------------------------- |
| `pending` | The initial status of a run task stage after creation. |
| `running` | The run task stage is executing one or more tasks, which have not yet completed. |
| `passed` | All of the run task results in the stage passed. |
| `failed`            | One or more results in the run task stage failed.                                                                                             |
| `awaiting_override` | The task stage is waiting for user input. Once a user manually overrides the failed run tasks, the run returns to the `running` state. |
| `errored` | The run task stage has errored. |
| `canceled` | The run task stage has been canceled. |
| `unreachable` | The run task stage could not be executed. |
### Run Task Result Status
The run task result status is found in `data.attributes.status`, and you can reference the following list of possible values.
| Status | Description |
|---------------|-----------------------------------------------------------------------|
| `pending` | The initial status of a run task result after creation. |
| `running`     | The associated run task has begun execution and has not yet completed. |
| `passed` | The associated run task executed and returned a passing result. |
| `failed` | The associated run task executed and returned a failed result. |
| `errored` | The associated run task has errored during execution. |
| `canceled` | The associated run task execution has been canceled. |
| `unreachable` | The associated run task could not be executed. |
## List the Run Task Stages in a Run
`GET /runs/:run_id/task-stages`
| Parameter | Description |
|-----------|-------------------------------------|
| `run_id`  | The run ID to list task stages for. |

| Status  | Response                                                 | Reason                          |
|---------|---------------------------------------------------------|---------------------------------|
| [200][] | Array of [JSON API documents][] (`type: "task-stages"`) | Successfully listed task-stages |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
|----------------|----------------------------------------------------------------------|
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 task stages per page. |
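For example, to request the second page with a page size of 10 (both values are illustrative), encode the bracketed parameters as shown:
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/runs/run-XdgtChJuuUwLoSmw/task-stages?page%5Bnumber%5D=2&page%5Bsize%5D=10"
```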
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/runs/run-XdgtChJuuUwLoSmw/task-stages
```
### Sample Response
```json
{
"data": [
{
"id": "ts-rL5ZsuwfjqfPJcdi",
"type": "task-stages",
"attributes": {
"status": "passed",
"stage": "post_plan",
"status-timestamps": {
"passed-at": "2022-06-08T20:32:12+08:00",
"running-at": "2022-06-08T20:32:11+08:00"
},
"created-at": "2022-06-08T12:31:56.94Z",
"updated-at": "2022-06-08T12:32:12.315Z"
},
"relationships": {
"run": {
"data": {
"id": "run-XdgtChJuuUwLoSmw",
"type": "runs"
}
},
"task-results": {
"data": [
{
"id": "taskrs-EmnmsEDL1jgd1GTP",
"type": "task-results"
}
]
},
"policy-evaluations":{
"data":[
{
"id":"poleval-iouaha9KLgGWkBRQ",
"type":"policy-evaluations"
}
]
}
},
"links": {
"self": "/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi"
}
}
]
}
```
## Show a Run Task Stage
`GET /task-stages/:task_stage_id`
| Parameter | Description |
|------------------|---------------------------|
| `:task_stage_id` | The run task stage ID to get. |
This endpoint shows details of a specific task stage.
| Status | Response | Reason |
|---------|-----------------------------------------------|---------------------------------------------|
| [200][] | [JSON API document][] (`type: "task-stages"`) | Success |
| [404][] | [JSON API error object][] | Task stage not found or user not authorized |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi
```
### Sample Response
```json
{
"data": {
"id": "ts-rL5ZsuwfjqfPJcdi",
"type": "task-stages",
"attributes": {
"status": "passed",
"stage": "post_plan",
"status-timestamps": {
"passed-at": "2022-06-08T20:32:12+08:00",
"running-at": "2022-06-08T20:32:11+08:00"
},
"created-at": "2022-06-08T12:31:56.94Z",
"updated-at": "2022-06-08T12:32:12.315Z"
},
"relationships": {
"run": {
"data": {
"id": "run-XdgtChJuuUwLoSmw",
"type": "runs"
}
},
"task-results": {
"data": [
{
"id": "taskrs-EmnmsEDL1jgd1GTP",
"type": "task-results"
}
]
},
"policy-evaluations":{
"data":[
{
"id":"poleval-iouaha9KLgGWkBRQ",
"type":"policy-evaluations"
}
]
}
},
"links": {
"self": "/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi"
}
}
}
```
## Show a Run Task Result
`GET /task-results/:task_result_id`
| Parameter | Description |
|-------------------|----------------------------|
| `:task_result_id` | The run task result ID to get. |
This endpoint shows the details for a specific run task result.
| Status | Response | Reason |
|---------|------------------------------------------------|----------------------------------------------|
| [200][] | [JSON API document][] (`type: "task-results"`) | Success |
| [404][] | [JSON API error object][] | Task result not found or user not authorized |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/task-results/taskrs-EmnmsEDL1jgd1GZz
```
### Sample Response
```json
{
"data": {
"id": "taskrs-EmnmsEDL1jgd1GZz",
"type": "task-results",
"attributes": {
"message": "No issues found.\nSeverity threshold is set to low.",
"status": "passed",
"status-timestamps": {
"passed-at": "2022-06-08T20:32:12+08:00",
"running-at": "2022-06-08T20:32:11+08:00"
},
"url": "https://external.service/project/task-123abc",
"created-at": "2022-06-08T12:31:56.954Z",
"updated-at": "2022-06-08T12:32:12.27Z",
"task-id": "task-b6MaHZmGopHDtqhn",
"task-name": "example-task",
"task-url": "https://external.service/task-123abc",
"stage": "post_plan",
"is-speculative": false,
"workspace-task-id": "wstask-258juqenQeWb3DZz",
"workspace-task-enforcement-level": "mandatory"
},
"relationships": {
"task-stage": {
"data": {
"id": "ts-rL5ZsuwfjqfPJczZ",
"type": "task-stages"
}
}
},
"links": {
"self": "/api/v2/task-results/taskrs-EmnmsEDL1jgd1GZz"
}
}
}
```
## Available Related Resources
### Task Stage
The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:
| Resource | Description |
|--------------------- |---------------------------------------------------------- |
| `run` | Information about the associated run. |
| `run.workspace` | Information about the associated workspace. |
| `task-results` | Information about the results for a task-stage. |
| `policy-evaluations` | Information about the policy evaluations for a task-stage. |
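For example, a single request can return a task stage together with its task results and the run's workspace. A sketch using an illustrative stage ID:
```shell
# Request related resources alongside the task stage itself.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi?include=task-results,run.workspace"
```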
## Override a Task Stage
`POST /task-stages/:task_stage_id/actions/override`
| Parameter | Description |
| -------------------- | ----------------------------------------------------------------------------------------------- |
| `:task_stage_id` | The ID of the task stage to override. |
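An override typically only succeeds while the stage is waiting for user input (the `awaiting_override` status); the sample response below also exposes an `actions.is-overridable` flag. A minimal guard sketch, assuming `jq` is installed:
```shell
# Override the task stage only when it reports awaiting_override.
STATUS="$(curl -s \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi \
  | jq -r '.data.attributes.status')"

if [ "$STATUS" = "awaiting_override" ]; then
  curl -s \
    --header "Authorization: Bearer $TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    --request POST \
    https://app.terraform.io/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi/actions/override
fi
```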
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi/actions/override
```
### Sample Response
```json
{
"data":{
"id":"ts-F7MumZQcJzVh1ZZk",
"type":"task-stages",
"attributes":{
"status":"running",
"stage":"post_plan",
"status-timestamps":{
"running-at":"2022-09-21T06:36:54+00:00",
"awaiting-override-at":"2022-09-21T06:31:50+00:00"
},
"created-at":"2022-09-21T06:29:44.632Z",
"updated-at":"2022-09-21T06:36:54.952Z",
"permissions":{
"can-override-policy":true,
"can-override-tasks":false,
"can-override":true
},
"actions":{
"is-overridable":false
}
},
"relationships":{
"run":{
"data":{
"id":"run-K6N4BAz8NfUyR2QB",
"type":"runs"
}
},
"task-results":{
"data":[
]
},
"policy-evaluations":{
"data":[
{
"id":"poleval-atNKxwvjYy4Gwk3k",
"type":"policy-evaluations"
}
]
}
},
"links":{
"self":"/api/v2/task-stages/ts-F7MumZQcJzVh1ZZk"
}
}
}
```
---
page_title: Run Tasks Integration - API Docs - HCP Terraform
description: >-
Use run tasks to make requests when a run reaches a specific phase. Learn about the run task request and callback formats.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[JSON API error object]: https://jsonapi.org/format/#error-objects
# Run Tasks Integration API
[Run tasks](/terraform/cloud-docs/workspaces/settings/run-tasks) allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle.
This page lists the API endpoints used to trigger a run task and the expected response from the integration.
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/run-tasks.mdx'
<!-- END: TFC:only name:pnp-callout -->
Refer to [run tasks](/terraform/cloud-docs/api-docs/run-tasks/run-tasks) for the API endpoints to create and manage run tasks within HCP Terraform. You can also access a complete list of all run tasks in the [Terraform Registry](https://registry.terraform.io/browse/run-tasks).
## Run Task Request
When a run reaches the appropriate phase and a run task is triggered, HCP Terraform will send a request to the run task's URL.
The service receiving the run task request should respond with `200 OK`, or HCP Terraform retries triggering the run task.
`POST :url`
| Parameter | Description |
|-----------|---------------------------------------------------------|
| `:url`    | The URL configured in the run task to send requests to. |

| Status  | Response   | Reason                            |
|---------|------------|-----------------------------------|
| [200][] | No Content | Successfully submitted a run task |
### Request Body
The POST request submits a JSON object with the following properties as a request payload.
#### Common Properties
All request payloads contain the following properties.
| Key path | Type | Values | Description |
| ------------------------------------ | ------- | ------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `payload_version` | integer | `1` | Schema version of the payload. Only `1` is supported. |
| `stage` | string | `pre_plan`, `post_plan`, `pre_apply`, `post_apply` | The [run stage](/terraform/cloud-docs/run/states) when HCP Terraform triggers the run task. |
| `access_token` | string | | Bearer token to use when calling back to HCP Terraform. |
| `capabilities` | object | | A map of the capabilities that the caller supports. |
| `capabilities.outcomes` | bool | | A flag indicating the caller accepts detailed run task outcomes. |
| `configuration_version_download_url` | string | | The URL to [download the configuration version](/terraform/cloud-docs/api-docs/configuration-versions#download-configuration-files). This is `null` if the configuration version is not available to download. |
| `configuration_version_id` | string | | The ID of the [configuration version](/terraform/cloud-docs/api-docs/configuration-versions) for the run. |
| `is_speculative` | bool | | Whether the task is part of a [speculative run](/terraform/cloud-docs/run/remote-operations#speculative-plans). |
| `organization_name` | string | | Name of the organization the task is configured within. |
| `run_app_url` | string | | URL within HCP Terraform to the run. |
| `run_created_at` | string | | When the run was started. |
| `run_created_by` | string | | Who created the run. |
| `run_id`                             | string  |                                      | ID of the run this task is part of.                                                                                                                                                                              |
| `run_message` | string | | Message that was associated with the run. |
| `task_result_callback_url`           | string  |                                      | URL that should be called back with the result of this task.                                                                                                                                                     |
| `task_result_enforcement_level` | string | `mandatory`, `advisory` | Enforcement level for this task. |
| `task_result_id` | string | | ID of task result within HCP Terraform. |
| `vcs_branch` | string | | Repository branch that the workspace executes from. This is `null` if the workspace does not have a VCS repository. |
| `vcs_commit_url`                     | string  |                                      | URL to the commit that triggered this run. This is `null` if the workspace does not have a VCS repository.                                                                                                       |
| `vcs_pull_request_url`               | string  |                                      | URL to the Pull Request/Merge Request that triggered this run. This is `null` if the run was not triggered by a pull request or merge request.                                                                   |
| `vcs_repo_url` | string | | URL to the workspace's VCS repository. This is `null` if the workspace does not have a VCS repository. |
| `workspace_app_url` | string | | URL within HCP Terraform to the workspace. |
| `workspace_id`                       | string  |                                      | ID of the workspace the task is associated with.                                                                                                                                                                 |
| `workspace_name` | string | | Name of the workspace. |
| `workspace_working_directory` | string | | The working directory specified in the run's [workspace settings](/terraform/cloud-docs/workspaces/settings#terraform-working-directory). |
#### Post-Plan, Pre-Apply, and Post-Apply Properties
Requests with `stage` set to `post_plan`, `pre_apply`, or `post_apply` contain the following additional properties.
| Key path | Type | Values | Description |
| ------------------- | ------ | ------ | --------------------------------------------------------- |
| `plan_json_api_url` | string | | The URL to retrieve the JSON Terraform plan for this run. |
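A post-plan integration can fetch the machine-readable plan from this URL, authenticating with the `access_token` from the request payload. A rough sketch, assuming both values were saved in the corresponding shell variables; the endpoint may answer with a temporary redirect, so the request follows redirects:
```shell
# Download the JSON plan for analysis. --location follows the
# temporary redirect that the endpoint may return.
curl -s --location \
  --header "Authorization: Bearer $ACCESS_TOKEN" \
  "$PLAN_JSON_API_URL" > plan.json
```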
### Sample Payload
```json
{
"payload_version": 1,
"stage": "post_plan",
"access_token": "4QEuyyxug1f2rw.atlasv1.iDyxqhXGVZ0ykes53YdQyHyYtFOrdAWNBxcVUgWvzb64NFHjcquu8gJMEdUwoSLRu4Q",
"capabilities": {
"outcomes": true
},
"configuration_version_download_url": "https://app.terraform.io/api/v2/configuration-versions/cv-ntv3HbhJqvFzamy7/download",
"configuration_version_id": "cv-ntv3HbhJqvFzamy7",
"is_speculative": false,
"organization_name": "hashicorp",
"plan_json_api_url": "https://app.terraform.io/api/v2/plans/plan-6AFmRJW1PFJ7qbAh/json-output",
"run_app_url": "https://app.terraform.io/app/hashicorp/my-workspace/runs/run-i3Df5to9ELvibKpQ",
"run_created_at": "2021-09-02T14:47:13.036Z",
"run_created_by": "username",
"run_id": "run-i3Df5to9ELvibKpQ",
"run_message": "Triggered via UI",
"task_result_callback_url": "https://app.terraform.io/api/v2/task-results/5ea8d46c-2ceb-42cd-83f2-82e54697bddd/callback",
"task_result_enforcement_level": "mandatory",
"task_result_id": "taskrs-2nH5dncYoXaMVQmJ",
"vcs_branch": "main",
"vcs_commit_url": "https://github.com/hashicorp/terraform-random/commit/7d8fb2a2d601edebdb7a59ad2088a96673637d22",
"vcs_pull_request_url": null,
"vcs_repo_url": "https://github.com/hashicorp/terraform-random",
"workspace_app_url": "https://app.terraform.io/app/hashicorp/my-workspace",
"workspace_id": "ws-ck4G5bb1Yei5szRh",
"workspace_name": "tfr_github_0",
"workspace_working_directory": "/terraform"
}
```
### Request Headers
The POST request submits the following properties as the request headers.
| Name | Value | Description |
| ---------------------- | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Content-Type` | `application/json` | Specifies the type of data in the request body |
| `User-Agent` | `TFC/1.0 (+https://app.terraform.io; TFC)` | Identifies the request is coming from HCP Terraform |
| `X-TFC-Task-Signature` | string | If the run task is configured with an [HMAC Key](/terraform/cloud-docs/integrations/run-tasks#securing-your-run-task), this header contains the signed SHA512 sum of the request payload using the configured HMAC key. Otherwise, this is an empty string. |
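A run task service can use the HMAC key to authenticate requests by recomputing the signature over the raw request body and comparing it to the header value. A minimal sketch, assuming the raw body was saved to `payload.json`, the key is in `$HMAC_KEY`, the captured header value is in `$RECEIVED_SIGNATURE`, and the signature is a hex-encoded HMAC-SHA512 digest:
```shell
# Recompute the HMAC-SHA512 of the raw request body.
EXPECTED="$(openssl dgst -sha512 -hmac "$HMAC_KEY" payload.json | awk '{print $2}')"

# Compare against the X-TFC-Task-Signature header captured from the request.
if [ "$EXPECTED" = "$RECEIVED_SIGNATURE" ]; then
  echo "signature verified"
else
  echo "signature mismatch" >&2
fi
```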
## Run Task Callback
While a run task runs, it may send progressive updates to HCP Terraform with a `running` status. If the originating request indicates that HCP Terraform accepts detailed run task outcomes (the `capabilities.outcomes` flag), the integrator can send those outcomes by appending them to the run task's callback payload.
Once the external integration fulfills the request, that integration must call back into HCP Terraform with the overall result of either `passed` or `failed`. HCP Terraform expects this callback within 10 minutes; otherwise, the request is considered errored.
You can send outcomes with a status of `running`, `passed`, or `failed`, but it is a good practice only to send outcomes when a run task is `running`.
`PATCH :callback_url`
| Parameter | Description |
| --------------- | ---------------------------------------------------------------- |
| `:callback_url` | The `task_result_callback_url` specified in the run task request. Typically `/task-results/:guid/callback`. |

| Status  | Response                  | Reason                                   |
| ------- | ------------------------- | ---------------------------------------- |
| [200][] | No Content | Successfully submitted a run task result |
| [401][] | [JSON API error object][] | Not authorized to perform action |
| [422][] | [JSON API error object][] | Invalid response payload. This could be caused by invalid attributes, or sending a status that is not accepted. |
### Request Body
The PATCH request submits a JSON object with the following properties as a request payload. This payload is also described in the [JSON API schema for run task results](https://github.com/hashicorp/terraform-docs-common/blob/main/website/public/schema/run-tasks/runtask-result.json).
| Key path | Type | Description |
| ------------------------- | ------ | ----------------------------------------------------------------------------------------------- |
| `data.type` | string | Must be `"task-results"`. |
| `data.attributes.status` | string | The current status of the task. Only `passed`, `failed` or `running` are allowed. |
| `data.attributes.message` | string | (Recommended, but optional) A short message describing the status of the task. |
| `data.attributes.url` | string | (Optional) A URL where users can obtain more information about the task. |
| `relationships.outcomes.data` | array | (Recommended, but optional) A collection of detailed run task outcomes. |
Status values other than `passed`, `failed`, or `running` return an error. Both the `passed` and `failed` statuses represent a final state for a run task. The `running` status allows one or more partial updates until the task has reached a final state.
```json
{
"data": {
"type": "task-results",
"attributes": {
"status": "passed",
"message": "4 passed, 0 skipped, 0 failed",
"url": "https://external.service.dev/terraform-plan-checker/run-i3Df5to9ELvibKpQ"
},
"relationships": {
"outcomes": {
"data": [...]
}
}
}
}
```
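For example, an integration could submit the payload above with a request like the following, where `$ACCESS_TOKEN` and `$TASK_RESULT_CALLBACK_URL` stand in for the `access_token` and `task_result_callback_url` values from the originating run task request; the `application/vnd.api+json` content type mirrors the other samples in these docs. An integration may send the same request with a `running` status one or more times before the final `passed` or `failed` result.
```shell
curl \
  --header "Authorization: Bearer $ACCESS_TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  "$TASK_RESULT_CALLBACK_URL"
```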
#### Outcomes Payload Body
A run task result may optionally contain one or more detailed outcomes, which improve result visibility and detail in the HCP Terraform user interface. The following attributes define an outcome.
| Key path | Type | Description |
| ------------------------- | ------ | ----------------------------------------------------------------------------------------------- |
| `outcome-id`              | string | A partner-supplied identifier for this outcome.                                                   |
| `description` | string | A one-line description of the result. |
| `body` | string | (Optional) A detailed message for the result in Markdown format. |
| `url` | string | (Optional) A URL that a user can navigate to for more information about this result. |
| `tags` | object | (Optional) An object containing tag arrays, named by the property key. |
| `tags.key`                | string | The two- or three-word name of the header tag. [Special handling](#severity-and-status-tags) is given to `severity` and `status` keys. |
| `tags.key[].label` | string | The text value of the tag. |
| `tags.key[].level` | enum string | (Optional) The error level for the tag. Defaults to `none`, but accepts `none`, `info`, `warning`, or `error`. For levels other than `none`, labels render with a color and icon for that level. |
##### Severity and Status Tags
Run task outcomes with tags named `severity` or `status` are enriched within the outcomes display list in HCP Terraform, making it easier to identify and respond to issues by severity or status.
```json
{
"type": "task-result-outcomes",
"attributes": {
"outcome-id": "PRTNR-CC-TF-127",
"description": "ST-2942: S3 Bucket will not enforce MFA login on delete requests",
"tags": {
"Status": [
{
"label": "Denied",
"level": "error"
}
],
"Severity": [
{
"label": "High",
"level": "error"
},
{
"label": "Recoverable",
"level": "info"
}
],
"Cost Centre": [
{
"label": "IT-OPS"
}
]
},
"body": "# Resolution for issue ST-2942\n\n## Impact\n\nFollow instructions in the [AWS S3 docs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html) to manually configure the MFA setting.\n—-- Payload truncated —--",
"url": "https://external.service.dev/result/PRTNR-CC-TF-127"
}
}
```
##### Complete Callback Payload Example
The example below shows a complete callback payload, including all of the fields described above.
```json
{
"data": {
"type": "task-results",
"attributes": {
"status": "failed",
"message": "0 passed, 0 skipped, 1 failed",
"url": "https://external.service.dev/terraform-plan-checker/run-i3Df5to9ELvibKpQ"
},
"relationships": {
"outcomes": {
"data": [
{
"type": "task-result-outcomes",
"attributes": {
"outcome-id": "PRTNR-CC-TF-127",
"description": "ST-2942: S3 Bucket will not enforce MFA login on delete requests",
"tags": {
"Status": [
{
"label": "Denied",
"level": "error"
}
],
"Severity": [
{
"label": "High",
"level": "error"
},
{
"label": "Recoverable",
"level": "info"
}
],
"Cost Centre": [
{
"label": "IT-OPS"
}
]
},
"body": "# Resolution for issue ST-2942\n\n## Impact\n\nFollow instructions in the [AWS S3 docs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html) to manually configure the MFA setting.\n—-- Payload truncated —--",
"url": "https://external.service.dev/result/PRTNR-CC-TF-127"
}
}
]
}
}
}
}
```
### Request Headers
The PATCH request must use the token supplied in the originating request (`access_token`) for [authentication](/terraform/cloud-docs/api-docs#authentication).
---
page_title: Run Tasks - API Docs - HCP Terraform
description: >-
Use the `/tasks` endpoint to manage run tasks. List, show, create, update, and delete run tasks, and list, show, update, delete and associate workspace run tasks using the HTTP API.
---
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects
[JSON API Schema document]: https://github.com/hashicorp/terraform-docs-common/blob/main/website/public/schema/run-tasks/runtask-results.json
# Run Tasks API
[Run tasks](/terraform/cloud-docs/workspaces/settings/run-tasks) allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle. Run tasks are reusable configurations that you can associate with any workspace in an organization. This page lists the API endpoints for managing run tasks in an organization and explains how to associate run tasks with workspaces.
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/run-tasks.mdx'
<!-- END: TFC:only name:pnp-callout -->
Refer to [Run Tasks Integration](/terraform/cloud-docs/api-docs/run-tasks/run-tasks-integration) for the API endpoints related to triggering run tasks and the expected integration response.
## Required Permissions
To interact with run tasks on an organization, you need the [Manage Run Tasks permission](/terraform/cloud-docs/users-teams-organizations/permissions#manage-run-tasks). To associate or dissociate run tasks in a workspace, you need the [Manage Workspace Run Tasks permission](/terraform/cloud-docs/users-teams-organizations/permissions#general-workspace-permissions) on that particular workspace.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
## Create a Run Task
`POST /organizations/:organization_name/tasks`
| Parameter | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The organization to create a run task in. The organization must already exist in HCP Terraform, and the token authenticating the API request must have [owner permission](/terraform/cloud-docs/users-teams-organizations/permissions). |
[permissions-citation]: #intentionally-unused---keep-for-maintainers
| Status | Response | Reason |
| ------- | --------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "tasks"`) | Successfully created a run task |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required unless otherwise specified.
| Key path | Type | Default | Description |
| -------------------------------------------------------- | --------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"tasks"`. |
| `data.attributes.name` | string | | The name of the task. Can include letters, numbers, `-`, and `_`. |
| `data.attributes.url` | string | | URL to send a run task payload. |
| `data.attributes.description` | string | | The description of the run task. Can be up to 300 characters long including spaces, letters, numbers, and special characters. |
| `data.attributes.category` | string | | Must be `"task"`. |
| `data.attributes.hmac-key` | string | | (Optional) HMAC key to verify run task. |
| `data.attributes.enabled` | bool | true | (Optional) Whether the task will be run. |
| `data.attributes.global-configuration.enabled`           | bool            | false   | (Optional) Whether the task will be associated with all workspaces.                                                                                                |
| `data.attributes.global-configuration.stages` | array | | (Optional) An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.global-configuration.enforcement-level` | string | | (Optional) The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`. |
### Sample Payload
```json
{
"data": {
"type": "tasks",
"attributes": {
"name": "example",
"url": "http://example.com",
"description": "Simple description",
"hmac_key": "secret",
"enabled": "true",
"category": "task",
"global-configuration": {
"enabled": true,
"stages": ["pre_plan"],
"enforcement-level": "mandatory"
}
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/tasks
```
### Sample Response
```json
{
"data": {
"id": "task-7oD7doVTQdAFnMLV",
"type": "tasks",
"attributes": {
"category": "task",
"name": "my-run-task",
"url": "http://example.com",
"description": "Simple description",
"enabled": "true",
"hmac-key": null,
"global-configuration": {
"enabled": true,
"stages": ["pre_plan"],
"enforcement-level": "mandatory"
}
},
"relationships": {
"organization": {
"data": {
"id": "hashicorp",
"type": "organizations"
}
},
"tasks": {
"data": []
}
},
"links": {
"self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
}
}
}
```
## List Run Tasks
`GET /organizations/:organization_name/tasks`
| Parameter | Description |
| -------------------- | ----------------------------------- |
| `:organization_name` | The organization to list tasks for. |
| Status | Response | Reason |
| ------- | --------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | Request was successful |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `include` | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 run tasks per page. |
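For example, the following request returns the second page of five run tasks per page and includes each task's workspace associations; the page values here are purely illustrative.
```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  "https://app.terraform.io/api/v2/organizations/my-organization/tasks?page%5Bnumber%5D=2&page%5Bsize%5D=5&include=workspace_tasks"
```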
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/organizations/my-organization/tasks
```
### Sample Response
```json
{
"data": [
{
"id": "task-7oD7doVTQdAFnMLV",
"type": "tasks",
"attributes": {
"category": "task",
"name": "my-task",
"url": "http://example.com",
"description": "Simple description",
"enabled": "true",
"hmac-key": null,
"global-configuration": {
"enabled": true,
"stages": ["pre_plan"],
"enforcement-level": "mandatory"
}
},
"relationships": {
"organization": {
"data": {
"id": "hashicorp",
"type": "organizations"
}
},
"tasks": {
"data": []
}
},
"links": {
"self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 1
}
}
}
```
## Show a Run Task
`GET /tasks/:id`
| Parameter | Description |
| --------- | --------------------------------------------------------------------------------------------- |
| `:id` | The ID of the task to show. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |
| Status | Response | Reason |
| ------- | --------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Run task not found or user unauthorized to perform action |
### Query Parameters
| Parameter | Description |
| --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `include` | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. |
### Sample Request
```shell
curl --request GET \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```
### Sample Response
```json
{
"data": {
"id": "task-7oD7doVTQdAFnMLV",
"type": "tasks",
"attributes": {
"category": "task",
"name": "my-task",
"url": "http://example.com",
"description": "Simple description",
"enabled": "true",
"hmac-key": null,
},
"relationships": {
"organization": {
"data": {
"id": "hashicorp",
"type": "organizations"
}
},
"tasks": {
"data": [
{
"id": "task-xjKZw9KaeXda61az",
"type": "tasks"
}
]
}
},
"links": {
"self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
}
}
}
```
## Update a Run Task
`PATCH /tasks/:id`
| Parameter | Description |
| --------- | ----------------------------------------------------------------------------------------------- |
| `:id` | The ID of the task to update. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |
| Status | Response | Reason |
| ------- | --------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required unless otherwise specified.
| Key path | Type | Default | Description |
| -------------------------------------------------------- | --------------- | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type` | string | | Must be `"tasks"`. |
| `data.attributes.name` | string | (previous value) | The name of the run task. Can include letters, numbers, `-`, and `_`. |
| `data.attributes.url` | string | (previous value) | URL to send a run task payload. |
| `data.attributes.description` | string | | The description of the run task. Can be up to 300 characters long including spaces, letters, numbers, and special characters. |
| `data.attributes.category` | string | (previous value) | Must be `"task"`. |
| `data.attributes.hmac-key` | string | (previous value) | (Optional) HMAC key to verify run task. |
| `data.attributes.enabled` | bool | (previous value) | (Optional) Whether the task will be run. |
| `data.attributes.global-configuration.enabled` | bool | (previous value) | (Optional) Whether the task will be associated on all workspaces. |
| `data.attributes.global-configuration.stages` | array | (previous value) | (Optional) An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.global-configuration.enforcement-level` | string | (previous value) | (Optional) The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`. |
### Sample Payload
```json
{
"data": {
"type": "tasks",
"attributes": {
"name": "new-example",
"url": "http://new-example.com",
"description": "New description",
"hmac_key": "new-secret",
"enabled": "false",
"category": "task",
"global-configuration": {
"enabled": false
}
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```
### Sample Response
```json
{
"data": {
"id": "task-7oD7doVTQdAFnMLV",
"type": "tasks",
"attributes": {
"category": "task",
"name": "new-example",
"url": "http://new-example.com",
"description": "New description",
"enabled": "false",
"hmac-key": null,
"global-configuration": {
"enabled": false,
"stages": ["pre_plan"],
"enforcement-level": "mandatory"
}
},
"relationships": {
"organization": {
"data": {
"id": "hashicorp",
"type": "organizations"
}
},
"tasks": {
"data": [
{
"id": "wstask-xjKZw9KaeXda61az",
"type": "workspace-tasks"
}
]
}
},
"links": {
"self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
}
}
}
```
## Delete a Run Task
`DELETE /tasks/:id`
| Parameter | Description |
| --------- | --------------------------------------------------------------------------------------------------- |
| `:id` | The ID of the run task to delete. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |
| Status | Response | Reason |
| ------- | ------------------------- | ---------------------------------------------------------- |
| [204][] | No Content | Successfully deleted the run task |
| [404][] | [JSON API error object][] | Run task not found, or user unauthorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```
## Associate a Run Task to a Workspace
`POST /workspaces/:workspace_id/tasks`
| Parameter | Description |
| --------------- | ------------------------ |
| `:workspace_id` | The ID of the workspace. |
This endpoint associates an existing run task with a specific workspace.
This involves setting the run task enforcement level, which determines whether the run task blocks runs from completing.
- Advisory run tasks cannot block a run from completing. If the task fails, the run will proceed with a warning.
- Mandatory run tasks block a run from completing. If the task fails (including a timeout or unexpected remote error condition), the run stops with an error.
You may also configure the run task to begin during specific [run stages](/terraform/cloud-docs/run/states). Run tasks use the [Post-Plan Stage](/terraform/cloud-docs/run/states#the-post-plan-stage) by default.
| Status | Response | Reason |
| ------- | ------------------------- | ---------------------------------------------------------------------- |
| [204][] | No Content | The request was successful |
| [404][] | [JSON API error object][] | Workspace or run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body |
### Request Body
This POST endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
|-------------------------------------|--------|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data.type` | string | | Must be `"workspace-tasks"`. |
| `data.attributes.enforcement-level` | string | | The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`. |
| `data.attributes.stage` | string | `"post_plan"` | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.stages` | array | `["post_plan"]` | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.relationships.task.data.id` | string | | The ID of the run task. |
| `data.relationships.task.data.type` | string | | Must be `"tasks"`. |
### Sample Payload
```json
{
"data": {
"type": "workspace-tasks",
"attributes": {
"enforcement-level": "advisory",
"stages": ["post_plan"]
},
"relationships": {
"task": {
"data": {
"id": "task-7oD7doVTQdAFnMLV",
"type": "tasks"
}
}
}
}
}
```
### Sample Request
```shell
curl \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks
```
### Sample Response
```json
{
"data": {
"id": "wstask-tBXYu8GVAFBpcmPm",
"type": "workspace-tasks",
"attributes": {
"enforcement-level": "advisory",
"stage": "post_plan",
"stages": ["post_plan"]
},
"relationships": {
"task": {
"data": {
"id": "task-7oD7doVTQdAFnMLV",
"type": "tasks"
}
},
"workspace": {
"data": {
"id": "ws-PphL7ix3yGasYGrq",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks/task-tBXYu8GVAFBpcmPm"
}
}
}
```
## List Workspace Run Tasks
`GET /workspaces/:workspace_id/tasks`
| Parameter | Description |
| --------------- | -------------------------------- |
| `:workspace_id` | The workspace to list tasks for. |
| Status | Response | Reason |
| ------- | --------------------------------------- | ----------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | Request was successful |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action |
### Query Parameters
This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.
| Parameter | Description |
| -------------- | --------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 run tasks per page. |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks
```
### Sample Response
```json
{
"data": [
{
"id": "wstask-tBXYu8GVAFBpcmPm",
"type": "workspace-tasks",
"attributes": {
"enforcement-level": "advisory",
"stage": "post_plan",
"stages": ["post_plan"]
},
"relationships": {
"task": {
"data": {
"id": "task-hu74ST39g566Q4m5",
"type": "tasks"
}
},
"workspace": {
"data": {
"id": "ws-kRsDRPtTmtcEme4t",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
}
}
],
"links": {
"self": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"first": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
"prev": null,
"next": null,
"last": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20"
},
"meta": {
"pagination": {
"current-page": 1,
"page-size": 20,
"prev-page": null,
"next-page": null,
"total-pages": 1,
"total-count": 1
}
}
}
```
## Show Workspace Run Task
`GET /workspaces/:workspace_id/tasks/:id`
| Parameter | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------------- |
| `:id` | The ID of the workspace task to show. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs. |
| Status | Response | Reason |
| ------- | --------------------------------------- | ------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Workspace run task not found or user unauthorized to perform action |
### Sample Request
```shell
curl --request GET \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```
### Sample Response
```json
{
"data": {
"id": "wstask-tBXYu8GVAFBpcmPm",
"type": "workspace-tasks",
"attributes": {
"enforcement-level": "advisory",
"stage": "post_plan",
"stages": ["post_plan"]
},
"relationships": {
"task": {
"data": {
"id": "task-hu74ST39g566Q4m5",
"type": "tasks"
}
},
"workspace": {
"data": {
"id": "ws-kRsDRPtTmtcEme4t",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm"
}
}
}
```
## Update Workspace Run Task
`PATCH /workspaces/:workspace_id/tasks/:id`
| Parameter | Description |
| --------- | ------------------------------------------------------------------------------------------------------------------- |
| `:id` | The ID of the task to update. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs. |
| Status | Response | Reason |
| ------- | --------------------------------------- | ------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Workspace run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |
### Request Body
This PATCH endpoint requires a JSON object with the following properties as a request payload.
Properties without a default value are required.
| Key path | Type | Default | Description |
|-------------------------------------|--------|------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data.type` | string | (previous value) | Must be `"workspace-tasks"`. |
| `data.attributes.enforcement-level` | string | (previous value) | The enforcement level of the workspace run task. Must be `"advisory"` or `"mandatory"`. |
| `data.attributes.stage`             | string | (previous value) | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.stages` | array | (previous value) | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
### Sample Payload
```json
{
"data": {
"type": "workspace-tasks",
"attributes": {
"enforcement-level": "mandatory",
"stages": ["post_plan"]
}
}
}
```
#### Deprecated Payload
```json
{
"data": {
"type": "workspace-tasks",
"attributes": {
"enforcement-level": "mandatory",
"stages": ["post_plan"]
}
}
}
```
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request PATCH \
--data @payload.json \
https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```
### Sample Response
```json
{
"data": {
"id": "wstask-tBXYu8GVAFBpcmPm",
"type": "workspace-tasks",
"attributes": {
"enforcement-level": "mandatory",
"stage": "post_plan",
"stages": ["post_plan"]
},
"relationships": {
"task": {
"data": {
"id": "task-hu74ST39g566Q4m5",
"type": "tasks"
}
},
"workspace": {
"data": {
"id": "ws-kRsDRPtTmtcEme4t",
"type": "workspaces"
}
}
},
"links": {
"self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
}
}
}
```
## Delete Workspace Run Task
`DELETE /workspaces/:workspace_id/tasks/:id`
| Parameter | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `:id`     | The ID of the workspace run task to delete. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs.  |
| Status | Response | Reason |
| ------- | ------------------------- | -------------------------------------------------------------------- |
| [204][] | No Content | Successfully deleted the workspace run task |
| [404][] | [JSON API error object][] | Workspace run task not found, or user unauthorized to perform action |
### Sample Request
```shell
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request DELETE \
https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
``` | terraform | page title Run Tasks API Docs HCP Terraform description Use the tasks endpoint to manage run tasks List show create update and delete run tasks and list show update delete and associate workspace run tasks using the HTTP API 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 202 https developer mozilla org en US docs Web HTTP Status 202 204 https developer mozilla org en US docs Web HTTP Status 204 400 https developer mozilla org en US docs Web HTTP Status 400 401 https developer mozilla org en US docs Web HTTP Status 401 403 https developer mozilla org en US docs Web HTTP Status 403 404 https developer mozilla org en US docs Web HTTP Status 404 409 https developer mozilla org en US docs Web HTTP Status 409 412 https developer mozilla org en US docs Web HTTP Status 412 422 https developer mozilla org en US docs Web HTTP Status 422 429 https developer mozilla org en US docs Web HTTP Status 429 500 https developer mozilla org en US docs Web HTTP Status 500 504 https developer mozilla org en US docs Web HTTP Status 504 JSON API document terraform cloud docs api docs json api documents JSON API error object https jsonapi org format error objects JSON API Schema document https github com hashicorp terraform docs common blob main website public schema run tasks runtask results json Run Tasks API Run tasks terraform cloud docs workspaces settings run tasks allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle Run tasks are reusable configurations that you can associate to any workspace in an organization This page lists the API endpoints for run tasks in an organization and explains how to associate run tasks to workspaces BEGIN TFC only name pnp callout include tfc package callouts run tasks mdx END TFC only name pnp callout Refer to run tasks Integration terraform cloud docs api docs run tasks run tasks integration for the API endpoints related triggering run tasks and the expected integration response Required Permissions To interact with run tasks on an organization you need the Manage Run Tasks permission terraform cloud docs users teams organizations permissions manage run tasks To associate or dissociate run tasks in a workspace you need the Manage Workspace Run Tasks permission terraform cloud docs users teams organizations permissions general workspace permissions on that particular workspace permissions citation intentionally unused keep for maintainers Create a Run Task POST organizations organization name tasks Parameter Description organization name The organization to create a run task in The organization must already exist in HCP Terraform and the token authenticating the API request must have owner permission terraform cloud docs users teams organizations permissions permissions citation intentionally unused keep for maintainers Status Response Reason 201 JSON API document type tasks Successfully created a run task 404 JSON API error object Organization not found or user unauthorized to perform action 422 JSON API error object Malformed request body missing attributes wrong types etc Request Body This POST endpoint requires a JSON object with the following properties as a request payload Properties without a default value are required unless otherwise specified Key path Type Default Description data type string Must be tasks data attributes name string The name of the task Can include letters numbers and data attributes url string URL 
to send a run task payload data attributes description string The description of the run task Can be up to 300 characters long including spaces letters numbers and special characters data attributes category string Must be task data attributes hmac key string Optional HMAC key to verify run task data attributes enabled bool true Optional Whether the task will be run data attributes global configuration enabled bool false Optional Whether the task will be associated on all workspaces data attributes global configuration stages array Optional An array of strings representing the stages of the run lifecycle when the run task should begin Must be one or more of pre plan post plan pre apply or post apply data attributes global configuration enforcement level string Optional The enforcement level of the workspace task Must be advisory or mandatory Sample Payload json data type tasks attributes name example url http example com description Simple description hmac key secret enabled true category task global configuration enabled true stages pre plan enforcement level mandatory Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request POST data payload json https app terraform io api v2 organizations my organization tasks Sample Response json data id task 7oD7doVTQdAFnMLV type tasks attributes category task name my run task url http example com description Simple description enabled true hmac key null global configuration enabled true stages pre plan enforcement level mandatory relationships organization data id hashicorp type organizations tasks data links self api v2 tasks task 7oD7doVTQdAFnMLV List Run Tasks GET organizations organization name tasks Parameter Description organization name The organization to list tasks for Status Response Reason 200 JSON API document type tasks Request was successful 404 JSON API error object Organization not found or user unauthorized to perform action Query Parameters This endpoint supports pagination with standard URL query parameters terraform cloud docs api docs query parameters Remember to percent encode as 5B and as 5D if your tooling doesn t automatically encode URLs Parameter Description include Optional Allows including related resource data Value must be a comma separated list containing one or more of workspace tasks or workspace tasks workspace page number Optional If omitted the endpoint will return the first page page size Optional If omitted the endpoint will return 20 run tasks per page Sample Request shell curl header Authorization Bearer TOKEN https app terraform io api v2 organizations my organization tasks Sample Response json data id task 7oD7doVTQdAFnMLV type tasks attributes category task name my task url http example com description Simple description enabled true hmac key null global configuration enabled true stages pre plan enforcement level mandatory relationships organization data id hashicorp type organizations tasks data links self api v2 tasks task 7oD7doVTQdAFnMLV links self https app terraform io api v2 organizations hashicorp tasks page 5Bnumber 5D 1 page 5Bsize 5D 20 first https app terraform io api v2 organizations hashicorp tasks page 5Bnumber 5D 1 page 5Bsize 5D 20 prev null next null last https app terraform io api v2 organizations hashicorp tasks page 5Bnumber 5D 1 page 5Bsize 5D 20 meta pagination current page 1 page size 20 prev page null next page null total pages 1 total count 1 Show a Run Task GET tasks id Parameter Description id The ID of the task to show Use 
the List Run Tasks list run tasks endpoint to find IDs Status Response Reason 200 JSON API document type tasks The request was successful 404 JSON API error object Run task not found or user unauthorized to perform action Parameter Description include Optional Allows including related resource data Value must be a comma separated list containing one or more of workspace tasks or workspace tasks workspace Sample Request shell curl request GET H Authorization Bearer TOKEN H Content Type application vnd api json https app terraform io api v2 tasks task 7oD7doVTQdAFnMLV Sample Response json data id task 7oD7doVTQdAFnMLV type tasks attributes category task name my task url http example com description Simple description enabled true hmac key null relationships organization data id hashicorp type organizations tasks data id task xjKZw9KaeXda61az type tasks links self api v2 tasks task 7oD7doVTQdAFnMLV Update a Run Task PATCH tasks id Parameter Description id The ID of the task to update Use the List Run Tasks list run tasks endpoint to find IDs Status Response Reason 200 JSON API document type tasks The request was successful 404 JSON API error object Run task not found or user unauthorized to perform action 422 JSON API error object Malformed request body missing attributes wrong types etc Request Body This PATCH endpoint requires a JSON object with the following properties as a request payload Properties without a default value are required unless otherwise specified Key path Type Default Description data type string Must be tasks data attributes name string previous value The name of the run task Can include letters numbers and data attributes url string previous value URL to send a run task payload data attributes description string The description of the run task Can be up to 300 characters long including spaces letters numbers and special characters data attributes category string previous value Must be task data attributes hmac key string previous value Optional HMAC key to verify run task data attributes enabled bool previous value Optional Whether the task will be run data attributes global configuration enabled bool previous value Optional Whether the task will be associated on all workspaces data attributes global configuration stages array previous value Optional An array of strings representing the stages of the run lifecycle when the run task should begin Must be one or more of pre plan post plan pre apply or post apply data attributes global configuration enforcement level string previous value Optional The enforcement level of the workspace task Must be advisory or mandatory Sample Payload json data type tasks attributes name new example url http new example com description New description hmac key new secret enabled false category task global configuration enabled false Sample Request shell curl header Authorization Bearer TOKEN header Content Type application vnd api json request PATCH data payload json https app terraform io api v2 tasks task 7oD7doVTQdAFnMLV Sample Response json data id task 7oD7doVTQdAFnMLV type tasks attributes category task name new example url http new example com description New description enabled false hmac key null global configuration enabled false stages pre plan enforcement level mandatory relationships organization data id hashicorp type organizations tasks data id wstask xjKZw9KaeXda61az type workspace tasks links self api v2 tasks task 7oD7doVTQdAFnMLV Delete a Run Task DELETE tasks id Parameter Description id The ID of the run task to delete 
Use the [List Run Tasks](#list-run-tasks) endpoint to find IDs. |

| Status  | Response                  | Reason                                                      |
| ------- | ------------------------- | ----------------------------------------------------------- |
| [204][] | No Content                | Successfully deleted the run task                            |
| [404][] | [JSON API error object][] | Run task not found, or user unauthorized to perform action   |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```

## Associate a Run Task to a Workspace

`POST /workspaces/:workspace_id/tasks`

| Parameter       | Description             |
| --------------- | ----------------------- |
| `:workspace_id` | The ID of the workspace |

This endpoint associates an existing run task to a specific workspace.

This involves setting the run task enforcement level, which determines whether the run task blocks runs from completing.

Advisory run tasks can not block a run from completing. If the task fails, the run will proceed with a warning.

Mandatory run tasks block a run from completing. If the task fails (including a timeout or unexpected remote error condition), the run stops with an error.

You may also configure the run task to begin during specific [run stages](/terraform/cloud-docs/run/states). Run tasks use the [Post-Plan Stage](/terraform/cloud-docs/run/states#the-post-plan-stage) by default.

| Status  | Response                  | Reason                                                                   |
| ------- | ------------------------- | ------------------------------------------------------------------------ |
| [204][] | No Content                | The request was successful                                                |
| [404][] | [JSON API error object][] | Workspace or run task not found, or user unauthorized to perform action   |
| [422][] | [JSON API error object][] | Malformed request body                                                    |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                            | Type   | Default         | Description |
| ----------------------------------- | ------ | --------------- | ----------- |
| `data.type`                         | string |                 | Must be `"workspace-tasks"`. |
| `data.attributes.enforcement-level` | string |                 | The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`. |
| `data.attributes.stage`             | string | `"post_plan"`   | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.stages`            | array  | `["post_plan"]` | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.relationships.task.data.id`   | string |                 | The ID of the run task. |
| `data.relationships.task.data.type` | string |                 | Must be `"tasks"`. |

### Sample Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-7oD7doVTQdAFnMLV",
          "type": "tasks"
        }
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-7oD7doVTQdAFnMLV",
          "type": "tasks"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-PphL7ix3yGasYGrq",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks/task-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## List Workspace Run Tasks

`GET /workspaces/:workspace_id/tasks`

| Parameter       | Description                     |
| --------------- | ------------------------------- |
| `:workspace_id` | The workspace to list tasks for |

| Status  | Response                                | Reason                                                      |
| ------- | --------------------------------------- | ----------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | Request was successful                                       |
| [404][] | [JSON API error object][]               | Workspace not found, or user unauthorized to perform action  |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter      | Description                                                                |
| -------------- | -------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.         |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 run tasks per page.  |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks
```

### Sample Response

```json
{
  "data": [
    {
      "id": "wstask-tBXYu8GVAFBpcmPm",
      "type": "workspace-tasks",
      "attributes": {
        "enforcement-level": "advisory",
        "stage": "post_plan",
        "stages": ["post_plan"]
      },
      "relationships": {
        "task": {
          "data": {
            "id": "task-hu74ST39g566Q4m5",
            "type": "tasks"
          }
        },
        "workspace": {
          "data": {
            "id": "ws-kRsDRPtTmtcEme4t",
            "type": "workspaces"
          }
        }
      },
      "links": {
        "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "first": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "prev": null,
    "next": null,
    "last": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 20,
      "prev-page": null,
      "next-page": null,
      "total-pages": 1,
      "total-count": 1
    }
  }
}
```

## Show Workspace Run Task

`GET /workspaces/:workspace_id/tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id`     | The ID of the workspace task to show. Use the [List Workspace Run Tasks](#list-workspace-run-tasks) endpoint to find IDs. |

| Status  | Response                                | Reason                                                               |
| ------- | --------------------------------------- | --------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful                                             |
| [404][] | [JSON API error object][]               | Workspace run task not found, or user unauthorized to perform action   |

### Sample Request

```shell
curl \
  --request GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-hu74ST39g566Q4m5",
          "type": "tasks"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-kRsDRPtTmtcEme4t",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## Update Workspace Run Task

`PATCH /workspaces/:workspace_id/tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id`     | The ID of the task to update. Use the [List Workspace Run Tasks](#list-workspace-run-tasks) endpoint to find IDs. |

| Status  | Response                                | Reason                                                               |
| ------- | --------------------------------------- | --------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful                                             |
| [404][] | [JSON API error object][]               | Workspace run task not found, or user unauthorized to perform action   |
| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.)         |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                            | Type   | Default          | Description |
| ----------------------------------- | ------ | ---------------- | ----------- |
| `data.type`                         | string | (previous value) | Must be `"workspace-tasks"`. |
| `data.attributes.enforcement-level` | string | (previous value) | The enforcement level of the workspace run task. Must be `"advisory"` or `"mandatory"`. |
| `data.attributes.stage`             | string | (previous value) | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"` or `"post_plan"`. |
| `data.attributes.stages`            | array  | (previous value) | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |

### Sample Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stages": ["post_plan"]
    }
  }
}
```

### Deprecated Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stage": "post_plan"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-hu74ST39g566Q4m5",
          "type": "tasks"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-kRsDRPtTmtcEme4t",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## Delete Workspace Run Task

`DELETE /workspaces/:workspace_id/tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id`     | The ID of the workspace run task to delete. Use the [List Workspace Run Tasks](#list-workspace-run-tasks) endpoint to find IDs. |

| Status  | Response                  | Reason                                                               |
| ------- | ------------------------- | --------------------------------------------------------------------- |
| [204][] | No Content                | Successfully deleted the workspace run task                            |
| [404][] | [JSON API error object][] | Workspace run task not found, or user unauthorized to perform action   |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```
---
page_title: Plans and Features - HCP Terraform
description: >-
How HCP Terraform and Terraform Enterprise help teams use Terraform to
manage infrastructure at scale.
tfc_only: true
---
# HCP Terraform Plans and Features
[cli]: /terraform/cli
[speculative plans]: /terraform/cloud-docs/run/remote-operations#speculative-plans
[remote_state]: /terraform/language/state/remote-state-data
[outputs]: /terraform/language/values/outputs
[modules]: /terraform/language/modules/develop
[terraform enterprise]: /terraform/enterprise
HCP Terraform is a platform that performs Terraform runs to provision infrastructure, either on demand or in response to various events. Unlike a general-purpose continuous integration (CI) system, it is deeply integrated with Terraform's workflows and data, which allows it to make Terraform significantly more convenient and powerful.
> **Hands On:** Try our [What is HCP Terraform - Intro and Sign Up](/terraform/tutorials/cloud-get-started/cloud-sign-up) tutorial.
## Free and Paid Plans
HCP Terraform is a commercial SaaS product developed by HashiCorp. Many of its features are free for small teams, including remote state storage, remote runs, and VCS connections. We also offer paid plans for larger teams that include additional collaboration and governance features.
HCP Terraform manages plans and billing at the [organization level](/terraform/cloud-docs/users-teams-organizations/organizations). Each HCP Terraform user can belong to multiple organizations, which might subscribe to different billing plans. The set of features available depends on which organization you are currently working in.
Refer to [Terraform pricing](https://www.hashicorp.com/products/terraform/pricing) for details about available plans and their features.
### Free Organizations
Small teams can use most of HCP Terraform's features for free, including remote Terraform execution, VCS integration, the private module registry, single sign-on, policy enforcement, run tasks, and more.
Free organizations are limited to 500 managed resources. Refer to [What is a managed resource](/terraform/cloud-docs/overview/estimate-hcp-terraform-cost#what-is-a-managed-resource) for more details.
### Paid Features
Some of HCP Terraform's features are limited to particular paid upgrade plans.
Each higher paid upgrade plan is a strict superset of any lower plans — for example, the **Plus** edition includes all of the features of the **Standard** edition. Paid feature callouts in the documentation indicate the _lowest_ edition at which the feature is available, but any higher plans also include that feature.
Terraform Enterprise generally includes all of HCP Terraform's paid features, plus additional features geared toward large enterprises. However, some features are implemented differently due to the differences between self-hosted and SaaS environments, and some features might be absent due to being impractical or irrelevant in the types of organizations that need Terraform Enterprise. Cloud-only or Enterprise-only features are clearly indicated in documentation.
### Changing Your Payment Plan
[Organization owners](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team) can manage an organization's billing plan. The plan and billing settings include an integrated storefront, and you can subscribe to paid plans with a credit card.
To change an organization's plan:
1. Click **Settings** in the navigation bar.
1. Click **Plan and billing**. The **Plan and Billing** page appears showing your current plan and any available invoices.
1. Click **Change plan**.
1. Select a plan, enter your billing information, and click **Update plan**.
## Terraform Workflow
HCP Terraform runs [Terraform CLI][cli] to provision infrastructure.
In its default state, Terraform CLI uses a local workflow, performing operations on the workstation where it is invoked and storing state in a local directory.
Since teams must share responsibilities and awareness to avoid single points of failure, working with Terraform in a team requires a remote workflow. At minimum, state must be shared; ideally, Terraform should execute in a consistent remote environment.
HCP Terraform offers a team-oriented remote Terraform workflow, designed to be comfortable for existing Terraform users and easily learned by new users. The foundations of this workflow are remote Terraform execution, a workspace-based organizational model, version control integration, command-line integration, remote state management with cross-workspace data sharing, and a private Terraform module registry.
### Remote Terraform Execution
HCP Terraform runs Terraform on disposable virtual machines in its own cloud infrastructure by default. You can leverage [HCP Terraform agents](/terraform/cloud-docs/agents) to run Terraform on your own isolated, private, or on-premises infrastructure. Remote Terraform execution is sometimes referred to as "remote operations."
Remote execution helps provide consistency and visibility for critical provisioning operations. It also enables powerful features like Sentinel policy enforcement, cost estimation, notifications, version control integration, and more.
- More info: [Terraform Runs and Remote Operations](/terraform/cloud-docs/run/remote-operations)
#### Support for Local Execution
[execution_mode]: /terraform/cloud-docs/workspaces/settings#execution-mode
Remote execution can be disabled on specific workspaces with the ["Execution Mode" setting][execution_mode]. The workspace will still host remote state, and Terraform CLI can use that state for local runs via the [HCP Terraform CLI integration](/terraform/cli/cloud).
## Organize Infrastructure with Projects and Workspaces
Terraform's local workflow manages a collection of infrastructure with a persistent working directory, which contains configuration, state data, and variables. You can use separate directories to organize infrastructure resources into meaningful groups, and Terraform will use the configuration in the directory you invoke Terraform commands from.
HCP Terraform organizes infrastructure into projects and workspaces instead of directories. Each workspace contains everything necessary to manage a given collection of infrastructure, and Terraform uses that content when it runs in the context of that workspace.
You can use projects to organize your workspaces into groups. Organizations with HCP Terraform [Standard](https://www.hashicorp.com/products/terraform/pricing) Edition can assign teams permissions for specific projects.
This lets you grant access to collections of workspaces instead of using workspace-specific or organization-wide permissions, making it easier to limit access to only the resources required for a team member's job function.
Refer to [Workspaces](/terraform/cloud-docs/workspaces) and [Organizing Workspaces with Projects](/terraform/cloud-docs/workspaces/projects) for more details.
### Remote State Management, Data Sharing, and Run Triggers
HCP Terraform acts as a remote backend for your Terraform state. State storage is tied to workspaces, which helps keep state associated with the configuration that created it.
HCP Terraform also enables you to share information between workspaces with root-level [outputs][]. Separate groups of infrastructure resources often need to share a small amount of information, and workspace outputs are an ideal interface for these dependencies.
Workspaces that use remote operations can use [`terraform_remote_state` data sources][remote_state] to access other workspaces' outputs, subject to per-workspace access controls. And since new information from one workspace might change the desired infrastructure state in another, you can create workspace-to-workspace run triggers to ensure downstream workspaces react when their dependencies change.
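For example, a minimal sketch of consuming another workspace's outputs with `terraform_remote_state`, assuming a hypothetical `network-prod` workspace in an `example-org` organization that exposes a `subnet_id` output:

```hcl
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "example-org"

    workspaces = {
      name = "network-prod"
    }
  }
}

# Reference the shared output elsewhere in this workspace's configuration.
output "app_subnet_id" {
  value = data.terraform_remote_state.network.outputs.subnet_id
}
```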
- More info: [Terraform State in HCP Terraform](/terraform/cloud-docs/workspaces/state), [Run Triggers](/terraform/cloud-docs/workspaces/settings/run-triggers)
### Version Control Integration
Like other kinds of code, infrastructure-as-code belongs in version control, so HCP Terraform is designed to work directly with your version control system (VCS) provider.
Each workspace can be linked to a VCS repository that contains its Terraform configuration, optionally specifying a branch and subdirectory. HCP Terraform automatically retrieves configuration content from the repository, and will also watch the repository for changes:
- When new commits are merged, linked workspaces automatically run Terraform plans with the new code.
- When pull requests are opened, linked workspaces run speculative plans with the proposed code changes and post the results as a pull request check; reviewers can see at a glance whether the plan was successful, and can click through to view the proposed changes in detail.
VCS integration is powerful, but optional; if you use an unsupported VCS or want to preserve an existing validation and deployment pipeline, you can use the API or Terraform CLI to upload new configuration versions. You'll still get the benefits of remote execution and HCP Terraform's other features.
- More info: [VCS-driven Runs](/terraform/cloud-docs/run/ui)
- More info: [Supported VCS Providers](/terraform/cloud-docs/vcs#supported-vcs-providers)
### Command Line Integration
Remote execution offers major benefits to a team, but local execution offers major benefits to individual developers; for example, most Terraform users run `terraform plan` to interactively check their work while editing configurations.
HCP Terraform offers the best of both worlds, allowing you to run remote plans from your local command line. Configure the [HCP Terraform CLI integration](/terraform/cli/cloud), and the `terraform plan` command will start a remote run in the configured HCP Terraform workspace. The output of the run streams directly to your terminal, and you can also share a link to the remote run with your teammates.
Remote CLI-driven runs use the current working directory's Terraform configuration and the remote workspace's variables, so you don't need to obtain production cloud credentials just to preview a configuration change.
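As a sketch, enabling the integration only requires a `cloud` block in your configuration; the organization and workspace names below are placeholders:

```hcl
terraform {
  cloud {
    organization = "example-org"

    workspaces {
      name = "app-prod"
    }
  }
}
```

After `terraform login` and `terraform init`, running `terraform plan` locally starts a remote run in that workspace.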
The HCP Terraform CLI integration also supports state manipulation commands like `terraform import` or `terraform taint`.
-> **Note:** When used with HCP Terraform, the `terraform plan` command runs [speculative plans][], which preview changes without modifying real infrastructure. You can also use `terraform apply` to perform full remote runs, but only with workspaces that are _not_ connected to a VCS repository. This helps ensure that your VCS remains the source of record for all real infrastructure changes.
- More info: [CLI-driven Runs](/terraform/cloud-docs/run/cli)
### Private Registry
Even small teams can benefit greatly by codifying commonly used infrastructure patterns into reusable [modules][].
Terraform can fetch providers and modules from many sources. HCP Terraform makes it easier to find providers and modules to use with a private registry. Users throughout your organization can browse a directory of internal providers and modules, and can specify flexible version constraints for the modules they use in their configurations. Easy versioning lets downstream teams use private modules with confidence, and frees upstream teams to iterate faster.
The private registry uses your VCS as the source of truth, relying on Git tags to manage module versions. Tell HCP Terraform which repositories contain modules, and the registry handles the rest.
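For example, a configuration might consume a private module with a flexible version constraint. The organization and module names here are hypothetical; private registry sources use the `<hostname>/<organization>/<name>/<provider>` format:

```hcl
module "vpc" {
  # Fetch the module from the organization's private registry,
  # allowing any 2.x release at or above 2.1.
  source  = "app.terraform.io/example-org/vpc/aws"
  version = "~> 2.1"
}
```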
- More info: [Private Registry](/terraform/cloud-docs/registry)
## Integrations
In addition to providing powerful extensions to the core Terraform workflow, HCP Terraform makes it simple to integrate infrastructure provisioning with your business's other systems.
### Full API
Nearly all of HCP Terraform's features are available in [its API](/terraform/cloud-docs/api-docs), which means other services can create or configure workspaces, upload configurations, start Terraform runs, and more. There's even [a Terraform provider based on the API](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs), so you can manage your HCP Terraform teams and workspaces as a Terraform configuration.
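As a brief sketch using the `hashicorp/tfe` provider (the workspace and organization names are placeholders), assuming your API token is set in the `TFE_TOKEN` environment variable:

```hcl
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

# The provider reads its API token from the TFE_TOKEN environment variable.
provider "tfe" {}

# Manage an HCP Terraform workspace as a Terraform resource.
resource "tfe_workspace" "app" {
  name         = "app-prod"
  organization = "example-org"
}
```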
- More info: [API](/terraform/cloud-docs/api-docs)
### Notifications
HCP Terraform can send notifications about Terraform runs to other systems, including Slack and any other service that accepts webhooks. Notifications can be configured per-workspace.
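For example, a hedged sketch of a Slack notification managed with the `tfe_notification_configuration` resource from the `hashicorp/tfe` provider; the workspace lookup and webhook URL are placeholders:

```hcl
data "tfe_workspace" "app" {
  name         = "app-prod"
  organization = "example-org"
}

resource "tfe_notification_configuration" "slack" {
  name             = "slack-run-alerts"
  enabled          = true
  destination_type = "slack"
  # Notify only on runs that error or need human attention.
  triggers         = ["run:needs_attention", "run:errored"]
  url              = "https://hooks.slack.com/services/EXAMPLE"
  workspace_id     = data.tfe_workspace.app.id
}
```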
- More info: [Notifications](/terraform/cloud-docs/workspaces/settings/notifications)
### Run Tasks
Run Tasks allow HCP Terraform to execute tasks in external systems at specific points in the HCP Terraform run lifecycle.
There are several [partner integrations](https://www.hashicorp.com/integrations) already available, or you can create your own based on the [API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks).
- More info: [Run Tasks](/terraform/cloud-docs/workspaces/settings/run-tasks)
## Access Control and Governance
Larger organizations are more complex, and tend to use access controls and explicit policies to help manage that complexity. HCP Terraform's paid upgrade plans provide extra features to help meet the control and governance needs of large organizations.
- More info: [Free and Paid Plans](/terraform/cloud-docs/overview)
### Team-Based Permissions System
With HCP Terraform's team management, you can define groups of users that match your organization's real-world teams and assign them only the permissions they need. When combined with the access controls your VCS provider already offers for code, workspace permissions are an effective way to follow the principle of least privilege.
- More info: [Users, Teams, and Organizations](/terraform/cloud-docs/users-teams-organizations/permissions)
### Policy Enforcement
<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/policies.mdx'
<!-- END: TFC:only name:pnp-callout -->
Policy-as-code lets you define and enforce granular policies for how your organization provisions infrastructure. You can limit the size of compute VMs, confine major updates to defined maintenance windows, and much more.
You can use the Sentinel and Open Policy Agent (OPA) policy-as-code frameworks to define policies. Depending on the settings, policies can act as advisory warnings, firm requirements that prevent Terraform from provisioning infrastructure, or soft requirements that your compliance team can bypass when appropriate.
Refer to [Policy Enforcement](/terraform/cloud-docs/policy-enforcement) for details.
### Cost Estimation
Before making changes to infrastructure in the major cloud providers, HCP Terraform can display an estimate of its total cost, as well as any change in cost caused by the proposed updates. Cost estimates can also be used in Sentinel policies to provide warnings for major price shifts.
- More info: [Cost Estimation](/terraform/cloud-docs/cost-estimation)
---
page_title: Security Model - HCP Terraform
description: >-
Learn the authorization model, potential security threats, and our
recommendations for securely using HCP Terraform.
tfc_only: true
---
# HCP Terraform security model
## Purpose of this document
This document explains the security model of HCP Terraform and the security controls available to end users. Additionally, it provides best practices for securely managing your infrastructure with HCP Terraform.
## Important concepts
### Projects, workspaces, and teams
HCP Terraform organizes infrastructure with workspaces. Workspaces represent a logical security boundary within the organization. Variables, state, SSH keys, and log output are local to a workspace. You can grant teams [read, plan, write, admin, or a customized set of permissions](/terraform/cloud-docs/users-teams-organizations/permissions) within a workspace.
Projects let you group related workspaces in your organization. You can use projects to assign [read, write, maintain, admin, or a customized set of permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions) to a particular team which grants specific permissions to all workspaces in the project.
### Terraform runs - plans and applies
HCP Terraform will provision infrastructure according to your Terraform configuration which you can upload through the VCS-driven, API-driven, or CLI-driven workflows. You can read more about the different workflows [here](/terraform/cloud-docs/run/remote-operations#starting-runs). It’s important to note that HCP Terraform performs all Terraform operations within the same privilege context. Both the plan and apply operations have access to the full workspace variables, state versions, and Terraform configuration.
### Terraform state file
HCP Terraform retains the current and all historical [state](/terraform/language/state) versions for each workspace. Depending on the resources that are used in your Terraform configuration, these state versions may contain sensitive data such as database passwords, resource IDs, etc.
## Personas
### Organization owners
Members of the [owners team](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team) have administrator-level privileges within an organization. Members of this team will have access to workspaces, projects, and settings within the organization. This role is intended for users who will perform administrative tasks in your organization.
### Workspace and project team members
Teams let you group users within an organization. You can grant teams [read, plan, write, admin, or a customized set of permissions](/terraform/cloud-docs/users-teams-organizations/permissions), each of which allows them to perform various functions within the workspace. You can also grant teams [read, write, maintain, admin, or a customized set of permissions for a project](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions), which grants specific permissions to any workspaces in that project. At a higher level, you can use [organization-level privileges](/terraform/cloud-docs/users-teams-organizations/permissions#organization-permissions), which apply to projects and workspaces across the organization.
### Contributors to connected VCS repositories
HCP Terraform executes Terraform configuration from connected VCS repositories. Depending on the configuration, HCP Terraform may automatically trigger Terraform operations when the connected repositories receive new contributions.
## Authorization model
[![HCP Terraform authorization model diagram](/img/docs/terraform-cloud-authorization.png)](/img/docs/terraform-cloud-authorization.png)

_Click on the diagram for a larger view._
~> **Note:** This diagram displays a useful subset of HCP Terraform's authorization model, but is not comprehensive. Some details were omitted for the sake of clarity. More information is available in our [Permissions documentation](/terraform/cloud-docs/users-teams-organizations/permissions).
Workspaces provide a logical security boundary within the organization. Environment variables and Terraform configurations are isolated within a workspace, and access to a workspace is granted on a per-team basis.
All organizations in HCP Terraform contain an “owners” team, which grants admin-level access to the organization and all its workspaces.
~> **Note:** Teams are not available to free-tier users on HCP Terraform. Organizations at the free level will only have an owners team.
All workspaces in an organization belong to a project. You can grant teams [read, write, maintain, admin, or a customized set of permissions for the project](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions), which grants specific permissions on all workspaces within the project. You can also grant teams [read, plan, write, admin, or a customized set of permissions](/terraform/cloud-docs/users-teams-organizations/permissions#workspace-permissions) for a specific workspace. It’s important to note that, from a security perspective, the plan permission is equivalent to the write permission. The plan permission is provided to protect against accidental Terraform runs but is not intended to stop malicious actors from accessing sensitive data within a workspace. Terraform `plan` and `apply` operations can execute arbitrary code within the ephemeral build environment. Both of these operations happen in the same security context with access to the full set of workspace variables, Terraform configuration, and Terraform state.
By default, teams with read privileges within a workspace can view the workspace's state. You can remove this access by using [customized workspace permissions](/terraform/cloud-docs/users-teams-organizations/permissions#custom-workspace-permissions); however, this will only apply to state file access through the API or UI. Terraform must access the state file in order to perform plan and apply operations, so any user with the ability to upload Terraform configurations and initiate runs will transitively have access to the workspace's state.
State may be shared across workspaces via the [remote state access workspace setting](/terraform/cloud-docs/workspaces/state#accessing-state-from-other-workspaces).
Terraform configuration files in connected VCS repositories are inherently trusted. Commits to connected repositories will automatically queue a plan within the corresponding workspace. Pull requests to connected repositories will initiate a speculative plan, though this behavior may be disabled via the [speculative plan setting](/terraform/cloud-docs/workspaces/settings/vcs#automatic-speculative-plans) on the workspace settings page. HCP Terraform has no knowledge of your VCS's authorization controls and does not associate HCP Terraform user accounts with VCS user accounts — the two should be considered separate identities.
## Threat model
HCP Terraform is designed to execute Terraform operations and manage the state file to ensure that infrastructure is reliably created, updated, and destroyed by multiple users of an organization.
The following are part of the HCP Terraform threat model:
### Confidentiality and integrity of communication between Terraform clients and HCP Terraform
All communication between clients and HCP Terraform is encrypted end-to-end using TLS. HCP Terraform currently supports TLS version 1.2. HCP Terraform communicates with linked VCS repositories using the OAuth 2.0 authorization protocol. HCP Terraform can also be configured to fetch Terraform modules from private repositories using the SSH protocol with a customer-provided private key.
### Confidentiality of state versions, Terraform configurations, and stored variables
As a user, you will entrust HCP Terraform with information that is very sensitive to your organization, such as API tokens, your Terraform configurations, and your Terraform state file. HCP Terraform is designed to ensure the confidentiality of this information; it relies on [Vault Transit](/vault/docs/secrets/transit) for encrypting workspace variables. Terraform configurations and state are encrypted at rest with uniquely derived encryption keys backed by Vault. You can view how all customer data is encrypted and stored on our [data security page](/terraform/cloud-docs/architectural-details/data-security).
### Enforcement of authentication and authorization policies for data access and actions taken through the UI or API
HCP Terraform enforces authorization checks for all actions taken within the API or through the UI. More information about HCP Terraform workspace-level and organization-level permissions is available [here](/terraform/cloud-docs/users-teams-organizations/permissions).
### Isolation of Terraform executions
Each Terraform operation (plan and apply) happens in an ephemeral environment that is created immediately before the run and destroyed after it is completed. The build environment is designed to provide isolation between Terraform executions and between HCP Terraform tenants.
### Reliability and availability of HCP Terraform
HCP Terraform is spread across multiple availability zones for reliability. We perform regular backups of our production data stores and have a process for recovering in case of a major outage.
## What isn’t part of the threat model
### Malicious contributions to Terraform configuration in VCS repositories
Commits and pull requests to connected VCS repositories will trigger a plan operation within the workspace. HCP Terraform does not perform any authentication or authorization checks against commits in linked VCS repositories, and cannot prevent malicious Terraform configuration from exfiltrating sensitive data during plan operations. For this reason, it is important to restrict access to connected VCS repositories. Speculative plans for pull requests may be disabled on the [workspace settings page](/terraform/cloud-docs/workspaces/settings/vcs#automatic-speculative-plans).
-> **Note:** HCP Terraform will not automatically trigger plans for pull requests from forked repositories.
### Malicious Terraform providers or modules
Terraform providers and modules used in your Terraform configuration will have full access to the variables and Terraform state within a workspace. HCP Terraform cannot prevent malicious providers and modules from exfiltrating this sensitive data. We recommend only using trusted modules and providers within your Terraform configuration.
### Malicious bypasses of Terraform policies
The policy-as-code frameworks used by the Terraform [Policy Enforcement](/terraform/cloud-docs/policy-enforcement) feature are embedded within HCP Terraform. Their goal is to enforce compliance with organizational policies and best practices when provisioning infrastructure using Terraform.
It is important to note that the policy-as-code integration in HCP Terraform should be viewed as a guide or set of guardrails, not a security boundary. It is not designed to prevent malicious actors from executing malicious Terraform configurations or modifying infrastructure.
### Malicious or insecure third-party run tasks
Terraform [Run Tasks](/terraform/cloud-docs/integrations/run-tasks) are provided with access to all Terraform configuration and plan data. HCP Terraform does not have the capability to prevent malicious Run Tasks from potentially exfiltrating sensitive data that may be present in either the Terraform configuration or plan.
To minimize potential security risks, we highly recommend only using trusted technology partners for run tasks within your Terraform organization and limiting the number of users who have been assigned the [Manage Run Tasks](/terraform/cloud-docs/users-teams-organizations/permissions#manage-run-tasks) permission.
### Access to sensitive variables or state from Terraform operations
Marking a variable as “sensitive” will prevent it from being displayed in the UI, but will not prevent it from being read by Terraform during plan or apply operations. Similarly, customized workspace permissions allow you to restrict access to workspace state via the UI and API, but will not prevent it from being read during Terraform operations.
### Redaction of sensitive variables in Terraform logs
The logs from a Terraform plan or apply operation are visible to any user with at least “read” level access in the associated workspace. While Terraform tries to avoid writing sensitive information to logs, redactions are best-effort. This feature should not be treated as a security boundary, but instead as a mechanism to mitigate accidental exposure. Additionally, HCP Terraform is unable to protect against malicious users who attempt to use Terraform logs to exfiltrate sensitive data.
## Recommendations for securely using HCP Terraform
### Enforce strong authentication
HCP Terraform supports [two factor authentication](/terraform/cloud-docs/users-teams-organizations/2fa) via SMS or TOTP. Organizations can configure mandatory 2FA for all members in the [organization settings](/terraform/cloud-docs/users-teams-organizations/organizations#authentication). Organizations may choose to configure [SSO for their organization](/terraform/cloud-docs/users-teams-organizations/single-sign-on).
### Minimize the number of users in the owners team
Members of the [owners team](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team) have full access to all workspaces within the organization. If SSO is enabled, members of the “Owners” team will still be able to authenticate with their username and password. This group should be reserved for only a small number of administrators, and membership should be audited periodically.
### Apply the principle of least privilege to workspace membership
[Teams](/terraform/cloud-docs/users-teams-organizations/teams) allow you to group users and assign them various privileges within workspaces. We recommend applying the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) when creating teams and assigning permissions so that each user within your organization has the minimum required privileges.
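As a sketch with the `hashicorp/tfe` provider (all names are placeholders), granting a team write access to a single workspace rather than organization-wide privileges:

```hcl
data "tfe_workspace" "app" {
  name         = "app-prod"
  organization = "example-org"
}

resource "tfe_team" "app_developers" {
  name         = "app-developers"
  organization = "example-org"
}

# Grant the team write access to one workspace only.
resource "tfe_team_access" "app_developers" {
  access       = "write"
  team_id      = tfe_team.app_developers.id
  workspace_id = data.tfe_workspace.app.id
}
```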
### Protect API keys
HCP Terraform allows you to create [user, team, and organization API tokens](/terraform/cloud-docs/api-docs#authentication). You should take care to store these tokens securely, and rotate them periodically.
Vault users can leverage the [Terraform Cloud secret backend](/vault/docs/secrets/terraform), which allows you to generate ephemeral tokens.
### Control access to source code
By default, commits and pull requests to connected VCS repositories will automatically trigger a plan operation in an HCP Terraform workspace. HCP Terraform cannot protect against malicious code in linked repositories, so you should take care to only grant trusted operators access to these repositories.
Workspaces may be configured to [enable or disable speculative plans for pull requests](/terraform/cloud-docs/workspaces/settings/vcs#automatic-speculative-plans) to linked repositories. You should disable this setting if you allow untrusted users to open pull requests in connected VCS repositories.
-> **Note:** HCP Terraform will not automatically trigger plans for pull requests from forked repositories.
### Restrict access to workspace state
Workspaces may be configured to share their state with other workspaces within the organization or globally with the entire organization via the [remote state setting](/terraform/cloud-docs/workspaces/state#accessing-state-from-other-workspaces). Because workspace state may contain sensitive information, we recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other.
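A sketch of this setting via the `hashicorp/tfe` provider, assuming hypothetical `network-prod` and `app-prod` workspaces where only the latter needs the network outputs:

```hcl
data "tfe_workspace" "app" {
  name         = "app-prod"
  organization = "example-org"
}

resource "tfe_workspace" "network" {
  name         = "network-prod"
  organization = "example-org"

  # Do not share state with the entire organization...
  global_remote_state = false

  # ...share it only with the downstream workspace that needs it.
  remote_state_consumer_ids = [data.tfe_workspace.app.id]
}
```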
### Use separate agent pools for sensitive workspaces
You can share [HCP Terraform Agents](/terraform/cloud-docs/agents) across all workspaces within an organization or [scope them to specific workspaces](/terraform/cloud-docs/agents#scope-an-agent-pool-to-specific-workspaces). If multiple workspaces share agent pools, a malicious actor in one of those workspaces could exfiltrate the agent’s API token, access private resources from the perspective of the agent, or modify the agent’s environment, potentially impacting other workspaces. For this reason, we recommend creating separate agent pools for sensitive workspaces and using the agent scoping setting to restrict which workspaces can target each agent pool.
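A sketch of pool scoping with the `hashicorp/tfe` provider, assuming a recent provider version; the pool and workspace names are placeholders:

```hcl
data "tfe_workspace" "payments" {
  name         = "payments-prod"
  organization = "example-org"
}

resource "tfe_agent_pool" "sensitive" {
  name                = "sensitive-workloads"
  organization        = "example-org"
  # Do not expose the pool to every workspace in the organization.
  organization_scoped = false
}

# Only the workspaces listed here may target this pool.
resource "tfe_agent_pool_allowed_workspaces" "sensitive" {
  agent_pool_id         = tfe_agent_pool.sensitive.id
  allowed_workspace_ids = [data.tfe_workspace.payments.id]
}
```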
### Treat Archivist URLs as secrets
HCP Terraform uses a blob storage service called Archivist for storing various pieces of customer data. Archivist URLs have the origin `https://archivist.terraform.io` and are returned by various HCP Terraform APIs, such as the [state versions API](/terraform/cloud-docs/api-docs/state-versions#fetch-the-current-state-version-for-a-workspace). You do not need to submit a bearer token with each request to call the Archivist API. Instead, Archivist URLs contain a short-term signed authorization token that Archivist uses to authorize each request. The expiry time depends on the API endpoints you used to generate the Archivist link. As a result, you must treat Archivist URLs as secrets and avoid logging or sharing them.
### Use dynamic credentials
Storing static credentials in HCP Terraform increases the inherent risk of a malicious user or a compromised plan or apply operation exposing your credentials. Because static credentials are usually long-lived and exposed in many locations, they are troublesome to revoke and replace.
Using [dynamic provider credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/) eliminates the need to store static credentials in HCP Terraform, reducing the risk of exposure. Dynamic provider credentials generate new temporary credentials for each operation and expire after that operation completes.
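For AWS, for example, dynamic credentials are enabled through workspace environment variables. Below is a sketch using the `hashicorp/tfe` provider, where the workspace lookup and the role ARN are placeholders for a role you have configured to trust HCP Terraform's OIDC identity:

```hcl
data "tfe_workspace" "app" {
  name         = "app-prod"
  organization = "example-org"
}

# Tell HCP Terraform to authenticate to AWS dynamically for this workspace.
resource "tfe_variable" "aws_provider_auth" {
  workspace_id = data.tfe_workspace.app.id
  category     = "env"
  key          = "TFC_AWS_PROVIDER_AUTH"
  value        = "true"
}

# The IAM role that runs in this workspace will assume via OIDC.
resource "tfe_variable" "aws_run_role" {
  workspace_id = data.tfe_workspace.app.id
  category     = "env"
  key          = "TFC_AWS_RUN_ROLE_ARN"
  value        = "arn:aws:iam::123456789012:role/hcp-terraform-example"
}
```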
new temporary credentials for each operation and expire after that operation completes |
# Adding a Consul Config Field
This is a checklist of all the places you need to update when adding a new field
to config. There may be a few other special cases not included but this covers
the majority of configs.
We suggest you copy the raw markdown into a gist or local file and check them
off as you go (you can mark them as done by replacing `[ ]` with `[x]` so GitHub
renders them as checked). Then **please include the completed lists you worked
through in your PR description**.
Examples of special cases this doesn't cover are:
- If the config needs special treatment like a different default in `-dev` mode
or differences between CE and Enterprise.
- If custom logic is needed to support backwards compatibility when changing
syntax or semantics of anything.
There are four specific cases covered with increasing complexity:
1. adding a simple config field only used by client agents
1. adding a CLI flag to mirror that config field
1. adding a config field that needs to be used in Consul servers
1. adding a field to the Service Definition
## Adding a Simple Config Field for Client Agents
- [ ] Add the field to the Config struct (or an appropriate sub-struct) in
`agent/config/config.go`.
- [ ] Add the field to the actual RuntimeConfig struct in
`agent/config/runtime.go`.
- [ ] Add an appropriate parser/setter in `agent/config/builder.go` to
translate (there is a sketch of these three changes after this checklist).
- [ ] Add the new field with a random value to both the JSON and HCL files in
`agent/config/testdata/full-config.*`, which should cause the test to fail.
Then update the expected value in `TestLoad_FullConfig` in
`agent/config/runtime_test.go` to make the test pass again.
- [ ] Run `go test -run TestRuntimeConfig_Sanitize ./agent/config -update` to update
the expected value for `TestRuntimeConfig_Sanitize`. Look at `git diff` to
make sure the value changed as you expect.
- [ ] **If** your new config field needed some validation as it's only valid in
some cases or with some values (often true).
- [ ] Add validation to Validate in `agent/config/builder.go`.
- [ ] Add a test case to the table test `TestLoad_IntegrationWithFlags` in
`agent/config/runtime_test.go`.
- [ ] **If** your new config field needs a non-zero-value default.
- [ ] Add that to `DefaultSource` in `agent/config/defaults.go`.
- [ ] Add a test case to the table test `TestLoad_IntegrationWithFlags` in
`agent/config/runtime_test.go`.
- [ ] If the config needs to be defaulted for the test server used in unit tests,
also add it to `DefaultConfig()` in `agent/consul/config.go`.
- [ ] **If** your config should take effect on a reload/HUP.
- [ ] Add necessary code to trigger a safe (locked or atomic) update to
any state the feature needs to change. This needs to be added to one or
more of the following places:
- `ReloadConfig` in `agent/agent.go` if it needs to affect the local
client state or another client agent component.
- `ReloadConfig` in `agent/consul/client.go` if it needs to affect
state for client agent's RPC client.
- [ ] Add a test to `agent/agent_test.go` similar to others with prefix
`TestAgent_reloadConfig*`.
- [ ] Add documentation to `website/content/docs/agent/config/config-files.mdx`.
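The first three items above form a small pipeline from raw config to runtime value. Here is a hedged sketch of that shape for a hypothetical `telemetry_interval` field — the struct and file names come from the checklist, but the field, tag, default, and `translate` helper are illustrative assumptions, not Consul's actual code:

```go
package config

import "time"

// Config is the user-facing struct (agent/config/config.go in the real tree):
// pointer fields distinguish "unset" from the zero value. Hypothetical field.
type Config struct {
	TelemetryInterval *string `mapstructure:"telemetry_interval"`
}

// RuntimeConfig holds the final, fully-parsed value the agent consumes
// (agent/config/runtime.go in the real tree).
type RuntimeConfig struct {
	TelemetryInterval time.Duration
}

// translate mirrors the builder step: parse the raw string, fall back to a
// default, and surface errors the way builder.go's helpers do.
func translate(c Config) (RuntimeConfig, error) {
	interval := 10 * time.Second // hypothetical default (see DefaultSource)
	if c.TelemetryInterval != nil {
		d, err := time.ParseDuration(*c.TelemetryInterval)
		if err != nil {
			return RuntimeConfig{}, err
		}
		interval = d
	}
	return RuntimeConfig{TelemetryInterval: interval}, nil
}
```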
Done! You can now use your new field in a client agent by accessing
`s.agent.Config.<FieldName>`.
If you need a CLI flag, access to the variable in a Server context, or touched
the Service Definition, make sure you continue on to follow the appropriate
checklists below.
## Adding a CLI Flag Corresponding to the new Field
If the config field also needs a CLI flag, then follow these steps.
- [ ] Do all of the steps in [Adding a Simple Config
Field For Client Agents](#adding-a-simple-config-field-for-client-agents).
- [ ] Add the new flag to `agent/config/flags.go` (a sketch follows this list).
- [ ] Add a test case to `TestParseFlags` in `agent/config/flag_test.go`.
- [ ] Add a test case (or extend one if appropriate) to the table test
`TestLoad_IntegrationWithFlags` in `agent/config/runtime_test.go` to ensure setting the
flag works.
- [ ] Add flag (as well as config file) documentation to
`website/source/docs/agent/config/config-files.mdx` and `website/source/docs/agent/config/cli-flags.mdx`.
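Wiring the flag is typically a one-liner that binds the CLI value to the same pointer field the file parser fills, so both sources flow through the identical builder and default logic. Below is a minimal sketch using only the standard library; the `FlagValues` type and field are hypothetical, and the real `flags.go` has its own small helpers, so treat the names here as assumptions:

```go
package config

import "flag"

// FlagValues is a stand-in for the struct that collects CLI overrides
// (hypothetical shape).
type FlagValues struct {
	TelemetryInterval *string
}

// AddFlags registers the new flag so it writes through to the same pointer
// field that file-based config populates.
func AddFlags(fs *flag.FlagSet, f *FlagValues) {
	fs.Func("telemetry-interval", "Interval between telemetry flushes.",
		func(s string) error {
			f.TelemetryInterval = &s // stays nil when the flag is absent
			return nil
		})
}
```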
## Adding a Simple Config Field for Servers
Consul servers have a separate Config struct for historical reasons. Note that Consul
server agents are actually also client agents, so in some cases config that is
only destined for servers doesn't need to follow this checklist provided it's
only needed during the bootstrapping of the server (which is done in code shared
by both server and client components in `agent.go`). For example WAN Gossip
configs are only valid on server agents, but since WAN Gossip is set up in
`agent.go` they don't need to follow this checklist. The simplest (and mostly
accurate) rule is:
> If you need to access the config field from code in `agent/consul` (e.g. RPC
> endpoints), then you need to follow this. If it's only in `agent` (e.g. HTTP
> endpoints or agent startup) you don't.
A final word of warning - **you should never need to pass config into the FSM
(`agent/consul/fsm`) or state store (`agent/consul/state`)**. Doing so is **_very
dangerous_** and can violate consistency guarantees and corrupt databases. If
you think you need this then please discuss the design with the Consul team
before writing code!
Consul's server components, for historical reasons, don't use the `RuntimeConfig`
struct; they have their own struct called `Config` in `agent/consul/config.go`.
- [ ] Do all of the steps in [Adding a Simple Config
Field For Client Agents](#adding-a-simple-config-field-for-client-agents).
- [ ] Add the new field to the `Config` struct in `agent/consul/config.go`
- [ ] Add code to set the values from the `RuntimeConfig` in the `newConsulConfig` method in `agent/agent.go` (sketched after this list)
- [ ] **If needed**, add a test to `agent_test.go` if there is some non-trivial
behavior in the code you added in the previous step. We tend not to test
simple assignments from one to the other since these are typically caught by
higher-level tests of the actual functionality that matters but some examples
can be found prefixed with `TestAgent_consulConfig*`.
- [ ] **If** your config should take effect on a reload/HUP
- [ ] Add necessary code to `ReloadConfig` in `agent/consul/server.go`; this
needs to be adequately synchronized with any readers of the state being
updated.
- [ ] Add a new test or a new assertion to `TestServer_ReloadConfig`.
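The translation step is usually a plain assignment from `RuntimeConfig` onto the server's `Config`. A hedged, abbreviated sketch of what the `newConsulConfig` change looks like — the field is hypothetical and the surrounding method is elided:

```go
// agent/agent.go (abbreviated sketch): copy the runtime value onto the
// server's own Config struct so code under agent/consul can read it.
func (a *Agent) newConsulConfig() (*consul.Config, error) {
	cfg := consul.DefaultConfig()
	// ... many existing assignments ...
	cfg.TelemetryInterval = a.config.TelemetryInterval // hypothetical field
	return cfg, nil
}
```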
You can now access that field from `s.srv.config.<FieldName>` inside an RPC
handler.
## Adding a New Field to Service Definition
The [Service Definition](https://www.consul.io/docs/agent/services.html) syntax
appears both in Consul config files and in the `/v1/agent/service/register`
API.
For wonderful historical reasons, our config files have always used `snake_case`
attribute names in both JSON and HCL (even before we supported HCL!!) while our
API uses `CamelCase`.
Because we want documentation examples to work in both config files and API
bodies to avoid needless confusion, we have to accept both snake case and camel
case field names for the service definition.
Finally, adding a field to the service definition implies adding the field to
several internal structs and to all API outputs that display services from the
catalog. That explains the multiple layers needed below.
This list assumes a new field in the base service definition struct. Adding new
fields to health checks is similar but mostly needs `HealthCheck` structs and
methods updating instead. Adding fields to embedded structs like `ProxyConfig`
is largely the same pattern but may need different test methods etc. updating.
- [ ] Do all of the steps in [Adding a Simple Config
Field For Client Agents](#adding-a-simple-config-field-for-client-agents).
- [ ] `agent/structs` package
- [ ] Add the field to `ServiceDefinition` (`service_definition.go`)
- [ ] Add the field to `NodeService` (`structs.go`)
- [ ] Add the field to `ServiceNode` (`structs.go`)
- [ ] Update `ServiceDefinition.ToNodeService` to translate the field
- [ ] Update `NodeService.ToServiceNode` to translate the field
- [ ] Update `ServiceNode.ToNodeService` to translate the field
- [ ] Update `TestStructs_ServiceNode_Conversions`
- [ ] Update `ServiceNode.PartialClone`
- [ ] Update `TestStructs_ServiceNode_PartialClone` (`structs_test.go`)
- [ ] If needed, update `NodeService.Validate` to ensure the field value is
reasonable
- [ ] Add test like `TestStructs_NodeService_Validate*` in
`structs_test.go`
- [ ] Add comparison in `NodeService.IsSame`
- [ ] Update `TestStructs_NodeService_IsSame`
- [ ] Add comparison in `ServiceNode.IsSameService`
- [ ] Update `TestStructs_ServiceNode_IsSameService`
- [ ] **If** your field name has MultipleWords,
- [ ] Add it to the `aux` inline struct in
`ServiceDefinition.UnmarshalJSON` (`service_definition.go`); a sketch
of this pattern follows the checklist.
- Note: if the field is embedded higher up in a nested struct,
follow the chain and update the necessary struct's `UnmarshalJSON`
method - you may need to add one if there are no other case
transformations being done; copy an existing example.
- Note: the tests that exercise this are in agent endpoint for
historical reasons (this is where the translation used to happen).
- [ ] `agent` package
- [ ] Update `testAgent_RegisterService` and/or add a new test to ensure
your fields register correctly via API (`agent_endpoint_test.go`)
- [ ] **If** your field name has MultipleWords,
- [ ] Update `testAgent_RegisterService_TranslateKeys` to include
examples with it set in `snake_case` and ensure it is parsed
correctly. Run this via `TestAgent_RegisterService_TranslateKeys`
(agent_endpoint_test.go).
- [ ] `api` package
- [ ] Add the field to `AgentService` (`agent.go`)
- [ ] Add/update an appropriate test in `agent_test.go`
- (Note you need to use `make test` or ensure the `consul` binary on
your `$PATH` is a build with your new field - usually `make dev`
ensures this unless your path is funky or you have a consul binary
even further up the shell's `$PATH`).
- [ ] Docs
- [ ] Update docs in `website/source/docs/agent/services.html.md`
- [ ] Consider if it's worth adding examples to feature docs or API docs
that show the new field's usage.
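The snake_case/CamelCase dance from the checklist is handled by an inline alias struct in `UnmarshalJSON` that accepts the snake form and copies it over when the camel form is unset. A minimal sketch with a hypothetical `UpstreamTimeout` field; the real method handles many fields this way:

```go
package structs

import "encoding/json"

// ServiceDefinition sketch with one hypothetical field.
type ServiceDefinition struct {
	UpstreamTimeout int
}

func (s *ServiceDefinition) UnmarshalJSON(data []byte) error {
	// Alias avoids infinite recursion back into this UnmarshalJSON.
	type Alias ServiceDefinition
	aux := &struct {
		UpstreamTimeoutSnake int `json:"upstream_timeout"` // snake_case alias
		*Alias
	}{Alias: (*Alias)(s)}
	if err := json.Unmarshal(data, aux); err != nil {
		return err
	}
	// Prefer the CamelCase form; fall back to the snake_case spelling.
	if s.UpstreamTimeout == 0 && aux.UpstreamTimeoutSnake != 0 {
		s.UpstreamTimeout = aux.UpstreamTimeoutSnake
	}
	return nil
}
```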
Note that although the new field will show up in the API output of
`/agent/services`, `/catalog/services`, and `/health/services`, those tests
right now don't exercise anything that's super useful unless custom logic is
required since they don't even encode the response object as JSON and just
assert on the structs you already modified. If custom presentation logic is
needed, tests for these endpoints might be warranted too. It's usual to use
`omitempty` for new fields that will typically not be used by existing
registrations, although we don't currently test for that systematically.
# Cluster Persistence
> **Note**
> While the content of this document is still accurate, it doesn't cover the new
> generic resource-oriented storage layer introduced in Consul 1.16. Please see
> [Resources](../v2-architecture/controller-architecture) for more information.
The cluster persistence subsystem runs entirely in Server Agents. It handles both read and
write requests from the [RPC] subsystem. See the [Consul Architecture Guide] for an
introduction to the Consul deployment architecture and the [Consensus Protocol] used by
the cluster persistence subsystem.
[RPC]: ../rpc
[Consul Architecture Guide]: https://www.consul.io/docs/architecture
[Consensus Protocol]: https://www.consul.io/docs/architecture/consensus

<sup>[source](./overview.mmd)</sup>
## Raft and FSM
[hashicorp/raft] is at the core of cluster persistence. Raft requires an [FSM], a
finite-state machine implementation, to persist state changes. The Consul FSM is
implemented in [agent/consul/fsm] as a set of commands.
[FSM]: https://pkg.go.dev/github.com/hashicorp/raft#FSM
[hashicorp/raft]: https://github.com/hashicorp/raft
[agent/consul/fsm]: https://github.com/hashicorp/consul/tree/main/agent/consul/fsm
Raft also requires a [LogStore] to persist logs to disk. Consul uses [hashicorp/raft-boltdb]
which implements [LogStore] using [boltdb]. In the near future we should be updating to
use [bbolt].
[LogStore]: https://pkg.go.dev/github.com/hashicorp/raft#LogStore
[hashicorp/raft-boltdb]: https://github.com/hashicorp/raft-boltdb
[boltdb]: https://github.com/boltdb/bolt
[bbolt]: https://github.com/etcd-io/bbolt
See [diagrams](#diagrams) below for more details on the interaction.
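To make the wiring concrete, here is a minimal, hedged sketch of how an FSM and the bolt-backed log store plug into `raft.NewRaft`. The store paths and the no-op FSM are illustrative; Consul's real FSM lives in `agent/consul/fsm` and does considerably more.

```go
package main

import (
	"io"
	"path/filepath"

	"github.com/hashicorp/raft"
	raftboltdb "github.com/hashicorp/raft-boltdb"
)

// minimalFSM satisfies raft.FSM: raft calls Apply for every committed log
// entry, and Snapshot/Restore for log compaction and follower bootstrap.
type minimalFSM struct{}

func (f *minimalFSM) Apply(l *raft.Log) interface{}       { return nil } // decode l.Data, mutate state
func (f *minimalFSM) Snapshot() (raft.FSMSnapshot, error) { return nil, nil }
func (f *minimalFSM) Restore(rc io.ReadCloser) error      { return rc.Close() }

// newRaftNode wires the FSM, the boltdb-backed log/stable store, and a file
// snapshot store into a raft instance.
func newRaftNode(dir string, trans raft.Transport) (*raft.Raft, error) {
	store, err := raftboltdb.NewBoltStore(filepath.Join(dir, "raft.db"))
	if err != nil {
		return nil, err
	}
	snaps, err := raft.NewFileSnapshotStore(dir, 2, nil)
	if err != nil {
		return nil, err
	}
	cfg := raft.DefaultConfig()
	cfg.LocalID = raft.ServerID("node-1") // illustrative ID
	return raft.NewRaft(cfg, &minimalFSM{}, store, store, snaps, trans)
}
```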
## State Store
Consul stores the full state of the cluster in memory using the state store. The state store is
implemented in [agent/consul/state] and uses [hashicorp/go-memdb] to maintain indexes of
data stored in a set of tables. The main entrypoint to the state store is [NewStateStore].
[agent/consul/state]: https://github.com/hashicorp/consul/tree/main/agent/consul/state
[hashicorp/go-memdb]: https://github.com/hashicorp/go-memdb
[NewStateStore]: https://github.com/hashicorp/consul/blob/main/agent/consul/state/state_store.go
### Tables, Schemas, and Indexes
The state store is organized as a set of tables, and each table has a set of indexes.
`newDBSchema` in [schema.go] shows the full list of tables, and each schema function shows
the full list of indexes.
[schema.go]: https://github.com/hashicorp/consul/blob/main/agent/consul/state/schema.go
There are two styles for defining table indexes. The original style uses generic indexer
implementations from [hashicorp/go-memdb] (ex: `StringFieldIndex`). These indexes use
[reflect] to find values for an index. These generic indexers work well when the index
value is a single value available directly from the struct field, and there are no
CE/Enterprise differences.
The second style is custom indexers implemented using only functions, based on the
types defined in [indexer.go]. This style of index works well when the index
value is derived from one or multiple fields, or when there are CE/Enterprise
differences between the indexes.
[reflect]: https://golang.org/pkg/reflect/
[indexer.go]: https://github.com/hashicorp/consul/blob/main/agent/consul/state/indexer.go
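For reference, here is a hedged sketch of the generic style: a go-memdb schema for one table whose `id` index uses the reflection-based `StringFieldIndex`. In the second style, the `Indexer` value would instead be one of the function-based types from `indexer.go`. The table and field names are illustrative.

```go
package state

import "github.com/hashicorp/go-memdb"

type service struct {
	ID   string
	Name string
}

// schema defines one table indexed by the ID field via reflection
// (the "original style" described above).
var schema = &memdb.DBSchema{
	Tables: map[string]*memdb.TableSchema{
		"services": {
			Name: "services",
			Indexes: map[string]*memdb.IndexSchema{
				"id": {
					Name:    "id",
					Unique:  true,
					Indexer: &memdb.StringFieldIndex{Field: "ID"},
				},
			},
		},
	},
}

func newDB() (*memdb.MemDB, error) { return memdb.NewMemDB(schema) }
```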
## Snapshot and Restore
Snapshots are the primary mechanism used to backup the data stored by cluster persistence.
If all Consul servers fail, a snapshot can be used to restore the cluster back
to its previous state.
Note that there are two different snapshot and restore concepts that exist at different
layers. First there is the `Snapshot` and `Restore` methods on the raft [FSM] interface,
that Consul must implement. These methods are implemented mostly as passthroughs to the
state store. These methods may be called internally by raft to perform log compaction
(snapshot) or to bootstrap a new follower (restore). Consul implements snapshot and
restore using the `Snapshot` and `Restore` types in [agent/consul/state].
Snapshot and restore also exist as actions that a user may perform. There are [CLI]
commands, [HTTP API] endpoints, and [RPC] endpoints that allow a user to capture an
archive which contains a snapshot of the state, and restore that state to a running
cluster. The [consul/snapshot] package provides some of the logic for creating and reading
the snapshot archives for users. See [commands/snapshot] for a reference to these user
facing operations.
[CLI]: ../cli
[HTTP API]: ../http-api
[commands/snapshot]: https://www.consul.io/commands/snapshot
[consul/snapshot]: https://github.com/hashicorp/consul/tree/main/snapshot
Finally, there is also a [snapshot agent] (enterprise only) that uses the snapshot API
endpoints to periodically capture a snapshot, and optionally send it somewhere for
storage.
[snapshot agent]: https://www.consul.io/commands/snapshot/agent
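The user-facing endpoints are also exposed through the Go `api` package. A small hedged sketch of saving a snapshot archive with the HTTP API client (the file path and function are illustrative):

```go
package main

import (
	"io"
	"os"

	"github.com/hashicorp/consul/api"
)

// saveSnapshot streams a snapshot archive from the cluster to a local file,
// the programmatic equivalent of `consul snapshot save`.
func saveSnapshot(client *api.Client, path string) error {
	snap, _, err := client.Snapshot().Save(nil)
	if err != nil {
		return err
	}
	defer snap.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, snap)
	return err
}
```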
## Raft Autopilot
[hashicorp/raft-autopilot] is used by Consul to automate some parts of the upgrade process.
[hashicorp/raft-autopilot]: https://github.com/hashicorp/raft-autopilot
## Diagrams
### High-level life of a write

### Deep-dive into write through Raft
 to fetch data when it changes, streaming
sends events as they occur, and the client maintains a materialized view of the events.
At the time of writing only the service health endpoint uses streaming, but more endpoints
will be added in the future.
See [adding a topic](./adding-a-topic.md) for a guide on adding new topics to streaming.
## Overview
The diagram below shows the components that are used in streaming, and how they fit into
the rest of Consul.

<sup>[source](./overview.mmd)</sup>
Read requests are received either from the HTTP API or from a DNS request. They use
[rpcclient/health.Health]
to query the cache. The [StreamingHealthServices cache-type] uses a [materialized view]
to manage subscriptions and store the aggregated events. On the server, the
[SubscribeEndpoint] subscribes and receives events from [EventPublisher].
Writes will likely enter the system through the client as well, but to make the diagram
less complicated the write flow starts when it is received by the RPC endpoint. The
endpoint calls raft.Apply, which if successful will save the new data in the state.Store.
When the [state.Store commits] it produces an event which is managed by the [EventPublisher]
and sent to any active subscriptions.
[rpcclient/health.Health]: https://github.com/hashicorp/consul/blob/main/agent/rpcclient/health/health.go
[StreamingHealthServices cache-type]: https://github.com/hashicorp/consul/blob/main/agent/cache-types/streaming_health_services.go
[materialized view]: https://github.com/hashicorp/consul/blob/main/agent/submatview/materializer.go
[SubscribeEndpoint]: https://github.com/hashicorp/consul/blob/main/agent/grpc-internal/services/subscribe/subscribe.go
[EventPublisher]: https://github.com/hashicorp/consul/blob/main/agent/consul/stream/event_publisher.go
[state.Store commits]: https://github.com/hashicorp/consul/blob/main/agent/consul/state/memdb.go
## Event Publisher
The [EventPublisher] is at the core of streaming. It receives published events, and
subscription requests, and forwards events to the appropriate subscriptions. The diagram
below illustrates how events are stored by the [EventPublisher].

<sup>[source](./event-publisher-layout.mmd)</sup>
When a new subscription is created it will create a snapshot of the events required to
reflect the current state. This snapshot is cached by the [EventPublisher] so that other
subscriptions can re-use the snapshot without having to recreate it.
The snapshot always points at the first item in the linked list of events. A subscription
will initially point at the first item, but the pointer advances each time
`Subscribe.Next` is called. The topic buffers in the EventPublisher always point at the
latest item in the linked list, so that new events can be appended to the buffer.
When a snapshot cache TTL expires, the snapshot is removed. If there are no other
subscriptions holding a reference to those items, the items will be garbage collected by
the Go runtime. This setup allows EventPublisher to keep some events around for a short
period of time, without any hard coded limit on the number of events to cache.
## Subscription events
A subscription provides a stream of events on a single topic. Most of the events contain
data for a change in state, but there are a few special "framing" events that are used to
communicate something to the client. The diagram below helps illustrate the logic in
`EventPublisher.Subscribe` and the [materialized view].

<sup>[source](./framing-events.mmd)</sup>
Events in the `Snapshot` contain the same data as those in the `EventStream`; the only
difference is that events in the `Snapshot` indicate the current state not a change in
state.
`NewSnapshotToFollow` is a framing event that indicates to the client that their existing
view is out of date. They must reset their view and prepare to receive a new snapshot.
`EndOfSnapshot` indicates to the client that the snapshot is complete. Any future events
will be changes in state.
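A subscriber therefore handles the framing events before applying changes. Below is a hedged sketch of the client-side loop; it assumes the generated `pbsubscribe` stream client and getters, and the `view` interface is a hypothetical stand-in for a materialized view:

```go
package main

import "github.com/hashicorp/consul/proto/pbsubscribe"

// view is a hypothetical stand-in for a materialized view of the events.
type view interface {
	Reset()                     // discard stale state before a new snapshot
	MarkFresh()                 // snapshot complete; deltas follow
	Apply(e *pbsubscribe.Event) // apply a normal change-of-state event
}

// consume drains a subscription, dispatching on the framing events.
func consume(stream pbsubscribe.StateChangeSubscription_SubscribeClient, v view) error {
	for {
		event, err := stream.Recv()
		if err != nil {
			return err
		}
		switch {
		case event.GetNewSnapshotToFollow():
			v.Reset()
		case event.GetEndOfSnapshot():
			v.MarkFresh()
		default:
			v.Apply(event)
		}
	}
}
```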
## Event filtering
As events pass through the system from the `state.Store` to the client they are grouped
and filtered along the way. The diagram below helps illustrate where each of the grouping
and filtering happens.

<sup>[source](./event-filtering.mmd)</sup>
# Certificate Authority (Connect CA)
The Certificate Authority Subsystem manages a CA trust chain for issuing certificates to
services and client agents (via auto-encrypt and auto-config).
The code for the Certificate Authority is in the following packages:
1. most of the core logic is in [agent/consul/leader_connect_ca.go]
2. the providers are in [agent/connect/ca]
3. the RPC interface is in [agent/consul/connect_ca_endpoint.go]
[agent/consul/leader_connect_ca.go]: https://github.com/hashicorp/consul/blob/main/agent/consul/leader_connect_ca.go
[agent/connect/ca]: https://github.com/hashicorp/consul/blob/main/agent/connect/ca/
[agent/consul/connect_ca_endpoint.go]: https://github.com/hashicorp/consul/blob/main/agent/consul/connect_ca_endpoint.go
## Architecture
### High level overview
In Consul the leader is responsible for handling the CA management.
When a leader election happens and the elected leader does not have a root CA available, it starts the process of creating a set of CA certificates.
Those certificates are used to authenticate/encrypt communication between services (service mesh) or between Consul client agents and servers (auto-encrypt/auto-config). This process is described in the following diagram:

<sup>[source](./hl-ca-overview.mmd)</sup>
The features that benefit from Consul CA management are:
- [service Mesh/Connect](https://www.consul.io/docs/connect)
- [auto encrypt](https://www.consul.io/docs/agent/options#auto_encrypt)
### CA and Certificate relationship
This diagram shows the relationship between the CA certificates in Consul primary and
secondary.

<sup>[source](./cert-relationship.mmd)</sup>
In most cases there is an external root CA that provides an intermediate CA that Consul
uses as the Primary Root CA. The only exception to this is when the Consul CA Provider is
used without specifying a `RootCert`. In this one case Consul will generate the Root CA
from the provided private key, and it will be used in the primary as the top of the chain
of trust.
In the primary datacenter, the Consul and AWS providers use the Primary Root CA to sign
leaf certificates. The Vault provider uses an intermediate CA to sign leaf certificates.
Leaf certificates are created for two purposes:
1. the Leaf Cert Service is used by envoy proxies in the mesh to perform mTLS with other
services.
2. the Leaf Cert Client Agent is created by auto-encrypt and auto-config. It is used by
client agents for HTTP API TLS, and for mTLS for RPC requests to servers.
Any secondary datacenters receive an intermediate certificate, signed by the Primary Root
CA, which is used as the CA certificate to sign leaf certificates in the secondary
datacenter.
## Operations
When trying to learn the CA subsystem it can be helpful to understand the operations that
it can perform. The sections below are the complete set of read, write, and periodic
operations that provide the full behaviour of the CA subsystem.
### Periodic Operations
Periodic (or background) operations are started automatically by the Consul leader. They run at some interval (often 1 hour).
- `CAManager.InitializeCA` - attempts to initialize the CA when a leader is elected. If the synchronous InitializeCA fails, `CAManager.backgroundCAInitialization` runs `InitializeCA` periodically in a goroutine until it succeeds.
- `CAManager.RenewIntermediate` - (called by `CAManager.intermediateCertRenewalWatch`) runs in the primary if the provider uses a separate signing cert (the Vault provider). The operation always runs in the secondary. Renews the signing cert once half its lifetime has passed.
- `CAManager.secondaryCARootWatch` - runs in secondary only. Performs a blocking query to the primary to retrieve any updates to the CA roots and stores them locally.
- `Server.runCARootPruning` - removes non-active and expired roots from state.CARoots
### Read Operations
- `RPC.ConnectCA.ConfigurationGet` - returns the CA provider configuration. Only called by user, not by any internal subsystems.
- `RPC.ConnectCA.Roots` - returns all the roots, the trust domain ID, and the ID of the active root. Each "root" also includes the signing key/cert, and any intermediate certs in the chain. It is used (via the cache) by all the connect proxy types.
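Both read operations are also reachable through the Go `api` client. A hedged sketch (the field names follow the `api` package's `CARootList`, which is an assumption worth verifying):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

// printCARoots exercises the ConnectCA read path: the trust domain, the
// active root ID, and the full root list.
func printCARoots(client *api.Client) error {
	roots, _, err := client.Connect().CARoots(nil)
	if err != nil {
		return err
	}
	fmt.Println("trust domain:", roots.TrustDomain)
	fmt.Println("active root:", roots.ActiveRootID)
	for _, root := range roots.Roots {
		fmt.Printf(" - %s (%s) active=%v\n", root.ID, root.Name, root.Active)
	}
	return nil
}
```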
### Write Operations
- `CAManager.UpdateConfiguration` - (via `RPC.ConnectCA.ConfigurationSet`) called by a user when they want to change the provider or provider configuration (ex: rotate root CA).
- `CAManager.Provider.SignIntermediate` - (via `RPC.ConnectCA.SignIntermediate`) called from the secondary DC:
1. by `CAManager.RenewIntermediate` to sign the new intermediate when the old intermediate is about to expire
2. by `CAManager.initializeSecondary` when setting up a new secondary, when the provider is changed in the secondary
by a user action, or when the primary roots changed and the secondary needs to generate a new intermediate for the new
primary roots.
- `CAManager.SignCertificate` - is used by:
1. (via `RPC.ConnectCA.Sign`) - called by client agents to sign a leaf cert for a connect proxy (via `agent/cache-types/connect_ca_leaf.go`)
2. (via in-process call to `RPC.ConnectCA.Sign`) - called by auto-encrypt to sign a leaf cert for a client agent
3. called by Auto-Config to sign a leaf cert for a client agent
## Detailed call flow

<sup>[source](./ca-leader-sequence.mmd)</sup>
#### TODO:
- sequence diagram for leaf signing
- sequence diagram for CA cert rotation
## CAManager states
This section is a work in progress
TODO: style the diagram to match the others, and add some narrative text to describe the
diagram.

| consul | Certificate Authority Connect CA The Certificate Authority Subsystem manages a CA trust chain for issuing certificates to services and client agents via auto encrypt and auto config The code for the Certificate Authority is in the following packages 1 most of the core logic is in agent consul leader connect ca go 2 the providers are in agent connect ca 3 the RPC interface is in agent consul connect ca endpoint go agent consul leader connect ca go https github com hashicorp consul blob main agent consul leader connect ca go agent connect ca https github com hashicorp consul blob main agent connect ca agent consul connect ca endpoint go https github com hashicorp consul blob main agent consul connect ca endpoint go Architecture High level overview In Consul the leader is responsible for handling the CA management When a leader election happen and the elected leader do not have any root CA available it will start a process of creating a set of CA certificate Those certificates will be used to authenticate encrypt communication between services service mesh or between Consul client agent auto encrypt auto config This process is described in the following diagram CA creation hl ca overview svg sup source hl ca overview mmd sup The features that benefit from Consul CA management are service Mesh Connect https www consul io docs connect auto encrypt https www consul io docs agent options auto encrypt CA and Certificate relationship This diagram shows the relationship between the CA certificates in Consul primary and secondary CA relationship cert relationship svg sup source cert relationship mmd sup In most cases there is an external root CA that provides an intermediate CA that Consul uses as the Primary Root CA The only except to this is when the Consul CA Provider is used without specifying a RootCert In this one case Consul will generate the Root CA from the provided primary key and it will be used in the primary as the top of the chain of trust In the primary datacenter the Consul and AWS providers use the Primary Root CA to sign leaf certificates The Vault provider uses an intermediate CA to sign leaf certificates Leaf certificates are created for two purposes 1 the Leaf Cert Service is used by envoy proxies in the mesh to perform mTLS with other services 2 the Leaf Cert Client Agent is created by auto encrypt and auto config It is used by client agents for HTTP API TLS and for mTLS for RPC requests to servers Any secondary datacenters receive an intermediate certificate signed by the Primary Root CA which is used as the CA certificate to sign leaf certificates in the secondary datacenter Operations When trying to learn the CA subsystem it can be helpful to understand the operations that it can perform The sections below are the complete set of read write and periodic operations that provide the full behaviour of the CA subsystem Periodic Operations Periodic or background opeartions are started automatically by the Consul leader They run at some interval often 1 hour CAManager InitializeCA attempts to initialize the CA when a leader is ellected If the synchronous InitializeCA fails CAManager backgroundCAInitialization runs InitializeCA periodically in a goroutine until it succeeds CAManager RenewIntermediate called by CAManager intermediateCertRenewalWatch runs in the primary if the provider uses a separate signing cert the Vault provider The operation always runs in the secondary Renews the signing cert once half its lifetime has passed CAManager secondaryCARootWatch runs in 
# Controller Testing
For every controller we want to enable 3 types of testing.
1. Unit Tests - These should live alongside the controller and utilize mocks and the controller.TestController. Where possible split out controller functionality so that other functions can be independently tested.
2. Lightweight integration tests - These should live in an internal/<api group>/<api group>test package. These tests utilize the in-memory resource service and the standard controller manager. There are two types of tests that should be created.
* Lifecycle Integration Tests - These go step by step to modify resources and check what the controller did. They are meant to go through the lifecycle of resources and how they are reconciled. Verifications are typically intermingled with resource updates.
* One-Shot Integration Tests - These tests publish a bunch of resources and then perform all the verifications. These mainly are focused on the controller eventually converging given all the resources thrown at it and aren't as concerned with any intermediate states resources go through.
3. Container based integration tests - These tests live along with our other container based integration tests. They utilize a full multi-node cluster (and sometimes client agents). There are 3 types of tests that can be created here:
* Lifecycle Integration Tests - These are the same as for the lightweight integration tests.
* One-shot Integration Tests - These are the same as for the lightweight integration tests.
* Upgrade Tests - These are a special form of One-shot Integration tests where the cluster is brought up with some original version, data is pushed in, an upgrade is done and then we verify the consistency of the data post-upgrade.
Between the lightweight and container based integration tests there is a lot of duplication in what is being tested. For this reason, these integration test bodies should be defined as exported functions within the `<api group>test` package. The container based tests can then import that package and invoke the same functionality with minimal overhead.
For one-shot integration tests, functions to do the resource publishing should be split from functions to perform the verifications. This allows upgrade tests to publish the resources once pre-upgrade and then validate their correctness post-upgrade without rewriting them.
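For example, an upgrade test might publish once pre-upgrade and verify post-upgrade. In this sketch, `launchClusterAtVersion` and `upgradeCluster` are hypothetical helpers standing in for the real container-test tooling:

```go
func TestFooV2_Upgrade(t *testing.T) {
	// Bring up a multi-node cluster at the original version (hypothetical helper).
	cluster := launchClusterAtVersion(t, "1.16")
	client := cluster.ResourceServiceClient()

	// Pre-upgrade: publish the test data exactly once.
	footest.PublishFooV2IntegrationTestData(t, client)

	// Upgrade the cluster in place (hypothetical helper).
	upgradeCluster(t, cluster, "1.17")

	// Post-upgrade: the same verifications must still hold, with no republish.
	footest.VerifyFooV2IntegrationTestResults(t, client)
}
```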
Sometimes it may also be a good idea to export functions in the test packages for running a specific controller's integration tests. This is a good idea when the controller will use a different version of a dependency in Consul Enterprise, allowing the enterprise implementation's package to invoke the integration tests after setting up the controller with its injected dependency.
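As a rough sketch (all names below are assumed), the enterprise package would register the controller with its injected dependency and then invoke the exported test body:

```go
func TestBarControllerIntegration_Ent(t *testing.T) {
	client := controllertest.NewControllerTestBuilder().
		WithResourceRegisterFns(foo.RegisterTypes).
		WithControllerRegisterFns(func(mgr *controller.Manager) {
			// barControllerWithDeps and entDependency are hypothetical names
			// for the enterprise-specific wiring.
			mgr.Register(barControllerWithDeps(entDependency{}))
		}).
		Run(t)

	footest.RunBarControllerIntegrationTest(t, client)
}
```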
## Unit Test Template
These tests live alongside controller source.
```go
package foo
import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
	"github.com/stretchr/testify/suite"

	svctest "github.com/hashicorp/consul/agent/grpc-external/services/resource/testing"
	"github.com/hashicorp/consul/internal/controller"
	rtest "github.com/hashicorp/consul/internal/resource/resourcetest"
	"github.com/hashicorp/consul/proto-public/pbresource"
	"github.com/hashicorp/consul/sdk/testutil"
	// ...plus the package that provides your API group's type registration
	// (the types.Register function used below).
)
func TestReconcile(t *testing.T) {
rtest.RunWithTenancies(func(tenancy *pbresource.Tenancy) {
suite.Run(t, &reconcileSuite{tenancy: tenancy})
})
}
type reconcileSuite struct {
suite.Suite
tenancy *pbresource.Tenancy
ctx context.Context
ctl *controller.TestController
client *rtest.Client
// Mock objects needed for testing
}
func (suite *reconcileSuite) SetupTest() {
suite.ctx = testutil.TestContext(suite.T())
// Alternatively it is sometimes useful to use a mock resource service. For that
// you can use github.com/hashicorp/consul/grpcmocks.NewResourceServiceClient
// to create the client.
client := svctest.NewResourceServiceBuilder().
	// register this API group's types. Also register any other
// types this controller depends on.
WithRegisterFns(types.Register).
WithTenancies(suite.tenancy).
Run(suite.T())
// Build any mock objects or other dependencies of the controller here.
// Build the TestController
suite.ctl = controller.NewTestController(Controller(), client)
suite.client = rtest.NewClient(suite.ctl.Runtime().Client)
}
// Implement tests on the suite as needed.
func (suite *reconcileSuite) TestSomething() {
// Setup Mock expectations
// Push resources into the resource service as needed.
// Issue the Reconcile call
suite.ctl.Reconcile(suite.ctx, controller.Request{})
}
```
## Integration Testing Templates
These tests should live in internal/<api group>/<api group>test. For these examples, assume the API group under test is named `foo` and the latest API group version is v2.
### `run_test.go`
This file is how `go test` knows to execute the tests. These integration tests should
be executed against an in-memory resource service with the standard controller manager.
```go
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package footest
import (
"testing"
"github.com/hashicorp/consul/internal/foo"
"github.com/hashicorp/consul/internal/controller/controllertest"
"github.com/hashicorp/consul/internal/resource/reaper"
rtest "github.com/hashicorp/consul/internal/resource/resourcetest"
"github.com/hashicorp/consul/proto-public/pbresource"
)
var (
// This makes the CLI options available to control timing delays of requests. The
	// randomized timings help to build confidence that, regardless of resource writes
// occurring in quick succession, the controller under test will eventually converge
// on its steady state.
clientOpts = rtest.ConfigureTestCLIFlags()
)
func runInMemResourceServiceAndControllers(t *testing.T) pbresource.ResourceServiceClient {
t.Helper()
return controllertest.NewControllerTestBuilder().
// Register your types for the API group and any others that these tests will depend on
		WithResourceRegisterFns(foo.RegisterTypes).
WithControllerRegisterFns(
reaper.RegisterControllers,
foo.RegisterControllers,
).Run(t)
}
// The basic integration test should operate mostly in a one-shot manner where resources
// are published and then verifications are performed.
func TestControllers_Integration(t *testing.T) {
client := runInMemResourceServiceAndControllers(t)
RunFooV2IntegrationTest(t, client, clientOpts.ClientOptions(t)...)
}
// The lifecycle integration test is typically more complex and deals with changing
// some values over time to cause the controllers to do something differently.
func TestControllers_Lifecycle(t *testing.T) {
client := runInMemResourceServiceAndControllers(t)
RunFooV2LifecycleTest(t, client, clientOpts.ClientOptions(t)...)
}
```
### `test_integration_v2.go`
```go
package footest
import (
"embed"
"fmt"
"testing"
rtest "github.com/hashicorp/consul/internal/resource/resourcetest"
"github.com/hashicorp/consul/proto-public/pbresource"
)
var (
//go:embed integration_test_data
testData embed.FS
)
// Execute the full integration test
func RunFooV2IntegrationTest(t *testing.T, client pbresource.ResourceServiceClient, opts ...rtest.ClientOption) {
	t.Helper()
PublishFooV2IntegrationTestData(t, client, opts...)
VerifyFooV2IntegrationTestResults(t, client)
}
// PublishFooV2IntegrationTestData publishes all the data that needs to exist in the resource service
// for the controllers to converge on the desired state.
func PublishFooV2IntegrationTestData(t *testing.T, client pbresource.ResourceServiceClient, opts ...rtest.ClientOption) {
t.Helper()
c := rtest.NewClient(client, opts...)
// Publishing resources manually is an option but alternatively you can store the resources on disk
// and use go:embed declarations to embed the whole test data filesystem into the test binary.
resources := rtest.ParseResourcesFromFilesystem(t, testData, "integration_test_data/v2")
c.PublishResources(t, resources)
}
func VerifyFooV2IntegrationTestResults(t *testing.T, client pbresource.ResourceServiceClient) {
t.Helper()
c := rtest.NewClient(client)
// Perform verifications here. All verifications should be retryable except in very exceptional circumstances.
	// This could be in a retry.Run block or could be retried by using one of the WaitFor* methods on the rtest.Client.
// Having them be retryable will prevent flakes especially when the verifications are run in the context of
// a multi-server cluster where a raft follower hasn't yet observed some change.
}
```
### `test_lifecycle_v2.go`
```go
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: BUSL-1.1
package footest
import (
"testing"
rtest "github.com/hashicorp/consul/internal/resource/resourcetest"
"github.com/hashicorp/consul/proto-public/pbresource"
)
func RunFooV2LifecycleTest(t *testing.T, client pbresource.ResourceServiceClient, opts ...rtest.ClientOption) {
t.Helper()
// execute tests.
}
```
# Overview
> **Note**
> Looking for guidance on adding new resources and controllers to Consul? Check
> out the [developer guide](guide.md).
Consul 1.16 introduced a set of [generic APIs] for managing resources, and a
[controller runtime] for building functionality on top of them.
[generic APIs]: ../../../proto-public/pbresource/resource.proto
[controller runtime]: ../../../internal/controller
Previously, adding features to Consul involved making changes at every layer of
the stack, including: HTTP handlers, RPC handlers, MemDB tables, Raft
operations, and CLI commands.
This architecture made sense when the product was maintained by a small core
group who could keep the entire system in their heads, but presented significant
collaboration, ownership, and onboarding challenges when our contributor base
expanded to many engineers, across several teams, and the product grew in
complexity.
In the new model, teams can work with much greater autonomy by building on top
of a shared platform and own their resource types and controllers.
## Architecture Overview

<sup>[source](https://whimsical.com/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)</sup>
Our resource-oriented architecture comprises the following components:
#### Resource Service
[Resource Service](../../../proto-public/pbresource/resource.proto) is a gRPC
service that contains the shared logic for creating, reading, updating,
deleting, and watching resources. It will be consumed by controllers, our
Kubernetes integration, the CLI, and mapped to an HTTP+JSON API.
#### Type Registry
[Type Registry](../../../internal/resource/registry.go) is where teams register
their resource types, along with hooks for performing structural validation,
authorization, etc.
#### Storage Backend
[Storage Backend](../../../internal/storage/storage.go) is an abstraction over
low-level storage primitives. Today, there are two implementations (Raft and
an in-memory backend for tests), but in the future we envisage that external storage
systems such as the Kubernetes API or an RDBMS could be supported, which would
reduce operational complexity for our customers.
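For orientation, here is an abridged, illustrative sketch of the abstraction; the real interface in `internal/storage/storage.go` has more methods and richer signatures:

```go
// Illustrative only: simplified from internal/storage/storage.go.
type Backend interface {
	// Read returns a single resource by ID at the requested consistency.
	Read(ctx context.Context, consistency Consistency, id *pbresource.ID) (*pbresource.Resource, error)

	// WriteCAS performs an optimistic check-and-set write, failing if the
	// stored version no longer matches the version on the given resource.
	WriteCAS(ctx context.Context, res *pbresource.Resource) (*pbresource.Resource, error)

	// List, DeleteCAS, and the watch-related methods are elided here.
}
```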
#### Controllers
[Controllers](../../../internal/controller/api.go) implement Consul's business
logic using asynchronous control loops that respond to changes in resources.
Please see [Controller docs](controllers.md) for more details about controllers.
## Raft Storage Backend
Our [Raft Storage Backend](../../../internal/storage/raft/backend.go) integrates
with the existing Raft machinery (e.g. FSM) used by the [old state store]. It
also transparently forwards writes and strongly consistent reads to the leader
over gRPC.
There's quite a lot going on here, so to dig into the details, let's take a look
at how a write operation is handled.
[old state store]: ../persistence/

<sup>[source](https://whimsical.com/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)</sup>
#### Steps 1 & 2
User calls the resource service's `Write` endpoint on a Raft follower, which in
turn calls the storage backend's `WriteCAS` method.
#### Steps 3 & 4
The storage backend determines that the current server is a Raft follower, and
forwards the operation to the leader via a gRPC [forwarding service] listening
on the multiplexed RPC port ([`ports.server`]).
[forwarding service]: ../../../proto/private/pbstorage/raft.proto
[`ports.server`]: https://developer.hashicorp.com/consul/docs/agent/config/config-files#server_rpc_port
#### Step 5
The leader's storage backend serializes the operation to protobuf and applies it
to the Raft log. As we need to share the Raft log with the old state store, we go
through the [`consul.raftHandle`](../../../agent/consul/raft_handle.go) and
[`consul.Server`](../../../agent/consul/server.go) which applies a msgpack
envelope and type byte prefix.
#### Step 6
Raft consensus happens! Once the log has been committed, it is applied to the
[FSM](../../../agent/consul/fsm/fsm.go) which calls the storage backend's `Apply`
method to apply the protobuf-encoded operation to the [`inmem.Store`].
[`inmem.Store`]: ../../../internal/storage/inmem/store.go
#### Steps 7, 8, 9
At this point, the operation is complete. The forwarding service returns a
successful response, as does the follower's storage backend, and the user
gets a successful response too.
#### Steps 10 & 11
Asynchronously, the log is replicated to followers and applied to their storage
backends.
# Resource and Controller Developer Guide
This is a whistle-stop tour through adding a new resource type and controller to
Consul 🚂
## Resource Schema
Adding a new resource type begins with defining the object schema as a protobuf
message, in the appropriate package under [`proto-public`](../../../proto-public).
```shell
$ mkdir -p proto-public/pbfoo/v1alpha1
```
```proto
// proto-public/pbfoo/v1alpha1/foo.proto
syntax = "proto3";
import "pbresource/resource.proto";
import "pbresource/annotations.proto";
package hashicorp.consul.foo.v1alpha1;
message Bar {
option (hashicorp.consul.resource.spec) = {scope: SCOPE_NAMESPACE};
string baz = 1;
hashicorp.consul.resource.ID qux = 2;
}
```
```shell
$ make proto
```
Next, we must add our resource type to the registry. At this point, it's useful
to add a package (e.g. under [`internal`](../../../internal)) to contain the logic
associated with this resource type.
The convention is to have this package export variables for its type identifiers
along with a method for registering its types:
```Go
// internal/foo/types.go
package foo
import (
"github.com/hashicorp/consul/internal/resource"
pbv1alpha1 "github.com/hashicorp/consul/proto-public/pbfoo/v1alpha1"
"github.com/hashicorp/consul/proto-public/pbresource"
)
func RegisterTypes(r resource.Registry) {
r.Register(resource.Registration{
Type: pbv1alpha1.BarType,
		Scope: resource.ScopeNamespace,
Proto: &pbv1alpha1.Bar{},
})
}
```
Note that `Scope` references the scope of the new resource: `resource.ScopePartition`
means the resource exists at the partition level and has no namespace, while
`resource.ScopeNamespace` means it has both a partition and a namespace.
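For comparison, a partition-scoped type would carry the corresponding annotation in its proto definition; a minimal sketch:

```proto
message Baz {
  option (hashicorp.consul.resource.spec) = {scope: SCOPE_PARTITION};

  string qux = 1;
}
```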
Update the `NewTypeRegistry` method in [`type_registry.go`] to call your
package's type registration method:
[`type_registry.go`]: ../../../agent/consul/type_registry.go
```Go
import (
// …
"github.com/hashicorp/consul/internal/foo"
// …
)
func NewTypeRegistry() resource.Registry {
// …
foo.RegisterTypes(registry)
// …
}
```
That should be all you need to start using your new resource type. Test it out
by starting an agent in dev mode:
```shell
$ make dev
$ consul agent -dev
```
You can now use [grpcurl](https://github.com/fullstorydev/grpcurl) to interact
with the [resource service](../../../proto-public/pbresource/resource.proto):
```shell
$ grpcurl -d @ \
-plaintext \
-protoset pkg/consul.protoset \
127.0.0.1:8502 \
hashicorp.consul.resource.ResourceService.Write \
<<EOF
{
"resource": {
"id": {
"type": {
"group": "foo",
"group_version": "v1alpha1",
"kind": "bar"
},
"tenancy": {
"partition": "default",
"namespace": "default"
      },
      "name": "example"
    },
"data": {
"@type": "types.googleapis.com/hashicorp.consul.foo.v1alpha1.Bar",
"baz": "Hello World"
}
}
}
EOF
```
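To read the resource back, here is a sketch using the same conventions; the `Read` RPC takes the resource's `id` (see `resource.proto` for the exact request shape):

```shell
$ grpcurl -d @ \
    -plaintext \
    -protoset pkg/consul.protoset \
    127.0.0.1:8502 \
    hashicorp.consul.resource.ResourceService.Read \
<<EOF
{
  "id": {
    "type": {
      "group": "foo",
      "group_version": "v1alpha1",
      "kind": "bar"
    },
    "tenancy": {
      "partition": "default",
      "namespace": "default"
    },
    "name": "example"
  }
}
EOF
```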
## Validation
Broadly, there are two kinds of validation you might want to perform against
your resources:
- **Structural** validation ensures the user's input is well-formed, for
example: checking that a required field is provided, or that a port is within
an acceptable range.
- **Semantic** validation ensures that the resource makes sense in the context
of *other* resources, for example: checking that an L7 intention is not
targeting an L4 service.
Structural validation should be done up-front, before the resource is admitted,
using a validation hook provided in the type registration:
```Go
func RegisterTypes(r resource.Registry) {
r.Register(resource.Registration{
Type: pbv1alpha1.BarType,
Proto: &pbv1alpha1.Bar{},
Scope: resource.ScopeNamespace,
Validate: validateBar,
})
}
func validateBar(res *pbresource.Resource) error {
var bar pbv1alpha1.Bar
if err := res.Data.UnmarshalTo(&bar); err != nil {
return resource.NewErrDataParse(&bar, err)
}
if bar.Baz == "" {
return resource.ErrInvalidField{
Name: "baz",
Wrapped: resource.ErrMissing,
}
}
return nil
}
```
Semantic validation should be done asynchronously, after the resource is
written, by controllers ([covered below](#controllers)).
## Authorization
You can control how operations on your resource type are authorized by providing
a set of ACL hooks:
```Go
func RegisterTypes(r resource.Registry) {
r.Register(resource.Registration{
Type: pbv1alpha1.BarType,
Proto: &pbv1alpha1.Bar{},
Scope: resource.ScopeNamespace,
		ACLs: &resource.ACLHooks{
Read: authzReadBar,
Write: authzWriteBar,
List: authzListBar,
},
})
}
func authzReadBar(authz acl.Authorizer, authzContext *acl.AuthorizerContext, id *pbresource.ID, _ *pbresource.Resource) error {
return authz.ToAllowAuthorizer().
BarReadAllowed(id.Name, authzContext)
}
func authzWriteBar(authz acl.Authorizer, authzContext *acl.AuthorizerContext, res *pbresource.Resource) error {
return authz.ToAllowAuthorizer().
BarWriteAllowed(res.ID().Name, authzContext)
}
func authzListBar(authz acl.Authorizer, authzContext *acl.AuthorizerContext) error {
return authz.ToAllowAuthorizer().
BarListAllowed(authzContext)
}
```
If you do not provide ACL hooks, `operator:read` and `operator:write`
permissions will be required.
## Mutation
Sometimes, it's necessary to modify resources before they're persisted. For
example, to set sensible default values or normalize user input. You can do this
by providing a mutation hook:
```Go
func RegisterTypes(r resource.Registry) {
r.Register(resource.Registration{
Type: pbv1alpha1.BarType,
Proto: &pbv1alpha1.Bar{},
Scope: resource.ScopeNamespace,
Mutate: mutateBar,
})
}
func mutateBar(res *pbresource.Resource) error {
var bar pbv1alpha1.Bar
if err := res.Data.UnmarshalTo(&bar); err != nil {
return resource.NewErrDataParse(&bar, err)
}
bar.Baz = strings.ToLower(bar.Baz)
return res.Data.MarshalFrom(&bar)
}
```
## Controllers
Controllers are where the business logic of your resources will live. They're
asynchronous [reconciliation loops] that "wake up" whenever a resource is
modified to validate and realize the changes.
You can create a new controller using the [builder API]. Start by identifying
the resource type you want this controller to manage, and provide a reconciler
that will be called whenever a resource of that type is changed.
```Go
package foo
import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	"github.com/hashicorp/consul/internal/controller"
	pbv1alpha1 "github.com/hashicorp/consul/proto-public/pbfoo/v1alpha1"
	"github.com/hashicorp/consul/proto-public/pbresource"
)
func barController() controller.Controller {
return controller.NewController("bar", pbv1alpha1.BarType).
WithReconciler(barReconciler{})
}
type barReconciler struct{}
func (barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {
rsp, err := rt.Client.Read(ctx, &pbresource.ReadRequest{Id: req.ID})
switch {
case status.Code(err) == codes.NotFound:
return nil
case err != nil:
return err
}
var bar pbv1alpha1.Bar
if err := rsp.Resource.Data.UnmarshalTo(&bar); err != nil {
return err
}
rt.Logger.Debug("Hello from bar reconciler!", "baz", bar.Baz)
return nil
}
```
[reconciliation loops]: https://www.oreilly.com/library/view/97-things-every/9781492050896/ch73.html
[builder API]: https://pkg.go.dev/github.com/hashicorp/consul/internal/controller#Controller
Next, register your controller with the controller manager. A common
pattern is to have your package expose a method for registering controllers,
which is called from `registerControllers` in [`server.go`].
[`server.go`]: ../../../agent/consul/server.go
```Go
package foo
func RegisterControllers(mgr *controller.Manager) {
mgr.Register(barController())
}
```
```Go
package consul
func (s *Server) registerControllers() {
// …
foo.RegisterControllers(s.controllerManager)
// …
}
```
### Retries
By default, if your reconciler returns an error, it will be retried with
exponential backoff. While this is correct in most circumstances, you can
override it by returning [`RequeueAfter`] or [`RequeueNow`].
[`RequeueAfter`]: https://pkg.go.dev/github.com/hashicorp/consul/internal/controller#RequeueAfter
[`RequeueNow`]: https://pkg.go.dev/github.com/hashicorp/consul/internal/controller#RequeueNow
```Go
func (barReconciler) Reconcile(context.Context, controller.Runtime, controller.Request) error {
if time.Now().Hour() < 9 {
return controller.RequeueAfter(1 * time.Hour)
}
return nil
}
```
### Status
Controllers can communicate the result of reconciling resource changes (e.g.
surfacing semantic validation issues) with users and other controllers by
updating the resource's status using the `WriteStatus` method.
Each resource can have multiple statuses, typically one per controller,
identified by a string key. Statuses are composed of a set of conditions, which
represent discrete observations about the resource in relation to the current
state of the system.
That all sounds a little abstract, so let's take a look at an example.
```Go
client.WriteStatus(ctx, &pbresource.WriteStatusRequest{
Id: res.Id,
Key: "consul.io/bar",
Status: &pbresource.Status{
ObservedGeneration: res.Generation,
Conditions: []*pbresource.Condition{
{
Type: "Healthy",
State: pbresource.Condition_STATE_TRUE,
Reason: "OK",
Message: "All checks are passing",
},
{
Type: "ResolvedRefs",
State: pbresource.Condition_STATE_FALSE,
Reason: "INVALID_REFERENCE",
Message: "Bar contained an invalid reference to qux",
Resource: resource.Reference(bar.Qux, ""),
},
},
},
})
```
In the previous example, the controller makes two observations about the
current state of the resource:
1. That it's "healthy" (whatever that means in this hypothetical scenario)
1. That it contains a reference that couldn't be resolved
The `Type` and `Reason` should be simple, machine-readable strings, but there
aren't any strict rules about acceptable values. Over time, we
anticipate that common values will emerge that we'll standardize on for
consistency.
`Message` should be a human-readable explanation of the condition.
> **Warning**
> Writing a status to the resource will cause it to be re-reconciled. To avoid
> infinite loops, we recommend dirty checking the status before writing it with
> [`resource.EqualStatus`].
[`resource.EqualStatus`]: https://pkg.go.dev/github.com/hashicorp/consul/internal/resource#EqualStatus
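A minimal sketch of that dirty check, assuming `resource.EqualStatus(a, b, compareUpdatedAt)` semantics and the status key from the earlier example:

```go
newStatus := &pbresource.Status{
	ObservedGeneration: res.Generation,
	Conditions:         conditions,
}

// Skip the write when nothing changed, so we don't wake ourselves up again.
if !resource.EqualStatus(res.Status["consul.io/bar"], newStatus, false) {
	if _, err := client.WriteStatus(ctx, &pbresource.WriteStatusRequest{
		Id:     res.Id,
		Key:    "consul.io/bar",
		Status: newStatus,
	}); err != nil {
		return err
	}
}
```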
### Watching Other Resources
In addition to watching their "managed" resources, controllers can also watch
resources of different, related, types. For example, the service endpoints
controller also watches workloads and services.
```Go
func barController() controller.Controller {
return controller.NewController("bar", pbv1alpha1.BarType).
		WithWatch(pbv1alpha1.BazType, dependency.MapOwner).
		WithReconciler(barReconciler{})
}
```
The second argument to `WithWatch` is a [dependency mapper] function. Whenever a
resource of the watched type is modified, the dependency mapper will be called
to determine which of the controller's managed resources need to be reconciled.
[`dependency.MapOwner`] is a convenience function which causes the watched
resource's [owner](#ownership--cascading-deletion) to be reconciled.
[dependency mapper]: https://pkg.go.dev/github.com/hashicorp/consul/internal/controller#DependencyMapper
[`dependency.MapOwner`]: https://pkg.go.dev/github.com/hashicorp/consul/internal/controller/dependency#MapOwner
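Custom mappers are plain functions. Here is a minimal hand-written sketch for a name-aligned watch, mapping a modified `Baz` to the `Bar` that shares its name and tenancy:

```go
func mapBazToBar(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) ([]controller.Request, error) {
	// Baz and Bar are name-aligned, so reconcile the Bar with the same
	// name and tenancy as the modified Baz.
	return []controller.Request{{
		ID: &pbresource.ID{
			Type:    pbv1alpha1.BarType,
			Tenancy: res.Id.Tenancy,
			Name:    res.Id.Name,
		},
	}}, nil
}
```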
### Placement
By default, only a single, leader-elected, replica of each controller will run
within a cluster. Sometimes it's necessary to override this, for example when
you want to run a copy of the controller on each server (e.g. to apply some
configuration to the server whenever it changes). You can do this by changing
the controller's placement.
```Go
func barController() controller.Controller {
return controller.NewController("bar", pbv1alpha1.BarType).
		WithPlacement(controller.PlacementEachServer).
		WithReconciler(barReconciler{})
}
```
> **Warning**
> Controllers placed with [`controller.PlacementEachServer`] generally shouldn't
> modify resources (as it could lead to race conditions).
[`controller.PlacementEachServer`]: https://pkg.go.dev/github.com/hashicorp/consul/internal/controller#PlacementEachServer
### Initializer
If your controller needs to execute setup steps when the controller
first starts and before any resources are reconciled, you can add an
Initializer.
If the controller has an Initializer, it will not start unless the
Initialize method is successful. The controller does not have retry
logic for the initialize method specifically, but the controller
is restarted on error. When restarted, the controller will attempt
to execute the initialization again.
The example below has the controller creating a default resource as
part of initialization.
```Go
package foo
import (
"context"
"github.com/hashicorp/consul/internal/controller"
pbv1alpha1 "github.com/hashicorp/consul/proto-public/pbfoo/v1alpha1"
"github.com/hashicorp/consul/proto-public/pbresource"
)
func barController() controller.Controller {
	return controller.NewController("bar", pbv1alpha1.BarType).
WithReconciler(barReconciler{}).
WithInitializer(barInitializer{})
}
type barInitializer struct{}
func (barInitializer) Initialize(ctx context.Context, rt controller.Runtime) error {
_, err := rt.Client.Write(ctx,
&pbresource.WriteRequest{
Resource: &pbresource.Resource{
Id: &pbresource.ID{
Name: "default",
Type: pbv1alpha1.BarType,
},
},
},
)
if err != nil {
return err
}
return nil
}
```
### Finalizer
A finalizer allows a controller to execute teardown logic before a
resource is deleted. This can be useful to perform cleanup or block
deletion until certain conditions are met.
Finalizers are encoded as keys within a resource's metadata map. It
is the responsibility of each controller that adds a finalizer to a
resource to remove the finalizer when it is marked for deletion.
Once a resource has no finalizers present, it is deleted by the
resource service.
When the `Delete` endpoint is called on a resource with one or more
finalizers, the resource is marked for deletion by adding an immutable
`deletionTimestamp` key to the resource's metadata map. The resource is
now effectively frozen and will only accept subsequent `Write`s
that remove finalizers. `WriteStatus` is still allowed.
The `resource` package API can be used to manage finalizers and
check whether a resource has been marked for deletion. You would
typically use this API within the logic of your controller's
`Reconcile` method to either put a finalizer in place or perform
cleanup and then remove a finalizer. Don't forget to `Write` your
changes once you add or remove finalizers.
```Go
package resource
// IsMarkedForDeletion returns true if a resource has been marked for deletion,
// false otherwise.
func IsMarkedForDeletion(res *pbresource.Resource) bool { ... }
// HasFinalizers returns true if a resource has one or more finalizers, false otherwise.
func HasFinalizers(res *pbresource.Resource) bool { ... }
// HasFinalizer returns true if a resource has a given finalizer, false otherwise.
func HasFinalizer(res *pbresource.Resource, finalizer string) bool { ... }
// AddFinalizer adds a finalizer to the given resource.
func AddFinalizer(res *pbresource.Resource, finalizer string) { ... }
// RemoveFinalizer removes a finalizer from the given resource.
func RemoveFinalizer(res *pbresource.Resource, finalizer string) { ... }
// GetFinalizers returns the set of finalizers for the given resource.
func GetFinalizers(res *pbresource.Resource) mapset.Set[string] { ... }
```
Example flow in a controller's `Reconcile` method:
```Go
const finalizer = "consul.io/bar-finalizer"
func (barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {
...
// Check if resource is marked for deletion. If yes, perform cleanup, remove finalizer, and Write the resource
if resource.IsMarkedForDeletion(res) {
// Perform some cleanup...
return EnsureFinalizerRemoved(ctx, rt, res, finalizer)
}
// Check if resource has finalizer. If not, add it and Write the resource
if err := EnsureHasFinalizer(ctx, rt, res, finalizer); err != nil {
return err
	}
	// ...continue with the normal reconcile logic and return nil when done.
	return nil
}
```
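Note that `EnsureHasFinalizer` and `EnsureFinalizerRemoved` above are not part of the `resource` package; a minimal sketch of what such helpers could look like:

```go
func EnsureHasFinalizer(ctx context.Context, rt controller.Runtime, res *pbresource.Resource, finalizer string) error {
	if resource.HasFinalizer(res, finalizer) {
		return nil
	}
	resource.AddFinalizer(res, finalizer)
	// Persist the metadata change.
	_, err := rt.Client.Write(ctx, &pbresource.WriteRequest{Resource: res})
	return err
}

func EnsureFinalizerRemoved(ctx context.Context, rt controller.Runtime, res *pbresource.Resource, finalizer string) error {
	if !resource.HasFinalizer(res, finalizer) {
		return nil
	}
	resource.RemoveFinalizer(res, finalizer)
	// Persist the metadata change; once no finalizers remain, the resource
	// service completes the deletion.
	_, err := rt.Client.Write(ctx, &pbresource.WriteRequest{Resource: res})
	return err
}
```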
## Ownership & Cascading Deletion
The resource service implements a lightweight `1:N` ownership model where, on
creation, you can mark a resource as being "owned" by another resource. When the
owner is deleted, the owned resource will be deleted too.
```Go
client.Write(ctx, &pbresource.WriteRequest{
	Resource: &pbresource.Resource{
Owner: ownerID,
// …
},
})
```
## Testing
Now that you have created your controller, it's time to test it. The types of tests each controller should have, and the boilerplate for test files, are documented [here](./testing.md).
# Controllers
This page describes how to write controllers in Consul's new controller architecture.
-> **Note**: This information is valid as of Consul 1.17 but some portions may change in future releases.
## Controller Basics
A controller consists of several parts:
1. **The watched type** - This is the main type a controller is watching and reconciling.
2. **Additional watched types** - These are additional types a controller may care about in addition to the main watched type.
3. **Additional custom watches** - These are the watches for things that aren't resources in Consul.
4. **Reconciler** - This is the instance that's responsible for reconciling requests whenever there's an event for the main watched type or for any of the watched types.
5. **Initializer** - This is responsible for anything that needs to be executed when the controller is started.
A basic controller setup could look like this:
```go
func barController() controller.Controller {
return controller.NewController("bar", pbexample.BarType).
WithReconciler(barReconciler{})
}
```
`barReconciler` needs to implement the `Reconcile` method of the `Reconciler` interface.
It's important to note that the `Reconcile` method only gets a request with the `ID` of the main
watched resource, so it's up to the reconciler implementation to fetch the resource and any relevant information needed
to perform the reconciliation. The most basic reconciler could look as follows:
```go
type barReconciler struct{}

func (b *barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {
...
}
```
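Since only the ID is provided, a typical `Reconcile` body begins by reading the resource back and handling the case where it has since been deleted. The following is a minimal sketch, assuming the `pbresource` `Read` RPC and the standard gRPC `status`/`codes` helpers, which are not shown elsewhere on this page:

```go
func (b *barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {
	// The request only carries the ID, so fetch the resource first.
	rsp, err := rt.Client.Read(ctx, &pbresource.ReadRequest{Id: req.ID})
	switch {
	case status.Code(err) == codes.NotFound:
		// The resource was deleted; nothing left to reconcile.
		return nil
	case err != nil:
		return err
	}

	// Decode the raw resource into its typed form before acting on it.
	bar, err := resource.Decode[*pbexample.Bar](rsp.Resource)
	if err != nil {
		return err
	}

	// ... compare bar.Data against the observed state and converge ...
	_ = bar
	return nil
}
```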
## Watching Additional Resources
Most of the time, controllers will need to watch more resources in addition to the main watched type.
To set up an additional watch, the main thing we need to figure out is how to map an additional watched resource to the main
watched resource. Controller-runtime allows us to implement a mapper function that can take the additional watched resource
as the input and produce reconcile `Requests` for our main watched type.
To figure out how to map the two resources together, we need to think about the relationship between the two resources.
There are several common relationship types between resources that are being used currently:
1. Name-alignment: this relationship means that resources are named the same and live in the same tenancy, but have different data. Examples: `Service` and `ServiceEndpoints`, `Workload` and `ProxyStateTemplate`.
2. Selector: this relationship happens when one resource selects another by name or name prefix. Examples: `Service` and `Workload`, `ProxyConfiguration` and `Workload`.
3. Owner: in this relationship, one resource is the owner of another resource. Examples: `Service` and `ServiceEndpoints`, `HealthStatus` and `Workload`.
4. Arbitrary reference: in this relationship, one resource may reference another by some sort of reference. This reference could be a single string in the resource data or a more composite reference containing name, tenancy, and type. Examples: `Workload` and `WorkloadIdentity`, `HTTPRoute` and `Service`.
Note that it's possible for the two watched resources to have more than one relationship type simultaneously.
For example, the `FailoverPolicy` type is name-aligned with the service to which it applies; however, it also contains
references to destination services, and for a controller that reconciles `FailoverPolicy` and watches `Service`
we need to account for both type 1 and type 4 relationships whenever we get an event for a `Service`.
### Simple Mappers
Let's look at some simple mapping examples.
#### Name-aligned resources
If our resources only have a name-aligned relationship, we can map them with a built-in function:
```go
func barController() controller.Controller {
return controller.NewController("bar", pbexample.BarType).
WithWatch(pbexample.FooType, controller.ReplaceType(pbexample.BarType)).
WithReconciler(barReconciler{})
}
```
Here, all we need to do is replace the type of the `Foo` resource whenever we get an event for it.
#### Owned resources
Let's say our `Foo` resource owns `Bar` resources, where any `Foo` resource can own multiple `Bar` resources.
In this case, whenever we see a new event for `Foo`, all we need to do is get all `Bar` resources that `Foo` currently owns.
For this, we can also use a built-in function to set up our watch:
```go
func MapOwned(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) ([]controller.Request, error) {
resp, err := rt.Client.ListByOwner(ctx, &pbresource.ListByOwnerRequest{Owner: res.Id})
if err != nil {
return nil, err
}
var result []controller.Request
for _, r := range resp.Resources {
result = append(result, controller.Request{ID: r.Id})
}
return result, nil
}
func barController() controller.Controller {
return controller.NewController("bar", pbexample.BarType).
WithWatch(pbexample.FooType, MapOwned).
WithReconciler(barReconciler{})
}
```
### Advanced Mappers and Caches
For selector or arbitrary reference relationships, the mapping that we choose may need to be more advanced.
#### Naive mapper implementation
Let's first consider what a naive mapping function could look like in this case. Let's say that the `Bar` resource
references the `Foo` resource by name in its data. Now to watch and map `Foo` resources, we need to be able to find all relevant `Bar` resources
whenever we get an event for a `Foo` resource.
```go
func MapFoo(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) ([]controller.Request, error) {
resp, err := rt.Client.List(ctx, &pbresource.ListRequest{Type: pbexample.BarType, Tenancy: res.Id.Tenancy})
if err != nil {
return nil, err
}
var result []controller.Request
for _, r := range resp.Resources {
decodedResource, err := resource.Decode[*pbexample.Bar](r)
if err != nil {
return nil, err
}
// Only add Bar resources that match Foo by name.
if decodedResource.GetData().GetFooName() == res.Id.Name {
result = append(result, controller.Request{ID: r.Id})
}
}
	return result, nil
}
```
This approach is fine for cases when the number of `Bar` resources in a cluster is relatively small. If it's not,
then we'd be doing a large `O(N)` search over all `Bar` resources on each `Foo` event, which could be too expensive.
#### Caching Mappers
For cases when `N` is too large, we'd want to use a caching layer to help us make lookups more efficient so that they
don't require an `O(N)` search of potentially all cluster resources.
The controller runtime contains a controller cache and the facilities to keep the cache up to date in response to watches. Additionally there are dependency mappers provided for querying the cache.
_While it is possible to not use the builtin cache and manage state in dependency mappers yourself, this can get quite complex and reasoning about the correct times to track and untrack relationships is tricky to get right. Usage of the cache is therefore the advised approach._
At a high level, the controller author provides the indexes to track for each watched type and can then query those indexes in the
future. The querying can occur during both dependency mapping and during resource reconciliation.
The following example shows how to configure the "bar" controller to re-reconcile a `Bar` resource whenever a `Foo` resource that references it is changed:
```go
func fooReferenceFromBar(r *resource.DecodedResource[*pbexample.Bar]) (bool, []byte, error) {
idx := index.IndexFromRefOrID(&pbresource.ID{
Type: pbexample.FooType,
Tenancy: r.Id.Tenancy,
Name: r.Data.GetFooName(),
})
return true, idx, nil
}
func barController() controller.Controller {
fooIndex := indexers.DecodedSingleIndexer(
"foo",
index.ReferenceOrIDFromArgs,
fooReferenceFromBar,
)
return controller.NewController("bar", pbexample.BarType, fooIndex).
WithWatch(
pbexample.FooType,
dependency.CacheListMapper(pbexample.BarType, fooIndex.Name()),
).
WithReconciler(barReconciler{})
}
```
The controller will now reconcile Bar type resources whenever the Foo type resources they reference are updated. No further tracking is necessary as changes to all Bar types will automatically update the cache.
One limitation of the cache is that it only has knowledge about the current state of resources. That specifically means that the previous state is forgotten once the cache observes a write. This can be problematic when you want to reconcile a resource to no longer take into account something that previously referenced it.
Let's say there are two types: `Baz` and `ComputedBaz`, and a controller that aggregates all `Baz` resources with some value into a single `ComputedBaz` object. When
a `Baz` resource gets updated to no longer have a value, it should not be represented in the `ComputedBaz` resource. The typical way to work around this is to:
1. Store references to the resources that were used during reconciliation within the computed/reconciled resource. For types computed by controllers and not expected to be written directly by users, a `bound_references` field should be added to the top-level resource type's message. For other user-manageable types, the references may need to be stored within the Status field.
2. Add a cache index to the watch of the computed type (usually the controllers main managed type). This index can use one of the indexers specified within the [`internal/controller/cache/indexers`](../../../internal/controller/cache/indexers/) package. That package contains some builtin functionality around reference indexing.
3. Update the dependency mappers to query the cache index *in addition to* looking at the current state of the dependent resource. In our example above the `Baz` dependency mapper could use the [`MultiMapper`] to combine querying the cache for `Baz` types that currently should be associated with a `ComputedBaz` and querying the index added in step 2 for previous references.
#### Footgun: Needing Bound References
When an interior (mutable) foreign key pointer on watched data is used to
determine the resource's applicability in a dependency mapper, it is subject
to the "orphaned computed resource" problem.
(An example of this would be a ParentRef on an xRoute, or the Destination field
of a TrafficPermission.)
When you edit the mutable pointer to point elsewhere, the DependencyMapper will
only witness the NEW value and will trigger reconciles for things derived from
the NEW pointer, but side effects from a prior reconcile using the OLD pointer
will be orphaned until some other event triggers that reconcile (if ever).
This applies equally to all varieties of controller, whether it:
- creates computed resources
- only updates status conditions on existing resources
- has other external side effects (xDS controller writes envoy config over a stream)
To solve this we need to collect the list of bound references that were
"ingredients" into a computed resource's output and persist them on the newly
written resource. Then we load them up and index them such that we can use them
to AUGMENT a mapper event with additional maps using the OLD data as well.
We have only actively worked to solve this for the computed resource flavor of
controller:
1. The top level of the resource data protobuf needs a
`BoundReferences []*pbresource.Reference` field.
2. Use a `*resource.BoundReferenceCollector` to capture any resource during
`Reconcile` that directly contributes to the final output resource data
payload.
3. Call `brc.List()` on the above and set it to the `BoundReferences` field on
the computed resource before persisting.
4. Use `indexers.BoundRefsIndex` to index this field on the primary type of the
controller.
5. Create `boundRefsMapper := dependency.CacheListMapper(ZZZ, boundRefsIndex.Name())`
6. For each watched type, wrap its DependencyMapper with
`dependency.MultiMapper(boundRefsMapper, ZZZ)`
7. That's it.
This will cause each reconcile to index the prior list of inputs and augment
the results of future mapper events with historical references.
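Putting steps 4 through 6 together, the wiring could look roughly like the sketch below. The `ComputedBaz` names, the `mapBazToComputedBaz` mapper, and the exact `BoundRefsIndex` constructor signature are illustrative assumptions rather than the verbatim API:

```go
func computedBazController() controller.Controller {
	// Step 4: index the BoundReferences field on the controller's primary type.
	boundRefsIndex := indexers.BoundRefsIndex[*pbexample.ComputedBaz]("bound-refs")

	// Step 5: a mapper that recovers the prior "ingredients" from the index.
	boundRefsMapper := dependency.CacheListMapper(pbexample.ComputedBazType, boundRefsIndex.Name())

	return controller.NewController("computed-baz", pbexample.ComputedBazType, boundRefsIndex).
		WithWatch(
			pbexample.BazType,
			// Step 6: combine the historical (bound-refs) view with the
			// current-state mapper so edits to the old pointer still reconcile.
			dependency.MultiMapper(boundRefsMapper, mapBazToComputedBaz),
		).
		WithReconciler(computedBazReconciler{})
}
```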
### Custom Watches
In some cases, we may want to trigger reconciles for events that aren't generated from CRUD operations on resources, for example
when an Envoy proxy connects to or disconnects from a server. Controller-runtime allows us to set up watches for
events that come from a custom event channel. Please see [xds-controller](https://github.com/hashicorp/consul/blob/ecfeb7aac51df8730064d869bb1f2c633a531522/internal/mesh/internal/controllers/xds/controller.go#L40-L41) for examples of custom watches.
## Statuses
In many cases, controllers need to update statuses on resources to let the user know about the successful or unsuccessful
state of a resource.
These are the guidelines that we recommend for statuses:
* While status conditions are stored as a list, the condition type should be treated as a key in a map, meaning a resource should not have two status conditions with the same type.
* Controllers need to update both successful and unsuccessful condition states. This is because we need to make sure that we clear any previously failed status conditions.
* Status conditions should be named such that the `True` state is a successful state and `False` state is a failed state.
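As a rough illustration of these guidelines, a controller could upsert a single condition per type from its `Reconcile` method. Field and enum names below are assumed from the `pbresource` status API rather than copied from it:

```go
const statusKey = "consul.io/bar-controller"

func writeCondition(ctx context.Context, rt controller.Runtime, res *pbresource.Resource, ok bool, reason string) error {
	// "accepted" acts as the map key: never write two conditions of this type.
	state := pbresource.Condition_STATE_TRUE // True indicates success
	if !ok {
		state = pbresource.Condition_STATE_FALSE // False indicates failure
	}
	_, err := rt.Client.WriteStatus(ctx, &pbresource.WriteStatusRequest{
		Id:  res.Id,
		Key: statusKey,
		Status: &pbresource.Status{
			ObservedGeneration: res.Generation,
			Conditions: []*pbresource.Condition{{
				Type:   "accepted",
				State:  state,
				Reason: reason,
			}},
		},
	})
	return err
}
```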
## Best Practices
Below is a list of controller best practices that we've learned so far. Many of them are inspired by [kubebuilder](https://book.kubebuilder.io/reference/good-practices).
* Avoid monolithic controllers as much as possible. A single controller should only manage a single resource to avoid complexity and race conditions.
* If using cached mappers, aim to write (update or delete entries) to mappers in the `Reconcile` method and read from them in the mapper functions used by watches.
* Fetch all data in the `Reconcile` method and avoid caching it from the mapper functions. This ensures that we get the latest data for each reconciliation.
# pprof
> A Profile is a collection of stack traces showing the call sequences that led to instances of a particular event, such as allocation.
## Installation
`go install github.com/google/pprof@latest`
### CPU Profiles
* Every 10ms the CPU profiler interrupts a thread in the program and records the stack traces of **running** goroutines.
* It does **not** track time spent sleeping or waiting for I/O.
* The duration of a sample is assumed to be 10ms, so the seconds spent in a function is calculated as: `(num_samples * 10ms)/1000`
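* For example, a function observed in 500 samples is attributed roughly `(500 * 10ms)/1000 = 5` seconds of CPU time.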
### Heap Profiles
* Aims to record the stack trace for every 512KB allocated, at the point of the allocation.
* Tracks allocations and when profiled allocations are freed. With both of these data points pprof can calculate allocations and memory in use.
* `alloc_objects` and `alloc_space` refer to allocations since the start of the profiling period.
* Helpful for tracking functions that produce a lot of allocations.
* `inuse_objects` and `inuse_space` refer to allocations that have not been freed.
* `inuse_objects = allocations - frees`
* Helpful for tracking sources of high memory usage.
* May not line up with OS reported memory usage! The profiler only tracks heap memory usage, and there are also cases where the Go GC will not release free heap memory to the OS.
* When allocations are made, the heap sampler makes a decision about whether to sample the allocation or not. If it decides to sample it, it records the stack trace, counts the allocation, and counts the number of bytes allocated.
* When an object's memory is freed, the heap profiler checks whether that allocation was sampled, and if it was, it counts the bytes freed.
### Goroutine Profiles
* Shows the number of times each stack trace was seen across goroutines.
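The commands below assume the `.prof` files already exist. If you need to produce them from a Go program, the standard `runtime/pprof` package can write all three profile types; a minimal sketch (the `doWork` function stands in for whatever you want to profile):

```go
package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	// CPU profile: samples are collected between Start and Stop.
	cpu, _ := os.Create("profile.prof") // error handling elided for brevity
	defer cpu.Close()
	pprof.StartCPUProfile(cpu)
	doWork()
	pprof.StopCPUProfile()

	// Heap profile: a snapshot of sampled allocations at this point.
	heap, _ := os.Create("heap.prof")
	defer heap.Close()
	pprof.WriteHeapProfile(heap)

	// Goroutine profile: stack traces of all current goroutines.
	gor, _ := os.Create("goroutine.prof")
	defer gor.Close()
	pprof.Lookup("goroutine").WriteTo(gor, 0)
}

func doWork() {
	// Placeholder for the workload being profiled.
}
```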
## Common commands
* Open a profile in the terminal:
`go tool pprof profile.prof`
`go tool pprof heap.prof`
`go tool pprof goroutine.prof`
* Open a profile in a web browser:
`go tool pprof -http=:8080 profile.prof`
* If the correct source isn't detected automatically, specify the location of the source associated with the profile:
`go tool pprof -http=:9090 -source_path=/Users/freddy/go/src/github.com/hashicorp/consul-enterprise profile.prof`
Useful in annotated `Source` view which shows time on a line-by-line basis.
**Important:** Ensure that the source code matches the version of the binary! The source view relies on line numbers for its annotations.
* Compare two profiles:
`go tool pprof -http=:8080 -base before/profile.prof after/profile.prof`
This comparison will subtract the `-base` profile from the given profile. In this case, "before" is subtracted from "after".
* Useful commands when profile is opened in terminal:
* `top` lists the top 10 nodes by value
* `list <function regex>` lists the top matches to the pattern
## Graph view
* **Node Values:**
* Package
* Function
* Flat value: the value in the function itself.
* Cumulative value: the sum of the flat value and all its descendants.
* Percentage of flat and cumulative values relative to total samples. Total sample time is visible with `top` command in terminal view.
Example:
```go
func foo() {
	a() // step 1 takes 1s
	// step 2: do something directly in foo, takes 3s
	b() // step 3 takes 1s
}
```
`flat` is the time spent on step 2 (3s), while `cum` is time spent on steps 1 to 3.
* **Path Value**
* The cumulative value of the following node.
* **Node Color**:
* large positive cumulative values are red.
* cumulative values close to zero are grey.
* large negative cumulative values are green; negative values are most likely to appear during profile comparison.
* **Node Font Size**:
* larger font size means larger absolute flat values.
* smaller font size means smaller absolute flat values.
* **Edge Weight**:
* thicker edges indicate more resources were used along that path.
* thinner edges indicate fewer resources were used along that path.
* **Edge Color**:
* large positive values are red.
* large negative values are green.
* values close to zero are grey.
* **Dashed Edges**: some locations between the two connected locations were removed.
* **Solid Edges**: one location directly calls the other.
* **"(inline)" Edge Marker**: the call has been inlined into the caller. More on inlining: [Inlining optimisations in Go | Dave Cheney](https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go)
Example graph:

* 20.11s is spent doing direct work in `(*Store).Services()`
* 186.88s is spent in this function **and** its descendants
* `(*Store).Services()` has both large `flat` and `cumulative` values, so the font is large and the box is red.
* The edges to `mapassign_faststr` and `(radixIterator).Next()` are solid and red because these are direct calls with large positive values.
## Flame graph view
A collection of stack traces, where each stack is a column of boxes, and each box represents a function.
Functions at the top of the flame graph are parents of functions below.
The width of each box is proportional to the number of times it was observed during sampling.
Mouse-over boxes shows `cum` value and percentage, while clicking on boxes lets you zoom into their stack traces.
### Note
* The background color is **not** significant.
* Sibling boxes are not necessarily in chronological order.
## References
* [Diagnostics - The Go Programming Language](https://go.dev/doc/diagnostics)
* [Profiling Go programs with pprof](https://jvns.ca/blog/2017/09/24/profiling-go-with-pprof/)
* [pprof/README.md](https://github.com/google/pprof/blob/master/doc/README.md)
* [GitHub - DataDog/go-profiler-notes: felixge's notes on the various go profiling methods that are available.](https://github.com/DataDog/go-profiler-notes)
* [The Flame Graph - ACM Queue](https://queue.acm.org/detail.cfm?id=2927301)
* [High Performance Go Workshop](https://dave.cheney.net/high-performance-go-workshop/dotgo-paris.html#pprof)
* [Pprof and golang - how to interpret a results?](https://stackoverflow.com/a/56882137)
---
layout: docs
page_title: Run Vault as a service
description: >-
Configure and deploy Vault as a service for Linux or Windows.
---
# Run Vault as a service
Instead of starting your Vault server manually from the command line, you can
configure a service to start Vault automatically.
## Before you start
- **You must install Vault**. You can [use a package manager](/vault/install)
or [install a binary manually](/vault/docs/install/install-binary).
## Step 1: Create a new service
<Tabs>
<Tab heading="Linux shell" group="nix">
<Highlight title="Example tested on Ubuntu 22.04">
The following service definition is a simpler version of the `vault.service`
example in the Vault GitHub repo: [vault/.release/linux/package/usr/lib/systemd/system/vault.service](https://github.com/hashicorp/vault/blob/main/.release/linux/package/usr/lib/systemd/system/vault.service)
</Highlight>
1. Set the `VAULT_CONFIG` environment variable to your Vault configuration
directory. The default configuration directory is `/etc/vault.d`:
```shell-session
$ VAULT_CONFIG=/etc/vault.d
```
1. Confirm the path to your Vault binary:
```shell-session
$ VAULT_BINARY=$(which vault)
```
1. Create a `systemd` service called `vault.service` that uses the Vault
binary:
```shell-session
$ sudo tee /lib/systemd/system/vault.service <<EOF
[Unit]
Description="HashiCorp Vault"
Documentation="https://developer.hashicorp.com/vault/docs"
ConditionFileNotEmpty="${VAULT_CONFIG}/vault.hcl"
[Service]
User=vault
Group=vault
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=${VAULT_BINARY} server -config=${VAULT_CONFIG}/vault.hcl
ExecReload=/bin/kill --signal HUP \$MAINPID
KillMode=process
KillSignal=SIGINT
[Install]
WantedBy=multi-user.target
EOF
```
1. Change the permissions on `/lib/systemd/system/vault.service` to `644`:
```shell-session
$ sudo chmod 644 /lib/systemd/system/vault.service
```
</Tab>
<Tab heading="Powershell" group="ps">
The Windows binary for Vault does not support the Windows Service Application
API. To run Vault as a service, you must use a Windows service wrapper. You can
use whatever wrapper is appropriate for your environment, but the easiest we
have found is `nssm`.
1. Download and install [`nssm`](https://nssm.cc/) manually or install the
package with [Chocolatey](https://chocolatey.org/):
```powershell
choco install nssm
```
1. Set a `VAULT_HOME` environment variable to your preferred Vault home
directory. For example, `c:\Program Files\Vault`:
```powershell
$env:VAULT_HOME = "${env:ProgramFiles}\Vault"
```
1. Use `nssm` to create a new Windows service:
```powershell
nssm install MS_VAULT "${env:VAULT_HOME}\vault.exe"
```
1. Set the working directory for your Vault installation:
```powershell
nssm set MS_VAULT AppDirectory "${env:VAULT_HOME}"
```
1. Define the runtime parameters for Vault, including the
`-config` flag with the relative path to your Vault configuration file, for
example `Config\vault.hcl`:
```powershell
nssm set MS_VAULT AppParameters "server -config Config\vault.hcl"
```
1. Set the display name and description for the "Services"
management console:
```powershell
nssm set MS_VAULT DisplayName "Vault Service" ; `
nssm set MS_VAULT Description "Vault server running as a service"
```
1. Set the startup type for your service. We recommend setting startup to
"Manual" until you confirm the service is working as expected:
```powershell
nssm set MS_VAULT Start SERVICE_DEMAND_START
```
1. Configure the service to pipe information from `stdout` and `stderr` to files
under your logging directory, for example `${env:VAULT_HOME}\Logs`:
```powershell
nssm set MS_VAULT AppStdout "${env:VAULT_HOME}\Logs\vault-stdout.log" ; `
nssm set MS_VAULT AppStderr "${env:VAULT_HOME}\Logs\vault-error.log"
```
1. Optionally, you can use the `AppEnvironmentExtra` parameter to set relevant
variables for the service environment. For example, to set the `VAULT_ADDR`
environment variable:
```powershell
nssm set MS_VAULT AppEnvironmentExtra `$env:VAULT_ADDR=https://localhost:8200
```
1. Confirm your Vault service settings with `nssm`:
```powershell
nssm dump MS_VAULT | Foreach {$_ -replace '.+nssm\.exe ',''}
```
</Tab>
</Tabs>
## Step 2: Start the new service
<Tabs>
<Tab heading="Linux shell" group="nix">
1. Reload the `systemd` configuration:
```shell-session
$ sudo systemctl daemon-reload
```
1. Start the Vault service:
```shell-session
$ sudo systemctl start vault.service
```
1. Verify the service status:
```shell-session
$ systemctl status vault.service
vault.service - "HashiCorp Vault"
Loaded: loaded (/lib/systemd/system/vault.service; disabled; vendor preset: enabled)
Active: active (running) since Thu 2024-09-05 13:58:45 UTC; 4s ago
Docs: https://developer.hashicorp.com/vault/docs
Main PID: 3145 (vault)
Tasks: 8 (limit: 2241)
Memory: 23.6M
CPU: 200ms
CGroup: /system.slice/vault.service
└─3145 /usr/bin/vault server -config=/etc/vault.d/vault.hcl
```
</Tab>
<Tab heading="Powershell" group="ps">
<Highlight title="Use Powershell commands or wrapper commands to manage your service">
Once you create the service, you can control it using standard `*-Service`
cmdlets **or** the relevant commands for the associated wrapper. For example,
to control the service with `nssm` use `nssm start MS_VAULT`.
</Highlight>
1. Start the Vault service:
```powershell
Start-Service -Name MS_VAULT
```
1. Confirm service status:
```powershell
Get-Service -Name MS_VAULT
Status Name DisplayName
------ ---- -----------
Running MS_VAULT Vault Service
```
</Tab>
</Tabs>
## Step 3: Verify the service is running
To confirm the service is running and your Vault service is available, open the
Vault GUI in a browser at the default address:
[http://localhost:8200](http://localhost:8200)
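You can also confirm availability from a terminal with the Vault CLI, assuming the default listener address with TLS disabled:

```shell-session
$ VAULT_ADDR=http://localhost:8200 vault status
```

A reachable server reports its seal status; a connection error means the service is not listening yet.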
## Related tutorials
The following tutorials provide additional guidance for installing Vault and
production cluster deployment:
- [Day One Preparation](/vault/tutorials/day-one-raft)
- [Recommended Patterns](/vault/tutorials/recommended-patterns)
---
layout: docs
page_title: Vault Integration Program
description: Guide to partnership integrations and creating plugins for Vault.
---
# Vault integration program
The HashiCorp Vault Integration Program allows partners to integrate their products with HashiCorp Vault (Open Source or Enterprise versions) or [HashiCorp Cloud Platform](https://cloud.hashicorp.com/) (HCP) Vault. Vault covers a relatively large surface area and therefore a large set of possible integrations, some of which require the partner to build a Vault plugin or an integration that results in the partner's solution working tightly with Vault.
Partners integrating their solutions via the Vault Integration Process provide their customers a verified and seamless user experience.
This program is intended to be largely a self-service process with links and guidance to information sources, clearly defined steps, and checkpoints.
## Types of Vault integrations
Vault is an Identity-based security solution that leverages trusted sources of identity to keep secrets and application data secured with one centralized, audited workflow for tightly controlling access to secrets across applications, systems, and infrastructure while encrypting data both in flight and at rest. For a full description of the current features please refer to the Vault [website](/).
There are two main types of integrations with Vault. The first is Runtime Integrations which use Vault as part of a workflow. Many partners have integrations that use existing Vault deployments to retrieve various types of secrets for use in a partner’s application or platform. The use cases can range from Vault storing and providing secrets, issuing or managing PKI certificates or acting as an external key management system.
The second type is where a partner develops a custom plugin. Vault has a secure [plugin](/vault/docs/plugins) architecture. Vault’s plugins are completely separate, standalone applications that Vault executes and communicates with over RPC.
Plugins can be broken into two categories, Secrets Engines and Auth Methods. They can be built-in and bundled with the Vault binary, or be external that has to be manually registered. Built-in plugins are developed by HashiCorp, while external plugins can be developed by HashiCorp, technology partners, or the community. There is a curated collection of all plugins, both built-in and external, located on the [Vault Integrations](/vault/integrations) page.
The diagram below depicts the key Vault integration categories and types.

Main Vault categories for partners to integrate with include:
**Authentication Methods**: Authentication (or Auth) methods are plugin components in Vault that perform authentication and are responsible for assigning identity along with a set of policies to a user. Vault supports multiple auth methods/identity models and partners can build a plugin that allows Vault to authenticate against the partners’ platform. You can find more information about Vault Auth Methods [here](/vault/docs/auth/).
**Runtime Integrations**: These types of integrations include integrations developed by partners that work with existing deployments of Vault and the partner’s product as part of the customer's identity/security workflow.
Oftentimes these integrations involve modifying a partner’s product to become “Vault aware”. There are two main components that need to be considered for this type of integration:
1. How is the application going to authenticate itself to Vault?
1. Support of Namespaces
There are many ways for an application to authenticate itself to Vault (see [Auth Methods](/vault/docs/auth/)), but we recommend partners use one of the following methods: [AppRole](/vault/docs/auth/approle), [JWT / OIDC](/vault/docs/auth/jwt), [TLS Certificates](/vault/docs/auth/cert) or [Username / Password](/vault/docs/auth/userpass). For an integration to be verified as production ready by HashiCorp, there needs to be at least one other Auth method supported besides [Token](/vault/docs/auth/token). Token is not recommended for use in production since it involves creating a manual long lived token (which is against best practice and poses a security risk). Using one of the above mentioned auth methods automatically creates short lived tokens and eliminates the need to manually generate a new token on a regular basis.
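For example, with AppRole the application exchanges its role ID and secret ID for a short-lived token at login; the values below are placeholders:

```shell-session
$ vault write auth/approle/login \
    role_id="<role-id>" \
    secret_id="<secret-id>"
```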
As the number of customers using Vault Enterprise increases, partners are encouraged to support [Namespaces](/vault/tutorials/enterprise/namespaces). By supporting Namespaces, there is an additional benefit that an integration should be able to work with HCP Vault Dedicated.
HSMs (Hardware Security Modules) are a specific type of runtime integration and can be configured to work with new or existing Vault deployments. They provide an added level of security and compliance. The HSM communicates with Vault using the PKCS#11 protocol, so the integration primarily involves verifying that the functionality operates correctly. You can find more information about Vault's HSM support [here](/vault/docs/enterprise/hsm). A list of HSMs that have been verified to work with Vault is shown in our [interoperability matrix](/vault/docs/interoperability-matrix).
**Audit/Monitoring & Compliance**: Audit/Monitoring and Compliance are components in Vault that keep a detailed log of all requests and responses to Vault. Because every operation with Vault is an API request/response, the audit log contains every authenticated interaction with Vault, including errors. Vault supports multiple audit devices to support your business use case. You can find more information about Vault Audit Devices [here](/vault/docs/audit/).
**Secrets Engines**: Secrets engines are plugin components which store, generate, or encrypt data. Secrets engines are provided with some set of data, that take some action on that data, and then return a result. Some secrets engines store and read data, like encrypted in-memory data structure, other secrets engines connect to other services. Examples of Secrets Engines include identity modules of Cloud providers like AWS, Azure IAM models, Cloud (LDAP), database or certificate management. You can find more information about Vault Secrets Engines [here](/vault/docs/secrets/).
-> **Note:** Integrations related to Vault's [storage](/vault/docs/concepts/storage) backend, [auto auth](/vault/docs/agent-and-proxy/autoauth), and [auto unseal](/vault/docs/concepts/seal#auto-unseal) functionality are not encouraged. Please reach out to [[email protected]](mailto:[email protected]) for any questions related to this.
### HCP Vault Dedicated
HCP Vault Dedicated is a managed version of Vault which is operated by HashiCorp to allow customers to quickly get up and running. HCP Vault Dedicated uses the same binary as self-managed Vault Enterprise, and offers a consistent user experience. You can use the same Vault clients to communicate with HCP Vault Dedicated as you use to communicate with Vault. Most runtime integrations can be verified with HCP Vault Dedicated.
Sign up for HCP Vault Dedicated [here](https://portal.cloud.hashicorp.com/) and check out [this](/vault/tutorials/cloud) learn guide for quickly getting started.
### Vault integration badges
There are two types of badges that partners can receive: the Vault Enterprise Verified and HCP Vault Verified badges. Partners will be issued the Vault Enterprise badge for integrations that work with Vault Enterprise features such as namespaces, HSM support, or key management. Partners will be issued the HCP Vault Dedicated badge once their integration has been verified to work with HCP Vault Dedicated. The badges are displayed on the partner's page (for example, [MongoDB](https://www.hashicorp.com/partners/tech/mongodb#vault)) and can also be used on the partner's own website to help provide better visibility and differentiation to customers. The process for verification of these integrations is detailed below.
<span>
<ImageConfig inline height={200} width={200}>
</ImageConfig>
<ImageConfig inline height={200} width={200}>
</ImageConfig>
</span>
## Development process
The Vault integration development process is divided into six steps. By following these steps, Vault integrations can be developed alongside HashiCorp to ensure that the integrations are able to be verified and supported in Vault as quickly as possible. A visual representation of the self-guided steps is depicted below.
1. Engage: Initial contact between vendor and HashiCorp
1. Enable: Information and articles to aid with the development of the integration
1. Develop and Test: Integration development and testing process
1. Review: HashiCorp verification of integration (iterative process)
1. Release: Verified integration made available and listed on the HashiCorp website once the HashiCorp technology partnership agreement has been executed
1. Support: Ongoing maintenance and support of the integration by the partner.
### 1. engage
Please begin by providing some basic information about the integration that is being built via a simple [webform](https://docs.google.com/forms/d/e/1FAIpQLSfQL1uj-mL59bd2EyCPI31LT9uvVT-xKyoHAb5FKIwWwwJ1qQ/viewform).
This information is recorded and used by HashiCorp to track the integration through its various stages. It is also used to notify the integration developer of any overlapping work, perhaps coming from the community, so you can better focus your resources.
Vault has a large and active community and ecosystem of partners that may have already started working on a similar integration. We'll do our best to connect similar parties to avoid duplicate work.
### 2. enable
While not mandatory, HashiCorp encourages partners to sign an MNDA (Mutual Non-Disclosure Agreement) to allow for open dialog and sharing of ideas during the integration process.
In an effort to support our self-serve model, we’ve included links to resources, documentation, examples and best practices to guide you through the Vault integration development and testing process.
- [Vault Tutorial and Learn Site](/vault/tutorials)
- Sample development implemented by a [partner](https://www.hashicorp.com/integrations/venafi/vault/)
- Example runtime integrations for reference: [F5](https://www.hashicorp.com/integrations/f5/vault), [ServiceNow](https://www.hashicorp.com/integrations/servicenow/vault)
- [Vault Community Forum](https://discuss.hashicorp.com/c/vault)
We encourage partners to closely follow the above guidance. Adopting the same structure and coding patterns helps expedite the review and release cycles.
### 3. develop and test
For partners building runtime integrations with Vault, we encourage supporting multiple [authentication](/vault/docs/auth) methods (e.g., AppRole, JWT, Kubernetes) besides tokens. We also encourage adding as much flexibility as possible when specifying paths for secrets engines. For partners who want to build a plugin, the only prerequisites are basic command-line skills and knowledge of the Go programming language. HashiCorp has found the integration development process to be straightforward when partners closely follow the provided resources and adopt the same structure and coding patterns, which helps expedite the review and release cycles.
Please remember that all integrations should have the appropriate documentation to assist Vault users in configuring the integrations.
**Auth Methods**
- [Auth Methods documentation](/vault/docs/auth)
- [Example of how to build, install, and maintain auth method plugins](https://www.hashicorp.com/blog/building-a-vault-secure-plugin)
- [Sample plugin code](https://github.com/hashicorp/vault-auth-plugin-example)
**Runtime Integration**
- [Vault Tutorial and Learn Site](/vault/tutorials)
- [Auth Methods documentation](/vault/docs/auth)
- [HSM documentation](/vault/docs/enterprise/hsm)
- [HSM Configuration information](/vault/docs/configuration/seal/pkcs11)
**Audit, Monitoring & Compliance Integration**
- [Audit devices documentation](/vault/docs/audit)
**Secrets Engine Integration**
- [Secret engine documentation](/vault/docs/secrets)
- [Custom Secrets Engines | Vault - HashiCorp Learn](/vault/tutorials/custom-secrets-engine)
**HCP Vault Dedicated**
The process to spin up a testing instance of HCP Vault Dedicated is very [straightforward](/vault/tutorials/cloud/get-started-vault). HCP has been designed as a turn-key managed service so configuration is minimal. Furthermore, HashiCorp provides all new users an initial credit which lasts for a couple of months when using the [development](https://cloud.hashicorp.com/products/vault/pricing) cluster. Used in conjunction with AWS free tier resources, there should be no cost beyond the time spent by the designated tester.
There are a couple of items to consider when determining if the integration will work with HCP Vault Dedicated.
- Since HCP Vault Dedicated runs Vault Enterprise, the integration needs to be aware of [Namespaces](/vault/tutorials/enterprise/namespaces). This is important because the main namespace in HCP Vault Dedicated is called `admin`, which differs from the standard `root` namespace in a self-managed Vault instance (see the sketch after this list). If the integration doesn't currently support namespaces, adding Namespace support has the additional benefit of enabling it to work with all self-managed Vault Enterprise installations.
- HCP Vault Dedicated is currently only deployed on AWS, so the partner's application should be able to be deployed or run in AWS. This is vital so that HCP Vault Dedicated can communicate with the application using a [private peered](/hcp/docs/hcp/network/hvn-aws/hvn-peering) connection via a [HashiCorp Virtual Network](/hcp/docs/hcp/network).
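A minimal sketch of pointing the Vault CLI at an HCP Vault Dedicated cluster; the cluster address is a placeholder:

```shell
export VAULT_ADDR="https://my-cluster.vault.<id>.aws.hashicorp.cloud:8200"
export VAULT_NAMESPACE="admin"   # HCP Vault Dedicated's top-level namespace
vault secrets list               # operates inside admin/, not root
```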
Additional resources:
- [HCP Sign up](https://portal.cloud.hashicorp.com/)
- [Namespaces - Vault Enterprise](/vault/docs/enterprise/namespaces)
- [Create a Vault Cluster on HCP | HashiCorp Learn](/vault/tutorials/cloud/get-started-vault)
### 4. review
During the review process, HashiCorp will provide feedback on the newly developed integration for both Vault and HCP Vault Dedicated. This is an important step to allow HashiCorp to review and verify your Vault integration. Please reach out to [[email protected]](mailto:[email protected]) for verification.
The review process can take some time to complete and may require some iterations through the code to address any problems identified by the HashiCorp team.
Once the integration has been verified, the partner is requested to sign the HashiCorp Technology Partner Agreement to have their integration listed on the HashiCorp website upon release.
### 5. release
At this stage, it is expected that the integration is fully complete, the necessary documentation has been written, and HashiCorp has reviewed the integration.
For Auth Method or Secrets Engine plugins specifically, once the plugin has been verified by HashiCorp, we recommend hosting it on GitHub so it can be more easily downloaded and installed within Vault. We also encourage partners to list their plugin on the [Vault Integrations](/vault/integrations) page, in addition to the listing on the technology partner's dedicated HashiCorp partner page. To have the plugin listed on the portal page, please open a pull request via the "edit in GitHub" link at the bottom of the page and add the plugin in the partner section.
For HCP Vault Dedicated verifications, the partner will be issued an HCP Vault Verified badge and will have this displayed on their partner page.
### 6. support
At HashiCorp, we view the release step as the beginning of the journey. Getting the integration built is just the first step in enabling users to leverage it against their infrastructure. Once development is completed, on-going effort is required to support the developed integration and address any issues in a timely manner.
The expectation from the partner is to create a mechanism to track and resolve all critical issues within 48 hours, and all other issues within 5 business days. This is a requirement given the critical nature of Vault to customers' operations. Partners who choose not to support their integration will not be considered a verified integration and cannot be listed on the website.
## Checklist
Below is a checklist of steps that should be followed during the Vault integration development process. This reiterates the steps described above.
- Fill out the [Vault Integration webform](https://docs.google.com/forms/d/e/1FAIpQLSfQL1uj-mL59bd2EyCPI31LT9uvVT-xKyoHAb5FKIwWwwJ1qQ/viewform).
- Develop and test Vault integration along with the documentation, send to [[email protected]](mailto:[email protected]), to schedule an initial review.
- Address review feedback and finalize the development process.
- Provide HashiCorp with credentials for underlying infrastructure for test purposes.
- Demo the integration.
- Execute HashiCorp Partner Agreement Documents, review logo guidelines, partner listing and more.
- Plan to continue supporting the integration with additional functionality and responding to customer issues.
## Contact us
For any questions or feedback, please contact us at: [[email protected]](mailto:[email protected])
---
layout: docs
page_title: Glossary of Terms
sidebar_title: Glossary
description: |-
Vault Glossary.
---
# Glossary
This page collects brief definitions of some of the technical terms used in the
documentation for Vault.
- [Audit Device](#audit-device)
- [Auth Method](#auth-method)
- [Barrier](#barrier)
- [Client Token](#client-token)
- [Plugin](#plugin)
- [Request](#request)
- [Secret](#secret)
- [Secrets Engine](#secrets-engine)
- [Server](#server)
- [Storage Backend](#storage-backend)
### Audit device
An audit device is responsible for managing audit logs.
Every request to Vault and response from Vault goes through the configured
audit devices. This provides a simple way to integrate Vault with multiple
audit logging destinations of different types.
### Auth method
An auth method is used to authenticate users or applications
which are connecting to Vault. Once authenticated, the auth method returns the
list of applicable policies which should be applied. Vault takes an
authenticated user and returns a client token that can be used for future
requests. As an example, the `userpass` auth method uses a username and
password to authenticate the user. Alternatively, the `github` auth method
allows users to authenticate via GitHub.
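For example (a minimal sketch; the username and token variable are illustrative):

```shell
# userpass: Vault prompts for the password and returns a client token
vault login -method=userpass username=alice

# github: authenticate with a GitHub personal access token
vault login -method=github token="$GITHUB_TOKEN"
```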
### Barrier
Almost everything Vault writes to storage is encrypted using the keyring, which is protected by the seal. We refer to this practice as "the barrier". There are a few exceptions to the rule, for example, the seal configuration is stored in an unencrypted file since it's needed to unseal the barrier, and the keyring is encrypted using the root key, while the root key is encrypted using the seal.
### Client token
A client token (aka "Vault Token") is conceptually
similar to a session cookie on a web site. Once a user authenticates, Vault
returns a client token which is used for future requests. The token is used by
Vault to verify the identity of the client and to enforce the applicable ACL
policies. This token is passed via HTTP headers.
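For example, a client can inspect its own token by passing it in the `X-Vault-Token` header (a minimal sketch, assuming `VAULT_ADDR` and `VAULT_TOKEN` are set):

```shell
curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/auth/token/lookup-self"
```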
### Plugin
Plugins are a feature of Vault that can be enabled, disabled, and customized to
some degree. All Vault [auth methods](/vault/docs/auth) and [secrets engines](/vault/docs/secrets)
are considered plugins.
#### Built-in plugin
Built-in plugins are shipped with Vault, often for commonly used
implementations, and require no additional operator intervention to run.
Built-in plugins are just like any other backend code inside Vault.
#### External plugin
External plugins are not shipped with Vault and require additional operator
intervention to run. Vault's external plugins are completely separate,
standalone applications that Vault executes and communicates with over RPC.
Each time a Vault secret engine or auth method is mounted, a new process is
spawned.
#### External multiplexed plugin
An external plugin may make use of [plugin multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing).
A multiplexed plugin allows a single plugin process to be used for multiple
mounts of the same type.
### Request
A request being made to Vault contains all relevant parameters and context in order
for Vault to be able to act accordingly. Vault represents this request internally
in a way that understands:
* Mount Point - Used to generate relative paths.
* Mount Type - The type of mount the request is interacting with.
* Namespace - The [namespace](/vault/docs/enterprise/namespaces) the request is taking place within.
* Operation - See [the operation description](#operation) below for the supported operations.
* Path - The full path of the request.
<Note title="Request's Namespace">
The Namespace a request is targeting may be specified either as part of the path
or the Vault Namespace header.
</Note>
Please see our Enterprise documentation for further information on how
[Namespaces can be specified](/vault/docs/enterprise/namespaces#vault-api-and-namespaces)
as part of a request.
#### Operation
The request's operation can be one of the following: `alias-lookahead`, `create`, `delete`,
`header`, `help`, `list`, `patch`, `read`, `renew`, `resolve-role`, `revoke`, `rollback`,
`update`.
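As a rough illustration, common CLI commands map onto a subset of these operations (the `secret/` paths are illustrative):

```shell
vault kv get secret/my-app              # read
vault kv list secret/                   # list
vault kv put secret/my-app key=value    # create or update
vault kv delete secret/my-app           # delete
```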
### Secret
A secret is the term for anything returned by Vault which
contains confidential or cryptographic material. Not everything returned by
Vault is a secret, for example system configuration, status information, or
policies are not considered secrets. Dynamic secrets always have an associated lease, while static secrets do not.
This means clients cannot assume that a dynamic secret's contents can be used
indefinitely. Vault will revoke a dynamic secret at the end of the lease, and an
operator may intervene to revoke the dynamic secret before the lease is over. This
contract between Vault and its clients is critical, as it allows for changes
in keys and policies without manual intervention.
### Secrets engine
A secrets engine is responsible for managing secrets.
Simple secrets engines, such as the "kv" secrets engine, return the same
secret when queried. Some secrets engines support using policies to
dynamically generate a secret each time they are queried. This allows for
unique secrets to be used which allows Vault to do fine-grained revocation and
policy updates. As an example, a MySQL secrets engine could be configured with
a "web" policy. When the "web" secret is read, a new MySQL user/password pair
will be generated with a limited set of privileges for the web server.
### Server
Vault depends on a long-running instance which operates as a
server. The Vault server provides an API which clients interact with and
manages the interaction between all the secrets engines, ACL enforcement, and
secret lease revocation. Having a server based architecture decouples clients
from the security keys and policies, enables centralized audit logging, and
simplifies administration for operators.
### Storage backend
A storage backend is responsible for durable storage of
_encrypted_ data. Backends are not trusted by Vault and are only expected to
provide durability. The storage backend is configured when starting the Vault
server.
---
layout: docs
page_title: Introduction
description: >-
Welcome to the intro guide to Vault! This guide is the best place to start
with Vault. We cover what Vault is, what problems it can solve, how it
compares to existing software, and contains a quick start for using Vault.
---
# What is Vault?
HashiCorp Vault is an identity-based secrets and encryption management system.
It provides encryption services that are gated by authentication and authorization
methods to ensure secure, auditable and restricted access to _secrets_.
A secret is anything that you want to tightly control access to, such as tokens,
API keys, passwords, encryption keys or certificates. Vault provides a unified
interface to any secret, while providing tight access control and recording a
detailed audit log.
A modern system requires access to a multitude of secrets: database credentials,
API keys for external services, credentials for service-oriented architecture
communication, and more. It can be difficult to understand who is accessing which
secrets, especially since this can be platform-specific. Adding on key rolling,
secure storage, and detailed audit logs is almost impossible without a custom
solution. This is where Vault steps in.
Vault validates and authorizes clients (users, machines, apps) before providing
them access to secrets or stored sensitive data.
## How does Vault work?
Vault works primarily with tokens, and each token is associated with the client's policy. Each policy is path-based, and policy rules constrain the actions and accessibility of those paths for each client. With Vault, you can create tokens manually and assign them to your clients, or clients can log in and obtain a token. The illustration below displays Vault's core workflow.
The core Vault workflow consists of four stages:
- **Authenticate:** Authentication in Vault is the process by which a client supplies information that Vault uses to determine if they are who they say they are. Once the client is authenticated against an auth method, a token is generated and associated to a policy.
- **Validation:** Vault validates the client against third-party trusted sources, such as GitHub, LDAP, AppRole, and more.
- **Authorize**: A client is matched against the Vault security policy. This policy is a set of rules defining which API endpoints a client has access to with its Vault token. Policies provide a declarative way to grant or forbid access to certain paths and operations in Vault.
- **Access**: Vault grants access to secrets, keys, and encryption capabilities by issuing a token based on policies associated with the client’s identity. The client can then use their Vault token for future operations.
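A minimal command-line sketch of these stages, assuming the `userpass` auth method is enabled and `alice` is a configured (hypothetical) user:

```shell
# Authenticate and validate: Vault checks the credentials against the auth method
vault login -method=userpass username=alice

# Authorize and access: the returned token carries alice's policies, and
# subsequent requests present it to read permitted paths
vault kv get secret/my-app
```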
## Why Vault?
Most enterprises today have credentials sprawled across their organizations. Passwords, API keys, and credentials are stored in plain text, app source code, config files, and other locations. Because these credentials live everywhere, the sprawl can make it difficult and daunting to really know who has access and authorization to what. Having credentials in plain text also increases the potential for malicious attacks, both by internal and external attackers.
Vault was designed with these challenges in mind. Vault takes all of these credentials and centralizes them so that they are defined in one location, which reduces unwanted exposure to credentials. But Vault takes it a few steps further by making sure users, apps, and systems are authenticated and explicitly authorized to access resources, while also providing an audit trail that captures and preserves a history of clients' actions.
The key features of Vault are:
- **Secure Secret Storage**: Arbitrary key/value secrets can be stored
in Vault. Vault encrypts these secrets prior to writing them to persistent
storage, so gaining access to the raw storage isn't enough to access
your secrets. Vault can write to disk, [Consul](https://www.consul.io/),
and more.
- **Dynamic Secrets**: Vault can generate secrets on-demand for some
systems, such as AWS or SQL databases. For example, when an application
needs to access an S3 bucket, it asks Vault for credentials, and Vault
will generate an AWS keypair with valid permissions on demand. After
creating these dynamic secrets, Vault will also automatically revoke them
  after the lease is up (see the sketch following this list).
- **Data Encryption**: Vault can encrypt and decrypt data without storing
it. This allows security teams to define encryption parameters and
developers to store encrypted data in a location such as a SQL database
without having to design their own encryption methods.
- **Leasing and Renewal**: All secrets in Vault have a _lease_ associated
with them. At the end of the lease, Vault will automatically revoke that
secret. Clients are able to renew leases via built-in renew APIs.
- **Revocation**: Vault has built-in support for secret revocation. Vault
can revoke not only single secrets, but a tree of secrets, for example
all secrets read by a specific user, or all secrets of a particular type.
Revocation assists in key rolling as well as locking down systems in the
case of an intrusion.
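A short sketch of the dynamic secrets, leasing, and revocation features above; the AWS engine mount, role name, and lease ID are illustrative:

```shell
# Request dynamic AWS credentials; the response includes a lease_id
vault read aws/creds/my-role

# Renew the lease before it expires...
vault lease renew aws/creds/my-role/<lease_id>

# ...or revoke the credentials early
vault lease revoke aws/creds/my-role/<lease_id>
```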
<Tip title="Vault use cases">
Learn more about Vault [use cases](/vault/docs/use-cases).
</Tip>
## What is HCP Vault Dedicated?
HashiCorp Cloud Platform (HCP) Vault Dedicated is a hosted version of Vault, which is operated by HashiCorp to allow organizations to get up and running quickly. HCP Vault Dedicated uses the same binary as self-hosted Vault, which means you will have a consistent user experience. You can use the same Vault clients to communicate with HCP Vault Dedicated as you use to communicate with a self-hosted Vault. Refer to the [HCP Vault Dedicated](/hcp/docs/vault) documentation to learn more.
<Tip title="Hands-on">
Try the [Get started](/vault/tutorials/cloud) tutorials to set up a managed
Vault cluster.
</Tip>
## Community
We welcome questions, suggestions, and contributions from the community.
- Ask questions in [HashiCorp Discuss](https://discuss.hashicorp.com/c/vault/30).
- Read our [contributing guide](https://github.com/hashicorp/vault/blob/main/CONTRIBUTING.md).
- [Submit an issue](https://github.com/hashicorp/vault/issues/new/choose) for bugs and feature requests.
---
layout: docs
page_title: Vault interoperability matrix
description: >-
Reference list of Vault integration partners
---
# Vault interoperability matrix
To support a variety of use cases, Vault verifies protocol implementation and
integrations with partner products, appliances, and applications that support
advanced data protection features.
<Highlight title="Is your integration missing?">
Join the [Vault integration program](/vault/docs/partnerships) to get your
integration verified and added or reach out to
[[email protected]](mailto:[email protected])
with questions.
</Highlight>
## IPv6 validation and compliance
[Vault Enterprise supports IPv6](https://www.hashicorp.com/trust/compliance/vault-enterprise)
in compliance with OMB Mandate M-21-07 and Federal IPv6 policy requirements
for the following operating systems and storage backends.
**Self-attested testing covers functionality related to the HSM, FIPS 140-2, and
combined HSM/FIPS 140-2 editions.**
Operating system | OS version | Validation | Vault version
---------------- | ------------------------------ | ------------ | -----------------------
FreeBSD | N/A | N/A | Untested
Linux            | Amazon Linux (version 2023)    | Self-attested | ent-1.18+
Linux | openSUSE Leap (version 15.6) | Self-attested | ent-1.18+
Linux | RHEL (versions 8.10, 9.4) | Self-attested | ent-1.18+
Linux | SUSE SLES (version 15.6) | Self-attested | ent-1.18+
Linux | Ubuntu (versions 20.04, 24.04) | Self-attested | ent-1.18+
MacOS | N/A | N/A | Untested
NetBSD | N/A | N/A | Untested
OpenBSD | N/A | N/A | Untested
Windows | N/A | N/A | Untested
<span>
<em>
<b>Last Updated</b>:
October 14, 2024
</em>
</span>
<Note title="IPv6 limitations for Windows">
IPv6 does not work with external plugins (plugins not built into Vault) when
running on Windows in server mode because they default to IPv4 and Vault
cannot override that behavior.
</Note>
Backend storage system | Validation | Vault version
----------------------- | ------------- | -----------------------
Consul | N/A | Untested
Integrated Raft storage | Self-attested | ent-1.18+
<span>
<em>
<b>Last Updated</b>:
October 14, 2024
</em>
</span>
## Auto unsealing and HSM support
Hardware Security Module (HSM) support reduces the operational complexity of
securing unseal keys by delegating that responsibility to trusted devices or
services instead of humans. At startup, Vault connects to the delegated device
or service and provides an encrypted root key for decryption.
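For example, a minimal auto-unseal stanza in the server configuration, using AWS KMS as the delegated service (the config path, region, and key alias are placeholders):

```shell
cat >> /etc/vault.d/vault.hcl <<'EOF'
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/my-vault-unseal-key"
}
EOF
```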
Vault implements HSM support with the following features:
Feature | Introduced
-------------------------------------------------------------------- | ----------
[Auto unsealing](/vault/docs/concepts/seal#auto-unseal) | Vault 0.9
[Entropy augmentation](/vault/docs/enterprise/entropy-augmentation) | Vault 1.3
[Seal wrapping](/vault/docs/enterprise/sealwrap) | Vault 0.9
The following table outlines the implementation status of HSM-related features
for partner products and the minimum Vault version required for verified
functionality.
| Partner | Product | Auto unseal | Entropy augment | Seal wrap | Managed keys | Vault verified
| ----------------- | -------------------------------------- | ----------- | --------------- | --------- |------------- | -------------
| AliCloud | AliCloud KMS | Yes | **No** | Yes | **No** | 0.11.2+
| Atos | Trustway Proteccio HSM | Yes | Yes | Yes | **No** | 1.9+
| AWS | AWS KMS | Yes | Yes | Yes | Yes | 0.9+
| Crypto4a          | QxEDGE&trade; HSP                      | Yes         | Yes             | Yes       | Yes          | 1.9+
| Entrust | nShield HSM | Yes | Yes | Yes | Yes | 1.3+
| Fortanix | FX2200 Series | Yes | Yes | Yes | **No** | 0.10+
| FutureX | Vectera Plus, KMES Series 3 | Yes | Yes | Yes | Yes | 1.5+
| FutureX | VirtuCrypt cloud HSM | Yes | Yes | Yes | Yes | 1.5+
| Google | GCP Cloud KMS | Yes | **No** | Yes | Yes | 0.9+
| Marvell | Cavium HSM | Yes | Yes | Yes | Yes | 1.11+
| Microsoft | Azure Key Vault | Yes | **No** | Yes | Yes | 0.10.2+
| Oracle | OCI KMS | Yes | **No** | Yes | **No** | 1.2.3+
| PrimeKey | SignServer Hardware Appliance | Yes | Yes | Yes | **No** | 1.6+
| Private Machines | ENFORCER Blade | Yes | **No** | Yes | **No** | 1.17.3+
| Qrypt | Quantum Entropy Service | **No** | Yes | **No** | **No** | 1.11+
| Quintessence Labs | TSF 400 | Yes | Yes | Yes | **No** | 1.4+
| Securosys SA | Primus HSM | Yes | Yes | Yes | Yes | 1.7+
| Thales | Luna HSM | Yes | Yes | Yes | Yes | 1.4+
| Thales | Luna TCT HSM | Yes | Yes | Yes | Yes | 1.4+
| Thales | CipherTrust Manager | Yes | Yes | Yes | **No** | 1.7+
| Utimaco | HSM | Yes | Yes | Yes | Yes | 1.4+
| Yubico | YubiHSM 2 | Yes | Yes | Yes | Yes | 1.17.2+
<span>
<em>
<b>Last Updated</b>:
May 03, 2023
</em>
</span>
## External key management (EKMS)
Vault centrally manages and automates encryption keys across environments so
customers can [manage external encryption keys](/vault/docs/secrets/key-management)
used in third party services and products with the following plugins:
Abbreviation | Full plugin name
------------ | ----------------
EKMMSSQL | [Vault EKM provider for SQL server](/vault/docs/platform/mssql)
KV | [Key/Value secrets engine](/vault/docs/secrets/kv)
KMSE | [Key Management secrets engine](/vault/docs/secrets/key-management)
KMIP | [KMIP secrets engine](/vault/docs/secrets/kmip)
PKCS#11 | [PKCS#11 provider](/vault/docs/enterprise/pkcs11-provider)
Transit | [Transit secrets engine](/vault/docs/secrets/transit)
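For example, a minimal sketch of enabling the Transit plugin from the table above and creating a key that an external service can use for encryption (the key name is illustrative):

```shell-session
$ vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/

$ vault write -f transit/keys/my-app-key
Success! Data written to: transit/keys/my-app-key
```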
<Note title="Vault verified vs HCP Vault verified">
HCP Vault verified integrations work with the current version of HCP Vault
Dedicated. Self-managed Vault instances must meet the required minimum version
for verification guarantees.
</Note>
The table below indicates the plugin support for partner products, the
verification status for HCP Vault Dedicated, and the minimum Vault version
required for verified behavior in self-managed Vault instances:
| Partner | Product | Vault plugin | Vault verified | HCP Vault verified
| ----------------- | ------------------------ | ------------ | -------------- | ------------------
| AWS | AWS KMS | KMSE | 1.8+ | Yes
| Baffle | Shield | KV | 1.3+ | **No**
| Bloombase | StoreSafe | KMIP | 1.9+ | N/A
| Cloudian | HyperStore 7.5.1 | KMIP | 1.12+ | N/A
| Cockroach Labs | Cockroach Cloud DB | KMSE | 1.10+ | N/A
| Cockroach Labs | Cockroach DB | Transit | 1.10+ | Yes
| Cohesity | Cohesity DataPlatform | KMIP | 1.13.2+ | N/A
| Commvault Systems | CommVault | KMIP | 1.9+ | N/A
| Cribl | Cribl Stream | KV | 1.8+ | Yes
| DataStax | DataStax Enterprise | KMIP | 1.11+ | Yes
| Dell | PowerMax | KMIP | 1.12.1+ | N/A
| Dell | PowerProtect DDOS 8.0.X | KMIP | 1.15.2+ | N/A
| EnterpriseDB | Postgres Advanced Server | KMIP | 1.12.6+ | N/A
| Garantir | GaraSign | Transit | 1.5+ | Yes
| Google | Google KMS | KMSE | 1.9+ | N/A
| HPE               | Ezmeral Data Fabric      | KMIP         | 1.2+           | N/A
| Intel | Key Broker Service | KMIP | 1.11+ | N/A
| JumpWire | JumpWire | KV | 1.12+ | Yes
| Micro Focus | Connected Mx | Transit | 1.7+ | **No**
| Microsoft | Azure Key Vault | KMSE | 1.6+ | N/A
| Microsoft         | MSSQL                    | EKMMSSQL     | 1.9+           | **No**
| MinIO | Key Encryption Service | KV | 1.11+ | **No**
| MongoDB | Atlas | KMSE | 1.6+ | N/A
| MongoDB | MongoDB Enterprise | KMIP | 1.2+ | N/A
| MongoDB | Client Libraries | KMIP | 1.9+ | N/A
| NetApp | ONTAP | KMIP | 1.2+ | N/A
| NetApp | StorageGrid | KMIP | 1.2+ | N/A
| Nutanix | AHV/AOS 6.5.1.6 | KMIP | 1.12+ | N/A
| Ondat | Trousseau | Transit | 1.9+ | Yes
| Oracle | MySQL | KMIP | 1.2+ | N/A
| Oracle | Oracle 19c | PKCS#11 | 1.11+ | N/A
| Percona | Server 8.0 | KMIP | 1.9+ | N/A
| Percona | XtraBackup 8.0 | KMIP | 1.9+ | N/A
| Rubrik | CDM 9.1 (Edge) | KMIP | 1.16.2+ | N/A
| Scality | Scality RING | KMIP | 1.12+ | N/A
| Snowflake | Snowflake | KMSE | 1.6+ | N/A
| Veeam             | Kasten K10               | Transit      | 1.9+           | N/A
| Veritas | NetBackup | KMIP | 1.13.9+ | N/A
| VMware | vSphere 7.0, 8.0 | KMIP | 1.2+ | N/A
| VMware | vSan 7.0, 8.0 | KMIP | 1.2+ | N/A
| Yugabyte | Yugabyte Platform | Transit | 1.9+ | **No**
<span>
<em>
<b>Last Updated</b>:
August 25, 2023
</em>
</span>
---
layout: docs
page_title: Manage custom messages
description: >-
Use custom messages in the Vault UI to share system-wide alerts.
---
# Manage custom messages in the Vault UI
@include 'alerts/enterprise-only.mdx'
Use custom banners and modals in the Vault UI to share system-wide alerts for all Vault UI users.
<Tip title="Best practices for UI messages">
1. **Messages are sticky**. Users can only dismiss messages **temporarily**.
The message reappears if the user refreshes their browser window or logs out
of the current session.
1. **Messages are intrusive**. Limit the number of active messages to minimize
the intrusion on users and reduce the chances that they will dismiss the
message without reading it.
1. **Messages are inheritable**. Child namespaces inherit all messages created
on the parent namespace. Take advantage of inheritance to reach the greatest
number of users with the smallest number of messages.
1. **Delete old messages**. Vault supports a maximum of 100 messages per namespace at a
time. Practice good message hygiene by regularly deleting expired and outdated
messages.
</Tip>
## Before you start
- **You must have Vault Enterprise 1.16.0 or higher installed.**
- **You must have the appropriate permissions**:
- You must have `list` permission for the `sys/config/ui/custom-messages` endpoint.
- To **create messages**, you must have `read` permission for the `sys/config/ui/custom-messages/:id` endpoint and `create` permission for the `sys/config/ui/custom-messages` endpoint.
- To **edit messages**, you must have `read` and `update` permission for the `sys/config/ui/custom-messages/:id` endpoint.
- To **delete messages**, you must have `delete` permission for the `sys/config/ui/custom-messages/:id` endpoint. The policy sketch after this list grants all three permission sets.
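A minimal policy sketch, assuming you want to grant the full set of message-management permissions described above:

```hcl
# List existing custom messages and create new ones
path "sys/config/ui/custom-messages" {
  capabilities = ["list", "create"]
}

# Read, edit, and delete individual messages by ID
path "sys/config/ui/custom-messages/*" {
  capabilities = ["read", "update", "delete"]
}
```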
## Add a custom message
1. Navigate to the **Settings** section in the Vault UI sidebar and select **Custom
Messages**.
1. On the **Custom messages** page, select whether you want the message
to appear on the Vault UI login page or after a user logs in.
1. Select the **+ Create message** button in the toolbar to open the **Create message** form.
1. On the **Create message** form:
- Select the locations where your message should appear.
- Select the message type.
- Provide a title and the message text. For important messages, we recommend
keeping the text short and including a link to more information rather
than writing a longer message that users may not take the time to read.
- Set a start time when your message will publish. By default, messages
publish at midnight in your local timezone.
- Set an optional end time for the message. By default, messages do not
expire.
1. Click **Preview** to see how your message will appear to users.
1. Click **Create message** to save your new message.
## Create messages for a specific namespace
Child [namespaces](/vault/docs/enterprise/namespaces) inherit all messages
created on the parent namespace. For example, assume you have a
cluster with the following namespace hierarchy:
<CodeBlockConfig hideClipboard>
```plaintext
─ admin
  ├── finance
  └── marketing
      ├── digital-marketing
      └── events
```
</CodeBlockConfig>
Custom messages created on the `admin` namespace apply to all child namespaces:
`finance`, `marketing`, `marketing/digital-marketing` and `marketing/events`.
To create a custom message that only targets the marketing team, log into the `marketing` namespace before creating your message.
To create a message under a specific namespace:
1. On your Vault login page, enter the namespace you want to target in
the **Namespace** text field.
1. Select an appropriate authentication method and log in.
1. Select **Custom Messages**.
1. Select the **+ Create message** button in the toolbar to open and fill out
the **Create message** form.
1. Click **Preview** to see how your message will appear to users.
1. Click **Create message** to save your new message.
Your new message only appears when a user logs into the targeted
namespace or one of its child namespaces. You can verify that your
message has the correct behavior by logging into an admin or parent
namespace. The message should only appear when you switch to the
targeted namespace.

## Edit a custom message
You can open the edit screen for a custom message in two places from the **Custom messages** page.
**From the additional options menu for the message**:
1. Find the custom message you want to edit.
1. Click the additional option button (three dots).
1. Select **Edit** from the dropdown menu to bring up the edit page.
**From the message details page**:
1. Find the custom message you want to edit.
1. Select the message to open the message details page.
1. Click the **Edit message** button to bring up the edit page.
Fill in the information that you would like to edit for the custom message.
## Delete a custom message
<Warning title="Message deletion is permanent">
Deleted messages cannot be recovered. If you delete a message by mistake, you
will have to recreate it.
</Warning>
You can delete a custom message in two places from the **Custom messages** page.
**From the additional options menu for the message**:
1. Find the custom message you want to delete.
1. Click the additional option button (three dots).
1. Select **Delete** from the dropdown menu to bring up the delete confirmation modal.
**From the message details page**:
1. Find the custom message you want to delete.
1. Select the message to open the message details page.
1. Click the **Delete message** button to bring up the delete confirmation modal.
---
layout: docs
page_title: Prevent lease explosions
description: >-
Learn how to prevent lease explosions in Vault.
---
# Prevent lease explosions
As your Vault environment scales to meet deployment needs, you run the risk of
lease explosions. Lease explosions can occur when a Vault cluster is
over-subscribed and clients overwhelm system resources with consistent,
high-volume API requests.
Unchecked lease explosions create a memory drain on the active node, which can
cascade to other nodes and result in denial-of-service issues for the entire
cluster.
## Look for early warning signs
Cleaning up after a lease explosion is time consuming and resource intensive, so
we strongly recommend monitoring your Vault instance for signals that your
Vault deployment has matured and requires tuning:
Issue | Possible cause
-------------------------------------------------------------------------------- | --------------
Unused leases consume storage space for extended periods while waiting to expire | The TTL values for dynamic secret leases or authentication tokens may be too high
Lease revocation fails frequently | Failures in an external service (e.g., for dynamic secrets)
Build up of leases associated with unused credentials | Clients are not reusing valid, existing leases
Lease revocation is slow | Insufficient IOPS for the storage backend
Rapid lease count growth disproportionate to the number of clients | Misconfiguration or anti-patterns in client usage
## Enforce client best practices
High lease counts can degrade system performance:
- Use the smallest default time-to-live (TTL) possible for tokens and leases to
  avoid excessive unexpired lease backlogs and high-volume, simultaneous
  expirations (see the token example after this list).
- Review telemetry for aberrant client behavior that might lead to rapid
over-subscription.
- Limit the number of simultaneous dynamic secret requests and service token
authentication requests.
- Ensure that machine clients adhere to [recommended AppRole patterns](/vault/tutorials/recommended-patterns/pattern-approle).
- Review [AppRole best practices](https://www.hashicorp.com/blog/how-and-why-to-use-approle-correctly-in-hashicorp-vault).
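For example, a sketch of requesting a deliberately short-lived service token (the TTL values are illustrative, not recommendations):

```shell-session
$ vault token create -ttl=15m -explicit-max-ttl=1h
```

The `-explicit-max-ttl` flag caps the token's total lifetime even if clients renew it.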
## Set reasonable TTL guardrails
Choose appropriate defaults for your situation and use resource quotas as
guardrails against lease explosion. You can set default and maximum TTLs
globally, in the mount configuration for a specific authN or secrets plugin, and
at the role level (e.g., database credential roles); a mount-level example follows the list below.
Vault prioritizes TTL values by granularity:
- Global values act as the default.
- Plugin TTL values override global values.
- Role, group, and user level TTL values override plugin and global values.
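For example, a sketch of overriding the global defaults at the mount level for a database secrets engine (the mount path and values are illustrative):

```shell-session
$ vault secrets tune -default-lease-ttl=1h -max-lease-ttl=24h database/
```

Role-level TTLs configured on individual database roles would then override these mount-level values.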
<Note title="TTL changes are not retroactive">
Leases and tokens keep the TTL value in effect at the time of their creation. When you
adjust TTL values, the new limits only apply to leases and tokens issued after
you deploy the changes.
</Note>
## Monitor key metrics and logs
Proactive monitoring is key to finding problematic behavior and usage patterns
before they escalate:
- Review [key Vault metrics](/well-architected-framework/reliability/reliability-vault-monitoring-key-metrics)
- Understand [metric anti-patterns](/well-architected-framework/operational-excellence/security-vault-anti-patterns#poor-metrics-or-no-telemetry-data)
- Monitor [Vault audit device logs](/vault/tutorials/monitoring/monitor-telemetry-audit-splunk) for quota-related failures.
## Control resource usage with quotas
Use API rate limiting quotas and
[lease count quotas](/vault/tutorials/operations/resource-quotas#lease-count-quotas)
to limit the number of leases generated on a per-mount basis and control
resource consumption for your Vault instance where hard limits make sense.
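For example, a minimal sketch of a global API rate limit quota (the quota name and rate are illustrative):

```shell-session
$ vault write sys/quotas/rate-limit/global-rate rate=500
```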
## Consider batch tokens
If your environment inherently leads to a large number of lease requests,
consider using batch tokens over service tokens.
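For example, a sketch of issuing a batch token (the TTL is illustrative):

```shell-session
$ vault token create -type=batch -ttl=30m
```

Because Vault does not persist batch tokens to storage, they avoid the per-token lease overhead that service tokens incur.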
The following resources can help you decide if batch tokens are reasonable for
your situation:
- [Vault service tokens vs batch tokens](/vault/tutorials/tokens/batch-tokens#service-tokens-vs-batch-tokens)
- [Service vs batch token lease handling](/vault/docs/concepts/tokens#service-vs-batch-token-lease-handling)
## Next steps
Proactive monitoring and periodic usage analysis can help you identify potential
problems before they escalate.
- Brush up on [general Vault resource quotas](/vault/docs/concepts/resource-quotas).
- Learn about [lease count quotas for Vault Enterprise](/vault/docs/enterprise/lease-count-quotas).
- Learn how to [query audit device logs](/vault/tutorials/monitoring/query-audit-device-logs).
- Review [recommended Vault lease limits](/vault/docs/internals/limits#lease-limits).
- Review [lease anti-patterns](/well-architected-framework/operational-excellence/security-vault-anti-patterns#not-adjusting-the-default-lease-time) for a clear explanation of the issue and solution.
---
layout: docs
page_title: Create a lease count quota
description: >-
Step-by-step instructions for creating lease count quotas for an
authentication plugin
---
# Create a lease count quota
Use lease count quotas to limit the number of leases generated on a per-mount
basis and control resource consumption for your Vault instance where hard
limits make sense.
## Before you start
- **Confirm you have access to the root or administration namespace for your
Vault instance**. Modifying lease count quotas is a restricted activity.
## Step 1: Determine the appropriate granularity
The granularity of your lease limits can affect the performance of your Vault
cluster. In particular, if your lease limits cause the number of rejected
requests to increase dramatically, the increased audit logging may impact Vault
performance.
Review past system behavior to identify whether the quota limits should be
inheritable or limited to a specific role.
## Step 2: Apply the count quota
<Tabs>
<Tab heading="CLI" group="cli">
Use `vault write` and the `sys/quotas/lease-count/{quota-name}` mount path to
create a new lease count quota:
```shell-session
$ vault write \
sys/quotas/lease-count/<QUOTA_NAME> \
name="<QUOTA_NAME>" \
path="<PLUGIN_MOUNT_PATH>" \
role="<OPTIONAL_AUTHN_ROLE>" \
max_leases=<LEASE_LIMIT>
```
For example, to create a targeted quota limit called **webapp-tokens** on the
`webapp` role for the `approle` plugin at the default mount path:
```shell-session
$ vault write \
sys/quotas/lease-count/webapp-tokens \
name="webapp-tokens" \
path="auth/approle" \
role="webapp" \
max_leases=100
Success! Data written to: sys/quotas/lease-count/webapp-tokens
```
</Tab>
<Tab heading="API" group="api">
1. Create a payload file with your quota settings.
```json
{
"name": "<QUOTA_NAME>",
"path": "<PLUGIN_MOUNT_PATH>",
"role": "<OPTIONAL_AUTHN_ROLE>",
"max_leases": <LEASE_LIMIT>,
}
```
For example, to create a targeted quota limit called **webapp-tokens** on the
`webapp` role for the `approle` plugin at the default mount path:
```json
{
"name": "webapp-tokens",
"path": "auth/approle",
"role": "webapp",
"max_leases": 100,
}
```
1. Call the `/sys/quotas/lease-count/{quota-name}` endpoint to apply the lease
count quota. For example, to apply the `webapp-tokens` quota:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @payload.json \
${VAULT_ADDR}/v1/sys/quotas/lease-count/webapp-tokens
```
<Note title="Silent endpoint">
The `/sys/quotas/lease-count/{quota-name}` endpoint succeeds silently.
</Note>
</Tab>
</Tabs>
## Step 3: Confirm the quota settings
<Tabs>
<Tab heading="CLI" group="cli">
Use `vault read` and the `sys/quotas/lease-count/{quota-name}` mount path to
display the lease count quota details:
```shell-session
$ vault read sys/quotas/lease-count/<QUOTA_NAME>
```
For example, to read the **webapp-tokens** quota details:
```shell-session
$ vault read sys/quotas/lease-count/webapp-tokens
Key Value
--- -----
counter 0
inheritable true
max_leases 100
name webapp-tokens
path auth/approle/
role webapp
type lease-count
```
</Tab>
<Tab heading="API" group="api">
Call the `sys/quotas/lease-count/{quota-name}` endpoint to display the lease
count quota details. For example, to read the **webapp-tokens** quota details:
```shell-session
$ curl \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--request GET \
--silent \
${VAULT_ADDR}/v1/sys/quotas/lease-count/webapp-tokens | jq
{
"request_id": "188e22f1-dc1a-251a-a0a1-005e256fe70f",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"counter": 0,
"inheritable": false,
"max_leases": 100,
"name": "webapp-tokens",
"path": "auth/approle/",
"role": "webapp",
"type": "lease-count"
},
"wrap_info": null,
"warnings": null,
"auth": null
}
```
</Tab>
</Tabs>
## Next steps
Proactive monitoring and periodic usage analysis can help you identify potential
problems before they escalate.
- Brush up on [general Vault resource quotas](/vault/docs/concepts/resource-quotas).
- Learn about [lease count quotas for Vault Enterprise](/vault/docs/enterprise/lease-count-quotas).
- Learn how to [query audit device logs](/vault/tutorials/monitoring/query-audit-device-logs).
- Review [key Vault metrics for common health checks](/well-architected-framework/reliability/reliability-vault-monitoring-key-metrics)
---
layout: docs
page_title: Server Configuration
description: Vault server configuration reference.
---
# Vault configuration
Outside of development mode, Vault servers are configured using a file.
The format of this file is [HCL](https://github.com/hashicorp/hcl) or JSON.
@include 'plugin-file-permissions-check.mdx'
An example configuration is shown below:
<Note>
For multi-node clusters, replace the loopback address with a valid, routable IP address for each Vault node in your network.
Refer to the [Vault HA clustering with integrated storage tutorial](/vault/tutorials/raft/raft-storage) for a complete scenario.
</Note>
```hcl
ui = true
cluster_addr = "https://127.0.0.1:8201"
api_addr = "https://127.0.0.1:8200"
disable_mlock = true
storage "raft" {
path = "/path/to/raft/data"
node_id = "raft_node_id"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_cert_file = "/path/to/full-chain.pem"
tls_key_file = "/path/to/private-key.pem"
}
telemetry {
statsite_address = "127.0.0.1:8125"
disable_hostname = true
}
```
After the configuration is written, use the `-config` flag with `vault server`
to specify where the configuration is.
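For example (the path is illustrative):

```shell-session
$ vault server -config=/etc/vault.d/vault.hcl
```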
## Parameters
- `storage` `([StorageBackend][storage-backend]: <required>)` –
Configures the storage backend where Vault data is stored. Please see the
[storage backends documentation][storage-backend] for the full list of
available storage backends. Running Vault in HA mode would require
coordination semantics to be supported by the backend. If the storage backend
supports HA coordination, HA backend options can also be specified in this
parameter block. If not, a separate `ha_storage` parameter should be
configured with a backend that supports HA, along with corresponding HA
options.
- `ha_storage` `([StorageBackend][storage-backend]: nil)` – Configures
the storage backend where Vault HA coordination will take place. This must be
an HA-supporting backend. If not set, HA will be attempted on the backend
given in the `storage` parameter. This parameter is not required if the
storage backend supports HA coordination and if HA specific options are
already specified with `storage` parameter. (Refer to [Use Integrated Storage
for HA
Coordination](/vault/tutorials/raft/raft-ha-storage)
for a usage example.) A configuration sketch follows this entry.
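A minimal sketch of pairing a non-HA backend with Consul for HA coordination (the bucket, address, and path are illustrative):

```hcl
# S3 stores Vault data but cannot coordinate HA on its own
storage "s3" {
  bucket = "my-vault-data"
}

# Consul provides the lock and leader election for HA
ha_storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}
```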
- `listener` `([Listener][listener]: <required>)` – Configures how
Vault is listening for API requests.
- `user_lockout` `([UserLockout][user-lockout]: nil)` –
Configures the user-lockout behaviour for failed logins. For more information, please see the
[user lockout configuration documentation](/vault/docs/configuration/user-lockout).
- `seal` `([Seal][seal]: nil)` – Configures the seal type to use for
auto-unsealing, as well as for
[seal wrapping][sealwrap] as an additional layer of data protection.
- `reporting` `([Reporting][reporting]: nil)` -
Configures options relating to license reporting in Vault.
- `cluster_name` `(string: <generated>)` – Specifies a human-readable
identifier for the Vault cluster. If omitted, Vault will generate a value.
The cluster name is included as a label in some [telemetry metrics](/vault/docs/internals/telemetry/metrics/).
The cluster name is safe to update on an existing Vault cluster.
- `cache_size` `(string: "131072")` – Specifies the size of the read cache used
by the physical storage subsystem. The value is in number of entries, so the
total cache size depends on the size of stored entries.
- `disable_cache` `(bool: false)` – Disables all caches within Vault, including
the read cache used by the physical storage subsystem. This will very
significantly impact performance.
- `disable_mlock` `(bool: false)` – Disables the server from executing the
`mlock` syscall. `mlock` prevents memory from being swapped to disk. Disabling
`mlock` is not recommended unless using [integrated storage](/vault/docs/internals/integrated-storage).
Follow the additional security precautions outlined below when disabling `mlock`.
This can also be provided via the environment variable `VAULT_DISABLE_MLOCK`.
Disabling `mlock` is not recommended unless the systems running Vault only
use encrypted swap or do not use swap at all. Vault only supports memory
locking on UNIX-like systems that support the mlock() syscall (Linux, FreeBSD, etc).
Non-UNIX-like systems (e.g. Windows, NaCL, Android) lack the primitives to keep a
process's entire memory address space from spilling to disk, so memory locking is
automatically disabled on those platforms.
Disabling `mlock` is strongly recommended if using [integrated
storage](/vault/docs/internals/integrated-storage) due to
the fact that `mlock` does not interact well with memory mapped files such as
those created by BoltDB, which is used by Raft to track state. When using
`mlock`, memory-mapped files get loaded into resident memory which causes
Vault's entire dataset to be loaded in-memory and cause out-of-memory
issues if Vault's data becomes larger than the available RAM. In this case,
even though the data within BoltDB remains encrypted at rest, swap should be
disabled to prevent Vault's other in-memory sensitive data from being dumped
into disk.
On Linux, to give the Vault executable the ability to use the `mlock`
syscall without running the process as root, run:
```shell
sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))
```
<Note>
Since each plugin runs as a separate process, you need to do the same
for each plugin in your [plugins
directory](/vault/docs/plugins/plugin-architecture#plugin-directory).
</Note>
If you use a Linux distribution with a modern version of systemd, you can add
the following directive to the "[Service]" configuration section:
```ini
LimitMEMLOCK=infinity
```
- `plugin_directory` `(string: "")` – A directory from which plugins are
allowed to be loaded. Vault must have permission to read files in this
directory to successfully load plugins, and the value cannot be a symbolic link.
- `plugin_tmpdir` `(string: "")` - A directory that Vault can create temporary
files in to support Unix socket communication with containerized plugins. If
not set, Vault will use the system's default directory for temporary files.
Generally not necessary unless you are using
[containerized plugins](/vault/docs/plugins/containerized-plugins) and Vault
does not share a temporary folder with other processes, such as if using
systemd's [PrivateTmp](https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#PrivateTmp=)
setting. This can also be specified via the `VAULT_PLUGIN_TMPDIR` environment
variable.
@include 'plugin-file-permissions-check.mdx'
- `plugin_file_uid` `(integer: 0)` – Uid of the plugin directories and plugin binaries if they
are owned by a user other than the user running Vault. This only needs to be set if the
file permissions check is enabled via the environment variable `VAULT_ENABLE_FILE_PERMISSIONS_CHECK`.
- `plugin_file_permissions` `(string: "")` – Octal permission string of the plugin
directories and plugin binaries if they have write or execute permissions for group or others.
This only needs to be set if the file permissions check is enabled via the environment variable
`VAULT_ENABLE_FILE_PERMISSIONS_CHECK`.
- `telemetry` `([Telemetry][telemetry]: <none>)` – Specifies the telemetry
reporting system.
- `default_lease_ttl` `(string: "768h")` – Specifies the default lease duration
for tokens and secrets. This is specified using a label suffix like `"30s"` or
`"1h"`. This value cannot be larger than `max_lease_ttl`.
- `max_lease_ttl` `(string: "768h")` – Specifies the maximum possible lease
duration for tokens and secrets. This is specified using a label
suffix like `"30s"` or `"1h"`. Individual mounts can override this value
by tuning the mount with the `max-lease-ttl` flag of the
[auth](/vault/docs/commands/auth/tune#max-lease-ttl) or
[secret](/vault/docs/commands/secrets/tune#max-lease-ttl) commands.
- `default_max_request_duration` `(string: "90s")` – Specifies the default
maximum request duration allowed before Vault cancels the request. This can
be overridden per listener via the `max_request_duration` value.
- `detect_deadlocks` `(string: "")` - A comma separated string that specifies the internal
mutex locks that should be monitored for potential deadlocks. Currently supported values
include `statelock`, `quotas` and `expiration` which will cause "POTENTIAL DEADLOCK:"
to be logged when an attempt at a core state lock appears to be deadlocked. Enabling this
can have a negative effect on performance due to the tracking of each lock attempt.
- `raw_storage_endpoint` `(bool: false)` – Enables the `sys/raw` endpoint which
allows the decryption/encryption of raw data into and out of the security
barrier. This is a highly privileged endpoint.
- `introspection_endpoint` `(bool: false)` - Enables the `sys/internal/inspect` endpoint
which allows users with a root token or sudo privileges to inspect certain subsystems inside Vault.
- `ui` `(bool: false)` – Enables the built-in web UI, which is available on all
listeners (address + port) at the `/ui` path. Browsers accessing the standard
Vault API address will automatically redirect there. This can also be provided
via the environment variable `VAULT_UI`. For more information, please see the
[ui configuration documentation](/vault/docs/configuration/ui).
- `pid_file` `(string: "")` - Path to the file in which the Vault server's
Process ID (PID) should be stored.
- `enable_response_header_hostname` `(bool: false)` - Enables the addition of an HTTP header
in all of Vault's HTTP responses: `X-Vault-Hostname`. This will contain the
host name of the Vault node that serviced the HTTP request. This information
is best effort and is not guaranteed to be present. If this configuration
option is enabled and the `X-Vault-Hostname` header is not present in a response,
it means there was some kind of error retrieving the host name from the
operating system.
- `enable_response_header_raft_node_id` `(bool: false)` - Enables the addition of an HTTP header
in all of Vault's HTTP responses: `X-Vault-Raft-Node-ID`. If Vault is participating
in a Raft cluster (i.e. using integrated storage), this header will contain the
Raft node ID of the Vault node that serviced the HTTP request. If Vault is not
participating in a Raft cluster, this header will be omitted, whether this configuration
option is enabled or not.
- `log_level` `(string: "info")` - Log verbosity level.
Supported values (in order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`.
This can also be specified via the `VAULT_LOG_LEVEL` environment variable.
<Note>
On SIGHUP (`sudo kill -s HUP` _pid of vault_), if a valid value is specified, Vault will update the existing log level,
overriding (even if specified) both the CLI flag and environment variable.
</Note>
<Note>
Not all parts of Vault's logging can have its log level be changed dynamically this way; in particular,
secrets/auth plugins are currently not updated dynamically.
</Note>
- `log_format` - Equivalent to the [`-log-format` command-line flag](/vault/docs/commands/server#_log_format).
- `log_file` - Equivalent to the [`-log-file` command-line flag](/vault/docs/commands/server#_log_file).
- `log_rotate_duration` - Equivalent to the [`-log-rotate-duration` command-line flag](/vault/docs/commands/server#_log_rotate_duration).
- `log_rotate_bytes` - Equivalent to the [`-log-rotate-bytes` command-line flag](/vault/docs/commands/server#_log_rotate_bytes).
- `log_rotate_max_files` - Equivalent to the [`-log-rotate-max-files` command-line flag](/vault/docs/commands/server#_log_rotate_max_files).
- `experiments` `(string array: [])` - The list of experiments to enable for this node.
Experiments should NOT be used in production, and the associated APIs may have backwards
incompatible changes between releases. Additional experiments can also be specified via
the `VAULT_EXPERIMENTS` environment variable as a comma-separated list, or via the
[`-experiment`](/vault/docs/commands/server#experiment) flag.
- `imprecise_lease_role_tracking` `(bool: "false")` - Skip lease counting by role if there are no role based quotas enabled.
When `imprecise_lease_role_tracking` is set to true and a new role-based quota is enabled, subsequent lease counts start from 0.
`imprecise_lease_role_tracking` affects role-based lease count quotas, but reduces latencies when not using role based quotas.
- `enable_post_unseal_trace` `(bool: false)` - Enables the server to generate a Go trace during the execution of the
`core.postUnseal` function for debug purposes. The resulting trace can be viewed with the `go tool trace` command. The output
directory can be specified with the `post_unseal_trace_directory` parameter. This should only be enabled temporarily for
debugging purposes as it can have a significant performance impact. This can be updated on a running Vault process with a
SIGHUP signal.
- `post_unseal_trace_directory` `(string: "")` - Specifies the directory where the trace file will be written, which must exist
and be writable by the Vault process. If not specified it will create a subdirectory `vault-traces` under the result from
[os.TempDir()](https://pkg.go.dev/os#TempDir) (usually `/tmp` on Unix systems). This can be updated on a running Vault process
with a SIGHUP signal.
### High availability parameters
The following parameters are used on backends that support [high availability][high-availability].
- `api_addr` `(string: "")` – Specifies the address (full URL) to advertise to
other Vault servers in the cluster for client redirection. This value is also
used for [plugin backends][plugins]. This can also be provided via the
environment variable `VAULT_API_ADDR`. In general this should be set as a full
URL that points to the value of the [`listener`](#listener) address.
This can be dynamically defined with a
[go-sockaddr template](https://pkg.go.dev/github.com/hashicorp/go-sockaddr/template)
that is resolved at runtime.
- `cluster_addr` `(string: "")` – Specifies the address to advertise to other
Vault servers in the cluster for request forwarding. This can also be provided
via the environment variable `VAULT_CLUSTER_ADDR`. This is a full URL, like
`api_addr`, but Vault will ignore the scheme (all cluster members always
use TLS with a private key/certificate).
This can be dynamically defined with a
[go-sockaddr template](https://pkg.go.dev/github.com/hashicorp/go-sockaddr/template)
that is resolved at runtime.
- `disable_clustering` `(bool: false)` – Specifies whether clustering features
such as request forwarding are enabled. Setting this to true on one Vault node
will disable these features _only when that node is the active node_. This
parameter cannot be set to `true` if `raft` is the storage type.
### Vault enterprise parameters
The following parameters are only used with Vault Enterprise.
- `disable_sealwrap` `(bool: false)` – Disables using [seal wrapping][sealwrap]
for any value except the root key. If this value is toggled, the new
behavior will happen lazily (as values are read or written).
- `disable_performance_standby` `(bool: false)` – Specifies whether performance
standbys should be disabled on this node. Setting this to true on one Vault
node will disable this feature when this node is Active or Standby. It's
recommended to sync this setting across all nodes in the cluster.
- `license_path` `(string: "")` - Path to license file. This can also be
provided via the environment variable `VAULT_LICENSE_PATH`, or the license
itself can be provided in the environment variable `VAULT_LICENSE`.
- `administrative_namespace_path` `(string: "")` - Specifies the absolute path
to the Vault namespace to be used as an [Administrative namespace](/vault/docs/enterprise/namespaces/create-admin-namespace).
[storage-backend]: /vault/docs/configuration/storage
[listener]: /vault/docs/configuration/listener
[reporting]: /vault/docs/configuration/reporting
[seal]: /vault/docs/configuration/seal
[sealwrap]: /vault/docs/enterprise/sealwrap
[telemetry]: /vault/docs/configuration/telemetry
[sentinel]: /vault/docs/configuration/sentinel
[high-availability]: /vault/docs/concepts/ha
[plugins]: /vault/docs/plugins
integrated storage Follow the additional security precautions outlined below when disabling mlock This can also be provided via the environment variable VAULT DISABLE MLOCK Disabling mlock is not recommended unless the systems running Vault only use encrypted swap or do not use swap at all Vault only supports memory locking on UNIX like systems that support the mlock syscall Linux FreeBSD etc Non UNIX like systems e g Windows NaCL Android lack the primitives to keep a process s entire memory address space from spilling to disk and is therefore automatically disabled on unsupported platforms Disabling mlock is strongly recommended if using integrated storage vault docs internals integrated storage due to the fact that mlock does not interact well with memory mapped files such as those created by BoltDB which is used by Raft to track state When using mlock memory mapped files get loaded into resident memory which causes Vault s entire dataset to be loaded in memory and cause out of memory issues if Vault s data becomes larger than the available RAM In this case even though the data within BoltDB remains encrypted at rest swap should be disabled to prevent Vault s other in memory sensitive data from being dumped into disk On Linux to give the Vault executable the ability to use the mlock syscall without running the process as root run shell sudo setcap cap ipc lock ep readlink f which vault Note Since each plugin runs as a separate process you need to do the same for each plugin in your plugins directory vault docs plugins plugin architecture plugin directory Note If you use a Linux distribution with a modern version of systemd you can add the following directive to the Service configuration section ini LimitMEMLOCK infinity plugin directory string A directory from which plugins are allowed to be loaded Vault must have permission to read files in this directory to successfully load plugins and the value cannot be a symbolic link plugin tmpdir string A directory that Vault can create temporary files in to support Unix socket communication with containerized plugins If not set Vault will use the system s default directory for temporary files Generally not necessary unless you are using containerized plugins vault docs plugins containerized plugins and Vault does not share a temporary folder with other processes such as if using systemd s PrivateTmp https www freedesktop org software systemd man latest systemd exec html PrivateTmp setting This can also be specified via the VAULT PLUGIN TMPDIR environment variable include plugin file permissions check mdx plugin file uid integer 0 Uid of the plugin directories and plugin binaries if they are owned by an user other than the user running Vault This only needs to be set if the file permissions check is enabled via the environment variable VAULT ENABLE FILE PERMISSIONS CHECK plugin file permissions string Octal permission string of the plugin directories and plugin binaries if they have write or execute permissions for group or others This only needs to be set if the file permissions check is enabled via the environment variable VAULT ENABLE FILE PERMISSIONS CHECK telemetry Telemetry telemetry none Specifies the telemetry reporting system default lease ttl string 768h Specifies the default lease duration for tokens and secrets This is specified using a label suffix like 30s or 1h This value cannot be larger than max lease ttl max lease ttl string 768h Specifies the maximum possible lease duration for tokens and secrets This is specified using a label 
suffix like 30s or 1h Individual mounts can override this value by tuning the mount with the max lease ttl flag of the auth vault docs commands auth tune max lease ttl or secret vault docs commands secrets tune max lease ttl commands default max request duration string 90s Specifies the default maximum request duration allowed before Vault cancels the request This can be overridden per listener via the max request duration value detect deadlocks string A comma separated string that specifies the internal mutex locks that should be monitored for potential deadlocks Currently supported values include statelock quotas and expiration which will cause POTENTIAL DEADLOCK to be logged when an attempt at a core state lock appears to be deadlocked Enabling this can have a negative effect on performance due to the tracking of each lock attempt raw storage endpoint bool false Enables the sys raw endpoint which allows the decryption encryption of raw data into and out of the security barrier This is a highly privileged endpoint introspection endpoint bool false Enables the sys internal inspect endpoint which allows users with a root token or sudo privileges to inspect certain subsystems inside Vault ui bool false Enables the built in web UI which is available on all listeners address port at the ui path Browsers accessing the standard Vault API address will automatically redirect there This can also be provided via the environment variable VAULT UI For more information please see the ui configuration documentation vault docs configuration ui pid file string Path to the file in which the Vault server s Process ID PID should be stored enable response header hostname bool false Enables the addition of an HTTP header in all of Vault s HTTP responses X Vault Hostname This will contain the host name of the Vault node that serviced the HTTP request This information is best effort and is not guaranteed to be present If this configuration option is enabled and the X Vault Hostname header is not present in a response it means there was some kind of error retrieving the host name from the operating system enable response header raft node id bool false Enables the addition of an HTTP header in all of Vault s HTTP responses X Vault Raft Node ID If Vault is participating in a Raft cluster i e using integrated storage this header will contain the Raft node ID of the Vault node that serviced the HTTP request If Vault is not participating in a Raft cluster this header will be omitted whether this configuration option is enabled or not log level string info Log verbosity level Supported values in order of descending detail are trace debug info warn and error This can also be specified via the VAULT LOG LEVEL environment variable Note On SIGHUP sudo kill s HUP pid of vault if a valid value is specified Vault will update the existing log level overriding even if specified both the CLI flag and environment variable Note Note Not all parts of Vault s logging can have its log level be changed dynamically this way in particular secrets auth plugins are currently not updated dynamically Note log format Equivalent to the log format command line flag vault docs commands server log format log file Equivalent to the log file command line flag vault docs commands server log file log rotate duration Equivalent to the log rotate duration command line flag vault docs commands server log rotate duration log rotate bytes Equivalent to the log rotate bytes command line flag vault docs commands server log rotate bytes log rotate max 
files Equivalent to the log rotate max files command line flag vault docs commands server log rotate max files experiments string array The list of experiments to enable for this node Experiments should NOT be used in production and the associated APIs may have backwards incompatible changes between releases Additional experiments can also be specified via the VAULT EXPERIMENTS environment variable as a comma separated list or via the experiment vault docs commands server experiment flag imprecise lease role tracking bool false Skip lease counting by role if there are no role based quotas enabled When imprecise lease role tracking is set to true and a new role based quota is enabled subsequent lease counts start from 0 imprecise lease role tracking affects role based lease count quotas but reduces latencies when not using role based quotas enable post unseal trace bool false Enables the server to generate a Go trace during the execution of the core postUnseal function for debug purposes The resulting trace can be viewed with the go tool trace command The output directory can be specified with the post unseal trace directory parameter This should only be enabled temporarily for debugging purposes as it can have a significant performance impact This can be updated on a running Vault process with a SIGHUP signal post unseal trace directory string Specifies the directory where the trace file will be written which must exist and be writable by the Vault process If not specified it will create a subdirectory vault traces under the result from os TempDir https pkg go dev os TempDir usually tmp on Unix systems This can be updated on a running Vault process with a SIGHUP signal High availability parameters The following parameters are used on backends that support high availability high availability api addr string Specifies the address full URL to advertise to other Vault servers in the cluster for client redirection This value is also used for plugin backends plugins This can also be provided via the environment variable VAULT API ADDR In general this should be set as a full URL that points to the value of the listener listener address This can be dynamically defined with a go sockaddr template https pkg go dev github com hashicorp go sockaddr template that is resolved at runtime cluster addr string Specifies the address to advertise to other Vault servers in the cluster for request forwarding This can also be provided via the environment variable VAULT CLUSTER ADDR This is a full URL like api addr but Vault will ignore the scheme all cluster members always use TLS with a private key certificate This can be dynamically defined with a go sockaddr template https pkg go dev github com hashicorp go sockaddr template that is resolved at runtime disable clustering bool false Specifies whether clustering features such as request forwarding are enabled Setting this to true on one Vault node will disable these features only when that node is the active node This parameter cannot be set to true if raft is the storage type Vault enterprise parameters The following parameters are only used with Vault Enterprise disable sealwrap bool false Disables using seal wrapping sealwrap for any value except the root key If this value is toggled the new behavior will happen lazily as values are read or written disable performance standby bool false Specifies whether performance standbys should be disabled on this node Setting this to true on one Vault node will disable this feature when this node is Active or Standby It 
s recommended to sync this setting across all nodes in the cluster license path string Path to license file This can also be provided via the environment variable VAULT LICENSE PATH or the license itself can be provided in the environment variable VAULT LICENSE administrative namespace path string Specifies the absolute path to the Vault namespace to be used as an Administrative namespace vault docs enterprise namespaces create admin namespace storage backend vault docs configuration storage listener vault docs configuration listener reporting vault docs configuration reporting seal vault docs configuration seal sealwrap vault docs enterprise sealwrap telemetry vault docs configuration telemetry sentinel vault docs configuration sentinel high availability vault docs concepts ha plugins vault docs plugins |
---
layout: docs
page_title: Manage Vault resources programmatically
description: >-
Step-by-step instructions for managing Vault resources programmatically with
Terraform
---
# Manage Vault resources programmatically with Terraform
Use Terraform to manage policies, namespaces, and plugins in Vault.
## Before you start
- **You must have [Terraform installed](/terraform/install)**.
- **You must have the [Terraform Vault provider](https://registry.terraform.io/providers/hashicorp/vault/latest) configured** (see the provider sketch after this list).
- **You must have sufficient access to run Terraform**.
- **You must have a [Vault server running](/vault/tutorials/getting-started/getting-started-dev-server)**.
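A minimal provider configuration is sketched below. The address is an assumed
local dev server; the token is deliberately omitted because the provider reads
`VAULT_TOKEN` from the environment, which avoids hardcoding credentials:

```hcl
provider "vault" {
  # Assumed address for a local dev server; replace with your Vault URL.
  address = "http://127.0.0.1:8200"

  # No token argument: the provider falls back to the VAULT_TOKEN
  # environment variable for authentication.
}
```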
## Step 1: Create a resource file for namespaces
The Terraform Vault provider supports a `vault_namespace` resource type for
managing Vault namespaces:
```hcl
resource "vault_namespace" "<TERRAFORM_RESOURCE_NAME>" {
path = "<VAULT_NAMESPACE>"
}
```
To manage your Vault namespaces in Terraform:
1. Use the `vault namespace list` command to identify any unmanaged namespaces
that you need to migrate. For example:
```shell-session
$ vault namespace list
Keys
----
admin/
```
1. Create a new Terraform Vault Provider resource file called
`vault_namespaces.tf` that defines `vault_namespace` resources for each of
the new or existing namespaces you want to manage.
For example, to migrate the `admin` namespace in the example and create a new
`dev` namespace:
```hcl
resource "vault_namespace" "admin_ns" {
path = "admin"
}
resource "vault_namespace" "dev_ns" {
path = "dev"
}
```
## Step 2: Create a resource file for secret engines
The Terraform Vault provider supports discrete resource types for the different
[auth](https://registry.terraform.io/providers/hashicorp/vault/latest/docs#vault-authentication-configuration-options),
[secret](https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/mount),
and [database](https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/database_secrets_mount)
plugin types in Vault.
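For example, auth methods use the `vault_auth_backend` resource type rather
than `vault_mount`. The sketch below enables the userpass method, purely as an
illustration:

```hcl
resource "vault_auth_backend" "userpass_auth" {
  type = "userpass"
  path = "userpass"
}
```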
To migrate a secret engine, use the `vault_mount` resource type:
```hcl
resource "vault_mount" "<TERRAFORM_RESOURCE_NAME>" {
path = "<VAULT_NAMESPACE>"
type = "<VAULT_PLUGIN_TYPE>"
}
```
To manage your Vault secret engines in Terraform:
1. Use the `vault secrets list` command to identify any unmanaged secret engines
that you need to migrate. For example:
```shell-session
$ vault secrets list | grep -vEw '(cubbyhole|identity|sys)'
Path Type Accessor Description
---- ---- -------- -----------
transit/ transit transit_8291b949 n/a
```
1. Use the `-namespace` flag to check for unmanaged secret engines under any
namespaces you identified in the previous step. For example, to check for
secret engines under the `admin` namespace:
```shell-session
$ vault secrets list -namespace=admin | grep -vEw '(cubbyhole|identity|sys)'
Path Type Accessor Description
---- ---- -------- -----------
admin_keys/ kv kv_87edfc65 n/a
```
1. Create a new Terraform Vault Provider resource file called `vault_secrets.tf`
that defines `vault_mount` resources for each of the new or existing secret
engines you want to manage.
For example, to migrate the `transit` and `admin_keys` secret engines in the
example and enable a new `kv` engine under the new `dev` namespace called
`dev_keys`:
```hcl
resource "vault_mount" "transit_plugin" {
path = "transit"
type = "transit"
}
resource "vault_mount" "admin_keys_plugin" {
namespace = vault_namespace.admin_ns.path
path = "admin_keys"
type = "kv"
options = {
version = "2"
}
}
resource "vault_mount" "dev_keys_plugin" {
namespace = vault_namespace.dev_ns.path
path = "dev_keys"
type = "kv"
options = {
version = "2"
}
}
```
## Step 3: Create a resource file for policies
The Terraform Vault provider supports a `vault_policy` resource type for
managing Vault policies:
```hcl
resource "vault_policy" "<TERRAFORM_RESOURCE_NAME>" {
name = "<VAULT_POLICY_NAME>"
policy = <<EOT
<VAULT_POLICY_DEFINITION>
EOT
}
```
To manage your Vault policies in Terraform:
1. Use the `vault policy list` command to identify any unmanaged policies that
you need to migrate. For example:
```shell-session
$ vault policy list | grep -vEw 'root'
default
```
1. Create a Terraform Vault Provider resource file called `vault_policies.tf`
   that defines `vault_policy` resources for each policy you want to
manage in Terraform. You can use the following `bash` code to write all
your existing, non-root policies to the file:
```shell-session
for vpolicy in $(vault policy list | grep -vw root) ; do
echo "resource \"vault_policy\" \"vault_$vpolicy\" {"
echo " name = \"$vpolicy\""
echo " policy = <<EOT"
  vault policy read "$vpolicy"
echo "EOT"
echo "}"
echo ""
done > vault_policies.tf
```
1. Update the `vault_policies.tf` file with any new policies you want to add.
For example, to create a policy for the example `dev_keys` secret engine:
```hcl
resource "vault_policy" "dev_team_policy" {
name = "dev_team"
policy = <<EOT
path "${vault_mount.dev_keys_plugin.path}/*" {
capabilities = ["create", "update"]
}
EOT
}
```
## Step 4: Update your Terraform configuration
1. Create a `vault` directory wherever you keep your deployment configuration
files for Terraform.
1. Save your new resource files to your new Vault configuration directory.
1. Use `terraform fmt` to adjust the formatting (if needed) of your new
configuration files:
```shell-session
$ terraform fmt
vault_namespaces.tf
vault_secrets.tf
vault_policies.tf
```
1. Use `terraform validate` to confirm the new configuration is valid:
```shell-session
$ terraform validate
Success! The configuration is valid.
```
## Step 5: Import preexisting root-level resources
Use the `terraform import` command to import the preexisting root-level resources.
For example, import the `admin` namespace, `default` policy, and `transit`
plugin from the previous steps:
```shell-session
$ terraform import vault_namespace.admin_ns admin
vault_namespace.admin_ns: Importing from ID "admin"...
vault_namespace.admin_ns: Import prepared!
Prepared vault_namespace for import
vault_namespace.admin_ns: Refreshing state... [id=admin]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```
```shell-session
$ terraform import vault_policy.default_policy default
vault_policy.default_policy: Importing from ID "default"...
vault_policy.default_policy: Import prepared!
Prepared vault_policy for import
vault_policy.default_policy: Refreshing state... [id=default]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```
```shell-session
$ terraform import vault_mount.transit_plugin transit
vault_mount.transit_plugin: Importing from ID "transit"...
vault_mount.transit_plugin: Import prepared!
Prepared vault_mount for import
vault_mount.transit_plugin: Refreshing state... [id=transit]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```
## Step 6: Import preexisting nested resources
To import resources that belong to a previously unmanaged namespace, you must
set the `TERRAFORM_VAULT_NAMESPACE_IMPORT` environment variable before importing.
For example, to import the `admin_keys` secret engine from the `admin` namespace:
1. Set `TERRAFORM_VAULT_NAMESPACE_IMPORT` to the `admin` Vault namespace:
```shell-session
$ export TERRAFORM_VAULT_NAMESPACE_IMPORT="admin"
```
1. Import the `vault_mount` resource `admin_keys`:
```shell-session
$ terraform import vault_mount.admin_keys_plugin admin_keys
vault_mount.admin_keys_plugin: Importing from ID "admin_keys"...
vault_mount.admin_keys_plugin: Import prepared!
Prepared vault_mount for import
vault_mount.admin_keys_plugin: Refreshing state... [id=admin_keys]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```
1. Unset the `TERRAFORM_VAULT_NAMESPACE_IMPORT` variable when you finish
importing child resources:
```shell-session
$ unset TERRAFORM_VAULT_NAMESPACE_IMPORT
```
## Step 7: Verify the import
1. Use the `terraform state show` command to check your Terraform state file and
verify the resources imported successfully. For example, to check the
`admin_keys` resource:
```shell-session
$ terraform state show vault_mount.admin_keys_plugin
# vault_mount.admin_keys_plugin:
resource "vault_mount" "admin_keys" {
accessor = "kv_87edfc65"
allowed_managed_keys = []
audit_non_hmac_request_keys = []
audit_non_hmac_response_keys = []
default_lease_ttl_seconds = 0
description = null
external_entropy_access = false
id = "admin_keys"
local = false
max_lease_ttl_seconds = 0
namespace = "admin"
options = {
"version" = "2"
}
path = "admin_keys"
seal_wrap = false
    type = "kv"
}
```
1. For each of the migrated resources, compare the `accessor` value from your
Terraform state to the accessor value in Vault. For example, to confirm the
accessor for `admin_keys`:
```shell-session
$ vault secrets list -namespace="admin" | grep -vEw '(cubbyhole|identity|sys)'
Path Type Accessor Description
---- ---- -------- -----------
admin_keys/ kv kv_87edfc65 n/a
```
## Step 8: Add new Vault resources
1. Run `terraform plan` to confirm the new resources that Terraform will manage:
```shell-session
$ terraform plan
vault_policy.default_policy: Refreshing state... [id=default]
vault_namespace.admin_ns: Refreshing state... [id=admin/]
vault_mount.transit_plugin: Refreshing state... [id=transit]
vault_mount.admin_keys_plugin: Refreshing state... [id=admin_keys]
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# vault_mount.dev_keys_plugin will be created
+ resource "vault_mount" "dev_keys" {
+ accessor = (known after apply)
+ audit_non_hmac_request_keys = (known after apply)
+ audit_non_hmac_response_keys = (known after apply)
+ default_lease_ttl_seconds = (known after apply)
+ external_entropy_access = false
+ id = (known after apply)
+ max_lease_ttl_seconds = (known after apply)
+ namespace = "dev"
+ options = {
+ "version" = "2"
}
+ path = "dev_keys"
+ seal_wrap = (known after apply)
+ type = "kv"
}
# vault_namespace.dev_ns will be created
+ resource "vault_namespace" "dev" {
+ custom_metadata = (known after apply)
+ id = (known after apply)
+ namespace_id = (known after apply)
+ path = "dev"
+ path_fq = (known after apply)
}
# vault_policy.dev_team_policy will be created
  + resource "vault_policy" "dev_team_policy" {
+ id = (known after apply)
+ name = "dev_team"
+ policy = <<-EOT
            path "dev_keys/*" {
capabilities = ["create", "update"]
}
EOT
}
Plan: 3 to add, 0 to change, 0 to destroy.
```
1. Run `terraform apply` to create the new resources:
```shell-session
$ terraform apply
vault_namespace.dev_ns: Creating...
vault_namespace.dev_ns: Creation complete after 0s [id=dev/]
vault_mount.dev_keys_plugin: Creating...
vault_mount.dev_keys_plugin: Creation complete after 0s [id=dev_keys]
vault_policy.dev_team_policy: Creating...
vault_policy.dev_team_policy: Creation complete after 0s [id=dev_team]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
```
1. Use the `terraform state show` command to check your Terraform state file and
verify the new resources created successfully. For example, to check the
`dev_keys` resource:
```shell-session
$ terraform state show vault_mount.dev_keys_plugin
# vault_mount.dev_keys_plugin:
resource "vault_mount" "dev_keys" {
accessor = "kv_b3d2dd6f"
allowed_managed_keys = []
audit_non_hmac_request_keys = []
audit_non_hmac_response_keys = []
default_lease_ttl_seconds = 0
description = null
external_entropy_access = false
id = "dev_keys"
local = false
max_lease_ttl_seconds = 0
namespace = "dev"
options = {
"version" = "2"
}
path = "dev_keys"
seal_wrap = false
type = "kv"
}
```
1. Confirm that your Vault instance can use the new resources. For example, to
confirm the `dev_keys` resources:
```shell-session
$ vault secrets list -namespace="dev" | grep -vEw '(cubbyhole|identity|sys)'
Path Type Accessor Description
---- ---- -------- -----------
dev_keys/ kv kv_b3d2dd6f n/a
```
## Next steps
1. Review the [best practices for programmatic Vault management](/vault/docs/configuration/programmatic-best-practices).
---
layout: docs
page_title: Telemetry - Configuration
description: |-
The telemetry stanza specifies various configurations for Vault to publish
metrics to upstream systems.
---
# `telemetry` stanza
The `telemetry` stanza specifies various configurations for Vault to publish
metrics to upstream systems. Available Vault metrics can be found in the
[Telemetry internals documentation](/vault/docs/internals/telemetry).
```hcl
telemetry {
statsite_address = "statsite.company.local:8125"
}
```
## `telemetry` parameters
Due to the number of configurable parameters to the `telemetry` stanza,
parameters on this page are grouped by the telemetry provider.
### Common
The following options are available on all telemetry configurations.
- `usage_gauge_period` `(string: "10m")` - Specifies the interval at which high-cardinality
usage data is collected, such as token counts, entity counts, and secret counts.
A value of "none" disables the collection. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `maximum_gauge_cardinality` `(int: 500)` - The maximum cardinality of gauge labels.
- `disable_hostname` `(bool: false)` - Specifies if gauge values should be
prefixed with the local hostname.
- `enable_hostname_label` `(bool: false)` - Specifies if all metric values should
contain the `host` label with the local hostname. It is recommended to enable
`disable_hostname` if this option is used.
- `metrics_prefix` `(string: "vault")` - Specifies the prefix used for metric values. By default, metrics are prefixed with "vault".
- `lease_metrics_epsilon` `(string: "1h")` - Specifies the size of the bucket used to measure future
lease expiration. For example, for the default value of 1 hour, the `vault.expire.leases.by_expiration`
metric will aggregate the total number of expiring leases for 1 hour buckets, starting from the current time.
Note that leases are put into buckets by rounding. For example, if `lease_metrics_epsilon` is set to 1h and
lease A expires 25 minutes from now, and lease B expires 35 minutes from now, then lease A will be in the first
bucket, which corresponds to 0-30 minutes, and lease B will be in the second bucket, which corresponds to 31-90
minutes. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `num_lease_metrics_buckets` `(int: 168)` - The number of expiry buckets for leases. For the default value, for
  example, 168 value labels for the `vault.expire.leases.by_expiration` metric will be reported, where each
  bucket is separated in time by the `lease_metrics_epsilon` parameter. For the default 1 hour value of
`lease_metrics_epsilon` and the default value of `num_lease_metrics_buckets`, `vault.expire.leases.by_expiration`
will report the total number of leases expiring within each hour from the current time to one week from the current time.
- `add_lease_metrics_namespace_labels` `(bool: false)` - If this value is set to true, then `vault.expire.leases.by_expiration`
will break down expiring leases by both time and namespace. This parameter is disabled by default because enabling it can lead
to a large-cardinality metric.
- `add_mount_point_rollback_metrics` `(bool: false)` - If this value is set to true, then `vault.rollback.attempt.{MOUNT_POINT}`
and `vault.route.rollback.{MOUNT_POINT}` metrics will be reported for every mount point. If this parameter is false, then
`vault.rollback.attempt` and `vault.route.rollback` metrics (which do not have the mount point in the metric name)
will be reported instead. This parameter is disabled by default starting in Vault 1.15 due to the high cardinality of
these metrics.
- `filter_default` `(bool: true)` - This controls whether to allow metrics that have not been specified by the filter.
Defaults to `true`, which will allow all metrics when no filters are provided.
When set to `false` with no filters, no metrics will be sent.
- `prefix_filter` `(string array: [])` - This is a list of filter rules to apply for allowing/blocking metrics by
prefix in the following format:
```json
["+vault.token", "-vault.expire", "+vault.expire.num_leases"]
```
A leading "**+**" will enable any metrics with the given prefix, and a leading "**-**" will block them.
If there is overlap between two rules, the more specific rule will take precedence. Blocking will take priority if the same prefix is listed multiple times.
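Putting `filter_default` and `prefix_filter` together, a stanza that only
reports the metrics it explicitly allows might look like the following sketch
(the prefixes shown are illustrative):

```hcl
telemetry {
  filter_default = false
  prefix_filter  = ["+vault.token", "-vault.expire", "+vault.expire.num_leases"]
}
```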
### `statsite`
These `telemetry` parameters apply to
[statsite](https://github.com/armon/statsite).
- `statsite_address` `(string: "")` - Specifies the address of a statsite server
to forward metrics data to.
```hcl
telemetry {
statsite_address = "statsite.company.local:8125"
}
```
### `statsd`
These `telemetry` parameters apply to
[statsd](https://github.com/etsy/statsd).
- `statsd_address` `(string: "")` - Specifies the address of a statsd server to
forward metrics to.
```hcl
telemetry {
statsd_address = "statsd.company.local:8125"
}
```
### `circonus`
These `telemetry` parameters apply to [Circonus](http://circonus.com/).
- `circonus_api_token` `(string: "")` - Specifies a valid Circonus API Token
used to create/manage check. If provided, metric management is enabled.
- `circonus_api_app` `(string: "nomad")` - Specifies a valid app name associated
with the API token.
- `circonus_api_url` `(string: "https://api.circonus.com/v2")` - Specifies the
base URL to use for contacting the Circonus API.
- `circonus_submission_interval` `(string: "10s")` - Specifies the interval at
which metrics are submitted to Circonus.
- `circonus_submission_url` `(string: "")` - Specifies the
`check.config.submission_url` field, of a Check API object, from a previously
created HTTPTRAP check.
- `circonus_check_id` `(string: "")` - Specifies the Check ID (**not check
bundle**) from a previously created HTTPTRAP check. The numeric portion of the
`check._cid` field in the Check API object.
- `circonus_check_force_metric_activation` `(bool: false)` - Specifies whether to
  force activation of metrics that already exist but are not currently active. If
check management is enabled, the default behavior is to add new metrics as
they are encountered. If the metric already exists in the check, it will
not be activated. This setting overrides that behavior.
- `circonus_check_instance_id` `(string: "<hostname>:<application>")` - Serves
to uniquely identify the metrics coming from this _instance_. It can be used
to maintain metric continuity with transient or ephemeral instances as they
move around within an infrastructure. By default, this is set to
hostname:application name (e.g. "host123:nomad").
- `circonus_check_search_tag` `(string: "<service>:<application>")` - Specifies a
special tag which, when coupled with the instance id, helps to narrow down the
search results when neither a Submission URL or Check ID is provided. By
default, this is set to service:app (e.g. "service:nomad").
- `circonus_check_display_name` `(string: "")` - Specifies a name to give a
check when it is created. This name is displayed in the Circonus UI Checks
list.
- `circonus_check_tags` `(string: "")` - Comma separated list of additional
tags to add to a check when it is created.
- `circonus_broker_id` `(string: "")` - Specifies the ID of a specific Circonus
Broker to use when creating a new check. The numeric portion of `broker._cid`
field in a Broker API object. If metric management is enabled and neither a
Submission URL nor Check ID is provided, an attempt will be made to search for
an existing check using Instance ID and Search Tag. If one is not found, a new
  HTTPTRAP check will be created. By default, a random Enterprise Broker is
  selected, or, failing that, the default Circonus Public Broker is used.
- `circonus_broker_select_tag` `(string: "")` - Specifies a special tag which
will be used to select a Circonus Broker when a Broker ID is not provided. The
  best use of this is as a hint for which broker should be used based on
_where_ this particular instance is running (e.g. a specific geo location or
datacenter, dc:sfo).
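As with the other providers, these parameters live in the `telemetry` stanza.
A minimal sketch (the token is a placeholder, and overriding `circonus_api_app`
is optional):

```hcl
telemetry {
  circonus_api_token = "<your-circonus-api-token>"
  circonus_api_app   = "vault"
}
```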
### `dogstatsd`
These `telemetry` parameters apply to
[DogStatsD](http://docs.datadoghq.com/guides/dogstatsd/).
- `dogstatsd_addr` `(string: "")` - This provides the address of a DogStatsD
instance. DogStatsD is a protocol-compatible flavor of statsd, with the added
ability to decorate metrics with tags and event information. If provided,
Vault will send various telemetry information to that instance for
aggregation. This can be used to capture runtime information.
* `dogstatsd_tags` `(string array: [])` - This provides a list of global tags
that will be added to all telemetry packets sent to DogStatsD. It is a list
of strings, where each string looks like "my_tag_name:my_tag_value".
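For example (the address and tags below are placeholders for your
environment):

```hcl
telemetry {
  dogstatsd_addr = "dogstatsd.company.local:8125"
  dogstatsd_tags = ["env:production", "team:platform"]
}
```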
### `prometheus`
These `telemetry` parameters apply to
[prometheus](https://prometheus.io).
- `prometheus_retention_time` `(string: "24h")` - Specifies the amount of time that
Prometheus metrics are retained in memory. Setting this to 0 will disable Prometheus telemetry.
- `disable_hostname` `(bool: false)` - It is recommended to enable this option
  to avoid prefixing metric names with the hostname.
The `/v1/sys/metrics` endpoint is only accessible on active nodes
and automatically disabled on standby nodes. You can enable the `/v1/sys/metrics`
endpoint on standby nodes by [enabling unauthenticated metrics access][telemetry-tcp].
Standby nodes will never forward a request to `/v1/sys/metrics` to the active
node. If unauthenticated metrics access is enabled, the standby node will
respond with its own metrics. If unauthenticated metrics access is not enabled,
then a standby node will attempt to service the request but fail and then
redirect the request to the active node.
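As a sketch, unauthenticated metrics access is enabled inside the `telemetry`
block of the tcp listener stanza (the certificate paths are placeholders; see
the linked listener documentation for the full set of options):

```hcl
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/path/to/fullchain.pem"
  tls_key_file  = "/path/to/privkey.pem"

  telemetry {
    unauthenticated_metrics_access = true
  }
}
```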
Querying `/v1/sys/metrics` with one of the following headers:
- `Accept: prometheus/telemetry`
- `Accept: application/openmetrics-text`
will return Prometheus formatted results. Most Prometheus servers automatically
query scrape targets with these headers by default.
A Vault token with `capabilities = ["read", "list"]` on `/v1/sys/metrics` is
required. The Prometheus `bearer_token` or `bearer_token_file` options
must be added to the scrape job to supply it.
Vault does not use the default Prometheus path, so Prometheus must be configured
to scrape `/v1/sys/metrics` instead of the default scrape path.
An example `job_name` stanza required in the [Prometheus config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) is provided below.
```yaml
# prometheus.yml
scrape_configs:
- job_name: 'vault'
metrics_path: "/v1/sys/metrics"
scheme: https
tls_config:
ca_file: your_ca_here.pem
bearer_token: "your_vault_token_here"
static_configs:
- targets: ['your_vault_server_here:8200']
```
An example telemetry configuration to be added to Vault's configuration file is shown below:
```hcl
telemetry {
prometheus_retention_time = "30s"
disable_hostname = true
}
```
### `stackdriver`
These `telemetry` parameters apply to [Stackdriver Monitoring](https://cloud.google.com/monitoring/).
The Stackdriver telemetry provider uses the official Google Cloud Golang SDK. This means
it supports the common ways of
[providing credentials to Google Cloud](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application).
To use this telemetry provider, the service account must have the following
minimum scope(s):
```text
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/monitoring
https://www.googleapis.com/auth/monitoring.write
```
And the following IAM role(s):
```text
roles/monitoring.metricWriter
```
- `stackdriver_project_id` `(string: "")` - The Google Cloud ProjectID to send telemetry data to.
- `stackdriver_location` `(string: "")` - The GCP or AWS region of the monitored resource.
- `stackdriver_namespace` `(string: "")` - A namespace identifier for the telemetry data.
- `stackdriver_debug_logs` `(bool: false)` - Specifies if Vault writes additional stackdriver
related debug logs to standard error output (stderr).
It is recommended to enable `disable_hostname` to avoid prefixing metric names
with the hostname, and to enable `enable_hostname_label` instead.
```hcl
telemetry {
stackdriver_project_id = "my-test-project"
stackdriver_location = "us-east1-a"
stackdriver_namespace = "vault-cluster-a"
disable_hostname = true
enable_hostname_label = true
}
```
Metrics from Vault can be found in [Metrics Explorer](https://cloud.google.com/monitoring/charts/metrics-explorer).
All those metrics are shown with a resource type of `generic_task`, and the metric name
is prefixed with `custom.googleapis.com/go-metrics/`.
[telemetry-tcp]: /vault/docs/configuration/listener/tcp#telemetry-parameters
---
layout: docs
page_title: Kubernetes - Service Registration - Configuration
description: >-
Kubernetes Service Registration labels Vault pods with their current status
for use with selectors.
---
# Kubernetes service registration
Kubernetes Service Registration tags Vault pods with their current status for
use with selectors. Service registration is only available when Vault is running in
[High Availability mode](/vault/docs/concepts/ha).
- **HashiCorp Supported** – Kubernetes Service Registration is officially supported
by HashiCorp.
## Configuration
```hcl
service_registration "kubernetes" {
namespace = "my-namespace"
pod_name = "my-pod-name"
}
```
Alternatively, the namespace and pod name can be set through the following
environment variables:
- `VAULT_K8S_NAMESPACE`
- `VAULT_K8S_POD_NAME`
This allows you to set these parameters using
[the Downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/).
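For example, a pod spec can populate these variables from the pod's own
metadata via the Downward API. A minimal sketch of the relevant container
fragment:

```yaml
# Container fragment: expose the pod's namespace and name to Vault
# through the environment variables listed above.
env:
  - name: VAULT_K8S_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: VAULT_K8S_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```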
If using only environment variables, the service registration stanza declaring
you're using Kubernetes must still exist to indicate your intentions:
```hcl
service_registration "kubernetes" {}
```
For service registration to succeed, Vault must be able to apply labels to pods
in Kubernetes. The following RBAC rules are required to allow the service account
associated with the Vault pods to update its own pod specification:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: mynamespace
name: vault-service-account
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "update", "patch"]
```
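The Role must also be bound to the service account the Vault pods run as. A
minimal sketch, assuming a service account named `vault` in the same namespace
(adjust the names to your deployment):

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: mynamespace
  name: vault-service-account-binding
subjects:
  - kind: ServiceAccount
    name: vault
    namespace: mynamespace
roleRef:
  kind: Role
  name: vault-service-account
  apiGroup: rbac.authorization.k8s.io
```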
## Examples
Once properly configured, enabling service registration will cause Kubernetes pods
to come up with the following labels:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: vault
labels:
vault-active: "false"
vault-initialized: "true"
vault-perf-standby: "false"
vault-sealed: "false"
vault-version: 1.18.1
```
After shutdowns, Vault pods will bear the following labels:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: vault
labels:
vault-active: "false"
vault-initialized: "false"
vault-perf-standby: "false"
vault-sealed: "true"
vault-version: 1.18.1
```
## Label definitions
- `vault-active` `(string: "true"/"false")` – Vault active is updated dynamically each time Vault's active status changes.
True indicates that this Vault pod is currently the leader. False indicates that this Vault pod is currently a standby.
- `vault-initialized` `(string: "true"/"false")` – Vault initialized is updated dynamically each time Vault's initialization
  status changes. True indicates that Vault is currently initialized. False indicates that Vault is currently uninitialized.
- `vault-perf-standby` `(string: "true"/"false")` – Vault performance standby is updated dynamically each
  time Vault's leader/standby status changes. **This field is only valuable if the pod is a member of a performance standby cluster**;
  it will simply be set to "false" when it's not applicable. True indicates that this Vault pod is currently a performance standby. False indicates
  that this Vault pod is currently a performance leader.
- `vault-sealed` `(string: "true"/"false")` – Vault sealed is updated dynamically each
time Vault's sealed/unsealed status changes. True indicates that Vault is currently sealed. False indicates that Vault
is currently unsealed.
- `vault-version` `(string: "1.18.1")` – Vault version is a string that will not change during a pod's lifecycle.
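With service registration enabled, these labels can be inspected directly with
`kubectl`. For example (the namespace and selector values are illustrative):

```shell-session
$ kubectl get pods --namespace=my-namespace --show-labels
$ kubectl get pods --namespace=my-namespace --selector=vault-active=true
```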
## Working with Vault's service discovery labels
### Example service
With labels applied to the pod, services can be created using selectors to filter pods with specific Vault HA roles,
effectively allowing direct communication with subsets of Vault pods. Note the `vault-active: "true"` line below.
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: vault
app.kubernetes.io/name: vault
helm.sh/chart: vault-0.29.1
name: vault-active-us-east
namespace: default
spec:
clusterIP: 10.7.254.51
ports:
- name: http
port: 8200
protocol: TCP
targetPort: 8200
- name: internal
port: 8201
protocol: TCP
targetPort: 8201
publishNotReadyAddresses: false
selector:
app.kubernetes.io/instance: vault
app.kubernetes.io/name: vault
component: server
vault-active: "true"
type: ClusterIP
```
Also, by setting `publishNotReadyAddresses: false` above, pods that have failed will be removed from the service pool.
With this active service in place, we now have a dedicated endpoint that will always reach the active node. When
setting up Vault replication, it can be used as the primary address:
```shell-session
$ vault write -f sys/replication/performance/primary/enable \
primary_cluster_addr='https://vault-active-us-east:8201'
```
### Example upgrades
In conjunction with the pod labels and the `OnDelete` upgrade strategy, upgrades are much easier to orchestrate:
```shell-session
$ helm upgrade vault --set='server.image.tag=1.18.1'
$ kubectl delete pod --selector='vault-active=false,vault-version=1.2.3'
$ kubectl delete pod --selector='vault-active=true,vault-version=1.2.3'
```
When deleting an instance of a pod, the `StatefulSet` defining the desired state of the cluster will reschedule the
deleted pods with the newest image.
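For reference, the `OnDelete` strategy mentioned above is configured on the
StatefulSet itself; a minimal fragment of such a spec:

```yaml
# StatefulSet fragment: pods are replaced only when explicitly deleted,
# enabling the label-driven upgrade flow shown above.
spec:
  updateStrategy:
    type: OnDelete
```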
---
layout: docs
page_title: Consul - Service Registration - Configuration
description: >-
Consul Service Registration registers Vault as a service in Consul with a
default
health check.
---
# Consul service registration
Consul Service Registration registers Vault as a service in [Consul][consul] with
a default health check. When Consul is configured as the storage backend, the stanza
`service_registration` is not needed as it will automatically register Vault as a service.
~> **Version information:** The `service_registration` configuration option was introduced in Vault 1.4.0.
@include 'consul-dataplane-compat.mdx'
- **HashiCorp Supported** – Consul Service Registration is officially supported
by HashiCorp.
## Configuration
```hcl
service_registration "consul" {
address = "127.0.0.1:8500"
}
```
If Vault is running in HA mode, include the transfer protocol (`http://` or
`https://`) in the address:
```hcl
service_registration "consul" {
address = "http://127.0.0.1:8500"
}
```
Once properly configured, an unsealed Vault installation should be available and
accessible at:
```text
active.vault.service.consul
```
Unsealed Vault instances in standby mode are available at:
```text
standby.vault.service.consul
```
All unsealed Vault instances are available as healthy at:
```text
vault.service.consul
```
Sealed Vault instances will mark themselves as unhealthy to avoid being returned
at Consul's service discovery layer.
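Assuming the Consul agent's DNS interface is available on its default port
(8600), the registration can be verified with an ordinary DNS query:

```shell-session
$ dig @127.0.0.1 -p 8600 active.vault.service.consul SRV
$ dig @127.0.0.1 -p 8600 vault.service.consul
```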
## `consul` parameters
- `address` `(string: "127.0.0.1:8500")` – Specifies the address of the Consul
agent to communicate with. This can be an IP address, DNS record, or unix
socket. It is recommended that you communicate with a local Consul agent; do
not communicate directly with a server.
- `check_timeout` `(string: "5s")` – Specifies the check interval used to send
health check information back to Consul. This is specified using a label
suffix like `"30s"` or `"1h"`.
- `disable_registration` `(string: "false")` – Specifies whether Vault should
register itself with Consul.
- `scheme` `(string: "http")` – Specifies the scheme to use when communicating
  with Consul. This can be set to "http" or "https". It is highly recommended
  that you communicate with Consul over https for non-local connections. When
  communicating over a unix socket, this option is ignored.
- `service` `(string: "vault")` – Specifies the name of the service to register
in Consul.
- `service_tags` `(string: "")` – Specifies a comma-separated list of case-sensitive
tags to attach to the service registration in Consul.
- `service_meta` `(map[string]string: {})` – Specifies a key-value list of meta tags to
attach to the service registration in Consul. See [ServiceMeta](/consul/api-docs/catalog#servicemeta) in the Consul docs for more information.
- `service_address` `(string: nil)` – Specifies a service-specific address to
set on the service registration in Consul. If unset, Vault will use what it
knows to be the HA redirect address - which is usually desirable. Setting
this parameter to `""` will tell Consul to leverage the configuration of the
node the service is registered on dynamically. This could be beneficial if
you intend to leverage Consul's
[`translate_wan_addrs`][consul-translate-wan-addrs] parameter.
- `token` `(string: "")` – Specifies the [Consul ACL token][consul-acl] with
permission to register the Vault service into Consul's service catalog.
This is **not** a Vault token. See the ACL section below for help.
The following settings apply when communicating with Consul via an encrypted
connection. You can read more about encrypting Consul connections on the
[Consul encryption page][consul-encryption].
- `tls_ca_file` `(string: "")` – Specifies the path to the CA certificate used
for Consul communication. This defaults to system bundle if not specified.
This should be set according to the
[`ca_file`](/consul/docs/agent/config/config-files#ca_file) setting in
Consul.
- `tls_cert_file` `(string: "")` (optional) – Specifies the path to the
certificate for Consul communication. This should be set according to the
[`cert_file`](/consul/docs/agent/config/config-files#cert_file) setting
in Consul.
- `tls_key_file` `(string: "")` – Specifies the path to the private key for
Consul communication. This should be set according to the
[`key_file`](/consul/docs/agent/config/config-files#key_file) setting
in Consul.
- `tls_min_version` `(string: "tls12")` – Specifies the minimum TLS version to
use. Accepted values are `"tls10"`, `"tls11"`, `"tls12"` or `"tls13"`.
- `tls_skip_verify` `(string: "false")` – Disable verification of TLS certificates.
Using this option is highly discouraged.
## ACLs
If using ACLs in Consul, you'll need appropriate permissions to register the
Vault service. The following ACL policy will work for most use-cases, assuming
that your service name is `vault`:
```json
{
"service": {
"vault": {
"policy": "write"
}
}
}
```
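As a sketch of how that policy might be installed with the Consul CLI (the
policy name, file name, and description below are illustrative):

```shell-session
$ consul acl policy create -name "vault-registration" -rules @vault-policy.hcl
$ consul acl token create -description "Vault service registration" \
    -policy-name "vault-registration"
```

The secret ID of the resulting token is what goes in the `token` parameter
above.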
## `consul` examples
### Local agent
This example shows a sample configuration which communicates with a local
Consul agent running on `127.0.0.1:8500`.
```hcl
service_registration "consul" {}
```
### Detailed customization
This example shows communicating with Consul on a custom address with an ACL
token.
```hcl
service_registration "consul" {
address = "10.5.7.92:8194"
token = "abcd1234"
}
```
### Consul via unix socket
This example shows communicating with Consul over a local unix socket.
```hcl
service_registration "consul" {
address = "unix:///tmp/.consul.http.sock"
}
```
### Custom TLS
This example shows using a custom CA, certificate, and key file to securely
communicate with Consul over TLS.
```hcl
service_registration "consul" {
scheme = "https"
tls_ca_file = "/etc/pem/vault.ca"
tls_cert_file = "/etc/pem/vault.cert"
tls_key_file = "/etc/pem/vault.key"
}
```
[consul]: https://www.consul.io/ 'Consul by HashiCorp'
[consul-acl]: /consul/docs/guides/acl 'Consul ACLs'
[consul-encryption]: /consul/docs/agent/encryption 'Consul Encryption'
[consul-translate-wan-addrs]: /consul/docs/agent/options#translate_wan_addrs 'Consul Configuration'
---
layout: docs
page_title: TCP - Listeners - Configuration
description: >-
The TCP listener configures Vault to listen on the specified TCP address and
port.
---
# `tcp` listener
@include 'alerts/ipv6-compliance.mdx'
The TCP listener configures Vault to listen on a TCP address/port.
```hcl
listener "tcp" {
address = "127.0.0.1:8200"
}
```
The `listener` stanza may be specified more than once to make Vault listen on
multiple interfaces. If you configure multiple listeners you also need to
specify [`api_addr`][api-addr] and [`cluster_addr`][cluster-addr] so Vault will
advertise the correct address to other nodes.
## Sensitive data redaction for unauthenticated endpoints
Unauthenticated API endpoints may return the following sensitive information:
* Vault version number
* Vault binary build date
* Vault cluster name
* IP address of nodes in the cluster
Vault offers the ability to configure each `tcp` `listener` stanza such that,
when appropriate, these values are redacted from responses.
The following API endpoints support redaction based on `listener` stanza configuration:
* [`/sys/health`](/vault/api-docs/system/health)
* [`/sys/leader`](/vault/api-docs/system/leader)
* [`/sys/seal-status`](/vault/api-docs/system/seal-status)
Vault replaces redacted information with an empty string (`""`). Some Vault APIs
also omit keys from the response when the corresponding value is empty (`""`).
<Note title="Redacting values affects responses to all API clients">
The Vault CLI and UI consume Vault API responses. As a result, your redaction
settings will apply to CLI and UI output in addition to direct API calls.
</Note>
## Default TLS configuration
By default, Vault TCP listeners only accept TLS 1.2 or 1.3 connections and will
drop connection requests from clients using TLS 1.0 or 1.1.
Vault uses the following ciphersuites by default:
- **TLS 1.3** - `TLS_AES_128_GCM_SHA256`, `TLS_AES_256_GCM_SHA384`, or `TLS_CHACHA20_POLY1305_SHA256`.
- **TLS 1.2** - depends on whether you configure Vault with a RSA or ECDSA certificate.
You can configure Vault with any cipher supported by the
[`tls`](https://pkg.go.dev/crypto/tls) and
[`tlsutil`](https://github.com/hashicorp/go-secure-stdlib/blob/main/tlsutil/tlsutil.go#L31-L57)
Go packages. Vault uses the `tlsutil` package to parse ciphersuite configurations.
<Note title="Sweet32 and 3DES">
The Go team and HashiCorp believe that the set of ciphers supported by `tls`
and `tlsutil` is appropriate for modern, secure usage. However, some
vulnerability scanners may flag issues with your configuration.
In particular, Sweet32 (CVE-2016-2183) is an attack against 64-bit block size
ciphers including 3DES that may allow an attacker to break the encryption of
long lived connections. According to the
[vulnerability disclosure](https://sweet32.info/), Sweet32 took a
single HTTPS session with 785 GB of traffic to break the encryption.
As of May 2024, the Go team does not believe the risk of Sweet32 is sufficient
to remove existing client compatibility by deprecating 3DES support, however,
the team did [de-prioritize 3DES](https://github.com/golang/go/issues/45430)
in favor of AES-based ciphers.
</Note>
Before overriding Vault defaults, we recommend reviewing the recommended Go team
[approach to TLS configuration](https://go.dev/blog/tls-cipher-suites) with
particular attention to their ciphersuite selections.
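If you do override the defaults, `tls_cipher_suites` takes a comma-separated
list of Go ciphersuite names. The following is an illustrative sketch, not a
recommendation; note that, per the parameter notes below, `tls_max_version`
must be pinned to `tls12` for the list to take effect:

```hcl
listener "tcp" {
  address           = "127.0.0.1:8200"
  tls_cert_file     = "/etc/certs/vault.crt"
  tls_key_file      = "/etc/certs/vault.key"
  tls_max_version   = "tls12"
  tls_cipher_suites = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
}
```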
## Listener's custom response headers
As of version 1.9, Vault supports defining custom HTTP response headers for the root path (`/`) and also on API endpoints (`/v1/*`).
The headers are defined based on the returned status code. For example, a user can define a list of
custom response headers for the `200` status code, and another list of custom response headers for
the `307` status code. There is a `"/sys/config/ui"` [API endpoint](/vault/api-docs/system/config-ui) which allows users
to set `UI` specific custom headers. If a header is configured in a configuration file, it is not allowed
to be reconfigured through the `"/sys/config/ui"` [API endpoint](/vault/api-docs/system/config-ui). In cases where a
custom header value needs to be modified or the custom header needs to be removed, Vault's configuration file
needs to be modified accordingly, and a `SIGHUP` signal needs to be sent to the Vault process.
If a header is defined in the configuration file and the same header is used by the internal
processes of Vault, the configured header is not accepted. For example, a custom header which has
the `X-Vault-` prefix will not be accepted, and a message will be logged in Vault's logs
at startup indicating that headers with the `X-Vault-` prefix are not accepted.
### Order of precedence
If the same header is configured in both the configuration file and
in the `"/sys/config/ui"` [API endpoint](/vault/api-docs/system/config-ui), the header in the configuration file takes precedence.
For example, the `"Content-Security-Policy"` header is defined by default in the
`"/sys/config/ui"` [API endpoint](/vault/api-docs/system/config-ui). If that header is also defined in the configuration file,
the value in the configuration file is set in the response headers instead of the
default value in the `"/sys/config/ui"` [API endpoint](/vault/api-docs/system/config-ui).
## `tcp` listener parameters
- `address` `(string: "127.0.0.1:8200")` – Specifies the address to bind to for
listening. This can be dynamically defined with a
[go-sockaddr template](https://pkg.go.dev/github.com/hashicorp/go-sockaddr/template)
that is resolved at runtime.
- `cluster_address` `(string: "127.0.0.1:8201")` – Specifies the address to bind
to for cluster server-to-server requests. This defaults to one port higher
than the value of `address`. This does not usually need to be set, but can be
useful in case Vault servers are isolated from each other in such a way that
they need to hop through a TCP load balancer or some other scheme in order to
talk. This can be dynamically defined with a
[go-sockaddr template](https://pkg.go.dev/github.com/hashicorp/go-sockaddr/template)
that is resolved at runtime.
- `chroot_namespace` `(string: "")` – Specifies an alternate top-level namespace
for the listener. Vault appends namespaces provided in the `X-Vault-Namespace`
header or the `-namespace` field in a CLI command to the top-level namespace
to determine the full namespace path for the request. For example, if
`chroot_namespace` is set to `admin` and the `X-Vault-Namespace` header is
`ns1`, the full namespace path is `admin/ns1`. Calls to the listener will fail
with a 4XX error if the top-level namespace provided for `chroot_namespace`
does not exist.
- `http_idle_timeout` `(string: "5m")` - Specifies the maximum amount of time to
wait for the next request when keep-alives are enabled. If `http_idle_timeout`
is zero, the value of `http_read_timeout` is used. If both are zero, the value
of `http_read_header_timeout` is used. This is specified using a label suffix
like `"30s"` or `"1h"`.
- `http_read_header_timeout` `(string: "10s")` - Specifies the amount of time
allowed to read request headers. This is specified using a label suffix like
`"30s"` or `"1h"`.
- `http_read_timeout` `(string: "30s")` - Specifies the maximum duration for
reading the entire request, including the body. This is specified using a
label suffix like `"30s"` or `"1h"`.
- `http_write_timeout` `(string: "0")` - Specifies the maximum duration before
timing out writes of the response and is reset whenever a new request's header
is read. The default value of `"0"` means infinity. This is specified using a
label suffix like `"30s"` or `"1h"`.
- `max_request_size` `(int: 33554432)` – Specifies a hard maximum allowed
request size, in bytes. Defaults to 32 MB if not set or set to `0`.
Specifying a number less than `0` turns off limiting altogether.
- `max_request_duration` `(string: "90s")` – Specifies the maximum
request duration allowed before Vault cancels the request. This overrides
`default_max_request_duration` for this listener.
- `proxy_protocol_behavior` `(string: "")` – When specified, enables a PROXY
  protocol behavior for the listener (versions 1 and 2 are both supported);
  see the PROXY protocol example under `tcp` listener examples below.
  Accepted Values:
- _use_always_ - The client's IP address will always be used.
- _allow_authorized_ - If the source IP address is in the
`proxy_protocol_authorized_addrs` list, the client's IP address will be used.
If the source IP is not in the list, the source IP address will be used.
- _deny_unauthorized_ - The traffic will be rejected if the source IP
address is not in the `proxy_protocol_authorized_addrs` list.
- `proxy_protocol_authorized_addrs` `(string: <required-if-enabled> or array: <required-if-enabled> )` –
Specifies the list of allowed source IP addresses to be used with the PROXY protocol.
Not required if `proxy_protocol_behavior` is set to `use_always`. Source IPs should
  be comma-delimited if provided as a string. At least one source IP must be provided;
`proxy_protocol_authorized_addrs` cannot be an empty array or string.
- `redact_addresses` `(bool: false)` - Redacts `leader_address` and `cluster_leader_address` values in applicable API responses when `true`.
- `redact_cluster_name` `(bool: false)` - Redacts `cluster_name` values in applicable API responses when `true`.
- `redact_version` `(bool: false)` - Redacts `version` and `build_date` values in applicable API responses when `true`.
- `tls_disable` `(string: "false")` – Specifies if TLS will be disabled. Vault
assumes TLS by default, so you must explicitly disable TLS to opt-in to
insecure communication. Disabling TLS can **disable** some UI functionality. See
the [Browser Support](/vault/docs/browser-support) page for more details.
- `tls_cert_file` `(string: <required-if-enabled>, reloads-on-SIGHUP)` –
Specifies the path to the certificate for TLS. It requires a PEM-encoded file.
To configure the listener to use a CA certificate, concatenate the primary certificate and the CA
certificate together. The primary certificate should appear first in the
combined file. On `SIGHUP`, the path set here _at Vault startup_ will be used
for reloading the certificate; modifying this value while Vault is running
will have no effect for `SIGHUP`s.
- `tls_key_file` `(string: <required-if-enabled>, reloads-on-SIGHUP)` –
Specifies the path to the private key for the certificate. It requires a PEM-encoded file.
If the key file is encrypted, you will be prompted to enter the passphrase on server startup.
The passphrase must stay the same between key files when reloading your
configuration using `SIGHUP`. On `SIGHUP`, the path set here _at Vault
startup_ will be used for reloading the certificate; modifying this value
while Vault is running will have no effect for `SIGHUP`s.
- `tls_min_version` `(string: "tls12")` – Specifies the minimum supported
version of TLS. Accepted values are "tls10", "tls11", "tls12" or "tls13".
~> **Warning**: TLS 1.1 and lower (`tls10` and `tls11` values for the
`tls_min_version` and `tls_max_version` parameters) are widely considered
insecure.
- `tls_max_version` `(string: "tls13")` – Specifies the maximum supported
version of TLS. Accepted values are "tls10", "tls11", "tls12" or "tls13".
~> **Warning**: TLS 1.1 and lower (`tls10` and `tls11` values for the
`tls_min_version` and `tls_max_version` parameters) are widely considered
insecure.
- `tls_cipher_suites` `(string: "")` – Specifies the list of supported
ciphersuites as a comma-separated-list. The list of all available ciphersuites
is available in the [Golang TLS documentation][golang-tls].
~> **Note**: Go only consults the `tls_cipher_suites` list for TLSv1.2 and
earlier; the order of ciphers is not important. For this parameter to be
effective, the `tls_max_version` property must be set to `tls12` to prevent
negotiation of TLSv1.3, which is not recommended. For more information about
this and other TLS related changes, see the [Go TLS blog post][go-tls-blog].
- `tls_prefer_server_cipher_suites` `(string: "false")` – Specifies to prefer the
server's ciphersuite over the client ciphersuites.
~> **Warning**: The `tls_prefer_server_cipher_suites` parameter is
deprecated. Setting it has no effect. See the above
[Go blog post][go-tls-blog] for more information about
this change.
- `tls_require_and_verify_client_cert` `(string: "false")` – Turns on client
authentication for this listener; the listener will require a presented
client cert that successfully validates against system CAs.
- `tls_client_ca_file` `(string: "")` – PEM-encoded Certificate Authority file
used for checking the authenticity of client.
- `tls_disable_client_certs` `(string: "false")` – Turns off client
authentication for this listener. The default behavior (when this is false)
is for Vault to request client authentication certificates when available.
~> **Warning**: The `tls_disable_client_certs` and `tls_require_and_verify_client_cert` fields in the listener stanza of the Vault server configuration are mutually exclusive fields. Please ensure they are not both set to true. TLS client verification remains optional with default settings and is not enforced.
- `x_forwarded_for_authorized_addrs` `(string: <required-to-enable>)` –
Specifies the list of source IP CIDRs for which an X-Forwarded-For header
will be trusted. Comma-separated list or JSON array. This turns on
X-Forwarded-For support. If for example Vault receives connections from the
load balancer's IP of `1.2.3.4`, adding `1.2.3.4` to `x_forwarded_for_authorized_addrs`
will result in the `remote_address` field in the audit log being populated with the
connecting client's IP, for example `3.4.5.6`. Note this requires the load balancer
to send the connecting client's IP in the `X-Forwarded-For` header.
- `x_forwarded_for_client_cert_header` `(string: "")` –
Specifies the header that will be used for the client certificate.
This is required if you use the [TLS Certificates Auth Method](/vault/docs/auth/cert) and your
vault server is behind a reverse proxy.
- `x_forwarded_for_client_cert_header_decoders` `(string: "")` –
Comma delimited list that specifies the decoders that will be used to decode the client certificate.
This is required if you use the [TLS Certificates Auth Method](/vault/docs/auth/cert) and your
vault server is behind a reverse proxy. The resulting certificate should be in DER format.
Available Values:
- BASE64 - Runs Base64 decode
- DER - Converts a pem certificate to der
- URL - Runs URL decode
Known Values:
- Traefik = "BASE64"
- NGINX = "URL,DER"
- `x_forwarded_for_hop_skips` `(string: "0")` – The number of addresses that will be
skipped from the _rear_ of the set of hops. For instance, for a header value
of `1.2.3.4, 2.3.4.5, 3.4.5.6, 4.5.6.7`, if this value is set to `"1"`, the address that
will be used as the originating client IP is `3.4.5.6`.
- `x_forwarded_for_reject_not_authorized` `(string: "true")` – If set to false,
  an X-Forwarded-For header in a connection from an unauthorized
  address will be ignored and the client connection used as-is,
  rather than the client connection being rejected.
- `x_forwarded_for_reject_not_present` `(string: "true")` – If set to false, when
  there is no X-Forwarded-For header or it is empty, the client address will be
  used as-is, rather than the client connection being rejected.
- `disable_replication_status_endpoints` `(bool: false)` - Disables replication
status endpoints for the configured listener when set to `true`.
### `telemetry` parameters
- `unauthenticated_metrics_access` `(bool: false)` - If set to true, allows
unauthenticated access to the `/v1/sys/metrics` endpoint.
### `profiling` parameters
- `unauthenticated_pprof_access` `(bool: false)` - If set to true, allows
unauthenticated access to the `/v1/sys/pprof` endpoint.
### `inflight_requests_logging` parameters
- `unauthenticated_in_flight_requests_access` `(bool: false)` - If set to true, allows
unauthenticated access to the `/v1/sys/in-flight-req` endpoint.
### `custom_response_headers` parameters
- `default` `(key-value-map: {})` - A map of string header names to an array of
string values. The default headers are set on all endpoints regardless of
the status code value. For an example, please refer to the
"Configuring custom http response headers" section.
- `<specific status code>` `(key-value-map: {})` - A map of string header names
to an array of string values. These headers are set only when the specific status
code is returned. For example, `"200" = {"Header-A": ["Value1", "Value2"]}`, `"Header-A"`
is set when the http response status code is `"200"`.
- `<collective status code>` `(key-value-map: {})` - A map of string header names
to an array of string values. These headers are set only when the response status
code falls under the collective status code.
For example, `"2xx" = {"Header-A": ["Value1", "Value2"]}`, `"Header-A"`
is set when the http response status code is `"200"`, `"204"`, etc.
## `tcp` listener examples
### Configuring TLS
This example shows enabling a TLS listener.
```hcl
listener "tcp" {
address = "127.0.0.1:8200"
tls_cert_file = "/etc/certs/vault.crt"
tls_key_file = "/etc/certs/vault.key"
}
```
### Listening on multiple interfaces
This example shows Vault listening on a private interface, as well as localhost.
```hcl
listener "tcp" {
address = "127.0.0.1:8200"
}
listener "tcp" {
address = "10.0.0.5:8200"
}
# Advertise the non-loopback interface
api_addr = "https://10.0.0.5:8200"
cluster_addr = "https://10.0.0.5:8201"
```
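### Configuring PROXY protocol support

This example is a sketch of a listener that trusts PROXY protocol headers from
a known load balancer; the addresses shown are illustrative.

```hcl
listener "tcp" {
  address                         = "0.0.0.0:8200"
  proxy_protocol_behavior         = "allow_authorized"
  proxy_protocol_authorized_addrs = "10.0.0.1"
}
```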
### Configuring unauthenticated metrics access
This example shows enabling unauthenticated metrics access.
```hcl
listener "tcp" {
telemetry {
unauthenticated_metrics_access = true
}
}
```
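With unauthenticated metrics access enabled, the endpoint can be queried
without a Vault token. For example (address illustrative), the
`format=prometheus` query parameter requests Prometheus-formatted output:

```shell-session
$ curl -s http://127.0.0.1:8200/v1/sys/metrics?format=prometheus
```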
### Configuring unauthenticated profiling access
This example shows enabling unauthenticated access to the profiling and
in-flight request endpoints. Note that the in-flight request parameter lives
in its own `inflight_requests_logging` stanza, matching the parameter
reference above.
```hcl
listener "tcp" {
  profiling {
    unauthenticated_pprof_access = true
  }

  inflight_requests_logging {
    unauthenticated_in_flight_requests_access = true
  }
}
```
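With unauthenticated profiling access enabled, the standard Go tooling can
fetch profiles directly from the listener. A sketch (address illustrative):

```shell-session
$ go tool pprof "http://127.0.0.1:8200/v1/sys/pprof/profile?seconds=10"
```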
### Configuring custom http response headers
Note: Requires Vault version 1.9 or newer. This example shows configuring custom http response headers.
Operators can configure `"custom_response_headers"` sub-stanza in the listener stanza to set custom http
headers appropriate to their applications. Examples of such headers are `"Strict-Transport-Security"`
and `"Content-Security-Policy"` which are known HTTP headers, and could be configured to harden
the security of an application communicating with the Vault endpoints. Note that vulnerability
scans often examine such security related HTTP headers. In addition, application specific
custom headers can also be configured. For example, `"X-Custom-Header"` has been configured
in the example below.
```hcl
listener "tcp" {
custom_response_headers {
"default" = {
"Strict-Transport-Security" = ["max-age=31536000","includeSubDomains"],
"Content-Security-Policy" = ["connect-src https://clusterA.vault.external/"],
"X-Custom-Header" = ["Custom Header Default Value"],
},
"2xx" = {
"Content-Security-Policy" = ["connect-src https://clusterB.vault.external/"],
"X-Custom-Header" = ["Custom Header Value 1", "Custom Header Value 2"],
},
"301" = {
"Strict-Transport-Security" = ["max-age=31536000"],
"Content-Security-Policy" = ["connect-src https://clusterC.vault.external/"],
},
}
}
```
In situations where a header is defined under several status code subsections,
the header matching the most specific response code will be returned. For example,
with the config example below, a `307` response would return `307 Custom header value`,
while a `306` would return `3xx Custom header value`.
```hcl
listener "tcp" {
custom_response_headers {
"default" = {
"X-Custom-Header" = ["default Custom header value"]
},
"3xx" = {
"X-Custom-Header" = ["3xx Custom header value"]
},
"307" = {
"X-Custom-Header" = ["307 Custom header value"]
}
}
}
```
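One way to verify which header values a given response carries is to dump the
response headers directly. For example (address illustrative):

```shell-session
$ curl -s -D - -o /dev/null http://127.0.0.1:8200/v1/sys/health
```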
### Listening on all IPv6 & IPv4 interfaces
This example shows Vault listening on all IPv4 & IPv6 interfaces including localhost.
```hcl
listener "tcp" {
address = "[::]:8200"
cluster_address = "[::]:8201"
}
```
### Listening to specific IPv6 address
This example shows Vault only using IPv6 and binding to the interface with the IP address: `2001:1c04:90d:1c00:a00:27ff:fefa:58ec`
```hcl
listener "tcp" {
address = "[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8200"
cluster_address = "[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8201"
}
# Advertise the non-loopback interface
api_addr = "https://[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8200"
cluster_addr = "https://[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8201"
```
## Redaction examples
See the redaction settings above for details on each redaction setting.
Example configuration for the [`tcp`](/vault/docs/configuration/listener/tcp) listener,
enabling [`redact_addresses`](/vault/docs/configuration/listener/tcp#redact_addresses),
[`redact_cluster_name`](/vault/docs/configuration/listener/tcp#redact_cluster_name) and
[`redact_version`](/vault/docs/configuration/listener/tcp#redact_version).
```hcl
ui = true
cluster_addr = "https://127.0.0.1:8201"
api_addr = "https://127.0.0.1:8200"
disable_mlock = true
storage "raft" {
path = "/path/to/raft/data"
node_id = "raft_node_1"
}
listener "tcp" {
address = "127.0.0.1:8200",
tls_cert_file = "/path/to/full-chain.pem"
tls_key_file = "/path/to/private-key.pem"
redact_addresses = "true"
redact_cluster_name = "true"
redact_version = "true"
}
telemetry {
statsite_address = "127.0.0.1:8125"
disable_hostname = true
}
```
### API: `/sys/health`
In the following call to `/sys/health/` notice that `cluster_name` and `version`
are both redacted. The `cluster_name` field is fully omitted from the response
and `version` is the empty string (`""`).
```shell-session
$ curl -s https://127.0.0.1:8200/v1/sys/health | jq
{
"initialized": true,
"sealed": false,
"standby": true,
"performance_standby": false,
"replication_performance_mode": "disabled",
"replication_dr_mode": "disabled",
"server_time_utc": 1696598650,
"version": "",
"cluster_id": "a1a7a078-0ae1-7fb9-41ec-2f4f583c773e"
}
```
### API: `sys/leader`
In the following call to `/sys/leader/` notice that `leader_address` and `leader_cluster_address`
are both redacted and set to the empty string (`""`).
```shell-session
$ curl -s https://127.0.0.1:8200/v1/sys/leader | jq
{
"ha_enabled": true,
"is_self": false,
"active_time": "0001-01-01T00:00:00Z",
"leader_address": "",
"leader_cluster_address": "",
"performance_standby": false,
"performance_standby_last_remote_wal": 0,
"raft_committed_index": 164,
"raft_applied_index": 164
}
```
### API: `sys/seal-status`
In the following call to `/sys/seal-status/` notice that `cluster_name`, `build_date`,
and `version` are all redacted. The `cluster_name` field is fully omitted from
the response while `build_date` and `version` are empty strings (`""`).
```shell-session
$ curl -s https://127.0.0.1:8200/v1/sys/seal-status | jq
{
"type": "shamir",
"initialized": true,
"sealed": false,
"t": 1,
"n": 1,
"progress": 0,
"nonce": "",
"version": "",
"build_date": "",
"migration": false,
"cluster_id": "a1a7a078-0ae1-7fb9-41ec-2f4f583c773e",
"recovery_seal": false,
"storage_type": "raft"
}
```
### CLI: `vault status`
The CLI command `vault status` uses endpoints that support redacting data, so the
output redacts `Version`, `Build Date`, `HA Cluster`, and `Active Node Address`.
`Version`, `Build Date`, `HA Cluster` show `n/a` because the underlying endpoint
returned the empty string, and `Active Node Address` shows as `<none>` because
it was omitted from the API response.
```shell-session
$ vault status
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version n/a
Build Date n/a
Storage Type raft
HA Enabled true
HA Cluster n/a
HA Mode standby
Active Node Address <none>
Raft Committed Index 219
Raft Applied Index 219
```
[golang-tls]: https://golang.org/src/crypto/tls/cipher_suites.go
[api-addr]: /vault/docs/configuration#api_addr
[cluster-addr]: /vault/docs/configuration#cluster_addr
[go-tls-blog]: https://go.dev/blog/tls-cipher-suites
response code will be returned For example with the config example below a 307 response would return 307 Custom header value while a 306 would return 3xx Custom header value hcl listener tcp custom response headers default X Custom Header default Custom header value 3xx X Custom Header 3xx Custom header value 307 X Custom Header 307 Custom header value Listening on all IPv6 IPv4 interfaces This example shows Vault listening on all IPv4 IPv6 interfaces including localhost hcl listener tcp address 8200 cluster address 8201 Listening to specific IPv6 address This example shows Vault only using IPv6 and binding to the interface with the IP address 2001 1c04 90d 1c00 a00 27ff fefa 58ec hcl listener tcp address 2001 1c04 90d 1c00 a00 27ff fefa 58ec 8200 cluster address 2001 1c04 90d 1c00 a00 27ff fefa 58ec 8201 Advertise the non loopback interface api addr https 2001 1c04 90d 1c00 a00 27ff fefa 58ec 8200 cluster addr https 2001 1c04 90d 1c00 a00 27ff fefa 58ec 8201 Redaction examples Please see redaction settings above for details on each redaction setting Example configuration for the tcp vault docs configuration listener tcp listener enabling redact addresses vault docs configuration listener tcp redact addresses redact cluster name vault docs configuration listener tcp redact cluster name and redact version vault docs configuration listener tcp redact version hcl ui true cluster addr https 127 0 0 1 8201 api addr https 127 0 0 1 8200 disable mlock true storage raft path path to raft data node id raft node 1 listener tcp address 127 0 0 1 8200 tls cert file path to full chain pem tls key file path to private key pem redact addresses true redact cluster name true redact version true telemetry statsite address 127 0 0 1 8125 disable hostname true API sys health In the following call to sys health notice that cluster name and version are both redacted The cluster name field is fully omitted from the response and version is the empty string shell session curl s https 127 0 0 1 8200 v1 sys health jq initialized true sealed false standby true performance standby false replication performance mode disabled replication dr mode disabled server time utc 1696598650 version cluster id a1a7a078 0ae1 7fb9 41ec 2f4f583c773e API sys leader In the following call to sys leader notice that leader address and leader cluster address are both redacted and set to the empty string shell session curl s https 127 0 0 1 8200 v1 sys leader jq ha enabled true is self false active time 0001 01 01T00 00 00Z leader address leader cluster address performance standby false performance standby last remote wal 0 raft committed index 164 raft applied index 164 API sys seal status In the following call to sys seal status notice that cluster name build date and version are all redacted The cluster name field is fully omitted from the response while build date and version are empty strings shell session curl s https 127 0 0 1 8200 v1 sys seal status jq type shamir initialized true sealed false t 1 n 1 progress 0 nonce version build date migration false cluster id a1a7a078 0ae1 7fb9 41ec 2f4f583c773e recovery seal false storage type raft CLI vault status The CLI command vault status uses endpoints that support redacting data so the output redacts Version Build Date HA Cluster and Active Node Address Version Build Date HA Cluster show n a because the underlying endpoint returned the empty string and Active Node Address shows as none because it was omitted from the API response shell session vault status Key Value Seal Type shamir 
Initialized true Sealed false Total Shares 5 Threshold 3 Version n a Build Date n a Storage Type raft HA Enabled true HA Cluster n a HA Mode standby Active Node Address none Raft Committed Index 219 Raft Applied Index 219 golang tls https golang org src crypto tls cipher suites go api addr vault docs configuration api addr cluster addr vault docs configuration cluster addr go tls blog https go dev blog tls cipher suites |
vault Example TCP listener configuration with TLS encryption You can configure your TCP listener to use specific versions of TLS and specific page title Configure TLS for your Vault TCP listener Configure TLS for your Vault TCP listener layout docs | ---
layout: docs
page_title: Configure TLS for your Vault TCP listener
description: >-
Example TCP listener configuration with TLS encryption.
---
# Configure TLS for your Vault TCP listener
You can configure your TCP listener to use specific versions of TLS and specific
ciphersuites.
## Assumptions
- **Your Vault instance is not currently running**. If your Vault cluster is
running, you must
[restart the cluster gracefully](https://support.hashicorp.com/hc/en-us/articles/17169701076371-A-Step-by-Step-Guide-to-Restarting-a-Vault-Cluster)
to apply changes to your TCP listener. SIGHUP will not reload your TLS
configuration.
- **You have a valid TLS certificate file**.
- **You have a valid TLS key file**.
- **You have a valid CA file (if required)**.
## Example TLS 1.3 configuration
If a reasonably modern set of clients is connecting to a Vault instance, you
can configure the `tcp` listener stanza to only accept TLS 1.3 with the
`tls_min_version` parameter:
<CodeBlockConfig hideClipboard highlight="5">
```plaintext
listener "tcp" {
address = "127.0.0.1:8200"
tls_cert_file = "cert.pem"
tls_key_file = "key.pem"
tls_min_version = "tls13"
}
```
</CodeBlockConfig>
Vault does not accept explicit ciphersuite configuration for TLS 1.3 because the
Go team has already designated a select set of ciphers that align with the
broadly-accepted Mozilla Security/Server Side TLS guidance for [modern TLS
configuration](https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility).
## Example TLS 1.2 configuration
To use TLS 1.2 with a non-default set of ciphersuites, you can set 1.2 as the
minimum and maximum allowed TLS version and explicitly define your preferred
ciphersuites with `tls_cipher_suites` and one or more of the ciphersuite
constants from the ciphersuite configuration parser. For example:
<CodeBlockConfig hideClipboard highlight="5-7">
```plaintext
listener "tcp" {
address = "127.0.0.1:8200"
tls_cert_file = "cert.pem"
tls_key_file = "key.pem"
tls_min_version = "tls12"
tls_max_version = "tls12"
tls_cipher_suites = "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
}
```
</CodeBlockConfig>
You must set the minimum and maximum TLS version to disable TLS 1.3, which does
not support explicit cipher selection. The priority order of the ciphersuites
in `tls_cipher_suites` is determined by the `tls` Go package.
<Note>
The TLS 1.2 configuration example excludes any 3DES ciphers to avoid potential
exposure to the Sweet32 attack (CVE-2016-2183). You should customize the
ciphersuite list as needed to meet your environment-specific security
requirements.
</Note>
## Verify your TLS configuration
You can verify your TLS configuration using an SSL scanner such as
[`sslscan`](https://github.com/rbsec/sslscan).
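If a dedicated scanner is not available, you can also spot-check individual
protocol versions with `openssl s_client`. This is a minimal sketch, assuming
the TLS 1.3-only listener from the first example is running locally on port
8200:

<CodeBlockConfig hideClipboard>

```shell-session
# Expect a successful handshake reporting the negotiated protocol and cipher.
$ openssl s_client -connect 127.0.0.1:8200 -tls1_3 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'

# Forcing TLS 1.2 against the TLS 1.3-only configuration should fail to handshake.
$ openssl s_client -connect 127.0.0.1:8200 -tls1_2 </dev/null
```

</CodeBlockConfig>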
<Tabs>
<Tab heading="Example scan with ECDSA certificate">
<CodeBlockConfig hideClipboard>
```shell-session
$ sslscan 127.0.0.1:8200
Version: 2.1.3
OpenSSL 3.2.1 30 Jan 2024
Connected to 127.0.0.1
Testing SSL server 127.0.0.1 on port 8200 using SNI name 127.0.0.1
SSL/TLS Protocols:
SSLv2 disabled
SSLv3 disabled
TLSv1.0 disabled
TLSv1.1 disabled
TLSv1.2 enabled
TLSv1.3 enabled
TLS Fallback SCSV:
Server supports TLS Fallback SCSV
TLS renegotiation:
Session renegotiation not supported
TLS Compression:
Compression disabled
Heartbleed:
TLSv1.3 not vulnerable to heartbleed
TLSv1.2 not vulnerable to heartbleed
Supported Server Cipher(s):
Preferred TLSv1.3 128 bits TLS_AES_128_GCM_SHA256 Curve 25519 DHE 253
Accepted TLSv1.3 256 bits TLS_AES_256_GCM_SHA384 Curve 25519 DHE 253
Accepted TLSv1.3 256 bits TLS_CHACHA20_POLY1305_SHA256 Curve 25519 DHE 253
Preferred TLSv1.2 128 bits ECDHE-ECDSA-AES128-GCM-SHA256 Curve 25519 DHE 253
Accepted TLSv1.2 256 bits ECDHE-ECDSA-AES256-GCM-SHA384 Curve 25519 DHE 253
Accepted TLSv1.2 256 bits ECDHE-ECDSA-CHACHA20-POLY1305 Curve 25519 DHE 253
Accepted TLSv1.2 128 bits ECDHE-ECDSA-AES128-SHA Curve 25519 DHE 253
Accepted TLSv1.2 256 bits ECDHE-ECDSA-AES256-SHA Curve 25519 DHE 253
Server Key Exchange Group(s):
TLSv1.3 128 bits secp256r1 (NIST P-256)
TLSv1.3 192 bits secp384r1 (NIST P-384)
TLSv1.3 260 bits secp521r1 (NIST P-521)
TLSv1.3 128 bits x25519
TLSv1.2 128 bits secp256r1 (NIST P-256)
TLSv1.2 192 bits secp384r1 (NIST P-384)
TLSv1.2 260 bits secp521r1 (NIST P-521)
TLSv1.2 128 bits x25519
SSL Certificate:
Signature Algorithm: ecdsa-with-SHA256
ECC Curve Name: prime256v1
ECC Key Strength: 128
Subject: localhost
Issuer: localhost
Not valid before: May 17 17:27:29 2024 GMT
Not valid after: Jun 16 17:27:29 2024 GMT
```
</CodeBlockConfig>
</Tab>
<Tab heading="Example scan with RSA certificate">
<CodeBlockConfig hideClipboard>
```shell-session
$ sslscan 127.0.0.1:8200
Testing SSL server 127.0.0.1 on port 8200 using SNI name 127.0.0.1
SSL/TLS Protocols:
SSLv2 disabled
SSLv3 disabled
TLSv1.0 disabled
TLSv1.1 disabled
TLSv1.2 enabled
TLSv1.3 enabled
Supported Server Cipher(s):
Preferred TLSv1.3 128 bits TLS_AES_128_GCM_SHA256 Curve 25519 DHE 253
Accepted TLSv1.3 256 bits TLS_AES_256_GCM_SHA384 Curve 25519 DHE 253
Accepted TLSv1.3 256 bits TLS_CHACHA20_POLY1305_SHA256 Curve 25519 DHE 253
Preferred TLSv1.2 128 bits ECDHE-RSA-AES128-GCM-SHA256 Curve 25519 DHE 253
Accepted TLSv1.2 256 bits ECDHE-RSA-AES256-GCM-SHA384 Curve 25519 DHE 253
Accepted TLSv1.2 256 bits ECDHE-RSA-CHACHA20-POLY1305 Curve 25519 DHE 253
Accepted TLSv1.2 128 bits ECDHE-RSA-AES128-SHA Curve 25519 DHE 253
Accepted TLSv1.2 256 bits ECDHE-RSA-AES256-SHA Curve 25519 DHE 253
Accepted TLSv1.2 128 bits AES128-GCM-SHA256
Accepted TLSv1.2 256 bits AES256-GCM-SHA384
Accepted TLSv1.2 128 bits AES128-SHA
Accepted TLSv1.2 256 bits AES256-SHA
Accepted TLSv1.2 112 bits TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
Accepted TLSv1.2 112 bits TLS_RSA_WITH_3DES_EDE_CBC_SHA
Server Key Exchange Group(s):
TLSv1.3 128 bits secp256r1 (NIST P-256)
TLSv1.3 192 bits secp384r1 (NIST P-384)
TLSv1.3 260 bits secp521r1 (NIST P-521)
TLSv1.3 128 bits x25519
TLSv1.2 128 bits secp256r1 (NIST P-256)
TLSv1.2 192 bits secp384r1 (NIST P-384)
TLSv1.2 260 bits secp521r1 (NIST P-521)
TLSv1.2 128 bits x25519
SSL Certificate:
Signature Algorithm: sha256WithRSAEncryption
RSA Key Strength: 4096
```
</CodeBlockConfig>
</Tab>
</Tabs>
---
layout: docs
page_title: PKCS11 - Seals - Configuration
description: |-
The PKCS11 seal configures Vault to use an HSM with PKCS11 as the seal
wrapping mechanism.
---
# `pkcs11` seal
<Note title="Auto-unseal and seal wrapping requires Vault Enterprise">
Auto-unseal **and** seal wrapping for PKCS11 require Vault Enterprise.
Vault Enterprise enables seal wrapping by default, which means the KMS service
must be available at runtime and not just during the unseal process. Refer to
the [Seal wrap](/vault/docs/enterprise/sealwrap) overview for more
information.
</Note>
The PKCS11 seal configures Vault to use an HSM with PKCS11 as the seal wrapping
mechanism. Vault Enterprise's HSM PKCS11 support is activated by one of the
following:
- The presence of a `seal "pkcs11"` block in Vault's configuration file
- The presence of the environment variable `VAULT_HSM_LIB` set to the library's
path as well as `VAULT_SEAL_TYPE` set to `pkcs11`. If enabling via environment
variable, all other required values (i.e. `VAULT_HSM_SLOT`) must be also
supplied.
**IMPORTANT**: Having Vault generate its own key is the easiest way to get up
and running, but for security, Vault marks the key as non-exportable. If your
HSM key backup strategy requires the key to be exportable, you should generate
the key yourself. The list of creation attributes that Vault uses to generate
the key are listed at the end of this document.
## Requirements
The following software packages are required for Vault Enterprise HSM:
- PKCS#11 compatible HSM integration library. Vault targets version 2.2 or
higher of PKCS#11. Depending on any given HSM, some functions (such as key
generation) may have to be performed manually.
- The [GNU libltdl
library](https://www.gnu.org/software/libtool/manual/html_node/Using-libltdl.html)
— ensure that it is installed for the correct architecture of your servers
## `pkcs11` example
This example shows configuring HSM PKCS11 seal through the Vault configuration
file by providing all the required values:
```hcl
seal "pkcs11" {
lib = "/usr/vault/lib/libCryptoki2_64.so"
slot = "2305843009213693953"
pin = "AAAA-BBBB-CCCC-DDDD"
key_label = "vault-hsm-key"
hmac_key_label = "vault-hsm-hmac-key"
}
```
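If you are unsure which slot value to use, a PKCS#11 utility can list the
available slots. This is a minimal sketch, assuming OpenSC's `pkcs11-tool` is
installed and reusing the library path from the example above; the `printf`
call converts a hex slot ID into the decimal form this stanza expects:

```shell-session
# List slots exposed by the HSM library (IDs are often shown in hex).
$ pkcs11-tool --module /usr/vault/lib/libCryptoki2_64.so --list-slots

# Convert a hex slot ID to the decimal value used in the seal stanza.
$ printf '%d\n' 0x2000000000000001
2305843009213693953
```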
## `pkcs11` parameters
These parameters apply to the `seal` stanza in the Vault configuration file:
- `lib` `(string: <required>)`: The path to the PKCS#11 library shared object
file. May also be specified by the `VAULT_HSM_LIB` environment variable.
~> **Note:** Depending on your HSM, the value of the `lib` parameter may be
either a binary or a dynamic library, and its use may require other libraries
depending on which system the Vault binary is currently running on (e.g.: a
Linux system may require other libraries to interpret Windows .dll files).
- `slot` `(string: <slot or token label required>)`: The slot number to use,
specified as a string (e.g. `"2305843009213693953"`). May also be specified by
the `VAULT_HSM_SLOT` environment variable.
~> **Note**: Slots are typically listed as hexadecimal values in the OS setup
  utility but this configuration uses their decimal equivalent. For example, using the
  HSM command-line `pkcs11-tool`, a slot listed as `0x2000000000000001` in hex is equal
  to `2305843009213693953` in decimal; these values may be listed shorter or
  differently as determined by the HSM in use.
- `token_label` `(string: <slot or token label required>)`: The slot token label to
use. May also be specified by the `VAULT_HSM_TOKEN_LABEL` environment variable.
- `pin` `(string: <required>)`: The PIN for login. May also be specified by the
`VAULT_HSM_PIN` environment variable. _If set via the environment variable,
it will need to be re-set if Vault is restarted._
- `key_label` `(string: <required>)`: The label of the key to use. If the key
does not exist and generation is enabled, this is the label that will be given
to the generated key. May also be specified by the `VAULT_HSM_KEY_LABEL`
environment variable.
- `default_key_label` `(string: "")`: This is the default key label for decryption
operations. Prior to 0.10.1, key labels were not stored with the ciphertext.
Seal entries now track the label used in encryption operations. The default value
for this field is the `key_label`. If `key_label` is rotated and this value is not
set, decryption may fail. May also be specified by the `VAULT_HSM_DEFAULT_KEY_LABEL`
environment variable. This value is ignored in new installations.
- `key_id` `(string: "")`: The ID of the key to use. The value should be a hexadecimal
string (e.g., "0x33333435363434373537"). May also be specified by the
`VAULT_HSM_KEY_ID` environment variable.
- `hmac_key_label` `(string: <required>)`: The label of the key to use for
HMACing. This needs to be a suitable type. If Vault tries to create this it
will attempt to use CKK_GENERIC_SECRET_KEY. If the key does not exist and
generation is enabled, this is the label that will be given to the generated
key. May also be specified by the `VAULT_HSM_HMAC_KEY_LABEL` environment
variable.
- `default_hmac_key_label` `(string: "")`: This is the default HMAC key label for signing
operations. Prior to 0.10.1, HMAC key labels were not stored with the signature.
Seal entries now track the label used in signing operations. The default value
for this field is the `hmac_key_label`. If `hmac_key_label` is rotated and this
value is not set, signature verification may fail. May also be specified by the
`VAULT_HSM_HMAC_DEFAULT_KEY_LABEL` environment variable. This value is ignored in
new installations.
- `hmac_key_id` `(string: "")`: The ID of the HMAC key to use. The value should be a
hexadecimal string (e.g., "0x33333435363434373537"). May also be specified by the
`VAULT_HSM_HMAC_KEY_ID` environment variable.
- `mechanism` `(string: <best available>)`: The encryption/decryption mechanism to use,
specified as a decimal or hexadecimal (prefixed by `0x`) string. May also be
specified by the `VAULT_HSM_MECHANISM` environment variable.
Currently supported mechanisms (in order of precedence):
- `0x1085` `CKM_AES_CBC_PAD` (HMAC mechanism required)
- `0x1082` `CKM_AES_CBC` (HMAC mechanism required)
- `0x1087` `CKM_AES_GCM`
- `0x0009` `CKM_RSA_PKCS_OAEP`
- `0x0001` `CKM_RSA_PKCS`
~> **Warning**: CKM_RSA_PKCS specifies the PKCS #1 v1.5 padding scheme, which is
  considered less secure than OAEP. Where possible, use of CKM_RSA_PKCS_OAEP is
  recommended over CKM_RSA_PKCS.
- `hmac_mechanism` `(string: "0x0251")`: The HMAC mechanism to use,
  specified as a decimal or hexadecimal (prefixed by `0x`) string.
Currently only `0x0251` (corresponding to `CKM_SHA256_HMAC` from the
specification) is supported. May also be specified by the
`VAULT_HSM_HMAC_MECHANISM` environment variable. This value is only required
for specific mechanisms.
- `generate_key` `(string: "false")`: If no existing key with the label
specified by `key_label` can be found at Vault initialization time, instructs
Vault to generate a key. This is a boolean expressed as a string (e.g.
`"true"`). May also be specified by the `VAULT_HSM_GENERATE_KEY` environment
variable. Vault may not be able to successfully generate keys in all
circumstances, such as if proprietary vendor extensions are required to
create keys of a suitable type.
~> **NOTE**: Once the initial key creation has occurred post cluster
initialization, it is advisable to disable this flag to prevent any
unintended key creation in the future.
- `force_rw_session` `(string: "false")`: Force all operations to open up
a read-write session to the HSM. This is a boolean expressed as a string (e.g.
`"true"`). May also be specified by the `VAULT_HSM_FORCE_RW_SESSION` environment
variable. This key is mainly to work around a limitation within AWS's CloudHSM v5
pkcs11 implementation.
- `max_parallel` `(int: 1)` - The number of concurrent requests that may be
in flight to the HSM at any given time.
- `disabled` `(string: "")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.
Refer to the [Seal Migration](/vault/docs/concepts/seal#seal-migration) documentation for more information about the seal migration process.
### Mechanism specific flags
- `rsa_encrypt_local` `(string: "false")`: For HSMs that do not support encryption
for RSA keys, perform encryption locally. Available for mechanisms
`CKM_RSA_PKCS_OAEP` and `CKM_RSA_PKCS`. May also be specified by the
`VAULT_HSM_RSA_ENCRYPT_LOCAL` environment variable.
- `rsa_oaep_hash` `(string: "sha256")`: Specify the hash algorithm to use for RSA
with OAEP padding. Valid values are sha1, sha224, sha256, sha384, and sha512.
Available for mechanism `CKM_RSA_PKCS_OAEP`. May also be specified by the
`VAULT_HSM_RSA_OAEP_HASH` environment variable.
~> **Note:** Although the configuration file allows you to pass in
`VAULT_HSM_PIN` as part of the seal's parameters, it is _strongly_ recommended
to set this value via environment variables.
## `pkcs11` environment variables
Alternatively, the HSM seal can be activated by providing the following
environment variables:
```text
VAULT_SEAL_TYPE
VAULT_HSM_LIB
VAULT_HSM_SLOT
VAULT_HSM_TOKEN_LABEL
VAULT_HSM_PIN
VAULT_HSM_KEY_LABEL
VAULT_HSM_DEFAULT_KEY_LABEL
VAULT_HSM_KEY_ID
VAULT_HSM_HMAC_KEY_LABEL
VAULT_HSM_HMAC_DEFAULT_KEY_LABEL
VAULT_HSM_HMAC_KEY_ID
VAULT_HSM_MECHANISM
VAULT_HSM_HMAC_MECHANISM
VAULT_HSM_GENERATE_KEY
VAULT_HSM_RSA_ENCRYPT_LOCAL
VAULT_HSM_RSA_OAEP_HASH
VAULT_HSM_FORCE_RW_SESSION
```
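For example, the file-based configuration shown earlier could be expressed
entirely through the environment. This is a sketch reusing the same placeholder
values; the PIN in particular is best supplied this way:

```shell-session
$ export VAULT_SEAL_TYPE=pkcs11
$ export VAULT_HSM_LIB=/usr/vault/lib/libCryptoki2_64.so
$ export VAULT_HSM_SLOT=2305843009213693953
$ export VAULT_HSM_PIN=AAAA-BBBB-CCCC-DDDD
$ export VAULT_HSM_KEY_LABEL=vault-hsm-key
$ export VAULT_HSM_HMAC_KEY_LABEL=vault-hsm-hmac-key

# Hypothetical path to the rest of your Vault configuration.
$ vault server -config=/etc/vault/config.hcl
```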
## Vault key generation attributes
If Vault generates the HSM key for you, the following is the list of attributes
it uses. These identifiers correspond to official PKCS#11 identifiers.
### AES key
- `CKA_CLASS`: `CKO_SECRET_KEY` (It's a secret key)
- `CKA_KEY_TYPE`: `CKK_AES` (Key type is AES)
- `CKA_VALUE_LEN`: `32` (Key size is 256 bits)
- `CKA_LABEL`: Set to the key label set in Vault's configuration
- `CKA_ID`: Set to a random 32-bit unsigned integer
- `CKA_PRIVATE`: `true` (Key is private to this slot/token)
- `CKA_TOKEN`: `true` (Key persists to the slot/token rather than being for one
session only)
- `CKA_SENSITIVE`: `true` (Key is a sensitive value)
- `CKA_ENCRYPT`: `true` (Key can be used for encryption)
- `CKA_DECRYPT`: `true` (Key can be used for decryption)
- `CKA_WRAP`: `true` (Key can be used for wrapping)
- `CKA_UNWRAP`: `true` (Key can be used for unwrapping)
- `CKA_EXTRACTABLE`: `false` (Key cannot be exported)
### RSA key
_Public Key_
- `CKA_CLASS`: `CKO_PUBLIC_KEY` (It's a public key)
- `CKA_KEY_TYPE`: `CKK_RSA` (Key type is RSA)
- `CKA_LABEL`: Set to the key label set in Vault's configuration
- `CKA_ID`: Set to a random 32-bit unsigned integer
- `CKA_ENCRYPT`: `true` (Key can be used for encryption)
- `CKA_WRAP`: `true` (Key can be used for wrapping)
- `CKA_MODULUS_BITS`: `2048` (Key size is 2048 bits)
- `CKA_PUBLIC_EXPONENT`: `0x10001` (Public exponent of 65537)
- `CKA_TOKEN`: `true` (Key persists to the slot/token rather than being for one
session only)
_Private Key_
- `CKA_CLASS`: `CKO_PRIVATE_KEY` (It's a private key)
- `CKA_KEY_TYPE`: `CKK_RSA` (Key type is RSA)
- `CKA_LABEL`: Set to the key label set in Vault's configuration
- `CKA_ID`: Set to a random 32-bit unsigned integer
- `CKA_DECRYPT`: `true` (Key can be used for decryption)
- `CKA_UNWRAP`: `true` (Key can be used for unwrapping)
- `CKA_TOKEN`: `true` (Key persists to the slot/token rather than being for one
session only)
- `CKA_EXTRACTABLE`: `false` (Key cannot be exported)
### HMAC key
- `CKA_CLASS`: `CKO_SECRET_KEY` (It's a secret key)
- `CKA_KEY_TYPE`: `CKK_GENERIC_SECRET_KEY` (Key type is a generic secret key)
- `CKA_VALUE_LEN`: `32` (Key size is 256 bits)
- `CKA_LABEL`: Set to the HMAC key label set in Vault's configuration
- `CKA_ID`: Set to a random 32-bit unsigned integer
- `CKA_PRIVATE`: `true` (Key is private to this slot/token)
- `CKA_TOKEN`: `true` (Key persists to the slot/token rather than being for one
session only)
- `CKA_SENSITIVE`: `true` (Key is a sensitive value)
- `CKA_SIGN`: `true` (Key can be used for signing)
- `CKA_VERIFY`: `true` (Key can be used for verifying)
- `CKA_EXTRACTABLE`: `false` (Key cannot be exported)
## Key rotation
This seal supports rotating keys by using different key labels to track key versions. To rotate
the key value, generate a new key under a different key label in the HSM and update Vault's
configuration with the new key label value. Restart your Vault instance to pick up the new key
label; all new encryption operations will use the updated key. Old keys must not be disabled
or deleted because they are still used to decrypt older data. Before disabling or deleting old
keys, Vault needs to perform a
[seal-rewrap](/vault/api-docs/system/sealwrap-rewrap#start-a-seal-rewrap-process)
so that data encrypted with the old key is re-encrypted with the new key.
**NOTE**: Prior to version 0.10.1, key information was not tracked with the ciphertext. If
rotation is desired for data that was seal wrapped prior to this version, you must also set
`default_key_label` and `default_hmac_key_label` to allow for decryption of older values.
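As an illustration of the rotation workflow, the configuration below assumes a
new key was generated under the hypothetical labels `vault-hsm-key-v2` and
`vault-hsm-hmac-key-v2`, while the `default_*` parameters keep pointing at the
original labels so values sealed before 0.10.1 can still be decrypted:

```hcl
seal "pkcs11" {
  lib  = "/usr/vault/lib/libCryptoki2_64.so"
  slot = "2305843009213693953"
  pin  = "AAAA-BBBB-CCCC-DDDD"

  # New key versions used for all new encryption operations.
  key_label      = "vault-hsm-key-v2"
  hmac_key_label = "vault-hsm-hmac-key-v2"

  # Original labels, needed only for data sealed before 0.10.1.
  default_key_label      = "vault-hsm-key"
  default_hmac_key_label = "vault-hsm-hmac-key"
}
```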
## Tutorial
Refer to the [HSM Integration - Seal Wrap](/vault/tutorials/auto-unseal/seal-wrap)
tutorial to learn how to enable the Seal Wrap feature to protect your data.
---
layout: docs
page_title: GCP Cloud KMS - Seals - Configuration
description: >-
The GCP Cloud KMS seal configures Vault to use GCP Cloud KMS as the seal
wrapping
mechanism.
---
# `gcpckms` seal
<Note title="Seal wrapping requires Vault Enterprise">
All Vault versions support **auto-unseal** for GCP Cloud KMS, but **seal wrapping**
requires Vault Enterprise.
Vault Enterprise enables seal wrapping by default, which means the KMS service
must be available at runtime and not just during the unseal process. Refer to
the [Seal wrap](/vault/docs/enterprise/sealwrap) overview for more
information.
</Note>
The GCP Cloud KMS seal configures Vault to use GCP Cloud KMS as the seal
wrapping mechanism. The GCP Cloud KMS seal is activated by one of the following:
- The presence of a `seal "gcpckms"` block in Vault's configuration file.
- The presence of the environment variable `VAULT_SEAL_TYPE` set to `gcpckms`.
  If enabling via environment variable, all other required values specific to
  Cloud KMS (i.e. `VAULT_GCPCKMS_SEAL_KEY_RING`, etc.) must also be supplied, as
  well as all other GCP-related environment variables needed for successful
  authentication (i.e. `GOOGLE_PROJECT`, etc.).
## `gcpckms` example
This example shows configuring GCP Cloud KMS seal through the Vault
configuration file by providing all the required values:
```hcl
seal "gcpckms" {
credentials = "/usr/vault/vault-project-user-creds.json"
project = "vault-project"
region = "global"
key_ring = "vault-keyring"
crypto_key = "vault-key"
}
```
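If the key ring and crypto key referenced above do not exist yet, you can
create them with the `gcloud` CLI. This is a minimal sketch reusing the names
from the example; adjust the project and location for your environment:

```shell-session
$ gcloud kms keyrings create vault-keyring \
    --project vault-project --location global

$ gcloud kms keys create vault-key \
    --project vault-project --location global \
    --keyring vault-keyring --purpose encryption
```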
## `gcpckms` parameters
These parameters apply to the `seal` stanza in the Vault configuration file:
- `credentials` `(string: <required>)`: The path to the credentials JSON file
to use. May be also specified by the `GOOGLE_CREDENTIALS` or
`GOOGLE_APPLICATION_CREDENTIALS` environment variable or set automatically if
running under Google App Engine, Google Compute Engine or Google Kubernetes
Engine.
- `project` `(string: <required>)`: The GCP project ID to use. May also be
specified by the `GOOGLE_PROJECT` environment variable.
- `region` `(string: <required>)`: The GCP region/location where the key ring
lives. May also be specified by the `GOOGLE_REGION` environment variable.
- `key_ring` `(string: <required>)`: The GCP CKMS key ring to use. May also be
specified by the `VAULT_GCPCKMS_SEAL_KEY_RING` environment variable.
- `crypto_key` `(string: <required>)`: The GCP CKMS crypto key to use for
encryption and decryption. May also be specified by the
`VAULT_GCPCKMS_SEAL_CRYPTO_KEY` environment variable.
- `disabled` `(string: "")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.
Refer to the [Seal Migration](/vault/docs/concepts/seal#seal-migration) documentation for more information about the seal migration process.
## Authentication & permissions
Authentication-related values must be provided, either as environment
variables or as configuration parameters.
GCP authentication values:
- `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS`
- `GOOGLE_PROJECT`
- `GOOGLE_REGION`
Note: The client uses the official Google SDK and will use the specified
credentials, environment credentials, or [application default
credentials](https://developers.google.com/identity/protocols/application-default-credentials)
in that order, if the above GCP specific values are not provided.
The service account needs the following minimum permissions on the crypto key:
```text
cloudkms.cryptoKeyVersions.useToEncrypt
cloudkms.cryptoKeyVersions.useToDecrypt
cloudkms.cryptoKeys.get
```
These permissions can be described with the following role and additional permission:
```text
roles/cloudkms.cryptoKeyEncrypterDecrypter
cloudkms.cryptoKeys.get
```
Vault uses the `cloudkms.cryptoKeys.get` permission to retrieve key metadata from Cloud KMS during seal initialization.
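As a hedged example, granting the role to a hypothetical service account
(`vault-unseal@vault-project.iam.gserviceaccount.com`) on the specific crypto
key might look like this:

```shell-session
$ gcloud kms keys add-iam-policy-binding vault-key \
    --project vault-project --location global --keyring vault-keyring \
    --member "serviceAccount:vault-unseal@vault-project.iam.gserviceaccount.com" \
    --role "roles/cloudkms.cryptoKeyEncrypterDecrypter"
```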
## `gcpckms` environment variables
Alternatively, the GCP Cloud KMS seal can be activated by providing the following
environment variables:
- `VAULT_SEAL_TYPE`
- `VAULT_GCPCKMS_SEAL_KEY_RING`
- `VAULT_GCPCKMS_SEAL_CRYPTO_KEY`
## Key rotation
This seal supports rotating keys defined in Google Cloud KMS
[doc](https://cloud.google.com/kms/docs/rotating-keys). Both scheduled rotation and manual
rotation is supported for CKMS since the key information. Old keys version must not be
disabled or deleted and are used to decrypt older data. Any new or updated data will be
encrypted with the primary key version.
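For example, scheduled rotation can be configured on the crypto key itself
with the `gcloud` CLI. This sketch sets a 90-day rotation period; verify the
flags against your `gcloud` version:

```shell-session
$ gcloud kms keys update vault-key \
    --project vault-project --location global --keyring vault-keyring \
    --rotation-period 90d \
    --next-rotation-time 2025-01-01T00:00:00Z
```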
## Tutorial
Refer to the [Auto-unseal using GCP Cloud KMS](/vault/tutorials/auto-unseal/autounseal-gcp-kms)
guide for a step-by-step tutorial.
---
layout: docs
page_title: Vault Transit - Seals - Configuration
description: |-
The Transit seal configures Vault to use Vault's Transit Secret Engine as the
autoseal mechanism.
---
# `transit` seal
<Note title="Seal wrap functionality requires Vault Enterprise">
All Vault versions support **auto-unseal** for Transit, but **seal wrapping**
requires Vault Enterprise.
Vault Enterprise enables seal wrapping by default, which means the KMS service
must be available at runtime and not just during the unseal process. Refer to
the [Seal wrap](/vault/docs/enterprise/sealwrap) overview for more
information.
</Note>
The Transit seal configures Vault to use Vault's Transit Secret Engine as the
autoseal mechanism.
The Transit seal is activated by one of the following:
- The presence of a `seal "transit"` block in Vault's configuration file
- The presence of the environment variable `VAULT_SEAL_TYPE` set to `transit`.
## `transit` example
This example shows configuring Transit seal through the Vault configuration file
by providing all the required values:
```hcl
seal "transit" {
address = "https://vault:8200"
token = "s.Qf1s5zigZ4OX6akYjQXJC1jY"
disable_renewal = "false"
// Key configuration
key_name = "transit_key_name"
mount_path = "transit/"
namespace = "ns1/"
// TLS Configuration
tls_ca_cert = "/etc/vault/ca_cert.pem"
tls_client_cert = "/etc/vault/client_cert.pem"
  tls_client_key = "/etc/vault/client_key.pem"
tls_server_name = "vault"
tls_skip_verify = "false"
}
```
## `transit` parameters
These parameters apply to the `seal` stanza in the Vault configuration file:
- `address` `(string: <required>)`: The full address to the Vault cluster.
This may also be specified by the `VAULT_ADDR` environment variable.
- `token` `(string: <required>)`: The Vault token to use. This may also be
specified by the `VAULT_TOKEN` environment variable.
- `key_name` `(string: <required>)`: The transit key to use for encryption and
decryption. This may also be supplied using the `VAULT_TRANSIT_SEAL_KEY_NAME`
environment variable.
- `key_id_prefix` `(string: "")`: An optional string to add to the key id
of values wrapped by this transit seal. This can help disambiguate between
two transit seals.
- `mount_path` `(string: <required>)`: The mount path to the transit secret engine.
This may also be supplied using the `VAULT_TRANSIT_SEAL_MOUNT_PATH` environment
variable.
- `namespace` `(string: "")`: The namespace path to the transit secret engine.
This may also be supplied using the `VAULT_NAMESPACE` environment variable.
- `disable_renewal` `(string: "false")`: Disables the automatic renewal of the token
in case the lifecycle of the token is managed with some other mechanism outside of
Vault, such as Vault Agent. This may also be specified using the
`VAULT_TRANSIT_SEAL_DISABLE_RENEWAL` environment variable.
- `tls_ca_cert` `(string: "")`: Specifies the path to the CA certificate file used
for communication with the Vault server. This may also be specified using the
`VAULT_CACERT` environment variable.
- `tls_client_cert` `(string: "")`: Specifies the path to the client certificate
for communication with the Vault server. This may also be specified using the
`VAULT_CLIENT_CERT` environment variable.
- `tls_client_key` `(string: "")`: Specifies the path to the private key for
communication with the Vault server. This may also be specified using the
`VAULT_CLIENT_KEY` environment variable.
- `tls_server_name` `(string: "")`: Name to use as the SNI host when connecting
to the Vault server via TLS. This may also be specified via the
`VAULT_TLS_SERVER_NAME` environment variable.
- `tls_skip_verify` `(bool: false)`: Disable verification of TLS certificates.
Using this option is highly discouraged and decreases the security of data
transmissions to and from the Vault server. This may also be specified using the
`VAULT_SKIP_VERIFY` environment variable.
- `disabled` `(string: "")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.
Refer to the [Seal Migration](/vault/docs/concepts/seal#seal-migration) documentation for more information about the seal migration process.
## Authentication
Authentication-related values must be provided, either as environment
variables or as configuration parameters.
~> **Note:** Although the configuration file allows you to pass in
`VAULT_TOKEN` as part of the seal's parameters, it is _strongly_ recommended
to set these values via environment variables.
The Vault token used to authenticate needs the following permissions on the
transit key:
```hcl
path "<mount path>/encrypt/<key name>" {
capabilities = ["update"]
}
path "<mount path>/decrypt/<key name>" {
capabilities = ["update"]
}
```
Other considerations for the token used:
- It should probably be an [orphan token](/vault/docs/concepts/tokens#token-hierarchies-and-orphan-tokens);
  otherwise, the seal will break when the parent token expires or is revoked.
- Consider making it a [periodic token](/vault/docs/concepts/tokens#periodic-tokens)
  without an explicit max TTL; otherwise, at some point it will cease to be renewable
  (a sketch of minting such a token follows).
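For example, a token with these properties could be minted as follows. This is a minimal sketch: the policy name `transit-unseal` and the local policy file are illustrative assumptions, not names Vault requires.

```shell
# Assumes the encrypt/decrypt policy shown above is saved as transit-unseal.hcl.
vault policy write transit-unseal transit-unseal.hcl

# Mint an orphan, periodic token: it survives revocation of its parent,
# and it stays renewable indefinitely because no explicit max TTL is set.
vault token create -orphan -policy=transit-unseal -period=24h
```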
## Key rotation
This seal supports key rotation using the Transit Secret Engine's
[key rotation endpoints](/vault/api-docs/secret/transit#rotate-key). Old keys must not be
disabled or deleted; they are used to decrypt older data.
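For instance, with the example configuration above (mount path `transit/`, key `transit_key_name`), the key can be rotated from the CLI:

```shell
# Rotate the transit key; Vault retains older key versions so that
# previously wrapped values can still be decrypted.
vault write -f transit/keys/transit_key_name/rotate
```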
## Tutorial
Refer to the [Auto-unseal using Transit Secrets Engine](/vault/tutorials/auto-unseal/autounseal-transit)
tutorial to learn how to use the transit secrets engine to automatically unseal Vault.
---
layout: docs
page_title: AliCloud KMS - Seals - Configuration
description: >-
The AliCloud KMS seal configures Vault to use AliCloud KMS as the seal
wrapping
mechanism.
---
# `alicloudkms` seal
<Note title="Seal wrapping requires Vault Enterprise">
All Vault versions support **auto-unseal** for AliCloud, but **seal wrapping**
requires Vault Enterprise.
Vault Enterprise enables seal wrapping by default, which means the KMS service
must be available at runtime and not just during the unseal process. Refer to
the [Seal wrap](/vault/docs/enterprise/sealwrap) overview for more
information.
</Note>
The AliCloud KMS seal configures Vault to use AliCloud KMS as the seal wrapping mechanism.
The AliCloud KMS seal is activated by one of the following:
- The presence of a `seal "alicloudkms"` block in Vault's configuration file.
- The presence of the environment variable `VAULT_SEAL_TYPE` set to `alicloudkms`. If
  enabling via environment variable, all other required values specific to AliCloud
  KMS (i.e. `VAULT_ALICLOUDKMS_SEAL_KEY_ID`) must also be supplied, as well as all
  other AliCloud-related environment variables that lend to successful
  authentication.
## `alicloudkms` example
This example shows configuring AliCloud KMS seal through the Vault configuration file
by providing all the required values:
```hcl
seal "alicloudkms" {
region = "us-east-1"
access_key = "0wNEpMMlzy7szvai"
secret_key = "PupkTg8jdmau1cXxYacgE736PJj4cA"
kms_key_id = "08c33a6f-4e0a-4a1b-a3fa-7ddfa1d4fb73"
}
```
## `alicloudkms` parameters
These parameters apply to the `seal` stanza in the Vault configuration file:
- `region` `(string: <required>)`: The AliCloud region where the encryption key
  lives, for example `us-east-1`. May also be specified by the `ALICLOUD_REGION`
  environment variable.
- `domain` `(string: "kms.us-east-1.aliyuncs.com")`: If set, overrides the endpoint
AliCloud would normally use for KMS for a particular region. May also be specified
by the `ALICLOUD_DOMAIN` environment variable.
- `access_key` `(string: <required>)`: The AliCloud access key ID to use. May also be
specified by the `ALICLOUD_ACCESS_KEY` environment variable or as part of the
AliCloud profile from the AliCloud CLI or instance profile.
- `secret_key` `(string: <required>)`: The AliCloud secret access key to use. May
also be specified by the `ALICLOUD_SECRET_KEY` environment variable or as
part of the AliCloud profile from the AliCloud CLI or instance profile.
- `kms_key_id` `(string: <required>)`: The AliCloud KMS key ID to use for encryption
and decryption. May also be specified by the `VAULT_ALICLOUDKMS_SEAL_KEY_ID`
environment variable.
- `disabled` `(string: "")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.
Refer to the [Seal Migration](/vault/docs/concepts/seal#seal-migration) documentation for more information about the seal migration process.
## Authentication
Authentication-related values must be provided, either as environment
variables or as configuration parameters.
~> **Note:** Although the configuration file allows you to pass in
`ALICLOUD_ACCESS_KEY` and `ALICLOUD_SECRET_KEY` as part of the seal's parameters, it
is _strongly_ recommended to set these values via environment variables.
AliCloud authentication values:

- `ALICLOUD_REGION`
- `ALICLOUD_ACCESS_KEY`
- `ALICLOUD_SECRET_KEY`
Note: The client uses the official AliCloud SDK and will use environment credentials,
the specified credentials, or RAM role credentials in that order.
## `alicloudkms` environment variables
Alternatively, the AliCloud KMS seal can be activated by providing the following
environment variables (a startup sketch follows the list).

Vault Seal specific values:

- `VAULT_SEAL_TYPE`
- `VAULT_ALICLOUDKMS_SEAL_KEY_ID`
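As a minimal startup sketch, assuming credentials are provided through the AliCloud environment variables and that the configuration file path shown is illustrative:

```shell
# Activate the AliCloud KMS seal entirely through the environment.
export VAULT_SEAL_TYPE="alicloudkms"
export VAULT_ALICLOUDKMS_SEAL_KEY_ID="08c33a6f-4e0a-4a1b-a3fa-7ddfa1d4fb73"
export ALICLOUD_REGION="us-east-1"
export ALICLOUD_ACCESS_KEY="<access-key>"
export ALICLOUD_SECRET_KEY="<secret-key>"

vault server -config=/etc/vault/config.hcl
```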
---
layout: docs
page_title: OCI KMS - Seals - Configuration
description: |-
The OCI KMS seal configures Vault to use OCI KMS as the seal wrapping
mechanism.
---
# `ocikms` seal
<Note title="Seal wrapping requires Vault Enterprise">
All Vault versions support **auto-unseal** for OCI KMS, but **seal wrapping**
requires Vault Enterprise.
Vault Enterprise enables seal wrapping by default, which means the KMS service
must be available at runtime and not just during the unseal process. Refer to
the [Seal wrap](/vault/docs/enterprise/sealwrap) overview for more
information.
</Note>
The OCI KMS seal configures Vault to use OCI KMS as the seal wrapping mechanism.
The OCI KMS seal is activated by one of the following:
- The presence of a `seal "ocikms"` block in Vault's configuration file
- The presence of the environment variable `VAULT_SEAL_TYPE` set to `ocikms`. If
  enabling via environment variable, all other required values specific to OCI
  KMS (i.e. `VAULT_OCIKMS_SEAL_KEY_ID`, `VAULT_OCIKMS_CRYPTO_ENDPOINT`, and
  `VAULT_OCIKMS_MANAGEMENT_ENDPOINT`) must also be supplied, as well as all
  other OCI-related [environment variables][oci-sdk] that lend to successful
  authentication.
## `ocikms` example
This example shows configuring the OCI KMS seal through the Vault configuration file
by providing all the required values:
```hcl
seal "ocikms" {
key_id = "ocid1.key.oc1.iad.afnxza26aag4s.abzwkljsbapzb2nrha5nt3s7s7p42ctcrcj72vn3kq5qx"
crypto_endpoint = "https://afnxza26aag4s-crypto.kms.us-ashburn-1.oraclecloud.com"
management_endpoint = "https://afnxza26aag4s-management.kms.us-ashburn-1.oraclecloud.com"
auth_type_api_key = "true"
}
```
## `ocikms` parameters
These parameters apply to the `seal` stanza in the Vault configuration file:
- `key_id` `(string: <required>)`: The OCI KMS key ID to use. May also be
specified by the `VAULT_OCIKMS_SEAL_KEY_ID` environment variable.
- `crypto_endpoint` `(string: <required>)`: The OCI KMS cryptographic endpoint (or data plane endpoint)
to be used to make OCI KMS encryption/decryption requests. May also be specified by the `VAULT_OCIKMS_CRYPTO_ENDPOINT` environment
variable.
- `management_endpoint` `(string: <required>)`: The OCI KMS management endpoint (or control plane endpoint)
to be used to make OCI KMS key management requests. May also be specified by the `VAULT_OCIKMS_MANAGEMENT_ENDPOINT` environment
variable.
- `auth_type_api_key` `(boolean: false)`: Specifies whether to authenticate to the OCI KMS service with an API key.
  If `false` (the default), Vault authenticates using the instance principal of the compute instance. See the Authentication section for details.
- `disabled` `(string: "")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.
Refer to the [Seal Migration](/vault/docs/concepts/seal#seal-migration) documentation for more information about the seal migration process.
## Authentication
Authentication-related values must be provided, either as environment
variables or as configuration parameters.
If you want to use Instance Principal, add the configuration section below, plus any further configuration settings as detailed in the [configuration docs](/vault/docs/configuration/).
```hcl
seal "ocikms" {
crypto_endpoint = "<kms-crypto-endpoint>"
management_endpoint = "<kms-management-endpoint>"
key_id = "<kms-key-id>"
}
# Notes:
# crypto_endpoint can be replaced by VAULT_OCIKMS_CRYPTO_ENDPOINT environment var
# management_endpoint can be replaced by VAULT_OCIKMS_MANAGEMENT_ENDPOINT environment var
# key_id can be replaced by VAULT_OCIKMS_SEAL_KEY_ID environment var
```
If you want to use User Principal, the plugin uses the API key you defined for the OCI SDK, typically under `~/.oci/config`.
```hcl
seal "ocikms" {
auth_type_api_key = true
crypto_endpoint = "<kms-crypto-endpoint>"
management_endpoint = "<kms-management-endpoint>"
key_id = "<kms-key-id>"
}
```
To grant permission for a compute instance to use the OCI KMS service, write policies for KMS access.

- Create a [Dynamic Group][oci-dg] in your OCI tenancy.
- Create a policy that allows the Dynamic Group to use or manage keys from OCI KMS. There are multiple ways to write these policies. The [OCI Identity Policy][oci-id] documentation can be used as a reference or starting point.

The most common policy allows a dynamic group in tenant A to use KMS keys in tenant B. Write the first statement in tenant A (the dynamic group's home tenancy) and the second in tenant B (where the keys live):
```text
define tenancy tenantB as <tenantB-ocid>
endorse dynamic-group <dynamic-group-name> to use keys in tenancy tenantB
```
```text
define tenancy tenantA as <tenantA-ocid>
define dynamic-group <dynamic-group-name> as <dynamic-group-ocid>
admit dynamic-group <dynamic-group-name> of tenancy tenantA to use keys in compartment <key-compartment>
```
## `ocikms` rotate OCI KMS master key
For the [OCI KMS key rotation feature][oci-kms-rotation], OCI KMS creates a new version of the key internally. This process is independent of Vault, and Vault continues to use the same `key_id` without interruption.

If you want to change the `key_id`: migrate to Shamir, change the `key_id`, and then migrate back to OCI KMS with the new `key_id` (see the sketch below).
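As a hedged sketch of the first step: mark the existing seal as disabled, restart Vault, and run `vault operator unseal -migrate` on each node with the recovery keys. The placeholder values mirror the examples above.

```hcl
# Step 1: disable the current OCI KMS seal so Vault can migrate to Shamir.
# After migrating back, replace this stanza with one pointing at the new key_id.
seal "ocikms" {
  disabled            = "true"
  key_id              = "<old-kms-key-id>"
  crypto_endpoint     = "<kms-crypto-endpoint>"
  management_endpoint = "<kms-management-endpoint>"
}
```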
[oci-sdk]: https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm
[oci-dg]: https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingdynamicgroups.htm
[oci-id]: https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/policies.htm
[oci-kms-rotation]: https://docs.cloud.oracle.com/iaas/Content/KeyManagement/Tasks/managingkeys.htm
---
layout: docs
page_title: Seal best practices
description: >-
The recommended pattern and best practices for unsealing a production Vault cluster.
---
# Seal best practices
This documentation explains the concepts, options, and considerations for unsealing a production Vault cluster. It builds on the [Reference Architecture](/vault/tutorials/raft/raft-reference-architecture) and [Deployment Guide](/vault/tutorials/day-one-raft/raft-deployment-guide) for Vault to deliver a pattern for a common Vault use case.
## Vault unseal
Once Vault is installed and configured according to the [Deployment Guide](/vault/tutorials/day-one-raft/raft-deployment-guide), Vault starts in a sealed state.

Because Vault always starts in a sealed state, the first decision point is your implementation strategy for unsealing. Unsealing is the process by which your Vault root key is used to decrypt the data encryption key that Vault uses to encrypt all data. For obvious security reasons, Vault neither keeps nor knows the root key, so this is the function of the unsealing process: to present the root key to Vault.

Vault Community Edition supports Shamir and cloud auto-unseal methods for most major cloud providers. Vault Enterprise also offers a hardware security module (HSM) unseal.
There are several considerations when deciding on an unseal strategy.
<Tip>
Refer to the [seal/unseal](/vault/docs/concepts/seal) documentation to learn more about the concepts and reasoning behind Vault sealing.
</Tip>
## Operator overhead
The default unseal method uses [Shamir's Secret Sharing algorithm](https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing) to split the key into shards so that there is never a single root key. This method relies on multiple operators (each holding their own key shard) being available to unseal Vault, so it may not be ideal in an enterprise solution.
If this method is employed, the recommendation is to put additional operational processes in place, such as:
- Quarterly unseal drills to make sure all operators can respond.
- Key shards should be stored in secure locations and further encrypted using personal encryption. Vault provides for this in the [init](/vault/docs/commands/operator/init) command with flags to PGP-encrypt the unseal keys and root token (see the example after this list).
- Key holder key access is tied to enterprise user lifecycle management to ensure the process is responsive to staffing changes.
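A minimal sketch of PGP-protected initialization; the Keybase usernames are placeholders for your actual key holders:

```shell
# Initialize with 5 shards and a threshold of 3. Each shard is encrypted to
# one key holder's PGP key, and the initial root token to the lead operator's.
vault operator init \
  -key-shares=5 \
  -key-threshold=3 \
  -pgp-keys="keybase:alice,keybase:bob,keybase:carol,keybase:dan,keybase:erin" \
  -root-token-pgp-key="keybase:alice"
```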
## Cloud provider
If your Vault implementation is in a public cloud or has access to one, you may have access to a secure Key Management Service (KMS), and Vault can take advantage of this to store the root key and retrieve it from there. This option is easy to use but relies on access to a public cloud.
Considerations for using this method include:
- **Security policy:** Does your security policy allow for secrets to be stored on a public cloud?
- **Business continuity:** Some enterprises may have policies around vendor reliance for business continuity reasons.
- You need to put additional security in place for the cloud provider access keys required to read the key store.
## Access to an HSM
If you have access to an HSM, Vault provides a way to store and retrieve the root key using the `pkcs11` configuration block in the `seal` stanza. This method offers considerable security as all parts of the secrets management infrastructure are within business control, but there are other considerations:
- Security policies must manage access to and security of the HSM.
- You need to put additional security in place for the HSM PIN that Vault needs to access the HSM.
## Cloud provider auto-unseal
Major cloud providers offer a cryptographic key management system, which Vault can use to provide the root key for the unseal operation.
- [AWS KMS](/vault/docs/configuration/seal/awskms)
- [Azure Key Vault](/vault/docs/configuration/seal/azurekeyvault)
- [GCP Cloud KMS](/vault/docs/configuration/seal/gcpckms)
- [AliCloud KMS](/vault/docs/configuration/seal/alicloudkms)
- [OCI KMS](/vault/docs/configuration/seal/ocikms)
For all of these cloud-provider methods of auto-unseal, the high-level principles are the same. Rather than having a root key protected by splitting it into shards and then distributing them securely, the root key is generated and stored in the cloud-provider key management platform offering. Vault is configured to retrieve this key on startup and unseal automatically.
Using a cloud provider to auto-unseal has security implications around the trust of the provider.
### Use AWS KMS for auto-unseal
When using AWS, the AWS Key Management Service can store the root key and provide it to the Vault cluster at startup. There are two steps:
1. Generate an AWS KMS key
2. Configure Vault with access to use this key for encryption and decryption
For the first step, you can generate an AWS KMS key in whatever way you usually provision your AWS infrastructure and services. The result is a key with a key ID. Best practice is to allow only minimal administrative access to this key and to allow only Vault to use it. You should also enable CloudTrail audit logs for the KMS key and validate that nothing else tries to access it. With AWS KMS, you can generate the key with your own key material or allow AWS to provide it (the default method).
The second step is to add the key ID and the AWS region of the key to the Vault configuration file inside the `seal` stanza, as shown below. Vault looks for keys in `us-east-1` by default, but it is good practice to add the region key/value even if the key is in that region.
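A minimal sketch of such a stanza, reusing the illustrative key ID from the seal documentation:

```hcl
# Auto-unseal with an AWS KMS key. Credentials come from the instance
# profile, so no access keys need to appear in the configuration file.
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey"
}
```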
As access to your KMS keys is limited by default, you will need to allow Vault to access the key, which you can do with Instance Profiles on the EC2 instances that run Vault. Best practice is to run Vault on its own instances and not co-host other services. The Instance Profile needs a role with a policy granting `kms:Encrypt`, `kms:Decrypt`, and `kms:DescribeKey`. Tie this policy to the single encryption key ARN in the policy JSON's `Resource` section:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
"Resource": ["${kms_arn}"]
}
]
}
```
## Use HSM to unseal Vault
It is also possible to unseal Vault using either an on-premises HSM or a cloud-hosted HSM. The unseal method is similar to using a KMS and is documented in the [HSM section](/vault/docs/configuration/seal/pkcs11) of the seal docs. You can see a useful walkthrough video of this process by [SafenetAT](https://www.youtube.com/watch?v=3LyWfN9fWFE). You can also watch this [HashiCorp training video](https://www.hashicorp.com/resources/hashicorp-and-aws-integrating-cloudhsm-with-vault-e) on using AWS CloudHSM.
AWS also provides a high-level guide on [Securing and Managing secrets with HashiCorp Vault](https://aws.amazon.com/blogs/apn/securing-and-managing-secrets-with-hashicorp-vault-enterprise/) with its offering of both CloudHSM and KMS.
## Recovery keys
When Vault is initialized while using HSM or cloud-provided external keys for sealing, it returns several recovery keys. These are still required for highly privileged actions, such as generating new root keys. A section in the HSM docs addresses [recovery keys](/vault/docs/enterprise/hsm/behavior#recovery-key).
## Changing seal method
You can change the seal method using [seal migration](/vault/docs/concepts/seal#seal-migration).
## Reference material
- [Reference Architecture](/vault/tutorials/raft/raft-reference-architecture) covers the recommended production Vault cluster architecture
- [Deployment Guide](/vault/tutorials/day-one-raft/raft-deployment-guide) covers how to install and configure Vault for production use
- Recommended Pattern - Vault Centralized Secrets Management
- [K/V Secrets Engine](/vault/docs/secrets/kv) is used to store static secrets within the configured physical storage for Vault
- [Auth Methods](/vault/docs/auth) are used to authenticate users and machines with Vault
- [Auto unseal tutorials](/vault/tutorials/auto-unseal)
- [Consul Template](https://github.com/hashicorp/consul-template) is used to access static secrets stored in Vault and provide them to the applications and services that require them.
---
layout: docs
page_title: AWS KMS - Seals - Configuration
description: |-
The AWS KMS seal configures Vault to use AWS KMS as the seal wrapping
mechanism.
---
# `awskms` seal
<Note title="Seal wrapping requires Vault Enterprise">
All Vault versions support **auto-unseal** for AWS, but **seal wrapping**
requires Vault Enterprise.
Vault Enterprise enables seal wrapping by default, which means the KMS service
must be available at runtime and not just during the unseal process. Refer to
the [Seal wrap](/vault/docs/enterprise/sealwrap) overview for more
information.
</Note>
The AWS KMS seal configures Vault to use AWS KMS as the seal wrapping mechanism.
The AWS KMS seal is activated by one of the following:
- The presence of a `seal "awskms"` block in Vault's configuration file
- The presence of the environment variable `VAULT_SEAL_TYPE` set to `awskms`. If
  enabling via environment variable, all other required values specific to AWS
  KMS (i.e. `VAULT_AWSKMS_SEAL_KEY_ID`) must also be supplied, as well as all
  other AWS-related environment variables that lend to successful
  authentication (i.e. `AWS_ACCESS_KEY_ID`, etc.).
## `awskms` example
This example shows configuring AWS KMS seal through the Vault configuration file
by providing all the required values:
```hcl
seal "awskms" {
region = "us-east-1"
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey"
endpoint = "https://vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com"
}
```
## `awskms` parameters
These parameters apply to the `seal` stanza in the Vault configuration file:
- `region` `(string: "us-east-1")`: The AWS region where the encryption key
lives. If not provided, may be populated from the `AWS_REGION` or
`AWS_DEFAULT_REGION` environment variables, from your `~/.aws/config` file,
or from instance metadata.
- `access_key` `(string: <required>)`: The AWS access key ID to use. May also be
specified by the `AWS_ACCESS_KEY_ID` environment variable or as part of the
AWS profile from the AWS CLI or instance profile.
- `session_token` `(string: "")`: Specifies the AWS session token. This can
also be provided via the environment variable `AWS_SESSION_TOKEN`.
- `secret_key` `(string: <required>)`: The AWS secret access key to use. May
also be specified by the `AWS_SECRET_ACCESS_KEY` environment variable or as
part of the AWS profile from the AWS CLI or instance profile.
- `kms_key_id` `(string: <required>)`: The AWS KMS key ID or ARN to use for encryption
and decryption. May also be specified by the `VAULT_AWSKMS_SEAL_KEY_ID`
environment variable. An alias in the format `alias/key-alias-name` may also be used here.
- `disabled` `(string: "")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.
  Refer to the [Seal Migration](/vault/docs/concepts/seal#seal-migration) documentation for more information about the seal migration process.
- `endpoint` `(string: "")`: The KMS API endpoint to be used to make AWS KMS
  requests. May also be specified by the `AWS_KMS_ENDPOINT` environment
  variable. This is useful, for example, when connecting to KMS over a [VPC
  Endpoint](https://docs.aws.amazon.com/kms/latest/developerguide/kms-vpc-endpoint.html).
  If not set, Vault will use the default API endpoint for your region.
## Authentication
Authentication-related values must be provided, either as environment
variables or as configuration parameters.
~> **Note:** Although the configuration file allows you to pass in
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as part of the seal's parameters, it
is _strongly_ recommended to set these values via environment variables.
AWS authentication values:
- `AWS_REGION` or `AWS_DEFAULT_REGION`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
Note: The client uses the official AWS SDK and will use the specified
credentials, environment credentials, shared file credentials, or IAM role/ECS
task credentials, in that order, if the above AWS-specific values are not
provided.
Vault needs the following permissions on the KMS key:
- `kms:Encrypt`
- `kms:Decrypt`
- `kms:DescribeKey`
These can be granted via IAM permissions on the principal that Vault uses, on
the KMS key policy for the KMS key, or via KMS Grants on the key.
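For example, the grant-based option might look like the following AWS CLI call; the role ARN is a placeholder for the principal Vault runs as:

```shell
# Grant the Vault instance role only the three operations the seal needs.
aws kms create-grant \
  --key-id 19ec80b0-dfdd-4d97-8164-c6examplekey \
  --grantee-principal arn:aws:iam::123456789012:role/vault-server \
  --operations Encrypt Decrypt DescribeKey
```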
## `awskms` environment variables
Alternatively, the AWS KMS seal can be activated by providing the following
environment variables (a startup sketch follows the list).
Vault Seal specific values:
- `VAULT_SEAL_TYPE`
- `VAULT_AWSKMS_SEAL_KEY_ID`
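A minimal startup sketch, assuming the remaining AWS credentials come from the instance profile and that the configuration file path shown is illustrative:

```shell
export VAULT_SEAL_TYPE="awskms"
export VAULT_AWSKMS_SEAL_KEY_ID="19ec80b0-dfdd-4d97-8164-c6examplekey"
export AWS_REGION="us-east-1"

vault server -config=/etc/vault/config.hcl
```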
## Key rotation
This seal supports rotating the root keys defined in AWS KMS, as described in the
[AWS KMS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html). Both automatic
rotation and manual rotation are supported, since the key information is stored with the
encrypted data. Old keys must not be disabled or deleted; they are used to decrypt older data.
Any new or updated data is encrypted with the current key defined in the seal configuration
or set as current under a key alias.
## AWS instance metadata timeout
@include 'aws-imds-timeout.mdx'
## Tutorial
Refer to the [Auto-unseal using AWS KMS](/vault/tutorials/auto-unseal/autounseal-aws-kms)
tutorial to learn how to auto-unseal Vault using AWS KMS.
---
layout: docs
page_title: Azure Key Vault - Seals - Configuration
description: >-
The Azure Key Vault seal configures Vault to use Azure Key Vault as the seal
wrapping
mechanism.
---
# `azurekeyvault` seal
<Note title="Seal wrapping requires Vault Enterprise">
All Vault versions support **auto-unseal** for Azure Key Vault, but
**seal wrapping** requires Vault Enterprise.
Vault Enterprise enables seal wrapping by default, which means the KMS service
must be available at runtime and not just during the unseal process. Refer to
the [Seal wrap](/vault/docs/enterprise/sealwrap) overview for more
information.
</Note>
The Azure Key Vault seal configures Vault to use Azure Key Vault as the seal
wrapping mechanism. The Azure Key Vault seal is activated by one of the following:
- The presence of a `seal "azurekeyvault"` block in Vault's configuration file.
- The presence of the environment variable `VAULT_SEAL_TYPE` set to `azurekeyvault`.
If enabling via environment variable, all other required values specific to
Key Vault (e.g. `VAULT_AZUREKEYVAULT_VAULT_NAME`) must also be supplied, as
well as any other Azure-related environment variables needed for successful
authentication (e.g. `AZURE_TENANT_ID`).
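For reference, a minimal sketch of activating the seal purely through the
environment rather than a `seal` stanza (values and config path hypothetical):

```shell-session
$ export VAULT_SEAL_TYPE="azurekeyvault"
$ export VAULT_AZUREKEYVAULT_VAULT_NAME="hc-vault"
$ export VAULT_AZUREKEYVAULT_KEY_NAME="vault_key"
$ export AZURE_TENANT_ID="46646709-b63e-4747-be42-516edeaf1e14"
$ export AZURE_CLIENT_ID="03dc33fc-16d9-4b77-8152-3ec568f8af6e"
$ export AZURE_CLIENT_SECRET="DUJDS3..."
$ vault server -config=/etc/vault.d/vault.hcl
```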
## `azurekeyvault` example
This example shows configuring Azure Key Vault seal through the Vault
configuration file by providing all the required values:
```hcl
seal "azurekeyvault" {
tenant_id = "46646709-b63e-4747-be42-516edeaf1e14"
client_id = "03dc33fc-16d9-4b77-8152-3ec568f8af6e"
client_secret = "DUJDS3..."
vault_name = "hc-vault"
key_name = "vault_key"
}
```
## `azurekeyvault` parameters
These parameters apply to the `seal` stanza in the Vault configuration file:
- `tenant_id` `(string: <required>)`: The tenant id for the Azure Active Directory organization. May
also be specified by the `AZURE_TENANT_ID` environment variable.
- `client_id` `(string: <required or MSI>)`: The client id for credentials to query the Azure APIs.
May also be specified by the `AZURE_CLIENT_ID` environment variable.
- `client_secret` `(string: <required or MSI>)`: The client secret for credentials to query the Azure APIs.
May also be specified by the `AZURE_CLIENT_SECRET` environment variable.
- `environment` `(string: "AZUREPUBLICCLOUD")`: The Azure Cloud environment API endpoints to use. May also
be specified by the `AZURE_ENVIRONMENT` environment variable.
- `vault_name` `(string: <required>)`: The Key Vault vault containing the keys used for encryption and
  decryption. May also be specified by the `VAULT_AZUREKEYVAULT_VAULT_NAME` environment variable.
- `key_name` `(string: <required>)`: The Key Vault key to use for encryption and decryption. May also be specified by the
`VAULT_AZUREKEYVAULT_KEY_NAME` environment variable.
- `resource` `(string: "vault.azure.net")`: The Azure Key Vault resource's DNS suffix to connect to.
  May also be specified in the `AZURE_AD_RESOURCE` environment variable.
  This must be changed to connect to Azure's Managed HSM Key Vault instance type.
- `disabled` `(string: "")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.
Refer to the [Seal Migration](/vault/docs/concepts/seal#seal-migration) documentation for more information about the seal migration process.
## Authentication
Authentication-related values must be provided, either as environment
variables or as configuration parameters.
Azure authentication values:
- `AZURE_TENANT_ID`
- `AZURE_CLIENT_ID`
- `AZURE_CLIENT_SECRET`
- `AZURE_ENVIRONMENT`
- `AZURE_AD_RESOURCE`
~> **Note:** If Vault is hosted on Azure, Vault can use Managed Service
Identities (MSI) to access Azure instead of an environment and shared client id
and secret. MSI must be
[enabled](https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/qs-configure-portal-windows-vm)
on the VMs hosting Vault, and it is the preferred configuration since MSI
prevents your Azure credentials from being stored as clear text. Refer to the
[Production
Hardening](/vault/tutorials/operations/production-hardening) tutorial
for more best practices.
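Assuming MSI is enabled on the host VM, a configuration sketch might omit the
shared client credentials entirely (values hypothetical):

```hcl
seal "azurekeyvault" {
  tenant_id  = "46646709-b63e-4747-be42-516edeaf1e14"
  vault_name = "hc-vault"
  key_name   = "vault_key"
  # client_id and client_secret omitted; Vault authenticates through the
  # VM's managed identity instead of a shared secret.
}
```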
-> **Note:** If you are using a Managed HSM KeyVault, `AZURE_AD_RESOURCE` or the `resource`
configuration parameter must be specified; usually this should point to `managedhsm.azure.net`,
but it could point to other suffixes depending on the Azure environment.
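For a Managed HSM instance, a hedged sketch of the `resource` override
(vault and key names hypothetical):

```hcl
seal "azurekeyvault" {
  tenant_id  = "46646709-b63e-4747-be42-516edeaf1e14"
  vault_name = "hc-managed-hsm"
  key_name   = "vault_key"
  # Point at the Managed HSM DNS suffix instead of the default vault.azure.net.
  resource   = "managedhsm.azure.net"
}
```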
## `azurekeyvault` environment variables
Alternatively, the Azure Key Vault seal can be activated by providing the following
environment variables:
- `VAULT_AZUREKEYVAULT_VAULT_NAME`
- `VAULT_AZUREKEYVAULT_KEY_NAME`
## Key rotation
This seal supports rotating keys defined in Azure Key Vault. Key metadata is
stored with the encrypted data to ensure the correct key is used during
decryption operations. Simply [set up Azure Key Vault with key
rotation](https://docs.microsoft.com/en-us/azure/key-vault/key-vault-key-rotation-log-monitoring)
using an Azure Automation account, and Vault will recognize newly rotated keys.
## Tutorial
Refer to the [Auto-unseal using Azure Key Vault](/vault/tutorials/auto-unseal/autounseal-azure-keyvault)
tutorial to learn how to use the Azure Key Vault to auto-unseal a Vault server.
---
layout: docs
page_title: Seal High Availability - Seals - Configuration
description: |-
How to configure multiple Seals for high availability.
---
# Seal High Availability
@include 'alerts/enterprise-only.mdx'
[Seal High Availability](/vault/docs/concepts/seal#seal-high-availability-enterprise)
provides the means to configure at least two auto-seals (and no more than three)
in order to have resilience against outage of a seal service or mechanism.
Shamir seals cannot be used in a Seal HA setup.
Using Seal HA involves configuring extra seals in Vault's server configuration file
and restarting Vault, or triggering a reload of its configuration by sending
it the SIGHUP signal.
Before using Seal HA, one must have upgraded to Vault 1.16 or higher. Seal HA is enabled
by adding the following line to Vault's configuration:
```hcl
enable_multiseal = true
```
## Adding and Removing Seals
In order to use Seal HA, there must be more than one defined [`seal` stanza](/vault/docs/configuration/seal)
in Vault's configuration.
Seal HA adds two fields to these stanzas, `name`, and `priority`:
```hcl
seal [TYPE] {
name = "seal_name"
priority = "1"
# ...
}
```
Name is optional, and if not specified is set to the type of the seal. Names
must be unique. If using two seals of the same type, a name must be specified for each.
Internally, name is used to disambiguate seal wrapped values in some cases,
so renaming seals should be avoided if possible. Many seal types can use
environment variables instead of configuration lines to provide sensitive
values. Because there may be two seals of the same type, one must
disambiguate the environment variables used. To do this, in HA setups,
append an underscore followed by the seal's configured name (matching its
case) to any environment variable names. For example, in the sample
configuration below, the AWS access key could be provided as `ACCESS_KEY_aws_east`.
Keep in mind that the seal name must be valid in an environment variable name to use it.
Priority is mandatory if more than one seal is specified. Priority tells Vault
the order in which to try seals during unseal (lowest priority first); the
order in which to attempt decryption when more than one seal can unwrap a
seal-wrapped value; and the order in which to source entropy for entropy
augmentation. This can be useful if your seals have different
performance or cost characteristics.
Here is a hypothetical configuration for an [AWS seal](/vault/docs/configuration/seal/awskms)
compatible with Seal HA:
```hcl
seal "awskms" {
name = "aws_east"
priority = "1"
region = "us-east-1"
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey"
}
```
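Following the naming rule above, a sketch of supplying the same values through
suffixed environment variables instead of the configuration file; the
`VAULT_AWSKMS_SEAL_KEY_ID` base name comes from the AWS KMS seal
documentation, and the suffixed forms are assumptions derived from that rule:

```shell-session
$ export ACCESS_KEY_aws_east="AKIAIOSFODNN7EXAMPLE"
$ export VAULT_AWSKMS_SEAL_KEY_ID_aws_east="19ec80b0-dfdd-4d97-8164-c6examplekey"
```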
All configured, healthy seals are used to seal wrap values. This means that
for every write of a seal wrapped value or CSP, an encryption is requested
from every configured seal, and the results are stored in the storage entry.
When seals are unhealthy, Vault keeps track of values that could not be fully
wrapped and will re-wrap them once seals become healthy again. Note, however,
that it is not possible to rotate the data encryption key nor the recovery keys
while seals are unavailable. Disabled seals can still be used for decryption
of wrapped values, but will be avoided when encrypting values.
When reading a CSP or seal wrapped value, Vault will try to decrypt with the
highest priority available seal, and then try other seals on failure.
To add an additional seal, simply add another seal stanza, specifying priority
and optionally name, and restart Vault.
To remove a seal, remove the corresponding seal stanza and restart. There must
be at least one seal remaining.
It is highly recommended to take a snapshot of your Vault storage before applying
any seal configuration change.
Once Vault unseals with the new seal configuration, it will be available to process
traffic even as re-wrapping proceeds.
Here is a partial snippet of a two seal HA setup, using an AWS KMS seal as
the primary (highest priority) seal and an Azure Key Vault seal as the secondary:
```hcl
seal "awskms" {
name = "AWS"
priority = "1"
# ...
}
seal "azurekeyvault" {
name = "Azure"
priority = "2"
# ...
}
```
### Safety checks
Vault will reject seal configuration changes in the following circumstances,
as a safety mechanism:
* The old seal configuration and new seal configuration do not share one seal
in common. This is necessary as there would be no seal capable of decrypting
CSPs or seal wrapped values previously written.
* Seal re-wrapping is in progress. Vault must be in a clean, fully wrapped state
on the previous configuration before attempting a configuration change.
* More than one seal is being added or removed at a time.
In rare circumstances it may become impossible to update seal configuration
without triggering the safety checks. If this happens, it is possible to bypass
the checks by setting the environment variable `VAULT_SEAL_REWRAP_SAFETY` to
`disable`.
~> **Warning**: The use of environment variable `VAULT_SEAL_REWRAP_SAFETY`
should be considered as a last resort.
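If you must use the bypass, it amounts to the following (config path
hypothetical); treat it strictly as a last resort:

```shell-session
$ export VAULT_SEAL_REWRAP_SAFETY=disable
$ vault server -config=/etc/vault.d/vault.hcl
```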
### Interaction with Shamir Seals
Seal HA is only supported with auto seal mechanisms. To use Seal HA when
running a Shamir seal, first use the traditional
[seal migration](/vault/docs/concepts/seal#seal-migration) mechanism to migrate to
an auto seal of your choice. Afterwards you may follow the above
instructions to add a second auto seal.
Correspondingly, to migrate back to a shamir seal, first use the above
instructions to move to a single auto seal, and use the traditional
migration method to migrate back to a Shamir seal.
### Removing Seal HA
Migrating back to a single seal may result in data loss if the procedure below
is not followed. To migrate back to a single seal:
1. Perform a [seal migration](/vault/docs/concepts/seal#seal-migration) as described.
2. Monitor [`sys/sealwrap/rewrap`](/vault/api-docs/system/sealwrap-rewrap) until the API returns `fully_wrapped=true`.
3. Remove `enable_multiseal` from all Vault configuration files in the cluster.
4. Restart Vault.
---
layout: docs
page_title: S3 - Storage Backends - Configuration
description: |-
The S3 storage backend is used to persist Vault's data in an Amazon S3
bucket.
---
# S3 storage backend
The S3 storage backend is used to persist Vault's data in an [Amazon S3][s3]
bucket.
- **No High Availability** – the S3 storage backend does not support high
availability.
- **Community Supported** – the S3 storage backend is supported by the
community. While it has undergone review by HashiCorp employees, they may not
be as knowledgeable about the technology. If you encounter problems with it,
you may be referred to the original author.
```hcl
storage "s3" {
access_key = "abcd1234"
secret_key = "defg5678"
bucket = "my-bucket"
}
```
## `s3` parameters
- `bucket` `(string: <required>)` – Specifies the name of the S3 bucket. This
can also be provided via the environment variable `AWS_S3_BUCKET`.
- `endpoint` `(string: "")` – Specifies an alternative, AWS compatible, S3
endpoint. This can also be provided via the environment variable
`AWS_S3_ENDPOINT`.
- `region` `(string: "us-east-1")` – Specifies the AWS region. This can also be
provided via the environment variable `AWS_REGION` or `AWS_DEFAULT_REGION`,
in that order of preference.
The following settings are used for authenticating to AWS. If you are
running your Vault server on an EC2 instance, you can also make use of the EC2
instance profile service to provide the credentials Vault will use to make
S3 API calls. Leaving the `access_key` and `secret_key` fields empty will
cause Vault to attempt to retrieve credentials from the AWS metadata service.
- `access_key` – Specifies the AWS access key. This can also be provided via
the environment variable `AWS_ACCESS_KEY_ID`, AWS credential files, or by
IAM role.
- `secret_key` – Specifies the AWS secret key. This can also be provided via
the environment variable `AWS_SECRET_ACCESS_KEY`, AWS credential files, or
by IAM role.
- `session_token` `(string: "")` – Specifies the AWS session token. This can
also be provided via the environment variable `AWS_SESSION_TOKEN`.
- `max_parallel` `(string: "128")` – Specifies the maximum number of concurrent
requests to S3.
- `s3_force_path_style` `(string: "false")` - Specifies whether to use host
bucket style domains with the configured endpoint.
- `disable_ssl` `(string: "false")` - Specifies whether to disable SSL for the
  endpoint connection (highly recommended not to disable for production).
- `kms_key_id` `(string: "")` - Specifies the ID or Alias of the KMS key used to
encrypt data in the S3 backend. Vault must have `kms:Encrypt`, `kms:Decrypt`
and `kms:GenerateDataKey` permissions for this KMS key. You can use
`alias/aws/s3` to specify the default key for the account.
- `path` `(string: "")` - Specifies the path in the S3 Bucket where Vault
data will be stored.
## `s3` examples
### Default example
This example shows using Amazon S3 as a storage backend.
```hcl
storage "s3" {
access_key = "abcd1234"
secret_key = "defg5678"
bucket = "my-bucket"
}
```
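### EC2 instance profile

If Vault runs on an EC2 instance, a hedged sketch that omits static credentials
and relies on the instance profile described above:

```hcl
storage "s3" {
  bucket = "my-bucket"
  region = "us-west-2"
  # access_key and secret_key omitted; Vault falls back to the EC2
  # instance metadata service for credentials.
}
```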
### S3 KMS encryption with default key
This example shows using Amazon S3 as a storage backend using KMS
encryption with the default S3 KMS key for the account.
```hcl
storage "s3" {
access_key = "abcd1234"
secret_key = "defg5678"
bucket = "my-bucket"
kms_key_id = "alias/aws/s3"
}
```
### S3 KMS encryption with custom key
This example shows using Amazon S3 as a storage backend using KMS
encryption with a customer managed KMS key.
```hcl
storage "s3" {
access_key = "abcd1234"
secret_key = "defg5678"
bucket = "my-bucket"
kms_key_id = "001234ac-72d3-9902-a3fc-0123456789ab"
}
```
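### AWS-compatible endpoint

A sketch of pointing the backend at an S3-compatible service such as MinIO
(the endpoint is hypothetical); such services typically require path-style
addressing:

```hcl
storage "s3" {
  access_key          = "abcd1234"
  secret_key          = "defg5678"
  bucket              = "my-bucket"
  endpoint            = "https://minio.internal.example.com"
  s3_force_path_style = "true"
}
```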
[s3]: https://aws.amazon.com/s3/
## AWS instance metadata timeouts
@include 'aws-imds-timeout.mdx'
---
layout: docs
page_title: FoundationDB - Storage Backends - Configuration
description: |-
The FoundationDB storage backend is used to persist Vault's data in the
FoundationDB KV store.
---
# FoundationDB storage backend
The FoundationDB storage backend is used to persist Vault's data in
[FoundationDB][foundationdb].
The backend needs to be explicitly enabled at build time, and is not available
in the standard Vault binary distribution. Please refer to the documentation
accompanying the backend's source in the Vault source tree.
- **High Availability** – the FoundationDB storage backend supports high
availability. The HA implementation relies on the clocks of the Vault
nodes inside the cluster being properly synchronized; clock skew can
cause contention on the locks.
- **Community Supported** – the FoundationDB storage backend is supported
by the community. While it has undergone review by HashiCorp employees,
they may not be as knowledgeable about the technology. If you encounter
problems with it, you may be referred to the original author.
```hcl
storage "foundationdb" {
api_version = 520
cluster_file = "/path/to/fdb.cluster"
tls_verify_peers = "I.CN=MyTrustedIssuer,I.O=MyCompany\, Inc.,I.OU=Certification Authority"
tls_ca_file = "/path/to/ca_bundle.pem"
tls_cert_file = "/path/to/cert.pem"
tls_key_file = "/path/to/key.pem"
tls_password = "PrivateKeyPassword"
path = "vault-top-level-directory"
ha_enabled = "true"
}
```
## `foundationdb` parameters
- `api_version` `(int)` - The FoundationDB API version to use; this is a
required parameter and doesn't have a default value. The minimum required API
version is 520.
- `cluster_file` `(string)` - The path to the cluster file containing the
connection data for the target cluster; this is a required parameter and
doesn't have a default value.
- `tls_verify_peers` `(string)` - The peer certificate verification criteria;
this parameter is mandatory if TLS is enabled. Refer to the [FoundationDB TLS][fdb-tls] documentation.
- `tls_ca_file` `(string)` - The path to the CA certificate bundle file; this
parameter is mandatory if TLS is enabled.
- `tls_cert_file` `(string)` - The path to the certificate file; specifying this
parameter together with `tls_key_file` will enable TLS support.
- `tls_key_file` `(string)` - The path to the key file; specifying this
parameter together with `tls_cert_file` will enable TLS support.
- `tls_password` `(string)` - The password needed to decrypt `tls_key_file`, if
it is encrypted; optional. This can also be specified via the
`FDB_TLS_PASSWORD` environment variable.
- `path` `(string: "vault")` - The path of the top-level FoundationDB directory
(using the directory layer) under which the Vault data will reside.
- `ha_enabled` `(string: "false")` - Whether or not to enable Vault
high-availability mode using the FoundationDB backend.
## `foundationdb` tips
### Cluster file
The FoundationDB client expects to be able to update the cluster file at
runtime, to keep it current with changes happening to the cluster.
It does so by first writing a new cluster file alongside the current one,
then atomically renaming it into place.
This means the cluster file and the directory it resides in must be writable
by the user Vault is running as. You probably want to isolate the cluster
file into its own directory.
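A minimal sketch of that isolation, assuming Vault runs as a `vault` user and
the stock cluster file lives at `/etc/foundationdb/fdb.cluster` (both paths
hypothetical):

```shell-session
$ sudo mkdir /etc/foundationdb-vault
$ sudo cp /etc/foundationdb/fdb.cluster /etc/foundationdb-vault/
$ sudo chown -R vault:vault /etc/foundationdb-vault
```

The `cluster_file` parameter would then point at
`/etc/foundationdb-vault/fdb.cluster`.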
### Multi-version client
The FoundationDB client library version is tightly coupled to the server
version; during cluster upgrades, multiple server versions will be running
in the cluster, and the client must cope with that situation.
This is handled by the (primary) client library having the ability to load
a different, later version of the client library to connect to a particular
server; it is referred to as the [multi-version client][multi-ver-client]
feature.
#### Client setup with `LD_LIBRARY_PATH`
If you do not use mlock, you can use `LD_LIBRARY_PATH` to point the linker at
the location of the primary client library.
```shell-session
$ export LD_LIBRARY_PATH=/dest/dir/for/primary:$LD_LIBRARY_PATH
$ export FDB_NETWORK_OPTION_EXTERNAL_CLIENT_DIRECTORY=/dest/dir/for/secondary
$ /path/to/bin/vault ...
```
#### Client setup with `RPATH`
When running Vault with mlock, the Vault binary must have capabilities set to
allow the use of mlock.
```
# setcap cap_ipc_lock=+ep /path/to/bin/vault
$ getcap /path/to/bin/vault
/path/to/bin/vault = cap_ipc_lock+ep
```
The presence of the capabilities will cause the linker to ignore
`LD_LIBRARY_PATH`, for security reasons.
In that case, we have to set an `RPATH` on the Vault binary at build time
to replace the use of `LD_LIBRARY_PATH`.
When building Vault, pass the `-r /dest/dir/for/primary` option to the Go
linker, for instance:
```shell-session
$ make dev FDB_ENABLED=1 LD_FLAGS="-r /dest/dir/for/primary "
```
(Note the trailing space in the variable value above).
You can verify `RPATH` is set on the Vault binary using `readelf`:
```shell-session
$ readelf -d /path/to/bin/vault | grep RPATH
0x000000000000000f (RPATH) Library rpath: [/dest/dir/for/primary]
```
With the client libraries installed:
```shell-session
$ ldd /path/to/bin/vault
...
libfdb_c.so => /dest/dir/for/primary/libfdb_c.so (0x00007f270ad05000)
...
```
Now run Vault:
```shell-session
$ export FDB_NETWORK_OPTION_EXTERNAL_CLIENT_DIRECTORY=/dest/dir/for/secondary
$ /path/to/bin/vault ...
```
[foundationdb]: https://www.foundationdb.org
[fdb-tls]: https://apple.github.io/foundationdb/tls.html
[multi-ver-client]: https://apple.github.io/foundationdb/api-general.html#multi-version-client-api
---
layout: docs
page_title: Etcd - Storage Backends - Configuration
description: |-
The Etcd storage backend is used to persist Vault's data in Etcd. It supports
both the v2 and v3 Etcd APIs, and the version is automatically detected based
on the version of the Etcd cluster.
---
# Etcd storage backend
The Etcd storage backend is used to persist Vault's data in [Etcd][etcd]. It
supports both the v2 and v3 Etcd APIs, and the version is automatically detected
based on the version of the Etcd cluster.
~> The Etcd v2 API has been deprecated with the release of Etcd v3.5, and will
be decommissioned by Etcd v3.6. Support for the Etcd v2 API will be removed in Vault 1.10.
Users of the Etcd storage backend should prepare to
[migrate](/vault/docs/commands/operator/migrate) Vault storage to an Etcd v3 cluster
prior to upgrading to Vault 1.10. All storage migrations should have
[backups](/vault/docs/concepts/storage#backing-up-vault-s-persisted-data) taken prior
to migration.
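For reference, a hedged sketch of an `operator migrate` configuration for
moving data from the v2 to the v3 API (addresses and paths hypothetical):

```hcl
# migrate.hcl -- run offline with: vault operator migrate -config=migrate.hcl
storage_source "etcd" {
  address  = "http://localhost:2379"
  etcd_api = "v2"
}

storage_destination "etcd" {
  address  = "http://localhost:2379"
  etcd_api = "v3"
}
```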
- **High Availability** – the Etcd storage backend supports high availability.
The v2 API has known issues with HA support and should not be used in HA
scenarios.
- **Community Supported** – the Etcd storage backend is supported by CoreOS.
While it has undergone review by HashiCorp employees, they may not be as
knowledgeable about the technology. If you encounter problems with it, you
may be referred to the original author.
```hcl
storage "etcd" {
address = "http://localhost:2379"
etcd_api = "v3"
}
```
## `etcd` parameters
- `address` `(string: "http://localhost:2379")` – Specifies the addresses of the
Etcd instances as a comma-separated list. This can also be provided via the
environment variable `ETCD_ADDR`.
- `discovery_srv` `(string: "example.com")` - Specifies the domain name to
query for SRV records describing cluster endpoints. This can also be provided
via the environment variable `ETCD_DISCOVERY_SRV`.
- `discovery_srv_name` `(string: "vault")` - Specifies the service name to use
when querying for SRV records describing cluster endpoints. This can also be
provided via the environment variable `ETCD_DISCOVERY_SRV_NAME`.
- `etcd_api` `(string: "<varies>")` – Specifies the version of the API to
communicate with. By default, this is derived automatically. If the cluster
version is 3.1+ and there has been no data written using the v2 API, the
auto-detected default is v3.
- `ha_enabled` `(string: "false")` – Specifies if high availability should be
enabled. This can also be provided via the environment variable
`ETCD_HA_ENABLED`.
- `path` `(string: "/vault/")` – Specifies the path in Etcd where Vault data will
be stored.
- `sync` `(string: "true")` – Specifies whether to sync the list of available
Etcd services on startup. This is a string that is coerced into a boolean
value. You may want to set this to false if your cluster is behind a proxy
server and syncing causes Vault to fail.
- `username` `(string: "")` – Specifies the username to use when authenticating
with the Etcd server. This can also be provided via the environment variable
`ETCD_USERNAME`.
- `password` `(string: "")` – Specifies the password to use when authenticating
with the Etcd server. This can also be provided via the environment variable
`ETCD_PASSWORD`.
- `tls_ca_file` `(string: "")` – Specifies the path to the CA certificate used
for Etcd communication. This defaults to system bundle if not specified.
- `tls_cert_file` `(string: "")` – Specifies the path to the certificate for
Etcd communication.
- `tls_key_file` `(string: "")` – Specifies the path to the private key for Etcd
communication.
- `request_timeout` `(string: "5s")` – Specifies timeout for requests
to etcd. 5 seconds should be long enough for most cases, even with internal
retry.
- `lock_timeout` `(string: "15s")` – Specifies the lock timeout for the active (master)
  Vault instance. Set a larger value if you do not need faster recovery.
- `max_receive_size` `(int)` – Specifies the client-side response receive limit.
Make sure that "max_receive_size" >= server-side default send/recv limit.
("--max-request-bytes" flag to etcd or "embed.Config.MaxRequestBytes").
- `max_send_size` `(int)` – Specifies the client-side request send limit in bytes.
Make sure that "max_send_size" < server-side default send/recv limit.
("--max-request-bytes" flag to etcd or "embed.Config.MaxRequestBytes").
## `etcd` Examples
### DNS discovery of cluster members
This example configures vault to discover the Etcd cluster members via SRV
records as outlined in the
[DNS Discovery protocol documentation][dns discovery].
```hcl
storage "etcd" {
discovery_srv = "example.com"
}
```
### Custom authentication
This example shows connecting to the Etcd cluster using a username and password.
```hcl
storage "etcd" {
username = "user1234"
password = "pass5678"
}
```
### Custom path
This example shows storing data in a custom path.
```hcl
storage "etcd" {
path = "my-vault-data/"
}
```
### Enabling high availability
This example shows enabling high availability for the Etcd storage backend.
```hcl
api_addr = "https://vault-leader.my-company.internal"
storage "etcd" {
ha_enabled = "true"
...
}
```
[etcd]: https://coreos.com/etcd 'Etcd by CoreOS'
[dns discovery]: https://coreos.com/etcd/docs/latest/op-guide/clustering.html#dns-discovery 'Etcd cluster DNS Discovery'
---
layout: docs
page_title: DynamoDB - Storage Backends - Configuration
description: |-
The DynamoDB storage backend is used to persist Vault's data in a DynamoDB
table.
---
# DynamoDB storage backend
The DynamoDB storage backend is used to persist Vault's data in a
[DynamoDB][dynamodb] table.
- **High Availability** – the DynamoDB storage backend supports high
availability. Because DynamoDB uses the time on the Vault node to implement
the session lifetimes on its locks, significant clock skew across Vault nodes
could cause contention issues on the lock.
- **Community Supported** – the DynamoDB storage backend is supported by the
community. While it has undergone review by HashiCorp employees, they may not
be as knowledgeable about the technology. If you encounter problems with this
storage backend, you could be referred to the original author for support.
```hcl
storage "dynamodb" {
ha_enabled = "true"
region = "us-west-2"
table = "vault-data"
}
```
For more information about the read/write capacity of DynamoDB tables, please
see the [official AWS DynamoDB documentation][dynamodb-rw-capacity].
## DynamoDB parameters
- `endpoint` `(string: "")` – Specifies an alternative, AWS compatible, DynamoDB
endpoint. This can also be provided via the environment variable
`AWS_DYNAMODB_ENDPOINT`.
- `ha_enabled` `(string: "false")` – Specifies whether this backend should be used
to run Vault in high availability mode. Valid values are "true" or "false". This
can also be provided via the environment variable `DYNAMODB_HA_ENABLED`.
- `max_parallel` `(string: "128")` – Specifies the maximum number of concurrent
requests.
- `region` `(string: "us-east-1")` – Specifies the AWS region. This can also be
provided via the environment variable `AWS_DEFAULT_REGION`.
- `read_capacity` `(int: 5)` – Specifies the maximum number of reads consumed
per second on the table, for use if Vault creates the DynamoDB table. This has
no effect if the `table` already exists. This can also be provided via the
environment variable `AWS_DYNAMODB_READ_CAPACITY`.
- `table` `(string: "vault-dynamodb-backend")` – Specifies the name of the
DynamoDB table in which to store Vault data. If the specified table does not
yet exist, it will be created during initialization. This can also be
provided via the environment variable `AWS_DYNAMODB_TABLE`. See the
information on the table schema below.
- `write_capacity` `(int: 5)` – Specifies the maximum number of writes performed
per second on the table, for use if Vault creates the DynamoDB table. This value
has no effect if the `table` already exists. This can also be provided via the
environment variable `AWS_DYNAMODB_WRITE_CAPACITY`.
The following settings are used for authenticating to AWS. If you are
running your Vault server on an EC2 instance, you can also make use of the EC2
instance profile service to provide the credentials Vault will use to make
DynamoDB API calls. Leaving the `access_key` and `secret_key` fields empty will
cause Vault to attempt to retrieve credentials from the AWS metadata service.
- `access_key` `(string: <required>)` – Specifies the AWS access key. This can
also be provided via the environment variable `AWS_ACCESS_KEY_ID`.
- `secret_key` `(string: <required>)` – Specifies the AWS secret key. This can
also be provided via the environment variable `AWS_SECRET_ACCESS_KEY`.
- `session_token` `(string: "")` – Specifies the AWS session token. This can
also be provided via the environment variable `AWS_SESSION_TOKEN`.
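Because these values can come from the environment, a sketch of keeping static
credentials out of the configuration file (values and config path
hypothetical):

```shell-session
$ export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
$ export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
$ vault server -config=/etc/vault.d/vault.hcl
```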
## Required AWS permissions
The governing policy for the IAM user or EC2 instance profile that Vault uses
to access DynamoDB must contain the following permissions for Vault to perform
the required operations on the DynamoDB table:
```javascript
"Statement": [
{
"Action": [
"dynamodb:DescribeLimits",
"dynamodb:DescribeTimeToLive",
"dynamodb:ListTagsOfResource",
"dynamodb:DescribeReservedCapacityOfferings",
"dynamodb:DescribeReservedCapacity",
"dynamodb:ListTables",
"dynamodb:BatchGetItem",
"dynamodb:BatchWriteItem",
"dynamodb:CreateTable",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:GetRecords",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:UpdateItem",
"dynamodb:Scan",
"dynamodb:DescribeTable"
],
"Effect": "Allow",
"Resource": [ "arn:aws:dynamodb:us-east-1:... dynamodb table ARN" ]
  }
]
```
## Table schema
If you are going to create the DynamoDB table prior to the execution and
initialization of Vault, you will need to create a table with these attributes:
- Primary partition key: "Path", a string
- Primary sort key: "Key", a string
You might create the table via Terraform, with a configuration similar to this:
```hcl
resource "aws_dynamodb_table" "dynamodb-table" {
name = "${var.dynamoTable}"
read_capacity = 1
write_capacity = 1
hash_key = "Path"
range_key = "Key"
attribute {
name = "Path"
type = "S"
}
attribute {
name = "Key"
type = "S"
}
tags = {
Name = "vault-dynamodb-table"
Environment = "prod"
}
}
```
If a table with the configured name already exists, Vault will not modify it -
and the Vault configuration values of `read_capacity` and `write_capacity` have
no effect.
If the table does not already exist, Vault will try to create it, with read and
write capacities set to the values of `read_capacity` and `write_capacity`
respectively.
## AWS instance metadata timeout
@include 'aws-imds-timeout.mdx'
## DynamoDB examples of Vault configuration
### Custom table and Read-Write capacity
This example shows using a custom table name and read/write capacity.
```hcl
storage "dynamodb" {
table = "my-vault-data"
read_capacity = 10
write_capacity = 15
}
```
### Enabling high availability
This example shows enabling high availability for the DynamoDB storage backend.
```hcl
api_addr = "https://vault-leader.my-company.internal"
storage "dynamodb" {
ha_enabled = "true"
...
}
```
[dynamodb]: https://aws.amazon.com/dynamodb/
[dynamodb-rw-capacity]: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput
---
layout: docs
page_title: Consul - Storage Backends - Configuration
description: |-
The Consul storage backend is used to persist Vault's data in Consul's
key-value store. In addition to providing durable storage, inclusion of this
backend will also register Vault as a service in Consul with a default health
check.
---
# Consul storage backend
The Consul storage backend is used to persist Vault's data in [Consul's][consul]
key-value store. In addition to providing durable storage, inclusion of this
backend will also register Vault as a service in Consul with a default health
check.
@include 'consul-dataplane-compat.mdx'
- **High Availability** – the Consul storage backend supports high availability.
- **HashiCorp Supported** – the Consul storage backend is officially supported
by HashiCorp.
```hcl
storage "consul" {
address = "127.0.0.1:8500"
path = "vault/"
}
```
Once properly configured, an unsealed Vault installation should be available and
accessible at:
```text
active.vault.service.consul
```
Unsealed Vault instances in standby mode are available at:
```text
standby.vault.service.consul
```
All unsealed Vault instances are available as healthy at:
```text
vault.service.consul
```
Sealed Vault instances will mark themselves as unhealthy to avoid being returned
at Consul's service discovery layer.
Note that if you have configured multiple listeners for Vault, you must specify
which one Consul should advertise to the cluster using [`api_addr`][api-addr]
and [`cluster_addr`][cluster-addr] ([example][listener-example]).
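For illustration, a minimal sketch of advertising one specific listener; the
addresses here are hypothetical:
```hcl
# Hypothetical addresses; point these at the listener Consul should advertise.
api_addr     = "https://10.0.0.10:8200"
cluster_addr = "https://10.0.0.10:8201"
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}
```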
## `consul` parameters
- `address` `(string: "127.0.0.1:8500")` – Specifies the address of the Consul
agent to communicate with. This can be an IP address, DNS record, or unix
socket. It is recommended that you communicate with a local Consul agent; do
not communicate directly with a server.
- `check_timeout` `(string: "5s")` – Specifies the check interval used to send
health check information back to Consul. This is specified using a label
suffix like `"30s"` or `"1h"`.
- `consistency_mode` `(string: "default")` – Specifies the Consul
[consistency mode][consul-consistency]. Possible values are `"default"` or
`"strong"`.
- `disable_registration` `(string: "false")` – Specifies whether Vault should
register itself with Consul.
- `max_parallel` `(string: "128")` – Specifies the maximum number of concurrent
requests to Consul. Make sure that your Consul agents are configured to
support this level of parallelism, see
[http_max_conns_per_client](/consul/docs/agent/config/config-files#http_max_conns_per_client).
- `path` `(string: "vault/")` – Specifies the path in Consul's key-value store
where Vault data will be stored.
- `scheme` `(string: "http")` – Specifies the scheme to use when communicating
  with Consul. This can be set to "http" or "https". It is highly recommended
  that you communicate with Consul over https for non-local connections. When
communicating over a unix socket, this option is ignored.
- `service` `(string: "vault")` – Specifies the name of the service to register
in Consul.
- `service_tags` `(string: "")` – Specifies a comma-separated list of tags to
attach to the service registration in Consul.
- `service_meta` `(map[string]string: {})` – Specifies a key-value list of meta tags to
attach to the service registration in Consul. See [ServiceMeta](/consul/api-docs/catalog#servicemeta) in the Consul docs for more information.
- `service_address` `(string: nil)` – Specifies a service-specific address to
set on the service registration in Consul. If unset, Vault will use what it
knows to be the HA redirect address - which is usually desirable. Setting
this parameter to `""` will tell Consul to leverage the configuration of the
node the service is registered on dynamically. This could be beneficial if
you intend to leverage Consul's
[`translate_wan_addrs`][consul-translate-wan-addrs] parameter.
- `token` `(string: "")` – Specifies the [Consul ACL token][consul-acl] with
permission to read and write from the `path` in Consul's key-value store.
This is **not** a Vault token. This can also be provided via the environment
variable [`CONSUL_HTTP_TOKEN`][consul-token]. See the ACL section below for help.
- `session_ttl` `(string: "15s")` - Specifies the minimum allowed [session
  TTL][consul-session-ttl]. The Consul server enforces a lower limit of 10s on
  the session TTL by default. The value of `session_ttl` here cannot be less
  than 10s unless `session_ttl_min` in the Consul server's configuration is set
  to a lower value.
- `lock_wait_time` `(string: "15s")` - Specifies the wait time before a lock
acquisition is made. This affects the minimum time it takes to cancel a
lock acquisition.
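As a sketch of combining the service registration parameters above (all values
are hypothetical, and the `service_meta` syntax assumes standard HCL map
notation):
```hcl
storage "consul" {
  address      = "127.0.0.1:8500"
  path         = "vault/"
  service      = "vault"
  service_tags = "prod,us-east"
  service_meta = {
    "team" = "platform"
  }
}
```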
The following settings apply when communicating with Consul via an encrypted
connection. You can read more about encrypting Consul connections on the
[Consul encryption page][consul-encryption].
- `tls_ca_file` `(string: "")` – Specifies the path to the CA certificate used
for Consul communication. This defaults to system bundle if not specified.
This should be set according to the
[`ca_file`](/consul/docs/agent/config/config-files#ca_file) setting in
Consul.
- `tls_cert_file` `(string: "")` (optional) – Specifies the path to the
certificate for Consul communication. This should be set according to the
[`cert_file`](/consul/docs/agent/config/config-files#cert_file) setting
in Consul.
- `tls_key_file` `(string: "")` – Specifies the path to the private key for
Consul communication. This should be set according to the
[`key_file`](/consul/docs/agent/config/config-files#key_file) setting
in Consul.
- `tls_min_version` `(string: "tls12")` – Specifies the minimum TLS version to
use. Accepted values are `"tls10"`, `"tls11"`, `"tls12"` or `"tls13"`.
- `tls_skip_verify` `(string: "false")` – Disable verification of TLS certificates.
Using this option is highly discouraged.
## ACLs
If using ACLs in Consul, you'll need appropriate permissions. For Consul 0.8,
the following will work for most use-cases, assuming that your service name is
`vault` and the prefix being used is `vault/`:
```json
{
"key": {
"vault/": {
"policy": "write"
}
},
"service": {
"vault": {
"policy": "write"
}
},
"agent": {
"": {
"policy": "read"
}
},
"session": {
"": {
"policy": "write"
}
}
}
```
For Consul 1.4+, the following example takes into account the changed ACL
language:
```json
{
"key_prefix": {
"vault/": {
"policy": "write"
}
},
"service": {
"vault": {
"policy": "write"
}
},
"agent_prefix": {
"": {
"policy": "read"
}
},
"session_prefix": {
"": {
"policy": "write"
}
}
}
```
## `consul` examples
### Local agent
This example shows a sample physical backend configuration which communicates
with a local Consul agent running on `127.0.0.1:8500`.
```hcl
storage "consul" {}
```
### Detailed customization
This example shows communicating with Consul on a custom address with an ACL
token.
```hcl
storage "consul" {
address = "10.5.7.92:8194"
token = "abcd1234"
}
```
### Custom storage path
This example shows storing data at a custom path in Consul's key-value store.
This path must be readable and writable by the Consul ACL token, if Consul is
configured to use ACLs.
```hcl
storage "consul" {
path = "vault/"
}
```
### Consul via unix socket
This example shows communicating with Consul over a local unix socket.
```hcl
storage "consul" {
address = "unix:///tmp/.consul.http.sock"
}
```
### Custom TLS
This example shows using a custom CA, certificate, and key file to securely
communicate with Consul over TLS.
```hcl
storage "consul" {
scheme = "https"
tls_ca_file = "/etc/pem/vault.ca"
tls_cert_file = "/etc/pem/vault.cert"
tls_key_file = "/etc/pem/vault.key"
}
```
[consul]: https://www.consul.io/ 'Consul by HashiCorp'
[consul-acl]: /consul/docs/guides/acl 'Consul ACLs'
[consul-consistency]: /consul/api-docs/features/consistency 'Consul Consistency Modes'
[consul-encryption]: /consul/docs/agent/encryption 'Consul Encryption'
[consul-translate-wan-addrs]: /consul/docs/agent/options#translate_wan_addrs 'Consul Configuration'
[consul-token]: /consul/docs/commands/acl/set-agent-token#token-lt-value-gt- 'Consul Token'
[consul-session-ttl]: /consul/docs/agent/options#session_ttl_min 'Consul Configuration'
[api-addr]: /vault/docs/configuration#api_addr
[cluster-addr]: /vault/docs/configuration#cluster_addr
[listener-example]: /vault/docs/configuration/listener/tcp#listening-on-multiple-interfaces
---
layout: docs
page_title: Integrated Storage - Storage Backends - Configuration
description: >-
The Integrated Storage (Raft) backend is used to persist Vault's data. Unlike all the other
storage backends, this backend does not operate from a single source for the
data. Instead all the nodes in a Vault cluster will have a replicated copy of
the entire data. The data is replicated across the nodes using the Raft
Consensus Algorithm.
---
# Integrated storage (Raft) backend
The Integrated Storage backend is used to persist Vault's data. Unlike other storage
backends, Integrated Storage does not operate from a single source of data. Instead
all the nodes in a Vault cluster will have a replicated copy of Vault's data.
Data gets replicated across all the nodes via the [Raft Consensus
Algorithm][raft].
- **High Availability** – the Integrated Storage backend supports high availability.
- **HashiCorp Supported** – the Integrated Storage backend is officially supported
by HashiCorp.
```hcl
storage "raft" {
path = "/path/to/raft/data"
node_id = "raft_node_1"
}
cluster_addr = "http://127.0.0.1:8201"
```
~> **Note:** When using the Integrated Storage backend, it is required to provide
[`cluster_addr`](/vault/docs/concepts/ha#per-node-cluster-address) to indicate the address and port to be used for communication
between the nodes in the Raft cluster.
~> **Note:** When using the Integrated Storage backend, a separate
[`ha_storage`](/vault/docs/configuration#ha_storage)
backend cannot be declared.
~> **Note:** When using the Integrated Storage backend, it is strongly recommended to
set [`disable_mlock`](/vault/docs/configuration#disable_mlock) to `true`, and to disable memory swapping on the system.
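Putting the notes above together, a minimal sketch of a node configuration; the
hostname and path are hypothetical:
```hcl
# Strongly recommended with Integrated Storage; also disable swap on the host.
disable_mlock = true
api_addr     = "https://vault-node-1.example.internal:8200"
cluster_addr = "https://vault-node-1.example.internal:8201"
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "vault-node-1"
}
```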
## `raft` parameters
- `path` `(string: "")` – The file system path where all the Vault data gets
stored.
This value can be overridden by setting the `VAULT_RAFT_PATH` environment variable.
- `node_id` `(string: "")` - The identifier for the node in the Raft cluster.
You can override `node_id` with the `VAULT_RAFT_NODE_ID` environment
variable. When `VAULT_RAFT_NODE_ID` is unset, Vault assigns a random
GUID during initialization and writes the value to `data/node-id` in the
directory specified by the `path` parameter.
- `performance_multiplier` `(integer: 0)` - An integer multiplier used by
servers to scale key Raft timing parameters, where each increment translates to approximately 1 – 2 seconds of delay. For example, setting the multiplier to "3" translates to 3 – 6 seconds of total delay. Tuning the multiplier affects the time it
takes Vault to detect leader failures and to perform leader elections, at the
expense of requiring more network and CPU resources for better performance.
Omitting this value or setting it to 0 uses default timing described below.
Lower values are used to tighten timing and increase sensitivity while higher
values relax timings and reduce sensitivity.
By default, Vault uses a balanced timing value of 5, which is suitable for most
  platforms and scenarios. You should only adjust the timing value when platform
  telemetry indicates that a change is needed, or when different timing is
  required due to the overall reliability of your platform (network, etc.). A
  combined tuning sketch appears after this parameter list.
Setting the timing value to 1 configures Raft to its highest performance (lowest
delay) mode. The maximum allowed value is 10.
- `trailing_logs` `(integer: 10000)` - This controls how many log entries are
left in the log store on disk after a snapshot is made. This should only be
adjusted when followers cannot catch up to the leader due to a very large
snapshot size and high write throughput causing log truncation before a
snapshot can be fully installed. If you need to use this to recover a cluster,
consider reducing write throughput or the amount of data stored on Vault. The
default value is 10000 which is suitable for all normal workloads. The
`trailing_logs` metric is not the same as `max_trailing_logs`.
- `snapshot_threshold` `(integer: 8192)` - This controls the minimum number of Raft
commit entries between snapshots that are saved to disk. This is a low-level
parameter that should rarely need to be changed. Very busy clusters
experiencing excessive disk IO may increase this value to reduce disk IO and
minimize the chances of all servers taking snapshots at the same time.
Increasing this trades off disk IO for disk space since the log will grow much
larger and the space in the `raft.db` file can't be reclaimed till the next
snapshot. Servers may take longer to recover from crashes or failover if this
is increased significantly as more logs will need to be replayed.
- `snapshot_interval` `(integer: 120 seconds)` - The snapshot interval
controls how often Raft checks whether a snapshot operation is
required. Raft randomly staggers snapshots between the configured
interval and twice the configured interval to keep the entire cluster
from performing a snapshot at once. The default snapshot interval is
120 seconds.
- `retry_join` `(list: [])` - A set of connection details for another node in the
cluster, which is used to help nodes locate a leader in order to join a cluster.
There can be one or more [`retry_join`](#retry_join-stanza) stanzas.
If the connection details for all nodes in the cluster are known in advance, you
can include these stanzas to enable nodes to automatically join the Raft cluster.
Once one of the nodes is initialized as the leader, the remaining nodes will use
their [`retry_join`](#retry_join-stanza) configuration to locate the leader and
join the cluster. Note that when using Shamir seal, the joined nodes will still
need to be unsealed manually.
See [the section below](#retry_join-stanza) for the parameters accepted by the
[`retry_join`](#retry_join-stanza) stanza.
- `retry_join_as_non_voter` `(boolean: false)` - <EnterpriseAlert inline />
Configures this node as a permanent non-voter. The node will not participate
in the Raft quorum but will still receive the data replication stream
enhancing the read throughput of the cluster. This option has the same effect
as the [`-non-voter`](/vault/docs/commands/operator/raft#non-voter) flag for
the `vault operator raft join` command, but only affects voting status when
joining via `retry_join` config. You can override the non-voter configuration
by setting the `VAULT_RAFT_RETRY_JOIN_AS_NON_VOTER` environment variable to
any non-empty value. Configuring a node as a non-voter is only valid if there
is at least one `retry_join` stanza.
- `max_entry_size` `(integer: 1048576)` - This configures the maximum number of
bytes for a Raft entry. It applies to both Put operations and transactions.
Any put or transaction operation exceeding this configuration value will cause
the respective operation to fail. Raft has a suggested max size of data in a
Raft log entry. This is based on current architecture, default timing, etc.
Integrated Storage also uses a chunk size that is the threshold used for
breaking a large value into chunks. By default, the chunk size is the same as
Raft's max size log entry. The default value for this configuration is 1048576
-- two times the chunking size.
- **Note:** This option corresponds to [Consul's `kv_max_value_size` parameter](/consul/docs/agent/config/config-files#kv_max_value_size) for
Vault clusters using a Consul storage backend. If you are migrating from Consul
storage to Raft Integrated Storage, and have changed this value in Consul from its
default to a value larger than the Integrated Storage default of 1MB, then you will
need to make the same change in Vault's Integrated Storage config.
- `max_mount_and_namespace_table_entry_size` `(integer)`- <EnterpriseAlert
inline /> Overrides `max_entry_size` to set a different limit for the specific
storage entries that contain mount tables, auth tables and namespace
configuration data. If you are reaching limits on the mount table size, you
can use this to increase the number of mounts and namespaces that can be
stored without the risk of other storage entries becoming too large. All other
notes on [`max_entry_size`](#max-entry-size) apply. Before changing this, read
the [Run Vault Enterprise
with many namespaces](/vault/docs/enterprise/namespaces/namespace-limits) guide regarding important performance considerations.
- `autopilot_reconcile_interval` `(string: "10s")` - This is the interval after
  which autopilot will pick up any state changes. A state change could mean
  multiple things: for example, a newly joined voter node, initially added as a
  non-voter to the Raft cluster by autopilot, has successfully completed the
  stabilization period and qualifies for promotion to a voter; a node has become
  unhealthy and needs to be shown as such in the state API; a node has been
  marked as dead and needs eviction from the Raft configuration; etc.
- `autopilot_update_interval` `(string: "2s")` - This is the interval after which
autopilot will poll Vault for any updates to the information it cares about. This
includes things like the autopilot configuration, current autopilot state, raft
configuration, known servers, latest raft index, and stats for all the known servers.
The information that autopilot receives will be used to calculate its next state.
- `autopilot_upgrade_version` `(string: "")` - <EnterpriseAlert inline />
Overrides the version used by Autopilot during [automated
upgrades](/vault/docs/enterprise/automated-upgrades). Vault's build version is
used by default. The string provided must be a valid [Semantic
Version](https://semver.org).
- `autopilot_redundancy_zone` `(string: "")` - <EnterpriseAlert inline />
Specifies a [redundancy zone](/vault/docs/enterprise/redundancy-zones) which
is used by Autopilot to automatically swap out failed servers for enhanced
reliability.
<Warning title="Experimental">
- `raft_wal` `(boolean: false)` - Enables the
[write-ahead](/vault/docs/internals/integrated-storage#configurable-raft-log-store)
log store instead of the default of BoltDB.
- `raft_log_verifier_enabled` `(boolean: false)` - Enables the raft log verifier.
The verifier periodically writes small raft logs and verifies checksums to
ensure that data has been written correctly. The verifier works with raft
write-ahead **and** BoltDB log stores.
- `raft_log_verification_interval` `(string: "60s")` - Sets the interval at
  which the raft log verifier writes verification logs. The default interval is
`60s` and the minimum supported interval is `10s`. The `raft_log_verification_interval`
parameter has no effect if `raft_log_verifier_enabled` is `false`.
</Warning>
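As referenced above, a combined tuning sketch with illustrative values only;
the values shown match the documented defaults, which are appropriate for most
workloads and rarely need changing:
```hcl
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "raft_node_1"
  # Illustrative tuning values; these match the documented defaults.
  performance_multiplier = 5
  snapshot_threshold     = 8192
  snapshot_interval      = 120
  trailing_logs          = 10000
}
cluster_addr = "http://127.0.0.1:8201"
```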
### `retry_join` stanza
- `leader_api_addr` `(string: "")` - Address of a possible leader node.
- `auto_join` `(string: "")` - Cloud auto-join configuration, using
[go-discover](https://github.com/hashicorp/go-discover) syntax.
- `auto_join_scheme` `(string: "")` - The optional URI protocol scheme for addresses
discovered via auto-join. Available values are `http` or `https`.
- `auto_join_port` `(uint: "")` - The optional port used for addressed discovered
via auto-join.
- `leader_tls_servername` `(string: "")` - The TLS server name to use when
connecting with HTTPS.
Should match one of the names in the [DNS
SANs](https://en.wikipedia.org/wiki/Subject_Alternative_Name) of the remote
server certificate.
See also [Integrated Storage and TLS](/vault/docs/concepts/integrated-storage#autojoin-with-tls-servername)
- `leader_ca_cert_file` `(string: "")` - File path to the CA cert of the
possible leader node.
- `leader_client_cert_file` `(string: "")` - File path to the client certificate
for the follower node to establish client authentication with the possible
leader node.
- `leader_client_key_file` `(string: "")` - File path to the client key for the
follower node to establish client authentication with the possible leader node.
- `leader_ca_cert` `(string: "")` - CA cert of the possible leader node.
- `leader_client_cert` `(string: "")` - Client certificate for the follower node
to establish client authentication with the possible leader node.
- `leader_client_key` `(string: "")` - Client key for the follower node to
establish client authentication with the possible leader node.
Each [`retry_join`](#retry_join-stanza) block may provide TLS certificates via
file paths or as a single-line certificate string value with newlines delimited
by `\n`, but not a combination of both. Each [`retry_join`](#retry_join-stanza)
stanza may contain either a [`leader_api_addr`](#leader_api_addr) value or a
cloud [`auto_join`](#auto_join) configuration value, but not both. When an
[`auto_join`](#auto_join) value is provided, Vault will automatically attempt to
discover and resolve potential Raft leader addresses using [go-discover](https://github.com/hashicorp/go-discover).
See the go-discover
[README](https://github.com/hashicorp/go-discover/blob/master/README.md)
for details on the format of the `auto_join` value.
By default, Vault will attempt to reach discovered peers using HTTPS and port 8200. Operators may override these through the
[`auto_join_scheme`](#auto_join_scheme) and [`auto_join_port`](#auto_join_port)
fields respectively.
Example Configuration:
```hcl
storage "raft" {
path = "/Users/foo/raft/"
node_id = "node1"
retry_join {
leader_api_addr = "http://127.0.0.2:8200"
leader_ca_cert_file = "/path/to/ca1"
leader_client_cert_file = "/path/to/client/cert1"
leader_client_key_file = "/path/to/client/key1"
}
retry_join {
leader_api_addr = "http://127.0.0.3:8200"
leader_ca_cert_file = "/path/to/ca2"
leader_client_cert_file = "/path/to/client/cert2"
leader_client_key_file = "/path/to/client/key2"
}
retry_join {
leader_api_addr = "http://127.0.0.4:8200"
leader_ca_cert_file = "/path/to/ca3"
leader_client_cert_file = "/path/to/client/cert3"
leader_client_key_file = "/path/to/client/key3"
}
retry_join {
auto_join = "provider=aws region=eu-west-1 tag_key=vault tag_value=... access_key_id=... secret_access_key=..."
}
}
```
## Tutorial
Refer to the [Integrated
Storage](/vault/tutorials/raft) series of tutorials to learn more about implementing Vault using Integrated Storage.
[raft]: https://raft.github.io/ 'The Raft Consensus Algorithm'
---
layout: docs
page_title: Google Cloud Spanner - Storage Backends - Configuration
description: |-
The Google Cloud Spanner storage backend is used to persist Vault's data in
Spanner, a fully managed, mission-critical, relational database service that
offers transactional consistency at global scale.
---
# Google Cloud spanner storage backend
The Google Cloud Spanner storage backend is used to persist Vault's data in
[Spanner][spanner-docs], a fully managed, mission-critical, relational database
service that offers transactional consistency at global scale, schemas, SQL, and
automatic, synchronous replication for high availability.
- **High Availability** – the Google Cloud Spanner storage backend supports high
availability. Because the Google Cloud Spanner storage backend uses the system
time on the Vault node to acquire sessions, clock skew across Vault servers
can cause lock contention.
- **Community Supported** – the Google Cloud Spanner storage backend is
supported by the community. While it has undergone review by HashiCorp
  employees, they may not be as knowledgeable about the technology. If you
  encounter problems with this backend, you may be referred to the original author.
```hcl
storage "spanner" {
database = "projects/my-project/instances/my-instance/databases/my-database"
}
```
For more information on schemas or Google Cloud Spanner, please see the [Google
Cloud Spanner documentation][spanner-docs].
## `spanner` setup
To use the Google Cloud Spanner Vault storage backend, you must have a Google
Cloud Platform account. Either using the API or web interface, create a database
and the following tables:
-> You can choose "Edit as text" and copy-paste the following as the schema.
These are the default table names. If you choose to use different table names,
you will need to update the configuration accordingly.
```sql
CREATE TABLE Vault (
Key STRING(MAX) NOT NULL,
Value BYTES(MAX),
) PRIMARY KEY (Key);
CREATE TABLE VaultHA (
Key STRING(MAX) NOT NULL,
Value STRING(MAX),
Identity STRING(36) NOT NULL,
Timestamp TIMESTAMP NOT NULL,
) PRIMARY KEY (Key);
```
The Google Cloud Spanner storage backend does not support creating the table
automatically at this time, but this could be a future enhancement. For more
information on schemas or Google Cloud Spanner, please see the [Google Cloud
Spanner documentation][spanner-docs].
## `spanner` authentication
The Google Cloud Spanner Vault storage backend uses the official Google Cloud
Golang SDK. This means it supports the common ways of [providing credentials to
Google Cloud][cloud-creds].
1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified
as the **path** to a Google Cloud credentials file, typically for a service
account. If this environment variable is present, the resulting credentials are
used. If the credentials are invalid, an error is returned.
1. Default instance credentials. When no environment variable is present, the
default service account credentials are used.
For more information on service accounts, please see the [Google Cloud Service
Accounts documentation][service-accounts].
To use this storage backend, the service account must have the following
minimum scope(s):
```text
https://www.googleapis.com/auth/spanner.data
```
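For example, a sketch of a configuration that relies on this credential
discovery; the key file path is hypothetical:
```hcl
# Credentials are discovered automatically. With a service account key file,
# set GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json in Vault's environment.
storage "spanner" {
  database = "projects/my-project/instances/my-instance/databases/my-database"
}
```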
## `spanner` parameters
- `database` `(string: <required>)` – Specifies the name of the database. Note
that this is specified as a "path" including the project ID and instance, for
example:
```text
projects/my-project/instances/my-instance/databases/my-database
```
- `table` `(string: "Vault")` - Specifies the name of the table where
data will be stored and retrieved.
- `max_parallel` `(int: 128)` - Specifies the maximum number of parallel
operations to take place.
### High availability parameters
- `ha_enabled` `(string: "false")` - Specifies if high availability mode is
enabled. This is a boolean value, but it is specified as a string like "true"
or "false".
- `ha_table` `(string: "VaultHA")` - Specifies the name of the table to use for
storing high availability information. By default, this is the name of the
`table` suffixed with "HA".
## `spanner` examples
### High availability
This example shows configuring Google Cloud Spanner with high availability
enabled.
```hcl
api_addr = "https://vault-leader.my-company.internal"
storage "spanner" {
database = "projects/demo/instances/abc123/databases/vault-data"
ha_enabled = "true"
}
```
### Custom tables
This example shows listing custom table names for data and HA with the Google
Cloud Spanner Vault storage backend.
```hcl
storage "spanner" {
database = "projects/demo/instances/abc123/databases/vault-data"
table = "VaultData"
ha_table = "VaultLeader"
}
```
[cloud-creds]: https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
[service-accounts]: https://cloud.google.com/compute/docs/access/service-accounts
[spanner-docs]: https://cloud.google.com/spanner/docs/
---
layout: docs
page_title: MySQL - Storage Backends - Configuration
description: |-
The MySQL storage backend is used to persist Vault's data in a MySQL server or
cluster.
---
# MySQL storage backend
The MySQL storage backend is used to persist Vault's data in a [MySQL][mysql]
server or cluster.
- **High Availability** – the MySQL storage backend supports high availability.
  Note that due to the way MySQL locking functions work, locks are lost if the
  connection that holds them dies. If you want to avoid frequent changes in your
  elected leader, you can increase the `interactive_timeout` and `wait_timeout`
  MySQL configuration values well beyond their default of 8 hours.
- **Community Supported** – the MySQL storage backend is supported by the
community. While it has undergone review by HashiCorp employees, they may not
be as knowledgeable about the technology. If you encounter problems with them,
you may be referred to the original author.
```hcl
storage "mysql" {
username = "user1234"
password = "secret123!"
database = "vault"
}
```
## `mysql` parameters
- `address` `(string: "127.0.0.1:3306")` – Specifies the address of the MySQL
host.
- `database` `(string: "vault")` – Specifies the name of the database. If the
database does not exist, Vault will attempt to create it.
- `table` `(string: "vault")` – Specifies the name of the table. If the table
does not exist, Vault will attempt to create it.
- `tls_ca_file` `(string: "")` – Specifies the path to the CA certificate to
connect using TLS.
- `plaintext_credentials_transmission` `(string: "")` - Provides authorization
  to send credentials over plaintext. If this value is unset AND no TLS CA
  certificate is provided, Vault warns that the credentials are being sent
  over plaintext. In the future, failing to either acknowledge plaintext
  transmission or use TLS will prevent the server from starting, to ensure
  credentials are not leaked accidentally.
- `max_parallel` `(string: "128")` – Specifies the maximum number of concurrent
requests to MySQL.
- `max_idle_connections` `(string: "0")` – Specifies the maximum number of idle
  connections to the database. A value of zero defaults to 2 idle connections,
  and a negative value disables idle connections. If the value is larger than
  `max_parallel`, it is reduced to match it.
- `max_connection_lifetime` `(string: "0")` – Specifies the maximum amount of
  time in seconds that a connection may be reused. If the value is less than or
  equal to 0, connections are reused forever.
Additionally, Vault requires the following authentication information.
- `username` `(string: <required>)` – Specifies the MySQL username to connect to
the database.
- `password` `(string: <required>)` – Specifies the MySQL password to connect to
the database.
### High availability parameters
- `ha_enabled` `(string: "false")` - Specifies if high availability mode is
enabled. This is a boolean value, but it is specified as a string like "true"
or "false".
- `lock_table` `(string: "vault_lock")` – Specifies the name of the table to
use for storing high availability information. By default, this is the name
of the `table` suffixed with `_lock`. If the table does not exist, Vault will
attempt to create it.
## `mysql` examples
### Custom database and table
This example shows configuring the MySQL backend to use a custom database and
table name.
```hcl
storage "mysql" {
database = "my-vault"
table = "vault-data"
username = "user1234"
password = "pass5678"
}
```
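
### High availability

This example shows a sketch of enabling high availability mode with a custom
lock table; the credentials and the `vault_leader_lock` table name are
placeholders, not defaults.

```hcl
storage "mysql" {
  ha_enabled = "true"
  lock_table = "vault_leader_lock"
  username   = "user1234"
  password   = "pass5678"
}
```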
[mysql]: https://dev.mysql.com
---
layout: docs
page_title: Cassandra - Storage Backends - Configuration
description: |-
The Cassandra storage backend is used to persist Vault's data in an Apache
Cassandra cluster.
---
# Cassandra storage backend
The Cassandra storage backend is used to persist Vault's data in an [Apache
Cassandra][cassandra] cluster.
- **No High Availability** – the Cassandra storage backend does not support high
availability.
- **Community Supported** – the Cassandra storage backend is supported by the
community. While it has undergone review by HashiCorp employees, they may not
be as knowledgeable about the technology. If you encounter problems with it,
you may be referred to the original author.
```hcl
storage "cassandra" {
hosts = "localhost"
consistency = "LOCAL_QUORUM"
protocol_version = 3
}
```
The Cassandra storage backend does not automatically create the keyspace and
table. This sample configuration can be used as a guide, but you will want to
ensure the keyspace [replication options][replication-options]
are appropriate for your cluster:
```cql
CREATE KEYSPACE "vault" WITH REPLICATION = {
'class': 'SimpleStrategy',
'replication_factor': 1
};
CREATE TABLE "vault"."entries" (
bucket text,
key text,
value blob,
PRIMARY KEY (bucket, key)
) WITH CLUSTERING ORDER BY (key ASC);
```
## `cassandra` parameters
- `hosts` `(string: "127.0.0.1")` – Comma-separated list of Cassandra hosts to
connect to.
- `keyspace` `(string: "vault")` Cassandra keyspace to use.
- `table` `(string: "entries")` – Table within the `keyspace` in which to store
data.
- `consistency` `(string: "LOCAL_QUORUM")` Consistency level to use when
reading/writing data. If set, must be one of `"ANY"`, `"ONE"`, `"TWO"`,
`"THREE"`, `"QUORUM"`, `"ALL"`, `"LOCAL_QUORUM"`, `"EACH_QUORUM"`, or
`"LOCAL_ONE"`.
- `protocol_version` `(int: 2)` Cassandra protocol version to use.
- `username` `(string: "")` – Username to use when authenticating with the
Cassandra hosts.
- `password` `(string: "")` – Password to use when authenticating with the
Cassandra hosts.
- `disable_initial_host_lookup` `(bool: false)` - If set to true, Vault will not attempt
to get host info from the `system.peers` table. It will instead connect to
hosts supplied and will not attempt to look up the host information. This will
mean that `data_centre`, `rack` and `token` information will not be available and as
such host filtering and token aware query routing will not be available.
- `initial_connection_timeout` `(int: 0)` - A timeout in seconds to wait for an
  initial connection to be established with the Cassandra hosts. If not set, the
  default value from the Cassandra driver (gocql) is used: 600ms.
- `connection_timeout` `(int: 0)` - A timeout in seconds for each query. If not
  set, the default value from the Cassandra driver (gocql) is used: 600ms.
- `simple_retry_policy_retries` `(int: 0)` - Useful for Cassandra clusters with
  several nodes. If the current master node is down, the request is retried on
  the next node up to `simple_retry_policy_retries` times, and the client won't
  get an error.
- `tls` `(int: 0)` – If `1`, indicates the connection with the Cassandra hosts
should use TLS.
- `pem_bundle_file` `(string: "")` - Specifies a file containing a
certificate and private key; a certificate, private key, and issuing CA
certificate; or just a CA certificate.
- `pem_json_file` `(string: "")` - Specifies a JSON file containing a certificate
and private key; a certificate, private key, and issuing CA certificate;
or just a CA certificate.
- `tls_skip_verify` `(int: 0)` - If `1`, then TLS host verification
will be disabled for Cassandra. Defaults to `0`.
- `tls_min_version` `(string: "tls12")` - Minimum TLS version to use. Accepted
values are `tls10`, `tls11`, `tls12` or `tls13`. Defaults to `tls12`.
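
## `cassandra` examples

### Authentication and TLS

As an illustrative sketch, the following configuration connects to a
multi-node cluster with username/password authentication and TLS enabled. The
hosts, credentials, and PEM bundle path are placeholders; each parameter is
described in the list above.

```hcl
storage "cassandra" {
  hosts            = "cassandra1.example.com,cassandra2.example.com"
  consistency      = "LOCAL_QUORUM"
  protocol_version = 3
  username         = "vault_user"
  password         = "vault_password"
  tls              = 1
  pem_bundle_file  = "/etc/vault/cassandra-bundle.pem"
}
```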
[cassandra]: http://cassandra.apache.org/
[replication-options]: https://docs.datastax.com/en/cassandra/2.1/cassandra/architecture/architectureDataDistributeReplication_c.html
---
layout: docs
page_title: PostgreSQL - Storage Backends - Configuration
description: |-
The PostgreSQL storage backend is used to persist Vault's data in a PostgreSQL
server or cluster.
---
# PostgreSQL storage backend
The PostgreSQL storage backend is used to persist Vault's data in a
[PostgreSQL][postgresql] server or cluster.
- **High Availability** – the PostgreSQL storage backend supports
high availability. Requires PostgreSQL 9.5 or later.
- **Community Supported** – the PostgreSQL storage backend is supported by the
community. While it has undergone review by HashiCorp employees, they may not
be as knowledgeable about the technology. If you encounter problems with them,
you may be referred to the original author.
```hcl
storage "postgresql" {
connection_url = "postgres://user123:secret123!@localhost:5432/vault"
}
```
~> **Note:** The PostgreSQL storage backend plugin will attempt to use SSL
when connecting to the database. If SSL is not enabled the `connection_url`
will need to be configured to disable SSL. See the documentation below
to disable SSL.
The PostgreSQL storage backend does not automatically create the table. Here is
some sample SQL to create the schema and indexes.
```sql
CREATE TABLE vault_kv_store (
parent_path TEXT COLLATE "C" NOT NULL,
path TEXT COLLATE "C",
key TEXT COLLATE "C",
value BYTEA,
CONSTRAINT pkey PRIMARY KEY (path, key)
);
CREATE INDEX parent_path_idx ON vault_kv_store (parent_path);
```
If high availability is enabled, also create the table used to store HA lock state:
```sql
CREATE TABLE vault_ha_locks (
ha_key TEXT COLLATE "C" NOT NULL,
ha_identity TEXT COLLATE "C" NOT NULL,
ha_value TEXT COLLATE "C",
valid_until TIMESTAMP WITH TIME ZONE NOT NULL,
CONSTRAINT ha_key PRIMARY KEY (ha_key)
);
```
If you're using a version of PostgreSQL prior to 9.5, create the following function:
```sql
CREATE FUNCTION vault_kv_put(_parent_path TEXT, _path TEXT, _key TEXT, _value BYTEA) RETURNS VOID AS
$$
BEGIN
LOOP
-- first try to update the key
UPDATE vault_kv_store
SET (parent_path, path, key, value) = (_parent_path, _path, _key, _value)
WHERE _path = path AND key = _key;
IF found THEN
RETURN;
END IF;
-- not there, so try to insert the key
-- if someone else inserts the same key concurrently,
-- we could get a unique-key failure
BEGIN
INSERT INTO vault_kv_store (parent_path, path, key, value)
VALUES (_parent_path, _path, _key, _value);
RETURN;
EXCEPTION WHEN unique_violation THEN
-- Do nothing, and loop to try the UPDATE again.
END;
END LOOP;
END;
$$
LANGUAGE plpgsql;
```
## `postgresql` parameters
- `connection_url` `(string: <required>)` – Specifies the connection string to
use to authenticate and connect to PostgreSQL. The connection URL can also be
set using the `VAULT_PG_CONNECTION_URL` environment variable. A full list of supported
parameters can be found in the [pgx library][pgxlib] and [PostgreSQL connection string][pg_conn_docs]
documentation. For example connection string URLs, see the examples section below.
- `table` `(string: "vault_kv_store")` – Specifies the name of the table in
which to write Vault data. This table must already exist (Vault will not
attempt to create it).
- `max_idle_connections` `(int)` - Default not set. Sets the maximum number of
  connections in the idle connection pool. See
  [golang docs on SetMaxIdleConns][golang_setmaxidleconns] for more information.
  Requires Vault 1.2 or later.
- `max_parallel` `(string: "128")` – Specifies the maximum number of concurrent
requests to PostgreSQL.
- `ha_enabled` `(string: "true|false")` – Specifies if high availability mode is
  enabled. Not enabled by default; requires PostgreSQL 9.5 or later.
- `ha_table` `(string: "vault_ha_locks")` – Specifies the name of the table to use
for storing high availability information. This table must already exist (Vault
will not attempt to create it).
## `postgresql` examples
### Custom SSL verification
This example shows connecting to a PostgreSQL cluster using full SSL
verification (recommended).
```hcl
storage "postgresql" {
connection_url = "postgres://user:pass@localhost:5432/database?sslmode=verify-full"
}
```
To disable SSL verification (not recommended), replace `verify-full` with
`disable`:
```hcl
storage "postgresql" {
connection_url = "postgres://user:pass@localhost:5432/database?sslmode=disable"
}
```
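
### High availability

This example shows a sketch of enabling high availability mode. The connection
URL is a placeholder, and the `ha_table` value shown is simply the default.

```hcl
storage "postgresql" {
  connection_url = "postgres://user:pass@localhost:5432/database?sslmode=verify-full"
  ha_enabled     = "true"
  ha_table       = "vault_ha_locks"
}
```

Remember that both `vault_kv_store` and `vault_ha_locks` must already exist;
Vault will not attempt to create them.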
[golang_setmaxidleconns]: https://golang.org/pkg/database/sql/#DB.SetMaxIdleConns
[postgresql]: https://www.postgresql.org/
[pgxlib]: https://pkg.go.dev/github.com/jackc/pgx/stdlib
[pg_conn_docs]: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
---
layout: docs
page_title: Google Cloud Storage - Storage Backends - Configuration
description: |-
The Google Cloud Storage storage backend is used to persist Vault's data in
Google Cloud Storage.
---
# Google Cloud Storage storage backend
The Google Cloud Storage storage backend is used to persist Vault's data in
[Google Cloud Storage][gcs-docs].
- **High Availability** – the Google Cloud Storage storage backend supports high
availability. Because the Google Cloud Storage storage backend uses the system
time on the Vault node to acquire sessions, clock skew across Vault servers
can cause lock contention.
- **Community Supported** – the Google Cloud Storage storage backend is
supported by the community. While it has undergone review by HashiCorp
employees, they may not be as knowledgeable about the technology. If you
encounter problems with them, you may be referred to the original author.
```hcl
storage "gcs" {
bucket = "my-storage-bucket"
}
```
For more information on schemas or Google Cloud Storage, please see the [Google
Cloud Storage documentation][gcs-docs].
## `gcs` setup
To use the Google Cloud Storage Vault storage backend, you must have a Google
Cloud Platform account with permissions to create Google Cloud Storage buckets.
Using the web interface, the API, or the [`gsutil`][cloud-sdk] command, create a
bucket. Bucket names must be globally unique across all of Google Cloud, so
choose a unique name:
```shell-session
$ gsutil mb gs://mycompany-vault-data
```
Even though the data is encrypted in transit and at rest, be sure to set the
appropriate permissions on the bucket to limit exposure. You may want to create
a service account that limits Vault's interactions with Google Cloud to objects
in the storage bucket using IAM permissions.
Here is a sample [Google Cloud IAM][iam] policy that grants the proper
permissions to a [service account][service-accounts]. Be sure to replace the
member value with the email address of your service account.
```json
{
"bindings": [
{
"role": "roles/storage.objectAdmin",
"members": ["serviceAccount:[email protected]"]
}
]
}
```
Then give Vault the service account's credential file as a configuration option.
For more information on schemas or Google Cloud Storage, please see the [Google
Cloud Storage documentation][gcs-docs].
## `gcs` authentication
The Google Cloud Storage Vault storage backend uses the official Google Cloud
Golang SDK. This means it supports the common ways of [providing credentials to
Google Cloud][cloud-creds].
1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified
as the **path** to a Google Cloud credentials file, typically for a service
account. If this environment variable is present, the resulting credentials are
used. If the credentials are invalid, an error is returned.
1. Default instance credentials. When no environment variable is present, the
default service account credentials are used.
For more information on service accounts, please see the [Google Cloud Service
Accounts documentation][service-accounts].
To use this storage backend, the service account must have the following
minimum scope(s):
```text
https://www.googleapis.com/auth/devstorage.read_write
```
## `gcs` parameters
- `bucket` `(string: <required>)` – Specifies the name of the bucket to use for
storage. Alternatively, this parameter can be omitted and the `GOOGLE_STORAGE_BUCKET`
environment variable can be used to set the name of the bucket. If both the environment
variable and the parameter in the stanza are set, the value of the environment variable
will take precedence.
- `chunk_size` `(string: "8192")` – Specifies the maximum size (in kilobytes) to
send in a single request. If set to 0, it will attempt to send the whole
object at once, but will not retry any failures. If you are not storing large
objects in Vault, it is recommended to set this to a low value (minimum is
256), since it will reduce the amount of memory Vault uses. Alternatively, this parameter
can be omitted and the `GOOGLE_STORAGE_CHUNK_SIZE` environment variable can be used to set
the chunk size. If both the environment variable and the parameter in the stanza are set,
the value of the environment variable will take precedence.
- `max_parallel` `(int: 128)` - Specifies the maximum number of parallel
operations to take place.
### High availability parameters
- `ha_enabled` `(string: "false")` - Specifies if high availability mode is
enabled. This is a boolean value, but it is specified as a string like "true"
or "false". Alternatively, this parameter can be omitted and the
`GOOGLE_STORAGE_HA_ENABLED` environment variable can be used to
enable or disable high availability. If both the environment variable and
the parameter in the stanza are set, the value of the environment variable will
take precedence.
## `gcs` examples
### High availability
This example shows configuring Google Cloud Storage with high availability
enabled.
```hcl
api_addr = "https://vault-leader.my-company.internal"
storage "gcs" {
bucket = "mycompany-vault-data"
ha_enabled = "true"
}
```
### Custom chunk size
This example shows setting a custom chunk size for uploads. When uploading large
data to Vault, setting a lower number can reduce Vault's memory consumption, but
will increase the number of outbound requests.
```hcl
storage "gcs" {
bucket = "mycompany-vault-data"
chunk_size = "512"
}
```
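
### Environment-based configuration

Because `bucket`, `chunk_size`, and `ha_enabled` can all be supplied through
their corresponding environment variables, the stanza itself can be left
minimal. As a sketch, assuming `GOOGLE_STORAGE_BUCKET` is set in Vault's
environment:

```hcl
storage "gcs" {}
```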
[cloud-creds]: https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
[cloud-sdk]: https://cloud.google.com/sdk/downloads
[gcs-docs]: https://cloud.google.com/storage/docs/
[iam]: https://cloud.google.com/iam/docs/
[service-accounts]: https://cloud.google.com/compute/docs/access/service-accounts
---
layout: docs
page_title: Zookeeper - Storage Backends - Configuration
description: The Zookeeper storage backend is used to persist Vault's data in Zookeeper.
---
# Zookeeper storage backend
The Zookeeper storage backend is used to persist Vault's data in
[Zookeeper][zk].
- **High Availability** – the Zookeeper storage backend supports high
availability.
- **Community Supported** – the Zookeeper storage backend is supported by the
community. While it has undergone review by HashiCorp employees, they may not
be as knowledgeable about the technology. If you encounter problems with them,
you may be referred to the original author.
```hcl
storage "zookeeper" {
address = "localhost:2181"
path = "vault/"
}
```
## `zookeeper` parameters
- `address` `(string: "localhost:2181")` – Specifies the addresses of the
Zookeeper instances as a comma-separated list.
- `path` `(string: "vault/")` – Specifies the path in Zookeeper where data will
be stored.
The following optional settings can be used to configure zNode ACLs:
~> **Warning!** If neither `auth_info` nor `znode_owner` are set, the backend
will not authenticate with Zookeeper and will set the `OPEN_ACL_UNSAFE` ACL on
all nodes. In this scenario, anyone connected to Zookeeper could change Vault’s
znodes and, potentially, take Vault out of service.
- `auth_info` `(string: "")` – Specifies an authentication string in Zookeeper
AddAuth format. For example, `digest:UserName:Password` could be used to
authenticate as user `UserName` using password `Password` with the `digest`
mechanism.
- `znode_owner` `(string: "")` – If specified, Vault will always set all
permissions (CRWDA) to the ACL identified here via the Schema and User parts
of the Zookeeper ACL format. The expected format is `schema:user-ACL-match`,
for example:
```text
# Access for user "UserName" with corresponding digest "HIDfRvTv623G=="
digest:UserName:HIDfRvTv623G==
```
```text
# Access from localhost only
ip:127.0.0.1
```
```text
# Access from any host on the 70.95.0.0 network (Zookeeper 3.5+)
ip:70.95.0.0/16
```
- `tls_enabled` `(bool: false)` – Specifies whether TLS communication with the
  Zookeeper backend is enabled.
- `tls_ca_file` `(string: "")` – Specifies the path to the CA certificate file used
for Zookeeper communication. Multiple CA certificates can be provided in the same file.
- `tls_cert_file` `(string: "")` (optional) – Specifies the path to the
client certificate for Zookeeper communication.
- `tls_key_file` `(string: "")` – Specifies the path to the private key for
Zookeeper communication.
- `tls_min_version` `(string: "tls12")` – Specifies the minimum TLS version to
use. Accepted values are `"tls10"`, `"tls11"`, `"tls12"` or `"tls13"`.
- `tls_skip_verify` `(bool: false)` – Disable verification of TLS certificates.
Using this option is highly discouraged.
- `tls_verify_ip` `(bool: false)` - This property only applies when
  `tls_skip_verify` is set to `false`. When `tls_verify_ip` is set to `true`, the
  Zookeeper server's IP is verified against the CN/SAN entry of the presented
  certificate. When set to `false`, the server's DNS name is verified against the
  CN/SAN entry instead.
## `zookeeper` examples
### Custom address and path
This example shows configuring Vault to communicate with a Zookeeper
installation running on a custom port and to store data at a custom path.
```hcl
storage "zookeeper" {
address = "localhost:3253"
path = "my-vault-data/"
}
```
### zNode Vault user only
This example instructs Vault to set an ACL on all of its zNodes which permit
access only to the user "vaultUser". As per Zookeeper's ACL model, the digest
value in `znode_owner` must match the user in `znode_owner`.
```hcl
storage "zookeeper" {
znode_owner = "digest:vaultUser:raxgVAfnDRljZDAcJFxznkZsExs="
auth_info = "digest:vaultUser:abc"
}
```
### zNode localhost only
This example instructs Vault to only allow access from localhost. Because this
uses the `ip` schema, no `auth_info` is required; Zookeeper uses the client's
address for the ACL check.
```hcl
storage "zookeeper" {
znode_owner = "ip:127.0.0.1"
}
```
### zNode connection over TLS
This example instructs Vault to connect to Zookeeper using the provided TLS
configuration. Host verification checks the server's IP against the presented
certificate because `tls_verify_ip` is set to `true`.
```hcl
storage "zookeeper" {
address = "host1.com:5200,host2.com:5200,host3.com:5200"
path = "vault_path_on_zk/"
znode_owner = "digest:vault_user:digestvalueforpassword="
auth_info = "digest:vault_user:thisisthepassword"
redirect_addr = "http://localhost:8200"
tls_verify_ip = "true"
tls_enabled= "true"
tls_min_version= "tls12"
tls_cert_file = "/path/to/the/cert/file/zkcert.pem"
tls_key_file = "/path/to/the/key/file/zkkey.pem"
tls_skip_verify= "false"
tls_ca_file= "/path/to/the/ca/file/ca.pem"
}
```
[zk]: https://zookeeper.apache.org/
---
layout: docs
page_title: Install Vault manually
description: >-
Manually install a Vault binary.
---
# Manually install a Vault binary
Install Vault using a compiled binary.
## Before you start
- **You must have a valid Vault binary**. You can
[download and unzip a precompiled binary](/vault/install) or
[build a local instance of Vault from source code](/vault/docs/install/build-from-code).
## Step 1: Configure the environment
<Tabs>
<Tab heading="Linux shell" group="nix">
1. Set the `VAULT_DATA` environment variable to your preferred Vault data
directory. For example, `/opt/vault/data`:
```shell-session
   $ export VAULT_DATA=/opt/vault/data
```
1. Set the `VAULT_CONFIG` environment variable to your preferred Vault
configuration directory. For example, `/etc/vault.d`:
```shell-session
   $ export VAULT_CONFIG=/etc/vault.d
```
1. Move the Vault binary to `/usr/bin`:
```shell-session
$ sudo mv PATH/TO/VAULT/BINARY /usr/bin/
```
1. Ensure the Vault binary can use `mlock()` to run as a non-root user:
```shell-session
$ sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))
```
See the support article
[Vault and mlock()](https://support.hashicorp.com/hc/en-us/articles/115012787688-Vault-and-mlock)
for more information.
1. Create your Vault data directory:
```shell-session
$ sudo mkdir -p ${VAULT_DATA}
```
1. Create your Vault configuration directory:
```shell-session
$ sudo mkdir -p ${VAULT_CONFIG}
```
<Highlight title="Best practice">
We recommend storing Vault data and Vault logs on different volumes than the
operating system.
</Highlight>
</Tab>
<Tab heading="Powershell" group="ps">
1. Run Powershell as Administrator.
1. Set a `VAULT_HOME` environment variable to your preferred Vault home
directory. For example, `c:\Program Files\Vault`:
```powershell
$env:VAULT_HOME = "${env:ProgramFiles}\Vault"
```
1. Create the Vault home directory:
```powershell
New-Item -ItemType Directory -Path "${env:VAULT_HOME}"
```
1. Create the Vault data directory. For example, `c:\Program Files\Vault\Data`:
```powershell
New-Item -ItemType Directory -Path "${env:VAULT_HOME}/Data"
```
1. Create the Vault configuration directory. For example,
`c:\Program Files\Vault\Config`:
```powershell
New-Item -ItemType Directory -Path "${env:VAULT_HOME}/Config"
```
1. Create the Vault logs directory. For example, `c:\Program Files\Vault\Logs`:
```powershell
New-Item -ItemType Directory -Path "${env:VAULT_HOME}/Logs"
```
1. Move the Vault binary to your Vault directory:
```powershell
Move-Item `
-Path <PATH/TO/VAULT/BINARY> `
-Destination ${env:VAULT_HOME}\vault.exe
```
1. Add the Vault home directory to the system `Path` variable.
   [![System PATH editor in Windows OS GUI](/img/install/windows-system-path.png)](/img/install/windows-system-path.png)
</Tab>
</Tabs>
## Step 2: Configure user permissions
<Tabs>
<Tab heading="Linux shell" group="nix">
1. Create a system user called `vault` to run Vault, with your Vault data
   directory as `home` and `nologin` as the shell:
```shell-session
$ sudo useradd --system --home ${VAULT_DATA} --shell /sbin/nologin vault
```
1. Change directory ownership of your data directory to the `vault` user:
```shell-session
$ sudo chown vault:vault ${VAULT_DATA}
```
1. Grant the `vault` user full permission on the data directory, search
permission for the group, and deny access to others:
```shell-session
$ sudo chmod -R 750 ${VAULT_DATA}
```
</Tab>
<Tab heading="Powershell" group="ps">
1. Create an access rule to grant the `Local System` user access to the Vault
directory and related files:
```powershell
$SystemAccessRule =
New-Object System.Security.AccessControl.FileSystemAccessRule(
"SYSTEM",
"FullControl",
"ContainerInherit,Objectinherit",
"none",
"Allow"
)
```
1. Create an access rule to grant yourself access to the Vault directory and
related files so you can test your Vault installation:
```powershell
$myUsername = Get-CimInstance -Class Win32_Computersystem | `
Select-Object UserName | foreach {$_.UserName} ; `
$AdminAccessRule =
New-Object System.Security.AccessControl.FileSystemAccessRule(
"$myUsername",
"FullControl",
"ContainerInherit,Objectinherit",
"none",
"Allow"
)
```
<Highlight title="Create additional access rules for human users if needed">
If you expect other accounts to start and run the Vault server, you must
create and apply access rules for those users as well. While users can run
the Vault CLI without explicit access, if they try to start the Vault
server, the process will fail with a permission denied error.
</Highlight>
1. Update permissions on the `env:VAULT_HOME` directory:
```powershell
$ACLObject = Get-ACL ${env:VAULT_HOME} ; `
$ACLObject.AddAccessRule($AdminAccessRule) ; `
$ACLObject.AddAccessRule($SystemAccessRule) ; `
Set-Acl ${env:VAULT_HOME} $ACLObject
```
</Tab>
</Tabs>
## Step 3: Create a basic configuration file
Create a basic Vault configuration file for testing and development.
<Warning title="Always enable TLS for production">
The sample configuration below disables TLS for simplicity and is not
appropriate for production use. Refer to the
[configuration documentation](/vault/docs/configuration) for a full list of
supported parameters.
</Warning>
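
For reference, a production-style `listener` stanza with TLS enabled might look
like the following sketch, where the certificate and key paths are placeholders
you would replace with your own:

```hcl
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault.d/tls/vault-cert.pem"
  tls_key_file  = "/etc/vault.d/tls/vault-key.pem"
}
```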
<Tabs>
<Tab heading="Linux shell" group="nix">
1. Create a file called `vault.hcl` under your configuration directory:
```shell-session
$ sudo tee ${VAULT_CONFIG}/vault.hcl <<EOF
ui = true
cluster_addr = "http://127.0.0.1:8201"
api_addr = "https://127.0.0.1:8200"
disable_mlock = true
storage "raft" {
path = "${VAULT_DATA}"
node_id = "127.0.0.1"
}
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_disable = 1
}
EOF
```
1. Change ownership and permissions on the Vault configuration file.
```shell-session
$ sudo chown vault:vault "${VAULT_CONFIG}/vault.hcl" && \
sudo chmod 640 "${VAULT_CONFIG}/vault.hcl"
```
</Tab>
<Tab heading="Powershell" group="ps">
Create a file called `vault.hcl` under your configuration directory:
```powershell
@"
ui = true
cluster_addr = "http://127.0.0.1:8201"
api_addr = "https://127.0.0.1:8200"
disable_mlock = true
storage "raft" {
path = "$(${env:VAULT_HOME}.Replace('\','\\'))\\Data"
node_id = "127.0.0.1"
}
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_disable = 1
}
"@ | Out-File -FilePath ${env:VAULT_HOME}/Config/vault.hcl -Encoding ascii
```
<Note title="The double backslashes (\\) are not an error">
You **must** escape the Windows path character in your Vault configuration
file or the Vault server will fail with an error claiming the file contains
invalid characters.
</Note>
</Tab>
</Tabs>
## Step 4: Verify your installation
To verify your Vault installation, use the help option with the Vault CLI to
confirm the CLI is accessible, and bring up the server in development mode to
confirm you can run the binary.
<Tabs>
<Tab heading="Linux shell" group="nix">
1. Bring up the help menu in the Vault CLI:
```shell-session
$ vault -h
```
1. Use the Vault CLI to bring up a Vault server in development mode:
```shell-session
$ vault server -dev -config ${VAULT_CONFIG}/vault.hcl
```
</Tab>
<Tab heading="Powershell" group="ps">
1. Start a new Powershell session without Administrator permission.
1. Bring up the help menu in the Vault CLI:
```powershell
vault -h
```
1. Use the Vault CLI to bring up a Vault server in development mode:
```powershell
vault server -dev -config ${env:VAULT_HOME}\Config\vault.hcl
```
</Tab>
</Tabs>
## Related tutorials
The following tutorials provide additional guidance for installing Vault and
production cluster deployment:
- [Get started: Install Vault](/vault/tutorials/getting-started/getting-started-install)
- [Day One Preparation](/vault/tutorials/day-one-raft)
- [Recommended Patterns](/vault/tutorials/recommended-patterns)
- [Start the server in dev mode](/vault/tutorials/getting-started/getting-started-dev-server)
---
layout: docs
page_title: Why use Agent or Proxy?
description: >-
Use Vault tools like Agent and Proxy to simplify secret fetching and add Vault
to your development environment with minimal client code updates.
---
# Why use Agent or Proxy?
A valid client token must accompany most requests to Vault. This
includes all API requests, as well as requests made through the Vault CLI and
client libraries.
Therefore, Vault clients must first authenticate with Vault to acquire a token.
Vault provides several authentication methods to assist in
delivering this initial token.
![Client authentication](/img/diagram-vault-agent.png)
If the client can securely acquire the token, all subsequent requests (e.g., request
database credentials, read key/value secrets) are processed based on the
trust established by a successful authentication.
This means that a client application must invoke the Vault API to authenticate
with Vault and manage the acquired token, in addition to invoking the API to
request secrets from Vault. This implies code changes to client applications,
along with additional testing and maintenance of those applications.

The following code example uses the Vault API to authenticate with Vault
through the [AppRole auth method](/vault/docs/auth/approle#code-example), and then uses
the returned client token to read secrets at `kv-v2/data/creds`.
```go
package main

import (
	...snip...

	vault "github.com/hashicorp/vault/api"
)

// Fetches a key-value secret (kv-v2) after authenticating via AppRole.
func getSecretWithAppRole() (string, error) {
	config := vault.DefaultConfig()

	client, err := vault.NewClient(config)
	if err != nil {
		return "", fmt.Errorf("unable to initialize Vault client: %w", err)
	}

	// Read the response-wrapping token and unwrap it to get the secret ID.
	wrappingToken, err := ioutil.ReadFile("path/to/wrapping-token")
	if err != nil {
		return "", fmt.Errorf("unable to read wrapping token: %w", err)
	}

	unwrappedToken, err := client.Logical().Unwrap(strings.TrimSuffix(string(wrappingToken), "\n"))
	if err != nil {
		return "", fmt.Errorf("unable to unwrap token: %w", err)
	}

	secretID := unwrappedToken.Data["secret_id"]
	roleID := os.Getenv("APPROLE_ROLE_ID")

	// Log in with AppRole and use the returned client token for all
	// subsequent requests.
	params := map[string]interface{}{
		"role_id":   roleID,
		"secret_id": secretID,
	}
	resp, err := client.Logical().Write("auth/approle/login", params)
	if err != nil {
		return "", fmt.Errorf("unable to log in with AppRole: %w", err)
	}
	client.SetToken(resp.Auth.ClientToken)

	secret, err := client.Logical().Read("kv-v2/data/creds")
	if err != nil {
		return "", fmt.Errorf("unable to read secret: %w", err)
	}

	data := secret.Data["data"].(map[string]interface{})

	...snip...
}
```
For some Vault deployments, making (and maintaining) these changes to
applications may not be a problem, and may actually be preferred. This may
apply to scenarios where you have a small number of applications, or you want
to keep strict, customized control over how each application interacts with
Vault. However, in other situations where you have a large number of
applications, as in large enterprises, you may not have the resources or
expertise to update and maintain the Vault integration code for every
application. When third-party applications are deployed in your environment,
it may not even be possible to add the Vault integration code at all.
### Introduce Vault Agent and Vault Proxy to the workflow
[Vault Agent][vaultagent] and [Vault Proxy][vaultproxy] aim to remove this initial hurdle to adopt Vault by providing a
more scalable and simpler way for applications to integrate with Vault. Vault Agent can
obtain secrets and provide them to applications, and Vault Proxy can act as
a proxy between Vault and the application, optionally simplifying the authentication process
and caching requests.
As with most other CLI commands for the Vault binary, neither Vault Agent nor
Vault Proxy requires a Vault Enterprise license, and both are available in all
Vault binaries and images. Note
however that some features, such as [static secret caching][static-secret-caching], are only
available when connected to a Vault Enterprise server.
| Capability | Vault Agent | Vault Proxy |
|------------------------------------------------------------------------------------------|:------------------:|:-----------:|
| [Auto-Auth][autoauth] to authenticate with Vault | x | x |
| Run as a [Windows Service][winsvc] | x | x |
| [Caching][caching] the newly created tokens and leases | x | x |
| [Templating][template] to render user-supplied templates | x | |
| [Process Supervisor][exec] for injecting secrets as environment variables into a process | x | |
| [API Proxy][apiproxy] to act as a proxy for Vault API | Will be deprecated | x |
| [Static secret caching][static-secret-caching] for KV secrets | | x |
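
Both tools are subcommands of the same `vault` binary, so trying either one
only requires a configuration file. As a minimal illustration (the file names
here are placeholders):

```shell-session
$ vault agent -config=agent-config.hcl

$ vault proxy -config=proxy-config.hcl
```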
To learn more, refer to the [Vault Agent][vaultagent] or [Vault
Proxy][vaultproxy] documentation page.
[autoauth]: /vault/docs/agent-and-proxy/autoauth
[caching]: /vault/docs/agent-and-proxy/proxy/caching
[static-secret-caching]: /vault/docs/agent-and-proxy/proxy/caching/static-secret-caching
[apiproxy]: /vault/docs/agent-and-proxy/proxy/apiproxy
[template]: /vault/docs/agent-and-proxy/agent/template
[exec]: /vault/docs/agent-and-proxy/agent/process-supervisor
[template-config]: /vault/docs/agent-and-proxy/agent/template#template-configurations
[vaultagent]: /vault/docs/agent-and-proxy/agent
[vaultproxy]: /vault/docs/agent-and-proxy/proxy
[winsvc]: /vault/docs/agent-and-proxy/agent/winsvc
---
layout: docs
page_title: What is Auto-authentication?
description: >-
Use auto-authentication with Vault Agent or Vault Proxy to simplify client
authentication to Vault in a variety of environments.
---
# What is Auto-authentication?
Auto-authentication simplifies client authentication in a wide variety of
environments. The following Vault tools come with auto-authentication built in:
- Vault Agent
- Vault Proxy
## Methods and sinks
Auto-auth consists of two parts:
- a **method** - the desired authentication method for the current environment
- a **sink** - the location where tools save tokens when the token value changes
When a supported tool starts with auto-auth enabled, the tool requests a Vault
token using the configured method. If the request fails, the tool retries the
request with an exponential backoff.
Once the request succeeds, auto-auth renews unwrapped authentication tokens
automatically until Vault denies the renewal. If the authentication method
wraps tokens, auto-auth cannot renew the token automatically.
Vault typically denies renewal if the token:

- was revoked.
- has exceeded its maximum number of uses.
- is otherwise invalid.
Every time authentication succeeds, auto-auth writes the token to any
appropriately configured sink.
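
In configuration terms, each method and sink corresponds to a block inside an
`auto_auth` stanza. A minimal sketch, using illustrative file paths (complete
examples appear in the [configuration](#configuration) section below):

```hcl
auto_auth {
  # The method: how to authenticate in this environment.
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/roleid"
      secret_id_file_path = "/etc/vault/secretid"
    }
  }

  # The sink: where to write the token whenever its value changes.
  sink "file" {
    config = {
      path = "/tmp/vault-token"
    }
  }
}
```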
## Advanced functionality
Sinks support some advanced features, including the ability for the written
values to be encrypted or
[response-wrapped](/vault/docs/concepts/response-wrapping).
Both mechanisms can be used concurrently; in this case, the value will be
response-wrapped, then encrypted.
### Response-wrapping tokens
There are two ways that tokens can be response-wrapped:
1. By the auth method. This allows the end client to introspect the
`creation_path` of the token, helping prevent Man-In-The-Middle (MITM)
attacks. However, because auto-auth cannot then unwrap the token and rewrap
it without modifying the `creation_path`, we are not able to renew the
token; it is up to the end client to renew the token. Agent and Proxy both
stay daemonized in this mode since some auth methods allow for reauthentication
on certain events.
2. By any of the token sinks. Because more than one sink can be configured, the
token must be wrapped after it is fetched, rather than wrapped by Vault as
it's being returned. As a result, the `creation_path` will always be
`sys/wrapping/wrap`, and validation of this field cannot be used as
protection against MITM attacks. However, this mode allows auto-auth to keep
the token renewed for the end client and automatically reauthenticate when
it expires.
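
In configuration terms, the only difference between the two modes is where
`wrap_ttl` is set. A hedged sketch showing both placements (only one would
normally be used; the values are illustrative):

```hcl
auto_auth {
  method "approle" {
    # Option 1: wrap at the method. The creation_path of the wrapping
    # token is verifiable, but the end client must renew the token.
    # wrap_ttl = "5m"
    config = {
      role_id_file_path   = "/etc/vault/roleid"
      secret_id_file_path = "/etc/vault/secretid"
    }
  }

  sink "file" {
    # Option 2: wrap at the sink. Auto-auth keeps the underlying token
    # renewed, but creation_path is always sys/wrapping/wrap.
    wrap_ttl = "5m"
    config = {
      path = "/tmp/wrapped-token"
    }
  }
}
```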
### Encrypting tokens
~> Support for encrypted tokens is experimental; if input/output formats
change, we will make every effort to provide backwards compatibility.
Tokens can be encrypted, using a Diffie-Hellman exchange to generate an
ephemeral key. In this mechanism, the client receiving the token writes a
generated public key to a file. The sink responsible for writing the token to
that client looks for this public key and uses it to compute a shared secret
key, which is then used to encrypt the token via AES-GCM. The nonce, encrypted
payload, and the sink's public key are then written to the output file, where
the client can compute the shared secret and decrypt the token value.
~> NOTE: Token encryption is not a protection against MITM attacks! The purpose
of this feature is for forward-secrecy and coverage against bare token values
being persisted. A MITM that can write to the sink's output and/or client
public-key input files could attack this exchange. Using TLS to protect the
transit of tokens is highly recommended.
To help mitigate MITM attacks, additional authenticated data (AAD) can be
provided to Agent and Proxy. This data is written as part of the AES-GCM tag and must
match on both Agent and Proxy and the client. This of course means that protecting
this AAD becomes important, but it provides another layer for an attacker to
have to overcome. For instance, if the attacker has access to the file system
where the token is being written, but not to read configuration or read
environment variables, this AAD can be generated and passed to Agent or Proxy and
the client in ways that would be difficult for the attacker to find.
When using AAD, it is always a good idea for this to be as fresh as possible;
generate a value and pass it to your client and Agent or Proxy on startup.
Additionally, Agent and Proxy use a Trust On First Use model; after finding a
generated public key, they will reuse that public key instead of looking for
new values that have been written.
If you are writing a client that uses this feature, it will likely be helpful
to look at the
[dhutil](https://github.com/hashicorp/vault/blob/main/helper/dhutil/dhutil.go)
library, which shows the expected formats of the public key input and the
envelope output.
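
As a rough sketch, a sink configured for encrypted output might look like the
following; the paths and the AAD environment variable name are placeholders,
and each option is described under
[Configuration (Sinks)](#configuration-sinks) below:

```hcl
sink "file" {
  dh_type     = "curve25519"            # Diffie-Hellman exchange type
  dh_path     = "/tmp/client-dh-pubkey" # public key written by the client
  derive_key  = true                    # derive the final key with HKDF-SHA256
  aad_env_var = "VAULT_SINK_AAD"        # AAD read from the environment
  config = {
    path = "/tmp/encrypted-token"
  }
}
```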
## Configuration
The top level `auto_auth` block has the following configuration entries:

- `method` `(object: required)` - Configuration for the method
- `sinks` `(array of objects: optional)` - Configuration for the sinks
- `enable_reauth_on_new_credentials` `(bool: false)` - If enabled, auto-auth will
  handle new credential events from supported auth methods (AliCloud/AWS/Cert/JWT/LDAP/OCI)
  and re-authenticate with the new credential.
### Configuration (Method)
~> Auto-auth does not support using tokens with a limited number of uses. Auto-auth
does not track the number of uses remaining, and may allow the token to
expire before attempting to renew it. For example, if using AppRole auto-auth,
you must use 0 (meaning unlimited) as the value for
[`token_num_uses`](/vault/api-docs/auth/approle#token_num_uses).
These are common configuration values that live within the `method` block:
- `type` `(string: required)` - The type of the method to use, e.g. `aws`,
`gcp`, `azure`, etc. _Note_: when using HCL this can be used as the key for
the block, e.g. `method "aws" {...}`.
- `mount_path` `(string: optional)` - The mount path of the method. If not
specified, defaults to a value of `auth/<method type>`.
- `namespace` `(string: optional)` - Namespace in which the mount lives.
The order of precedence is: this setting lowest, followed by the
environment variable `VAULT_NAMESPACE`, and then the highest precedence
command-line option `-namespace`.
If none of these are specified, defaults to the root namespace.
Note that because sink response wrapping and templating are also based
on the client created by auto-auth, they use the same namespace.
If specified alongside the `namespace` option in the Vault Stanza of
[Vault Agent](/vault/docs/agent-and-proxy/agent#vault-stanza) or
[Vault Proxy](/vault/docs/agent-and-proxy/proxy#vault-stanza), that
configuration will take precedence over everything except auto-auth.
- `wrap_ttl` `(string or integer: optional)` - If specified, the written token
will be response-wrapped by auto-auth. This is more secure than wrapping by
sinks, but does not allow the auto-auth to keep the token renewed or
automatically reauthenticate when it expires. Rather than a simple string,
the written value will be a JSON-encoded
[SecretWrapInfo](https://godoc.org/github.com/hashicorp/vault/api#SecretWrapInfo)
structure. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `min_backoff` `(string or integer: "1s")` - The minimum backoff time auto-auth
will delay before retrying after a failed auth attempt. The backoff will start
at the configured value and double (with some randomness) after successive
failures, capped by `max_backoff`. If Agent templating is being used, this
value is also used as the min backoff time for the templating server.
Uses [duration format strings](/vault/docs/concepts/duration-format).
- `max_backoff` `(string or integer: "5m")` - The maximum time Agent will delay
before retrying after a failed auth attempt. The backoff will start at
`min_backoff` and double (with some randomness) after successive failures,
capped by `max_backoff`. If Agent templating is being used, this value is also
used as the max backoff time for the templating server. `max_backoff` is the
duration between retries, and **not** the duration that retries will be
performed before giving up. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `exit_on_err` `(bool: false)` - When set to true, Vault Agent and Vault Proxy
will exit if any errors occur during authentication. This setting only affects login
attempts for new tokens (either initial or expired tokens) and will not exit for errors on
valid token renewals.
- `config` `(object: required)` - Configuration of the method itself. See the
sidebar for information about each method.
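
Putting several of these values together, a hypothetical `method` block with
its common options spelled out (the mount path, namespace, and timings are
illustrative):

```hcl
method "approle" {
  mount_path  = "auth/approle"
  namespace   = "my-namespace"
  min_backoff = "5s"
  max_backoff = "3m"
  exit_on_err = false
  config = {
    role_id_file_path   = "/etc/vault/roleid"
    secret_id_file_path = "/etc/vault/secretid"
  }
}
```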
### Configuration (Sinks)
These configuration values are common to all Sinks:
- `type` `(string: required)` - The type of the sink to use, e.g. `file`.
  _Note_: when using HCL this can be used as the key for the block, e.g. `sink "file" {...}`.
- `wrap_ttl` `(string or integer: optional)` - If specified, the written token
will be response-wrapped by the sink. This is less secure than wrapping by
the method, but allows auto-auth to keep the token renewed and automatically
reauthenticate when it expires. Rather than a simple string, the written
value will be a JSON-encoded
[SecretWrapInfo](https://godoc.org/github.com/hashicorp/vault/api#SecretWrapInfo)
structure. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `dh_type` `(string: optional)` - If specified, the type of Diffie-Hellman exchange to
perform, meaning, which ciphers and/or curves. Currently only `curve25519` is
supported.
- `dh_path` `(string: required if dh_type is set)` - The path from which the
auto-auth should read the client's initial parameters (e.g. curve25519 public
key).
- `derive_key` `(bool: false)` - If specified, the final encryption key is
calculated by using HKDF-SHA256 to derive a key from the calculated shared
secret and the two public keys for enhanced security. This is recommended
if backward compatibility isn't a concern.
- `aad` `(string: optional)` - If specified, additional authenticated data to
use with the AES-GCM encryption of the token. Can be any string, including
serialized data.
- `aad_env_var` `(string: optional)` - If specified, AAD will be read from the
given environment variable rather than a value in the configuration file.
- `config` `(object: required)` - Configuration of the sink itself. See the
sidebar for information about each sink.
### Auto-auth examples
Auto-auth configuration objects take two separate forms when specified in HCL
and JSON. The following examples are meant to clarify the differences between
the two formats.
#### Sinks (HCL format)
The HCL format may define any number of sink objects with an optional wrapping
`sinks {...}` object.
~> Note: The [corresponding JSON format](#sinks-json-format) _must_ specify a
`"sinks" : [...]` array to encapsulate all `sink` JSON objects.
```hcl
// Other Vault Agent or Vault Proxy configuration blocks
// ...

auto_auth {
  method {
    type = "approle"
    config = {
      role_id_file_path   = "/etc/vault/roleid"
      secret_id_file_path = "/etc/vault/secretid"
    }
  }

  sinks {
    sink {
      type = "file"
      config = {
        path = "/tmp/file-foo"
      }
    }
  }
}
```
The following valid HCL omits the wrapping `sinks` object while specifying
multiple sinks.
```hcl
// Other Vault Agent or Vault Proxy configuration blocks
// ...

auto_auth {
  method {
    type = "approle"
    config = {
      role_id_file_path   = "/etc/vault/roleid"
      secret_id_file_path = "/etc/vault/secretid"
    }
  }

  sink {
    type = "file"
    config = {
      path = "/tmp/file-foo"
    }
  }

  sink {
    type = "file"
    config = {
      path = "/tmp/file-bar"
    }
  }
}
```
#### Sinks (JSON format)
The following JSON configuration illustrates the need for a `sinks: [...]` array
wrapping any number of `sink` objects.
```json
{
  "auto_auth" : {
    "method" : [
      {
        "type" : "approle",
        "config" : {
          "role_id_file_path" : "/etc/vault/roleid",
          "secret_id_file_path" : "/etc/vault/secretid"
        }
      }
    ],
    "sinks" : [
      {
        "sink" : {
          "type" : "file",
          "config" : {
            "path" : "/tmp/file-foo"
          }
        }
      }
    ]
  }
}
```
Multiple sinks are defined by appending more `sink` objects within the `sinks`
array:
```json
{
  "auto_auth" : {
    "method" : [
      {
        "type" : "approle",
        "config" : {
          "role_id_file_path" : "/etc/vault/roleid",
          "secret_id_file_path" : "/etc/vault/secretid"
        }
      }
    ],
    "sinks" : [
      {
        "sink" : {
          "type" : "file",
          "config" : {
            "path" : "/tmp/file-foo"
          }
        }
      },
      {
        "sink" : {
          "type" : "file",
          "config" : {
            "path" : "/tmp/file-bar"
          }
        }
      }
    ]
  }
}
```
---
layout: docs
page_title: Auto-auth with AppRole
description: >-
Use application roles for auto-authentication with Vault Agent or
Vault Proxy.
---
# Auto-auth method: application roles (AppRole)
The `approle` method reads in a role ID and a secret ID from files and sends
the values to the [AppRole Auth
method](/vault/docs/auth/approle).
The method caches values and it is safe to delete the role ID/secret ID files
after they have been read. In fact, by default, after reading the secret ID,
the agent will delete the file. New files or values written at the expected
locations will be used on next authentication and the new values will be
cached.
## Configuration
- `role_id_file_path` `(string: required)` - The path to the file with the role ID.
- `secret_id_file_path` `(string: optional)` - The path to the file with the
  secret ID. If not set, only the role ID will be used. In that case, the
  AppRole must have `bind_secret_id` set to `false`; otherwise, Vault Agent
  will not be able to log in.
- `remove_secret_id_file_after_reading` `(bool: optional, defaults to true)` -
This can be set to `false` to disable the default behavior of removing the
secret ID file after it's been read.
- `secret_id_response_wrapping_path` `(string: optional)` - If set, the value
at `secret_id_file_path` will be expected to be a [Response-Wrapping
Token](/vault/docs/concepts/response-wrapping)
containing the output of the secret ID retrieval endpoint for the role (e.g.
`auth/approle/role/webservers/secret-id`) and the creation path for the
response-wrapping token must match the value set here.
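
The role ID and secret ID files themselves are typically produced with the
standard AppRole endpoints. A hedged sketch of one way to generate them,
assuming a role named `webservers`:

```shell-session
$ vault read -field=role_id auth/approle/role/webservers/role-id > /etc/vault/roleid

$ vault write -f -field=wrapping_token -wrap-ttl=5m \
    auth/approle/role/webservers/secret-id > /etc/vault/secretid
```

With the second (response-wrapped) form, `secret_id_response_wrapping_path`
would be set to `auth/approle/role/webservers/secret-id`.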
## Example configuration
An example configuration, using approle to enable [auto-auth](/vault/docs/agent-and-proxy/autoauth)
and creating both a plaintext token sink and a [response-wrapped token sink file](/vault/docs/agent-and-proxy/autoauth#wrap_ttl), follows:
```hcl
pid_file = "./pidfile"

vault {
  address = "https://127.0.0.1:8200"
}

auto_auth {
  method {
    type = "approle"
    config = {
      role_id_file_path                   = "roleid"
      secret_id_file_path                 = "secretid"
      remove_secret_id_file_after_reading = false
    }
  }

  sink {
    type     = "file"
    wrap_ttl = "30m"
    config = {
      path = "sink_file_wrapped_1.txt"
    }
  }

  sink {
    type = "file"
    config = {
      path = "sink_file_unwrapped_2.txt"
    }
  }
}

api_proxy {
  use_auto_auth_token = true
}

listener "tcp" {
  address     = "127.0.0.1:8100"
  tls_disable = true
}

template {
  source      = "/etc/vault/server.key.ctmpl"
  destination = "/etc/vault/server.key"
}

template {
  source      = "/etc/vault/server.crt.ctmpl"
  destination = "/etc/vault/server.crt"
}
```
---
layout: docs
page_title: What is Vault Proxy?
description: >-
Vault Proxy is a server-side daemon with caching and auto-authentication that
acts as load-balancer and API proxy for Vault.
---
# What is Vault Proxy?
Vault Proxy aims to remove the initial hurdle to adopt Vault by providing a
more scalable and simpler way for applications to integrate with Vault.
Vault Proxy acts as an [API Proxy][apiproxy] for Vault, and can optionally allow
or force interacting clients to use its [automatically authenticated token][autoauth].
Vault Proxy is a client daemon that provides the following features:
- [Auto-Auth][autoauth] - Automatically authenticate to Vault and manage the
token renewal process for locally-retrieved dynamic secrets.
- [API Proxy][apiproxy] - Acts as a proxy for Vault's API,
optionally using (or forcing the use of) the Auto-Auth token.
- [Caching][caching] - Allows client-side caching of responses containing newly
created tokens and responses containing leased secrets generated off of these
newly created tokens. The proxy also manages the renewals of the cached tokens and leases.
## Auto-Auth
Vault Proxy allows easy authentication to Vault in a wide variety of
environments. Please see the [Auto-Auth docs][autoauth]
for information.
Auto-Auth functionality takes place within an `auto_auth` configuration stanza.
## API proxy
Vault Proxy's primary purpose is to act as an API proxy for Vault, allowing you to talk to Vault's
API via a listener. It can be configured to optionally allow or force the automatic use of
the Auto-Auth token for these requests. Please see the [API Proxy docs][apiproxy]
for more information.
API Proxy functionality takes place within a defined `listener`, and its behavior can be configured with an
[`api_proxy` stanza](/vault/docs/agent-and-proxy/proxy/apiproxy#configuration-api_proxy).
## Caching
Vault Proxy allows client-side caching of responses containing newly created tokens
and responses containing leased secrets generated off of these newly created tokens.
Please see the [Caching docs][caching] for information.
## API
### Quit
This endpoint triggers shutdown of the proxy. By default, it is disabled, and can
be enabled per listener using the [`proxy_api`][proxy-api] stanza. It is recommended
to only enable this on trusted interfaces, as it does not require any authorization to use.
| Method | Path |
| :----- | :--------------- |
| `POST` | `/proxy/v1/quit` |
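
For example, assuming a listener on `127.0.0.1:8100` with `enable_quit` set to
`true`, the endpoint could be invoked with a plain unauthenticated request:

```shell-session
$ curl --request POST http://127.0.0.1:8100/proxy/v1/quit
```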
### Cache
See the [caching](/vault/docs/agent-and-proxy/proxy/caching#api) page for details on the cache API.
## Configuration
### Command options
- `-log-level` ((#\_log_level)) `(string: "info")` - Log verbosity level. Supported values (in
order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. This can
also be specified via the `VAULT_LOG_LEVEL` environment variable.
- `-log-format` ((#\_log_format)) `(string: "standard")` - Log format. Supported values
are `standard` and `json`. This can also be specified via the
`VAULT_LOG_FORMAT` environment variable.
- `-log-file` ((#\_log_file)) - the absolute path where Vault Proxy should save
log messages. Paths that end with a path separator use the default file name,
`proxy.log`. Paths that do not end with a file extension use the default
`.log` extension. If the log file rotates, Vault Proxy appends the current
timestamp to the file name at the time of rotation. For example:
`log-file` | Full log file | Rotated log file
---------- | ------------- | ----------------
`/var/log` | `/var/log/proxy.log` | `/var/log/proxy-{timestamp}.log`
`/var/log/my-diary` | `/var/log/my-diary.log` | `/var/log/my-diary-{timestamp}.log`
`/var/log/my-diary.txt` | `/var/log/my-diary.txt` | `/var/log/my-diary-{timestamp}.txt`
- `-log-rotate-bytes` ((#\_log_rotate_bytes)) - to specify the number of
bytes that should be written to a log before it needs to be rotated. Unless specified,
there is no limit to the number of bytes that can be written to a log file.
- `-log-rotate-duration` ((#\_log_rotate_duration)) - to specify the maximum
duration a log should be written to before it needs to be rotated. Must be a duration
value such as 30s. Defaults to 24h.
- `-log-rotate-max-files` ((#\_log_rotate_max_files)) - to specify the maximum
number of older log file archives to keep. Defaults to `0` (no files are ever deleted).
Set to `-1` to discard old log files when a new one is created.
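
For example, combining several of these flags in a single hypothetical
invocation:

```shell-session
$ vault proxy -config=/etc/vault/proxy-config.hcl \
    -log-level=debug \
    -log-file=/var/log/vault/ \
    -log-rotate-bytes=10485760 \
    -log-rotate-max-files=5
```

This would write debug-level logs to `/var/log/vault/proxy.log`, rotating at
roughly 10 MiB and keeping the five most recent archives.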
### Configuration file options
These are the currently-available general configuration options:
- `vault` <code>([vault][vault]: <optional\>)</code> - Specifies the remote Vault server the Proxy connects to.
- `auto_auth` <code>([auto_auth][autoauth]: <optional\>)</code> - Specifies the method and other options used for Auto-Auth functionality.
- `api_proxy` <code>([api_proxy][apiproxy]: <optional\>)</code> - Specifies options used for API Proxy functionality.
- `cache` <code>([cache][caching]: <optional\>)</code> - Specifies options used for Caching functionality.
- `listener` <code>([listener][listener]: <optional\>)</code> - Specifies the addresses and ports on which the Proxy will respond to requests.
~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Proxy will attempt to reload listener TLS configuration.
This method can be used to refresh certificates used by Vault Proxy without having to restart its process.
- `pid_file` `(string: "")` - Path to the file in which the Proxy's Process ID
(PID) should be stored
- `exit_after_auth` `(bool: false)` - If set to `true`, the proxy will exit
with code `0` after a single successful auth, where success means that a
token was retrieved and all sinks successfully wrote it
- `disable_idle_connections` `(string array: [])` - A list of strings that disables idle connections for various features in Vault Proxy.
Valid values include: `auto-auth`, and `proxying`. Can also be configured by setting the `VAULT_PROXY_DISABLE_IDLE_CONNECTIONS`
environment variable as a comma separated string. This environment variable will override any values found in a configuration file.
- `disable_keep_alives` `(string array: [])` - A list of strings that disables keep alives for various features in Vault Proxy.
Valid values include: `auto-auth`, and `proxying`. Can also be configured by setting the `VAULT_PROXY_DISABLE_KEEP_ALIVES`
environment variable as a comma separated string. This environment variable will override any values found in a configuration file.
- `telemetry` <code>([telemetry][telemetry]: <optional\>)</code> – Specifies the telemetry
reporting system. See the [telemetry Stanza](/vault/docs/agent-and-proxy/proxy#telemetry-stanza) section below
for a list of metrics specific to Proxy.
- `log_level` - Equivalent to the [`-log-level` command-line flag](#_log_level).
~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Proxy will update the log level to the value
specified by configuration file (including overriding values set using CLI or environment variable parameters).
- `log_format` - Equivalent to the [`-log-format` command-line flag](#_log_format).
- `log_file` - Equivalent to the [`-log-file` command-line flag](#_log_file).
- `log_rotate_duration` - Equivalent to the [`-log-rotate-duration` command-line flag](#_log_rotate_duration).
- `log_rotate_bytes` - Equivalent to the [`-log-rotate-bytes` command-line flag](#_log_rotate_bytes).
- `log_rotate_max_files` - Equivalent to the [`-log-rotate-max-files` command-line flag](#_log_rotate_max_files).
### vault stanza
There can be at most one top level `vault` block, and it has the following
configuration entries:
- `address` `(string: <optional>)` - The address of the Vault server to
connect to. This should be a Fully Qualified Domain Name (FQDN) or IP
such as `https://vault-fqdn:8200` or `https://172.16.9.8:8200`.
This value can be overridden by setting the `VAULT_ADDR` environment variable.
- `ca_cert` `(string: <optional>)` - Path on the local disk to a single PEM-encoded
CA certificate to verify the Vault server's SSL certificate. This value can
be overridden by setting the `VAULT_CACERT` environment variable.
- `ca_path` `(string: <optional>)` - Path on the local disk to a directory of
PEM-encoded CA certificates to verify the Vault server's SSL certificate.
This value can be overridden by setting the `VAULT_CAPATH` environment
variable.
- `client_cert` `(string: <optional>)` - Path on the local disk to a single
PEM-encoded CA certificate to use for TLS authentication to the Vault server.
This value can be overridden by setting the `VAULT_CLIENT_CERT` environment
variable.
- `client_key` `(string: <optional>)` - Path on the local disk to a single
PEM-encoded private key matching the client certificate from `client_cert`.
This value can be overridden by setting the `VAULT_CLIENT_KEY` environment
variable.
- `tls_skip_verify` `(string: <optional>)` - Disable verification of TLS
certificates. Using this option is highly discouraged as it decreases the
security of data transmissions to and from the Vault server. This value can
be overridden by setting the `VAULT_SKIP_VERIFY` environment variable.
- `tls_server_name` `(string: <optional>)` - Name to use as the SNI host when
connecting via TLS. This value can be overridden by setting the
`VAULT_TLS_SERVER_NAME` environment variable.
- `namespace` `(string: <optional>)` - Namespace to use for all of Vault Proxy's
requests to Vault. This can also be specified by command line or environment variable.
The order of precedence is: this setting lowest, followed by the environment variable
`VAULT_NAMESPACE`, and then the highest precedence command-line option `-namespace`.
If none of these are specified, defaults to the root namespace.
#### retry stanza
The `vault` stanza may contain a `retry` stanza that controls how failing Vault
requests are handled. Auto-auth, however, has its own notion of retrying and is not
affected by this section.
Here are the options for the `retry` stanza:
- `num_retries` `(int: 12)` - Specify how many times a failing request will
be retried. A value of `0` translates to the default, i.e. 12 retries.
A value of `-1` disables retries. The environment variable `VAULT_MAX_RETRIES`
overrides this setting.
Requests originating
from the proxy cache will only be retried if they resulted in specific HTTP
result codes: any 50x code except 501 ("not implemented"), as well as 412
("precondition failed"); 412 is used in Vault Enterprise 1.7+ to indicate a
stale read due to eventual consistency. Requests coming from the template
subsystem are retried regardless of the failure.
### listener stanza
Vault Proxy supports one or more [listener][listener_main] stanzas. Listeners
can be configured with or without [caching][caching], but will use the cache if it
has been configured, and will enable the [API proxy][apiproxy]. In addition to the standard
listener configuration, a Proxy's listener configuration also supports the following:
- `require_request_header` `(bool: false)` - Require that all incoming HTTP
requests on this listener must have an `X-Vault-Request: true` header entry.
Using this option offers an additional layer of protection from Server Side
Request Forgery attacks. Requests on the listener that do not have the proper
`X-Vault-Request` header will fail, with an HTTP response status code of `412: Precondition Failed`.
- `role` `(string: default)` - `role` determines which APIs the listener serves.
It can be configured to `metrics_only` to serve only metrics, or the default role, `default`,
which serves everything (including metrics). The `require_request_header` does not apply
to `metrics_only` listeners.
- `proxy_api` <code>([proxy_api][proxy-api]: <optional\>)</code> - Manages optional Proxy API endpoints.
#### proxy_api stanza
- `enable_quit` `(bool: false)` - If set to `true`, the Proxy will enable the [quit](/vault/docs/agent-and-proxy/proxy#quit) API.
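
Putting the listener options together, a sketch of a listener that requires
the request header and exposes the quit endpoint (the address and TLS file
paths are illustrative):

```hcl
listener "tcp" {
  address                = "127.0.0.1:8100"
  tls_cert_file          = "/etc/vault/proxy.crt"
  tls_key_file           = "/etc/vault/proxy.key"
  require_request_header = true

  proxy_api {
    enable_quit = true
  }
}
```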
### telemetry stanza
Vault Proxy supports the [telemetry][telemetry] stanza and collects various
runtime metrics about its performance, the auto-auth and the cache status:
| Metric | Description | Type |
| -------------------------------- | ---------------------------------------------------- | ------- |
| `vault.proxy.auth.failure` | Number of authentication failures | counter |
| `vault.proxy.auth.success` | Number of authentication successes | counter |
| `vault.proxy.proxy.success` | Number of requests successfully proxied | counter |
| `vault.proxy.proxy.client_error` | Number of requests for which Vault returned an error | counter |
| `vault.proxy.proxy.error` | Number of requests the proxy failed to proxy | counter |
| `vault.proxy.cache.hit` | Number of cache hits | counter |
| `vault.proxy.cache.miss` | Number of cache misses | counter |
## Start Vault proxy
To run Vault Proxy:
1. [Download](/vault/downloads) the Vault binary where the client application runs
(virtual machine, Kubernetes pod, etc.)
1. Create a Vault Proxy configuration file. (See the [Example
Configuration](#example-configuration) section for an example configuration.)
1. Start a Vault Proxy with the configuration file.
**Example:**
```shell-session
$ vault proxy -config=/etc/vault/proxy-config.hcl
```
To get help, run:
```shell-session
$ vault proxy -h
```
As with Vault, the `-config` flag can be used in three different ways:
- Use the flag once to name the path to a single specific configuration file.
- Use the flag multiple times to name multiple configuration files, which will be composed at runtime.
- Use the flag to name a directory of configuration files, the contents of which will be composed at runtime.
## Example configuration
An example configuration, with very contrived values, follows:
```hcl
pid_file = "./pidfile"

vault {
  address = "https://vault-fqdn:8200"
  retry {
    num_retries = 5
  }
}

auto_auth {
  method "aws" {
    mount_path = "auth/aws-subaccount"
    config = {
      type = "iam"
      role = "foobar"
    }
  }

  sink "file" {
    config = {
      path = "/tmp/file-foo"
    }
  }

  sink "file" {
    wrap_ttl    = "5m"
    aad_env_var = "TEST_AAD_ENV"
    dh_type     = "curve25519"
    dh_path     = "/tmp/file-foo-dhpath2"
    config = {
      path = "/tmp/file-bar"
    }
  }
}

cache {
  // An empty cache stanza still enables caching
}

api_proxy {
  use_auto_auth_token = true
}

listener "unix" {
  address     = "/path/to/socket"
  tls_disable = true

  proxy_api {
    enable_quit = true
  }
}

listener "tcp" {
  address     = "127.0.0.1:8100"
  tls_disable = true
}
```
[vault]: /vault/docs/agent-and-proxy/proxy#vault-stanza
[autoauth]: /vault/docs/agent-and-proxy/autoauth
[caching]: /vault/docs/agent-and-proxy/proxy/caching
[apiproxy]: /vault/docs/agent-and-proxy/proxy/apiproxy
[persistent-cache]: /vault/docs/agent-and-proxy/proxy/caching/persistent-caches
[proxy-api]: /vault/docs/agent-and-proxy/proxy/#proxy_api-stanza
[listener]: /vault/docs/agent-and-proxy/proxy#listener-stanza
[listener_main]: /vault/docs/configuration/listener/tcp
[telemetry]: /vault/docs/configuration/telemetry

---
layout: docs
page_title: Use Vault Proxy as an API proxy
description: >-
Use auto-authentication and configure Vault Proxy as a proxy for the Vault API.
---
# Use Vault Proxy as an API proxy
Vault Proxy's API Proxy functionality allows you to use Vault Proxy's API as a proxy
for Vault's API.
## Functionality
The [`listener` stanza](/vault/docs/agent-and-proxy/proxy#listener-stanza) configures a listener for Vault Proxy. If
its `role` is not set to `metrics_only`, it will act as a proxy for the Vault server that
has been configured in the [`vault` stanza](/vault/docs/agent-and-proxy/proxy#vault-stanza) of Proxy. This enables access to the Vault
API from the Proxy API, and can be configured to optionally allow or force the automatic use of
the Auto-Auth token for these requests, as described below.
If a `listener` has been configured alongside a `cache` stanza, the API Proxy will
first attempt to utilize the cache subsystem for qualifying requests, before forwarding the
request to Vault. See the [caching docs](/vault/docs/agent-and-proxy/proxy/caching) for more information on caching.
## Using Auto-Auth token
Vault Proxy allows for easy authentication to Vault in a wide variety of
environments using [Auto-Auth](/vault/docs/agent-and-proxy/autoauth). By setting the
`use_auto_auth_token` configuration (see below), clients are not required
to provide a Vault token with requests made to the Proxy. When this
configuration is set and the request doesn't already bear a token, the
auto-auth token is used to forward the request to the Vault server. If the
request already has a token attached, the token present in the request is
used to forward the request to the Vault server instead.
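For example, a minimal sketch of this configuration (the listener address is illustrative) might look like:

```hcl
# Allow, but do not force, use of the auto-auth token for proxied requests.
api_proxy {
  use_auto_auth_token = true
}

listener "tcp" {
  address     = "127.0.0.1:8100"
  tls_disable = true
}
```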
## Forcing Auto-Auth token
Vault Proxy can be configured to force the use of the auto-auth token by using
the value `force` for the `use_auto_auth_token` option. This configuration
overrides the default behavior described above in [Using Auto-Auth
Token](/vault/docs/agent-and-proxy/proxy/apiproxy#using-auto-auth-token): any
existing Vault token in the request is ignored and the auto-auth token is used instead.
## Configuration (`api_proxy`)
The top level `api_proxy` block has the following configuration entries:
- `use_auto_auth_token` `(bool/string: false)` - If set, the requests made to Proxy
without a Vault token will be forwarded to the Vault server with the
auto-auth token attached. If the requests already bear a token, this
configuration will be overridden and the token in the request will be used to
forward the request to the Vault server. If set to `"force"` Proxy will use the
auto-auth token, overwriting the attached Vault token if set.
~> **Note**: When using the proxy's auto-auth token with the `use_auto_auth_token`
configuration, one proxy per application is very strongly recommended, as Vault will be
unable to distinguish requests coming from multiple applications through a single proxy.
In situations where a single proxy is shared by multiple applications, setting `use_auto_auth_token`
to `false` (the default) is recommended.
- `prepend_configured_namespace` `(bool: false)` - If set, when Proxy has a
namespace configured, such as through the
[Vault stanza](/vault/docs/agent-and-proxy/proxy#vault-stanza), all requests
proxied to Vault will have the configured namespace prepended to the namespace
header. If Proxy's namespace is set to `ns1` and Proxy is sent a request with the
namespace `ns2`, the request will go to the `ns1/ns2` namespace. Likewise, if Proxy
is sent a request without a namespace, the request will go to the `ns1` namespace.
In essence, all proxied requests must go to the configured
namespace or to its child namespaces, as illustrated below.
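For instance, assuming Proxy listens on `127.0.0.1:8100`, its `vault` stanza sets
`namespace = "ns1"`, and the secret path is hypothetical:

```shell-session
# Reaches Vault in the ns1/ns2 namespace:
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-Namespace: ns2" \
    http://127.0.0.1:8100/v1/secret/data/app

# With no namespace header, reaches Vault in the ns1 namespace:
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    http://127.0.0.1:8100/v1/secret/data/app
```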
The following two `api_proxy` options are only useful when making requests to a Vault
Enterprise cluster, and are documented as part of its
[Eventual Consistency](/vault/docs/enterprise/consistency#vault-proxy-and-consistency-headers)
page.
- `enforce_consistency` `(string: "never")` - Set to one of `"always"`
or `"never"`.
- `when_inconsistent` `(string: optional)` - Set to one of `"fail"`, `"retry"`,
or `"forward"`.
### Example configuration
Here is an example of a `listener` configuration alongside an `api_proxy` configuration that forces the use of the auto-auth token
and enforces consistency for a proxy dedicated to a single application.
```hcl
# Other Vault Proxy configuration blocks
# ...
api_proxy {
use_auto_auth_token = "force"
enforce_consistency = "always"
}
listener "unix" {
  address = "/var/run/vault-proxy.sock"
}
```

---
layout: docs
page_title: Vault Proxy caching overview
description: >-
Use client-side caching with Vault Proxy for responses with newly
created tokens or leased secrets generated from a newly created token.
---
# Vault Proxy caching overview
Vault Proxy caching allows client-side caching of responses containing newly
created tokens and responses containing leased secrets generated off of these
newly created tokens. The renewals of the cached tokens and leases are also
managed by the proxy. Additionally, with `cache_static_secrets` set to `true`,
Vault Proxy [can be configured to cache KVv1 and KVv2 secrets][static-secret-caching].
## Caching and renewals
Response caching and renewals for dynamic secrets are managed by Proxy only under these
specific scenarios.
1. Token creation requests are made through the proxy. This means that any
login operations performed using various auth methods and invoking the token
creation endpoints of the token auth method via the proxy will result in the
response getting cached by the proxy. Responses containing new tokens will
be cached by the proxy only if the parent token is already being managed by
the proxy or if the new token is an orphan token.
2. Leased secret creation requests are made through the proxy using tokens that
   are already managed by the proxy. This means that any dynamic credentials
   issued using the tokens managed by the proxy will be cached, and the proxy
   takes care of their renewals (see the example below).
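For example, an AppRole login sent through a hypothetical Proxy listener on
`127.0.0.1:8100` results in the returned token being cached, with its renewal
managed by the proxy (the role and secret IDs are placeholders):

```shell-session
$ curl \
    --request POST \
    --data '{"role_id": "<role_id>", "secret_id": "<secret_id>"}' \
    http://127.0.0.1:8100/v1/auth/approle/login
```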
## Static secret caching
You can configure Vault Proxy to cache dynamic secrets and static (KVv1 and KVv2)
secrets. When you enable caching for static secrets, Proxy keeps a cached entry
of the secret but only provides the cached response to requests made with tokens
that can access the secret. As a result, multiple requests to Vault Proxy for
the same KV secret only require a single, initial request to be forwarded to Vault.
Static secret caching is disabled by default. To enable caching for static secrets you must
configure [auto-auth](/vault/docs/agent-and-proxy/autoauth) and ensure the
auto-auth token has permission to subscribe to KV
[event](/vault/docs/concepts/events) updates.
Once configured, Proxy uses the auto-auth token to subscribe to KV events, and
monitors the subscription feed to know when to update the secrets in its cache.
For more information on static secret caching, refer to the
[Vault Proxy static secret caching][static-secret-caching] overview.
## Persistent cache
Vault Proxy can restore secrets, such as tokens, leases, and static secrets, from a persistent
cache file created by a previous Vault Proxy process.
Refer to the [Vault Proxy Persistent
Caching](/vault/docs/agent-and-proxy/proxy/caching/persistent-caches) page for more information on
this functionality.
## Cache evictions
The eviction of cache entries pertaining to dynamic secrets will occur when the proxy
can no longer renew them. This can happen when the secrets hit their maximum
TTL or if the renewals result in errors.
Vault Proxy performs some best-effort cache evictions by observing specific request types
and response codes. For example, if a token revocation request is made via the
proxy and the forwarded request to the Vault server succeeds, then the proxy
evicts all the cache entries associated with the revoked token. Similarly, any
lease revocation operation will also be intercepted by the proxy and the
respective cache entries will be evicted.
Note that while the proxy evicts cache entries upon secret expiration and upon
intercepting revocation requests, it is still possible for the proxy to be
completely unaware of revocations that happen through direct client
interactions with the Vault server. This could potentially lead to stale cache
entries. To manage stale entries in the cache, an endpoint
`/proxy/v1/cache-clear` (see below) is made available to manually evict cache
entries based on some of the query criteria used for indexing the cache entries.
## Request uniqueness
In order to detect repeat requests and return cached responses, Proxy needs
a way to uniquely identify requests. The computation as it stands today takes
a simplistic approach (which may change in the future): it serializes and
hashes the HTTP request along with all the headers and the request body. This
hash value is then used as an index into the cache to check if a response is
readily available. A consequence of this approach is that the hash value for
any request will differ if any data in the request is modified. This has the
side effect of producing false negatives if, say, the ordering of the request
parameters is modified. As long as requests come in without any change,
caching behavior is consistent; identical requests with differently ordered
request values will result in duplicated cache entries. The caching
functionality is built on the heuristic assumption that clients use consistent
mechanisms to make requests, thereby producing consistent hash values per request.
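For instance, the following two hypothetical request bodies (shown one per
line) ask for exactly the same thing, but they serialize differently and
therefore hash to different cache indexes, each producing its own cache entry:

```json
{"role_id": "abc", "secret_id": "xyz"}
{"secret_id": "xyz", "role_id": "abc"}
```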
## Renewal management
The tokens and leases are renewed by the proxy using the secret renewer that is
made available via the Vault server's [Go
API](https://godoc.org/github.com/hashicorp/vault/api#Renewer). Proxy performs
all operations in memory and does not persist anything to storage. This means
that when the proxy is shut down, all the renewal operations are immediately
terminated and there is no way for the proxy to resume renewals after the fact.
Note that shutting down the proxy does not revoke the secrets;
it only means that the proxy no longer takes responsibility for renewing
the valid, unrevoked secrets.
## API
### Cache clear
This endpoint clears the cache based on given criteria. To use this
API, some information on how the proxy caches values should be known
beforehand. Each response that is cached in the proxy will be indexed on some
factors depending on the type of request. Those factors can be the `token`
belonging to the cached response, the `token_accessor` of the token
belonging to the cached response, the `request_path` that resulted in the
cached response, the `lease` that is attached to the cached response, the
`namespace` to which the cached response belongs, and a few more. This API
exposes some factors through which associated cache entries are fetched and
evicted. For listeners without caching enabled, this API will still be available,
but will do nothing (there is no cache to clear) and will return a `200` response.
| Method | Path | Produces |
| :----- | :---------------------- | :--------------------- |
| `POST` | `/proxy/v1/cache-clear` | `200 application/json` |
#### Parameters
- `type` `(string: required)` - The type of cache entries to evict. Valid
values are `request_path`, `lease`, `token`, `token_accessor`, and `all`.
If the `type` is set to `all`, the _entire cache_ is cleared.
- `value` `(string: required)` - An exact value or the prefix of the value for
the `type` selected. This parameter is optional when the `type` is set
to `all`.
- `namespace` `(string: optional)` - This is only applicable when the `type` is set to
  `request_path`. The namespace from which to evict the cache entries for
  the given request path.
### Sample payload
```json
{
"type": "token",
"value": "hvs.rlNjegSKykWcplOkwsjd8bP9"
}
```
### Sample request
```shell-session
$ curl \
--request POST \
--data @payload.json \
http://127.0.0.1:1234/proxy/v1/cache-clear
```
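To evict entries by request path within a specific namespace instead, a
payload might look like this (the path and namespace values are illustrative):

```json
{
  "type": "request_path",
  "value": "/v1/secret/data/app",
  "namespace": "ns1/"
}
```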
## Configuration (`cache`)
The presence of the top level `cache` block in any way (including an empty `cache` block) will enable the cache.
Note that either `cache_static_secrets` must be `true` and/or `disable_caching_dynamic_secrets` must
be `false`, otherwise the cache does nothing. The top level `cache` block has the following configuration entries:
- `persist` `(object: optional)` - Configuration for the persistent cache.
- `cache_static_secrets` `(bool: false)` - Enables static secret caching when
`true`.
- `disable_caching_dynamic_secrets` `(bool: false)` - Disables dynamic secret caching when
`true`.
-> **Note:** When the `cache` block is defined, a [listener][proxy-listener] must also be defined
in the config, otherwise there is no way to utilize the cache.
[proxy-listener]: /vault/docs/agent-and-proxy/proxy#listener-stanza
### Configuration (Persist)
These are common configuration values that live within the `persist` block:
- `type` `(string: required)` - The type of the persistent cache to use,
e.g. `kubernetes`. _Note_: when using HCL this can be used as the key for
the block, e.g. `persist "kubernetes" {...}`. Currently, only `kubernetes`
is supported.
- `path` `(string: required)` - The path on disk where the persistent cache file
should be created or restored from.
- `keep_after_import` `(bool: optional)` - When set to true, a restored cache file
is not deleted. Defaults to `false`.
- `exit_on_err` `(bool: optional)` - When set to true, if any errors occur during
a persistent cache restore, Vault Proxy will exit with an error. Defaults to `true`.
- `service_account_token_file` `(string: optional)` - When `type` is set to `kubernetes`,
this configures the path on disk where the Kubernetes service account token can be found.
Defaults to `/var/run/secrets/kubernetes.io/serviceaccount/token`.
## Configuration (`listener`)
- `listener` `(array of objects: required)` - Configuration for the listeners.
There can be one or more `listener` blocks at the top level. Adding a listener enables
the [API Proxy](/vault/docs/agent-and-proxy/proxy/apiproxy) and enables the API proxy to use the cache, if configured.
These configuration values are common to both `tcp` and `unix` listener blocks. Blocks of type
`tcp` support the standard `tcp` [listener](/vault/docs/configuration/listener/tcp)
options. Additionally, the `role` string option is available as part of the top level
of the `listener` block, which can be configured to `metrics_only` to serve only metrics,
or the default role, `default`, which serves everything (including metrics).
- `type` `(string: required)` - The type of the listener to use. Valid values
are `tcp` and `unix`.
_Note_: when using HCL this can be used as the key for the block, e.g.
`listener "tcp" {...}`.
- `address` `(string: required)` - The address for the listener to listen on.
  This can either be an address and port when using `tcp` or a file path when using
  `unix`. For example, `127.0.0.1:8200` or `/path/to/socket`. Defaults to
  `127.0.0.1:8200`.
- `tls_disable` `(bool: false)` - Specifies if TLS will be disabled.
- `tls_key_file` `(string: optional)` - Specifies the path to the private key
for the certificate.
- `tls_cert_file` `(string: optional)` - Specifies the path to the certificate
for TLS.
### Example configuration
Here is an example of a cache configuration with the optional `persist` block,
alongside a regular listener, and a listener that only serves metrics.
```hcl
# Other Vault Proxy configuration blocks
# ...
cache {
persist = {
type = "kubernetes"
path = "/vault/proxy-cache/"
keep_after_import = true
exit_on_err = true
service_account_token_file = "/tmp/serviceaccount/token"
}
}
listener "tcp" {
address = "127.0.0.1:8100"
tls_disable = true
}
listener "tcp" {
address = "127.0.0.1:3000"
tls_disable = true
role = "metrics_only"
}
```
[static-secret-caching]: /vault/docs/agent-and-proxy/proxy/caching/static-secret-caching

---
layout: docs
page_title: Improve Vault traffic resiliency
description: >-
Use static secret caching with Vault Proxy to cache key/value data in Vault,
handle updates, and reduce direct requests to Vault from clients.
---
# Improve Vault traffic resiliency with Vault Proxy
@include 'alerts/enterprise-only.mdx'
Use static secret caching with Vault Proxy to cache KVv1 and KVv2 secrets to
minimize requests made to Vault and provide resilient connections for clients.
Vault Proxy utilizes the Enterprise-only [Vault event notification system](/vault/docs/concepts/events)
feature for cache freshness. As a result, static secret caching can only be used
with Vault Enterprise installations.
When using a Vault cluster with performance standbys, Proxy may receive secret update events
before the secret update has been fully replicated. To make sure that Proxy can get updated
secret values after receiving an event notification, Proxy must be configured to point to the
address of the active node in its [Vault stanza](/vault/docs/agent-and-proxy/proxy#vault-stanza),
or [allow_forwarding_via_header must be set to true](/vault/docs/configuration/replication#allow_forwarding_via_header)
on the cluster. When `allow_forwarding_via_header` is configured, Proxy will only forward
requests to update a secret in its cache after receiving an event indicating that secret got updated.
This approach is recommended when access to Vault is behind, for example, a load balancer.
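As a minimal sketch, pointing Proxy at the active node might look like this
(the address is illustrative):

```hcl
vault {
  # Address of the cluster's active node, so that reads triggered by
  # event notifications see fully replicated data.
  address = "https://active.vault.example.com:8200"
}
```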
## Step 1: Subscribe Vault Proxy to KV events
Vault Proxy uses Vault events and auto-auth to monitor secret status and make
appropriate cache updates.
1. Enable [auto-auth](/vault/docs/agent-and-proxy/autoauth).
1. Create an auto-auth token with permission to subscribe to KV event updates
with the [Vault event notification system](/vault/docs/concepts/events). For
example, to create a policy that grants access to static secret (KVv1 and KVv2)
events, you need permission to subscribe to the `events` endpoint, as well as
the `list` and `subscribe` permissions on KV secrets you want to get secrets
from:
```hcl
path "sys/events/subscribe/kv*" {
capabilities = ["read"]
}
path "*" {
capabilities = ["list", "subscribe"]
subscribe_event_types = ["kv*"]
}
```
Subscribing to KV events means that Proxy receives updates as soon as a secret
changes, which reduces staleness in the cache. Vault Proxy only checks for a
secret update if an event notification indicates that the related secret was
updated.
## Step 2: Ensure tokens have `capabilities-self` access
Tokens require `update` access to the
[`sys/capabilities-self`](/vault/api-docs/system/capabilities-self) endpoint to
request cached secrets. Vault tokens receive `update` permissions
[by default](/vault/docs/concepts/policies#default-policy). If you have modified
or removed the default policy, you must explicitly create a policy with the
appropriate permissions. For example:
```hcl
path "sys/capabilities-self" {
capabilities = ["update"]
}
```
## Step 3: Configure an appropriate refresh interval
By default, Vault Proxy refreshes tokens every five minutes. You can change the
default behavior and configure Proxy to verify and update cached token
capabilities with the `static_secret_token_capability_refresh_interval`
parameter in the `cache` configuration stanza. For example, to set a refresh
interval of one minute:
```hcl
cache {
cache_static_secrets = true
static_secret_token_capability_refresh_interval = "1m"
}
```
## Functionality
With static secret caching, Vault Proxy caches `GET` requests for KVv1 and KVv2
endpoints.
When a client sends a `GET` request for a new KV secret, Proxy forwards the
request to Vault but caches the response before forwarding it to the client. If
that client makes subsequent `GET` requests for the same secret, Vault Proxy
serves the cached response rather than forwarding the request to Vault.
<Tip title="'Offline' Secret Access and CLI KV Get">
Vault Proxy does not cache any non-KV API responses. While KV secrets can be retrieved even if
Vault is unavailable, other requests cannot be served. As a result, using the `vault kv`
CLI command, which sends a request to `/sys/internal/ui/mounts` before the KV `GET` request,
will require a real request to Vault and cannot be served entirely from the cache or
when Vault is unavailable (you can use `vault read` instead).
</Tip>
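For example, assuming Proxy listens on `127.0.0.1:8100` and a KVv2 mount named
`secret` (both illustrative), the first command below can be served entirely
from the cache, while the second requires a live request to Vault first:

```shell-session
# Can be served from the Proxy cache, even if Vault is unavailable:
$ VAULT_ADDR=http://127.0.0.1:8100 vault read secret/data/app

# Calls /sys/internal/ui/mounts first, so it needs a real request to Vault:
$ VAULT_ADDR=http://127.0.0.1:8100 vault kv get secret/app
```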
Similarly, when a token requests access to a KV secret, it must complete a
successful `GET` request. If the request is successful, Proxy caches the fact that
the token was successful in addition to the result. Subsequent requests by the
same token can then access this secret from the cache instead of Vault.
Vault Proxy uses the [event notification system](/vault/docs/concepts/events) to keep the
cache up to date. It monitors the KV event feed for events related to any secret
currently stored in the cache, including modification events like updates and
deletes. When Proxy detects a change in a cached secret, it will update or
evict the cache entry as appropriate.
Vault Proxy also checks and refreshes the access permissions of known tokens
according to the window set with `static_secret_token_capability_refresh_interval`.
By default, the refresh interval is five minutes.
Every interval, Proxy calls [`sys/capabilities-self`](/vault/api-docs/system/capabilities-self) on
behalf of every token in the cache to confirm the token still has permission to
access the cached secret. If the result from Vault indicates that permission (or
the token itself) was revoked, Proxy updates the cache entry so that the affected
token can no longer access the relevant paths from the cache. The refresh interval
is essentially the maximum period after which permission to read a KV secret is
fully revoked for the relevant token.
If the capabilities have been removed, or Proxy receives a `403` response, the
capability is removed from the token, and that token cannot be used to access the
cache. For other kinds of errors, such as Vault being unreachable or sealed,
the `static_secret_token_capability_refresh_behavior` config is consulted.
If set to `optimistic` (the default), the capability will not be removed unless Proxy
receives a `403` or a valid response without the capability. If set to `pessimistic`,
the capability will be removed for any error, such as would occur if Vault is sealed.
For token refresh to work, any token that will access the cache also needs
`update` permission for [`sys/capabilities-self`](/vault/api-docs/system/capabilities-self).
Having `update` permission for the token lets Proxy test capabilities for the
token against multiple paths with a single request instead of testing for a `403`
response for each path explicitly.
<Tip title="Refresh is per token, not per secret">
If Proxy's API proxy is configured to use auto-authentication for tokens, and **all**
requests that pass through Vault Proxy use the same token, Proxy only
makes a single request to Vault every refresh interval, no matter how many
secrets are currently cached.
</Tip>
When static secret caching is enabled, Proxy returns `HIT` or `MISS` in the `X-Cache`
response header for requests so clients can tell if the response was served from
the cache or forwarded from Vault. In the event of a hit, Proxy also sets the
`Age` header to indicate, in seconds, how old the cache entry is.
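For example, assuming a Proxy listener on `127.0.0.1:8100` and an illustrative
KVv2 path, you can inspect these headers directly:

```shell-session
# Look for "X-Cache: HIT" (or "MISS") and an "Age" header in the response:
$ curl -i --header "X-Vault-Token: $VAULT_TOKEN" \
    http://127.0.0.1:8100/v1/secret/data/app
```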
<Tip title="Old does not mean stale">
The fact that a cache entry is old does not necessarily mean that the
information is out of date. Vault Proxy continually monitors KV events for
updates. A large value for `Age` may simply mean that the secret has not been
rotated recently.
</Tip>
## Configuration
The top level `cache` block has the following configuration entries relating to static secret caching:
- `cache_static_secrets` `(bool: false)` - Enables static secret caching when
set to `true`. When `cache_static_secrets` and `auto_auth` are both enabled,
Vault Proxy serves KV secrets directly from the cache to clients with
sufficient permission.
- `static_secret_token_capability_refresh_interval` `(duration: "5m", optional)` -
Sets the interval as a [duration format string](/vault/docs/concepts/duration-format)
at which Vault Proxy rechecks the permissions of tokens used to access cached
secrets. The refresh interval is the maximum period after which permission to
read a cached KV secret is fully revoked. Ignored when `cache_static_secrets`
is `false`.
- `static_secret_token_capability_refresh_behavior` `(string: "optimistic", optional)` -
Sets the capability refresh behavior in the case of an error when attempting to
refresh capabilities. In the case of a `403`, capabilities will be removed for the token
with either option. In case of other errors, such as Vault being sealed or Vault being
unavailable, this setting controls the behavior. If set to `optimistic` (the default),
capabilities will be removed for only `403` errors. If set to `pessimistic`, capabilities
will be removed for any error. This essentially allows configuring a preference between
favoring availability (`optimistic`) or access fidelity (`pessimistic`) of cached
static secrets. Ignored when `cache_static_secrets` is `false`.
### Example configuration
The following example Vault Proxy configuration:
- Defines a TCP listener (`listener`) with TLS disabled.
- Forces clients using API proxy (`api_proxy`) to identify with an auto-auth token.
- Configures auto-authentication (`auto_auth`) for `approle`.
- Enables static secret caching with `cache_static_secrets`.
- Sets an explicit token capability refresh window of 1 hour with `static_secret_token_capability_refresh_interval`.
```hcl
# Other Vault Proxy configuration blocks
# ...
cache {
cache_static_secrets = true
static_secret_token_capability_refresh_interval = "1h"
}
api_proxy {
use_auto_auth_token = "force"
}
listener "tcp" {
address = "127.0.0.1:8100"
tls_disable = true
}
auto_auth {
method {
type = "approle"
config = {
role_id_file_path = "roleid"
secret_id_file_path = "secretid"
remove_secret_id_file_after_reading = false
}
  }
}
```
[event-system]: /vault/docs/concepts/events | vault | layout docs page title Improve Vault traffic resiliency description Use static secret caching with Vault Proxy to cache key value data in Vault handle updates and reduce direct requests to Vault from clients Improve Vault traffic resiliency with Vault Proxy include alerts enterprise only mdx Use static secret caching with Vault Proxy to cache KVv1 and KVv2 secrets to minimize requests made to Vault and provide resilient connections for clients Vault Proxy utilizes the Enterprise only Vault event notification system vault docs concepts events feature for cache freshness As a result static secret caching can only be used with Vault Enterprise installations When using a Vault cluster with performance standbys Proxy may receive secret update events before the secret update has been fully replicated To make sure that Proxy can get updated secret values after receiving an event notification Proxy must be configured to point to the address of the active node in its Vault stanza vault docs agent and proxy proxy vault stanza or allow forwarding via header must be set to true vault docs configuration replication allow forwarding via header on the cluster When allow forwarding via header is configured Proxy will only forward requests to update a secret in its cache after receiving an event indicating that secret got updated This approach would be recommended if access to Vault was behind for example a load balancer Step 1 Subscribe Vault Proxy to KV events Vault Proxy uses Vault events and auto auth to monitor secret status and make appropriate cache updates 1 Enable auto auth vault docs agent and proxy autoauth 1 Create an auto auth token with permission to subscribe to KV event updates with the Vault event notification system vault docs concepts events For example to create a policy that grants access to static secret KVv1 and KVv2 events you need permission to subscribe to the events endpoint as well as the list and subscribe permissions on KV secrets you want to get secrets from hcl path sys events subscribe kv capabilities read path capabilities list subscribe subscribe event types kv Subscribing to KV events means that Proxy receives updates as soon as a secret changes which reduces staleness in the cache Vault Proxy only checks for a secret update if an event notification indicates that the related secret was updated Step 2 Ensure tokens have capabilities self access Tokens require update access to the sys capabilies self vault api docs system capabilities self endpoint to request cached secrets Vault tokens receive update permissions by default vault docs concepts policies default policy If you have modified or removed the default policy you must explicitly create a policy with the appropriate permissions For example hcl path sys capabilities self capabilities update Step 3 Configure an appropriate refresh interval By default Vault Proxy refreshes tokens every five minutes You can change the default behavior and configure Proxy to verify and update cached token capabilities with the static secret token capability refresh interval parameter in the cache configuration stanza For example to set a refresh interval of one minute hcl cache cache static secrets true static secret token capability refresh interval 1m Functionality With static secret caching Vault Proxy caches GET requests for KVv1 and KVv2 endpoints When a client sends a GET request for a new KV secret Proxy forwards the request to Vault but caches the response before forwarding 
it to the client If that client makes subsequent GET requests for the same secret Vault Proxy serves the cached response rather than forwarding the request to Vault Tip title Offline Secret Access and CLI KV Get Vault Proxy does not cache any non KV API responses While KV secrets can be retrieved even if Vault is unavailable other requests cannot be served As a result using the vault kv CLI command which sends a request to sys internal ui mounts before the KV GET request will require a real request to Vault and cannot be served entirely from the cache or when Vault is unavailable you can use vault read instead Tip Similarly when a token requests access to a KV secret it must complete a success GET request If the request is successful Proxy caches the fact that the token was successful in addition to the result Subsequent requests by the same token can then access this secret from the cache instead of Vault Vault Proxy uses the event notification system vault docs concepts events to keep the cache up to date It monitors the KV event feed for events related to any secret currently stored in the cache including modification events like updates and deletes When Proxy detects a change in a cached secret it will update or evict the cache entry as appropriate Vault Proxy also checks and refreshes the access permissions of known tokens according to the window set with static secret token capability refresh interval By default the refresh interval is five minutes Every interval Proxy calls sys capabilies self vault api docs system capabilities self on behalf of every token in the cache to confirm the token still has permission to access the cached secret If the result from Vault indicates that permission or the token itself was revoked Proxy updates the cache entry so that the affected token can no longer access the relevant paths from the cache The refresh interval is essentially the maximum period after which permission to read a KV secret is fully revoked for the relevant token If the capabilities have been removed or Proxy receives a 403 response the capability is removed from the token and that token cannot be used to access the cache For other kinds of errors such as Vault being unreachable or sealed the static secret token capability refresh behavior config is consulted If set to optimistic the default the capability will not be removed unless we receive a 403 or valid response without the capability If set to pessimistic the capability will be removed for any error such as would occur if Vault is sealed For token refresh to work any token that will access the cache also needs update permission for sys capabilies self vault api docs system capabilities self Having update permission for the token lets Proxy test capabilities for the token against multiple paths with a single request instead of testing for a 403 response for each path explicitly Tip title Refresh is per token not per secret If Proxy s API proxy is configured to use auto authentication for tokens and all requests that pass through Vault Proxy use the same token Proxy only makes a single request to Vault every refresh interval no matter how many secrets are currently cached Tip When static secret caching is enabled Proxy returns HIT or MISS in the X Cache response header for requests so client can tell if the response was served from the cache or forwarded from Vault In the event of a hit Proxy also sets the Age header to indicate in seconds how old the cache entry is Tip title Old does not mean stale The fact that a cache entry 
<Tip title="Old does not mean stale">

The fact that a cache entry is old does not necessarily mean that the information is out of date. Vault Proxy continually monitors KV events for updates. A large value for `Age` may simply mean that the secret has not been rotated recently.

</Tip>

## Configuration

The top level `cache` block has the following configuration entries relating to static secret caching:

- `cache_static_secrets` `(bool: false)` - Enables static secret caching when set to `true`. When `cache_static_secrets` and auto-auth are both enabled, Vault Proxy serves KV secrets directly from the cache to clients with sufficient permission.

- `static_secret_token_capability_refresh_interval` `(duration: "5m", optional)` - Sets the interval, as a [duration format string](/vault/docs/concepts/duration-format), at which Vault Proxy rechecks the permissions of tokens used to access cached secrets. The refresh interval is the maximum period after which permission to read a cached KV secret is fully revoked. Ignored when `cache_static_secrets` is `false`.

- `static_secret_token_capability_refresh_behavior` `(string: "optimistic", optional)` - Sets the capability refresh behavior in the case of an error when attempting to refresh capabilities. In the case of a 403, capabilities will be removed for the token with either option. In case of other errors, such as Vault being sealed or unavailable, this setting controls the behavior: if set to `optimistic` (the default), capabilities will be removed only for 403 errors; if set to `pessimistic`, capabilities will be removed for any error. This essentially allows configuring a preference between favoring availability (`optimistic`) or access fidelity (`pessimistic`) of cached static secrets. Ignored when `cache_static_secrets` is `false`.

### Example configuration

The following example Vault Proxy configuration:

- Defines a TCP `listener` with TLS disabled.
- Forces clients using API proxy (`api_proxy`) to identify with an auto-auth token.
- Configures auto-authentication (`auto_auth`) for `approle`.
- Enables static secret caching with `cache_static_secrets`.
- Sets an explicit token capability refresh window of 1 hour with `static_secret_token_capability_refresh_interval`.

```hcl
# Other Vault Proxy configuration blocks
# ...

cache {
  cache_static_secrets = true
  static_secret_token_capability_refresh_interval = "1h"
}

api_proxy {
  use_auto_auth_token = "force"
}

listener "tcp" {
  address = "127.0.0.1:8100"
  tls_disable = true
}

auto_auth {
  method {
    type = "approle"
    config = {
      role_id_file_path = "roleid"
      secret_id_file_path = "secretid"
      remove_secret_id_file_after_reading = false
    }
  }
}
```

[event system]: /vault/docs/concepts/events
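For token refresh to work as described above, client tokens need `update` on `sys/capabilities-self` alongside read access to the cached secrets. A minimal sketch of such a policy (the `secret/data/my-app/*` path is illustrative):

```hcl
# Read access to the KV v2 secrets that Proxy caches for this client.
path "secret/data/my-app/*" {
  capabilities = ["read"]
}

# Lets Proxy refresh this token's capabilities with a single request.
path "sys/capabilities-self" {
  capabilities = ["update"]
}
```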
---
layout: docs
page_title: Run Vault Agent as a Windows service
description: >-
Register Vault Agent with sc.exe and run Agent as a Windows service.
---
# Run Vault Agent as a Windows service
Vault Agent can be run as a Windows service. In order to do this, you need to register Vault Agent with the Windows
Service Control Manager. After Vault Agent is registered, it can be started like any other Windows
service.
While this guide focuses on an example for Vault Agent, this example can be easily adapted to work for
[Vault Proxy](/vault/docs/agent-and-proxy/proxy) by changing the config and subcommand
given to `vault.exe` as appropriate.
~> Note: The commands on this page should be run in a PowerShell session with Administrator capabilities.
~> Note: When specifying Windows file paths in config files, they should be formatted like this: `C:/foo/bar/file.txt`
instead of using backslashes.
## Register Vault Agent as a Windows service
There are multiple ways to register Vault Agent as a Windows service. One way is to use
[`sc.exe`](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/sc-create). `sc.exe` works
best if the path to your Vault binary and its associated agent config file do not contain spaces. `sc.exe` can be
pretty tricky to get working correctly if your path contains spaces, as paths containing spaces must be quoted,
and escaping quotes correctly in a way that makes `sc.exe` happy is non-trivial. If your path contains spaces, or you prefer not to use `sc.exe`, another
alternative is to use the
[`New-Service`](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.management/new-service?view=powershell-5.1)
cmdlet. `New-Service` is less picky about the method used to escape quotes, and can sometimes be easier. Examples of
both will be shown below.
### Using sc.exe
~> **Important Note:** Ensure the executable path of the service is quoted, especially when it contains spaces, to avoid
potential privilege escalation risks.
If you use `sc.exe`, make sure you specify `sc.exe` explicitly, and not just `sc`. The command below shows the creation
of Vault Agent as a service, using "Vault Agent" as the display name, and starting automatically when Windows starts.
The `binPath` argument should include the fully qualified path to the Vault executable, as well as any arguments required.
```shell-session
PS C:\Windows\system32> sc.exe create VaultAgent binPath= "C:\vault\vault.exe agent -config=C:\vault\agent-config.hcl" displayName= "Vault Agent" start= auto
[SC] CreateService SUCCESS
```
Note that the spacing after the `=` in all of the arguments is intentional and required.
If you receive a success message, your service is registered with the service manager.
If you get an error, please verify the path to the binary and check the arguments, by running the contents of
`binPath=` directly in a PowerShell session and observing the results.
### Using New-Service
The syntax is slightly different for `New-Service`, but the gist is the same. The invocation below is equivalent to the
`sc.exe` one above.
```shell-session
PS C:\Windows\system32> New-Service -Name "VaultAgent" -BinaryPathName "C:\vault\vault.exe agent -config=C:\vault\agent-config.hcl" -DisplayName "Vault Agent" -StartupType "Automatic"
Status Name DisplayName
------ ---- -----------
Stopped VaultAgent Vault Agent
```
As mentioned previously, `New-Service` is easier to use if the path to your Vault executable and/or agent config contains spaces.
Below is an example of how to configure Vault Agent as a service using a path with spaces.
```shell-session
PS C:\Windows\system32> New-Service -Name "VaultAgent" -BinaryPathName '"C:\my dir\vault.exe" agent -config="C:\my dir\agent-config.hcl"' -DisplayName "Vault Agent" -StartupType "Automatic"
Status Name DisplayName
------ ---- -----------
Stopped VaultAgent Vault Agent
```
Note that only the paths themselves are double quoted, and the entire `BinaryPathName` is wrapped in single quotes, in order
to escape the double quotes used for the paths.
If anything goes wrong during this process, and you need to manually edit the path later, use the Registry Editor to find
the following key: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VaultAgent`. You can edit the `ImagePath` value
at that key to the correct path.
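You can also inspect and correct the value directly from PowerShell; for example, using the same quoting rules as above:

```shell-session
PS C:\Windows\system32> Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VaultAgent" -Name ImagePath
PS C:\Windows\system32> Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VaultAgent" -Name ImagePath -Value '"C:\my dir\vault.exe" agent -config="C:\my dir\agent-config.hcl"'
```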
## Start the Vault Agent service
There are multiple ways to start the service.
- Using the `sc.exe` command.
- Using the `Start-Service` cmdlet.
- Go to the Windows Service Manager, and look for **VaultAgent** in the service name column. Click the
`Start` button to start the service.
### Example starting Vault Agent using `sc.exe`
```shell-session
PS C:\Windows\system32> sc.exe start VaultAgent
SERVICE_NAME: VaultAgent
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
PID : 6548
FLAGS :
```
### Example starting Vault Agent using `Start-Service`
```shell-session
PS C:\Windows\system32> Start-Service -Name "VaultAgent"
```
Note that in the case where the service was started successfully, `Start-Service` does not return any output.
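Regardless of how the service was started, you can confirm it is running with `Get-Service`:

```shell-session
PS C:\Windows\system32> Get-Service -Name "VaultAgent"

Status Name DisplayName
------ ---- -----------
Running VaultAgent Vault Agent
```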
VaultAgent BinaryPathName C my dir vault exe agent config C my dir agent config hcl DisplayName Vault Agent StartupType Automatic Status Name DisplayName Stopped VaultAgent Vault Agent Note that only the paths themselves are double quoted and the entire BinaryPathName is wrapped in single quotes in order to escape the double quotes used for the paths If anything goes wrong during this process and you need to manually edit the path later use the Registry Editor to find the following key HKEY LOCAL MACHINE SYSTEM CurrentControlSet Services VaultAgent You can edit the ImagePath value at that key to the correct path Start the Vault Agent service There are multiple ways to start the service Using the sc exe command Using the Start Service cmdlet Go to the Windows Service Manager and look for VaultAgent in the service name column Click the Start button to start the service Example starting Vault Agent using sc exe shell session PS C Windows system32 sc exe start VaultAgent SERVICE NAME VaultAgent TYPE 10 WIN32 OWN PROCESS STATE 4 RUNNING STOPPABLE NOT PAUSABLE ACCEPTS SHUTDOWN WIN32 EXIT CODE 0 0x0 SERVICE EXIT CODE 0 0x0 CHECKPOINT 0x0 WAIT HINT 0x0 PID 6548 FLAGS Example starting Vault Agent using Start Service shell session PS C Windows system32 Start Service Name VaultAgent Note that in the case where the service was started successfully New Service does not return any output |
vault Vault Agent is a client side daemon that securely extracts secrets from Vault page title What is Vault Agent for clients without the complexity of API calls layout docs What is Vault Agent | ---
layout: docs
page_title: What is Vault Agent?
description: >-
Vault Agent is a client-side daemon that securely extracts secrets from Vault
for clients without the complexity of API calls.
---
# What is Vault Agent?
Vault Agent aims to remove the initial hurdle of adopting Vault by providing a
more scalable and simpler way for applications to integrate with Vault: it can
render [templates][template] containing the secrets required by your
application, without requiring changes to the application itself.

Vault Agent is a client daemon that provides the following features:
- [Auto-Auth][autoauth] - Automatically authenticate to Vault and manage the
token renewal process for locally-retrieved dynamic secrets.
- [API Proxy][apiproxy] - Allows Vault Agent to act as a proxy for Vault's API,
optionally using (or forcing the use of) the Auto-Auth token.
- [Caching][caching] - Allows client-side caching of responses containing newly
created tokens and responses containing leased secrets generated off of these
newly created tokens. The agent also manages the renewals of the cached tokens and leases.
- [Windows Service][winsvc] - Allows running the Vault Agent as a Windows
service.
- [Templating][template] - Allows rendering of user-supplied templates by Vault
Agent, using the token generated by the Auto-Auth step.
- [Process Supervisor Mode][process-supervisor] - Runs a child process with Vault
secrets injected as environment variables.
## Auto-Auth
Vault Agent allows easy authentication to Vault in a wide variety of
environments. Please see the [Auto-Auth docs][autoauth]
for information.
Auto-Auth functionality takes place within an `auto_auth` configuration stanza.
## API proxy
Vault Agent can act as an API proxy for Vault, allowing you to talk to Vault's
API via a listener defined for Agent. It can be configured to optionally allow or force the automatic use of
the Auto-Auth token for these requests. Please see the [API Proxy docs][apiproxy]
for more information.
API Proxy functionality takes place within a defined `listener`, and its behaviour can be configured with an
[`api_proxy` stanza](/vault/docs/agent-and-proxy/agent/apiproxy#configuration-api_proxy).
## Caching
Vault Agent allows client-side caching of responses containing newly created tokens
and responses containing leased secrets generated off of these newly created tokens.
Please see the [Caching docs][caching] for information.
## API
### Quit
This endpoint triggers shutdown of the agent. By default, it is disabled, and can
be enabled per listener using the [`agent_api`][agent-api] stanza. It is recommended
to only enable this on trusted interfaces, as it does not require any authorization to use.
| Method | Path |
| :----- | :--------------- |
| `POST` | `/agent/v1/quit` |
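For example, with `enable_quit` set on a listener at `127.0.0.1:8100` (a hypothetical address), the agent can be shut down with an unauthenticated POST:

```shell-session
$ curl --request POST http://127.0.0.1:8100/agent/v1/quit
```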
### Cache
See the [caching](/vault/docs/agent-and-proxy/agent/caching#api) page for details on the cache API.
## Configuration
### Command options
- `-log-level` ((#\_log_level)) `(string: "info")` - Log verbosity level. Supported values (in
order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. This can
also be specified via the `VAULT_LOG_LEVEL` environment variable.
- `-log-format` ((#\_log_format)) `(string: "standard")` - Log format. Supported values
are `standard` and `json`. This can also be specified via the
`VAULT_LOG_FORMAT` environment variable.
- `-log-file` ((#\_log_file)) - the absolute path where Vault Agent should save
log messages. Paths that end with a path separator use the default file name,
`agent.log`. Paths that do not end with a file extension use the default
`.log` extension. If the log file rotates, Vault Agent appends the current
timestamp to the file name at the time of rotation. For example:
`log-file` | Full log file | Rotated log file
---------- | ------------- | ----------------
`/var/log` | `/var/log/agent.log` | `/var/log/agent-{timestamp}.log`
`/var/log/my-diary` | `/var/log/my-diary.log` | `/var/log/my-diary-{timestamp}.log`
`/var/log/my-diary.txt` | `/var/log/my-diary.txt` | `/var/log/my-diary-{timestamp}.txt`
- `-log-rotate-bytes` ((#\_log_rotate_bytes)) - to specify the number of
bytes that should be written to a log before it needs to be rotated. Unless specified,
there is no limit to the number of bytes that can be written to a log file.
- `-log-rotate-duration` ((#\_log_rotate_duration)) - to specify the maximum
duration a log should be written to before it needs to be rotated. Must be a duration
value such as 30s. Defaults to 24h.
- `-log-rotate-max-files` ((#\_log_rotate_max_files)) - to specify the maximum
number of older log file archives to keep. Defaults to `0` (no files are ever deleted).
Set to `-1` to discard old log files when a new one is created.
### Configuration file options
These are the currently-available general configuration options:
- `vault` <code>([vault][vault]: <optional\>)</code> - Specifies the remote Vault server the Agent connects to.
- `auto_auth` <code>([auto_auth][autoauth]: <optional\>)</code> - Specifies the method and other options used for Auto-Auth functionality.
- `api_proxy` <code>([api_proxy][apiproxy]: <optional\>)</code> - Specifies options used for API Proxy functionality.
- `cache` <code>([cache][caching]: <optional\>)</code> - Specifies options used for Caching functionality.
- `listener` <code>([listener][listener]: <optional\>)</code> - Specifies the addresses and ports on which the Agent will respond to requests.
~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Agent will attempt to reload listener TLS configuration.
This method can be used to refresh certificates used by Vault Agent without having to restart its process.
- `pid_file` `(string: "")` - Path to the file in which the agent's Process ID
  (PID) should be stored.
- `exit_after_auth` `(bool: false)` - If set to `true`, the agent will exit
with code `0` after a single successful auth, where success means that a
token was retrieved and all sinks successfully wrote it. If you have
`template` stanzas defined in your agent configuration, the agent
waits for the configured templates to render successfully before
exiting. If you use environment templates (`env_template` ) and set
`exit_after_auth` to true, Vault agent will not run the child processes
defined in your `exec` stanza.
- `disable_idle_connections` `(string array: [])` - A list of strings that disables idle connections for various features in Vault Agent.
Valid values include: `auto-auth`, `caching`, `proxying`, and `templating`. `proxying` configures this for the API proxy, which is
identical in function to `caching` for historical reasons. Can also be configured by setting the `VAULT_AGENT_DISABLE_IDLE_CONNECTIONS`
environment variable as a comma separated string. This environment variable will override any values found in a configuration file.
- `disable_keep_alives` `(string array: [])` - A list of strings that disables keep alives for various features in Vault Agent.
Valid values include: `auto-auth`, `caching`, `proxying`, and `templating`. `proxying` configures this for the API proxy, which is
identical in function to `caching` for historical reasons. Can also be configured by setting the `VAULT_AGENT_DISABLE_KEEP_ALIVES`
environment variable as a comma separated string. This environment variable will override any values found in a configuration file.
- `template` <code>([template][template]: <optional\>)</code> - Specifies options used for templating Vault secrets to files.
- `template_config` <code>([template_config][template-config]: <optional\>)</code> - Specifies templating engine behavior.
- `exec` <code>([exec][process-supervisor]: <optional\>)</code> - Specifies options for vault agent to run a child process
that injects secrets (via `env_template` stanzas) as environment variables.
- `env_template` <code>([env_template][template]: <optional\>)</code> - Multiple blocks accepted. Each block contains
the options used for templating Vault secrets as environment variables via the
[process supervisor mode](/vault/docs/agent-and-proxy/agent/process-supervisor).
- `telemetry` <code>([telemetry][telemetry]: <optional\>)</code> – Specifies the telemetry
reporting system. See the [telemetry Stanza](/vault/docs/agent-and-proxy/agent#telemetry-stanza) section below
for a list of metrics specific to Agent.
- `log_level` - Equivalent to the [`-log-level` command-line flag](#_log_level).
~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Agent will update the log level to the value
specified by configuration file (including overriding values set using CLI or environment variable parameters).
- `log_format` - Equivalent to the [`-log-format` command-line flag](#_log_format).
- `log_file` - Equivalent to the [`-log-file` command-line flag](#_log_file).
- `log_rotate_duration` - Equivalent to the [`-log-rotate-duration` command-line flag](#_log_rotate_duration).
- `log_rotate_bytes` - Equivalent to the [`-log-rotate-bytes` command-line flag](#_log_rotate_bytes).
- `log_rotate_max_files` - Equivalent to the [`-log-rotate-max-files` command-line flag](#_log_rotate_max_files).
### vault stanza
There can at most be one top level `vault` block, and it has the following
configuration entries:
- `address` `(string: <optional>)` - The address of the Vault server to
connect to. This should be a Fully Qualified Domain Name (FQDN) or IP
such as `https://vault-fqdn:8200` or `https://172.16.9.8:8200`.
This value can be overridden by setting the `VAULT_ADDR` environment variable.
- `ca_cert` `(string: <optional>)` - Path on the local disk to a single PEM-encoded
CA certificate to verify the Vault server's SSL certificate. This value can
be overridden by setting the `VAULT_CACERT` environment variable.
- `ca_path` `(string: <optional>)` - Path on the local disk to a directory of
PEM-encoded CA certificates to verify the Vault server's SSL certificate.
This value can be overridden by setting the `VAULT_CAPATH` environment
variable.
- `client_cert` `(string: <optional>)` - Path on the local disk to a single
  PEM-encoded client certificate to use for TLS authentication to the Vault server.
This value can be overridden by setting the `VAULT_CLIENT_CERT` environment
variable.
- `client_key` `(string: <optional>)` - Path on the local disk to a single
PEM-encoded private key matching the client certificate from `client_cert`.
This value can be overridden by setting the `VAULT_CLIENT_KEY` environment
variable.
- `tls_skip_verify` `(string: <optional>)` - Disable verification of TLS
certificates. Using this option is highly discouraged as it decreases the
security of data transmissions to and from the Vault server. This value can
be overridden by setting the `VAULT_SKIP_VERIFY` environment variable.
- `tls_server_name` `(string: <optional>)` - Name to use as the SNI host when
connecting via TLS. This value can be overridden by setting the
`VAULT_TLS_SERVER_NAME` environment variable.
- `namespace` `(string: <optional>)` - Namespace to use for all of Vault Agent's
requests to Vault. This can also be specified by command line or environment variable.
The order of precedence is: this setting lowest, followed by the environment variable
`VAULT_NAMESPACE`, and then the highest precedence command-line option `-namespace`.
If none of these are specified, defaults to the root namespace.
#### retry stanza
The `vault` stanza may contain a `retry` stanza that controls how failing Vault
requests are handled, whether these requests are issued in order to render
templates, or are proxied requests coming from the api proxy subsystem.
Auto-auth, however, has its own notion of retrying and is not affected by this
section.
For requests from the templating engine, Vault Agent will reset its retry counter and
perform retries again once all retries are exhausted. This means that templating
will retry on failures indefinitely unless `exit_on_retry_failure` from the
[`template_config`][template-config] stanza is set to `true`.
Here are the options for the `retry` stanza:
- `num_retries` `(int: 12)` - Specify how many times a failing request will
be retried. A value of `0` translates to the default, i.e. 12 retries.
A value of `-1` disables retries. The environment variable `VAULT_MAX_RETRIES`
overrides this setting.
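For example, a minimal sketch of a `vault` stanza that disables retries for proxied and templating requests entirely:

```hcl
vault {
  address = "https://vault-fqdn:8200"

  retry {
    # -1 disables retries; 0 falls back to the default of 12.
    num_retries = -1
  }
}
```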
There are a few subtleties to be aware of here. First, requests originating
from the proxy cache will only be retried if they resulted in specific HTTP
result codes: any 50x code except 501 ("not implemented"), as well as 412
("precondition failed"); 412 is used in Vault Enterprise 1.7+ to indicate a
stale read due to eventual consistency. Requests coming from the template
subsystem are retried regardless of the failure.
Second, templating retries may be performed by both the templating engine _and_
the cache proxy if Vault Agent [persistent
cache][persistent-cache] is enabled. This is due to the
fact that templating requests go through the cache proxy when persistence is
enabled.
Third, the backoff algorithm used to set the time between retries differs for
the template and cache subsystems. This is a technical limitation we hope
to address in the future.
### listener stanza
Vault Agent supports one or more [listener][listener_main] stanzas. Listeners
can be configured with or without [caching][caching], but will use the cache if it
has been configured, and will enable the [API proxy][apiproxy]. In addition to the standard
listener configuration, an Agent's listener configuration also supports the following:
- `require_request_header` `(bool: false)` - Require that all incoming HTTP
requests on this listener must have an `X-Vault-Request: true` header entry.
Using this option offers an additional layer of protection from Server Side
Request Forgery attacks. Requests on the listener that do not have the proper
  `X-Vault-Request` header will fail, with an HTTP response status code of `412: Precondition Failed`.
- `role` `(string: default)` - `role` determines which APIs the listener serves.
It can be configured to `metrics_only` to serve only metrics, or the default role, `default`,
which serves everything (including metrics). The `require_request_header` does not apply
to `metrics_only` listeners.
- `agent_api` <code>([agent_api][agent-api]: <optional\>)</code> - Manages optional Agent API endpoints.
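As a sketch, the following listeners combine these options: a general-purpose listener protected by the request header, and a second listener (port number illustrative) that serves only metrics:

```hcl
listener "tcp" {
  address     = "127.0.0.1:8100"
  tls_disable = true

  # Reject requests that lack the X-Vault-Request: true header.
  require_request_header = true
}

listener "tcp" {
  address     = "127.0.0.1:8101"
  tls_disable = true

  # Serve only metrics; require_request_header does not apply here.
  role = "metrics_only"
}
```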
#### agent_api stanza
- `enable_quit` `(bool: false)` - If set to `true`, the agent will enable the [quit](/vault/docs/agent-and-proxy/agent#quit) API.
### telemetry stanza
Vault Agent supports the [telemetry][telemetry] stanza and collects various
runtime metrics about its performance, auto-auth, and cache status:
| Metric | Description | Type |
| -------------------------------- | ---------------------------------------------------- | ------- |
| `vault.agent.authenticated` | Current authentication status (1 - has valid token, 0 - no valid token) | gauge |
| `vault.agent.auth.failure` | Number of authentication failures | counter |
| `vault.agent.auth.success` | Number of authentication successes | counter |
| `vault.agent.proxy.success` | Number of requests successfully proxied | counter |
| `vault.agent.proxy.client_error` | Number of requests for which Vault returned an error | counter |
| `vault.agent.proxy.error` | Number of requests the agent failed to proxy | counter |
| `vault.agent.cache.hit` | Number of cache hits | counter |
| `vault.agent.cache.miss` | Number of cache misses | counter |
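To expose these metrics, configure the standard `telemetry` stanza. A minimal sketch using two common options (values illustrative):

```hcl
telemetry {
  # Keep Prometheus metrics available for scraping for 30 seconds.
  prometheus_retention_time = "30s"

  # Emit metric keys without the host name prefix.
  disable_hostname = true
}
```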
### IMPORTANT: `VAULT_ADDR` usage
If you export the `VAULT_ADDR` environment variable on the Vault Agent instance, that value takes precedence over the value in the configuration file. Vault Agent uses it to connect to Vault, which can create an infinite loop: the agent ends up trying to connect to itself instead of the server.
When the connection fails, the Vault Agent increments the port and tries again. The agent repeats these attempts, which leads to port exhaustion.
This problem is a result of the precedence order of the 3 different ways to configure the Vault address. They are, in increasing order of priority:
1. Configuration files
1. Environment variables
1. CLI flags
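One way to avoid the loop is to ensure `VAULT_ADDR` is unset in the environment that starts the agent, so the address from the `vault` stanza in the configuration file is used instead:

```shell-session
$ unset VAULT_ADDR
$ vault agent -config=/etc/vault/agent-config.hcl
```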
## Start Vault Agent
To run Vault Agent:
1. [Download](/vault/downloads) the Vault binary where the client application runs
(virtual machine, Kubernetes pod, etc.)
1. Create a Vault Agent configuration file. (See the [Example
Configuration](#example-configuration) section for an example configuration.)
1. Start a Vault Agent with the configuration file.
**Example:**
```shell-session
$ vault agent -config=/etc/vault/agent-config.hcl
```
To get help, run:
```shell-session
$ vault agent -h
```
As with Vault, the `-config` flag can be used in three different ways:
- Use the flag once to name the path to a single specific configuration file.
- Use the flag multiple times to name multiple configuration files, which will be composed at runtime.
- Use the flag to name a directory of configuration files, the contents of which will be composed at runtime.
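For instance, composing two configuration files (hypothetical names) at runtime:

```shell-session
$ vault agent -config=/etc/vault/base.hcl -config=/etc/vault/templates.hcl
```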
## Example configuration
An example configuration, with very contrived values, follows:
```hcl
pid_file = "./pidfile"
vault {
address = "https://vault-fqdn:8200"
retry {
num_retries = 5
}
}
auto_auth {
method "aws" {
mount_path = "auth/aws-subaccount"
config = {
type = "iam"
role = "foobar"
}
}
sink "file" {
config = {
path = "/tmp/file-foo"
}
}
sink "file" {
wrap_ttl = "5m"
aad_env_var = "TEST_AAD_ENV"
dh_type = "curve25519"
dh_path = "/tmp/file-foo-dhpath2"
config = {
path = "/tmp/file-bar"
}
}
}
cache {
// An empty cache stanza still enables caching
}
api_proxy {
use_auto_auth_token = true
}
listener "unix" {
address = "/path/to/socket"
tls_disable = true
agent_api {
enable_quit = true
}
}
listener "tcp" {
address = "127.0.0.1:8100"
tls_disable = true
}
template {
source = "/etc/vault/server.key.ctmpl"
destination = "/etc/vault/server.key"
}
template {
source = "/etc/vault/server.crt.ctmpl"
destination = "/etc/vault/server.crt"
}
```
[vault]: /vault/docs/agent-and-proxy/agent#vault-stanza
[autoauth]: /vault/docs/agent-and-proxy/autoauth
[caching]: /vault/docs/agent-and-proxy/agent/caching
[apiproxy]: /vault/docs/agent-and-proxy/agent/apiproxy
[persistent-cache]: /vault/docs/agent-and-proxy/agent/caching/persistent-caches
[template]: /vault/docs/agent-and-proxy/agent/template
[process-supervisor]: /vault/docs/agent-and-proxy/agent/process-supervisor
[template-config]: /vault/docs/agent-and-proxy/agent/template#template-configurations
[agent-api]: /vault/docs/agent-and-proxy/agent/#agent_api-stanza
[listener]: /vault/docs/agent-and-proxy/agent#listener-stanza
[listener_main]: /vault/docs/configuration/listener/tcp
[winsvc]: /vault/docs/agent-and-proxy/agent/winsvc
[telemetry]: /vault/docs/configuration/telemetry | vault | layout docs page title What is Vault Agent description Vault Agent is a client side daemon that securely extracts secrets from Vault for clients without the complexity of API calls What is Vault Agent Vault Agent aims to remove the initial hurdle to adopt Vault by providing a more scalable and simpler way for applications to integrate with Vault by providing the ability to render templates template containing the secrets required by your application without requiring changes to your application Vault Agent workflow img vault agent workflow png Vault Agent is a client daemon that provides the following features Auto Auth autoauth Automatically authenticate to Vault and manage the token renewal process for locally retrieved dynamic secrets API Proxy apiproxy Allows Vault Agent to act as a proxy for Vault s API optionally using or forcing the use of the Auto Auth token Caching caching Allows client side caching of responses containing newly created tokens and responses containing leased secrets generated off of these newly created tokens The agent also manages the renewals of the cached tokens and leases Windows Service winsvc Allows running the Vault Agent as a Windows service Templating template Allows rendering of user supplied templates by Vault Agent using the token generated by the Auto Auth step Process Supervisor Mode process supervisor Runs a child process with Vault secrets injected as environment variables Auto Auth Vault Agent allows easy authentication to Vault in a wide variety of environments Please see the Auto Auth docs autoauth for information Auto Auth functionality takes place within an auto auth configuration stanza API proxy Vault Agent can act as an API proxy for Vault allowing you to talk to Vault s API via a listener defined for Agent It can be configured to optionally allow or force the automatic use of the Auto Auth token for these requests Please see the API Proxy docs apiproxy for more information API Proxy functionality takes place within a defined listener and its behaviour can be configured with an api proxy stanza vault docs agent and proxy agent apiproxy configuration api proxy Caching Vault Agent allows client side caching of responses containing newly created tokens and responses containing leased secrets generated off of these newly created tokens Please see the Caching docs caching for information API Quit This endpoint triggers shutdown of the agent By default it is disabled and can be enabled per listener using the agent api agent api stanza It is recommended to only enable this on trusted interfaces as it does not require any authorization to use Method Path POST agent v1 quit Cache See the caching vault docs agent and proxy agent caching api page for details on the cache API Configuration Command options log level log level string info Log verbosity level Supported values in order of descending detail are trace debug info warn and error This can also be specified via the VAULT LOG LEVEL environment variable log format log format string standard Log format Supported values are standard and json This can also be specified via the VAULT LOG FORMAT environment variable log file log file the absolute path where Vault Agent should save log messages Paths that end with a path separator use the default file name agent log Paths that do not end with a file extension use the default log extension If the log file rotates Vault Agent appends the current timestamp to the file name at the time of rotation 
For example log file Full log file Rotated log file var log var log agent log var log agent timestamp log var log my diary var log my diary log var log my diary timestamp log var log my diary txt var log my diary txt var log my diary timestamp txt log rotate bytes log rotate bytes to specify the number of bytes that should be written to a log before it needs to be rotated Unless specified there is no limit to the number of bytes that can be written to a log file log rotate duration log rotate duration to specify the maximum duration a log should be written to before it needs to be rotated Must be a duration value such as 30s Defaults to 24h log rotate max files log rotate max files to specify the maximum number of older log file archives to keep Defaults to 0 no files are ever deleted Set to 1 to discard old log files when a new one is created Configuration file options These are the currently available general configuration options vault code vault vault optional code Specifies the remote Vault server the Agent connects to auto auth code auto auth autoauth optional code Specifies the method and other options used for Auto Auth functionality api proxy code api proxy apiproxy optional code Specifies options used for API Proxy functionality cache code cache caching optional code Specifies options used for Caching functionality listener code listener listener optional code Specifies the addresses and ports on which the Agent will respond to requests Note On SIGHUP kill SIGHUP pidof vault Vault Agent will attempt to reload listener TLS configuration This method can be used to refresh certificates used by Vault Agent without having to restart its process pid file string Path to the file in which the agent s Process ID PID should be stored exit after auth bool false If set to true the agent will exit with code 0 after a single successful auth where success means that a token was retrieved and all sinks successfully wrote it If you have template stanzas defined in your agent configuration the agent waits for the configured templates to render successfully before exiting If you use environment templates env template and set exit after auth to true Vault agent will not run the child processes defined in your exec stanza disable idle connections string array A list of strings that disables idle connections for various features in Vault Agent Valid values include auto auth caching proxying and templating proxying configures this for the API proxy which is identical in function to caching for historical reasons Can also be configured by setting the VAULT AGENT DISABLE IDLE CONNECTIONS environment variable as a comma separated string This environment variable will override any values found in a configuration file disable keep alives string array A list of strings that disables keep alives for various features in Vault Agent Valid values include auto auth caching proxying and templating proxying configures this for the API proxy which is identical in function to caching for historical reasons Can also be configured by setting the VAULT AGENT DISABLE KEEP ALIVES environment variable as a comma separated string This environment variable will override any values found in a configuration file template code template template optional code Specifies options used for templating Vault secrets to files template config code template config template config optional code Specifies templating engine behavior exec code exec process supervisor optional code Specifies options for vault agent to run a child process that 
injects secrets via env template stanzas as environment variables env template code env template template optional code Multiple blocks accepted Each block contains the options used for templating Vault secrets as environment variables via the process supervisor mode vault docs agent and proxy agent process supervisor telemetry code telemetry telemetry optional code Specifies the telemetry reporting system See the telemetry Stanza vault docs agent and proxy agent telemetry stanza section below for a list of metrics specific to Agent log level Equivalent to the log level command line flag log level Note On SIGHUP kill SIGHUP pidof vault Vault Agent will update the log level to the value specified by configuration file including overriding values set using CLI or environment variable parameters log format Equivalent to the log format command line flag log format log file Equivalent to the log file command line flag log file log rotate duration Equivalent to the log rotate duration command line flag log rotate duration log rotate bytes Equivalent to the log rotate bytes command line flag log rotate bytes log rotate max files Equivalent to the log rotate max files command line flag log rotate max files vault stanza There can at most be one top level vault block and it has the following configuration entries address string optional The address of the Vault server to connect to This should be a Fully Qualified Domain Name FQDN or IP such as https vault fqdn 8200 or https 172 16 9 8 8200 This value can be overridden by setting the VAULT ADDR environment variable ca cert string optional Path on the local disk to a single PEM encoded CA certificate to verify the Vault server s SSL certificate This value can be overridden by setting the VAULT CACERT environment variable ca path string optional Path on the local disk to a directory of PEM encoded CA certificates to verify the Vault server s SSL certificate This value can be overridden by setting the VAULT CAPATH environment variable client cert string optional Path on the local disk to a single PEM encoded CA certificate to use for TLS authentication to the Vault server This value can be overridden by setting the VAULT CLIENT CERT environment variable client key string optional Path on the local disk to a single PEM encoded private key matching the client certificate from client cert This value can be overridden by setting the VAULT CLIENT KEY environment variable tls skip verify string optional Disable verification of TLS certificates Using this option is highly discouraged as it decreases the security of data transmissions to and from the Vault server This value can be overridden by setting the VAULT SKIP VERIFY environment variable tls server name string optional Name to use as the SNI host when connecting via TLS This value can be overridden by setting the VAULT TLS SERVER NAME environment variable namespace string optional Namespace to use for all of Vault Agent s requests to Vault This can also be specified by command line or environment variable The order of precedence is this setting lowest followed by the environment variable VAULT NAMESPACE and then the highest precedence command line option namespace If none of these are specified defaults to the root namespace retry stanza The vault stanza may contain a retry stanza that controls how failing Vault requests are handled whether these requests are issued in order to render templates or are proxied requests coming from the api proxy subsystem Auto auth however has its own notion of retrying 
and is not affected by this section For requests from the templating engine Vaul Agent will reset its retry counter and perform retries again once all retries are exhausted This means that templating will retry on failures indefinitely unless exit on retry failure from the template config template config stanza is set to true Here are the options for the retry stanza num retries int 12 Specify how many times a failing request will be retried A value of 0 translates to the default i e 12 retries A value of 1 disables retries The environment variable VAULT MAX RETRIES overrides this setting There are a few subtleties to be aware of here First requests originating from the proxy cache will only be retried if they resulted in specific HTTP result codes any 50x code except 501 not implemented as well as 412 precondition failed 412 is used in Vault Enterprise 1 7 to indicate a stale read due to eventual consistency Requests coming from the template subsystem are retried regardless of the failure Second templating retries may be performed by both the templating engine and the cache proxy if Vault Agent persistent cache persistent cache is enabled This is due to the fact that templating requests go through the cache proxy when persistence is enabled Third the backoff algorithm used to set the time between retries differs for the template and cache subsystems This is a technical limitation we hope to address in the future listener stanza Vault Agent supports one or more listener listener main stanzas Listeners can be configured with or without caching caching but will use the cache if it has been configured and will enable the API proxy apiproxy In addition to the standard listener configuration an Agent s listener configuration also supports the following require request header bool false Require that all incoming HTTP requests on this listener must have an X Vault Request true header entry Using this option offers an additional layer of protection from Server Side Request Forgery attacks Requests on the listener that do not have the proper X Vault Request header will fail with a HTTP response status code of 412 Precondition Failed role string default role determines which APIs the listener serves It can be configured to metrics only to serve only metrics or the default role default which serves everything including metrics The require request header does not apply to metrics only listeners agent api code agent api agent api optional code Manages optional Agent API endpoints agent api stanza enable quit bool false If set to true the agent will enable the quit vault docs agent and proxy agent quit API telemetry stanza Vault Agent supports the telemetry telemetry stanza and collects various runtime metrics about its performance the auto auth and the cache status Metric Description Type vault agent authenticated Current authentication status 1 has valid token gauge 0 no valid token vault agent auth failure Number of authentication failures counter vault agent auth success Number of authentication successes counter vault agent proxy success Number of requests successfully proxied counter vault agent proxy client error Number of requests for which Vault returned an error counter vault agent proxy error Number of requests the agent failed to proxy counter vault agent cache hit Number of cache hits counter vault agent cache miss Number of cache misses counter IMPORTANT VAULT ADDR usage If you export the VAULT ADDR environment variable on the Vault Agent instance that value takes precedence over the value 
in the configuration file The Vault Agent uses that to connect to Vault and this can create an infinite loop where the value of VAULT ADDR is used to make a connection and the Vault Agent ends up trying to connect to itself instead of the server When the connection fails the Vault Agent increments the port and tries again The agent repeats these attempts which leads to port exhaustion This problem is a result of the precedence order of the 3 different ways to configure the Vault address They are in increasing order of priority 1 Configuration files 1 Environment variables 1 CLI flags Start Vault Agent To run Vault Agent 1 Download vault downloads the Vault binary where the client application runs virtual machine Kubernetes pod etc 1 Create a Vault Agent configuration file See the Example Configuration example configuration section for an example configuration 1 Start a Vault Agent with the configuration file Example shell session vault agent config etc vault agent config hcl To get help run shell session vault agent h As with Vault the config flag can be used in three different ways Use the flag once to name the path to a single specific configuration file Use the flag multiple times to name multiple configuration files which will be composed at runtime Use the flag to name a directory of configuration files the contents of which will be composed at runtime Example configuration An example configuration with very contrived values follows hcl pid file pidfile vault address https vault fqdn 8200 retry num retries 5 auto auth method aws mount path auth aws subaccount config type iam role foobar sink file config path tmp file foo sink file wrap ttl 5m aad env var TEST AAD ENV dh type curve25519 dh path tmp file foo dhpath2 config path tmp file bar cache An empty cache stanza still enables caching api proxy use auto auth token true listener unix address path to socket tls disable true agent api enable quit true listener tcp address 127 0 0 1 8100 tls disable true template source etc vault server key ctmpl destination etc vault server key template source etc vault server crt ctmpl destination etc vault server crt vault vault docs agent and proxy agent vault stanza autoauth vault docs agent and proxy autoauth caching vault docs agent and proxy agent caching apiproxy vault docs agent and proxy agent apiproxy persistent cache vault docs agent and proxy agent caching persistent caches template vault docs agent and proxy agent template process supervisor vault docs agent and proxy agent process supervisor template config vault docs agent and proxy agent template template configurations agent api vault docs agent and proxy agent agent api stanza listener vault docs agent and proxy agent listener stanza listener main vault docs configuration listener tcp winsvc vault docs agent and proxy agent winsvc telemetry vault docs configuration telemetry |
---
layout: docs
page_title: Run Vault Agent in process supervisor mode
description: >-
Run Vault Agent in process supervisor mode to write Vault secrets to
environment variables for use in external processes.
---
# Run Vault Agent in process supervisor mode
Vault Agent's Process Supervisor Mode allows Vault secrets to be injected into
a process via environment variables using
[Consul Template markup][consul-templating-language].
-> If you are running your applications in a Kubernetes cluster, we recommend
evaluating the [Vault Secrets Operator](/vault/docs/platform/k8s/vso) and
the [Vault Agent Sidecar Injector](/vault/docs/platform/k8s/injector)
instead.
## Functionality
Vault Agent will inject secrets referenced in the `env_template` configuration
blocks as environment variables into the child process specified in the `exec` block.
When you start Vault Agent in process supervisor mode, it will wait until each
environment variable template has rendered at least once before starting the
process. If `restart_on_secret_changes` is set to `always` (default), Agent
will restart the process whenever an update to an injected secret is detected.
This could be either a static secret update (done on
[`static_secret_render_interval`](/vault/docs/agent-and-proxy/agent/template#static_secret_render_interval))
or a dynamic secret being close to its expiration.
In many ways, Vault Agent will mirror the child process. Standard input and
output streams (`stdin` / `stdout` / `stderr`) are all forwarded to the child
process. Additionally, when the child process exits on its own, Vault Agent
exits with the same exit code.
## Configuration
-> Agent's [generate-config](/vault/docs/agent-and-proxy/agent/generate-config)
tool will help you get started by generating a valid agent configuration
file from the given inputs.
The process supervisor mode requires at least one `env_template` block and
exactly one top level `exec` block. It is incompatible with regular file
`template` entries.
### `env_template`
The `env_template` stanza maps the template specified in the `contents` field or
referenced in the `source` field to the environment variable name in the title
of the stanza. It uses the same
[templating language](/vault/docs/agent-and-proxy/agent/template#templating-language)
as file templates but permits only a subset of
[its configuration parameters](/vault/docs/agent-and-proxy/agent/template#template_configurations):
- environment variable name `(string: <required>)` - the name of the
environment variable to which the contents of the template should map.
- `contents` `(string: "")` - This option allows embedding the contents of
  a template in the configuration file rather than supplying the `source` path to
the template file. This is useful for short templates. This option is mutually
exclusive with the `source` option.
- `source` `(string: "")` - Path on disk to use as the input template. This
option is required if not using the `contents` option.
- `error_on_missing_key` `(bool: false)` - Exit with an error when accessing
  a struct or map field/key that does not exist. The default behavior will print `<no value>`
when accessing a field that does not exist. It is highly recommended you set this
to "true". Also see
[`exit_on_retry_failure` in global Vault Agent Template Config](/vault/docs/agent-and-proxy/agent/template#interaction-between-exit_on_retry_failure-and-error_on_missing_key).
- `left_delimiter` `(string: "")` - Delimiter to use in the template. The
default is "}}" but for some templates, it may be easier to use a different
delimiter that does not conflict with the output file itself.
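For example, a minimal `env_template` block that renders a variable from an external template file rather than an inline template (paths are illustrative):

```hcl
env_template "DB_PASSWORD" {
  source               = "/etc/vault/db-password.ctmpl"
  error_on_missing_key = true
}
```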
### `exec`
The top level `exec` block has the following configuration entries.
- `command` `(string array: required)` - Specify the command for the child
process with optional arguments. The executable's path must be either
absolute or relative to the current working directory.
- `restart_on_secret_changes` `(string: "always")` - Controls whether agent
will restart the child process on secret changes. There are two types of
secret changes relevant to this configuration: a static secret update (on
  [`static_secret_render_interval`](/vault/docs/agent-and-proxy/agent/template#static_secret_render_interval))
  and a dynamic secret being close to its expiration. The configuration supports
two options: `always` and `never`.
- `restart_stop_signal` `(string: "SIGTERM")` - Signal to send to the child
process when a secret has been updated and the process needs to be restarted.
The process has 30 seconds after this signal is sent until `SIGKILL` is sent
to force the child process to stop.
## Configuration example
The following example was generated using
[`vault agent generate-config`](/vault/docs/agent-and-proxy/agent/generate-config),
a configuration helper tool. Given this configuration, Vault Agent will run
the child process (`./my-app arg1 arg2`) with two additional environment
variables (`FOO_USER` and `FOO_PASSWORD`) populated with secrets from Vault.
```hcl
auto_auth {
method {
type = "token_file"
config {
token_file_path = "/Users/avean/.vault-token"
}
}
}
template_config {
static_secret_render_interval = "5m"
exit_on_retry_failure = true
max_connections_per_host = 10
}
vault {
address = "http://localhost:8200"
}
env_template "FOO_PASSWORD" {
contents = ""
error_on_missing_key = true
}
env_template "FOO_USER" {
contents = ""
error_on_missing_key = true
}
exec {
command = ["./my-app", "arg1", "arg2"]
restart_on_secret_changes = "always"
restart_stop_signal = "SIGTERM"
}
```
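With the configuration saved to a file such as `agent-config.hcl` (a hypothetical name), process supervisor mode starts like any other agent:

```shell-session
$ vault agent -config=agent-config.hcl
```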
[consul-templating-language]: https://github.com/hashicorp/consul-template/blob/v0.28.1/docs/templating-language.md
[template-config]: /vault/docs/agent-and-proxy/agent/template#template-configurations
## Tutorial
Refer to the [Vault Agent - secrets as environment
variables](/vault/tutorials/vault-agent/agent-env-vars) tutorial for an
end-to-end example.
---
layout: docs
page_title: Use Vault Agent templates
description: >-
Use templates with Vault Agent to write Vault secrets files with Consul
Template markup.
---
# Use Vault Agent templates
Vault Agent's Template functionality allows Vault secrets to be rendered to files
or environment variables (via the [Process Supervisor Mode](/vault/docs/agent-and-proxy/agent/process-supervisor))
using [Consul Template markup][consul-templating-language].
## Functionality
The `template_config` stanza configures overall default behavior for the
templating engine. Note that `template_config` can only be defined once, and is
different from the `template` stanza. Unlike `template` which focuses on where
and how a specific secret is rendered, `template_config` contains parameters
affecting how the templating engine as a whole behaves and its interaction with
the rest of Agent. This includes, but is not limited to, program exit behavior.
Other parameters that apply to the templating engine as a whole may be added
over time.
The `template` stanza configures the Vault Agent for rendering secrets to files
using Consul Template markup language. Multiple `template` stanzas can be
defined to render multiple files.
When the Agent is started with templating enabled, it will attempt to acquire a
Vault token using the configured auto-auth Method. On failure, it will back off
for a short while (including some randomness to help prevent thundering herd
scenarios) and retry. On success, secrets defined in the templates will be
retrieved from Vault and rendered locally.
## Templating language
The template output content can be provided directly as part of the `contents`
option in a `template` stanza or as a separate `.ctmpl` file and specified in
the `source` option of a `template` stanza.
In order to fetch secrets from Vault, whether those are static secrets, dynamic
credentials, or certificates, Vault Agent templates require the use of the
`secret`
[function](https://github.com/hashicorp/consul-template/blob/master/docs/templating-language.md#secret)
or `pkiCert`
[function](https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md#pkicert)
from Consul Template.
The `secret` function works for all types of secrets and, depending on the type
of secret being rendered by this function, the template will have different
renewal behavior as detailed in the [Renewals
section](#renewals-and-updating-secrets). The `pkiCert` function is intended to
work specifically for certificates issued by the [PKI Secrets
Engine](/vault/docs/secrets/pki). Refer to the [Certificates](#certificates) section
for differences in certificate renewal behavior between `secret` and `pkiCert`.
The following links contain additional resources for the templating language used by Vault Agent templating.
- [Consul Templating Documentation][consul-templating-language]
- [Go Templating Language Documentation](https://pkg.go.dev/text/template#pkg-overview)
### Template language example
The following is an example of a template that retrieves a generic secret from Vault's
KV store:
```
{{ with secret "secret/data/my-secret" }}
{{ .Data.data.foo }}
{{ end }}
```
The following is an example of a template that issues a PKI certificate in
Vault's PKI secrets engine. The fetching of the certificate or key from a PKI role
through this function will be based on the certificate's expiration.
To generate a new certificate and create a bundle with the key, certificate, and CA, use a template along the following lines (the role name and common name are illustrative):
```
{{ with pkiCert "pki/issue/my-domain-dot-com" "common_name=foo.example.com" }}
{{ .Cert }}
{{ .CA }}
{{ .Key }}
{{ end }}
```
To fetch only the issuing CA for this mount, use:
```
{{ with secret "pki/cert/ca" }}
{{ .Data.certificate }}
{{ end }}
```
Alternatively, `pki/cert/ca_chain` can be used to fetch the full CA chain.
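For example, a minimal sketch that renders the full chain, assuming the chain is
returned in the `certificate` field as it is for `pki/cert/ca`:
```
{{ with secret "pki/cert/ca_chain" }}
{{ .Data.certificate }}
{{ end }}
```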
## Global configurations
The top level `template_config` block has the following configuration entries that affect
all templates:
- `exit_on_retry_failure` `(bool: false)` - This option configures Vault Agent
to exit after it has exhausted its number of template retry attempts due to
failures.
- `static_secret_render_interval` `(string or integer: 5m)` - If specified, configures
how often Vault Agent Template should render non-leased secrets such as KV v2.
This setting will not change how often Vault Agent Templating renders leased
secrets. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `max_connections_per_host` `(int: 10)` - Limits the total number of connections
that the Vault Agent templating engine can use for a particular Vault host. This limit
includes connections in the dialing, active, and idle states.
- `lease_renewal_threshold` `(float: 0.9)` - How long Vault Agent's template
  engine should wait to refresh dynamic, non-renewable leases, measured as
a fraction of the lease duration.
### `template_config` stanza example
```hcl
template_config {
exit_on_retry_failure = true
static_secret_render_interval = "10m"
max_connections_per_host = 20
}
```
In the following example, combining the [`error_on_missing_key` parameter in the template stanza](/vault/docs/agent-and-proxy/agent/template#error_on_missing_key)
with `exit_on_retry_failure` results in the Agent exiting on missing key or
value issues instead of following the default retry behavior.
```hcl
template_config {
exit_on_retry_failure = true
static_secret_render_interval = "10m"
max_connections_per_host = 20
}
template {
source = "/tmp/agent/template.ctmpl"
destination = "/tmp/agent/render.txt"
error_on_missing_key = true
}
```
### Interaction between `exit_on_retry_failure` and `error_on_missing_key`
The parameter
[`error_on_missing_key`](/vault/docs/agent-and-proxy/agent/template#error_on_missing_key) can be
specified within the `template` stanza which determines if a template should
error when a key is missing in the secret. When `error_on_missing_key` is not
specified or set to `false` and the key to render is not in the secret's
response, the templating engine will ignore it (or render `"<no value>"`) and
continue on with its rendering.
If the desire is to have Agent fail and exit on a missing key, both
`template.error_on_missing_key` and `template_config.exit_on_retry_failure` must
be set to true. Otherwise, the templating engine will error and render to its
destination, but Agent will not exit and will retry until the key exists or until
the process is terminated.
Note that a missing key from a secret's response is different from a missing or
non-existent secret. The templating engine will always error if a secret is
missing, but will only error for a missing key if `error_on_missing_key` is set.
Whether Vault Agent will exit when the templating engine errors depends on the
value of `exit_on_retry_failure`.
## Template configurations
The top level `template` block has multiple configuration entries. The
parameters found in the template configuration section in the consul-template
[documentation
page](https://github.com/hashicorp/consul-template/blob/main/docs/configuration.md#templates)
can be used here:
<Tip>
The parameters marked with `Δ` below are only applicable to file templates and
cannot be used with `env_template` entries in process supervisor mode.
</Tip>
- `source` `(string: "")` - Path on disk to use as the input template. This
option is required if not using the `contents` option.
- `destination`Δ `(string: required)` - Path on disk where the rendered secrets should
be created. If the parent directories do not exist, Vault
Agent will attempt to create them, unless `create_dest_dirs` is false.
- `create_dest_dirs`Δ `(bool: true)` - This option tells Vault Agent to create
the parent directories of the destination path if they do not exist.
- `contents` `(string: "")` - This option allows embedding the contents of
a template in the configuration file rather than supplying the `source` path to
the template file. This is useful for short templates. This option is mutually
exclusive with the `source` option.
- `command`Δ `(string: "")` - This is the optional command to run when the
template is rendered. The command will only run if the resulting template changes.
The command must return within 30s (configurable), and it must have a successful
exit code. Vault Agent is not a replacement for a process monitor or init system.
This is deprecated in favor of the `exec` option.
- `command_timeout`Δ `(duration: 30s)` - This is the maximum amount of time to
wait for the optional command to return. This is deprecated in favor of the
`exec` option.
- `error_on_missing_key` `(bool: false)` - Exit with an error when accessing
a struct or map field/key that does not exist. The default behavior will print `<no value>`
when accessing a field that does not exist. It is highly recommended you set this
to "true". Also see [`exit_on_retry_failure` in global Vault Agent Template Config](/vault/docs/agent-and-proxy/agent/template#interaction-between-exit_on_retry_failure-and-error_on_missing_key).
- `exec`Δ `(object: optional)` - The exec block executes a command when the
template is rendered and the output has changed. The block parameters are
`command` `(string or array: required)` and `timeout` `(string: optional, defaults
to 30s)`. `command` can be given as a string or array of strings to execute, such as
`"touch myfile"` or `["touch", "myfile"]`. To protect against command injection, we
strongly recommend using an array of strings, and we attempt to parse that way first.
Note also that using a comma with the string approach will cause it to be interpreted as an
  array, which may not be desirable. See the example stanzas below for the array form.
- `perms`Δ `(string: "")` - This is the permission to render the file. If
this option is left unspecified, Vault Agent will attempt to match the permissions
of the file that already exists at the destination path. If no file exists at that
path, the permissions are 0644.
- `backup`Δ `(bool: true)` - This option backs up the previously rendered template
at the destination path before writing a new one. It keeps exactly one backup.
This option is useful for preventing accidental changes to the data without having
a rollback strategy.
- `left_delimiter` `(string: "")` - Delimiter to use in the template. The
  default is "{{" but for some templates, it may be easier to use a different
delimiter that does not conflict with the output file itself.
- `sandbox_path`Δ `(string: "")` - If a sandbox path is provided, any path
provided to the `file` function is checked that it falls within the sandbox path.
Relative paths that try to traverse outside the sandbox path will exit with an error.
- `wait`Δ `(object: required)` - This is the `minimum(:maximum)` to wait before rendering
a new template to disk and triggering a command, separated by a colon (`:`).
### Example `template` stanza
```hcl
template {
source = "/tmp/agent/template.ctmpl"
destination = "/tmp/agent/render.txt"
error_on_missing_key = true
}
```
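A sketch of the same stanza using the `exec` option with the recommended array
form of `command` (the reload command shown here is hypothetical):
```hcl
template {
  source      = "/tmp/agent/template.ctmpl"
  destination = "/tmp/agent/render.txt"

  exec {
    # Hypothetical command, run only when the rendered output changes.
    command = ["/usr/local/bin/reload-app", "--config", "/tmp/agent/render.txt"]
    timeout = "30s"
  }
}
```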
If you only want to use the Vault Agent to render one or more templates and do
not need to sink the acquired credentials, you can omit the `sink` stanza from
the `auto_auth` stanza in the Agent configuration.
## Renewals and updating secrets
The Vault Agent templating automatically renews and fetches secrets/tokens.
Unlike [Vault Agent caching](/vault/docs/agent-and-proxy/agent/caching), the behavior of how Vault Agent
templating does this depends on the type of secret or token. The following is a
high level overview of different behaviors.
### Renewable secrets
If a secret or token is renewable, Vault Agent will renew the secret after 2/3
of the secret's lease duration has elapsed. For example, a secret with a
one-hour lease is renewed after roughly 40 minutes.
### Non-Renewable secrets
If a secret or token isn't renewable or leased, Vault Agent will fetch the secret every 5 minutes.
This can be configured using the `template_config` stanza value [static_secret_render_interval](/vault/docs/agent-and-proxy/agent/template#static_secret_render_interval) (requires Vault 1.8+).
Non-renewable secrets include (but are not limited to) [KV Version 2](/vault/docs/secrets/kv/kv-v2).
### Non-Renewable leased secrets
If a secret or token is non-renewable but leased, Vault Agent will fetch the secret when 90% of the secret's time-to-live (TTL)
is reached, plus or minus some jitter to ensure that many clients don't hit Vault simultaneously. Leased, non-renewable secrets
include (but are not limited to) dynamic secrets such as [database credentials](/vault/docs/secrets/databases). The 90% value
is configurable using the `template_config` stanza value
[lease_renewal_threshold](/vault/docs/agent-and-proxy/agent/template#lease_renewal_threshold). While KVv1 secrets are not leased,
this also controls the fraction at which Agent will re-fetch [KV Version 1](/vault/docs/secrets/kv/kv-v1) secrets that
have a defined `lease_duration`.
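For example, a minimal sketch that makes Agent re-fetch leased, non-renewable
secrets at 80% of their TTL instead of the default 90%:
```hcl
template_config {
  lease_renewal_threshold = 0.8
}
```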
### Static roles
If a secret has a `rotation_period`, such as a [database static role](/vault/docs/secrets/databases#static-roles),
Vault Agent template will fetch the new secret as it changes in Vault. It does
this by inspecting the secret's time-to-live (TTL).
### Certificates
As of Vault 1.11, certificates can be rendered using either `pkiCert` or
`secret` template functions, although it is recommended to use `pkiCert` to
avoid unnecessarily generating certificates whenever Agent restarts or
re-authenticates.
#### Rendering using the `pkiCert` template function
If a [certificate](/vault/docs/secrets/pki) is rendered using the `pkiCert` template
function, Vault Agent template will have the following fetching and re-rendering
behaviors on certificates:
- Fetches a new certificate on Agent startup if none has been previously
  rendered or the currently rendered certificate has expired.
- On Agent's auto-auth re-authentication, due to a token expiry for example,
  skips fetching unless the currently rendered certificate has expired.
#### Rendering using the `secret` template function
If a [certificate](/vault/docs/secrets/pki) is rendered using the `secret` template
function, Vault Agent template will have the following fetching and re-rendering
behaviors on certificates:
- Fetches a new certificate on Agent startup, even if previously rendered
certificates are still valid.
- If `generate_lease` is unset or set to `false`, it uses the certificate's
  `validTo` field to determine the re-fetch interval.
- If `generate_lease` is set to `true`, it applies the non-renewable, leased
  secret rules.
- On Agent's auto-auth re-authentication, due to a token expiry for example, it
fetches and re-renders a new certificate even if the existing certificate is
valid.
## Templating configuration example
The following demonstrates Vault Agent Templates configuration blocks.
```hcl
# Other Vault Agent configuration blocks
# ...
template_config {
static_secret_render_interval = "10m"
exit_on_retry_failure = true
max_connections_per_host = 20
}
template {
source = "/tmp/agent/template.ctmpl"
destination = "/tmp/agent/render.txt"
}
template {
contents = "{{ with secret \"secret/data/my-secret\" }}{{ .Data.data.foo }}{{ end }}"
destination = "/tmp/agent/render-content.txt"
}
```
And the following demonstrates how the templates look when using `env_template` with
[Process Supervisor Mode](/vault/docs/agent-and-proxy/agent/process-supervisor):
```hcl
# Other Vault Agent configuration blocks
# ...
template_config {
static_secret_render_interval = "10m"
exit_on_retry_failure = true
max_connections_per_host = 20
}
env_template "MY_ENV_VAR" {
contents = "{{ with secret \"secret/data/my-secret\" }}{{ .Data.data.foo }}{{ end }}"
}
env_template "ENV_VAR_FROM_FILE" {
source = "/tmp/agent/template.ctmpl"
}
```
[consul-templating-language]: https://github.com/hashicorp/consul-template/blob/v0.28.1/docs/templating-language.md
[process-supervisor]: /vault/docs/agent-and-proxy/agent/process-supervisor
---
layout: docs
page_title: Generate a development configuration file
description: >-
Use the Vault CLI to create a basic development configuration file to run
Vault Agent in process supervisor mode.
---
# Generate a Vault Agent development configuration file
Use the Vault CLI to create a basic development configuration file to run Vault
Agent in process supervisor mode.
Development configuration files include an `auto_auth` section that references a
token file based on the Vault token used to authenticate the CLI command. Token
files are convenient for local testing but **are not** appropriate for
production use. **Always use a robust
[auto-authentication method](/vault/docs/agent-and-proxy/autoauth/methods) in
production**.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `read` permissions for the `kv` v2 plugin.
</Tip>
Use [`vault agent generate-config`](/vault/docs/commands/agent/generate-config)
to create a development configuration file with environment variable templates:
```shell-session
$ vault agent generate-config \
-type "env-template" \
-exec "<path_to_child_process> <list_of_arguments>" \
-namespace "<plugin_namespace>" \
-path "<mount_path_to_kv_plugin_1>" \
-path "<mount_path_to_kv_plugin_2>" \
...
-path "<mount_path_to_kv_plugin_N>" \
<config_file_name>
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault agent generate-config \
-type="env-template" \
-exec="./payment-app 'wf-test'" \
-namespace="testing" \
-path="shared/dev/*" \
-path="private/ci/integration" \
agent-config.hcl
Successfully generated "agent-config.hcl" configuration file!
Warning: the generated file uses 'token_file' authentication method, which is not suitable for production environments.
```
</CodeBlockConfig>
The configuration file includes `env_template` entries for each key stored at
the explicit paths and any key encountered while recursing through paths ending
with `/*`. Template keys have the form `<final_path_segment>_<key_name>`. For
example, a key named `prod` on a secret at `shared/dev/square-api` produces the
template key `SQUARE_API_PROD`.
For example:
<CodeBlockConfig highlight="7,22,26,30,34,38,42">
```hcl
auto_auth {
method {
type = "token_file"
config {
token_file_path = "/home/<username>/.vault-token"
}
}
}
template_config {
static_secret_render_interval = "5m"
exit_on_retry_failure = true
max_connections_per_host = 10
}
vault {
address = "http://192.168.0.1:8200"
}
env_template "SQUARE_API_PROD" {
contents = "{{ with secret \"shared/data/dev/square-api\" }}{{ .Data.data.prod }}{{ end }}"
error_on_missing_key = true
}
env_template "SQUARE_API_SANDBOX" {
contents = "{{ with secret \"shared/data/dev/square-api\" }}{{ .Data.data.sandbox }}{{ end }}"
error_on_missing_key = true
}
env_template "SQUARE_API_SMOKE" {
contents = "{{ with secret \"shared/data/dev/square-api\" }}{{ .Data.data.smoke }}{{ end }}"
error_on_missing_key = true
}
env_template "SEEDS_SEED1" {
contents = "{{ with secret \"shared/data/dev/seeds\" }}{{ .Data.data.seed1 }}{{ end }}"
error_on_missing_key = true
}
env_template "SEEDS_SEED2" {
contents = "{{ with secret \"shared/data/dev/seeds\" }}{{ .Data.data.seed2 }}{{ end }}"
error_on_missing_key = true
}
env_template "DEV_POSTMAN" {
contents = "{{ with secret \"shared/data/dev\" }}{{ .Data.data.postman }}{{ end }}"
error_on_missing_key = true
}
exec {
command = ["./payment-app", "'wf-test'"]
restart_on_secret_changes = "always"
restart_stop_signal = "SIGTERM"
}
```
</CodeBlockConfig> | vault | layout docs page title Generate a development configuration file description Use the Vault CLI to create a basic development configuration file to run Vault Agent in process supervisor mode Generate a Vault Agent development configuration file Use the Vault CLI to create a basic development configuration file to run Vault Agent in process supervisor mode Development configuration files include an auto auth section that reference a token file based on the Vault token used to authenticate the CLI command Token files are convenient for local testing but are not appropriate for in production Always use a robust auto authentication method vault docs agent and proxy autoauth methods in production Tip title Assumptions You have set up a kv v2 plugin vault docs secrets kv kv v2 setup Your authentication token has read permissions for the kv v2 plugin Tip Use vault agent generate config vault docs commands agent generate config to create a development configuration file with environment variable templates shell session vault agent generate config type env template exec path to child process list of arguments namespace plugin namespace path mount path to kv plugin 1 path mount path to kv plugin 2 path mount path to kv plugin N config file name For example CodeBlockConfig hideClipboard shell session vault agent generate config type env template exec payment app wf test namespace testing path shared dev path private ci integration agent config hcl Successfully generated agent config hcl configuration file Warning the generated file uses token file authentication method which is not suitable for production environments CodeBlockConfig The configuration file includes env template entries for each key stored at the explicit paths and any key encountered while recursing through paths ending with Template keys have the form final path segment key name For example CodeBlockConfig highlight 7 22 26 30 34 38 42 hcl auto auth method type token file config token file path home username vault token template config static secret render interval 5m exit on retry failure true max connections per host 10 vault address http 192 168 0 1 8200 env template SQUARE API PROD contents error on missing key true env template SQUARE API SANDBOX contents error on missing key true env template SQUARE API SMOKE contents error on missing key true env template SEEDS SEED1 contents error on missing key true env template SEEDS SEED2 contents error on missing key true env template DEV POSTMAN contents error on missing key true exec command payment app wf test restart on secret changes always restart stop signal SIGTERM CodeBlockConfig |
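The secret paths and key names in the `contents` values above are illustrative;
the generated file reflects the keys actually found under the requested `-path`
mounts. Once generated, start Vault Agent in process supervisor mode with the
new configuration file:

```shell-session
$ vault agent -config=agent-config.hcl
```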
---
layout: docs
page_title: Vault Agent caching overview
description: >-
Use client-side caching with Vault Agent for responses with newly
created tokens or leased secrets generated from a newly created token.
---
# Vault Agent caching overview
<Note title="Use Vault Proxy for static secret caching">
[Static secret caching](/vault/docs/agent-and-proxy/proxy/caching/static-secret-caching)
(KVv1 and KVv2) with API proxy minimizes the number of requests forwarded to
Vault. Vault Agent does not support static secret caching with API proxy. We
recommend using [Vault Proxy](/vault/docs/agent-and-proxy/proxy) for API Proxy
related workflows.
</Note>
Vault Agent Caching allows client-side caching of responses containing newly
created tokens and responses containing leased secrets generated off of these
newly created tokens. The renewals of the cached tokens and leases are also
managed by the agent.
## Caching and renewals
Response caching and renewals are managed by the agent only under these
specific scenarios.
1. Token creation requests are made through the agent. This means that any
login operations performed using various auth methods and invoking the token
creation endpoints of the token auth method via the agent will result in the
response getting cached by the agent. Responses containing new tokens will
be cached by the agent only if the parent token is already being managed by
the agent or if the new token is an orphan token.
2. Leased secret creation requests are made through the agent using tokens that
are already managed by the agent. This means that any dynamic credentials
that are issued using the tokens managed by the agent will be cached, and
   their renewals will be managed by the agent as well.
## Persistent cache
Vault Agent can restore tokens and leases from a persistent cache file created
by a previous Vault Agent process.
Refer to the [Vault Agent Persistent
Caching](/vault/docs/agent-and-proxy/agent/caching/persistent-caches) page for more information on
this functionality.
## Cache evictions
The eviction of cache entries pertaining to secrets will occur when the agent
can no longer renew them. This can happen when the secrets hit their maximum
TTL or if the renewals result in errors.
Agent does some best-effort cache evictions by observing specific request types
and response codes. For example, if a token revocation request is made via the
agent and if the forwarded request to the Vault server succeeds, then agent
evicts all the cache entries associated with the revoked token. Similarly, any
lease revocation operation will also be intercepted by the agent and the
respective cache entries will be evicted.
Note that while agent evicts the cache entries upon secret expirations and upon
intercepting revocation requests, it is still possible for the agent to be
completely unaware of the revocations that happen through direct client
interactions with the Vault server. This could potentially lead to stale cache
entries. For managing the stale entries in the cache, an endpoint
`/agent/v1/cache-clear` (see below) is made available to manually evict cache
entries based on some of the query criteria used for indexing the cache entries.
## Request uniqueness
In order to detect repeat requests and return cached responses, Agent needs
to have a way to uniquely identify the requests. This computation as it stands
today takes a simplistic approach (may change in future) of serializing and
hashing the HTTP request along with all the headers and the request body. This
hash value is then used as an index into the cache to check if the response is
readily available. The consequence of this approach is that the hash value for
any request will differ if any data in the request is modified. This has the
side-effect of resulting in false negatives if say, the ordering of the request
parameters are modified. As long as the requests come in without any change,
caching behavior should be consistent. Identical requests with differently
ordered request values will result in duplicated cache entries. The caching
functionality is built on the heuristic assumption that clients use consistent
mechanisms to make requests, thereby producing consistent hash values per
request.
## Renewal management
The tokens and leases are renewed by the agent using the secret renewer that is
made available via the Vault server's [Go
API](https://godoc.org/github.com/hashicorp/vault/api#Renewer). Agent performs
all operations in memory and does not persist anything to storage. This means
that when the agent is shut down, all the renewal operations are immediately
terminated and there is no way for agent to resume renewals after the fact.
Note that shutting down the agent does not indicate revocations of the secrets,
instead it only means that renewal responsibility for all the valid unrevoked
secrets are no longer performed by the Vault agent.
### Agent CLI
Agent's listener address will be picked up by the CLI through the
`VAULT_AGENT_ADDR` environment variable. This should be a complete URL such as
`"http://127.0.0.1:8200"`.
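For example, assuming Agent is configured with a `tcp` listener on port 8100,
requests made by the Vault CLI can be routed through it:

```shell-session
$ export VAULT_AGENT_ADDR="http://127.0.0.1:8100"
$ vault kv get secret/my-secret
```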
## API
### Cache clear
This endpoint clears the cache based on given criteria. To use this
API, some information on how the agent caches values should be known
beforehand. Each response that is cached in the agent will be indexed on some
factors depending on the type of request. Those factors can be the `token`
belonging to the cached response, the `token_accessor` of the token
belonging to the cached response, the `request_path` that resulted in the
cached response, the `lease` that is attached to the cached response, the
`namespace` to which the cached response belongs, and a few more. This API
exposes some factors through which associated cache entries are fetched and
evicted. For listeners without caching enabled, this API will still be available,
but will do nothing (there is no cache to clear) and will return a `200` response.
| Method | Path | Produces |
| :----- | :---------------------- | :--------------------- |
| `POST` | `/agent/v1/cache-clear` | `200 application/json` |
#### Parameters
- `type` `(string: required)` - The type of cache entries to evict. Valid
values are `request_path`, `lease`, `token`, `token_accessor`, and `all`.
If the `type` is set to `all`, the _entire cache_ is cleared.
- `value` `(string: required)` - An exact value or the prefix of the value for
the `type` selected. This parameter is optional when the `type` is set
to `all`.
- `namespace` `(string: optional)` - This is only applicable when the `type` is set to
  `request_path`. The namespace in which to evict cache entries for the given
  request path.
### Sample payload
```json
{
"type": "token",
"value": "hvs.rlNjegSKykWcplOkwsjd8bP9"
}
```
### Sample request
```shell-session
$ curl \
--request POST \
--data @payload.json \
http://127.0.0.1:1234/agent/v1/cache-clear
```
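As another sketch, the following payload evicts entries indexed under a given
request path within a namespace (both values here are hypothetical):

```json
{
  "type": "request_path",
  "value": "/v1/secret/data/my-app",
  "namespace": "ns1/"
}
```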
## Configuration (`cache`)
The presence of the top level `cache` block, even an empty one, enables the cache.
The top level `cache` block has the following configuration entry:
- `persist` `(object: optional)` - Configuration for the persistent cache.
The `cache` block also supports the `use_auto_auth_token`, `enforce_consistency`, and
`when_inconsistent` configuration values of the `api_proxy` block
[described in the API Proxy documentation](/vault/docs/agent-and-proxy/agent/apiproxy#configuration-api_proxy) only to
maintain backwards compatibility. These values **cannot** be specified in both the `cache`
and `api_proxy` blocks at the same time; the `api_proxy` block is the preferred place to
configure them.
-> **Note:** When the `cache` block is defined, at least one
[template][agent-template] or [listener][agent-listener] must also be defined
in the config, otherwise there is no way to utilize the cache.
[agent-template]: /vault/docs/agent-and-proxy/agent/template
[agent-listener]: /vault/docs/agent-and-proxy/agent#listener-stanza
### Configuration (Persist)
These are common configuration values that live within the `persist` block:
- `type` `(string: required)` - The type of the persistent cache to use,
e.g. `kubernetes`. _Note_: when using HCL this can be used as the key for
the block, e.g. `persist "kubernetes" {...}`. Currently, only `kubernetes`
is supported.
- `path` `(string: required)` - The path on disk where the persistent cache file
should be created or restored from.
- `keep_after_import` `(bool: optional)` - When set to true, a restored cache file
is not deleted. Defaults to `false`.
- `exit_on_err` `(bool: optional)` - When set to true, if any errors occur during
a persistent cache restore, Vault Agent will exit with an error. Defaults to `true`.
- `service_account_token_file` `(string: optional)` - When `type` is set to `kubernetes`,
this configures the path on disk where the Kubernetes service account token can be found.
Defaults to `/var/run/secrets/kubernetes.io/serviceaccount/token`.
## Configuration (`listener`)
- `listener` `(array of objects: required)` - Configuration for the listeners.
There can be one or more `listener` blocks at the top level. Adding a listener enables
the [API Proxy](/vault/docs/agent-and-proxy/agent/apiproxy) and enables the API proxy to use the cache, if configured.
These configuration values are common to both `tcp` and `unix` listener blocks. Blocks of type
`tcp` support the standard `tcp` [listener](/vault/docs/configuration/listener/tcp)
options. Additionally, the `role` string option is available as part of the top level
of the `listener` block, which can be configured to `metrics_only` to serve only metrics,
or the default role, `default`, which serves everything (including metrics).
- `type` `(string: required)` - The type of the listener to use. Valid values
are `tcp` and `unix`.
_Note_: when using HCL this can be used as the key for the block, e.g.
`listener "tcp" {...}`.
- `address` `(string: required)` - The address for the listener to listen to.
This can either be a `host:port` address when using `tcp` or a file path when using
`unix`. For example, `127.0.0.1:8200` or `/path/to/socket`. Defaults to
`127.0.0.1:8200`.
- `tls_disable` `(bool: false)` - Specifies if TLS will be disabled.
- `tls_key_file` `(string: optional)` - Specifies the path to the private key
for the certificate.
- `tls_cert_file` `(string: optional)` - Specifies the path to the certificate
for TLS.
### Example configuration
Here is an example of a cache configuration with the optional `persist` block,
alongside a regular listener, and a listener that only serves metrics.
```hcl
# Other Vault agent configuration blocks
# ...
cache {
persist = {
type = "kubernetes"
path = "/vault/agent-cache/"
keep_after_import = true
exit_on_err = true
service_account_token_file = "/tmp/serviceaccount/token"
}
}
listener "tcp" {
address = "127.0.0.1:8100"
tls_disable = true
}
listener "tcp" {
address = "127.0.0.1:3000"
tls_disable = true
role = "metrics_only"
}
```
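A `unix` listener follows the same pattern; a minimal sketch with an
illustrative socket path:

```hcl
listener "unix" {
  address     = "/var/run/vault-agent.sock"
  tls_disable = true
}
```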
## Tutorial
Refer to the [Vault Agent
Caching](/vault/tutorials/vault-agent/agent-caching)
tutorial to learn how to use the Vault Agent to increase the availability of tokens and secrets to clients using its Caching function.
vault page title Regenerate a Vault root token layout docs Your Vault root token is a special token that gives you access to all Vault Regenerate a lost or revoked root token Regenerate a Vault root token | ---
layout: docs
page_title: Regenerate a Vault root token
description: >-
Regenerate a lost or revoked root token.
---
# Regenerate a Vault root token
Your Vault root token is a special token that gives you access to **all** Vault
operations. Best practice is to enable an appropriate authentication method for
Vault admins once the server is running and revoke the root token.
For emergency situations where you require a root token, you can use the
[`operator generate-root`](/vault/docs/commands/operator/generate-root) CLI
command and a one-time password (OTP) or Pretty Good Privacy (PGP) to generate
a new root token.
## Before you start
- **You need your Vault keys**. If you use auto-unseal, you need your
  [recovery](/vault/docs/concepts/seal#recovery-key) keys; otherwise, you need
  your unseal keys.
- **Identify current key holders**. You must distribute the token nonce to your
unseal/recovery key holders during root token generation.
## Step 1: Create a root token nonce
1. Generate a token nonce for your new root token:
<Tabs>
<Tab heading="OTP" group="otp">
**You need the returned OTP value to decode the new root token**.
```shell-session
$ vault operator generate-root -init
A One-Time-Password has been generated for you and is shown in the OTP field.
You will need this value to decode the resulting root token, so keep it safe.
Nonce 15565c79-cc9e-5e64-b986-8506e7bd1918
Started true
Progress 0/1
Complete false
OTP 5JFQaH76Ky2TIuSt4SPvO1CGkx
OTP Length 26
```
</Tab>
<Tab heading="PGP" group="pgp">
Use the `-pgp-key` option to provide a path to your PGP public key or Keybase
username to encrypt the new root token. **You will need the returned PGP
value to decode the new root token**.
```shell-session
$ vault operator generate-root -init -pgp-key=keybase:sethvargo
Nonce e24dec5e-f1ea-2dfe-ecce-604022006976
Started true
Progress 0/5
Complete false
PGP Fingerprint e2f8e2974623ba2a0e933a59c921994f9c27e0ff
```
</Tab>
</Tabs>
1. Distribute the nonce to each of your unseal/recovery key holders.
## Step 2: Establish key quorum with the token nonce
<Highlight title="Use TTY to autocomplete the nonce">
If you use a TTY, the `operator generate-root` command prompts for your key
and automatically completes the nonce value.
</Highlight>
1. Have each unseal/recovery key holder run `operator generate-root` with their
key and the distributed nonce value:
```shell-session
$ echo ${UNSEAL_OR_RECOVERY_KEY} | vault operator generate-root -nonce=${NONCE_VALUE} -
Root generation operation nonce: f67f4da3-4ae4-68fb-4716-91da6b609c3e
Unseal Key (will be hidden):
```
1. Vault returns the new, encoded root token to the user who triggers quorum:
<Tabs>
<Tab heading="OTP" group="otp">
```shell-session
Nonce f67f4da3-4ae4-68fb-4716-91da6b609c3e
Started true
Progress 5/5
Complete true
Encoded Token IxJpyqxn3YafOGhqhvP6cQ==
```
</Tab>
<Tab heading="PGP" group="pgp">
```shell-session
Nonce e24dec5e-f1ea-2dfe-ecce-604022006976
Started true
Progress 1/1
Complete true
PGP Fingerprint e2f8e2974623ba2a0e933a59c921994f9c27e0ff
Encoded Token wcFMA0RVkFtoqzRlARAAI3Ux8kdSpfgXdF9mg...
```
</Tab>
</Tabs>
## Step 3: Decode the new root token
Decode the new root token using OTP or PGP.
<Tabs>
<Tab heading="OTP" group="otp">
Use `operator generate-root` and the OTP value from nonce generation to decode
the new root token:
```shell-session
$ vault operator generate-root \
-decode=${ENCODED_TOKEN} \
-otp=${NONCE_OTP}
hvs.XXXXXXXXXXXXXXXXXXXXXXXX
```
</Tab>
<Tab heading="PGP" group="pgp">
Use your PGP credentials and `gpg` or `keybase` to decrypt the new root token.
**`gpg`**:
```shell-session
$ echo ${ENCODED_TOKEN} | base64 --decode | gpg --decrypt
hvs.XXXXXXXXXXXXXXXXXXXXXXXX
```
**`keybase`**:
```shell-session
$ echo ${ENCODED_TOKEN} | base64 --decode | keybase pgp decrypt
hvs.XXXXXXXXXXXXXXXXXXXXXXXX
```
</Tab>
</Tabs>
---
layout: docs
page_title: Tune the lease TTL
description: >-
Understand the behavior of time-to-live set on leases.
---
# Tune the lease time-to-live (TTL)
The benefit of using Vault's dynamic secrets engines and auth methods is the
ability to control how long the Vault-managed credentials (leases) remain valid.
Oftentimes, you generate short-lived credentials or tokens to reduce the risk
of unauthorized access from leaked credentials or tokens. If you do not
explicitly specify the time-to-live (TTL), Vault generates leases with a TTL of
32 days by default.
For example, suppose you enable the AppRole auth method at `approle` and create
a role named `read-only` with a max lease TTL of **120 days**.
```shell-session
$ vault write auth/approle/role/read-only token_policies="read-only" \
token_ttl=90d token_max_ttl=120d
```
The command returns a warning about the TTL exceeding the mount's max TTL value.
<CodeBlockConfig hideClipboard>
```plaintext
WARNING! The following warnings were returned from Vault:
* token_max_ttl is greater than the backend mount's maximum TTL value;
issued tokens' max TTL value will be truncated
```
</CodeBlockConfig>
Therefore, the login returns a client token with a TTL of 768 hours (32 days)
instead of 120 days.
<CodeBlockConfig highlight="12" hideClipboard>
```shell-session
$ vault write auth/approle/login role_id=<ROLE_ID> secret_id=<SECRET_ID>
WARNING! The following warnings were returned from Vault:
* TTL of "2880h" exceeded the effective max_ttl of "768h"; TTL value is
capped accordingly
Key Value
--- -----
token hvs.CAESIJeVezY3UObHXTvzpI722q0MmaARB1692fT-MmdzcryvGh4KHGh2cy43czViYXVZS3FnSzltWmdVZ3Q0MmFTdkc
token_accessor wXTOvz5xxBi2vvUpTBhemUXr
token_duration 768h
token_renewable true
token_policies ["default" "read-only"]
identity_policies []
policies ["default" "read-only"]
token_meta_role_name read-only
```
</CodeBlockConfig>
## Max lease TTL on an auth mount
You cannot set the TTL for a role to go beyond the max lease TTL set on the
AppRole auth mount (`approle` in this example). The default lease TTL and max
lease TTL are 32 days (768 hours).
```shell-session
$ vault read sys/auth/approle/tune
```
**Output:**
<CodeBlockConfig highlight="3,6" hideClipboard>
```plaintext
Key Value
--- -----
default_lease_ttl 768h
description n/a
force_no_cache false
max_lease_ttl 768h
token_type default-service
```
</CodeBlockConfig>
If the desired max lease TTL is 120 days (2880 hours), update the max lease TTL
on the mount.
```shell-session
$ vault auth tune -max-lease-ttl=120d approle
```
The following command lists all available parameters that you can tune.
```shell-session
$ vault auth tune -h
```
Now, the AppRole auth method will generate a lease with a token duration of 120 days (2880 hours).
<CodeBlockConfig highlight="7" hideClipboard>
```shell-session
$ vault write auth/approle/login role_id=<ROLE_ID> secret_id=<SECRET_ID>
Key Value
--- -----
token hvs.CAESIOzTpLX4naKw-epzhcb3DneZ9ZuRTx4tKh5mTT1CajLQGh4KHGh2cy5TUFFhY3QzVzdmSTFwQUduOWlrMVRWaUE
token_accessor blc2MGA4EmmqEROzqlotFbqr
token_duration 2880h
token_renewable true
token_policies ["default" "jenkins"]
identity_policies []
policies ["default" "jenkins"]
token_meta_role_name jenkins
```
</CodeBlockConfig>
## Max lease TTL on a secrets mount
Similar to the AppRole auth method example, you can tune the max lease TTL on
dynamic secrets.
For example, suppose you enable the database secrets engine at `mongodb` and
create a role named `tester` with a max lease TTL of 120 days (2880 hours). When
you request a database credential for the `tester` role, Vault returns a
warning, and the lease duration is 32 days (768 hours) instead of 120 days.
<CodeBlockConfig hideClipboard highlight="11">
```shell-session
$ vault read mongodb/creds/tester
WARNING! The following warnings were returned from Vault:
* TTL of "2880h" exceeded the effective max_ttl of "768h"; TTL value is
capped accordingly
Key Value
--- -----
lease_id mongodb/creds/tester/fVPt15506k3UW9n4pq0kIpBH
lease_duration 768h
lease_renewable true
password Eskkx6yRhAN4--H9WL7B
username v-token-tester-6BtY903qOZBpzYa4yQs8-1724715513
```
</CodeBlockConfig>
To set the desired TTL on the role, tune the max lease TTL on the `mongodb`
mount.
```shell-session
$ vault secrets tune -max-lease-ttl=120d mongodb
```
Verify the configured max lease TTL available on the mount.
<CodeBlockConfig hideClipboard highlight="8">
```shell-session
$ vault read sys/mounts/mongodb/tune
Key Value
--- -----
default_lease_ttl 768h
description n/a
force_no_cache false
max_lease_ttl 2880h
```
</CodeBlockConfig>
The following command lists all available parameters that you can tune.
```shell-session
$ vault secrets tune -h
```
When you introduce Vault into your existing system, the existing applications
may not be able to handle short-lived leases. You can tune the default TTLs
on each mount.
On a similar note, if the system default of 32 days is too long, you can tune
the default TTL to be shorter to comply with your organization's policy.
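As a sketch (the mount name and the four-hour value are illustrative, not a
recommendation), shortening the default TTL on a secrets mount looks like this:

```shell-session
$ vault secrets tune -default-lease-ttl=4h mongodb
```

Leases issued from this mount that do not set their own TTL then default to
four hours.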
## API
- [Tune auth method](/vault/api-docs/system/auth#tune-auth-method)
- [Tune mount configuration](/vault/api-docs/system/mounts#tune-mount-configuration)
---
layout: docs
page_title: Lease problems
description: >-
Troubleshoot lease problems in Vault.
---
# Troubleshoot lease problems
Explanations, workarounds, and solutions for common lease problems in Vault.
## `429 - Too Many Requests`
### Problem
Vault returns a `429 - Too Many Requests` response when users try to
authenticate. For example:
<CodeBlockConfig hideClipboard>
```text
Error making API request.
URL: PUT https://127.0.0.1:61555/v1/auth/userpass/login/foo
Code: 429. Errors:
* 1 error occurred:
* request path "auth/userpass/login/foo": lease count quota exceeded
```
</CodeBlockConfig>
### Cause
Vault returns a `429 - Too Many Requests` response if a new lease request
violates the configured lease quota limit.
To guard against [lease explosions](/vault/docs/troubleshoot/lease-explosions),
Vault rejects authentication requests if completing the request would violate
the configured lease quota limit.
### Solution
1. Correct any client-side errors that may cause excessive lease creation.
1. Determine if your resource needs have changed and complete the
[Protecting Vault with Resource Quotas](/vault/tutorials/operations/resource-quotas)
tutorial to determine new, appropriate defaults.
1. Use the [`vault lease`](/vault/docs/commands/lease) CLI command or
[lease count quota endpoint](/vault/api-docs/system/lease-count-quotas) to
tune your lease count quota.
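For example, assuming an existing quota named `global-leases` (the name and
limit here are illustrative), you could raise its limit with a single write:

```shell-session
$ vault write sys/quotas/lease-count/global-leases max_leases=500000
```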
<Highlight title="Use proactive tuning to avoid errors">
Consider making short-term changes to your lease quotas when you expect a
significant increase in lease creation. For example, when you release a new
feature or complete a marketing push to increase your user base.
</Highlight>
## Lease explosion (degraded performance)
### Problem
Your Vault nodes are out of memory and unresponsive to new lease requests.
### Cause
Clients have caused a lease explosion with consistent, high-volume API requests.
<Note title="Lease explosions can lead to DoS">
Unchecked lease explosions create cascading denial-of-service issues for the
active node that can result in denial-of-service issues for the entire
cluster.
</Note>
### Solution
To resolve a lease explosion, you need to mitigate the problem to stabilize
Vault and provide space for cluster recovery, then clean up your Vault
environment.
1. Mitigate resource stress by adjusting TTL values for your Vault instance:
Config level | Parameter | Precedence
-------------------- | ---------------------- | -----------
Database plugin | `ttl` or `default_ttl` | first
Database plugin | `max_ttl` | first
AuthN/secrets plugin | `ttl` or `default_ttl` | second
AuthN/secrets plugin | `max_ttl` | second
Vault | `default_lease_ttl` | last
Vault | `max_lease_ttl` | last
**Granular TTLs on a role, group, or user level always override plugin and
system-wide TTL values**.
1. Use firewalls or load balancers to limit API calls to Vault from aberrant
clients and reduce load on the struggling cluster.
1. Once the cluster stabilizes, check the active node to determine if you can
wait for it to purge leases automatically or if you need to speed up the
process by manually revoking leases.
1. If the cluster requires manual intervention, confirm you have a recent, valid
snapshot of the cluster.
1. Once you confirm a valid snapshot of the cluster exists, use
[`vault lease revoke`](/vault/docs/commands/lease/revoke) to manually revoke
the offending leases.
<Warning title="Potentially dangerous operation">
Revoking or forcefully revoking leases is potentially a dangerous operation.
Do not proceed without a valid snapshot. If you have a valid Vault
Enterprise license, consider contacting the
[HashiCorp Customer Support team](https://support.hashicorp.com/) for help.
</Warning>
### Related tutorials
- [Troubleshoot irrevocable leases](/vault/tutorials/monitoring/troubleshoot-irrevocable-leases)
---
layout: docs
page_title: Vault Enterprise Lease Count Quotas
description: |-
Vault Enterprise features a mechanism to create lease count quotas.
---
# Lease count quotas
@include 'alerts/enterprise-only.mdx'
Vault features an extension to resource quotas that allows operators to enforce
limits on how many leases are created. For a given lease count quota, if the
number of leases in the cluster hits the configured limit, `max_leases`,
additional lease creations will be forbidden for all clients until an
operator modifies the configured limit, or a lease is revoked or expires.
Lease count quotas guard against [lease
explosions](/vault/docs/concepts/lease-explosions).
## Root tokens
It is important to note that lease count quotas do not apply to root tokens.
If the number of leases in the cluster hits the configured limit, `max_leases`,
an operator can still create a root token and access the cluster to try to
recover.
## Batch tokens
Batch token creation is blocked when the lease count quota is exceeded, but
batch tokens do not count toward the quota.
All the nodes in the Vault cluster share the lease quota rules, meaning that the
lease counters are shared, regardless of which node in the Vault cluster
receives lease generation requests. Lease quotas can be imposed across Vault's
API, or scoped down to API pertaining to specific namespaces or specific mounts.
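As a sketch (the quota names, limits, and mount path are illustrative), a
global quota and a mount-scoped override might look like this:

```shell-session
# Global quota: applies to the entire Vault API
$ vault write sys/quotas/lease-count/global-leases max_leases=200000

# Mount-scoped quota: takes precedence over the global quota for one auth mount
$ vault write sys/quotas/lease-count/webapp-logins \
    max_leases=1000 \
    path="auth/approle"
```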
## Lease count quota inheritance
A quota that is defined in the `root` namespace with no specified path is
inherited by all namespaces. This type of quota is referred to as a `global`
quota. Global quotas apply to the entire Vault API unless a more specific
(higher precedence) quota has been defined.
## Lease count quota precedence
Lease count quota precedence is dictated by highest to lowest level of
specificity. The rules are as follows:
1. Global lease count quotas are applied to all mounts and namespaces only if no
   other, more specific quota is defined.
1. Lease count quotas defined on a namespace take precedence over the global
quotas.
1. Lease count quotas defined for a mount will take precedence over global
and namespace quotas.
1. Lease count quotas defined for a specific path will take
precedence over global, namespace, and mount quotas.
1. Lease count quotas defined with a login role for a specific auth mount will
take precedence over every other quota when applying to login requests using
that auth method and the specified role.
Quota limits can be increased or decreased. If a lower precedence quota is very
restrictive, you can use this precedence model to relax the limits in one
namespace or on a specific mount. Conversely, if a lower precedence quota is
very liberal, you can use the same model to further restrict usage in a specific
namespace or mount.
## Default lease count quota
As of Vault 1.16.0, new installations of Vault Enterprise will include a default
global quota with a `max_leases` value of `300000`. This value is an
intentionally low limit, intended to prevent runaway leases in the event that no
other lease count quota is specified.
This limit will affect all new clusters with no pre-existing configuration. As
with any other quota, the default can be directly increased, decreased, or
removed using the [lease-count-quotas endpoints](/vault/api-docs/system/lease-count-quotas).
The default may also be overridden by higher precedence quotas (specified for a
namespace, mount, path, or role) as described in the [Lease count quota
precedence](#lease-count-quota-precedence) section above.
## Quota inspection
Vault also allows the inspection of the state of lease count quotas in a Vault
cluster through various
[metrics](/vault/docs/internals/telemetry/metrics/core-system#quota-metrics)
and through enabling optional audit logging.
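You can also inspect configured quotas directly over the API; for example
(the quota name here is illustrative):

```shell-session
$ vault list sys/quotas/lease-count
$ vault read sys/quotas/lease-count/webapp-logins
```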
## Lease count quota exceeded
Vault returns a `429 - Too Many Requests` response if a new lease request
violates the quota limit. For more information on this error, refer to [the
error document](/vault/docs/concepts/lease-count-quota-exceeded).
## Tutorial
Refer to [Protecting Vault with Resource
Quotas](/vault/tutorials/operations/resource-quotas) for a
step-by-step tutorial.
## API
Lease count quotas can be managed over the HTTP API. Please see
[Lease Count Quotas API](/vault/api-docs/system/lease-count-quotas) for more details.
---
layout: docs
page_title: Long-term support for Vault
description: >-
Learn the details about long-term support for Vault Enterprise.
---
# Long-term support for Vault
@include 'alerts/enterprise-only.mdx'
Long-term support (LTS) eases upgrade requirements for installations that cannot
upgrade frequently, quickly, or easily.
## LTS summary
<table>
<thead>
<tr>
<th>Question</th>
<th>Answer</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<a href="#who">Who should consider long-term support?</a>
</td>
<td>
Enterprise customers using Vault for sensitive or critical workflows.
</td>
</tr>
<tr>
<td>
<a href="#what">What is long-term support?</a>
</td>
<td>
Extended maintenance for select, major Vault Enterprise versions.
By default, HashiCorp maintains Vault Enterprise versions for one year,
which includes feature updates and critical patches. LTS extends
maintenance for an additional year with critical patches.
</td>
</tr>
<tr>
<td>
<a href="#where">Where do I enable long-term support?</a>
</td>
<td>
You do not need to download a separate binary or set a flag for long-term
support. As long as you select an LTS Vault Enterprise version when
you <a href="/vault/install">install</a> or <a href="/vault/docs/upgrading">upgrade</a> your
Vault instance, LTS is included.
</td>
</tr>
<tr>
<td>
<a href="#when">When are LTS versions released?</a>
</td>
<td>
As of Vault Enterprise 1.16, the first major release of a calendar year includes
long-term support.
</td>
</tr>
<tr>
<td>
<a href="#why">Why is there a risk to updating to a non-LTS Vault Enterprise version?</a>
</td>
<td>
If you upgrade to a non-LTS Vault Enterprise version, your Vault instance
will stop receiving critical updates when that version leaves the default
maintenance window.
</td>
</tr>
<tr>
<td>
<a href="#how">How do I update my LTS Vault Enterprise installation?</a>
</td>
<td>
Follow your existing Vault upgrade process, but allow extra time for the
possibility of transitional upgrades across multiple Vault versions.
</td>
</tr>
</tbody>
</table>
<a id="who" />
## Who should consider long-term support?
Vault upgrades are challenging, especially for sensitive or critical workflows,
extensive integrations, and large-scale deployments. Strict upgrade policies
also require significant planning, testing, and employee hours to execute
successfully.
Customers who need assurances that their current installation will receive
critical bug fixes and security patches with minimal service disruptions should
consider moving to a Vault Enterprise version with long-term support.
<a id="what" />
## What is long-term support?
Long-term support offers extended maintenance through minor releases for select,
major Vault Enterprise versions.
The standard [support period and end of life policy](https://go.hashi.co/vault-support-policy)
covers "N−2" versions, which means, at any given time, HashiCorp maintains
the current version ("N") and the two previous versions ("N−1" and "N−2").
Vault versions typically update 3 times per calendar year (CY), which means that
**standard maintenance** for a given Vault version lasts approximately 1 year.
After the first year, LTS Vault versions move from standard maintenance to
**extended maintenance** for three additional major version releases (approximately one additional year)
with patches for bugs that may cause outages and critical vulnerabilities and exposures (CVEs).
Maintenance updates | Standard maintenance | Extended maintenance
--------------------------------- | -------------------- | --------------------
Performance improvements | YES | NO
Bug fixes | YES | OUTAGE-RISK ONLY
Security patches | YES | HIGH-RISK ONLY
CVE patches | YES | YES
<a id="where" />
## Where do I enable long-term support?
You do not need to download a separate binary or set a flag for long-term
support. As long as you select an LTS Vault Enterprise version
(e.g., 1.16, 1.19) when you [install](/vault/install) or [upgrade](/vault/docs/upgrading) your
Vault instance, LTS is included.
<a id="when" />
## When are LTS versions released?
As of Vault Enterprise 1.16, the first release of a calendar year includes
long-term support.
LTS versions overlap by one year with the previous LTS version entering its
extended maintenance window when the new LTS version begins its standard
maintenance window.
<a id="why" />
## Why is there a risk to updating to a non-LTS Vault Enterprise version?
Long-term support is intended for Enterprise customers who cannot upgrade
frequently enough to stay within the standard maintenance timeline of one year.
The goal is to establish a predictable upgrade path with a longer timeline
rather than extending the lifetime for every Vault version.
Long-term support ensures your Vault Enterprise version continues to receive
critical patches for an additional three major version releases (approximately one additional year).
If you upgrade to a non-LTS version, you are moving your Vault instance to a version
that lacks extended support. Non-LTS versions stop receiving updates once they leave
the standard maintenance window.
@include 'assets/lts-upgrade-path.mdx'
Version | Expected release | Standard maintenance ends | Extended maintenance ends
------- | ---------------- | -------------------------- | ---------------------
1.19 | CY25 Q1 | CY26 Q1 (1.22 release) | CY27 Q1 (1.25 release)
1.18 | CY24 Q3 | CY25 Q3 (1.21 release) | Not provided
1.17 | CY24 Q2 | CY25 Q2 (1.20 release) | Not provided
1.16 | CY24 Q1 | CY25 Q1 (1.19 release) | CY26 Q1 (1.22 release)
If a newer version of Vault Enterprise includes features you want to take
advantage of, you have two options:
1. Wait for the next available LTS release to maintain long-term support.
1. Upgrade immediately, then upgrade to an LTS release before the standard
maintenance window expires.
<a id="how" />
## How do I upgrade my Vault Enterprise LTS installation?
You should follow your existing upgrade process for major version upgrades but
allow additional time. Upgrading from version LTS to LTS+1 translates to jumping
3 major Vault Enterprise versions, which **may** require transitional upgrades
to move through the intermediate Vault versions.
---
layout: docs
page_title: Vault Enterprise Control Groups
description: Vault Enterprise has support for Control Group Authorization.
---
# Vault Enterprise control groups
@include 'alerts/enterprise-and-hcp.mdx'
Vault Enterprise has support for Control Group Authorization. Control Groups
add additional authorization factors to be required before satisfying a request.
When a Control Group is required for a request, a limited duration response
wrapping token is returned to the user instead of the requested data. The
accessor of the response wrapping token can be passed to the authorizers
required by the control group policy. Once all authorizations are satisfied,
the wrapping token can be used to unwrap and process the original request.
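As a sketch of the workflow (the path matches the sample policies below; the
accessor and token values are placeholders), the requester and authorizers
interact like this:

```shell-session
# Requester: the controlled request returns a wrapping token instead of data
$ vault read secret/foo

# Authorizer: approve the pending request using the wrapping token accessor
$ vault write sys/control-group/authorize accessor=<wrapping_token_accessor>

# Requester: once all factors are satisfied, unwrap the original response
$ vault unwrap <wrapping_token>
```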
## Control group factors
Control Groups can verify the following factors:
- `Identity Groups` - Require an authorizer to be in a specific set of identity
groups.
### Controlled capabilities
Control group factors can be configured to trigger the control group workflow
on specific capabilities. This is done with the `controlled_capabilities` field.
If you do not specify the `controlled_capabilities` field, the factor is
checked for all operations against the specified policy path. The `controlled_capabilities`
field can differ per factor, so that different factors can be required for different
operations.
Finally, the capabilities in the `controlled_capabilities` stanza must be a subset of the
`capabilities` specified in the policy itself. For example, a policy giving only `read` access to
the path `secret/foo` cannot specify a control group factor with `list` as a controlled capability.
Please see the following section for examples using ACL policies.
## Control groups in ACL policies
Control Group requirements on paths are specified as `control_group` along
with other ACL parameters.
### Sample ACL policies
```
path "secret/foo" {
capabilities = ["read"]
control_group = {
factor "ops_manager" {
identity {
group_names = ["managers"]
approvals = 1
}
}
}
}
```
The above policy grants `read` access to `secret/foo` only after one member of
the "managers" group authorizes the request.
```
path "secret/foo" {
capabilities = ["create", "update"]
control_group = {
ttl = "4h"
factor "tech leads" {
identity {
group_names = ["managers", "leads"]
approvals = 2
}
}
factor "super users" {
identity {
group_names = ["superusers"]
approvals = 1
}
}
}
}
```
The above policy grants `create` and `update` access to `secret/foo` only after
two (2) members of the "managers" or "leads" group and one member of the "superusers"
group authorize the request. If an authorizer is a member of both the
"managers" and "superusers" group, one authorization for both factors will be
satisfied.
```
path "secret/foo" {
capabilities = ["write","read"]
control_group = {
factor "admin" {
controlled_capabilities = ["write"]
identity {
group_names = ["admin"]
approvals = 1
}
}
}
}
```
The above policy grants `read` access to `secret/foo` to anyone who has a Vault token
with this policy. It grants `write` access to `secret/foo` only after one member of the
admin group authorizes the request.
```
path "kv/*" {
capabilities = ["create", "update","delete","list","sudo"]
control_group = {
factor "admin" {
controlled_capabilities = ["delete","list","sudo"]
identity {
group_names = ["admin"]
approvals = 1
}
}
}
}
path "kv/*" {
capabilities = ["create"]
control_group = {
factor "superuser" {
identity {
group_names = ["superuser"]
approvals = 2
}
}
}
}
```
Because the second path stanza has a control group factor with no `controlled_capabilities` field,
any token with this policy will be required to get two (2) approvals from the "superuser" group before executing
any operation against `kv/*`. In addition, by virtue of the `controlled_capabilities` field in the first
path stanza, `delete`,`list`, and `sudo` operations will require an additional approval from the "admin" group.
```
path "kv/*" {
capabilities = ["read", "list", "create"]
control_group = {
controlled_capabilities = ["read"]
factor "admin" {
identity {
group_names = ["admin"]
approvals = 1
}
}
factor "superuser" {
controlled_capabilities = ["create"]
identity {
group_names = ["superuser"]
approvals = 1
}
}
}
}
```
In this case, `read` will require one admin approval and `create` will require
one superuser approval and one admin approval. `list` will require no extra approvals
from any of the control group factors, and a token with this policy will not be required
to go through the control group workflow in order to execute a list operation against `kv/*`.
## Control groups in Sentinel
Control Groups are also supported in Sentinel policies using the `controlgroup`
import. See [Sentinel Documentation](/vault/docs/enterprise/sentinel) for more
details on available properties.
### Sample Sentinel policy
```
import "time"
import "controlgroup"
control_group = func() {
numAuthzs = 0
for controlgroup.authorizations as authz {
if "managers" in authz.groups.by_name {
if time.load(authz.time).unix > time.now.unix - 3600 {
numAuthzs = numAuthzs + 1
}
}
}
if numAuthzs >= 2 {
return true
}
return false
}
main = rule {
control_group()
}
```
The above policy will reject the request unless two members of the "managers"
group have authorized the request. Additionally it verifies the authorizations
happened in the last hour.
## Tutorial
Refer to the [Control Groups](/vault/tutorials/enterprise/control-groups)
tutorial to learn how to implement dual controller authorization within your policies.
## API
Control Groups can be managed over the HTTP API. Please see
[Control Groups API](/vault/api-docs/system/control-group) for more details. | vault | layout docs page title Vault Enterprise Control Groups description Vault Enterprise has support for Control Group Authorization Vault Enterprise control groups include alerts enterprise and hcp mdx Vault Enterprise has support for Control Group Authorization Control Groups add additional authorization factors to be required before satisfying a request When a Control Group is required for a request a limited duration response wrapping token is returned to the user instead of the requested data The accessor of the response wrapping token can be passed to the authorizers required by the control group policy Once all authorizations are satisfied the wrapping token can be used to unwrap and process the original request Control group factors Control Groups can verify the following factors Identity Groups Require an authorizer to be in a specific set of identity groups Controlled capabilities Control group factors can be configured to trigger the control group workflow on specific capabilities This is done with the controlled capabilities field Not specifying the controlled capabilities field will necessitate the factor to be checked for all operations to the specified policy path The controlled capabilities field can differ per factor so that different factors can be required for different operations Finally the capabilities in the controlled capabilities stanza must be a subset of the capabilities specified in the policy itself For example a policy giving only read access to the path secret foo cannot specify a control group factor with list as a controlled capability Please see the following section for examples using ACL policies Control groups in ACL policies Control Group requirements on paths are specified as control group along with other ACL parameters Sample ACL policies path secret foo capabilities read control group factor ops manager identity group names managers approvals 1 The above policy grants read access to secret foo only after one member of the managers group authorizes the request path secret foo capabilities create update control group ttl 4h factor tech leads identity group names managers leads approvals 2 factor super users identity group names superusers approvals 1 The above policy grants create and update access to secret foo only after two 2 members of the managers or leads group and one member of the superusers group authorizes the request If an authorizer is a member of both the managers and superusers group one authorization for both factors will be satisfied path secret foo capabilities write read control group factor admin controlled capabilities write identity group names admin approvals 1 The above policy grants read access to secret foo for anyone that has a vault token with this policy It grants write access to secret foo only after one member from the admin group authorizes the request path kv capabilities create update delete list sudo control group factor admin controlled capabilities delete list sudo identity group names admin approvals 1 path kv capabilities create control group factor superuser identity group names superuser approvals 2 Because the second path stanza has a control group factor with no controlled capabilities field any token with this policy will be required to get two 2 approvals from the superuser group before executing any operation against kv In addition by virtue of the controlled capabilities field in the first path stanza delete list and sudo 
operations will require an additional approval from the admin group path kv capabilities read list create control group controlled capabilities read factor admin identity group names admin approvals 1 factor superuser controlled capabilities create identity group names superuser approvals 1 In this case read will require one admin approval and create will require one superuser approval and one admin approval List will require no extra approvals from any of the control group factors and a token with this policy will not be required to go through the control group workflow in order to execute a read operation against kv Control groups in Sentinel Control Groups are also supported in Sentinel policies using the controlgroup import See Sentinel Documentation vault docs enterprise sentinel for more details on available properties Sample Sentinel policy import time import controlgroup control group func numAuthzs 0 for controlgroup authorizations as authz if managers in authz groups by name if time load authz time unix time now unix 3600 numAuthzs numAuthzs 1 if numAuthzs 2 return true return false main rule control group The above policy will reject the request unless two members of the managers group have authorized the request Additionally it verifies the authorizations happened in the last hour Tutorial Refer to the Control Groups vault tutorials enterprise control groups tutorial to learn how to implement dual controller authorization within your policies API Control Groups can be managed over the HTTP API Please see Control Groups API vault api docs system control group for more details |
---
layout: docs
page_title: Vault Enterprise Eventual Consistency
description: Vault Enterprise Consistency Model
---
# Vault eventual consistency
@include 'alerts/enterprise-and-hcp.mdx'
When running in a cluster, Vault has an eventual consistency model.
Only one node (the leader) can write to Vault's storage.
Users generally expect read-after-write consistency: in other
words, after writing foo=1, a subsequent read of foo should return 1. Depending
on the Vault configuration this isn't always the case. When using performance
standbys with Integrated Storage, or when using performance replication,
there are some sequences of operations that don't always yield read-after-write
consistency.
## Performance standby nodes
When using the Integrated Storage backend without performance standbys, only
a single Vault node (the active node) handles requests. Requests sent to
regular standbys are handled by forwarding them to the active node. This
configuration gives Vault the same behavior as the default Consul consistency model.
When using the Integrated Storage backend with performance standbys, both the
active node and performance standbys can handle requests. If a performance standby
handles a login request, or a request that generates a dynamic secret, the
performance standby will issue a remote procedure call (RPC) to the active node to store the token
and/or lease. If the performance standby handles any other request that
results in a storage write, it will forward that request to the active node
in the same way a regular standby forwards all requests.
With Integrated Storage, all writes occur on the active node, which then issues
RPCs to update the local storage on every other node. Between when the active
node writes the data to its local disk, and when those RPCs are handled on the
other nodes to write the data to their local disks, those nodes present a stale
view of the data.
As a result, even if you're always talking to the same performance standby,
you may not get read-after-write semantics. The write gets sent to the active
node, and if the subsequent read request occurs before the new data gets sent
to the node handling the read request, the read request won't be able to take
the write into account because the new data isn't present on that node yet.
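To make the race concrete, here is a hypothetical sequence against a single
performance standby (a sketch; `secret/foo` is an illustrative KV path):
```shell-session
$ vault kv put secret/foo value=1   # forwarded to the active node for the write
$ vault kv get secret/foo           # served locally; may still return the old value
```
If the read runs before the write has been shipped back to the standby, it sees
the stale copy.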
## Performance replication
A similar phenomenon occurs when using performance replication. One example
of how this manifests is when using shared mounts. If a KV secrets engine
is mounted on the primary with `local=false`, it will exist on the secondary
cluster as well. The secondary cluster can handle requests to that mount,
though as with performance standbys, write requests must be forwarded - in
this case to the primary active node. Once data is written to the primary cluster,
it won't be visible on the secondary cluster until the data has been replicated
from the primary. Therefore, on the secondary cluster, it initially appears as if
the data write hasn't happened.
If the secondary cluster is using Integrated Storage, and the read request is
being handled on one of its performance standbys, the problem is exacerbated because it
has to be sent first from the primary active node to the secondary active node,
and then from there to the secondary performance standby, each of which can
introduce their own form of lag.
Even without shared secret engines, stale reads can still happen with performance
replication. The Identity subsystem aims to provide a view of entities and
groups that spans clusters. As such, when logging in to a secondary cluster
using a shared mount, Vault tries to generate an entity and alias if they don't
already exist, and these must be stored on the primary using an RPC. Something
similar happens with groups.
## Clock skew and replication lag
As seen above, both performance standbys and replication secondaries can lag
behind the active node or the primary. As of Vault 1.17, it's possible to get
some insight into that lag using `sys/health`, `sys/ha-status`, and the replication
status endpoints.
Secondaries and standbys regularly issue an "echo" heartbeat RPC to their upstream
source. This heartbeat serves many purposes, one of them being to get a rough
idea of whether the clocks of the client and server are in sync. The server
response to the heartbeat RPC includes the server's local clock time, and the
client takes the delta in milliseconds between that time and the client's local
clock time to compute the `clock_skew_ms` field. No effort is made to factor into
that field the time it took to actually perform the RPC, though that information
is made available as the `last_heartbeat_duration_ms` field. In other words, the
reported clock skew has an uncertainty of up to `last_heartbeat_duration_ms`.
Vault assumes that clocks are synced across all nodes in a cluster, and if they
aren't, problems may arise, e.g. one node may think that a lease has expired
while another node doesn't yet. Some community-supported storage backends may have further
problems relating to HA mode.
There are fewer problems expected when clock skew exists between a replication primary
and secondary. However, one known issue is that the replication lag canary discussed
next will produce surprising values if clocks aren't synced between the clusters.
Non-secondary active nodes periodically write a small record to storage containing the
local clock time for that node. Replication secondaries read that record and compare
it to their local clock time, calling the delta the `replication_primary_canary_age_ms`,
which is exposed in the replication status endpoints. Performance standbys do the same
computation, exposing `replication_primary_canary_age_ms` in the `sys/health` and
`sys/ha-status` endpoints. Performance standbys and replication secondaries include
their current `replication_primary_canary_age_ms` as part of their payload for the
aforementioned "echo" heartbeat RPCs they issue, allowing the active node or primary
cluster to report on the lag seen by their downstream clients.
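Both values can be inspected through the endpoints named above. For example (a
sketch; the exact fields present depend on the node's role and Vault version):
```shell-session
$ curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    $VAULT_ADDR/v1/sys/ha-status | \
    jq '.nodes[] | {hostname, clock_skew_ms, replication_primary_canary_age_ms}'
```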
## Mitigations
There has long been a partial mitigation for the above problems. When writing
data via RPC, e.g. when a performance standby registers tokens and leases on the
active node after a login or generating a dynamic secret, part of the response
includes a number known as the "WAL index", aka Write-Ahead Log index.
A full explanation of this is outside the scope of this document, but the short
version is that both performance replication and performance standbys use log
shipping to stay in sync with the upstream source of writes. The mitigation
historically used by nodes doing writes via RPC is to look at the WAL index in
the response and wait up to 2 seconds to see if that WAL index appears in the
logs being shipped from upstream. Once the WAL index is seen, the Vault node
handling the request that resulted in RPCs can return its own response to the
client: it knows that any subsequent reads will be able to see the value that
was just written. If the WAL index isn't seen within those 2 seconds, the Vault
node completes the request anyway, returning a warning in the response.
This mitigation option still exists in Vault 1.7, though now there is a
configuration option to adjust the wait time:
[best_effort_wal_wait_duration](/vault/docs/configuration/replication).
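For example, a minimal sketch of widening that window in the server
configuration file, assuming the `replication` stanza from the linked
configuration page (the `5s` value is illustrative):
```hcl
replication {
  best_effort_wal_wait_duration = "5s"
}
```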
## Vault 1.7 mitigations
There are now a variety of other mitigations available:
- per-request option to always forward the request to the active node
- per-request option to conditionally forward the request to the active node
if it would otherwise result in a stale read
- per-request option to fail requests if they might result in a stale read
- Vault Proxy configuration to do the above for proxied requests
The remainder of this document describes the tradeoffs of these mitigations and
how to use them.
Note that any headers requesting forwarding are disabled by default, and must
be enabled using [allow_forwarding_via_header](/vault/docs/configuration/replication).
### Unconditional forwarding (Performance standbys only)
The simplest solution to never experience stale reads from a performance standby
is to provide the following HTTP header in the request:
```
X-Vault-Forward: active-node
```
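For example, with curl (the path is illustrative):
```shell-session
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-Forward: active-node" \
    $VAULT_ADDR/v1/secret/data/foo
```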
The drawback here is that if all your requests are forwarded to the active node,
you might as well not be using performance standbys. So this mitigation only
makes sense to use selectively.
This mitigation will not help with stale reads relating to performance replication.
### Conditional forwarding (Performance standbys only)
As of Vault Enterprise 1.7, all requests that modify storage now return a new
HTTP response header:
```
X-Vault-Index: <base64 value>
```
To ensure that the state resulting from that write request is visible to a
subsequent request, add these headers to that second request:
```
X-Vault-Index: <base64 value taken from previous response>
X-Vault-Inconsistent: forward-active-node
```
The effect will be that the node handling the request will look at the state
it has locally, and if it doesn't contain the state described by the X-Vault-Index
header, the node will forward the request to the active node.
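A sketch of the two requests with curl (paths illustrative; `--include` prints
the response headers so the index can be captured):
```shell-session
$ curl --include --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST --data '{"data":{"foo":"bar"}}' \
    $VAULT_ADDR/v1/secret/data/foo
...
X-Vault-Index: <base64 value>
...
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-Index: <base64 value from the write>" \
    --header "X-Vault-Inconsistent: forward-active-node" \
    $VAULT_ADDR/v1/secret/data/foo
```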
The drawback here is that when requests are forwarded to the active node,
performance standbys provide less value. If this happens often enough
the active node can become a bottleneck, limiting the horizontal read scalability
performance standbys are intended to provide.
### Retry stale requests
As of Vault Enterprise 1.7, all requests that modify storage now return a new
HTTP response header:
```
X-Vault-Index: <base64 value>
```
To ensure that the state resulting from that write request is visible to a
subsequent request, add this header to that second request:
```
X-Vault-Index: <base64 value taken from previous response>
```
When the desired state isn't present, Vault will return a failure response with
HTTP status code 412. This tells the client that it should retry the request.
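A sketch of a minimal client-side retry loop (path illustrative; `curl --fail`
exits non-zero on the 412, so the loop retries; a real client should cap
attempts and back off):
```shell-session
$ until curl --fail --silent \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --header "X-Vault-Index: <base64 value from the write>" \
    $VAULT_ADDR/v1/secret/data/foo; do sleep 1; done
```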
The advantage over the Conditional Forwarding solution above is twofold:
first, there's no additional load on the active node. Second, this solution
is applicable to performance replication as well as performance standbys.
The Vault Go API will now automatically retry 412s, and provides convenience
methods for propagating the X-Vault-Index response header into the request
header of subsequent requests. Those not using the Vault Go API will want
to build equivalent functionality into their client library.
### Vault proxy and consistency headers
When configured, the [Vault API Proxy](/vault/docs/agent-and-proxy/proxy/apiproxy) proxies incoming requests to Vault.
Configuration in the `api_proxy` stanza lets the Proxy apply some of the above
mitigations without modifying clients.
By setting `enforce_consistency="always"`, Proxy will always provide
the `X-Vault-Index` consistency header. The value it uses for the header
will be based on the responses that have passed through the Proxy previously.
The option `when_inconsistent` controls how stale reads are prevented (see the sketch after this list):
- `"fail"` means that when a `412` response is seen, it is returned to the client
- `"retry"` means that `412` responses will be retried automatically by Proxy,
so the client doesn't have to deal with them
- `"forward"` makes Proxy provide the
`X-Vault-Inconsistent: forward-active-node` header as described above under
Conditional Forwarding
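Putting those options together, a minimal sketch of the relevant Proxy
configuration (values illustrative):
```hcl
api_proxy {
  enforce_consistency = "always"
  when_inconsistent   = "retry"
}
```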
## Vault 1.10 mitigations
In Vault 1.10, the token format changed: service tokens now employ server-side consistency.
This means that by default, requests made
to nodes that cannot support read-after-write consistency, because they lack
the WAL index needed to check Vault tokens locally, will return
a 412 status code. The Vault Go API automatically retries when receiving 412s, so
unless there is a considerable replication delay, users will experience
read-after-write consistency.
The replication option [allow_forwarding_via_token](/vault/docs/configuration/replication)
can be used to force requests that would otherwise return 412s in the
aforementioned way to be forwarded to the active node instead.
Refer to the [Server Side Consistent Token FAQ](/vault/docs/faq/ssct) for details.
## Client API helpers
There are some new helpers in the `api` package to work with the new headers.
`WithRequestCallbacks` and `WithResponseCallbacks` create a shallow clone of
the client and populate it with the given callbacks. `RecordState` and
`RequireState` are used to store the response header from one request and
provide it in a subsequent request. For example:
```go
client, err := api.NewClient(api.DefaultConfig()) // DefaultConfig is a function; NewClient also returns an error
if err != nil {
	// handle error
}
var state string
_, err = client.WithResponseCallbacks(api.RecordState(&state)).Logical().Write(path, data)
secret, err := client.WithRequestCallbacks(api.RequireState(state)).Logical().Read(path)
```
This will retry the `Read` until the data stored by the `Write` is present.
There are also callbacks to use forwarding: `ForwardInconsistent` and
`ForwardAlways`.
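A sketch of using those callbacks, building on the client and `state` from the
previous example (and assuming the same `Logical()` access as above):
```go
// Require the recorded state, but forward to the active node instead of
// failing if this node can't serve it yet.
secret, err := client.WithRequestCallbacks(
	api.RequireState(state),
	api.ForwardInconsistent(),
).Logical().Read(path)
```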
---
layout: docs
page_title: Vault Enterprise Seal Wrap
description: |-
Vault Enterprise features a mechanism to wrap values with an extra layer of
encryption for supporting seals.
---
# Seal wrap
@include 'alerts/enterprise-and-hcp.mdx'
Vault Enterprise features a mechanism to wrap values with an extra layer of
encryption for supporting [seals](/vault/docs/configuration/seal). This adds an
extra layer of protection and is useful in some compliance and regulatory
environments, including FIPS 140-2 environments.
To use this feature, you must have an active or trial license for Vault
Enterprise Plus (HSMs). To start a trial, contact [HashiCorp
sales](mailto:[email protected]).
## Seal Wrap benefits
Your Vault deployments can gain the following benefits by enabling seal wrapping:
- Conformance with FIPS 140-2 directives on Key Storage and Key Transport as [certified by Leidos](/vault/docs/enterprise/sealwrap#fips-140-2-compliance)
- Supports FIPS level of security equal to HSM
- For example, if you use Level 3 hardware encryption on an HSM, Vault will be
using FIPS 140-2 Level 3 cryptography
- Enables Vault deployments in high security [GRC](https://en.wikipedia.org/wiki/Governance,_risk_management,_and_compliance)
environments (e.g. PCI-DSS, HIPAA) where FIPS guidelines are important for external audits
- Pathway to use Vault for managing Department of Defense (DOD) or North
Atlantic Treaty Organization (NATO) military secrets
## Enabling/Disabling
Seal Wrap is enabled by default on supporting seals. This implies that the seal
must be available throughout Vault's runtime. Most cloud-based seals should be
quite reliable, but, for instance, if you use an HSM in a non-HA setup, a
connection interruption to the HSM will result in issues with Vault
functionality.
<Tip>
Having Vault generate its own key is the easiest way to get up and running, but for security, Vault marks the key as non-exportable. If your HSM key backup strategy requires the key to be exportable, you should generate the key yourself. Refer to the [key generation attributes](/vault/docs/configuration/seal/pkcs11#vault-key-generation-attributes).
</Tip>
To disable seal wrapping, set `disable_sealwrap = true` in Vault's
[configuration file][configuration]. This will not affect auto-unsealing functionality; Vault's
root key will still be protected by the seal wrapping mechanism. It will
simply prevent other storage entries within Vault from being seal wrapped.
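For example, in the configuration file:
```hcl
disable_sealwrap = true
```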
_N.B._: This is a lazy downgrade; as keys are accessed or written their seal
wrapping status will change. Similarly, if the flag is removed, it will be a
lazy upgrade (which is the case when initially upgrading to a seal
wrap-supporting version of Vault).
## Activating seal wrapping
For some values, seal wrapping is always enabled with a supporting seal. This
includes the recovery key, any stored key shares, the root key, the keyring,
and more; essentially, any Critical Security Parameter (CSP) within Vault's
core. If upgrading from a version of Vault that did not support seal wrapping,
the next time these values are read they will be seal-wrapped and stored.
Backend mounts within Vault can also take advantage of seal wrapping. Seal
wrapping can be activated at mount time for a given mount by mounting the
backend with the `seal_wrap` configuration value set to `true`. (This value
cannot currently be changed later.)
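For example, the CLI exposes this as the `-seal-wrap` flag at enable time (a
sketch; `transit` is an illustrative engine):
```shell-session
$ vault secrets enable -seal-wrap transit
```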
A given backend's author can specify which values should be seal-wrapped by
identifying where CSPs are stored. They may also choose to seal wrap all or none
of their values.
Note that it is often an order of magnitude or two slower to write to and read
from HSMs or remote seals. However, values will be cached in memory
un-seal-wrapped (but still encrypted by Vault's built-in cryptographic barrier)
in Vault, which will mitigate this for read-heavy workloads.
## Seal wrap and replication
Seal wrapping takes place below the replication logic. As a result, it is
transparent to replication. Replication will convey which values should be
seal-wrapped, but it is up to the seal on the local cluster to implement it.
In practice, this means that seal wrapping can be used without needing to have
the replicated keys on both ends of the connection; each cluster can have
distinct keys in an HSM or in KMS.
In addition, it is possible to replicate from a Shamir-protected primary
cluster to clusters that use HSMs when seal wrapping is required in downstream
datacenters but not in the primary.
## Wrapped parameters
Each plugin (whether secret or auth) maintains control over which of its
parameters are seal wrapped. These are usually just a few core values, as
seal wrapping does incur some performance overhead.
Some examples of places where seal wrapping is used include:
- The [LDAP](/vault/docs/auth/ldap), [RADIUS](/vault/docs/auth/radius),
  [Okta](/vault/docs/auth/okta), and [AWS](/vault/docs/auth/aws) auth methods,
  for storing their config.
- [PKI](/vault/docs/secrets/pki) for storing the issuers and their keys.
- [SSH](/vault/docs/secrets/ssh) for storing the CA's keys.
- [KMIP](/vault/docs/secrets/kmip) for storing managed objects (externally-provided
  keys) and its CA keys.
- [Transit](/vault/docs/secrets/transit) for storing keys and their policy.
## FIPS status
See the [FIPS-specific Seal Wrap documentation](/vault/docs/enterprise/fips/sealwrap)
for more information about using Seal Wrapping to achieve FIPS 140-2 compliance.
Note that there are additional [FIPS considerations](/vault/docs/enterprise/sealwrap#seal-wrap-and-replication)
regarding Seal Wrap usage and Vault Replication.
[configuration]: /vault/docs/configuration
---
layout: docs
page_title: Automated license utilization reporting
description: >-
Learn what data HashiCorp collects to meter Enterprise license utilization. Enable or disable reporting. Review sample payloads and logs.
---
# Automated license utilization reporting
@include 'alerts/enterprise-only.mdx'
Automated license utilization reporting sends license utilization data to
HashiCorp without requiring you to manually collect and report it. It also
lets you review your license usage with the monitoring solution you already use
(for example Splunk, Datadog, or others) so you can optimize and manage your
deployments. Use these reports to understand how much more you can deploy under
your current contract, protect against overutilization, and budget for predicted
consumption.
Automated reporting shares the minimum data required to validate license
utilization as defined in our contracts. The reports consist mostly of computed
metrics and never contain Personally Identifiable Information (PII) or other
sensitive information. Automated reporting shares the data with HashiCorp using
a secure, unidirectional HTTPS API and makes an auditable record in the product
logs each time it submits a report. The reporting process is GDPR compliant and submits
reports roughly once every 24 hours.
## Enable automated reporting
To enable automated reporting, you need to make sure that outbound network
traffic is configured correctly and upgrade your enterprise product to a version
that supports it. If your installation is air-gapped or network settings are not
in place, automated reporting will not work.
### 1. Allow outbound HTTPS traffic on port 443
Make sure that your network allows HTTPS egress on port 443 to
https://reporting.hashicorp.services by allow-listing the following IP
addresses:
- 100.20.70.12
- 35.166.5.222
- 23.95.85.111
- 44.215.244.1
### 2. Upgrade
Upgrade to a release that supports license utilization reporting. These
releases include:
- [Vault Enterprise 1.14.0](https://releases.hashicorp.com/vault/) and later
- [Vault Enterprise 1.13.4](https://releases.hashicorp.com/vault/) and later 1.13.x versions
- [Vault Enterprise 1.12.8](https://releases.hashicorp.com/vault/) and later 1.12.x versions
- [Vault Enterprise 1.11.12](https://releases.hashicorp.com/vault/)
### 3. Check logs
Automated license utilization reporting will start sending data within roughly 24 hours.
Check the server logs for records showing that the data was sent successfully.
You will find log entries similar to the following:
<CodeBlockConfig hideClipboard>
```
[DEBUG] core.reporting: beginning snapshot export
[DEBUG] core.reporting: creating payload
[DEBUG] core.reporting: marshalling payload to json
[DEBUG] core.reporting: generating authentication headers
[DEBUG] core.reporting: creating request
[DEBUG] core.reporting: sending request
[DEBUG] core.reporting: performing request: method=POST url=https://reporting.hashicorp.services
[DEBUG] core.reporting: recording audit record
[INFO] core.reporting: Report sent: auditRecord="{\"payload\":{\"payload_version\":\"1\",\"license_id\":\"97afe7b4-b9c8-bf19-bf35-b89b5cc0efea\",\"product\":\"vault\",\"product_version\":\"1.14.0-rc1+ent\",\"export_timestamp\":\"2023-06-01T09:34:44.215133-04:00\",\"snapshots\":[{\"snapshot_version\":1,\"snapshot_id\":\"0001J7H7KMEDRXKM5C1QJGBXV3\",\"process_id\":\"01H1T45CZK2GN9WR22863W2K32\",\"timestamp\":\"2023-06-01T09:34:44.215001-04:00\",\"schema_version\":\"1.0.0\",\"service\":\"vault\",\"metrics\":{\"clientcount.current_month_estimate\":{\"key\":\"clientcount.current_month_estimate\",\"kind\":\"sum\",\"mode\":\"write\",\"labels\":{\"type\":{\"entity\":20,\"nonentity\":11}}},\"clientcount.previous_month_complete\":{\"key\":\"clientcount.previous_month_complete\",\"kind\":\"sum\",\"mode\":\"write\",\"labels\":{\"type\":{\"entity\":10,\"nonentity\":11}}}}}],\"metadata\":{\"vault\":{\"billing_start\":\"2023-03-01T00:00:00Z\",\"cluster_id\":\"a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd\"}}}}"
[DEBUG] core.reporting: completed recording audit record
[DEBUG] core.reporting: export finished successfully
```
</CodeBlockConfig>
If your installation is air-gapped or your network doesn’t allow the correct
egress, logs will show an error.
<CodeBlockConfig hideClipboard>
```
[DEBUG] core.reporting: beginning snapshot export
[DEBUG] core.reporting: creating payload
[DEBUG] core.reporting: marshalling payload to json
[DEBUG] core.reporting: generating authentication headers
[DEBUG] core.reporting: creating request
[DEBUG] core.reporting: sending request
[DEBUG] core.reporting: performing request: method=POST url=https://reporting.hashicorp.services
[DEBUG] core.reporting: error status code received: statusCode=403
```
</CodeBlockConfig>
In this case, reconfigure your network to allow egress and check back in 24
hours.
## Opt out
If your installation is air-gapped or you want to manually collect and report on
the same license utilization metrics, you can opt out of automated reporting.
Manually reporting these metrics can be time-consuming. Opting out of automated
reporting does not mean that you also opt out from sending license utilization
metrics. Customers who opt out of automated reporting will still be required to
manually collect and send license utilization metrics to HashiCorp.
If you are considering opting out because you’re worried about the data, we
strongly recommend that you review the [example payloads](#example-payloads)
before opting out. If you have concerns with any of the automatically-reported
data please bring them to your account manager.
You have two options to opt out of automated reporting:
- HCL configuration (recommended)
- Environment variable (requires restart)
#### HCL configuration
Opting out in your product’s configuration file doesn’t require a system
restart, and is the method we recommend. Add the following block to your server
configuration file (e.g. `vault-config.hcl`).
```hcl
reporting {
license {
enabled = false
}
}
```
<Warning>
When you run Vault as a cluster, each node must include the reporting stanza
in its configuration so the nodes stay consistent. After a leadership change,
the new active node uses its own server configuration to determine whether to
opt out of automated reporting, so inconsistent configuration between nodes
will change the reporting status upon active unseal.
</Warning>
You will find the following entries in the server log.
<CodeBlockConfig hideClipboard>
```
[DEBUG] core: reloading automated reporting
[INFO] core: opting out of automated reporting
[DEBUG] activity: there is no reporting agent configured, skipping counts reporting
```
</CodeBlockConfig>
#### Environment variable
If you need to, you can also opt out using an environment variable, which will
provide a startup message confirming that you have disabled automated reporting.
This option requires a system restart.
<Note>
If the reporting stanza exists in the configuration file, the
`OPTOUT_LICENSE_REPORTING` value overrides the configuration.
</Note>
Set the following environment variable.
```shell-session
$ export OPTOUT_LICENSE_REPORTING=true
```
Now, restart your [Vault servers](/vault/docs/commands/server) from the shell
where you set the environment variable.
You will find the following entries in the server log.
<CodeBlockConfig hideClipboard>
```
[INFO] core: automated reporting disabled via environment variable: env=OPTOUT_LICENSE_REPORTING
[INFO] core: opting out of automated reporting
[DEBUG] activity: there is no reporting agent configured, skipping counts reporting
```
</CodeBlockConfig>
Check your product logs roughly 24 hours after opting out to make sure that the system
isn’t trying to send reports.
If your configuration file and environment variable differ, the environment
variable setting will take precedence.
## Example payloads
HashiCorp collects the following utilization data as JSON payloads:
- `payload_version` - The version of this payload schema
- `license_id` - The license ID for this product
- `product` - The product that this contribution is for
- `product_version` - The product version this contribution is for
- `export_timestamp`- The date and time for this contribution
- `snapshots` - An array of snapshot details. A snapshot is a structure that
represents a single data collection
- `snapshot_version` - The version of the snapshot package that produced this
snapshot
- `snapshot_id` - A unique identifier for this particular snapshot
- `process_id` - An identifier for the system that produced this snapshot
- `timestamp` - The date and time for this snapshot
- `schema_version` - The version of the schema associated with this snapshot
- `service` - The service that produced this snapshot (likely to be product
name)
- `metrics` - A map of representations of snapshot metrics contained within
this snapshot
- `key` - The key name associated with this metric
- `kind` - The kind of metric (feature, counter, sum, or mean)
- `mode` - The mode of operation associated with this metric (write or
collect)
- `labels` - The labels associated with each collected metric
- `entity` - The sum of tokens generated for a unique client identifier
- `nonentity` - The sum of tokens without an entity attached
- `metadata` - Optional product-specific metadata
- `billing_start` - The billing start date associated with the reporting cluster (license start date if not configured).
<Note title="Important change to supported versions">
As of Vault 1.16.7, 1.17.3, and later,
the <a href="/vault/docs/concepts/billing-start-date">billing start date</a> automatically
rolls over to the latest billing year at the end of the last cycle.
For more information, refer to the upgrade guide for your Vault version:
[Vault v1.16.x](/vault/docs/upgrading/upgrade-to-1.16.x#auto-rolled-billing-start-date),
[Vault v1.17.x](/vault/docs/upgrading/upgrade-to-1.17.x#auto-rolled-billing-start-date)
</Note>
- `cluster_id` - The cluster UUID as shown by `vault status` on the reporting
cluster
<CodeBlockConfig hideClipboard>
```json
{
"payload_version": "1",
"license_id": "97afe7b4-b9c8-bf19-bf35-b89b5cc0efea",
"product": "vault",
"product_version": "1.14.0-rc1+ent",
"export_timestamp": "2023-06-01T11:39:00.76643-04:00",
"snapshots": [
{
"snapshot_version": 1,
"snapshot_id": "0001J7HEWM1PEHPMF5YZT8EV65",
"process_id": "01H1VSQMNYAP77R566F1Y03GE6",
"timestamp": "2023-06-01T11:39:00.766099-04:00",
"schema_version": "1.0.0",
"service": "vault",
"metrics": {
"clientcount.current_month_estimate": {
"key": "clientcount.current_month_estimate",
"kind": "sum",
"mode": "write",
"labels": {
"type": {
"entity": 20,
"nonentity": 11
}
}
},
"clientcount.previous_month_complete": {
"key": "clientcount.previous_month_complete",
"kind": "sum",
"mode": "write",
"labels": {
"type": {
"entity": 10,
"nonentity": 11
}
}
}
}
}
],
"metadata": {
"vault": {
"billing_start": "2023-03-01T00:00:00Z",
"cluster_id": "a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd"
}
}
}
```
</CodeBlockConfig>
## Pre-1.9 counts
When upgrading Vault from 1.8 (or earlier) to 1.9 (or later), utilization reporting will only include the [non-entity tokens](/vault/docs/concepts/client-count#non-entity-tokens) that are used after the upgrade.
Starting in Vault 1.9, the activity log records and de-duplicates non-entity tokens by using the namespace and token's policies to generate a unique identifier. Because Vault did not create identifiers for these tokens before 1.9, the activity log cannot know whether this token has been seen pre-1.9. To prevent inaccurate and inflated counts, the activity log will ignore any counts of non-entity tokens that were created before the upgrade and only the non-entity tokens from versions 1.9 and later will be counted.
See the client count [overview](/vault/docs/concepts/client-count) and [FAQ](/vault/docs/concepts/client-count/faq) for more information.
| vault | layout docs page title Automated license utilization reporting description Learn what data HashiCorp collects to meter Enterprise license utilization Enable or disable reporting Review sample payloads and logs Automated license utilization reporting include alerts enterprise only mdx Automated license utilization reporting sends license utilization data to HashiCorp without requiring you to manually collect and report them It also lets you review your license usage with the monitoring solution you already use for example Splunk Datadog or others so you can optimize and manage your deployments Use these reports to understand how much more you can deploy under your current contract protect against overutilization and budget for predicted consumption Automated reporting shares the minimum data required to validate license utilization as defined in our contracts They consist of mostly computed metrics and will never contain Personal Identifiable Information PII or other sensitive information Automated reporting shares the data with HashiCorp using a secure unidirectional HTTPS API and makes an auditable record in the product logs each time it submits a report The reporting process is GDPR compliant and submits reports roughly once every 24 hours Enable automated reporting To enable automated reporting you need to make sure that outbound network traffic is configured correctly and upgrade your enterprise product to a version that supports it If your installation is air gapped or network settings are not in place automated reporting will not work 1 Allow outbound HTTPS traffic on port 443 Make sure that your network allows HTTPS egress on port 443 to https reporting hashicorp services by allow listing the following IP addresses 100 20 70 12 35 166 5 222 23 95 85 111 44 215 244 1 2 Upgrade Upgrade to a release that supports license utilization reporting These releases include Vault Enterprise 1 14 0 https releases hashicorp com vault and later Vault Enterprise 1 13 4 https releases hashicorp com vault and later 1 13 x versions Vault Enterprise 1 12 8 https releases hashicorp com vault and later 1 12 x versions Vault Enterprise 1 11 12 https releases hashicorp com vault 3 Check logs Automatic license utilization reporting will start sending data within roughly 24 hours Check the server logs for records that the data sent successfully You will find log entries similar to the following CodeBlockConfig hideClipboard DEBUG core reporting beginning snapshot export DEBUG core reporting creating payload DEBUG core reporting marshalling payload to json DEBUG core reporting generating authentication headers DEBUG core reporting creating request DEBUG core reporting sending request DEBUG core reporting performing request method POST url https reporting hashicorp services DEBUG core reporting recording audit record INFO core reporting Report sent auditRecord payload payload version 1 license id 97afe7b4 b9c8 bf19 bf35 b89b5cc0efea product vault product version 1 14 0 rc1 ent export timestamp 2023 06 01T09 34 44 215133 04 00 snapshots snapshot version 1 snapshot id 0001J7H7KMEDRXKM5C1QJGBXV3 process id 01H1T45CZK2GN9WR22863W2K32 timestamp 2023 06 01T09 34 44 215001 04 00 schema version 1 0 0 service vault metrics clientcount current month estimate key clientcount current month estimate kind sum mode write labels type entity 20 nonentity 11 clientcount previous month complete key clientcount previous month complete kind sum mode write labels type entity 10 nonentity 11 metadata vault billing start 
...2023-03-01T00:00:00Z cluster_id=a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd
[DEBUG] core.reporting: completed recording audit record
[DEBUG] core.reporting: export finished successfully
```

</CodeBlockConfig>

If your installation is air-gapped or your network doesn't allow the correct egress, logs will show an error.

<CodeBlockConfig hideClipboard>

```
[DEBUG] core.reporting: beginning snapshot export
[DEBUG] core.reporting: creating payload
[DEBUG] core.reporting: marshalling payload to json
[DEBUG] core.reporting: generating authentication headers
[DEBUG] core.reporting: creating request
[DEBUG] core.reporting: sending request
[DEBUG] core.reporting: performing request: method=POST url=https://reporting.hashicorp.services
[DEBUG] core.reporting: error status code received: statusCode=403
```

</CodeBlockConfig>

In this case, reconfigure your network to allow egress and check back in 24 hours.

## Opt out

If your installation is air-gapped, or you want to manually collect and report on the same license utilization metrics, you can opt out of automated reporting.

Manually reporting these metrics can be time consuming. Opting out of automated reporting does not mean that you also opt out from sending license utilization metrics. Customers who opt out of automated reporting will still be required to manually collect and send license utilization metrics to HashiCorp.

If you are considering opting out because you're worried about the data, we strongly recommend that you review the [example payloads](#example-payloads) before opting out. If you have concerns with any of the automatically reported data, please bring them to your account manager.

You have two options to opt out of automated reporting:

- HCL configuration (recommended)
- Environment variable (requires restart)

### HCL configuration

Opting out in your product's configuration file doesn't require a system restart, and is the method we recommend. Add the following block to your server configuration file (e.g. `vault-config.hcl`).

```hcl
reporting {
  license {
    enabled = false
  }
}
```

<Warning>

When you have a cluster, each node must have the reporting stanza in its configuration to be consistent. In the event of a leadership change, nodes will use their server configuration to determine whether or not to opt out of automated reporting. Inconsistent configuration between nodes will change the reporting status upon active unseal.

</Warning>

You will find the following entries in the server log.

<CodeBlockConfig hideClipboard>

```
[DEBUG] core: reloading automated reporting
[INFO]  core: opting out of automated reporting
[DEBUG] activity: there is no reporting agent configured, skipping counts reporting
```

</CodeBlockConfig>

### Environment variable

If you need to, you can also opt out using an environment variable, which will provide a startup message confirming that you have disabled automated reporting. This option requires a system restart.

<Note>

If the reporting stanza exists in the configuration file, the `OPTOUT_LICENSE_REPORTING` value overrides the configuration.

</Note>

Set the following environment variable.

```shell-session
$ export OPTOUT_LICENSE_REPORTING=true
```

Now restart your [Vault servers](/vault/docs/commands/server) from the shell where you set the environment variable.

You will find the following entries in the server log.

<CodeBlockConfig hideClipboard>

```
[INFO]  core: automated reporting disabled via environment variable: env=OPTOUT_LICENSE_REPORTING
[INFO]  core: opting out of automated reporting
[DEBUG] activity: there is no reporting agent configured, skipping counts reporting
```

</CodeBlockConfig>

Check your product logs roughly 24 hours after opting out to make sure that the system isn't trying to send reports.

If your configuration file and environment variable differ, the environment variable setting will take precedence.

## Example payloads

HashiCorp collects the following utilization data as JSON payloads:

- `payload_version` - The version of this payload schema
- `license_id` - The license ID for this product
- `product` - The product that this contribution is for
- `product_version` - The product version this contribution is for
- `export_timestamp` - The date and time for this contribution
  - `snapshots` - An array of snapshot details. A snapshot is a structure that represents a single data collection.
    - `snapshot_version` - The version of the snapshot package that produced this snapshot
    - `snapshot_id` - A unique identifier for this particular snapshot
    - `process_id` - An identifier for the system that produced this snapshot
    - `timestamp` - The date and time for this snapshot
    - `schema_version` - The version of the schema associated with this snapshot
    - `service` - The service that produced this snapshot (likely to be the product name)
    - `metrics` - A map of representations of snapshot metrics contained within this snapshot
      - `key` - The key name associated with this metric
      - `kind` - The kind of metric (feature, counter, sum, or mean)
      - `mode` - The mode of operation associated with this metric (write or collect)
      - `labels` - The labels associated with each collected metric
        - `entity` - The sum of tokens generated for a unique client identifier
        - `nonentity` - The sum of tokens without an entity attached
- `metadata` - Optional product-specific metadata
  - `billing_start` - The billing start date associated with the reporting cluster (license start date if not configured)

    <Note title="Important change to supported versions">

    As of 1.16.7, 1.17.3, and later, the <a href="/vault/docs/concepts/billing-start-date">billing start date</a> automatically rolls over to the latest billing year at the end of the last cycle. For more information, refer to the upgrade guide for your Vault version:

    - Vault v1.16.x: [auto-rolled billing start date](/vault/docs/upgrading/upgrade-to-1.16.x#auto-rolled-billing-start-date)
    - Vault v1.17.x: [auto-rolled billing start date](/vault/docs/upgrading/upgrade-to-1.17.x#auto-rolled-billing-start-date)

    </Note>

  - `cluster_id` - The cluster UUID as shown by `vault status` on the reporting cluster

<CodeBlockConfig hideClipboard>

```json
{
  "payload_version": "1",
  "license_id": "97afe7b4-b9c8-bf19-bf35-b89b5cc0efea",
  "product": "vault",
  "product_version": "1.14.0-rc1+ent",
  "export_timestamp": "2023-06-01T11:39:00.76643-04:00",
  "snapshots": [
    {
      "snapshot_version": 1,
      "snapshot_id": "0001J7HEWM1PEHPMF5YZT8EV65",
      "process_id": "01H1VSQMNYAP77R566F1Y03GE6",
      "timestamp": "2023-06-01T11:39:00.766099-04:00",
      "schema_version": "1.0.0",
      "service": "vault",
      "metrics": {
        "clientcount.current_month_estimate": {
          "key": "clientcount.current_month_estimate",
          "kind": "sum",
          "mode": "write",
          "labels": {
            "type": {
              "entity": 20,
              "nonentity": 11
            }
          }
        },
        "clientcount.previous_month_complete": {
          "key": "clientcount.previous_month_complete",
          "kind": "sum",
          "mode": "write",
          "labels": {
            "type": {
              "entity": 10,
              "nonentity": 11
            }
          }
        }
      }
    }
  ],
  "metadata": {
    "vault": {
      "billing_start": "2023-03-01T00:00:00Z",
      "cluster_id": "a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd"
    }
  }
}
```

</CodeBlockConfig>

## Pre-1.9 counts

When upgrading Vault from 1.8 (or earlier) to 1.9 (or later), utilization reporting will only include the [non-entity tokens](/vault/docs/concepts/client-count#non-entity-tokens) that are used after the upgrade.

Starting in Vault 1.9, the activity log records and de-duplicates non-entity tokens by using the namespace and the token's policies to generate a unique identifier. Because Vault did not create identifiers for these tokens before 1.9, the activity log cannot know whether a given token was seen pre-1.9. To prevent inaccurate and inflated counts, the activity log ignores any counts of non-entity tokens created before the upgrade; only non-entity tokens from versions 1.9 and later will be counted.

See the [client count overview](/vault/docs/concepts/client-count) and [FAQ](/vault/docs/concepts/client-count/faq) for more information.
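Relatedly, if you opt out of automated reporting, the `clientcount` metrics shown in the example payload above are the same numbers you would collect manually. A minimal sketch of pulling them from a reporting cluster, assuming an authenticated CLI session; the `jq` filter and the output shape (which varies across Vault versions) are illustrative:

```shell-session
$ # Query the activity log, the source of the clientcount metrics above
$ vault read -format=json sys/internal/counters/activity | jq '.data.total'
{
  "entity_clients": 20,
  "non_entity_clients": 11,
  "clients": 31
}
```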
---
layout: docs
page_title: Frequently Asked Questions (FAQ)
description: An overview of license changes and updates for Vault Enterprise.
---
# License FAQ
This FAQ section is for license changes and updates introduced for Vault Enterprise.
- [Q: How do the license termination changes affect upgrades?](#q-how-do-the-license-termination-changes-affect-upgrades)
- [Q: What impact on upgrades do the license termination behavior changes pose?](#q-what-impact-on-upgrades-do-the-license-termination-behavior-changes-pose)
- [Q: Will these license changes impact HCP Vault Dedicated?](#q-will-these-license-changes-impact-hcp-vault-dedicated)
- [Q: Do these license changes impact all Vault customers/licenses?](#q-do-these-license-changes-impact-all-vault-customers-licenses)
- [Q: What is the product behavior change introduced by the licensing changes?](#q-what-is-the-product-behavior-change-introduced-by-the-licensing-changes)
- [Q: How will Vault behave at startup when a license expires or terminates?](#q-how-will-vault-behave-at-startup-when-a-license-expires-or-terminates)
- [Q: What is the impact on evaluation licenses due to this change?](#q-what-is-the-impact-on-evaluation-licenses-due-to-this-change)
- [Q: Are there any changes to existing methods for manual license loading (API or CLI)?](#q-are-there-any-changes-to-existing-methods-for-manual-license-loading-api-or-cli)
- [Q: Is there a grace period when evaluation licenses expire?](#q-is-there-a-grace-period-when-evaluation-licenses-expire)
- [Q: Are the license files locked to a specific cluster?](#q-are-the-license-files-locked-to-a-specific-cluster)
- [Q: Will a EULA check happen every time a Vault restarts?](#q-will-a-eula-check-happen-every-time-a-vault-restarts)
- [Q: What scenarios should a customer plan for due to these license changes?](#q-what-scenarios-should-a-customer-plan-for-due-to-these-license-changes)
- [Q: What is the migration path for customers who want to migrate from their existing license-as-applied-via-the-CLI flow to the license on disk flow?](#q-what-is-the-migration-path-for-customers-who-want-to-migrate-from-their-existing-license-as-applied-via-the-cli-flow-to-the-license-on-disk-flow)
- [Q: What is the path for customers who want to downgrade/rollback from Vault 1.11 or later (auto-loaded license mandatory) to a pre-Vault 1.11 (auto-loading not mandatory, stored license supported)?](#q-what-is-the-path-for-customers-who-want-to-downgrade-rollback-from-vault-1-11-or-later-auto-loaded-license-mandatory-to-a-pre-vault-1-11-auto-loading-not-mandatory-stored-license-supported)
- [Q: Is there a limited time for support of licenses that are in storage?](#q-is-there-a-limited-time-for-support-of-licenses-that-are-in-storage)
- [Q: What are the steps to upgrade from one autoloaded license to another autoloaded license?](#q-what-are-the-steps-to-upgrade-from-one-autoloaded-license-to-another-autoloaded-license)
- [Q: What are the Vault ADP module licensing changes introduced in 1.8?](#q-what-are-the-vault-adp-module-licensing-changes-introduced-in-1-8)
- [Q: How can the new ADP modules be purchased and what features are customers entitled to as part of that purchase?](#q-how-can-the-new-adp-modules-be-purchased-and-what-features-are-customers-entitled-to-as-part-of-that-purchase)
- [Q: What is the impact to customers based on these ADP module licensing changes?](#q-what-is-the-impact-to-customers-based-on-these-adp-module-licensing-changes)
### Q: what impact on upgrades do the license termination behavior changes pose?
Per the [feature deprecation plans](/vault/docs/deprecation), Vault will no longer support licenses in storage. An [auto-loaded license](/vault/docs/enterprise/license/autoloading) must be used instead. If you are using stored licenses, you must migrate to auto-loaded licenses prior to upgrading to Vault 1.11.
Vault 1.12 will also introduce different termination behavior for evaluation licenses versus non-evaluation licenses. An evaluation license will include a 30-day trial period after which a running Vault server will terminate. Vault servers using a non-evaluation license will not terminate.
### Q: how do the license termination changes affect upgrades?
Vault 1.12 will introduce changes to the license termination behavior. Upgrades when using expired licenses will now be limited.
Vault will not start up if the build date of the binary is _after_ the expiration date of a license. License expiration date and binary build date compatibility can be verified using the [Check for Autoloaded License](/vault/docs/commands/operator/diagnose#check-for-autoloaded-license) check performed by the `vault operator diagnose` command.
The build date of a binary can also be found using the [vault version](/vault/docs/commands/version#version) command.
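For example, before an upgrade you might check both dates like this; the config path and the version, commit, and date in the sample output are illustrative:

```shell-session
$ # Print the binary's version and build date
$ vault version
Vault v1.12.0 (558abfa75702b5dab4c98e86b802fb9aef43b0eb), built 2022-10-10T18:14:33Z

$ # Run the autoloaded-license check against your server configuration
$ vault operator diagnose -config=/etc/vault.d/vault.hcl
```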
A user can expect the following behavior in each of these scenarios:
**Evaluation or non-evaluation license is valid:**
Vault will start normally
**Evaluation or non-evaluation license is expired, binary build date _before_ license expiry date:**
Vault will start normally
**Evaluation or non-evaluation license is expired, binary build date _after_ license expiry date:**
Vault will not start
**Evaluation license is terminated:**
Vault will not start independent of the binary's build date
**Non-evaluation license is terminated, binary build date _before_ license expiry date:**
Vault will start normally
**Non-evaluation license is terminated, binary build date _after_ license expiry date:**
Vault will not start
The Vault support team can issue you a temporary evaluation license to allow for security upgrades if your license has expired.
### Q: will these license changes impact HCP Vault Dedicated?
No, these changes will not impact HCP Vault Dedicated.
### Q: do these license changes impact all Vault customers/licenses?
| Customer/licenses | Impacted? |
| --------------------------------------------------------------------------------------------------------------------------- | --------- |
| ENT binaries (evaluation or non-evaluation downloaded from [releases.hashicorp.com](https://releases.hashicorp.com/vault/)) | Yes |
| Business-Source (BSL) | No |
### Q: what is the product behavior change introduced by the licensing changes?
With Vault 1.11, the use of an [auto-loaded license](/vault/docs/enterprise/license/autoloading) is required for Vault to start successfully.
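A minimal sketch of satisfying that requirement with the `VAULT_LICENSE_PATH` environment variable; the paths are hypothetical, and the `license_path` config option works equally well:

```shell-session
$ # Tell Vault where to find the license file at startup
$ export VAULT_LICENSE_PATH=/etc/vault.d/vault.hclic

$ # Start the server; it will not start if the license is missing or invalid
$ vault server -config=/etc/vault.d/vault.hcl
```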
### Q: how will Vault behave at startup when a license expires or terminates?
When a license expires, Vault continues to function until the license terminates. This behavior exists today and remains unchanged in Vault 1.11. The grace period, defined as the time between license expiration and license termination, is one day for evaluation licenses (as of 1.8), and ten years for non-evaluation licenses.
Customers must provide a valid license before the grace period expires. This license must be [auto-loaded](/vault/docs/enterprise/license/autoloading). When the license terminates (upon grace period expiry), Vault will seal itself, and customers will need a valid license to bring Vault back up successfully. If a valid license was not installed after license expiry, customers will need to provide one, and that license will need to be auto-loaded.
Vault 1.12 changes the license expiration and termination behavior. Evaluation licenses include a 30-day trial period after which a running Vault server will terminate. Non-evaluation licenses, however, will no longer terminate. When a non-evaluation license expires, Vault will continue to function but upgrades will be limited. The build date of the upgrade binary must be before the expiration date of the license.
Vault will not start when attempting to use an expired license and binary with a build date _after_ the license expiration date. Attempting to [reload](/vault/api-docs/system/config-reload#reload-license-file) an expired license will result in an error if the build date of the running Vault server is _after_ the license expiration date.
License expiration date and binary build date compatibility can be verified using the [Check for Autoloaded License](/vault/docs/commands/operator/diagnose#check-for-autoloaded-license) check performed by the `vault operator diagnose` command. The build date of a binary can also be found using the [vault version](/vault/docs/commands/version#version) command.
### Q: what is the impact on evaluation licenses due to this change?
As of Vault 1.8, any Vault cluster deployed must have a valid [auto-loaded](/vault/docs/enterprise/license/autoloading) license.
Vault 1.12 introduces [expiration and termination behavior changes](#q-how-will-vault-behave-at-startup-when-a-license-expires-or-terminates) for non-evaluation licenses. Evaluation licenses will continue to have a 1-day grace period upon license expiry after which they will terminate. Vault will seal itself and shutdown once an evaluation license terminates.
### Q: are there any changes to existing methods for manual license loading (API or CLI)?
The [`/sys/license`](/vault/api-docs/v1.10.x/system/license#install-license) and [`/sys/license/signed`](/vault/api-docs/v1.10.x/system/license#read-signed-license) endpoints have been removed as of Vault 1.11. As a result, it is no longer possible to provide a license via the `/sys/license` endpoint. License [auto-loading](/vault/docs/enterprise/license/autoloading) must be used instead.
The [`/sys/config/reload/license`](/vault/api-docs/system/config-reload#reload-license-file) endpoint can be used to reload an auto-loaded license provided as a path via an environment variable or configuration.
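For example, after updating the license file on disk you could trigger the reload through either the CLI or the raw endpoint; `$VAULT_ADDR` and `$VAULT_TOKEN` are assumed to be set:

```shell-session
$ # CLI: write with -f (force) sends an empty POST to the reload endpoint
$ vault write -f sys/config/reload/license

$ # Raw API equivalent
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
       --request POST \
       "$VAULT_ADDR/v1/sys/config/reload/license"
```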
### Q: is there a grace period when evaluation licenses expire?
Evaluation licenses have a 1-day grace period. The grace period is the time between license expiration and license termination. Upon termination, Vault will seal and will require a valid license to unseal and function properly.
### Q: are the license files locked to a specific cluster?
The changes to licenses apply to all nodes in a cluster. The license files are not locked to a cluster, but are independently applied to the appropriate clusters.
### Q: will a EULA check happen every time a Vault restarts?
Yes, starting with Vault 1.8, ENT binaries are subject to a EULA check. Vault 1.8 introduces the EULA check for evaluation licenses (non-evaluation licenses are evaluated with a EULA check during contractual agreement).
Although the agreement to a EULA occurs only once (when the user receives their license), Vault will check for the presence of a valid license every time a node is restarted.
Starting with Vault 1.11, when customers deploy new Vault clusters or upgrade existing Vault clusters, a valid [auto-loaded](/vault/docs/enterprise/license/autoloading) license must exist for the upgrade to be successful.
### Q: what scenarios should a customer plan for due to these license changes?
- **New Cluster Deployment**: When a customer deploys new clusters to Vault 1.11 or later, a valid license must exist to successfully deploy. This valid license must be on-disk ([auto-loaded](/vault/docs/enterprise/license/autoloading)).
- **Eventual Migration**: Vault 1.11 removes support for in-storage licenses. Migrating to an auto-loaded license is required for Vault to start successfully using version 1.11 or greater. Pre-existing license storage entries will be automatically removed from storage upon startup.
### Q: what is the migration path for customers who want to migrate from their existing license-as-applied-via-the-CLI flow to the license on disk flow?
If a Vault cluster using a stored license is planned to be upgraded to Vault 1.11 or greater, the operator must migrate to using an auto-loaded license. The [`vault license get -signed`](/vault/docs/v1.10.x/commands/license/get) command (or underlying [`/sys/license/signed`](/vault/api-docs/v1.10.x/system/license#read-signed-license) endpoint) can be used to retrieve the license from storage in a running cluster.
It is not necessary to remove the stored license entry; that will occur automatically upon startup in Vault 1.11 or greater. Prior to completing the [recommended upgrade steps](/vault/docs/upgrading), perform the following to ensure your license is properly configured (a combined sketch follows the list):
1. Use the command `vault license get -signed` to retrieve the license from storage of your running cluster.
2. Put the license on disk.
3. Configure license auto-loading by specifying the [`license_path`](/vault/docs/configuration#license_path) config option or setting the [`VAULT_LICENSE`](/vault/docs/commands#vault_license) or [`VAULT_LICENSE_PATH`](/vault/docs/commands#vault_license_path) environment variable.
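A combined sketch of the three steps above, assuming a pre-1.11 enterprise binary and a hypothetical license path:

```shell-session
$ # Step 1: retrieve the stored license (works on pre-1.11 enterprise only)
$ vault license get -signed > /etc/vault.d/vault.hclic

$ # Steps 2-3: the license is now on disk; configure auto-loading before upgrading
$ export VAULT_LICENSE_PATH=/etc/vault.d/vault.hclic
```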
### Q: what is the path for customers who want to downgrade/rollback from Vault 1.11 or later (auto-loaded license mandatory) to a pre-Vault 1.11 (auto-loading not mandatory, stored license supported)?
The downgrade procedure remains the same for Vault customers who are currently on Vault 1.11 or later, have a license installed via auto-loading, and would like to downgrade their cluster to a pre-1.11 version. Refer to the [upgrade procedures](/vault/tutorials/standard-procedures/sop-upgrade), which remind customers to take a snapshot before the upgrade. Customers will need to restore their data from the snapshot, apply the pre-1.11 enterprise binary version they wish to roll back to, and bring up the clusters.
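If the cluster uses Integrated Storage, the snapshot-and-restore portion of that procedure might look like the following sketch; the filename is hypothetical:

```shell-session
$ # Before upgrading: save a snapshot of the cluster state
$ vault operator raft snapshot save pre-upgrade.snap

$ # To roll back: bring up the pre-1.11 binary, then restore the snapshot
$ vault operator raft snapshot restore pre-upgrade.snap
```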
### Q: is there a limited time for support of licenses that are in storage?
Supporting licenses installed by alternative means often makes it difficult to provide appropriate support. To provide the support our customers expect, as announced in [Vault feature deprecations and plans](/vault/docs/deprecation), we are removing support for licenses in storage with Vault 1.11. This means that licensing endpoints that deal with licenses in storage will be removed, and Vault will no longer check for valid licenses in storage. This change requires all customers to have [auto-loaded](/vault/docs/enterprise/license/autoloading) licenses in order to upgrade to 1.11(+) successfully.
### Q: what are the steps to upgrade from one autoloaded license to another autoloaded license?
Follow these steps to migrate from one autoloaded license to another autoloaded license; a command sketch covering the key steps follows the list.
1. Use the [`vault license inspect`](/vault/docs/commands/license/inspect) command to compare the new license against the output of the [`vault license get`](/vault/docs/commands/license/get) command. This is to ensure that you have the correct license.
1. Back up the old license file in a safe location.
1. Replace the old license file on each Vault server with the new one.
1. Invoke the [reload command](/vault/api-docs/system/config-reload#reload-license-file) or send a SIGHUP on each individual Vault server, starting with the standbys and doing the leader last. Invoking in this manner reduces possible disruptions if something was performed incorrectly in the previous steps.
1. On each node, ensure that the new license is in use by using the [`vault license get`](/vault/docs/commands/license/get) command and/or checking the logs.
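On each node, steps 1, 4, and 5 might look like the following; the new license filename is hypothetical:

```shell-session
$ # Step 1: inspect the new license file and compare it to the loaded one
$ vault license inspect /etc/vault.d/vault-new.hclic
$ vault license get

$ # Step 4: after replacing the file on disk, reload the license
$ vault write -f sys/config/reload/license

$ # Step 5: confirm the node picked up the new license
$ vault license get
```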
# ADP licensing
This FAQ section is for the Advanced Data Protection (ADP) license changes introduced in Vault Enterprise 1.8.
### Q: what are the Vault ADP module licensing changes introduced in 1.8?
As of Vault Enterprise 1.8, the functionality formerly sold as the Vault ADP module is now separated into two new modules:
**ADP-KM** includes:
- [Key Management Secrets Engine (KMSE)](/vault/docs/secrets/key-management)
- [Key Management Interoperability (KMIP)](/vault/docs/secrets/kmip)
- [MSSQL Transparent Data Encryption (TDE)](https://www.hashicorp.com/blog/enabling-transparent-data-encryption-for-microsoft-sql-with-vault)
**ADP-Transform** includes:
- [Transform Secrets Engine (TSE)](/vault/docs/secrets/transform)
### Q: how can the new ADP modules be purchased and what features are customers entitled to as part of that purchase?
**ADP-KM includes**:
- This is the first Vault Enterprise module that can be purchased standalone. This means it can be purchased without the purchase of a Vault Enterprise Standard license.
- ADP-KM still requires a Vault Enterprise binary. The Vault Enterprise Standard license is automatically included with the ADP-KM module, but customers are contractually prohibited from using any features besides those in Vault Community Edition and ADP-KM (KMSE and KMIP).
**ADP-Transform includes**:
- This module cannot be purchased as a standalone. It requires a Vault Enterprise binary, and customers must purchase the base Vault Enterprise Standard license (at least) to use the corresponding Enterprise features.
- The ADP-Transform SKU can be applied as an add-on. This workflow is similar to the consolidated ADP SKU.
### Q: what is the impact to customers based on these ADP module licensing changes?
Customers need to be aware of the following as a result of these changes:
- **New customers** may choose to purchase either or both of these modules. The old (consolidated) module is not available to them as an option.
- **Existing customers** may continue with the consolidated Vault ADP module uninterrupted. They will only be converted to one or both new ADP modules the next time they make a change to their licensing details (i.e. contract change).